testing
Implementing SSL Orchestrator - Validation & Troubleshooting
Introduction

This article is part of a series on implementing BIG-IP SSL Orchestrator. The series includes high availability and central management with BIG-IQ. Implementing SSL/TLS decryption is not a trivial task. There are many factors to keep in mind and account for, from the network topology and insertion point, to SSL/TLS keyrings, certificates, cipher suites and so on. This article focuses on verifying that the solution is working.

This article is divided into the following high-level sections:
- How to check if content is being decrypted
- How to check if content is being blocked
- How to check if content is being bypassed
- Logging and Troubleshooting

Please forgive me for using SSL and TLS interchangeably in this article.

Software versions used in this article:
- BIG-IP version: 14.1.2
- SSL Orchestrator version: 5.5
- BIG-IQ version: 7.0.1

How to check if content is being decrypted

Go to a computer that connects to the internet through SSL Orchestrator. Go to an encrypted website like www.cisco.com. The page should load without error. Click the padlock icon in the address bar. Notice that the connection is secure and the certificate is valid. Click the certificate to view more details. The certificate was issued to www.cisco.com but has been issued by subrsa.f5labs.com. The connection to this website was decrypted and re-encrypted by SSL Orchestrator. (A scripted version of this check is sketched at the end of this article.)

Switch to the Palo Alto UI and go to Monitor > Traffic. It should look like this. Click the link on the left in the red box to drill into more details. Notice the IP address of 72.163.10.10 and port 80 (not port 443). A quick lookup of that IP shows it belongs to Cisco.

How to check if content is being blocked

Now go to the website www.eicar.org. This site hosts a harmless test malware file so you can see if your security measures are effective. Click Download Anti Malware Testfile on the right, then click the Download link on the left and scroll down on the next page. You should see a section with 8 different downloads covering 4 different file types. The first group uses HTTP; the second group uses HTTPS. Click any of the HTTPS links to verify it is detected and blocked. You should get a block page like the following. Notice the URL indicates an attempt to download one of the compressed .zip files.

How to check if content is being bypassed

Next, try a banking website and make sure it is bypassed. On the client, open a browser and go to www.chase.com. Click the padlock icon in the address bar and it should look like this. Click the highlighted section for more details about this certificate. This certificate was issued by Entrust CA, so SSL Orchestrator is not decrypting this connection. We can check the Palo Alto logs to make sure it doesn't have a record of this transaction. The image below shows ping traffic generated by SSL Orchestrator to test the health of the L2 Service. Notice there is no web-browsing or port 80 traffic.

Logging and Troubleshooting

To enable logging:
- Connect to the WebUI
- Go to SSL Orchestrator and click on Logs
- Enable the required logging levels

Note: By default, logs are stored locally on the BIG-IP.

Below is a reasonably ordered list of troubleshooting steps:
- If the SSL Orchestrator deployment process fails, review the ensuing error message. It would be impossible to list here all of the possible error messages and their meanings, but often enough the messages will reveal the issue.
- Re-review the lab steps for any missing or misconfigured settings.
- Enable debug logging in the SSL Orchestrator configuration.
- Tail the APM log from a BIG-IP command line or from the logs page in the management UI. Debug logging will very often reveal important issues; specifically, it will indicate traffic classification matches, mismatches or deployment issues.
  tail -f /var/log/apm
  tail -f /var/log/restnoded/restnoded.log
  tail -f /var/log/restjavad.0.log
- If the SSL Orchestrator deployment process succeeds but traffic isn't flowing through the environment (made evident by a lack of access to remote sites from the client):
  - Never enable debug logging on the Per-Request Policy logging option in a production environment. If needed, be sure to turn it back off after capturing logs. The PRP logging is extremely verbose and WILL affect system performance.
  - /var/log/apm is used to log data plane traffic flow.
  - /var/log/restnoded/restnoded.log is used to log the control plane (when SSL Orchestrator configurations are deployed).
  - /var/log/restjavad is similarly used to log control plane restjava functions.
  - It's also sometimes useful to tail /var/log/ltm, which is where SSL and generic data plane issues will show up.
  - Ensure that the client is properly configured to either default route to the ingress VLAN and self-IP of the BIG-IP for transparent proxy access, or has the correct browser proxy settings defined for explicit proxy access.
  - Ensure that traffic is flowing to the BIG-IP from the client with a tcpdump capture at the ingress interface.
  - Review the LTM configuration created by the SSL Orchestrator. Specifically, look at the pools and their respective monitors for any failures.
  - Isolate service chain services. If at least one service chain has been created, and debug logging indicates that traffic is matching this chain, remove all but one service from that chain and test. Add one service back at a time until traffic flow stops. If a single added service breaks traffic flow, this service will typically be the culprit.
  - If a broken service is identified, insert probes to verify inbound and outbound traffic flow. Inline services will have a source (S) VLAN and destination (D) VLAN, and ICAP and receive-only services will each have a single source VLAN. Insert a tcpdump capture at each VLAN in order to determine if traffic is getting to the device, and if traffic is leaving the device through its outbound interface.
  - How to insert probes: the service VLANs are wrapped in application services, so they must be addressed accordingly in the tcpdump. Note that each inline service creates two separate VLANs – one for inbound and one for outbound – so it becomes easy to surgically insert captures at specific points in the flow (i.e. going to the service, coming from the service). A tcpdump capture of a service named "FEYE" might look like this:
  tcpdump -lnni ssloS_FEYE.app/ssloS_FEYE
  where "ssloS_FEYE.app" is the application service container, which contains the "ssloS_FEYE" VLAN.
  - By default the 'all traffic' rule doesn't attach a service chain, so it can be as simple as removing service chains from all of the security rules. If traffic doesn't flow with no service chains attached anywhere, then the problem is in SSL Orchestrator proper, likely a routing or decryption issue. If the traffic only fails with a service chain applied, that is when it's useful to isolate the services. If a broken service is identified, insert tcpdump probes as described above.
  - If traffic is flowing through all of the security devices, insert a tcpdump probe at the egress point to verify traffic is leaving the BIG-IP to the gateway router.
  - If traffic is flowing to the gateway router, perform a more extensive packet analysis to determine if SSL is failing between the BIG-IP egress point and the remote server:
  tcpdump -i 0.0:nnn -nn -Xs0 -vv -w <file.pcap> <any additional filters>
  Then either export this capture to Wireshark or send it to ssldump:
  ssldump -nr <file.pcap> -H -S crypto > text-file.txt
  - If the Wireshark or ssldump analysis verifies an SSL issue:
    - Plug the site's address into the SSL Labs server test at www.ssllabs.com/ssltest/. This report will indicate any specific SSL requirements that the site has.
    - Verify that the SSL Orchestrator server SSL profiles (there are two of them) have the correct cipher string to match the requirements of this site. To do that, run the following command at the BIG-IP command line:
    tmm --clientciphers 'CIPHER STRING AS DISPLAYED IN SERVER SSL PROFILES'
    Further SSL/TLS issues are beyond the depth of this guide. Seek assistance.
- If all else fails, seek assistance.

Summary

In this article you learned how to verify that SSL Orchestrator is decrypting SSL and passing it to an inline security device. You can verify that SSL is decrypted simply by viewing web site certificates, and you can also use the logging or reporting capabilities of an inline security device to verify that SSL is decrypted. You learned how to test whether the inline security device was blocking malicious content, how to verify that a policy to bypass SSL decryption is working, and how to enable logging and perform troubleshooting steps.

Next Steps

Click Next to proceed to the next article in the series.
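As referenced in the decryption section, the browser padlock check can also be scripted from a client that reaches the internet through SSL Orchestrator. The sketch below is not part of the original lab: it fetches the certificate each site presents and prints its subject and issuer. A decrypted flow should show the re-signing CA (subrsa.f5labs.com in this lab) as the issuer, while a bypassed site such as the banking example should show its public CA. It assumes Python 3 with the third-party "cryptography" package installed on the client.

```python
# Hedged sketch: print subject and issuer of the certificate each site presents.
# Run from a client behind SSL Orchestrator; hostnames are the lab examples.
import ssl
from cryptography import x509

for host in ("www.cisco.com", "www.chase.com"):
    pem = ssl.get_server_certificate((host, 443))       # fetch only, no validation
    cert = x509.load_pem_x509_certificate(pem.encode())
    print(host)
    print("  subject:", cert.subject.rfc4514_string())
    print("  issuer: ", cert.issuer.rfc4514_string())
```

Bare Metal Blog: Throughput Sometimes Has Meaning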
#BareMetalBlog Knowing what to test is half the battle. Knowing how it was tested the other. Knowing what that means is the third. That’s some testing, real clear numbers. In most countries, top speed is no longer the thing that auto manufacturers want to talk about. Top speed is great if you need it, but for the vast bulk of us, we’ll never need it. Since the flow of traffic dictates that too much speed is hazardous on the vast bulk of roads, automobile manufacturers have correctly moved the conversation to other things – cup holders (did you know there is a magic number of them for female purchasers? Did you know people actually debate not the existence of such a number, but what it is?), USB/bluetooth connectivity, backup cameras, etc. Safety and convenience features have supplanted top speed as the things to discuss. The same is true of networking gear. While I was at Network Computing, focus was shifting from “speeds and feeds” as the industry called it, to overall performance in a real enterprise environment. Not only was it getting increasingly difficult and expensive to push ever-larger switches until they could no longer handle the throughput, enterprise IT staff was more interested in what the capabilities of the box were than how fast it could go. Capabilities is a vague term that I used on purpose. The definition is a moving target across both time and market, with a far different set of criteria for, say, an ADC versus a WAP. There are times, however, where you really do want to know about the straight-up throughput, even if you know it is the equivalent of a professional driver on a closed course, and your network will never see the level of performance that is claimed for the device. There are actually several cases where you will want to know about the maximum performance of an ADC, using the tools I pay the most attention to at the moment as an example. WAN optimization is a good one. In WANOpt, the goal is to shrink the amount of data being transferred between two dedicated points to try and maximize the amount of throughput. When “maximize the amount of throughput” is in the description, speeds and feeds matter. WANOpt is a pretty interesting example too, because there’s more than just “how much data did I send over the wire in that fifteen minute window”. It’s more complex than that (isn’t it always?). The best testing I’ve seen for WANOpt starts with “how many bytes were sent by the originating machine”, then measures that the same number of bytes were received by the WANOpt device, then measures how much is going out the Internet port of the WANOpt device – to measure compression levels and bandwidth usage – then measures the number of bytes the receiving machine at the remote location receives to make sure it matches the originating machine. So even though I say “speeds and feeds matter”, there is a caveat. You want to measure latency introduced with compression and dedupe, and possibly with encryption since WANOpt is almost always over the public Internet these days, throughput, and bandwidth usage. All technically “speeds and feeds” numbers, but taken together giving you an overall picture of what good the WANOpt device is doing. There are scenarios where the “good” is astounding. I’ve seen the numbers that range as high as 95x the performance. If you’re sending a ton of data over WANOpt connections, even 4x or 5x is a huge savings in connection upgrades, anything higher than that is astounding. 
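To make that byte-counting concrete, the arithmetic is simple enough to sketch. The numbers below are purely hypothetical and only illustrate the three comparisons just described.

```python
# Hypothetical WANOpt measurement: compare what the origin server sent, what the
# WANOpt device actually put on the WAN, and what the remote server received.
server_sent_bytes = 1_000_000_000      # bytes leaving the originating machine
wanopt_sent_bytes = 40_000_000         # bytes the WANOpt device sent over the WAN link
server_received_bytes = 1_000_000_000  # bytes received by the remote machine
transfer_seconds = 120.0               # wall-clock time for the transfer

# The WANOpt devices functioned as expected only if nothing was lost or altered.
assert server_received_bytes == server_sent_bytes

reduction_factor = server_sent_bytes / wanopt_sent_bytes         # e.g. 25x on this data set
bandwidth_savings = 1 - (wanopt_sent_bytes / server_sent_bytes)  # fraction of WAN bandwidth saved
effective_throughput = server_received_bytes / transfer_seconds  # what the application actually saw

print(f"{reduction_factor:.0f}x reduction, {bandwidth_savings:.0%} bandwidth saved, "
      f"{effective_throughput / 1e6:.1f} MB/s effective throughput")
```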
This is an (older) diagram of WAN Optimization I've marked up to show where the testing took place, because sometimes a picture is indeed worth a thousand words. And yeah, I used F5 gear for the example image… that really should not surprise you.

So basically, you count the bytes the server sends, the bytes the WANOpt device sends (which will be less for 99.99% of loads if compression and de-dupe are used), and the total number of bytes received by the target server. Then you know what percentage improvement you got out of the WANOpt device (by comparing server out bytes to WANOpt out bytes), that the WANOpt devices functioned as expected (server received bytes == server sent bytes), and what the overall throughput improvement was (server received bytes / time to transfer).

There are other scenarios where simple speeds and feeds matter, but fewer of them than there used to be, and the trend is continuing. When a device designed to improve application traffic is introduced, there are certainly few. The ability to handle a gazillion connections per second I've mentioned before is a good guardian against DDoS attacks, but what those connections can do is a different question. Lots of devices in many networking market spaces show little or even no latency introduction on their glossy sales hand-outs, but make those devices do the job they're purchased for and see what the latency numbers look like. It can be ugly, or you could be pleasantly surprised, but you need to know, because you're not going to use it in a pristine lab with perfect conditions; you're going to slap it into a network where all sorts of things are happening and it is expected to carry its load.

So again, I'll wrap with acknowledgement that you all are smart puppies and know where speeds and feeds matter; make sure you have realistic performance numbers for those cases too.

Technorati Tags: Testing, Application Delivery Controller, WAN Optimization, throughput, latency, compression, deduplication, Bare Metal Blog, F5 Networks, Don MacVittie

The Whole Bare Metal Blog series:
- Bare Metal Blog: Introduction to FPGAs | F5 DevCentral
- Bare Metal Blog: Testing for Numbers or Performance? | F5 ...
- Bare Metal Blog: Test for reality. | F5 DevCentral
- Bare Metal Blog: FPGAs The Benefits and Risks | F5 DevCentral
- Bare Metal Blog: FPGAs: Reaping the Benefits | F5 DevCentral
- Bare Metal Blog: Introduction | F5 DevCentral

Bare Metal Blog: Test for reality.
#BareMetalBlog #F5 Test results provided by vendors and “independent testing labs” often test for things that just don’t matter in the datacenter. Know what you’re getting. When working in medicine, you don’t glance over a patient, then ask them “so how do you feel when you’re at your best?” You ask them what is wrong, then run a ton of tests – even if the patient thinks they know what is wrong – then let the evidence determine the best course of treatment. Sadly, when determining the best tools for your IT staff to use, we rarely follow the same course. We invite a salesperson in, ask them “so, what do you do?”, and let them tell us with their snippets of reality or almost-reality why their product rocks the known world. Depending upon the salesperson and the company, their personal moral code or corporate standards could limit them to not bringing up the weak points of their products to outright lying about its capabilities. “But Don!”, you say, “you’re being a bit extreme, aren’t you?” Not in my experience I am not. From being an enterprise architect to doing comparative reviews, I have seen it all. Vendor culture seems to infiltrate how employees interact with the world outside their HQ – or more likely (though I’ve never seen any research on it), vendors tend to hire to fit their culture, which ranges from straight-up truth about everything to wild claims that fall apart the first time the device is put into an actual production-level network. The most common form of disinformation that is out there is to set up tests so they simply show the device operating at peak efficiency. This is viewed as almost normal by most vendors – why would you showcase your product in less than its best light? and as a necessary evil by most of the few who don’t have that view – every other vendor in the space is using this particular test metric, we’d better too or we’ll look bad. Historically, in network gear, nearly empty communications streams have been the standard for high connection rates, very large window sizes the standard for manipulating throughput rates. While there are many other games vendors in the space play to look better than they are, it is easy to claim you handle X million connections per second if those connections aren’t actually doing anything. It is also easier to claim you handle a throughput of Y Mbps if you set the window size larger than reality would ever see it. Problem with this kind of testing is that it seeps into the blood, after a while, those test results start to be sold as actual… And then customers put the device into their network, and needless to say, they see nothing of the kind. You would be surprised the number of times when we were testing for Network Computing that a vendor downright failed to operate as expected when put into a live network, let alone met the numbers the vendor was telling customers about performance. One of the reasons I came to F5 way back when was that they did not play these games. They were willing to make the marketing match the product and put a new item on the roadmap for things that weren’t as good as they wanted. We’re still out there today helping IT staff understand testing, and what testing will show relevant numbers to the real world. By way of example, there is the Testing Configuration Guide on F5 DevCentral. As Application Delivery Controllers have matured and solidified, there has been much change in how they approach network traffic. 
This has created an area we are now starting to talk more about, which is the validity of throughput testing in regards to ADCs in general. The thing is, we’ve progressed to the point that simply “we can handle X Mbps!” is no longer a valid indication of the workloads an ADC will be required to handle in production scenarios. The real measure for application throughput that matters is requests per second. Vendors generally avoid this kind of testing, because response is also limited by the capacity of the server doing the actual responding, so it is easy to get artificially low numbers. At this point in the evolution of the network, we have reached the reality of that piece of utility computing. Your network should be like electricity. You should be able to expect that it will be on, and that you will have enough throughput to handle incoming load. Mbps is like measuring amperage… When you don’t have enough, you’ll know it, but you should, generally speaking, have enough. It is time to focus more on what uses you put that bandwidth to, and how to conserve it. Switching to LED bulbs is like putting in an ADC that is provably able to improve app performance. LEDs use less electricity, the ADC reduces bandwidth usage… Except that throughput or packets per second isn’t measuring actual improvements of bandwidth usage. It’s more akin to turning off your lights after installing LED bulbs, and then saying “lookie how much electricity those new bulbs saved!” Seriously, do you care if your ADC can delivery 20 million Megabits per second in throughput, or that it allows your servers to respond to requests in a timely manner? Seriously, the purpose of an Application Delivery Controller is to facilitate and accelerate the delivery of applications, which means responses to requests. If you’re implementing WAN Optimization functionality, throughput is still a very valid test. If you’re looking at the Application Delivery portion of the ADC though, it really has no basis in reality, because requests and responses are messy, not “as large a string of ones as I can cram through here”. From an application perspective – particularly from a web application perspective – there is a lot of “here’s a ton of HTML, hold on, sending images, wait, I have a video lookup…” Mbps or MBps just doesn’t measure the variety of most web applications. But users are going to feel requests per second just as much as testing will show positive or negative impacts. To cover the problem of application servers actually having a large impact on testing, do what you do with everything else in your environment, control for change. When evaluating ADCs, simply use the same application infrastructure and change only the ADC out. Then you are testing apples-to-apples, and the relative values of those test results will give you a gauge for how well a given ADC will perform in your environment. In short, of course the ADC looks better if it isn’t doing anything. But ADCs do a ton in production networks, and that needs to be reflected in testing. If you’re fortunate enough to have time and equipment, get a test scheduled with your prospective vendor, and make sure that test is based upon the usage your actual network will expose the device to. If you are not, then identify your test scenarios to stress what’s most important to you, and insist that your vendor give you test results in those scenarios. 
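Since the argument here is that requests per second, not raw megabits, is the number that matters, a rough way to gather that number yourself is sketched below. Keep the application tier identical between runs and swap only the ADC, as described above. The target URL and concurrency values are placeholders, and this is an illustration of the measurement rather than a substitute for a real load-generation tool.

```python
# Rough requests-per-second harness: N workers each issue a fixed number of GETs
# against the same application path, and we report successful requests per second.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TARGET = "https://www.example.com/app/page"   # hypothetical application URL
CONCURRENCY = 50                              # simulated concurrent users
REQUESTS_PER_WORKER = 100

def worker(_):
    ok = 0
    for _ in range(REQUESTS_PER_WORKER):
        try:
            with urlopen(TARGET, timeout=10) as resp:
                resp.read()
                ok += 1
        except OSError:
            pass                              # failed requests simply don't count
    return ok

start = time.time()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    completed = sum(pool.map(worker, range(CONCURRENCY)))
elapsed = time.time() - start
print(f"{completed} successful requests in {elapsed:.1f}s = {completed / elapsed:.1f} req/s")
```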
In the end, you know your network far better than they ever will, and you know they're at least not telling you the whole story, so make sure you can get it. Needless to say, this is a segue into the next segment of our #BareMetalBlog series, but first I'm going to finish educating myself about our use of FPGAs and finish that segment up.

DevOps. It's in the Culture, Not Tech.
#F5 DevOps – Managers need to make use of existing technology and adopt culture change. It is entertaining to read all that is currently being written about DevOps. Having been a developer, a development manager, an operations manager, and even a CTO, I can attest to the fact that the “throw it over the wall” syndrome is real, and causes real problems for everyone involved. That is about where my agreement with the current round of pundits ends. The thing is that they talk like there is some fundamental technological reason why DevOps isn’t happening. That’s just not true. For those a little behind in your jargon, DevOps is making operations prevalent in the decisions of your development organization. We’ll take the discussion a little bit at a time. We have virtualization. We have astoundingly good virtualization from VMWare, Microsoft, RedHat, et al. So many of the concerns about development and ops go away immediately. “Developers test in the environment their tools run in!'” is oft-heard in the DevOps conversations. But that’s just not an issue. They can run three different OS’s to test with, and as many browsers in each OS as you want to support – all with a single set of hardware. So make testing in your operational environment mandatory. In fact, for most tools, they’re developing on whatever OS is being targeted anyway, because there are subtle differences – or no support at all – in other OS’s. Virtualization taken hand-in-hand with the capabilities of an Application Delivery Controller (ADC) like F5’s BIG-IP can also remove some of the “throw it over the wall” symptoms easily – particularly in Agile or other high-rev environments. Take a copy of the VM running the app today, modify that copy, place it on the network, then, utilizing one of the many algorithms available for load-balancing, switch a select load to the new server. Make it 1/3rd as likely as any other server to receive a given connection, and utilize persistence so users aren’t bouncing back and forth between revisions of the app. See how it performs under real-life usage. Have the Devs there for the first half hour, then make a “deploy/don’t deploy” decision, and if not deploying, take down the copy with the new code on it so the devs can continue to work on it. If deploying, then bring up copies of the new server and bleed connections off of the old ones, then bring those virtuals down and archive or destroy them, as your organization sees fit. Most of the issues with DevOps are gone at that point. Testing is the only other issue that comes up a lot, and let’s talk frankly about testing. There just aren’t enough really good test tools out there. A test tool needs to automate as close to 100% of the testing process as possible in order for thorough testing to be feasible. The complexity of today’s applications leave most test tools in the dust. When Microsoft had MS-Test available, one of the shops I worked at used it, and one of my friends was an expert at generating test scripts, but we were a software shop. Our product was the software, making it much more critical that there be as close to zero defects as possible. That’s not the case if your product is not software. In those cases, software is a supporting tool to help sell product, and as such the occasional bug, while regrettable and to be avoided, doesn’t reflect upon your entire product line. So there are some tools out there to do testing. 
They’ll help determine things like buffer overflows and such, and Microsoft is selling Microsoft Test Center, which looks to be a more comprehensive solution, but no program knows your business – and thus the testing context – in the manner that your employees do, and most companies are not willing to shell out the money required to start a dedicated testing group. If you are one of the lucky ones, well you can replicate your entire production environment with VMs and an ADC… But you can’t get the volumes you would see in a live public-facing Internet application with actual personal interaction and all the dumb mistakes we users make. So you can test, after a fashion, and with real people involved even set up intelligent testing, but it will take some effort. The best bet for most shops is to make unit testing part of the developers’ jobs, and system testing part of the project managers’ job, with help from both dev and ops. See, Devops! It is a testament to the quality of tools and developers out there that we don’t see a ton of issues all the time. Think if the Web had the percentage of problems on a daily basis that Windows Apps did early on… We’d never get anything done. So don’t take DevOps so seriously from a technological standpoint, instead, let’s talk about culture. Developers need to feel like part of the larger team if you want them to worry about DevOps. Here’s the catch, most of them don’t want to feel like part of the larger team. Network issues and storage shortages annoy them as much as your business users. They want to write cool apps and shove them out the door (or the Internet connection, more precisely) without worrying about deployment issues too much. It is up to management to take concrete steps to move dev closer to ops. To do this, first mandate that all testing occur on the OS that deployment will occur on. This shouldn’t be a big deal for most shops, but in a few it will be. Second, mandate that the dev team put resources on the deployment, and utilize the Virtualization/ADC scenario discussed above. Third, remind developers that their app isn’t great unless users think it is, and it is deployable. They’ll come around, but it will take some work on your part. The days of developing in a vacuum are long gone (at least in the enterprise), and the disconnect between dev and ops is one of the few remaining artifacts of that time. You just have to remove the artifacts and hook up the strengths of your ops team with the leet app skills on your dev team, and the whole concept of DevOps is significantly reduced. One last bit, troubleshooting when things do go wrong. Developers can build a lot into their apps to give them hints about status, but ops can use tools like iApps (built into TMOS v 11 on F5 BIG-IP LTM) to show that it is indeed the application not responding in a timely manner – or indeed to show exactly what IS the bottleneck in a deployment. The reporting functionality on iApps makes them a worthwhile endeavor without the “ease of infrastructure management” they offer. iApp reporting can tell you exactly which piece of the application environment is slow, dragging the entire system response time down. I think that’s huge. If you have a BIG-IP, check it out, I’m pretty certain you will too. So call a meeting. Make it around lunch time and announce that pizza will be provided. Both teams will show up, and you can start changing culture. The benefits will be long-term, with applications better suiting users needs and requiring less operations man-hours. 
And devs will get a better feel for what works and doesn't in your environment.

WILS: SSL TPS versus HTTP TPS over SSL
The difference between these two performance metrics is significant, so be sure you know which one you're measuring, and which one you wanted to be measuring.

It may be the case that you've decided that SSL is, in fact, a good idea for securing data in transit. Excellent. Now you're trying to figure out how to implement support and you're testing solutions, or perhaps trying to peruse reports someone else generated from testing. Excellent. I'm a huge testing fan and it really is one of the best ways to size a solution specifically for your environment. Some of the terminology used to describe specific performance metrics in application delivery, however, can be misleading. The difference between SSL TPS (transactions per second) and HTTP TPS over SSL, for example, is significant, and therefore the two should not be used interchangeably when comparing performance and capacity of any solution – and that goes for software, hardware, or some yet-to-be-defined combination thereof.

The reason interpreting claims of SSL TPS is so difficult is the ambiguity that comes from SSL itself. SSL "transactions" are, by general industry agreement (unenforceable, of course), a single transaction that is "wrapped" in an SSL session. Generally speaking, one SSL transaction is considered:

1. Session establishment (authentication, key exchange)
2. Exchange of data over SSL, often a 1KB file over HTTP
3. Session closure

Seems logical, but technically speaking a single SSL transaction could be interpreted as any single transaction conducted over an SSL-encrypted session, because the very act of transmitting data over the SSL session necessarily requires SSL-related operations. SSL session establishment requires a handshake and an exchange of keys, and the transfer of data within such a session requires the invocation of encryption and decryption operations (often referred to as bulk encryption). Therefore it is technically accurate for SSL capacity/performance metrics to use the term "SSL TPS" and be referring to two completely different things.

This means it is important that whoever is interested in such data do a little research to determine exactly what is meant by SSL TPS when presented with it. Based on the definition, the actual results mean different things. When used to refer to HTTP TPS over SSL, the constraint is actually on the bulk encryption rate (related more to response time, latency, and throughput measurements), while SSL TPS measures the number of SSL sessions that can be created per second and is more related to capacity than response time metrics.

It can be difficult to determine which method was utilized, but if you see the term "SSL ID re-use" anywhere, you can be relatively certain the test results refer to HTTP TPS over SSL rather than SSL TPS. When SSL session IDs are reused, the handshaking and key exchange steps are skipped, which reduces the number of computationally expensive RSA operations that must be performed and artificially increases the results.

As always, if you aren't sure what a performance metric really means, ask. If you don't get a straight answer, ask again, or take advantage of all that great social networking you're doing and find someone you trust to help you determine what was really tested. Basing architectural decisions on misleading or misunderstood data can cause grief and be expensive later when you have to purchase additional licenses or solutions to bring your capacity up to what was originally expected.
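To make the distinction concrete, here is a rough, hypothetical sketch of measuring both numbers from a client: one loop pays for a full TCP and TLS handshake on every request (SSL TPS), while the other reuses a single TLS session for many HTTP requests (HTTP TPS over SSL). The target host is a placeholder, and this is an illustration of the terminology, not a benchmarking tool.

```python
# Contrast "SSL TPS" (new session per transaction) with "HTTP TPS over SSL"
# (one session reused for many requests). Assumes the target allows keep-alive.
import socket
import ssl
import time
from http.client import HTTPSConnection

HOST = "www.example.com"   # hypothetical test target
RUNTIME = 5                # seconds per measurement

ctx = ssl.create_default_context()

def ssl_tps():
    """Full TCP connect + TLS handshake per iteration: session setup rate."""
    count, end = 0, time.time() + RUNTIME
    while time.time() < end:
        with socket.create_connection((HOST, 443)) as raw:
            with ctx.wrap_socket(raw, server_hostname=HOST):
                pass   # the handshake completes inside wrap_socket()
        count += 1
    return count / RUNTIME

def http_tps_over_ssl():
    """One TLS session, many HTTP requests: bulk encryption / request rate."""
    conn = HTTPSConnection(HOST, context=ctx)
    count, end = 0, time.time() + RUNTIME
    while time.time() < end:
        conn.request("GET", "/")
        conn.getresponse().read()   # drain the response so the session is reused
        count += 1
    conn.close()
    return count / RUNTIME

print(f"SSL TPS (handshake every time): {ssl_tps():.1f}")
print(f"HTTP TPS over SSL (session reused): {http_tps_over_ssl():.1f}")
```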
WILS: Write It Like Seth. Seth Godin always gets his point across with brevity and wit. WILS is an attempt to be concise about application delivery topics and just get straight to the point. No dilly-dallying around.

- The Anatomy of an SSL Handshake
- When Did Specialized Hardware Become a Dirty Word?
- WILS: Virtual Server versus Virtual IP Address
- Following Google's Lead on Security? Don't Forget to Encrypt Cookies
- WILS: What Does It Mean to Align IT with the Business
- WILS: Three Ways To Better Utilize Resources In Any Data Center
- WILS: Why Does Load Balancing Improve Application Performance?
- WILS: Application Acceleration versus Optimization
- All WILS Topics on DevCentral
- What is server offload and why do I need it?

Don't Confuse A Rubber Stamp With Validation
There are many instances in the world where third-party verification of thoughts and ideas is just a useful thing to have: cases where the vested interest of one party makes their opinion suspect, even if it is unbiased. For those cases we have a whole collection of organizations and corporations that will research and verify, test and certify, validate and verify, whatever, depending upon the issue and the needs of the target audience.

A good example of this that I know of, for the obvious reasons, is gluten-free foods. There is a "certified gluten free" program in the US, but some of the testers are more thorough than others, and the label is sometimes misleading if not outright wrong. Other foods are marked (not certified) as gluten free, but are actually produced on lines that process foods containing gluten, which means that the ingredients are technically gluten free, but the final product may be contaminated with gluten molecules. It gets worse if you have an actual gluten allergy (as opposed to going on a GF fad diet), because US labeling laws only require you to report use of wheat in your product, ignoring other gluten sources.

(Image courtesy of LiveGlutenFreely.com)

You see the same type of thing in nearly every market. "Independent" test labs are popular in the high-tech business, but the definition of "independent" is stretched when you have to pay the company to do the testing. Of course those who pay are paying for results they can use, and while some organizations are good at taking negative feedback and using it to improve, since the funds for independent test labs generally come out of the marketing budget, most aren't. They're paying for marketing collateral, not feedback on improving the product.

And the same is often true of your use of outside professionals. In my day I have seen some uses of outside IT consultants that to this day leave me shaking my head. One place I worked was paying a suit-coat international consulting firm for an array of security services, including penetration testing. As the security staff walked out with one of the on-site consultants one day, he said "Oh, I forgot my briefcase", walked back through the doors – where the guard let him pass because he had just come out – and quickly went from desk to desk in a user pod just inside the doors. It didn't take him long to find usernames and passwords on sticky notes, and, jotting a few down, he left. Upon returning to his hotel, he called the penetration team and passed them actual user credentials.

I was not on the security team, but I was in their department. This seemed like completely fair game to me. Those ne'er-do-wells who want into your organization will not play by a single rule in the book, and it only takes an unmanaged second for a visitor to your building to come away with useful information – from subnet IDs to usernames and passwords. It didn't seem so to the security team, and it certainly didn't seem so to the organization's VP, who called the consulting firm and threatened to cancel their high-dollar contract over it. These results were mandatorily reported over said VP's head, and he was not thrilled that there were multiple penetrations. My thought was that we should have taken the incident to heart and changed policy – either enforced the "don't write down your credentials" rule, or enforced the "no visitors in the building unaccompanied" rule. But my thoughts were definitely in the minority.
Needless to say, any security audit done after this affair was a rubber stamp, not a validation, and the outrageous fees the organization paid to said firm were wasted – not on their account, but on ours. At another employer, we were seeking a replacement for a mission-critical system that was twenty years old. Not only was it twenty years old, but we had bought the source, and the language it was written in was twenty years out of date. I’m pretty certain that, had they been able to find programmers for the old system, it would still be running today. But they couldn’t, so they were looking at replacing it with something updated. There were several vendors who provided the specific vertical market solution required, and there were definite “camps” concerning which solution was best. So IT hired an outside business systems consulting firm for a hefty fee to help navigate the requirements and selection waters. This firm came in, reviewed all of the data, went and interviewed managers, and then decided that… Surprise! The system preferred by those who write the consulting checks was the system to choose. It was very much not a fit for the environment it was being put into, the company was too small, the source too immature, and the design to burdened. But the price was… Outrageously cheap. So by kicking a percentage of the cost over to a consulting firm, a rubber stamp of the selection was purchased, but the organization was not served. Indeed, in the end, the product by a top-tier vendor, whose sticker price (with install and consulting to get up and running) would have been cheaper and would have cut a year or more of over-runs off of the project. And the third, which I’ve mentioned on this blog more than once, was the time that one of the top three analyst firms in the world gave me three different answers to “who holds the largest database market share” based upon whom I talked to and how I worded my questions. For the amount we were paying them every year, giving the answer they thought I wanted was not providing service. Wonder if they still rubber stamp rather than answer direct questions. Sometimes, the politics of the situation dictate that even though you know you are right, you have to go get third-party help to validate your decisions. As long as there are reasonable people disagreeing on the best solution, a third party is a good solution if it is an either-or choice. And third party validation that your data is secure is certainly a worthwhile proposition. But make sure you’re getting impartial third parties to validate choices, and make sure they’re given the background and leeway to help you come to the best solution. In all of the cases above, the money was completely wasted, and that’s just not good for the organization. Bring in specialists, get a champion for each solution to tell them why it should be selected, let them research, and take their advice to heart. Don’t pay them to tell you what you’d already decided, that’s not helping anyone, if you want to run IT systems by fiat, then do so and save the money. And good IT management knows that. Most organizations navigate these waters most of the time with no problems and manage to get the most out of their consulting and analyst help. Just be aware when you’re leaning away from unbiased, and don’t jade a project with it. It is also good to remember when you read claims by vendors touting third party validation – including us – to look the gift horse in the mouth. 
Of course they paid for that validation in one way or another; what you have to decide is whether the validating organization is an authority or a rubber stamp factory. About a year ago we took to testing our own stuff and publishing the test data on DevCentral so you and others could pick it up and validate it yourselves. But third-party references are also important, so we still have those too, and they're useful, as long as you scrutinize the source.

Just keep doing what's best for the organization, and even though less-than-optimal choices will be made along the way, you'll recover from them. When you're making 500 choices a year about technology, apps, storage, networking gear, staffing, etc etc etc., there will be road bumps. Just as long as they were made in good faith and not by fiat with a rubber stamp, it's a sign of institutional health.

Related Blogs:
- F5 News - SSL VPN
- F5 Talks With Evan Loats From CSC About Remote Access
- It's like load balancing. On steroids.
- More Users, More Access, More Clients, Less Control
- Cloud is the Gift That Keeps On Giving – Lori MacVittie
- F5's New Ask F5SM Portal Enhances Worldwide Customer Support and ...

A Million VMS. Test environments in Virtual and Cloud infrastructures
One thing that some companies seem to have grabbed onto and run with, while others don't seem to have made the correct connections to fully utilize it, is testing in a highly virtualized or cloud environment. Of all the things these environments can do well, testing is one of the best possible use cases for deploying them. For some of you, this isn't news. I know some testing people who have this down to a science, and no doubt their wisdom is palely reflected in this post.

Related Articles and Blogs:
- The Web Service May Not Be The Bottleneck!
- Configuring a multi-server Testing Environment With Teams and F5 BIG-IP LTM VE
- 100 Percent Virtual? Sure, Run That Test.
- Infrastructure Matters: Challenges of Cloud-based Testing
- Lots of Little Virtual Web Applications Scale Out Better Than Scaling Up

For Thirty Pieces of Silver My Product Can Beat Your Product
One of the side-effects of the rapid increases in compute power, combined with an explosion of Internet users, has been the need for organizations to grow their application infrastructures to support more and more load. That means higher capacity everything – from switches to routers to application delivery infrastructure to the applications themselves. Cloud computing has certainly stepped up to address this, providing the means by which organizations can efficiently and more cost-effectively increase capacity. Between cloud computing and increasing demands on applications, there is a need for organizations to invest in the infrastructure necessary to build out a new network, one that can handle the load and integrate into the broader ecosystem to enable automation and ultimately orchestration.

Indeed, Denise Dubie of Network World pulled together data from analyst firms Gartner and Forrester, and the trend in IT spending shows that hardware is king this year. "Computing hardware suffered the steepest spending decline of the four major IT spending category segments in 2009. However, it is now forecast to enjoy the joint strongest rebound in 2010," said George Shiffler, research director at Gartner, in a statement.

That is, of course, good news for hardware vendors. The bad news is that the perfect storm of increasing capacity needs, massively more powerful compute resources, and the death of objective third-party performance reviews results in a situation that forces would-be IT buyers to rely upon third parties to provide "real-world" performance data to assist in the evaluation of solutions. The ability – or willingness – of an organization to invest in the hardware or software solutions to generate the load necessary to simulate "real-world" traffic on any device is minimal, and unsurprisingly so. Performance testing products like those from Spirent and Ixia are not inexpensive, and the investment is hard to justify because it isn't used very often. But without such solutions it is nearly impossible for an organization to generate the kind of load necessary to really test out potential solutions. And organizations need to test them out because they, themselves, are not inexpensive, and it's perfectly understandable that an organization wants to make sure their investment is in a solution that performs as advertised.

That means relying on third parties who have made the investment in performance testing solutions and can generate the kind of load necessary to test vendor claims. That's bad news because many third parties aren't necessarily interested in getting at the truth; they're interested in getting at the check that's cut at the end of the test. Because it's the vendor cutting that check and not the customer, you can guess whose interests are best served by such testing.

It's 2am: Do You Know What Algorithm Your Load Balancer is Using?
The wrong load balancing algorithm can be detrimental to the performance and scalability of your web applications. When you're mixing and matching virtual or physical servers you need to take care with how you configure your load balancer – and that includes cloud-based load balancing services. Load balancers do not at this time, unsurprisingly, magically choose the right algorithm for distributing requests for a given environment. One of the nice things about a load balancing solution that comes replete with application-specific templates is that all the work required to determine the optimal configuration for the load balancer and its associated functionality (web application security, acceleration, optimization) has already been done – including the choice of the right algorithm for that application. But for most applications there are no such templates, no guidance, nothing. Making things more difficult are heterogeneous environments in which the compute resources available vary from instance to instance. These variations make some load balancing algorithms unsuited to such environments. There is some general guidance you can use when trying to determine which algorithm is best suited to meeting the performance and scalability needs of your applications, based on an understanding of how the algorithms are designed to make decisions, but if you want optimal performance and scalability you'll ultimately have to do some testing.

Heterogeneous environments can pose a challenge to scale if careful consideration of load balancing algorithms is not taken. Whether the limitations on compute resources are imposed by a virtualization solution or the hardware itself, limitations that vary from application instance to application instance are an important factor to consider when configuring your load balancing solution.

Let's say you've got a pool of application instances and you know the capacity of each in terms of connections (X). Two of the servers can handle 500 concurrent connections, and one can handle 1000 concurrent connections. Now assume that your load balancer is configured to perform standard round robin load balancing between the three instances. Even though the total capacity of these three servers appears to be 2000 concurrent connections, by the time you hit 1501, the first of the three servers will be over capacity because it will have to try to handle 501 connections. If you tweak the configuration just a bit to indicate the maximum connection capacity for each node (instance) you can probably avoid this situation, but there are no guarantees.

Now let's make a small change in the algorithm – instead of standard round robin we'll use weighted round robin (often called "ratio"), and give the largest-capacity server a higher weight based on its capacity ratio to the other servers, say 2. This means the "bigger" server will receive twice as many requests as the other two servers, which brings the total capacity closer to what is expected (a toy sketch of this approach appears below).

You might be thinking that a least connection algorithm would be more appropriate in a heterogeneous environment, but that's not the case. Least connection algorithms base distribution upon the number of connections currently open on any given server instance; they do not necessarily take into consideration the maximum connection capacity for that particular node.
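Here is the toy sketch of the ratio-based approach mentioned above, with per-node connection limits layered on top. The node names, weights, and capacities mirror the hypothetical pool in the example (two 500-connection servers and one 1000-connection server weighted at 2), and connections are never released in this simplified model.

```python
# Weighted ("ratio") round robin with per-node connection limits.
class Node:
    def __init__(self, name, weight, max_connections):
        self.name = name
        self.weight = weight
        self.max_connections = max_connections
        self.active = 0                      # connections currently assigned

class WeightedRoundRobin:
    def __init__(self, nodes):
        # Each node appears in the rotation once per unit of weight.
        self.rotation = [n for n in nodes for _ in range(n.weight)]
        self.index = 0

    def pick(self):
        # Walk at most one full rotation looking for a node with spare capacity.
        for _ in range(len(self.rotation)):
            node = self.rotation[self.index]
            self.index = (self.index + 1) % len(self.rotation)
            if node.active < node.max_connections:
                node.active += 1
                return node
        return None                          # every node is at its connection limit

pool = [Node("small-1", 1, 500), Node("small-2", 1, 500), Node("big-1", 2, 1000)]
lb = WeightedRoundRobin(pool)
for _ in range(2000):
    lb.pick()
print({n.name: n.active for n in pool})      # {'small-1': 500, 'small-2': 500, 'big-1': 1000}
```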
Fastest response time combined with per-node connection limits would be a better option than least connections alone, but a fastest response time algorithm tends to result in a very unequal distribution as load increases in a heterogeneous environment. This does not, however, say anything about the performance of the application when using any of the aforementioned algorithms. We do know that as application instances near capacity, performance tends to degrade. Thus we could extrapolate that the performance for the two "smaller" servers will degrade faster than the performance for the bigger server, because they will certainly reach capacity under high load before the larger server instance – when using some algorithms, at least. Algorithms like fastest response time and least connections tend to favor higher-performing servers, which means in the face of a sudden spike of traffic, performance may degrade using those algorithms as well.

How about more "dynamic" algorithms that take into consideration multiple factors? Dynamic load balancing methods are designed to work with servers that differ in processing speed and memory. The resulting load balancing decisions may be uneven in terms of distribution but generally provide a more consistent user experience in terms of performance. For example, the observed dynamic load balancing algorithm distributes connections across applications based on a ratio calculated every second, and predictive dynamic load balancing uses the same ratio but also takes into consideration the change between previous connection counts and current connection counts and adjusts the ratio based on the delta. Predictive mode is more aggressive in adjusting ratio values for individual application instances based on connection changes in real time, and in a heterogeneous environment it is likely better able to handle the differences between server capabilities.

What is TCP multiplexing?

TCP multiplexing is a technique used primarily by load balancers and application delivery controllers (but also by some stand-alone web application acceleration solutions) that enables the device to "reuse" existing TCP connections. This is similar to the way in which persistent HTTP 1.1 connections work, in that a single HTTP connection can be used to retrieve multiple objects, thus reducing the impact of TCP overhead on application performance. TCP multiplexing allows the same thing to happen for TCP-based applications (usually HTTP / web), except that instead of the reuse being limited to only one client, the connections can be reused over many clients, resulting in much greater efficiency of web servers and faster-performing applications.

Interestingly enough, chatting with Dan Bartow (now CloudTest Evangelist and Vice President at SOASTA) about his experiences as Senior Manager of Performance Engineering at Intuit revealed that testing different algorithms under heavy load generated externally finally led them to the discovery that a simple round robin algorithm combined with the application of TCP multiplexing options yielded a huge boost in both capacity and performance. But that was only after testing under conditions which were similar to those the applications would experience during peaks in usage, and normalization of the server environment.
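As a purely conceptual illustration of the connection reuse behind TCP multiplexing (real ADCs do this in their data plane, not in application code), the toy sketch below has many client requests share a small pool of persistent server-side connections instead of each client opening its own. The backend hostname is hypothetical.

```python
# Toy server-side connection pool: client requests borrow an already-open
# backend connection instead of opening a new TCP connection per client.
import queue
from http.client import HTTPConnection

class BackendConnectionPool:
    def __init__(self, host, size=4):
        self.pool = queue.Queue()
        for _ in range(size):
            self.pool.put(HTTPConnection(host))   # persistent keep-alive connections

    def forward(self, path):
        conn = self.pool.get()                    # borrow an existing connection
        try:
            conn.request("GET", path)
            return conn.getresponse().read()
        finally:
            self.pool.put(conn)                   # hand it to the next client request

# Hypothetical usage:
# pool = BackendConnectionPool("app-server.internal")
# for client_request in ("/", "/login", "/cart"):
#     pool.forward(client_request)
```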
The Intuit experience illustrates well that performance and availability aren't simply a matter of dumping a load balancing solution into the mix – it's important to test, tweak configurations, and test again to find the overall infrastructure configuration that's going to provide the best application performance (and thus end-user experience) while maximizing resource utilization. Theoretical, mathematically accurate models of load balancing are all well and good, but in the real world the complexity of the variables and the interaction between infrastructure solutions, applications, and servers is much higher, rendering the "theory" just that – theory.

Invariably, which load balancing algorithm is right for your application is going to depend heavily on what metrics are most important to you. A balance of server efficiency, response time, and availability is likely involved, but which one of these key metrics is most important depends on what business stakeholders have deemed most important to them. The only way to really determine which load balancing algorithm will achieve the results you are looking for is to test them, under load, and observe the distribution and performance of the application.

FIRE and FORGET is NOT a GOOD STRATEGY

The worst thing you can do is "fire and forget" about your load balancer. The algorithm that might be right for one application might not be right for another, depending on the style of application, its usage patterns, the servers used to serve it, and even the time of year. Unfortunately we're not quite at the point where the load balancer can automatically determine the right load balancing algorithm for you, but there are ways to adjust – dynamically – the algorithm based on not just the application but also the capabilities of the servers (physical and/or virtual) being load balanced, so one day it is quite possible that through the magic of Infrastructure 2.0, load balancing algorithms will be modified on demand based on the type of servers that make up the pool of resources.

In order to reach the level of sophistication we'd (all) like to see, however, it's necessary to first understand the impact of the load balancing algorithm on applications and determine which one is best able to meet the service level agreements in various environments based on a variety of parameters. This will become more important as public and private cloud computing environments are leveraged in new ways and introduce more heterogeneous environments. Seasonal demand might, for example, be met by leveraging different "sizes" of unused capacity across multiple servers in the data center. These "servers" would likely be of different CPU and RAM capabilities and thus would certainly be impacted by the choice of load balancing algorithm. Being able to dynamically modify the load balancing algorithm based on the capacities of application instances is an invaluable tool when attempting to maximize the efficiency of resources while minimizing associated costs.

There is, of course, a lack of control over algorithms in cloud computing environments as well, which makes the situation more difficult. With a limited set of choices available from providers, the algorithm that's best for your application and server resource composition may not be available. Providers need to make it easier for customers to take advantage of modern, application- and resource-aware algorithms that have evolved through trial and error over the past decade.
Again, Infrastructure 2.0 enables this level of choice but must be leveraged by the provider to extend that choice and control to its customers. For now, it's going to have to be enough to (1) thoroughly test the application and its supporting infrastructure under load and (2) adjust the load balancing algorithm to meet your specific performance criteria based on what is available. You might be surprised to find how much better your response time and capacity can be when you're using the "right" load balancing algorithm for your application – or at least one that's more right than it is wrong if you're in a cloud computing environment.

Performance Testing Web 2.0: Don't forget the APIs
Keynote, well known for its application performance testing and monitoring services, just announced a new version of its KITE (Keynote Internet Testing Environment) that is now capable of testing Web 2.0 sites that make use of AJAX, Flash, and other "hidden" methods of obtaining content. (Announcement of KITE 2)

Performance testing dynamic and HTML websites is now a fairly straightforward process; however, the rise of Web 2.0 sites that don't rely on clicking to reveal another new page has made them almost impossible to test. Keynote has now developed a scripted system that allows you to test Web 2.0 sites as easily as Web 1.0 sites. KITE 2.0 (Keynote Internet Testing Environment) is the latest version of Keynote's product for testing and analysing the performance of Web applications. KITE 2.0 gives users the flexibility of testing instantly from their desktop or from geographic locations across Keynote's on-demand global test and measurement network. According to Keynote, KITE 2.0 enables Web developers, QA professionals, performance analysts and others to execute rapid performance analysis and validation by measuring the end user experience of Web 2.0 applications that include AJAX and asynchronously downloaded content.

Ensuring that the additional burden placed on servers by "hidden" requests is accounted for is imperative when attempting to understand both capacity and end-user performance. But when you're laying out that testing plan, don't forget about any APIs you might be providing. If, like Twitter, Google, Facebook, and Amazon, you're using an API to allow integration with other Web 2.0 sites or applications, make certain you performance test that as well. And then test them at the same time you're load-testing your application. As we've learned from Twitter's most public stability issues, the APIs are going to put additional stress on your network, servers, and databases that must be considered when determining capacity and performance levels.

Important, too, is to take into consideration RSS feeds coming from your site. Include those in your performance test, as the retrieval of those files adds an additional burden on servers and can impact the performance of the entire site. Most feed readers and aggregators poll for RSS feeds on a specified interval, so set up a script to grab that RSS from multiple clients every 10 minutes or so during the testing to ensure that you're simulating maximum load as closely as possible (a minimal polling sketch appears at the end of this article).

Performance testing sites today is getting more complex because we're adding so many entry points into our sites and applications. Don't forget about those extra entry points when you perform your load testing, or you might find out in a most public way that your application just can't handle the load.
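Here is the minimal RSS polling sketch referenced above: a handful of simulated feed readers fetching the feed on a fixed interval for the duration of the load test. The feed URL, client count, and durations are placeholders to adapt to your own test plan.

```python
# Simulated feed readers polling an RSS feed every 10 minutes during a load test.
import threading
import time
import urllib.request

FEED_URL = "https://www.example.com/feed.xml"   # hypothetical feed URL
CLIENTS = 20                                    # simulated feed readers
POLL_INTERVAL = 600                             # seconds between polls (10 minutes)
TEST_DURATION = 3600                            # run alongside the main load test

def feed_reader(client_id):
    deadline = time.time() + TEST_DURATION
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(FEED_URL, timeout=30) as resp:
                resp.read()
        except OSError as err:
            print(f"client {client_id}: fetch failed: {err}")
        time.sleep(POLL_INTERVAL)

threads = [threading.Thread(target=feed_reader, args=(i,)) for i in range(CLIENTS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```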