web acceleration

How to get the applications assigned to a Web Acceleration Profile? (Possible API Bug)
I am trying to work out which applications a specific web acceleration profile is connected to, using the SOAP interface (Ruby gem). When I make a get_application request (from the SOAP documentation):

    interfaces['LocalLB.ProfileWebAcceleration'].get_application(["/MYPARTITION/my_app_cache_profile"])

I get an empty response object: []. I would expect this to contain "/MYPARTITION/my_app.app".

Verification (the profile is connected): I can confirm that this profile is connected to an app by looking for it from the app side:

    system_session.set_active_folder "/MYPARTITION/my_app.app"
    interfaces['LocalLB.ProfileWebAcceleration'].get_list
    # => ["/MYPARTITION/my_app.app/my_app_cache_profile"]

NB: This does not help me, because if you apply a profile to an app after creation the path is "/MYPARTITION/my_app_cache_profile", so running get_list against the app path returns zero results even though a caching profile is assigned.
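
In the meantime, the assignment can be cross-checked outside the SOAP API by searching the configuration for virtual servers that reference the profile. A minimal tmsh sketch, reusing the partition and profile names from above (the grep pattern is only illustrative):

    # Print every virtual server in the partition on one line each,
    # then filter for any that reference the web acceleration profile.
    tmsh -c "cd /MYPARTITION; list ltm virtual one-line" | grep my_app_cache_profile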

Web Acceleration profile and changing HTTP/1.1 to 1.0

Hi, I am not an HTTP expert, but this behavior is quite a surprise to me. Or maybe it's perfectly OK?

Setup (tested on v11.2.0 HF7): VS with a Web Acceleration profile attached (based on optimized-caching), no OneConnect, no HTTP Compression profile.

Request from client:

    GET /images/wwfr.png HTTP/1.1
    Host: cache.test.com
    User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:39.0) Gecko/20100101 Firefox/39.0
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
    Accept-Language: pl,en-US;q=0.7,en;q=0.3
    Accept-Encoding: gzip, deflate
    Connection: keep-alive

Request from BIG-IP to backend server:

    GET /images/wwfr.png HTTP/1.0
    Host: cache.test.com
    User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:39.0) Gecko/20100101 Firefox/39.0
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
    Accept-Language: pl,en-US;q=0.7,en;q=0.3
    Accept-Encoding: gzip, deflate
    Connection: keep-alive

If I disable the Web Acceleration profile, the request from BIG-IP to the server is HTTP/1.1 again. So why the change?

Another issue: I have an iRule attached to the VS to disable caching of the response on the client side:

    when HTTP_RESPONSE {
        HTTP::header insert Cache-Control no-store
    }

What surprised me is that BIG-IP takes this header into account when deciding whether the server response is cacheable. That is quite strange: the header is inserted on the client side, so it should not influence the server-side decision. Or am I wrong here? Piotr
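
On the second issue, one commonly suggested workaround is to insert the header in HTTP_RESPONSE_RELEASE instead of HTTP_RESPONSE, so that it is added after the caching filter has already evaluated the original server headers. A hedged sketch, entered from an interactive tmsh session (the rule name is made up, and whether the RELEASE event really fires after the Web Acceleration cacheability decision should be verified on your version):

    # Insert the client-side Cache-Control header late, in HTTP_RESPONSE_RELEASE,
    # so the caching engine sees only the original server response headers.
    create ltm rule no_store_client_side {
        when HTTP_RESPONSE_RELEASE {
            HTTP::header insert Cache-Control no-store
        }
    }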

Simple Questions from a Newbie...

1) We want to cache very specific paths on our site, mainly for static content. If we wanted to cache everything under, say, http://example.com/pdfs, would we just add '/pdfs/*' to the URI Include list?
2) If we do this, are we sure that it would NOT be caching http://example.com/ (the root path) or even something like http://example.com/foo?
3) Is there a way to view the cache contents, as in, what files it is actually caching?
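
For question 1, the URI Include list can also be managed from tmsh; the attribute behind it is cache-uri-include. A minimal sketch, assuming a profile named my_wa (the name is illustrative):

    # With an include list present, only matching URIs are considered for caching
    tmsh modify ltm profile web-acceleration my_wa cache-uri-include add { /pdfs/* }

That behavior also answers question 2: once an include list exists, the root path and /foo would not be cached, because they match no include entry. For question 3, see the "How to dump list of URIs that were cached?" question later in this topic.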

Acceleration Profile - OWA - Users seeing cached mailbox of other user

I have Exchange users accessing OWA who are complaining that they are seeing cached information from another user's inbox. So far it appears that all of the users experiencing this are sourcing from the same office, so they are likely behind a proxy. They log in to OWA and see someone else's name at the top of their window, and when they go to send an email it has someone else's signature block. Once they refresh they get their own mailbox, but this is obviously a fairly major issue.

I'm about 99% sure this is because we have the web acceleration profile enabled, since I've seen this happen before. Rather than just removing the web acceleration profile, does anyone know specifically which URIs need to be excluded? I used the iApp to build this virtual server, so it has the standard URIs:

    uglobal.js
    /owa/ev.owa
    oab.xml

I can provide a list of the URIs that are getting cached if that helps. The vast majority are /owa/prem/15.0.1210.6/scripts... or /owa/prem/15.0.1210.6/resources... Thanks in advance!
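
Without the exact cached-URI list it is hard to be definitive, but the usual approach is to exclude the dynamic, per-user OWA endpoints while leaving the versioned /owa/prem/... static paths cacheable. A sketch, assuming the iApp-created profile is named owa_wa (the profile name and the exclude patterns are illustrative and should be checked against your actual cached-URI list):

    # Keep user-specific OWA responses out of the shared cache; static
    # scripts and resources under /owa/prem/<version>/ remain cacheable.
    tmsh modify ltm profile web-acceleration owa_wa cache-uri-exclude add { /owa/ev.owa* /owa/service.svc* }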

LTM Web Acceleration Profile Configuration

My organization and I are new to BIG-IP. Everything has been working quite well, but I do have a question about Web Acceleration profiles. For LTM, the iApps I have used so far create one by default. This seemed to be working satisfactorily until the programmers began complaining about it. The previous load balancer my company used apparently did not do anything like F5's Web Acceleration profiles do, so the old ADC wasn't caching any items in that regard.

I got with the programmers and told them we can clear the cache when we make changes, we can reduce the maximum age, and we can limit what items are cached, but they don't like that either. They want the LTM to recognize that a document has changed without any cache clearing: they want to be able to make changes to web pages (including items the Web Acceleration profile caches) and have all machines see those changes instantly and automatically. Of course this needs to happen, and the web pages we serve change often, but the only way I see of doing that is clearing the cache on the Web Acceleration profile.

Is there a provision for the LTM to check the web server's last-modified dates for cached items, so that if an item changes the new version is served up automagically instead of the old one? Do Web Acceleration profiles provide enough improvement to justify their use in a medium-sized environment? What do other companies regularly do?
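
There is no per-request last-modified check in the basic profile that I know of, so the usual compromises are to shorten how long objects live in the cache, or to flush the cache as part of the publish process. A sketch of both, assuming a profile named my_wa (name illustrative):

    # Objects are refetched from the origin after 60 seconds,
    # so published changes appear within about a minute.
    tmsh modify ltm profile web-acceleration my_wa cache-max-age 60

    # Or flush all cached entries for the profile right after a publish
    tmsh delete ltm profile ramcache my_wa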

Web Acceleration profile not caching

Dear All, I have a BIG-IP with LTM and ASM provisioned, and I applied a Web Acceleration profile with the default settings (version 11.5.1) to a virtual server. A lot of traffic is passing through, but I simply do not see any traffic being cached inside the RAM cache. I also analyzed the traffic, and the HTTP reply packets don't contain the new Age header value. In the Web Acceleration profile counters I notice a lot of misses, but when I analyze the traffic in Wireshark I see a lot of objects like css, png, jpg, json, and js that I believe are all cacheable.

What I would like to achieve is to alter the cache timer for all objects passing through the F5 towards the clients by inserting the new Age header. On some objects I notice Cache-Control: private, must-revalidate, max-age=0, but most of the object responses don't have any caching header at all. All the objects have response code 200, so that is also not the issue here. Test results show that the Age header is never altered as configured in the Web Acceleration profile.

As I said before, the profile is very basically configured: no filter, Ignore Headers set to All (also tried None), Insert Age Header enabled, aging rate 9, and the caching time is 100 days. I have never seen anything in the cache. I would like to know why none of these objects are cacheable and why the F5 is not changing the Age time (cache expiration date). Perhaps there is something that needs to be configured within the application itself to make it cacheable? I hope someone has the answer.
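
One early check: responses carrying Cache-Control: private or max-age=0, like the one quoted above, are not cacheable by default, and objects with no caching headers at all may also be skipped depending on the profile's heuristics. To see whether anything is being admitted at all, the cache contents and counters can be inspected from tmsh (the profile name is illustrative, and the second command may vary by version):

    # List entries currently held in the RAM cache for this profile
    tmsh show ltm profile ramcache my_wa

    # Hit/miss counters for the profile, as shown in the GUI
    tmsh show ltm profile web-acceleration my_wa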

How to dump list of URIs that were cached?

I am only getting 80 records of output when I run the following command to dump the URIs that were cached, but when I check the profile statistics it says 6.1K items were cached. LTM version 11.2.1.

    tmsh show /ltm profile ramcache my_cache_profile

Is there a way to dump all the cached items? Thanks for your help in advance.
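
The likely cause is that the command truncates its display by default; it accepts a max-responses option to raise the limit. A sketch (the value is arbitrary; confirm the option via tab completion on your version):

    # Raise the display limit so more than the default number of entries are listed
    tmsh show /ltm profile ramcache my_cache_profile max-responses 10000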

SPDY and Web Acceleration cannot be used together

If I enable both a SPDY and a Web Acceleration profile on a virtual server, the initial HTML page is downloaded, but all subsequent HTTP requests (JavaScript, CSS, images, etc.) return with size 0 and response status 0. If I switch off either of them, everything seems to work fine. Has anyone else encountered this? Platform: BIG-IP 11.5.0 Build 1.0.227 Hotfix HF1
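
Until the root cause is found, the practical way to isolate it is to toggle one of the two profiles on the virtual server from tmsh and retest. A sketch, with made-up virtual server and profile names:

    # Temporarily remove the web acceleration profile from the virtual server
    tmsh modify ltm virtual my_vs profiles delete { my_wa }

    # Re-add it after testing
    tmsh modify ltm virtual my_vs profiles add { my_wa }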

Does Cloud Solve or Increase the 'Four Pillars' Problem?

It has long been said, often by this author, that there are four pillars to application performance:

- Memory
- CPU
- Network
- Storage

As soon as you resolve one in response to application response times, another becomes the bottleneck, even if you are not hitting that bottleneck yet. In a bit more detail:

- "Memory consumption," because this impacts swapping in modern operating systems.
- "CPU utilization," because regardless of OS, there is a magic line after which performance degrades radically.
- "Network throughput," because applications have to communicate over the network, and blocking or not (almost all coding for networks today is), the information requested over the network is necessary and will eventually block code from continuing to execute.
- "Storage," because IOPS matter when writing to or reading from disk (or when the OS swaps memory out and back in).

These four have long been relatively easy to track, and the relationship is pretty easy to spot: when you resolve one problem, one of the others becomes the "most dangerous" to application performance. But historically, you've always had access to the hardware. Even in highly virtualized environments, these items could be considered at both the host and guest level, because both individual VMs and the entire system matter.

When moving to the cloud, the four pillars become much less manageable. How much less depends a lot upon your cloud provider, and on how you define "cloud." Put in simple terms, if you are suddenly struck blind, that does not change what's in front of you, only your ability to perceive it.

In the PaaS world, you have only the tools the provider offers to measure these things, and are urged not to think of the impact that host machines may have on your app. But they do have an impact. In an IaaS world you have somewhat more insight, but as others have pointed out, less control than in your datacenter.

(Picture courtesy of Stanley Rabinowitz, Math Pro Press.)

In the SaaS world, assuming you include that in "cloud," you have zero control and very little insight. If your app is not performing, you'll have to talk to the vendor's staff to (hopefully) get them to resolve issues.

But is the problem any worse in the cloud than in the datacenter? I would have to argue no. Your ability to touch and feel the bits is reduced, but the actual problems are not. In a pure-play public cloud deployment, the performance of an application is heavily dependent upon your vendor, but the top-tier vendors (Amazon springs to mind) can spin up copies as needed to reduce workload. This is not a far cry from one common performance trick used in highly virtualized environments: bring up another VM on another server and add it to load balancing. If the app is poorly designed, the net result is not that you're buying servers to host instances; it is instead that you're buying instances directly.

This has implications for IT. The reduced up-front cost of using an inefficient app, no matter which of the four pillars it is inefficient in, means that IT shops are more likely to tolerate inefficiency, even though in the long run the cost of paying monthly may be far more than the cost of purchasing a new server was, simply because the budget pain is reduced.

There are a lot of companies out there offering information about cloud deployments that can help you see if you feel blind. Fair disclosure: F5 is one of them, and I work for F5. That's all you're going to hear on that topic in this blog.

While knowing does not always directly correlate to taking action, and there is some information that only the cloud provider could offer you, knowing where performance bottlenecks are does at least give some level of decision-making back to IT staff. If an application is performing poorly, looking into what appears to be happening (you can tell network bandwidth, VM CPU usage, VM IOPS, etc., but not what's happening on the physical hardware) can inform decision-making about how to contain the OpEx costs of cloud.

Internal cloud is a much easier play: you still have access to all the information you had before cloud came along, and generally the investigation is similar to that used in a highly virtualized environment. From a troubleshooting-performance-problems perspective, it's much the same. The key with both virtualization and internal (private) clouds is that you're aiming for maximum utilization of resources, so you will have to watch for the bottlenecks more closely; you're "closer to the edge" of performance problems, because you designed it that way.

A comprehensive logging and monitoring environment can go a long way in all cloud and virtualization environments toward keeping on top of issues that crop up, particularly in a large datacenter with many apps running. And developer education on how not to be a resource hog is helpful for internally developed apps. For externally developed apps, the best you can do is ask for sizing information and then test their assumptions before buying.

Sometimes, cloud simply is the right choice. If network bandwidth is the prime limiting factor, and your organization can accept the perceived security/compliance risks, for example, the cloud is an easy solution: bandwidth in the cloud is either not limited, or limited by your willingness to write a monthly check to cover usage. Either way, it's not an Internet connection upgrade, which can be dastardly expensive not just at install, but month after month.

Keep rocking it. Get the visibility you need; don't worry about what you don't need.

Related Articles and Blogs:

- Don MacVittie - Load Balancing For Developers
- Advanced Load Balancing For Developers. The Network Dev Tool
- Load Balancers for Developers – ADCs Wan Optimization ...
- Intro to Load Balancing for Developers – How they work
- Intro to Load Balancing for Developers – The Gotchas
- Intro to Load Balancing for Developers – The Algorithms
- Load Balancing For Developers: Security and TCP Optimizations
- Advanced Load Balancers for Developers: ADCs - The Code
- Advanced Load Balancing For Developers: Virtual Benefits
- Don MacVittie - ADCs for Developers
- Devops Proverb: Process Practice Makes Perfect
- Devops is Not All About Automation
- 1024 Words: Why Devops is Hard
- Will DevOps Fork?
- DevOps. It's in the Culture, Not Tech.
- Lori MacVittie - Development and General
- Devops: Controlling Application Release Cycles to Avoid the ...
- An Aristotlean Approach to Devops and Infrastructure Integration
- How to Build a Silo Faster: Not Enough Ops in your Devops