compression
12 Topics

VS in BigIP returns uncompressed HTTP response that was compressed by the backend Apache server
Our backend servers run Apache and compress the HTTP data; however, it seems the VS associated with the backend node is unpacking the compressed data and sending it out uncompressed. We verified this with curl: sending the request directly to the Apache server returned a compressed response, while sending the same request to the corresponding VS returned the response uncompressed. Isn't the default behavior for a VS to be in bypass mode (i.e. return the response "as is")? We checked the BIG-IP VS configuration: it has HTTP Profile = http and HTTP Compression Profile = None, and there is no iRule associated with the VS. What are we missing?

APM migration to iseries causes VPN network access tunnels to close
We are working on a migration from old 5250v hardware to a new iSeries i5800; we have extended the APM cluster and the configuration is fully in sync. We fail over and activate the new cluster member the night before, and the next working day people start working with the new solution without any issues. After some time, when more users are connected, users begin being disconnected from the VPN, and in the APM logs we see that the tunnels are being closed and restarted. During this time data is stalled on the tunnel and not flowing, although the VPN client is still connected, similar to this bug: https://cdn.f5.com/product/bugtracker/ID600985.html This seems to be a performance issue, however we don't see any CPU utilization issue. The new hardware also uses a dedicated Coleto Creek CPU for SSL and compression offloading, and GZIP compression offloading is used on the network access tunnels. On the old hardware there is no stability issue with the exact same configuration; the only difference is that there the dedicated Cave Creek CPU is used for hardware offloading. This article states that using compression with APM network access can cause CPU spikes only when no hardware offloading is used: https://support.f5.com/csp/article/K12524516 Could there perhaps be a bug in performing hardware compression offloading (GZIP deflate) on the new Coleto Creek CPU? If hardware compression offloading is used, this should not increase load on the TMM-assigned CPU cores, since it is not processed in software?

Understanding STREAM expression and Compression
Hello - I have a question to try and confirm my understanding of using STREAM and compression. I'm aware of the need to disable compression so STREAM is able to inspect the payload, but after the STREAM expression has done its replacing, can the content be compressed again to improve performance, or is that benefit lost? In our setup, we have physical LTMs that handle SSL offloading (part of the cloud solution we use) and virtual LTMs that we configure with service-specific iRules etc. So on the physical LTM with SSL offload, there is a (blank) STREAM profile and an iRule to replace http:// with https:// in the response, with the following:

    # PHYSICAL LTM WITH SSL OFFLOAD
    when HTTP_REQUEST {
        # tell server not to compress response
        HTTP::header remove Accept-Encoding
        # disable STREAM for request flow
        STREAM::disable
    }
    when HTTP_RESPONSE {
        # catch and replace redirect headers
        if { [HTTP::header exists Location] } {
            HTTP::header replace Location [string map {"http://" "https://"} [HTTP::header Location]]
        }
        # only look at text data
        if { [HTTP::header Content-Type] contains "text" } {
            # create a STREAM expression to replace any http:// with https://
            STREAM::expression {@http://@https://@}
            # enable STREAM
            STREAM::enable
        }
    }

On the virtual LTM, we have a similar entry in the iRule:

    # VIRTUAL LTM
    when HTTP_REQUEST {
        # tell server not to compress response
        HTTP::header remove Accept-Encoding
        # disable STREAM for request flow
        STREAM::disable
    }
    when HTTP_RESPONSE {
        # catch and replace redirect headers
        if { [HTTP::header exists Location] } {
            HTTP::header replace Location [string map {"://internal.url" "://external.url"} [HTTP::header Location]]
        }
        # only look at text data
        if { [HTTP::header Content-Type] contains "text" } {
            # create a STREAM expression to replace any ://internal.url with ://external.url
            STREAM::expression {@://internal.url@://external.url@}
            # enable STREAM
            STREAM::enable
        }
    }

So in this setup, do we lose the benefit of HTTP compression? Thanks

Compression stripped by Silverline
We've recently experienced slowdowns serving web pages, and here's something we've found: apparently, when traffic passes through the WAF, the WAF strips out the following header: Content-Encoding: gzip. We serve pages compressed with GZIP, but from what we can see, the WAF strips that compression, severely slowing down page delivery. Does this make sense to anyone, and is there a way to remediate this issue?

best file types to apply compression
When studying web acceleration, I came across the claim that applying compression to certain file types, such as videos and images, is useless, while for others, such as text and HTML, it is very useful. I want to know: is this true, and why?

COMPRESS::enable request
Helpful links:

https://devcentral.f5.com/questions/accepting-compressed-requests-not-responsesanswer30205
https://devcentral.f5.com/wiki/iRules.COMPRESS__enable.ashx

After reading the links above, I am still unsure whether "COMPRESS::enable request" will accept compressed requests to the F5, decompress them for the backend, and then re-compress the response. Can someone clarify this please? Will the iRule layout below work when trying to accept compressed requests?

    when HTTP_RESPONSE {
        if { [HTTP::header Content-Type] contains "text/html;charset=UTF-8" } {
            COMPRESS::enable
        }
    }

Thank you for your support. Regards, Swordfish

Get system compression statistics from command line?
Hi! Does anyone know a way of getting system-wide compression throughput from the command line? I need to get certain metrics limited by licenses, but have only found SSL TPS and throughput via tmsh, not compression. Tmsh and bash are what I'm allowed to use, and I can't use SNMP, as it has to be compatible with all systems, even those with SNMP disabled (I'm doing this for a client with very specific needs and methods). Example:

    show sys performance all-stats

    Sys::Performance System
    -------------------------------------------------------------------
    System CPU Usage(%)            Current  Average  Max(since 11/06/16 12:37:17)
    -------------------------------------------------------------------
    Utilization                    28       28       43

    -------------------------------------------------------------------
    Memory Used(%)                 Current  Average  Max(since 11/06/16 12:37:17)
    -------------------------------------------------------------------
    TMM Memory Used                11       11       11
    Other Memory Used              63       63       63
    Swap Used                      0        0        0

    Sys::Performance Connections
    ---------------------------------------------------------------------------
    Active Connections             Current  Average  Max(since 11/06/16 12:37:17)
    ---------------------------------------------------------------------------
    Connections                    45.1K    43.2K    49.4K

    ---------------------------------------------------------------------------
    Total New Connections(/sec)    Current  Average  Max(since 11/06/16 12:37:17)
    ---------------------------------------------------------------------------
    Client Connections             916      962      1.1K
    Server Connections             745      782      922

    ---------------------------------------------------------------------------
    HTTP Requests(/sec)            Current  Average  Max(since 11/06/16 12:37:17)
    ---------------------------------------------------------------------------
    HTTP Requests                  2.8K     2.8K     3.3K

    Sys::Performance Throughput
    -----------------------------------------------------------------------------
    Throughput(bits)(bits/sec)     Current  Average  Max(since 11/06/16 12:37:17)
    -----------------------------------------------------------------------------
    Service                        644.2M   622.5M   807.2M
    In                             659.1M   638.1M   828.2M
    Out                            303.5M   300.6M   375.6M

    -----------------------------------------------------------------------------
    SSL Transactions               Current  Average  Max(since 11/06/16 12:37:17)
    -----------------------------------------------------------------------------
    SSL TPS                        599      640      790

    -----------------------------------------------------------------------------
    Throughput(packets)(pkts/sec)  Current  Average  Max(since 11/06/16 12:37:17)
    -----------------------------------------------------------------------------
    Service                        78.7K    77.1K    97.6K
    In                             78.5K    77.1K    97.8K
    Out                            68.4K    67.8K    84.1K

    Sys::Performance Ramcache
    ------------------------------------------------------------------------
    RAM Cache Utilization(%)       Current  Average  Max(since 11/06/16 12:37:17)
    ------------------------------------------------------------------------
    Hit Rate                       61       63       70
    Byte Rate                      67       67       75
    Eviction Rate                  8        14       28

Any input appreciated! /Patrik

Vary: User-Agent and compression for HTTP 1.0
Hi, We're running BIG-IP 10.2.0, and with the default "http" profile it adds a "Vary: User-Agent, Accept-Encoding" header to responses that it compresses. To disable this behaviour, it's enough to set "compress-allow-http-10" to "enabled". It will then generate "Vary: Accept-Encoding" headers, with no User-Agent. "Vary: User-Agent" is highly undesirable for us, as we want to put a cache in front of our F5s. So, what effects does this setting have besides (obviously) allowing compression with HTTP 1.0 and "fixing" the Vary header? I'm asking because the latter seems rather counter-intuitive to me, and I'm wondering what other tricks that setting might have up its sleeve. Also, why is this setting disabled by default? Is it safe to just turn on? Has anyone seen any adverse effects from doing so? Thanks in advance! Alex
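To illustrate why "Vary: User-Agent" is a problem for the cache-in-front setup described in the last question: a shared cache typically builds its cache key from the URL plus the request's value for every header named in Vary, so varying on User-Agent fragments the cache into one entry per browser string. A minimal Python sketch (not BIG-IP code; the header values are made-up examples):

```python
def cache_key(url, vary_header, request_headers):
    """Build a cache key from the URL plus the request's value
    for each header listed in the response's Vary header."""
    varied = [h.strip().lower() for h in vary_header.split(",")]
    return (url,) + tuple(request_headers.get(h, "") for h in varied)

chrome = {"user-agent": "Chrome/118", "accept-encoding": "gzip"}
firefox = {"user-agent": "Firefox/119", "accept-encoding": "gzip"}

# With "Vary: User-Agent, Accept-Encoding", each distinct UA gets its own entry:
k1 = cache_key("/page", "User-Agent, Accept-Encoding", chrome)
k2 = cache_key("/page", "User-Agent, Accept-Encoding", firefox)
print(k1 == k2)  # False -- cache fragmented per browser

# With "Vary: Accept-Encoding" alone, both browsers share one cached copy:
k3 = cache_key("/page", "Accept-Encoding", chrome)
k4 = cache_key("/page", "Accept-Encoding", firefox)
print(k3 == k4)  # True
```

Since nearly every browser sends a unique User-Agent string, varying on it all but disables shared caching of compressed responses, which is why the "Vary: Accept-Encoding"-only behaviour is preferable here.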