compression
12 Topics

Selective Pass-through
Hello, I need to inject data into some web pages using an iRule. The backend server sends HTTP data chunked and compressed (gzip), but only some pages need the injection, and those can be identified only by an HTTP header. I also need all outgoing traffic to be chunked and compressed, with working keep-alive connections. I got this working by setting Response Chunking and Compression to Selective and performing the injection itself in an iRule using STREAM::expression.

The problem, however, is that all data is decompressed (and in turn rechunked) by the F5 as soon as the compression module is not set to Disabled. This puts an unnecessarily high load on the F5, which I'd like to avoid. What I want is to identify the header in the response from the backend server; if it is found, inject, rechunk, and recompress; otherwise, pass all HTTP data through without any processing.

Setting the compression module to Disabled seems unfeasible, since then I can't perform the injection anymore. Using COMPRESS::disable disables compression, not the compression module, so everything from the server is decompressed and sent uncompressed to the client. After fiddling around a bit, it seems compression can be disabled implicitly by disabling HTTP processing (HTTP::disable), but that appears to be incompatible with keep-alive connections, because the next request on the same connection isn't recognized.

Now I've run out of ideas and ask here: is there any way to achieve a selective pass-through, depending on a header sent by the backend server?

I am using BIG-IP 10.2.4 Build 577.0 Final. We are thinking about switching to 11 in the mid-term, but a solution for 10 would be nice.

Thanks, Christian
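For illustration, the selective-injection half of what is described might look like the sketch below. X-Inject-Content is a hypothetical marker header and the </body> anchor is illustrative; note that this does not solve the pass-through problem, since the compression module still decompresses every response while it is enabled:

    when HTTP_RESPONSE {
        # hypothetical marker header set by the backend on pages to inject
        if { [HTTP::header exists "X-Inject-Content"] } {
            # rewrite only flagged responses
            STREAM::expression {@</body>@<script src="/inject.js"></script></body>@}
            STREAM::enable
        } else {
            # skips the rewrite, but the compression module still
            # decompresses and recompresses -- the overhead at issue here
            STREAM::disable
        }
    }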
Get system compression statistics from command line?

Hi! Does anyone know a way of getting system-wide compression throughput from the command line? I need to get certain metrics that are limited by licenses, but have only found SSL TPS and throughput via tmsh, not compression. tmsh and bash are what I'm allowed to use, and I can't use SNMP, as the method has to be compatible with all systems, even those with SNMP disabled (I'm doing this for a client with very specific needs and methods). Example:

    show sys performance all-stats

    Sys::Performance System
    -------------------------------------------------------------------
    System CPU Usage(%)            Current  Average  Max(since 11/06/16 12:37:17)
    -------------------------------------------------------------------
    Utilization                         28       28       43

    -------------------------------------------------------------------
    Memory Used(%)                 Current  Average  Max(since 11/06/16 12:37:17)
    -------------------------------------------------------------------
    TMM Memory Used                     11       11       11
    Other Memory Used                   63       63       63
    Swap Used                            0        0        0

    Sys::Performance Connections
    ---------------------------------------------------------------------------
    Active Connections             Current  Average  Max(since 11/06/16 12:37:17)
    ---------------------------------------------------------------------------
    Connections                      45.1K    43.2K    49.4K

    ---------------------------------------------------------------------------
    Total New Connections(/sec)    Current  Average  Max(since 11/06/16 12:37:17)
    ---------------------------------------------------------------------------
    Client Connections                 916      962     1.1K
    Server Connections                 745      782      922

    ---------------------------------------------------------------------------
    HTTP Requests(/sec)            Current  Average  Max(since 11/06/16 12:37:17)
    ---------------------------------------------------------------------------
    HTTP Requests                     2.8K     2.8K     3.3K

    Sys::Performance Throughput
    -----------------------------------------------------------------------------
    Throughput(bits)(bits/sec)     Current  Average  Max(since 11/06/16 12:37:17)
    -----------------------------------------------------------------------------
    Service                         644.2M   622.5M   807.2M
    In                              659.1M   638.1M   828.2M
    Out                             303.5M   300.6M   375.6M

    -----------------------------------------------------------------------------
    SSL Transactions               Current  Average  Max(since 11/06/16 12:37:17)
    -----------------------------------------------------------------------------
    SSL TPS                            599      640      790

    -----------------------------------------------------------------------------
    Throughput(packets)(pkts/sec)  Current  Average  Max(since 11/06/16 12:37:17)
    -----------------------------------------------------------------------------
    Service                          78.7K    77.1K    97.6K
    In                               78.5K    77.1K    97.8K
    Out                              68.4K    67.8K    84.1K

    Sys::Performance Ramcache
    ------------------------------------------------------------------------
    RAM Cache Utilization(%)       Current  Average  Max(since 11/06/16 12:37:17)
    ------------------------------------------------------------------------
    Hit Rate                            61       63       70
    Byte Rate                           67       67       75
    Eviction Rate                        8       14       28

Any input appreciated!

/Patrik
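As a sketch under stated assumptions: on v11.x and later, the closest tmsh seems to get is per-profile HTTP compression counters, which a bash wrapper could sum across profiles. The commands below exist, but whether their counters satisfy the licensing metric, their availability on older versions, and the exact field labels are assumptions to verify per release:

    # per-profile HTTP compression statistics (labels vary by version)
    tmsh show ltm profile http-compression

    # field-fmt output is easier to parse from bash
    tmsh show ltm profile http-compression field-fmt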
Understanding STREAM expression and Compression

Hello - I have a question to confirm my understanding of using STREAM and compression. I'm aware of the need to disable compression so STREAM is able to inspect the payload, but after the STREAM expression has done its replacing, is the content (or can it be) compressed again to improve performance, or is that benefit lost?

In our set-up, we have physical LTMs that handle SSL offloading (part of the cloud solution we use) and virtual LTMs that we configure for service-specific iRules etc. So on the physical LTM with SSL offload, there is a STREAM profile (blank) and an iRule to replace http:// with https:// in the response:

    when HTTP_REQUEST {
        # PHYSICAL LTM WITH SSL OFFLOAD
        # tell server not to compress response
        HTTP::header remove Accept-Encoding
        # disable STREAM for request flow
        STREAM::disable
    }
    when HTTP_RESPONSE {
        # catch and replace redirect headers
        if { [HTTP::header exists Location] } {
            HTTP::header replace Location [string map {"http://" "https://"} [HTTP::header Location]]
        }
        # only look at text data
        if { [HTTP::header Content-Type] contains "text" } {
            # create a STREAM expression to replace any http:// with https://
            STREAM::expression {@http://@https://@}
            # enable STREAM
            STREAM::enable
        }
    }

On the virtual LTM, we have a similar entry in the iRule:

    when HTTP_REQUEST {
        # VIRTUAL LTM
        # tell server not to compress response
        HTTP::header remove Accept-Encoding
        # disable STREAM for request flow
        STREAM::disable
    }
    when HTTP_RESPONSE {
        # catch and replace redirect headers
        if { [HTTP::header exists Location] } {
            HTTP::header replace Location [string map {"://internal.url" "://external.url"} [HTTP::header Location]]
        }
        # only look at text data
        if { [HTTP::header Content-Type] contains "text" } {
            # create a STREAM expression to replace internal.url with external.url
            STREAM::expression {@://internal.url@://external.url@}
            # enable STREAM
            STREAM::enable
        }
    }

So in this set-up, do we lose the benefit of HTTP compression?

Thanks
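One possible shape of an answer, as a hedged sketch: with an HTTP Compression profile attached to the virtual server, the BIG-IP itself can gzip the response toward the client after the stream filter has rewritten it, and COMPRESS::enable can request that per response. Whether this recovers the full benefit in this particular set-up is an assumption to test:

    when HTTP_RESPONSE {
        if { [HTTP::header Content-Type] contains "text" } {
            STREAM::expression {@http://@https://@}
            STREAM::enable
            # assumes an http-compression profile is attached to the
            # virtual server; asks LTM to gzip the rewritten response
            COMPRESS::enable
        }
    }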
APM migration to iSeries causes VPN network access tunnels to close

We are working on a migration from old hardware (5250v) to new iSeries hardware (i5800). We have extended the APM cluster, and the configuration is fully in sync. We fail over and activate the new cluster member the night before, and the next working day people start to work with the new solution without any issues. After some time, when more users are connected, users are being disconnected from the VPN, and in the APM logs we see that the tunnels are being closed and started again. During this time, data on the tunnel is stalled and not flowing, although the VPN client is still connected, similar to this bug: https://cdn.f5.com/product/bugtracker/ID600985.html

This looks like a performance issue, but we don't see any CPU utilization problem. The hardware also has a dedicated Coleto Creek CPU for SSL and compression offloading, and gzip compression offloading is used on the network access tunnels. On the old hardware there is no stability issue with the exact same configuration; the only difference is that there the dedicated Cave Creek CPU is used for hardware offloading.

This article states that using compression in APM network access can cause CPU spikes only when no hardware offloading is used: https://support.f5.com/csp/article/K12524516

Could there perhaps be a bug in performing hardware compression offloading (gzip deflate) on the new Coleto Creek CPU? If hardware compression offloading is in use, it should not drive up the TMM-assigned CPU cores, since the work is not done in software.
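Two real tmsh commands that might help narrow this down (what exactly they report varies by platform, so treat the interpretation as an assumption): the hardware inventory should confirm what acceleration hardware the iSeries box reports, and per-CPU statistics can separate TMM load from a possible offload problem while users are connected:

    # platform hardware inventory (sections vary by platform)
    tmsh show sys hardware

    # per-CPU utilization, to watch the TMM cores during the disconnects
    tmsh show sys cpu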
Compression stripped by Silverline

We've recently experienced slowdowns serving web pages, and here's something we've found: apparently, when traffic passes through the WAF, the WAF strips out the "Content-Encoding: gzip" header. We serve pages compressed with gzip, but from what we can see, the WAF strips that compression, severely slowing down page delivery. Does this make sense to anyone, and is there a way to remediate this issue?
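A quick way to confirm this from a client, as a sketch (the hostname is hypothetical): fetch the same page through Silverline and directly from origin, and compare the response headers. If "Content-Encoding: gzip" appears only on the origin response, the proxy is decompressing:

    # -D - dumps the response headers so Content-Encoding is visible
    curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' https://www.example.com/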
best file types to apply compression

While studying web acceleration, I came across the claim that applying compression to certain file types, such as videos and images, is useless, while for others, such as text and HTML, it is very useful. I want to know: is this true, and why?
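The usual reasoning, for context: formats like JPEG, PNG, and MP4 are already compressed, so gzip gains almost nothing on them and just burns CPU, while text formats (HTML, CSS, JavaScript, JSON) are highly redundant and compress well. A hedged sketch of how that policy might be expressed in an HTTP Compression profile on recent versions (the profile name is hypothetical, and the attribute names should be verified against your version):

    tmsh create ltm profile http-compression compress-text \
        defaults-from httpcompression \
        content-type-include replace-all-with { text/ application/json application/javascript } \
        content-type-exclude replace-all-with { image/ video/ }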
VS in BigIP returns uncompressed HTTP response that was compressed by the backend Apache server

Our backend servers run Apache and compress the HTTP data; however, it seems the VIP associated with the backend node is unpacking the compressed data and sending it out uncompressed. We verified this with curl: sending the request directly to the Apache server returned a compressed response, while sending the same request to the corresponding VS resulted in an uncompressed response. Isn't the default behavior for a VS to be in bypass mode (i.e., to return the response as-is)? We checked the BIG-IP VS configuration: it has HTTP Profile = http and HTTP Compression Profile = None, and there's no iRule associated with the VS. What are we missing?
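For reference, the comparison described can be reproduced with something like the following (the addresses are hypothetical); -D - dumps the response headers so Content-Encoding can be compared on both paths:

    # directly to a backend Apache node
    curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' http://10.1.2.3/index.html

    # through the virtual server
    curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' http://vip.example.com/index.html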
HTTP Compression with Stream Profile for Rewrites

Hello Experts, I am using a stream profile for some URI rewrites, and because of that I am bound to remove the Accept-Encoding header. As a result, I am losing compression of the content sent from the F5 to the client browser. Can you advise the best approach to tackle this? I want to send compressed data (gzip etc.) to the client browser while using the stream profile...
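One hedged sketch of an approach (vs_example is a hypothetical virtual server name): keep removing Accept-Encoding so the pool member sends plain text for the stream filter to rewrite, then attach an HTTP Compression profile so the BIG-IP gzips the rewritten response toward the client. Whether the ordering behaves as intended on your version should be verified:

    # attach the built-in compression profile to the virtual server
    tmsh modify ltm virtual vs_example profiles add { httpcompression }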
Vary: User-Agent and compression for HTTP 1.0

Hi, We're running BIG-IP 10.2.0, and with the default "http" profile it adds a "Vary: User-Agent, Accept-Encoding" header to responses that it compresses. To disable this behaviour, it's enough to set "compress-allow-http-10" to "enabled"; it will then generate "Vary: Accept-Encoding" headers with no User-Agent. "Vary: User-Agent" is highly undesirable for us, as we want to put a cache in front of our F5s.

So, what effects does this setting have besides (obviously) allowing compression with HTTP 1.0 and "fixing" the Vary header? I'm asking because the latter seems rather counter-intuitive to me, and I'm wondering what other tricks that setting might have up its sleeve. Also, why is this setting disabled by default? Is it safe to just turn on? Has anyone seen any adverse effects from doing so?

Thanks in advance!

Alex
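For reference, the change described would look something like this on 10.x, where compression settings live on the http profile (http-custom is a hypothetical profile name; verifying the attribute with the list command first is advisable, since attribute names vary across 10.x releases):

    # confirm the attribute exists and check its current value
    tmsh list ltm profile http http-custom all-properties

    # the setting discussed above
    tmsh modify ltm profile http http-custom compress-allow-http-10 enabled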