Forum Discussion

trx_94323
May 10, 2014

What are the best HTTP Compression profile settings?

Hello Community, does anyone know the best settings for the gzip HTTP compression profiles? The main content types to compress would be the big 3: HTML, JS, CSS. We are currently on F5 version "BIG-IP 11.1.0 Build 2027.0 Hotfix HF2".

 

Our current profile properties:

 

Any direction is fully appreciated.

 

Thanks.

 

    • kridsana
      How much does CPU usage increase after changing the compression level from 1 to 6?
  • You really won't notice a difference. I've been using 6 for years on high volume sites with no issues.
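    To get a rough feel for the level 1 vs. level 6 trade-off yourself (this is only a local illustration with Python's zlib on a made-up payload, not a BIG-IP measurement):

```python
import time
import zlib

# Repetitive text payload, roughly standing in for HTML/CSS/JS.
data = b"<div class='row'><span>hello world</span></div>\n" * 5000

for level in (1, 6):
    start = time.perf_counter()
    out = zlib.compress(data, level)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"level {level}: {len(out)} bytes "
          f"({len(out) / len(data):.1%} of original) in {elapsed_ms:.2f} ms")
```

    On highly compressible text like this, level 6 typically buys a noticeably better ratio for a small amount of extra CPU time, which matches the experience above.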

     

    A few other notes from me;

     

    • Deflate (based on zlib) is preferable to gzip. Gzip uses the deflate algorithm anyway, but is generally slower due to its larger headers and trailers and a slower integrity check (a CRC-32 checksum, versus the faster Adler-32 used by zlib alone).

       

    • Set the Chunking setting in the HTTP profile to Selective, as this only rechunks data if the response payload has been modified (which should only happen if compression has taken place) and if the server response was itself chunked. Responses that the server did not chunk will have their Content-Length HTTP header rewritten accordingly.

       

    • I use these content types;

       

      application/(xml|x-javascript)

       

      text/

       

      image/svg+xml

       

      application/x-www-form-urlencoded

       

      application/http

       

      application/pdf (most people say don't, but I find there's still a benefit)

       

      application/json

       

      application/msword

       

      application/vnd.ms-excel

       

      application/vnd.ms-powerpoint

       

      application/vnd.ms-project

       

      application/vnd.ms-xpsdocument

       

      application/x-shockwave-flash
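    On the deflate vs. gzip point above, the wrapper overhead is easy to see with Python's zlib: the compressed body is identical either way, and only the framing differs (raw DEFLATE has none, zlib adds a 2-byte header plus a 4-byte Adler-32, and gzip adds a 10-byte header plus an 8-byte CRC-32/size trailer):

```python
import zlib

data = b"The quick brown fox jumps over the lazy dog. " * 200

def deflate(data, wbits):
    # wbits selects the container: -15 = raw DEFLATE, 15 = zlib, 31 = gzip
    c = zlib.compressobj(level=6, wbits=wbits)
    return c.compress(data) + c.flush()

raw = deflate(data, -15)  # no wrapper at all
zlb = deflate(data, 15)   # zlib: +6 bytes, Adler-32 check
gz  = deflate(data, 31)   # gzip: +18 bytes, slower CRC-32 check

print(len(raw), len(zlb), len(gz))  # zlib is 6 bytes over raw, gzip is 18
```

    The per-response byte difference is tiny; the more meaningful difference at volume is the checksum cost.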

       

  • A lot of CDN platforms have rules such as only compressing files smaller than 1MB, on the basis that the CPU power spent compressing anything larger outweighs the performance improvement of sending a compressed file over the internet.

     

    Any thoughts on that, and are there any options to set a maximum file size when compressing?

     

    Thanks!

     

  • Are you load balancing a CDN?

     

    I'd disagree regarding the 1MB figure myself, but my real point is that the decision was probably taken in the context of a very specific infrastructure architecture (and related devices) relevant only to a single provider, presumably one where the clients would be fairly 'local' and there would be little latency. Considering the minimal CPU overhead compression introduces on an F5, I see no point in setting a maximum. The bigger the file, the greater the benefit, in my view.

     

    Anyway, back to your question: no, you can't set a maximum. It would be virtually impossible with most traffic anyway, as it's chunked and there is no way to determine what the total response size will be. Of course, if you're serving files that are mostly over whatever maximum you've decided on, you can just exclude them by Content-Type.
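    For reference, the content-type include/exclude lists live on the compression profile itself. A rough tmsh sketch (the profile name is made up, and I'm going from memory on the attribute names, so check them against the 11.x docs):

```shell
# Hypothetical profile name; inherits from the built-in parent and
# compresses only the listed content types at gzip level 6.
tmsh create ltm profile http-compression my-compress \
    defaults-from httpcompression \
    gzip-level 6 \
    content-type-include { "text/" "application/json" "application/x-javascript" }
```
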

     

  • It can be compressed and chunked.

     

    I assume the CDN is a service provided by a vendor? If that's the case, then what you do is probably irrelevant, no?

     

  • Thanks. I'm now leaning toward compressing any of these content types at any size, rather than only those below 1MB. Since you mentioned chunking, is there a rule of thumb as to when to use it or not?

     

  • You'd want to use a Chunking setting of Selective in nearly all cases. It's recommended because it only rechunks data if the response payload has been modified (which should only happen if compression has taken place) and if the server response was itself chunked. Responses that the server did not chunk will have their Content-Length HTTP header rewritten accordingly instead.
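    In tmsh that's a one-liner on the HTTP profile (the profile name here is hypothetical; on 11.x the attribute is response-chunking, if memory serves):

```shell
# Set Selective response chunking on an existing HTTP profile.
tmsh modify ltm profile http my-http response-chunking selective
```
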