compression
41 Topics

Selective Compression on BIG-IP
BIG-IP provides Local Traffic Policies that simplify the way you manage traffic associated with a virtual server. You can use a BIG-IP local traffic policy to support selective compression for types of content that benefit from compression, like HTML, XML, and CSS stylesheets. These file types can realize performance improvements, especially across slow connections, when compressed. You can easily configure your BIG-IP system to use a simple Local Traffic Policy that selectively compresses these file types. To use a policy, you create and configure a draft policy, publish that policy, and then associate the policy with a virtual server in BIG-IP v12.

Alright, let's log into a BIG-IP. The first thing you'll need to do is create a draft policy. On the main menu select Local Traffic > Policies > Policy List and then the Create or + button. This takes us to the create policy config screen. We'll name the policy SelectiveCompression, add a description like 'This policy compresses file types,' and leave the Strategy at the default of 'Execute first matching rule,' so the policy uses the first rule that matches the request. Click Create Policy, which saves the policy to the policies list. When saved, the Rules search field appears but has no rules. Click Create under Rules.

This brings us to the Rules General Properties area of the policy. We'll give this rule a name (CompressFiles), and the first settings we need to configure are the conditions that need to match the request. Click the + button to associate file types. We know that the files for compression consist of specific file types associated with a Content-Type HTTP header. We choose HTTP Header and select Content-Type in the Named field. Select 'begins with' next and type 'text/' for the condition, evaluated at 'response' time. We'll add another condition to manage CPU usage effectively.
So we select CPU Usage from the list, with a duration of 1 minute, a conditional operator of 'less than or equal to', and 5 as the usage level, evaluated at response time. Next, under 'Do the following', click the create + button to create a new action for when those conditions are met. Here, we'll enable compression at response time. Click Save. Now the draft policy screen appears with the General Properties and a list of rules; here we want to click Save Draft.

Now we need to publish the draft policy and associate it with a virtual server. Select the policy and click Publish. Next, on the main menu click Local Traffic > Virtual Servers > Virtual Server List and click the name of the virtual server you'd like to associate with the policy. On the menu bar click Resources, and for Policies click Manage. Move SelectiveCompression to the Enabled list and click Finished. The SelectiveCompression policy now appears in the policies list and is associated with the chosen virtual server. The virtual server with the SelectiveCompression Local Traffic Policy will compress the file types you specified. Congrats! You've now added a local traffic policy for selective compression! You can also watch the full video demo thanks to our TechPubs team. ps

TCL procedures to compress/expand an IPv6 address notation
Problem this snippet solves: Hi Folks, the iRule below contains two TCL procedures to convert an IPv6 address from/to human readable IPv6 address notations within iRules. The compress_ipv6_addr procedure shrinks an IPv6 address notation by removing leading zeros from each individual IPv6 address group (as defined in RFC 4291 Section 2.2.1) and by replacing the longest range of consecutive zero value IPv6 address groups with the :: notation (as defined in RFC 4291 Section 2.2.2). If two or more zero value IPv6 address group ranges have identical lengths, then the most significant IPv6 address groups will be replaced. If the input IPv6 address contains a mixed IPv6 and IPv4 notation (as defined in RFC 4291 Section 2.2.3), the mixed notation will be kept as is.

----------------------------------- compress_ipv6_addr -----------------------------------------
Input: 0000:00:0000:00:0000:0:00:0000               Output: ::                          Time: 16 clicks
Input: 0:00:000:0000:000:00:0:0001                  Output: ::1                         Time: 15 clicks
Input: 00:000:0000:affe:affe:0000:000:0%eth0        Output: ::affe:affe:0:0:0%eth0      Time: 20 clicks
Input: 2001:0022:0333:4444:0001:0000:0000:0000%1    Output: 2001:22:333:4444:1::%1     Time: 20 clicks
Input: 2001:1:02:003:0004::0001%2                   Output: 2001:1:2:3:4::1%2           Time: 13 clicks
Input: 2001:0123:0:00:000:0000:192.168.1.1%3        Output: 2001:123::192.168.1.1%3     Time: 19 clicks
Input: 0001:0001::192.168.1.1%4                     Output: 1:1::192.168.1.1%4          Time: 11 clicks
----------------------------------- compress_ipv6_addr -----------------------------------------

The expand_ipv6_addr procedure expands a compressed IPv6 notation by zero padding each individual IPv6 address group to its full 16 bit representation (as defined in RFC 4291 Section 2.2.1). If the input IPv6 address contains the truncated :: notation (as defined in RFC 4291 Section 2.2.2), the omitted zero value IPv6 address groups will be restored.
If the IPv6 address contains a mixed IPv6 and IPv4 address notation (as defined in RFC 4291 Section 2.2.3), the IPv4 address will be converted into two consecutive IPv6 address groups. If the input contains a malformed IPv6 address which cannot be expanded to a full 128-bit IPv6 address, the output will be an empty string.

------------------------------------ expand_ipv6_addr -----------------------------------------------------
Input: ::                              Output: 0000:0000:0000:0000:0000:0000:0000:0000          Time: 11 clicks
Input: ::1                             Output: 0000:0000:0000:0000:0000:0000:0000:0001          Time: 16 clicks
Input: ::1:2%eth0                      Output: 0000:0000:0000:0000:0000:0000:0001:0002%eth0     Time: 15 clicks
Input: 2001::1%1                       Output: 2001:0000:0000:0000:0000:0000:0000:0001%1        Time: 16 clicks
Input: 2001:1:22:333:4444::%2          Output: 2001:0001:0022:0333:4444:0000:0000:0000%2        Time: 21 clicks
Input: 2001:123::ff:192.168.1.1%3      Output: 2001:0123:0000:0000:0000:00ff:c0a8:0101%3        Time: 29 clicks
Input: 2001:192.168.1.1::10.10.10.10%4 Output: 2001:c0a8:0101:0000:0000:0000:0a0a:0a0a%4        Time: 27 clicks
------------------------------------ expand_ipv6_addr -----------------------------------------------------

Note: Both procedures are able to handle % IPv6 Zone ID suffixes (as defined in RFC 6874) as well as F5's Route Domain notations.

Performance considerations: Both procedures are performance optimized to maintain reasonable performance at high execution rates. The compress_ipv6_addr procedure uses two aligned [string map] commands to remove any leading zeros without breaking the individual groups, followed by a [switch] syntax to detect the longest range of consecutive zero value groups and to execute either a simple [string range] command or a combination of the [substr] + [findstr] commands to perform the final :: truncation.
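For reference, the input/output behavior shown in the tables above (minus the mixed IPv6/IPv4 notation handling and the click-count timing) can be reproduced off-box with Python's standard library; this is an illustration of the expected results, not part of the iRule:

```python
import socket

def compress_ipv6(addr: str) -> str:
    """Shrink an IPv6 address to its canonical short form, preserving a
    %ZoneID / Route Domain suffix, like the compress_ipv6_addr procedure."""
    addr, sep, zone = addr.partition("%")
    packed = socket.inet_pton(socket.AF_INET6, addr)  # raises on malformed input
    return socket.inet_ntop(socket.AF_INET6, packed) + sep + zone

def expand_ipv6(addr: str) -> str:
    """Zero-pad every group to 16 bits and restore groups elided by '::',
    like the expand_ipv6_addr procedure."""
    addr, sep, zone = addr.partition("%")
    packed = socket.inet_pton(socket.AF_INET6, addr)
    groups = [f"{packed[i] << 8 | packed[i + 1]:04x}" for i in range(0, 16, 2)]
    return ":".join(groups) + sep + zone

print(compress_ipv6("2001:0022:0333:4444:0001:0000:0000:0000%1"))  # 2001:22:333:4444:1::%1
print(expand_ipv6("2001::1%1"))  # 2001:0000:0000:0000:0000:0000:0000:0001%1
```

Unlike the iRule procedures, inet_ntop does not preserve an embedded dotted IPv4 notation, and malformed input raises an exception instead of returning an empty string.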
The expand_ipv6_addr procedure is a little more sophisticated, since it is required to parse the input IPv6 address on a per-group basis to zero-pad the individual groups, to detect and convert embedded IPv4 addresses, and to finally restore the :: truncation. To reduce the required CPU cycles, the expand_ipv6_addr procedure makes use of the $static::ipv6_grp_filler(), $static::ipv6_addr_filler() and $static::ipv6_dec_map() array variables (defined during RULE_INIT) to allow a very fast lookup of the required IPv6 address group zero paddings, the lengths of the zero value IPv6 address group runs to insert, and the IPv4-to-IPv6 translations. Cheers, Kai

How to use this snippet: The iRule below contains a RULE_INIT event which outlines the procedure usage. Enjoy!

Code:

when RULE_INIT {
    # Initialize the array used to expand compressed IPv6 groups to 16 bit
    array set static::ipv6_grp_filler { "1" "000" "2" "00" "3" "0" "4" "" }
    # Initialize the array used to expand compressed IPv6 addresses to 128 bit
    array set static::ipv6_addr_filler {
        "0"  "0000:0000:0000:0000:0000:0000:0000:0000"
        "5"  "0000:0000:0000:0000:0000:0000:0000"
        "10" "0000:0000:0000:0000:0000:0000"
        "15" "0000:0000:0000:0000:0000"
        "20" "0000:0000:0000:0000"
        "25" "0000:0000:0000"
        "30" "0000:0000"
        "35" "0000"
        "40" ""
    }
    # Initialize the array used to perform an IPv4 (decimal 0-255) to IPv6 (hex 00-FF) conversion.
    for { set i 0 } { $i <= 255 } { incr i } {
        set static::ipv6_dec_map($i) [format %02x $i]
    }
    #
    # Example procedure calls (samples can be removed)
    #
    set input "2001:0001:0022:0333:4444:0:0:0:1%1"
    set output [call compress_ipv6_addr $input]
    log local0.debug "Input: $input Output: $output"
    set input "2001:ef:123::192.168.1.1%2"
    set output [call expand_ipv6_addr $input]
    log local0.debug "Input: $input Output: $output"
}
proc compress_ipv6_addr { addr } {
    # Enumerate and store IPv6 ZoneID / Route Domain suffix
    if { [set id [getfield $addr "%" 2]] ne "" } then {
        set id "%$id"
        set addr [getfield $addr "%" 1]
    }
    # X encode (e.g. :0001 becomes :X1) leading zeros on the individual IPv6 address groups (left orientated searches)
    set addr [string map [list ":0000" ":X" ":000" ":X" ":00" ":X" ":0" ":X" "|0000" "X" "|000" "X" "|00" "X" "|0" "X"] "|$addr|"]
    # Restore the required X encoded zeros (e.g. :X: becomes :0:) while removing any other X encodings and | separators (right orientated searches)
    set addr [string map [list "X:" "0:" "X|" "0" "X." "0." "X" "" "|" ""] $addr]
    # Find the longest range of consecutive zero value IPv6 address groups and then replace the most significant groups with the :: notation.
    switch -glob -- $addr {
        "*::*" {
            # Already compressed
        }
        "0:0:0:0:0:0:0:0" { set addr "::" }
        "0:0:0:0:0:0:0:*" { set addr ":[string range $addr 13 end]" }
        "*:0:0:0:0:0:0:0" { set addr "[string range $addr 0 end-13]:" }
        "0:0:0:0:0:0:*"   { set addr ":[string range $addr 11 end]" }
        "*:0:0:0:0:0:0:*" { set addr "[substr $addr 0 ":"]::[findstr $addr ":0:0:0:0:0:0:" 13]" }
        "*:0:0:0:0:0:0"   { set addr "[string range $addr 0 end-11]:" }
        "0:0:0:0:0:*"     { set addr ":[string range $addr 9 end]" }
        "*:0:0:0:0:0:*"   { set addr "[substr $addr 0 ":0:"]::[findstr $addr ":0:0:0:0:0:" 11]" }
        "*:0:0:0:0:0"     { set addr "[string range $addr 0 end-9]:" }
        "0:0:0:0:*"       { set addr ":[string range $addr 7 end]" }
        "*:0:0:0:0:*"     { set addr "[substr $addr 0 ":0:0:"]::[findstr $addr ":0:0:0:0:" 9]" }
        "*:0:0:0:0"       { set addr "[string range $addr 0 end-7]:" }
        "0:0:0:*"         { set addr ":[string range $addr 5 end]" }
        "*:0:0:0:*"       { set addr "[substr $addr 0 ":0:0:0:"]::[findstr $addr ":0:0:0:" 7]" }
        "*:0:0:0"         { set addr "[string range $addr 0 end-5]:" }
        "0:0:*"           { set addr ":[string range $addr 3 end]" }
        "*:0:0:*"         { set addr "[substr $addr 0 ":0:0:"]::[findstr $addr ":0:0:" 5]" }
        "*:0:0"           { set addr "[string range $addr 0 end-3]:" }
    }
    # Append the previously extracted IPv6 ZoneID / Route Domain suffix and return the compressed IPv6 address
    return "$addr$id"
}
proc expand_ipv6_addr { addr } {
    if { [catch {
        # Enumerate and store IPv6 ZoneID / Route Domain suffix
        if { [set id [getfield $addr "%" 2]] ne "" } then {
            set id "%$id"
            set addr [getfield $addr "%" 1]
        }
        # Parse the first IPv6 address block of a possible :: notation by splitting the block into : separated IPv6 address groups
        set blk1 ""
        foreach grp [split [getfield $addr "::" 1] ":"] {
            # Check if the current group contains an IPv4 address notation
            if { $grp contains "." } then {
                # The current group contains an IPv4 address notation. Try to extract the four IPv4 address octets
                scan $grp {%d.%d.%d.%d} oct1 oct2 oct3 oct4
                # Convert the four IPv4 address octets into two IPv6 address groups by querying the $static::ipv6_dec_map array
                append blk1 "$static::ipv6_dec_map($oct1)$static::ipv6_dec_map($oct2) $static::ipv6_dec_map($oct3)$static::ipv6_dec_map($oct4) "
                set oct4 ""
            } else {
                # The current group contains just an IPv6 address notation. Fill up the IPv6 address group with leading zeros by querying the $static::ipv6_grp_filler array
                append blk1 "$static::ipv6_grp_filler([string length $grp])$grp "
            }
        }
        # Parse the second IPv6 address block of a possible :: notation by splitting the block into : separated IPv6 address groups
        set blk2 ""
        foreach grp [split [getfield $addr "::" 2] ":"] {
            # Check if the current group contains an IPv4 address notation
            if { $grp contains "." } then {
                # The current group contains an IPv4 address notation. Try to extract the four IPv4 address octets
                scan $grp {%d.%d.%d.%d} oct1 oct2 oct3 oct4
                # Convert the four IPv4 address octets into two IPv6 address groups by querying the $static::ipv6_dec_map array
                append blk2 "$static::ipv6_dec_map($oct1)$static::ipv6_dec_map($oct2) $static::ipv6_dec_map($oct3)$static::ipv6_dec_map($oct4) "
                set oct4 ""
            } else {
                # The current group contains just an IPv6 address notation. Fill up the IPv6 address group with leading zeros by querying the $static::ipv6_grp_filler array
                append blk2 "$static::ipv6_grp_filler([string length $grp])$grp "
            }
        }
        # Join the first and second block of the possible :: notation while expanding the address to 128 bit length by querying the $static::ipv6_addr_filler array
        set addr "[join "$blk1$static::ipv6_addr_filler([string length "$blk1$blk2"]) $blk2" ":"]"
    }] } then {
        # log local0.debug "errorInfo: [subst \$::errorInfo]"
        # return "errorInfo: [subst \$::errorInfo]"
        return ""
    }
    # Append the previously extracted IPv6 ZoneID / Route Domain suffix and return the expanded IPv6 address notation
    return "$addr$id"
}

Tested this on version: 12.0

VS in BigIP returns uncompressed HTTP response that was compressed by the backend Apache server
Our backend servers run Apache and compress the HTTP data; however, it seems the VIP associated with the backend node is unpacking the compressed data and sending it out uncompressed. This was verified with curl: sending the request directly to the Apache server returned a compressed response, while sending the same request to the corresponding VS resulted in an uncompressed response. Isn't the default behavior for a VS to be in bypass mode (i.e. return the response "as is")? We checked the BigIP VS configuration; it has HTTP Profile = http and HTTP Compression Profile = None. There's no iRule associated with the VS. What are we missing?

APM migration to iSeries causes VPN network access tunnels to close
We are working on a migration from old hardware (5250v) to new iSeries hardware (i5800); we have extended the APM cluster, and the configuration is fully in sync. We fail over and activate the new cluster member the night before, and the next working day people start to work with the new solution without any issues. After some time, when more users are connected, users are disconnected from the VPN, and in the APM logs we see that the tunnels are being closed and started again. During this time, data on the tunnel is stalled and not flowing even though the VPN client is still connected, similar to this bug: https://cdn.f5.com/product/bugtracker/ID600985.html This seems to be a performance issue; however, we don't see any CPU utilization issue. The hardware is also using a dedicated Coleto Creek CPU for SSL and compression offloading. GZIP compression offloading is used on the network access tunnels. On the old hardware there is no stability issue with the exact same configuration; the only difference is that there the dedicated Cave Creek CPU is used for hardware offloading. In this article it is stated that using compression in APM network access could cause CPU spikes only when there is no hardware offloading used: https://support.f5.com/csp/article/K12524516 Could there perhaps be a bug in hardware compression offloading (GZIP deflate) on the new Coleto Creek CPU? If hardware compression offloading is used, it should not increase the load on the TMM-assigned CPU cores, as the compression is not processed in software, correct?

I am wondering why not all websites enable this great feature, GZIP?
Understanding the impact of compression on server resources and application performance

While doing some research on a related topic, I ran across this question and thought "that deserves an answer" because it certainly seems like a no-brainer. If you want to decrease bandwidth – which subsequently decreases response time and improves application performance – turn on compression. After all, a large portion of web site traffic is text-based: CSS, JavaScript, HTML, RSS feeds, which means it will greatly benefit from compression. Typical GZIP compression affords at least a 3:1 reduction in size, with hardware-assisted compression yielding an average of 4:1 compression ratios. That can dramatically affect the response time of applications. As I said, seems like a no-brainer. Here's the rub: turning on compression often has a negative impact on capacity because it is CPU-bound, and under certain conditions it can actually cause a degradation in performance due to the latency inherent in compressing data compared to the speed of the network over which the data will be delivered. Here comes the science.

IMPACT ON CPU UTILIZATION

Compression via GZIP is CPU bound. It requires a lot more CPU than you might think. The larger the file being compressed, the more CPU resources are required. Consider for a moment what compression is really doing: it's finding all similar patterns and replacing them with representations (symbols, indexes into a table, etc…) of a single instance of the text instead. So it makes sense that the larger a file is, the more resources are required – RAM and CPU – to execute such a process. Of course, the larger the file is, the more benefit you see from compression in terms of bandwidth and improvement in response time. It's kind of a Catch-22: you want the benefits but you end up paying in terms of capacity. If CPU and RAM are being chewed up by the compression process, then the server can handle fewer requests and fewer concurrent users.
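The tradeoff is easy to observe for yourself. Here is a small, self-contained Python sketch; the exact numbers will vary by machine and content, so treat it as an illustration of the ratio-versus-CPU-time relationship rather than a benchmark:

```python
import gzip
import time

# A stand-in for a text-heavy response body: HTML/CSS/JS compress well
# because they are full of repeated patterns.
body = b"<div class='row'><span>item</span></div>\n" * 2500  # ~100 KB

start = time.perf_counter()
compressed = gzip.compress(body, compresslevel=6)
elapsed = time.perf_counter() - start

print(f"original:   {len(body)} bytes")
print(f"compressed: {len(compressed)} bytes")
print(f"ratio:      {len(body) / len(compressed):.1f}:1")
print(f"cpu time:   {elapsed * 1000:.2f} ms")
```

Scale the per-response CPU time by thousands of concurrent responses and the capacity cost described above becomes concrete.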
You don’t have to take my word for it – there are quite a few examples of testing done on web servers and compression that illustrate the impact on CPU utilization:

Measuring the Performance Effects of Dynamic Compression in IIS 7.0
Measuring the Performance Effects of mod_deflate in Apache 2.2
HTTP Compression for Web Applications

They all essentially say the same thing: if you’re serving dynamic content (or static content and don’t have local caching on the web server enabled), then there is a significant negative impact on CPU utilization when enabling GZIP/compression for web applications. Given the exceedingly dynamic nature of Web 2.0 applications, the use of AJAX and similar technologies, and the data-driven world in which we live today, there are very few types of applications running on web servers for which compression will not negatively impact the capacity of the web server. In case you don’t (want || have time) to slog through the above articles, here’s a quick recap:

             File Size   Bandwidth decrease   CPU utilization increase
IIS 7.0      10KB        55%                  4x
             50KB        67%                  20x
             100KB       64%                  30x
Apache 2.2   10KB        55%                  4x
             50KB        65%                  10x
             100KB       63%                  30x

It’s interesting to note that IIS 7.0 and Apache 2.2 mod_deflate have essentially the same performance characteristics. This data falls in line with the aforementioned Intel report on HTTP compression, which noted that CPU utilization increased 25-35% when compression was enabled. So essentially, when you enable compression you are trading its benefits – bandwidth reduction, response time improvement – for a reduction in capacity. You’re robbing Peter to pay Paul, because instead of paying for bandwidth you’re paying for more servers to handle the same load.

THE MYTH OF IMPROVED RESPONSE TIME

One of the reasons you’d want to compress content is to improve response time by decreasing the total number of packets that have to traverse a wire.
This is a necessity when transferring content via a WAN, but it can actually cause a decrease in performance for application delivery over the LAN. This is because the time it takes to compress the content and then deliver it is actually greater than the time to just transfer the original file via the LAN. The speed of the network over which the content is being delivered is highly relevant to whether compression yields benefits for response time. The increasing consumption of CPU resources as volume increases, too, has a negative impact on the ability of the server to process and subsequently respond, which also means an increase in application response time – not the desired result. Maybe you’re thinking “I’ll just get more CPU then. After all, there are billion-core servers out there; that ought to solve the problem!” Compression algorithms, like FTP, are greedy. FTP will, if allowed, consume as much bandwidth as possible in an effort to transfer data as quickly as possible. Compression will do the same thing to CPU resources: consume as much as it can to perform its task as quickly as possible. Eventually, yes, you’ll find a machine with enough cores to support both compression and capacity needs, but at what cost? It may well have been more financially efficient to invest in a better solution (one that also brings additional benefits to the table) than just increasing the size of the server. But hey, it’s your data, you need to do what you need to do. The size of the content, too, has an impact on whether compression will benefit application performance. Consider that the goal of compression is to decrease the number of packets being transferred to the client. Generally speaking, the standard MTU for most networks is 1500 bytes because that’s what works best with Ethernet and IP. That means you can assume around 1400 bytes per packet available to transfer data.
That means if content is 1400 bytes or less, you get absolutely no benefit out of compression because it’s already going to take only one packet to transfer; you can’t really send half-packets, after all. And in some networks, packets that are too small can actually freak out some network devices, because they’re optimized to handle the large content being served today – which means many full packets.

TO COMPRESS OR NOT COMPRESS

There is real benefit to compression; it’s one of the core techniques used by both application acceleration and WAN application delivery services to improve performance and reduce costs. It can drastically reduce the size of data, and especially when you might be paying by the MB or GB transferred (such as applications deployed in cloud environments), this is a very important feature to consider. But if you end up paying for additional servers (or instances in a cloud) to make up for the lost capacity due to higher CPU utilization because of that compression, you’ve pretty much ended up right where you started: no financial benefit at all. The question is not if you should compress content, it’s when and where and what you should compress. The answer to “should I compress this content” almost always needs to be based on a set of criteria that require context-awareness – the ability to factor the content, the network, the application, and the user into the decision-making process. If the user is on a mobile device and the size of the content is greater than 2000 bytes and the type of content is text-based and … It is this type of intelligence that is required to effectively apply compression such that the greatest benefits – reduction in costs, application performance, and maximization of server resources – are achieved. Any implementation that can’t factor all these variables into the decision to compress or not is not an optimal solution, as it’s just guessing or blindly applying the same policy to all kinds of content.
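A context-aware policy of the kind described above ("mobile user, more than 2000 bytes, text-based content…") is, at its core, a conditional over request context. A hypothetical sketch in Python; the thresholds and parameter names here are illustrative assumptions, not a BIG-IP API:

```python
MTU_PAYLOAD = 1400  # ~usable bytes per packet on a 1500-byte-MTU Ethernet link

def should_compress(content_type: str, size: int, wan: bool, cpu_pct: float) -> bool:
    """Context-aware compression decision: content, network, and server load."""
    if size <= MTU_PAYLOAD:                       # already fits in one packet: no win
        return False
    if not content_type.startswith("text/"):      # images/video are already compressed
        return False
    if cpu_pct > 80:                              # protect server capacity under load
        return False
    return wan or size > 2000                     # LAN clients only benefit for larger payloads

print(should_compress("text/html", 50_000, wan=True, cpu_pct=10))   # True
print(should_compress("image/jpeg", 50_000, wan=True, cpu_pct=10))  # False
print(should_compress("text/css", 900, wan=True, cpu_pct=10))       # False
```

The point of the sketch is the shape of the decision, not the particular cutoffs: each condition maps to one of the variables the article says must be weighed.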
Such implementations effectively defeat the purpose of employing compression in the first place. That’s why the answer to “where” is almost always “on the load balancer or application delivery controller”. Not only are such devices capable of factoring in all the necessary variables, but they also generally employ specialized hardware designed to speed up the compression process. By offloading compression to an application delivery device, you can reap the benefits without sacrificing performance or CPU resources.

Measuring the Performance Effects of Dynamic Compression in IIS 7.0
Measuring the Performance Effects of mod_deflate in Apache 2.2
HTTP Compression for Web Applications
The Context-Aware Cloud
The Revolution Continues: Let Them Eat Cloud
Nerd Rage

Understanding STREAM expression and Compression
Hello - I have a question to try and confirm my understanding of using STREAM and compression. I'm aware of the need to disable compression so STREAM is able to inspect the payload, but after the STREAM expression has done its replacing, can the content be compressed to improve performance, or is that lost? In our set-up, we have physical LTMs that handle SSL offloading (part of the cloud solution we use) and virtual LTMs that we configure for service-specific iRules etc. So on the physical LTM with SSL offload, there is a STREAM profile (blank) and an iRule to replace http:// with https:// in the response:

when HTTP_REQUEST {
    # PHYSICAL LTM WITH SSL OFFLOAD
    # tell server not to compress response
    HTTP::header remove Accept-Encoding
    # disable STREAM for request flow
    STREAM::disable
}
when HTTP_RESPONSE {
    # catch and replace redirect headers
    if { [HTTP::header exists Location] } {
        HTTP::header replace Location [string map {"http://" "https://"} [HTTP::header Location]]
    }
    # only look at text data
    if { [HTTP::header Content-Type] contains "text" } {
        # create a STREAM expression to replace any http:// with https://
        STREAM::expression {@http://@https://@}
        # enable STREAM
        STREAM::enable
    }
}

On the virtual LTM, we have a similar entry in the iRule:

when HTTP_REQUEST {
    # VIRTUAL LTM
    # tell server not to compress response
    HTTP::header remove Accept-Encoding
    # disable STREAM for request flow
    STREAM::disable
}
when HTTP_RESPONSE {
    # catch and replace redirect headers
    if { [HTTP::header exists Location] } {
        HTTP::header replace Location [string map {"://internal.url" "://external.url"} [HTTP::header Location]]
    }
    # only look at text data
    if { [HTTP::header Content-Type] contains "text" } {
        # create a STREAM expression to replace any ://internal.url with ://external.url
        STREAM::expression {@://internal.url@://external.url@}
        # enable STREAM
        STREAM::enable
    }
}

So in this set-up, do we lose the benefit of HTTP compression? Thanks

Compression stripped by Silverline
We've recently experienced slowdowns serving web pages, and here's something we've found: apparently, when traffic passes through the WAF, the WAF strips out the following header: Content-Encoding: gzip. We serve pages compressed with GZIP, but from what we can see, the WAF strips that compression, severely slowing down page delivery. Does this make sense to anyone, and is there a way to remediate this issue?

WILS: How can a load balancer keep a single server site available?
Most people don’t start thinking they need a “load balancer” until they need a second server. But even if you’ve only got one server, a “load balancer” can help with availability and performance, and make the transition later on to a multiple-server site a whole lot easier. Before we reveal the secret sauce, let me first say that if you have only one server and the application crashes or the network stack flakes out, you’re out of luck. There are a lot of things load balancers/application delivery controllers can do with only one server, but automagically fixing application crashes or network connectivity issues ain’t in the list. If these are concerns, then you really do need a second server. But if you’re just worried about standing up to the load, then a load balancer for even a single server can definitely give you a boost.

best file types to apply compression
When studying web acceleration, I came across the point that applying compression to certain types of files, such as videos and images, is useless, while for others, such as text and HTML, it is very useful. I want to know: is this true, and why?
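It is true: formats like JPEG, PNG, and most video codecs are already compressed, so running GZIP over them finds almost no redundancy left to remove, while text formats are full of repeated tags and words. A quick Python illustration, using random bytes as a stand-in for already-compressed media content:

```python
import gzip
import os

text = b"<p class='entry'>hello world</p>\n" * 3000  # text/HTML-like data
media = os.urandom(len(text))  # high-entropy stand-in for a JPEG/video payload

for label, data in (("text", text), ("media", media)):
    out = gzip.compress(data)
    print(f"{label}: {len(data)} -> {len(out)} bytes "
          f"({len(data) / len(out):.2f}:1)")
```

Compressing the already-compressed data can even make it slightly larger, because of the gzip header and framing overhead, which is why compression profiles typically limit themselves to text content types.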