compression
TCL procedures to compress/expand an IPv6 address notation
Problem this snippet solves:

Hi Folks,

the iRule below contains two TCL procedures to convert an IPv6 address from/to human-readable IPv6 address notations within iRules.

The compress_ipv6_addr procedure shrinks an IPv6 address notation by removing leading zeros from each individual IPv6 address group (as defined in RFC 4291 Section 2.2.1) and by replacing the longest range of consecutive zero-value IPv6 address groups with the :: notation (as defined in RFC 4291 Section 2.2.2). If two or more zero-value IPv6 address group ranges have identical lengths, the most significant IPv6 address groups will be replaced. If the input IPv6 address contains a mixed IPv6 and IPv4 notation (as defined in RFC 4291 Section 2.2.3), the mixed notation will be kept as is.

----------------------------------- compress_ipv6_addr -----------------------------------------

Input: 0000:00:0000:00:0000:0:00:0000 Output: :: Time: 16 clicks
Input: 0:00:000:0000:000:00:0:0001 Output: ::1 Time: 15 clicks
Input: 00:000:0000:affe:affe:0000:000:0%eth0 Output: ::affe:affe:0:0:0%eth0 Time: 20 clicks
Input: 2001:0022:0333:4444:0001:0000:0000:0000%1 Output: 2001:22:333:4444:1::%1 Time: 20 clicks
Input: 2001:1:02:003:0004::0001%2 Output: 2001:1:2:3:4::1%2 Time: 13 clicks
Input: 2001:0123:0:00:000:0000:192.168.1.1%3 Output: 2001:123::192.168.1.1%3 Time: 19 clicks
Input: 0001:0001::192.168.1.1%4 Output: 1:1::192.168.1.1%4 Time: 11 clicks

----------------------------------- compress_ipv6_addr -----------------------------------------

The expand_ipv6_addr procedure expands a compressed IPv6 notation by zero-padding each individual IPv6 address group to its full 16-bit representation (as defined in RFC 4291 Section 2.2.1). If the input IPv6 address contains the truncated :: notation (as defined in RFC 4291 Section 2.2.2), the omitted zero-value IPv6 address groups will be restored. If the IPv6 address contains a mixed IPv6 and IPv4 address notation (as defined in RFC 4291 Section 2.2.3), the IPv4 address will be converted into two consecutive IPv6 address groups. If the input contains a malformed IPv6 address which cannot be expanded to a full 128-bit IPv6 address, the output will be an empty string.

------------------------------------ expand_ipv6_addr -----------------------------------------------------

Input: :: Output: 0000:0000:0000:0000:0000:0000:0000:0000 Time: 11 clicks
Input: ::1 Output: 0000:0000:0000:0000:0000:0000:0000:0001 Time: 16 clicks
Input: ::1:2%eth0 Output: 0000:0000:0000:0000:0000:0000:0001:0002%eth0 Time: 15 clicks
Input: 2001::1%1 Output: 2001:0000:0000:0000:0000:0000:0000:0001%1 Time: 16 clicks
Input: 2001:1:22:333:4444::%2 Output: 2001:0001:0022:0333:4444:0000:0000:0000%2 Time: 21 clicks
Input: 2001:123::ff:192.168.1.1%3 Output: 2001:0123:0000:0000:0000:00ff:c0a8:0101%3 Time: 29 clicks
Input: 2001:192.168.1.1::10.10.10.10%4 Output: 2001:c0a8:0101:0000:0000:0000:0a0a:0a0a%4 Time: 27 clicks

------------------------------------ expand_ipv6_addr -----------------------------------------------------

Note: Both procedures are able to handle % IPv6 Zone ID suffixes (as defined in RFC 6874) as well as F5's Route Domain notations.

Performance considerations: Both procedures are performance-optimized to maintain a reasonable performance at high execution rates.
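Before looking at the optimized version below, it may help to see the two RFC 4291 rules in isolation. This is a minimal plain-Tcl sketch of my own (not part of Kai's snippet); it assumes a full eight-group input without a Zone ID or an embedded IPv4 notation, both of which the real procedure handles:

proc compress_sketch { addr } {
    # Rule 1: strip leading zeros from each 16-bit group (RFC 4291 Section 2.2.1)
    set groups {}
    foreach grp [split $addr ":"] {
        set grp [string trimleft $grp "0"]
        lappend groups [expr {$grp eq "" ? "0" : $grp}]
    }
    set addr [join $groups ":"]
    # Rule 2: collapse the leftmost longest run of zero groups to "::" (Section 2.2.2)
    for { set len 8 } { $len >= 2 } { incr len -1 } {
        set run [join [lrepeat $len "0"] ":"]
        if { [regsub "(^|:)${run}(:|\$)" $addr "::" addr] } { break }
    }
    return $addr
}

puts [compress_sketch "2001:0022:0333:4444:0001:0000:0000:0000"]  ;# 2001:22:333:4444:1::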
The compress_ipv6_addr procedure uses two aligned [string map] commands to remove any leading zeros without breaking the individual groups, followed by a [switch] statement to detect the longest range of consecutive zero-value groups and to execute either a simple [string range] command or a combination of the [substr] and [findstr] commands to perform the final :: truncation. The expand_ipv6_addr procedure is a little more sophisticated, since it is required to parse the input IPv6 address on a per-group basis to zero-pad the individual groups, to detect and convert embedded IPv4 addresses, and to finally restore the :: truncation. To reduce the required CPU cycles, the expand_ipv6_addr procedure makes use of the $static::ipv6_grp_filler(), $static::ipv6_addr_filler() and $static::ipv6_dec_map() array variables (defined during RULE_INIT) to allow a very fast lookup of the required IPv6 address group zero paddings, of the length of the zero-value IPv6 address groups to insert, and of the IPv4-to-IPv6 translations.

Cheers, Kai

How to use this snippet: The iRule below contains a RULE_INIT event which outlines the procedure usage. Enjoy!

Code:

when RULE_INIT {
    # Initialize the array used to expand compressed IPv6 groups to 16 bit
    array set static::ipv6_grp_filler {
        "1" "000"
        "2" "00"
        "3" "0"
        "4" ""
    }
    # Initialize the array used to expand compressed IPv6 addresses to 128 bit
    array set static::ipv6_addr_filler {
        "0"  "0000:0000:0000:0000:0000:0000:0000:0000"
        "5"  "0000:0000:0000:0000:0000:0000:0000"
        "10" "0000:0000:0000:0000:0000:0000"
        "15" "0000:0000:0000:0000:0000"
        "20" "0000:0000:0000:0000"
        "25" "0000:0000:0000"
        "30" "0000:0000"
        "35" "0000"
        "40" ""
    }
    # Initialize the array used to perform an IPv4 (decimal 0-255) to IPv6 (hex 00-FF) conversion.
    for { set i 0 } { $i <= 255 } { incr i } {
        set static::ipv6_dec_map($i) [format %02x $i]
    }
    #
    # Example procedure calls (samples can be removed)
    #
    set input "2001:0001:0022:0333:4444:0:0:0:1%1"
    set output [call compress_ipv6_addr $input]
    log local0.debug "Input: $input Output: $output"
    set input "2001:ef:123::192.168.1.1%2"
    set output [call expand_ipv6_addr $input]
    log local0.debug "Input: $input Output: $output"
}

proc compress_ipv6_addr { addr } {
    # Enumerate and store IPv6 ZoneID / Route Domain suffix
    if { [set id [getfield $addr "%" 2]] ne "" } then {
        set id "%$id"
        set addr [getfield $addr "%" 1]
    }
    # X-encode (e.g. :0001 becomes :X1) leading zeros on the individual IPv6 address groups (left-orientated searches)
    set addr [string map [list ":0000" ":X" ":000" ":X" ":00" ":X" ":0" ":X" "|0000" "X" "|000" "X" "|00" "X" "|0" "X"] "|$addr|"]
    # Restore the required X-encoded zeros (e.g. :X: becomes :0:) while removing any other X encodings and | separators (right-orientated searches)
    set addr [string map [list "X:" "0:" "X|" "0" "X." "0." "X" "" "|" ""] $addr]
    # Find the longest range of consecutive zero-value IPv6 address groups and then replace the most significant groups with the :: notation.
    switch -glob -- $addr {
        "*::*" {
            # Already compressed
        }
        "0:0:0:0:0:0:0:0" { set addr "::" }
        "0:0:0:0:0:0:0:*" { set addr ":[string range $addr 13 end]" }
        "*:0:0:0:0:0:0:0" { set addr "[string range $addr 0 end-13]:" }
        "0:0:0:0:0:0:*" { set addr ":[string range $addr 11 end]" }
        "*:0:0:0:0:0:0:*" { set addr "[substr $addr 0 ":"]::[findstr $addr ":0:0:0:0:0:0:" 13]" }
        "*:0:0:0:0:0:0" { set addr "[string range $addr 0 end-11]:" }
        "0:0:0:0:0:*" { set addr ":[string range $addr 9 end]" }
        "*:0:0:0:0:0:*" { set addr "[substr $addr 0 ":0:"]::[findstr $addr ":0:0:0:0:0:" 11]" }
        "*:0:0:0:0:0" { set addr "[string range $addr 0 end-9]:" }
        "0:0:0:0:*" { set addr ":[string range $addr 7 end]" }
        "*:0:0:0:0:*" { set addr "[substr $addr 0 ":0:0:"]::[findstr $addr ":0:0:0:0:" 9]" }
        "*:0:0:0:0" { set addr "[string range $addr 0 end-7]:" }
        "0:0:0:*" { set addr ":[string range $addr 5 end]" }
        "*:0:0:0:*" { set addr "[substr $addr 0 ":0:0:0:"]::[findstr $addr ":0:0:0:" 7]" }
        "*:0:0:0" { set addr "[string range $addr 0 end-5]:" }
        "0:0:*" { set addr ":[string range $addr 3 end]" }
        "*:0:0:*" { set addr "[substr $addr 0 ":0:0:"]::[findstr $addr ":0:0:" 5]" }
        "*:0:0" { set addr "[string range $addr 0 end-3]:" }
    }
    # Append the previously extracted IPv6 ZoneID / Route Domain suffix and return the compressed IPv6 address
    return "$addr$id"
}

proc expand_ipv6_addr { addr } {
    if { [catch {
        # Enumerating and storing IPv6 ZoneID / Route Domain suffix
        if { [set id [getfield $addr "%" 2]] ne "" } then {
            set id "%$id"
            set addr [getfield $addr "%" 1]
        }
        # Parsing the first IPv6 address block of a possible :: notation by splitting the block into : separated IPv6 address groups
        set blk1 ""
        foreach grp [split [getfield $addr "::" 1] ":"] {
            # Check if current group contains a IPv4 address notation
            if { $grp contains "." } then {
                # The current group contains a IPv4 address notation. Trying to extract the four IPv4 address octets
                scan $grp {%d.%d.%d.%d} oct1 oct2 oct3 oct4
                # Convert the four IPv4 address octets into two IPv6 address groups by querying the $static::ipv6_dec_map array
                append blk1 "$static::ipv6_dec_map($oct1)$static::ipv6_dec_map($oct2) $static::ipv6_dec_map($oct3)$static::ipv6_dec_map($oct4) "
                set oct4 ""
            } else {
                # The current group contains just a IPv6 address notation. Filling up the IPv6 address group with leading zeros by querying the $static::ipv6_grp_filler array
                append blk1 "$static::ipv6_grp_filler([string length $grp])$grp "
            }
        }
        # Parsing the second IPv6 address block of a possible :: notation by splitting the block into : separated IPv6 address groups
        set blk2 ""
        foreach grp [split [getfield $addr "::" 2] ":"] {
            # Check if current group contains a IPv4 address notation
            if { $grp contains "." } then {
                # The current group contains a IPv4 address notation. Trying to extract the four IPv4 address octets
                scan $grp {%d.%d.%d.%d} oct1 oct2 oct3 oct4
                # Convert the four IPv4 address octets into two IPv6 address groups by querying the $static::ipv6_dec_map array
                append blk2 "$static::ipv6_dec_map($oct1)$static::ipv6_dec_map($oct2) $static::ipv6_dec_map($oct3)$static::ipv6_dec_map($oct4) "
                set oct4 ""
            } else {
                # The current group contains just a IPv6 address notation.
                # Filling up the IPv6 address group with leading zeros by querying the $static::ipv6_grp_filler array
                append blk2 "$static::ipv6_grp_filler([string length $grp])$grp "
            }
        }
        # Joining the first and second block of the possible :: notation while expanding the address to 128-bit length by querying the $static::ipv6_addr_filler array
        set addr "[join "$blk1$static::ipv6_addr_filler([string length "$blk1$blk2"]) $blk2" ":"]"
    }] } then {
        # log local0.debug "errorInfo: [subst \$::errorInfo]"
        # return "errorInfo: [subst \$::errorInfo]"
        return ""
    }
    # Append the previously extracted IPv6 ZoneID / Route Domain suffix and return the expanded IPv6 address notation
    return "$addr$id"
}

Tested this on version: 12.0
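If you want to smoke-test compress_ipv6_addr off-box in a plain tclsh, the following hypothetical stubs of mine approximate the behavior of the F5-only commands it relies on (getfield, substr, findstr), as used here; they do not implement the full semantics of F5's commands. Note that expand_ipv6_addr additionally uses the iRule-specific contains operator, so it still needs a BIG-IP or further shimming:

proc getfield { str sep idx } {
    # Split on a (possibly multi-character) separator, return the 1-based field or "" if absent
    lindex [split [string map [list $sep "\x00"] $str] "\x00"] [expr {$idx - 1}]
}
proc substr { str skip terminator } {
    # Text from $skip up to (not including) the terminator string
    set str [string range $str $skip end]
    set pos [string first $terminator $str]
    if { $pos < 0 } { return $str }
    return [string range $str 0 [expr {$pos - 1}]]
}
proc findstr { str search skip } {
    # Text starting $skip characters after the position where $search was found
    set pos [string first $search $str]
    if { $pos < 0 } { return "" }
    return [string range $str [expr {$pos + $skip}] end]
}

puts [compress_ipv6_addr "2001:0022:0333:4444:0001:0000:0000:0000%1"]  ;# 2001:22:333:4444:1::%1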
Selective Pass-through

Hello,

I need to inject data into some webpages using an iRule. The backend server sends HTTP data chunked and compressed (gzip). However, only some pages need to be injected into; those can be identified by an HTTP header (and only this way). I also need all outgoing traffic to be chunked and compressed, with working keep-alive connections. I got this working by setting Response Chunking and Compression to Selective and performing the injection itself in an iRule using STREAM::expression.

The problem, however, is that all data is being decompressed (and in turn rechunked) by the F5 as soon as the compression module is not set to Disabled. This induces an unnecessarily high load on the F5, which I'd like to avoid. What I want is to identify the header in the response from the backend server; if it is found, inject, rechunk and recompress; otherwise completely pass through all HTTP data without processing anything.

Setting the compression module to Disabled seems to be unfeasible, since I can't perform an injection anymore. Using COMPRESS::disable disables compression, not the compression module, thus decompressing everything from the server and sending it uncompressed to the client. After fiddling around a bit, it seems that compression can be disabled implicitly by disabling HTTP processing (HTTP::disable). But this seems to be incompatible with keep-alive connections (because the next request on the same connection isn't recognized).

And now I have run out of ideas and ask here: is there any way to achieve a selective pass-through, depending on a header sent by the backend server? I am using BIG-IP 10.2.4 Build 577.0 Final. We are thinking about switching to 11 in the mid-term, but a solution for 10 would be nice.

Thanks, Christian
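For reference, the selective-injection part described above looks roughly like this as an iRule sketch. The X-Inject marker header and the stream expression are placeholders of mine (not from the post), and a blank stream profile is assumed on the virtual server; this selects which responses the stream filter touches, but it does not solve the decompression overhead, since the compression module still has to be enabled:

when HTTP_REQUEST {
    # Keep the stream filter off the request flow
    STREAM::disable
}
when HTTP_RESPONSE {
    if { [HTTP::header exists "X-Inject"] } {
        # Marker header found: inject, and let LTM rechunk/recompress on egress
        STREAM::expression {@</body>@<!-- injected --></body>@}
        STREAM::enable
    } else {
        # No marker: leave the payload alone (LTM still decompresses it, which is the problem described)
        STREAM::disable
    }
}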
Get system compression statistics from command line?

Hi! Does anyone know a way of getting system-wide compression throughput from the command line? I need to get certain metrics limited by licenses, but have only found SSL TPS and throughput via tmsh, not compression. tmsh and bash are what I'm allowed to use, and I can't use SNMP, as it has to be compatible with all systems, even those with SNMP disabled (doing this for a client with very specific needs and methods).

Example: show sys performance all-stats

Sys::Performance System
-------------------------------------------------------------------
System CPU Usage(%)            Current  Average  Max(since 11/06/16 12:37:17)
-------------------------------------------------------------------
Utilization                         28       28       43

-------------------------------------------------------------------
Memory Used(%)                 Current  Average  Max(since 11/06/16 12:37:17)
-------------------------------------------------------------------
TMM Memory Used                     11       11       11
Other Memory Used                   63       63       63
Swap Used                            0        0        0

Sys::Performance Connections
---------------------------------------------------------------------------
Active Connections             Current  Average  Max(since 11/06/16 12:37:17)
---------------------------------------------------------------------------
Connections                      45.1K    43.2K    49.4K

---------------------------------------------------------------------------
Total New Connections(/sec)    Current  Average  Max(since 11/06/16 12:37:17)
---------------------------------------------------------------------------
Client Connections                 916      962     1.1K
Server Connections                 745      782      922

---------------------------------------------------------------------------
HTTP Requests(/sec)            Current  Average  Max(since 11/06/16 12:37:17)
---------------------------------------------------------------------------
HTTP Requests                     2.8K     2.8K     3.3K

Sys::Performance Throughput
-----------------------------------------------------------------------------
Throughput(bits)(bits/sec)     Current  Average  Max(since 11/06/16 12:37:17)
-----------------------------------------------------------------------------
Service                         644.2M   622.5M   807.2M
In                              659.1M   638.1M   828.2M
Out                             303.5M   300.6M   375.6M

-----------------------------------------------------------------------------
SSL Transactions               Current  Average  Max(since 11/06/16 12:37:17)
-----------------------------------------------------------------------------
SSL TPS                            599      640      790

-----------------------------------------------------------------------------
Throughput(packets)(pkts/sec)  Current  Average  Max(since 11/06/16 12:37:17)
-----------------------------------------------------------------------------
Service                          78.7K    77.1K    97.6K
In                               78.5K    77.1K    97.8K
Out                              68.4K    67.8K    84.1K

Sys::Performance Ramcache
------------------------------------------------------------------------
RAM Cache Utilization(%)       Current  Average  Max(since 11/06/16 12:37:17)
------------------------------------------------------------------------
Hit Rate                            61       63       70
Byte Rate                           67       67       75
Eviction Rate                        8       14       28

Any input appreciated! /Patrik
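One possible lead, offered as an untested suggestion rather than something from the original thread: on later TMOS versions, per-profile compression counters are exposed in tmsh, e.g.

tmsh show ltm profile http-compression

and those per-profile numbers could be summed in a bash script. Whether that command exists on every system in scope depends on the versions involved.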
Understanding STREAM expression and Compression

Hello - I have a question to try and confirm my understanding around using STREAM and compression. I'm aware of the need to disable compression so STREAM is able to inspect the payload, but after the STREAM expression has done its replacing, can the content be compressed again to improve performance, or is this benefit lost?

In our set-up, we have physical LTMs that handle SSL offloading (part of the cloud solution we use) and virtual LTMs that we configure for service-specific iRules etc. So on the physical LTM with SSL offload, there is STREAM (blank) and an iRule to replace http:// with https:// in the response with the following:

when HTTP_REQUEST {
    # PHYSICAL LTM WITH SSL OFFLOAD
    # tell server not to compress response
    HTTP::header remove Accept-Encoding
    # disable STREAM for request flow
    STREAM::disable
}
when HTTP_RESPONSE {
    # catch and replace redirect headers
    if { [HTTP::header exists Location] } {
        HTTP::header replace Location [string map {"http://" "https://"} [HTTP::header Location]]
    }
    # only look at text data
    if { [HTTP::header Content-Type] contains "text" } {
        # create a STREAM expression to replace any http:// with https://
        STREAM::expression {@http://@https://@}
        # enable STREAM
        STREAM::enable
    }
}

On the virtual LTM, we have a similar entry in the iRule:

when HTTP_REQUEST {
    # VIRTUAL LTM
    # tell server not to compress response
    HTTP::header remove Accept-Encoding
    # disable STREAM for request flow
    STREAM::disable
}
when HTTP_RESPONSE {
    # catch and replace redirect headers
    if { [HTTP::header exists Location] } {
        HTTP::header replace Location [string map {"://internal.url" "://external.url"} [HTTP::header Location]]
    }
    # only look at text data
    if { [HTTP::header Content-Type] contains "text" } {
        # create a STREAM expression to replace the internal URL with the external one
        STREAM::expression {@://internal.url@://external.url@}
        # enable STREAM
        STREAM::enable
    }
}

So in this set-up, do we lose the benefit of HTTP compression?

Thanks
APM migration to iseries causes VPN network access tunnels to close

We are working on a migration from old hardware (5250v) to new iSeries hardware (i5800) and have extended the APM cluster; the configuration is fully in sync. We failed over and activated the new cluster member the night before, and the next working day people started to work with the new solution without any issues. After some time, when more users were connected, users started being disconnected from the VPN, and in the APM logs we see that the tunnels are being closed and started again. During this time, data is stalled on the tunnel and not flowing, although the VPN client is still connected - similar to this bug:

https://cdn.f5.com/product/bugtracker/ID600985.html

This seems to be a performance issue; however, we don't see any CPU utilization issue. The hardware is also using a dedicated Coleto Creek CPU for SSL and compression offloading, and GZIP compression offloading is used on the network access tunnels. On the old hardware there is no stability issue with the exact same configuration; the only difference is that there the Cave Creek dedicated CPU is used for hardware offloading.

In this article it is stated that using compression in APM network access could cause CPU spikes only when no hardware offloading is used:

https://support.f5.com/csp/article/K12524516

Could there perhaps be a bug in hardware compression offloading (GZIP deflate) on the new Coleto Creek CPU? If hardware compression offloading is used, this should not load the TMM-assigned CPU cores, as the work is not processed in software, correct?
Compression stripped by Silverline

We've recently experienced slowdowns serving web pages, and here's something we've found: apparently, when traffic passes through the WAF, the WAF strips out the following header: Content-Encoding: gzip. We serve pages compressed with GZIP, but, from what we can see, the WAF strips that compression, severely slowing down page delivery. Does this make sense to anyone, and is there a way to remediate this issue?
The Order of (Network) Operations

Thought those math rules you learned in 6th grade were useless? Think again... some are more applicable to the architecture of your data center than you might think.

Remember back when you were in the 6th grade, learning about the order of operations in math class? You might recall learning that the order in which mathematical operators are applied can have a significant impact on the result. That's why we learned there's an order of operations, a set of rules, that we need to follow to ensure that we always get the correct answer when performing mathematical equations.

Rule 1: First perform any calculations inside parentheses.
Rule 2: Next perform all multiplications and divisions, working from left to right.
Rule 3: Lastly, perform all additions and subtractions, working from left to right.

Similarly, the order in which network and application delivery operations are applied can dramatically impact the performance and efficiency of the delivery of applications, no matter where those applications reside.
best file types to apply compression

When studying web acceleration, I came across the claim that there are certain types of files for which applying compression is useless, such as videos and images, while for others it is very useful, such as text and HTML. I want to know: is this true, and why?
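It is true, and easy to demonstrate. Text formats are full of repetition and low-entropy byte patterns, which deflate-style compression exploits; JPEG, PNG, and video codecs already compress their payload, so their output looks statistically random and a second compression pass gains almost nothing while still burning CPU. A quick illustration in plain Tcl 8.6 (any language's zlib binding shows the same effect):

# Compare deflate on repetitive text versus random bytes, which stand in
# for already-compressed media such as JPEG or MP4 payloads
set text [string repeat "<p>Hello, compression world!</p>\n" 200]
set random ""
for { set i 0 } { $i < [string length $text] } { incr i } {
    append random [binary format c [expr {int(rand() * 256)}]]
}
puts "text:   [string length $text] bytes -> [string length [zlib deflate $text]] bytes"
puts "random: [string length $random] bytes -> [string length [zlib deflate $random]] bytes"

The text sample shrinks to a tiny fraction of its size; the random sample barely shrinks at all, and can even grow slightly.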
Deduplication and Compression – Exactly the same, but different.

One day many years ago, Lori's and my oldest son held up two sheets of paper and said "These two things are exactly the same, but different!" Now, he's a very bright individual, he was just young, and didn't even get how incongruous the statement was. We, being a fun-loving family that likes to tease each other on occasion, have of course not yet let him live it down. It was honestly more than a decade ago, but all is fair: he doesn't let Lori live down something funny that she did before he was born. It is all in good fun, of course.

Why am I bringing up this family story? Because that phrase does come to mind when you start talking about deduplication and compression. Highly complementary and very similar, they are pretty much "exactly the same, but different". Since these technologies are both used pretty heavily in WAN optimization, and are growing in use on storage products, this topic intrigued me. To get this out of the way: at F5, compression is built into the BIG-IP family as a feature of the core BIG-IP LTM product, and deduplication is an added layer implemented over BIG-IP LTM on BIG-IP WAN Optimization Module (WOM). Other vendors have similar but varied (there goes a variant of that phrase again) implementation details.

Before we delve too deeply into this topic, what caught my attention and started me pondering the whys was that F5's deduplication is applied before compression, and it seems that reversing the order changes performance characteristics. I love a good puzzle, and while the fact that one should come before the other was no surprise, I wanted to know why the order is what it is, and what the impact of reversing them might be. So I started working to understand the implementation details of these two technologies - not from an F5 perspective, though that is certainly where I started, but to understand how they interact and complement each other. While much of this discussion also applies to in-place compression and deduplication such as that used on many storage devices, some of it does not, so assume that I am talking about networking, specifically WAN networking, throughout this blog.

At the very highest level, deduplication and compression are the same thing. They both look for ways to shrink your dataset before passing it along. After that, it gets a bit more complex. If it was really that simple, after all, we wouldn't call them two different things. Well, okay, we might; IT has a way of lumping competing standards, product categories, even jobs together under the same name. But still, they wouldn't warrant two different names in the same product, as F5 does with BIG-IP WOM.

The thing is that compression can apply transformations to data to shrink it, and it also looks for small groupings of repetitive byte patterns and replaces them, while deduplication looks for larger groupings of repetitive byte patterns and replaces them. In the implementation you'll see on BIG-IP WOM, deduplication looks for larger byte patterns repeated across all streams, while compression applies transformations to the data and, when removing duplication, only looks for smaller combinations within a single stream. The net result? The two are very complementary, but if you run compression before deduplication, compression will find a whole collection of small repeating byte patterns, and between that and the transformations, deduplication will find nothing, making compression work harder and deduplication spin its wheels.
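To make the cross-stream idea concrete, here is a toy sketch of my own in plain Tcl 8.6 - an illustration, not WOM's algorithm (real engines use content-defined chunking and strong fingerprints rather than fixed 4 KB offsets and CRC32). A chunk cache shared across streams lets the second stream replace an attachment the device has already seen with short references; deflate each stream first and the duplicate bytes are scrambled before the cache ever sees them:

# Chunk cache shared by every stream that passes through the "device"
set cache [dict create]

proc dedup_stream { data } {
    global cache
    set out ""
    for { set off 0 } { $off < [string length $data] } { incr off 4096 } {
        set chunk [string range $data $off [expr {$off + 4095}]]
        set fp [zlib crc32 $chunk]      ;# toy fingerprint; real engines hash more carefully
        if { [dict exists $cache $fp] } {
            append out "<REF:$fp>"      ;# seen before (on any stream): send a reference
        } else {
            dict set cache $fp 1
            append out $chunk           ;# first sighting: send the bytes and remember them
        }
    }
    return $out
}

# Two different messages carrying the same ~8 KB attachment (equal-length headers
# keep the fixed-offset chunks aligned; content-defined chunking avoids that need)
set attachment [string repeat "the same attachment body, repeated " 234]
set stream1 "From: alice\r\n\r\n$attachment"
set stream2 "From: bobby\r\n\r\n$attachment"

puts "stream1: [string length $stream1] -> [string length [dedup_stream $stream1]] bytes"
puts "stream2: [string length $stream2] -> [string length [dedup_stream $stream2]] bytes"

The first stream passes through nearly untouched; the second sends roughly half the bytes, because the chunks holding the attachment are already in the shared cache.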
There are other differences. Because deduplication deals with large runs of repetitive data (I believe that in BIG-IP the minimum size is over a K), it uses some form of caching to hold patterns that duplicates can match, and the larger the cache, the more strings of bytes you have to compare against. This introduces some fun around where the cache should be stored. In memory is fast but limited in size; on flash disk is fast and larger, but expensive; and on disk is slow but has a huge advantage in size. Good deduplication engines can support all three and thus are customizable to what your organization needs and can afford.

Some workloads just won't benefit from one, but will get a huge benefit from the other. The extremes are good examples of this phenomenon. If you have a lot of in-the-stream repetitive data that is too small for deduplication to pick up, and little or no cross-stream duplication, then deduplication will be of limited use to you, and the act of running through the dedupe engine might actually degrade performance a negligible amount; of course, everything is algorithm-dependent, so depending upon your vendor it might degrade performance a large amount also. On the other extreme, if you have a lot of large-byte-count duplication across streams, but very little within a given stream, deduplication is going to save your day, while compression will, at best, offer you a little benefit.

So yes, they're exactly the same from the 50,000-foot view, but very, very different from the benefits and use-cases view. And they're very complementary, giving you more bang for the buck.