Configuring A Generic Routing Encapsulation (GRE) Tunnel Using BIG-IP
This article is written by, and published on behalf of, DevCentral MVP Leonardo Souza.

---

Introduction

Recently I helped a customer define and test the configuration of multiple Generic Routing Encapsulation (GRE) tunnels between BIG-IPs and Zscaler. While I have implemented IPsec on BIG-IPs before, GRE on BIG-IPs was new to me. There is a lot of documentation about configuring IPsec on BIG-IP, but very little on GRE configuration. That is understandable, since GRE is not a common setup on BIG-IP compared to IPsec.

One important difference between GRE and IPsec: GRE does not provide encryption, while IPsec does. In this customer case, the traffic passing through the GRE tunnel goes to a cloud Internet proxy, so encryption (if necessary) is provided by the higher layers. This means the GRE tunnel will carry HTTP and HTTPS traffic, so we do not need to care about encryption at the tunnel layer. While Zscaler has documentation about setting up GRE to use their services, their examples only cover Cisco and Juniper, so this article fills the gap on how to set up GRE on the BIG-IP side of the tunnel.

The Setup

In this article I am going to document the setup of a GRE tunnel between two BIG-IPs. This should give you a general idea of how GRE works in the F5 world, so you can translate it into whatever requirement you have when setting up GRE between a BIG-IP and other devices.

I have a laptop [LABWin] that I will use to simulate a user accessing the webserver (everything in the lab is virtual). I have the two BIG-IPs [LABBIGIP1, LABBIGIP2] connected via the GRE tunnel, and connected to the same VLAN and network. Lastly, the webserver [LABServer3] is used to provide the website content.

Both LABWin and LABBIGIP1 will be connected to network 172.20.0.0/24, with IPs 172.20.0.10 and 172.20.0.1 respectively. I will explain this better next, but LABBIGIP1 and LABBIGIP2 will also be connected to two networks, 172.18.0.0/30 and 198.18.0.0/30, with their respective IPs ending in .1 and .2. Both LABBIGIP2 and LABServer3 will be connected to network 172.30.0.0/24, with IPs 172.30.0.2 and 172.30.0.200 respectively.

GRE Overview

GRE was developed by Cisco, but it is now a standard defined in an RFC. I am sure you have heard about the OSI model, or the TCP/IP model; both are based on the idea of encapsulation. The easiest way to explain these concepts is to think about those dolls you open to find another one inside, and you keep opening them until you get to the last one. (I did not know the name, but Google is a friend, so if you want to learn something non-technical today - https://en.wikipedia.org/wiki/Matryoshka_doll )

Encapsulation works the same way: you continue adding layers until all layers have been added. Once you have sent the result to the other device, it removes the layers until it gets to the message. GRE adds one GRE layer and another IP layer, allowing messages to be sent, for example, via the Internet. A computer sends a packet to a destination via its local router; the router has a route to the destination via a GRE tunnel that uses the Internet. The router then adds a layer which includes the GRE information, and adds another IP layer which includes public IPs, allowing the traffic to be routed successfully over the Internet. The destination router receives the packet, analyses the GRE information, uses this to send the traffic to its internal network, and the packet is then able to arrive at its proper destination.

Here is a tcpdump example showing multiple encapsulation layers.
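The original capture image is not reproduced here, but based on the lab addressing above, the encapsulation order such a capture shows looks roughly like this (an illustration, not actual tcpdump output):

Ethernet / 802.1Q VLAN
  - IPv4 (outer header: 198.18.0.1 -> 198.18.0.2, the tunnel endpoints)
    - GRE
      - IPv4 (inner header: 172.20.0.10 -> 172.30.0.200, laptop to server)
        - TCP (port 80)
          - HTTP (the GET request and response)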
The VLAN layer says the layer above it (or below, depending on how you look at it) is IPv4, the IPv4 layer says the next layer is GRE, the GRE layer says the next layer is IP, and it continues like this until the HTTP layer. So, GRE just uses the encapsulation approach to add extra layers that help with the traffic flow.

Configuration

NOTE: I am using BIG-IP version 15.1.0, but there is nothing here that would not work in other supported versions.

First we need the VLANs set up:

LABBIGIP1

net vlan gre_tunnel_outside_vlan {
    interfaces {
        1.2 { }
    }
    tag 4093
}
net vlan laptop_lan_vlan {
    interfaces {
        1.1 { }
    }
    tag 4094
}

LABBIGIP2

net vlan gre_tunnel_outside_vlan {
    interfaces {
        1.2 { }
    }
    tag 4093
}
net vlan server_lan_vlan {
    interfaces {
        1.1 { }
    }
    tag 4094
}

Next, the most important part, the GRE tunnel itself:

LABBIGIP1

net tunnels tunnel gre_tunnel {
    local-address 198.18.0.1
    profile gre
    remote-address 198.18.0.2
    traffic-group traffic-group-local-only
}

LABBIGIP2

net tunnels tunnel gre_tunnel {
    local-address 198.18.0.2
    profile gre
    remote-address 198.18.0.1
    traffic-group traffic-group-local-only
}

The configuration above only shows the relevant/non-standard settings. I could use all-properties with the tmsh command, but it is easier to explain the options as you will see them in the GUI. The important ones (using LABBIGIP1 as the example for the IPs):

Name: no explanation needed.
Profile: gre, as that is the type of tunnel we want to set up.
Local Address: 198.18.0.1, the local IP of the outside layer of the tunnel (the public address, for example, if you are building the tunnel over the Internet).
Remote Address: select Specify…, 198.18.0.2, the same idea as the local address, but in this case it is the remote device's IP.
Mode: Bidirectional. This is important; unlike IPsec, where a similar setting indicates who can start the tunnel, this indicates in which direction traffic can flow. If you select Outbound and send a ping from the laptop to the server, the ICMP packet arrives at the server, but the laptop never receives the response.
Traffic Group: /Common/traffic-group-local-only. In this lab we are using standalone devices; if you have an HA pair, you can choose to have a tunnel per device or just one tunnel on the active device.

Now let's create the self IPs:

LABBIGIP1

net self gre_tunnel_outside_self {
    address 198.18.0.1/30
    traffic-group traffic-group-local-only
    vlan gre_tunnel_outside_vlan
}
net self gre_tunnel_self {
    address 172.18.0.1/30
    traffic-group traffic-group-local-only
    vlan gre_tunnel
}
net self laptop_lan_self {
    address 172.20.0.1/24
    traffic-group traffic-group-local-only
    vlan laptop_lan_vlan
}

LABBIGIP2

net self server_lan_self {
    address 172.30.0.2/24
    traffic-group traffic-group-local-only
    vlan server_lan_vlan
}
net self gre_tunnel_self {
    address 172.18.0.2/30
    traffic-group traffic-group-local-only
    vlan gre_tunnel
}
net self gre_tunnel_outside_self {
    address 198.18.0.2/30
    traffic-group traffic-group-local-only
    vlan gre_tunnel_outside_vlan
}

As you can see above, gre_tunnel_self is the self IP that is linked to the GRE tunnel. It is basically an interface in the tunnel, and it automatically creates a route to the locally connected network of the tunnel, so routing can be done. In relation to port lockdown, you can use any setting, including Allow None, and the GRE tunnel will still work. Also, the traffic group here plays the same role as in the GRE tunnel settings, so make sure both use the same traffic group.
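If you prefer to build these objects from tmsh rather than the GUI, the equivalent create commands look roughly like the following (LABBIGIP1 shown; this is a sketch derived from the listings above, so verify the syntax on your version before using it):

tmsh create net vlan gre_tunnel_outside_vlan interfaces add { 1.2 } tag 4093
tmsh create net vlan laptop_lan_vlan interfaces add { 1.1 } tag 4094
tmsh create net tunnels tunnel gre_tunnel profile gre local-address 198.18.0.1 remote-address 198.18.0.2 traffic-group traffic-group-local-only
tmsh create net self gre_tunnel_outside_self address 198.18.0.1/30 vlan gre_tunnel_outside_vlan traffic-group traffic-group-local-only
tmsh create net self gre_tunnel_self address 172.18.0.1/30 vlan gre_tunnel traffic-group traffic-group-local-only
tmsh create net self laptop_lan_self address 172.20.0.1/24 vlan laptop_lan_vlan traffic-group traffic-group-local-only

On LABBIGIP2 the same commands apply, with the local/remote addresses and self IPs swapped as shown in the listings.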
Next we need routing:

LABBIGIP1

net route gre_route {
    interface /Common/gre_tunnel
    network 172.30.0.0/24
}

LABBIGIP2

net route gre_route {
    interface /Common/gre_tunnel
    network 172.20.0.0/24
}

LABWin

Network Destination    Netmask          Gateway       Interface      Metric
172.30.0.0             255.255.255.0    172.20.0.1    172.20.0.10    26

LABServer3

Destination    Gateway       Genmask          Flags    MSS Window    irtt    Iface
172.20.0.0     172.30.0.2    255.255.255.0    UG       0 0           0       ens192

These are normal routing rules; devices need to know where to send traffic for non-local networks. On the BIG-IP you tell it to use a VLAN/tunnel, and then select the GRE tunnel.

Virtual Servers:

Lastly, because we are talking about a BIG-IP, you need a listener for the BIG-IP to process the traffic and send it via the tunnel.

LABBIGIP1

ltm virtual gre_fw_vs {
    destination 172.30.0.0:any
    ip-forward
    mask 255.255.255.0
    profiles {
        fastL4 { }
    }
    source 0.0.0.0/0
    translate-address disabled
    translate-port disabled
    vlans {
        laptop_lan_vlan
    }
    vlans-enabled
}

To provide some security, and to avoid the tunnel having to handle unnecessary traffic, the virtual server only listens on VLAN laptop_lan_vlan. Make sure the destination matches the server LAN network (172.30.0.0/24). Also make sure you select All Protocols for the Protocol option when creating the virtual server.

LABBIGIP2

ltm virtual gre_fw_vs {
    destination 172.30.0.0:any
    ip-forward
    mask 255.255.255.0
    profiles {
        fastL4 { }
    }
    source 0.0.0.0/0
    translate-address disabled
    translate-port disabled
    vlans {
        gre_tunnel
    }
    vlans-enabled
}

Similar to LABBIGIP1, but this virtual server listens on the tunnel gre_tunnel, as that is where the traffic arrives.

Testing

First let me show that it works, by showing the content returned when accessing the URL http://172.30.0.200/. The IP 172.30.0.200 is LABServer3, which is a Linux server running Apache. The image comes from the following post, but I am just using the final HTML, not the iRules: http://irulesmagic.blogspot.com/2011/06/can-it-make-cup-of-coffee.html

When someone tells you that you can't make coffee with iRules, that is false, as my Aussie friend Kevin (https://devcentral.f5.com/s/profile/0051T000008u9sQQAQ ) proves.

Now let me show you the traffic from the laptop to the server and back. I first tried tcpdump using 0.0, which means all VLANs on the BIG-IP, but I could not see the GET from LABBIGIP1 to LABBIGIP2 via the tunnel. I also tried tcpdump using the tunnel as the interface, but that only shows you the inside of the tunnel, that is, the packet before it gets wrapped in the tunnel information. So, I ended up running tcpdump on the interfaces, to make sure I could show you all the information.

LABWin

Using Wireshark to capture the traffic. There is no /favicon on the server, so the 404 for that request is normal.

LABBIGIP1

tcpdump commands:

tcpdump -nnvi 1.1:nnn -s0 \(host 172.20.0.10 and 172.30.0.200\) or \(host 198.18.0.1 and 198.18.0.2\) or \(host 172.18.0.1 and 172.18.0.2\) -w /shared/tmp/gre_bigip1_1.1.cap
tcpdump -nnvi 1.2:nnn -s0 \(host 172.20.0.10 and 172.30.0.200\) or \(host 198.18.0.1 and 198.18.0.2\) or \(host 172.18.0.1 and 172.18.0.2\) -w /shared/tmp/gre_bigip1_1.2.cap

I will show only the relevant parts, to avoid a lot of images.

Packet received from LABWin on LABBIGIP1 on VLAN laptop_lan_vlan (interface 1.1); as you can see, there is no GRE layer.

Packet sent from LABBIGIP1 to LABBIGIP2 via the GRE tunnel (interface 1.2); as you can see, there are two IPv4 layers and a GRE layer. The first IPv4 layer is the outside of the tunnel (the one selected in the capture), and the second is the inside of the tunnel.
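For reference, the route and the forwarding virtual server above can also be created from tmsh; the following is a sketch based on the LABBIGIP1 listings (verify against your version before relying on it):

tmsh create net route gre_route network 172.30.0.0/24 interface /Common/gre_tunnel
tmsh create ltm virtual gre_fw_vs destination 172.30.0.0:any mask 255.255.255.0 ip-forward profiles add { fastL4 } source 0.0.0.0/0 translate-address disabled translate-port disabled vlans-enabled vlans add { laptop_lan_vlan }

On LABBIGIP2 the route points to 172.20.0.0/24 and the virtual server is enabled on the tunnel gre_tunnel instead of laptop_lan_vlan.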
Also, you can see the stats for the tunnel via this tmsh command:

tmsh show net tunnels tunnel gre_tunnel

To reset the stats use:

tmsh reset-stats net tunnels tunnel gre_tunnel

Before and after the traffic:

root@(LABBIGIP1)(cfg-sync Standalone)(Active)(/Common)(tmos)# show net tunnels tunnel gre_tunnel
---------------------------------
Net::Tunnel: gre_tunnel
---------------------------------
Incoming Discard Packets          0
Incoming Error Packets            0
Incoming Unknown Proto Packets    0
Outgoing Discard Packets          0
Outgoing Error Packets            0
HC Incoming Octets                0
HC Incoming Unicast Packets       0
HC Incoming Multicast Packets     0
HC Incoming Broadcast Packets     0
HC Outgoing Octets                0
HC Outgoing Unicast Packets       0
HC Outgoing Multicast Packets     0
HC Outgoing Broadcast Packets     0

root@(LABBIGIP1)(cfg-sync Standalone)(Active)(/Common)(tmos)# show net tunnels tunnel gre_tunnel
------------------------------------
Net::Tunnel: gre_tunnel
------------------------------------
Incoming Discard Packets          0
Incoming Error Packets            0
Incoming Unknown Proto Packets    0
Outgoing Discard Packets          0
Outgoing Error Packets            0
HC Incoming Octets                1.3K
HC Incoming Unicast Packets       6
HC Incoming Multicast Packets     0
HC Incoming Broadcast Packets     0
HC Outgoing Octets                1.1K
HC Outgoing Unicast Packets       8
HC Outgoing Multicast Packets     0
HC Outgoing Broadcast Packets     0

root@(LABBIGIP1)(cfg-sync Standalone)(Active)(/Common)(tmos)#

LABBIGIP2

Similar to LABBIGIP1, only the relevant parts. tcpdump commands:

tcpdump -nnvi 1.1:nnn -s0 \(host 172.20.0.10 and 172.30.0.200\) or \(host 198.18.0.1 and 198.18.0.2\) or \(host 172.18.0.1 and 172.18.0.2\) -w /shared/tmp/gre_bigip2_1.1.cap
tcpdump -nnvi 1.2:nnn -s0 \(host 172.20.0.10 and 172.30.0.200\) or \(host 198.18.0.1 and 198.18.0.2\) or \(host 172.18.0.1 and 172.18.0.2\) -w /shared/tmp/gre_bigip2_1.2.cap

Packet received from LABBIGIP1 on LABBIGIP2 via the GRE tunnel (interface 1.2).

Packet forwarded by LABBIGIP2 to LABServer3 on VLAN server_lan_vlan (interface 1.1).

Before and after the traffic:

root@(LABBIGIP2)(cfg-sync Standalone)(Active)(/Common)(tmos)# show net tunnels tunnel gre_tunnel
---------------------------------
Net::Tunnel: gre_tunnel
---------------------------------
Incoming Discard Packets          0
Incoming Error Packets            0
Incoming Unknown Proto Packets    0
Outgoing Discard Packets          0
Outgoing Error Packets            0
HC Incoming Octets                0
HC Incoming Unicast Packets       0
HC Incoming Multicast Packets     0
HC Incoming Broadcast Packets     0
HC Outgoing Octets                0
HC Outgoing Unicast Packets       0
HC Outgoing Multicast Packets     0
HC Outgoing Broadcast Packets     0

root@(LABBIGIP2)(cfg-sync Standalone)(Active)(/Common)(tmos)# show net tunnels tunnel gre_tunnel
------------------------------------
Net::Tunnel: gre_tunnel
------------------------------------
Incoming Discard Packets          0
Incoming Error Packets            0
Incoming Unknown Proto Packets    0
Outgoing Discard Packets          0
Outgoing Error Packets            0
HC Incoming Octets                1.1K
HC Incoming Unicast Packets       8
HC Incoming Multicast Packets     0
HC Incoming Broadcast Packets     0
HC Outgoing Octets                1.3K
HC Outgoing Unicast Packets       6
HC Outgoing Multicast Packets     0
HC Outgoing Broadcast Packets     0

root@(LABBIGIP2)(cfg-sync Standalone)(Active)(/Common)(tmos)#

LABServer3

tcpdump command:

tcpdump -i ens192 -w gre_labserver3.cap

A normal traffic flow that matches what is seen on LABWin.

Load Balancing

If you want to have multiple tunnels, for example a primary and a standby, or even an active/active scenario, you need LTM. You will need to create a virtual server to do the load balancing, and then you can use all the magic from LTM, such as priority groups (a minimal priority-group sketch follows below).
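The article does not show a priority-group configuration, so here is a minimal, hypothetical sketch of what one could look like. The second member (172.30.1.200) is invented purely for illustration and is not part of this lab:

ltm pool gre_pool {
    min-active-members 1
    monitor http
    members {
        172.30.0.200:http {
            address 172.30.0.200
            priority-group 2
        }
        172.30.1.200:http {
            address 172.30.1.200
            priority-group 1
        }
    }
}

With min-active-members set to 1, traffic is sent to the members in the highest priority group (2), and LTM only falls back to the lower group (1) when the higher group no longer has enough available members.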
I will document below just the changes necessary to go from the routing-only example above to a load balancing example.

LABBIGIP1

ltm virtual gre_vs {
    destination 172.20.0.100:http
    ip-protocol tcp
    mask 255.255.255.255
    pool gre_pool
    profiles {
        http { }
        tcp { }
    }
    source 0.0.0.0/0
    translate-address enabled
    translate-port enabled
}
ltm pool gre_pool {
    members {
        172.30.0.200:http {
            address 172.30.0.200
            session monitor-enabled
            state up
        }
    }
    monitor http
}

Just a standard virtual server and a pool. The pool member is LABServer3.

The initial route set up on LABWin for 172.30.0.0/24 is no longer necessary. The laptop connects directly to the virtual server and has no visibility of the back-end server. LABBIGIP1 no longer needs a forwarding virtual server, as the virtual server above will handle the traffic. Nothing changes for LABBIGIP2. LABServer3 either has a default gateway pointing to 172.30.0.2 (LABBIGIP2), which is the case in this lab setup, or needs a route for 172.18.0.0/30. This is because LABBIGIP1 will monitor LABServer3 using the self IP linked to the tunnel, that is, IP 172.18.0.1.

Conclusion

GRE configuration on BIG-IP is not that complicated, but it does have some tricks. You need a self IP linked to the tunnel, and also a route that points to the tunnel. Another trick that may catch people accustomed to setting up GRE on a router: even with the tunnel set up, you still need a virtual server so the BIG-IP accepts traffic destined for the tunnel, and another virtual server so traffic received from the tunnel is sent on to the LAN network.
Caching FAQs

One of the most mysterious parts of the BIG-IP Application Acceleration Manager (AAM) is caching. Rarely is it explained, and there are very few documents that describe why you would or would not use one of the BIG-IP's caching facilities. Even harder to find is any description of what numbers you should use, or whether or not to push some specific caching button when trying to configure your AAM policies or applications. So here is an overview of a select few frequently asked AAM caching questions, and some explanation of why you would or would not do something with those pretty buttons and number fields.

To be clear, AAM does not use Fast Cache; it has two entirely separate and distinct caching systems of its own: Metastor and the Small Object Cache. In this posting, however, we'll be talking about them, mostly, as if they are one and the same.

The four most commonly asked questions we get regarding caching are as follows:

· Why is there an option to turn off cache on first hit, and why would I ever enable this?
· What does Queue Parallel Requests do?
· Why would I ever set the maximum object size to anything less than infinity?
· OK, a maximum object size makes sense, but what about the minimum object size?

Each question is addressed using an analogy of putting marbles into a Mason jar. We are, of course, talking about web objects and bytes of data, not marbles and weight.

1) "Why is there an option to turn off cache on first hit, and why would I ever do so?"

OK, well, let's start with a simple mental model of a cache. Imagine your website as just a bunch of marbles. To keep it simple, all your marbles are the same size. Now think of a cache as being like a Mason jar. Imagine the Mason jar is just big enough to hold exactly one marble. You can think of the BIG-IP as a super-fast copying machine that can copy marbles and store one copy of one marble. Finally, imagine a single user sending requests for marbles to your website through the BIG-IP, where every policy node has "Cache marbles on first hit" turned on, and every marble is cacheable, and cached if requested. Pretty simple, right?

If you have "Cache marble on first hit" turned on, then the very first request your user makes for a marble will cause the BIG-IP to turn around, get that marble from the website, copy it, put that copy into the Mason jar, and then hand the original marble to your user. At this point, the Mason jar is full. If the next request your user makes is for a different marble, then the first marble must be removed from the jar in order to make room for the one just requested. Sadly, the effort and time it took to copy and put the first marble into the Mason jar was entirely wasted, and the user got both of his marbles later, and more slowly, than he would have if the BIG-IP had simply taken them from the website and handed them to your user. If the third request the customer makes is for the first marble, then again the Mason jar has to be emptied and the first marble cached (remember, only a single marble can be cached at any time). The BIG-IP is churning away, copying and putting a marble into the Mason jar, then emptying out the Mason jar, but never actually getting any value out of having that Mason jar. If the user keeps switching back and forth between requesting the first marble and the second marble, the jar will never have the marble being requested, and the load on the back-end servers has not been reduced. This is considered a zero-cache scenario, where the benefits of the cache are moot.
But imagine if "Cache marble on first hit" is turned off. Now the same marble has to be requested twice before the BIG-IP will copy it and put the copy in the Mason jar. So now, with the first request, the BIG-IP does nothing but pass it along. However, the BIG-IP remembers that the blue marble was requested once. The second request also does nothing but pass the marble along, but again, the BIG-IP remembers that, say, a red marble was requested once. At this point, if the user goes back and asks for the blue marble again, it has been requested twice, so it will be copied and stored in the Mason jar. If the user then asks for a green marble, the BIG-IP remembers that the request was made, but does not discard the marble in the jar, as this is only the first request. If the user requests the blue marble again, then the user will get a copy of it from the Mason jar, not from your website. You now have an effective cache where one in five requests has been offloaded from the origin server.

In summary, turn off "Cache object on first hit" for policy nodes where the objects either change very quickly, or where the time between requests is relatively long. This will prevent the cache from discarding an object that your users will hopefully be requesting more often, and more frequently. Obviously, the flip side of that coin is that the BIG-IP will have to get the same object from your website twice, so if you are sure that the objects matched by a particular policy node are really popular, and that they will be requested quite frequently (such as the company logo and navigation buttons), then copy 'em and dump them in the cache the first time they are requested.

2) What is "Queue Parallel Requests" and why would I turn it on?

Queuing parallel requests is interesting, as it interacts with caching, but it really only helps when you have a lot of users trying to get the same marble at the same time, and that marble is being cached for the first time. A cache is kind of stupid, and it doesn't remember the marbles it threw away. As a result, any marble being put into it looks like it is being stored "for the first time", even when it is actually being put into the jar for the hundredth time.

"Queue Parallel Requests" basically makes all the users who are requesting the same marble wait for it to be fetched off of your website, and then copied once for each user by the BIG-IP. That doesn't sound too interesting or useful until you realize that if you don't turn this on, then between the time you start the process of requesting that marble from your website and finish putting it into the jar, every other request for that same marble will have to be forwarded to your website. Imagine a scenario where a server takes 2 ms to respond to a request for an object, and every millisecond 2 new users request the object. In the time it has taken the server to respond to the first request, 3 additional requests would have been sent for the server to process. This creates unnecessary demand on the servers. With queuing turned on, all subsequent requests for the object will be placed into a parking area to wait for the original response to be returned and cached. Four requests doesn't sound like it will overload a server, but what if it isn't 4 but 400 requests? Suddenly, queuing sounds like a better idea, right? It is, but like any other feature, it is not a panacea. Turn it on for new, shareable, highly popular objects that remain the same for a relatively long time.
More to the point, however, if the web server that is giving one marble to the BIG-IP to copy and give to a bunch of users hiccups (say, you decide to take down one of the web servers in your pool, or, as luck would have it, one of them fails in the middle of handing over that marble), all of those users will get part of a marble, and that is all. You are trading less pool traffic for what our engineers like to call a "single point of failure" risk. But if you have a really rare and valuable marble that everyone wants a copy of, all at the same time, and your website pool is pretty stable and handing out marbles pretty efficiently, then request queuing will really reduce the traffic on your web servers!

3) There is an option to set the minimum and maximum cacheable object size. Why would I ever set the maximum object size to anything less than infinity?

Yeah, that's a tough one. First, go read the answer to "Why turn off Cache content on first hit". Then, let's imagine a Mason jar where instead of one marble, we have a jar big enough to store one thousand marbles. In this scenario, however, we are going to assume exactly 16 simultaneous users, and also that the marbles they are requesting are in the jar. Obviously, the web servers in your pool are getting zero requests. Cool, right!? When caching is working, it can be really handy!

But now let us change one assumption: let's allow your website objects to vary in size. We still have 16 users, but there is one marble that is twice the diameter of the marbles in our first example. When this marble is cached, it reduces the total number of marbles that can be cached. Only 13 of the original 16 requests can be served from the jar; the other 3 requests have to go to the server pool. If every marble in the cache is twice the diameter of the marbles in our first example, 12 of the 16 requests being made have to go to your pool. At the extreme, if one object completely fills the Mason jar, that marble (well, bowling ball, really!) is the only object that can be served from cache; the other 15 requests have to go to your pool. So you limit the maximum size of the marbles that can be stored in your Mason jar to configure the BIG-IP to serve the average number of simultaneous users you expect, and wish, to serve. As an emergent property of the system, it turns out that large objects are oftentimes not that popular anyway (unless you are running a web server whose job is to serve large patch files to end users, that is).

4) OK, a maximum object size makes sense. So why have a minimum object size?

OK, now we have to get explicit about the jar, and about knowing what has been requested, copied, and stored in the jar. Assume that we have a peg board that has exactly one thousand holes in it. Each time we dump a marble in the jar, we write out a tag that describes the marble, tie it to a peg, then put that peg into the peg board. When we remove a marble from the jar, we remove its associated peg from the board. When the peg board is full, we can't store any more marbles in the Mason jar. Now, what if your minimum size is that of a grain of sand, but your Mason jar is big enough to fit 100 marbles with a diameter of 2 inches? If what is popular, and requested quite frequently, is a bunch of grains of sand, you can end up running out of peg board space long, LONG before you even finish coating the bottom of your Mason jar with sand.
Giving your customers copies of those grains of sand will happen often, but it will, by definition, be a smaller percentage of the total volume of traffic than if you made your minimum size larger, AND if you still have enough marbles of that minimum size on your website to fill your cache. Another way of looking at it is in terms of a collection of marbles of all sizes. If a large marble is in cache, and it has to be displaced to make room on the peg board for a tag that records the information for a grain of sand, and then the grain of sand has to be displaced to make room for the large marble, you will have to get both off of your origin servers. If you don't try to cache the sand grain, then when a user asks for the larger marble, the total weight of marbles requested from your server is going to be smaller. Even if that grain of sand has to be served from your server several times in order to keep the larger marble in the jar, that is a lot fewer total grams of marbles moved, copied, and stored in or retrieved from the jar. Obviously, there is a trade-off here between the number of requests and the total weight of the marbles being requested.

Putting it all together

Knowing when and what to cache is an important step to ensure that the BIG-IP and your application are performing optimally. Setting a parameter to the wrong value can have negative effects, causing increased traffic on your origin servers and consuming resources unnecessarily on the BIG-IP. Think about what you are trying to achieve, what other optimization features are enabled, and the traffic patterns of your site when configuring the cache settings (a minimal profile sketch follows below).

Thank you to my colleague John Stevens for assistance in writing this article.
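As a footnote to the size questions above, similar minimum/maximum object size knobs are also exposed on the LTM web-acceleration profile (the ramcache-style cache, not AAM's policy editor, where the per-node settings discussed in this article live). A minimal sketch, assuming the built-in webacceleration parent profile and values in bytes; verify the attribute names with tmsh help on your version:

tmsh create ltm profile web-acceleration cache_tuning_example \
    defaults-from webacceleration \
    cache-object-min-size 1024 \
    cache-object-max-size 100000

In the jar analogy, cache-size (not shown) is the jar itself and cache-max-entries is the peg board, while the two object-size settings above bound how big or small a marble has to be before it is considered for the jar.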
AAM Dynamic Policy Updates And Removal of SPDY in BIG-IP v13

February 22nd marked the downloadable release of F5 BIG-IP v13. With any major release, our user community wants to know what changes and features will present themselves post upgrade. For BIG-IP Application Acceleration Manager (AAM) users, there are two changes to expect. Under the hood, users will enjoy improved performance from Bandwidth Controller updates for dynamic policies. For those who deployed SPDY services, keep reading.

Dynamic Policy Updates

Maybe you implemented dynamic bandwidth policies and maaaayyybeeee you saw policies not always metering "fairly", causing throttling prior to peak utilization. We saw it too. The algorithms behind the fairness policies within Bandwidth Controller, and their surrounding policy management, are updated to properly maximize your available bandwidth. BIG-IP v13 modifies the existing static and common token bucket algorithms. For you this means:

· Better tracking of slow flows
· Better utilization of assigned max rates in all conditions
· Better control of max rate and token distributions across TMM instances and blades

These updates require no user intervention to implement. If you have existing policies already in use, you should see performance improvements immediately. There are two mechanisms for assigning policy-based flow rates in AAM, Policy Enforcement Manager and Access Policy Manager (and iRules, if you're splitting hairs), but the actual changes for these improvements reside within Bandwidth Controller.

Say Goodbye to SPDY

Google announced the removal of SPDY support in favor of HTTP/2 in February 2015. SPDY (and NPN) support was removed from Chrome in February 2016. All other major browsers followed suit or are planning to drop support shortly. F5 introduced HTTP/2 support in 11.6, and with version 13 we complete SPDY's removal as a functioning service.

[Screenshot: SPDY service profiles in v12]
[Screenshot: SPDY profile creation is removed in v13]

If for some reason you had a SPDY-only virtual server/application, kudos for being that one person, and let's start making plans to migrate your SPDY gateway/services to HTTP/2 before your upgrade. SPDY functionality exists in LTM and AAM, and the GUI removal applies to both. We don't want you to be surprised.

SPDY removal and the Bandwidth Controller fairness algorithm updates are the two changes you'll see affect AAM (and other modules) in BIG-IP v13. One requires no intervention and will provide performance increases; the other is a deprecated web standard that should already have both feet out of your infrastructure door. Let us know your experiences with BIG-IP v13 and keep on IT'ing.
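For readers who want to see where these dynamic policies live, a dynamic Bandwidth Controller policy can be defined from tmsh along the following lines. This is a sketch from memory; the policy name is made up, the rates are assumed to be in bits per second, and you should confirm the attributes with tmsh help net bwc policy on your version:

tmsh create net bwc policy dynamic_bwc_example \
    dynamic enabled \
    max-rate 200000000 \
    max-user-rate 2000000

The policy itself only defines the limits; it is then attached to traffic per flow by PEM, APM, or an iRule, as noted above.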
The Transition to HTTP/2: Considering It, Preparing for It, Making It Happen

HTTP/2 is now a standard, with support built into modern browsers. Web servers also offer compatibility with this evolution in their latest versions. The key point to remember is that HTTP/2 accelerates the transport of web content while maintaining confidentiality through SSL. One of the benefits for developers and content providers is the ability to see what this protocol brings without calling their entire infrastructure into question. Demonstrations clearly show the gains in a browser on a laptop, and they are even more appreciable on mobile platforms.

TMOS version 12.0 can behave as an HTTP/2 server towards clients while continuing to request the content from the servers over HTTP/1.0 and HTTP/1.1.

To find reasons to take an interest in this protocol, several sources of information can help:

Making the journey to HTTP/2
HTTP/2 home
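As a hedged illustration of the TMOS 12.0 behaviour described above (HTTP/2 towards clients, HTTP/1.x towards the servers), an http2 profile can be added to a virtual server alongside the usual tcp, http, and client-ssl profiles. The names below (https_vs, web_pool, 10.0.0.100) are examples only, and the exact http2 profile options vary by version:

tmsh create ltm profile http2 http2_example
tmsh create ltm virtual https_vs \
    destination 10.0.0.100:443 \
    ip-protocol tcp \
    pool web_pool \
    profiles add { tcp clientssl http http2_example }

Browsers only negotiate HTTP/2 over TLS (via ALPN), so a client-ssl profile with a valid certificate is required on the virtual server; the pool members continue to be addressed over plain HTTP/1.x.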