Is TCP's Nagle Algorithm Right for Me?
Of all the settings in the TCP profile, the Nagle algorithm may get the most questions. It is designed to avoid sending small packets wherever possible, but the question of whether it's right for your application rarely has an easy, standard answer.

What does Nagle do?

Without the Nagle algorithm, in some circumstances TCP might send tiny packets. In the case of BIG-IP®, this would usually happen because the server delivers packets that are small relative to the clientside Maximum Transmission Unit (MTU). If Nagle is disabled, BIG-IP will simply send them, even though waiting a few milliseconds would allow TCP to aggregate the data into larger packets.

The result can be pernicious. First, every TCP/IP packet has at least 40 bytes of header overhead, and in most cases 52 bytes. If payloads are small enough, most of your network traffic will be overhead, reducing the effective throughput of your connection. Second, clients with battery limitations really don't appreciate turning on their radios to send and receive packets more frequently than necessary. Lastly, some routers in the field give preferential treatment to smaller packets. If your data is a series of differently sized packets and has the misfortune to encounter one of these routers, it will experience severe packet reordering, which can trigger unnecessary retransmissions and severely degrade performance.

Specified in RFC 896 all the way back in 1984, the Nagle algorithm gets around this problem by holding sub-MTU-sized data until the receiver has acknowledged all outstanding data. In most cases, the next chunk of data is coming up right behind, and the delay is minimal.

What are the Drawbacks?

The benefits of aggregating data in fewer packets are pretty intuitive. But under certain circumstances, Nagle can cause problems:

In a proxy like BIG-IP, rewriting arriving packets into a different, larger spot in memory taxes the CPU more than simply passing payloads through without modification.
If an application is "chatty," with message traffic passing back and forth, the added delay can add up to a lot of time. For example, imagine a network with a 1500-byte MTU where the application needs a reply from the client after each 2000-byte message. In the figure at right, the left diagram shows the exchange without Nagle. BIG-IP sends all the data in one shot, and the reply comes in one round trip, allowing it to deliver four messages in four round trips. On the right is the same exchange with Nagle enabled. Nagle withholds the 500-byte packet until the client acks the 1500-byte packet, meaning it takes two round trips to get the reply that allows the application to proceed. Thus sending four messages takes eight round trips. This scenario is a somewhat contrived worst case, but if your application is more like this than not, then Nagle is a poor choice.

If the client is using delayed acks (RFC 1122), it might not send an acknowledgment until up to 500 ms after receipt of the packet. That's time BIG-IP is holding your data, waiting for acknowledgment. This multiplies the effect on chatty applications described above.

F5 Has Improved on Nagle

The drawbacks described above sound really scary, but I don't want to talk you out of using Nagle entirely. The benefits are real, particularly if your application servers deliver data in small pieces and the application isn't very chatty. More importantly, F5® has made a number of enhancements that remove a lot of the pain while keeping the gain:

Nagle-aware HTTP profiles: all TMOS HTTP profiles send a special control message to TCP when they have no more data to send. This tells TCP to send what it has without waiting for more data to fill out a packet.

Autonagle: in TMOS v12.0, users can configure Nagle as "autotuned" instead of simply enabling or disabling it in their TCP profile.
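As a sketch of what that might look like in configuration (the attribute name and value shown here are assumptions based on the feature description, and may differ by TMOS version), a custom TCP profile could opt into the autotuned behavior like this:

```
# Hypothetical tmsh sketch: a TCP profile inheriting from the default,
# with Nagle set to the autotuned mode rather than enabled/disabled.
ltm profile tcp tcp-autonagle {
    defaults-from tcp
    nagle auto
}
```

Attaching this profile to a virtual server would then let TCP decide per connection whether to apply Nagle.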
This mechanism starts out not executing the Nagle algorithm, but uses heuristics to test whether the receiver is using delayed acknowledgments on a connection; if not, it applies Nagle for the remainder of the connection. If delayed acks are in use, TCP will not wait to send packets but will still try to concatenate small packets into MSS-sized packets when all are available. [UPDATE: v13.0 substantially improves this feature.]

One small packet allowed per RTT: beginning with TMOS® v12.0, when 'auto' mode has enabled Nagle, TCP will allow one unacknowledged undersized packet at a time, rather than zero. This speeds up sending the sub-MTU tail of any message while not allowing a continuous stream of undersized packets. This averts the nightmare scenario above completely.

Given these improvements, the Nagle algorithm is suitable for a wide variety of applications and environments. It's worth looking at both your applications and the behavior of your servers to see if Nagle is right for you.

Implementing Lightweight East-West Firewalls with F5
In 2005, perpetual diva Miss Piggy portrayed all four of the directional witches (North, South, East and West) in Jim Henson's The Muppets' Wizard of Oz. Despite a vigorous and occasionally violent performance, she was snubbed at the Academy Awards, ultimately losing out to Reese Witherspoon (Walk the Line). Maybe the Academy Awards voters understood the key principle that escaped our porcine starlet: East and West are fundamentally different from North and South.

This is certainly true for the modern enterprise datacenter architecture. When you look at an architectural diagram of a single datacenter, the Internet is at the top (North) and users at the bottom (South). In between are the DMZ and services. These services are applications, virtualized servers and databases, and they communicate with each other laterally (often portrayed west to east). N-S networking is considered the traditional perimeter and the conventional home of giant firewalls. E-W is the home of virtualized services, and sometimes the N-S security teams don't get invited to play in those sandboxes. So E-W traffic can be left unguarded. But it shouldn't be that way; network policy can be implemented quickly with the F5 load balancers already in place. Let's take a look at an example E-W layout.

Security in an East-West Network

Traffic is flowing from the web servers eastward through a middleware cluster and ultimately to the database cluster at the east end. All good. There should never be traffic going from the web servers to the development network, right? Web servers are a huge threat surface: when they get hacked, we don't want the attackers to be able to get at the intellectual property in development. The same goes for the middleware network; it should only talk to the database cluster network. And connections from the database cluster should never go into the development network either. Let's redraw the diagram with red lines to indicate the connections that we want to prevent.
So how can we implement this in a way that doesn't disrupt everything or require new hardware?

Lightweight Firewall Rule for Web Apps Cluster via F5

In the example above, there is an F5 application delivery controller (ADC) in between the web applications and middleware, and another in front of the database cluster. Suppose the ADCs are providing simple load balancing for the application traffic running west to east. The web servers are the most interesting section of the architecture. They accept traffic from the Internet (via the ADC) and are typically configured to use the ADC as their default gateway.

About the default gateway: historically, the ADC has always passed the original client IP address to the web server for logging purposes. The web server then has to use the ADC as the default gateway when it replies back to the client (otherwise the traffic would go around the ADC, and that doesn't work for proxy and cache services).

On the F5 we'll create three VLANs and virtual server objects to represent the three types of flows that we're looking for:

web-app1 : inbound traffic to the web application
middleware : eastbound requests into middleware
web-gw : default gateway traffic (mostly traffic from web-pool)

The web-app1 virtual server defines a single public address for inbound web traffic to our application.

net vlan internet { }
ltm virtual web-app1 {
    description "internet to web app"
    destination 71.237.39.99:http
    pool web-pool
    profiles {
        http { }
        tcp { }
    }
    vlans { internet }
}

Because the web servers are accepting routable IP addresses (see sidebar), they have their default gateways set to a wildcard virtual server at the F5. The return traffic will be matched to the incoming flows.

net vlan web { }
ltm virtual web-gw {
    description "gateway to middleware"
    destination 0.0.0.0:any
    fw-enforced-policy web-gw
    profiles {
        fastL4 { }
    }
    source 192.168.2.0/24
    translate-address disabled
    translate-port disabled
    vlans { web }
}

Sidebar:
Web Servers: 192.168.2.0/24
Middleware: 10.0.0.0/8
Development: 20.0.0.0/8
Database Cluster: 172.16.0.0/16

Notice the destination 0.0.0.0:any. This is necessary to allow the web servers to communicate with the outside world. But suppose that an attacker got a shell on a web server. He could then use the wildcard virtual server to tunnel through to the development network (20.0.0.0/8). Not what we want. So we define a lightweight firewall rule (fw-enforced-policy web-gw) to prevent packets from getting into the development network.

security firewall policy web-gw {
    rules {
        w-to-m {
            action reject
            description "Disallow development"
            log yes
            destination {
                addresses {
                    20.0.0.0/8 { }
                }
            }
        }
    }
}

Here's what that rule looks like in the GUI. We're not specifying the source network because this policy is only enabled on the "web" VLAN anyway (and therefore wouldn't act on other traffic).

Lightweight Firewall Rule for Middleware Cluster via F5

The middleware virtual server defines a single address for the web servers to communicate with a pool of middleware servers.

ltm virtual middleware {
    description "webapp to middleware"
    source 192.168.2.0/24
    destination 192.168.2.202:7001
    pool mid-pool
    profiles {
        tcp { }
    }
    vlans { web }
}

The middleware virtual server has a source network specification, which acts as a firewall rule all by itself. Only traffic originating from the web network will be able to pass through to the middleware cluster. It doesn't get much more lightweight than that. If we want to prevent connections originating from the middleware cluster westward to the web app cluster, we can define a global firewall rule to handle this. Note that we leave a "hole" for development to push to the web app cluster.

Lightweight Firewall Rule for Database Cluster via F5

The right-hand F5 ADC will be very similar to the left-hand ADC. For our simple example, we have a virtual server balancing to a single pool of database servers, and they should only be accessed from the middleware.
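Before moving on: the middleware-westward global rule mentioned in the previous section isn't shown in the configuration above. As a purely illustrative sketch (the rule names, first-match ordering, and the development "hole" are all assumptions), it might resemble the web-gw policy:

```
# Hypothetical sketch of the global rule described above.
security firewall policy mid-global {
    rules {
        dev-push-hole {
            action accept
            description "Hole: development may push to the web app cluster"
            source { addresses { 20.0.0.0/8 { } } }
            destination { addresses { 192.168.2.0/24 { } } }
        }
        m-to-w {
            action reject
            description "Disallow middleware westward to the web app cluster"
            log yes
            source { addresses { 10.0.0.0/8 { } } }
            destination { addresses { 192.168.2.0/24 { } } }
        }
    }
}
```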
Just as we did for the web app cluster, we can use the source and destination attributes of the virtual server to create an east-bound flow.

ltm virtual mid-to-db {
    description "middleware to database"
    source 10.0.0.0/8
    destination 10.10.10.10:3306
    pool db-pool
    profiles {
        tcp { }
    }
    vlans { middleware }
}

We'll also add a global firewall rule to prevent connections originating from the database cluster back toward the middleware or development. Add one exception: every Saturday night from 8 PM to midnight, the database clusters will be allowed to access the Internet to pull down updates.

Is that lightweight enough for you, Miss Piggy? "Hiiii-yaaah!"

See how easy and lightweight that was? Much of the security enforcement is already bound into the definitions of the virtual server objects themselves, leaving us with just a handful of global firewall rules. Full disclosure: I simplified this configuration a little for clarity. The full configuration is larger, and you'd probably have a SNAT pool in between the clusters where default gateways are used. But hopefully you get the gist: it is possible, with relatively little effort, to attach lightweight firewall rules to manage the east-west traffic in your datacenter.

Full Stack Security
When talking to someone who's spent a lot of time around F5 technology, two words always come up: full-proxy and platform. BIG-IP is the platform of services offered by a full, dual-stack proxy. The proxy yields unmatched visibility and control at every layer of the connection stack, beginning at the network level and moving up through the TCP stack, the SSL stack, and then the application layer. When architecting solutions with BIG-IP Advanced Firewall Manager (AFM), this dual-stack full-proxy design differentiates itself among other security solutions. Since AFM is fully integrated into TMOS, the performance overhead for DDoS mitigation and the other layer 3 and 4 protections is very low. Unlike flow-based network firewalls, BIG-IP AFM is able to fully inspect and control the client-side TCP handshake before reassembling the request and establishing the server-side TCP connection. In this way, AFM protects not only the server-side network, but the rest of the BIG-IP platform.

So far, it's becoming clear what makes BIG-IP a full-proxy security solution, but what makes it a platform? In technology, a platform is able to provide multiple services and capabilities in a well-integrated fashion. Where AFM provides layer 3 and 4 security services, the layer 7 services fall mostly elsewhere in the BIG-IP platform. For our purposes today, we'll focus on BIG-IP Application Security Manager (ASM) and the ability to augment the DoS-mitigation functions of AFM with greater DoS protection and other L7 security for HTTP-based application traffic.

Almost all HTTP applications will be encrypted via TLS before too long, so let's first take note of the SSL stack on BIG-IP. The topic of configuring SSL profiles on BIG-IP for better SSL Labs scores and stronger encryption has been covered at length on DevCentral in recent years. Since the proprietary SSL stack on BIG-IP is easy to configure to enforce strong encryption, any vulnerable or obsolete TLS settings in the application server environment are effectively shielded.
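As an illustrative example (the cipher string and option keywords here are assumptions and vary by TMOS version; the DevCentral SSL-profile articles mentioned above are the authoritative guide), a hardened client-ssl profile might look something like:

```
# Hypothetical sketch of a hardened client-ssl profile; names and
# cipher keywords are assumptions, not a tested configuration.
ltm profile client-ssl hardened-clientssl {
    defaults-from clientssl
    ciphers "ECDHE+AES-GCM:ECDHE+AES:!SSLv3:!RC4"
    options { no-sslv3 no-tlsv1 }
    cert mysite.crt
    key mysite.key
}
```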
The BIG-IP crypto stack is not only hardened with the latest cryptographic standards, but it is also highly optimized for performance, whether accelerated by specialized chips on BIG-IP or VIPRION hardware or virtualized on your hypervisor of choice. This provides effective protection against scaled attacks, such as SSL floods. As with all things BIG-IP, the extensibility and adaptability of iRules applies to SSL as well. (Jason Rahm will be covering iRules extensibility for AFM later this week.)

Once the SSL handshake has been validated, the next stop along the full stack is HTTP protocol inspection. Within Local Traffic Manager (LTM), there are various HTTP profile settings for inspecting each HTTP request. Security features such as header sanitization, cookie encryption, and size/length enforcement are all available before visiting the more advanced HTTP inspection and enforcement capabilities found in both AFM and ASM. Protocol-level enforcement of HTTP traffic is available in AFM, providing protection such as evasion technique detection, RFC compliance checks, and more advanced blocking and alarming for different HTTP attack conditions than is found in the LTM HTTP profile. These HTTP protocol security enforcements are also present in ASM, but whether they are licensed and configured via AFM or ASM, the same high-performance protocol enforcement engine built into TMOS is employed. Many of these protocol security settings are useful in environments where a full-blown WAF policy may not be applicable, whether because the application team isn't participating in more comprehensive policy creation or because some application flows are less sensitive or lower priority.

If more advanced L7 inspection is required, then ASM is prescribed. The inspection engine in ASM is able to evaluate each HTTP request for matching attack signatures in the entire request, including the headers, URL, parameters, and data payload.
There are multiple ways to configure WAF policy on ASM, including vulnerability assessment import, the automated Policy Builder, and pre-loaded policy templates. Application Security Manager is no mere WAF, though. ASM also includes an array of advanced bot detection techniques, covered on DevCentral in the past by John Wagnon. Bots can't be effectively detected and mitigated by IP address blacklisting alone, and only a full proxy can effectively inject the various JavaScript and other countermeasures employed by ASM. In v12 of TMOS, bot detection is further enhanced with what's known as Proactive Bot Defense, which enables more advanced fingerprinting of potential bots. Bot detection and mitigation is vital to L7 DoS defense, as almost all L7 DoS attacks are highly automated by attackers.

When full stack security is required, the BIG-IP full-proxy security platform is uniquely positioned to provide that security with great depth of control as well as massive scale. The platform-level integration allows policy to be discretely configurable at each layer, from Layer 3 all the way to Layer 7. Each layer of policy, in turn, protects the platform itself, demonstrating the efficiency of the integration found on BIG-IP.

Using an F5 iApp to Install and Configure VMware Horizon with View on Nutanix
Welcome to the second post in my series about realizing the benefits of using F5 technology with hyper-converged infrastructure such as the Nutanix platform. Last month, I walked you through the simple process of installing BIG-IP Virtual Editions on Nutanix using the VMware ESXi hypervisor. Today, I'm going to talk about using Nutanix best practices to set up VMware Horizon with View and then configuring the BIG-IP VEs we installed earlier to proxy the PCoIP traffic — all using an easy F5 iApp. So if you're ready, let's get going.

This is a bit of a long post, so I'll break it up into a few sections. In the first section, I'll discuss the components that make up the solution; then we will move on to the steps required to prep the BIG-IP device for this solution. I'll then tackle how to download the View iApp and how to import it into the BIG-IP. The next section takes us through the View-specific configuration steps required to use View with F5 BIG-IPs. Since SSL certificate management is an important and required part of any View environment, I'll show you how to import the necessary SSL certificate and key. And finally, I'll walk you through filling out the iApp wizard, showing how easy it is to configure the BIG-IPs for View.

Solution Components

In this screenshot of the vSphere Web Client inventory pane, you can see the environment as it exists for this solution. We have a pair of View Connection Servers and a Composer Server, all tied to an already existing database server. A few templates have been created for the three example desktop pools: one full-clone pool, one linked-clone dedicated pool, and one linked-clone floating pool. In addition, we have the Nutanix Controller Virtual Machines (CVMs) running on each of the three Nutanix hosts in this environment, as well as the vCenter 6 Server for this environment running on one of the Nutanix hosts.
You might notice that there are no View Security Servers, nor are any View Access Point servers deployed in this scenario. Instead, we are going to use the F5 BIG-IP VEs to handle secure access via the PCoIP and HTML5 protocols. We have an HA pair of BIG-IP VEs running version 11.6 HF6, the version of BIG-IP that was recently released to support Horizon with View 6.2.

You can see here that the Nutanix-supplied datastores used in this environment are VAAI (vStorage APIs for Array Integration)-enabled and set up according to Nutanix best practices. Here is the setting in the View Admin UI which allows the use of the Nutanix VAAI and View Composer Array Integration features to enable fast cloning. Now this is where I was really impressed: for this pool, I chose to have five desktops cloned at the same time from an existing replica… and bang, 15 seconds later, all five desktops were cloned and ready to be powered on.

Configuring the BIG-IP

Before we can allow the F5 devices to proxy the PCoIP traffic for View on the Nutanix platform, we have to do a bit of BIG-IP configuration. First, I went through the initial setup process for the BIG-IPs, setting the management IP address, the hostname, DNS resolvers, NTP servers, passwords, and the like. I licensed these BIG-IPs with a VE BEST set of licenses, and I also deployed an HA Active/Standby pair of BIG-IPs, configuring them with Traffic Management Interface self and floating IPs to enable them to form a Device Service Cluster. These are standard procedures that are required any time you configure an HA pair of BIG-IPs. It's all very well documented on our F5 Support site, which, by the way, is a great resource.

· Here's how to deploy BIG-IP VEs on ESXi.
· Here's a guide that discusses the initial setup of the BIG-IPs.
· And finally, a guide that discusses the Device Service Clustering process.

Now, let's get back to it.
One step that is required for this specific implementation with View is to provision the BIG-IP Access Policy Manager (APM) module, which integrates with Active Directory (AD) and the PCoIP protocol, enabling the BIG-IPs to provide secure remote access to the View desktops. You can see here that we need to put a check mark next to APM and set the provisioning to Nominal.

Downloading and Installing the F5 iApp

Let's take a second and talk about F5 iApps, a user-customizable framework for deploying applications. iApps give you a powerful, flexible way to automate tasks and templatize sets of functionality on your F5 gear. Using APL (Application Presentation Language), you define a question-driven interface with which users will interact with their application and enter data. Using that data, you can then automate nearly any task on the device. For example, you can use iApps to automate the way you add virtual servers so that you don't have to go through the same manual steps every time you add a new application.

And guess what? There's a great iApp already written for View. You can get all the iApps right here; the one we'll be using for this solution is f5.vmware.view.v1.3.0.tmpl. Once you have logged in to the F5 Downloads site, choose the BIG-IP v12.x / Virtual Edition link to see the latest set of iApp templates. Then select the iApp-Templates download, read and accept the EULA, and download both the .zip and the .md5 files. You can use the tool of your choice to check the MD5 hash of the download to verify it has not been monkeyed with. Extracting this .zip file will create a folder with all of the latest iApp templates. We are interested in the vmware.view template shown in the above screenshot. Make note of this location, because we'll reference it when we import the iApp into the BIG-IP. And it wouldn't be a bad idea to check the MD5 hash against the .tmpl file and to read the README.txt file while you're at it.
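Incidentally, the APM provisioning step described at the top of this section can also be performed from the command line; a quick tmsh sketch:

```
# Provision the APM module at the Nominal level, then save the config.
tmsh modify sys provision apm level nominal
tmsh save sys config
```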
The next major step is to import the iApp into the BIG-IP so we can use it with the View infrastructure we have installed. Navigate to the iApps > Templates section of the BIG-IP UI, and click Import. On the next screen, click Browse and head over to the location where you unzipped the iApp templates earlier. Choose the file. This will upload the template to the BIG-IP device, which will be synchronized with the other BIG-IP device in the cluster at the next synchronization event, whether that's a manual or an automatic sync.

Configuring the View Environment

After the iApp is on the BIG-IP device, we can configure View for use with the BIG-IP PCoIP proxy — and configure the BIG-IP via the iApp to provide PCoIP and HTML5 remote access services. Let's start with the configuration of the View environment. From the View Administrator GUI, expand the View Configuration twisty and choose Servers, then the Connection Servers tab. Highlight one of the Connection Servers and click the Edit button. Make sure that all three checkboxes on the General tab are unchecked, since the BIG-IPs are going to be handling the Secure Tunnel, PCoIP Secure Gateway, and Blast Secure Gateway services. Repeat these steps for all remaining Connection Servers in your environment.

Uploading the SSL Certificate

Since View uses SSL certificates to help encrypt the View traffic, we must have the SSL certificates used by View on the BIG-IP. Navigate to System > File Management > SSL Certificate List and choose Import. Provide the type, name, source, and password for the certificate (assuming a PKCS 12 bundle) and click Import. We will reference this certificate and key via the iApp wizard.

Using the iApp to Configure the BIG-IP for View

The final step in the whole process is to configure the BIG-IPs via the iApp. Navigate to the iApps > Application Services section, and click the Create button.
Enter a name for the service you are creating and choose f5.vmware.view.v1.3.0 from the dropdown list of available templates. The blue bands to the left of certain field names indicate that the field is required. Notice that in this section, I chose to change from the default to support HTML5 clientless browser connections. This allows the BIG-IP to proxy both PCoIP and HTML5 browser–delivered virtual desktops in a secure manner.

In this section, I specified the NetBIOS domain name, the location of my Active Directory, the DNS suffix of my AD domain, and the credentials used for accessing AD. (I'd suggest that you create a unique AD user with administrative rights so that you can easily audit these connections.)

Here I chose to decrypt and re-encrypt the SSL traffic associated with View, specifying the certificate and key imported earlier. While it would be technically possible to do only SSL decryption and communicate between the BIG-IP and the View Connection Servers via port 80, this would require additional configuration steps on the View Connection Servers; it's also less secure than re-encrypting.

Notice that I specified both an internal and an external virtual server. The external virtual server securely proxies the PCoIP and HTML5 View traffic to clients on external, untrusted networks, while the internal virtual server load balances the Connection Servers and allows for direct connections from clients on trusted internal networks.

This section is optional, but I created an advanced health monitor that allows the BIG-IP to validate not only that the Connection Servers are pingable, but that they are actually able to authenticate a user and present back the list of available desktop pools for that user. This provides a higher level of assurance that the entirety of the View infrastructure is functioning as designed.

Finally, it is time to click Finished.
And to watch as the hundreds of configuration objects are created on the BIG-IP device. One of the major benefits of an iApp (beyond the speed of deployment) is that all of the configuration objects used to provide the service are grouped together in one screen, where you can see their interrelatedness and the health of each of the components — all in one place.

And that's how easy it is. You're ready to go. If you're interested in the business benefits of the F5 and Nutanix partnership, check out the following posts from my colleague, Frank Strobel:

· Boost Business Mobility and Application Security with Nutanix and F5
· F5 and Nutanix: Together, Powering Business Mobility

The Transition to HTTP/2: Considering It, Preparing for It, Making It Happen
HTTP/2 is now a standard, with support built into modern browsers. Web servers, too, offer compatibility with this evolution in their latest versions. The key point to remember is that HTTP/2 accelerates the transport of web content while maintaining confidentiality through SSL. One benefit for developers and content providers is the ability to see what this protocol brings without rethinking their entire infrastructure. Demonstrations clearly show the gains in a browser on a laptop, and they are even more appreciable on mobile platforms. TMOS version 12.0 can behave as an HTTP/2 server toward clients while continuing to request content from the servers over HTTP/1.0 and HTTP/1.1. To find reasons to take an interest in this protocol, several sources of information can help:

· Making the journey to HTTP/2
· HTTP/2 home
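As a configuration sketch (the profile attribute and object names here are assumptions and may differ by version), enabling this on a v12.0 virtual server amounts to creating an http2 profile and attaching it alongside the HTTP and client-ssl profiles, while the pool members continue to speak HTTP/1.x:

```
# Hypothetical tmsh sketch: HTTP/2 toward clients, HTTP/1.x toward servers.
ltm profile http2 my-http2 {
    defaults-from http2
    activation-modes { alpn }
}
ltm virtual my-https-vip {
    destination 203.0.113.10:https
    profiles {
        my-clientssl { context clientside }
        http { }
        my-http2 { }
    }
}
```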