Layer 4 vs Layer 7 DoS Attack
Not all DoS (Denial of Service) attacks are the same. While the end result is to consume as much - hopefully all - of a server or site's resources such that legitimate users are denied service (hence the name), there is a subtle difference in how these attacks are perpetrated that makes one easier to stop than the other.

SYN Flood

A Layer 4 DoS attack is often referred to as a SYN flood. It works at the transport protocol (TCP) layer. A TCP connection is established in what is known as a 3-way handshake. The client sends a SYN packet, the server responds with a SYN ACK, and the client responds to that with an ACK. After the "three-way handshake" is complete, the TCP connection is considered established. It is at this point that applications begin sending data using a Layer 7 or application layer protocol, such as HTTP.

A SYN flood uses the inherent patience of the TCP stack to overwhelm a server by sending a flood of SYN packets and then ignoring the SYN ACKs returned by the server. This causes the server to use up resources waiting a configured amount of time for the anticipated ACK that should come from a legitimate client. Because web and application servers are limited in the number of concurrent TCP connections they can have open, if an attacker sends enough SYN packets to a server it can easily chew through the allowed number of TCP connections, thus preventing legitimate requests from being answered by the server.

SYN floods are fairly easy for proxy-based application delivery and security products to detect. Because they proxy connections for the servers, and are generally hardware-based with a much higher TCP connection limit, the proxy-based solution can handle the high volume of connections without becoming overwhelmed. Because the proxy-based solution is usually terminating the TCP connection (i.e. it is the "endpoint" of the connection) it will not pass the connection to the server until it has completed the 3-way handshake. Thus, a SYN flood is stopped at the proxy and legitimate connections are passed on to the server with alacrity.

The attackers are generally stopped from flooding the network through the use of SYN cookies. SYN cookies utilize cryptographic hashing and are therefore computationally expensive, making it desirable to let a proxy/delivery solution with hardware-accelerated cryptographic capabilities handle this type of security measure. Servers can implement SYN cookies, but the additional burden placed on the server negates much of the gain achieved by preventing SYN floods and often results in available but unacceptably slow servers and sites.

HTTP GET DoS

A Layer 7 DoS attack is a different beast, and it's more difficult to detect. A Layer 7 DoS attack is often perpetrated through the use of HTTP GET. This means that the 3-way TCP handshake has been completed, thus fooling devices and solutions which are only examining layer 4 and TCP communications. The attacker looks like a legitimate connection, and is therefore passed on to the web or application server. At that point the attacker begins requesting large numbers of files/objects using HTTP GET. They are generally legitimate requests, there are just a lot of them. So many, in fact, that the server quickly becomes focused on responding to those requests and has a hard time responding to new, legitimate requests.
When rate-limiting was used to stop this type of attack, the bad guys moved to using a distributed system of bots (zombies) to ensure that the requests (the attack) were coming from myriad IP addresses and were therefore not only more difficult to detect, but more difficult to stop. The attacker uses malware and trojans to deposit a bot on servers and clients, and then remotely includes them in his attack by instructing the bots to request a list of objects from a specific site or server. The attacker might not use bots at all, but instead might gather enough evil friends to launch an attack against a site that has annoyed them for some reason.

Layer 7 DoS attacks are more difficult to detect because the TCP connection is valid and so are the requests. The trick is to realize when there are multiple clients requesting large numbers of objects at the same time and to recognize that it is, in fact, an attack. This is tricky because there may very well be legitimate requests mixed in with the attack, which means a "deny all" philosophy will result in the very situation the attackers are trying to force: a denial of service.

Defending against Layer 7 DoS attacks usually involves some sort of rate-shaping algorithm that watches clients and ensures that they request no more than a configurable number of objects per time period, usually measured in seconds or minutes. If the client requests more than the configurable number, the client's IP address is blacklisted for a specified time period and subsequent requests are denied until the address has been freed from the blacklist. Because this can still affect legitimate users, layer 7 firewall (application firewall) vendors are working on ways to get smarter about stopping layer 7 DoS attacks without affecting legitimate clients. It is a subtle dance and requires a bit more understanding of the application and its flow, but if implemented correctly it can improve the ability of such devices to detect and prevent layer 7 DoS attacks from reaching web and application servers and taking a site down.

The goal of deploying an application firewall or proxy-based application delivery solution is to ensure the fast and secure delivery of an application. By preventing both layer 4 and layer 7 DoS attacks, such solutions allow servers to continue serving up applications without a degradation in performance caused by dealing with layer 4 or layer 7 attacks.
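To make the rate-shaping idea concrete, here is a minimal iRule sketch of per-client request rate limiting on a BIG-IP virtual server. It is illustrative only: the table key prefix, the 100-requests-per-10-seconds threshold, and the 429 response are assumptions, and a production defense needs the smarter, application-aware logic described above to avoid penalizing legitimate users.

    when HTTP_REQUEST {
        # Count requests per client IP in the session table over a 10 second window
        set key "dos_rate:[IP::client_addr]"
        set reqs [table incr $key]
        table timeout $key 10
        if { $reqs > 100 } {
            # Over the threshold: refuse the request instead of passing it to the servers
            HTTP::respond 429 content "Too Many Requests" "Retry-After" "10"
            return
        }
    }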
HTTP/2 Protocol in Plain English using Wireshark

1. Quick Intro

Some people find it easier to do a "test drive" first to learn how a new protocol works in its simplest form and only then read the RFC. It turns out Wireshark is a perfect tool for me to do just that. It's a simple test and here's the topology: I'll just issue a HEAD request and later on a GET request and we'll see how it looks on Wireshark. For more info about the HTTP/2 profile and the HTTP/2 protocol itself you can read the article I published on AskF5 and Jason's DevCentral article: What is HTTP Part X - HTTP/2.

2. Confirmation of which protocol will be used

The packet capture taken below was the result of the following curl command issued from my ubuntu Linux client (I could've used a browser instead). Note: 10.199.3.44 is my virtual server with the HTTP/2 profile applied. Here's the packet capture (in case you want to follow along): http2-test-v1.zip

HTTP/2 is negotiated during the SSL handshake in the Application Layer Protocol Negotiation (RFC 7301) SSL extension like this: the client says which protocol(s) it supports and the server responds with which one it picked (in this case it's HTTP/2!).

3. Negotiation of HTTP/2 Parameters

Think of it as something that has to take place, like Client Hello and Server Hello in SSL for example. The server side (BIG-IP in this case) sends a SETTINGS frame which counts as confirmation that HTTP/2 is being used, plus any flow control configuration we want our peer to honour. The client sends a Magic frame to confirm HTTP/2 is being used and then SETTINGS with its requirements for the connection. Yes, the Magic frame is always the same. Still curious about the Magic frame? Read https://tools.ietf.org/html/rfc7540#section-3.5. End-points are also supposed to ACK the receipt of the SETTINGS frame from the other peer, and the way they do it is by responding with another empty SETTINGS frame with the ACK flag set.

4. Exchanging data

Connection-wise we're all set now. For HTTP/2 GET/HEAD requests there is a specific frame type called HEADERS which, as the name implies, carries HTTP/2 header information. If there was a payload it would be carried inside a DATA frame type, but as this is just a HEAD request no DATA frame follows.

5. Appendix A - Other common frame types

5.1 WINDOW_UPDATE

There are other common frame types, and in my capture the one that came up was WINDOW_UPDATE. If you look at section 3 above we see that the client advised BIG-IP that its Initial Window Size was 1073741824. WINDOW_UPDATE just adjusted this value to 1073676289. This is HTTP/2 flow control in action.

5.2 DATA

In another test (http2-v2.zip) I used an HTTP/2 GET request instead of HEAD and requested more data, which comes in through the DATA frame type. The End Stream flag is false in all DATA messages except for the last one; it signals when there is more data to come as well as which DATA frame is the last.

5.3 GOAWAY

In a subsequent test (http2-connection-idletimeout-1.zip) I set Connection Idle Timeout in the HTTP/2 profile to 1 to force BIG-IP to send a GOAWAY frame to close down the connection after 1 second of idleness. After the last piece of data is sent by BIG-IP to the client (frame #39), BIG-IP waits 1 second and sends a GOAWAY frame which initiates the shutdown of the HTTP/2 connection. GOAWAY messages always carry a stream identifier (displayed as Promised-Stream-ID in this capture; RFC 7540 calls it Last-Stream-ID) which tells the client the last Stream ID that was processed. A new Stream ID is typically created for every new HTTP request (via a HEADERS message). We can see that a new HTTP request slipped in on frame #46 but was ignored, as the connection had already been closed on BIG-IP's side.
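For reference, client requests along these lines will negotiate HTTP/2 via ALPN against the lab virtual server; the exact flags used in the original test are an assumption, and any HTTP/2-capable curl build works:

    # HEAD request over HTTP/2 (-k skips certificate validation in the lab)
    curl -kvI --http2 https://10.199.3.44/
    # GET request, as used for the follow-up DATA frame test
    curl -kv --http2 https://10.199.3.44/ -o /dev/null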
The Three HTTP Routing Patterns You Should Know

HTTP is the de-facto application transport layer. The majority of applications and APIs today are delivered via HTTP, regardless of their content (HTML, JSON, XML). That means most of the scaling going on is focused on scaling HTTP-based apps. In fact, when I peek at our latest data from nearly 4 million virtual servers, 63% of them are HTTP/S. In MuleSoft’s latest Connectivity Benchmark, respondents reported an average of 1020 applications in their enterprise environments. Even if only 63% are HTTP/S, that’s still a lot of HTTP - with not a lot of IP space to give it.

Most organizations aren’t lucky enough to own significant ranges of publicly accessible IP addresses. That’s one of the reasons why “virtual servers” exist in the realm of web serving: so that a single IP address can service multiple servers (or applications). Host-based (virtual) routing has been a go-to for the Internet for years. And though it’s the most commonly used (and best known) type of HTTP routing, there are actually three types of HTTP routing NetOps should get familiar with. That’s because increasingly the world of applications is clashing with that of the network, and many of the deployment patterns being used by DevOps rely on HTTP-based routing capabilities.

1. Host-based

The old standard. Host-based routing is what enables virtual servers on web servers. It’s also used by application services like load balancing and ingress controllers to achieve the same thing. One IP address, many hosts. Host-based routing allows you to send a request for api.example.com and for web.example.com to the same endpoint with the certainty it will be delivered to the correct back-end application.

2. Path-based

Increasingly common – particularly in the realm of scaling containers using ingress controllers – is path-based routing. Path-based routing requires visibility into the URI portion of an HTTP request. You can route based on the entire path (not advisable) or a portion of the path. For example, you could search for /getprofile in this path and route it to one application while routing all others to a different application. You can also search for /v1 in the path and use it to implement a type of API versioning. Because of the focus on API-only communication in modern apps (like mobile) and architectures (like microservices), the use of path-based routing is increasingly important to enabling not only scale but simple delivery. When everything you need to know is found in the URI (in the path), it becomes imperative that you can dissect that path into its composite pieces and make decisions on where that request needs to go.

3. Header-based

Header-based is a broad category that includes some familiar routing patterns such as persistence (sticky sessions). Header-based routing simply means that you use an arbitrary HTTP header as the basis for determining how to route a request. This might be a standard header, like content-type or cookie, or it might be a custom header, like x-custom-header-for-my-app. The use of HTTP headers to route requests is a long-standing tradition of sorts. The concept of sticky sessions (persistence) is based on the use of Cookies to aid in scaling stateful applications. It’s also instrumental in maintaining secure sessions (SSL/TLS). Note that Header-based is usually separated from host-based routing even though host is technically one of the many HTTP headers.
Many systems were able to perform host-based routing but not general header-based routing, though today it is hard to find a load balancer/proxy that cannot support routing based on any header. Despite the apparent simplicity of these routing patterns, their importance should not be underestimated. API Gateways, web application firewalls (particularly inspection capabilities), ingress controllers, and a robust set of other application services rely on being able to route based on this information.
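On a BIG-IP, all three patterns can be expressed in a few lines of an iRule. The sketch below is hypothetical (the pool names and custom header are invented for illustration), but each branch maps to one of the patterns above:

    when HTTP_REQUEST {
        if { [string tolower [HTTP::host]] equals "api.example.com" } {
            # 1. Host-based: one IP, many hosts
            pool api_pool
        } elseif { [HTTP::path] starts_with "/v1/" } {
            # 2. Path-based: a version prefix in the URI selects the app
            pool api_v1_pool
        } elseif { [HTTP::header exists "x-custom-header-for-my-app"] } {
            # 3. Header-based: an arbitrary header drives the decision
            pool custom_app_pool
        } else {
            pool web_pool
        }
    }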
The Disadvantages of DSR (Direct Server Return)

I read a very nice blog post yesterday discussing some of the traditional pros and cons of load-balancing configurations. The author comes to the conclusion that if you can use direct server return, you should. I agree with the author's list of pros and cons; DSR is the least intrusive method of deploying a load-balancer in terms of network configuration. But there are quite a few disadvantages missing from the author's list.

Author's List of Disadvantages of DSR

The disadvantages of Direct Routing are:

- The backend server must respond to both its own IP (for health checks) and the virtual IP (for load balanced traffic).
- Port translation or cookie insertion cannot be implemented.
- The backend server must not reply to ARP requests for the VIP (otherwise it will steal all the traffic from the load balancer).
- Prior to Windows Server 2008, some odd routing behavior could occur.
- In some situations either the application or the operating system cannot be modified to utilise Direct Routing.

Some additional disadvantages:

- Protocol sanitization can't be performed. This means vulnerabilities introduced due to manipulation or lax enforcement of RFCs and protocol specifications can't be addressed.
- Application acceleration can't be applied. Even the simplest of acceleration techniques, e.g. compression, can't be applied because the traffic is bypassing the load-balancer (a.k.a. application delivery controller).
- Implementing caching solutions becomes more complex. With a DSR configuration, the routing that makes it so easy to implement requires that caching solutions be deployed elsewhere, such as via WCCP on the router. This requires additional configuration and changes to the routing infrastructure, and introduces another point of failure as well as an additional hop, increasing latency.
- Error/Exception/SOAP fault handling can't be implemented. In order to address failures in applications such as missing files (404) and SOAP Faults (500) it is necessary for the load-balancer to inspect outbound messages. Using a DSR configuration this ability is lost, which means errors are passed directly back to the user without the ability to retry a request, write an entry in the log, or notify an administrator.
- Data Leak Prevention can't be accomplished. Without the ability to inspect outbound messages, you can't prevent sensitive data (SSN, credit card numbers) from leaving the building.
- Connection Optimization functionality is lost. TCP multiplexing can't be accomplished in a DSR configuration because it relies on separating client connections from server connections. This reduces the efficiency of your servers and minimizes the value added to your network by a load balancer.

There are more disadvantages than you're likely willing to read, so I'll stop there. Suffice to say that the problem with the suggestion to use DSR whenever possible is that if you're an application-aware network administrator you know that most of the time, DSR isn't the right solution because it restricts the ability of the load-balancer (application delivery controller) to perform additional functions that improve the security, performance, and availability of the applications it is delivering. DSR is well-suited, and always has been, to UDP-based streaming applications such as audio and video delivered via RTSP. However, in the increasingly sensitive environment that is application infrastructure, it is necessary to do more than just "load balancing" to improve the performance and reliability of applications.
Additional application delivery techniques are an integral component of a well-performing, efficient application infrastructure. DSR may be easier to implement and, in some cases, may be the right solution. But in most cases, it's going to leave you simply serving applications, instead of delivering them. Just because you can, doesn't mean you should.
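As a concrete illustration of what a full-proxy deployment enables (and DSR gives up), consider outbound response inspection. The iRule below is a hedged sketch - the fallback URL and log message are placeholders - and it only works because responses flow back through the device:

    when HTTP_RESPONSE {
        # Inspect the outbound response - impossible in DSR, where server
        # replies bypass the load balancer entirely
        if { [HTTP::status] == 404 || [HTTP::status] >= 500 } {
            log local0. "Pool member [IP::server_addr] returned [HTTP::status]"
            # Shield the user from the raw error (placeholder destination)
            HTTP::respond 302 Location "http://fallback.example.com/sorry.html"
        }
    }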
HTTP Explicit Proxy Explained in Plain English

Introduction

If you've ever used the old Linux Squid proxy or F5's Secure Gateway solution, you might be familiar with the existence of HTTP Explicit Proxy. If all you're looking for is to configure it ASAP, there's an iApp available to set things up in no time. This technology is also used by F5's Secure Web Gateway Services (SWG). This article will walk you through the mechanics behind the scenes, i.e. how to manually configure it and understand how it works through a lab test.

1. Lab Scenario

The expected result is that the client (.135) sends an HTTP GET request to server2.rodrigo.example using 10.199.3.100/32 as the http proxy, and BIG-IP queries our DNS server (172.16.199.31) for server2.rodrigo.example's IP address, retrieves the web page and passes it on to the client.

2. How Explicit Proxy works

In HTTP Explicit proxy, we configure our client (application) to point to BIG-IP's virtual server, which will act as an HTTP proxy for external websites. Once the client issues a request, BIG-IP will then open a separate connection with the external server the client is trying to connect to and issue the requests on behalf of the client, passing requests/responses back and forth between the client and the external server.

In order to test HTTP Explicit proxy functionality, I configured my client's browser pointing to BIG-IP's HTTP Explicit proxy virtual server. BIG-IP is supposed to issue the requests on behalf of the client and pass them back to the client. The client is explicitly configured to use the VIP's IP address as a proxy. Here's my config on Firefox (Preferences → Advanced → Network):

When using any form of unencrypted traffic, Firefox forwards the request straight to BIG-IP without issuing an HTTP CONNECT request. We can see that BIG-IP just forwards the request mostly as it is. The reason for GET / instead of the FQDN on the server side is that BIG-IP performed DNS resolution before that and I filtered it.

If the request is HTTPS instead of HTTP, things are slightly different. First, Firefox (my browser) tries to establish an HTTP tunnel (using the CONNECT HTTP method) to https://server2.rodrigo.example; BIG-IP then immediately establishes a TCP connection with the remote destination (after a DNS lookup, of course!). Lastly, once the TCP connection is established with the remote destination, BIG-IP responds (to the client) with 200 Connected, signalling the tunnel is successfully established and BIG-IP is ready to forward any requests through the recently established tunnel.

Now that the tunnel is properly established, BIG-IP just forwards whatever Firefox sends. In this case, the SSL handshake Client Hello was the first packet which was captured. It is interesting to note that there is still an HTTP header in the messages between the client and BIG-IP with proxy information sent by Firefox (frame 47) that encapsulates the SSL message.

Now that we understand how HTTP Explicit proxy works under the hood, we're ready to learn how to set up BIG-IP as an Explicit proxy.

3. Setting Up Explicit proxy on BIG-IP

3.1 Create Virtual Server with no pool (you can leave the http profile for later):

3.2 Create DNS resolver: In the GUI, it would be on Network → DNS Resolvers → Create. Then, we'd click on Forward Zones and add a dot '.' if we want all queries to be sent to this name server. For the purposes of this specific test, I could also have named this zone as rodrigo.example. instead.

Note: In my lab test I worked out that the forward zone's name is really important! When I named it 'testing' or 'google' BIG-IP did not forward the DNS query to 172.16.199.31. However, when I named it either . or rodrigo.example.
then DNS resolution worked.

3.3 Create tunnel interface

BIG-IP already has a default http-tunnel interface. As long as the encapsulation type is tcp-forward, we can either stick to the default or create a new one. I decided to create a new one for the purpose of this lab test:

3.4 Create http profile using http-explicit as parent and assign it to the VIP created previously:

Note: When configuring it from the GUI just set the proxy type to explicit and http-explicit will be set as the parent profile. The equivalent in the GUI:

3.5 Testing Connection

3.6 Enabling encrypted requests

Initially I did not understand why HTTP traffic worked when default-connect-handling was set to deny but HTTPS did not. Below is the summary of my tests. Looking at the packet capture, I noticed that BIG-IP was explicitly denying Firefox's request to establish a tunnel as shown below:

Note: For this test I created another Virtual Server using a different address, 10.199.3.101, instead of the one I originally tested, just to test default-connect-handling.

After reading K40243113: Overview of the HTTP profile, I kind of understood the point: "indicates that outbound requests are delivered only if another virtual server is listening on the tunnel for the requested outbound connection. With this setting, virtual servers are required, and the system processes the outbound traffic before it leaves the device."

I then created another VIP with the same destination IP address and port as the back-end server the client was trying to connect to, just to confirm if this time traffic would go through: And indeed it worked. I also tested with a wildcard virtual server listening on *:443 instead of 172.16.199.32/32 and it worked fine as well.

What we have observed so far is that when default-connect-handling is set to deny, in order for encrypted client traffic to be accepted by BIG-IP and the proxy tunnel established, the destination address that matches the one on the client request has to have a listener on BIG-IP for return traffic. Therefore, we can conclude that the default-connect-handling setting governs what traffic is accepted by the tunnel interface, but unencrypted requests are not affected. The picture below sums up the explanation: default-connect-handling just adds the extra step of validating whether there is a listener on BIG-IP for the destination address:port or not. If not, we deny the request. If yes, the request goes through.
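A quick way to exercise both code paths from the client without a browser is curl's proxy option. The proxy port here is an assumption; use whatever port your explicit proxy virtual server listens on:

    # Plain HTTP: the request goes straight to the proxy, no CONNECT
    curl -v -x http://10.199.3.100:3128 http://server2.rodrigo.example/
    # HTTPS: curl first issues CONNECT to establish the tunnel
    curl -vk -x http://10.199.3.100:3128 https://server2.rodrigo.example/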
Tools and facilities to troubleshoot HTTP/3 over QUIC with the BIG-IP system

Introduction

This article is for engineers who are troubleshooting issues related to HTTP/3 over QUIC as you deploy this new technology on your BIG-IP system. As you perform your troubleshooting tasks, the BIG-IP system provides you a set of useful tools, along with other third party software, to identify the root cause of issues and even tune HTTP/3 performance to maximize your system's potential.

Overview of HTTP/3 and QUIC

HTTP/3 is the next version of the HTTP protocol after HTTP/2. The most significant change in HTTP/3 from its predecessors is that it uses the UDP protocol instead of TCP. HTTP/3 uses a new Internet transport protocol, QUIC, which uses streams at the transport layer and provides TCP-like congestion control and loss recovery. One major improvement QUIC provides is that it combines the typical 3-way TCP handshake with TLS 1.3's handshake. This improves the time required to establish a connection. Hence, you may see QUIC as providing the functions previously provided by TCP, TLS, and HTTP/2 as shown in the following diagram:

For an overview of HTTP/3 over QUIC with the BIG-IP system, refer to K60235402: Overview of the BIG-IP HTTP/3 and QUIC profiles.

Available tools and facilities

Beginning in BIG-IP 15.1.0.1, HTTP/3 over QUIC (client-side only) is available as an experimental feature on the BIG-IP system. Beginning in BIG-IP 16.1.0, BIG-IP supports QUIC and HTTP/3. In addition to that feature, there are tools and facilities available to help you troubleshoot issues you might encounter:

1. Install an HTTP/3 command line client
2. Use a browser that supports QUIC
3. Review statistics on your BIG-IP system
4. Enable QUIC debug logging on the BIG-IP system
5. Perform advanced troubleshooting with Packet Tracing:
   - Use the NetLog feature from the Chromium Project to capture a NetLog dump.
   - Use the tcpdump command and Wireshark to capture and analyze traffic.
   - Use the qlog trace system database key on the BIG-IP system.

Important: For BIG-IP versions prior to 16.1.0, which are in the experimental stages, it is important that you note in your troubleshooting the version of the IETF draft that your client and server implement. For example, in the Hello packets between the client and server, version negotiation is performed to ensure that client and server agree to a QUIC version that is mutually supported. In BIG-IP 15.1.0.1, the HTTP/3 and QUIC profiles in the BIG-IP system are experimental implementations of draft-ietf-quic-http-24 and draft-ietf-quic-transport-24 respectively. You need to consider this when configuring the Alt-Svc header in HTTP/3 discovery. For some browsers, such as those from the Chromium project (Chrome Canary, Microsoft Edge Canary and Opera), when starting these browsers from the command line you need to provide the QUIC version they implement. For example, for Chrome Canary, you run the following command: chrome.exe --enable-quic --quic-version=h3-25. Only implementations of the final, published RFC can identify themselves as h3. Until such an RFC exists, implementations must not identify themselves using the h3 string.

1. Install an HTTP/3 command line client

Keep in mind that HTTP/3 over QUIC runs on UDP instead of TCP. By default, browsers always initiate a connection to the server using the traditional TCP handshake, which will not work with a QUIC server listening for UDP packets. You therefore need to configure HTTP/3 discovery on your BIG-IP system.
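As a hedged sketch of header-based discovery, an iRule along these lines could advertise HTTP/3 on an HTTPS virtual server; the h3-25 token and UDP port 443 are illustrative and must match the draft version and listener you actually deploy (the supported configuration is described in K16240003, referenced just below):

    when HTTP_RESPONSE {
        # Advertise an HTTP/3 listener on UDP/443 for one hour;
        # the h3-* token must match the draft your clients implement
        HTTP::header insert "Alt-Svc" "h3-25=\":443\"; ma=3600"
    }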
Discovery uses the HTTP Alternative Services concept, which can be implemented either by inserting the Alt-Svc header (as sketched above) or via DNS as a HTTPSVC DNS resource record. To insert the Alt-Svc header, refer to K16240003: Configuring HTTP/3 discovery for BIG-IP virtual server.

As you troubleshoot your HTTP/3 discovery implementation, you can use a command line tool that does not come with the overhead of HTTP/3 discovery. Following are two popular tools that you can install on your client system:

- The picoquic client
- The curl client, where you have the option to use either the ngtcp2 or quiche software libraries.

2. Use a browser that supports QUIC

At this time, browsers by default still do not support QUIC and do not send the server UDP packets to establish a QUIC connection. The following browsers, which are in development, support it:

- Firefox Nightly
- Chrome Canary (Chromium Project)
- Microsoft Edge Canary (Chromium Project)
- Opera (Chromium Project)

Note that for browsers from the Chromium project, you need to specify the QUIC IETF version that the browser supports when you launch it. For example, for Chrome, run the following command: chrome.exe --enable-quic --quic-version=h3-25.

In most browsers today, you can quickly view the HTTP information exchanged by using the built-in developer tool. To open the tool, select F12 after your browser opens and access any site that supports QUIC. Select the Network tab and under the Protocol column, look at h3-<draft_version>. If the Protocol column is not there, you may have to right click the toolbar to add it.

Note: Only implementations of the final, published RFC can identify themselves as h3. Until such an RFC exists, implementations must not identify themselves using the h3 string.

Click the name of the HTTP request and you can see that the site returns the Alt-Svc header indicating that it supports HTTP/3 with its IETF draft version.

3. Review statistics on your BIG-IP system

The statistics facility on the BIG-IP system displays the system's QUIC traffic processing. In the Configuration utility, go to Local Traffic > Virtual Servers. Select the name of your virtual server and select the Statistics tab. In the Profiles section, select the HTTP/3 and QUIC profiles associated with the virtual server. Alternatively, you can view the statistics from the TMOS shell (tmsh) utility using the following command syntax:

    tmsh show ltm profile http3 <http3_profile_name>
    tmsh show ltm profile quic <quic_profile_name>

4. Enable QUIC debug logging on the BIG-IP system

You can use the sys db variable tmm.quic.log.level to adjust the verbosity of the QUIC log level in the /var/log/ltm file. Type the following command to see the list of values:

    tmsh list sys db tmm.quic.log.level value-range
    sys db tmm.quic.log.level {
        default-value "Critical"
        value-range "Critical Error Warning Notice Info Debug"
    }

For example: tmsh modify sys db tmm.quic.log.level value debug

5. Perform advanced troubleshooting with Packet Tracing

a. Use the NetLog feature from the Chromium Project to capture a NetLog dump.

NetLog is an event logging mechanism for Chrome’s network stack to help debug problems and analyze performance, not just for HTTP/3 over QUIC traffic but also HTTP/1.1 and HTTP/2. This feature is available only in browsers from the Chromium project, such as Google Chrome, Opera and Microsoft Edge. The feature provides detailed client side logging, including the SSL handshake and HTTP content, without having to perform decryption or run any commands on your BIG-IP system.
To start capturing, open a new tab on your browser and go to, for example, chrome://net-export (Chromium only). For a step by step guide, refer to How to capture a NetLog dump. Once you have your NetLog dump, you can view and analyze it by navigating to netlog-viewer (Chromium only).

To analyze QUIC traffic, on the left panel, select Events. In the Description column, identify the URL you requested. For QUIC SSL handshake events, select QUIC_SESSION. For HTTP content, select URL_REQUEST. For example, in the following NetLog dump, the connection failed at the beginning because the client and server could not negotiate a common QUIC version.

b. Use the tcpdump command and Wireshark to capture and analyze traffic.

The tcpdump command and Wireshark are both essential tools when you need to examine any communication at the packet level. To generate captures and the SSL secrets required to decrypt them, follow the procedure in K05822509: Decrypting HTTP/3 over QUIC with Wireshark. Keep your Wireshark version updated at all times, as Wireshark's ability to decode QUIC packets continues to evolve as we speak.

c. Use the qlog traces on the BIG-IP system.

The BIG-IP qlog trace facility provides you another tool to troubleshoot QUIC communications. By enabling a database variable, the system logs packets and other events to /var/log/trace<TMM_number>.qlog files. qlog is a standardized structured logging format for QUIC; it is basically a well-defined JSON file with specified event names and associated data that allows you to use tools like qvis for visualization. Note that the payload is not logged. The qlog trace files are compliant with the IETF schema specified in draft-marx-qlog-main-schema-01. To capture and analyze qlog trace files on your BIG-IP system, perform the following procedure:

Capturing qlog trace files on the BIG-IP system

1. Log in to the BIG-IP command line.
2. Enable qlogging by typing the following command: tmsh modify sys db quic.qlogging value enable
3. Reproduce the issue you are troubleshooting by initiating QUIC traffic to your virtual server.
4. Disable qlogging by typing the following command: tmsh modify sys db quic.qlogging value disable

Note: This last step logs the required closing json content to the trace files, terminates the trace logging gracefully, and is required before you view the files.

Sanitizing the qlog trace files

Before loading the trace files onto a graphical visualization tool, you first need to sanitize the json content. The tools attempt to fix some common json errors, but there may be cases where you need to manually correct some json syntax errors by adding closing braces or commas.

Note: Knowledge of different json constructs such as objects, arrays and members may be required when you fix the json files.

You can use any of the available online tools, such as the following:

- Json Formatter
- Fixjson
- freeformatter

Important: Even though payload information such as IP addresses or HTTP content is not included in the trace files, you should exercise caution when uploading content to online tools. F5 is not responsible for the privacy and security of your data when you use the third party software listed in this procedure.

Alternatively, you can download and install any of the following command line tools on your client device:

- jsonlint-php
- jsonlint-py
- jsonlint-cli

Loading and analyzing the qlog trace files with a visualization tool

When you have sanitized your json trace files, upload them to a visualization tool for analysis.
For example, you can use the following tool, available for free:

- qvis QUIC and HTTP/3 visualization toolsuite

The visualization tool can provide you graphical representations of the sequence of messages, congestion information and qlog stats for troubleshooting. For example, the following screenshot illustrates a sequence diagram of the SSL handshake.

Important: Even though payload information such as IP addresses or HTTP content is not included in the trace files, you should exercise caution when uploading content to online tools. F5 is not responsible for the privacy and security of your data when you use the third party software listed in this procedure.

Summary

As you use the tools described in this article, you notice that each one helps you troubleshoot issues at the different OSI layers. The built-in developer tools in each browser provide a quick and easy way to view HTTP content but do not let you see the details of the protocol, as the NetLog tool and Wireshark do. However, viewing qlog traces in the qvis graphical tool provides you high level trends and statistics that packet captures do not show. Using the appropriate tool with the right troubleshooting methodology maximizes the potential of HTTP/3 and QUIC for your organization.
Persistent and Persistence, What's the Difference?

The English language is one of the most expressive, and confusing, in existence. Words can have different meaning based not only on context, but on placement within a given sentence. Add in the twists that come from technical jargon and suddenly you've got words meaning completely different things. This is evident in the use of persistent and persistence. While the conceptual basis of persistence and persistent are essentially the same, in reality they refer to two different technical concepts.

Both persistent and persistence relate to the handling of connections. The former is often used as a general description of the behavior of HTTP and, necessarily, TCP connections, though it is also used in the context of database connections. The latter is most often related to TCP/HTTP connection handling but almost exclusively in the context of load-balancing.

Persistent

Persistent connections are connections that are kept open and reused. The most commonly implemented form of persistent connections is HTTP, with database connections a close second. Persistent HTTP connections were implemented as part of the HTTP 1.1 specification as a method of improving the efficiency of HTTP in general.

Related Links:
- HTTP 1.1 RFC
- Persistent Connection Behavior of Popular Browsers
- Persistent Database Connections
- Apache Keep-Alive Support
- Cookies, Sessions, and Persistence

Before HTTP 1.1 a browser would generally open one connection per object on a page in order to retrieve all the appropriate resources. As the number of objects in a page grew, this became increasingly inefficient and significantly reduced the capacity of web servers while causing browsers to appear slow to retrieve data. HTTP 1.1 and the Keep-Alive header in HTTP 1.0 were aimed at improving the performance of HTTP by reusing TCP connections to retrieve objects. They made the connections persistent such that they could be reused to send multiple HTTP requests using the same TCP connection.

Similarly, this notion was implemented by proxy-based load-balancers as a way to improve performance of web applications and increase capacity on web servers. Persistent connections between a load-balancer and web servers are usually referred to as TCP multiplexing. Just like browsers, the load-balancer opens a few TCP connections to the servers and then reuses them to send multiple HTTP requests.

Persistent connections, both in browsers and load-balancers, have several advantages:

- Less network traffic due to less TCP setup/teardown. It requires no less than 7 exchanges of data to set up and tear down a TCP connection, thus each connection that can be reused reduces the number of exchanges required, resulting in less traffic.
- Improved performance. Because subsequent requests do not need to set up and tear down a TCP connection, requests arrive faster and responses are returned quicker. TCP has built-in mechanisms, for example window sizing, to address network congestion. Persistent connections give TCP the time to adjust itself appropriately to current network conditions, thus improving overall performance. Non-persistent connections are not able to adjust because they are opened and almost immediately closed.
- Less server overhead. Servers are able to increase the number of concurrent users served because each user requires fewer connections through which to complete requests.
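To make the mechanics concrete, here is the difference at the header level (illustrative requests only). An HTTP/1.0 client asks for persistence explicitly:

    GET /index.html HTTP/1.0
    Connection: Keep-Alive

An HTTP/1.1 connection is persistent by default, so a peer opts out instead:

    GET /index.html HTTP/1.1
    Host: www.example.com
    Connection: close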
Persistence

Persistence, on the other hand, is related to the ability of a load-balancer or other traffic management solution to maintain a virtual connection between a client and a specific server. Persistence is often referred to in the application delivery networking world as "stickiness" while in the web and application server demesne it is called "server affinity". Persistence ensures that once a client has made a connection to a specific server, subsequent requests are sent to the same server. This is very important for maintaining state and session-specific information in some application architectures and for the handling of SSL-enabled applications.

Examples of Persistence:
- Hash Load Balancing and Persistence
- LTM Source Address Persistence
- Enabling Session Persistence
- 20 Lines or Less #7: JSessionID Persistence

When the first request is seen by the load-balancer it chooses a server. On subsequent requests the load-balancer will automatically choose the same server to ensure continuity of the application or, in the case of SSL, to avoid the compute intensive process of renegotiation. This persistence is often implemented using cookies but can be based on other identifying attributes such as IP address. Load-balancers that have evolved into application delivery controllers are capable of implementing persistence based on any piece of data in the application message (payload), headers, or in the transport protocol (TCP) and network protocol (IP) layers.

Some advantages of persistence are:

- Avoid renegotiation of SSL. By ensuring that SSL enabled connections are directed to the same server throughout a session, it is possible to avoid renegotiating the keys associated with the session, which is compute and resource intensive. This improves performance and reduces overhead on servers.
- No need to rewrite applications. Applications developed without load-balancing in mind may break when deployed in a load-balanced architecture because they depend on session data that is stored only on the original server on which the session was initiated. Load-balancers capable of session persistence ensure that those applications do not break by always directing requests to the same server, preserving the session data without requiring that applications be rewritten.

Summarize

So persistent connections are connections that are kept open so they can be reused to send multiple requests, while persistence is the process of ensuring that connections and subsequent requests are sent to the same server through a load-balancer or other proxy device. Both are important facets of communication between clients, servers, and mediators like load-balancers, and increase the overall performance and efficiency of the infrastructure as well as improving the end-user experience.
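Tying the two concepts together on a BIG-IP, persistence can key off almost anything in the message. Here is a hedged sketch using an application session cookie; the JSESSIONID name is an assumption, and a universal persistence profile referencing the iRule is assumed on the virtual server:

    when HTTP_REQUEST {
        # Stick the client to the same server for the life of its session cookie
        if { [HTTP::cookie exists "JSESSIONID"] } {
            persist uie [HTTP::cookie "JSESSIONID"] 3600
        }
    }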
Tightening the Security of HTTP Traffic part 1

Summary

HTTP is the de facto protocol used to communicate over the internet. The security challenges that exist today were not considered when designing the HTTP protocol. Web browsers are generally used to interact with web applications. However, this interaction is not secured in most deployments, despite the existence of a number of http headers that can be leveraged by application developers to tighten the security of their applications. In this article, I will give an overview of some important headers that can be added to HTTP responses in order to improve the security of web applications. I will also provide sample iRules that can be implemented on F5 BigIP to insert those headers in HTTP responses.

Content

This article has been made in a series of 3 parts to make it easier to read and digest. The following headers will be reviewed in the different parts of this article:

Part 1:
- A brief overview of the Cross-Site Scripting (XSS) vulnerability
- X-XSS-Protection
- HttpOnly flag for cookies
- Secure flag for cookies

Part 2:
- The X-Frame Options Header
- HTTP Strict Transport Security
- X-Content-Type-Options
- Content-Security-Policy
- Public Key Pinning Extension for HTTP (HPKP)

Part 3:
- Server and X-Powered-By
- Cookie encryption
- Https (SSL/TLS)

A brief overview of XSS Attacks

Before going further into the details of the headers we need to implement to improve the security of http traffic, it is important to make a quick review of the Cross-Site Scripting (XSS) attack, which is the most prevalent web application security flaw. This is important because any of the security measures we will review may be completely useless if an XSS flaw exists in the application. The security measures proposed in this article rely on the browser not being infected by any kind of malware (including XSS) and working properly.

A Cross-Site Scripting (XSS) attack is a type of injection attack where a malicious script is injected into a trusted but vulnerable web application. The malicious script is sent to the victim's browser as client side script when the user uses a vulnerable web app. XSS flaws are generally caused by poor input validation or encoding by web application developers. An XSS attack can load malware (JavaScript) into the browser and make it behave in a completely unpredictable manner, thus making all other protection mechanisms useless. Cross-Site Scripting was ranked third in the last OWASP 2013 top 10 security vulnerabilities. Detailed information on XSS can be found on the OWASP website.

1. The X-XSS-Protection header

The X-XSS-Protection header enables the Cross-Site Scripting filter in the browser. When enabled, the XSS Filter operates as a browser component with visibility into all requests/responses flowing through the browser. When the filter detects a likely XSS in a request or response, it prevents the malicious script from executing. The X-XSS-Protection header is supported by most major browsers. Usage:

X-XSS-Protection: 0
    The XSS protection filter is disabled (facebook).
X-XSS-Protection: 1
    The XSS protection filter is enabled. Upon XSS attack detection, the browser will sanitize the page.
X-XSS-Protection: 1; mode=block
    If an XSS attack is detected, the browser blocks the page in order to stop the attack (Google, Twitter).

Note: If third party scripts are allowed to run on your web application, enabling XSS-Protection might prevent some scripts from working properly. As such, proper consideration should be made before enabling the header.
1.1 Example of how this header is used

The following command can be used to see which http headers are enabled on a web application. This command sends a single http request to the destination application and follows redirection. Therefore it is completely harmless and is only used for educational purposes in this article. I will use the same command throughout the article against some well known sites, which are built with high security standards, in order to demonstrate how the discussed headers are used in real life.

    curl -L -A "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:47.0) Gecko/20100101 Firefox/47.0" -sD - https://www.google.com -o /dev/null
    HTTP/1.1 200 OK
    Date: Mon, 07 Aug 2017 11:24:08 GMT
    Expires: -1
    Cache-Control: private, max-age=0
    Server: gws
    X-XSS-Protection: 1; mode=block
    [...]

1.2 iRule to enable XSS-Protection

The following iRule can be used to insert the XSS-Protection header in all http responses:

    when HTTP_RESPONSE {
        if { !([HTTP::header exists "X-XSS-Protection"]) } {
            HTTP::header insert "X-XSS-Protection" "1; mode=block"
        }
    }

2. HttpOnly flag for cookies

When added to cookies in HTTP responses, the HttpOnly flag prevents client side scripts from accessing cookies. Existing cross-site scripting flaws on the web application will not be able to steal a cookie carrying this flag. The HttpOnly flag relies on the browser to prevent XSS attacks, when set by the server. This flag is supported by all major browsers in their latest versions. It was introduced in 2002 by Microsoft in IE6 SP1.

Note: Using an older browser exposes you to higher security risk, as older browsers may not support some of the security headers discussed in this article.

2.1 Example

    curl -L -A "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:47.0) Gecko/20100101 Firefox/47.0" -sD - https://www.google.com -o /dev/null
    HTTP/1.1 200 OK
    Date: Wed, 16 Aug 2017 19:34:36 GMT
    Expires: -1
    [..]
    Server: gws
    X-XSS-Protection: 1; mode=block
    X-Frame-Options: SAMEORIGIN
    Set-Cookie: NID=110=rKlFElgqL3YuhyKtxcIr0PiSE; expires=Thu, 15-Feb-2018 19:34:36 GMT; path=/; domain=.google.com; HttpOnly
    Alt-Svc: quic=":443"; ma=2592000; v="39,38,37,35"
    [...]

2.2 iRule to add the HttpOnly flag to all cookies in http responses

If added to a Big-IP LTM virtual server, the following iRule will add the HttpOnly flag to all cookies in http responses:

    when HTTP_RESPONSE {
        foreach mycookie [HTTP::cookie names] {
            HTTP::cookie httponly $mycookie enable
        }
    }

3. Secure flag for cookies

The purpose of the secure flag is to prevent cookies from being observed by unauthorized parties due to the transmission of the cookie in clear text. When a cookie has the secure flag attribute, browsers that support the secure flag will only send this cookie within a secure session (HTTPS). This means that, if the web application accidentally points to a hard coded http link, the cookie will not be sent. Enabling the secure flag for cookies is common and recommended.

3.1 Example

    curl -L -A "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:47.0) Gecko/20100101 Firefox/47.0" -s -D - https://www.f5.com -o /dev/null
    HTTP/1.1 200 OK
    Cache-Control: private
    Content-Type: text/html; charset=utf-8
    […]
    Set-Cookie: BIGipServer=!M+eTuU0mHEu4R8mBiet1HZsOy/41nDC5VnWBAlE5bvU0446qCYN4w/jKbf2+U8d8EVe+BxHFZ5+UJYg=; path=/; Httponly;Secure
    Strict-Transport-Security: max-age=16070400
    […]

3.2 Setting the secure flag with an iRule

If added to a Big-IP LTM virtual server, the following iRule will add the secure flag to all cookies in http responses:
    when HTTP_RESPONSE {
        foreach mycookie [HTTP::cookie names] {
            HTTP::cookie secure $mycookie enable
        }
    }
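For convenience, the three protections covered in this part can be combined into a single response-side iRule; this is simply the samples above merged into one event:

    when HTTP_RESPONSE {
        # Enable the browser XSS filter if the server did not
        if { !([HTTP::header exists "X-XSS-Protection"]) } {
            HTTP::header insert "X-XSS-Protection" "1; mode=block"
        }
        # Add HttpOnly and Secure flags to every cookie
        foreach mycookie [HTTP::cookie names] {
            HTTP::cookie httponly $mycookie enable
            HTTP::cookie secure $mycookie enable
        }
    }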
What is HTTP Part VII - OneConnect

In the last article in this What is HTTP? series we finished covering the settings in the HTTP profile itself. This week, we begin the transition to technologies that support, enhance, optimize, secure, and (insert your descriptive qualifier here…) the HTTP traffic flowing through the BIG-IP, starting with OneConnect.

What is OneConnect?

What comes to mind when you hear the phrase “Reduce Reuse Recycle”? Does a little green pseudo-triangle with arrows flash before your eyes, or do you think if they really cared about the reduction, the phrase would be “re[duce|use|cycle]”? But I digress. OneConnect reduces the number of server-side connections between the BIG-IP and application servers by reusing the existing server-side connections. So whereas you might be serving thousands of connections on the client-side, there will be a reduced number of server-side connections to offload the overhead of tcp connection setup and teardown.

You can see in the image above that before HTTP/1.1, single requests per connection on client and server sides resulted in inefficiencies. With HTTP/1.1, you can multiplex requests on single connections client-side, and as shown below, can multiplex multiple client-side connections onto single server-side connections. This is the power OneConnect offers. Both clients' requests, multiplexed in a single tcp connection for each client, are pooled together on a single tcp connection between the BIG-IP and the server.

There’s even hope for HTTP/1.0 clients via the OneConnect Transformations I mentioned in part five of this series. In this case, the four individual connections on the client-side of the BIG-IP are transformed into a single connection on the server-side thanks to OneConnect.

The OneConnect Profile

Let’s start with a screen shot of the fields available in the OneConnect profile. As this is the default parent, you don’t have the option to name it and set the parent, but when creating a custom profile, those settings would be available for configuration as well in the General Properties.

Source Prefix Length

Also known as the Source Mask field in earlier versions, this is where you specify the mask length for connection reuse. It’s important to understand that this is the contextual source IP of the server-side connection. This means that if you are using a SNAT address, it is applied before the mask length is evaluated for OneConnect reuse eligibility. The masking behavior details by version are maintained on AskF5 in knowledge base articles K5911 (10.x - 11.x) and K85357500 (12.x).

In my current lab version of 12.1.2, the prefix length can be set to IPv4 or IPv6 mask lengths, and they are supplied in CIDR notation instead of traditional netmasks. There’s no rocket science to the mask itself: a mask-length of 0 (default) is most efficient, as OneConnect will look for any open idle tcp connection on the destination server. A length of /32 for IPv4 or /128 for IPv6 is a host match, so only an open connection between the source host IP (or SNAT) and the destination server would be eligible for reuse.

Maximum Size

This setting controls the maximum number of idle server-side tcp connections that BIG-IP holds in the connection reuse pool. This value is divided by the number of TMMs on BIG-IP, so for the default of 10,000 on a system with 10 TMMs, the maximum number of idle connections per TMM would be 1,000. This can be tested with an artificially low maximum-size of two, which means on a two-TMM system only one idle connection is available per TMM in the connection reuse pool.
Let's use this iRule to make it simple to confirm whether a connection has been reused or not:

    when HTTP_REQUEST_RELEASE {
        log local0. "BIG-IP: [IP::local_addr]:[TCP::local_port] sent [HTTP::method] to [IP::server_addr]:[serverside {TCP::remote_port}]"
    }

If we have two servers and send two requests to them, each request goes to each server respectively (everything except max size here is default, with the round-robin LB method):

    Rule /Common/check_oneconnect : BIG-IP: 172.16.199.234:38648 sent GET to 172.16.199.31:80
    Rule /Common/check_oneconnect : BIG-IP: 172.16.199.234:38650 sent GET to 172.16.199.32:80

The server2 connection (from the second request) is not added to the connection reuse pool because max size is set to 2 (1 per TMM) and server1 is already there. The third request goes to server1 and reuses the connection from the first request (src port 38648):

    Rule /Common/check_oneconnect : BIG-IP: 172.16.199.234:38648 sent GET to 172.16.199.31:80

The fourth request goes to server2, and because there are no idle connections open to server2, a new connection is created:

    Rule /Common/check_oneconnect : BIG-IP: 172.16.199.234:38646 sent GET to 172.16.199.32:80

The fifth request again goes to server1, still reusing the idle server1 connection:

    Rule /Common/check_oneconnect : BIG-IP: 172.16.199.234:38648 sent GET to 172.16.199.31:80

A sixth request to server2 also creates a new connection because server1 is still occupying the only spot available in the connection reuse pool:

    Rule /Common/check_oneconnect : BIG-IP: 172.16.199.234:38658 sent GET to 172.16.199.32:80

Maximum Age

This is the maximum number of seconds that a connection stays in the connection reuse pool before the BIG-IP marks the connection ineligible for reuse. Two key points on the max age:

- It is triggered after the first time the connection is reused, not when the connection is used for the initial request. It is only after the first reuse is completed and the connection is idle that the max age countdown begins.
- The max-age setting does not close connections when expired; it marks connections ineligible for future reuse. They are then closed in one of two ways. The first way is that the BIG-IP will close an ineligible idle connection when a new request comes in. The second way is to wait for the server’s HTTP keep alive to time out. Both sides of the connection have the ability to close down the connection.

Let’s take a look at the max age behavior by setting it artificially low on the BIG-IP to 5 seconds, and setting the test apache server to 60 seconds to prevent apache from closing the connection first. If we send one request only (frame 10), the connection is closed 60 seconds later (frame 46), honoring the keep alive we set on apache. The max age did not kick in here, as the connection was not reused.

However, let’s make another initial request (frame 10). This time we make a second request (frame 51) 20 seconds later, but before the 60 second timeout. Notice the connection has been reused. Now that the max age setting has kicked in, we wait another 20 seconds (well after the 5 second expiration we set) and BIG-IP closes the previous connection (frame 77) and opens a new one (frame 92). Once the new connection is established, the client-side request is sent to the server (frame 96). You can also see that after another minute of idleness, the apache server keep alive timeout has expired as well, and it closes the connection (frame 130).
If we had opened a new connection within 5 seconds after frame 115, then the connection would have been reused, and max-age would kick in again immediately after the connection was idle again.

Maximum Reuse

This setting controls the maximum number (default: 1,000) of times a particular server connection can be reused before it is closed. So for example, if you set the maximum reuse to two connections, a server-side tcp connection (172.16.199.234:63826 <-> 172.16.199.31:80) would be used for three requests (one initial in frame 10 and two reuses in frames 49 and 90), and then BIG-IP, unlike max age, which merely marks a connection ineligible, will immediately close the connection (frame 119).

Idle Timeout Override

When enabled (disabled by default), this setting overrides any timeout set by protocol profiles such as tcp, which has a default timeout of 300 seconds. So with the BIG-IP tcp idle timeout at 300 seconds and apache (from earlier) set to 60 seconds on the keep alive, the idle connection will be closed by the server. If we set the keep alive to 301 seconds, BIG-IP would reset the connection. For the sake of testing the idle timeout override, however, let’s enable the idle timeout override in the OneConnect profile by setting it at 30 seconds. You can see in the capture above that after the request is completed (frame 33) and the connection is now idle, it is reset after 30 seconds (frame 44), occurring before the server timeout and the BIG-IP tcp timeout. You can also set the idle-timeout-override to infinite, which will ignore the other BIG-IP protocol timers but not overcome the settings of the server keep alives.

Limit Type

This setting controls how connection limits are managed in conjunction with OneConnect.

- None (default) - OneConnect is not involved with the connection limits.
- Idle - Specifies that idle connections will be dropped as the tcp connection limit is reached. For short intervals, during the overlap of the idle connection being dropped and the new connection being established, the tcp connection limit may be exceeded.
- Strict - Specifies that the tcp connection limit is honored without exception. This means that idle connections will prevent new tcp connections from being made until they expire, even if they could otherwise be reused. This is not a recommended configuration, but might prove useful in exceptional cases with very short expiration timeouts.

Understanding OneConnect Behaviors

OneConnect does not override load balancing behavior. If there are two pool members (s1 and s2) and we use the round-robin load balancing method with OneConnect applied, the behavior is as follows:

1. First request goes to s1
2. Second request goes to s2
3. Third request goes to s1 and reuses the idle connection
4. Fourth request goes to s2 and reuses the idle connection

Applying a OneConnect profile to a virtual server without a layer seven transactional (read: clearly defined requests and responses) protocol like HTTP is generally considered a misconfiguration, and you may see odd behavior when doing so. There may be exceptions, but this configuration should be considered carefully and tested thoroughly before deploying to a production environment.

Without OneConnect on a virtual server with HTTP, you will find that persistence data does not appear to be honored. This is because by default, the BIG-IP system performs load balancing for each tcp connection, not each HTTP request, so further requests on the same client-side connection will follow suit from the original decision.
Resources

- OneConnect profile overview
- OneConnect and persistence
- OneConnect HTTP headers
- OneConnect and Traffic Distribution
- OneConnect and Source Mask

Thanks to Rodrigo Albuquerque for the fantastic test source material!

What is HTTP Part V - HTTP Profile Basic Settings
In the first four parts of this series on HTTP, we laid the foundation for understanding what's to come. In this article, we'll focus on the basic settings in the BIG-IP HTTP profile. Before we dive into the HTTP-specific settings, however, let's briefly discuss the nature of a profile.

In the BIG-IP, every application is serviced by one or more virtual servers. There are literally hundreds of individual settings one can make to tune the various protocols used to service applications. Tweaking every individual setting for every individual application is not only time consuming, but also prone to human error. A profile allows an administrator to configure a subset of (usually) protocol-specific settings that can then be applied to one or more virtual servers. Profiles can also be tiered: you can have a baseline parent profile, and then create child profiles from it to tweak individual settings as necessary. The built-in profiles can be changed, but it is not recommended you do so. The recommendation is to create a child based on a built-in profile and customize from there.

The settings we'll focus on today are shown in the screen capture below. Note that this is TMOS version 12.1 HF1. The HTTP profile has oscillated across versions, including or excluding various feature sets, so don't be alarmed if your view is different. There are a few proxy modes available in the HTTP profile, but we'll focus on reverse proxy mode.

Basic Auth Realm

The realm is a scope of protection for resources on an origin server. Think of realms as partitions, each of which can have its own authentication scheme. When authentication is required, the server should respond with a status of 401 and the WWW-Authenticate header. If you set this field to testrealm, the header value returned to the client will be Basic realm="testrealm". On the client end, when this response is received, a browser pop-up will trigger and provide a message like Please enter your username and password for :, though some browsers do not provide the realm in this message.

Fallback

The two fallback fields in the profile are useful for handling adverse conditions in the availability of your server pool of resources. The Fallback on Error Codes field allows you to set status codes in a space-separated list; when the server responds with any of those codes, the client is redirected to your Fallback Host. This is useful for hiding back end issues for a better user experience, as well as for preventing leakage of underlying server information. The Fallback Host by itself is utilized when there are no available nodes to send requests to. When this happens, a status of 302 (a temporary redirect) with the specified host is returned to the client.

Header Insertions, Erasures, & Allowances

The profile supports limited insert/remove/sanitize functionality. On requests, you can select a single header to insert and a single header to remove. There is a precedence for insertions and removals you'll want to keep in mind if you are inserting and removing the same header. Not sure why you'd do that, but I tested, and it appears that the HSTS and X-Forwarded-For features will be inserted first; if either is specified in the Request Header Erase field it will be removed, and if also specified in the Request Header Insert field, it will be re-inserted on the way to the server. On responses, you can specify a space-separated list of allowed headers, and headers not in that list will be removed before the response is returned to the client. If any of that doesn't meet your needs, you can further customize the experience with local traffic policies or iRules, which we'll dig into in part nine of this series.
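To make the realm and fallback fields concrete, here is a hedged tmsh sketch of a child HTTP profile. The profile name, fallback host, and status codes are hypothetical, and the exact attribute phrasing may vary by TMOS version:

# Hypothetical child profile illustrating the realm and fallback fields
tmsh create ltm profile http http_custom defaults-from http basic-auth-realm testrealm fallback-host http://sorry.example.com fallback-status-codes { 500 502 503 }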
Chunking

The request/response chunking fields help deal with compression, which we'll talk about in part eight of this series. These fields control how BIG-IP handles chunked content from clients and servers, as indicated in the Transfer-Encoding header. The table below shows how the settings in the profile handle chunked and unchunked data for requests and responses.

OneConnect Transformations

We'll address OneConnect in part seven of this series, but basically, enabling transformations AND having a OneConnect profile applied allows the system to change non-keepalive connections on the client side into keepalive connections on the server side. If this feature is enabled WITHOUT a OneConnect profile, which is the default, no action is taken.

Redirect Rewrites

SSL offload is a very popular component of BIG-IP deployments. Traffic from client->BIG-IP is typically secured with SSL, and often not protected from BIG-IP->server. Some servers are either not configured for, or just not able to handle, returning secured URLs when serving unsecured content. There are many ways to handle this, but this particular setting can be configured to assure that all, or request-matching, URLs are rewritten from http:// to https:// links. For redirects to node IP addresses, you can also configure the redirect to be rewritten to the virtual server address instead. The default for this setting is to not rewrite redirects. The actions taken for each option are as follows:

- None: Specifies that the system does not rewrite the URI in any HTTP redirect responses.
- All: Specifies that the system rewrites the URI in all HTTP redirect responses.
- Matching: Specifies that the system rewrites the URI in any HTTP redirect responses that match the request URI.
- Nodes: Specifies that if the URI contains a node IP address instead of a host name, the system changes it to the virtual server address.

Note that it is not content that is being rewritten here; only the Location header in the redirect is updated.

Cookie Encryption

This is a security feature enabling you to take an unencrypted cookie in a server response, encrypt it with a 192-bit AES cipher, and base64-encode it before sending the response to the client. Multiple cookies can be specified in a space-separated list. This can also be done in iRules against all cookies, without calling out specific cookie names.

X-Forwarded-For

The X-Forwarded-For header is a de facto standard for passing client IP addresses to servers when the true nature of the force (ahem, the IP path) is not visible to servers. This is common when network address translation sits between clients and servers. By default the Insert field is disabled; if enabled, the client-side IP address of the packets being received by BIG-IP will be inserted into the X-Forwarded-For header and sent to the server. As X-Forwarded-For is a de facto standard, meaning it is no standard at all, some proxies will insert an additional header and some will just append to an existing one. The Accept XFF and XFF Alternate Names fields are used for system level statistics and whether they can be trusted. More details on that (specific to ASM) can be found in solution K12264.

Enabling X-Forwarded-For in the HTTP profile does not sanitize existing X-Forwarded-For headers already present in the client request. If there are two present in the request, then BIG-IP, if configured to insert one, will add a third. You can test this easily with curl:

curl -H "X-Forwarded-For: 172.16.31.2" -H "X-Forwarded-For: 172.16.31.3" http://www.test.local/

Here are my wireshark captures on my Ubuntu test server before and after enabling the XFF header insertion in the profile. If you want to pass one X-Forwarded-For header, and one only, to the server, an iRule is a good idea to achieve this behavior.
Linear White Space

There are two fields for handling linear white space: max columns (default: 80) and the separator (default: \r\n). Linear white space is any number of spaces, tabs, or newlines, IF followed by at least one space or tab. From RFC 2616: "A CRLF is allowed in the definition of TEXT only as part of a header field continuation. It is expected that the folding LWS will be replaced with a single SP before interpretation of the TEXT value." I've never had a reason to change these settings. You can test for the presence of linear white space in a header with the HTTP::header lws command. This stack thread has a good answer on what linear white space is.

Max Requests

This one is pretty straightforward. The default of zero does not limit the number of HTTP requests per connection; changing this number will limit client requests on a per-connection basis to the value specified.

Via Headers

Whether for requests or responses, this setting controls how BIG-IP handles the Via header. Like traceroute for IP packets, the Via header is appended to by each intermediary along the way, so when the message reaches its destination, the HTTP path (not the IP path) should be known (all HTTP/1.1 proxies are required to participate, whereas HTTP/1.0 proxies may or may not append the Via header). When updating the header, the proxy is required to identify its hostname (or a pseudonym) and the HTTP version of the previous server in the path. Each proxy appends its information to the header in sequential order. Here's an example Via header from Mozilla's developer portal:

Via: 1.1 vegur
Via: HTTP/1.1 GWA
Via: 1.0 fred, 1.1 p.example.net

The available selections for this setting, for both requests and responses, are to preserve, remove, or append the Via header.

Server Agent Name

This is the value that BIG-IP populates in the Server header of response traffic. The default of BigIP gives away infrastructure clues, so many implementations change this default. The Server header is analogous in response traffic to the User-Agent header in client traffic.

What questions do you have? Drop a comment below if there is anything I can clarify. Join me next week when we move on to the HTTP profile enforcement settings.