Automate Let's Encrypt Certificates on BIG-IP
To quote the evil emperor Zurg: "We meet again, for the last time!" It's hard to believe it's been six years since my first rodeo with Let's Encrypt and BIG-IP, but (uncompromised) timestamps don't lie. And maybe this won't be my last look at Let's Encrypt, but it will likely be the last time I do so as a standalone effort, which I'll come back to at the end of this article. The first project was a compilation of shell scripts, python scripts, and config files, and well, this is no different. But it's all updated to meet the ACME protocol version requirements for Let's Encrypt. Here's a quick table to connect all the dots:

    Description          | What's Out                | What's In
    acme client          | letsencrypt.sh            | dehydrated
    python library       | f5-common-python          | bigrest
    BIG-IP functionality | creating the SSL profile  | utilizing an iRule for the HTTP challenge

The f5-common-python library has not been maintained or enhanced for at least a year now, and I have an affinity for the good work Leo did with bigrest and I enjoy using it. I opted not to carry the SSL profile configuration forward because that functionality is more app-specific than the certificates themselves. And finally, whereas my initial project used the DNS challenge with the name.com API, in this proof of concept I chose to use an iRule on the BIG-IP to serve the challenge for Let's Encrypt to perform validation against.

While my solution is new, the way Let's Encrypt works has not changed, so I've carried forward the process from my previous article, which I've now archived. I'll defer to their "how it works" page for details, but basically the steps are:

1. Define a list of domains you want to secure.
2. Your client reaches out to the Let's Encrypt servers to initiate a challenge for those domains.
3. The servers will issue an http or dns challenge based on your request.
4. You place a file on your web server, or a txt record in the dns zone file, with that challenge information.
5. The servers validate your challenge information and notify you.
6. You clean up your challenge files or txt records.
7. The servers issue the certificate and certificate chain to you.
8. You now have the key, cert, and chain, and can deploy them to your web servers, or in our case, to the BIG-IP.

Before kicking off a validation and generation event, the client registers your account based on your settings in the config file. The files in this project are as follows:

    /etc/dehydrated/config           # Dehydrated configuration file
    /etc/dehydrated/domains.txt      # Domains to sign and generate certs for
    /etc/dehydrated/dehydrated       # acme client
    /etc/dehydrated/challenge.irule  # iRule configured and deployed to BIG-IP by the hook script
    /etc/dehydrated/hook_script.py   # Python script called by dehydrated for special steps in the cert generation process

    # Environment Variables
    export F5_HOST=x.x.x.x
    export F5_USER=admin
    export F5_PASS=admin

You add your domains to the domains.txt file (more work is likely required if you're signing a lot of domains; I tested the one I have access to). The dehydrated client, of course, is required, and then the hook script that dehydrated interacts with to deploy challenges and certificates. I aptly named that hook_script.py. For my hook, I'm deploying a challenge iRule to be applied only during the challenge; it is modified each time, specific to the challenge supplied from the Let's Encrypt service, and is cleaned up after the challenge is tested. And finally, there are a few environment variables I set so the information is not in text files. You could also move these into a credential vault.
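The actual challenge.irule lives in the linked repository; purely as an illustration of the idea (not the repo's exact rule), an HTTP-01 responder boils down to something like the sketch below, where TOKEN and KEYAUTH are placeholders the hook script would substitute for the real token and key authorization before deploying the rule for the duration of the challenge:

    when HTTP_REQUEST {
        # TOKEN and KEYAUTH are placeholders; the hook script swaps in the
        # real values before attaching this rule to the virtual server.
        if { [HTTP::uri] equals "/.well-known/acme-challenge/TOKEN" } {
            HTTP::respond 200 content "KEYAUTH" "Content-Type" "text/plain"
        }
    }

Because the rule is only attached while a challenge is pending, it never affects normal application traffic.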
So to recap, you first register your client, then you can kick off a challenge to generate and deploy certificates. On the client side, it looks like this:

    ./dehydrated --register --accept-terms
    ./dehydrated -c

Now, for testing, make sure you use the Let's Encrypt staging service instead of production. And since I want to force action on every request while testing, I run the second command a little differently:

    ./dehydrated -c --force --force-validation

To summarize the moving parts: the http challenge is issued by Let's Encrypt at the request of the dehydrated client, deployed to the F5 BIG-IP, and validated by the Let's Encrypt servers. The Let's Encrypt servers then generate and return certs to the dehydrated client, which then, via the hook script, deploys the certs and keys to the F5 BIG-IP to complete the process. And here's the output of the dehydrated client and hook script in action from the CLI:

    # ./dehydrated -c --force --force-validation
    # INFO: Using main config file /etc/dehydrated/config
    Processing example.com
     + Checking expire date of existing cert...
     + Valid till Jun 20 02:03:26 2022 GMT (Longer than 30 days). Ignoring because renew was forced!
     + Signing domains...
     + Generating private key...
     + Generating signing request...
     + Requesting new certificate order from CA...
     + Received 1 authorizations URLs from the CA
     + Handling authorization for example.com
     + A valid authorization has been found but will be ignored
     + 1 pending challenge(s)
     + Deploying challenge tokens...
     + (hook) Deploying Challenge
     + (hook) Challenge rule added to virtual.
     + Responding to challenge for example.com authorization...
     + Challenge is valid!
     + Cleaning challenge tokens...
     + (hook) Cleaning Challenge
     + (hook) Challenge rule removed from virtual.
     + Requesting certificate...
     + Checking certificate...
     + Done!
     + Creating fullchain.pem...
     + (hook) Deploying Certs
     + (hook) Existing Cert/Key updated in transaction.
     + Done!

This results in a deployed certificate/key pair on the F5 BIG-IP, which is updated in a transaction on future runs. This proof of concept is on GitHub in the f5devcentral org if you'd like to take a look. Before closing, however, I'd like to mention a couple of things: this is an update to an existing solution from years ago. It works, but it probably isn't the best way to automate today if you're just getting started and have already begun pursuing a more modern approach to automation. A better path would be something like Ansible. On that note, there are several solutions you can take a look at, posted below in resources.

Resources

- https://github.com/EquateTechnologies/dehydrated-bigip-ansible
- https://github.com/f5devcentral/ansible-bigip-letsencrypt-http01
- https://github.com/s-archer/acme-ansible-f5
- https://github.com/s-archer/terraform-modular/tree/master/lets_encrypt_module (Terraform instead of Ansible)
- https://community.f5.com/t5/technical-forum/let-s-encrypt-with-cloudflare-dns-and-f5-rest-api/m-p/292943 (Similar solution to mine, only slightly more robust, with OCSP stapling, the DNS challenge instead of HTTP, and bash instead of python)
GHOST Vulnerability (CVE-2015-0235)

On 27 January 2015, Qualys published a critical vulnerability dubbed "GHOST", as it can be triggered by the GetHOST functions ( gethostbyname*() ) of the glibc library shipping with the Linux kernel. Glibc is the main library of C language functionality and is present on most Linux distributions. Those functions are used to get a corresponding structure out of a supplied hostname, and they also perform a DNS lookup if the hostname is a domain name rather than an IP address. The vulnerable functions are obsolete, however they are still in use by many popular applications such as Apache, MySQL, Nginx and Node.js. Presently this vulnerability has been proven remotely exploitable only for the Exim mail service, while arbitrary code execution on any other system using those vulnerable functions is very context-dependent. Qualys mentioned, through a security email list, the applications that were investigated but found not to contain the buffer overflow. Read more on the email list archive link below:

http://seclists.org/oss-sec/2015/q1/283

Currently, F5 is not aware of any vulnerable web application, although PHP applications might be potentially vulnerable due to PHP's "gethostbyname()" equivalent.

UPDATE: The WordPress content management system, via its xml-rpc pingback functionality, was found to be vulnerable to the GHOST vulnerability. WordPress automatically notifies popular Update Services that you've updated your blog by sending an XML-RPC ping each time you create or update a post. By sending a specially crafted hostname as a parameter of the xml-rpc pingback method, a vulnerable WordPress will return a "500" HTTP response, or no response at all, as a result of memory corruption. However, exploitability has not yet been proven.

Using ASM to Mitigate the WordPress GHOST exploit

As the crafted hostname needs to be around 1000 characters to trigger the vulnerability, limiting request size will mitigate the threat. Add the following user-defined attack signature to detect and prevent potential exploitation of this specific vulnerability for WordPress systems.

For versions greater than 11.2.x:

    uricontent:"xmlrpc.php"; objonly; nocase; content:"methodcall"; nocase; re2:"/https?://(?:.*?)?[\d\.]{500}/i";

For versions below 11.2.x:

    uricontent:"xmlrpc.php"; objonly; nocase; content:"methodcall"; nocase; pcre:"/https?://(?:.*?)?[\d\.]{500}/i";

This signature will catch any request to the "xmlrpc.php" URL which contains an IPv4-format hostname greater than 500 characters.

iRule Mitigation for the Exim GHOST exploit

At this time, only Exim mail servers are known to be remotely exploitable, if configured to verify hosts after the EHLO/HELO command in an SMTP session. If you run the Exim mail server behind a BIG-IP, the following iRule will detect and mitigate exploitation attempts:

    when CLIENT_ACCEPTED {
        TCP::collect
    }
    when CLIENT_DATA {
        if { ( [string toupper [TCP::payload]] starts_with "HELO " or [string toupper [TCP::payload]] starts_with "EHLO " ) and ( [TCP::payload length] > 1000 ) } {
            log local0. "Detected GHOST exploitation attempt"
            TCP::close
        }
        TCP::release
        TCP::collect
    }

This iRule will catch any HELO/EHLO command greater than 1000 bytes. Create a new iRule and attach it to your virtual server.
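If you front WordPress with LTM but don't have ASM licensed, a similar check can be approximated in an iRule. The sketch below is my own illustration rather than an official mitigation: it mirrors the signature above by collecting POST bodies sent to xmlrpc.php and resetting connections whose pingback payload carries a 500-character run of digits and dots in a URL (the Tcl regexp bound maxes out at 255, hence the split quantifier).

    when HTTP_REQUEST {
        # Only XML-RPC pingback posts with a reasonable body size are of interest
        if { [HTTP::method] eq "POST" && [HTTP::uri] ends_with "xmlrpc.php" } {
            if { [HTTP::header exists "Content-Length"] && [HTTP::header "Content-Length"] > 0 && [HTTP::header "Content-Length"] < 1048576 } {
                HTTP::collect [HTTP::header "Content-Length"]
            }
        }
    }
    when HTTP_REQUEST_DATA {
        # Same intent as the ASM signature: a URL whose host portion is 500+ digits/dots
        if { [regexp -nocase {https?://[^\s"'<]*[\d\.]{250}[\d\.]{250}} [HTTP::payload]] } {
            log local0. "Potential GHOST (CVE-2015-0235) pingback attempt from [IP::client_addr]"
            reject
            return
        }
        HTTP::release
    }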
Mitigating Winshock (CVE-2014-6321) Vulnerabilities Using BIG-IP iRules

Recently we've witnessed yet another earth-shattering vulnerability in a popular and very fundamental service. Dubbed Winshock, it joins Heartbleed, Shellshock and POODLE in the pantheon of critical vulnerabilities discovered in 2014. Winshock (CVE-2014-6321) earns a 10.0 CVSS score due to being related to a common service such as TLS, and potentially allowing remote arbitrary code execution.

SChannel

From MSDN: Secure Channel, also known as Schannel, is a security support provider (SSP) that contains a set of security protocols that provide identity authentication and secure, private communication through encryption. Basically, SChannel is Microsoft's implementation of TLS, and it is used in various MS-related services that support encryption and authentication, such as Internet Information Services (IIS), Remote Desktop Protocol, Exchange and Outlook Web Access, SharePoint, Active Directory and more. Naturally, SChannel also contains an implementation of the TLS handshake protocol, which is performed before every secure session is established between the client and the server.

The TLS Handshake

A diagram of a typical TLS handshake can be found at http://www-01.ibm.com/support/knowledgecenter/SSFKSJ_7.1.0/com.ibm.mq.doc/sy10660_.htm?lang=en

The handshake is used for the client and the server to agree on the terms of the connection. The handshake is conducted using messages, for the purpose of authenticating the server and the client to each other, agreeing on cipher suites, and exchanging public keys using certificates. Each type of message is passed on the wire as a unique "TLS record". Several messages (TLS records) may be sent over one packet. Some of the known TLS records are the following:

- Client Hello – The client announces it would like to initiate a connection with the server. It also presents all the various cipher suites it can support. This record may also have numerous extensions used to provide even more data.
- Server Hello – The server acknowledges the Client Hello and presents its own information.
- Certificate Request – In some scenarios, the client is required to present its certificate in order to authenticate itself. This is known as two-way authentication (or mutual authentication). The Certificate Request message is sent by the server and forces the client to present a valid certificate before the handshake is successful.
- Certificate – A message used to transfer the contents of a certificate, including subject name, issuer, public key and more.
- Certificate Verify – Contains a value signed using the client's private key. It is presented by the client along with their certificate during a 2-way handshake, and serves as proof that the client actually holds the certificate they claim to.

SChannel Vulnerabilities

Two vulnerabilities were found in the way SChannel handles those TLS records. One vulnerability occurs when parsing the "server_name" extension of the Client Hello message. This extension is typically used to specify the host name which the client is trying to reach on the target server, in some ways similar to the HTTP "Host" header. It was found that SChannel will not properly manage memory allocation when this record contains more than one server name. This vulnerability leads to denial of service by memory exhaustion. The other vulnerability occurs when an invalid signed value is presented inside a Certificate Verify message.
It was found that values larger than what the server expects will be written to memory beyond the allocated buffer's scope. This behavior may result in a potential remote code execution.

Mitigation with BIG-IP iRules

SSL offloading using BIG-IP is inherently not vulnerable, as it does not relay vulnerable messages to the backend server. However, in a "pass-through" scenario, where all the TLS handshake messages are forwarded without inspection, backend servers may be vulnerable to these attacks. The following iRule will detect and mitigate attempts to exploit the above SChannel vulnerabilities:

    when CLIENT_ACCEPTED {
        TCP::collect
        set MAX_TLS_RECORDS 16
        set iPacketCounter 0
        set iRecordPointer 0
        set sPrimeCurve ""
        set iMessageLength 0
    }
    when CLIENT_DATA {
        #log local0. "New TCP packet. Length [TCP::payload length]. Packet Counter $iPacketCounter"
        set bScanTLSRecords 0
        if { $iPacketCounter == 0 } {
            binary scan [TCP::payload] cSS tls_xacttype tls_version tls_recordlen
            if { [info exists tls_xacttype] && [info exists tls_version] && [info exists tls_recordlen] } {
                if { ($tls_version == "769" || $tls_version == "770" || $tls_version == "771") && $tls_xacttype == 22 } {
                    set bScanTLSRecords 1
                }
            }
        }
        if { $iPacketCounter > 0 } {
            # Got here mid record, collect more fragments
            #log local0. "Gather. tls rec $tls_recordlen, ptr $iRecordPointer"
            if { [expr {$iRecordPointer + $tls_recordlen + 5}] <= [TCP::payload length] } {
                #log local0. "Full record received"
                set bScanTLSRecords 1
            } else {
                #log local0. "Record STILL fragmented"
                set iPacketCounter [expr {$iPacketCounter + 1}]
                TCP::collect
            }
        }
        if { $bScanTLSRecords } {
            # Start scanning records
            set bNextRecord 1
            set bKill 0
            while { $bNextRecord >= 1 } {
                #log local0. "Reading next record. ptr $iRecordPointer"
                binary scan [TCP::payload] @${iRecordPointer}cSS tls_xacttype tls_version tls_recordlen
                #log local0. "SSL Record Type $tls_xacttype , Version: $tls_version , Record Length: $tls_recordlen"
                if { [expr {$iRecordPointer + $tls_recordlen + 5}] <= [TCP::payload length] } {
                    binary scan [TCP::payload] @[expr {$iRecordPointer + 5}]c tls_action
                    if { $tls_xacttype == 22 && $tls_action == 1 } {
                        #log local0. "Client Hello"
                        set iRecordOffset [expr {$iRecordPointer + 43}]
                        binary scan [TCP::payload] @${iRecordOffset}c tls_sessidlen
                        set iRecordOffset [expr {$iRecordOffset + 1 + $tls_sessidlen}]
                        binary scan [TCP::payload] @${iRecordOffset}S tls_ciphlen
                        set iRecordOffset [expr {$iRecordOffset + 2 + $tls_ciphlen}]
                        binary scan [TCP::payload] @${iRecordOffset}c tls_complen
                        set iRecordOffset [expr {$iRecordOffset + 1 + $tls_complen}]
                        binary scan [TCP::payload] @${iRecordOffset}S tls_extenlen
                        set iRecordOffset [expr {$iRecordOffset + 2}]
                        binary scan [TCP::payload] @${iRecordOffset}a* tls_extensions
                        for { set i 0 } { $i < $tls_extenlen } { incr i 4 } {
                            set iExtensionOffset [expr {$i}]
                            binary scan $tls_extensions @${iExtensionOffset}SS etype elen
                            if { ($etype == "00") } {
                                set iScanStart [expr {$iExtensionOffset + 9}]
                                set iScanLength [expr {$elen - 5}]
                                binary scan $tls_extensions @${iScanStart}A${iScanLength} tls_servername
                                if { [regexp \x00 $tls_servername] } {
                                    log local0. "Winshock detected - NULL character in host name. Server Name: $tls_servername"
                                    set bKill 1
                                } else {
                                    #log local0. "Server Name found valid: $tls_servername"
                                }
                                set iExtensionOffset [expr {$iExtensionOffset + $elen}]
                            } else {
                                #log local0. "Uninteresting extension $etype"
                                set iExtensionOffset [expr {$iExtensionOffset + $elen}]
                            }
                            set i $iExtensionOffset
                        }
                    } elseif { $tls_xacttype == 22 && $tls_action == 11 } {
                        #log local0. "Certificate"
                        set iScanStart [expr {$iRecordPointer + 17}]
                        set iScanLength [expr {$tls_recordlen - 12}]
                        binary scan [TCP::payload] @${iScanStart}A${iScanLength} client_certificate
                        if { [regexp {\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01(\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07|\x06\x05\x2b\x81\x04\x00(?:\x22|\x23))} $client_certificate reMatchAll reMatch01] } {
                            #log local0. $match01
                            switch $reMatch01 {
                                "\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07" { set sPrimeCurve "P-256" }
                                "\x06\x05\x2b\x81\x04\x00\x22" { set sPrimeCurve "P-384" }
                                "\x06\x05\x2b\x81\x04\x00\x23" { set sPrimeCurve "P-521" }
                                default {
                                    #log local0. "Invalid curve"
                                }
                            }
                        }
                    } elseif { $tls_xacttype == 22 && $tls_action == 15 } {
                        #log local0. "Certificate Verify"
                        set iScanStart [expr {$iRecordPointer + 11}]
                        set iScanLength [expr {$tls_recordlen - 6}]
                        binary scan [TCP::payload] @${iScanStart}A${iScanLength} client_signature
                        binary scan $client_signature c cSignatureHeader
                        if { $cSignatureHeader == 48 } {
                            binary scan $client_signature @3c r_len
                            set s_len_offset [expr {$r_len + 5}]
                            binary scan $client_signature @${s_len_offset}c s_len
                            set iMessageLength $r_len
                            if { $iMessageLength < $s_len } {
                                set iMessageLength $s_len
                            }
                        } else {
                            #log local0. "Sig header invalid"
                        }
                    } else {
                        #log local0. "Uninteresting TLS action"
                    }
                    # Curve and length found - check Winshock
                    if { $sPrimeCurve ne "" && $iMessageLength > 0 } {
                        set iMaxLength 0
                        switch $sPrimeCurve {
                            "P-256" { set iMaxLength 33 }
                            "P-384" { set iMaxLength 49 }
                            "P-521" { set iMaxLength 66 }
                        }
                        if { $iMessageLength > $iMaxLength } {
                            log local0. "Winshock detected - Invalid message length (found: $iMessageLength, max:$iMaxLength)"
                            set bKill 1
                        }
                    }
                    # Exploit found, close connection
                    if { $bKill } {
                        TCP::close
                        set bNextRecord 0
                    } else {
                        # Next record
                        set iRecordPointer [expr {$iRecordPointer + $tls_recordlen + 5}]
                        if { $iRecordPointer == [TCP::payload length] } {
                            # End of records => assume it is the end of the packet
                            #log local0. "End of records"
                            set bNextRecord 0
                            set iPacketCounter 0
                            set iRecordPointer 0
                            set sPrimeCurve ""
                            set iMessageLength 0
                            TCP::release
                            TCP::collect
                        } else {
                            if { $bNextRecord < $MAX_TLS_RECORDS } {
                                set bNextRecord [expr {$bNextRecord + 1}]
                            } else {
                                set bNextRecord 0
                                #log local0. "Too many loops over TLS records, exit now"
                                TCP::release
                                TCP::collect
                            }
                        }
                    }
                } else {
                    #log local0. "Record fragmented"
                    set bNextRecord 0
                    set iPacketCounter [expr {$iPacketCounter + 1}]
                    TCP::collect
                }
            }
        } else {
            # Exit here if packet is not TLS handshake
            if { $iPacketCounter == 0 } {
                TCP::release
                TCP::collect
            }
        }
    }

Create a new iRule and attach it to your virtual server.
Shellshock – The SIP Proxy Edition

The recent Shellshock and Heartbleed vulnerabilities have something in common – they both affect very infrastructural services. That is the reason their magnitude is much bigger than any other ol' vulnerability out there. "Everyone" uses bash, "everyone" uses OpenSSL.

Shock the shell

However, one of the differences is that bash isn't a public-facing service like OpenSSL. Bash is simply the shell service of the underlying operating system. To be able to get to bash and exploit the vulnerability, one has to find a way to remotely "talk" with it and feed it evil commands via environment variables. Arguably, the most common path to reach bash is through a web server that makes use of CGI. By default, CGI creates user-controlled environment variables, which are then parsed by bash, for every HTTP request the server accepts. This means that exploiting bash on such a system is as easy as sending an HTTP request to a CGI-controlled page. However, CGI isn't the only service that uses bash "behind the scenes". DHCP services are affected, SSH and Telnet are affected, FTP services are affected. Some SIP proxies are also affected; we will learn why and how to mitigate them.

SIP Express Router and friends

Popular open source SIP proxies, such as Kamailio, have been found vulnerable to Shellshock. The author of a POC tool called sipshock has written a very clear explanation on the matter:

    The exec module in Kamailio, Opensips and probably every other SER fork passes the received SIP headers as environment variables to the invoking shell. This makes these SIP proxies vulnerable to CVE-2014-6271 (Bash Shellshock). If a proxy is using any of the exec functions and has the 'setvars' parameter set to the default value '1' then by sending SIP messages containing a specially crafted header we can run arbitrary code on the proxy machine.

This means that if you have a public-facing SIP proxy running a SIP Express Router implementation, you should patch your bash immediately. If you have an F5 LTM doing load balancing for that SIP server, a simple iRule will save you the headache of patching the operating system, and give you breathing room to do so properly.

Mitigate Shellshock SIP with BIG-IP iRules

The following iRule will detect SIP requests which contain the Shellshock pattern in one of the headers:

    when CLIENT_DATA {
        set sCVEPattern "*: () \{*"
        set bCVEFound 0
        if { [string match $sCVEPattern [UDP::payload]] } {
            set bCVEFound 1
        }
    }
    when SIP_REQUEST {
        if { $bCVEFound } {
            log local0. "Detected CVE-2014-6271 Shellshock attack! IP: '[IP::client_addr]' From: [SIP::from] To: [SIP::to]"
            reject
        }
    }

Create a new iRule and attach it to your SIP proxy virtual server. Make sure the virtual server has "UDP" set as the protocol and has a SIP profile assigned.
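The rule above assumes a UDP virtual server with a SIP profile. If your SIP proxy listens over TCP instead, the same signature check can be expressed against the TCP stream; the sketch below is my own adaptation, not part of the original article:

    when CLIENT_ACCEPTED {
        TCP::collect
    }
    when CLIENT_DATA {
        # Same Shellshock header pattern, checked in the collected TCP payload
        if { [string match "*: () \{*" [TCP::payload]] } {
            log local0. "Detected CVE-2014-6271 Shellshock attack from [IP::client_addr]"
            reject
            return
        }
        TCP::release
        TCP::collect
    }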
Adopting SRE practices with F5: Observability and beyond with ELK Stack

This article is a joint collaboration between Eric Ji and JC Kwon.

Getting started

In the previous article, we explained SRE (Site Reliability Engineering) and how F5 helps SREs deploy and secure modern applications. We already discussed how observability is essential for SREs to implement SLOs. Meanwhile, we have a wide range of monitoring tools and analytic applications, each assigned to special devices or running only for certain applications. In this article, we will explore one of the most commonly utilized logging tools, the ELK stack.

The ELK stack is a collection of three open-source projects, namely Elasticsearch, Logstash, and Kibana. It provides IT project stakeholders the capabilities of multi-system and multi-application log aggregation and analysis. Besides, the ELK stack provides data visualization at stakeholders' fingertips, which is essential for security analytics, system monitoring, and troubleshooting. A brief description of the three projects:

- Elasticsearch is an open-source, full-text analysis and search engine.
- Logstash is a log aggregator that executes transformations on data derived from various input sources, before transferring it to output destinations.
- Kibana provides data analysis and visualization capabilities for end-users, complementary to Elasticsearch.

In this article, the ELK stack is utilized to analyze and visualize application performance through a centralized dashboard. A dashboard enables end-users to easily correlate North-South traffic with East-West traffic, for end-to-end performance visibility.

Overview

This use case is built on top of targeted canary deployment. We take advantage of an iRule on BIG-IP: a UUID is generated and inserted into the HTTP header for every HTTP request packet arriving at BIG-IP. All traffic access logs will contain the UUIDs when they are sent to the ELK server, for validation of information like user location, the response time by user location, the response time of BIG-IP and NGINX Plus, etc.

Setup and Configuration

1. Create HSL pool and iRule on BIG-IP

First, we created a High-Speed Logging (HSL) pool on BIG-IP, to be used by the ELK stack. The HSL pool is assigned to the sample application. This pool member will be used by the iRule to send access logs from BIG-IP to the ELK server. The ELK server is listening for incoming log analysis requests. Below is the iRule that we created.
    when CLIENT_ACCEPTED {
        set timestamp [clock format [clock seconds] -format "%d/%h/%y:%T %Z"]
    }
    when HTTP_REQUEST {
        # UUID injection
        if { [HTTP::cookie x-request-id] == "" } {
            append s [clock seconds] [IP::local_addr] [IP::client_addr] [expr { int(100000000 * rand()) }] [clock clicks]
            set s [md5 $s]
            binary scan $s c* s
            lset s 8 [expr {([lindex $s 8] & 0x7F) | 0x40}]
            lset s 6 [expr {([lindex $s 6] & 0x0F) | 0x40}]
            set s [binary format c* $s]
            binary scan $s H* s
            set myuuid $s
            unset s
            set inject_uuid_cookie 1
        } else {
            set myuuid [HTTP::cookie x-request-id]
            set inject_uuid_cookie 0
        }
        set xff_ip "[expr int(rand()*100)].[expr int(rand()*100)].[expr int(rand()*100)].[expr int(rand()*100)]"
        set hsl [HSL::open -proto UDP -pool pool_elk]
        set http_request "\"[HTTP::method] [HTTP::uri] HTTP/[HTTP::version]\""
        set http_request_time [clock clicks -milliseconds]
        set http_user_agent "\"[HTTP::header User-Agent]\""
        set http_host [HTTP::host]
        set http_username [HTTP::username]
        set client_ip [IP::remote_addr]
        set client_port [TCP::remote_port]
        set http_request_uri [HTTP::uri]
        set http_method [HTTP::method]
        set referer "\"[HTTP::header value referer]\""
        if { [HTTP::uri] contains "test" } {
            HTTP::header insert "x-request-id" "test-$myuuid"
        } else {
            HTTP::header insert "x-request-id" $myuuid
        }
        HTTP::header insert "X-Forwarded-For" $xff_ip
    }
    when HTTP_RESPONSE {
        set syslogtime [clock format [clock seconds] -format "%h %e %H:%M:%S"]
        set response_time [expr {double([clock clicks -milliseconds] - $http_request_time)/1000}]
        set virtual [virtual]
        set content_length 0
        if { [HTTP::header exists "Content-Length"] } {
            set content_length \"[HTTP::header "Content-Length"]\"
        } else {
            set content_length \"-\"
        }
        set lb_server "[LB::server addr]:[LB::server port]"
        if { [string compare "$lb_server" ""] == 0 } {
            set lb_server ""
        }
        set status_code [HTTP::status]
        set content_type \"[HTTP::header "Content-type"]\"
        # construct log for elk, local6.info <182>
        set log_msg "<182>$syslogtime f5adc tmos: "
        #set log_msg ""
        append log_msg "time=\[$timestamp\] "
        append log_msg "client_ip=$client_ip "
        append log_msg "virtual=$virtual "
        append log_msg "client_port=$client_port "
        append log_msg "xff_ip=$xff_ip "
        append log_msg "lb_server=$lb_server "
        append log_msg "http_host=$http_host "
        append log_msg "http_method=$http_method "
        append log_msg "http_request_uri=$http_request_uri "
        append log_msg "status_code=$status_code "
        append log_msg "content_type=$content_type "
        append log_msg "content_length=$content_length "
        append log_msg "response_time=$response_time "
        append log_msg "referer=$referer "
        append log_msg "http_user_agent=$http_user_agent "
        append log_msg "x-request-id=$myuuid "
        if { $inject_uuid_cookie == 1 } {
            HTTP::cookie insert name x-request-id value $myuuid path "/"
            set inject_uuid_cookie 0
        }
        # log local2. sending log to elk via log publisher
        #log local2. $log_msg
        HSL::send $hsl $log_msg
    }

Next, we added a new VIP for the HSL pool which was created earlier, and applied the iRule to this VIP. All access logs containing the respective UUID for each HTTP datagram will then be sent to the ELK server. Now the ELK server is ready for analysis of the BIG-IP access logs.

2. Configure NGINX Plus Logging

We configure logging for each NGINX Plus instance deployed inside the OpenShift cluster through its respective ConfigMap object.

3. Customize Kibana Dashboard

With all configurations in place, log information will be processed by the ELK server.
We will be able to customize a dashboard containing useful, visualized data, like user location, response time by location, etc. When an end-user accesses the service, the VIP will respond and the iRule will apply. Next, the user's HTTP header information will be checked by the iRule, and logs are forwarded to the ELK server for analysis. As the user accesses the app services, the app server's logs are also forwarded to the ELK server based on the NGINX Plus ConfigMap setting. The list of key indicators available on the Kibana dashboard page is rather long, so we won't describe all of them here. You can check the details here.

4. ELK Dashboard Samples

We can easily customize the data for visualization in the centralized display. The following is just a list of dashboard samples:

- User location counts and response time by user location
- The average response time and max response time for each endpoint
- The correlation between BIG-IP (N-S traffic) and NGINX Plus endpoints (E-W traffic)
- The response time for N-S traffic

Summary

In this article, we showed how the ELK stack joined forces with F5 BIG-IP and NGINX Plus to provide an observability solution for visualizing application performance with a centralized dashboard. With combined performance metrics, it opens up a lot of possibilities for SREs to implement SLOs practically. F5 aims to provide the best solution to support your business success and will continue with more use cases. If you want to learn more about this and other SRE use cases, please visit the F5 DevCentral GitHub link here.
Heartbleed: Network Scanning, iRule Countermeasures

Get the latest updates on how F5 mitigates Heartbleed.

I just spent the last two days writing "business-friendly" copy about Heartbleed. I think the result was pretty good and hey, it even got on the front page of f5.com, so I guess I got that going for me. Then I wrote up some media stuff and sales stuff and blah, blah, blah. But what I really wanted was to talk tech. So let's go!

What's Wrong with OpenSSL?

The Heartbleed vulnerability only affects systems derived from, or using, the OpenSSL library. I have experience with the OpenSSL library. As one of F5's SSL developers I spent a lot of time with OpenSSL before we basically abandoned it in our data plane path. Here are some examples of what's wrong with OpenSSL from someone who spent hundreds of hours mired in its gore.

- OpenSSL does weird things for weird reasons. As Theo de Raadt points out in his small rant, the team's decision to use their own malloc and free instead of trusting the OS by default was a tactical fuckup with major future impacts (Heartbleed).
- OpenSSL has dozens of different APIs with a lack of consistency between them. For some APIs, the caller frees the returned memory. In others, the API frees the memory itself. It's not documented which is which, and this results in lots of memory leaks or, worse, double-frees.
- There are opaque pointers everywhere. A higher-level interface or language (even C++ instead of C) could provide lots of protection, but instead we're stuck with pointers and a lack of good memory protection.

Those are just three off the top of my head. The real meta-issue for OpenSSL is this: no real commercial organization has any stake in maintaining the OpenSSL library. Other open source code bases typically have some very close commercial firm that has a significant interest in maintaining and improving the library. BSD has Apple. Linux has Ubuntu. Gecko has Mozilla. OpenSSL has nobody. The premier security firm, RSA, long ago forked off their own copy of the library (even hiring away its original development team) and thus they don't put changes back upstream for OpenSSL. Why would they? Maybe you could say that OpenSSL has Adam Langley (from Google) on its side, so at least it's got that going for it.

In Dan Kaminsky's analysis of the aftermath of Heartbleed, he says that Professor Matthew Green of Johns Hopkins University makes a great point: OpenSSL should be considered critical infrastructure. Therefore, it should be receiving critical infrastructure funding, which could enable code audits, static code analysis, fuzz-testing, and all the other processes and improvements it deserves. I agree: in spite of all its flaws, OpenSSL has become the most significant cryptographic library in the history of computing.

Speaking of Kaminsky, if you are looking for the last time there was a vulnerability of this severity, you'd probably have to go back to CVE-2008-1447, the DNS cache poisoning issue (aka the "Kaminsky bug"). I wonder if he wishes he'd made a logo for it.

Detecting Heartbleed

Is it worth trying to detect Heartbleed moving forward? Heartbleed is so severe that there's a possibility that nearly all vulnerable systems will get patched quickly and we'll stop talking about it except in a historical sense. But until that happens, it is worthwhile to detect both vulnerable servers and Heartbleed probes.
There's been some debate about whether probing for Heartbleed is legal, since a successful probe will return up to 64K of server memory (and therefore qualifies as a hack). Regardless of the legality, there's a Metasploit module for Heartbleed. For your browsers there's also a Google Chrome extension and a Mozilla Firefox extension. In fact, here's a whole list of Heartbleed testing tools.

Scanning a network for Heartbleed using nmap

I used the nmap scripting engine script to scan my home network from an Ubuntu 13.04 client desktop. The script requires a relatively recent version (6.25) of nmap, so I had to upgrade. Is it me, or is it always more of a pain to upgrade on Ubuntu than it should be?

- Install the Heartbleed nmap script
- Upgrade to nmap 6.45
- Install Subversion (if you hit a libsvn_client load error)
- Install the tls.lua NSE script and the ssl-heartbleed.nse script

Run it against your network like this:

    nmap -oN ./sslhb.out --script ssl-heartbleed --script-args vulns.showall -sV 192.168.5.0/24

On my home network I found one vulnerable server which I knew about and one which I didn't!

    Nmap scan report for 192.168.5.23
    Host is up, received syn-ack (0.00078s latency).
    PORT    STATE SERVICE  REASON  VERSION
    22/tcp  open  ssh      syn-ack OpenSSH 6.1p1 Debian 4 (protocol 2.0)
    80/tcp  open  http     syn-ack Apache httpd 2.2.22 ((Ubuntu))
    443/tcp open  ssl/http syn-ack Apache httpd 2.2.22 ((Ubuntu))
    | ssl-heartbleed:
    |   VULNERABLE:
    |   The Heartbleed Bug is a serious vulnerability in the popular OpenSSL cryptographic software library. It allows for stealing information intended to be protected by SSL/TLS encryption.
    |     State: VULNERABLE
    |     Risk factor: High

Scanning your network (home or otherwise) should hopefully be just as easy. I love nmap, so cool.

Heartbleed iRule Countermeasures

You probably don't need an iRule. If you are using BIG-IP in front of HTTP servers, your applications are already protected because the native SSL code path for all versions isn't vulnerable. Over 90% of the HTTPS applications fronted by BIG-IP are protected by its SSL code. However, there may be reasons that you might have BIG-IP doing simple layer 4 load balancing to a pool of HTTPS servers:

- A pool of servers with a security policy restricting private keys to origin server hosts.
- An ingress point for multi-tenant SSL sites (using SNI and different keys).
- A virtual ADC fronting a pool of virtualized SSL sites.

If any of these cases apply to your organization, we have an iRule to help. While I was busy writing all the copy for Heartbleed, my long-time colleague and friend Jeff Costlow was on a team of heroes developing a set of iRules to counter Heartbleed.

Here's the iRule to reject client connections that attempt to send a TLS heartbeat. No browsers or tools (such as curl) that we know of actually send TLS heartbeats at this time, so this should be relatively safe.

Client-side Heartbleed iRule: https://devcentral.f5.com/s/articles/ssl-heartbleed-irule-update

And here's a server-side iRule. It watches for HTTPS heartbeat responses from your server and discards any that look suspicious (over 128 bytes). Use this one if you have a client app that sends heartbeats.

Server-side Heartbleed iRule: https://devcentral.f5.com/s/articles/ssl-heartbleed-server-side-irule

Like I said, you probably don't need to deploy them. But if you do, I suggest making the slight tweaks you need to make them compatible with High-Speed Logging.
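If you just want a feel for the client-side approach before clicking through, here is a minimal sketch of the idea (my own illustration, not the maintained iRule from the links above): on an SSL pass-through virtual server, the first byte of each TLS record is its content type, and type 24 is a heartbeat, so a client that leads off a collected payload with one can simply be rejected.

    when CLIENT_ACCEPTED {
        TCP::collect
    }
    when CLIENT_DATA {
        # First byte of a TLS record is the content type; 24 (0x18) is heartbeat.
        # This sketch only inspects the first record of each collected chunk.
        if { [binary scan [TCP::payload] c rec_type] == 1 && $rec_type == 24 } {
            log local0. "TLS heartbeat from [IP::client_addr] - rejecting connection"
            reject
            return
        }
        TCP::release
        TCP::collect
    }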
Notes on Patching Systems

See F5 Solution 15159. All F5 users on 11.5.0 or 11.5.1 should patch their systems with the F5 Heartbleed hotfix. Users on 11.4 or earlier are safe with the version they have. This includes the virtual edition, which is based on 11.3.0 HF1. See the AskF5 solution for more information on patching.

And don't ignore internal systems

A colleague of mine recently reminded me that after you get your externally-facing systems patched (and their certificates redeployed), you need to take care of any internal systems that might be affected as well. Why? Let's say you have an internal server running a recent version of Linux, and it uses a recent version of OpenSSL. Someone inside your organization (anyone) could send it Heartbleed probes and perhaps pick out passwords and browsing history (or bank records) or, of course, the private keys on the system itself. It's a privilege escalation problem.

Conclusion

If you need some less technical collateral to send up your management chain about why your F5 systems are protecting your applications from Heartbleed, here's a link to that business-friendly copy: https://f5.com/solutions/mitigation/mitigating-openssl-heartbleed

American computer scientist Gerald Weinberg once said, "If builders built buildings the way programmers wrote programs, the first woodpecker to come along would destroy civilization." Heartbleed is one of those woodpeckers. We got lucky and had very limited exposure to it, so I guess we had that going for us. Hopefully you had limited exposure to it as well.
iRule. Do You? Winners Announced.

Congratulations to the 2009 iRule. Do You? winners! Compared to previous years, it is worth noting that the competition for top spots in both the Customer and Partner divisions was incredibly tight. In fact, the community voting – a first for this contest – actually played a role in determining the final outcome (as it should, given how great this community is). So, while we have to crown top winners, every one of the finalists really delivered spectacular iRules that demonstrate creativity and flair. And, the winners are…

Customer Winners

1st – Chetan Bhatt (USA) – "Pool Member Status List"
2nd – Kris Weinhold (USA) – "Siteminder Authentication Rule (Login External, NTLM Internal)"
3rd – Jari Leppala (Finland) – "RTSP-Redirect"

F5 Partner Winners

1st – Sake Blok / Ion-IP (Netherlands) – "EncryptOutgoingSOARequest"
2nd – Henrik Gyllkrans / Advanced IP Scandinavia AB (Sweden) – "Cookie Tampering Prevention"
3rd – Levin Chen / Sinogrid Information Technology (China) – "iRules_deny_repeat_login_ok"

They have all selected their prizes from our list of options and will be receiving their goodies shortly. (Surprise, surprise – anything Apple was a hit, with MacBooks and Cinema Displays the collective choices!)

We want to thank ALL of the participants; this edition was more difficult than ever to judge due to the volume and quality of entries. There were some additional iRules – we call them "Honorable Mention" finalists – that didn't make the top 6 but deserve recognition. So, check them out as well.

We want to acknowledge and thank all of our celebrity judges from Dr. Dobb's, Dell'Oro Group, and Enterprise Strategy Group as well. Jon, Alan, and Jon each invested time and energy reviewing entries and offering their comments, all freely and simply at our asking. F5 and our community are fortunate they appreciate the spirit of DevCentral, and I encourage you to visit their sites and blogs and learn more about what they do. They are ALL top-notch in their respective fields and you will likely benefit from paying attention to what they say.

Special recognition goes to our F5 "guest guru" judge Kirk Bauer and the DevCentral guys as well. Kirk's dedication – spending hours judging in the wee hours of the morning just to make sure his work was done before leaving later that day for a long holiday vacation – is remarkable and embodies the F5 culture. The DevCentral guys also contributed as usual. I see the team every day, and they put tremendous thought and energy into helping make your community something that makes your life better, easier, and hopefully fun too.

In the end, I find it incredibly inspiring and humbling each time we do these contests. What captures the spirit of the contributors and community best for me are the comments that 2nd Place Partner Henrik Gyllkrans made when he learned he had been selected as one of the best. His sentiments unquestionably represent the collective attitude that makes the DevCentral community truly special:

"I'm proud and honored that my iRule was received so well by the judges. I hope this iRule can help organizations around the world to deliver better and more secure sites."

With that said, congratulations to the winners. Jump on over to the 2009 iRule. Do You? contest section to learn more and check out some cool iRules!
Performance vs Reliability

Our own Deb Allen posted a great tech tip yesterday on conditional logic using HTTP::retry with iRules. This spawned a rather heated debate between Don and myself on the importance of performance versus reliability in application delivery, specifically with BIG-IP.

Performance is certainly one of the reasons for implementing an application delivery network with an application delivery controller like BIG-IP as its foundation. As an application server becomes burdened by increasing requests and concurrent users, it can slow down as it tries to balance connection management with execution of business logic with parsing data with executing queries against a database with making calls to external services. Application servers, I think, have got to be the hardest working software in IT land. As capacity is reached, performance is certainly the first metric to slide down the tubes, closely followed by reliability. First the site is just slow, then it becomes unpredictable - one never knows if a response will be forthcoming or not. While slow is bad, unreliable is even worse.

At this point it's generally the case that a second server is brought in and an application delivery solution implemented in order to distribute the load and improve both performance and reliability. This is where F5 comes in, by the way. One of the things we at F5 are really good at doing is ensuring reliability while maintaining performance. One of the ways we can do that is through the use of iRules, and specifically the HTTP::retry call. Using this mechanism we can ensure that even if one of the application servers fails to respond, or responds with the dreaded 500 Internal Server Error, we can retry the request against another server and obtain a response for the user.

As Deb points out (and Don pointed out offline, vehemently, as well), this mechanism [HTTP::retry] can lead to increased latency. In layman's terms, it slows down the response to the customer because we had to make another call to a second server to get a response on top of the first call. But assuming the second call successfully returns a valid response, we have at a minimum maintained reliability and served the user.

So the argument is: which is more important, reliability or speed? Well, would you rather it take a bit longer to drive to your destination, or run into the ditch really fast? The answer is up to you, of course, and it depends entirely on your business and what it is you are trying to do. Optimally, of course, we would want the best of both worlds: we want fast and reliable.

Using an iRule gives you the best of both worlds. In most cases, that second call isn't going to happen, because it's likely BIG-IP has already determined the state of the servers and won't send a request to a server it knows is responding either poorly or not at all. That's why we implement advanced health checking features. But in the event that the server began to fail (in the short interval between health checks) and BIG-IP sent the request on before noticing the server was down, the iRule lets you ensure that the request can be resent to another server with minimal effort. Think of this particular iRule as a kind of disaster recovery plan - it will only be invoked if a specific sequence of events that rarely happens happens. It's reliability insurance. The mere existence of the iRule doesn't impact performance, and latency is only added if the retry is necessary, which should be rarely, if ever.
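For readers who haven't clicked through to Deb's tech tip, the conditional-retry pattern boils down to something like the following simplified sketch (my own abbreviation, not her exact rule): cache the request headers and replay them once if the chosen server errors out or never answers.

    when HTTP_REQUEST {
        # Cache the request so it can be replayed. Requests with a body would
        # also need HTTP::collect, so this covers payload-less requests only.
        set request [HTTP::request]
        set retried 0
    }
    when HTTP_RESPONSE {
        # Retry once, and only on a server-side (5xx) error
        if { [HTTP::status] >= 500 && !$retried } {
            set retried 1
            HTTP::retry $request
        }
    }
    when LB_FAILED {
        # If the selected member never responds, re-load-balance the same request
        if { !$retried } {
            set retried 1
            HTTP::retry $request
        }
    }

The single-retry guard is the whole point of the debate: the rule adds latency only on the rare transaction that would otherwise have failed outright.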
So the trade-off between performance and reliability isn't quite as cut and dried as it sounds. You can have both; that's the point of including an application delivery controller as part of your infrastructure. Performance is not at all at odds with reliability, at least not in BIG-IP land, and the two should never be conflicting goals regardless of your chosen application delivery solution.

Imbibing: Pink Lemonade

Technorati tags: performance, MacVittie, F5, BIG-IP, iRule, reliability, application delivery
Tag! <You're It>

Yesterday Don voiced his opinion that XML tagging is a broken proposition. One of his basic premises is that because tags, a.k.a. meta-data, are generated by people, and people aren't always as, shall we say, obsessive about doing so, the entire system is broken. Obviously I disagree or I wouldn't be penning this post. :-)

Web 3.0, a.k.a. the Semantic Web, is going to require tagging, or meta-data if you prefer. Without it there's not a good way to establish the relationships between content that form the basis for connections. But people are only part of the equation, and while they certainly form the basis for proper tagging, there are other issues that need to be addressed in order for this semantic web to work properly. While some proponents are doggedly attempting to create acceptable taxonomies from which the proper tags can be plucked, a semantic web is going to require more than just appropriate, consistent tagging.

The People Problem

Don's partially right (but only partially). Even with an extensive, consistent taxonomy the tags still need to be applied. People haven't done a great job with this thus far, therefore it's probably time to consider an automated, technological solution. This isn't as far-fetched as it might sound. Most categories/tags are based on single, simple words like "performance" or "SOA" or "XML". Consider a transparent device between you and the server receiving your content that automatically inspects incoming content and compares it against a known list of tags. The device inspecting the content automatically builds a list of relevant tags and inserts the result set into the content before submitting it. BIG-IP could do this today with an iRule, and it can do so transparently (a rough sketch of the idea follows below). In fact, this type of content enrichment is something that iRules is very well suited to accomplish, whether it's tagging or URI replacement or content filtering.

Such a solution only addresses web-submitted content, like blogs and wiki pages. We store and ultimately search and deliver a lot more than just Web 2.0 content; sometimes we need to do the same thing for documents in multiple formats, such as PDF or EXCEL or PPT. Well, consider another device that transparently sits between you and the server on which you store that data, and the possibility that it could accomplish the same type of task as BIG-IP and iRules. Perhaps a device like Acopia's ARX? But even if we use such a solution to automatically tag content, there are still other issues that need to be addressed in order for the Semantic Web to be realized.
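Here is that rough sketch, purely illustrative: the tag list, URI, and header name are all made up, and a real deployment would need to decide whether to enrich the payload itself or simply hand the derived tags to the application, as this sketch does.

    when HTTP_REQUEST {
        # Only inspect form posts that carry a body (hypothetical blog-post URI)
        if { [HTTP::method] eq "POST" && [HTTP::uri] starts_with "/blog/post" &&
             [HTTP::header exists "Content-Length"] && [HTTP::header "Content-Length"] > 0 } {
            HTTP::collect [HTTP::header "Content-Length"]
        }
    }
    when HTTP_REQUEST_DATA {
        # A tiny illustrative tag list; a real taxonomy would live in a data group
        set known_tags [list "performance" "SOA" "XML" "security"]
        set matched [list]
        foreach tag $known_tags {
            if { [string match -nocase "*${tag}*" [HTTP::payload]] } {
                lappend matched $tag
            }
        }
        if { [llength $matched] > 0 } {
            # Hand the derived tags to the application in a request header
            HTTP::header insert "X-Auto-Tags" [join $matched ", "]
        }
        HTTP::release
    }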
The Tomato Problem

One of the biggest hurdles that needs to be overcome is the fact that while I say tuh-mey-toh, you might say tuh-mah-toh. Less pedantically, I say "tag", Bob says "class" and Alice says "category". Synonyms are something that aren't always addressed by tagging, and they are not handled at all by search engines. The search technology that will drive Web 3.0 is semantically minded. It will understand that the meaning is more important than the actual word and search for the former rather than simply pattern matching on the latter. Sure, everyone who practices SEO (Search Engine Optimization) makes certain to include as many synonyms as they can, but they can't (and don't) get them all. And it shouldn't be up to the author to worry about catching them all, especially as we're all essentially working off the same thesaurus anyway. This is important. Even if we can automate, or get everyone in the world into the habit of, properly tagging their content, there will still be the need for "smarter" search capabilities that take the intrinsic properties of language into consideration. Forcing a single taxonomy isn't the solution, primarily because of the next issue that needs to be addressed: language.

The Language Problem

When discussing the problem of language some might immediately think of the differences between, say, Spanish and French and English. But the language problem is even more subtle than that. UK English and American English are not exactly the same, particularly in the area of spelling. Keyword and pattern matching don't treat "localization" and "localisation" the same. But both are the same word, simply spelled a bit differently due to changes and localization in the language over centuries. There are a large number of these words that span the two versions of English; it isn't just a smattering, it's a large enough subset that people could not be relied upon to consistently recall all the exceptions in their own language. So not only do we have inconsistency between French and English, but we have internal language inconsistencies to deal with as well. We can hardly force the UK or the US to change their entire language to solve this problem, so some sort of mechanism to deal with these subtle issues, as well as broader translation capabilities, is necessary for Web 3.0 to actually come to fruition.

Tagging Isn't the Problem, It's Search That Needs Some Fixin'

Tagging itself isn't necessarily the issue. We can, and likely will, automate the process of applying meta-data for the purposes of categorization and search rather than continue to rely on inconsistent application by users. But simply applying the meta-data consistently will not address the deeper issues prevalent in search technology today that prevent a truly semantic web from being a success. There is a Search 3.0 on the horizon; it remains to be seen who will figure it out first. The language barrier issue is one that's already been somewhat addressed, but only for languages that require translation. The subtle differences in English around the world have yet to be addressed, and no one appears to have presented a semantic search that addresses the synonym issue.

Try it yourself. Search on any search engine first for "dog", and then "canine". Or "cat" and "feline". Note the differences in the results. Not similar at all, even though they probably should be. It seems obvious (to me at least) that the language and synonym issues lie squarely in the realm of search. The problem is, of course, how to implement such technology without sacrificing the performance of current search technology.

Imbibing: Coffee

Technorati tags: MacVittie, F5, BIG-IP, iRule, semantics, Web 3.0, taxonomy
KPiRules

KPIs (Key Performance Indicators) are quantifiable metrics that can be measured against organizational goals. KPIs vary from business to business, based on what the company does. If it's a sales-oriented company, a KPI might be something like "increase sales by X% year over year"; if it's a retail company it might be both "increase sales" and "increase customer retention". The key is that a KPI must be measurable in some way. That's why most KPIs are directly related to revenue, customer retention/churn, and other quantifiable aspects of doing business.

Aggregating this measurable data is generally the responsibility of BI (Business Intelligence) suites. These products aggregate and assemble massive amounts of data, usually on a weekly or monthly basis, into a form that analysts can use to determine whether the company is on track to meet its goals. These forms are often OLAP cubes and require lengthy processing to assemble, so they're usually only created on a weekly or nightly basis.

But what if you wanted something a bit more "real time"? Maybe you wanted to know on an hourly basis how sales was doing so you weren't surprised when you finally got the weekly or daily update. If you've got a BIG-IP (v9.2+) and it's part of the infrastructure managing transactions, you're in luck. Using iRules and custom statistics profiles you can collect that data in real time, so you can satisfy your curiosity any time of the day or week.

What You Need

First you'll need a custom statistics profile so you have a place to store the data as it's collected. Joe's got a great article on how to create a custom statistics profile that walks through the steps required to assign it to the appropriate virtual server. As an example, I created a custom statistics profile with two custom fields: total_value and orders. After saving it, I then assigned it to a virtual server. Use the Advanced settings to find the "Statistics Profile" drop-down associated with your virtual server, and choose your newly created custom profile.

The last step is to create an iRule that extracts the data you're interested in and stores it in your custom statistics profile. You might also add a few additional processing-oriented rules within the iRule to handle resetting the statistics as well as displaying them. Basically, we're creating a KPI service with a REST-style interface. By default, the rule will extract the appropriate data and update the statistics profile, but by directly invoking a specific URI, e.g. /statistics/display or /statistics/reset, you can create a service that provides some amount of control over your custom profile.

The iRule

The following iRule is not optimized and it's rather simplistic in nature, but it does present the basic concept of extracting specific data from transactions and storing it in a custom profile for retrieval whenever you desire. If you were really ambitious, you could output the statistics in XML or CSV and import them right into something like Microsoft Excel so you could view the data over time.
    when HTTP_REQUEST {
        set PROFILE_NAME "MyKPIStats"
        set t [STATS::get $PROFILE_NAME "total_value"]
        set o [STATS::get $PROFILE_NAME "orders"]
        set uri [HTTP::uri]
        set FIELD_NAME "totalvalue"
        if { $uri starts_with "/statistics/display" } {
            HTTP::respond 200 content "total_value: $t, orders: $o"
        } elseif { $uri starts_with "/statistics/reset" } {
            STATS::set $PROFILE_NAME "total_value" 0
            STATS::set $PROFILE_NAME "orders" 0
            HTTP::respond 200 content "Statistics have been reset"
        } else {
            set newtotal $t
            set postvalues [split [HTTP::payload] "&"]
            foreach p $postvalues {
                set pval [split $p "="]
                if { [lindex $pval 0] eq $FIELD_NAME } {
                    set newtotal [expr {[lindex $pval 1] + $t}]
                }
            }
            STATS::set $PROFILE_NAME "total_value" $newtotal
            # This piece of the profile counts the number of orders we've received
            set neworders [expr {$o + 1}]
            STATS::set $PROFILE_NAME "orders" $neworders
        }
    }

KPIs aren't the only thing you could capture with custom profiles - you could measure a lot of customer-oriented actions as well, such as the number of users who loaded the "checkout" page and how many loaded the "final_checkout" page, as a real-time view of abandonment rates. While iRules aren't meant to - and won't - take the place of business intelligence suites, they can provide some interesting real-time visibility into your business without requiring modification to existing applications or the purchase of additional software.

Imbibing: Coffee

Technorati tags: iRule, F5, MacVittie, business rules, SOA, application delivery
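Picking up the Excel idea from above: one way to get there is an extra branch in the if/elseif chain of the rule, reusing the $uri, $t, and $o variables already set at the top (the /statistics/csv URI is my own invention, not part of the original tip):

    } elseif { $uri starts_with "/statistics/csv" } {
        # Emit the counters as CSV so they can be pulled straight into a spreadsheet
        HTTP::respond 200 content "metric,value\r\ntotal_value,$t\r\norders,$o\r\n" "Content-Type" "text/csv"
    }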