JA4 Part 2: Detecting and Mitigating Based on Dynamic JA4 Reputation
In my previous article on JA4 I provided a brief introduction to what JA4 and JA4+ are, and I shared an iRule that enables you to generate a JA4 client TLS fingerprint. But having a JA4 fingerprint (or any "identifier") is only valuable if you can take some action on it. It is even more valuable when you can take immediate action on it. In this article, I'll explain how I integrated F5 BIG-IP Advanced WAF with a third-party solution that allowed me to identify JA4s that were consistently doing "bad" things, build a list of those JA4s that have a "bad" reputation, pull that list into the F5 BIG-IP, and finally, make F5 Advanced WAF blocking decisions based on that reputation.

Understanding JA4 Fingerprints

It is important to understand that a JA4 TLS fingerprint, or any TLS fingerprint for that matter, is NOT a fingerprint of an individual instance of a device or browser. Rather, it is a fingerprint of a TLS "stack" or application. For example, all Chrome browsers of the same version and the same operating system will generate the same JA4 fingerprint*. Similarly, all Go HTTP clients with the same version and operating system will generate an identical JA4 fingerprint. Because of this, we have to be careful when taking action based on JA4 fingerprints. We cannot simply block in our various security devices based on JA4 fingerprint alone UNLESS we can be certain that ALL (or nearly all) requests with that JA4 are malicious. To make this determination, we need to watch requests over time.

TL;DR: I used CrowdSec Security Engine to build a JA4 real-time reputation database; and three iRules, an iCall script, and a custom WAF violation to integrate that JA4 reputation into F5 BIG-IP Advanced WAF.

CrowdSec and John Althouse - Serendipity

While at Black Hat each year, I frequently browse the showroom floor (when I'm not working the F5 booth) looking for cool new technology, particularly cool new technology that can potentially be integrated with F5 security solutions. Last year I was browsing the floor and came across CrowdSec. As the name suggests, CrowdSec provides a crowd-sourced IP reputation service. I know, I know. On the surface this doesn't sound that exciting — there are hundreds of IP reputation services out there AND IP address, as an identifier of a malicious entity, is becoming (has become?) less and less valuable. So what makes CrowdSec any different? Two things jumped out at me as I looked at their solution. First, while they do provide a central crowd-sourced IP reputation service like everyone else, they also have "Security Engines". A security engine is an agent/application that you can install on-premises that can consume logs from your various security devices, process those logs based on "scenarios" that you define, and produce a reputation database based on those scenarios. This enables you to create an IP reputation feed that is based on your own traffic/logs and based on your own conditions and criteria for what constitutes "malicious" for your organization. I refer to this as "organizationally-significant" reputation. AND, because this list can be updated very frequently (every few seconds if you wanted) and pushed/pulled into your various security devices very frequently (again, within seconds), you are afforded the ability to block for much shorter periods of time and, possibly, more liberally. Inherent in such an architecture, as well, is the ability for your various security tools to share intelligence in near real-time. i.e.
If your firewall identifies a bad actor, your WAF can know about that too. Within seconds! At this point you're probably wondering, "How does this have anything to do with JA4?" Second, while the CrowdSec architecture was built to provide IP reputation feeds, I discovered that it can actually create a reputation feed based on ANY "identifier". In the weeks leading up to Black Hat last year, I had been working with John Althouse on the JA4+ spec and was actually meeting him in person for the first time while there. So JA4 was at the forefront of my mind. I wondered if I could use CrowdSec to generate a reputation based on a JA4 fingerprint. Yes! You can! Deploying CrowdSec As soon as I got home from Black Hat, I started playing. I already had my BIG-IP deployed, generating JA4s, and including those in the WAF logs. Following the very good documentation on their site, I created an account on CrowdSec's site and deployed a CrowdSec Security Engine on an Ubuntu box that I deployed next to my BIG-IP. It is beyond the scope of this article to detail the complete deployment process but, I will include details relevant to this article. After getting the CrowdSec Security Engine deployed I needed to configure a parser so that the CrowdSec Security Engine (hereafter referred to simply as "SE") could properly parse the WAF logs from F5. Following their documentation, I created a YAML file at /etc/crowdsec/parsers/s01-parse/f5-waf-logs.yaml: onsuccess: next_stage debug: false filter: "evt.Parsed.program == 'ASM'" name: f5/waf-logs description: "Parse F5 ASM/AWAF logs" pattern_syntax: F5WAF: 'unit_hostname="%{DATA:unit_hostname}",management_ip_address="%{DATA:management_ip_address}",management_ip_address_2="%{DATA:management_ip_address_2}",http_class_name="%{DATA:http_class_name}",web_application_name="%{DATA:web_application_name}",policy_name="%{DATA:policy_name}",policy_apply_date="%{DATA:policy_apply_date}",violations="%{DATA:violations}",support_id="%{DATA:support_id}",request_status="%{DATA:request_status}",response_code="%{DATA:response_code}",ip_client="%{IP:ip_client}",route_domain="%{DATA:route_domain}",method="%{DATA:method}",protocol="%{DATA:protocol}",query_string="%{DATA:query_string}",x_forwarded_for_header_value="%{DATA:x_forwarded_for_header_value}",sig_ids="%{DATA:sig_ids}",sig_names="%{DATA:sig_names}",date_time="%{DATA:date_time}",severity="%{DATA:severity}",attack_type="%{DATA:attack_type}",geo_location="%{DATA:geo_location}",ip_address_intelligence="%{DATA:ip_address_intelligence}",username="%{DATA:username}",session_id="%{DATA:session_id}",src_port="%{DATA:src_port}",dest_port="%{DATA:dest_port}",dest_ip="%{DATA:dest_ip}",sub_violations="%{DATA:sub_violations}",virus_name="%{DATA:virus_name}",violation_rating="%{DATA:violation_rating}",websocket_direction="%{DATA:websocket_direction}",websocket_message_type="%{DATA:websocket_message_type}",device_id="%{DATA:device_id}",staged_sig_ids="%{DATA:staged_sig_ids}",staged_sig_names="%{DATA:staged_sig_names}",threat_campaign_names="%{DATA:threat_campaign_names}",staged_threat_campaign_names="%{DATA:staged_threat_campaign_names}",blocking_exception_reason="%{DATA:blocking_exception_reason}",captcha_result="%{DATA:captcha_result}",microservice="%{DATA:microservice}",tap_event_id="%{DATA:tap_event_id}",tap_vid="%{DATA:tap_vid}",vs_name="%{DATA:vs_name}",sig_cves="%{DATA:sig_cves}",staged_sig_cves="%{DATA:staged_sig_cves}",uri="%{DATA:uri}",fragment="%{DATA:fragment}",request="%{DATA:request}",response="%{DATA:response}"' nodes: - grok: name: 
"F5WAF" apply_on: message statics: - meta: log_type value: f5waf - meta: user expression: "evt.Parsed.username" - meta: source_ip expression: "evt.Parsed.ip_client" - meta:violation_rating expression:"evt.Parsed.violation_rating" - meta:request_status expression:"evt.Parsed.request_status" - meta:attack_type expression:"evt.Parsed.attack_type" - meta:support_id expression:"evt.Parsed.support_id" - meta:violations expression:"evt.Parsed.violations" - meta:sub_violations expression:"evt.Parsed.sub_violations" - meta:session_id expression:"evt.Parsed.session_id" - meta:sig_ids expression:"evt.Parsed.sig_ids" - meta:sig_names expression:"evt.Parsed.sig_names" - meta:method expression:"evt.Parsed.method" - meta:device_id expression:"evt.Parsed.device_id" - meta:uri expression:"evt.Parsed.uri" nodes: - grok: pattern: '%{GREEDYDATA}X-JA4: %{DATA:ja4_fp}\\r\\n%{GREEDYDATA}' apply_on: request statics: - meta: ja4_fp expression:"evt.Parsed.ja4_fp" Sending WAF Logs On the F5 BIG-IP, I created a logging profile to send the WAF logs to the CrowdSec Security Engine IP address and port. Defining "Scenarios" At this point, I had the WAF logs being sent to the SE and properly being parsed. Now I needed to define the "scenarios" or the conditions under which I wanted to trigger and alert for an IP address or, in this case, a JA4 fingerprint. For testing purposes, I initially created a very simple scenario that flagged a JA4 as malicious as soon as I saw 5 violations in a sliding 30 second window but only if the violation rating was 3 or higher. That worked great! But that would never be practical in the real world (see the Understanding JA4 Fingerprints section above). I created a more practical "scenario" that only flags a JA4 as malicious if we have seen at least X number of requests AND more than 90% of requests from that JA4 have triggered some WAF violation. The premise with this scenario is that there should be enough legitimate traffic from popular browsers and other client types to keep the percentage of malicious traffic from any of those JA4s below 90%. Again, following the CrowdSec documentation, I created a YAML file at /etc/crowdsec/scenarios/f5-waf-ja4-viol-percent.yaml: type: conditional name: f5/waf-ja4-viol-percent description: "Raise an alert if the percentage of requests from a ja4 finerprint is above X percent" filter: "evt.Meta.violations != 'JA4 Fingerprint Reputation'" blackhole: 300s leakspeed: 5m capacity: -1 condition: | len(queue.Queue) > 10 and (count(queue.Queue, Atof(#.Meta.violation_rating) > 1) / len(queue.Queue)) > 0.9 groupby: "evt.Meta.ja4_fp" scope: type: ja4_fp expression: evt.Meta.ja4_fp labels: service: f5_waf type: waf_ja4 remediation: true debug: false There are a few key lines to call out from this configuration file. leakspeed: This is the "sliding window" within which we are looking for our "scenarios". i.e. events "leak" out of the bucket after 5 minutes. condition: The conditions under which I want to trigger this bucket. For my scenario, I have defined a condition of at least 10 events (with in that 5 minute window) AND where the total number of events, divided by the number of events where the violation rating is above 1, is greater than 0.9. in other words, if more than 90% of the requests have triggered a WAF violation with a rating higher than 1. filter: used to filter out events that you don't want to include in this scenario. In my case, I do not want to include requests where the only violation is the "JA4 Fingerprint Reputation" violation. 
groupby: This defines how I want to group requests. Typically, in most CrowdSec scenarios this will be some IP address field from the logs. In my scenario, I wanted to group by the JA4 fingerprint parsed out of the WAF logs.

blackhole: This defines how long I want to "silence" alerts per JA4 fingerprint after this scenario has triggered. This prevents the same scenario from triggering repeatedly every time a new request comes into the bucket.

scope: The scope is used by the reputation service to "categorize" alerts triggered by scenarios. The type field is used to define the type of data that is being reported. In most CrowdSec scenarios the type is "ip". In my case, I defined a custom type of "ja4_fp" with an "expression" (or value) of the JA4 fingerprint extracted from the WAF logs.

Defining "Profiles"

In the CrowdSec configuration, "profiles" are used to define the remediation that should be taken when a scenario is triggered. I edited the /etc/crowdsec/profiles.yaml file to include the new profile for my JA4 scenario.

name: ban_ja4_fp
filters:
  - Alert.Remediation == true && Alert.GetScope() == "ja4_fp"
decisions:
  - type: ban
    scope: "ja4_fp"
    duration: 5m
debug: true
on_success: continue
---
##### Everything below this point was already in the profiles.yaml file. Truncated here for brevity.
name: default_ip_remediation
#debug: true
filters:
  - Alert.Remediation == true && Alert.GetScope() == "Ip"
decisions:
...
on_success: break

Again, there are a few key lines from this configuration file. First, I only added a new profile named "ban_ja4_fp" with lines 1 through 9 in the file above.

filters: Used to define which triggered scenarios should be included in this profile. In my case, all scenarios with the "remediation" label AND the "ja4_fp" scope.

decisions: Used to define what type of remediation should be taken, for which "scope", and for how long. In my case, I chose the default of "ban", for the "ja4_fp" scope, and for 5 minutes.

With this configuration in place I sent several malicious requests from my browser to my test application protected by the F5 Advanced WAF. I then checked the CrowdSec decisions list and voila! I had my browser's JA4 fingerprint listed! This was great but I wanted to be able to take action based on this intelligence in the F5 WAF. CrowdSec has the concept of "bouncers". Bouncers are devices that can take action on the remediation decisions generated by the SEs. Technically, anything that can call the local CrowdSec API and take some remediating action can be a bouncer. So, using the CLI on the CrowdSec SE, I defined a new "bouncer" for the F5 BIG-IP.

ubuntu@xxxxxxxx:~$ sudo cscli bouncer add f5-bigip
Api key for 'f5-bigip': xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Please keep this key since you will not be able to retrieve it!

I knew that I could write an iRule that could call the SE API. However, the latency introduced by a sideband API call on EVERY HTTP request would just be completely untenable. I wanted a way to download the entire reputation list at a regular interval and store it on the F5 BIG-IP in a way that would be easily and efficiently accessible from the data plane. This sounded like a perfect job for an iCall script.

Customizing the F5 BIG-IP Configuration

If you are not familiar with iCall scripts, they are a programmatic way of checking or altering the F5 configuration based on some trigger; they are to the F5 BIG-IP management plane what iRules are to the data plane. The trigger can be some event, condition, log message, time interval, etc.
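If you have never worked with iCall before, here is a trivial, hypothetical script-and-handler pair (unrelated to this project) just to show the shape of the mechanism. The names and the log message are made up; the real, CrowdSec-specific script follows in the next section.

sys icall script heartbeat_logger {
    app-service none
    definition {
        ## Write a timestamped heartbeat message to the BIG-IP log each time the handler fires
        tmsh::log "iCall heartbeat at [clock format [clock seconds]]"
    }
    description none
    events none
}
sys icall handler periodic heartbeat_logger_handler {
    interval 3600
    script heartbeat_logger
}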
I needed my iCall script to do two things. First, pull the reputation list from the CrowdSec SE. Second, store that list somewhere accessible to the F5 data plane. Like many of you, my first thought was either an iFile or a data group. Both of these are easily configurable components accessible via iCall scripts that are also accessible via iRules. For several reasons that I will not bother to detail here, I did not want to use either of these solutions, primarily for performance reasons (this reputation lookup needs to be very performant). And the most performant place to store information like this is the session table. The session table is accessible to iRules via "table" commands. However, the session table is not accessible via iCall scripts. At least not directly. I realized that I could send an HTTP request using the iCall script, AND that HTTP request could be to a local virtual server on the same BIG-IP where I could use an iRule to populate the session table with the JA4 reputation list pulled from the CrowdSec SE.

The iCall Script

From the F5 BIG-IP CLI I created the following iCall script using the tmsh command 'tmsh create sys icall script crowdsec_ja4_rep_update':

sys icall script crowdsec_ja4_rep_update {
    app-service none
    definition {
        package require http
        set csapi_resp [http::geturl http://10.0.2.240:8080/v1/decisions/stream?startup=true&scopes=ja4_fp -headers "X-api-Key 1a234xxxxxxxxxxxxxxe56fab7"]
        #tmsh::log "[http::data ${csapi_resp}]"
        set payload [http::data ${csapi_resp}]
        http::cleanup ${csapi_resp}
        set tupdate_resp [http::geturl http://10.0.1.110/updatetables -type "application/json" -query ${payload}]
        tmsh::log "[http::data ${tupdate_resp}]"
        http::cleanup ${tupdate_resp}
    }
    description none
    events none
}

Let's dig through this iCall script line by line:

4. Used to "require" or "include" the TCL http library.
5. HTTP request to the CrowdSec API to get the JA4 reputation list.
   10.0.2.240:8080 is the IP:port of the CrowdSec SE API
   /v1/decisions/stream is the API endpoint used to grab an entire reputation list (rather than just query for the status of an individual IP/JA4)
   startup=true tells the API to send the entire list, not just additions/deletions since the last API call
   scopes=ja4_fp limits the returned results to just JA4 fingerprint-type decisions
   -headers "X-api-Key xxxxxxxxxxxxxxxxxxxxxxxxxx" includes the API key generated previously to authenticate the F5 BIG-IP as a "bouncer"
7. Store just the body of the API response in a variable called "payload"
8. Free up memory used by the HTTP request to the CrowdSec API
9. HTTP request to a local virtual server (on the same F5 BIG-IP) including the contents of the "payload" variable as the POST body. The IP address needs to be the IP address of the virtual server defined in the next step. An iRule will be created and placed on this virtual server that parses the "payload" and inserts the JA4 reputation list into the session table.

An iCall script will not run unless an iCall handler is created that defines when that iCall script should run. iCall handlers can be "triggered", "perpetual", or "periodic". I created the following periodic iCall handler to run this iCall script at regular intervals.

sys icall handler periodic crowdsec-api-ja4 {
    interval 30
    script crowdsec_ja4_rep_update
}

This iCall handler is very simple; it has an "interval" for how often you want to run the script and the script that you want to run.
I chose to run the iCall script every 30 seconds so that the BIG-IP session table would be updated with any new malicious JA4 fingerprints very quickly. But you could choose to run the iCall script every 1 minute, 5 minutes, etc.

The Table Updater Virtual Server and iRule

I then created an HTTP virtual server with no pool associated to it. This virtual server exists solely to accept and process the HTTP requests from the iCall script. I then created the following iRule to process the requests and payload from the iCall script:

proc duration2seconds {durstr} {
    set h 0
    set m 0
    set s 0
    regexp {(\d+)h} ${durstr} junk h
    regexp {(\d+)m} ${durstr} junk m
    regexp {(\d+)\.} ${durstr} junk s
    set seconds [expr "(${h}*3600) + (${m}*60) + ${s}"]
    return $seconds
}
when HTTP_REQUEST {
    if { ([HTTP::uri] eq "/updatetables" || [HTTP::uri] eq "/lookuptables") && [HTTP::method] eq "POST"} {
        HTTP::collect [HTTP::header value "content-length"]
    } else {
        HTTP::respond 404
    }
}
when HTTP_REQUEST_DATA {
    #log local0. "PAYLOAD: '[HTTP::payload]'"
    regexp {"deleted":\[([^\]]+)\]} [HTTP::payload] junk cs_deletes
    regexp {"new":\[([^\]]+)\]} [HTTP::payload] junk cs_adds
    if { ![info exists cs_adds] } {
        HTTP::respond 200 content "NO NEW ENTRIES"
        return
    }
    log local0. "CS Additions: '${cs_adds}'"
    set records [regexp -all -inline -- {\{([^\}]+)\},?} ${cs_adds}]
    set update_list [list]
    foreach {junk record} $records {
        set urec ""
        foreach k {scope value type scenario duration} {
            set v ""
            regexp -- "\"${k}\":\"?(\[^\",\]+)\"?,?" ${record} junk v
            log local0. "'${k}': '${v}'"
            if { ${k} eq "duration" } {
                set v [call duration2seconds ${v}]
            }
            append urec "${v}:"
        }
        set urec [string trimright ${urec} ":"]
        #log local0. "$urec"
        lappend update_list ${urec}
    }
    set response ""
    foreach entry $update_list {
        scan $entry {%[^:]:%[^:]:%[^:]:%[^:]:%s} scope entity type scenario duration
        if { [HTTP::uri] eq "/updatetables" } {
            table set "${scope}:${entity}" "${type}:${scenario}" indefinite $duration
            append response "ADDED ${scope}:${entity} FOR ${duration} -- "
        } elseif { [HTTP::uri] eq "/lookuptables" } {
            set remaining ""
            set action ""
            if { [set action [table lookup ${scope}:${entity}]] ne "" } {
                set remaining [table lifetime -remaining ${scope}:${entity}]
                append response "${scope}:${entity} - ${action} - ${remaining}s remaining\r\n"
            } else {
                append response "${scope}:${entity} - NOT IN TABLE\r\n"
            }
        }
    }
    HTTP::respond 200 content "${response}"
}

I have attempted to include sufficient inline comments so that the iRule is self-explanatory. If you have any questions or comments on this iRule please feel free to DM me. It is important to note here that the iRule is storing not only each JA4 fingerprint in the session table as a key but also the metadata passed back from the CrowdSec API about each JA4 reputation as the value for each key. This metadata includes the scenario name, the "type" or action, and the duration. (A trimmed sample of the decision payload that this iRule parses is shown at the end of this section.) So at this point I had a JA4 reputation list, updated continuously based on the WAF violation logs and CrowdSec scenarios. I also had an iCall script on the F5 BIG-IP that was pulling that reputation list via the local CrowdSec API every 30 seconds and pushing that reputation list into the local session table on the BIG-IP. Now I just needed to take some action based on that reputation list.
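For reference, the decision stream returned by the CrowdSec API has roughly the following shape. This is an illustrative sketch trimmed to the fields the iRule above actually extracts; real responses carry additional fields, and the values shown here are made up:

{
  "deleted": [],
  "new": [
    {
      "duration": "4h59m51.245s",
      "scenario": "f5/waf-ja4-viol-percent",
      "scope": "ja4_fp",
      "type": "ban",
      "value": "t13d1516h2_8daaf6152771_e5627efa2ab1"
    }
  ]
}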
Integrating JA4 Reputation into F5 WAF

To integrate the JA4 reputation into the F5 Advanced WAF we only need two things:
a custom violation defined in the WAF
an iRule to look up the JA4 in the local session table and raise that violation

Creating a Custom Violation

Creating a custom violation in F5 Advanced WAF (or ASM) will vary slightly depending on which version of the TMOS software you are running. In version 17.1 it is at Security ›› Options : Application Security : Advanced Configuration : Violations List. Select the User-Defined Violations tab and click Create. Give the Violation a Title and define the Type, Severity, and Attack Type. Finally, I modified the Learning and Blocking Settings of my policy to ensure that the new custom violation was set to Alarm and Block.

F5 iRule for Custom Violation

I then created the following iRule to raise this new custom WAF violation if the JA4 fingerprint is found in the reputation list in the local session table.

when ASM_REQUEST_DONE {
    # Grab JA4 fingerprint from x-ja4 header
    # This header is inserted by the JA4 irule
    set ja4_fp [HTTP::header value "x-ja4"]
    # Lookup JA4 fingerprint in session table
    if { [set result [table lookup "ja4_fp:${ja4_fp}"]] ne "" } {
        # JA4 was found in session table, scan the value to get "category" and "action"
        scan ${result} {%[^:]:%s} action category
        # Initialize all the nested list of lists format required for the
        # violation details of the ASM::raise command
        set viol []
        set viol_det1 []
        set viol_det2 []
        set viol_det3 []
        # Populate the variables with values parsed from the session table for this JA4
        lappend viol_det1 "JA4 FP" "${ja4_fp}"
        lappend viol_det2 "CrowdSec Category" "${category}"
        lappend viol_det3 "CrowdSec Action" "${action}"
        lappend viol ${viol_det1} ${viol_det2} ${viol_det3}
        # Raise custom ASM violation with violation details
        ASM::raise VIOL_JA4_REPUTATION ${viol}
    }
}

Again, I tried to include enough inline documentation for the iRule to be self-explanatory.

Seeing It All In Action

With everything in place, I sent several requests, most malicious and some benign, to the application protected by the F5 Advanced WAF. Initially, only the malicious requests were blocked. After about 60 seconds, ALL of the requests were being blocked due to the new custom violation based on JA4 reputation. Below is a screenshot from one of my honeypot WAF instances blocking real "in-the-wild" traffic based on JA4 reputation. Note that the WAF violation includes (1) the JA4 fingerprint, (2) the "category" (or scenario), and (3) the "action" (or type).

Things to Note

The API communication between the F5 BIG-IP and the CrowdSec SE is over HTTP. This is obviously insecure; for this proof-of-concept deployment I was just too lazy to spend the extra time to get signed certs on all the devices involved and alter the iCall script to use the TCL SSL library.
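On that last point, here is a hedged sketch of what an HTTPS version of the iCall request might look like. It assumes the Tcl tls package is loadable from the tmsh/iCall interpreter on your TMOS build (something I have not verified on every version), and the hostname and CA file path are placeholders:

    definition {
        package require http
        ## assumption: the tls package is available to the iCall/tmsh Tcl environment
        package require tls
        ## validate the CrowdSec SE certificate against a CA bundle you place on the BIG-IP
        ::http::register https 443 [list ::tls::socket -require 1 -cafile /var/tmp/crowdsec-ca.pem]
        set csapi_resp [http::geturl https://cs-se.example.local:8080/v1/decisions/stream?startup=true&scopes=ja4_fp -headers "X-api-Key <bouncer-api-key>"]
        ## ... remainder identical to the HTTP version shown earlier
    }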
SSL Profiles Part 8: Client Authentication

This is the eighth article in a series of Tech Tips that highlight SSL Profiles on the BIG-IP LTM.
SSL Overview and Handshake
SSL Certificates
Certificate Chain Implementation
Cipher Suites
SSL Options
SSL Renegotiation
Server Name Indication
Client Authentication
Server Authentication
All the "Little" Options

This article will discuss the concept of Client Authentication, how it works, and how the BIG-IP system allows you to configure it for your environment.

Client Authentication

In a TLS handshake, the client and the server exchange several messages that ultimately result in an encrypted channel for secure communication. During this handshake, the client authenticates the server's identity by verifying the server certificate (for more on the TLS handshake, see SSL Overview and Handshake - Article 1 in this series). Although the client always authenticates the server's identity, the server is not required to authenticate the client's identity. However, there are some situations that call for the server to authenticate the client. Client authentication is a feature that lets you authenticate users that are accessing a server. In client authentication, a certificate is passed from the client to the server and is verified by the server. Client authentication allows you to rest assured that the person represented by the certificate is the person you expect. Many companies want to ensure that only authorized users can gain access to the services and content they provide. As more personal and access-controlled information moves online, client authentication becomes more of a reality and a necessity.

How Does Client Authentication Work?

Before we jump into client authentication, let's make sure we understand server authentication. During the TLS handshake, the client authenticates the identity of the server by verifying the server's certificate and using the server's public key to encrypt data that will be used to compute the shared symmetric key. The server can only generate the symmetric key used in the TLS session if it can decrypt that data with its private key. The following diagram shows an abbreviated version of the TLS handshake that highlights some of these concepts. Ultimately, the client and server need to use a symmetric key to encrypt all communication during their TLS session. In order to calculate that key, the server shares its certificate with the client (the certificate includes the server's public key), and the client sends a random string of data to the server (encrypted with the server's public key). Now that the client and server each have the random string of data, they can each calculate (independently) the symmetric key that will be used to encrypt all remaining communication for the duration of that specific TLS session. In fact, the client and server both send a "Finished" message at the end of the handshake...and that message is encrypted with the symmetric key that they have both calculated on their own. So, if all that stuff works and they can both read each other's "Finished" message, then the server has been authenticated by the client and they proceed along with smiles on their collective faces (encrypted smiles, of course). You'll notice in the diagram above that the server sent its certificate to the client, but the client never sent its certificate to the server. When client authentication is used, the server still sends its certificate to the client, but it also sends a "Certificate Request" message to the client.
This lets the client know that it needs to get its certificate ready because the next message from the client to the server (during the handshake) will need to include the client certificate. The following diagram shows the added steps needed during the TLS handshake for client authentication. So, you can see that when client authentication is enabled, the public and private keys are still used to encrypt and decrypt critical information that leads to the shared symmetric key. In addition to the public and private keys being used for authentication, the client and server both send certificates and each verifies the certificate of the other. This certificate verification is also part of the authentication process for both the client and the server. The certificate verification process includes four important checks. If any of these checks do not return a valid response, the certificate verification fails (which makes the TLS handshake fail) and the session will terminate. These checks are as follows:
Check digital signature
Check certificate chain
Check expiration date and validity period
Check certificate revocation status

Here's how the client and server accomplish each of the checks for client authentication:

Digital Signature: The client sends a "Certificate Verify" message that contains a digitally signed copy of the previous handshake message. This message is signed using the client certificate's private key. The server can validate the message digest of the digital signature by using the client's public key (which is found in the client certificate). Once the digital signature is validated, the server knows that the public key belonging to the client matches the private key used to create the signature.

Certificate Chain: The server maintains a list of trusted CAs, and this list determines which certificates the server will accept. The server will use the public key from the CA certificate (which it has in its list of trusted CAs) to validate the CA's digital signature on the certificate being presented. If the message digest has changed or if the public key doesn't correspond to the CA's private key used to sign the certificate, the verification fails and the handshake terminates.

Expiration Date and Validity Period: The server compares the current date to the validity period listed in the certificate. If the expiration date has not passed and the current date is within the period, everything is good. If it's not, then the verification fails and the handshake terminates.

Certificate Revocation Status: The server compares the client certificate to the list of revoked certificates on the system. If the client certificate is on the list, the verification fails and the handshake terminates.

As you can see, a bunch of stuff has to happen in just the right way for the Client-Authenticated TLS handshake to finalize correctly. But, all this is in place for your own protection. After all, you want to make sure that no one else can steal your identity and impersonate you on a critically important website!

BIG-IP Configuration

Now that we've established the foundation for client authentication in a TLS handshake, let's figure out how the BIG-IP is set up to handle this feature. The following screenshot shows the user interface for configuring Client Authentication. To get here, navigate to Local Traffic > Profiles > SSL > Client. The Client Certificate drop down menu has three settings: Ignore (default), Require, and Request.
The "Ignore" setting specifies that the system will ignore any certificate presented and will not authenticate the client before establishing the SSL session. This effectively turns off client authentication. The "Require" setting enforces client authentication. When this setting is enabled, the BIG-IP will request a client certificate and attempt to verify it. An SSL session is established only if a valid client certificate from a trusted CA is presented. Finally, the "Request" setting enables optional client authentication. When this setting is enabled, the BIG-IP will request a client certificate and attempt to verify it. However, an SSL session will be established regardless of whether or not a valid client certificate from a trusted CA is presented. The Request option is often used in conjunction with iRules in order to provide selective access depending on the certificate that is presented. For example: let's say you would like to allow clients who present a certificate from a trusted CA to gain access to the application while clients who do not provide the required certificate are redirected to a page detailing the access requirements (a short example of such an iRule appears at the end of this article). If you are not using iRules to enforce a different outcome based on the certificate details, there is no significant benefit to using the "Request" setting versus the default "Ignore" setting. In both cases, an SSL session will be established regardless of the certificate presented.

Frequency specifies the frequency of client authentication for an SSL session. This menu offers two options: Once (default) and Always. The "Once" setting specifies that the system will authenticate the client only once for an SSL session. The "Always" setting specifies that the system will authenticate the client once when the SSL session is established as well as each time that session is reused.

The Retain Certificate box is checked by default. When checked, the client certificate is retained for the SSL session.

Certificate Chain Traversal Depth specifies the maximum number of certificates that can be traversed in a client certificate chain. The default for this setting is 9. Remember that "Certificate Chain" part of the verification checks? This setting is where you configure the depth that you allow the server to dig for a trusted CA. For more on certificate chains, see article 2 of this SSL series.

The Trusted Certificate Authorities setting is used to specify the BIG-IP's Trusted Certificate Authorities store. These are the CAs that the BIG-IP trusts when it verifies a client certificate that is presented during client authentication. The default value for the Trusted Certificate Authorities setting is None, indicating that no CAs are trusted. Don't forget...if the BIG-IP Client Certificate menu is set to Require but the Trusted Certificate Authorities is set to None, clients will not be able to establish SSL sessions with the virtual server. The drop down list in this setting includes the name of all the SSL certificates installed in the BIG-IP's /config/ssl/ssl.crt directory. A newly-installed BIG-IP system will include the following certificates: default certificate and ca-bundle certificate. The default certificate is a self-signed server certificate used when testing SSL profiles. This certificate is not appropriate for use as a Trusted Certificate Authorities certificate bundle. The ca-bundle certificate is a bundle of CA certificates from most of the well-known PKIs around the world.
This certificate may be appropriate for use as a Trusted Certificate Authorities certificate bundle. However, if this bundle is specified as the Trusted Certificate Authorities certificate store, any valid client certificate that is signed by one of the popular Root CAs included in the default ca-bundle.crt will be authenticated. This provides some level of identification, but it provides very little access control since almost any valid client certificate could be authenticated. If you want to trust only certificates signed by a specific CA or set of CAs, you should create and install a bundle containing the certificates of the CAs whose certificates you trust. The bundle must also include the entire chain of CA certificates necessary to establish a chain of trust. Once you create this new certificate bundle, you can select it in the Trusted Certificate Authorities drop down menu. The Advertised Certificate Authorities setting is used to specify the CAs that the BIG-IP advertises as trusted when soliciting a client certificate for client authentication. The default value for the Advertised Certificate Authorities setting is None, indicating that no CAs are advertised. When set to None, no list of trusted CAs is sent to a client with the certificate request. If the Client Certificate menu is set to Require or Request, you can configure the Advertised Certificate Authorities setting to send clients a list of CAs that the server is likely to trust. Like the Trusted Certificate Authorities list, the Advertised Certificate Authorities drop down list includes the name of all the SSL certificates installed in the BIG-IP /config/ssl/ssl.crt directory. A newly-installed BIG-IP system includes the following certificates: default certificate and ca-bundle certificate. The default certificate is a self-signed server certificate used for testing SSL profiles. This certificate is not appropriate for use as an Advertised Certificate Authorities certificate bundle. The ca-bundle certificate is a bundle of CA certificates from most of the well-known PKIs around the world. This certificate may be appropriate for use as an Advertised Certificate Authorities certificate bundle. If you want to advertise only a specific CA or set of CAs, you should create and install a bundle containing the certificates of the CA to advertise. Once you create this new certificate bundle, you can select it in the Advertised Certificate Authorities setting drop down menu. You are allowed to configure the Advertised Certificate Authorities setting to send a different list of CAs than that specified for the Trusted Certificate Authorities. This allows greater control over the configuration information shared with unknown clients. You might not want to reveal the entire list of trusted CAs to a client that does not automatically present a valid client certificate from a trusted CA. Finally, you should avoid specifying a bundle that contains a large number of certificates when you configure the Advertised Certificate Authorities setting. This will cut down on the number of certificates exchanged during a client SSL handshake. The maximum size allowed by the BIG-IP for native SSL handshake messages is 14,304 bytes. Most handshakes don't result in large message lengths, but if the SSL handshake is negotiating a native cipher and the total length of all messages in the handshake exceeds the 14,304 byte threshold, the handshake will fail. 
The Certificate Revocation List (CRL) setting allows you to specify a CRL that the BIG-IP will use to check revocation status of a certificate prior to authenticating a client. If you want to use a CRL, you must upload it to the /config/ssl/ssl.crl directory on the BIG-IP. The name of the CRL file may then be entered in the CRL setting dialog box. Note that this box will offer no drop down menu options until you upload a CRL file to the BIG-IP. Since CRLs can quickly become outdated, you should use either OCSP or CRLDP profiles for more robust and current verification functionality.

Conclusion

Well, that wraps up our discussion on Client Authentication. I hope the information helped, and I hope you can use this to configure your BIG-IP to meet the needs of your specific network environment. Be sure to come back for our next article in the SSL series. As always, if you have any other questions, feel free to post a question here or Contact Us directly. See you next time!
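One quick addendum, as promised in the Request setting discussion above: here is a minimal sketch of the kind of iRule that provides selective access based on the presented certificate. It assumes the Client Certificate setting is Request (so the handshake completes either way) and Retain Certificate is enabled; the /cert-required.html URI is hypothetical and would point at whatever "access requirements" page you maintain.

when HTTP_REQUEST {
    # If no certificate was presented during the handshake, send the client to an info page
    if { [SSL::cert count] == 0 } {
        HTTP::redirect "https://[HTTP::host]/cert-required.html"
    } else {
        # A certificate was presented; you could branch on its subject or issuer here
        log local0. "Client cert subject: [X509::subject [SSL::cert 0]]"
    }
}

From here you could make finer-grained decisions on [X509::issuer] or specific subject fields rather than simply checking for the presence of a certificate.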
Let's Encrypt

Let's Encrypt has revolutionized the way website owners implement HTTPS by offering free and automated SSL certificates, making secure connections accessible to everyone. This article delves into the technical aspects of Let's Encrypt, explaining how it establishes trust and secures your website. Before diving into Let's Encrypt, it's essential to understand the role of a Certificate Authority (CA). CAs are trusted entities that verify domain ownership and issue SSL certificates. They form the foundation of the Public Key Infrastructure (PKI) that enables secure communication on the internet.

The Role of a Certificate Authority (CA):

The process begins when a web server requests a certificate from the CA, specifying the domain name. The CA sends a challenge to validate the server's control over the domain. Upon successful validation, the CA issues an X.509/SSL/TLS certificate, which the server installs. When a user visits the website, their browser verifies the certificate's authenticity by checking the chain of trust back to a trusted root certificate. If the chain is valid, a secure connection is established. A critical role in this ecosystem is played by Certificate Authorities (CAs). CAs are trusted third-party entities responsible for:

Domain Validation: CAs employ various mechanisms to validate the ownership or control of a domain name by the entity requesting the certificate. This validation process helps mitigate phishing attacks and ensures certificates are issued to legitimate entities.

Public Key Infrastructure (PKI) Management: CAs operate within a Public Key Infrastructure (PKI) framework. They maintain a repository of trusted root certificates and issue intermediate certificates signed by a trusted root. Website administrators generate a public/private key pair, and the CA signs a certificate binding the public key to the validated domain identity. This signed certificate, containing the public key and domain information, is then installed on the web server.

Trust Chain Establishment: Web browsers and operating systems come pre-loaded with a set of trusted root certificates issued by well-known CAs. When a user visits a website with a valid SSL/TLS certificate, the browser can verify the certificate's authenticity by chaining it back to a trusted root certificate, establishing a secure connection.

This sequence below shows the role of a CA in the certificate issuance and validation process. Traditionally, obtaining certificates from CAs involved a manual enrollment process and significant costs. Let's Encrypt disrupted this model by offering free certificates through an automated process using the Automated Certificate Management Environment (ACME) protocol. ACME streamlines communication between web servers and the CA, automating the entire certificate lifecycle, including issuance and renewal. Let's Encrypt certificates have a short 90-day validity period to enhance security, and the automation ensures seamless renewal before expiration. This sequence shows the steps involved in obtaining a Let's Encrypt SSL/TLS certificate for a web server. Here's a breakdown:

Requesting a Certificate: The web server software initiates the process by sending a request to Let's Encrypt CA, asking for a certificate.

Challenge for Validation: Let's Encrypt CA responds by sending the web server a challenge. This challenge is designed to verify that the software requesting the certificate actually controls the domain name.
A common challenge involves placing a specific file on the web server's directory.

Responding to the Challenge: The web server software must complete the challenge. In this example, it would place the specific file in the designated directory on the server.

Verification by Let's Encrypt: Once the web server software completes the challenge, Let's Encrypt CA verifies the response.

Two Possible Outcomes:

Success: If the challenge response is valid, Let's Encrypt CA issues a new SSL/TLS certificate for the web server's domain name. The web server software then downloads the certificate from Let's Encrypt CA. The downloaded certificate is installed on the web server. Finally, the web server is configured to enable HTTPS, which encrypts communication between the website and visitors.

Failure: If the challenge response is invalid (e.g., the file wasn't placed correctly), Let's Encrypt CA informs the web server of the failure. In this case, the web server software would likely retry the entire process by requesting a new certificate again.

Let's Encrypt and Key Pinning

Let's Encrypt recently introduced new intermediate certificates to replace older ones that are nearing expiration. These new certificates are designed to be more secure and efficient. One of the goals is to discourage the use of an outdated practice known as key pinning. Key pinning refers to a security practice where software applications are configured to trust only a specific set of cryptographic keys issued by a certificate authority (CA). In the context of Let's Encrypt, this would involve an application trusting only a particular intermediate certificate used by Let's Encrypt to sign website certificates. There are a few reasons why Let's Encrypt discourages key pinning:

Manual Updates: Key pinning typically requires manual updates whenever a certificate authority changes its certificates, which can be a cumbersome and error-prone process.

Reduced Flexibility: Pinned keys limit your ability to benefit from security improvements or optimizations introduced by the CA's newer certificates.

Potential Outages: If a pinned certificate expires or becomes invalid, applications that rely on it may malfunction or fail entirely, potentially leading to outages.

Let's Encrypt argues that trusting the built-in trust store of your operating system or web browser is a more secure and flexible approach. These trust stores are automatically updated to reflect changes made by certificate authorities, reducing the risk of errors and outages. However, there are some niche cases where key pinning might still be considered justified. For instance, an organization might pin a key if they have a specific security requirement to strictly limit trusted certificates. Overall, Let's Encrypt's move to new intermediate certificates aims to improve security and efficiency while promoting a more automated and flexible approach to certificate trust management.
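If you terminate TLS on a BIG-IP, one way to answer the HTTP-01 challenge described above is with a small iRule in front of the application. This is a hedged sketch, not a full ACME integration: the acme_challenges data group is hypothetical and would have to be populated with token/key-authorization pairs by whatever ACME client you run.

when HTTP_REQUEST {
    # Answer HTTP-01 challenges; all other traffic passes through untouched
    if { [HTTP::path] starts_with "/.well-known/acme-challenge/" } {
        set token [lindex [split [HTTP::path] "/"] end]
        # look up the key authorization for this token in a (hypothetical) data group
        set keyauth [class match -value ${token} equals acme_challenges]
        if { ${keyauth} ne "" } {
            HTTP::respond 200 content ${keyauth} "Content-Type" "text/plain"
        } else {
            HTTP::respond 404 content "challenge not found"
        }
    }
}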
SSL Orchestrator Advanced Use Cases: Outbound SNAT Persistence

Introduction

F5 BIG-IP is synonymous with "flexibility". You likely have few other devices in your architecture that provide the breadth of capabilities that come native with the BIG-IP platform. And for each and every BIG-IP product module, the opportunities to expand functionality are almost limitless. In this article series we examine the flexibility options of the F5 SSL Orchestrator in a set of "advanced" use cases. If you haven't noticed, the world has been steadily moving toward encrypted communications. Everything from web, email, voice, video, chat, and IoT is now wrapped in TLS, and that's a good thing. The problem is, malware - that thing that creates havoc in your organization, that exfiltrates personnel records to the Dark Web - isn't stopped by encryption. TLS 1.3 and multi-factor authentication don't eradicate malware. The only reasonable way to defend against it is to catch it in the act, and an entire industry of security products is designed for just this task. But ironically, encryption makes this hard. You can't protect against what you can't see. F5 SSL Orchestrator simplifies traffic decryption and malware inspection, and dynamically orchestrates traffic to your security stack. But it does much more than that. SSL Orchestrator is built on top of F5's BIG-IP platform, and as stated earlier, abounds with flexibility.

SSL Orchestrator Use Case: Outbound SNAT Persistence

It may not be the most obvious thing to think about persistence in the vein of outbound traffic. We are all groomed to accept that any given load balancer can handle persistence (or "affinity", or "stickiness") to backend servers. This is an important characteristic for sure. But in an outbound scenario, you don't load balance remote servers, so why on Earth would you need persistence? Well, I'm glad you asked. There indeed happens to be a somewhat unique, albeit infrequent use case where two different servers need to persist on YOUR IP address. The classic example is a site that requires federated authentication, where the service provider (SP) generates a token (perhaps a SAML auth request) and inside of that request the SP has embedded the client IP. The client receives this message and is redirected to the IdP to authenticate. But in this case the client is talking to the outside world through a forward proxy, and outbound source NAT (SNAT) could be required in this environment. That means there's a potential that the client IP address as seen from the two remote servers could be different. So if the IdP needs to verify the client IP based on what's embedded in the authentication request token, that could possibly fail. The good news here is that federated authentication doesn't normally require client IP verification, and there aren't many other similar use cases, but it can happen. The F5 BIG-IP, as with ANY proxy server, load balancer, or ADC device, clearly supports server affinity, and in a highly flexible way. But, as with ANY proxy server, load balancer, or ADC device, that doesn't apply to SNAT addresses. Nevertheless, the F5 BIG-IP can be configured to do this, which is exactly what this article is about. We're going to flex some BIG-IP muscle to derive a unique and innovative way to enable outbound SNAT persistence. What we're basically talking about is ensuring that a single internal client persists a single outbound SNAT IP address, when and where needed, and as long as possible.
It's important to note here that we're not really talking about persistence in the same way you think about load balanced server affinity. With affinity, you're stapling a single (remote) client "session" to a single load balanced server. With SNAT persistence, you're stapling a single outbound SNAT IP to a single internal client so that all remote servers see that same source address. Same-same but different-different. To do this we'll need a SNAT pool and an iRule. We need the SNAT pool to define the SNAT addresses we can use. And since SNAT pools don't provide a persistence option like regular pools do, we'll use an iRule to provide the stickiness. It's also worth noting here, again since we're not really talking about load balancing stickiness, that the IP persistence mechanism in the iRule may not (likely will not) evenly distribute the IPs in the SNAT pool. Your best bet is to provide as many SNAT pool IPs as possible and reasonable. The good news here is that, because you're using a BIG-IP, you can define exactly how you assert that IP stickiness. In most cases, you'll probably just want to persist on the internal client IP, but you could also persist on:
Client source address and remote server port
Client source address and remote destination addresses
Client source, day of the week, the year+month+day % mod 2, a hash of the word-of-the-day...and hopefully you get the idea. Lots of options.

To make this work, let's start with the SNAT pool. Navigate to Local Traffic -> Address Translation -> SNAT Pool List in the BIG-IP and click Create. In the Member List section, add as many SNAT IPs as you can afford. Remember, these are going to be IPs on your outbound VLAN, so in the same subnet as your outbound VLAN self-IP.

Figure: SNAT pool list

You don't need to assign the SNAT pool to anything directly. The iRule will handle that. And now onto the iRule. Navigate to Local Traffic -> iRules -> iRule List in the BIG-IP, and click Create. Copy the following into the iRule editor:

when RULE_INIT {
    ## This iRule should be applied to your SSLO interception rule ending with in-t-4.
    catch { unset -nocomplain static::snat_ips }

    ## For each SNAT IP needed define the IP versus dynamically looking it up.
    ## These need to be in the real SNAT pool as well so ARP works.
    set static::snat_ips(1) 10.1.20.50
    set static::snat_ips(2) 10.1.20.51
    set static::snat_ips(3) 10.1.20.52
    set static::snat_ips(4) 10.1.20.53
    set static::snat_ips(5) 10.1.20.54

    ## Set to how many SNAT IPs were added
    set static::array_size 5
}
when CLIENT_ACCEPTED priority 100 {
    ## Select and uncomment only ONE of the below SNAT persistence options

    ## Persist SNAT based on client address only
    snat $static::snat_ips([expr {[crc32 [IP::client_addr]] % $static::array_size}])

    ## Persist SNAT based on client address and remote port
    #snat $static::snat_ips([expr {[crc32 [IP::client_addr] [TCP::remote_port]] % $static::array_size}])

    ## Persist SNAT based on client address and remote address
    #snat $static::snat_ips([expr {[crc32 [IP::client_addr] [IP::local_addr]] % $static::array_size}])
}

Let's take a moment to explain what this iRule is actually doing, and it is fairly straightforward. In RULE_INIT, which fires ONCE when you update the iRule, the members of the defined SNAT pool are read into an array. Then a second static variable is created to store the size of the array. These values are stored as static, global variables.
In CLIENT_ACCEPTED we set a priority of 100 to control the order of execution under SSL Orchestrator as there is already a CLIENT_ACCEPTED iRule event on the topology (we want our new event to run first). Below that you're provided with three choices for persistence: persist on source IP only, source IP and destination port, or source IP and destination IP. You'll want to uncomment only ONE of these. Each basically performs a quick CRC hash on the selected value, then calculates a modulus based on the array size. This returns a number within the size of the array, that is then applied as the index to the array to extract one of the array values. This calculation is always the same for the same input value(s), so effectively persisting on that value. The selected SNAT IP is then fed to the 'snat' command, and there you have it. As stated, you're probably only going to need the source-only persistence option. Using either of the others will pin a SNAT IP to a client IP and protocol port (ex. client IP:443 or client IP:80), or pin a SNAT IP to a specific host (ex. client IP:www.example.com), respectively. At the end of the day, you can insert any reasonable expression that will result in the selection of one of the values in the SNAT pool array, so the sky is really the limit here (one more variation is sketched at the end of this article).

The last step is easiest of all. You need to attach this iRule to your SSL Orchestrator topology. To do that, navigate to SSL Orchestrator -> Configuration in the UI, select the Interception Rules tab, and click to edit the respective outbound interception rule. Scroll to the bottom of this page, and under Resources, add the new iRule to the Selected column. The order doesn't matter. Click Deploy to complete the change, and you're done. You can do a packet capture on your outbound VLAN to see what is happening.

tcpdump -lnni [outbound vlan] host 93.184.216.34

And then access https://www.example.com to test. For your IP address you should see a consistent outgoing SNAT IP. If you have access to a Linux client, you can add multiple IP addresses to an interface and test with each:

ifconfig eth0:1 10.1.10.51
ifconfig eth0:2 10.1.10.52
ifconfig eth0:3 10.1.10.53
ifconfig eth0:4 10.1.10.54
ifconfig eth0:5 10.1.10.55

curl -vk https://www.example.com --interface 10.1.10.51
curl -vk https://www.example.com --interface 10.1.10.52
curl -vk https://www.example.com --interface 10.1.10.53
curl -vk https://www.example.com --interface 10.1.10.54
curl -vk https://www.example.com --interface 10.1.10.55

And again there you have it. In just a few steps you've been able to enable outbound SNAT persistence, and along the way you have hopefully recognized the immense flexibility at your command.
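As one more illustration of the "sky is the limit" point above, here is an untested variation of the keying expression that rotates each client's SNAT mapping once per calendar day by folding the date into the hash. Treat it as a sketch rather than a recommendation:

    ## Hypothetical variation: persist SNAT per client address, but rotate the mapping daily
    snat $static::snat_ips([expr {[crc32 [IP::client_addr] [clock format [clock seconds] -format "%Y%m%d"]] % $static::array_size}])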
Understanding the Authenticate Name Option on Server SSL profile on BIG-IP

Quick Intro

Have you ever wondered what this little option on Server SSL profile really does in practice? This is what this article is all about. If you're only interested in what I learnt during my lab tests, please feel free to read just the Lab Test Results section. Otherwise, enjoy the lab walkthrough :)

Lab Test Results

In case you just want to know what it is, this setting looks at both commonName and the subjectAltName extension, and if whatever name is set on Authenticate Name doesn't match the name in any of these 2 fields I mentioned, certificate authentication fails and BIG-IP resets the connection. Note: for Authenticate Name to work, Server Certificate has to be set to Require, i.e. BIG-IP should be configured to check the validity of the server-side certificate. On top of that, ca-file (Trusted Certificate Authorities in the GUI) is also required to be set, otherwise BIG-IP has no trusted Root CA list to validate the server's certificate.

What it is

This is like another layer of authentication. When the server sends us a Certificate and peer-cert-mode is set to require, BIG-IP looks up the Root CA present in ca-file and confirms that the server's certificate is trusted. Then, when authenticate-name is also set, BIG-IP checks the subjectAltName extension and commonName for a match of what we've typed in this field. If no match is found, certificate authentication fails and we do not trust the certificate. Otherwise, the certificate is trusted and we proceed with the handshake.

Lab Test

When Authenticate Name does not match Certificate's commonName or subjectAltName

I created an end-entity X.509v3 TLS Certificate and set commonName (CN) to server1.rodrigo and the subjectAltName extension to server01.rodrigo as seen below: I set Server Certificate (peer-cert-mode in tmsh) to require, added the Root CA that signed the back-end server's certificate to BIG-IP's Trusted Certificate Authorities (ca-file in tmsh) and authenticate-name to fail.rodrigo: On Wireshark, we see that authentication fails as soon as we receive the Certificate message from the server:
- If not, authentication fails immediately and we never get to use authenticate-name
- In this case it was, so BIG-IP moves on to check whether server1.rodrigo is in either the commonName or subjectAltName field of the back-end's X.509v3 certificate
- Because server1.rodrigo matches server1.rodrigo in both the commonName and subjectAltName fields, authentication succeeds and the TLS handshake proceeds
If you ever wondered where to find commonName and subjectAltName in the TLS headers, here's where they are: the snippet above is from the back-end's Certificate message that is sent to BIG-IP as part of the TLS handshake. Hope that's helpful.
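A quick troubleshooting aside before the next topic: if you want to see from the BIG-IP itself what the back end presented during this validation, without firing up Wireshark each time, a small diagnostic iRule can log it. This is my own sketch, not part of the original lab; it assumes you attach it temporarily to the virtual server carrying the Server SSL profile, and it only reads the certificate — it does not change the authentication behavior described above:

when SERVERSSL_HANDSHAKE {
    # Log the subject and issuer of the certificate the back-end server presented,
    # plus BIG-IP's verification result for that certificate (0 means verified OK)
    if { [SSL::cert count] > 0 } {
        log local0. "Server cert subject: [X509::subject [SSL::cert 0]]"
        log local0. "Server cert issuer: [X509::issuer [SSL::cert 0]]"
        log local0. "Verify result code: [SSL::verify_result]"
    }
}

The subject line is where the commonName lives, so the log shows one of the two values authenticate-name is compared against (the subjectAltName extension is not part of the X509::subject output).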
Handling HTTP Requests on an HTTPS Virtual Server
There are scenarios where it might be prudent to support HTTP request redirection on a single port, and thus, a single virtual server. Yes, this can be done with the alias port zero, but that locks all other ports down unless you plan to build out a pretty extensive iRule to support the various services required for each port. This latter option is less than ideal. So what can be done?
TL;DR - only two steps required. First, check the Non-SSL Connections box in the SSL profile. Second, create an iRule to redirect non-SSL connections to SSL.
The Details
I have a test virtual server, appropriately called testvip, and on that testvip I have no iRules and a clientssl profile attached with only my certificate key chain changed from the parent profile.
ltm virtual testvip {
    destination 192.168.102.50:https
    ip-protocol tcp
    mask 255.255.255.255
    pool testpool
    profiles {
        cssl { context clientside }
        http { }
        tcp { }
    }
    source 0.0.0.0/0
    source-address-translation { type automap }
    translate-address enabled
    translate-port enabled
    vs-index 50
}
ltm profile client-ssl cssl {
    app-service none
    cert myssl.crt
    cert-key-chain {
        myssl {
            cert myssl.crt
            key myssl.key
        }
    }
    chain none
    defaults-from clientssl
    inherit-certkeychain false
    key myssl.key
    passphrase none
}
In this configuration, I expect that normal HTTPS requests will work just fine, and HTTP requests will fail. Let's take a look with curl, first with the working HTTPS and then the failing HTTP (leaving curl details out for brevity here):
### HTTPS ###
MY-MAC:~ rahm$ curl -v -s -k https://test.test.local/ 1> /dev/null
-> HTTP/1.1 200 OK
### HTTP ###
MY-MAC:~ rahm$ curl -v -s -k http://test.test.local:443 1> /dev/null
-> Empty reply from server
Now that we have confirmed that the regular HTTP request is not working, let's enable non-SSL connections. Again, that's done with the checkbox in the clientssl profile:
Now let's try that second test again.
### HTTP ###
MY-MAC:~ rahm$ curl -v -s -k http://test.test.local:443 1> /dev/null
-> HTTP/1.1 200 OK
That's pretty cool, we successfully made an HTTP request on the HTTPS single-port virtual server! But that's not the endgame, and is most certainly not a desired state. Let's take the final step with the iRule to capture any non-SSL connections and redirect them. Here's the iRule:
when CLIENTSSL_HANDSHAKE {
    set https_state 1
}
when HTTP_REQUEST {
    if { ![info exists https_state] } {
        HTTP::redirect https://[HTTP::host][HTTP::uri]
    }
}
This is optimized slightly from the original iRule from gasch that inspired this article. First, we look at the CLIENTSSL_HANDSHAKE event and set a variable to true to capture that this connection is indeed an SSL connection. Then in the HTTP_REQUEST event, if that variable doesn't exist, we know that the connection is not SSL and we take the redirect action. Simple and sleek! Now, with that iRule applied, let's test again:
### HTTP ###
MY-MAC:~ rahm$ curl -v -s -k http://test.test.local:443 1> /dev/null
* Rebuilt URL to: http://test.test.local:443/
*   Trying 192.168.102.50...
* TCP_NODELAY set
* Connected to test.test.local (192.168.102.50) port 443 (#0)
> GET / HTTP/1.1
> Host: test.test.local:443
> User-Agent: curl/7.54.0
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 302 Found
< Location: https://test.test.local:443/
< Server: BigIP
* HTTP/1.0 connection set to keep alive!
< Connection: Keep-Alive
< Content-Length: 0
<
* Connection #0 to host test.test.local left intact
And there it is!
You can see the HTTP request is accepted as before, but instead of the 200 OK status, you get the 302 Found redirect. Sometimes things like this are solutions looking for problems, but conserving ports, IP space, configuration objects, etc., could all be factors.
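One optional refinement: in the trace above, the Location header comes back as https://test.test.local:443/ because HTTP::host still carries the :443 suffix the client sent. That works, but if you would rather hand back a clean URL, the same iRule can strip the port before redirecting. This is a hedged variation of my own, not part of the original example:

when CLIENTSSL_HANDSHAKE {
    set https_state 1
}
when HTTP_REQUEST {
    if { ![info exists https_state] } {
        # Drop any :port suffix from the Host header so the redirect is a plain https:// URL
        set host [lindex [split [HTTP::host] ":"] 0]
        HTTP::redirect "https://${host}[HTTP::uri]"
    }
}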
Implementing ECC+PFS on LineRate (Part 1/3): Choosing ECC Curves and Preparing SSL Certificates
(Editor's note: the LineRate product has been discontinued for several years. 09/2023)
---
Overview
In case you missed it, Why ECC and PFS Matter: SSL offloading with LineRate details some of the reasons why ECC-based SSL has advantages over RSA cryptography for both performance and security. This article will generate all the necessary ECC certificates with the secp384r1 curve so that they may be used to configure a LineRate system for SSL offload.
Getting Started with LineRate
In order to appreciate the advantages of SSL/TLS offload available via LineRate as discussed in this article, let's take a closer look at how to configure SSL/TLS offloading on a LineRate system. This example will implement Elliptical Curve Cryptography and Perfect Forward Secrecy. SSL offloading will be added to an existing LineRate system that has one public-facing Virtual IP (10.10.11.11) that proxies web requests to a Real Server on an internal network (10.10.10.1). The following diagram demonstrates this configuration:
Figure 1: A high-level implementation of SSL Offload
Overall, these steps will be completed in order to enable SSL offloading on the LineRate system:
- Generate a private key specifying the secp384r1 elliptic curve
- Obtain a certificate from a CA
- Configure an SSL profile and attach it to the Virtual IP
Note that this implementation will enable only ECDHE cipher suites. ECDH cipher suites are available, but these do not implement the PFS feature. Further, in production deployments, considerations to implement additional types of SSL cryptography might be needed in order to allow backward compatibility for older clients.
Generating a private key for Elliptical Curve Cryptography
When considering the ECC curve to use for your environment, you may choose one from the currently available curves list in the LineRate documentation. It is important to be cognizant of the curve support for the browsers or applications your application targets. Generally, the NIST P-256, P-384, and P-521 curves have the widest support. This example will use the secp384r1 (NIST P-384) curve, which provides security roughly equivalent to a 7680-bit RSA key. Supported curves with OpenSSL can be found by running the openssl ecparam -list_curves command, which may be important depending on which curve is chosen for your SSL/TLS deployment.
Using OpenSSL, a private key is generated for use with ssloffload.lineratesystems.com. The ECC SECP curve over a 384-bit prime field (secp384r1) is specified:
openssl ecparam -genkey -name secp384r1 -out ssloffload.lineratesystems.com.key.pem
This command results in the following private key:
-----BEGIN EC PARAMETERS-----
BgUrgQQAIg==
-----END EC PARAMETERS-----
-----BEGIN EC PRIVATE KEY-----
MIGkAgEBBDD1Kx9hghSGCTujAaqlnU2hs/spEOhfpKY9EO3mYTtDmKqkuJLKtv1P
1/QINzAU7JigBwYFK4EEACKhZANiAASLp1bvf/VJBJn4kgUFundwvBv03Q7c3tlX
kh6Jfdo3lpP2Mf/K09bpt+4RlDKQynajq6qAJ1tJ6Wz79EepLB2U40fC/3OBDFQx
5gSjRp8Y6aq8c+H8gs0RKAL+I0c8xDo=
-----END EC PRIVATE KEY-----
Generating a Certificate Request (CSR) to provide the Certificate Authority (CA)
After the private key is obtained, a certificate request (CSR) can be created.
Using OpenSSL again, the following command is issued, filling out all relevant information in the successive prompts:
openssl req -new -key ssloffload.lineratesystems.com.key.pem -out ssloffload.lineratesystems.com.csr.pem
This results in the following CSR:
-----BEGIN CERTIFICATE REQUEST-----
MIIB3jCCAWQCAQAwga8xCzAJBgNVBAYTAlVTMREwDwYDVQQIEwhDb2xvcmFkbzET
MBEGA1UEBxMKTG91aXN2aWxsZTEUMBIGA1UEChMLRjUgTmV0d29ya3MxGTAXBgNV
BAsTEExpbmVSYXRlIFN5c3RlbXMxJzAlBgNVBAMTHnNzbG9mZmxvYWQubGluZXJh
dGVzeXN0ZW1zLmNvbTEeMBwGCSqGSIb3DQEJARYPYS5yYWdvbmVAZjUuY29tMHYw
EAYHKoZIzj0CAQYFK4EEACIDYgAEi6dW73/1SQSZ+JIFBbp3cLwb9N0O3N7ZV5Ie
iX3aN5aT9jH/ytPW6bfuEZQykMp2o6uqgCdbSels+/RHqSwdlONHwv9zgQxUMeYE
o0afGOmqvHPh/ILNESgC/iNHPMQ6oDUwFwYJKoZIhvcNAQkHMQoTCGNpc2NvMTIz
MBoGCSqGSIb3DQEJAjENEwtGNSBOZXR3b3JrczAJBgcqhkjOPQQBA2kAMGYCMQCn
h1NHGzigooYsohQBzf5P5KO3Z0/H24Z7w8nFZ/iGTEHa0+tmtGK/gNGFaSH1ULcC
MQCcFea3plRPm45l2hjsB/CusdNo0DJUPMubLRZ5mgeThS/N6Eb0AHJSjBJlE1fI
a4s=
-----END CERTIFICATE REQUEST-----
Obtaining a Certificate from a Certificate Authority (CA)
Rather than using a self-signed certificate, a test certificate is obtained from Entrust. Upon completing the certificate request and receiving the certificate from Entrust, a simple conversion needs to be done to PEM format. This can be done with the following OpenSSL command:
openssl x509 -inform der -in ssloffload.lineratesystems.com.cer -out ssloffload.lineratesystems.com.cer.pem
This results in the following certificate:
-----BEGIN CERTIFICATE-----
MIIC5jCCAm2gAwIBAgIETUKHWzAKBggqhkjOPQQDAzBtMQswCQYDVQQGEwJVUzEW
MBQGA1UEChMNRW50cnVzdCwgSW5jLjEfMB0GA1UECxMWRm9yIFRlc3QgUHVycG9z
ZXMgT25seTElMCMGA1UEAxMcRW50cnVzdCBFQ0MgRGVtb25zdHJhdGlvbiBDQTAe
Fw0xNDA4MTExODQ3MTZaFw0xNDEwMTAxOTE3MTZaMGkxHzAdBgNVBAsTFkZvciBU
ZXN0IFB1cnBvc2VzIE9ubHkxHTAbBgNVBAsTFFBlcnNvbmEgTm90IFZlcmlmaWVk
MScwJQYDVQQDEx5zc2xvZmZsb2FkLmxpbmVyYXRlc3lzdGVtcy5jb20wdjAQBgcq
hkjOPQIBBgUrgQQAIgNiAASLp1bvf/VJBJn4kgUFundwvBv03Q7c3tlXkh6Jfdo3
lpP2Mf/K09bpt+4RlDKQynajq6qAJ1tJ6Wz79EepLB2U40fC/3OBDFQx5gSjRp8Y
6aq8c+H8gs0RKAL+I0c8xDqjgeEwgd4wDgYDVR0PAQH/BAQDAgeAMB0GA1UdJQQW
MBQGCCsGAQUFBwMBBggrBgEFBQcDAjA3BgNVHR8EMDAuMCygKqAohiZodHRwOi8v
Y3JsLmVudHJ1c3QuY29tL0NSTC9lY2NkZW1vLmNybDApBgNVHREEIjAggh5zc2xv
ZmZsb2FkLmxpbmVyYXRlc3lzdGVtcy5jb20wHwYDVR0jBBgwFoAUJAVL4WSCGvgJ
zPt4eSH6cOaTMuowHQYDVR0OBBYEFESqK6HoSFIYkItcfekqqozX+z++MAkGA1Ud
EwQCMAAwCgYIKoZIzj0EAwMDZwAwZAIwXWvK2++3500EVaPbwvJ39zp2IIQ98f66
/7fgroRGZ2WoKLBzKHRljVd1Gyrl2E3BAjBG9yPQqTNuhPKk8mBSUYEi/CS7Z5xt
dXY/e7ivGEwi65z6iFCWuliHI55iLnXq7OU=
-----END CERTIFICATE-----
Note that the certificate generation process with Elliptical Curve Cryptography will feel very familiar compared to traditional cryptographic algorithms like RSA. Only a few differences are found in the generation of the private key, where an ECC curve is specified.
Continue the Configuration
Now that the certificates needed to configure Elliptical Curve Cryptography have been created, it is time to configure SSL offloading on LineRate. Part 2: Configuring SSL Offload on LineRate continues the demonstration of SSL offloading by importing the certificate information generated in this article and getting the system up and running. In case you missed it, Why ECC and PFS Matter: SSL offloading with LineRate details some of the reasons why ECC-based SSL has advantages over RSA cryptography for both performance and security.
(Editor's note: the LineRate product has been discontinued for several years. 09/2023)
Stay Tuned!
Next week a demonstration on how to verify a correct implementation of SSL with ECC+PFS on LineRate will make its debut on DevCentral. The article will detail how to check for ECC SSL on the wire via Wireshark and in the browser. In the meantime, take some time to download LineRate and test out its SSL offloading capabilities. In case you missed any content, or would like to reference it again, here are the articles related to implementing SSL Offload with ECC and PFS on LineRate:
- Why ECC and PFS Matter: SSL offloading with LineRate
- Implementing ECC+PFS on LineRate (Part 1/3): Choosing ECC Curves and Preparing SSL Certificates
- Implementing ECC+PFS on LineRate (Part 2/3): Configuring SSL Offload on LineRate
- Implementing ECC+PFS on LineRate (Part 3/3): Confirming the Operation of SSL Offloading
Automate Let's Encrypt Certificates on BIG-IP
To quote the evil emperor Zurg: "We meet again, for the last time!" It's hard to believe it's been six years since my first rodeo with Let's Encrypt and BIG-IP, but (uncompromised) timestamps don't lie. And maybe this won't be my last look at Let's Encrypt, but it will likely be the last time I do so as a standalone effort, which I'll come back to at the end of this article. The first project was a compilation of shell scripts and python scripts and config files and well, this is no different. But it's all updated to meet the acme protocol version requirements for Let's Encrypt. Here's a quick table to connect all the dots:
Description           | What's Out          | What's In
acme client           | letsencrypt.sh      | dehydrated
python library        | f5-common-python    | bigrest
BIG-IP functionality  | creating the SSL profile | utilizing an iRule for the HTTP challenge
The f5-common-python library has not been maintained or enhanced for at least a year now, and I have an affinity for the good work Leo did with bigrest, so I enjoy using it. I opted not to carry the SSL profile configuration forward because that functionality is more app-specific than the certificates themselves. And finally, whereas my initial project used the DNS challenge with the name.com API, in this proof of concept I chose to use an iRule on the BIG-IP to serve the challenge for Let's Encrypt to perform validation against. Whereas my solution is new, the way Let's Encrypt works has not changed, so I've carried forward the process from my previous article that I've now archived. I'll defer to their how it works page for details, but basically the steps are:
- Define a list of domains you want to secure
- Your client reaches out to the Let's Encrypt servers to initiate a challenge for those domains
- The servers will issue an http or dns challenge based on your request
- You need to place a file on your web server or a txt record in the dns zone file with that challenge information
- The servers will validate your challenge information and notify you
- You will clean up your challenge files or txt records
- The servers will issue the certificate and certificate chain to you
- You now have the key, cert, and chain, and can deploy to your web servers or, in our case, to the BIG-IP
Before kicking off a validation and generation event, the client registers your account based on your settings in the config file. The files in this project are as follows:
/etc/dehydrated/config          # Dehydrated configuration file
/etc/dehydrated/domains.txt     # Domains to sign and generate certs for
/etc/dehydrated/dehydrated      # acme client
/etc/dehydrated/challenge.irule # iRule configured and deployed to BIG-IP by the hook script
/etc/dehydrated/hook_script.py  # Python script called by dehydrated for special steps in the cert generation process
# Environment Variables
export F5_HOST=x.x.x.x
export F5_USER=admin
export F5_PASS=admin
You add your domains to the domains.txt file (more work likely if signing a lot of domains; I tested the one I have access to). The dehydrated client, of course, is required, and then the hook script that dehydrated interacts with to deploy challenges and certificates. I aptly named that hook_script.py. For my hook, I'm deploying a challenge iRule to be applied only during the challenge; it is modified each time specific to the challenge supplied from the Let's Encrypt service and is cleaned up after the challenge is tested. And finally, there are a few environment variables I set so the information is not in text files. You could also move these into a credential vault.
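To make the challenge piece more concrete, here is a minimal sketch of the kind of iRule the hook script generates for an HTTP-01 challenge. The TOKEN and KEYAUTH values are placeholders the hook would substitute per challenge, and the actual challenge.irule in the repository may differ from this simplified version:

when HTTP_REQUEST {
    # Answer only the ACME HTTP-01 validation path; everything else passes through untouched
    if { [HTTP::path] equals "/.well-known/acme-challenge/TOKEN" } {
        # Let's Encrypt expects the key authorization string as a plain-text body
        HTTP::respond 200 content "KEYAUTH" "Content-Type" "text/plain"
    }
}

Because the rule is only attached to the virtual server for the duration of the challenge and then removed (the "(hook) Challenge rule added/removed" lines in the CLI output further down), a stale token never lingers in the configuration.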
So to recap, you first register your client, then you can kick off a challenge to generate and deploy certificates. On the client side, it looks like this:
./dehydrated --register --accept-terms
./dehydrated -c
Now, for testing, make sure you use the Let's Encrypt staging service instead of production. And since I want to force action on every request while testing, I run the second command a little differently:
./dehydrated -c --force --force-validation
Depicted graphically, here are the moving parts for the http challenge issued by Let's Encrypt at the request of the dehydrated client, deployed to the F5 BIG-IP, and validated by the Let's Encrypt servers. The Let's Encrypt servers then generate and return certs to the dehydrated client, which then, via the hook script, deploys the certs and keys to the F5 BIG-IP to complete the process. And here's the output of the dehydrated client and hook script in action from the CLI:
# ./dehydrated -c --force --force-validation
# INFO: Using main config file /etc/dehydrated/config
Processing example.com
 + Checking expire date of existing cert...
 + Valid till Jun 20 02:03:26 2022 GMT (Longer than 30 days). Ignoring because renew was forced!
 + Signing domains...
 + Generating private key...
 + Generating signing request...
 + Requesting new certificate order from CA...
 + Received 1 authorizations URLs from the CA
 + Handling authorization for example.com
 + A valid authorization has been found but will be ignored
 + 1 pending challenge(s)
 + Deploying challenge tokens...
 + (hook) Deploying Challenge
 + (hook) Challenge rule added to virtual.
 + Responding to challenge for example.com authorization...
 + Challenge is valid!
 + Cleaning challenge tokens...
 + (hook) Cleaning Challenge
 + (hook) Challenge rule removed from virtual.
 + Requesting certificate...
 + Checking certificate...
 + Done!
 + Creating fullchain.pem...
 + (hook) Deploying Certs
 + (hook) Existing Cert/Key updated in transaction.
 + Done!
This results in a deployed certificate/key pair on the F5 BIG-IP, which is modified in a transaction for future updates. This proof of concept is on GitHub in the f5devcentral org if you'd like to take a look. Before closing, however, I'd like to mention a couple of things: this is an update to an existing solution from years ago. It works, but it probably isn't the best way to automate today if you're just getting started or have already started pursuing a more modern approach to automation. A better path would be something like Ansible. On that note, there are several solutions you can take a look at, posted below in the resources.
Resources
https://github.com/EquateTechnologies/dehydrated-bigip-ansible
https://github.com/f5devcentral/ansible-bigip-letsencrypt-http01
https://github.com/s-archer/acme-ansible-f5
https://github.com/s-archer/terraform-modular/tree/master/lets_encrypt_module (Terraform instead of Ansible)
https://community.f5.com/t5/technical-forum/let-s-encrypt-with-cloudflare-dns-and-f5-rest-api/m-p/292943 (Similar solution to mine, only slightly more robust with OCSP stapling, the DNS instead of HTTP challenge, and with bash instead of python)
Bleichenbacher vs. Forward Secrecy: How much of your TLS is still RSA?
The RSA algorithm has been the go-to public key algorithm for the last fifteen years. But, perhaps like RC4, MD5 and Al Franken, it's time for RSA to retire. TLS 1.3, the upcoming version of the de facto web encryption protocol, does not even include RSA among its allowed key exchange algorithms. The world is moving toward "Forward Secret" ciphers which use ephemeral keys, exchanged with either elliptic curve or straight-up Diffie-Hellman cryptography. These forward secret ciphers are typically noted as ECDHE or DHE. The former is vastly preferred these days; there are nearly 20 ECDHE servers for every DHE server. Of immediate concern to F5 users is the recent issuance of CVE-2017-6168, a series of Bleichenbacher-style attacks against F5 RSA key exchanges from version 11.6 to version 13. Patches have been issued for the vulnerable versions, but some customers have complicated patching schedules. They're wondering if they can simply disable the RSA protocol on their F5 virtual servers and offer only forward secret ciphers. The general answer is "very probably." All modern browsers prefer forward secret ciphers, so most modern human end-users already use them. A small but statistically significant number of F5 deployments offer no forward secrecy because they rely on passive TLS monitoring. If that is you, then may I suggest that instead of reading this article, you watch this ten-minute light-board video made specially for you. It's about how to do passive monitoring even with TLS 1.3:
SSL Visibility: The Ultimate Passive Inspection Architecture
But back to the task at hand. Let's assume that you aren't currently disabling forward secrecy, and we're back to the question "can you disable RSA?" That depends on how many of your users are still using it. Maybe your application has a bunch of automated queries from bespoke legacy software that only uses RSA. Or maybe your Jet Li fan site, which is still somehow in Alexa's top 8 billion list, still receives a lot of visitors running Windows XP in Guangdong Province, using TLS 1.0, RSA and RC4. How would you know?
Getting the TLS Statistics
You can see what percentage of your customers are still using RSA instead of ECDHE with either the F5 graphical user interface (GUI) or the command line (CLI). I'll give examples of both methods using version 13.0, but these statistics have been available in the same places since before germs. In the GUI, from the Main tab on the left, select the Statistics control at the top of the list. Then select the Module Statistics menu and from that, the Local Traffic menu. When the screen refreshes, you'll see a selector under Display Options titled Statistics Type. Click it and choose Profiles Summary. You'll see a giant list of profile types. When you click the View… link next to "Client SSL", a giant, juicy list of crypto stats will appear. You'll be interested in one group in particular: the one named "Key Exchange Method". Very likely most of the key exchange types will have 0 entries (no one should be using anonymous Diffie-Hellman, for example). But three in the middle include the two forward secrecy algorithms, ephemeral Diffie-Hellman (DHE) and ephemeral elliptic curve Diffie-Hellman (ECDHE). Sandwiched between them is the RSA cipher. Here's a sample:
Add the values of the two forward secret ciphers and compare that sum to the RSA count. In this example, there are 99 forward secret key exchanges and only 5 RSA key exchanges, for a rate of just under 5%.
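If you would rather watch live traffic than interpret counters — say, to see exactly which clients are still landing on plain RSA — a small logging iRule can flag each handshake that negotiated a non-forward-secret cipher. This is a quick diagnostic sketch of my own, not something from the statistics screens above; it leans on the fact that both ECDHE and DHE cipher names contain "DHE", so anything without that substring is an RSA (or other non-FS) key exchange:

when CLIENTSSL_HANDSHAKE {
    # Forward secret suites (ECDHE-... or DHE-...) all contain "DHE" in the cipher name
    if { [SSL::cipher name] contains "DHE" } {
        return
    }
    # Anything left negotiated a non-forward-secret key exchange, most likely static RSA
    log local0. "Non-FS key exchange from [IP::client_addr]: [SSL::cipher name] ([SSL::cipher version])"
}

A few hours of these log lines (remove the rule afterward, since per-handshake logging isn't free) gives you a fresher picture than counters accumulated since the last reboot.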
You can get these same statistics from the command line with the simple tmsh command:
(tmos)# show ltm profile client-ssl
This command shows the profile statistics individually, and you may have to add them together to get the global numbers. Or maybe you wanted them broken down by specific profile in the first place. Having the RSA percentage at hand can help you decide whether or not you want to disable the RSA key exchange. If your F5 has a really long uptime, then the counts will include RSA key exchanges from months or years ago when forward secrecy wasn't so popular, and that might distort your decision. You could get more timely numbers by hitting the "clear statistics" button at the top of the page and then watching for a period of hours or days to see the mix of ECDHE vs. RSA key exchanges. I would record or screenshot the old numbers before you do, just in case.
Disabling RSA Key Exchanges
If you are a good enough administrator that you're already using F5's TMOS version 13, you can associate the "f5-ecc" cipher group with your client SSL profile and get only forward secret ciphers. Or you could use the cipher builder from the Local Traffic | Ciphers main tab to build a cipher group that excludes RSA. If you're using a version prior to 13.0, then see knowledge base article K21905460, associated with CVE-2017-6168, for some good cipher string recommendations. Or read the cipher string primer in my award-winning F5 SSL Recommended Practices guide. Note that using RSA certificates with forward secrecy is still okay. As long as each cipher in the cipher list includes ECDHE or DHE, you'll be safe from Bleichenbachers. Hopefully this information was of some use to you, dear reader, and can help you make the decision about how to treat your cryptographic key exchanges now and in the future.
Automate import of SSL Certificate, Key & CRL from BIG-IP to BIG-IQ
The functionality to automate the import of SSL cert & key from BIG-IP to BIG-IQ is available in the product starting with BIG-IQ 7.0 and above. This script should not be used on BIG-IQ 7.0+ as it has not been tested on those versions. This script will import all supported SSL Certificates, Keys & CRLs that exist as unmanaged objects on this BIG-IQ and can be found on the target BIG-IP.
Steps performed by the script:
- Gather certificate and key metadata (including cache-path) from BIG-IPs
- Download certificate and key file data from BIG-IPs
- Upload certificate and key file data to BIG-IQ
Prerequisite: Discover and import LTM services before using this script. The target BIG-IP will be accessed over SSH using the BIG-IP root account.
Installation: The script must be installed on BIG-IQ under /shared/scripts:
# mkdir /shared/scripts
# chmod +x /shared/scripts/import-bigip-cert-key-crl.py
Command example:
# ./import-bigip-cert-key-crl.py <big-ip IP address>
Enter the root user's password if prompted.
Allowed command line options:
-h  show this help message and exit
-l LOG_FILE  log to the given file name
--log-level {debug,info,warning,error,critical}  set logging to the given level (default: info)
-p PORT  BIG-IP ssh port (default: 22)
Result: Configuration > Certificate Management > Certificates & Keys
Before running the script:
After running the script:
Location of the scripts on GitHub: https://github.com/f5devcentral/f5-big-iq-pm-team
In case your BIG-IQ is running on hardware:
Step 1: Install packages using pip, targeting a location of your choice
# mkdir py-modules
# pip install --target py-modules requests argparse
Step 2: Run using python2.7, adding py-modules to the python path
# PYTHONPATH=py-modules python2.7 import-bigip-cert-key-crl.py <big-ip IP address>