Use F5 Distributed Cloud to control Primary and Secondary DNS
Overview Domain Name Service (DNS) is how humans and machines discover where to connect. DNS on the Internet is the universal directory mapping names to addresses. If you need to get support for the product Acme, you go to support.acme.com. Looking for the latest news headlines? Try www.aonn.com or www.npr.org. DNS is the underlying service that nearly every other service on the Internet depends on. Having a robust and reliable DNS provider is critical to keeping your organization online and working, especially during a DDoS attack. F5 Distributed Cloud DNS (F5 XC DNS) can function as either a primary or a secondary nameserver, and it natively includes DDoS protection. Using F5 XC DNS, it's possible to provision and configure primary or secondary DNS securely in minutes. Additionally, the service uses a global anycast network and is built to scale automatically in response to large query volumes. Dynamic security is included and adds automatic failover, DDoS protection, TSIG authentication support, and, when used as a secondary DNS, DNSSEC support. F5 Distributed Cloud allows you to manage all of your sites as a single "logical cloud" providing: - A portable platform that spans multiple sites/clouds - A private backbone that connects all sites - Connectivity to sites through its nodes (F5 Distributed Cloud Mesh and F5 Distributed Cloud App Stack) - Node flexibility, allowing nodes to run as virtual machines, on hardware within data centers and sites, or in cloud instances (e.g. EC2) - Nodes that provide vK8s (virtual K8s), network, and security services - Services managed through F5 Distributed Cloud's SaaS-based console Scenario 1 – F5 Distributed Cloud DNS: Primary Nameserver Consider the following: you're looking to improve the response time of your app with a geo-distributed solution, including DNS and app distribution. With F5 XC DNS configured as the primary nameserver, you automatically get DNS DDoS protection, and you'll see an improvement in the time to resolve DNS just by using Anycast with the regional points of presence in F5's global network. To configure F5 XC DNS to be the primary nameserver for your domain, access the F5 XC Console, go to DNS Management, and then Add Zone. Alternatively, if you're migrating from another DNS server or DNS service to F5 XC DNS, you can import the zone directly from your DNS server. Scenario 1.2 below illustrates how to import and migrate your existing DNS zones to F5 XC DNS. Here, enter the domain name (your DNS zone), and then select View Configuration for the Primary DNS. On the next screen, you may change any of the default SOA parameters for the zone, and add any type of resource record (RR) or record set that the DNS server will use to respond to queries. For example, you may want to return more than one A record (IP address) for the frontend of your app when it has multiple points of presence. To do this, enter as many IP addresses of record type A as needed to send traffic to all the points of ingress to your app. Additional Resource Record Sets allow the DNS server to return more than a single type of RR. For example, the following configuration returns two A (IPv4 address) records and one TXT record for a query of type ANY for "al.demo.internal".
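Once the zone is live, a quick external check helps confirm the records are being served as intended. The following is a minimal sketch using dig; it assumes the zone has been delegated to the F5 XC nameserver ns1.f5clouddns.com and reuses the record name from the example above (substitute your own publicly delegated domain when testing, since an .internal name won't resolve on the public nameservers):
# Query the example record set directly against an F5 XC nameserver.
dig @ns1.f5clouddns.com al.demo.internal A +short
dig @ns1.f5clouddns.com al.demo.internal TXT +short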
Optionally, if your root DNS zone has been configured for DNSSEC, then enabling it for the zone is just a matter of toggling the default setting in the F5 XC Console. Scenario 1.2 - Import an Existing Primary Zone to Distributed Cloud using Zone Transfer (AXFR) F5 XC DNS can use AXFR DNS zone transfer to import an existing DNS zone. Navigate to DNS Management > DNS Zone Management, then click Import DNS Zone. Enter the zone name and the externally accessible IP of the primary DNS server. ➡️ Note: You'll need to configure your DNS server and any firewall policies to allow zone transfers from F5. A current list of public IPs that F5 uses can be found in the following F5 tech doc. Optionally, configure a transaction signature (TSIG) to secure the DNS zone transfer. When you save and exit, F5 XC DNS executes a secondary nameserver zone AXFR and then transitions itself to be the zone's primary DNS server. To finish the process, you'll need to change the NS records for the zone at your domain name registrar. In the registrar, change the name servers to the following F5 XC DNS servers: ns1.f5clouddns.com ns2.f5clouddns.com Scenario 1.3 - Import Existing (BIND format) Primary Zones directly to Distributed Cloud F5 XC DNS can directly import BIND formatted DNS zone files in the Console, for example, db.2-0-192.in-addr.arpa and db.foo.com. Enterprises often use BIND as their on-prem DNS service; importing these files into Distributed Cloud makes it easier to migrate existing DNS records. To import existing BIND db files, navigate to DNS Management > DNS Zone Management, click Import DNS Zone, then "BIND Import". Now click "Import from File" and upload a .zip with one or more BIND db zone files. The import wizard accepts all primary DNS zones and ignores other zones and files. After uploading a .zip file, the next screen reports any warnings and errors. At this point you can "Save and Exit" to import the new DNS zones, or cancel to make any changes. For more complex zone configurations, including support for using $INCLUDE and $ORIGIN directives in BIND files, the following open source tool will convert BIND db files to JSON, which can then be copied directly to the F5 XC Console when configuring records for new and existing Primary DNS zones. BIND to XC-DNS Converter Scenario 2 - F5 Distributed Cloud DNS: Primary with Delegated Subdomains An enhanced capability when using Distributed Cloud (F5 XC) as the primary DNS server for your domains or subdomains is having services in F5 XC dynamically create their own DNS records, either directly in the primary domain or in subdomains. Note that before July 2023, the delegated DNS feature in F5 XC required the exclusive use of subdomains to dynamically manage DNS records. As of July 2023, organizations are allowed to have both F5 XC managed and user-managed DNS resource records in the same domain or subdomain. When "Allow HTTP Load Balancer Managed Records" is checked, DNS records automatically added by F5 XC appear in a new RR set group called x-ves-io-managed, which is read-only. In the following example, I've created an HTTP Load Balancer with the domain "www.example.f5-cloud-demo.com" and F5 XC automatically created the A resource record (RR) in the group x-ves-io-managed. Scenario 3 – F5 Distributed Cloud DNS: Secondary Nameserver In this scenario, say you already have a primary DNS server in your on-prem datacenter, but for security reasons, you don't want it to be directly accessible to queries from the Internet.
F5 XC DNS can be configured as a secondary DNS server and will both zone transfer (AXFR, IXFR) and receive (NOTIFY) updates from your primary DNS server as needed. To configure F5 XC DNS to be a secondary DNS server, go to Add Zone, then choose Secondary DNS Configuration. Next, select View Configuration for it, and add your primary DNS server IPs. To enhance the security of zone transfers and updates, F5 XC DNS supports TSIG encrypted transfers from the primary DNS server. To support TSIG, ensure your primary DNS server supports encryption, and enable it by entering the pre-shared key (PSK) name and its value. The PSK itself can be blindfold-encrypted in the F5 XC Console to prevent other Console admins from being able to view it. If encryption is desired, simply plug in the remaining details for your TSIG PSK and Apply. Once you've saved your new secondary DNS configuration, F5 XC DNS will immediately transfer your zone details and begin resolving queries on the F5 XC Global Network with its pool of Anycast-reachable DNS servers. Conclusion You've just seen how to configure F5 XC DNS both as a primary DNS and as a secondary DNS service. Ensure the reachability of your company with a robust, secure, and optimized DNS service from F5: a service that delivers the lowest resolution latency with its global Anycast network of nameservers, and one that automatically includes DDoS protection, DNSSEC, and TSIG support for secondary DNS. Watch the following demo video to see how to configure F5 XC DNS for scenarios #1 and #3 above. Additional Resources For more information about using F5 Distributed Cloud DNS: https://www.f5.com/cloud/products/dns For technical documentation: https://docs.cloud.f5.com/docs/how-to/app-networking/manage-dns-zones DNS Management FAQ: https://f5cloud.zendesk.com/hc/en-us/sections/7057223802519-DNS-Management DNS Demo Guide and step-by-step walkthrough: https://github.com/f5devcentral/f5xc-dns BIND to XC-DNS Converter (open source tool): https://github.com/Mikej81/BINDtoXCDNS
Configuring BIG-IP for Zone Transfer and DNSSEC
This article is for organizations that use F5 BIG-IP as their primary DNS. The guide consists of two parts. First, it shows you how to configure BIG-IP DNS to perform a zone transfer to a secondary DNS server. Second, it demonstrates how to enable DNSSEC (Domain Name System Security Extensions) on BIG-IP DNS. Part 1: Configure BIG-IP DNS for Zone Transfer This part of the article focuses on setting up BIG-IP for zone transfer. It assumes that you already have DNS records configured via ZoneRunner. With that in place, let's proceed to set up BIG-IP for zone transfer to a secondary DNS, which in our case will be F5 Distributed Cloud DNS. Step 1: Create a custom DNS Profile On the Main tab, click DNS > Delivery > Profiles, then click Create. Type a Name for the custom DNS profile. Select 'dns' as the Parent Profile from which it will inherit settings. In the DNS Traffic area, for Zone Transfer, select Enabled. Click Save & Close. Step 2: Create a custom DNS Listener On the Main tab, click DNS > Delivery > Listeners, then click Create. In the Name field, type a unique name for the listener. For the Destination setting, in the Address field, type an IPv4 address on which BIG-IP DNS listens for network traffic. In the Service area, from the Protocol list, select UDP. In the Service area, from the DNS Profile list, select the custom profile created in Step 1. Click Finished. Repeat these steps to create a TCP listener, but for the protocol, select TCP. Step 3: Generate a TSIG Key On the BIG-IP DNS command line, enter the following in bash: tsig-keygen -a HMAC-SHA256 <tsig name> Example: tsig-keygen -a HMAC-SHA256 example The output should be similar to this:
key "example" {
    algorithm hmac-sha256;
    secret "UAHXLiErXSTXw84QcaeWk2jLnU0GYXGWBQ2IT+rtfCU=";
};
Step 4: Configure the TSIG Key In the BIG-IP GUI, go to DNS > Delivery > Keys > TSIG Key List, then click Create. Name: example Algorithm: HMAC SHA-256 Secret: <paste the secret output generated in Step 3> Step 5: Create Nameservers Go to DNS > Delivery > Nameservers > Nameserver List, then click Create. Create the following nameserver objects: Name: localbind, Address: 127.0.0.1, Service Port: 53 Name: f5xcdns1, Address: 52.14.213.208, Service Port: 53, TSIG Key: example Name: f5xcdns2, Address: 3.140.118.214, Service Port: 53, TSIG Key: example The IP address details of F5 XC to be used in zone transfers can be found here: https://docs.cloud.f5.com/docs/reference/network-cloud-ref Step 6: Create a DNS Zone for Zone Transfer Go to DNS > Zones > Zones > Zones List, then click Create. Fill in the following details: Name: f5sg.com DNS Express :: Server: localbind Zone Transfer Clients :: Nameservers: select f5xcdns1 & f5xcdns2 TSIG :: Server Key: example Step 7: Include the TSIG key in named.conf On the BIG-IP command line, create and open a new file named tsig.key in the /var/named/config directory. For example, to use the vi editor to create a new file named tsig.key in the /var/named/config directory, enter the following command: vi /var/named/config/tsig.key To add the TSIG key, paste the output generated earlier:
key "example" {
    algorithm hmac-sha256;
    secret "UAHXLiErXSTXw84QcaeWk2jLnU0GYXGWBQ2IT+rtfCU=";
};
Save the tsig.key file.
To create the necessary symbolic link to the tsig.key file in the /config directory, enter the following command: ln -s /var/named/config/tsig.key /config/tsig.key To set the correct owner for the tsig.key file, enter the following command: chown named:named /var/named/config/tsig.key Using a text editor, open the /var/named/config/named.conf file for editing. For example, to use the vi editor to edit the /var/named/config/named.conf file, enter the following command: vi /var/named/config/named.conf Add the following include statement to the top of the named.conf file, below the first two comments in the file: include "/config/tsig.key"; Save the file. Step 8: Add the Secondary DNS (F5 XC DNS) IP addresses to the zone transfer allow list Using a text editor, open the /var/named/config/named.conf file for editing. For example, to use the vi editor to edit the /var/named/config/named.conf file, enter the following command: vi /var/named/config/named.conf Add the following acl statement at the bottom of the named.conf file (Note: the IP address details of F5 XC to be used in zone transfers can be found here: https://docs.cloud.f5.com/docs/reference/network-cloud-ref)
acl "F5XC" {
    52.14.213.208/32;
    3.140.118.214/32;
};
Insert the following inside the allow-transfer block (this allows F5 XC to perform AXFR requests):
allow-transfer {
    localhost;
    F5XC;    <--- Add this line
};
Save the file. (Optional) Part 2: Configure a BIG-IP DNS Zone for DNSSEC Assuming you already have a zone configured for DNS zone transfer and you want to enable DNSSEC on this zone, you can follow the steps below. The generated cryptographic keys for DNSSEC will be synced to the secondary DNS as part of the zone transfer. Step 1: Create automatically-managed zone-signing keys (ZSK) On the Main tab, click DNS > Delivery > Keys > DNSSEC Key List, then click Create. In the Name field, type a name for the key (zone names are limited to 63 characters). From the Type list, select Zone Signing Key. From the State list, select Enabled. You can leave all other settings at their defaults and click Finished at this point, or modify the other settings based on your requirements. Click Finished. Step 2: Create automatically-managed key-signing keys (KSK) On the Main tab, click DNS > Delivery > Keys > DNSSEC Key List, then click Create. In the Name field, type a name for the key (zone names are limited to 63 characters). From the Type list, select Key Signing Key. From the State list, select Enabled. You can leave all other settings at their defaults and click Finished at this point, or modify the other settings based on your requirements. Click Finished. Step 3: Create a DNSSEC zone On the Main tab, click DNS > Zones > DNSSEC Zones, then click Create. In the Name field, type a domain name. For example, use a zone name of f5sg.com to handle DNSSEC requests for www.f5sg.com and *.www.f5sg.com. From the State list, select Enabled. For the Zone Signing Key setting, assign at least one enabled zone-signing key to the zone. You can associate the same zone-signing key with multiple zones. For the Key Signing Key setting, assign at least one enabled key-signing key to the zone. You can associate the same key-signing key with multiple zones. Click Finished. Step 4: Upload the generated DS record to the parent zone Upload the DS records for this zone to the organization that manages the parent zone. The administrators of the parent zone sign the DS record with their own key and upload it to their zone.
You can find the DS records in the Configuration utility.
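To sanity-check the end result, the queries below are a minimal sketch (not from the original steps) run with dig from a host permitted by the allow-transfer ACL. The zone name, key name, and secret come from the steps above; <big-ip-listener-ip> is a placeholder for your own listener address:
# 1) Confirm the TSIG-signed zone transfer succeeds against the BIG-IP listener.
dig @<big-ip-listener-ip> f5sg.com AXFR -y hmac-sha256:example:UAHXLiErXSTXw84QcaeWk2jLnU0GYXGWBQ2IT+rtfCU=
# 2) Confirm DNSSEC material is returned once the zone is signed.
dig @<big-ip-listener-ip> f5sg.com DNSKEY +dnssec
dig @<big-ip-listener-ip> www.f5sg.com A +dnssec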
Using Distributed Cloud DNS Load Balancer with Geo-Proximity and failover scenarios
Introduction To have both high performance and responsive apps available on the Internet, you need a cloud DNS that's both scalable and one that operates at a global level to effectively connect users to the nearest point of presence. The F5 Distributed Cloud DNS Load Balancer brings the best features of GSLB DNS to the delivery of hybrid and multi-cloud applications, with compute positioned right at the edge, closest to users. With Global Server Load Balancing (GSLB) features available in a cloud-based SaaS format, the Distributed Cloud DNS Load Balancer has a number of distinct advantages: Speed and simplicity: Integrate with DevOps pipelines, with an automation focus and a rich and intuitive user interface Flexibility and scale: Global auto-scale keeps up with demand as the number of apps increases and traffic patterns change Security: Built-in DDoS protection, automatic failover, and DNSSEC features help ensure your apps are effectively protected. Disaster recovery: With automatic detection of site failures, apps dynamically fail over to individual recovery-designated locations without intervention. Adding user-location proximity policies to DNS load balancing rules allows the steering of users to specific instances of an app. This not only improves the overall experience, it also safeguards data, effectively siloing user data and keeping it region-specific. In the case of disaster recovery, catch-all rules can be created to send users to alternate destinations where restrictions to data don't apply. Integrated Solution This solution uses a cloud-based Distributed Cloud DNS to load balance traffic to VIPs that connect to region-specific pools of servers. When data privacy isn't a requirement, catch-all rules can further distribute traffic should a preferred pool of origin servers become unhealthy or unreachable. This solution covers the following DNS LB scenarios: Geo-IP Proximity Active/Standby failover within a region Disaster Recovery for manually activated failovers Autonomous System Number (ASN) Lists Fallback pool for automated failovers The configuration for this solution assumes the following: The app is in multiple regions Users are from different regions Distributed Cloud hosts/manages/is delegated the DNS domain or subdomain (optional) Failover to another region is allowed Prerequisite Steps Distributed Cloud must be providing primary DNS for the domain. Your domain must be registered with a public domain name registrar with the nameservers ns1.f5clouddns.com and ns2.f5clouddns.com. F5 XC automatically validates the domain registration when configured to be the primary nameserver. Navigate to DNS Management > domain > Manage Configuration > Edit Configuration >> DNS Zone Configuration: Primary DNS Configuration > Edit Configuration. Select "Add Item", with Record Set type "DNS Load Balancer" Enter the Record Name and then select Add Item to create a new load balancer record. This opens the submenu to create DNS Load Balancer rules. DNS LB for Geo-Proximity Name the rule "app-dns-rule" then go to Load Balancing Rules > Configure. Select "Add Item" then under the Load Balancing Rule, within the default Geo Location Selection, expand the "Selector Expression" and select "geoip.ves.io/continent". Select Operator "In" and then the value "EU". Click Apply. Under the Action "Use DNS Load Balancer pool", click "Add Item". Name the pool "eu-pool", and under Pool Type (A) > Pool Members, click "Add Item". Enter a "Public IP", then click "Apply".
Repeat this process to have a second IP Endpoint in the pool. Scroll down to Load Balancing Method and select "Static-Persist". Now click Continue, and then Apply to the Load Balancing Rule, and then "Add Item" to add a second rule. In the new rule, choose Geo Location Selection value "Geo Location Set selector", and use the default "system/global-users". Click "Add Item". Name this new pool "global-pool", then select "Add Item" and add the following pool member: 54.208.44.177. Change the Load Balancing Mode to "Static-Persist", then click Continue. Click "Continue". Now set the Load Balancing Rule Score to 90. This allows the first load balancing rule, specific to EU users, to be returned as the only answer for users of that region unless the regional servers are unhealthy. Note: The rule with the highest score is returned. When two or more rules match and have the same score, answers for each rule are returned. Although there are legitimate reasons for doing this, matching more than one rule with the same score may provide an unanticipated outcome. Now click "Apply", "Apply", and "Continue". Click the final "Apply" to create the new DNS Zone Resource Record Set. Now click "Apply" to the DNS Zone configuration to commit the new Resource Record. Click "Save and Exit" to finalize everything and complete the DNS Zone configuration! To view the status of the services that were just created, navigate to DNS Management > Overview > DNS Load Balancers > app-dns-rule. Clicking on the rule "eu-pool", you can find the status for each individual IP endpoint, showing the overall health of each pool's service that has been configured. With the DNS Load Balancing rule configured to connect two separate regions, when one of the primary sites in the eu-pool goes down, users will instead be directed to the global-pool. This provides reliability in the context of site failover that spans regions. If data privacy is also a requirement, additional rules can be configured to support more sites in the same region. DNS LB for Active-Passive Sites In the previous scenario, two members are configured to be equally active for a single location. We can change the weight of the pool members so that only one of the two is used, with the second used only when the first is unhealthy or disabled. This creates a backup/passive scenario within a region. Navigate to DNS Load Balancer Management > DNS Load Balancers. Go to the service name "app-dns-rule", then under Actions, select Manage Configuration. Click Edit Configuration for the DNS rule. Go to the Load Balancing Rules section, and Edit Configuration. On the Load Balancing Rules order menu, go to Actions > Edit for the eu-pool Rule Action. In the Load Balancing Rule menu for eu-pool, under the section Action, click Edit Configuration. In the rule for eu-pool, under Pool Type (A) > Pool Members, click the Edit action. In the IP Endpoint section, change the Load Balancing Priority to 1, then click Apply. Change the Load Balancing Mode to Priority, then exit and save all changes by clicking Continue, Apply, Apply, and then Save and Exit. DNS LB for Disaster Recovery Unlike with backup/standby, where failover can happen automatically depending on the status of a service's health, disaster recovery (DR) can either happen automatically or be configured to require manual intervention. In the following two scenarios, I'll show how to configure manual DR failover within a region, and also how to manually fail over outside the region.
To support east/west manual DR failover within the EU region, use the steps above to create a new Load Balancing Rule with the same label selector as the EU rule (eu-pool) above, then create a new DNS LB pool (name it something like eu-dr-pool) and add new designated DR IP pool endpoints. Change the DR Load Balancing Rule Score to 80, and then click Apply. On the Load Balancing Rules page, change the order of the rules and confirm that the score is such that it aligns to the following image, then click Apply, and then Save and Exit. In the previous active/standby scenario, the Global rule functions as a backup for EU users when all sites in EU are down. To force a non-regional failover, you can change F5 XC DNS to send all EU users to the Global DNS rule by disabling each of the two EU DNS rules above. To disable the EU DNS rules, navigate to DNS Load Balancer Management > DNS Load Balancers, and then under Actions, select Manage Configuration. Click Edit Configuration for the DNS rule. Go to the Load Balancing Rules section, and Edit Configuration. On the Load Balancing Rules order menu, go to Actions > Edit for the eu-pool Rule Action. In the Load Balancing Rule menu for eu-pool, under the section Action, click Edit Configuration. In the top section labeled Metadata, check the box to Disable the rule. Then click Continue, Apply, Apply, and then Save and Exit. With the EU DNS LB rules disabled, all requests in the EU region are served by the Global Pool. When it's time to restore regional services, all that's needed is to re-enter the configuration rule and uncheck the Disable box for each rule. DNS LB with ASN Lists ASN stands for Autonomous System Number. It is a unique identifier assigned to networks on the internet that operate under a single administration or entity. By mapping IP addresses to their corresponding ASN, DNS LB administrators can manage some traffic more effectively. To configure Distributed Cloud DNS LB to use ASN lists, navigate to DNS Management > DNS Load Balancer Management, then "Manage Configuration" for a DNS LB service. Choose "Add Item", and on the next page, select "ASN List", enter one or more ASNs that apply to this rule, select a DNS LB pool, and optionally configure the score (weight). When the same ASN exists in multiple DNS LB rules, the rule having the highest score is used. Note: F5 XC uses ASPlain (4-byte) formatted AS numbers. Multiple numbers are configured one per item line. DNS LB with IP Prefix Lists and IP Prefix Sets Intermediate DNS servers are almost always involved in server name resolution. By default, DNS LB doesn't see the originating IP address or subnet prefix of the client making the DNS request. To improve the effectiveness of DNS-based services like DNS LB by making more informed decisions about which server will be the closest to the client, RFC 7871 proposes a solution using the EDNS0 field to allow intermediate DNS servers to add the client subnet (EDNS Client Subnet, or ECS) to the DNS request. The IP Prefix List and IP Prefix Set in F5 XC DNS are used when DNS requests contain the client subnet and the prefix falls within one of the prefixes defined in one or more DNS LB rule sets. To configure an IP Prefix rule, navigate to DNS Management > DNS Load Balancer Management, then "Manage Configuration" of your DNS LB service. Now click "Edit Configuration" at the top left corner, then "Edit Configuration" in the section dedicated to Load Balancing Rules.
Inside the section for Load Balancing Rules, click "Add Item" and in the Client Selection box choose either "IP Prefix List" or "IP Prefix Sets" from the menu. For IP Prefix List, enter the IPv4 CIDR prefixes, one prefix per line. For IP Prefix Sets, you have the option of choosing whether to use a pre-existing set created in the Shared Configuration space in your tenant or you can Add Item to create a completely new set. Note: IP Prefix Sets are intended to be part of much larger groups of IP CIDR block prefixes and are used for additional features in F5 XC, such as in L7 WAF and L3 Network Firewall access lists. IP Prefix Sets support the use of both IPv4 and IPv6 CIDR blocks. In the following example, a query whose client subnet matches the configured IP Prefix rule (192.168.1.0/24) gets an answer from our eu-dr-pool (1.1.1.1). Meanwhile, a request whose client subnet is not within the defined prefix, and which also originates outside of the EU region, gets an answer from the pool global-poolx (54.208.44.177). Command line:
; <<>> DiG 9.10.6 <<>> @ns1.f5clouddns.com www.f5-cloud-demo.com +subnet=1.2.3.0/24 in a
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 44218
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; CLIENT-SUBNET: 1.2.3.0/24/0
;; QUESTION SECTION:
;www.f5-cloud-demo.com. IN A
;; ANSWER SECTION:
www.f5-cloud-demo.com. 30 IN A 54.208.44.177
;; Query time: 73 msec
;; SERVER: 107.162.234.197#53(107.162.234.197)
;; WHEN: Wed Jun 05 21:46:04 PDT 2024
;; MSG SIZE rcvd: 77

; <<>> DiG 9.10.6 <<>> @ns1.f5clouddns.com www.f5-cloud-demo.com +subnet=192.168.1.0/24 in a
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 48622
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; CLIENT-SUBNET: 192.168.1.0/24/0
;; QUESTION SECTION:
;www.f5-cloud-demo.com. IN A
;; ANSWER SECTION:
www.f5-cloud-demo.com. 30 IN A 1.1.1.1
;; Query time: 79 msec
;; SERVER: 107.162.234.197#53(107.162.234.197)
;; WHEN: Wed Jun 05 21:46:50 PDT 2024
;; MSG SIZE rcvd: 77
Here we're able to see that a different answer is given based on the client subnet provided in the DNS request. Additional use cases apply. The ability to make a DNS LB decision with a client subnet improves the ability of the F5 XC nameservers to deliver an optimal response. DNS LB Fallback Pool (Failsafe) The scenarios above illustrate how to designate alternate pools, both regional and global, when an individual pool fails. However, in the event of a catastrophic failure that brings all service pools down, F5 XC provides one final mechanism: the fallback pool. Ideally, when implemented, the fallback pool should be independent from all existing pool-related infrastructure and services to deliver a failsafe service. To configure the Fallback Pool, navigate to DNS Management > DNS Load Balancer Management, then "Manage Configuration" of your DNS LB service. Click "Edit Configuration", navigate to the "Fallback Pool" box and choose an existing pool. If no qualified pool exists, the option is available to add a new pool. In my case, I've designated "global-poolx" as my failsafe fallback pool, which already functions as a regional backup.
Best practice for the fallback pool is that it should be a pool not referenced elsewhere in the DNS LB configuration, a pool that exists on completely independent resources and is not regionally bound. DNS LB Health Checks and Observability For the sake of simplicity, the scenarios above do not have DNS LB health checks configured, and it's assumed that each pool's IP members are always reachable and healthy. My next article shows how to configure health checks to enable automatic failovers and ensure that users always reach a working server. Conclusion Using the Distributed Cloud DNS Load Balancer enables better performance of your apps while also providing greater uptime. With scaling and security automatically built into the service, responding to large volumes of queries without manual intervention is seamless. Layers of security deliver protection and automatic failover. Built-in DDoS protection, DNSSEC, and more make the Distributed Cloud DNS Load Balancer an ideal do-it-all GSLB distributor for multi-cloud and hybrid apps. To see a walkthrough where I configure the first scenario above for Geo-IP proximity, watch the following accompanying video. Additional Resources Next article: Using Distributed Cloud DNS Load Balancer health checks and DNS observability More information about Distributed Cloud DNS Load Balancer available at: https://www.f5.com/cloud/products/dns-load-balancer Product Documentation: DNS LB Product Documentation DNS Zone Management
SSL Orchestrator Advanced Use Cases: Local DNS Proxy Cache
Introduction F5 BIG-IP is synonymous with "flexibility". You likely have few other devices in your architecture that provide the breadth of capabilities that come native with the BIG-IP platform. And for each and every BIG-IP product module, the opportunities to expand functionality are almost limitless. In this article series we examine the flexibility options of the F5 SSL Orchestrator in a set of "advanced" use cases. If you haven't noticed, the world has been steadily moving toward encrypted communications. Everything from web, email, voice, video, chat, and IoT is now wrapped in TLS, and that's a good thing. The problem is, malware - that thing that creates havoc in your organization, that exfiltrates personnel records to the Dark Web - isn't stopped by encryption. TLS 1.3 and multi-factor authentication don't eradicate malware. The only reasonable way to defend against it is to catch it in the act, and an entire industry of security products are designed for just this task. But ironically, encryption makes this hard. You can't protect against what you can't see. F5 SSL Orchestrator simplifies traffic decryption and malware inspection, and dynamically orchestrates traffic to your security stack. But it does much more than that. SSL Orchestrator is built on top of F5's BIG-IP platform, and as stated earlier, is abound with flexibility. SSL Orchestrator Use Case: Local DNS Proxy Cache A basic tenet of an explicit proxy is DNS resolution. That is, the proxy functions to perform DNS requests on the client's behalf. This is a useful security characteristic as it abstracts DNS away from the client and generally prevents IP spoofing. For example, a client cannot spoof the IP of a known-bypassed host because an explicit proxy client does not control the IP address. The client passes a URL to the proxy, and the proxy resolves the URL to the correct address. Needless to say, however, if you have a thousand hosts perform 100 DNS requests per hour (each) in an environment, and you shift that to an explicit proxy, you now have a proxy server that's performing potentially 100,000 DNS requests per hour. Now fortunately, a large portion of the traffic from each of these clients will be going to the same places (ex. google.com), so the proxy server can locally cache a lot of these. But at some point, a dedicated DNS proxy cache can be useful to offload this burden, and in a configurable way. In an SSL Orchestrator environment, you may also have an explicit proxy security device plugged into a decrypted service chain, and that explicit proxy will also need DNS resolution. So if you've configured an SSL Orchestrator explicit proxy topology, and sending decrypted traffic to an explicit proxy in a service chain, you now have two devices that need DNS. As you've probably guessed by now, there's an elegant solution to this problem. Using a DNS cache profile on the BIG-IP, you can point the topology explicit proxy resolver, and the inline service resolver at the same cache. As traffic arrives at the topology, an initial DNS request flows to the DNS cache. If an answer doesn't exist, the request is forwarded to a defined authority and the answer cached. When the decrypted traffic arrives at the inline service, it attempts a DNS request to the same place and gets an immediate response from the cache. 
This has additional benefits in both optimizing traffic flow through the inline proxy device, as it doesn't have to wait for a DNS response from an external source, and also removes the need for this inline obfuscated security device to have to communicate outside of its secure enclave (for DNS). It may still have to communicate beyond the enclave for software, signature and licensing updates, but those are not real-time traffic concerns. This article provides a simple set of steps to build a local DNS proxy cache for SSL Orchestrator. [figure 1: SSL Orchestrator with DNS proxy cache] ** Note that a DNS proxy cache requires the DNS license, which also requires SSL Orchestrator to be licensed as an add-on to base LTM. ** Configuration Configuring a local DNS proxy cache involves creating the DNS cache and virtual server to hold it. This virtual server basically load balances external DNS and enables optimization through caching. You will point both the SSL Orchestrator resolver and the inline proxy DNS at this virtual server to take full benefit of the optimization. Also note that this could simply be used by an inline proxy service, in the absence of an explicit proxy SSL Orchestrator topology. This allows you to both optimize DNS for the inline proxy, and also not have to build a service control channel for inline proxy DNS to get out to an external DNS resource. Create DNS cache - Located under DNS -> Caches -> Cache List, click the Create button. Resolver Type: Transparent (None) Create DNS profile - Located under Local Traffic -> Profiles -> Services -> DNS, click the Create button. DNS Cache: Enabled DNS Cache Name: <above DNS cache> Create DNS proxy pool - This is the actual DNS resource (ex. 8.8.8.8:53). Find this under Local Traffic -> Pools. Create DNS proxy VIP (on XP service's outbound subnet) - This is the virtual server that load balances the DNS resource. Find this under Local Traffic Virtual Servers. It is important to create the inline proxy service first as that will establish the VLAN and IP subnet that this virtual server will attach to. Type: Standard Source Address: 0.0.0.0/0 Destination Address/Mask: select a unique IP address on the inline proxy service's outbound (from-service) subnet (ex. 198.19.96.153) Service Port: 53 Protocol: UDP Protocol Profile (Client): udp DNS Profile: select the previously-created DNS profile VLAN: select the inline proxy service's outbound (from-service) VLAN Source Address Translation: SNAT as required for routing Address Translation: enabled Port Translation: enabled Default Pool: select the DNS server pool Configure the SSL Orchestrator resolver to point to DNS proxy VIP - Navigate to the SSL Orchestrator UI and in the top right corner click on the gear icon to access the DNS resolver configuration. Enter this same virtual server IP address. Configure the inline explicit proxy service to point to the DNS proxy VIP - DNS requests will leave the inline proxy service on the service's from-service VLAN to the DNS proxy VIP. Optionally create a system route if the DNS pool requires a gateway route - If you need a route to get to the actual DNS resource, create a system route. This is found under Network -> Routes. To test, initiate a tcpdump capture on your outbound VLAN and look for port 53 traffic. Assuming an explicit proxy SSL Orchestrator topology, you should see a single outbound DNS request. The DNS request from the inline proxy device will be served directly from the newly cached data. 
tcpdump -lnni [outbound-vlan] port 53
And there you have it. In just a few steps you've configured your SSL Orchestrator security policy to take advantage of a local DNS proxy cache, and along the way you have hopefully recognized the immense flexibility at your command.
Using Cryptonice to check for HTTPS misconfigurations in devsecops workflows
Co-Author: Katie Newbold, F5 Labs Intern, Summer 2020 A huge thanks to Katie Newbold, lead developer on the Cryptonice project, for her amazing work and patience as I constantly moved the goal posts for this project. ---F5 Labs recently published an article Introducing the Cryptonice HTTPS Scanner. Cryptonice is aimed at making it easy for everyone to scan for and diagnose problems with HTTPS configurations. It is provided as a is a command line tool and Python library that allows a user to examine the TLS protocols and ciphers, certificate information, web application headers and DNS records for one or more supplied domain names. You can read more about why Cryptonice was released over at F5 Labs but it basically boils down a few simple reasons. Primarily, many people fire-and-forget their HTTPS configurations which mean they become out of date, and therefore weak, over time. In addition, other protocols, such as DNS can be used to improve upon the strength of TLS but few sites make use of them. Finally, with an increasing shift to automation (i.e. devsecops) it’s important to integrate TLS testing into the application lifecycle. How do I use Cryptonice? Since the tool is available as an executable, a Python script, and a Python library, there are a number of ways and means in which you might use Cryptonice. For example: The executable may be useful for those that do not have Python 3 installed and who want to perform occasional ad-hoc scans against internal or external websites The Python script may be installed along side other Python tools which could allow an internal security team to perform regular and scriptable scanning of internal sites The Python library could be used within devops automation workflows to check for valid certificates, protocols and ciphers when new code is pushed into dev or production environments The aforementioned F5 Labs article provides a quick overview of how to use the command line executable and Python script. But this is DevCentral, after all, so let’s focus on how to use the Python library in your own code. Using Cryptonice in your own code Cryptonice can output results to the console but, since we’re coding, we’ll focus on the detailed JSON output that it produces. Since it collects all scan and test results into a Python dictionary, this variable can be read directly, or your code may wish to read in the completed JSON output. More on this later. First off we’ll need to install the Cryptonice library. With Python 3 installed, we simply use the pip command to download and install it, along with its dependencies. pip install cryptonice Installing Cryptonice using pip will also install the dependent libraries: cffi, cryptography , dnspython, http-client, ipaddress, nassl, pycurl, pycparser, six, sslyze, tls-parser, and urllib3. You may see a warning about the cryptography library installation if you have a version that is greater than 2.9, but cryptonice will still function. This warning is generated because the sslyze package currently requires the cryptography library version to be between 2.6 and 2.9. Creating a simple Cryptonice script An example script (sample_script.py) is included in the GitHub repository. In this example, the script reads in a fully formatted JSON file called sample_scan.json from the command line (see below) and outputs the results in to a JSON file whose filename is based on the site being scanned. The only Cryptonice module that needs to be imported in this script is scanner. 
The JSON input is converted to a dictionary object and sent directly to the scanner_driver function, where the output is written to a JSON file through the writeToJSONFile function. from cryptonice import scanner import argparse import json def main(): parser = argparse.ArgumentParser() parser.add_argument("input_file", help="JSON input file of scan commands") args = parser.parse_args() input_file = args.input_file with open(input_file) as f: input_data = json.load(f) output_data, hostname = scanner.scanner_driver(input_data) if output_data is None and hostname is None: print("Error with input - scan was not completed") if __name__ == "__main__": main() In our example, above, the scanner_driver function is being passed the necessary dictionary object which is created from the JSON file being supplied as a command line parameter. Alternatively, the dictionary object could be created dynamically in your own code. It must, however, contain the same key/value pairs as our sample input file, below: This is what the JSON input file must look like: { "id": string, "port": int, "scans": [string] "tls_params":[string], "http_body": boolean, "force_redirect": boolean, "print_out": boolean, "generate_json": boolean, "targets": [string] } If certain parameters (such as “scans”, “tls_parameters”, or “targets”) are excluded completely, the program will abort early and print an error message to the console. Mimicking command line input If you would like to mimic the command line input in your own code, you could write a function that accepts a domain name via command line parameter and runs a suite of scans as defined in your variable default_dict: from cryptonice.scanner import writeToJSONFile, scanner_driver import argparse default_dict = {'id': 'default', 'port': 443, 'scans': ['TLS', 'HTTP', 'HTTP2', 'DNS'], 'tls_params': ["certificate_info", "ssl_2_0_cipher_suites", "ssl_3_0_cipher_suites", "tls_1_0_cipher_suites", "tls_1_1_cipher_suites", "tls_1_2_cipher_suites", "tls_1_3_cipher_suites", "http_headers"], 'http_body': False, 'print_out': True, 'generate_json': True, 'force_redirect': True } def main(): parser = argparse.ArgumentParser(description="Supply commands to cryptonice") parser.add_argument("domain", nargs='+', help="Domain name to scan", type=str) args = parser.parse_args() domain_name = args.domain if not domain_name: parser.error('domain (like www.google.com or f5.com) is required') input_data = default_dict input_data.update({'targets': domain_name}) output_data, hostname = scanner_driver(input_data) if output_data is None and hostname is None: print("Error with input - scan was not completed") if __name__ == "__main__": main() Using the Cryptonice JSON output Full documentation for the Cryptonice JSON output will shortly be available on the Cryptonice ReadTheDocs pages and whilst many of the key/value pairs will be self explanatory, let’s take a look at some of the more useful ones. TLS protocols and ciphers The tls_scan block contains detailed information about the protocols, ciphers and certificates discovered as part of the handshake with the target site. This can be used to check for expired or expiring certificates, to ensure that old protocols (such as SSLv3) are not in use and to view recommendations. cipher_suite_supported shows the cipher suite preferred by the target webserver. This is typically the best (read most secure) one available to modern clients. Similarly, highest_tls_version_supported shows the latest available version of the TLS protocol for this site. 
In this example, cert_recommendations is blank but is a certificate were untrusted or expired this would be a quick place to check for any urgent action that should be taken. The dns section shows results for cryptographically relevant DNS records, for example Certificate Authority Authorization (CAA) and DKIM (found in the TXT records). In our example, below, we can see a dns_recommendations entry which suggested implementing DNS CAA since no such records can be found for this domain. { "scan_metadata":{ "job_id":"test.py", "hostname":"example.com", "port":443, "node_name":"Cocumba", "http_to_https":true, "status":"Successful", "start":"2020-07-1314:31:09.719227", "end":"2020-07-1314:31:16.939356" }, "http_headers":{ "Connection":{ }, "Headers":{ }, "Cookies":{ } }, "tls_scan":{ "hostname":"example.com", "ip_address":"104.127.16.98", "cipher_suite_supported":"TLS_AES_256_GCM_SHA384", "client_authorization_requirement":"DISABLED", "highest_tls_version_supported":"TLS_1_3", "cert_recommendations":{ }, "certificate_info":{ "leaf_certificate_has_must_staple_extension":false, "leaf_certificate_is_ev":false, "leaf_certificate_signed_certificate_timestamps_count":3, "leaf_certificate_subject_matches_hostname":true, "ocsp_response":{ "status":"SUCCESSFUL", "type":"BasicOCSPResponse", "version":1, "responder_id":"17D9D6252267F931C24941D93036448C6CA91FEB", "certificate_status":"good", "hash_algorithm":"sha1", "issuer_name_hash":"21F3459A18CAA6C84BDA1E3962B127D8338A7C48", "issuer_key_hash":"37D9D6252767F931C24943D93036448C2CA94FEB", "serial_number":"BB72FE903FA2B374E1D06F9AC9BC69A2" }, "ocsp_response_is_trusted":true, "certificate_0":{ "common_name":"*.example.com", "serial_number":"147833492218452301349329569502825345612", "public_key_algorithm":"RSA", "public_key_size":2048, "valid_from":"2020-01-1700:00:00", "valid_until":"2022-01-1623:59:59", "days_left":552, "signature_algorithm":"sha256", "subject_alt_names":[ "www.example.com" ], "certificate_errors":{ "cert_trusted":true, "hostname_matches":true } } }, "ssl_2_0":{ "preferred_cipher_suite":null, "accepted_ssl_2_0_cipher_suites":[] }, "ssl_3_0":{ "preferred_cipher_suite":null, "accepted_ssl_3_0_cipher_suites":[] }, "tls_1_0":{ "preferred_cipher_suite":null, "accepted_tls_1_0_cipher_suites":[] }, "tls_1_1":{ "preferred_cipher_suite":null, "accepted_tls_1_1_cipher_suites":[] }, "tls_1_2":{ "preferred_cipher_suite":"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "accepted_tls_1_2_cipher_suites":[ "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA" ] }, "tls_1_3":{ "preferred_cipher_suite":"TLS_AES_256_GCM_SHA384", "accepted_tls_1_3_cipher_suites":[ "TLS_CHACHA20_POLY1305_SHA256", "TLS_AES_256_GCM_SHA384", "TLS_AES_128_GCM_SHA256", "TLS_AES_128_CCM_SHA256", "TLS_AES_128_CCM_8_SHA256" ] }, "tests":{ "compression_supported":false, "accepts_early_data":false, "http_headers":{ "strict_transport_security_header":{ "preload":false, "include_subdomains":true, "max_age":15768000 } } }, "scan_information":{ }, "tls_recommendations":{ } }, "dns":{ "Connection":"example.com", "dns_recommendations":{ "Low-CAA":"ConsidercreatingDNSCAArecordstopreventaccidentalormaliciouscertificateissuance." 
}, "records":{ "A":[ "104.127.16.98" ], "CAA":[], "TXT":[], "MX":[] } }, "http2":{ "http2":false } } Advanced Cryptonice use - Calling specific modules There are 6 files in Cryptonice that are necessary for its functioning in other code. scanner.py and checkport.py live in the Cryptonice folder, and getdns.py, gethttp.py. gethttp2.py and gettls.py all live in the cryptonice/modules folder. A full scan is run out of the scanner_driver function in scanner.py, which generates a dictionary object based on the commands it receives via the input parameter dictionary. scanner_driver is modularized, allowing you to call as many or as few of the modules as needed for your purposes. However, if you would like to customize the use of the cryptonice library further, individual modules can be selected and run as well. You may choose to call the scanner_driver function and have access to all the modules in one location, or you could call on certain modules while excluding others. Here is an example of a function that calls the tls_scan function in modules/gettls.py to specifically make use of the TLS code and none of the other modules. from cryptonice.modules.gettls import tls_scan def tls_info(): ip_address = "172.217.12.196" host = "www.google.com" commands = ["certificate_info"] port = 443 tls_data = tls_scan(ip_address, host, commands, port) cert_0 = tls_data.get("certificate_info").get("certificate_0") # Print certificate information print(f'Common Name:\t\t\t{cert_0.get("common_name")}') print(f'Public Key Algorithm:\t\t{cert_0.get("public_key_algorithm")}') print(f'Public Key Size:\t\t{cert_0.get("public_key_size")}') if cert_0.get("public_key_algorithm") == "EllipticCurvePublicKey": print(f'Curve Algorithm:\t\t{cert_0.get("curve_algorithm")}') print(f'Signature Algorithm:\t\t{cert_0.get("signature_algorithm")}') if __name__ == "__main__": tls_info() Getting Started The modularity of Cryptonice makes it a versatile tool to be used within other projects. Whether you want to use it to test the strength of an internal website using the command line tool or integrate the modules into another project, Cryptonice provides a detailed and easy way to capture certificate information, HTTP headers, DNS restrictions, TLS configuration and more. We plan to add additional modules to query certificate transparency logs, test for protocols such as HTTP/3 and produce detailed output with guidance on how to improve your cryptographics posture on the web. This is version 1.0, and we encourage the submission of bugs and enhancements to our Github page to provide fixes and new features so that everyone may benefit from them. The Cryptonice code and binary releases are maintained on the F5 Labs Github pages. Full documentation is currently being added to our ReadTheDocs page and the Cryptonice library is available on PyPi.: F5 Labs overview: https://www.f5.com/labs/cryptonice Source and releases: https://github.com/F5-Labs/cryptonice PyPi library: https://pypi.org/project/cryptonice Documentation: https://cryptonice.readthedocs.io698Views3likes1CommentBIG-IP Advanced Firewall Manager (AFM) DNS NXDOMAIN Query Attack Type Walkthrough - part two
This is part two of the BIG-IP Advanced Firewall Manager (AFM) DNS NXDOMAIN Query Attack Type Walkthrough article series. Part one is at https://community.f5.com/t5/technical-articles/big-ip-advanced-firewall-manager-afm-dns-nxdomain-query-attack/ta-p/317656 Reviewing Part One In part one, we looked at: The what and how of DNS NXDOMAIN response and flood definitions How threat actors generate random DNS queries with the use of Domain Generation Algorithms, and concepts such as DNS Blackhole, Fast Flux, and the DNS water torture attack What information can be used in observing an NXDOMAIN response spike Reviewing BIG-IP DNS profile statistics collected using a periodic data collection script - this provides visibility/statistics on the type of requests and responses the DNS listener processed, which are useful when reviewing a recent DNS traffic spike event and understanding the characteristics of the traffic Reviewing a sample packet capture during an NXDOMAIN response spike and reducing and zooming in on the data of interest using commands such as capinfos, tshark, sort, uniq, and wc Configuring Detection and Mitigation Thresholds In this article, we will continue using the information gathered from the NXDOMAIN response flood packet capture and configure BIG-IP Advanced Firewall Manager (AFM) DNS NXDOMAIN Query Attack Type Detection and Mitigation Thresholds. Configuring the BIG-IP AFM DNS NXDOMAIN attack type to mitigate an NXDOMAIN response spike Now that we have information from the sample packet capture and the extracted DNS names, we can start using this information to configure the BIG-IP AFM NXDOMAIN attack type Detection and Mitigation thresholds. The sample packet capture we reviewed ran approx. 3 mins and recorded 60063 packets. If we divide the number of packets by the number of seconds the pcap ran - 60063 p/180s - the rate comes out to 334 packets per second. Since the packet capture contains only DNS traffic, we can expect it to have both DNS requests and responses, which further reduces to 167 packets per second for either a DNS request or a response. Since NXDOMAIN is a response, and the packet capture was taken in a simulated attack to produce NXDOMAIN responses, we can use this 167 packets per second as a baseline of what attack traffic looks like. We should aim for a lower number of packets per second to detect the attack, and provide an allowance before we start mitigating the NXDOMAIN response flood. For the purpose of the demonstration, I have configured lower detection and mitigation thresholds to mitigate the NXDOMAIN response flood. This configuration is on a DNS-enabled AFM DoS protection profile that will be applied to a Virtual Server. Dns-dos-protect is the name of the profile in this lab test. Configuring the BIG-IP AFM DNS NXDOMAIN query attack type in a DNS-enabled AFM DoS protection profile Detection Threshold: 20 EPS Mitigation Threshold: 30 EPS I'll be using 2 test clients to send a flood of DNS queries to a DNS listener for the hostnames generated through DGA. As expected, the response for these queries will be NXDOMAIN, and the AFM DNS NXDOMAIN Query Attack type will detect the attack as soon as 20 NXDOMAIN responses per second are observed and will start to drop responses in excess of 30 per second. Here is a sample script that reads a file, line by line, containing DNS names to query. 10.93.56.197 is the DNS listener where the DNS-enabled AFM DoS protection profile "dns-dos-protect" is applied.
root@ubuntu-server1:~# cat nxdig.sh
#!/bin/bash
while read -r line; do
  dig @10.93.56.197 "$line"
done
wait
Here is the DNS listener DoS protection profile configuration. It also shows that dos-dns-logging-profile is used as the Log profile. Here is the dos-dns-logging-profile Log profile configuration, which only has DNS DoS protection logs enabled, logging to the local-db-publisher (logdb, a MySQL database on the BIG-IP). Using tmsh show security dos profile <DoS Protection profile name>, we can view the statistics observed by the DoS protection profile per Attack Type. In this test, only NXDOMAIN Query is enabled. Using the same periodic data capture approach we used for the ltm dns profile statistics, we can capture statistics for the DoS protection profile to review and understand the phases of the attack being observed.
while true; do date >> /var/tmp/afm_nx_stats; tmsh show security dos profile dns-dos-protect >> /var/tmp/afm_nx_stats; echo "###################" >> /var/tmp/afm_nx_stats; sleep 2; done
Run the DNS query flood using the script:
while true; do ./nxdig.sh < nx.txt 2>&1; done
Here is a screenshot when the detection threshold of the NXDOMAIN Query attack type was exceeded; see the Attack Status. Here is when the attack is being mitigated. Here is when there is no more attack being detected. A look at the periodic data capture for the DoS protection profile shows interesting statistics. Attack Detected - a value of 1 means an attack is detected, which also means the detection threshold was exceeded; a value of 0 means no attack is currently detected. Stats - the number of packets observed by the Attack type - this is a cumulative value since the BIG-IP booted up. Stats Rate - the current number of packets per second observed by the Attack type - provides an idea of how many packets of this type are currently being observed; you can think of this as the current Events Per Second (EPS) of the attack type. Stats 1m - the average number of packets per second observed by the Attack type over the last minute. The Stats Rate and Stats 1m values together provide an idea of how many packets are seen in the current second and on average per second over the last minute. In a non-attack scenario, observing these values shows what a normal number of packets looks like. During an attack scenario - detection threshold exceeded - they provide an idea of how much traffic the attack type is seeing. This information can then be used as a basis for setting the mitigation threshold. For example, it was observed that the Stats 1m value was 21 and, during an attack scenario, the Stats Rate value was 245; this is about 12x the average, and the volume appears to be an attack. Depending on the risk appetite of the business, an allowance of 2x the average number of packets for the attack type may be where they want to start dropping excess packets; thus, 42 EPS can be configured for the mitigation threshold. Do note, setting low detection and mitigation thresholds can cause false positives, triggering detection and mitigation too early. Therefore it is important to understand the traffic characteristics for an attack type. In the gathered data, we can see that Attack Detected is 1, which means the detection threshold was exceeded. Stats Rate is at 245, which does exceed the 20 EPS detection threshold. Note that there were no Drops stats yet. Tue Jun 27 07:47:52 PDT 2023 | Attack Detected 1 | .. | Aggregate Attack Detected 1 | Attack Count 1 ....
Tue Jun 27 07:47:52 PDT 2023 | Attack Detected 1 | .. | Aggregate Attack Detected 1 | Attack Count 1 .... | Stats 1461 | Stats Rate 245 | Stats 1m 21 | Stats 1h 0 | Drops 0
Four seconds later, we see the Drops count is 2, which tells us the mitigation threshold - configured as 30 EPS - was exceeded.
Tue Jun 27 07:47:56 PDT 2023 | Attack Detected 1 ... | Aggregate Attack Detected 1 | Attack Count 1 | Stats 1h Samples 0 | Stats 1516 | Stats Rate 0 | Stats 1m 20 | Stats 1h 0 | Drops 2
Three seconds later, we see the Drops count is 4, which tells us the mitigation is ongoing and dropping excess packets.
Tue Jun 27 07:47:59 PDT 2023 | Attack Detected 1 ... | Aggregate Attack Detected 1 | Attack Count 1 | Stats 1h Samples 0 | Stats 1578 | Stats Rate 0 | Stats 1m 20 | Stats 1h 0 | Drops 4
From these sample stats, particularly the 'Stats' value, over the 7 seconds from 07:47:52 to 07:47:59 the difference is 117 (1578 - 1461), which tells us that the volume of packets of this type is low - it averages about 17 packets per second over those 7 seconds. If the difference between these 'Stats' values is much bigger, then we potentially have a traffic spike. Increasing Drops stats mean that an attack is still being mitigated and the volume of packets is not yet lower than the defined mitigation threshold.
Drops - number of packets dropped by the attack type; this is a cumulative value since the BIG-IP booted up.
Drops Rate - current number of packets per second dropped by the attack type.
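Since the 'Stats' counter is cumulative, the per-second rate over any window can be derived from the difference between consecutive samples in the periodic capture file. Here is a minimal, hedged awk sketch of that calculation; it assumes the counter appears on its own line as "Stats <value>" in the tmsh output (as shown above) and that samples were collected every 2 seconds by the capture script.
# Hedged sketch: derive packets-per-second from the cumulative 'Stats' counter
# in /var/tmp/afm_nx_stats. Lines like "Stats Rate", "Stats 1m", and
# "Stats 1h Samples" are excluded by the NF == 2 test.
awk '$1 == "Stats" && NF == 2 {
    if (prev != "") {
        # delta between consecutive cumulative samples, divided by the
        # collection interval, approximates packets per second
        printf "delta=%d rate=%.1f pps\n", $2 - prev, ($2 - prev) / interval
    }
    prev = $2
}' interval=2 /var/tmp/afm_nx_stats
A sustained jump in the computed rate lines up with the attack windows seen in the Attack Detected and Drops values, and the quiet-period rates are a reasonable input for choosing the detection threshold.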
Reviewing DoS stats information in the Reporting DoS Dashboard
We have seen the DoS protection profile stats output; now we switch to the GUI and review the same DoS stats information. In the Reporting DoS Dashboard, there are records of the detected attacks. The timeframe can be adjusted to find the incident of interest. In this test, I filtered on DNS only, and the Attack IDs are displayed along with very useful information and statistics. In this screenshot, Attack ID 2958374472 was selected and the relevant statistics are displayed. It was of the DNS NXDOMAIN query attack type/vector, and it shows how many packets were observed in this attack - 235 packets - with 25 packets dropped. Avg PPS - the average packets per second for the duration of the attack, similar to the Stats 1m value - can also be used as a basis for the detection and mitigation thresholds of the attack type. The domain names observed are also recorded, along with the same statistics on the attack.
Configuring the BIG-IP AFM DNS NXDOMAIN query attack type in AFM Device DoS Protection
AFM Device Protection also has the DNS NXDOMAIN query attack type. This is device-wide protection and protects the self IPs and virtual servers of the BIG-IP. Detection and mitigation thresholds can be configured the same way, and the traffic can be observed using the same type of statistics - but this time they are specific to Device DoS protection. Here is a sample of tmsh show security dos device-config dos-device-config output, piped to grep to filter specifically the lines for DNS NXDOMAIN Query.
tmsh show security dos device-config dos-device-config | grep -i nxdomain -A 40
Security::DoS Config: DNS NXDOMAIN Query
------------------------------------------------------
Statistics Type Count
Detection Method Static Vector - Inline
Status Ready
Attack Detected 1
...
Aggregate Attack Detected 1
Attack Count 20
Stats 1h Samples 0
Stats 46580
Stats Rate 187
Stats 1m 73
Stats 1h 8
...
Drops 150
...snip..
AutoDetection 137
Mitigation Low 4294967295
Similarly, in the GUI, we can observe the states of an attack detected and mitigated by the NXDOMAIN query attack type configured in the AFM Device DoS protection.
Here is the Detection and Mitigation threshold configuration:
Attack Detected:
Attack being Mitigated:
We can review the Reporting DoS Dashboard for the attack events of the Device DoS and review the statistics.
Configuring Valid FQDNs in the DNS NXDOMAIN Query attack type
The DNS NXDOMAIN Query attack type has a configuration called Valid FQDNs, described as: Allows you to create a whitelist of valid fully qualified domain names. In the Add new FQDN field, type a domain name and click Check to see if it is already on the list, click Add to add it to the list, or click Delete to remove it from the whitelist.
Take the name qehspqnmrn[.]fop789[.]loc as an example. Let's assume that this is a valid DNS hostname/FQDN and we do not want the DNS NXDOMAIN Query attack type to drop packets for its response, even though it would result in an NXDOMAIN response. We can add it to the Valid FQDNs list from the GUI or tmsh. Here is the tmsh example; since qehspqnmrn[.]fop789[.]loc is already in the Valid FQDNs list, let's add another FQDN, site1[.]fop789[.]loc.
tmsh modify security dos device-config dos-device-config dos-device-vector { dns-nxdomain-query { valid-domains add { site1[.]fop789[.]loc } } }
Here is the tmsh output when listing the DNS NXDOMAIN Query attack type, including the Valid FQDNs.
tmsh list security dos device-config dos-device-config dos-device-vector { dns-nxdomain-query }
security dos device-config dos-device-config {
    dos-device-vector {
        dns-nxdomain-query {
            ...
            valid-domains {
                qehspqnmrn[.]fop789[.]loc
                site1[.]fop789[.]loc
            }
        }
    }
}
To verify that packets for the FQDNs in the Valid FQDNs list are not being dropped, we can look at the Reporting DoS Dashboard. We can see that ongoing attacks are reported and the domain names in the attack are also listed. Taking a closer look at the statistics, qehspqnmrn[.]fop789[.]loc in the Domain Name list has no packet drops and no attack detected. The rest of the DNS names in the list have drops and attacks and are being mitigated by the AFM DNS NXDOMAIN query attack type.
BIG-IP Advanced Firewall Manager (AFM) DNS NXDOMAIN Query Attack Type Walkthrough
Introduction
In this article, we will look at configuring BIG-IP Advanced Firewall Manager's (AFM) DNS NXDOMAIN attack type in the Device Protection and DNS-enabled protection profile to mitigate DNS NXDOMAIN response floods. We will review data from a packet capture and BIG-IP DNS profile statistics to set detection and mitigation thresholds. This is part one of two of this article series. Part two is at https://community.f5.com/t5/technical-articles/big-ip-advanced-firewall-manager-afm-dns-nxdomain-query-attack/ta-p/317681
What is an NXDOMAIN DNS response
The DNS protocol [RFC1035] defines response code 3 as "Name Error", or "NXDOMAIN" [RFC2308], which means that the queried domain name does not exist in the DNS. Since domain names are represented as a tree of labels ([RFC1034], Section 3.1), nonexistence of a node implies nonexistence of the entire subtree rooted at this node.
From https://datatracker.ietf.org/doc/html/rfc8020#page-5
How is an NXDOMAIN DNS response generated
RCODE Response code - this 4 bit field is set as part of responses. The values have the following interpretation: 3 Name Error - Meaningful only for responses from an authoritative name server, this code signifies that the domain name referenced in the query does not exist.
https://datatracker.ietf.org/doc/html/rfc1035#section-4.1.1
What is an NXDOMAIN (response) flood?
From the F5 Glossary at https://www.f5.com/glossary/dns-flood-nxdomain-flood
The roadmap to every single computer on the Internet is held in DNS servers. The DNS NXDOMAIN flood attack attempts to make servers disappear from the Internet by making it impossible for clients to access the roadmap. In this attack, the attacker floods the DNS server with requests for invalid or nonexistent records. The DNS server spends its time searching for something that doesn't exist instead of serving legitimate requests. The result is that the cache on the DNS server gets filled with bad requests, and clients can't find the servers they are looking for.
How do threat actors generate random DNS queries
There are many tools to generate a flood of DNS queries. The DNS records in the flood should, for the most part, be unique if an attacker wants to poison a DNS server's cache; otherwise, a DNS administrator can simply blackhole the name if the same DNS record is queried over and over in the flood.
DNS Blackhole
https://en.wikipedia.org/wiki/DNS_sinkhole
Here is a DevCentral article on a DNS blackhole implemented in an iRule: https://community.f5.com/t5/codeshare/dns-blackhole/ta-p/283786
Domain Generation Algorithms
To generate many random host records for one or many domains, Domain Generation Algorithms (DGAs) are used. https://en.wikipedia.org/wiki/Domain_generation_algorithm
The many examples of DGAs collected at https://github.com/baderj/domain_generation_algorithms show what these random records may look like. In the lab setup we will be using, we have the fop789[.]loc domain; I borrowed from that GitHub page a random host list generated by the "mydoom (aka Novarg, Mimail.R, Shimgapi)" DGA and appended the host portion of each entry to the test lab domain.
qehspqnmrn[.]fop789[.]loc
mmahaesqar[.]fop789[.]loc
pwprhhnqqn[.]fop789[.]loc
....
Here is a sample query using one of the hostnames. As expected, an NXDOMAIN response is received because this record does not exist in the sample lab domain fop789.loc.
root@ubuntu-server1:~# dig @10.93.56.197 qehspqnmrn.fop789.loc

; <<>> DiG 9.16.1-Ubuntu <<>> @10.93.56.197 qehspqnmrn.fop789.loc
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 1279
;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;qehspqnmrn.fop789.loc. IN A

;; AUTHORITY SECTION:
fop789.loc. 300 IN SOA ns1.fop789.loc. hostmaster.ns1.fop789.loc. 2023041101 10800 3600 604800 86400

;; Query time: 4 msec
;; SERVER: 10.93.56.197#53(10.93.56.197)
;; WHEN: Sun Jun 25 11:10:41 UTC 2023
;; MSG SIZE rcvd: 101

root@ubuntu-server1:~#
The most prevalent reason threat actors use DGAs is in malware and phishing campaigns, to avoid detection and to be resilient to countermeasures. The evasion technique used is fast flux.
Fast flux
Fast flux is a domain name system (DNS) based evasion technique used by cyber criminals to hide phishing and malware delivery websites behind an ever-changing network of compromised hosts acting as reverse proxies to the backend botnet master—a bulletproof autonomous system. It can also refer to the combination of peer-to-peer networking, distributed command and control, web-based load balancing and proxy redirection used to make malware networks more resistant to discovery and counter-measures. The fundamental idea behind fast-flux is to have numerous IP addresses associated with a single fully qualified domain name, where the IP addresses are swapped in and out with extremely high frequency, through changing DNS resource records, thus the authoritative name servers of the said fast-fluxing domain name is—in most cases—hosted by the criminal actor. Depending on the configuration and complexity of the infrastructure, fast-fluxing is generally classified into single, double, and domain fast-flux networks. Fast-fluxing remains an intricate problem in network security and current countermeasures remain ineffective.
https://en.wikipedia.org/wiki/Fast_flux
What information can be used in observing an NXDOMAIN response spike
There are several sources of information that can be used when an NXDOMAIN response spike occurs.
BIG-IP DNS profile statistics
A BIG-IP DNS listener (virtual server) will have a DNS profile applied to it. This profile provides access to DNS traffic statistics. In particular, the "Question Type" and "Return Code" sections have statistics on the DNS record types queried and the return code counts. Here is sample output from a script that periodically captured DNS profile statistics - the stats were taken 20 seconds apart. These were taken during a lab test where an NXDOMAIN response flood was being simulated. Here is the sample script:
while true; do date >> /var/tmp/dns_stat.txt; tmsh show ltm profile dns dns-prof-1 >> /var/tmp/dns_stat.txt; echo "###################" >> /var/tmp/dns_stat.txt; sleep 20; done
Notice that the "Question Type" section has only "A" records queried, and in the "Return Code (RCODE)" section, only "No Name (NXDOMAIN)" responses were returned.
Date: Sun Jun 25 11:15:59 PDT 2023 --------------------------------------------------- Ltm::DNS Profile: dns-prof-1 --------------------------------------------------- Virtual Server Name N/A Query Message Recursion Desired (RD) 18847 100.0 DNSSEC Checking Disabled (CD) 0 0.0 EDNS0 18847 100.0 Client Subnet 0 0.0 Client Subnet Inserted 0 0.0 Operation Code (OpCode) Query 18847 100.0 Notify 0 0.0 Update 0 0.0 Other 0 0.0 Question Type A 18847 100.0 AAAA 0 0.0 ANY 0 0.0 CNAME 0 0.0 MX 0 0.0 ... Other 0 0.0 Response Message Authoritative Answer (AA) 18843 99.9 Recursion Available (RA) 0 0.0 Authenticated Data (AD) 0 0.0 Truncated (TC) 0 0.0 Return Code (RCODE) No Error 1 0.0 No Name (NXDOMAIN) 18842 99.9 Server Failed 0 0.0 Refused 1 0.0 Bad EDNS Version 0 0.0 Name Error (NXDOMAIN) Override 0 0.0 EDNS0 client subnet 0 0.0 Date: Sun Jun 25 11:16:19 PDT 2023 Query Message Recursion Desired (RD) 18993 100.0 DNSSEC Checking Disabled (CD) 0 0.0 EDNS0 18993 100.0 Client Subnet 0 0.0 Client Subnet Inserted 0 0.0 Operation Code (OpCode) Query 18993 100.0 Notify 0 0.0 Update 0 0.0 Other 0 0.0 Question Type A 18993 100.0 AAAA 0 0.0 ANY 0 0.0 CNAME 0 0.0 ... Other 0 0.0 Response Message Authoritative Answer (AA) 18989 99.9 Recursion Available (RA) 0 0.0 Authenticated Data (AD) 0 0.0 Truncated (TC) 0 0.0 Return Code (RCODE) No Error 1 0.0 No Name (NXDOMAIN) 18988 99.9 Server Failed 0 0.0 Refused 1 0.0 Bad EDNS Version 0 0.0 Name Error (NXDOMAIN) Override 0 0.0 EDNS0 client subnet 0 0.0 Sample packet capture during a NXDOMAIN response spike Earlier, we reviewed what is Domain Generation Algorithm (DGA) and that its used to generate random DNS names which are used by threat actors in fast flux technique to evade detection and mitigations for their phishing and malware campaigns. The sample packet capture was taken while using the sample random DNS names generated thru a DGA to simulate a NXDOMAIN response flood. Using the capinfos command, we can observe various details about the packet capture. First and last packet time tells us how long this pcap was running, in this case it has been around 13 mins. Also average packet rate per second, 76 packets/sec, can be useful if we are looking to find a baseline on packets per second value. And other packet capture details which may be useful depending on your purpose. Since we are looking at DNS traffic, remember its query and response packets, so 76 packets per second, presumably may contain both type of packets, thus, estimation of 38 packets per second for dns queries. [root@bigip:TimeLimitedModules::Active:Standalone] tmp # capinfos nx-4.pcap File name: nx-4.pcap File type: Wireshark/tcpdump/... 
- pcap
File encapsulation: Ethernet
File timestamp precision: microseconds (6)
Packet size limit: file hdr: 65535 bytes
Number of packets: 60 k
File size: 16 MB
Data size: 15 MB
Capture duration: 786.961838 seconds
First packet time: 2023-06-25 11:05:48.352417
Last packet time: 2023-06-25 11:18:55.314255
Data byte rate: 19 kBps
Data bit rate: 159 kbps
Average packet size: 260.57 bytes
Average packet rate: 76 packets/s
SHA1: 10d3652ce3b97d68d16f324ee6eaac918b8f34d9
RIPEMD160: 7c766e4e5819fb9f5c90cee133f0e1f61e9b5801
MD5: 51cd10815de35802460ae0b26e156d66
Strict time order: False
Number of interfaces in file: 1
Interface #0 info:
Encapsulation = Ethernet (1 - ether)
Capture length = 65535
Time precision = microseconds (6)
Time ticks per second = 1000000
Number of stat entries = 0
Number of packets = 60063
Next up, use tshark to get more information from the packet capture. Specifically, we are interested in the DNS-related information, such as the DNS records queried and the DNS responses.
Extract all DNS names from the packet capture - queries or responses:
[root@bigip:TimeLimitedModules::Active:Standalone] tmp # tshark -r nx-4.pcap -n -T fields -e dns.qry.name > nx-4-pcap-records.txt
Running as user "root" and group "root". This could be dangerous.
Reviewing the number of DNS names extracted, it matches the packet count from capinfos - 60063.
[root@bigip:TimeLimitedModules::Active:Standalone] tmp # cat nx-4-pcap-records.txt | wc -l
60063
Sort the DNS names extracted from the pcap and notice the randomness of these names. This could be the fast flux technique, or an attempt to drown a DNS server in random records that it needs to search, eventually causing service disruption - a classic DNS water torture attack. The sample DNS names taken from the pcap are still not very random and are short; there are longer and more random hostnames that can be generated by DGAs, and these can consume a lot of memory and CPU resources on a DNS server.
[root@bigip:TimeLimitedModules::Active:Standalone] tmp # cat nx-4-pcap-records.txt | sort | uniq -c | sort -nrk 1
3514 mmahaesqar.fop789.loc
3512 arphansaqh.fop789.loc
3509 hwepmerswa.fop789.loc
3508 qrqnswerqs.fop789.loc
3506 seenwrqrps.fop789.loc
3506 pwprhhnqqn.fop789.loc
3506 hrhspsrenn.fop789.loc
3506 arwrseqssh.fop789.loc
3505 eqqhnpswmh.fop789.loc
3504 fop789.locaehwmnms.fop789.loc
3503 qehspqnmrn.fop789.loc
3503 psrhaaeqqa.fop789.loc
3503 paepnpamea.fop789.loc
3503 ewamspqwha.fop789.loc
3501 aepaaemrmn.fop789.loc
3500 mrspmramrn.fop789.loc
3499 rnqhapapwn.fop789.loc
475
DNS Water Torture Denial-of-Service Attacks
Customers reported pseudo-random subdomain or "DNS water torture attacks" hitting their networks with half a million connections per second. Outages were occurring even if a network wasn't the direct target of the attack. For example, service providers still felt the effects as the DNS water torture traffic passed through their networks and saturated their pipes. To pull off a DNS water torture attack, an attacker leverages a botnet (or thingbot) to make thousands of DNS requests for fake subdomains against an Authoritative Name Server. Because the requests are for non-existent subdomains or hosts, the requests consume the memory and processing resources on the main resolver. If there are intermediary DNS resolvers inline, they too get clogged up with these fake requests. For legitimate end users, all this resource consumption means everything runs slow or even stops, resulting in a denial of service.
https://www.f5.com/labs/articles/threat-intelligence/the-dns-attacks-we-re-still-seeing#:~:text=To%20pull%20off%20a%20DNS,against%20an%20Authoritative%20Name%20Server.&text=Because%20the%20requests%20are%20for,resources%20on%20the%20main%20resolver.
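Back to the capture itself: another way to gauge the NXDOMAIN response rate over time is tshark's interval statistics. This is a hedged sketch rather than part of the original walkthrough; the io,stat module and the dns.flags.rcode field are standard tshark/Wireshark features, and the interval and file name should be adjusted for your capture. Counting only packets with RCODE 3 (NXDOMAIN) in one-second buckets gives NXDOMAIN responses per second, which can be compared against the detection and mitigation thresholds covered in part two of this series.
# count NXDOMAIN responses per 1-second interval in the capture
tshark -q -r nx-4.pcap -z io,stat,1,"dns.flags.rcode == 3"
# total NXDOMAIN responses across the whole capture
tshark -r nx-4.pcap -Y "dns.flags.rcode == 3" | wc -l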
Filtering further, we can extract the DNS response packets only:
[root@bigip:TimeLimitedModules::Active:Standalone] tmp # tshark -r nx-4.pcap -n -Y "dns.flags.response == 1" -T fields -e dns.qry.name > nx-4-pcap-records-response.txt
Running as user "root" and group "root". This could be dangerous.
The count of DNS response packets only shows 29994, approximately half of the previous total of 60063.
[root@bigip:TimeLimitedModules::Active:Standalone] tmp # cat nx-4-pcap-records-response.txt | wc -l
29994
We can then sort the DNS responses and find the number of times each DNS name was responded to. We know that the responses to these queries are NXDOMAIN because these records do not exist in the lab DNS server. We can also observe which DNS names were responded to the most; in the example output, each of the records was responded to almost equally.
[root@bigip:TimeLimitedModules::Active:Standalone] tmp # cat nx-4-pcap-records-response.txt | sort | uniq -c | sort -nrk 1
1748 qehspqnmrn.fop789.loc
1748 mmahaesqar.fop789.loc
1746 seenwrqrps.fop789.loc
....
1746 aepaaemrmn.fop789.loc
1744 paepnpamea.fop789.loc
1744 arphansaqh.fop789.loc
1742 eqqhnpswmh.fop789.loc
316
Configuring Decision Logging for the F5 BIG-IP Global Traffic Manager
I was working on a GTM solution and with my limited lab I wanted to make sure that the decisions that F5 BIG-IP Global Traffic Manager made at the wideIP and pool level were as evident in the logs as they were consistent in my test results. It turns out there are some fancy little checkboxes in the wideIP configuration that you can check to enable such logs. You might notice, however, that upon enabling these checkboxes the logs are nowhere to be found. This is because there are other necessary steps. You need to configure a few objects to get those logs flowing. Log Publisher The first object is the log publisher. For as much detail as flows in the decision logging, I’d highly recommend using an HSL profile to log to a remote server, but for the purposes of testing I used the local syslog. This can also be done with tmsh. sys log-config publisher gtm_decision_logging { destinations { local-syslog { } } } DNS Logging Profile Next, create a DNS logging profile, make sure to select the Log Publisher you created in the previous step. For testing purpose I enabled the log responses and query ID as well, but those are disabled by default. This also can be created in tmsh. ltm profile dns-logging gtm_decision_logging { enable-response-logging yes include-query-id yes log-publisher gtm_decision_logging } Custom DNS Profile Now create a custom DNS profile. The only custom properties necessary are at the bottom of the profile where you enable logging and select the logging profile. This can also be configured in tmsh. ltm profile dns gtm_decision_logging { app-service none defaults-from dns enable-logging yes log-profile gtm_decision_logging } Apply the DNS Profile Now that all the objects are created, you can reference the DNS profile in the listener. in tmsh, you can modify the listener by adding the profile or if one already exists, replacing it. modify gtm listener gtmlistener profiles replace-all-with { udp_gtm_dns gtm_decision_logging } Log Details Once you have all the objects configured and the DNS profile referenced in your listener, the logging should be hitting /var/log/ltm now. For this first query, the emea pool is selected, but there is no probe data for my primary load balancing method, and the none alternate method skips to the fallback, which uses the configured fallback IP to respond to the client. 2015-06-03 08:54:21 ltm1.dc.test qid 11139 from 192.168.102.1#64536: view none: query: my.example.com IN A + (192.168.102.5%0) 2015-06-03 08:54:21 ltm1.dc.test qid 11139 from 192.168.102.1#64536 [my.example.com A] [round robin selected pool (emea)] [pool member check succeeded (vip3:192.168.103.12) - pool member state is available (green)] [QoS skipped pool member (vip3:192.168.103.12) - path has unmeasured RTT] [pool member check succeeded (vip4:192.168.103.13) - pool member state is available (green)] [QoS skipped pool member (vip4:192.168.103.13) - path has unmeasured RTT] [failed to select pool member by preferred load balancing method] [Using none load balancing method] [failed to select pool member by alternate load balancing method] [selected configured fallback IP] 2015-06-03 08:54:21 ltm1.dc.test qid 11139 to 192.168.102.1#64536: [NOERROR qr,aa,rd] response: my.example.com. 30 IN A 192.168.103.99; In this second request, the emea pool is again selected, but now there is probe data, so the pool member is selected as appropriate. 
2015-06-03 08:55:43 ltm1.dc.test qid 6201 from 192.168.102.1#61503: view none: query: my.example.com IN A + (192.168.102.5%0)
2015-06-03 08:55:43 ltm1.dc.test qid 6201 from 192.168.102.1#61503 [my.example.com A] [round robin selected pool (emea)] [pool member check succeeded (vip3:192.168.103.12) - pool member state is available (green)] [QoS selected pool member (vip3:192.168.103.12) - QoS score (2082756232) is higher] [pool member check succeeded (vip4:192.168.103.13) - pool member state is available (green)] [QoS skipped pool member (vip4:192.168.103.13) from two pool members with equal scores] [QoS selected pool member (vip3:192.168.103.12)]
2015-06-03 08:55:43 ltm1.dc.test qid 6201 to 192.168.102.1#61503: [NOERROR qr,aa,rd] response: my.example.com. 30 IN A 192.168.103.12;
In this final request, the americas pool is selected, but there is no valid topology score for the pool members, so the query is refused.
2015-06-03 08:55:53 ltm1.dc.test qid 23580 from 192.168.102.1#59437: view none: query: my.example.com IN A + (192.168.102.5%0)
2015-06-03 08:55:53 ltm1.dc.test qid 23580 from 192.168.102.1#59437 [my.example.com A] [round robin selected pool (americas)] [pool member check succeeded (vip1:192.168.103.10) - pool member state is available (green)] [QoS selected pool member (vip1:192.168.103.10) - QoS score (0) is higher] [pool member check succeeded (vip2:192.168.103.11) - pool member state is available (green)] [QoS skipped pool member (vip2:192.168.103.11) from two pool members with equal scores] [QoS selected pool member (vip1:192.168.103.10)] [topology load balancing method failed to select pool member (vip1:192.168.103.10) - topology score is 0] [failed to select pool member by preferred load balancing method] [selected configured option Return To DNS]
2015-06-03 08:55:53 ltm1.dc.test qid 23580 to 192.168.102.1#59437: [REFUSED qr,rd] response: empty
Yeah, yeah, skip all that and give me the good stuff
If you want to test it quickly, you can save the config below to a file (/var/tmp/gtmlogging.txt in this example) and then merge it in. Finally, modify the wideIP and listener and you're good to go!
###
### configuration: /var/tmp/gtmlogging.txt
###
sys log-config publisher gtm_decision_logging {
    destinations {
        local-syslog { }
    }
}
ltm profile dns-logging gtm_decision_logging {
    enable-response-logging yes
    include-query-id yes
    log-publisher gtm_decision_logging
}
ltm profile dns gtm_decision_logging {
    app-service none
    defaults-from dns
    enable-logging yes
    log-profile gtm_decision_logging
}
###
### Merge Command
###
tmsh load sys config merge file /var/tmp/gtmlogging.txt
###
### Modify wideIP and Listener
###
tmsh modify gtm wideip my.example.com load-balancing-decision-log-verbosity { pool-member-selection pool-member-traversal pool-selection pool-traversal }
tmsh modify gtm listener gtmlistener profiles replace-all-with { udp_gtm_dns gtm_decision_logging }
tmsh save sys config
WILS: Virtual Server versus Virtual IP Address
Load balancing intermediaries have long used the terms "virtual server" and "virtual IP address". With the widespread adoption of virtualization, these terms have become even more confusing to the uninitiated. Here's how load balancing and application delivery use the terminology.
I often find it easiest to explain the difference between a "virtual server" and a "virtual IP address (VIP)" by walking through the flow of traffic as it is received from the client. When a client queries for "www.yourcompany.com" they get an IP address, of course. In many cases, if the site is served by a load balancer or application delivery controller, that IP address is a virtual IP address. That simply means the IP address is not tied to a specific host. It's kind of floating out there, waiting for requests. It's more like a taxi than a public bus in that a public bus has a predefined route from which it does not deviate. A taxi, however, can take you wherever you want within the confines of its territory. In the case of a virtual IP address, that territory is the set of virtual servers and services offered by the organization.
The client (the browser, probably) uses the virtual IP address to make a request to "www.yourcompany.com" for a particular resource such as a web application (HTTP) or to send an e-mail (SMTP). Using the VIP and a TCP port appropriate for the resource, the application delivery controller directs the request to a "virtual server". The virtual server is also an abstraction. It doesn't really "exist" anywhere but in the application delivery controller's configuration. The virtual server determines – via myriad options – which pool of resources will best serve to meet the user's request. That pool of resources contains "nodes", which ultimately map to one (or more) physical or virtual web/application servers (or mail servers, or X servers).
A virtual IP address can represent multiple virtual servers, and the correct mapping between them is generally accomplished by further delineating virtual servers by TCP destination port. So a single virtual IP address can point to a virtual "HTTP" server, a virtual "SMTP" server, a virtual "SSH" server, etc. Each virtual "X" server is a separate instantiation, all essentially listening on the same virtual IP address. It is also true, however, that a single virtual server can be represented by multiple virtual IP addresses. So "www1" and "www2" may represent different virtual IP addresses, but they might both use the same virtual server. This allows an application delivery controller to make routing decisions based on the host name, so "images.yourcompany.com" and "content.yourcompany.com" might resolve to the same virtual IP address and the same virtual server, but the "pool" of resources to which requests for images are directed will be different than the "pool" of resources to which content is directed. This allows for greater flexibility in architecture and scalability of resources at the content-type and application level rather than at the server level.
WILS: Write It Like Seth. Seth Godin always gets his point across with brevity and wit. WILS is an attempt to be concise about application delivery topics and just get straight to the point. No dilly dallying around.
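To make the VIP-to-virtual-server mapping described above concrete, here is a minimal, hedged BIG-IP configuration sketch. The object names, addresses, and pools are hypothetical, and the syntax is only an approximation of a bigip.conf-style definition; the point is simply that two virtual servers can share one virtual IP address and be distinguished only by destination port, each forwarding to its own pool of nodes.
ltm pool web_pool {
    members {
        10.10.10.11:80 { address 10.10.10.11 }
        10.10.10.12:80 { address 10.10.10.12 }
    }
}
ltm pool mail_pool {
    members {
        10.10.20.11:25 { address 10.10.20.11 }
    }
}
ltm virtual http_vs {
    destination 203.0.113.10:80
    ip-protocol tcp
    pool web_pool
    profiles {
        tcp { }
        http { }
    }
}
ltm virtual smtp_vs {
    destination 203.0.113.10:25
    ip-protocol tcp
    pool mail_pool
    profiles {
        tcp { }
    }
}
Both virtual servers listen on the same VIP (203.0.113.10 in this sketch); only the destination port differs, which is exactly the "one VIP, many virtual servers" relationship described above.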
- Server Virtualization versus Server Virtualization
- Architects Need to Better Leverage Virtualization
- Using "X-Forwarded-For" in Apache or PHP
- SNAT Translation Overflow
- WILS: Client IP or Not Client IP, SNAT is the Question
- WILS: Why Does Load Balancing Improve Application Performance?
- WILS: The Concise Guide to *-Load Balancing
- WILS: Network Load Balancing versus Application Load Balancing
- All WILS Topics on DevCentral
- If Load Balancers Are Dead Why Do We Keep Talking About Them?
SSL Orchestrator Advanced Use Cases: DNS-over-HTTPS Detection
Introduction F5 BIG-IP is synonymous with "flexibility". You likely have few other devices in your architecture that provide the breadth of capabilities that come native with the BIG-IP platform. And for each and every BIG-IP product module, the opportunities to expand functionality are almost limitless. In this article series we examine the flexibility options of the F5 SSL Orchestrator in a set of "advanced" use cases. If you haven't noticed, the world has been steadily moving toward encrypted communications. Everything from web, email, voice, video, chat, and IoT is now wrapped in TLS, and that's a good thing. The problem is, malware - that thing that creates havoc in your organization, that exfiltrates personnel records to the Dark Web - isn't stopped by encryption. TLS 1.3 and multi-factor authentication don't eradicate malware. The only reasonable way to defend against it is to catch it in the act, and an entire industry of security products are designed for just this task. But ironically, encryption makes this hard. You can't protect against what you can't see. F5 SSL Orchestrator simplifies traffic decryption and malware inspection, and dynamically orchestrates traffic to your security stack. But it does much more than that. SSL Orchestrator is built on top of F5's BIG-IP platform, and as stated earlier, is abound with flexibility. Let us now explore one perspective of that flexibility and how SSL Orchestrator can be used to handle DNS-over-HTTPS and DNS-over-TLS. SSL Orchestrator Use Case: DNS-over-HTTPS and DNS-over-TLS Handling Despite the rapid evolution of Internet standards, and increasing amount of encryption, there's one aspect of our daily online world that hasn't really changed that much in its nearly 4 decades of breath. That of course is DNS. We don't tend to think about it often, which is probably why it hasn't evolved as much as other things, but it truly is the heart and backbone of everything we do online. That is, unless you want to memorize "2607:f8b0:400a:0804:0000:0000:0000:2004" as the way to get to Google, you had better have a working DNS. But DNS is inherently insecure. It's been shown to be vulnerable to all manner of attacks, and for the purposes of this discussion specifically, also exposes where you're going. That is, while the HTTP payload may be encrypted, there's still that (visible) DNS request that goes out first. That's not to say that there haven't been any improvements though. DNSSEC was developed to help secure DNS and prevent spoofing, but in the many years since its introduction it still isn't as widespread as we'd have hoped. DNS-over-HTTPS (DoH) and DNS-over-TLS (DoT) are more recent developments that focus mainly on the privacy aspect of DNS communications (or lack thereof). With DoH and DoT, clients and servers forego the typical DNS protocol request over UDP or TCP port 53 and embed the request inside an encrypted HTTPS or pure TLS connection. But...while that sounds pretty cool, there can be additional consequences to encrypting DNS, both good and bad: [Good] DoH and DoT can protect against ISPs that use your DNS information for targeted advertising. [Bad] DoH and DoT can effectively blind local DNS security and filtering controls. DNS monitoring is often an effective defense against spam and malware infections. [Bad] DoH and DoT providers are public services (ex. Cloudflare and Google), so while DNS is being protected from eavesdropping along the way, the providers have access to your DNS requests. 
The US National Security Agency (NSA) actually published a warning on this: https://media.defense.gov/2021/Jan/14/2002564889/-1/-1/0/CSI_ADOPTING_ENCRYPTED_DNS_U_OO_102904_21.P... The basic idea in this document is that while your DNS privacy is a good thing, encrypted DoH and DoT can also mask malware command-and-control. If you strictly follow the NSA guidance, you could either block DoH and/or DoT altogether, or force users to only use a local enterprise DoH resolver. If you don't have access to a local DoH resolver though, what else could you do? Well, I'm so glad you asked. DoH and DoT are of course encrypted, so unless you disable either protocol at the client, set up a local DoH resolver and force clients to it, or attempt to do encrypted analysis to find DoH traffic, you need to decrypt and inspect. And as it turns out, F5 has an elegant solution to do just that. In this blog post I am going to present a few solutions on how to handle these protocols through SSL Orchestrator decrypted analysis. The protocol implementations of DoH and DoT are a little different, so I will address them separately. Handling DNS-over-HTTPS For all intents and purposes, DoH looks and feels like normal HTTPS traffic. There are some semi-unique patterns you might be able to follow to infer that something is DoH rather than regular HTTPS (i.e., packet size, frequency), but it’s never going to be completely accurate. You could also potentially filter on the IP addresses of the known public DoH services, but that list is rather large and growing: https://github.com/curl/curl/wiki/DNS-over-HTTPS, and clients can ultimately choose the service they point to. If you’re serious about actually detecting DoH traffic with reasonable accuracy, then there’s no better way than through decrypted analysis. Now, once you've decrypted HTTPS and detected that it's DoH, there are a number of things you can do: Simply detect and block: there are actually three forms of DoH requests – Wireformat GET and POST, and JSON (detailed here:https://developers.cloudflare.com/1.1.1.1/dns-over-https), though the vast majority of clients use the Wireformat GET and POST methods. In any case, this option follows the NSA's first recommendation to simply detect and block DoH requests. A browser that receives a reject on a DoH request will natively revert to normal TCP/UDP DNS to allow your local DNS security tools to do what they do best. Detect and log: you may simply want to log that DoH is passing through your network, who's asking, and what they're asking for as an extension to your current DNS monitoring protocols. Detect and blackhole: also, as a possible extension to your current DNS protection protocols, you may need to block some DoH requests from happening. As mentioned earlier, if you simply block DoH, the client will revert to DNS. One of the best ways to block a DoH/DNS request is to provide a good but fake response instead, a technique called "blackholing". In this approach, you can flag specific hostnames (matching on URL categories) to provide a blackhole response. Let’s now look at each of the above options in turn. The first option is actually pretty straightforward. You just need to add the below iRule to an SSL Orchestrator outbound layer 3 topology*, on the “-in-t” TCP tunnel virtual server. On any decrypted HTTPS traffic, the iRule examines the HTTP request methods for the three signature DoH methods and sends a reject on a match. Easy peasy. 
when HTTP_REQUEST priority 750 {
    if { ( [HTTP::method] equals "GET" and [HTTP::header exists "accept"] and [HTTP::header "accept"] equals "application/dns-json" ) or \
         ( [HTTP::method] equals "GET" and [HTTP::header exists "accept"] and [HTTP::header "accept"] equals "application/dns-message" ) or \
         ( [HTTP::method] equals "POST" and [HTTP::header exists "content-type"] and [HTTP::header "content-type"] equals "application/dns-message" ) } {
        reject
    }
}
The second two options above are available in a single iRule here: https://github.com/f5devcentral/sslo-script-tools/tree/main/sslo-dns-over-https-detection. As with the previous, import the iRule to the BIG-IP, then associate that iRule with an SSL Orchestrator outbound layer 3 topology*. You'll need to make a few adjustments in the iRule to suit your environment, and those are all presented as simple static variable assignments:
- static::LOCAL_LOG: enable or disable local Syslog logging (0 = off, 1 = on).
- static::HSL: enable or disable remote high-speed logging (0 = off, 1 = on).
- static::URLDB_LICENSED: set to 1 (on) if you've licensed and provisioned the subscription-based URLDB, or 0 (off) if you have not and want to just use custom URL categories for DoH blackholing.
- static::BLACKHOLE_URLCAT: leave empty to disable, or add a list of URL categories to search for URL-based DoH blackholing (ex. /Common/block-doh-urls).
- static::BLACKHOLE_RTYPE_A: enable or disable blackholing of matched A record requests (0 = off, 1 = on).
- static::BLACKHOLE_RTYPE_AAAA: enable or disable blackholing of matched AAAA record requests (0 = off, 1 = on).
- static::BLACKHOLE_RTYPE_TXT: enable or disable blackholing of matched TXT record requests (0 = off, 1 = on).
You can quickly test the solution by enabling DoH support in a browser. Please refer to the following instructions for each: https://developers.cloudflare.com/1.1.1.1/encryption/dns-over-https/encrypted-dns-browsers/. With the static::LOCAL_LOG value set to 1 (enabled), you can tail the BIG-IP LTM log (tail -f /var/log/ltm) and watch the DoH traffic flow across the SSL Orchestrator topology.
10.1.10.50:59804-104.18.42.171:443 :: DoH (WireFormat POST) Request: A:www.google.com
10.1.10.50:59804-104.18.42.171:443 :: DoH (WireFormat POST) Request: A:encrypted-tbn0.gstatic.com
10.1.10.50:59804-104.18.42.171:443 :: DoH (WireFormat POST) Request: A:fonts.gstatic.com
10.1.10.50:59812-104.18.42.171:443 :: DoH (WireFormat POST) Request: A:www.gstatic.com
10.1.10.50:59812-104.18.42.171:443 :: DoH (WireFormat POST) Request: A:id.google.com
10.1.10.50:59812-104.18.42.171:443 :: DoH (WireFormat POST) Request: A:play.google.com
Handling DNS-over-TLS
DNS-over-TLS (DoT) is essentially plain DNS wrapped in TLS. If you decrypt it, you'll see a regular DNS request. DoT, by standard, travels on TCP port 853, so one very simple way to block DoT traffic is to just block that port. But if you want to actually log the DoT requests flowing through the BIG-IP, or blackhole some URLs, you again have to decrypt. The aforementioned DoH logging iRule can also handle DoT requests, inspecting any decrypted TCP:853 traffic: https://github.com/f5devcentral/sslo-script-tools/tree/main/sslo-dns-over-https-detection. The same static variable flags apply.
Considering DoH/DoT proxy options
I would be remiss in not mentioning options to actually proxy DoH and DoT traffic, and BIG-IP 16.1 introduced two new capabilities:
- DoH proxy – where the BIG-IP proxies client DoH requests to external DoH resources.
You get to explicitly decrypt here, so you get another opportunity to log and control the DNS.
- DoH server – where the BIG-IP proxies client DoH requests to external TCP or UDP DNS resources. As with the proxy option, you get to decrypt the client's DoH request.
You can find out more about both of these capabilities here: https://support.f5.com/csp/article/K05451012
To proxy DoT to DoT, you simply need a TCP:853 virtual server with client and server SSL profiles. There are also various programmatic options to convert DoT to DNS, DNS to DoT, and DNS to DoH. But an important consideration here is that to proxy, you would need clients to direct their DNS/DoT/DoH requests to your local resource. In some cases you can do that through DHCP, or through enterprise browser policy management, but other than simply blocking outbound access to these protocols, it can be non-trivial to prevent clients from trying to use any of the external providers. That is specifically where decrypted analysis can be beneficial.
Summary
With an iRule and just a few configuration changes, we have been able to implement a capability on top of SSL Orchestrator to log and actively control encrypted DNS. Decrypted inspection of DoH and DoT is just one of the many interesting benefits of an SSL Orchestrator solution. A demo of the above is available here: SSL Orchestrator Advanced Use Case: DNS over HTTPS
* The iRules will also technically work for outbound layer 2 topologies.
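As a closing note, if you'd rather generate DoH test traffic from the command line than from a browser (as described in the testing section above), here is a hedged sketch using curl. The Cloudflare resolver URL is just an example endpoint; any public DoH service reachable through the topology would do, and the mapping to the iRule's detection branches is my assumption based on the headers each request sends.
# JSON (GET) form - sends an accept: application/dns-json header
curl -s -H 'accept: application/dns-json' 'https://cloudflare-dns.com/dns-query?name=www.example.com&type=A'
# Wireformat form - curl 7.62+ resolves the target hostname over DoH
# (content-type: application/dns-message) before fetching the page
curl -s --doh-url https://cloudflare-dns.com/dns-query https://www.example.com/ -o /dev/null
With local logging enabled in the detection iRule, both requests should show up in /var/log/ltm as DoH requests traversing the decrypted topology.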