BIG-IP DNS Resource Record Types: Architecture, Design and Configuration
Welcome to my first article on DevCentral! This article starts a series about BIG-IP DNS (the artist formerly known as GTM). This article and accompanying videos take a look at the support for Domain Name System (DNS) Resource Record (RR) types introduced in BIG-IP version 12.0. This enhancement represented a major step forward in the capabilities available on the BIG-IP. I created these videos during the initial introduction of the feature in the BIG-IP 12.0 release. Based on the timestamp in the BIG-IP GUI, that would be October of 2015. The information presented here is still relevant for later versions of BIG-IP and will benefit anyone trying to understand the different DNS resource record types available on the BIG-IP. The videos assume you have used BIG-IP DNS before and that you understand the basics of DNS. If you need a refresher on DNS resource records, I present an overview of the new and existing resource records supported by BIG-IP DNS. I then go into the feature itself and the architecture and implementation details for each of the records on BIG-IP DNS. In the last video, I do a configuration walk-through of a NAPTR and SRV wideIP along with the NAPTR and SRV pools. Be sure to see the attachment at the end of the article. It is a zip file that contains a PDF of many of the slides I use in the videos.

Executive Summary (9 Minutes)
First up is the Executive Summary. This video introduces, at a high level, everything that will be covered in the later videos. It also talks about some things that are not explicitly called out in any of the other content.

Technology Overview (8 Minutes)
Next is a DNS technology overview for the different resource record types supported by BIG-IP DNS. If you are already familiar with the different DNS RR types (A, AAAA, CNAME, NAPTR, SRV, MX), you can skip this part. If you need a quick refresher on what each record type is and the fields used in them, this content is for you! NOTE: This is not a discussion of how BIG-IP DNS implements these resource records but rather a discussion of the record types themselves in a purely DNS sense.

Feature/Architecture Overview
If you do not watch any of the other videos in this series, watch these!! I go over the concepts and architecture behind how the different resource records are implemented and configured on BIG-IP DNS, with lots of diagrams. The configurations of the different resource record types and all the associated pools can get confusing. These videos will give you a solid foundation to follow the configuration walk-through. I am going to be bold here and say that I guarantee you will learn something you did not know about BIG-IP DNS if you watch the next five videos. As a polling mechanism, hit the “Like” button on the article if I was correct in my prediction.

General Information and A/AAAA Records (11:30 Minutes)
This video covers general information about the DNS RR implementation along with the A and AAAA record types. For example, the video talks about a new concept in BIG-IP DNS called Non-Terminal and Terminal Pools and what they mean in wideIP configurations.

CNAMEs (9 Minutes)
This is a long discussion about CNAMEs. Who knew they were so interesting? Well, they are, and it is worth listening to how they are implemented on BIG-IP DNS.

NAPTR, SRV and MX Records (5:30 Minutes)
NAPTR, SRV and MX records are next. The configuration walk-through later in this article will implement NAPTR and SRV wideIPs. For a quick look at the fields carried by SRV and NAPTR records, see the short example below.
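As a quick, hedged illustration of the SRV and NAPTR fields discussed in the technology overview (this sketch is my addition, not from the videos), the snippet below uses the dnspython library to query both record types and print their fields. The domain names are placeholders; point them at records you actually host.

# Sketch: inspect SRV and NAPTR resource record fields with dnspython.
# Assumes: pip install dnspython (2.x); the names below are hypothetical.
import dns.resolver

resolver = dns.resolver.Resolver()

# SRV: priority, weight, port, and target identify a service instance.
for rr in resolver.resolve("_sip._tcp.example.com", "SRV"):
    print(f"SRV  priority={rr.priority} weight={rr.weight} "
          f"port={rr.port} target={rr.target}")

# NAPTR: order/preference control evaluation; flags/service/regexp/replacement
# describe the rewrite rule (commonly used for SIP and ENUM).
for rr in resolver.resolve("example.com", "NAPTR"):
    print(f"NAPTR order={rr.order} preference={rr.preference} "
          f"flags={rr.flags} service={rr.service} "
          f"regexp={rr.regexp} replacement={rr.replacement}")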
Health Checks (7:30 Minutes)
Let’s talk about one of the reasons you have a BIG-IP DNS…health checks! Now that wideIPs can be pool members, the game has changed.

Persistence (8 Minutes)
Persistence also has some new wrinkles. If you use persistence, you want to watch this.

Configuration Walk-through (11:30 Minutes)
Finally, we have the configuration walk-through. This is where things get real. In this video, I do a configuration of NAPTR and SRV wideIPs along with the NAPTR and SRV pools. You can see the configuration objects I will create and the order in the diagram below.

Conclusion
That is all I have for this article. I hope you learned something and, most importantly, have a better understanding of the different DNS resource record types available on BIG-IP DNS. I have more articles and videos to come, so stay tuned. Be sure to grab the attachment if you want a copy of some of the diagrams used in the videos. Trivia: iQuery uses port 4353 for its communication. Do you know the significance/meaning of that particular port number? Drop a comment at the bottom if you know the answer.
Edge to Pulse: IP Geolocation Database Migration
UPDATE (August 2023): Previously, we announced the end of support of the Edge legacy database. However, support for the Edge database has been extended indefinitely. All supported versions of F5 BIG-IP can now use both types of IP geolocation databases: Edge and Pulse, provided by Digital Envoy. The Edge database is based on partner-supplied data gathered from IP traffic. The Pulse database relies on information derived from mobile devices and Wi-Fi connection points, increasing the accuracy for certain aspects, but also significantly increasing file size. Because of this, F5 does not support City level for the Pulse database.
Last month, F5’s BIG-IP DNS team released the new Pulse database from our third-party vendor, Digital Envoy. F5 migrated its IP geolocation database from the Edge database to the Pulse database because Pulse provides a higher number of available subnets. This allows for improved geolocation accuracy when querying IP addresses. Because Pulse captures more granular location data than Edge, it is larger than the Edge database. Installation may take longer than expected after the migration due to the size of the database. As with any infrastructure shift, there are a few important things to know:
- This migration applies to all supported BIG-IP versions and is transparent to users
- If using the City database from Digital Envoy, please follow the KB article before installing the Pulse City RPM (K78974041 - link below)
- There is no change of database format, downloading, or installation procedures
- Edge and Pulse databases are available for an additional three months from the release date to ease the migration process
- Both databases are seamlessly available for customer download until the end of 2022
- Beginning in January 2023, only the Pulse database will be accessible
- The Edge database will reach End of Life by the end of December 2022
Using Client Subnet in DNS Requests
BIG-IP DNS 14.0 now supports edns-client-subnet (ECS) both for responding to client requests (GSLB) and for forwarding client requests (screening). The following is a quick start on using this feature.

What is EDNS-Client-Subnet (ECS)
If you are familiar with X-Forwarded-For headers in HTTP requests, ECS solves a similar problem: how to forward a DNS request through a proxy while preserving information about the original request (IP address). Some of this discussion I also cover in a previous article, Implementing Client Subnet in DNS Requests.

Traditional DNS Requests
When a traditional DNS request is made, a client makes a request to a “local” DNS server (LDNS), and that request is forwarded to the authoritative DNS server for that domain. When a topology record (send different responses based on the source address) is evaluated, it will use the source IP of the LDNS server. Usually this is OK for most applications, but it would be ideal to be able to forward more precise information from the LDNS server.

ECS DNS Requests
Using ECS, an LDNS server can inject additional metadata about the request that includes information about the source IP address of the client. In the following example a “client subnet” of 192.0.2.0/24 is forwarded to the DNS server.

ECS on BIG-IP DNS
F5 BIG-IP DNS can use ECS in two ways:
- Use ECS when handling topology requests
- Inject ECS when “screening” a DNS server

Using ECS with BIG-IP DNS Topology
There are two methods of configuring BIG-IP DNS to use ECS: either at the wide-ip or globally.
To configure ECS on a wide-ip:
To configure ECS globally, under DNS Settings:

Injecting ECS records
BIG-IP DNS can also proxy requests to other DNS servers (BIG-IP DNS or other vendors). When you modify the DNS profile to insert an ECS record, you will observe that the original /32 address is forwarded to any DNS servers that are in the pool for that particular Virtual Server. The following is a diagram of the above. A quick client-side test is sketched below.
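To verify the behavior from a test client, here is a hedged sketch (my addition, not part of the original quick start) that uses dnspython to attach an ECS option to a query aimed at a BIG-IP DNS listener. The listener address and wideIP name are placeholders.

# Sketch: send a DNS query carrying an EDNS Client Subnet (ECS) option.
# Assumes: pip install dnspython (2.x); addresses and names are hypothetical.
import dns.edns
import dns.message
import dns.query

LISTENER = "203.0.113.53"   # hypothetical BIG-IP DNS listener address
QNAME = "www.example.com"   # hypothetical wideIP

# Claim the query originated from the 192.0.2.0/24 client subnet.
ecs = dns.edns.ECSOption("192.0.2.0", srclen=24)
query = dns.message.make_query(QNAME, "A", use_edns=0, options=[ecs])

response = dns.query.udp(query, LISTENER, timeout=5)
for rrset in response.answer:
    print(rrset)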
Unbreaking the Internet and Converting Protocols
When CloudFlare took over 1.1.1.1 for their DNS service, it got me thinking about a few issues:
A. What do you do if you’ve been using 1.1.1.1 on your network? How do you unbreak the Internet?
B. How can you enable use of DNS over TLS for clients that don’t have native support?
C. How can you enable use of DNS over HTTPS for clients that don’t have native support?

A. Unbreaking the Internet
RFC1918 lays out “private” addresses that you should use for internal use; i.e., my home network is 192.168.1.0/24. But we all have networks where we sometimes “forget” about the RFC and use routable IP space internally. 1.1.1.1 is a good example of this; it’s just so nice looking and easy to type. In my home network I created a contrived situation to simulate breaking the Internet: creating a network route on my home router that sends all traffic for 1.1.1.1/32 to a Linux host on my internal network. I imagined a situation where I wanted to continue to send HTTP traffic to my own server, but send DNS traffic to the “real” 1.1.1.1. In the picture above, I have broken the Internet. It is now impossible for me to route traffic to 1.1.1.1/32. To “fix” the Internet (image below), I used an F5 BIG-IP to send HTTP traffic to my internal server and DNS traffic to 1.1.1.1. The BIG-IP configuration is straightforward: 1.1.1.1:80 points to the local server and 1.1.1.1:53 points to 1.0.0.1. Note that I have used 1.0.0.1 since I broke routing to 1.1.1.1 (1.0.0.1 works the same as 1.1.1.1).

B. DNS over TLS
After fixing the Internet, I started to think about the challenge of how to use DNS over TLS. RFC7858 is a proposed standard for wrapping DNS traffic in TLS. Many clients do not support DNS over TLS unless you install additional software. I started to wonder: is it possible to have a BIG-IP “upgrade” a standard DNS request to DNS over TLS? There are two issues:
- DNS over TLS uses TLS (encryption)
- DNS over TLS uses TCP (most clients default to UDP unless handling large DNS requests)
For the first issue, the BIG-IP can already wrap a TCP connection with TLS (often used in providing SSL visibility to security devices that cannot inspect SSL traffic: the BIG-IP terminates the SSL connection, passes traffic to the security device, and re-encrypts traffic to the final destination). The second issue can be solved by configuring BIG-IP DNS as a caching DNS resolver that accepts UDP and TCP DNS requests and only forwards TCP DNS requests. This results in an architecture that looks like the following. The virtual server is configured with a TCP profile and a serverssl profile. The serverssl profile is configured to verify the authenticity of the server certificate. The DNS cache is configured to use the virtual server that performs the UDP to TCP-over-TLS conversion. A quick client-side check of this setup is sketched below.
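As a hedged way to verify this UDP-to-DoT upgrade from a client (my addition; the addresses are the ones used in my lab scenario and may differ in yours), the sketch below uses dnspython to send a plain UDP query to the BIG-IP listener and, for comparison, the same query directly to 1.0.0.1 over TLS on port 853.

# Sketch: compare a plain UDP query through the BIG-IP listener with a
# direct DNS-over-TLS query. Assumes: pip install dnspython (2.x).
import dns.message
import dns.query

LISTENER = "1.1.1.1"      # BIG-IP virtual server handling plain DNS on UDP/53
UPSTREAM = "1.0.0.1"      # public resolver reachable over TLS on TCP/853

query = dns.message.make_query("www.example.com", "A")

# Client side: ordinary UDP DNS; the BIG-IP upgrades it to TLS upstream.
via_bigip = dns.query.udp(query, LISTENER, timeout=5)
print("via BIG-IP:", via_bigip.answer)

# Direct DNS over TLS, for comparison.
direct = dns.query.tls(query, UPSTREAM, timeout=5, port=853)
print("direct DoT:", direct.answer)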
A bonus of using the DNS cache is that subsequent DNS queries are extremely fast!

C. DNS over HTTPS
I felt pretty good after solving these two issues (A and B), but there was a third nagging issue (C): how to support DNS over HTTPS. If you have ever looked at DNS traffic, it is a binary format over UDP or TCP that is visible on the network. HTTPS traffic is very different in comparison: TCP traffic that is encrypted. The draft RFC (draft-ietf-doh-dns-over-https-05) has a handy specification for “DNS Wire Format”. Take a DNS packet, shove it into an HTTPS message, get back a DNS packet in an HTTPS response. Hmm, I bet an iRule could solve this problem of converting DNS to HTTPS. Using iRulesLX, I created a process where a TCL iRule takes a UDP payload off the network from a DNS client and sends it to a Node.JS iRulesLX extension that invokes the HTTPS API and returns the payload to the TCL iRule, which sends the result back to the DNS client on the network. The solution looks like the following. This works on my home network, but I did notice occasional 405 errors from the HTTPS services while testing.

The TCL iRule is simple: grab a UDP payload.

when CLIENT_DATA {
    set payload [UDP::payload]
    set encoded_payload [b64encode $payload]
    set RPC_HANDLE [ILX::init dns_over_https_plugin dns_over_https]
    set rpc_response [ILX::call $RPC_HANDLE query_dns $encoded_payload]
    UDP::respond [b64decode $rpc_response]
}

The Node.JS iRulesLX extension makes the API call to the HTTPS service.

...
ilx.addMethod('query_dns', function(req, res) {
    var dns_query = atob(req.params()[0]);
    var options = {
        host: '192.168.1.254',
        port: 80,
        path: '/dns-query?ct=application/dns-udpwireformat&dns=' + req.params()[0],
        method: 'GET',
        headers: { 'Host': 'cloudflare-dns.com' }
    };
    var cfreq = http.request(options, function(cfres) {
        var output = "";
        cfres.setEncoding('binary');
        cfres.on('data', function (chunk) {
            output += chunk;
        });
        cfres.on('end', () => {
            res.reply(btoa(output));
        });
    });
    cfreq.on('error', function(e) {
        console.log('problem with request: ' + e.message);
    });
    cfreq.write(dns_query);
    cfreq.end();
});
...

After deploying the iRule, you can update your DNS cache to point to the Virtual Server hosting the iRule. Note that the Node.JS code connects back to another Virtual Server on the BIG-IP that does a similar upgrade from HTTP to HTTPS and uses an F5 OneConnect profile. You could also have it connect directly to the HTTPS server and recreate a new connection on each call.
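For reference (this sketch is my addition, not part of the original article), the same DNS-wire-format exchange the Node.JS extension performs can be reproduced from a client in a few lines of Python. It uses the modern RFC 8484 GET parameters (dns= with unpadded base64url and application/dns-message), which differ slightly from the draft-05 ct= parameter shown above; the resolver URL is an assumption.

# Sketch: a DNS-over-HTTPS GET using the DNS wire format (RFC 8484 style).
# Assumes: pip install dnspython requests; the resolver URL is an assumption.
import base64

import dns.message
import requests

DOH_URL = "https://cloudflare-dns.com/dns-query"

# Build a normal DNS query and serialize it to wire format.
query = dns.message.make_query("www.example.com", "A")
wire = query.to_wire()

# Base64url-encode without padding and send it as ?dns=...
encoded = base64.urlsafe_b64encode(wire).rstrip(b"=").decode()
resp = requests.get(
    DOH_URL,
    params={"dns": encoded},
    headers={"accept": "application/dns-message"},
    timeout=5,
)
resp.raise_for_status()

# The body comes back as a DNS message in wire format, too.
answer = dns.message.from_wire(resp.content)
print(answer.answer)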
All done?
Before I started I wasn’t sure whether I could unbreak the Internet or upgrade old DNS protocols into more secure methods, but F5 BIG-IP made it possible. Until the next time that the Internet breaks.
What Is BIG-IP?
tl;dr - BIG-IP is a collection of hardware platforms and software solutions providing services focused on security, reliability, and performance.
F5's BIG-IP is a family of products covering software and hardware designed around application availability, access control, and security solutions. That's right, the BIG-IP name is interchangeable between F5's software and hardware application delivery controller and security products. This is different from BIG-IQ, a suite of management and orchestration tools, and F5 Silverline, F5's SaaS platform. When people refer to BIG-IP this can mean a single software module in BIG-IP's software family or it could mean a hardware chassis sitting in your datacenter. This can sometimes cause a lot of confusion when people say they have a question about "BIG-IP", but we'll break it down here to reduce the confusion.

BIG-IP Software
BIG-IP software products are licensed modules that run on top of F5's Traffic Management Operating System® (TMOS). This custom operating system is an event-driven operating system designed specifically to inspect network and application traffic and make real-time decisions based on the configurations you provide. The BIG-IP software can run on hardware or in virtualized environments. Virtualized systems provide BIG-IP software functionality where hardware implementations are unavailable, including public clouds and various managed infrastructures where rack space is a critical commodity.

BIG-IP Primary Software Modules
- BIG-IP Local Traffic Manager (LTM) - Central to F5's full traffic proxy functionality, LTM provides the platform for creating virtual servers and performance, service, protocol, authentication, and security profiles to define and shape your application traffic. Most other modules in the BIG-IP family use LTM as a foundation for enhanced services.
- BIG-IP DNS - Formerly Global Traffic Manager, BIG-IP DNS provides similar security and load balancing features to LTM but at a global/multi-site scale. BIG-IP DNS offers services to distribute and secure DNS traffic advertising your application namespaces.
- BIG-IP Access Policy Manager (APM) - Provides federation, SSO, application access policies, and secure web tunneling. Allow granular access to your various applications, virtualized desktop environments, or just go full VPN tunnel.
- Secure Web Gateway Services (SWG) - Paired with APM, SWG enables access policy control for internet usage. You can allow, block, verify, and log traffic with APM's access policies, allowing flexibility around your acceptable internet and public web application use. You know... contractors and interns shouldn't use Facebook, but you're not going to be responsible for why the CFO can't access their cat pics.
- BIG-IP Application Security Manager (ASM) - This is F5's web application firewall (WAF) solution. Traditional firewalls and layer 3 protection don't understand the complexities of many web applications. ASM allows you to tailor acceptable and expected application behavior on a per-application basis. Zero day, DoS, and click fraud all rely on traditional security devices' inability to protect unique application needs; ASM fills the gap between a traditional firewall and tailored, granular application protection.
- BIG-IP Advanced Firewall Manager (AFM) - AFM is designed to reduce the hardware and extra hops required when ADCs are paired with traditional firewalls. Operating at L3/L4, AFM helps protect traffic destined for your data center.
Paired with ASM, you can implement protection services at L3-L7 for a full ADC and security solution in one box or virtual environment.

BIG-IP Hardware
BIG-IP hardware offers several types of purpose-built custom solutions, all designed in-house by our fantastic engineers; no white boxes here. BIG-IP hardware is offered via series releases, each offering improvements in performance and features determined by customer requirements. These may include increased port capacity, traffic throughput, CPU performance, FPGA feature functionality for hardware-based scalability, and virtualization capabilities. There are two primary variations of BIG-IP hardware: single-chassis designs and VIPRION modular designs. Each offers unique advantages for internal and colocated infrastructures. Updates in processor architecture, FPGA, and interface performance are common, so we recommend referring to F5's hardware page for more information.
Pool member status on F5 DNS objects via iControl REST
I got a question on how to retrieve the status of pool members on F5 DNS objects via the iControl REST interface. In the GUI you get fancy red, yellow, black, blue, and green painted circles, diamonds, squares, and triangles to communicate availability. At the command line, however, it’s harder to map that visual data. There are two important settings underlying an object’s availability: availability state and enabled state. You can see when using the field-fmt option on the tmsh command to show the pool member status that both GTM and LTM objects report the same data (the remaining fields are removed for brevity):

gtm pool pool gslb_pool_1:A {
    members {
        testvip:ltm3 {
            status.availability-state available
            status.enabled-state enabled
            status.status-reason Available
        }
    }
    status.availability-state available
    status.enabled-state enabled
    status.status-reason Available
}
ltm pool testpool {
    members {
        192.168.103.20:80 {
            status.availability-state available
            status.enabled-state enabled
            status.status-reason Pool member is available
        }
    }
    status.availability-state available
    status.enabled-state enabled
    status.status-reason The pool is available
}

Through the REST interface, both status fields are available in the LTM pool object. Calling the pool member and selecting on session and state, the following request returns the data as expected:

GET https://ltm3.test.local/mgmt/tm/ltm/pool/testpool/members/~Common~192.168.103.20:80?$select=session,state
{
    "session": "monitor-enabled",
    "state": "up"
}

However, these same status fields are not available on GTM pool/pool member REST objects. Calling a GTM pool member, the following request returns no information on monitor status (that's the availability-state field from above) but does indicate the enabled state, albeit unconventionally by changing the field name:

GET https://ltm3.test.local/mgmt/tm/gtm/pool/a/~Common~gslb_pool_1/members/~Common~ltm3:~Common~front_door
# when enabled:
{
    "name": "front_door",
    "enabled": true,
}
# when disabled:
{
    "name": "front_door",
    "disabled": true,
}

So if the information is not available directly from the REST interface, how do you get it? Well, you have options. The newest approach (and one I am currently investigating for things like this) is writing your own API via the iControl LX extensions. That's not the approach I'll take here, however. Instead, I'll use a tmsh script to return the appropriate data via a bash call from iControl REST. The tmsh script is quite simple. It takes an argument for the type of GTM pool you wish to query, and then the pool name. It will then iterate through the members and return for each member the pool member name, its availability state, and its enabled state.
proc script::run {} {
    if { $tmsh::argc != 3 } {
        puts "A pool type and name must be provided"
        exit
    }
    set pool_type [lindex $tmsh::argv 1]
    set pool_name [lindex $tmsh::argv 2]
    foreach obj [tmsh::get_status /gtm pool $pool_type $pool_name detail] {
        foreach member [tmsh::get_field_value $obj members] {
            puts "[tmsh::get_name $member],[tmsh::get_field_value $member status.availability-state],[tmsh::get_field_value $member status.enabled-state]"
        }
    }
}

Running the script from bash on the BIG-IP, I get the following results:

[root@ltm3:Active:Standalone] tmp # tmsh run cli script pool-status.tcl a gslb_pool_1
front_door:ltm3,available,disabled
testvip:ltm3,available,enabled
testvip2:ltm3,available,enabled

And finally, calling this with the F5 python SDK returns the same results:

>>> from f5.bigip import ManagementRoot
>>> mgmt = ManagementRoot('ltm3.test.local', 'admin', 'admin', token=True)
>>> ps = mgmt.tm.util.bash.exec_cmd('run', utilCmdArgs='-c "tmsh run cli script pool-status.tcl a gslb_pool_1"')
>>> ps.commandResult
u'front_door:ltm3,available,disabled\ntestvip:ltm3,available,enabled\ntestvip2:ltm3,available,enabled\n'

You can of course modify the tmsh script to return the data in a different format; I opted for quick and dirty here to model the problem and solution. Thanks go to community member Aaron Murray for the question and the inspiration, happy coding out there!
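A final footnote (my addition, not from the original article): since commandResult comes back as one flat CSV-style string, a few extra lines of Python turn it into something easier to work with programmatically. This is only a sketch built around the output format the script emits.

# Sketch: parse the CSV-style commandResult string shown above into dicts.
# Assumes the three-field name,availability,enabled format the script emits.
def parse_pool_status(command_result):
    members = []
    for line in command_result.strip().splitlines():
        name, availability, enabled = line.split(",")
        members.append({
            "member": name,
            "availability_state": availability,
            "enabled_state": enabled,
        })
    return members

result = 'front_door:ltm3,available,disabled\ntestvip:ltm3,available,enabled\n'
for member in parse_pool_status(result):
    print(member)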
Lightboard Lessons: F5 DNS Order of Operations
In this episode of Lightboard Lessons, Jason covers the order in which all the fantastic services available within the F5 BIG-IP DNS offering are processed.
Resources
- K14510: Overview of DNS query processing on BIG-IP systems
- Lightboard Lessons: Life of a Packet
- Getting Started with BIG-IP DNS (GTM)
- Configuring BIG-IP DNS (GTM)
Configuring Decision Logging for the F5 BIG-IP Global Traffic Manager
I was working on a GTM solution, and with my limited lab I wanted to make sure that the decisions the F5 BIG-IP Global Traffic Manager made at the wideIP and pool level were as evident in the logs as they were consistent in my test results. It turns out there are some fancy little checkboxes in the wideIP configuration that you can check to enable such logs. You might notice, however, that upon enabling these checkboxes the logs are nowhere to be found. This is because there are other necessary steps. You need to configure a few objects to get those logs flowing.

Log Publisher
The first object is the log publisher. For as much detail as flows in the decision logging, I’d highly recommend using an HSL profile to log to a remote server, but for the purposes of testing I used the local syslog. This can also be done with tmsh.

sys log-config publisher gtm_decision_logging {
    destinations {
        local-syslog { }
    }
}

DNS Logging Profile
Next, create a DNS logging profile; make sure to select the log publisher you created in the previous step. For testing purposes I enabled the log responses and query ID as well, but those are disabled by default. This can also be created in tmsh.

ltm profile dns-logging gtm_decision_logging {
    enable-response-logging yes
    include-query-id yes
    log-publisher gtm_decision_logging
}

Custom DNS Profile
Now create a custom DNS profile. The only custom properties necessary are at the bottom of the profile, where you enable logging and select the logging profile. This can also be configured in tmsh.

ltm profile dns gtm_decision_logging {
    app-service none
    defaults-from dns
    enable-logging yes
    log-profile gtm_decision_logging
}

Apply the DNS Profile
Now that all the objects are created, you can reference the DNS profile in the listener. In tmsh, you can modify the listener by adding the profile or, if one already exists, replacing it.

modify gtm listener gtmlistener profiles replace-all-with { udp_gtm_dns gtm_decision_logging }

Log Details
Once you have all the objects configured and the DNS profile referenced in your listener, the logging should be hitting /var/log/ltm now. For this first query, the emea pool is selected, but there is no probe data for my primary load balancing method, and the none alternate method skips to the fallback, which uses the configured fallback IP to respond to the client.

2015-06-03 08:54:21 ltm1.dc.test qid 11139 from 192.168.102.1#64536: view none: query: my.example.com IN A + (192.168.102.5%0)
2015-06-03 08:54:21 ltm1.dc.test qid 11139 from 192.168.102.1#64536 [my.example.com A] [round robin selected pool (emea)] [pool member check succeeded (vip3:192.168.103.12) - pool member state is available (green)] [QoS skipped pool member (vip3:192.168.103.12) - path has unmeasured RTT] [pool member check succeeded (vip4:192.168.103.13) - pool member state is available (green)] [QoS skipped pool member (vip4:192.168.103.13) - path has unmeasured RTT] [failed to select pool member by preferred load balancing method] [Using none load balancing method] [failed to select pool member by alternate load balancing method] [selected configured fallback IP]
2015-06-03 08:54:21 ltm1.dc.test qid 11139 to 192.168.102.1#64536: [NOERROR qr,aa,rd] response: my.example.com. 30 IN A 192.168.103.99;

In this second request, the emea pool is again selected, but now there is probe data, so the pool member is selected as appropriate.
2015-06-03 08:55:43 ltm1.dc.test qid 6201 from 192.168.102.1#61503: view none: query: my.example.com IN A + (192.168.102.5%0)
2015-06-03 08:55:43 ltm1.dc.test qid 6201 from 192.168.102.1#61503 [my.example.com A] [round robin selected pool (emea)] [pool member check succeeded (vip3:192.168.103.12) - pool member state is available (green)] [QoS selected pool member (vip3:192.168.103.12) - QoS score (2082756232) is higher] [pool member check succeeded (vip4:192.168.103.13) - pool member state is available (green)] [QoS skipped pool member (vip4:192.168.103.13) from two pool members with equal scores] [QoS selected pool member (vip3:192.168.103.12)]
2015-06-03 08:55:43 ltm1.dc.test qid 6201 to 192.168.102.1#61503: [NOERROR qr,aa,rd] response: my.example.com. 30 IN A 192.168.103.12;

In this final request, the americas pool is selected, but there is no valid topology score for the pool members, so the query is refused.

2015-06-03 08:55:53 ltm1.dc.test qid 23580 from 192.168.102.1#59437: view none: query: my.example.com IN A + (192.168.102.5%0)
2015-06-03 08:55:53 ltm1.dc.test qid 23580 from 192.168.102.1#59437 [my.example.com A] [round robin selected pool (americas)] [pool member check succeeded (vip1:192.168.103.10) - pool member state is available (green)] [QoS selected pool member (vip1:192.168.103.10) - QoS score (0) is higher] [pool member check succeeded (vip2:192.168.103.11) - pool member state is available (green)] [QoS skipped pool member (vip2:192.168.103.11) from two pool members with equal scores] [QoS selected pool member (vip1:192.168.103.10)] [topology load balancing method failed to select pool member (vip1:192.168.103.10) - topology score is 0] [failed to select pool member by preferred load balancing method] [selected configured option Return To DNS]
2015-06-03 08:55:53 ltm1.dc.test qid 23580 to 192.168.102.1#59437: [REFUSED qr,rd] response: empty

Yeah, yeah, skip all that and give me the good stuff
If you want to test it quickly, you can save the config below to a file (/var/tmp/gtmlogging.txt in this example) and then merge it in. Finally, modify the wideIP and listener and you’re good to go!

###
### configuration: /var/tmp/gtmlogging.txt
###
sys log-config publisher gtm_decision_logging {
    destinations {
        local-syslog { }
    }
}
ltm profile dns-logging gtm_decision_logging {
    enable-response-logging yes
    include-query-id yes
    log-publisher gtm_decision_logging
}
ltm profile dns gtm_decision_logging {
    app-service none
    defaults-from dns
    enable-logging yes
    log-profile gtm_decision_logging
}

###
### Merge Command
###
tmsh load sys config merge file /var/tmp/gtmlogging.txt

###
### Modify wideIP and Listener
###
tmsh modify gtm wideip my.example.com load-balancing-decision-log-verbosity { pool-member-selection pool-member-traversal pool-selection pool-traversal }
tmsh modify gtm listener gtmlistener profiles replace-all-with { udp_gtm_dns gtm_decision_logging }
tmsh save sys config
Lightboard Lessons: DNS over HTTPS
DNS over HTTPS (DoH for short) is making a name for itself in the news lately. It seems like everyone except the end user is a little up in arms over the push in Chrome and Firefox to default to DoH for lookups. In this Lightboard Lesson, Jason covers what DoH is, options to configure it in BIG-IP, and some of the industry concerns over adoption.
Resources
F5 Related
- Unbreaking the Internet. In this article, Eric Chen covers how to get around internal use of 1.1.1.1 so Cloudflare DNS can still be reached, as well as BIG-IP configuration options for clients unable to support DoH.
- Inbound DoH (and DoT) iRules Solution. This GitHub repository is a solution from Greg Robinson to handle inbound DoH requests with iRules and iRulesLX.
Related Industry News
- ISPs Seek Hill Intervention in Encryption-Related Google DNS Change
- Why big ISPs aren't happy about Google's plans for encrypted DNS
- DoH is the Wrong Partial Solution
- Everything You Need to Know About DNS Encryption - And Why Google May Not Be Doing Evil
Implementing Client Subnet in DNS Requests
Update 2018-07-14: Starting with BIG-IP DNS 14.0, Client Subnet is available as a checkbox feature: https://devcentral.f5.com/s/articles/using-client-subnet-in-dns-requests-31948
Original Article:

Sending end-users to the right data center
Using an iRule and edns-client-subnet (ECS) we can improve the accuracy of F5 GTM’s topology load balancing. In a global world you expect to send your end-users to the closest regional data center. An end-user request from Tokyo will be sent to a data center in Tokyo, and an end-user in California will be sent to a data center in California. GTM does this using topology load balancing, but centralized DNS resolvers like Google DNS or OpenDNS can interfere with GTM’s ability to provide accurate topology load balancing. An end-user in Tokyo, Japan using OpenDNS will be sent to a data center in California instead of Tokyo. The end-user is sent to the wrong data center because the original DNS request appeared to originate from an OpenDNS server in California instead of the end-user’s location in Tokyo. Fortunately there is a draft protocol to address these challenges: EDNS Client Subnet (ECS). ECS enables DNS servers to provide more accurate topology information that can improve end-user experience. ECS is similar to X-Forwarded-For headers used with HTTP. The X-Forwarded-For header is a technique for preserving the original client IP address for cases when the BIG-IP translates the source IP address of HTTP traffic. ECS injects information about the original client IP subnet into the DNS request. Today, with BIG-IP versions 11.5.x and 11.6.0, we can use an iRule to implement a solution that is interoperable with the current draft protocol for testing and evaluation.

Implementing the protocol
After reading through the draft specification you can see that ECS requires the ability to interpret ECS information that is sent with the DNS query, as well as respond with an appropriate response that indicates to the DNS resolver that the DNS server supports ECS. Fortunately we have two LTM iRule events, “DNS_REQUEST” and “DNS_RESPONSE”, that allow us to receive and respond with the appropriate messages. We can leverage the GTM DNS_REQUEST iRule event to capture the information and the whereis command to act on the information. A graphical representation of the solution
The iRule code can be found on the codeshare. Here’s a sample output of the results of using the iRule with OpenDNS’s servers (requires a request to OpenDNS support to enable your DNS servers to receive ECS from OpenDNS) and Google DNS.

Rule /Common/ecs_topology_www : LDNS LOC: 208.69.37.XXX NA US California {}
Rule /Common/ecs_topology_www : ECS LOC: 210.226.XXX.0 AS JP Tokyo {}
Rule /Common/ecs_topology_www : Asia
ECS iRule using OpenDNS

Rule /Common/ecs_topology_www : LDNS LOC: 173.194.93.XXX NA US California {}
Rule /Common/ecs_topology_www : ECS LOC: 210.226.XXX.0 AS JP Tokyo {}
Rule /Common/ecs_topology_www : Asia
ECS iRule using Google DNS

Implementing a draft now
ECS is still a draft protocol and may change over time. Using an F5 iRule we can interoperate with the protocol to provide better granularity for topology load balancing. The example above was done in a lab environment to demonstrate what’s possible. Improvements that could be made to the example include:
- More accurate implementation of the draft (it works, but I didn’t read the draft that closely)
- Limiting ECS responses to Google/OpenDNS/ECS DNS resolvers
- Implementing an ECS resolver (possible?)
Using an iRule we can solve today’s problem using a draft protocol!
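To see the effect from a lab client without waiting on a public resolver, here is a hedged sketch (my addition; the listener, wideIP, and subnets are placeholders) that sends the same query with different ECS client subnets and prints what each one gets back, including whether the server echoes an ECS option to signal support.

# Sketch: probe topology decisions by varying the ECS client subnet.
# Assumes: pip install dnspython (2.x); listener/wideIP/subnets are hypothetical.
import dns.edns
import dns.message
import dns.query

LISTENER = "203.0.113.53"        # hypothetical GTM/BIG-IP DNS listener
QNAME = "www.example.com"        # hypothetical wideIP

# Hypothetical client subnets: one "Tokyo-like", one "California-like".
SUBNETS = [("198.51.100.0", 24), ("192.0.2.0", 24)]

for address, prefix in SUBNETS:
    ecs = dns.edns.ECSOption(address, srclen=prefix)
    query = dns.message.make_query(QNAME, "A", use_edns=0, options=[ecs])
    response = dns.query.udp(query, LISTENER, timeout=5)

    answers = [rr.to_text() for rrset in response.answer for rr in rrset]
    echoed = [o for o in response.options if isinstance(o, dns.edns.ECSOption)]
    print(f"ECS {address}/{prefix} -> answers={answers} echoed_ecs={echoed}")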