Forum Discussion

smp_86112
Dec 02, 2009

LTM Returns "Refused" DNS Response

Hi. I've got a DNS server in a VLAN routed by a 9.3.1HF6 LTM. Queries reach the DNS server through a 0.0.0.0:* wildcard virtual server. Quite frequently - roughly every 30-60 seconds - nslookup on the client gets a DNS "Refused" response. The failures last for 30-60 seconds, then I get successful responses for another 30-60 seconds, and the two states alternate.

Because I couldn't capture the traffic with the wildcard VS, I created a DNS VIP with the DNS server as a Pool Member and took some network traces, with very strange results. I see the inbound request on the external VLAN and see the request forwarded to the Pool Member on the internal VLAN. But immediately after that, the LTM returns a DNS "Refused" response on the external VLAN with a source address of the VIP. I am getting absolutely no response from the Pool Member.
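
For anyone who wants to reproduce the captures, tcpdump on the LTM along these lines should do it (vlan names as in our config; the output paths are arbitrary):

    # capture DNS traffic on each vlan; -n skips name resolution, -s0 grabs full packets
    tcpdump -ni external -s0 -w /var/tmp/dns_external.cap port 53
    tcpdump -ni internal -s0 -w /var/tmp/dns_internal.cap port 53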

This three-packet pattern repeats - inbound on external, inbound on internal, outbound on external, in that order. It gives the appearance of the LTM sending a DNS response without waiting for the actual DNS server to respond. And not only that, it is intermittently giving the correct answer!


The only thing I have thought of at this point is that maybe the connection table is reaching a point where these DNS requests are no longer being processed until the table is reaped. But I don't see any evidence of that in the logs.
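
If the connection table were the culprit, I'd expect to see it filling up from the LTM shell; something like this (bigpipe syntax from memory, so treat it as a sketch):

    # rough size of the connection table
    b conn show all | wc -l
    # look for lingering port 53 entries
    b conn show all | grep ':53'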


I'd appreciate your thoughts. Thanks.


  • That's a scary statement coming from you, Hoolio. But yeah, this one is out there. I did a little more research last night and came across SOL5299. It seems related, but I don't know DNS configuration well enough to conclude whether that's the problem. I am going to review with our DNS guys before posting any traces.
  • I looked and realized I don't have any forwarders configured, so I'm not sure whether this SOL applies. But it leads me to believe there is some type of conflict between the named config on the LTM and the fact that it is forwarding DNS traffic through a wildcard virtual server. To eliminate this conflict, we have decided to move the DNS server out of the internal VLAN. It's a lot of work, but architecturally it is the right thing to do in our environment.
  • The named on the 'host' side of the LTM (i.e., the management plane) shouldn't affect the VIP configuration at all, so I very seriously doubt this is the issue. It may be worth a double-check on your virtual server config to be sure it is bound to the vlans you expect it to be bound to.
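
    A quick way to double-check from the CLI (v9 bigpipe) is something like:

        b virtual vs_0_0_0_0_any list

    which should print the full virtual server definition, including any vlan restriction that has been set.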


    Also, is your test client on the same vlan as the pool member, by chance? Are you using SNAT? Does the pool member point to the BigIP for its default gateway?
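
    If SNAT does turn out to be the missing piece, adding automap to the DNS virtual is one way to force return traffic back through the BigIP. A rough sketch in v9 config syntax (the address and pool name are just placeholders):

        virtual vs_dns_example {
           destination 10.10.10.53:53
           ip protocol udp
           profile udp
           pool pool_dns
           snat automap
        }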


    -MC
  • I appreciate you following up, as I don't like leaving this problem unresolved.


    I am a little concerned with your use of the term "Pool Member" here, as there is no "Pool" based on the way I think about Pools. That may just be semantics, but I want to ensure that I stated the problem correctly.


    I'm not entirely sure what you mean by "making sure it is bound to all the VLANs I expect". If I'm not mistaken, the DNS request is being forwarded through this virtual. According to the GUI, the "VLAN Traffic" property of this VIP is set to "All VLANs". There are only two vlans on this LTM - an external and an internal.


    virtual vs_0_0_0_0_any {
        destination any:any
        ip forward
        profile fastl4_vs_0_0_0_0_any
    }


    The test clients are not in the same VLAN as the "Pool Member". However, the DNS server is listed in the "DNS Lookup Server List" (under "General Properties" -> "Device" -> "DNS" in the admin GUI). I wonder if this is the conflict?
  • Sorry for the misunderstanding: I read "VS I created a DNS VIP, used the DNS server as a Pool Member" and assumed that you'd moved on from a forwarder configuration. Either way, it still holds true: the named config on the BigIP shouldn't affect this at all.


    So a couple of other thoughts: you've got it set up in a way that tells the BigIP to bind that 0.0.0.0 virtual to all vlans - internal, external, etc. - so any traffic *that doesn't match a virtual server VIP* will pass through to this listener. It's somewhat more typical to bind a wide-open forwarder to a specific VLAN for security reasons (e.g. the internal vlans for outbound access). Do you by chance have GTM installed on this box as well, or any port 53 VIPs set up?
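
    For example, pinning your existing forwarder to just the internal vlan would look something like this in bigip.conf (v9 syntax from memory, so verify against your version):

        virtual vs_0_0_0_0_any {
           destination any:any
           ip forward
           profile fastl4_vs_0_0_0_0_any
           vlans internal enable
        }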


    -Matt
  • I didn't know this until you prompted me to look, but I do have two DNS VIPs which forward to the DNS server Pool Member - a UDP and a TCP port 53 VIP. However, the stats on both VIPs, the Pool, and the Node are all zero. We do in fact have GTMs, but they are separate physical hardware. I've attached my LTM named.conf; I don't recall ever editing this file purposefully:

      // restrict rndc access to local machines
      // use the key in the default place: /config/rndc.key

      controls { inet 127.0.0.1 port 953 allow { 127.0.0.1; }; };

      logging {
          channel logfile {
              syslog daemon;
              severity error;
              print-category yes;
              print-severity yes;
              print-time yes;
          };
          category default {
              logfile;
          };
          category config {
              logfile;
          };
          category notify {
              logfile;
          };
      };

      options {
          listen-on port 53 { 127.0.0.1; };
          listen-on-v6 port 53 { ::1; };
          recursion no;
          directory "/config/namedb";
          allow-transfer {
              localhost;
          };
          forwarders {};
      };

      view "external" {
          match-clients { "any"; };
      };
  • The LTM will process virtual servers (much like a firewall) from *most* specific to *least* specific. It's possible that this is complicating things, especially if either of these is a network virtual server.


    I'd also suggest setting up a 0.0.0.0:53 forwarder - that will let you segregate out the DNS traffic and treat it in specific ways, like sending it on to a specific server or pool...
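
    As a sketch, something like this (v9 syntax; the pool name is a placeholder):

        virtual vs_dns_53 {
           destination any:53
           ip protocol udp
           profile fastL4
           pool pool_dns
        }

    Since the LTM matches the most specific listener first, everything on port 53 lands here instead of the any:any forwarder, and you can watch its stats and traces in isolation.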


    HTH,


    -Matt
  • I removed the two DNS VIPs, but the behavior did not change. I did set up a VIP forwarding to the DNS server in a pool yesterday - that's how I obtained the trace information I referenced. So I don't think setting up a forwarder will give me any more information than I've already got. But thanks for helping me think through some things.