dns sec
The BIG-IP GTM: Configuring DNSSEC
This is the fourth in a series of DNS articles that I'm writing. The first three are: Let's Talk DNS on DevCentral; DNS The F5 Way: A Paradigm Shift; DNS Express and Zone Transfers.

The Domain Name System (DNS) is a key component of the Internet's critical infrastructure. So, if the foundation of the Internet itself relies on DNS, then let's take a quick look at how stable this foundation really is. After all, DNS was born in the early 1980s...back when REO Speedwagon and Air Supply were cranking out hits on the radio. The question is...does the DNS of the 1980s have any issues we need to worry about today? Well, as it turns out, DNS was not initially built with security in mind. When a user types a web address in his browser, he expects to be reliably directed to that website. Unfortunately, that doesn't always happen. One common example is seen when an attacker disrupts the user's request and redirects him to a malicious site. Several DNS vulnerabilities like this have led the way to an interest in DNS Security Extensions (DNSSEC) to secure this critical part of our Internet infrastructure.

What is DNSSEC?

DNSSEC is a suite of extensions that add security to the DNS protocol by enabling responses to be validated. With DNSSEC, the DNS protocol is much less susceptible to certain types of attacks (like the DNS spoofing situation described above). DNSSEC uses digital signatures to validate DNS responses so that the end user can be assured he is visiting the correct website. Based on the design features of DNSSEC, it is most effective when deployed at each step in the DNS lookup process...from root zone to the final domain name. If you leave any of the steps unsigned, it creates weakness in the process and you won't be able to trust the entire chain. Keep in mind that DNSSEC doesn't encrypt the data; it just signs it to attest to the validity of the response.

When a user requests a site, DNS kicks into gear by translating the domain name into an IP address. It does this through a series of recursive lookups that form a "chain" of requests. The picture below shows an example of a user requesting f5.com and the DNS system chaining together requests in order to match the domain name to the IP address so that he can access the website. This is all well and good, but the issue that creates the need for DNSSEC is that each stop in this chain inherently trusts the other parts of the chain...no questions asked. So, what if an attacker could somehow manipulate one of the servers (or even the traffic on the wire) and send the wrong IP address back to the user? The attacker could redirect the user to a website where malware is waiting to scan the unsuspecting user's computer. The picture below shows the same chain of requests, but this time an attacker has manipulated the last response so that the incorrect IP address is returned to the user. Not good.

DNSSEC addresses this problem by validating the response of each part of the chain with digital signatures. These signatures help build a "chain of trust" that DNS can rely on when answering requests. To form the chain of trust, DNSSEC starts with a "trust anchor," and everything below that trust anchor is trusted. Ideally, the trust anchor is the root zone. Fortunately for all of us, ICANN published the root zone trust anchor, and root operators began serving the signed root zone in July 2010. With the root zone signed, all other zones below it can also be signed, thus forming a solid and complete chain of trust.
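If you want to see this chain of trust for yourself, the standard dig utility makes it visible. A minimal sketch: any signed zone will do, www.f5.com simply follows the article's example, and 8.8.8.8 is just one public validating resolver.

```sh
# Walk the delegation chain from the root down; with +dnssec, the DS and
# RRSIG records that link each zone to its parent are shown at every step.
dig +trace +dnssec www.f5.com A

# Ask a validating resolver for a signed answer. An RRSIG in the answer
# section plus the "ad" (authenticated data) flag in the header means the
# response was signed and validated successfully.
dig @8.8.8.8 +dnssec www.isc.org A
```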
In fact, ICANN also lists the Top Level Domains (TLDs) that are currently signed and have trust anchors published as DS records in the root zone (most of the TLDs are signed). The following picture (taken from iiw.idcommons.net) shows the process for building the chain of trust from the root zone.

DNSSEC uses two kinds of keys: Key Signing Keys and Zone Signing Keys. The Key Signing Key is used to sign other keys in order to build the chain of trust. This key is sometimes cryptographically stronger and has a longer lifespan than a Zone Signing Key. The Zone Signing Key is used to sign the data that is published in a zone. DNSSEC uses the Key Signing Keys and Zone Signing Keys to sign and verify records within DNS.

BIG-IP Configuration

The BIG-IP Global Traffic Manager (GTM) will not only respond to DNS requests, but it will also sign DNSSEC-validated responses. But before you can configure the GTM to handle nameserver responses that are DNSSEC-compliant, you have to create DNSSEC keys and zones. The first step is to create the Zone Signing Key(s) and the Key Signing Key(s). The Zone Signing Key specifies the keys that the system uses to sign requests to a zone. The BIG-IP responds to DNSSEC requests to a specific zone by returning signed nameserver responses based on the currently available generations of a key. The Key Signing Key works the same as the Zone Signing Key except that it applies to keys instead of zones.

To create these keys, navigate to Global Traffic >> DNSSEC Key List and create a new key. Note: this menu looks slightly different starting in version 11.5 (it's listed under "DNS" instead of "Global Traffic"), but the key creation is still the same. On this page, you can create a Zone Signing Key and a Key Signing Key, and you can also specify several other settings like HSM use, algorithm selection, and key management action. Note that you can let the BIG-IP automatically manage your key actions or you can choose to do it manually.

Configuration Settings

The Bit Width for the key can be either 1024, 2048, or 4096 bits. The default is 1024.

The TTL value specifies the length of time the BIG-IP stores the key in cache. A key can be cached between 0 and 4294967295 seconds (by the way, 4294967295 seconds is a little more than 136 years!). The default value is 86400 seconds (one day). This value must be less than the difference between the values of the rollover period and expiration period (referred to as the "overlap period"). Setting this value to 0 seconds indicates that client resolvers do not cache the key.

The Rollover Period specifies the interval after which the BIG-IP creates a new generation of an existing key. The valid range of values for the Rollover Period is from 0 to 4294967295 seconds. The default is 0 seconds, which means the key does not roll over. The value of the rollover period must be greater than or equal to one third of the value of the expiration period and less than the value of the expiration period.

The Expiration Period specifies the interval after which the BIG-IP deletes an existing key. The valid range of values is from 0 to 4294967295 seconds. The default is 0 seconds, which means the key does not expire. The value of the expiration period must be more than the value of the rollover period. Also, the overlap period must be more than the value of the TTL. FYI...the National Institute of Standards and Technology (NIST) recommends that a Zone Signing Key expire every 30-90 days, and that a Key Signing Key expire once a year.
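For readers who want to see the KSK/ZSK split outside of the BIG-IP GUI, here is a rough equivalent using BIND's standard tooling. This is an illustration only, not the GTM workflow: the zone name and bit widths are arbitrary, and the BIG-IP generates and rolls its own keys when you use the screens described above.

```sh
# Key Signing Key: stronger key that signs the DNSKEY RRset (chain of trust)
dnssec-keygen -a RSASHA256 -b 2048 -f KSK example.com

# Zone Signing Key: shorter-lived key that signs the zone data itself
dnssec-keygen -a RSASHA256 -b 1024 example.com

# Sign the zone using the key files generated above (smart signing picks
# up the K*.key/K*.private files in the current directory)
dnssec-signzone -S -o example.com db.example.com
```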
The Signature Validity Period specifies the interval after which the BIG-IP no longer uses the expired signature. The valid range of values is from 0 to 4294967295 seconds. The default is 7 days. This value must be greater than the value of the signature publication period. If you set this value to 0, the server verifying the signature never succeeds because the signature is always expired, so don't set it to 0!

The Signature Publication Period specifies the interval after which the BIG-IP creates a new signature. The valid range of values is from 0 to 4294967295 seconds. The default value is 4 days, 16 hours (two-thirds of a week). This value must be less than the value of the signature validity period. If you set this value to 0, the system does not cache the signature.

TTL Values, Key Values, and Overlaps

The following diagram shows an example of key generation timelines with rollover periods and overlaps. This diagram is useful when reviewing the configuration settings and values discussed in the section above. Notice that the expiration period must be greater than the rollover period, and the TTL must be less than the overlap period. You wouldn't want a key to expire before it rolls over; likewise, you wouldn't want a TTL period to outlast the overlap period...if it did, the key could still be valid after the expiration period.

After you create and configure the Zone Signing Key(s) and Key Signing Key(s), the next step is to create a DNSSEC zone. A DNSSEC zone maps a domain name to a set of DNSSEC keys. In the BIG-IP, you create DNSSEC zones by navigating to Global Traffic >> DNSSEC Zone List and creating a new zone. On this page, you can name the zone, configure state settings, assign algorithms, and activate zone keys. The hash algorithm options are SHA-1 (default) or SHA-256. The Zone Signing Key box specifies the zone keys that the BIG-IP uses to sign requests to a zone. The Key Signing Key works similarly to the Zone Signing Key, except it is used to sign keys instead of requests. The following screenshot shows the options available for creating DNSSEC zones.

To fully secure a zone, the parent zone needs to have copies of the child's public key. The parent zone then signs the child's public key with its own key and sends it up to its parent...this pattern is followed all the way to the root zone. Once you have created the DNSSEC keys and zones, you can submit the Delegation Signer (DS) record to the administrators of your parent zone. They will sign the DS record with their own key and upload it to their zone. You can find the DS record for your zone here: /config/gtm/dsset-dnssec.zone.name

There's a lot to DNSSEC, and this article wasn't written to capture it all, but I hope it sheds a little light on what DNSSEC is and how you can create zones and keys on your BIG-IP. Stay tuned for more BIG-IP GTM articles in the coming days, weeks, and months. Until then, keep those DNS requests flowing, and make sure they are valid with DNSSEC! One last thing...did you know that F5 has an awesome Reference Architecture dedicated to Intelligent DNS Scale? The F5 Intelligent DNS Scale solution ensures that you can access your critical web, application, and database services whenever you need them...check it out!
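To illustrate the DS-record hand-off described above: the dsset file the BIG-IP writes contains standard DS records, and you can produce or verify the same data with common DNS tools. A sketch, assuming a KSK file named the way BIND would name it (the key tag 12345 is made up) and a zone called example.com:

```sh
# Derive the SHA-256 DS record from a published KSK file
dnssec-dsfromkey -2 Kexample.com.+008+12345.key

# After the parent has uploaded it, confirm what the parent zone publishes
dig +short DS example.com
```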
F5 Friday: The 2048-bit Keys to the Kingdom

There's a rarely mentioned move from 1024-bit to 2048-bit key lengths in the security demesne … are you ready? More importantly, are your infrastructure and applications ready?

Everyone has likely read about DNSSEC and the exciting day on which the root servers were signed. In response to security concerns – and very valid ones at that – around the veracity of responses returned by DNS, which underpins the entire Internet, the practice of signing responses was introduced. Everyone who had anything to do with encryption and certificates said something about the initiative. But less mentioned was a move to leverage longer RSA key lengths as a means to increase the security of the encryption of data, a la SSL (Secure Socket Layer). While there have been a few stories on SSL vulnerabilities – Dan Kaminsky illustrated flaws in the system at Black Hat last year – there's been very little public discussion about the transition in key sizes across the industry.

The last time we had such a massive move in the cryptography space was back when we moved from 128-bit to 256-bit keys. Some folks may remember that many early adopters of the Internet had issues with browser support back then, and the performance and capacity of infrastructure were very negatively impacted. Well, that's about to happen again as we move from 1024-bit keys to 2048-bit keys – and the recommended transition deadline is fast approaching. In fact, NIST is recommending the transition by January 1st, 2011, and several key providers of certificates are already restricting the issuance of certificates to 2048-bit keys.

NIST: Recommends transition to 2048-bit key lengths by Jan 1st, 2011 (Special Publication 800-57 Part 1, Table 4).
VeriSign: Started focusing on 2048-bit keys in 2006; complete transition by October 2010. Indicates their transition is to comply with best practices as recommended by NIST.
GeoTrust: Clearly indicates why it transitioned to only 2048-bit keys in June 2010.
Entrust: Also following NIST recommendations: TN 7710 - Entrust is moving to 2048-bit RSA keys.
GoDaddy: "We enforced a new policy where all newly issued and renewed certificates must be 2048-bit." Extended Validation (EV) required 2048-bit keys on 1/1/09.

Note that it isn't just providers who are making this move. Microsoft uses and recommends 2048-bit keys per the NIST guidelines for all servers and other products. Red Hat recommends 2048+ length for keys using the RSA algorithm. And as of December 31, 2013, Mozilla will disable or remove all root certificates with RSA key sizes smaller than 2048 bits. That means sites that have not made the move as of that date will find it difficult for customers and visitors to hook up, as it were.

THE IMPACT ON YOU

The impact on organizations that take advantage of encryption and decryption to secure web sites, sign code, and authenticate access is primarily in performance and capacity. The decrease in performance as key sizes increase is not linear, but more on the lines of exponential. For example, though the key size is shifting by a factor of two, F5 internal testing indicates that such a shift results in approximately a 5x reduction in performance (as measured by TPS – Transactions Per Second). This reduction in performance has also been seen by others in the space, as indicated by a recent Citrix announcement of a 5x increase in performance of its cryptographic processing. This decrease in TPS is due primarily to heavy use of the key during the handshaking process.
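You can get a feel for that non-linear drop on your own hardware with OpenSSL's built-in benchmark; the private-key "sign" operations are the ones a server performs during the SSL handshake. Exact ratios vary by CPU, so treat the 5x figure above as a rule of thumb rather than a constant.

```sh
# Compare raw RSA sign/verify throughput for 1024- vs 2048-bit keys
openssl speed rsa1024 rsa2048
```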
The impact on you is heavily dependent on how much of your infrastructure leverages SSL. For some organizations – those that require SSL end-to-end – the impact will be much higher. Any infrastructure component that terminates SSL and re-encrypts the data as a means to provide inline functionality (think IDS, load balancer, web application firewall, anti-virus scan) will need to also support 2048-bit keys, and if new certificates are necessary these, too, will need to be deployed throughout the infrastructure. Any organization with additional security/encryption requirements over and above simply SSL encryption, such as FIPS 140-2 or higher, is looking at new/additional hardware to support the migration. (Note: There are architectural solutions to avoid the type of forklift upgrade necessary; we'll get to that shortly.)

If your infrastructure is currently supporting SSL encryption/decryption on your web/application servers, you'll certainly want to start investigating the impact on capacity and performance now. SSL with 1024-bit keys typically requires about 30% of a server's resources (RAM, CPU), and the increase to 2048-bit keys will require more, which necessarily comes from the resources used by the application. That means a decrease in capacity of applications running on servers on which SSL is terminated, and typically a degradation in performance. In general, the decrease we (and others) have seen in TPS performance on hardware should give you a good idea of what to expect on software or virtual network appliances. As a general rule, you should determine what level of SSL transactions you are currently licensed for and divide that number by five to determine whether you can maintain the capacity you have today after a migration to 2048-bit keys. It may not be a pretty picture.

ADVANTAGES OF SSL OFFLOAD

If the advantages of offloading SSL to an external infrastructure component were significant before, the move from 1024-bit keys to 2048-bit keys makes them nearly indispensable to maintaining the performance and capacity of existing applications and infrastructure. Offloading SSL to an external infrastructure component enabled with specialized hardware further improves the capacity and performance of these mathematically complex and compute-intensive processes.

ARCHITECTURAL SOLUTION TO SUPPORT 1024-BIT KEY ONLY APPLICATIONS

If you were thinking about leveraging a virtual network appliance for this purpose, you might want to think about that one again. Early testing of RSA operations using 2048-bit keys on 64-bit commodity hardware shows a capacity in the hundreds of transactions per second. Not tens of thousands, not even thousands, but hundreds. Even if the only use of SSL in your organization is to provide secure web-based access to e-mail, a la Microsoft Web Outlook, this is likely unacceptable. Remember there is rarely a 1:1 relationship between connections and web applications today, and each connection requires the use of those SSL operations, which can drastically impact capacity in terms of user concurrency. Perhaps as important is the ability to architect around limitations imposed by applications on the security infrastructure. For example, many legacy applications (Lotus Notes, IIS 5.0) do not support 2048-bit keys. Thus meeting the recommendation to migrate to 2048-bit keys is all but impossible for this class of application.
Leveraging the capabilities of an application delivery controller that can support 2048-bit keys, however, allows for the continued support of 1024-bit keys to the application while supporting 2048-bit keys to the client.

ARE YOU READY?

That's a question only you can answer, and you can only answer that by taking a good look at your infrastructure and applications. Now is a good time to evaluate your SSL strategy to ensure it's up to the challenge of 2048-bit keys. Check your licenses, determine your current capacity and requirements, and compare those to what can be realistically expected once the migration is complete. Validate that applications currently requiring 1024-bit keys can support 2048-bit keys or whether such a migration is contraindicated by the application, and investigate whether a proxy-based (mediation) solution might be appropriate. And don't forget to determine whether or not compliance with regulations may require new hardware solutions.

Now this is an F5 Friday post, so you knew there had to be some tie-in, right? Other than the fact that the red glowing ball on every BIG-IP just looks hawesome in the dim light of a data center, F5 solutions can mitigate many potential negative impacts resulting from a migration of 1024-bit to 2048-bit key lengths:

BIG-IP Specialized Hardware: BIG-IP hardware platforms include specialized RSA acceleration hardware that improves the performance of the RSA operations necessary to support encryption/decryption and SSL communication and enables higher capacities of the same.

EM (Enterprise Manager) Streamlines Certificate Management: F5's centralized management solution, EM (Enterprise Manager), allows an organization to better manage a cryptographic infrastructure by providing the means to monitor and manage key expirations across all F5 solutions and collect TPS history and usage when sizing to better understand capacity constraints.

BIG-IP Flexibility: BIG-IP is a full proxy-based solution. It can mediate between clients and applications that have disparate requirements, such as may be the case with key sizes. This allows you to use 2048-bit keys but retain the use of 1024-bit keys to web/application servers and other infrastructure solutions. Strong partnerships and integration with leading centralized key management and crypto vendors provide automated key migration and provisioning through open and standards-based APIs and robust scripting capabilities.

DNSSEC: Enhance security through DNSSEC to validate domain names. Although it has been suggested that 1024-bit keys might be sufficient for signing zones, with the forced migration to 2048-bit keys there will be increased pressure on the DNS infrastructure that may require a new solution for your DNS systems.

THIS IS IN MANY REGARDS INFOSEC'S "Y2K"

In many ways a change of this magnitude is, for Information Security professionals, their "Y2K", because such a migration will have an impact on nearly every component and application in the data center. Unfortunately for the security folks, we had a lot more time to prepare for Y2K...so get started, go through the checklist, and get yourself ready to make the switch now before the eleventh hour is upon us.
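As part of that readiness checklist, it helps to audit what your sites present today and to stage the new key material. A minimal sketch with OpenSSL; hostnames and file names below are placeholders.

```sh
# What key size is a given site presenting right now?
openssl s_client -connect www.example.com:443 -servername www.example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep 'Public-Key'

# Generate a 2048-bit key and CSR when it is time to re-key
openssl genrsa -out www.example.com.key 2048
openssl req -new -key www.example.com.key -out www.example.com.csr
```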
Related blogs & articles:
The Anatomy of an SSL Handshake [Network Computing]
DNSSEC Readiness [ISC.org]
Get Ready for the Impact of 2048-bit RSA Keys [Network Computing]
SSL handshake latency and HTTPS optimizations [semicomplete.com]
Pete Silva Demonstrates the FirePass SSL-VPN
Data Center Feng Shui: SSL
WILS: SSL TPS versus HTTP TPS over SSL
SSL performance - DevCentral - F5 DevCentral > Community > Group ...
DevCentral Weekly Roundup | Audio Podcast - SSL
iControl Apps - #12 - Global SSL Statistics > DevCentral > F5 ...
Oracle 10g SSL Offload - JInitiator:X509CertChainInvalidErr error ...
Requiring an SSL Certificate for Parts of an Application ...
The Order of (Network) Operations
Carrier Grade DNS: Not your Parents DNS

The Domain Name System (DNS) is one of the overlooked systems in the deployment of 4G and next-generation all-IP networks. The focus tends to be on revenue-generating applications that provide ROI for these major investments. For these to be successful, CSPs first have to be able to deploy these networks and provide a high quality of experience in order to be sure that these services are truly revenue generating. However, most CSPs have overlooked some of the basic IP functions in order to provide these revenue-generating applications. The building blocks for these applications are a quality, efficient, scalable, and feature-rich IP architecture. One of the key items required for this IP architecture is Carrier Grade DNS.

DNS has been a long-standing requirement for Internet services for CSPs. However, with these all-IP networks, DNS is being used for new capabilities along with supporting increases in data traffic for standard content and Internet services. For years, CSPs have deployed inexpensive, basic DNS systems on their networks. This was done to provide basic DNS services and to minimize cost. However, with these developing networks, basic DNS deployments will not support the requirements of the future. DNS services are starting to be used for new and unique capabilities, which include managing traffic both on the internal network and for external content located on the Internet. Along with this new functionality, DNS is also required to provide security for DNS transactions and the ability to mitigate DNS attacks, along with providing authoritative DNS zone management, resolution, and non-authoritative support, such as caching.

The significant challenge for communication service providers is to provide these DNS capabilities while still maintaining a manageable Capex and Opex. This challenge can only be met by deploying a carrier grade DNS solution. The carrier grade DNS solution comprises all the basic capabilities of DNS, along with a logical scaling capability, security for DNS transactions, and the ability to intelligently manage authoritative zones.

Historically, traditional DNS solutions have addressed scaling by simply adding more hardware. This method is a Capex nightmare. With the increases in data and data demands, these problems with DNS scaling will grow exponentially. The only solution to this problem is to deploy an intelligent DNS system that allows the communication service provider to manage how DNS queries and authoritative responses are handled and delivered to subscribers.

Since DNS is key to identifying the location of web content, it is vulnerable to both DNS hijacking attacks and denial of service (DoS) or distributed denial of service (DDoS) attacks. To prevent DNS hijacking attacks, carrier grade DNS solutions must incorporate DNSSEC. By incorporating DNSSEC, responses to subscribers carry a guarantee of the identity of the answering authoritative DNS server. DoS/DDoS attacks cannot be prevented; the only strategy that can be taken against DoS/DDoS is to mitigate the impact of these attacks. The best way to mitigate the impact of DoS/DDoS attacks is through a distributed carrier grade DNS architecture. By using such technologies as Global Server Load Balancing (GSLB) and IP Anycast, a distributed carrier grade DNS architecture can isolate and limit the impacts of DoS/DDoS attacks.
GSLB allows the communication service provider to manage how DNS requests are answered based upon the location of the content and the requester. IP Anycast allows multiple systems to share the same IP address, thereby distributing queries across a number of answering systems. By using these distributed systems, DoS/DDoS attacks can be isolated and the number of systems impacted minimized.

As we have seen over the past year, data use on CSP networks is going to continue to increase. To provide a successful ARPU model, a Carrier Grade DNS that provides high availability, economical scalability, subscriber security, and high performance is essential. With all of the many challenges in a CSP network, basic IP infrastructure can be overlooked. An intelligent management system for these essential IP systems is the first step in reducing an ever-expanding Capex and providing a high quality of experience for your subscribers.

Related Articles:
DNS is Like Your Mom
F5 Friday: No DNS? No … Anything.
Audio White Paper - High-Performance DNS Services in BIG-IP ...
DevCentral Weekly Roundup | Audio Podcast - DNS
F5 Friday: When the Solution to a Vulnerability is Vulnerable You ...
F5 News - DNS
DNS Monitor Using Dig - DevCentral Wiki
The End of DNS As We Know It
F5 Video: DNS Express—DNS Die Another Day
Ray Vinson – DNS
The DNS of Things

Hey DNS - Find Me that Thing! There's a new craze occurring in homes, highways, workplaces and everywhere imaginable - the Internet of Things or, as I like to call it, The Internet of Nouns. Sensors, thermostats, kitchen appliances, toilets and almost every person, place or thing will have a chip capable of connecting to the internet. And if you want to identify and find those things with recognizable words instead of a 128-bit IP address, you're going to need DNS.

DNS translates the names we type into a browser or mobile app into an IP address so the services can be found on the internet. It is one of the most important components of the internet, especially for human interaction. With the explosion of mobile devices and the millions of apps deployed to support those devices, DNS growth has doubled in recent years. It is also a vulnerable target.

While the ability to adjust the temperature of your house or remotely flush your toilet from around the globe is cool, I think one of the biggest challenges of the Internet of Nouns will be the strain on DNS. Not only having to resolve the millions of additional 'things' getting connected, but also the potential vulnerabilities and risks introduced when your washing machine connects to the internet to find the optimal temperature and detergent mix to remove those grass, wine and blood stains.

Recent research suggests that the bad guys are already taking advantage of these easy targets. Ars Technica reports that the malware that has been targeting routers has now spread to DVRs. Not my precious digital video recorder!! Last week, SANS found a Bitcoin mining trojan that can infect security camera DVRs. As they were watching a script that hunted the internet for data storage devices, they learned that the bot was coming from a DVR. Most likely, they say, it was compromised through the telnet defaults.

In another report, ESET said it found 11-year-old malware that had been updated with the ability to compromise a residential broadband router's DNS settings. The malware finds a vulnerable router and changes the default DNS entries to either send the person to a rogue site to install more malware (join the bot, why don't ya) or to just redirect them to annoying sites. Imagine if the 50+ connected things we will soon have in our homes also joined the bot? Forget about needing compute and bandwidth from machines around the globe, you can zero in on a neighborhood to launch an attack.

Nominum research shows that DNS-based DDoS amplification attacks have significantly increased in recent months, targeting vulnerable home routers all over. A simple attack can create tens of gigs of traffic to disrupt networks, businesses, websites, and regular folks anywhere in the world. More than 24 million home routers on the Internet have open DNS proxies which expose ISPs to DNS-based DDoS attacks, and in February 2014 alone, more than 5.3 million of these routers were used to generate attack traffic. These are especially hard to track since it is difficult to determine both the origination and target of the attack.

Lastly, Ultra Electronics AEP says 47% of the internet remains insecure since many top level domains (TLDs) have failed to sign up to use domain name system security extensions (DNSSEC). These include heavy internet-using countries like Italy (.it), Spain (.es) and South Africa (.za), leaving millions of internetizens open to malicious redirects to fake websites.
Unless the top level domain is signed, every single website operating under a national domain can have its DNS spoofed, and that's bad for the good guys.

We often don't think about the Wizard behind the curtain until we are unable to resolve an internet resource. DNS will become even more critical as additional nouns are connected and we want to find them by name. F5 DNS Solutions can help you manage this rapid growth with complete solutions that increase the speed, availability, scalability, and security of your DNS infrastructure. And I do imagine a time when our current commands could also work on, for instance, the connected toilet: /flushdns. Just couldn't let that one go.

ps

Related:
"Internet of Things" is the new Windows XP—malware's favorite target
Win32/Sality newest component: a router's primary DNS changer named Win32/RBrute
24 million home routers expose ISPs to massive DNS-based DDoS attacks
24 million reasons to lock down DNS amplification attacks
Half the internet lacks DNS security extensions
F5 Intelligent DNS Scale

Technorati Tags: f5, dns, dnssec, ddos, security, iot, things, big-ip, malware, silva, trojan
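Related to the TLD-signing point in this post: whether a given TLD participates in the chain of trust is easy to check, since a signed TLD has a DS record published in the root zone. Signing status changes over time, so the 2014 figures above may not match what you see today.

```sh
# A non-empty answer means the TLD is signed and chains back to the root
dig +short DS com.

# An empty answer (no DS in the root) means zones under this TLD cannot
# be validated end to end, no matter what their operators do
dig +short DS it.
```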
DNSSEC Configuration issue

Hi Team, I am trying to test DNSSEC on a trial version before rolling it out on a production appliance. I have configured the Key Signing Key and Zone Signing Key and mapped them to the DNSSEC zone. However, for some reason the DNSSEC zone is offline with the error message: 'Offline (Enabled) - must contain at least one enabled KSK and enabled ZSK'. I have verified that the KSK and ZSK are both in the enabled state. Any pointers on why this could be happening? Best Regards, Shridhar Acharya
Validating resolver and trust anchors

Hi, I am trying to configure my F5 as a validating resolver. I am running 14.0 with a lab license, so DNS is licensed. I am able to successfully resolve when using a transparent cache and a pool of DNS servers. I am able to successfully resolve when using a resolver cache. However, when trying to configure a validating resolver cache I am lost. If I am using a pool of DNS servers which includes 8.8.8.8, what trust anchor should I configure? Also, what is the difference between a trust anchor and a DLV anchor? Do I need both? I have attempted to use the root trust anchors but I have no idea if that is correct either. The root trust anchors I used:

. IN DS 19036 8 2 49AAC11D7B6F6446702E54A1607371607A1A41855200FD2CE1CDDE32F24E8FB5
. IN DS 20326 8 2 E06D44B80B8F1D39A95C0B0D7C65D08458E880409BBC683457104237C7F8EC8D
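Not an answer to the trust-anchor question itself, but a generic way to check whether any resolver is actually validating, independent of how the cache is configured. The address 192.0.2.53 is a placeholder for the BIG-IP listener; dnssec-failed.org is a public test zone with deliberately broken signatures.

```sh
# A validating resolver sets the "ad" flag on answers from signed zones
dig @192.0.2.53 +dnssec www.isc.org A

# ...and returns SERVFAIL for a zone whose signatures fail validation
dig @192.0.2.53 www.dnssec-failed.org A
```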
DNS Reimagined keeps your Business Online

Whether your services are deployed in a data center or migrated to a hybrid cloud environment, DNS knows where to send your users to their cool new app or favorite social media experience. Invariably, DNS points the way to a connected session. Yes, it's at this IP address. Or, it's not available here but accessible at that address, all in milliseconds. So with all kinds of new mobile devices, apps, and services, DNS requests (.com/.net) have climbed 100% over the last 5 years. In 2014, there are 10 billion devices alone, with 77 billion mobile apps downloaded. It's estimated that there will be 50 to 75 billion devices by 2020. Along with all the other products already connected or coming, the Internet of Things sending DNS requests is about to become much larger.

In addition, we don't like to wait. If a mobile user has to wait more than 10 seconds, they'll leave for another experience and revenue potential is lost. At the same time, as more online experiences move to the hybrid cloud, these virtual services are in a myriad of locations, all requiring DNS to inform the client where the data is in order for the user to have a connected session. More locations means more latency, and DNS needs to deliver fast responses in order to keep users engaged. Since DNS is essentially the yellow pages of the internet, resolving queries, it's critical to have a highly scalable, high-performance, and secure DNS infrastructure in order to deliver an optimized user experience.

F5 reimagines the traditional DNS delivery infrastructure as a paradigm shift to the edge of the network, moving DNS services and app routing closest to the client based on geolocation. BIG-IP Global Traffic Manager (GTM) with DNS delivery services, including Authoritative DNS, delivers very high performance responding to DNS queries on behalf of the DNS master server. In addition, much lower latency comes with Caching and Resolving services at the edge for internal users or subscribers, and increased DNS security services for DDoS mitigation and DNSSEC all come on an ICSA-certified network firewall solution. Finally, implementing DNSSEC signing protects against man-in-the-middle and cache poisoning attacks that would redirect the user to a malicious session.

The results are easy integration into existing DNS infrastructure and performance capabilities well into the tens of millions of DNS query responses per second on higher-end solutions, keeping your web sites and apps available during all kinds of scenarios. Users will see a much faster and more secure query response, allowing for a far better user experience and greater potential for business growth. Now your DNS infrastructure is ready to scale exponentially and securely to meet the growing demand, and you have global app routing for optimized and highly available sessions. Automatically, your users' DNS request for that customized online experience materializes, or that new business analytics site located closer to them quickly loads. Malformed or invalid DNS queries are dropped to mitigate attacks. Even with large increases in DNS requests, responses are sent automatically at very high performance to keep online sessions available. These and many more options for DNS services and customization support app and service uptime, keeping your business alive.
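A rough way to see the latency benefit of caching and resolving close to the client is simply to time repeat queries against the resolver; the second answer should come straight from cache. The resolver address below is a placeholder.

```sh
# Cold query: resolved or fetched upstream
dig @192.0.2.53 www.f5.com A | grep 'Query time'

# Repeat query: answered from cache, typically in a millisecond or two
dig @192.0.2.53 www.f5.com A | grep 'Query time'
```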
More recently, you might have been informed that you need to move a portion of your apps and services to a hybrid cloud environment to meet cloud computing, disaster recovery, and cost reduction goals. As you ponder how to deliver a similar experience to users, consider what type of DNS and global app routing you need to replicate in various virtual and cloud environments in order to accomplish your new objective. To learn more about how DNS supports the internet and how F5 supports cloud-hosted services, read the F5 Synthesis: DNS Shrugged article to help you accomplish your new mission.

For more information on Intelligent DNS and Global App Management:
Intelligent DNS Services
Scalable, Secure DNS and Global App Services
Lightboard Lessons: DNSSEC

DNS is absolutely critical to your life on the Internet. But did you know that DNS was designed back in the 1980s and didn't really consider security as a key component? DNSSEC was developed to help with that problem. In this edition of Lightboard Lessons, I discuss the basics of DNSSEC and talk about how the BIG-IP can help protect your critical DNS infrastructure.

Related Resources:
Configuring DNSSEC on the BIG-IP
A Living Architecture

You often hear people say, 'oh, this is a living document,' to indicate that the information is continually updated or edited to reflect changes that may occur during the life of the document. Your infrastructure is also living and dynamic. You make changes, updates or upgrades to address the ever-changing requirements of your employees, web visitors, customers, partners, networks, applications and anything else tied to your systems. This is also true for F5's Reference Architectures. They too are living architectures.

F5's Reference Architectures are the proof points or customer scenarios that drive Synthesis to your data center and beyond. When we initially built out these RAs, we knew that they'd be continuously updated to not only reflect new BIG-IP functionality but also show new solutions to the changing challenges IT faces daily. We've recently updated the Intelligent DNS Scale Reference Architecture to include more security (DNSSEC) and to address the highly hybrid nature of enterprise infrastructures with Distributed DNS.

F5's end-to-end Intelligent DNS Scale reference architecture enables organizations to build a strong DNS foundation that maximizes the use of resources and increases service management, while remaining agile enough to support both existing and future network architectures, devices, and applications. It also provides a more intelligent way to respond and scale to DNS queries, and takes into account a variety of network conditions and situations to distribute user application requests and application services based on business policies, data center conditions, network conditions, and application performance. It ensures that your customers—and your employees—can access your critical web, application, and database services whenever they need them.

In this latest DNS RA rev, DNSSEC can protect your DNS infrastructure, including cloud deployments, from cache poisoning attacks and domain hijacks. With DNSSEC support, you can digitally sign your DNS query responses. This enables the resolver to determine the authenticity of the response, preventing DNS hijacking and cache poisoning.

Also included is Distributed DNS, meaning all the DNS solution goodness also applies to cloud deployments or infrastructures where DNS is distributed. Organizations can replicate their high-performance DNS infrastructure in almost any environment. Organizations may have Cloud DNS for disaster recovery/business continuity or even a Cloud DNS service with signed DNSSEC zones. F5 DNS Services' enhanced AXFR support offers zone transfers from BIG-IP to any DNS service, allowing organizations to replicate DNS in physical, virtual, and cloud environments. The replicated DNS data can be sent to other BIG-IPs or to general DNS servers in the data centers/clouds closest to the users.

In addition, organizations can send users to a site that will give them the best experience. F5 DNS Services uses a range of load balancing methods and intelligent monitoring for each specific app and user. Traffic is routed according to your business policies and current network and user conditions. F5 DNS Services includes an accurate, granular geolocation database, giving you control of traffic distribution based on user location.

DNS helps make the internet work, and we often do not think of it until we cannot connect to some resource. With the Internet of Nouns (or Things if you like) hot on our heels, I think Port 53 will continue to be a critically important piece of the internet puzzle.
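For the AXFR replication mentioned above, a quick way to confirm that a secondary (or a cloud DNS service) can actually pull the zone is a manual transfer with dig. The server address, zone, and TSIG key name/secret below are placeholders, and the source must be configured to allow transfers from the requesting client.

```sh
# Request a full zone transfer from the primary
dig @192.0.2.10 example.com AXFR

# If transfers are restricted to TSIG-keyed clients
dig @192.0.2.10 example.com AXFR -y hmac-sha256:xfer-key:BASE64SECRET==
```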
ps

Related:
Intelligent DNS Scale Resources
F5 Synthesis
DNS Reimagined keeps your Business Online
DNS Does the Job
The DNS of Things
DNS Doldrums
The Internet of Things and DNS

Technorati Tags: f5, big-ip, dns, reference architecture, dnssec, iot, things, name_resolution, silva, security, cloud, synthesis