DNS The F5 Way: A Paradigm Shift
This is the second in a series of DNS articles that I'm writing. The first is: Let's Talk DNS on DevCentral.

Internet users rely heavily on DNS, and when DNS breaks, applications break. It's extremely important to implement an architecture that provides DNS availability at all times, because the number of Internet users continues to grow. In fact, a recent study conducted by the International Telecommunication Union claims that mobile devices will outnumber the people living on this planet at some point this year (2014). I'm certainly contributing to those stats, as I have a smartphone and a tablet! In addition, the sophistication and complexity of websites keep increasing; many sites today require hundreds of DNS requests just to load a single page. So, when you combine the number of Internet users with the complexity of modern sites, you can imagine that the number of DNS requests traversing your network is extremely large. Verisign's average daily DNS query load during the fourth quarter of 2012 was 77 billion, with a peak of 123 billion. Wow...that's a lot of DNS requests...every day! The point is this: Internet use is growing, and the need for reliable DNS is more important than ever.

par·a·digm noun \ˈper-ə-ˌdīm\: a group of ideas about how something should be done, made, or thought about

Conventional DNS design goes something like this... Front-end (secondary) DNS servers are load balanced behind a firewall, and these servers answer all the DNS queries from the outside world. The master (primary) DNS server is located in the datacenter and is hidden from the outside world behind an internal firewall. This architecture was adequate for a smaller Internet, but in today's complex network world, this design has significant limitations. A typical DNS server can only handle up to 200,000 DNS queries per second. Using the conventional design, the only way to handle more requests is to add more servers.
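To put those figures in perspective, here's a quick back-of-envelope calculation (my own illustration, not an F5 sizing guide) of how many conventional servers a peak day like Verisign's would require:

```python
import math

# Figures quoted above: 123 billion queries on a peak day, and a
# typical DNS server handling ~200,000 queries/second.
PEAK_QUERIES_PER_DAY = 123_000_000_000
SECONDS_PER_DAY = 24 * 60 * 60
SERVER_CAPACITY_QPS = 200_000

def servers_needed(queries_per_day, capacity_qps, headroom=2.0):
    """Servers required to absorb the average rate of a peak day,
    with a headroom multiplier for intra-day bursts (an assumption)."""
    avg = queries_per_day / SECONDS_PER_DAY
    return math.ceil(avg * headroom / capacity_qps)

avg_qps = PEAK_QUERIES_PER_DAY / SECONDS_PER_DAY
print(f"Average rate on the peak day: {avg_qps:,.0f} queries/second")
print(f"Servers needed with 2x burst headroom: "
      f"{servers_needed(PEAK_QUERIES_PER_DAY, SERVER_CAPACITY_QPS)}")
```

Even before allowing for redundancy across sites, the conventional design lands you in double-digit server counts for a load like that.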
Let's say your organization is preparing for a major event (holiday shopping, for example) and you want to make sure all DNS requests are handled. You might be forced to purchase more DNS servers to handle the added load. These servers are expensive and take critical manpower to operate and maintain, so you can start to see the scalability and cost issues that add up with this design. From a security perspective, a conventional design often offers weak DDoS protection: it typically relies on the network firewall, and that firewall can become a huge traffic bottleneck. Check out the following diagram that shows a representation of a conventional DNS deployment.

It's time for a DNS architecture paradigm shift. Your organization requires it, and today's Internet demands it.

F5 Introduces A New Way...

The F5 Intelligent DNS Scale reference architecture is leaner, faster, and more secure than any conventional DNS architecture. Instead of adding more DNS servers to handle increased DNS request load, you can simply install the BIG-IP Global Traffic Manager (GTM) in your network's DMZ and allow it to handle all external requests. The following diagram shows the simplicity and effectiveness of the F5 design. Notice that the infrastructure footprint of this design is significantly smaller. This smaller footprint reduces costs associated with additional servers, manpower, HVAC, facility space, etc. I mentioned the external request benefit of the BIG-IP GTM...here's how it works. The BIG-IP GTM uses F5's purpose-built DNS Express zone transfer feature and clustered multiprocessing (CMP) for exponential performance of query responses. DNS Express manages authoritative DNS queries by transferring zones into its own RAM, which significantly improves query performance and response time.
With DNS Express zone transfer and the high-performance processing realized with CMP, the BIG-IP GTM can scale to more than 10 million DNS query responses per second, which means that even large surges of DNS requests (including malicious ones) are unlikely to disrupt your DNS infrastructure or affect the availability of your critical applications.

The BIG-IP GTM is much more than an authoritative DNS server, though. Here are some of the key features and capabilities included in the BIG-IP GTM:

- ICSA-certified network firewall -- you don't have to deploy DMZ firewalls any more...it IS your firewall!
- Monitors the health of app servers and intelligently routes traffic to the nearest data center using IP geolocation
- Protects against DNS DDoS attacks using the integrated firewall services, scaling capabilities, and IP address intelligence
- Lets you take advantage of cloud environments by flexibly deploying BIG-IP GTM Virtual Edition (VE)
- Supports DNSSEC with real-time signing and validates DNSSEC responses

As you can see, the BIG-IP GTM is a workhorse that has no rival in today's market. It's time to change the way we think about DNS architecture deployments. So, use the F5 Intelligent DNS Scale reference architecture to improve web performance by reducing DNS latency, protect web properties and brand reputation by mitigating DNS DDoS attacks, reduce data center costs by consolidating DNS infrastructure, and route customers to the best-performing components for optimal application and service delivery. Learn more about F5 Intelligent DNS Scale by visiting https://f5.com/solutions/architectures/intelligent-dns-scale
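Putting the article's numbers side by side, the consolidation argument comes down to simple arithmetic (my own illustration, using only the figures quoted above):

```python
# Figures quoted in the article (assumptions for this sketch): a
# conventional DNS server tops out around 200,000 queries/second,
# while a single BIG-IP GTM is cited at 10+ million responses/second.
CONVENTIONAL_QPS = 200_000
GTM_QPS = 10_000_000

ratio = GTM_QPS // CONVENTIONAL_QPS
print(f"One GTM covers the load of roughly {ratio} conventional DNS servers.")
```

That 50:1 ratio is where the smaller footprint -- and the savings in servers, manpower, HVAC, and facility space -- comes from.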
ScaleN monitoring with SNMP?

When using ScaleN to have virtual servers active across multiple LTM VEs, is there a way to determine via SNMP monitoring which LTM is active or standby for individual traffic groups? There do not appear to be MIB entries in 13.1 that identify this data. Did I miss it? Thanks,
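As a workaround while the MIB question is open, one option is to poll each unit over SSH (or iControl REST) and parse the output of `tmsh show cm traffic-group` yourself. The sketch below turns tabular output into a per-traffic-group status map; the sample text and column layout are hypothetical and vary by TMOS version, so treat this as a starting point rather than a drop-in monitor.

```python
def parse_traffic_group_status(tmsh_output):
    """Parse `tmsh show cm traffic-group` style output into
    {(traffic_group, device): status}. The column order
    (name, device, status) is an assumption."""
    status = {}
    for line in tmsh_output.splitlines():
        fields = line.split()
        if len(fields) >= 3 and fields[0].startswith("traffic-group"):
            name, device, state = fields[0], fields[1], fields[2]
            status[(name, device)] = state
    return status

# Hypothetical sample of the tabular output:
sample = """
CM::Traffic-Group
Name             Device        Status
traffic-group-1  ltm-a.example active
traffic-group-1  ltm-b.example standby
traffic-group-2  ltm-a.example standby
traffic-group-2  ltm-b.example active
"""
print(parse_traffic_group_status(sample))
```

A poller built on this could alert whenever the active device for a traffic group changes between runs.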
ORACLE AGILE PLM logging out issue

Dear team, we have Agile PLM set up for SSO and non-SSO users. SSO listens on port 80 and reaches the backend server on port 7777 (which is actually a file manager). For non-SSO, the listening port is 7001 and the backend server port is 7011. We have cookie persistence enabled with "Always send cookie" enabled, keeping the rest of the settings the same. Problem: we are able to log in to the environment, attach a file, and upload it. However, when the file is transferred or uploaded, we are sent back to the original screen, which is an application page. We are not sure where exactly this is happening and would highly appreciate it if anyone can help.
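For reference when comparing the two virtual servers, a cookie persistence profile with "Always Send Cookie" enabled looks roughly like this in tmsh (the profile and virtual server names here are made up; verify the attribute names against your TMOS version):

```shell
# Create a cookie persistence profile with Always Send Cookie enabled
# (profile name "agile_cookie" is hypothetical)
tmsh create ltm persistence cookie agile_cookie always-send enabled

# Attach it as the persistence profile on the non-SSO virtual server
# (virtual server name "vs_agile_7001" is hypothetical)
tmsh modify ltm virtual vs_agile_7001 persist replace-all-with { agile_cookie }
```

If the file-manager traffic on 7777 goes through a different virtual server than the login traffic, checking that both share the same persistence behavior is a reasonable first diagnostic step.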
Tertiary F5 at DR Facility: Only Come Active with Admin Intervention

I am turning up an LTM cluster in data center A, and we have OTV to make our address space available in the DR site. I need to bring up an LTM cluster in the DR facility, but I don't want it ever to go active unless an administrator forces it to become active. What is the best way to accomplish this HA behavior?
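One way to sketch this behavior in tmsh (device and traffic-group names are examples; confirm the syntax for your TMOS version):

```shell
# Option 1: keep the DR unit forced offline so it can never take over
# on its own; run on the DR device.
tmsh run sys failover offline

# ...and when an administrator decides to activate the DR site:
tmsh run sys failover online

# Option 2: list only the primary data center devices in the
# traffic-group's HA order and disable auto-failback, so the DR unit
# is never promoted automatically.
tmsh modify cm traffic-group traffic-group-1 \
    ha-order { bigip-a.dc1.example bigip-b.dc1.example } \
    auto-failback disabled
```

The forced-offline approach is the stricter of the two, since it guarantees the DR unit stays passive until someone explicitly releases it.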
TLS Poodle and RC4 vulnerability: default:!SSLv3:!RC4-SHA

We are running F5 LTM version 11.4.1 hotfix 4. Recently we disabled the weak RC4 cipher to remove a minimal-severity warning from our scan. But due to the recent arrival of the POODLE TLS vulnerability, we had to introduce !SSLv3:RC4-SHA, which brought back the minimal warning for having RC4 in the acceptable cipher list. How can we overcome this? Removing the POODLE TLS padding vulnerability returns the RC4 warning.
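One likely resolution (assuming the goal is to exclude both SSLv3 and RC4 at the same time) is a cipher string that negates both, rather than re-adding `RC4-SHA` as a positive entry. On the BIG-IP you can preview which ciphers a string yields before applying it to the client SSL profile; the profile name below is an example:

```shell
# Preview the cipher list a given string produces (run on the BIG-IP).
# Neither RC4 suites nor SSLv3-only suites should appear in the output.
tmm --clientciphers 'DEFAULT:!SSLv3:!RC4'

# Then apply the string to the client SSL profile:
tmsh modify ltm profile client-ssl my_clientssl ciphers 'DEFAULT:!SSLv3:!RC4'
```

The key difference from `!SSLv3:RC4-SHA` is that `RC4-SHA` without the leading `!` adds the suite back to the accepted list, which is what re-triggered the scanner warning.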
Beyond Scalability: Achieving Availability

Scalability is only one of the factors that determine availability. Security and performance play a critical role in achieving the application availability demanded by business and customers alike.

Whether the goal is to achieve higher levels of productivity or to generate greater customer engagement and revenue, the venue today is the same: applications. In any application-focused business strategy, availability must be the keystone. When the business at large is relying on applications to be available, any challenge that might lead to disruption must be accounted for and answered. Those challenges include an increasingly wide array of problems that cost organizations an enormous amount in lost productivity, missed opportunities, and damage to reputation.

Today's applications are no longer threatened simply by overwhelming demand. Additional pressures in the form of attacks and business requirements are forcing IT professionals to broaden their views on availability to include security and performance. For example, a Kaspersky study[1] found that "61 percent of DDoS victims temporarily lost access to critical business information." A rising class of attack known as "ransomware" has similarly poor outcomes, with the end result being a complete lack of availability for the targeted application.

Consumers have a somewhat different definition of "availability" than the one found in textbooks and scholarly articles. A 2012 EMA[2] study notes that "Eighty percent of Web users will abandon a site if performance is poor and 90% of them will hesitate to return to that site in the future," with poor performance designated as more than five seconds. The impact of poor performance, however, is the same as that of complete disruption: a loss of engagement and revenue.

The result is that availability through scalability alone is simply not good enough.
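A simple serial-availability model (my own illustration, with made-up figures, not from any study cited here) shows why no single factor suffices: if each factor independently determines whether the app is usable, user-perceived availability is roughly the product of the three, so the weakest link dominates.

```python
# Illustrative probabilities (assumptions, not measurements) that each
# factor leaves the application usable at any given moment.
factors = {
    "scalability (capacity to serve demand)": 0.9999,
    "security (no successful attack/outage)": 0.999,
    "performance (response under 5 seconds)": 0.99,
}

effective = 1.0
for name, availability in factors.items():
    effective *= availability

print(f"Effective availability: {effective:.4%}")
```

Even with four-nines scalability, a two-nines performance record drags perceived availability below 99 percent.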
Contributing factors like security and performance must be considered to ensure a comprehensive availability strategy that meets expectations and keeps the business available. Realizing this goal requires a tripartite set of services comprising scalability, security, and performance.

Scalability

Scalability is, and will likely remain, at the heart of availability. The need to scale applications and dependent services in response to demand is critical to maintaining business today. Scalability includes load balancing and failover capabilities, ensuring availability across the two primary failure domains: resource exhaustion and outright failure. Where load balancing enables the horizontal scale of applications, failover ensures continued access in the face of a software or hardware failure in the critical path. Both are equally important to ensuring availability and are generally coupled together. In the State of Application Delivery 2015, respondents told us the most important service -- the one they would not deploy an application without -- was load balancing. The importance of scalability to applications and infrastructure cannot be overstated. It is the primary leg upon which availability stands and should be carefully considered as a key criterion.

Also important to scalability today is elasticity: the ability to scale up and down, out and back, based on demand, automatically. Achieving that goal requires programmability, integration with public and private cloud providers as well as automation and orchestration frameworks, and the ability to monitor not just individual applications but their entire dependency chain to ensure complete scalability.

Security

If attacks today were measured like winds, we'd be looking at a full-scale hurricane. The frequency, volume, and surfaces of attacks have been increasing year by year and continue to surprise business after business. While security is certainly its own domain, it is a key factor in availability.
The goal of a DDoS attack, whether at the network or application layer, is, after all, to deny service; availability is cut off by resource exhaustion or oversubscription. Emerging threats such as ransomware, as well as existing attacks focused on corrupting data, are ultimately about denying availability to an application; only the motivation differs in each case. Regardless, the reality is that security is required to achieve availability. Whether it's protecting against a crippling volumetric DDoS attack by redirecting all traffic to a remote scrubbing center, or maintaining vigilance in scrubbing inbound requests and data to eliminate compromise, security supports availability. Scalability may be able to overcome a layer 7 resource-exhaustion attack, but it can't prevent a volumetric attack from overwhelming the network and making it impossible to access applications. That means security cannot be overlooked as a key component in any availability strategy.

Performance

Although performance is almost always top of mind for those whose business relies on applications, it is rarely considered with the same severity as availability. Yet it is a key component of availability from the perspective of those who consume applications for work and for play. While downtime is disruptive to business, performance problems are destructive to business. The 8-second rule has long been superseded by the 5-second rule, and recent studies support its continued dominance regardless of geographic location. The importance of performance to perceived availability is as real as that of scalability to technical availability. 82 percent of consumers in a UK study[3] believe website and application speed is crucial when interacting with a business. Applications suffering poor performance are abandoned, which has the same result as the application simply being inaccessible: a loss of productivity or revenue.
After all, a consumer or employee can't tell the difference between an app that's simply taking a long time to respond and an app that's suffered a disruption. There's no HTTP status code for that.

Perhaps unsurprisingly, a number of performance-improving services have at their core the function of alleviating resource exhaustion. Offloading compute-intensive functions like encryption and decryption, as well as connection management, can reduce the load on applications and in turn improve performance. These intertwined results are indicative of the close relationship between performance and scalability, and indicate the need to address challenges with both in order to realize true availability.

It's All About Availability

Availability is as important to business as the applications it is meant to support. No single service can ensure availability on its own. It is only through the combination of all three services -- security, scalability, and performance -- that true availability can be achieved. Without scalability, demand can overwhelm applications. Without security, attacks can eliminate access to applications. And without performance, end users can perceive an application as unavailable even if it's simply responding slowly. In an application world, where applications are core to business success and growth, the best availability strategy is one that addresses the most common challenges: those of scale, security, and speed.

[1] https://press.kaspersky.com/files/2014/11/B2B-International-2014-Survey-DDoS-Summary-Report.pdf
[2] http://www.ca.com/us/~/media/files/whitepapers/ema-ca-it-apm-1112-wp-3.aspx
[3] https://f5.com/about-us/news/press-releases/gone-in-five-seconds-uk-businesses-risk-losing-customers-to-rivals-due-to-sluggish-online-experience