DNS The F5 Way: A Paradigm Shift
This is the second in a series of DNS articles that I'm writing. The first is: Let's Talk DNS on DevCentral. Internet users rely heavily on DNS, and when DNS breaks, applications break. It's extremely important to implement an architecture that provides for DNS availability at all times, because the number of Internet users continues to grow. In fact, a recent study conducted by the International Telecommunication Union claims that mobile devices will outnumber the people living on this planet at some point this year (2014). I'm certainly contributing to those stats as I have a smartphone and a tablet! In addition, the sophistication and complexity of websites are increasing. Many sites today require hundreds of DNS requests just to load a single page. So, when you combine the number of Internet users with the complexity of modern sites, you can imagine that the number of DNS requests traversing your network is extremely large. Verisign's average daily DNS query load during the fourth quarter of 2012 was 77 billion with a peak of 123 billion. Wow...that's a lot of DNS requests...every day! The point is this...Internet use is growing, and the need for reliable DNS is more important than ever.

par·a·digm noun \ˈper-ə-ˌdīm\: a group of ideas about how something should be done, made, or thought about

Conventional DNS design goes something like this... Front-end (secondary) DNS servers are load balanced behind a firewall, and these servers answer all the DNS queries from the outside world. The master (primary) DNS server is located in the datacenter and is hidden from the outside world behind an internal firewall. This architecture was adequate for a smaller Internet, but in today's complex network world, this design has significant limitations. Typical DNS servers can only handle up to 200,000 DNS queries per second per server. Using the conventional design, the only way to handle more requests is to add more servers. Let's say your organization is preparing for a major event (holiday shopping, for example) and you want to make sure all DNS requests are handled. You might be forced to purchase more DNS servers in order to handle the added load. These servers are expensive and take critical manpower to operate and maintain. You can start to see the scalability and cost issues that add up with this design. From a security perspective, there is often weak DDoS protection with a conventional design. Typically, DDoS protection relies on the network firewall, and this firewall can be a huge traffic bottleneck. Check out the following diagram that shows a representation of a conventional DNS deployment. It's time for a DNS architecture paradigm shift. Your organization requires it, and today's Internet demands it.

F5 Introduces A New Way...

The F5 Intelligent DNS Scale Reference Architecture is leaner, faster, and more secure than any conventional DNS architecture. Instead of adding more DNS servers to handle increased DNS request load, you can simply install the BIG-IP Global Traffic Manager (GTM) in your network's DMZ and allow it to handle all external requests. The following diagram shows the simplicity and effectiveness of the F5 design. Notice that the infrastructure footprint of this design is significantly smaller. This smaller footprint reduces costs associated with additional servers, manpower, HVAC, facility space, etc. I mentioned the external request benefit of the BIG-IP GTM...here's how it works.
The BIG-IP GTM uses F5's specifically designed DNS Express zone transfer feature and clustered multiprocessing (CMP) for exponential performance of query responses. DNS Express manages authoritative DNS queries by transferring zones into its own RAM, so it significantly improves query performance and response time. With DNS Express zone transfers and the high-performance processing realized with CMP, the BIG-IP GTM can scale up to more than 10 million DNS query responses per second, which means that even large surges of DNS requests (including malicious ones) are unlikely to disrupt your DNS infrastructure or affect the availability of your critical applications. The BIG-IP GTM is much more than an authoritative DNS server, though. Here are some of the key features and capabilities included in the BIG-IP GTM:

· ICSA certified network firewall -- you don't have to deploy DMZ firewalls any more...it IS your firewall!
· Monitors the health of app servers and intelligently routes traffic to the nearest data center using IP Geolocation
· Protects from DNS DDoS attacks using the integrated firewall services, scaling capabilities, and IP address intelligence
· Allows you to utilize the benefits of a cloud environment by flexibly deploying BIG-IP GTM Virtual Edition (VE)
· Supports DNSSEC with real-time signing and validates DNSSEC responses

As you can see, the BIG-IP GTM is a workhorse that has no rival in today's market. It's time to change the way we think about DNS architecture deployments. So, utilize the F5 Intelligent DNS Scale Reference Architecture to improve web performance by reducing DNS latency, protect web properties and brand reputation by mitigating DNS DDoS attacks, reduce data center costs by consolidating DNS infrastructure, and route customers to the best performing components for optimal application and service delivery. Learn more about F5 Intelligent DNS Scale by visiting https://f5.com/solutions/architectures/intelligent-dns-scale
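To put numbers like these in context, it can help to measure query latency from the client side. Below is a minimal sketch (not an F5 tool) that times queries against an authoritative server using the third-party dnspython package; the server address and zone are placeholders made up for illustration.

```python
# Measure DNS query latency against an authoritative server (sketch).
# Requires the "dnspython" package (pip install dnspython). The server IP
# and zone below are placeholders, not values from the article.
import time
import dns.exception
import dns.resolver

AUTHORITATIVE = "192.0.2.53"   # hypothetical authoritative listener address
ZONE = "example.com"

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = [AUTHORITATIVE]
resolver.lifetime = 2.0        # fail fast so a dead server is obvious

def timed_query(name, rdtype="A"):
    """Return (answer, elapsed_ms) for a single query."""
    start = time.perf_counter()
    answer = resolver.resolve(name, rdtype)
    return answer, (time.perf_counter() - start) * 1000.0

if __name__ == "__main__":
    for rdtype in ("A", "AAAA", "MX"):
        try:
            ans, ms = timed_query(ZONE, rdtype)
            print(f"{rdtype:5s} {ms:6.2f} ms  {[r.to_text() for r in ans]}")
        except dns.exception.DNSException as exc:
            print(f"{rdtype:5s} query failed: {exc}")
```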
Beyond Scalability: Achieving Availability

Scalability is only one of the factors that determine availability. Security and performance play a critical role in achieving the application availability demanded by business and customers alike. Whether the goal is to achieve higher levels of productivity or generate greater customer engagement and revenue, the venue today is the same: applications. In any application-focused business strategy, availability must be the keystone. When the business at large is relying on applications to be available, any challenge that might lead to disruption must be accounted for and answered. Those challenges include an increasingly wide array of problems that cost organizations an enormous amount in lost productivity, missed opportunities, and damage to reputation. Today's applications are no longer simply threatened by overwhelming demand. Additional pressures in the form of attacks and business requirements are forcing IT professionals to broaden their views on availability to include security and performance. For example, a Kaspersky study[1] found that "61 percent of DDoS victims temporarily lost access to critical business information." A rising class of attack known as "ransomware" has similarly poor outcomes, with the end result being a complete lack of availability for the targeted application. Consumers have a somewhat different definition of "availability" than the one found in textbooks and scholarly articles. A 2012 EMA[2] study notes that "Eighty percent of Web users will abandon a site if performance is poor and 90% of them will hesitate to return to that site in the future," with poor performance designated as more than five seconds. The impact, however, of poor performance is the same as that of complete disruption: a loss of engagement and revenue. The result is that availability through scalability is simply not good enough. Contributing factors like security and performance must be considered to ensure a comprehensive availability strategy that meets expectations and ensures business availability. Realizing this goal requires a tripartite of services comprising scalability, security and performance.

Scalability

Scalability is and likely will remain at the heart of availability. The need to scale applications and dependent services in response to demand is critical to maintaining business today. Scalability includes load balancing and failover capabilities, ensuring availability across the two primary failure domains – resource exhaustion and failure. Where load balancing enables the horizontal scale of applications, failover ensures continued access in the face of a software or hardware failure in the critical path. Both are equally important to ensuring availability and are generally coupled together. In the State of Application Delivery 2015, respondents told us the most important service – the one they would not deploy an application without – was load balancing. The importance of scalability to applications and infrastructure cannot be overstated. It is the primary leg upon which availability stands and should be carefully considered as a key criterion. Also important to scalability today is elasticity: the ability to scale up and down, out and back based on demand, automatically. Achieving that goal requires programmability, integration with public and private cloud providers as well as automation and orchestration frameworks, and an ability to monitor not just individual applications but their entire dependency chain to ensure complete scalability.
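As a toy illustration of how load balancing and failover work together (a sketch, not how BIG-IP implements it), the snippet below rotates requests through a pool and fails over past any member that doesn't answer a simple TCP health check. The member addresses and port are invented.

```python
# Minimal sketch of the two availability mechanisms described above:
# round-robin load balancing across a pool, plus failover past members
# that fail a health check. Pool members and the monitor are illustrative.
import itertools
import socket

POOL = [("10.0.0.11", 8080), ("10.0.0.12", 8080), ("10.0.0.13", 8080)]

def is_healthy(host, port, timeout=0.5):
    """Simple TCP-connect monitor: can we open a socket to the member?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

_rotation = itertools.cycle(POOL)

def pick_member():
    """Return the next healthy member, failing over past dead ones."""
    for _ in range(len(POOL)):
        member = next(_rotation)
        if is_healthy(*member):
            return member
    raise RuntimeError("no healthy members: pool is unavailable")

if __name__ == "__main__":
    print(pick_member())
```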
Security

If attacks today were measured like winds, we'd be looking at a full-scale hurricane. The frequency, volume and surfaces for attacks have been increasing year by year and continue to surprise business after business after business. While security is certainly its own domain, it is a key factor in availability. The goal of a DDoS attack, whether at the network or application layer, is, after all, to deny service; availability is cut off by resource exhaustion or oversubscription. Emerging threats such as "ransomware" as well as existing attacks focused on corruption of data are, too, ultimately about denying availability to an application. The motivation is simply different in each case. Regardless, the reality is that security is required to achieve availability. Whether it's protecting against a crippling volumetric DDoS attack by redirecting all traffic to a remote scrubbing center or ensuring vigilance in scrubbing inbound requests and data to eliminate compromise, security supports availability. Scalability may be able to overcome a layer 7 resource exhaustion attack, but it can't prevent a volumetric attack from overwhelming the network and making it impossible to access applications. That means security cannot be overlooked as a key component in any availability strategy.

Performance

Although performance is almost always top of mind for those whose business relies on applications, it is rarely considered with the same severity as availability. Yet it is a key component of availability from the perspective of those who consume applications for work and for play. While downtime is disruptive to business, performance problems are destructive to business. The 8-second rule has long been superseded by the 5-second rule, and recent studies support its continued dominance regardless of geographic location. The importance of performance to perceived availability is as real as scalability is to technical availability. 82 percent of consumers in a UK study[3] believe website and application speed is crucial when interacting with a business. Applications suffering poor performance are abandoned, which has the same result as the application simply being inaccessible, namely a loss of productivity or revenue. After all, a consumer or employee can't tell the difference between an app that's simply taking a long time to respond and an app that's suffered a disruption. There's no HTTP code for that. Perhaps unsurprisingly, a number of performance-improving services have at their core the function of alleviating resource exhaustion. Offloading compute-intense functions like encryption and decryption as well as connection management can reduce the load on applications and in turn improve performance. These intertwined results are indicative of the close relationship between performance and scalability and indicate the need to address challenges with both in order to realize true availability.

It's All About Availability

Availability is as important to business as the applications it is meant to support. No single service can ensure availability on its own. It is only through the combination of all three services – security, scalability and performance – that true availability can be achieved. Without scalability, demand can overwhelm applications. Without security, attacks can eliminate access to applications. And without performance, end users can perceive an application as unavailable even if it's simply responding slowly.
In an application world, where applications are core to business success and growth, the best availability strategy is one that addresses the most common challenges – those of scale, security and speed.

[1] https://press.kaspersky.com/files/2014/11/B2B-International-2014-Survey-DDoS-Summary-Report.pdf
[2] http://www.ca.com/us/~/media/files/whitepapers/ema-ca-it-apm-1112-wp-3.aspx
[3] https://f5.com/about-us/news/press-releases/gone-in-five-seconds-uk-businesses-risk-losing-customers-to-rivals-due-to-sluggish-online-experience

F5 Synthesis: All Active ADC Clustering
#SDAS #Cloud ADC clustering isn't enough, because you deliver app services, not ADC instances.

The classic high availability (HA) deployment pattern is hard to break. It's been the keystone upon which data centers have been built since the turn of the century. Redundancy, after all, ensures reliability. But today's data centers are as concerned with efficiency as they are with reliability, and with economies of scale even more so. Assigning pairs of application delivery controllers (ADCs) to every application in need of high availability is no longer economically or operationally viable. A fabric-based model cannot be based on the premise of simply extending the HA model to a larger set of devices. The traditional HA model relies on device-level failover; if the primary device fails, simply make the secondary active and voila! Continued availability. This model, of course, required a secondary (and very idle) device. In today's OpEx-aware world, that's not going to fly. And we won't even cite the number of times a primary failed after long years of service only to discover the backup was long dead, too. Active-active seems a logical way to go, except for that whole over-subscribed thing. You know, when the distributed load across the two systems is greater than the capacity of a single system. When 60% load plus 60% load = too much load. Failover? Sure, for some of the load. The rest? Bah! Who needed those thousands of dollars worth of transactions anyway, right? A better model is needed, for sure, and the advances in technology over the past few years have resulted in an awareness that it can't just be about device clustering for ADCs. The increased demand for multi-tenancy in the network has been answered, for the most part. The actual ADC platform today is capable of hosting multiple, multi-tenant (virtual) ADC instances. But if one of those fails, you don't want to impact the others. Device-level failover isn't enough for modern, virtualized networking.

Clustering has to be about app services clustering, too

Which is what F5 offers with Synthesis' High Performance Services Fabric through its ScaleN technology.

ScaleN: Device Service Clustering

ScaleN is a scalability and availability model based on the premise that infrastructure is hybrid (physical and virtual), that app services scale (and often fail) elastically and erratically, and that operational efficiency is number one. Device Service Clustering (DSC) is designed to meet and exceed those requirements by enabling a more flexible and efficient model of availability and scalability at the same time. DSC starts with the ability to cluster together (today up to 32) devices, whether physical or virtual (and by virtual we mean on any of the popular hypervisors), and create up to 2560 multi-tenant ADC instances*. Then we enable the ability to group those devices together and synchronize configurations (because you've got better things to do than copy config files from device to device, don't you?). And then we also make sure that each of the ADC instances is isolated from the others. That means if one ADC instance with all its app services has trouble and needs to fail over to another device, it doesn't impact any other instance (and all its app services) on the original device. That's app service isolation.
What you end up with is a highly flexible services fabric (a pool of hardware and/or virtual resources) that enables app services to scale beyond the traditional pair of ADC instances (scale out) or to migrate from one instance to another (scale up) without disruption. DSC offers organizations the ability to optimize delivery services across a heterogeneous pool of resources without fear of oversubscribing a device. That's because ScaleN is capable of performing load-aware and user-defined failover. In the past, failover was a strictly static proposition because it was based on a fixed order of devices. Primary to secondary, secondary to tertiary, etc... Using load-aware and user-defined failover, however, the order of failover becomes dynamic and based on current conditions. That allows the fabric to maintain as equal a distribution across a cluster as possible. The goal is always to maintain the most efficient use of resources across clusters and avoid disruptive failover events - both at the device and the service level. Because you should be delivering app services, not ADC instances. And while the two are inexorably linked, they shouldn't be chained permanently together. That's the old, static HA model - whether active-standby or active-active. The new, dynamic HA model is all-active, elastic and service-aware clustering.

* You can further divide an F5 ADC instance using route domains and administrative partitions. The number of possible "instances" using all three options is, well, really big. Really, really, big.
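To make the contrast between fixed-order and load-aware failover concrete, here is a small sketch of the selection idea, assuming invented device names, capacities and loads. It is not ScaleN code; it simply picks the surviving device with the most headroom instead of whatever is next in a static list.

```python
# Sketch of load-aware failover: the next-active device is chosen from
# the cluster based on current load rather than a fixed primary/secondary
# order. Device names, capacities and loads are illustrative.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    capacity_cps: int    # connections/sec the device can absorb
    current_cps: int     # what it is carrying right now

    @property
    def headroom(self) -> int:
        return self.capacity_cps - self.current_cps

CLUSTER = [
    Device("bigip-a", 100_000, 62_000),
    Device("bigip-b", 100_000, 35_000),
    Device("bigip-c",  50_000, 10_000),
]

def failover_target(failed: str, needed_cps: int) -> Device:
    """Pick the surviving device with the most headroom that can absorb
    the failed instance's load without oversubscribing itself."""
    candidates = [d for d in CLUSTER
                  if d.name != failed and d.headroom >= needed_cps]
    if not candidates:
        raise RuntimeError("no device can absorb the load without oversubscription")
    return max(candidates, key=lambda d: d.headroom)

print(failover_target("bigip-a", needed_cps=30_000).name)   # -> bigip-b
```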
F5 Synthesis: Hybrid to the Core

#SDAS #SDN #Cloud #SSL #HTTP2.0 F5 continues to pave the way for business to adopt disruptive technologies without, well, as much disruption.

The term hybrid is somewhat misleading. In the original sense of the word, it means to bring together two disparate "things" that result in some single new "thing". But technology has adapted the meaning of the word to really mean the bridging of two different technological models. For example, a hybrid cloud isn't really smashing up two cloud environments to form a single, new cloud; rather it's bridging the two technologies in a seamless way so as to make them interoperate and cooperate as if they were a single, unified cloud. This concept is necessary because of the way in which data center and computing models evolve. We don't ditch the last generation when the next generation comes along. Rather we graft the new onto the old or combine them in ways that enable the use of both - albeit often times separately. IPv4 and IPv6, for example, pose significant challenges due to incompatibilities. The reliance on the former and the need for the latter drive us to adopt technology such as gateways and brokers to enable a smooth(er) transition from the old to the new. Hybrid is a way to keep organizations moving forward without sacrificing support for where we are right now. As organizations are challenged to adopt the latest applications and technology based on cutting-edge protocols to improve performance and gain advantages through efficiency, they are simultaneously challenged to scale network infrastructure to handle more traffic, more applications and more "things" connecting to their networks. Cloud offers a path forward, but introduces challenges, too, in managing access, performance, security and scale across an increasingly distributed set of domains. Organizations need hybrid answers to hybrid challenges that threaten the reliability and security of their applications.

F5: Hybrid to the Core

F5 is no stranger to providing hybrid answers to hybrid challenges. F5 Synthesis Software Defined Application Services (SDAS) provide a robust set of services spanning protocol and application layer gateway capabilities, which means you can support a hybrid cloud as easily as a hybrid network that incorporates SDN or emerging protocols like HTTP 2.0. With the release of BIG-IP 11.6 - the platform from which F5 Synthesis High Performance Services Fabric is composed - organizations will be even better positioned to take advantage of new and existing technologies simultaneously while meeting hyperscale challenges arising from even more devices and more applications in need of services. F5 is the first and only vendor to support HTTP 2.0 with BIG-IP 11.6. Like IPv6, HTTP 2.0 is incompatible with the existing de facto standard version (1.1), making it difficult for organizations to move forward and enjoy the proffered benefits of HTTP 2.0 in faster, simpler and more secure applications. F5's approach is hybrid: why be constrained to just one version when you can support both? Too, why must you choose between the performance benefits of hardware-accelerated SSL or the flexibility of a virtual ADC on off-the-shelf hardware? F5 believes you shouldn't have to, and offers another first in the industry - a hybrid SSL offload approach. Organizations can enable 8 times the SSL capacity by taking advantage of the hybrid nature of the F5 High Performance Service Fabric enabled through its unique ScaleN technology.
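At the protocol level, "support both versions" comes down to ALPN: the TLS handshake advertises h2 and http/1.1, and the two sides settle on whichever they share. The generic Python sketch below (not an F5 configuration; the host name is just an example) shows that negotiation from the client side.

```python
# Sketch of the "support both" idea at the protocol level: offer HTTP/2
# and HTTP/1.1 via ALPN and report what the peer picked. A gateway in
# front of an HTTP/1.1-only app does this negotiation on its client-facing
# side while speaking 1.1 to the back end.
import socket
import ssl

HOST = "www.example.com"   # example host, not an F5 property

ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])   # offer both, in preference order

with socket.create_connection((HOST, 443), timeout=5) as raw:
    with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
        chosen = tls.selected_alpn_protocol() or "http/1.1 (no ALPN)"
        print(f"{HOST} negotiated: {chosen}")
```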
And then, of course, there's cloud and the Internet of Things (or BYOD if you're still focusing just on devices) driving the need for a different kind of access control strategy: a hybrid one. Whether it's things or people, traditional access control techniques that rely on IP addresses and can't effectively manage both cloud- and data center-deployed applications aren't going to cut it. Add in the need to hyperscale to meet demand, and you need a more hybrid-friendly approach. BIG-IP 11.6 brings identity-based firewalling into our application delivery firewall services. Combined with existing cloud-identity federation capabilities based on broad SAML support, a seamless hybrid cloud experience for SSO and access is well within reach. As F5 continues to expand and extend the capabilities of its Software-Defined Application Services (SDAS), the notion of "hybrid" architectures, technologies and networks will remain core to its ability to ensure organizations can continue to deploy and deliver applications without constraints.

Accelerating the Transition to Cloud
The benefits of moving to a cloud architecture, whether on-premises private cloud or public cloud, include the agility to respond to change, scalability, and ultimately improved efficiency that translates to cost savings. Cloud (or software-defined) architectures have leveraged virtualization and automation to maximize compute, storage, and software ROI, as well as standardize services and applications onto fewer platforms. Now underway is the same transformation of the network infrastructure: firewalls, switches, routers, and Application Delivery Controllers (ADCs). One of the main concerns in moving to a cloud or virtualized architecture is, no surprise, the security of the underlying network infrastructure as solutions are virtualized. CSOs and security teams for enterprises and cloud providers need to be able to completely assure their downstream customers that their network traffic cannot be seen or manipulated by other customers hosted on the same physical device. F5's ScaleN virtual Clustered Multiprocessing (vCMP®) technology, part of our market-leading BIG-IP application delivery services platform, provides that needed level of security. By combining the agility of virtual application services with the scalability and security of a purpose-built ADC hypervisor and hardware, F5 gives cloud providers a virtualization strategy for application delivery and securing multi-tenant environments. The provider can offer performance, scalability, and security to each of their downstream customers by creating discrete virtual BIG-IP® instances (like F5's Local Traffic Manager or Application Security Manager) on either BIG-IP appliances or VIPRION blades (see Fig 1). You get the agility and flexibility to run different versions and app services for each instance, have complete isolation of traffic and resources, and spin instances up or down as needed. For performance, these virtual instances tap into the same dedicated acceleration hardware used by the hosting platform, including SSL offload, compression, and DDoS protection. In addition, with F5's RESTful APIs, BIG-IP virtual instances can be managed and integrated into most cloud environments. With the release of BIG-IP v11.6, the security and isolation of vCMP instances have been enhanced through a combination of hardware and software resource isolation methods, including leveraging CPU memory management capabilities to ensure that the instances can't access memory from the hypervisor or from each other. vCMP is secure at the system level (hypervisor and guest) and the network level (data plane and management plane); see Figure 2. Enterprises and managed service providers can be assured that vCMP instances cannot snoop on or affect traffic in other instances or the host. The "noisy neighbor" problem common to virtualized environments is greatly reduced, which promotes a more secure cloud and enables standardization of services on one platform. In addition, 11.6 introduces BIG-IP ASM REST APIs, which allow the manipulation of every aspect of security policy management. When combined with vCMP multi-tenant support, F5 ASM is the leading WAF solution that can be deployed in the cloud or as a service. Lastly, to demonstrate how seriously we take security, and to meet specific government and FSI compliance requirements, vCMP is part of the overall BIG-IP Common Criteria EAL4+ certification that is in process, and we are completing a specific vCMP penetration test performed by a well-respected 3rd-party testing vendor.
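As a hedged sketch of what driving policy management through REST can look like, the snippet below lists ASM policy names over iControl REST. The endpoint path, response shape, host and credentials are assumptions based on common iControl REST conventions, not values taken from this article; verify them against the documentation for your BIG-IP version.

```python
# Hedged sketch of security policy management over REST, in the spirit of
# the ASM REST APIs mentioned above. The endpoint and response fields are
# assumptions; confirm them for your BIG-IP version before relying on them.
import requests

BIGIP = "https://bigip.example.com"          # placeholder management address
AUTH = ("admin", "admin-password")           # placeholder credentials

def list_asm_policies():
    """List ASM policy names via iControl REST (assumed endpoint)."""
    resp = requests.get(
        f"{BIGIP}/mgmt/tm/asm/policies",
        auth=AUTH,
        verify=False,      # lab only; use a real CA bundle in production
        timeout=10,
    )
    resp.raise_for_status()
    return [item.get("name") for item in resp.json().get("items", [])]

if __name__ == "__main__":
    for name in list_asm_policies():
        print(name)
```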
In future postings you will learn more about how F5's secure lifecycle development process can help you meet your security requirements and achieve the benefits of migrating to the cloud.

Additional Resources:
· vCMP Whitepaper
· Multi-Tenant Security with vCMP whitepaper
· Peak Hosting uses vCMP for agility and multi-tenancy video

F5 Synthesis: SSD is Carpooling for the Network
#sdas #webperf Sometimes it's not whether or not you use hardware, but the hardware you choose to use.

Everybody talks a good game with respect to changing the economy of scale, but very few explain how that's going to actually happen. The reality in most enterprise data centers today is that services are expensive. Traditional architectures prescribe that a redundant set of network devices be deployed in order to preserve operational reliability (performance and availability). But given the hundreds of applications being delivered every day (and with that number growing), the cost associated with such architectures can quickly become prohibitive for most applications. Yet all applications are important. If someone in the business took the time to procure an application or have it developed, then it's important to their specific function or the business' bottom line. Adoption, and ultimately success, is at least partially dependent on making sure that application is secure, fast and reliable. But the business may not be able to afford to implement the network and application services required to achieve that, potentially dooming the application to a lackluster life of minimal use and the subject of many frustrated user comments. Changing the economy of scale means making the services required to keep an application secure, fast and reliable affordable even for the applications that aren't considered "critical" today (because they could be tomorrow). One of the ways to achieve this is by sharing more of the infrastructure costs across more applications. Virtualization achieved this by making it possible to deploy many "servers" on a single, shared hardware platform. The cost savings were dramatic - both in capital expenses (the hardware and software necessary) and operational expenses (administrators could manage more "boxes" than ever before). Certainly these lessons can be - and are being - applied to the network. But there are challenges in this approach, as services in the network have very different workload profiles than the applications being deployed on virtualized servers. For example, stateful services in the network (those required to operate at layers 4-7 like load balancing, acceleration, security, etc...) consume more disk and memory resources than their layer 2-3 counterparts. This is due in part to the need to store state (sessions) as well as data (caching, compression, etc...) and policies that direct how to interact with devices and applications. Traditional hard disk drives (HDDs) are not necessarily well-suited to providing a high-performance platform for multiple services with these consumption profiles. Traditional HDD technology introduces a bottleneck at the storage I/O layer that can significantly inhibit the number of virtualized services that can be deployed without impacting performance. Guest density - how many virtualized ADCs you can realistically support on a single hardware platform - is impacted, which means it's harder to achieve the economy of scale desired to make sure all applications are able to use the services they need to be successful. Enter the SSD (solid-state disk). SSDs are known to perform better (and last longer) than their HDD predecessors. Their performance characteristics relieve the pressure at the I/O bottleneck, resulting in the ability to provision more guests on the same (shared) hardware without impacting performance. This means more services available at lower cost (economy of scale) to more applications.
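A quick back-of-the-envelope calculation shows why storage I/O, rather than CPU, can cap guest density. All three numbers below are illustrative assumptions, not F5 measurements; the 200x multiplier simply echoes the rough figure cited in the next paragraph.

```python
# Back-of-the-envelope sketch of the storage bottleneck argument: if each
# virtualized ADC guest needs a given number of IOPS for session state,
# caching and policy reads, how many guests fit before disk I/O becomes
# the limit? All figures are illustrative assumptions.
HDD_IOPS = 200                 # order of magnitude for a single spindle
SSD_IOPS = HDD_IOPS * 200      # the ~200x figure cited in the article
IOPS_PER_GUEST = 100           # assumed steady-state I/O appetite of one guest

def max_guests(disk_iops: int) -> int:
    return disk_iops // IOPS_PER_GUEST

print(f"HDD-backed platform: {max_guests(HDD_IOPS)} guests before I/O saturates")
print(f"SSD-backed platform: {max_guests(SSD_IOPS)} guests before I/O saturates")
```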
[Carpool savings estimate provided by rideshareonline.com]

It's like carpooling, only for the network. Instead of driving your car alone, you pool resources with three or four other folks (pretend they're ADCs) and voila! Not only do you get to reduce the costs of commuting to work, but you also get there faster because you're using the express lane (SSD has approximately 200x the IOPS of an HDD, so it's screaming fast).

F5 Synthesis 1.5: SSD-enabled High Performance Services Fabric

One of the new capabilities recently announced for Synthesis is the availability of SSD-enabled appliances. This means that if you compose the F5 Synthesis High Performance Services Fabric from SSD-enabled appliances, you're going to get a greater economy of scale and better performance for the Software Defined Application Services it delivers. That translates into faster applications, because the faster we can apply security or performance or identity and access control services, the faster the application can be delivered. It's a win-win situation, with greater density of guests across the service fabric supporting more applications that need a little boost in performance, tighter security or additional controls on access. That's how to change the economy of scale.

Additional F5 Synthesis Resources:
· F5 Synthesis Site
· iControl REST Wiki on DevCentral
· F5 Synthesis related posts on DevCentral

F5 Synthesis for Service Providers: Scaling in Three Dimensions
#MWC14 #SDAS #NFV #SDN It's not just about changing the economy of service scale; it's about operations, too.

Estimates based on reports from Google put the number of daily activations of new Android phones at 1.3 million. Based on reported data from Apple, there are 641 new applications added to the App Store per day. According to Cisco's Visual Networking Index, mobile video now accounts for more than 50% of mobile data traffic. Put them all together and consider the impact on the data, application and control planes of a network. Of a mobile network. Now consider how a service provider might scale to meet the demands imposed on their networks by continued growth, but make sure to factor in the need to maintain a low cost per subscriber and the ability to create new revenue streams through service creation. Scaling service provider networks in all three dimensions is no trivial effort, but adding on the requirement to maintain or lower the cost per subscriber and enable new service creation? Sounds impossible - but it's not. That's exactly what F5 Synthesis for Service Providers is designed to do: enable mobile network operators to optimize, secure and monetize their networks.

F5 Synthesis for Service Providers

F5 Synthesis for Service Providers is an architectural framework enabling mobile network operators to optimize, secure and monetize their networks. F5 Synthesis achieves this by changing the service economy of scale, taking advantage of a common, shared platform to reduce operational overhead and improve service provisioning velocity while addressing key security concerns across the network. F5 Synthesis for Service Providers enables mobile network operators to scale in three dimensions: the control, data and application planes.

Control Plane

The control plane is the heart of a service provider network. Tasked with the responsibility for managing subscriber use and ensuring the appropriate services are applied to traffic, it can easily become overwhelmed by signaling storms that occur due to spikes in activations or an Internet-wide gaming addiction that causes millions of concurrent players to join in. The control plane is driven by Diameter, and F5 Synthesis for Service Providers includes F5's Traffix Signaling Delivery Controller, nominated this year for Best Mobile Infrastructure at Mobile World Congress. With unparalleled performance, flexibility and programmability, F5 Traffix SDC helps mobile network operators scale the control plane while enabling the creation of new control plane services. Less often considered, but no less important in the control plane, are DNS services. A scalable, highly resilient and secure DNS service is critical to both the performance and security of service provider networks. F5 Synthesis for Service Providers includes DNS services capable of scaling to 418 million query responses per second (RQPS), with comprehensive protection against DNS-targeting DDoS attacks.

Data Plane

The service provider data plane serves as the backbone between the mobile network and the Internet, and must be able to support millions of consumer requests for applications. Banking, browsing, shopping, watching video and sharing via social media are among the most popular activities, many of which are nearly continuous for some subscribers. Bandwidth-hungry applications like video can become problematic for the data plane and cause degradations in performance that hamper the subscriber experience and send them off looking for a new provider.
To combat performance, security and reliability challenges, service providers have invested in a variety of targeted solutions, which has led to a complex, hyper-heterogeneous infrastructure comprising the Gi network. This complexity increases the cost per subscriber by introducing operational overhead, and it can degrade performance by adding latency due to the number of disparate devices that data must traverse. F5 Synthesis for Service Providers includes a high-performance service fabric composed of any combination of hardware or virtual appliances capable of supporting over 20 Tbps. Hardware and appliances from F5 are enabled with its unique vCMP technology, which allows the creation of right-sized service instances that can be scaled up and down dynamically and ultimately reduce the cost per subscriber of the services delivered. The F5 Synthesis High Performance Service Fabric is built on a common, shared and highly optimized platform on which key service provider functions can be consolidated. By consolidating services in the Gi network on a single, unified platform like the F5 Synthesis service fabric, providers can eliminate the operational overhead incurred by the need to manage multiple point products, each with its own unique management paradigm. Consolidation also means services deployed on the F5 Synthesis High Performance Service Fabric gain the performance and scale advantages of a network stack highly optimized for mobile networking.

Application Plane

Value-added services are a key differentiator and key revenue opportunity for service providers, but they can also be a source of poor performance due to the requirement to route all data traffic through all services, regardless of their applicability. Sending text through a video optimization service, or video through an ad insertion service, does not add value, but it does consume resources and time that impact the overall subscriber experience. F5 Synthesis services include policy enforcement management capable of selectively routing data through only the value-added services that make sense for a given subscriber and application combination. Using Dynamic Service Chaining, F5 Synthesis optimizes service chains to ensure more efficient resource utilization and improved performance for subscribers. This in turn allows service providers to selectively scale highly utilized value-added services, which saves time and money and reduces the cost to deliver them. F5 Synthesis for Service Providers works in concert with virtual machine provisioning systems to enable service providers to move toward NFV-based architectures. Intelligent monitoring of value-added services, combined with awareness of load and demand, enables F5 Synthesis for Service Providers to ensure VAS can be scaled up and down individually, eliminating VAS silos and the need to scale the entire VAS infrastructure at the same time, and resulting in significant cost savings across the VAS infrastructure. F5 Synthesis for Service Providers also offers the most flexible set of programmability features in the industry: control plane, data plane and management plane; APIs for integration; scripting languages for service creation; iApps; and a cloud-ready, multi-tenant services fabric that can be combined with a self-service management platform (BIG-IQ). This level of programmability changes the operational economy of scale through automation and orchestration opportunities.
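To show what a dynamic service chain decides in practice, here is a small sketch of the selection logic: classify the flow, then return only the value-added services that apply. The service names and classification rules are invented for illustration; this is not how BIG-IP policy enforcement is actually configured.

```python
# Sketch of dynamic service chaining: steer a flow only through the
# value-added services that apply to it, instead of through every service
# in a fixed line. Service names and rules are illustrative.
def service_chain(content_type: str, subscriber_plan: str) -> list[str]:
    """Return the ordered list of services this flow should traverse."""
    chain = ["tcp-optimization"]                 # useful for everything
    if content_type == "video":
        chain.append("video-optimization")       # skipped for text/API traffic
    if content_type == "web":
        chain.append("ad-insertion")
    if subscriber_plan == "family":
        chain += ["parental-control", "url-filtering"]
    return chain

print(service_chain("video", "standard"))   # ['tcp-optimization', 'video-optimization']
print(service_chain("web", "family"))       # adds parental-control and url-filtering
```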
With F5 Synthesis for Service Providers, mobile network operators can simplify their Gi network while laying the foundation for rapid service creation and deployment on a highly flexible, manageable virtualized service fabric that helps providers execute on NFV initiatives.

F5 Synthesis: Avoiding network taxes to improve app performance
#SDAS #webperf Consumers are increasingly performance-sensitive, which means every microsecond counts.

In the five seconds it takes you to read this, 60% of your visitors just abandoned your site. Another 20% were gone before you even hit the first comma, and 30% of them are purchasing from one of your competitors. That's what Limelight Networks' "State of the User Experience" says, which pretty much falls in line with every other survey of consumers. They are, on the whole, impatient and unwilling to suffer poor performance when there is a veritable cornucopia of choices available out there on the Internet. In an application world, apps equal opportunity. But that opportunity is a double-edged sword of Damocles, just as able to offer opportunity to your competitors as it is to you. Application performance has always been critical. So critical, in fact, that entire markets have cropped up with solutions to address the application and network problems that cause poor performance. Ironically, many of them today may contribute as much to the problem as they do to solving it. Most web performance optimization (WPO) or front-end optimization (FEO) solutions are pain point products. That means that when a pain point - like web application performance - becomes problematic enough for the business to notice, a solution is quickly acquired and deployed. And thus begins the conga line of products designed to improve application delivery. Each addition to the line introduces delay between the consumer and the application. Not because the products are slow or perform poorly, but because there are absolute minimums in terms of the time it takes to open and close TCP sockets and transmit packets over a wire. You can't eliminate it unless you eliminate the solution. Such an architecture might not be problematic if it were just one or two services, but enterprises typically end up with a significant line of devices in the critical data path between the consumer and the application.

Figure 1: Traditional conga-line deployment of application services

Each of these incurs a certain amount of latency, necessarily, due to network and protocol requirements, and contributes to the responsiveness (or lack thereof) of applications. It's a network and protocol tax on each and every service that can't be avoided in this architectural model. This is where a service platform can help. In addition to its strategic advantages, the platform approach to delivering application services through F5 Synthesis has positive performance implications as well. When one platform can support the services you need to improve performance and enhance security, you can eliminate all the latency incurred by deploying those same services on multiple, disparate systems.

Figure 2: Modern, platform deployment of application services

By building application services atop a common, high-performance service platform, F5 can deliver a broad set of application services without incurring the network and connection taxes the conga-line model requires. This eliminates many microseconds (or more, depending on the network topology and conditions) from the consumer response time, meaning applications are delivered faster and consumers don't ditch you for someone else. A platform approach to application services has many advantages. Eliminating the network and protocol taxes required by traditional service deployment models is one of them.
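A rough worked example makes the tax visible. The per-hop and per-service figures below are illustrative assumptions, not measurements; the point is simply that the fixed per-device overhead multiplies with the length of the conga line, while a consolidated platform pays it once.

```python
# Worked version of the "network tax" argument: every extra device in the
# data path adds its own connection handling and wire time, even if the
# device itself is fast. The per-hop figures are illustrative assumptions.
PER_HOP_OVERHEAD_MS = 0.5   # TCP handling + serialization + wire time per device
SERVICE_TIME_MS = 0.2       # time each service spends doing its actual work
SERVICES = 6                # e.g. WAF, LB, caching, SSL, access control, acceleration

conga_line = SERVICES * (PER_HOP_OVERHEAD_MS + SERVICE_TIME_MS)
platform = PER_HOP_OVERHEAD_MS + SERVICES * SERVICE_TIME_MS   # one hop, same services

print(f"chained devices : {conga_line:.1f} ms added per request")
print(f"single platform : {platform:.1f} ms added per request")
```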
F5 Synthesis High Performance Service Fabric is built atop such a platform, and with the broadest set of application services available on a single platform, you can turn your application service infrastructure into a competitive advantage.

For more information on F5 Synthesis:
· F5 Synthesis Site
· F5 Synthesis related posts on DevCentral

F5 Synthesis: Platform is Strategy. Product is Tactics.
#SDAS Inarguably, one of the drivers of software-defined architectures (cloud, SDDC, and SDN) as well as movements like DevOps is the complexity inherent in today's data center networks. For years now we've added applications and services, and responded to new threats and requirements from the business with new boxes and new capabilities. All of them cobbled together using traditional networking principles that provide reliability and scale through redundancy. The result is complex, hard to manage, and even more difficult to change at a moment's notice. Emerging architectural models, whether based solely on cloud computing or part of larger software-defined initiatives, attempt to resolve this issue by introducing abstraction and programmability. To get around the reality that deploying new services in a timely manner takes days, if not weeks or even months, we figure that by moving to a programmatic, software-based model we can become more efficient. Except we aren't becoming more efficient; we're just doing what we've always done. We're just doing it faster. We're not eliminating complexity, we're getting around it by adding a layer of scripts and integration designed to make us forget just how incredibly complex our networks really are. One of the primary reasons our networks are the way they are is that we're reactive. What we've been doing for years now is just reacting to events. Threats, new applications, new requirements - all these events inevitably wind up with IT deploying yet another "middle box": a self-contained appliance - hardware or software - that does X. Protects against X, improves Y, enhances Z. And then something else happens and we do it again. And again. And ... you get the point. We react, and the result is an increasingly complex topological nightmare we call the data center network. What we need to do is find a better model, a strategic model that enables us to deploy those solutions that protect against X, improve Y and enhance Z without adding complexity and increasing the already confusing topology of the network. We need to break out of our tactical mode and start thinking strategically so we can transform IT to be what it needs to be and align IT results with business expectations. That means we need to start thinking platform, not product.

Platform is Strategic. Product is Tactical.

We know that the number of services actually in use in the data center has been increasing in response to all the technological shifts caused by trends like security, cloud and mobility. We've talked to customers that have more than 20 different services (and vendors) delivering services critical to the security, performance and reliability of applications. Every time a new threat or a new trend impacts the data center, we respond with a new service. That's one of the reasons you rarely see a detailed architectural diagram at the application flow level – because every single interaction with a customer, partner or employee can have its own unique flow, and that flow traverses a variety of services depending on the user, device, network, application and even business purpose. That's the product way. What we need to do is shift our attention to platforms and leverage them to reduce complexity while at the same time solving problems - and doing so faster and more efficiently. That's one of the primary benefits of Synthesis. Synthesis' High Performance Services Fabric is built by gluing together a platform - the ADC - using new scalability models (ScaleN).
The platform is what enables organizations to deploy a wide variety of services while gaining operational efficiencies from the fact that the underlying platform is the same. F5 Software Defined Application Services (SDAS) are all deployable on the same, operationally consistent platform regardless of where it might physically reside. Cloud, virtual machine or hardware makes no difference. It's the platform that brings consistency to the table and enables rapid provisioning of new services that protect X, improve Y and enhance Z. In the past year we've brought a number of new services to the Synthesis architecture, including Cloud Identity Federation, Web Anti-Fraud, Mobile optimizations and a Secure Web Gateway. All these services were immediately deployable on the existing platform that comprises the Synthesis High Performance Services Fabric. As we add new capabilities and services, they, too, are deployable on the same platform, in the same fabric-based approach, and immediately gain all the benefits that come from the platform: massive scalability, high performance, reliability and hardened security. A platform approach means you can realize a level of peace of mind about the future and what might crop up next. Whether it's a new business requirement or a new threat, using a platform approach means no more shoehorning a new box into the topology. It means being able to take advantage of operational consistency across cloud and on-premises deployments. It means being able to expand capabilities without needing to expand budgets to support new training, new services, and new contracts. A platform approach to service deployment in data center networks is strategic. And with the constant rate of change headed our way thanks to the Internet of Things and mobility, the one thing we can't afford to go without is a sound strategy for dealing with the technological ramifications on the network.

Devops: The Operational Amplifier
#SDDC #devops #SDN #linerate When Instagram was sold to Facebook in 2012, it employed only 13 people and maintained over 4 billion photos shared by its 80 million registered users. Internally, Instagram was a small business. Externally, it was a web monster. Filling the gap between those two contradictory perspectives is DevOps. Now to be fair, Instagram (like many other web monster properties today) has it easier than most other businesses because it supported only one application. One. That's in stark contrast to large enterprises, which are, by most analyst firms, said to manage not one but one hundred and even one thousand applications - at the same time. Our own data indicates an average of 312 applications per customer, many of which are certainly integrated and interacting with one another. Which makes it difficult to manage even the most innocuous of processes. Maintenance windows exist in the enterprise, after all, to manage expectations with respect to downtime and disruption, specifically because of the interdependent nature of enterprise applications. The thing is, these numbers are only going to get worse as the Internet of Things continues to put pressure on organizations to up their app game with new ways to offer things and apps together using new business models. Unfortunately, IT budget and staff are not necessarily going to increase at the same pace. In fact, despite analysis that suggests a highly mobile customer base requires a lower ratio of IT personnel to users due to higher complexity, it is unlikely IT will suddenly grow enough to meet a ratio nearly 40 to 1 lower than the one considered optimal for supporting static technology users. That means IT has to look to other means to up the output of operations teams tasked with deploying and maintaining the applications and infrastructure critical to business success. IT needs an operational amplifier.

Operational Amplifiers

Operational amplifiers are a lot like force multipliers in that they enable a small number of people or a small amount of infrastructure to achieve more, as if they were multiplied (or cloned, if you prefer). The term comes from electrical engineering, which describes an operational amplifier as:

"An operational amplifier (op-amp) is a DC-coupled high-gain electronic voltage amplifier with a differential input and, usually, a single-ended output. In this configuration, an op-amp produces an output potential (relative to circuit ground) that is typically hundreds of thousands of times larger than the potential difference between its input terminals." -- Wikipedia

And that is what makes it possible for a 13-member staff to support 80 million users; for a small business inside to perform like a web monster outside. That operational amplifier is DevOps, and it's going to be critical moving forward to shift staff from break-and-fix work to the innovation necessary to meet the demands of the Internet of Things. Now, that said, DevOps is not a tool. It's not a thing, it's not something tangible. It's an approach, a verb, a perspective that requires organizations to shift process burdens from people to technology in a way that makes them more efficient, repeatable and consistent. And because there is no specific tool, but rather a mindset and methodology, it behooves producers of the infrastructure and platforms upon which applications and application services are deployed to enable operations to put into action the principles behind those methodologies: automation, orchestration and process re-engineering.
That means APIs - strong APIs - as well as extensibility and flexibility. Infrastructure cannot remain rigid and static in an environment that is rapidly changing. It must be dynamically configurable, extensible, and eminently flexible. The support of well-designed APIs and programmable data paths associated with emerging architectures like SDDC and SDN is a requirement not just for the network but for "The Network" - the whole shebang from layer 2 to layer 7. It is through these APIs and programmatic extensibility that operational excellence is amplified and repeated across the myriad applications supported by most enterprises today. DevOps can be the amplifier necessary to enable the economies of operational scale required to efficiently meet challenges associated with rapid, explosive growth in both user communities and app deployments. Infrastructure supportive of those efforts must provide the means by which that scale can occur. Infrastructure must support DevOps and the shift of process reliance from people to technology through both control and data path programmability, lest it become a resistor instead of an amplifier.
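To ground the API point, here is a hedged sketch of the kind of automation a strong API enables: one script that creates a pool and a virtual server instead of a sequence of manual steps. The iControl REST paths and payload fields follow commonly documented conventions but should be treated as assumptions, as should the host and credentials; check them against your BIG-IP version before use.

```python
# Hedged sketch of what "strong APIs" buys operations: standing up a
# load-balanced service collapses into one repeatable script. Endpoint
# paths and payload fields are assumptions based on iControl REST
# conventions, not verified against a specific BIG-IP release.
import requests

BIGIP = "https://bigip.example.com"        # placeholder management address
AUTH = ("admin", "admin-password")         # placeholder credentials

session = requests.Session()
session.auth = AUTH
session.verify = False                     # lab only; use a CA bundle in production

def deploy_app(name: str, members: list[str], vip: str):
    """Create a pool and a virtual server for one application."""
    session.post(f"{BIGIP}/mgmt/tm/ltm/pool", json={
        "name": f"{name}_pool",
        "monitor": "http",
        "members": [{"name": m} for m in members],
    }).raise_for_status()
    session.post(f"{BIGIP}/mgmt/tm/ltm/virtual", json={
        "name": f"{name}_vs",
        "destination": vip,
        "pool": f"{name}_pool",
    }).raise_for_status()

if __name__ == "__main__":
    deploy_app("billing", ["10.0.0.11:8080", "10.0.0.12:8080"], vip="192.0.2.10:80")
```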