F5 Friday: App Proxy or ADC?
(Editor's note: the LineRate product has been discontinued for several years. 09/2023)

---

Choosing between BIG-IP and LineRate isn't as difficult as it seems. Our recent announcement of the availability of LineRate Point raised the same question over and over: isn't this just a software version of BIG-IP? How do I know when to choose LineRate Point instead of BIG-IP VE (Virtual Edition)? Aren't they the same?

No, they aren't. LineRate Point (and really LineRate Precision, too) is more akin to an app proxy, while BIG-IP VE remains, of course, an ADC (Application Delivery Controller). That's not pedantry; it's core to what each of the two solutions supports - their capabilities, their extensibility, and the applications they're designed to deliver services for.

Platforms and Proxies

First, let's remember that an ADC is a platform; that is, it's a software system supporting extensibility through modules. That's why we have BIG-IP. BIG-IP is really a platform, and its capabilities to deliver Software Defined Application Services (SDAS) are enabled through the modules it supports. Whether it's BIG-IP on F5 hardware or BIG-IP VE in the cloud or in virtual machines, it's still an extensible ADC platform.

LineRate Point is a layer 7 load balancer; it's an app proxy. Its primary goal is to serve HTTP/S applications with scalability and security (like SSL and TLS). It's not extensible like BIG-IP VE. There are no "modules" you can deploy to expand its capabilities. It is what it is: a lightweight, load balancing layer 7 app proxy. Extensibility in the LineRate world is achieved with LineRate Precision, which includes node.js data path programmability (scripting) as a means to create new services, add new functionality, and implement a variety of infrastructure patterns like A/B testing, canary deployments, blue/green deployments, and more.
That's where the confusion with BIG-IP VE usually comes in, because in addition to its platform extensibility, BIG-IP VE also enables data path programmability through iRules. So how do you choose between the options? There's BIG-IP on F5 hardware; BIG-IP VE for virtual environments and cloud (AWS, Azure, Rackspace, IBM, etc.); and LineRate Point and Precision for cloud (Amazon EC2), virtual environments and bare metal. The best way to choose is to base it on (wait for it, wait for it) the application for which you need services delivered. C'mon, you saw that coming - it's an application world, after all, and F5 is always all about that application.

Applications, Scale and Service Delivery

It really is all about that application. The scale, the nature of the business function the application provides, and the services required to deliver that application are critical components in the choice between what is basically an ADC or an app proxy. And you know me, a picture is worth at least 1024 words, so here it is:

The first assumption we're making (and I think it's a good one) is that if someone deployed an application and is using it within the context of the business (or line of business or department or, well, you get the picture), then it's important enough to need some service. Maybe that's just scale or availability; maybe it needs security or a performance-boosting push; but it probably needs something. What it needs may depend on the number of users, the criticality of the application to productivity and profit, and the sensitivity of the data it interacts with. Given that set of criteria, you can start to see that business critical and major line of business applications - ERP, SharePoint, Exchange, etc. - have few instances but thousands of users. These apps require high availability, massive scale, security and often performance boosts. They need multiple application services.
That means BIG-IP*, and probably BIG-IP on F5 hardware or, perhaps, the deployment of a High Performance Services Fabric comprised of many BIG-IP VE instances using F5 Synthesis. Either way, you're talking high capacity BIG-IP.

As we move down the triangular stack, we start running into line of business apps that number in the hundreds and may have hundreds of users. These apps come in two flavors:

1. Those that require multiple application services, and
2. Those that require data path programmability

Now, the app may need one or the other or both. The first question to ask (and this isn't obvious) is: what protocols do the applications support? Yes, that actually is very relevant. LineRate is basically providing app proxy services; that means app protocols like HTTP and HTTPS. Not UDP, not SIP, not RDP or PCoIP. If you need that kind of protocol support, the answer at this layer is BIG-IP VE.

If the answer was HTTP or HTTPS, now you're faced with a second (easy) question: do you need multiple services? Do you need availability (load balancing and failover) plus performance-boosting services like caching and acceleration options? Do you need availability plus identity management (like SSO or SAML)? If you need "availability plus," then the answer, again, is BIG-IP VE.

If you just need availability, now you get into a more difficult decision tree. If you want (or need) data path programmability (such as might be used to patch zero-day security vulnerabilities or do some layer 7 app routing), then the question is: what language do you want to script in? Do you want node.js? Choose LineRate Precision. Want iRules? Choose BIG-IP VE. There's really no "right" or "wrong" answer here; it's a matter of preference (and probably skill set availability or standard practices in your organization).

Finally, you reach the broad bottom of the triangle, where the number of apps may be in the thousands but the users per app is minimal.
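The decision tree above can be sketched as a function. The product names are the ones discussed here, but the input shape and field names are invented purely for illustration:

```javascript
// One reading of the decision flow: protocol support first, then
// "availability plus" services, then scripting preference.
function recommend(app) {
  const httpOnly = app.protocols.every(p => p === 'HTTP' || p === 'HTTPS');
  if (!httpOnly) return 'BIG-IP VE';                  // UDP, SIP, RDP, PCoIP, etc.
  if (app.needsMultipleServices) return 'BIG-IP VE';  // caching, acceleration, SSO, SAML...
  if (app.needsDataPathScripting) {
    return app.preferredLanguage === 'node.js' ? 'LineRate Precision' : 'BIG-IP VE';
  }
  return 'LineRate Point';                            // plain HTTP/S availability
}
```

Note the ordering matters: protocol support is the hard constraint, so it is checked first; scripting language is pure preference, so it comes last.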
This is where apps need basic availability but little more. This layer is where orchestration support (robust APIs) becomes as important as the service itself, because continuous delivery (CD) is in play, as well as other DevOps-related practices like continuous integration and testing. This environment is often very fluid, highly volatile and always in motion, requiring similar characteristics of any availability services deployed there. In this layer of the enterprise application stack, LineRate Point is your best choice. Coupled with our newly introduced Volume Licensing Subscription (VLS), LineRate Point here offers both support for the environment (with its robust, proper REST API) and a software or virtual form factor with excellent economy of scale.

Hopefully this handy-dandy guide to F5 and enterprise application segmentation helps to sort out the question of whether you should choose BIG-IP, BIG-IP VE or a flavor of LineRate. Happy Friday!

* Oh, I know, you could provide those services with a conga line of point products, but platforms are a significant means of enabling standardization and consolidation, which greatly enhance overall value and lower both operating and capital costs.

F5 Synthesis: Keeping the licensing creep out of expanding software options
(Editor's note: the LineRate product has been discontinued for several years. 09/2023)

---

One of the funny things about infrastructure moving toward a mix of hardware and software (virtual or traditional) is that the issues that plague software come with it. Oh, maybe not right away, but eventually they crawl out of the deep recesses of the data center like a Creeper in Minecraft and explode on the unsuspecting adventurer, er, professional. While licensing network infrastructure has never been painless, it's never been as complicated or difficult as its software counterparts, simply due to the sheer magnitude of difference between the number of network boxes under management and the number of software applications and infrastructure under management.

That is changing. Rapidly. Whether it's because of expanding cloud footprints or a need to support microservices and highly virtualized environments, the reality is that the volume of software-based infrastructure is increasing. Like its application counterparts, that means licensing challenges are increasing too. That means we (that's the corporate F5 "we") have to change, too. As we continue to expand the software offerings available for F5 Synthesis beyond cloud and virtualization, we need to also adjust licensing options. That means staying true to the Synthesis tenet of Simplified Business Models. That's why we're making not one but two announcements at the same time.

The first is the expansion of existing software options for F5 Synthesis. In addition to cloud-native and virtual editions of BIG-IP, we're making available a lightweight load balancing service: LineRate Point. LineRate Point complements existing Synthesis services by supporting more directly the needs of application and operations teams for agile, programmable, application-affine services in the data center or in the cloud, on- and off-premise.
This is a missing component as the data center architecture bifurcates into a shared, core network and an app-specific (business) network. Whether it's a focus on moving toward Network Service Virtualization or a need to deploy on a per-app / per-service basis thanks to microservices or increasing mobile application development, LineRate Point offers the scale and security necessary without compromising on the agility or programmability required to fit into the more volatile environment of the growing application network.

But a sudden explosion of LineRate Point (or any service, really) anywhere across the potential deployment spectrum would create the same kind of tracking and management headaches experienced with software infrastructure and applications. Licensing becomes a nightmare, particularly when instances might be provisioned and terminated on a more frequent basis than is typical for most network-deployed services. So along with the introduction of LineRate Point, we're also bringing to F5 Synthesis Volume License Subscriptions (VLS). VLS holds true to the tenet of simplified business models by offering F5 Synthesis software options (VE, cloud and LineRate Point) with a licensing model that fits the more expansive use of these services to support microservices, cloud and virtualization.

VLS brings to F5 Synthesis the ability to support the migration of service infrastructure closer to the applications it supports without sacrificing management and licensing control. VLS also simplifies a virtual-based Synthesis High Performance Services Fabric by centralizing licensing of large numbers of virtual BIG-IP instances (VE) and simplifying the process. According to a 2014 InformationWeek survey on software licensing, nearly 40% of organizations have a dedicated resource who spends more than 50% of their time managing licenses and subscriptions.
Moving to a more software-focused approach for infrastructure services will eventually do the same if it's not carefully managed from the start. By taking advantage of F5 Synthesis Simplified Business Models and its VLS offering, organizations can avoid the inevitable by bringing a simplified licensing strategy along with their software-based service infrastructure. You can learn more about F5 Synthesis Simplified Business Models by following Alex Rublowsky, Senior Director of Licensing Business Models, here on DevCentral as he shares more insight into the growing licensing options available for F5's expanding software portfolio.

If apps incur technical debt then networks incur architectural debt
#devops #sdn #SDDC #cloud

72%. That's an estimate of how much of the IT budget is allocated to simply keeping the lights on (a euphemism for everything from actually keeping the lights on to cooling, heating, power, maintenance, upgrades, and day-to-day operations) in the data center. In a recent Forrester Research survey of IT leaders at more than 3,700 companies, respondents estimated that they spend an average 72% of the money in their budgets on such keep-the-lights-on functions as replacing or expanding capacity and supporting ongoing operations and maintenance, while only 28% of the money goes toward new projects ("How to Balance Maintenance and IT Innovation"). This number will not, unfortunately, significantly improve without intentionally attacking it at its root cause: architectural debt.

Data Center Debt

The concept of "debt" is not a foreign one; we've all incurred debt in the form of credit cards, car loans and mortgages. In the data center, this concept is applied in much the same way as our more personal debt: as the need to "service" the debt over time. Experts on the topic of technical debt point out that this "debt" is chiefly a metaphor for the long-term repercussions arising from choices made in application architecture and design early on.

"Technical debt is a neologistic metaphor referring to the eventual consequences of poor software architecture and software development within a codebase. The debt can be thought of as work that needs to be done before a particular job can be considered complete. If the debt is not repaid, then it will keep on accumulating interest, making it hard to implement changes later on. Unaddressed technical debt increases software entropy."
-- Wikipedia

This conceptual debt also occurs in other areas of IT, particularly in the infrastructure and networking groups, where architectural decisions have long-lasting repercussions, both in the cost of day-to-day operations and in the impact on future choices and operational concerns. The choice of a specific point product today to solve a particular pain point, for example, has an impact on future product choices. The more we move toward software-defined architectures - heavily reliant on integration to achieve efficiencies through automation and orchestration - the more interdependencies we build. Those interdependencies cause considerable complexity in the face of changes that must be made to support such a loosely coupled but highly integrated data center architecture. We aren't just maintaining configuration files and cables anymore; we're maintaining the equivalent of code: the scripts and methods used to integrate, automate and orchestrate the network infrastructure.

Steve McConnell has a lengthy blog entry examining technical debt. The perils of not acknowledging your debt are clear: "One of the important implications of technical debt is that it must be serviced, i.e., once you incur a debt there will be interest charges. If the debt grows large enough, eventually the company will spend more on servicing its debt than it invests in increasing the value of its other assets."

Debt must be serviced, which is why the average organization dedicates so much of its budget to simply "keeping the lights on." It's servicing the architectural debt incurred by a generation of architectural decisions.

Refinancing Your Architectural Debt

In order to shift more of the budget toward the innovation necessary to realize the more agile and dynamic architectures required to support more things and the applications that go with them, organizations need to start considering how to shed their architectural debt.
First and foremost, software-defined architectures like cloud, SDDC and SDN enable organizations to pay down their debt by automating a variety of day-to-day operations as well as traditionally manual and lengthy provisioning processes. But it would behoove organizations to pay careful attention to the choices made in this process, lest architectural debt shift to the technical debt associated with programmatic assets. Scripts are, after all, a simple form of application, and thus bring with them all the benefits and burdens of an application.

For example, the choice between feature-driven and application-driven orchestration can be critical to the long-term costs associated with that choice. Feature-driven orchestration necessarily requires more steps and results in more tightly coupled systems than an application-driven approach. Loose coupling ensures easier future transitions and reduces the impact of interdependencies on the complexity of the overall architecture. This is because feature-driven orchestration (integration, really) is highly dependent on specific sets of API calls to achieve provisioning. Even minor changes in those APIs can be problematic in the future and cause compatibility issues. Application-driven orchestration, on the other hand, presents a simpler, more flexible interface between provisioning systems and the solution. Implementation through features can change from version to version without impacting that interface, because the interface is decoupled from the actual API calls required.

Your choice of scripting languages, too, can have much more of an impact than you might think. Consider that a significant contributor to operational inefficiencies today stems from the reality that organizations have an L4-7 infrastructure comprised of not just multiple vendors, but a wide variety of domain specificity. That means a very disparate set of object models and interfaces through which such services are provisioned and configured.
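The coupling difference between feature-driven and application-driven orchestration can be sketched in a few lines. Every endpoint and payload below is hypothetical - this is not any real product's API, just an illustration of where the coupling lives:

```javascript
// Feature-driven: the caller scripts every device-level step and is
// therefore coupled to each endpoint; any API change ripples out to
// every script written this way.
function provisionFeatureDriven(api) {
  api.post('/pool', { name: 'app_pool' });
  api.post('/pool/app_pool/member', { host: '10.0.0.5' });
  api.post('/virtual', { name: 'app_vs', pool: 'app_pool' });
  api.post('/virtual/app_vs/profile', { type: 'http' });
}

// Application-driven: the caller states intent once. The mapping from
// intent to device calls lives behind the interface, so an API change
// in a new version touches the adapter, not every caller.
function provisionApplicationDriven(api, intent) {
  api.post('/application', intent);
}
```

The point is not the line count but where change is absorbed: in the feature-driven version, the caller owns four coupling points; in the application-driven version, it owns one.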
When automating such processes, it is important to standardize on a minimum set of environments. Using bash, Python, Perl and Juju all at once, for example, simply adds complexity and begins to fall under the law of software entropy as described by Ivar Jacobson et al. in "Object-Oriented Software Engineering: A Use Case Driven Approach": "The second law of thermodynamics, in principle, states that a closed system's disorder cannot be reduced, it can only remain unchanged or increased. A measure of this disorder is entropy. This law also seems plausible for software systems; as a system is modified, its disorder, or entropy, always increases. This is known as software entropy."

Entropy is the antithesis of what we're trying to achieve with automation and orchestration, namely the acceleration of application deployment. Entropy impedes this goal and causes the introduction of yet another set of systems requiring day-to-day operational attention.

Other considerations include deciding which virtual overlay network will be your data center standard, as well as the choice of cloud management platform for data center orchestration. While such decisions seem, on the surface, to be innocuous, they are in fact significant contributors to the architectural debt associated with the data center architecture.

Shifting to Innovation

Every decision brings with it debt; that cannot be avoided. The trick is to reduce the interest payments, if you will, on that debt as a means to reduce its impact on the overall IT budget and enable a shift to funding innovation. Software-defined architectures are, in a way, an opportunity for organizations to refinance their architectural debt. They cannot forgive the debt (unless you rip and replace), but these architectures, and methodologies like DevOps, can assist in reducing the operational expenses the organization is obliged to pay on a day-to-day basis.
But it's necessary to recognize, up front, that the architectural choices you make today do, in fact, have a significant impact on the business' ability to take advantage of the emerging app economy. Consider carefully the options and weigh the costs - including the need to service the debt incurred by those options - before committing to a given solution. Your data center credit score will thank you for it.

F5 Synthesis: How do you operationalize a hybrid world?
One of the unintended consequences of cloud is the operational inconsistency it introduces. That inconsistency is introduced because cloud commoditizes the infrastructure we're used to having control over and visibility into. Everything from the core network to the application services upon which business and operations rely to ensure performance, availability and security is oftentimes obscured behind simplified services whose policies and configurations cannot be reconciled with those we maintain on-premise.

You do not provision resources or deploy apps the same way in the cloud as you do in the data center. In fact, it's unlikely you'll provision resources or deploy apps the same way in cloud A as you do in cloud B. And even if you implement a private cloud, the way you provision resources and deploy apps will almost certainly be different from how you do it in the public cloud. Which leaves the question: just how do you really operationalize that?

In many cases, this isn't going to change. Organizations will not be able to reconcile core network services necessarily obscured behind the abstraction that is cloud with networking policies in place on-premise. But for those services exhibiting more affinity with - and therefore more influence over and impact on - applications, organizations do have a choice. The same on-premise services can generally be deployed in the cloud and therefore be provisioned, managed and governed by the same policies as their on-premise counterparts. By ensuring a consistent service abstraction layer through the use of a standardized platform, organizations are given the opportunity to operationalize those services in a way that maintains consistent application security, performance and availability policies.
The key is ensuring that the services can be deployed in the cloud and that there exists a management and orchestration solution capable of providing control over those services whether they're in the cloud or in the data center, on-premise. This is exactly what our intelligent orchestration system, BIG-IQ, provides within F5 Synthesis.

Operationalizing a Hybrid World

Operationalizing in a hybrid world requires the same level of programmability as is required to operationalize the network. Additionally, however, it requires the ability to extend that reach into the cloud by connecting to those environments through cloud APIs designed to allow the remote provisioning and management of resources. Because those resources are essentially virtual machines through which F5 Software Defined Application Services (SDAS) can be delivered, BIG-IQ can reach out to the cloud and provision and manage SDAS in a hybrid world. This means operationalization is near.

BIG-IQ provides a centralized command and control center through which provisioning and management of application services can be automated and orchestrated. Common tasks such as deployment, auto-scaling and configuration changes are managed through the same system whether deployed in the data center or in the cloud. This means consistency of policy, of management, and of reporting. It means an efficient, operationalized deployment of applications in a hybrid environment, enabling the same ease of deployment whether in the data center or in the cloud. The bulk of the services required in the cloud are going to be those which display the most affinity with applications; in other words, application services like identity and access control, performance, availability, security and mobility. All SDAS, all available through the same common platform, BIG-IP, in the cloud and in the data center.

But what about all those other services you've got deployed that aren't delivered via F5 Synthesis or the BIG-IP platform?
To operationalize those requires a solution similar to BIG-IQ. Assuming you have such a system, and it, like BIG-IQ, is enabled with an API designed to facilitate integration and automation, then you can operationalize deployment by orchestrating a process inclusive of both solutions, each provisioning and managing its respective services. The key in this case is to ensure that you can deploy, provision and manage services in the same way in all environments that comprise your hybrid world.

Lacking a solution similar to BIG-IQ, if the service platform is API-enabled, you can script an automated solution yourself, though this option requires far more intimate knowledge of both the service platform's API and the cloud provider's API, since deployment, provisioning and management will have to work through the cloud provider's API while also using the service platform's API. This is no trivial task, which is why it's generally suggested that a management platform be used whenever possible, to reduce the complexity and risk associated with developing scripts to recreate that functionality.

You'll note the common theme here is architectural parity. While it is not unpossible (yes, that is too a word) to develop a custom system which interprets a declarative policy representative of operational and corporate policies regarding security, performance and availability, and then uses the appropriate programmatic methods of the services and clouds to implement it, like the custom integration path this is quite an undertaking. This is a model similar to that of OpenStack. The issue with OpenStack is its currently limited support for services (only the most basic application service (yes, that is singular) is available today) and cloud providers.
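A sketch of what that script-it-yourself path entails: one deployment routine that must drive both APIs in the right order, first the cloud provider's (to get a machine and network access), then the service platform's (to license and configure the service on it). Every client object and method here is hypothetical, invented to show the shape of the problem:

```javascript
// Script-it-yourself orchestration: the author of this one function must
// know two different APIs, and a change to either one breaks it.
function deployService(cloud, services, appName) {
  // Cloud provider API: provision the instance and open its ports.
  const vm = cloud.createInstance({ image: 'adc-image', name: appName });
  cloud.openFirewall(vm.id, [443]);
  // Service platform API: license and configure the service on that instance.
  services.license(vm.address);
  services.applyPolicy(vm.address, { app: appName, tls: true });
  return vm;
}
```

A management platform earns its keep precisely by owning this sequencing (and its failure handling, retries, and teardown) so that each team's scripts don't have to reimplement it.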
In a nutshell, to operationalize a hybrid world you're going to have to either code and integrate yourself, or take a good hard look at your service delivery strategy to determine whether or not the various pieces of its comprising infrastructure fit into your vision of a hybridized world. It may be a good time to re-evaluate what platforms and products you're using to deliver what services, with an eye toward how well they are able to support operationalization of a hybrid world.

The IoT Ready Platform
Over the last couple months, in between some video coverage for events, I've been writing a series of IoT stories. From the basic What are These "Things"? and IoT Influence on Society to the descriptive IoT Effect on Applications and The IoT Ready Infrastructure. I thought it only fair to share how F5 can play within an IoT infrastructure.

Because F5 application services share a common control plane - the F5 platform - we've simplified the process of deploying and optimizing IoT application delivery services. With the elastic power of Software Defined Application Services (SDAS), you can rapidly provision IoT application services across the data center and into cloud computing environments, reducing the time and costs associated with deploying new applications and architectures. The beauty of SDAS is that it can provide the global services to direct IoT devices to the most appropriate data center or hybrid cloud depending on the request, context, and application health. Customers, employees, and the IoT devices themselves receive the most secure and fastest experience possible.

F5's high-performance services fabric supports traditional and emerging underlay networks. It can be deployed atop traditional IP- and VLAN-based networks, works with SDN overlay networks using NVGRE or VXLAN (as well as a variety of less well-known overlay protocols) and integrates with SDN network fabrics such as those from Cisco/Insieme, Arista and Big Switch among others.

Hardware, Software or Cloud

The services fabric model enables consolidation of services onto a common platform that can be deployed on hardware, software or in the cloud. This reduces operational overhead by standardizing management as well as deployment processes to support continuous delivery efforts.
By sharing service resources and leveraging fine-grained multi-tenancy, the cost of individual services is dramatically reduced, enabling all IoT applications - regardless of size - to take advantage of services that are beneficial to their security, reliability and performance. The F5 platform:

- Provides the network security to protect against inbound attacks
- Offloads SSL to improve the performance of the application servers
- Not only understands the application but also knows when it is having problems
- Ensures not only the best end user experience but also quick and efficient data replication

F5 Cloud solutions can automate and orchestrate the deployment of IoT application delivery services across both traditional and cloud infrastructures while also managing the dynamic redirection of workloads to the most suitable location. These application delivery services ensure predictable IoT experiences, replicated security policy, and workload agility. F5 BIG-IQ™ Cloud can federate management of F5 BIG-IP® solutions across both traditional and cloud infrastructures, helping organizations deploy and manage IoT delivery services in a fast, consistent, and repeatable manner, regardless of the underlying infrastructure. In addition, BIG-IQ Cloud integrates or interfaces with existing cloud orchestration engines such as VMware vCloud Director to streamline the overall process of deploying applications.

Extend, Scale - and Secure

F5 Cloud solutions offer a rapid Application Delivery Network provisioning solution, drastically reducing the lead times for expanding IoT delivery capabilities across data centers, be they private or public. As a result, organizations can efficiently:

- Extend data centers to the cloud to support IoT deployments
- Scale IoT applications beyond the data center when required
- Secure and accelerate IoT connections to the cloud

For maintenance situations, organizations no longer need to manually redirect traffic by reconfiguring applications.
Instead, IoT applications are proactively redirected to an alternate data center prior to maintenance. For continuous DDoS protection, F5 Silverline DDoS Protection is a service delivered via the F5 Silverline cloud-based platform that provides detection and mitigation to stop even the largest of volumetric DDoS attacks from reaching your IoT network.

The BIG-IP platform is application and location agnostic, meaning the type of application or where the application lives really does not matter. As long as you tell the BIG-IP platform where to find the IoT application, the BIG-IP platform will deliver it. Bringing it all together, F5 Synthesis gives cloud and application providers, as well as mobile network operators, the architectural framework necessary to ensure the performance, reliability and security of IoT applications.

Connected devices are here to stay, forcing us to move forward into this brave new world where almost everything generates data traffic. While there's much to consider, proactively addressing these challenges and adopting new approaches for enabling an IoT-ready network will help organizations chart a clearer course toward success. An IoT-ready environment enables IT to begin taking advantage of this societal shift without a wholesale rip-and-replace of existing technology. It also provides the breathing room IT needs to ensure that the coming rush of connected devices does not cripple the infrastructure. This process ensures benefits will be realized without compromising on the operational governance required to ensure availability and security of IoT network, data, and application resources. It also means IT can manage IoT services rather than boxes.

However an IoT-ready infrastructure is constructed, it is a transformational journey for both IT and the business. It is not something that should be taken lightly or undertaken without a long-term strategy in place.
When done properly, an F5-powered IoT-ready infrastructure can bring significant benefits to an organization and its people.

ps

Related:
- The Digital Dress Code
- Is IoT Hype For Real?
- What are These "Things"?
- IoT Influence on Society
- IoT Effect on Applications
- CloudExpo 2014: The DNS of Things
- Intelligent DNS Animated Whiteboard
- The Internet of Me, Myself & I

Technorati Tags: f5, iot, things, sensors, silverline, big-ip, scale, sdas, synthesis, infrastructure

F5 + Nutanix: Invisible Infrastructure and SDAS Joining Forces
F5 and Nutanix partner to bring the power of invisible infrastructure and Software Defined Application Services (SDAS) to critical enterprise applications. Joint customers benefit from improved availability, scalability, performance, and security enabled through orchestration, management, and automation.

ps

Related:
- VMworld2015 – The Preview Video
- VMworld2015 – Find F5
- VMworld2015 – Realize the Virtual Possibilities (feat. de la Motte)
- VMworld2015 – Business Mobility Made Easy with F5 and VMware (feat. Venezia)
- Software Defined Data Center Made Simple (feat. Pindell) - VMworld2015
- That's a Wrap from VMworld2015
- F5 + SimpliVity: Deploy and Simplify Application Deployments Together

Technorati Tags: f5, nutanix, converged, integrated, sdas, performance, security, cloud

F5 Synthesis: What about SDN?
#SDN #SDAS How does Synthesis impact and interact with SDN architectures? With SDN top of mind (or at least top of news feeds) of late, it's natural to wonder how F5's architecture, Synthesis, relates to SDN. You may recall that SDN - or something like it - was inevitable due to increasing pressure on IT to improve service velocity in the network to match that of development (agile) and operations (DevOps). The "network" really had no equivalent until SDN came along. But SDN did not - and still does not - address service velocity at layers 4-7: the application layers where application services like load balancing, acceleration, and access and identity (you know, all the services F5 platforms are known for providing) live. This is not because SDN architectures don't want to provide those services; it's because technically they can't. They're impeded from doing so because the network is naturally bifurcated. There are actually two networks in the data center: the layer 2-3 switching and routing fabric and the layer 4-7 services fabric. One of the solutions for incorporating application (layer 4-7) services into SDN architectures is service chaining. This works well as a solution for extending the data path to application services, but it does very little to address the operational aspects which, of course, are where service velocity is either improved or not. It's the operational side - the deployment, the provisioning, the monitoring and management - that directly impacts service velocity. Service chaining is focused on how the network makes sure application data traversing the data path flows to and from application services appropriately. All that operational stuff is not necessarily addressed by service chaining. There are good reasons for that, but we'll save enumerating them for another day in the interest of getting to an answer quickly. Suffice to say that service chaining is about execution, not operation.
So something had to fill the gap and make sure that while SDN was improving service velocity for network (layer 2-3) services, the velocity of application services (layer 4-7) was also being improved. That's where F5 Synthesis comes in. We're not replacing SDN; we're not an alternative architecture. Synthesis is completely complementary to SDN and in fact interoperates with a variety of architectures falling under the "SDN" moniker as well as with traditional network fabrics. Ultimately, F5's vision is to provide application-protocol-aware data path elements (BIG-IP, LineRate, etc.) that can execute programmatic rules pushed by a centralized control plane (BIG-IQ): a centralized control-decentralized execution model implementing a programmatic application control plane and an application-aware data plane architecture. Bringing the two together offers a comprehensive, dynamic software-defined architecture for the data center that addresses service velocity challenges across the entire network stack (layers 2-7). SDN automates and orchestrates the network and passes the right traffic to the Synthesis High-Performance Services Fabric, which then does what it does best: apply the application services critical to ensuring apps are fast, secure, and reliable. In addition to service chaining scenarios, there are orchestration integrations (such as that with VMware's NSX) as well as network integrations such as a cooperative effort between F5, Arista, and VMware. You might have noticed that we specifically integrate with leading SDN architectures and partners like Cisco/Insieme, VMware, HP, Arista, Dell, and Big Switch. We're participating in all the relevant standards organizations to help find additional ways to integrate both network and application services in SDN architectures.
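The centralized control-decentralized execution model described above can be sketched in a few lines of Python. This is purely an illustration of the pattern, not F5 code: the class names stand in for a controller (think BIG-IQ) and application-aware data path elements (think BIG-IP or LineRate), and the example rule is an invented layer-7 HTTP-to-HTTPS redirect.

```python
class DataPathElement:
    """Stands in for an application-aware proxy on the data path."""

    def __init__(self, name):
        self.name = name
        self.rules = []  # programmatic rules pushed down by the controller

    def push_rule(self, rule):
        self.rules.append(rule)

    def handle(self, request):
        # Decentralized execution: each element applies its rules locally,
        # per request, without consulting the controller on the data path.
        for rule in self.rules:
            request = rule(request)
        return request


class ControlPlane:
    """Stands in for a centralized controller pushing rules to all elements."""

    def __init__(self, elements):
        self.elements = elements

    def publish(self, rule):
        # Centralized control: one decision, distributed everywhere at once.
        for element in self.elements:
            element.push_rule(rule)


def https_redirect(request):
    """Example layer-7 rule: upgrade plain HTTP requests to HTTPS."""
    if request.get("scheme") == "http":
        request = dict(request, scheme="https")
    return request


proxies = [DataPathElement("dc1"), DataPathElement("dc2")]
ControlPlane(proxies).publish(https_redirect)
result = proxies[0].handle({"scheme": "http", "path": "/app"})
```

The design point is the split of responsibilities: the control plane owns policy distribution, while request-time work stays on the data path, which is what keeps per-request latency independent of the controller.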
We see SDN as validating what we (and that's the corporate we) have always believed: networks need to be dynamic, services need to be extensible and programmable, and all the layers of the network stack need to be as agile as the business they support. We're fans, in other words, and our approach is to support and integrate with SDN architectures to enable customers to deploy a fully software-defined stack of network and application services.

Related:
F5 Synthesis: The Time is Right
F5 and Cisco: Application-Centric from Top to Bottom and End to End
F5 Synthesis: Software Defined Application Services
F5 Synthesis: Integration and Interoperability
F5 Synthesis: High-Performance Services Fabric
F5 Synthesis: Leave no application behind
F5 Synthesis: The Real Value of Consolidation Revealed
F5 Synthesis: Reference Architectures - Good for What Ails Your Apps

Disposable Infrastructure for Disposable Apps
Conference agendas. Event navigation. Specific tasks, like buying a house or getting a car loan. If you've installed an app for any of these things, you've installed what's known as a "disposable mobile app" or DMA: an app designed for a single use case and with the expectation it'll be "thrown away" like a brochure, deleted until needed again. These apps are necessarily small, agile, and highly volatile. Sometimes they exist only for a short time - say, to support an event like an election, the World Cup, or a music festival - or exist for a long time on the "server" side, but not the client, like one for navigating a home mortgage process. These apps are increasingly popular and are considered a kind of "micro app" (which is very similar to microservices but designed for third-party, not internal, development). The reason it's important to note the existence of these kinds of apps is that they may have very different lifespans. From cradle to grave they may exist for only days, weeks, or months. They flare into existence and then just as quickly fade away. They are disposable, for the most part, and thus the infrastructure supporting them is also likely disposable. In an age of virtualized data centers, with software eating IT, this notion is not as crazy as it once may have sounded. It's not like we're tossing out multi-thousand-dollar hardware, after all. It's just some bits that can be easily put up and torn down with the click of an enter key. That might be fine if your only infrastructure concern is a web/app server. But as it happens, these things need scalability and help with performance (they're mobile apps, after all, with most of the processing done on the server - cloud - side), and that implies several pieces of what's generally considered network infrastructure: load balancing and caching and optimizing services. That means they, too, need to be "disposable".
They must be deployable as software (or virtual) instances and come with a robust set of APIs and templates through which they can be quickly provisioned, configured, and later torn down. It also means they must be cloud-ready or cloud-enabled or cloudified (whichever marketing term you prefer) so that the infrastructure and services supporting these disposable apps are as flexible and disposable as the applications they're delivering. It also means a DevOps approach is increasingly important to managing these very volatile environments, in which many more apps are delivered and disposed of in shorter cycles. A single, monolithic "one app fits all our offerings" approach is not necessarily the way disposable mobile apps are conceived of and delivered. They are focused and purposeful, meaning they provide a specific set of functionality that will never be extended. Other functions and purposes are then delivered via other apps, which increases the overall number of applications necessary and puts additional pressure on operations to deploy and deliver those apps. Each of those apps has specific services, configurations, and monitoring requirements that must be tailored to the app. One size does not fit all in the world of applications. That's a marked difference from a world in which infrastructure configuration may remain largely unchanged for the lifetime of the app, excepting performance or security tweaks. That means more work, more complexity, more often for the operations teams who must manage the infrastructure. That's why it's increasingly important for infrastructure not only to be software or virtualized, not only to present a robust provisioning and configuration API, but also to participate in the growing automation ecosystems of popular frameworks and toolsets like Puppet and Chef, VMware and Cisco, OpenStack and SaltStack.
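The provision-configure-tear-down lifecycle described above can be sketched as a short Python example. Everything here is hypothetical for illustration: the template fields, the `InfrastructureAPI` client, and the app name are invented stand-ins for what would in practice be a REST provisioning API driven from a template (an iApp, a Heat stack, or a Chef/Puppet manifest, for instance).

```python
# An invented service template: the stack a disposable app needs, declared once.
SERVICE_TEMPLATE = {
    "load_balancer": {"algorithm": "round-robin"},
    "cache": {"ttl_seconds": 300},
    "tls": {"enabled": True},
}


class InfrastructureAPI:
    """In-memory stand-in for a provisioning API (REST in practice)."""

    def __init__(self):
        self.deployed = {}

    def provision(self, app_name, template):
        # Spin the whole service stack up from the template in one call...
        self.deployed[app_name] = dict(template)
        return app_name

    def teardown(self, app_name):
        # ...and dispose of it just as quickly when the event is over.
        return self.deployed.pop(app_name, None)


api = InfrastructureAPI()
api.provision("worldcup-event-app", SERVICE_TEMPLATE)   # event starts
released = api.teardown("worldcup-event-app")           # event ends
```

The template-driven shape is the point: because the full service definition lives in data rather than in hand-built configuration, tearing it down costs as little as standing it up, which is exactly what disposable apps demand.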
These are the frameworks enabling continuous delivery to leak out of dev and into operations, and they provide the means by which infrastructure is as easily put into production as it is disposed of. Software is eating IT, but that should be taken as a good thing. DevOps as an approach to lifecycle management across the application and infrastructure spectrum is necessary to manage the growth being driven by the software eating the world. CA and Vanson Bourne found quantifiable benefits of adopting a DevOps approach in a global survey, with 21% of respondents reporting more new software and services were possible and 18% seeing a faster time to market. That kind of agility and speed is a requirement if you're going to be supporting disposable apps and the disposable infrastructure needed to deliver them. A key IT pain point is the reality that the network is still in the way. According to EMA research, "slow manual processes to reconfigure infrastructure to accommodate change" was cited by 39% of organizations as a significant pain point in 2014. Applying DevOps to "the network" will help eliminate this significant point of impedance on the path to production. Part of that effort includes disposing of our preconceptions about the nature of the network (it's hardware! It's untouchable! It's not my domain!) and starting to consider which pieces of the network are ripe for being treated as disposable as the apps they deliver.

F5 Synthesis: F5 brings Scale and Security to EVO:RAIL Horizon Edition
The goal of F5 Synthesis is to deliver the app services that deliver the apps business relies on today for productivity and for profit. That means not just delivering SDAS (Software Defined Application Services) themselves, but delivering them in all the ways IT needs to meet and exceed business expectations. Sometimes that's in the cloud marketplace and other times it's as a cloud service. Sometimes it's as an integratable on-premises architecture and other times, like now, it's as part of a hyper-converged system. As part of a full stack in a rack, if you will. EVO:RAIL is a partnership between VMware and Dell that offers a simplified, hyper-converged infrastructure. In a nutshell, it's a single, integrated rack designed to address the headaches often caused by virtual machine sprawl and heterogeneous hypervisor support, as well as to provide the means by which expanding deployments can be accelerated. Converged infrastructure is increasingly popular as a means to accelerate the deployment and growth of virtualized solutions such as virtual desktop delivery. Converged infrastructure solutions like EVO:RAIL abstract compute, network, and storage resources from the CPUs, cables, controllers, and switches that make them all usable as a foundation for private cloud or, as is more often the case, highly virtualized environments. By validating F5 VE (Virtual Edition) to deliver app services in an EVO:RAIL Horizon Edition deployment, the infrastructure gains key capabilities to assure the availability, security, and performance of the applications that will ultimately be deployed and delivered by it. Including F5 brings capabilities critical to seamlessly scaling VMware View by providing Global Namespace and User Name Persistence support. Additionally, F5 iApps accelerate implementation by operationalizing the deployment of SDAS with simple, menu-driven provisioning.
You can learn more about Dell's VMware EVO:RAIL solution here and more on how F5 and VMware are delivering the Software Defined Data Center here.

Software is Eating IT
Software is eating the world. Everywhere you look there's an app for that. And I'm talking everywhere - including places and activities that maybe there shouldn't be an app for. No, I won't detail which those are. The Internet is your playground; I'm sure you can find examples. The point is that software is eating not just the world of consumers, but the world of IT. While most folks take this statement to mean that everything in IT is becoming software and the end of hardware is near, that's not really what it's saying. There has to be hardware somewhere, after all. Compute and network and storage resources don't come from the stork, you know. No, it's not about the elimination of hardware but rather about how reliant on software we're becoming across all of IT. To put it succinctly, the future is software deploying software delivering software. Let me break that down now, because that's a lot of software. The first thing we note is that software is deploying, well, software. That software in turn is responsible for delivering software, a.k.a. apps. And that's really what I mean when I say "software is eating IT". It's the inevitable realization that manual anything doesn't scale, and to achieve that scale we need tools. In the case of IT, that's going to be software. Software like Chef and Puppet, VMware and Cisco, OpenStack and OpenDaylight. Software that deploys. Software that automates and orchestrates the deployment of other things, including but not limited to the apps transforming the world. What is that software deploying? More software - not just the software we know as "apps" but the software responsible for hosting and delivering those apps: infrastructure and platform software, as well as the software that transports their data - every bit of someone's cat photo - from the database to their phablet, tablet, or phone. The software that delivers apps.
That's the lengthy list of application services responsible for the availability, performance, and security of the software they deliver. The services so important to organizations that they'd rather eat worms than go without them. Those services are, in turn, delivering software. Software critical to business, and front and center of just about every trend driving transformation in IT today. Go ahead, think of one - any one - trend that does not have applications at its core. I'll wait. See? It really is an application (software) world. And the impact of that trickles down and across all of the business and IT. It means greater scale is required, operationally and humanly. It means faster time to market competing with the need for stability. It means balancing increasing security risks against performance and speed of provisioning. So into development we see agile and continuous delivery and cloud. Down into operations and networking we find SDN and DevOps. It's everywhere. That's why the future is software (Chef, Puppet, Python, OpenStack, VMware, Cisco) deploying software (SDAS, VMs, as-a-service services) delivering software (apps, mobile apps, web apps, IoT apps) with a lot more automation, orchestration, and scale than ever before. Operationalization. It's leaving IT a lean, mean app deployment and delivery machine. Which is exactly what business needs to get to market, remain competitive, and engage with the consumers that ultimately pay the bills. Productivity and profit are critical to business success, and apps are key in the formula for improving both. Every initiative has to support those apps, which ultimately means everything is supportive of applications - of their development, deployment, and delivery. And that's why software is eating IT.