Delivering Security and Scalability Across the Digital Workspace with Workspace ONE and F5 APM
Hey Everyone! Just wanted to provide an exciting update on a new document in the series of Integration/Deployment guides for F5 with VMware products. This integration has been a long time coming and really shows F5's and VMware's joint vision of a digital workspace. I am happy to announce that the next document, APM Proxy with Workspace ONE, is now available to the public!

What is Workspace ONE?

VMware Workspace ONE, powered by VMware AirWatch technology, is an intelligence-driven digital workspace platform that simply and securely delivers and manages any app on any device by integrating access control, application management, and multi-platform endpoint management. With Workspace ONE, organizations can remove silos of cloud, desktop, and mobile investments, and unify management of all devices and apps from one platform.

Where does F5 Help?

When combined with Workspace ONE, the portfolio of BIG-IP's leading ADC technologies optimizes the user experience by delivering speed, scale, and resiliency. Customers can reap several benefits from the integration, including:

Access to Apps without Disruption - This integration helps clients non-disruptively accelerate, simplify, and secure the delivery of business applications. End users are presented with a modern workspace that increases productivity with single sign-on access. IT organizations can utilize their Workspace ONE platform to extend the same user experience to legacy or custom applications. Using identity integrations, VMware provides the platform and user experience, while F5 provides the scale and application interoperability.

Reducing Risk Across the Entire Organization - IT now has access policies that reduce the risk of data loss across the entire organization. Policies include app access (including legacy apps), conditional access, and device compliance. Workspace ONE and F5 can leverage modern authentication protocols like OAuth to offload and simplify identity and access management.
Providing Great User Experience Across All Devices - New features in the Workspace ONE and F5 integration, like OAuth and JSON Web Tokens (JWT), help deliver a transparent user experience while ensuring secure access across all devices, including mobile, desktop, and web interfaces.

Consolidation of Gateways - Gateway sprawl can lead to complexity in an environment. With this integration, IT can simplify management of gateways by consolidating them into a single platform using the Workspace ONE and F5 integration.

What does this Integration Guide Detail?

This documentation focuses on deploying F5 BIG-IP APM with VMware Workspace ONE (Cloud or vIDM on-premises) to deliver VMware Horizon desktops and applications in a production environment. This guide provides the necessary steps to configure your Workspace ONE Cloud or vIDM on-premises deployment and BIG-IP to work with the JWT token integration that was developed and tested by VMware and F5. Once configured, access to desktops and applications becomes seamless and secure through single sign-on with VMware Workspace ONE and BIG-IP APM.

Here is an example from the integration guide that shows the Workspace ONE network ranges "All Ranges" page with the newly added "Wrap Artifact in JWT" and "Audience in JWT" settings. This allows the F5 BIG-IP APM to consume the JWT token to validate a user at the perimeter (DMZ); once validated, it passes the SAML artifact along to the Horizon Connection Server(s) for authentication.

In the All Ranges network settings:
1. Enable the checkbox for "Wrap Artifact in JWT" on the Horizon environment that was configured in previous steps.
2. Click the + under "Audience in JWT" next to the checkbox and provide a unique name (our example is f5cpa).
3. Click the Save button.

You can now download the updated step-by-step guide for APM Proxy with Workspace ONE.
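As a rough illustration of what the audience check involves, the Python sketch below decodes a JWT payload and compares its "aud" claim to the configured audience value ("f5cpa" in the example above). This is illustrative code, not APM configuration, and it deliberately skips signature verification, which BIG-IP APM performs as part of validating the token before releasing the SAML artifact.

```python
import base64
import json

def jwt_audience(token: str) -> str:
    """Extract the 'aud' claim from a JWT's payload segment.

    No signature check is done here; in the real deployment BIG-IP APM
    validates the token cryptographically before trusting any claim.
    """
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims.get("aud", "")

def accept_at_perimeter(token: str, expected_aud: str = "f5cpa") -> bool:
    # Only pass the wrapped SAML artifact along to the Connection Servers
    # if the audience matches the value configured under "Audience in JWT".
    return jwt_audience(token) == expected_aud
```

The only moving part is the "aud" comparison: a token wrapped for a different audience is rejected at the perimeter before anything reaches Horizon.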
Special thanks to the VMware Workspace ONE development team for all of their assistance putting this together!

F5 ... Wednesday: Bye Bye Branch Office Blues
#virtualization #VDI Unifying desktop management across multiple branch offices is good for performance – and operational sanity. When you walk into your local bank, or local retail outlet, or one of the Starbucks in Chicago O'Hare, it's easy to forget that these are more than your "local" outlets for that triple grande dry cappuccino or the latest in leg-warmer fashion (either I just dated myself or I'm incredibly aware of current fashion trends, you decide which). For IT, these branch offices are one of many end nodes on a corporate network diagram located at HQ (or the mother-ship, as some of us, known as 'remote workers', like to call it) that require care and feeding – remotely. The number of branch offices continues to expand and, regardless of how they're counted, numbers in the millions. In a 2010 report, the Internet Research Group (IRG) noted:

Over the past ten years the number of branch office locations in the US has increased by over 21% from a base of about 1.4M branch locations to about 1.7M at the end of 2009.

Meanwhile, back in 2004, IDC research showed four million branch offices, as cited by Jim Metzler:

The fact that there are now roughly four million branch offices supported by US businesses gives evidence to the fact that branch offices are not going away. However, while many business leaders, including those in the banking industry, were wrong in their belief that branch offices were unnecessary, they were clearly right in their belief that branch offices are expensive. One of the reasons that branch offices are expensive is the sheer number of branch offices that need to be supported. For example, while a typical company may have only one or two central sites, they may well have tens, hundreds or even thousands of branch offices. -- The New Branch Office Network - Ashton, Metzler & Associates

Discrepancies appear to derive from the definition of "branch office" – is it geographic or regional location that counts?
Do all five Starbucks at O'Hare count as five separate branch offices or one? Regardless of how they're counted, the numbers are big and growth rates say it's just going to get bigger. From an IT perspective, which has trouble scaling to keep up with corporate data center growth let alone branch office growth, this spells trouble. Compliance, data protection, patches, upgrades, performance, even routine troubleshooting are all complicated enough without the added burden of accomplishing it all remotely. Maintaining data security, too, is a challenge when remote offices are involved. It is just these challenges that VMware seeks to address with its latest Branch Office Desktop solution set, which lays out two models for distributing and managing virtual desktops (based on VMware View, of course) to help IT mitigate if not all then most of the obstacles IT finds most troubling when it comes to branch office anything. But as with any distributed architecture constrained by bandwidth and technological limitations, there are areas that benefit from a boost from VMware partners. As a long-time strategic and technology partner, F5 brings its expertise in improving performance and solving unique architectural challenges to the VMware Branch Office Desktop (BOD) solution, resulting in LAN-like convenience and a unified namespace with consistent access policy enforcement from HQ to wherever branch offices might be located.

F5 Streamlines Deployments of Branch Office Desktops

KEY BENEFITS
· Local and global intelligent traffic management with single namespace and username persistence support
· Architectural freedom of combining Virtual Editions with Physical Appliances
· Optimized WAN connectivity between branches and primary data centers

Using BIG-IP Global Traffic Manager (GTM), a single namespace (for example, https://desktop.example.com) can be provided to all end users.
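To make the single-namespace idea concrete, here is a small Python sketch of the kind of decision BIG-IP GTM makes: every user resolves the same hostname, and the answer depends on which data center is closest and available. The data center names and addresses here are invented for illustration; the real logic lives in GTM topology and availability rules, not in application code.

```python
# Hypothetical GSLB decision: one hostname, location-aware answers.
DATACENTERS = {
    "east": {"vip": "203.0.113.10", "available": True},
    "west": {"vip": "198.51.100.10", "available": True},
}

def resolve(hostname: str, client_region: str) -> str:
    """Return the VIP of the client's nearest available data center,
    falling back to any available site if the preferred one is down."""
    assert hostname == "desktop.example.com"  # the single, global namespace
    preferred = DATACENTERS.get(client_region)
    if preferred and preferred["available"]:
        return preferred["vip"]
    for dc in DATACENTERS.values():          # failover to a surviving site
        if dc["available"]:
            return dc["vip"]
    raise RuntimeError("no data center available")
```

A branch user on the west coast and one on the east coast both use https://desktop.example.com; only the DNS answer differs, and if a site goes dark the same name quietly resolves to the surviving one.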
BIG-IP GTM and BIG-IP Local Traffic Manager (LTM) work together to ensure that requests are sent to a user's preferred data center, regardless of the user's current location. BIG-IP Access Policy Manager (APM) validates the login information against the existing authentication and authorization mechanisms such as Active Directory, RADIUS, HTTP, or LDAP. In addition, BIG-IP LTM works with the F5 iRules scripting language, which allows administrators to configure custom traffic rules. F5 Networks has tested and published an innovative iRule that maintains connection persistence based on the username, irrespective of the device or location. This means that a user can change devices or locations and log back in to be reconnected to a desktop identical to the one last used. By taking advantage of BIG-IP LTM to securely connect branch offices with corporate headquarters, users benefit from optimized WAN services that dramatically reduce transfer times and improve the performance of applications relying on data center-hosted resources. Together, F5 and VMware can provide more efficient delivery of virtual desktops to the branch office without sacrificing performance, security, or the end-user experience.

F5 Friday: Automating Operations with F5 and VMware
#cloud #virtualization #vmworld #devops Integrating F5 and VMware with the vCloud Ecosystem Framework to achieve automated operations

A third of IT professionals, when asked about the status of their IT cross-collaboration efforts 1 (you know, networking and server virtualization groups working together), indicate that sure, it's a high priority, but a lack of tools makes it difficult to share information and collaborate proactively. Whether we're talking private cloud or dynamic data center efforts, that collaboration is essential to realizing the efficiency promised by these modern models, in part through the ability to automate scalability, i.e. elasticity.

While virtualization vendors have invested a lot of effort in developing APIs that provide extensibility and control, automating those infrastructures is simply not a part of the core virtualization feature set. And yet, controlling a virtualized infrastructure is going to be a key point of any automation strategy, because virtualization is where your resource pools and elasticity live. -- Information Week reports, "Automating the Private Cloud", Jake McTigue

Consider that in a recent sampling of more than 2003 BIG-IPs, the majority of resource pools comprised either 10 to 50 members or anywhere from 100 to 999 members, with the average across all BIG-IPs being about 102 members. A member of a pool, in the load balancing vernacular, is an application service: the combination of an IP address and a port, one that defines a web, e-mail, or other application service. Such services might be traditional (physical) or hosted in a virtual machine. That's a lot of individual services that need to be managed and, more importantly, at some point deployed. And as we know, deploying an application isn't just launching a VM – it's managing the network components that may go along with it, as well.
While leveraging an application delivery controller as a strategic point of control insulates organizations from the impact of such voluminous change on delivery services such as security, access control, and capacity, it doesn't mean it is immune from the impact of such change itself. After all, for elasticity to occur, the load balancing service must be aware of changes in its pool of resources. Members must be added or removed, and the appropriate health monitoring enabled or disabled to ensure real-time visibility into status. A lack of tools to automate the infrastructure collaboration necessary to deploy and subsequently manage changes to applications is part of the perception that IT is sluggish to respond, and why many cite lengthy application deployment times as problematic for their organization.

THE TOOLS to COLLABORATE and ENABLE AUTOMATION

VMware and F5 both seek to provide technologies that make software defined data centers a reality. A key component is the ability to integrate application services into data center operations and thus enable the automation of the application deployment lifecycle. One way we're enabling that is through the VMware vCloud Ecosystem Framework (vCEF), which is designed to allow third parties to integrate with VMware vShield Manager, which can in turn integrate with VMware vCloud Director, enabling private or public cloud or dynamic data center deployments. The integrated solution takes advantage of F5's northbound API as well as vShield Manager's REST-based API to enable bi-directional collaboration between vShield Manager and F5 management solutions. Through this collaboration, a VMware vApp as well as an F5 iApp can be deployed. Together, these two packages describe an application – from end to end. Deployment of required application delivery services occurs when F5's management solution uses its southbound API to instruct the appropriate F5 BIG-IP devices to execute the appropriate iApp.
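As a rough sketch of the collaboration just described – a management layer reacting to VM lifecycle events by updating pool membership and re-executing an iApp – consider the following Python. Every class, method, and pool name here is invented for illustration; this is not the actual F5 or VMware API surface.

```python
# Hypothetical elasticity loop: when the virtualization layer reports a VM
# launched or de-provisioned, the management layer adjusts the BIG-IP pool
# and re-runs the iApp so delivery settings track the new pool make-up.
class Pool:
    def __init__(self, name: str):
        self.name = name
        self.members = set()  # each member is an (ip, port) application service

    def add_member(self, ip: str, port: int) -> None:
        self.members.add((ip, port))

    def remove_member(self, ip: str, port: int) -> None:
        self.members.discard((ip, port))

class BigIPManager:
    def __init__(self, pool: Pool):
        self.pool = pool
        self.iapp_runs = 0

    def on_vm_event(self, event: str, ip: str, port: int = 443) -> None:
        # Bi-directional collaboration: the VM layer notifies us of a change;
        # we adjust membership, then re-run the iApp so health monitoring and
        # delivery parameters match the current members.
        if event == "launched":
            self.pool.add_member(ip, port)
        elif event == "deprovisioned":
            self.pool.remove_member(ip, port)
        self.execute_iapp()

    def execute_iapp(self) -> None:
        self.iapp_runs += 1  # stand-in for re-applying the iApp template
```

The point of the sketch is the shape of the loop, not the names: no human edits pool membership, and the iApp re-runs on every membership change.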
The iApp is automatically executed again upon any change in resource pool make-up, i.e. a virtual machine is launched or de-provisioned. This enables the automatic elasticity desired to manage volatility, without requiring lengthy manual processes to add or remove resources from a pool. It also enables newly deployed applications to be delivered with the appropriate set of application delivery settings, such as those encapsulated in F5-developed iApps that define the optimal TCP, HTTP, and network parameters for specific applications. The business and operational benefits are fairly straightforward – you're automating a process that spans IT groups and infrastructure, and gaining the ability to create repeatable, successful application deployments that can be provisioned in minutes rather than days. This is just one of the many joint solutions F5 and VMware have developed over the past few years. Whether it's VDI or server virtualization, intra- or inter-data center, we've got a solution for VMware technology that will enhance the security, performance, and reliability of not just the delivery of applications, but their deployment.

1 Enterprise Management Associates' 2012 Network Automation Survey Results

Additional Resources for F5 and VMware Solutions

Related blogs and articles:
- Enabling IT Agility with the BIG-IP System and VMware vCloud
- Operationalizing Elastic Applications
- F5 and vCloud Solutions
- Username Persistence for VMware View Deployments
- Enable Single Namespace for VMware View Deployments
- F5 BIG-IP Enhances VMware View 5.0 on FlexPod
- How to Have Your (VDI) Cake and Deliver it Too
- F5 Solutions for VMware View Mobile Secure Desktop
- The Cloud's Hidden Costs
- Hype Cycles, VDI, and BYOD
- Devops Proverb: Process Practice Makes Perfect
- F5 Friday: Programmability and Infrastructure as Code

Lori MacVittie is a Senior Technical Marketing Manager, responsible for education and evangelism across F5's entire product suite. Prior to joining F5, MacVittie was an award-winning technology editor at Network Computing Magazine. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University. She is the author of XAML in a Nutshell and a co-author of The Cloud Security Rules.

F5 Friday: Doing VDI, Only Better
#F5 does #VDI, and it does it better. There are three core vendors and protocols supporting VDI today: Microsoft with RDP, Citrix with ICA, and VMware with PCoIP. For most organizations a single-vendor approach has been necessary, primarily because of the costs associated with the supporting network and application delivery network infrastructure required to deliver VDI with the appropriate levels of security while meeting the performance expectations of users and the need to maintain high availability. It's a tall order that's getting taller with every mobile client introduced, especially when you toss in a liberal dose of enforcing policies regarding access to virtual desktops. Most folks are well aware of F5's long history of deep integration with its partners Microsoft and VMware. Whether it's integrating with management systems or designing, testing, and documenting the oftentimes complex joint architectures required to deliver enterprise-class applications like SharePoint and Exchange, or building out a dynamic data center model to support cloud computing, F5 works in tandem with its partners to ensure the best experience possible not only for the ultimate consumers but for the IT operations folks who must deploy the solutions. But what most folks aren't likely as aware of is F5's commitment and expertise in delivering Citrix VDI as well. That's natural. After all, Citrix competes with F5 at the application delivery tier, and it might seem natural to assume that Citrix could deliver its own technology better than any competitor. But that assumption ignores that F5's core focus has been and continues to be unified application delivery rather than applications – like VDI – themselves. That word, unified, deserves emphasis because it's a key factor in why F5 is able to deliver all VDI solutions better, faster, and more efficiently than any other solution today. See, F5's approach since introducing v9 and its platform has been about the integration of application delivery services.
Whether those services reside on the same physical (or virtual) platform is not as important as the integration and collaboration between those services that is made possible by being designed, developed, and ultimately deployed on a common, high-speed, high-security application delivery platform. Consider, for example, the case of a comprehensive Citrix VDI delivery solution: That’s a lot of components, each of which adversely impacts performance and increases operational risk by adding additional complexity and components to the architecture. That’s ignoring the cost, as well, added by not only the need to deploy these solutions but to power them, manage them, and maintain them over time. It’s costly, it’s complex, and it’s ultimately not very extensible. Authentication, for example, must be managed in multiple locations, which increases the risk of misconfiguration or human error, and makes it more likely that orphaned identities will be left behind, always a concern as it creates an opportunity for a breach. This solution also requires manual scripting to integrate the disparate authentication sources, yet another tedious, manual and error-prone process. Now consider the same solution, but leveraging F5 and its platform with BIG-IP Local Traffic Manager and BIG-IP Access Policy Manager deployed: Consolidated (and integrated) authentication. Highly extensible policy management and enforcement, and we’ve eliminated the Web Interface Servers (and NetScalers, but as we’ve replaced them with BIG-IP that’s more of a wash than a win). But it’s not just about reducing the complexity (and ultimately the cost) of such a deployment. BIG-IP LTM and APM can simultaneously support Microsoft and VMware VDI while delivering Citrix VDI – as well as a host of other applications. F5’s solution isn’t a VDI delivery solution, it’s an application delivery solution with support for all VDI implementations and protocols. 
That includes Citrix Session Reliability, session roaming and reconnection, as well as SmartAccess filters. F5 BIG-IP APM can populate SmartAccess filter values based upon any information discovered using the VPE (source IP address, AV presence, client certificate presence, etc.) and pass them to the XML broker for evaluation. And let's not forget about Citrix Multi-Streaming, which, to give Citrix credit where due, is an innovative solution to the problem of traffic prioritization in VDI delivery. If you aren't familiar with Multi-Streaming, it was introduced in XenDesktop 5.5 and XenApp 6.5 and uses multiple TCP connections (aka Multi-Stream ICA) to carry the ICA traffic between the client and the server. Each of the connections is associated with a different class of service, which allows the network administrator to prioritize each class of service, independently from the others, based on the TCP port number used for the connection. F5 supports Multi-Streaming and has for some time now. No worries. Then there's VMware PCoIP – which can be challenging, especially when paired with DTLS for security. F5 has that covered, too, as well as its long-term support for optimal delivery of Microsoft-based solutions, including its broad set of VDI solutions. I know, you've heard configuring F5 BIG-IP is hard and cumbersome. Well, in the past that may have been true, but the introduction of iApp with BIG-IP v11 has changed that tune from a dirge to a delightful melody. iApp deployment templates and accompanying deployment guides for XenApp and XenDesktop make deploying BIG-IP painless and far less error-prone than manual processes. One of the drawbacks of VDI architectural complexity is that it often presents itself as a single-vendor solution – and a reason for a single-vendor virtualization strategy.
If your application delivery and access management solution is capable of unifying access while delivering secure, high-performing, highly available VDI of any flavor, you'd have more of a choice in what your overall architecture would look like. That kind of choice is enabled through flexibility of the underlying application delivery network infrastructure, which is exactly the role F5 plays in your data center. If your application delivery solution is a flexible platform and not a product, then your network becomes an enabler of architecture and choice rather than being the limiting factor.

VDI Resources:
- Updated Citrix XenApp/XenDesktop APM Template
- Citrix XenApp/XenDesktop Combined Load-balancing iApp
- VMware View 5 iApp Template
- Delivering Virtual Desktop Infrastructure with a Joint F5-Microsoft Solution
- Optimizing VMware View VDI Deployments
- F5 Friday: A Single Namespace to Rule Them All (Overcoming VMware Pod Limitations)
- F5 Friday: Cookie Cutter vApps Realized (Overcoming IP address dependencies to enable application mobility)
- More Users, More Access, More Clients, Less Control
- WILS: The Importance of DTLS to Successful VDI
- From a Network Perspective, What Is VDI, Really?
- Scaling VDI Architectures
- VMworld 2011: F5 BIG-IP v11 iApps for Citrix

Is It Time For IT Role Reorgs?
When I was hired into a utility to head an Automated Meter Reading project that was just getting organized – R&D was largely done, but implementation was not started – the team was set up in a rather odd manner. We had our own datacenter, we had our own networking, we had our own, well, everything. And that was a conscious choice on the part of management. As it was presented to me, they didn't want the early phases of the project mired in "we can't set up load balancing for our app, you have to go talk to the network team" type issues. The long-term plan would make a complete mirror of IT for this project – operations, networking, appdev. Again, as presented to me, the point was to have a group of people completely knowledgeable in the ins and outs of the applications and networking (including power line carrier, phone lines, cell towers, and satellite) that tied it all together. The project was huge, and by the time I left for another part of the company, had grown to be the largest I've ever been involved in – in terms of staff, dollars, however you want to measure. And my team knew those systems in ways that most IT projects never have to, largely because of the initial design. Traditionally, the issues that concern network staff are not the issues that keep systems admins up at night. Generally speaking, the application people worry about whatever is bothering these other two groups plus whatever is wrong with the application. In a highly complex environment – like nearly every datacenter is these days – it can be downright painful to track all of the pain points from the moment a user logs in to the culmination of application usage. The traditional silos – particularly around appdev, whose managers tend to jealously hoard their time as if the next rev of the application is always the most important thing in the future of the company – make it difficult to get a clear view of the application. The ecosystem in which a given application lives is massive.
Really very massive. And there are a lot of places where improvements could be made… if the right group is available and that group has statistics on that bit, and, and, and. So across-the-board performance reporting is needed. The type that can track how long it took ADS to respond to the login request, and how long it took to get a response from the database, and how responsive overall the application is… How much CPU is being utilized on both the virtual and physical machines, how much disk usage the entire system is overseeing, and whether that's a bottleneck… We're getting there. ADCs can manage load across multiple servers and report on responsiveness, VMware vCenter, for example, can help with system resource usage monitoring from a more holistic point of view, and now F5 products support iApps reporting to get detailed reporting on a wide variety of app and server metrics. No doubt (if they can) our competitors will implement similar functionality. It returns managing an app to being a discussion about the app, rather than a bunch of disjoint discussions about generic resources. So what's next? As the title implies, it just might be time to rethink silos. Now some of you will strongly disagree, and I'm good with that – but consider the possibilities along the lines of that AMR project I worked on. vCenter offers management at the physical machine level, but views into the application (actually the VM) itself. iApps offers management at the network level, but views into the overall impact of the network on a specific application's performance. Network hardware still exists and has to be maintained, servers still exist and need to be maintained, but much of that maintenance has been moved into an arena that allows less specialized staff to interface with it. Thus, you will still need a router jockey, but most of your resources could be realigned to focus on the application itself.
Call them "Application Management Engineers", and give them knowledge about the application. This only works well for some big and not-likely-to-go-anywhere applications like Oracle DBMS or Microsoft Exchange, but that's a lot of staff time that can be moved over. And conveniently, iApps has customized templates for most of the really big applications out there, from VDI to Exchange to Oracle to SharePoint. Of course it can work for smaller applications, you'll just need people to juggle a whole collection of applications at once. Less hardware management staff and more application management staff. That's what I'm thinking. Add that to my last post about making developers more involved in operations, and you start to look like a different organization. The focus having been shifted dramatically from hardware bits to overall application health. These types of shifts always have some issues though – we all know that if you specialize a bunch of people in SharePoint, then you lose some synergies with similar networking applications. But then you have a group that does SharePoint-like applications. Essentially all web-based information sharing across the organization. For small orgs this type of organization would not be feasible, but that's true of today's organization too – how many shops don't have dedicated security or storage staff because they just don't have the people for it? Then people simply take on multiple responsibilities. The benefit is a stronger focus on the only thing your users (be they internal or external) care about – the application. Because in the end, it is (or should be) about the apps.

F5 Friday: A Single Namespace to Rule Them All
#vmware An infrastructure architecture that overcomes VMware View concurrency limitations

Sheer volume and geographically disparate deployment of VMware View pods can result in a confusing array of locations from which users must choose to find their preferred desktop. Currently, View deployments are called "pods" and each is limited to a maximum of 10,000 concurrent users. That may seem an unlikely upper limit to hit, but there are organizations for which that number is an issue. Every additional 10,000 concurrent users requires a unique supporting infrastructure along with a unique endpoint – a URL – to which the client must point. Users must be aware of which URL they should use; Bob cannot rely on Alice for the information, because Alice may be assigned to a different pod. The same restrictions apply to geographically disparate deployments. A west coast-east coast or even region-based distributed architecture is not uncommon for large and global organizations. Each location requires that the infrastructure supporting the pod be local, too, which means duplicated infrastructure across each geographical location at which it is desirable to deliver virtual desktops. Again, each pod has its own unique endpoint (URL). This can be confusing for end users, and it complicates automating client distribution and management, as it may not be known at installation and configuration time to which of the many endpoints the client should point, leaving it in the hands of users who may or may not remember the URL. A combination of F5 solutions mitigates these pain points by supporting a single, global "namespace" for VMware View, i.e. one URL from which virtual desktops can be delivered, regardless of pod membership or physical location.

HOW IT WORKS

Kevin's preferred virtual desktop is in the east coast data center.
This means if the east coast data center is available, it is preferred to have him connect there, most likely because of Kevin's proximity to the east coast data center. Kevin travels to California for a business trip and wants to access his desktop. His desktop has not traveled, and it is preferable to use the same namespace as Kevin would use when on the east coast. To accomplish this, we make use of several F5 technologies, enabling a consistent, global namespace without sacrificing security or performance.

1. The View client connects to the global namespace, e.g. mydesktop.example.com.
2. BIG-IP GTM determines Kevin's location correctly as being on the west coast, and directs his client to the west coast data center, returning 1.1.1.1 as an IP address in response to the DNS lookup. Kevin's View client then connects to 1.1.1.1 on port 443.
3. BIG-IP LTM watches Kevin authenticate, sending the username to Active Directory. Examining the response and user attributes, BIG-IP LTM determines that Kevin's primary desktop is deployed on the east coast.
4. The BIG-IP LTM on the west coast forwards the login request to an available server on the east coast. The connection server logs Kevin in and shows him his available desktop. Kevin opens his preferred desktop.
5. BIG-IP LTM relays the appropriate information back to Kevin's View client.
6. Kevin's View client now has the connection information necessary to open his PCoIP session directly to the server in the east coast data center, and does so.

SOLUTION SUMMARY

BIG-IP Global Traffic Manager (GTM) provides the single namespace, e.g. mydesktop.example.com. BIG-IP Local Traffic Manager (LTM) provides SSL offloading and load balancing of connection broker traffic, enabling scalability and improved performance. It also provides user-based persistence, enabling BIG-IP LTM to direct the user to the correct server based on the VMware View JSESSIONID.
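The six-step flow above can be condensed into a small simulation. The usernames and addresses mirror the example in the text (GTM answering 1.1.1.1 for the west coast); the functions are invented stand-ins for decisions that GTM, LTM, and APM actually make.

```python
# Toy simulation of the single-namespace login flow described above.
HOME_DC = {"kevin": "east"}                        # from Active Directory attributes
DC_VIP = {"east": "2.2.2.2", "west": "1.1.1.1"}    # per-data-center virtual servers

def gtm_resolve(client_location: str) -> str:
    """Step 2: GTM answers the DNS query with the nearest data center's VIP."""
    return DC_VIP[client_location]

def broker_login(username: str, entry_dc: str) -> dict:
    """Steps 3-6: the entry-point LTM inspects the authentication, finds the
    user's home data center, and brokers the session there if it differs."""
    home = HOME_DC[username]
    desktop_dc = home if home != entry_dc else entry_dc
    # The client then opens its PCoIP session directly to the home site.
    return {"entry": entry_dc, "desktop_dc": desktop_dc, "pcoip": DC_VIP[desktop_dc]}

# Kevin, traveling on the west coast, still types mydesktop.example.com:
session = broker_login("kevin", gtm_resolve("west") == DC_VIP["west"] and "west" or "west")
```

Kevin enters through the west coast VIP, but his PCoIP session lands on the east coast server where his desktop lives; the namespace he typed never changed.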
BIG-IP Access Policy Manager (APM) takes the username and validates it against Active Directory, RADIUS, LDAP, or an HTTP-based authentication service, as well as determining group membership to locate the preferred desktop. Working in concert with VMware View servers, this trifecta of intelligent application delivery technologies enables a single hostname for VMware View clients worldwide. It uses the recommended VMware pod deployment model, and has been tested with the iPad, Windows, and zero-client platforms.

F5 Friday: A War of Ecosystems
Nokia’s brutally honest assessment of its situation identifies what is not always obvious in the data center: it’s about an ecosystem.

In what was certainly a wake-up call for many, Nokia’s CEO Stephen Elop tells his organization its “platform is burning.” In a leaked memo reprinted by Engadget and picked up by many others, Elop explained the analogy as well as why he believes Nokia is in trouble. Through careful analysis of its competitors and their successes, he finds the answer in the ecosystem its competitors have built, comprising developers, applications, and more.

“The battle of devices has now become a war of ecosystems, where ecosystems include not only the hardware and software of the device, but developers, applications, ecommerce, advertising, search, social applications, location-based services, unified communications and many other things. Our competitors aren’t taking our market share with devices; they are taking our market share with an entire ecosystem. This means we’re going to have to decide how we either build, catalyse or join an ecosystem.”

If you’re wondering what this could possibly have to do with networking and application delivery, well, the analysis Elop provides regarding the successes of a mobile device vendor can be directly applied to the data center. The nature of data centers and networks is changing. It’s becoming more dynamic, more integrated, more dependent upon collaboration and connections between devices (components) that have traditionally stood alone. As data center models evolve and the demands placed upon them increase the need for contextual awareness, collaboration, and the ability to be both reactive and proactive in applying policies across a wide spectrum of data center concerns, success becomes as dependent on a component’s ability to support and be supported by an ecosystem. Not just the success of vendors, which was Elop’s focus, but the success of data center architecture implementations.
Countering the rising cost and complexity introduced by new computing and networking models requires automation, orchestration, and collaboration across data center components. Cloud computing and virtualization have turned the focus from technology-focused components to process-oriented platforms; from individual point solutions to integrated, collaborative systems that encourage development and innovation as a means to address the challenges arising from extreme dynamism.

F5 Networks Wins VMware Global Technology Innovator Award

Yesterday we took home top honors for enhancing the value of VMware virtualization solutions for companies worldwide. At VMware Partner Exchange 2011, VMware’s annual worldwide partner event, F5 was recognized with VMware’s Technology Innovator Partner of the Year Award. Why is that important? Because it recognizes the significant value placed on building a platform and developing an ecosystem in which that platform can be leveraged to integrate and collaborate on solutions with partners and customers alike. And it is about an ecosystem; it is about collaborative solutions that address key data center challenges that may otherwise hinder the adoption of emerging technologies like cloud computing and virtualization. A robust and flexible application delivery platform provides not only the means by which data and traffic can be dynamically delivered and secured, but also the means through which a more comprehensive strategy to address the operational challenges associated with increasingly dynamic data center architectures can be implemented. The collaboration between VMware and F5’s BIG-IP platforms is enabled through integration, through Infrastructure 2.0-enabled systems that create an environment in which flexible architectures and dynamism can be managed efficiently.
In 2010 alone, F5 and VMware collaborated on a number of solutions leveraging the versatile capabilities of F5’s BIG-IP product portfolio, including:

Accelerated long distance live migration with VMware vMotion. The joint solution helps solve latency, bandwidth, and packet-loss issues, which historically have prevented customers from performing live migrations between data centers over long distances.

An integrated enterprise cloudbursting solution with VMware vCloud Director. The joint solution simplifies and automates use of cloud resources to enhance application delivery performance and availability while minimizing capital investment.

Optimized user experience and secure access capabilities with VMware View. The solution enhances the VMware View user experience with secure access, single sign-on, high performance, and scalability.

“Since joining VMware’s Technology Alliance Partner program in 2008, F5 has driven a number of integration and interoperability efforts aimed at enhancing the value of customers’ virtualization and cloud deployments,” said Jim Ritchings, VP of Business Development at F5. “We’re extremely proud of the industry-leading work accomplished with VMware in 2010, and we look forward to continued collaboration to deliver new innovations around server and desktop virtualization, cloud solutions, and more.”

It is just such collaboration that builds the robust ecosystem necessary to successfully move forward with dynamic data center models built upon virtualization and cloud computing principles. Without this type of collaboration, and the platforms that enable it, the efficiencies of private cloud computing and the economies of scale of public cloud computing simply wouldn’t be possible.
F5 has always been focused on delivering applications, and that has meant not just partnering extensively with application providers like Oracle, Microsoft, and IBM; it has also meant partnering and collaborating with infrastructure providers like HP, Dell, and VMware to create solutions that address the very real challenges associated with data center and traffic management. Elop is exactly right when he points to ecosystems being the key to the future. In the case of network and application networking solutions, that ecosystem is about vendor relationships and partnerships as much as it is about solutions that enable IT to better align with business and operational goals and to reduce the complexity introduced by increasingly dynamic operations. VMware’s recognition of the value of that ecosystem, of the joint solutions designed and developed through partnerships, is great validation of the important role of the ecosystem in the successful implementation of emerging data center models.

F5 Friday: Join Robin “IT” Hood and Take Back Control of Your Applications
F5 Friday: The Dynamic VDI Security Game
WILS: The Importance of DTLS to Successful VDI
F5 Friday: Elastic Applications are Enabled by Dynamic Infrastructure
F5 Friday: Efficient Long Distance Transfer of VMs with F5 BIG-IP WOM and NetApp Flexcache
F5 Friday: Playing in the Infrastructure Orchestra(tion)
Why Virtualization is a Requirement for Private Cloud Computing
F5 VMware View Solutions
F5 VMware vSphere Solutions
Application Delivery for Virtualized Infrastructure
DevCentral - VMware / F5 Solutions Topic Group

DevCentral Top5 1/28/2011
This week brought new meaning to the old, familiar phrase “so overstuffed with goodness it’s unknowable”. Well okay, maybe it’s a silly new phrase I just made up, but that doesn’t change the fact that this was yet another killer week for content on DevCentral. There were forum posts by the hundreds, blog posts by the dozen, and tech tips aplenty. There was even some Twitter-inspired swan diving silliness. Through all of that I’ve picked what I thought were some of the most interesting things out of the bunch to highlight. To be fair, though, I could have easily listed 15 things this week, so definitely go trolling for yourself on DevCentral to see what you can find. That being said, here you are, this week’s DC Top5:

Scatter Plotting Response Times with iRules and Google Charts
http://bit.ly/ePngP2

The whole “Google Charts magic via iRules” thing isn’t exactly new anymore. We’ve shown you at least a half dozen examples by now of truly killer ways you can make graphs and charts ranging from handy to awe inspiring with a combination of the charting API from Google and some serious iRules fu. Well, chalk one more wicked example up to that list. Joe endeavored this week to show off a new type of graph that no one had touched yet, from an iRuling perspective: the scatter chart. What he’s doing here is charting the duration to complete an HTTP request as it relates to first status code, then request size. He’s plotting these on a scatter chart, which is an interesting way to look at this data. The chart is cool, but to me the true magic is in the iRule. Fun stuff and worth a look for sure.

Ruby and iControl: Understanding Complex Type Syntax
http://bit.ly/fxKThD

A while back George released a Ruby plug-in for iControl, which was a big step in helping make the world a safer, friendlier place for anyone that’s a Ruby fan and interested in iControl, or vice versa.
This week he comes to us with another look at how to make life simpler for those people, along with anyone at all interested in iControl, by doing an in-depth look at complex type syntax and how it works. This is one of those things that can seem pretty rough when first digging into iControl, or any SOAP-based API really. With a little patience and a walkthrough like this, though, it really makes a lot of sense. Once you get that light-bulb moment and this concept clicks, the world of iControl becomes your oyster. Figuratively of course...we’re not talking actual crustaceans here or anything. Anyway, read this article if you have any inkling of sinking your teeth into iControl any time soon. It’ll help, honest.

BIG-IP Configuration Visualizer – iControl Style
http://bit.ly/fZngFr

While we’re on the topic of iControl, let’s talk about Jason’s tech tip this week. He’s bringing to light some truly hawesome code whipped up by community member Russell Moore. This code, in the form of a Perl script (that’s right, Perl...what of it?), will suck in your BIG-IP config and, by way of iControl, Perl, and GraphViz magic, output a graphical representation of your config. No seriously, it makes an image that shows you each config object laid out logically. Every VIP, profile, IP, VLAN, etc. is graphically represented and laid out logically based on how your system is currently configured. This is a stellar way to get a good look at how things are set up in complex environments. Yes I agree, it’s amazing. That’s why it’s on the Top 5. Now stop freaking out and go look.

Simplify VMware View Deployments
http://bit.ly/ei3Tf2

Peter Silva struck a chord with me this week by writing about deploying VMware View while attempting to be at least somewhat efficient. VMware View, for those somehow not in the know, is a method by which you can deliver a desktop environment to a user from a central location.
In other words, you spin up servers in a data center that host images of client systems, and those clients simply remote into them to get their work done. There are many appealing parts to this concept: centralization, efficiency in hardware usage/maintenance/repair, etc. The things that aren’t so appealing are the network traffic required to make this kind of distributed system work, and the havoc that a little latency or packet loss can play in people getting their jobs done. Not to mention the security headaches. Enter LTM and APM. Pete walks through a few ways in which F5 can help make this type of deployment work in the real world and alleviate some of the pain that could otherwise be inherent. Take a look, it’s an interesting read.

iRule::ology; Connection Limiting Take 2
http://bit.ly/gw4rpG

For the second installation of what is quickly becoming my favorite Tech Tip series ever, I have another connection limiting iRule in this week’s iRule::ology installment. This time, however, rather than limiting a VIP to a given number of HTTP requests per time interval, we’re trying to limit it to a given number of concurrent TCP connections. This tends to be a bit more tricky, but thanks to some code by the illustrious spark himself, the task itself is no problem. What I love about this series is that it gives me a chance to truly dig in and explain each command and statement, why it’s being used, how it works, what to watch out for, etc. It’s an intense study of a single iRule, and I feel like I learn something every time, let alone what I’m hoping to show others about the code chosen. I hope you like the format as much as I do, because I have no intentions of stopping.

There you have it, five more wicked cool things to dig into on DC. There are a ton more out there, seriously, so don’t be shy, go hunting yourself. If you’ve got questions, comments or feedback, I’m all ears. Otherwise, thanks for reading and see you next time.
#Colin

WILS: The Importance of DTLS to Successful VDI
One of the universal truths about user adoption is that if performance degrades, users will kick and scream and ultimately destroy your project. Most VDI (Virtual Desktop Infrastructure) solutions today still make use of traditional thin-client protocols like RDP (Remote Desktop Protocol) as a means to enable communication between the client and their virtual desktop. Starting with VMware View 4.5, VMware introduced the high-performance PCoIP (PC over IP) communications protocol. While PCoIP is usually associated with rich media delivery, it is also useful in improving performance over distances, such as the distances often associated with remote access. You know, the remote access by employees whose communications you particularly want to secure because they’re traversing the wild, open Internet. Probably with the use of an SSL VPN. Unfortunately, most traditional SSL VPN devices are unable to properly handle this unique protocol, and performance suffers as a result, degrading the user experience. The result? A significant hindrance to the adoption of VDI has just been introduced, and your mission, whether you choose to accept it or not, is to find a way to improve performance such that both IT and your user community can benefit from using VDI.

The solution is actually fairly simple, at least in theory. PCoIP is a datagram (UDP) based protocol. Wrapping it up in what is a TCP-based security protocol, SSL, slows it down. That’s because TCP is (designed to be) reliable, checking and ensuring packets are received before continuing on. On the other hand, UDP is a fire-and-assume-the-best-unless-otherwise-notified protocol, streaming out packets and assuming clients have received them. It’s not as reliable, but it’s much faster, and it’s not at all uncommon. Video, audio, and even DNS often leverage UDP for speedy transmission with less overhead. So what you need, then, is a datagram-focused transport layer security protocol.
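The TCP-versus-UDP distinction at the heart of this problem is easy to see in a few lines of Python. This is a minimal sketch using loopback sockets; everything here is illustrative, not PCoIP itself:

```python
# Minimal loopback demonstration of the TCP/UDP distinction described
# above: UDP sends datagrams with no handshake or delivery guarantee,
# which is why a datagram-oriented security layer (DTLS) is needed to
# wrap a protocol like PCoIP without imposing stream semantics.
import socket

# UDP: no connection setup; each sendto() is an independent datagram.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # let the OS pick a free port
receiver.settimeout(2)
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"pixel-block-1", addr)    # fire and forget
sender.sendto(b"pixel-block-2", addr)    # no ACK, no retransmit

data1, _ = receiver.recvfrom(2048)       # datagram boundaries preserved
data2, _ = receiver.recvfrom(2048)

# With TCP, the same bytes would arrive as one ordered stream, and any
# lost segment would stall everything behind it until retransmitted --
# exactly the head-of-line blocking a real-time display protocol
# cannot afford.
sender.close()
receiver.close()
```

Note that with SOCK_DGRAM there is no connect/accept handshake at all; the sender just transmits. That lack of stream semantics is precisely what SSL assumes and DTLS does not.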
Enter DTLS:

“In information technology, the Datagram Transport Layer Security (DTLS) protocol provides communications privacy for datagram protocols. DTLS allows datagram-based applications to communicate in a way that is designed to prevent eavesdropping, tampering, or message forgery. The DTLS protocol is based on the stream-oriented TLS protocol and is intended to provide similar security guarantees. The datagram semantics of the underlying transport are preserved by the DTLS protocol — the application will not suffer from the delays associated with stream protocols, but will have to deal with packet reordering, loss of datagram and data larger than a datagram packet size.” -- Wikipedia

If your increasingly misnamed SSL VPN (which is why much of the industry has moved to calling them “secure remote access” devices) is capable of leveraging DTLS to secure PCoIP, you’ve got it made. If it can’t, well, attempts to deliver VDI to remote or roaming employees over long distances may suffer setbacks or outright defeat due to a refusal to adopt based on the performance and availability challenges experienced by end users. DTLS is the best alternative for ensuring that secure remote access to virtual desktops remains secured over long distances without suffering unacceptable performance degradation. If you’re looking to upgrade, migrate, or just now getting into secure remote access, and you’re also considering VDI via VMware, ask about DTLS support before you sign on the dotted line.

WILS: Write It Like Seth. Seth Godin always gets his point across with brevity and wit. WILS is an attempt to be concise about application delivery topics and just get straight to the point. No dilly-dallying around.

Related blogs & articles:
WILS: Load Balancing and Ephemeral Port Exhaustion
All WILS Topics on DevCentral
WILS: SSL TPS versus HTTP TPS over SSL
WILS: Three Ways To Better Utilize Resources In Any Data Center
WILS: Why Does Load Balancing Improve Application Performance?
WILS: A Good Hall Monitor Actually Checks the Hall Pass
WILS: Applications Should Be Like Sith Lords
F5 Friday: Beyond the VPN to VAN
F5 Friday: Secure, Scalable and Fast VMware View Deployment
Desktop Virtualization Solutions from F5

F5 Friday: The Dynamic VDI Security Game
Balancing security, speed, and scalability is easy if you have the right infrastructure. A dynamic infrastructure.

All the talk about “reusing” and “sharing” resources in highly virtualized and cloud computing environments makes it sound as if IT has never before understood how to leverage dynamic, on-demand services. After all, while Infrastructure 2.0 (dynamic infrastructure) may only have been given its moniker since the advent of cloud computing, it’s not as if it didn’t exist before then, or as if organizations weren’t taking advantage of its flexibility. It’s a lot like devops: we’ve been talking about bridging the gap between operations and development for years now; we just never had a way to describe it so succinctly until devops came along. The ability to dynamically choose delivery profiles – whether those associated with acceleration and optimization or those associated with security – is an important facet of application delivery solutions in today’s highly virtualized and cloud computing environments. Call it “reuse” of policies or “sharing” of profiles, whatever you like; this ability has been a standard feature of F5’s application delivery platform for a long, long time. This dynamic, on-demand provisioning of services based on context is the defining characteristic of an Infrastructure 2.0 solution. In the case of VDI, and specifically VDI implemented using VMware View 4.5 or later, it’s specifically about the ability to dynamically provision the right encryption solution at the right time, which is paramount to the success of VDI when remote access is required.

THE CHALLENGE

Secure remote access (you know, for us remote and roaming folks who rarely see the inside of corporate headquarters) to hosted desktops that reside behind corporate firewalls (where they belong) requires tunneling all VMware View connections. Not an uncommon scenario in general, right?
Tunneling access to corporate resources is a pretty common theme when talking secure remote access. The key here is secure, meaning encrypted, which for most applications delivered today via the Web means SSL. For VMware View, when RDP (Remote Desktop Protocol) is the protocol of choice, that means a solution that scales poorly due to the intensive CPU consumption for SSL by the View security servers. And if PCoIP is chosen instead of RDP for its enhanced ability to deliver rich media and perform better over long distances, then the challenge becomes enabling security in an architecture in which it is not supported (PCoIP is UDP based, which is not supported by View security servers). SSL VPN solutions can be leveraged to tunnel PCoIP in SSL, but there’s a significant degradation of performance associated with that decision that will negatively impact the user experience. So the challenge is: enable secure remote access to virtual desktops within the corporate data center without negatively impacting the performance or scalability of the architecture.

THE SOLUTION

This particular challenge can be met by employing Datagram Transport Layer Security (DTLS) in lieu of SSL. DTLS is a derivative of TLS that provides the same security measures for UDP-based protocols as SSL provides for TCP-based protocols, without the performance degradation. F5 BIG-IP Edge Gateway supports both SSL and DTLS encryption tunnels. This becomes important because View security servers do not support DTLS, and while falling back to SSL may be an option, the performance degradation for the user combined with the increased utilization on View security servers to perform SSL operations does not make for a holistically successful implementation. BIG-IP Edge Gateway addresses this challenge in three ways:

First, BIG-IP Edge Gateway offloads the cryptographic processing from the servers, increasing the utilization and scalability of the supporting infrastructure and improving performance.
Because the cryptographic processing is handled by dedicated hardware designed to accelerate and process such operations efficiently, the implementation scales better whether using DTLS, SSL, or a combination of both.

Second, BIG-IP Edge Gateway can dynamically determine which encryption protocol to use depending on the display protocol and the client support for that user and device. It’s context-aware, and makes the decision when the client begins their session. It leverages a dynamic and reusable set of policies designed to aid in optimizing connectivity between the client and corporate resources based on the conditions that exist at the time requests are made.

Lastly, BIG-IP Edge Gateway automatically falls back to using TCP if a high-performance UDP tunnel cannot be established. This is an important capability, as a slower connection is generally preferred over no connection, and there are scenarios in which a high-performance UDP tunnel simply can’t be set up for the client.

Infrastructure should support security, not impede it. It’s great to be able to leverage the improvement in display protocol performance offered by PCoIP, but not at the expense of security. Leveraging an intermediary capable of dynamically providing the best security services for remote access to virtual desktops residing within the corporate data center means not having to sacrifice speed or scalability for security.

Related blogs & articles:
WILS: The Importance of DTLS to Successful VDI
F5 Friday: It’s a Data Tsunami for Service Providers
F5 Friday: Beyond the VPN to VAN
All F5 Friday Posts on DevCentral
Some Services are More Equal than Others
Service Delivery Networking Presentation
Why Virtualization is a Requirement for Private Cloud Computing
What is Network-based Application Virtualization and Why Do You Need It?
You Can’t Have IT as a Service Until IT Has Infrastructure as a Service
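The context-aware tunnel selection and TCP fallback described in this solution can be boiled down to a small decision function. This is an illustrative Python sketch, not BIG-IP Edge Gateway’s actual logic; the function and parameter names are assumptions made for the example:

```python
# Illustrative sketch of the tunnel selection and fallback described
# above: prefer a high-performance DTLS/UDP tunnel for PCoIP-capable
# clients, and fall back to TLS over TCP otherwise. Names are
# hypothetical; real BIG-IP policy is configured, not hand-coded.

def choose_tunnel(display_protocol, client_supports_dtls, udp_reachable):
    """Return the encryption tunnel to establish for this session."""
    if display_protocol == "PCoIP" and client_supports_dtls and udp_reachable:
        return "DTLS"   # high-performance UDP tunnel
    # A slower connection is generally preferred over no connection.
    return "TLS"

assert choose_tunnel("PCoIP", True, True) == "DTLS"
assert choose_tunnel("PCoIP", True, False) == "TLS"  # UDP blocked: fall back
assert choose_tunnel("RDP", False, True) == "TLS"    # RDP stays on TLS/TCP
```

The key design point the sketch captures is that the decision is made per session, at connection time, from the client’s actual context, rather than being fixed once for all users.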