Why you still need layer 7 persistence
Tony Bourke of the Load Balancing Digest points out that mega proxies are largely dead. Very true. He then wonders whether layer 7 persistence is really all that important today, since it was largely implemented to solve the problems associated with mega-proxies - that is, large numbers of users coming from the same IP address. Layer 7 persistence is still applicable to situations where multiple users come from a single IP address (such as a small client base spread across a handful of offices, each office behind one public IP address), but what would relying on layer 4 persistence do to a major site these days? Nothing good, I'm thinking: layer 4 persistence would likely break a major site today. Layer 7 persistence is even more relevant now than it has been in the past for one very good reason: session coherence. Session coherence may not have the performance and availability ramifications of the mega-proxy problem, but it is essential to ensure that applications in a load-balanced environment work correctly.

Where's F5?
- VMWorld Sept 15-18 in Las Vegas
- Storage Decisions Sept 23-24 in New York
- Networld IT Roadmap Sept 23 in Dallas
- Oracle Open World Sept 21-25 in San Francisco
- Storage Networking World Oct 13-16 in Dallas
- Storage Expo 2008 UK Oct 15-16 in London
- Storage Networking World Oct 27-29 in Frankfurt

SESSION COHERENCE

Layer 7 persistence is still heavily used in applications that are session sensitive. The most common example is shopping carts stored in the application server session, but it is also increasingly important to Web 2.0 and interactive applications where state matters. Sessions are used to store that state, and layer 7 persistence is therefore key to maintaining that state in a load-balanced environment. It's common today to see layer 7 persistence driven by JSESSIONID or PHPSESSID values carried in HTTP headers. It's a question we see in the forums here on DevCentral quite often. Many applications are rolled out, then inserted into a load-balanced environment, and subsequently break because sessions aren't shared across web application servers and the client isn't always routed to the same server. Users also come back "later" (after the connections have timed out) and expect the application to continue where they left off. If they aren't load balanced back to the same server, the session data isn't accessible and the application breaks.

Application server sessions generally persist for hours, as opposed to the minutes or seconds allowed for a TCP connection, so layer 4 (TCP) persistence can't adequately address this problem. Source port and IP address aren't always enough to ensure routing to the correct server, because that mapping doesn't persist once the connection is closed, and multiple requests coming from the same browser now use multiple connections, each with a different source port. That means two requests on the same page may not be load balanced to the same server, even though both may require access to the application session data. These sites and applications are used for hours, often with long periods of time between requests, which means connections have usually long since timed out. Could layer 4 persistence work? Probably, but only if the time-out on these connections were set unreasonably high, which would consume far more resources on the load balancer and reduce its capacity significantly.
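On BIG-IP, this kind of session-ID persistence is typically implemented with universal (UIE) persistence. The following iRule is a minimal sketch, assuming a virtual server whose universal persistence profile references it; the JSESSIONID cookie name and the one-hour timeout are illustrative and should match your application and session lifetime.

```
when HTTP_REQUEST {
    # If the client already presents a session cookie, persist on its value
    if { [HTTP::cookie exists "JSESSIONID"] } {
        persist uie [HTTP::cookie "JSESSIONID"] 3600
    }
}

when HTTP_RESPONSE {
    # Learn the session ID the application server just issued so the
    # next request carrying this cookie returns to the same pool member
    if { [HTTP::cookie exists "JSESSIONID"] } {
        persist add uie [HTTP::cookie "JSESSIONID"] 3600
    }
}
```

Because the persistence key is the application's own session identifier rather than the client's IP address or source port, every connection carrying that cookie lands on the same server for as long as the persistence record is kept - which is exactly what layer 4 persistence cannot guarantee.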
And let's not forget SaaS (Software as a Service) sites like salesforce.com, where rather than mega-proxy issues we'd have lots-of-little-proxy issues cropping up, as businesses still (thanks to IPv4 and the need to monitor Internet use) employ forward proxies. And SSL, too, is highly dependent upon header data to ensure persistence today. I agree with Tony's assessment that the mega proxy problem is largely a non-issue today, but session coherence is taking its place as one of the best reasons to implement layer 7 persistence over layer 4 persistence.
Would You Put Corporate Applications in the Cloud?

There once was a time when organizations wouldn't consider deploying critical applications in the cloud. It was too much of a business risk from both an access and an attack perspective - and for good reason, since 28 percent of enterprises have experienced more security breaches in the public cloud than with on-premises applications. This is changing, however. Over the last few years, cloud computing has emerged as a serious option for delivering enterprise applications quickly, efficiently, and securely. Today almost 70 percent of organizations are using some cloud technology, and that adoption continues to grow.

According to the latest Cisco Global Cloud Index report, global data center IP traffic will nearly triple over the next five years, growing at a compound annual growth rate of 25 percent from 2012 to 2017. This growth supports our on-demand, always-connected lifestyle, where content and information must be available anytime, anywhere, and on any screen. Mobility is the new normal, and the cloud is the platform that delivers this content. No wonder enterprises are scrambling to add cloud components to their existing infrastructure to provide the agility, flexibility, and secure access that support the overall business strategy. Applications that used to take months to launch now take minutes, and organizations can take advantage of innovations quickly. But most IT organizations want the cloud benefits without the risks: the economics and speed of the cloud without the security and integration challenges.

Use of the corporate network itself has become insecure, even with firewalls in place. Gone are the days of "trusted" and "untrusted," as the internal network is now dangerous, and it will only get worse once all those IoT wearables hit the office. Even connecting to the corporate network via VPN can be risky given these network threats. Today, almost anything can pose a potential security risk, and unauthorized access is a top data security concern.

Despite those risks, some organizations are now placing critical applications in the cloud and facing the challenge of providing secure user access. Authentication is typically handled by the application itself, so user credentials are often stored and managed in the cloud by the provider. Organizations, however, need to keep close control over user credentials, and for global organizations the number of identity systems can run into the thousands, scattered across geographies, markets, brands, or acquisitions. It becomes a significant challenge for IT to properly authenticate a person (whether located inside or outside the corporate network) against a highly available identity provider (such as Active Directory) and then direct them to the proper resources.

The goal is to allow access to corporate data from anywhere with the right device and credentials. Speed and productivity are key. Authentication, authorization, and encryption provide fine-grained access regardless of the user's location and network. Employee access is treated the same whether the user is at a corporate office, at home, or connected to an open, unsecured Wi-Fi network at a bookstore. This eliminates the traditional VPN connection to the corporate network and encrypts all connections to corporate information, even from the internal network. In this scenario, an organization can deploy the BIG-IP platform, especially virtual editions, in both the primary and cloud data centers.
BIG-IP intelligently manages all traffic across the servers. One pair of BIG-IP devices sits in front of the servers in the core network; another pair sits in front of the directory servers in the perimeter network. By managing traffic to and from both the primary and directory servers, the F5 devices ensure the availability and security of cloud resources for both internal and external (federated) employees. In addition, directory services can stay put, as BIG-IP simply queries them to determine appropriate access.

While there are some skeptics, organizations like GE and Google are already transitioning their corporate applications to cloud deployments, and more are following. As Jamie Miller, President & CEO at GE Transportation, says, "Start Small, Start Now."

ps

Related:
- CIOs Face Cloud Computing Challenges, Pitfalls
- Google Moves Its Corporate Applications to the Internet
- GE's Transformation... Start Small, Start Now
- Ask the Expert – Why Identity and Access Management?

Technorati Tags: cloud,big-ip,authentication,saas,f5
F5 and Versafe: Because Mobility Matters

#F5 #security #cloud #mobile #context Context means more visibility into devices, networks, and applications, even when they're unmanaged.

Mobility is a significant driver of technology today. Whether it's the mobility of applications between data center and cloud, between web and mobile device platforms, or of users moving from corporate to home to public networks, mobility is a significant factor impacting all aspects of application delivery - and security in particular.

"Server virtualization, BYOD, SaaS, and remote work all create new security problems for data center managers. No longer can IT build a security wall around its data center; security must be provided throughout the data and application delivery process. This means that the network must play a key role in securing data center operations as it 'touches' and 'sees' all traffic coming in and out of the data center."
-- Lee Doyle, GigaOM, "Survey: SDN benefits unclear to enterprise network managers" 8/29/2013

It's a given that corporate data and access to applications need to be protected when delivered to locations outside corporate control. Personal devices, home networks, and cloud storage all introduce the risk of information loss through a variety of attack vectors. But that's not all that poses a risk. The mobility of customers, too, is a potential disaster waiting to happen, as control over behavior as well as technology is completely lost. Industries that serve consumers and use technology to facilitate business transactions are particularly at risk from consumer mobility and, more importantly, from the attackers that target them. If the risk posed by successful attacks - phishing, pharming, and social engineering - isn't enough to give the CISO an ulcer, the cost of supporting sometimes technically challenged consumers will be. Customer service and support has in recent years become not only a help line for the myriad web and mobile applications offered by an organization, but a security help desk as well, as consumers confused by e-mail and web attacks turn to those support lines.

F5 and Security

F5 views security as a holistic strategy that must be able to dig into not just the application and the corporate network, but into the device and application as well, along with the networks over which users access both mobile and web applications. That's where Versafe comes in, with its unique combination of client-side intelligent visibility and subscription-based security service. Versafe's technology pairs client-side visibility and logic with expert-driven security operations to ensure real-time detection of a variety of threat vectors common to web and mobile applications alike. Its coverage of browsers, devices, and users is comprehensive: every platform, every user, and every device can be protected from a vast array of threats, including those not covered by traditional solutions, such as session hijacking. Versafe approaches web fraud by monitoring the integrity of the session data that the application expects to see between itself and the browser. This method isn't vulnerable to "zero-day" threats: malware variants, new proxy/masking techniques, or fraudulent activity originating from devices, locations, or users who haven't yet accumulated digital fraud fingerprints.

Continuous Delivery Meets Continuous Security

Versafe's solution can accomplish such comprehensive coverage because it's clientless, relying on injection into web content in real time. That's where F5 comes in.
Using F5 iRules, the appropriate Versafe code can be injected dynamically into web pages to scan for and detect potential application threats, including script injection, trojans, and pharming attacks (a sketch of this injection pattern appears at the end of this post). Injection in real time through F5 iRules eliminates any reliance on scanning and updating heterogeneous endpoints and, of course, on consumers to install and maintain such agents. This allows the delivery process to scale seamlessly along with users and devices and reasserts control over processes and devices not under IT control, essentially securing unsecured devices and lines of communication. Injection-based delivery also means no impact on application developers or applications, so it won't reduce application development and deployment velocity. It also enables real-time, up-to-the-minute detection of and protection against threats, because the injected Versafe code is always communicating with the latest security information maintained by Versafe at its cloud-based Security Operations Center. User protection is always on, no matter where the user might be or on what device, and it doesn't require updating or action on the part of the user.

The clientless aspect of Versafe means it has no impact on user experience. Versafe takes advantage of modern browser technology to execute with no performance impact on the user. That's a big deal, because a variety of studies on real behavior indicate that performance hits of even a second on load times can impact revenue and user satisfaction with web applications.

Both the web and mobile offerings from Versafe further ensure transaction integrity by assessing a variety of device-specific and behavioral variables, such as device ID, mouse and click patterns, sequencing of and timing between actions, and continuous monitoring of JavaScript functions. These kinds of checks are a sort of automated Turing test: a system able to determine whether an end user is really a human being - or a bot bent on carrying out a malicious activity. But it's not just about the mobility of customers; it's also about the mobility - and versatility - of modern attackers. To counter a variety of brand, web, and domain abuse, Versafe's cloud-based 24x7x365 Security Operations Center and Malware Analysis Team proactively monitor for organization-specific fraud and attack scheming across all major social and business networks, enabling rapid detection and real-time alerting of suspected fraud.

EXPANDING the F5 ECOSYSTEM

The acquisition of Versafe and its innovative security technologies expands the F5 ecosystem by exploiting the programmable nature of its platform. Versafe technology supports and enhances F5's commitment to delivering context-aware application services by further extending our visibility into the user and device domain. Its cloud-based subscription service complements F5's IP Intelligence Service, which provides a variety of similar service-based data that augments F5 customers' ability to make context-aware decisions based on security and location data. Coupled with existing application security services such as web application and application delivery firewalls, Versafe adds to the existing circle of F5 application security services comprising user, network, device, and application, while adding brand and reputation protection to an already robust security service catalog. We're excited to welcome Versafe into the F5 family and to the opportunity to expand our portfolio of delivery services.
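As an illustration of the injection technique described above (not Versafe's actual product code), here is a minimal iRule sketch of how a script reference can be inserted into HTML responses using a stream profile; the script URL is hypothetical, and the virtual server is assumed to have both an HTTP profile and a stream profile assigned.

```
when HTTP_REQUEST {
    # Ask the server for uncompressed content so the stream filter can see it
    HTTP::header remove "Accept-Encoding"
    STREAM::disable
}

when HTTP_RESPONSE {
    # Only rewrite HTML responses: inject a script tag just before </head>
    if { [HTTP::header value "Content-Type"] contains "text/html" } {
        STREAM::expression {@</head>@<script src="https://monitor.example.com/agent.js"></script></head>@}
        STREAM::enable
    }
}
```

Because the rewrite happens on the wire, nothing is installed on the endpoint and the application code itself is untouched - which is precisely why the approach scales without touching developers or users.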
More information on Versafe:
- Versafe
- Versafe | Anti-Fraud Solution (Anti Phishing, Anti Trojan, Anti Pharming)
- Versafe Identifies Significant Joomla CMS Vulnerability & Corresponding Spike in Phishing, Malware Attacks
- Joomla Exploit Enabling Malware, Phishing Attacks to be Hosted from Genuine Sites
- 'Eurograbber' online banking scam netted $47 million
Ask the Expert – Why Identity and Access Management?

Michael Koyfman, Sr. Global Security Solution Architect, shares the access challenges organizations face when deploying SaaS cloud applications. Syncing data stores to the cloud can be risky, so organizations should instead leverage their local directories and assert the user's identity to the cloud. SAML is a standardized way of asserting trust, and Michael explains how BIG-IP can act as either an identity provider or a service provider so users can securely access their workplace tools. Integration is key to solving the common problems that stand in the way of successful, secure deployments.

ps

Related:
- Ask the Expert – Are WAFs Dead?
- Ask the Expert – Why SSL Everywhere?
- Ask the Expert – Why Web Fraud Protection?
- Application Availability Between Hybrid Data Centers
- F5 Access Federation Solutions
- Inside Look - SAML Federation with BIG-IP APM
- RSA 2014: Layering Federated Identity with SWG (feat Koyfman)

Technorati Tags: f5,iam,saas,saml,cloud,identity,access,security,silva,video,AAA
Collaborate in the Cloud

Employee collaboration and access to communication tools are essential for workplace productivity. Organizations are increasing their use of Microsoft Office 365, a subscription-based service that provides hosted versions of familiar Microsoft applications. Most businesses choose Exchange Online as the first Office 365 application they adopt. The challenge with any SaaS application such as Office 365 is that user authentication is usually handled by the application itself, so user credentials are typically stored and managed in the cloud by the provider. The challenge for IT is to properly authenticate the employee (whether located inside or outside the corporate network) against a highly available identity provider (such as Active Directory).

Authentication without complexity

Even though Office 365 runs in a Microsoft-hosted cloud environment, user authentication and authorization are often accomplished by federating on-premises Active Directory with Office 365. Organizations subscribing to Office 365 may deploy Active Directory Federation Services (ADFS) on premises, which then authenticates users against Active Directory. Deploying ADFS has typically required organizations to deploy, manage, and maintain additional servers onsite, which can complicate or further clutter the infrastructure with more hardware.

SAML (Security Assertion Markup Language) is often the enabler that identifies and authenticates the user, then directs the user to the appropriate Office 365 service location to access resources. SAML-enabled applications work by accepting user authentication from a trusted third party: an identity provider. In the case of Office 365, the BIG-IP platform acts as the identity provider. For example, when a user requests his or her OWA email URL via a browser, that user is redirected to a BIG-IP logon page to validate the request (a sketch of this redirect flow appears at the end of this post). The BIG-IP system authenticates the user on behalf of Office 365 and then grants access, and the Office 365 environment recognizes the individual and presents their unique Office 365 OWA email environment. The BIG-IP platform provides a seamless experience for Office 365 users, and with the federated identity that the BIG-IP platform enables, the IT team can extend SSO capabilities to other applications.

The benefit of using the BIG-IP platform to support Office 365 with SAML is that organizations can reduce the complexity and requirements of deploying ADFS. By default, when enabling Office 365, administrators need to authenticate those users in the cloud; if an IT administrator wants to use the corporate authentication mechanism instead, ADFS must be added to the corporate infrastructure. With the BIG-IP platform, organizations can support authentication to Office 365 without ADFS, resulting in centralized access control with improved security.

Secure collaboration

Because email is a mission-critical application for most organizations, it is typically deployed on premises. Organizations using BIG-IP-enhanced Microsoft Exchange Server and Outlook can make it easier for people to collaborate regardless of their location. For example, if a company wanted to launch a product in Europe that had been successfully launched in the United States, it would need workers and contractors in both locations to be able to communicate and share information. In the past, employees may have emailed plain-text files to each other as attachments or posted them online using a web-based file hosting service.
This can create security concerns, since potentially confidential information is leaving the organization and being stored on the Internet without any protection or encryption. There are also concerns about ease of use for employees and how the lack of an efficient collaboration tool negatively impacts productivity.

Internal and external availability 24/7

To solve these issues, many organizations move from a locally managed Exchange Server deployment to Microsoft Office 365. Office 365 makes it easier for employees to work together no matter where they are in the world. Employees connect to Office 365 using only a browser, and they don't have to remember multiple usernames and passwords to access email, SharePoint, or other internal-only applications and file shares. In this scenario, an organization would deploy the BIG-IP platform in both the primary and secondary data centers. BIG-IP LTM intelligently manages all traffic across the servers. One pair of BIG-IP devices sits in front of the servers in the core network; another pair sits in front of the directory servers in the perimeter network. By managing traffic to and from both the primary and directory servers, the F5 devices ensure the availability of Office 365 for both internal and external (federated) users.

Ensuring global access

To provide for global application performance and disaster recovery, organizations should also deploy BIG-IP GTM devices in the perimeter network at each data center. BIG-IP GTM scales and secures the DNS infrastructure, provides high-speed DNS query responses, and reroutes traffic when necessary to the most available application server. Should an organization's primary data center ever fail, BIG-IP GTM would automatically reroute all traffic to the backup data center. BIG-IP GTM can also load balance the directory servers across data centers to provide cross-site resiliency. The BIG-IP platform provides the federated identity services and application availability that allow organizations to make a quick migration to Office 365, ensuring users worldwide always have reliable access to email, corporate applications, and data.

ps

Related:
- Leveraging BIG-IP APM for seamless client NTLM Authentication
- Enabling SharePoint 2013 Hybrid Search with the BIG-IP (SCRATCH THAT! Big-IP and SAML) with Office 365

Technorati Tags: office 365,o365,big-ip,owa,exchange,employee,collaborate,email,saml,adfs,silva,saas,cloud
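In practice the identity-provider behavior described above is configured through BIG-IP APM's SAML settings rather than written by hand, but the first hop of the flow - an unauthenticated request being sent to the BIG-IP logon page - can be sketched in an iRule. This is illustrative only: login.example.com is a hypothetical logon host, and MRHSession is the session cookie APM issues once a user has authenticated.

```
when HTTP_REQUEST {
    # No APM session cookie yet? Send the user to the logon page first,
    # preserving the URL they asked for so they can be returned to it.
    if { !([HTTP::cookie exists "MRHSession"]) } {
        HTTP::redirect "https://login.example.com/?orig=[URI::encode [HTTP::uri]]"
    }
}
```

Once the logon completes, APM holds the authenticated session and the SAML assertion is issued to Office 365 behind the scenes, so the user never handles credentials with the cloud provider directly.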
SaaS Creating Eventually Consistent Business Model

Our reliance on #cloud and external systems has finally trickled down (or is it up?) to the business.

The success of SOA, which grew out of the popular object-oriented development paradigm, was greatly hampered by the inability of architects to enforce its central premise of reuse. But it wasn't just the lack of reusing services that kept SOA from achieving the greatness predicted; it was also the failure to adopt the idea of an authoritative source for business-critical objects, i.e. data. A customer, an order, a lead, a prospect, a service call: these "business objects" within SOA were intended to be represented by a single, authoritative source as a means to ultimately provide a more holistic view of a customer, which could then be used by various business applications to ensure higher-quality service. It didn't turn out that way, more's the pity, and while organizations adopted the protocols and programmatic methods associated with SOA, they never really got down to the business of implementing authoritative sources for business-critical "objects".

As organizations increasingly turn to SaaS solutions, particularly for CRM and SFA (Gartner's "Market Trends: SaaS's Varied Levels of Cannibalization to On-Premises Applications", published 29 October 2012), the ability to enforce a single, authoritative source becomes even less possible. What's perhaps more disturbing is the potential inability to generate the holistic view of a customer that's so important to managing customer relationships and business processes.

The New Normal

Organizations have had to return to an integration-focused strategy in order to provide applications with the most current view of a customer. Unfortunately, that strategy often relies upon APIs from SaaS vendors who necessarily put limits on those APIs, and those limits can interfere with integration. As noted in "The Quest for a Cloud Integration Strategy", these limitations can strangle integration efforts to reassemble a holistic view of business objects as an organization grows: "...many SaaS applications have very particular usage restrictions about how much data can be sent through their API in a given time window. It is critical that as data volumes increase that the solution adequately is aware of and handles those restrictions."

Note that the integration solution must be "aware of" and "handle" the restrictions. It is nearly a foregone conclusion that these limits will eventually be hit, and there is no real way around them save paying for more, if that's even an option. While that approach certainly works for the provider - it keeps the service available - the definition of availability with respect to data is that it's, well, available. That means accessible. The existence of limitations means that at times and under certain conditions your data will not be accessible, ergo by most folks' definition it's not available. If it's not available, the ability to put together a view of the customer is pretty much out of the question. But eventually it'll get there, right? Eventually, you'll have the data. Eventually, the data you're basing decisions on, managing customers with, and basing manufacturing processes on will be consistent with reality.

Kicking Costs Down the Road - and Over the Wall

Many point to exorbitant IT costs to set up, scale, and maintain on-premises systems such as CRM. It is true that a SaaS solution is faster to stand up and likely less expensive to maintain and scale.
But it is also true that if the SaaS is unable to scale along with your business in terms of your ability to access, integrate, and analyze your own data, you're merely kicking those capital and operating expenses down the road - and over the wall to the business.

The problem of limitations on cloud integration (specifically SaaS integration) methods is not trivial. A perusal of support forums shows a variety of discussions on how to circumvent, avoid, and work around these limitations to enable timely integration of data with other critical systems upon which business stakeholders rely to carry out their daily responsibilities to each other, to their investors, and to their customers. Fulfillment, for example, may rely on data it receives as a result of integration with a SaaS; it is difficult to estimate fulfillment on data that may or may not be up to date and thus may not be consistent with the customer's view. Accounting may be relying on data it assumes is accurate but actually is not. Most SaaS systems impose a 24-hour interval in which they enforce API access limits, which may set the books off by as much as a day - or more, depending on how much of a backlog occurs. Customers may be interfacing with systems that integrate with a back-office SaaS showing incomplete order histories, payments, and deliveries, which in turn can drive up call center costs to deal with the inaccuracies. (A sketch of how such a per-window API quota works appears below.)

The inability to access critical business data has a domino effect on every other system in place. The more distributed the sources of authoritative data, the more disruptive an effect the inability to access that data due to provider-imposed limitations has on the entire business. Eventually consistent business models are not optimal, yet the massive adoption of SaaS solutions makes such a model inevitable for organizations of all sizes as they encounter artificial limitations imposed to ensure system-wide availability but not necessarily individual data accessibility. Being aware of such limitations can enable the development and implementation of strategies designed to keep data - especially authoritative data - as consistent as possible. But ultimately, any strategy is going to be highly dependent upon the provider and its ability to scale to meet demand - and loosen limitations on accessibility.
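To make the quota mechanics concrete, here is a minimal iRule sketch of the kind of per-window API limit a provider might enforce at its edge; the threshold, window, and per-client key are illustrative, not any specific vendor's policy.

```
when RULE_INIT {
    # Illustrative quota: 100 API calls per rolling 60-second window
    set static::max_calls 100
    set static::window    60
}

when HTTP_REQUEST {
    set key "api_quota_[IP::client_addr]"
    set count [table incr $key]
    if { $count == 1 } {
        # The first call starts the window; the counter expires when it ends
        table timeout $key $static::window
    }
    if { $count > $static::max_calls } {
        HTTP::respond 429 content "API rate limit exceeded" noserver \
            "Retry-After" $static::window
    }
}
```

Seen from the consumer's side, hitting this limit looks exactly like the unavailability described above: the data still exists, but for the remainder of the window it is simply not accessible, and every downstream system waits.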
Looking to 2014

2013 has been a year of major developments in the enterprise, so I thought it'd be worth putting together a few predictions for 2014. With cloud computing and BYOD both taking off in businesses after years of hype, over the next 12 months I think these trends will continue to drive further innovations, and we can expect to hear more about Bring Your Own Network - or Bring Your Own Anything - as the lines between consumer and business technology continue to blur. These trends will put real pressure on network managers and anyone working in IT for large businesses, as maintaining security, speed, and reliability becomes ever more important.

2014 will also see an explosion of XaaS markets, as Software as a Service (SaaS) is already too broad a label and must now support sub-categorisation with clearer definitions. For example, Disaster Recovery (DRaaS), Security (SECaaS), and Management (MaaS) have already been defined, and we can expect to see further granularity of definitions appearing over the next 12 months.

EMEA, in general, has been slow to adopt cloud computing, with concerns about security and regulatory compliance coming up again and again. However, with the growing popularity of cloud-based financial services like TEMENOS T24 showing how it can be done safely, we should expect this to change.

In terms of Software Defined Networks (SDN), volumes were written about the technology in 2013, but in 2014 we will finally start to see it breaking into the mainstream, with more pilot projects maturing into production environments and increased interest from a more diverse customer base. 2014 could very well be the year of Software Defined Anything (Application Services, Datacentres, Storage, etc.).

In conclusion, 2014 is set to be another exciting year for the enterprise, with lots of developments in cloud and SDN. We will see businesses becoming more innovative in the ways they work, which will bring its challenges, but will revolutionise the way we work forever.
The Future of Hybrid Cloud

You keep using that word. I do not think it means what you think it means...

An interesting and almost ancillary point was made during a recent #cloudtalk hosted by VMware vCloud with respect to the definition of "hybrid" cloud. Sure, it implies some level of integration, but how much integration is required to be considered a hybrid cloud? The way I see it, there has to be some level of integration that supports the ability to automate something - either resources or a process - in order for an architecture to be considered a hybrid cloud. A "hybrid" anything, after all, is based on the premise of joining two things together to form something new. Simply using Salesforce.com and Ceridian for specific business functions doesn't seem to qualify; they aren't necessarily integrated (joined) in any way with corporate systems, or even with processes that execute within the corporate environs.

Thus it seems to me that in order to truly be a "hybrid" cloud, there must be some level of integration. Perhaps that's simply at the process level, as is the case with SaaS providers when identity is federated as a means to reassert control over access as well as potentially provide single sign-on services. Similarly, merely launching a development or test application in a public IaaS environment doesn't really "join" anything, does it? To be classified as "hybrid" one would expect there to be network or resource integration, via such emerging technologies as cloud bridges and gateways. The same is true internally with SDN and existing network technologies. Integration must be more than "able to run in the environment"; there must be some level of orchestration and collaboration between the networking models in order to consider it "hybrid".

From that perspective, the future of hybrid cloud seems to rely upon the existence of a number of different technological solutions:

- Cloud bridges
- Cloud gateways
- Cloud brokers
- SDN (both application layer and network layer)
- APIs (to promote the integration of networks and resources)
- Standards (such as SAML, to enable orchestration and collaboration at the application layer)

Put these technologies together and you get what seems to be the "future" of a hybrid cloud: SaaS, IaaS, SDN, and traditional technology integrated at some layer that enables both the business and operations to choose the right environment for the task at hand at the time they need it. In other words, our "network" diagrams of the future will necessarily extend beyond the traditional data center perimeter and encompass both SaaS and IaaS environments. That means that as we move forward, IT and operations need to consider how such environments will fit in with existing solutions, as well as how emerging solutions will enable this type of hybrid architecture to come to fruition.

Yes, you did notice I left out PaaS. Isn't that interesting?
Inside Look - F5 Mobile App Manager

I meet with WW Security Architect Corey Marshall to get an Inside Look and detailed demo of F5's Mobile App Manager. BYOD 2.0: Moving Beyond MDM.

ps

Related:
- F5's Feeling Alive with Newly Unveiled Mobile App Manager
- BYOD 2.0 – Moving Beyond MDM with F5 Mobile App Manager
- F5 MAM Overview
- F5 BYOD 2.0 Solution Empowers Mobile Workers
- Is BYO Already D?
- Will BYOL Cripple BYOD?
- Freedom vs. Control
- F5's YouTube Channel
- In 5 Minutes or Less Series (23 videos – over 2 hours of In 5 Fun)
- Inside Look Series

Technorati Tags: f5,byod,smartphone,mobile,mobile device,saas,research,silva,security,compliance,video
In the Cloud Differentiation Means Services

And integration. Don't forget the integration.

Scott Bils has a post on the "Five Mistakes that Enterprise Cloud Service Providers are Making" over on Leverhawk. Points four and five were particularly interesting because there seems to be a synergistic opportunity between them.

Point number four from Scott: "Omitting SaaS and PaaS: Cloud infrastructure service providers have little incentive to migrate customers to public cloud SaaS offerings such as Salesforce.com or Workday. For many customers, migrating legacy apps to SaaS models will be the right answer. Many enterprise cloud service providers conveniently omit this lever from their transformation story and lose customer credibility as a result."

And point number five: "Failing to differentiate: Many vendors position themselves as providing managed services that make cloud models 'enterprise ready.' The problem is that every other vendor is saying the exact same thing. Enterprise cloud service providers need to think harder about what their distinctive customer value proposition really is."

I will not, for the sake of brevity and out of consideration for your time, offer an extensive list of my own posts on this very point, save this one. Suffice to say, differentiation of services is something I've noted in the past and continue to note, mostly because, as Scott points out, it's a problem.

What caught my eye here is the relationship between these two points, specifically the relationship between SaaS and services in IaaS. Scott is right when he points out the hyper-adoption rates of SaaS as compared to IaaS. As has often been pointed out, SaaS enjoys higher adoption rates than any other cloud model: a Gartner/Goldman Sachs Cloud CIO survey in 2011 noted that 67% of respondents "already do" SaaS, and indicated that 75% would be using SaaS by 2017 - a modest number, I think, given the rates of adoption over the past few years.

Combined with this is the interest in hybrid cloud models. While "hybrid cloud" usually refers to the marriage of on- and off-premises cloud environments, it is more generically the joining of two disparate cloud environments. That could also be the joining of two public providers, irrespective of model (SaaS, IaaS, PaaS).

What IaaS providers can do to address both points four and five simultaneously is offer services specifically designed to integrate with a variety of (at least) SaaS offerings: services that provide federation and/or SSO with Salesforce.com or Google, for example, and that differentiate the IaaS provider simply by making integration easier for the ever-increasing number of enterprises adopting SaaS solutions.

IaaS differentiation is not going to come through more varied instance sizes and configurations or through price wars. Enterprise customers understand the value - the business and operational value - of services, and pricing is less a deciding factor than just one more factor in the overall equation. The value offered by pre-integrated services that make building a hybrid cloud easier, faster, and more reliable carries more weight than "is it cheap." Vendors who offer APIs for the purposes of external control and, ultimately, integration know this already. While having an API has become table stakes, what customers value more is the availability of pre-integrated, pre-tested, and validated integration with other enterprise-class systems.
Having an API is great, but having existing, validated integration with VMware vCD, for example, is of considerable value and differentiates one solution from another. IaaS providers would do well to consider how providing similar services - pre-integrated and validated - would immediately differentiate their entire offering and provide the confidence and incentive for customers to choose their service over another.