Network Virtualization: Instances versus Tenants
#SDN #Cloud #VCMP #SDAS #IoT

Technology shifts are creating a lot of chaos, including in the way we use words. Cloud. SDN. Multi-tenant. Instances. They're all inter-related and seem to have different meanings depending on who's trying to sell you what today. That's more than a tad disconcerting, because you know what you mean when you say "multi-tenant," but other people (trying to sell you stuff) may have a different definition. And that means when you ask about it and they say yes, you may not be getting what you expected - and that's not good for either end of the transaction. So let's talk network virtualization today, particularly with respect to the difference between "instances" and "tenants."

Instance

An instance, now a common part of technology's growing vernacular, stems from the need to separate the physical from the virtual, a la server virtualization. Because "server" is used to describe about fifty different things - all in the realm of technology - it became necessary to distinguish between an application "server" and an application "instance" to avoid confusion. Thus, an instance is often shorthand for virtual machine or virtual instance, and essentially describes a container of functionality. For example, if I refer to an "instance" of BIG-IP, I mean a virtual machine in which the BIG-IP platform is running. Note that this says nothing about the underlying hardware, which could be COTS or cloud or purpose-built hardware. That's because one of the characteristics of virtualization is abstraction, and its benefits are generally derived from the fact that it decouples the "solution" from the underlying resource provider (the hardware).

Now, that's an instance. Confusion generally comes in when we start adding multi-tenancy to the discussion, which, of course, is a requirement for modern architectures and deployment environments.

Multi-tenancy

The basic principles of multi-tenancy are similar to those of an apartment complex.
Multiple tenants, all with their own isolated "living space," cohabitate within the same physical space. This enables the tenants to share the cost of the infrastructure (the physical structure) and thus lower the overall cost of living. In technological terms, the same concept applies. We want to allow multiple tenants (applications) to share the cost of the infrastructure and thus lower the overall costs of delivery (all the services you have to have to make sure the application is secure, reliable, and available). Multi-tenancy in infrastructure enables multiple tenants to cohabitate while being assured they can manage their own space in an isolated, secure fashion.

The way this is achieved is to segment each instance into isolated domains, usually on a per-application basis. Depending on specific architectural, regulatory, or business requirements, a single instance can be treated as equal to a single tenant. But more often than not, a single instance is segmented into multiple tenant domains to enable greater sharing of costs. The end result should be: the more tenants, the lower the costs.*

The reason this is important is that applications require greater diversity in network policies with respect to performance, availability, and access. The days of applying the same set of network policies to web applications A and B are pretty much over. The coming of the Internet of Things is going to force highly differentiated policies to be put in place on a per-application basis. That means infrastructure needs to provide multi-tenant instances able to go far beyond the simple "tenant = instance" assumption that is frequently made when discussing network virtualization, because the number of applications rising to support new business models and take advantage of opportunities is only going to increase in the next few years.

So be careful with your words as you start to lay the network foundation you're going to need to succeed in the coming years.
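The instance-versus-tenant distinction above can be made concrete with a small sketch. This is illustrative only - the class and field names are invented, not any vendor's API - but it captures the model: one instance hosts many isolated tenant domains, and adding tenants spreads the instance's fixed cost.

```python
# Illustrative sketch: one instance, many tenant domains.
# All names here are hypothetical; this models the concept,
# not any particular product's configuration interface.

class Instance:
    """A virtual instance (e.g., a VM running a delivery platform)."""
    def __init__(self, name, monthly_cost):
        self.name = name
        self.monthly_cost = monthly_cost
        self.tenants = {}  # tenant name -> isolated per-tenant config

    def add_tenant(self, name, policy):
        # Each tenant gets its own isolated configuration domain,
        # usually one per application.
        if name in self.tenants:
            raise ValueError(f"tenant {name!r} already exists")
        self.tenants[name] = dict(policy)

    def cost_per_tenant(self):
        # The more tenants sharing the instance, the lower the
        # cost per tenant -- the apartment-complex economics.
        return self.monthly_cost / max(len(self.tenants), 1)

bigip = Instance("bigip-01", monthly_cost=3000)
bigip.add_tenant("web-app-a", {"tls": True, "rate_limit": 1000})
bigip.add_tenant("web-app-b", {"tls": True, "rate_limit": 50})

print(bigip.cost_per_tenant())  # 1500.0 -- two tenants split one instance
```

Note that in this sketch "tenant = instance" would simply mean an `Instance` with a single entry in `tenants` - the degenerate case the article argues infrastructure must move beyond.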
Make sure you know exactly what the person on the other side of the table means when they say "multi-tenant instance," and make sure it will be able to support the way in which you're going to need to deliver all those new applications.

* Assuming the associated business model can achieve the economies of scale required by modern architectures. Many cannot.

Multi-Tenancy Requires More Than Just Isolating Customers
Multi-tenancy encompasses the management of heterogeneous business, technical, delivery, and security models.

Last week, during what was certainly an invigorating if not agonizingly redundant debate regarding the value of public versus private cloud computing, it was suggested that perhaps if we'd just refer to "private cloud" computing as "single-tenant cloud" all would be well. I could point out that we've been over this before, and that the value proposition of shared infrastructure internal to an "organization" is the sharing of resources across projects, departments, and lines of business, all of which are endowed with their very own budgets. There are "customer"-level distinctions to be made internal to an organization, particularly a large one, that may perhaps be lost on those who've never been (un)fortunate enough to work within the trenches of an actual enterprise IT organization. The problem is larger than that, however, and goes far beyond the simplistic equating of "line of business" with "company". Both still assume that tenant is analogous to business (customer in the eyes of a public cloud provider), and that's simply not always the case.

THE TYPE of CLOUD DETERMINES the NATURE of the TENANT

Certainly in certain types of clouds, specifically a SaaS (Software as a Service) offering, the heterogeneity of the tenancy is at the customer level. But as you dive down the cloud "stack" from SaaS –> PaaS –> IaaS, you'll find that the "tenant" being managed changes. In a SaaS, of course, the analogy holds true – to an extent. It is business unit and financial obligation that define a "tenant", but primarily because SaaS focuses on delivering one application, and "customer" at that point becomes the only real way to distinguish one from another.
An organization that is deploying a similar on-premise SaaS may in fact be multi-tenant simply by virtue of supporting multiple lines of business, all of whom have individual financial responsibility and in many cases may be financially independent from the "mothership." Tenancy becomes more granular and, at the very bottom layer, at IaaS, you'll find that the tenant is actually an application, and that each one has its own unique set of operational and infrastructure needs. Two applications, even though deployed by the same organization, may have completely different – and sometimes conflicting – sets of parameters under which they must be deployed, secured, delivered, and managed.

A truly "multi-tenant" cloud (or any other multi-tenant architecture) recognizes this. Any such implementation must be able to differentiate between applications, either by applying the appropriate policy or by routing through the appropriate infrastructure such that the appropriate policies are automatically applied by virtue of having traversed the component. The underlying implementation is not what defines an architecture as multi-tenant; it's how it behaves. When you consider a high-level architectural view of a public cloud versus an on-premise cloud, it should be fairly clear that the only thing that really changes between the two is who is getting billed. The same requirements regarding isolation, services, and delivery on a per-application basis remain.

THE FUTURE VALUE of CLOUD is in RECOGNIZING APPLICATIONS as INDIVIDUAL ENTITIES

This will become infinitely more important as infrastructure services begin to provide differentiation for cloud providers. As different services are able to be leveraged in a public cloud computing environment, each application will become more and more its own entity, with its own infrastructure and thus metering and ultimately billing.
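The "tenant = application" idea can be sketched in a few lines. This is a hypothetical illustration - the application names, policy fields, and thresholds are invented - showing two applications from the same organization with conflicting delivery policies, each metered individually.

```python
# Hypothetical sketch of "tenant = application" differentiation.
# Application names and policy fields are invented for illustration.

POLICIES = {
    # Two apps from the SAME organization, with conflicting needs:
    "crm-app":   {"min_tls": "1.2", "waf": True,  "cache_ttl": 0},
    "media-app": {"min_tls": "1.0", "waf": False, "cache_ttl": 3600},
}
DEFAULT_POLICY = {"min_tls": "1.2", "waf": True, "cache_ttl": 60}

usage = {}  # per-application metering, the basis for per-app billing

def handle_request(app, num_bytes):
    """Apply the application's own policy and meter its usage."""
    policy = POLICIES.get(app, DEFAULT_POLICY)
    usage[app] = usage.get(app, 0) + num_bytes
    return policy

assert handle_request("crm-app", 1024)["waf"] is True
assert handle_request("media-app", 4096)["cache_ttl"] == 3600
print(usage)  # {'crm-app': 1024, 'media-app': 4096}
```

Per-application metering of this kind is what turns each application into its own billable entity, regardless of which organization "owns" it.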
This is ultimately the way cloud providers will be able to grow their offerings and differentiate from their competitors – their value-added services in the infrastructure that delivers applications powered by on-demand compute capacity. The tenants are the applications, not necessarily the organization, because the infrastructure itself must support the ability to isolate each application from every other application. Certainly a centralized management and billing framework may allow customers to manage all their applications from one console, but in execution the infrastructure – from the servers to the network to the application delivery network – must be able to differentiate and treat each individual application as its own, unique "customer".

And there's no reason an organization with multiple internal "customers" can't – or won't – build out an infrastructure that is ultimately a smaller version of a public cloud computing environment that supports such a business model. In fact, they will – and they'll likely be able to travel the path to maturity faster because they have a smaller set of "customers" for which they are responsible. And this, ultimately, is why the application of the term "single-tenant" to an enterprise-deployed cloud computing environment is simply wrong. It ignores that differentiation in a public IaaS cloud is (or should be) at the same level of the hierarchy as in an internal IaaS cloud.

CLOUD COMPUTING is ULTIMATELY a DEPLOYMENT and DELIVERY MODEL

Dismissing on-premise cloud as somehow less sophisticated because its customers (who are billed in most organizations) are more granular is naive or ignorant, perhaps both. It misses the fact that while public cloud bills only by customer, its actual delivery model is per-application, just as it would be in the enterprise. And it is certainly insulting to presume that organizations building out their own on-premise cloud don't face the same challenges and obstacles as cloud providers.
In most cases the challenges are the same, simply on a smaller scale. For the largest of enterprises – the Fortune 50, for example – the challenges are actually more demanding because they, unlike public cloud providers, have myriad regulations with which they must comply while simultaneously building out essentially the same architecture. Anyone who has worked inside a large enterprise IT shop knows that most inter-organizational challenges are also intra-organizational challenges. IT even talks in terms of customers; their customers may be internal to the organization, but they are treated much the same as in any provider-customer relationship. And when it comes to technology, if you think IT doesn't have the same supply-chain management issues, the same integration challenges, the same management and reporting issues as a provider, then you haven't been paying attention.

Dividing up a cloud by people makes little sense, because the reality is that the architectural model divides resources up by application. Ultimately that's because cloud computing is used by applications, not people or businesses.

Like Load Balancing, WAN Optimization is a Feature of Application Delivery
Convergence, consolidation, and common sense.

When WAN optimization was getting its legs under it as a niche in the broader networking industry, it got a little boost from the fact that remote/branch office connectivity was the big focus of data centers and C-level execs in the enterprise. Latency and congested WAN links between corporate data centers and remote offices around the globe were the source of lost productivity. The obvious solution – get thee a fatter pipe – was at the time far too expensive a proposition and, in some cases, not a feasible option. We'd had bandwidth management and other asymmetric solutions in the past, and while they worked well enough for web-based content, the problem now was fat files and the transfer of "big data" across the WAN. We needed something else.

TOO MUCH DATA? JUST MAKE LESS of IT

The problem, it was posited, was simply that there was too much data to traverse the constrained network links tying organizations to remote offices, and thus the answer, logically, was to do away with trying to juggle it all in some sort of priority order and simply make less data. A sound proposition, one that was nearly simultaneously gaining traction on the consumer side of the equation in the form of real-time web application data compression. Here we are, many years later, and the proposition is still sound: if the problem is limited bandwidth in the face of applications and their ever-growing data girth, then it behooves the infrastructure to reduce the size of that data as much as possible. This solution – whether implemented through traditional compression techniques, data deduplication, or optimization of transport and application protocols – is effective. It produces faster response times and thus the appearance, at least, of more responsive applications.
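The "make less data" proposition is easy to demonstrate. The sketch below uses only stream compression from Python's standard library; real WAN optimizers also apply dictionary-based deduplication across flows and protocol-level optimization, which this deliberately omits. The payload is synthetic and chosen to be highly redundant, as protocol-heavy WAN traffic often is.

```python
import zlib

# Sketch of the "make less data" idea: compress a redundant payload
# before it crosses the constrained WAN link. The payload here is a
# synthetic, highly repetitive HTTP-style byte stream.

payload = b"GET /report?quarter=Q3 HTTP/1.1\r\nHost: example.com\r\n" * 200
compressed = zlib.compress(payload, level=6)

print(len(payload), len(compressed))
# Redundant data shrinks dramatically; the receiving peer must
# decompress, which is why these solutions are deployed in pairs.
assert zlib.decompress(compressed) == payload
```

The need for that matching decompression step on the far end is exactly the symmetric-pair requirement discussed later in the article.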
As the specter of intercloud and cloud computing and the need to transport ginormous data sets ("big data") in the form of data and virtual machine images continues to loom large on the horizon of most organizations, it makes sense that folks would turn to solutions that by definition are focused on the reduction of data as a means to improve performance and success in transfer across increasingly constrained networks. No argument there.

#GartnerDC Major IT Trend #2 is: 'Big Data - The Elephant in the Room'. Growth 800% over next 5 years - w/80% unstructured. Tiering critical
@ZimmerHDS Harry Zimmer

The argument begins when we start looking at the changes in connectivity between then and now. The "internet" is the primary connectivity between users and applications today, even when they're working from a "remote office." Cloud computing changes the equation from which the solution of WAN optimization was derived and renders it a less than optimal solution on its own, because it does not fit the connectivity paradigm upon which cloud computing is based – one that is increasingly unmanageable on both ends of the pipe. Luckily, decreasing data size is just one of many methods that can be used to improve application performance, and it should be used in conjunction with those other methods based on context.

INCREASINGLY IRRELEVANT to APPLICATION CONSUMERS

Because of the way in which WAN optimization solutions work (in pairs), they are generally the last hop in the corporate network and the first hop into the remote network. This is a static implementation, one that leaves little flexibility. It also assumes the existence of a matching WAN optimization solution – whether hardware- or software-deployed – on the other end of the pipe.
This is not a practical implementation for the most constrained and fastest-growing environments – mobile devices – because as an organization you have very little control over the endpoint (device) in the first place (consider the consumerization of IT) and absolutely no control over the network on which it operates. A traditional WAN optimization solution may be able to help specific classes of mobile devices if the user has installed the appropriate "soft client" that allows the WAN optimization solution to do its data deduplication trick. That's feasible for corporate users over whom you have control. But what about the millions of end-users out there on iPhones, BlackBerries, and tablets over whom you do not have control? They are just as important, and it is performance on which your organization/offering/solution will be judged by them. They're an impatient lot, according to both Amazon and Google – and there are no studies to indicate those conclusions are wrong. Mobile devices, meanwhile, have garnered enough mindshare to be awarded the right to run even the most stolid of enterprise applications:

Senior IT executives plan to make CRM, ERP and proprietary apps available to mobile devices
Ellen Messmer, Network World

Roughly 75% of senior IT executives plan to make internal applications available to employees on a variety of smartphones and mobile devices, according to new research from McAfee's Trust Digital unit. In particular, 57% of respondents said they intend to mobilize beyond e-mail and make CRM, ERP and proprietary in-house applications available to mobile devices. In addition, 45% are planning to support the iPhone and Android smartphones due to employee demand, even though many of these organizations already support BlackBerry devices.

Even if the end-user is not using a mobile device, it's likely that their connection to the Internet exhibits very different characteristics than those experienced by corporate end-users.
While download "speeds" have been increasing in the consumer market, we know there's a difference between throughput and bandwidth, and that there is a relationship between the ability of the servers to serve and of the consumers to consume. That relationship is often impeded by congestion, packet loss, endpoint resource constraints, and the shared nature of broadband networks. It is simply no longer the case that we can assume ownership of any kind over the endpoint, and certainly not over the network on which it resides.

And then you've got cloud. Cloud, oh cloud, wherefore art thou cloud? If you can deploy WAN optimization as a virtual network appliance, then you have to be careful to choose a cloud that supports whatever virtualization platform the vendor currently supports. If you've already invested time and effort in a cloud provider and only later determined you need WAN optimization to improve the increased traffic between you and the provider (over the open, unmanaged Internet), you may be in for an unpleasant surprise.

CONTEXT. IT ALWAYS COMES BACK to CONTEXT

But the even larger problem with WAN optimization as an individual solution is that it loses context. It assumes that it will always need to do its thing on the data. It's generally automatic, with very little intelligence built into it. The architecture on which such point solutions were developed is not the same data center architecture we're working with today. As we continue to push the envelope of cloud computing and how it integrates with our data center architectures, we find that it may be the case that a user on the LAN is directed to a cloud-hosted application while a user on the WAN is directed to the local corporate data center. In both cases it is (today) difficult to leverage a symmetric WAN optimization solution, because in the first case you have little control over the infrastructure deployment and in the latter you probably have no control over the user's network endpoint.
What you need is a solution that is aware it is symmetric when it is, and asymmetric when it isn't. Atop that, you need a solution that can simultaneously service both users while providing the best possible response time by applying the appropriate optimization and acceleration policies to their responses. That's context, on-demand. It's about the application and the user and the network; it's a collaborative, integrated, unified method of applying delivery policies in real-time. It's not about simply decreasing the amount of data. That's just one of many varied techniques and methods of improving performance. As with compression, it's possible that introducing WAN optimization into the flow of data might actually impede performance, because the task of deduplication may require more cycles than it would to just transfer the data across a LAN. It's also possible that, given the network conditions, decreasing size isn't enough; you may need to apply TCP optimization and acceleration to the session to improve the transfer at that time.

WAN OPTIMIZATION is PART of A BIGGER PICTURE

WAN optimization techniques are just that – techniques – and they should be applied on-demand, as necessary and as dictated by the conditions under which requests are made and responses must be delivered. It's beneficial to examine the relative importance (and applicability) of WAN optimization solutions in the context of the "big picture". That picture includes transport- and application-layer impedances that also need to be addressed, as well as the generalized difficulties in deploying such solutions in the increasingly mobile and virtual environments from which applications are being deployed. A unified approach to application delivery, which encompasses WAN optimization as a service rather than an individual solution, is better suited to interpreting context and applying the appropriate policies in a way that makes sense.
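Context-driven policy selection of this kind can be sketched as a simple decision function. Everything here is invented for illustration - the field names, the thresholds, the action names - but it shows the shape of the idea: the same delivery endpoint chooses different optimizations per request, depending on whether a symmetric peer exists, the link's latency, and the payload.

```python
# Hypothetical sketch of context-driven delivery policy. All fields,
# thresholds, and action names are invented for illustration.

def delivery_policy(ctx):
    actions = []
    if ctx.get("symmetric_peer"):          # paired WAN-opt endpoint exists
        actions.append("deduplicate")
    if ctx.get("rtt_ms", 0) > 100:         # high-latency link
        actions.append("tcp-optimize")
    if ctx.get("compressible") and ctx.get("payload_bytes", 0) > 1400:
        actions.append("compress")         # skip tiny payloads: compressing
                                           # them costs more cycles than it saves
    return actions

# Branch-office user behind a paired appliance, large response:
print(delivery_policy({"symmetric_peer": True, "rtt_ms": 180,
                       "compressible": True, "payload_bytes": 50_000}))
# -> ['deduplicate', 'tcp-optimize', 'compress']

# Mobile user on an unmanaged network, small response:
print(delivery_policy({"symmetric_peer": False, "rtt_ms": 250,
                       "compressible": True, "payload_bytes": 800}))
# -> ['tcp-optimize']
```

The point is not the particular rules but that data reduction is one branch among several, selected per request rather than applied unconditionally.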
Cloud really brings to the fore the architectural issues with most WAN optimization solutions. Because of the way they work, they must be paired (not impossible to overcome at all) and they must be the "last hop", which makes multi-tenant support an interesting proposition. Contrast that with WAN application delivery services, which recognize that WAN links (any higher-latency, constrained links, really) require different behavior at the network and transport and even application layers in terms of delivery, and you'll find that the latter makes much more sense in the dynamic, services-oriented cloud computing environments available today. It's just part of the bigger picture – the application delivery picture – and it has to become more integrated if it's going to be useful for multi-tenant, dynamic environments like cloud computing. Just as load balancing is no longer a solution of its own, WAN optimization has become a feature of a broader, holistic, unified application delivery solution.

Multi-Tenant Security Is More About the Neighbors Than the Model
Scott Sanchez recently rebutted the argument that "Cloud Isn't Secure Because It Is Multi-Tenant" by pointing out that "internal data centers are multi-tenant today, and you aren't managing them as well as a public cloud is managed." Despite the truth of that statement, his argument doesn't take into consideration that multi-tenant cloud security isn't just about the risks of the model; it's about the neighbors. After all, there's no such thing as a "renters association" that has the right to screen candidate tenants before they move in and start drinking beer on their shared, digital lawn in a public environment. When an organization implements a multi-tenant model in its own data center, the tenants are applications with the same owner. In a public cloud the tenants are still applications, but those applications are owned by any number of different organizations and, in some cases, individuals.

IT'S STILL ABOUT CONTROL

With the exception of co-location and dedicated hosting, this is essentially the same risk that caused organizations not to embrace the less expensive option of outsourcing web applications and their infrastructure. Once the bits leave the building there is a loss of control, of visibility, and of the ability to make decisions regarding what will and, more importantly, what won't run on the same machine as a critical business application. If the bits stay in the building, as Scott points out, there's still very little control afforded to the business stakeholder, but there's also less need for such concerns, because every application running in the internal data center is ultimately serving the same business.

Unlike the public clouds, the resources of the private cloud are shared only within the corporate community. They're controlled by the corporation, not a third-party vendor that has the ability to lease them to anyone it chooses.
Private Cloud Computing Takes Off in Companies Not Keen on Sharing
See full article from DailyFinance: http://srph.it/98APyI

And if somehow one of the applications in the data center – whether multi-tenant or not – manages to chew up resources or utilize so much bandwidth that other applications starve, IT can do something about it. Immediately. When everything is running in the same controlled environment, the organization has, well, more control over what's going on.

The public cloud multi-tenant model is different, because the organization's neighbors may not be Mr. Rogers; they might just be Attila the Hun. And even if they are harmless today, there's no guarantee they will be tomorrow – or in the next hour. There's no way to know whether applications of the serious business kind or of the serious making-money-phishing kind are running on or near the organization's application. And that's important, because there is very little (if any) visibility into the cloud infrastructure, which is also inherently multi-tenant and shared. There's no transparency, nothing that's offered to assuage the fears of the organization. No guarantees of bandwidth, even if the app next door starts spraying UDP packets like water from a fire-hose and saturates the physical network or any one of several intermediate network devices between the server and the boundary of the cloud provider's network. In many cases, the customer can't even be assured that its data (you know, the lifeblood of the organization) is actually isolated on the network from cloud boundary to application. They can't be certain that their data won't be silently captured or viewed by someone lucky enough to have rented out the room above their store for the night. Deploying an application that handles highly sensitive data in a public cloud computing environment is very nearly a crap shoot in terms of what kind of neighbors you'll have at any given point in the day.
THEN THERE'S BUSINESS RISK

Even if you can assure these reluctant organizations that it is completely secure, there's still business risk to contend with.

Turning to business risk, the issues are more related to operational control and certainty of policy adherence. Some companies would be very reluctant to have their ongoing operations out of their direct control, so they may insist on running their applications on their own servers located within their own data center (this issue is not cloud-specific—it is often raised regarding SaaS as well as more general cloud computing services).
The Case Against Cloud Computing, Part Two, CIO.com, Bernard Golden

Much of that business risk comes not from the technology model of multi-tenancy, but from the business model employed by cloud, i.e. open-door, no-questions-asked, rent-by-the-hour compute resources, as well as the policies that seem to prevent the level of transparency into the underlying infrastructure that might convince a few more organizations that the risks are offset by the advantages. Some organizations will remain steadfast in their refusal to leverage public cloud. It may take until cloud as an implementation reaches full maturity and provides the control they require before they will consider public cloud an option, if they ever do. They'll certainly not be swayed by arguments that simply dismiss their concerns based on the assumption that their own security practices are flawed anyway.

Related blogs & articles:
Why private clouds are surging: It's the control, stupid!
Private Cloud Model Will Win over Public Cloud Model
Cloud Today is Just Capacity On-Demand
Cloud Isn't Secure Because It's Multi-Tenant
Why IT Needs to Take Control of Public Cloud Computing
Architectural Multi-tenancy
F5 Friday: Never Outsource Control
The Other Hybrid Cloud Architecture
The Corollary to Hoff's Law
Optimize Prime: The Self-Optimizing Application Delivery Network
Cool Hand Cloud
The Three Reasons Hybrid Clouds Will Dominate

Architectural Multi-tenancy
Almost every definition of cloud, amongst the myriad definitions that exist, includes the notion of multi-tenancy, a.k.a. the ability to isolate customer-specific traffic, data, and configuration of resources using the same software and interfaces. In the case of SaaS (Software as a Service), multi-tenancy is almost always achieved via a database and configuration, with isolation provided at the application layer. This form of multi-tenancy is the easiest to implement and is a well-understood model of isolation. In the case of IaaS (Infrastructure as a Service), this level of isolation is primarily achieved through server virtualization and configuration, but generally does not yet extend throughout the data center to resources other than compute or storage. This means the network components do not easily support the notion of multi-tenancy, because the infrastructure itself is only somewhat multi-tenant capable, with varying degrees of isolation and provisioning of resources possible.

Load balancing solutions, for example, support multi-tenancy inherently through virtualization of applications, i.e. Virtual IP Addresses or Virtual Servers, but that multi-tenancy does not go as "deep" as some might like or require. This model does support configuration-based multi-tenancy, because each Virtual Server can have its own load balancing profiles and peculiar configuration, but generally speaking it does not support the assignment of CPU and memory resources at the Virtual Server layer.

One of the ways to "get around this" is believed to be the virtualization of the solution as part of the hardware solution. In other words, the hardware solution acts more like a physical server that can be partitioned via internal, virtualized instances of the solution*. That's a lot harder than it sounds for the vendor to implement, given constraints on custom hardware integration, and it adds a lot of overhead that can negatively impact the performance of the device.
Also problematic is scalability, as this approach inherently limits the number of customers that can be supported on a single device. If you're only supporting ten customers this isn't a problem, but if you're supporting ten thousand customers, this model would require many, many more hardware devices to implement. Taking this approach would also drastically slow down the provisioning process and impact the "on-demand" nature of the offering, due to acquisition time in the event that all available resources have already been provisioned.

Resource pooling. The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Examples of resources include storage, processing, memory, network bandwidth, and virtual machines.
-- NIST Cloud Computing Definition v15

And yet for IaaS to truly succeed, the infrastructure must support multi-tenancy, lest customers be left with only compute and storage resources on-demand, without the benefit of network and application delivery network components (load balancing, caching, web application firewall, acceleration, protocol security) to assist in realizing a complete on-demand application architecture. What is left as an option, then, is to implement multi-tenancy architecturally, by combining physical and virtualized networking components in a way that is scalable, secure, and fits into the provider's data center.
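The configuration-versus-resource distinction described above can be sketched in a few lines. This is a hypothetical model, not any product's actual behavior, and the names and numbers are invented: each virtual server carries its own isolated configuration, yet all of them draw from one shared pool of device memory, which is exactly the "depth" gap in configuration-based multi-tenancy.

```python
# Hypothetical sketch of configuration-level multi-tenancy on a shared
# device. Names, addresses, and numbers are illustrative only.

DEVICE_MEMORY_MB = 8192  # one pool, shared by every virtual server

virtual_servers = {
    # Configuration IS isolated: each VIP has its own profiles...
    "10.0.0.10:443": {"profile": "tls-strict", "lb_method": "least-conn"},
    "10.0.0.11:80":  {"profile": "http-basic", "lb_method": "round-robin"},
}

def memory_share():
    # ...but resources are NOT: there is no per-virtual-server CPU or
    # memory reservation in this model, only an implicit even split.
    return DEVICE_MEMORY_MB / len(virtual_servers)

print(memory_share())  # 4096.0 -- shared, not partitioned, per tenant
```

Partitioning the device into fully virtualized internal instances would replace `memory_share()` with hard per-tenant reservations, at the cost of the scalability and provisioning-speed problems the article describes.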