Applying Scalability Patterns to Infrastructure Architecture
Too often software design patterns are overlooked by network and application delivery network architects, but these patterns are often equally applicable to addressing a broad range of architectural challenges in the application delivery tier of the data center.

The “High Scalability” blog is fast becoming one of my favorite reads. Last week did not disappoint, with a post highlighting a set of scalability design patterns that was, apparently, inspired by yet another High Scalability post on “6 Ways to Kill Your Servers: Learning to Scale the Hard Way.”

This particular post caught my attention primarily because although I’ve touched on many of these patterns in the past, I’ve never thought to call them what they are: scalability patterns. That’s probably a side effect of forgetting that building an architecture of any kind is at its core computer science, and thus algorithms and design patterns are applicable to both micro- and macro-architectures, such as those used when designing a scalable architecture.

This is actually more common than you’d think, as it’s rarely the case that a network guy and a developer sit down and discuss scalability patterns over beer and deep-fried cheese curds (hey, I live in Wisconsin and it’s my blog post, so just stop making faces until you’ve tried it). Developers and architects sit over there and think about how to design a scalable application from the perspective of its components – databases, application servers, middleware, etc. Network architects sit over here and think about how to scale an application from the perspective of network components – load balancers, trunks, VLANs, and switches.

The thing is that the scalability patterns leveraged by developers and architects can almost universally be abstracted and applied to the application delivery network – the set of components integrated as a means to ensure availability, performance, and security of applications. That’s why devops is so important, and why devops has to bring dev into ops as much as it’s necessary to bring some ops into dev. There needs to be more cross-over, more discussion, between the two groups – if not an entirely new group – in order to leverage the knowledge and skills that each has in new and innovative ways.

ABSTRACT and APPLY

The aforementioned post is just a summary of a longer and more detailed post, but for purposes of this post I think the summary will do, with the caveat that the original, “Scalability patterns and an interesting story...” by Jesper Söderlund, is a great read that should definitely be on your “to read” list in the very near future. For now, let’s briefly touch on the scalability patterns and sub-patterns Jesper described, with some commentary on how they fit into scalability from a network and application delivery network perspective. The original descriptions from the High Scalability blog introduce each pattern below.

Load distribution - Spread the system load across multiple processing units

This is a well-understood horizontal scaling strategy. It may take the form of “clustering” or “load balancing,” but in both cases it is essentially an aggregation coupled with a distributed processing model.
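As a rough illustration of the pattern (and not any particular product’s implementation), here’s a minimal Python sketch of an aggregation point spreading requests across multiple processing units. The server names are hypothetical, and round robin and least connections are just two common choices among many:

```python
import itertools

# A hypothetical pool of "processing units" behind the aggregation point
servers = ["app-01", "app-02", "app-03"]

# Round robin: rotate through the pool in a fixed order
_rotation = itertools.cycle(servers)
def round_robin(request):
    return next(_rotation)

# Least connections: pick the unit currently handling the fewest
# requests, using counts the aggregation point tracks itself
active_connections = {server: 0 for server in servers}
def least_connections(request):
    return min(active_connections, key=active_connections.get)
```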
The secret sauce is almost always in the way in which the aggregation point (strategic point of control) determines how best to distribute the load across the “multiple processing units.”

load balancing / load sharing - Spreading the load across many components with equal properties for handling the request

This is what most people think of when they hear “load balancing”; it’s just that at the application delivery layer we think in terms of directing application requests (usually HTTP, but it can be just about any application protocol) to equal “servers” (physical or virtual) that handle the request. This is a “scaling out” approach that is most typically associated today with cloud computing and auto-scaling: launch additional clones of applications as virtual instances in order to increase the total capacity of an application. The load balancer distributes requests across all instances based on the configured load balancing algorithm.

Partitioning - Spreading the load across many components by routing an individual request to a component that owns that specific data

This is really where the architecture comes in and where efficiency and performance can be dramatically increased in an application delivery architecture. Rather than each instance of an application being identical to every other one, each instance (or pool of instances) is designated as the “owner” of a specific type of request or data. This allows devops to tweak configurations of the underlying operating system and of web and application server software for the specific type of request being handled. This is also where the difference between “application switching” and “load balancing” becomes abundantly clear, as “application switching” is used as a means to determine where to route a particular request, which is (or can be) then load balanced across a pool of resources. It’s a subtle distinction but an important one when architecting delivery networks that are not only efficient and fast but resilient and reliable.

Vertical partitioning - Spreading the load across the functional boundaries of a problem space, separate functions being handled by different processing units

When it comes to routing application requests we really don’t separate by function unless that function is easily associated with a URI. The most common implementation of vertical partitioning at the application switching layer will be by content. Example: creating resource pools based on the Content-Type HTTP header: images in pool “image servers” and content in pool “content servers.” This allows for greater optimization of the web/application server based on the usage pattern and the content type, which can often also be related to a range of sizes. In a distributed environment, this also allows architects to leverage, say, cloud-based storage for static content while maintaining dynamic content (and its associated data stores) on-premise. This kind of hybrid cloud strategy has been postulated as one of the most common use cases since the first wispy edges of cloud were seen on the horizon.

Horizontal partitioning - Spreading a single type of data element across many instances, according to some partitioning key, e.g. hashing the player id and doing a modulus operation, etc. Quite often referred to as sharding.

This sub-pattern is in line with the way in which persistence-based load balancing is accomplished, as well as the handling of object caching.
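A minimal sketch of that hash-and-modulus approach, where the partitioning key might be a player id or a value pulled from a cookie; the shard names here are hypothetical:

```python
import hashlib

# Hypothetical shards, each a pool of instances owning a slice of the data
shards = ["pool-a", "pool-b", "pool-c", "pool-d"]

def shard_for(partition_key: str) -> str:
    # Hash the partitioning key and take the result modulo the number
    # of shards, so a given key always maps to the same shard
    digest = hashlib.md5(partition_key.encode("utf-8")).hexdigest()
    return shards[int(digest, 16) % len(shards)]

print(shard_for("player-31337"))  # same key, same shard, every time
```

That stable key-to-instance mapping is also what makes persistence-based load balancing work: as long as the key doesn’t change, the same client keeps landing on the same instance.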
This also describes the way in which you might direct requests received from specific users to designated instances that are specifically designed to handle their unique needs or requirements, such as the separation of “gold” users from “free” users based on some partitioning key, which in HTTP land is often a cookie containing the relevant data.

Queuing and batch - Achieve efficiencies of scale by processing batches of data, usually because the overhead of an operation is amortized across multiple requests

I admit defeat in applying this sub-pattern to application delivery. I know, you’re surprised, but this really is very specific to middleware, and aside from the ability to leverage queuing for Quality of Service (QoS) at the delivery layer, this one is just not fitting in well. If you have an idea how this fits, feel free to let me know – I’d love to be able to apply all the scalability patterns and sub-patterns to a broader infrastructure architecture.

Relaxing of data constraints - Many different techniques and trade-offs with regards to the immediacy of processing / storing / access to data fall in this strategy

This one takes us to storage virtualization and tiering, and the way in which data storage and access is intelligently handled with varying properties based on usage and prioritization of the content. If one relaxes the constraints around access times for certain types of data, it is possible to achieve more efficient use of storage by relegating some content to secondary and tertiary tiers which may not have the same performance attributes as your primary storage tier. And make no mistake, storage virtualization is a part of the application delivery network – it has been since its inception – and as cloud computing and virtualization have grown, so has the importance of a well-defined storage tiering strategy.

We can bring this back up to the application layer by considering that a relaxation of data constraints with regard to immediacy of access can be applied by architecting a solution that separates data reads from writes (sketched below). This implies eventual consistency, as data updated/written to one database must necessarily be replicated to the databases from which reads are, well, read, but that’s part of relaxing a data constraint. This is a technique used by many large, social sites such as Facebook and Plenty of Fish in order to scale the system to the millions upon millions of requests it handles in any given hour.

Parallelization - Work on the same task in parallel on multiple processing units

I’m not going to be able to apply this one either, unless it were in conjunction with optimizing something like MapReduce and SPDY. I’ve been thinking hard about this one, and the problem is the implication that the “same task” is really the same task, and that processing is distributed. That said, if the actual task can be performed by multiple processing units, then an application delivery controller could certainly be configured to recognize that a specific URL should essentially be sent to some other proxy/solution that performs the actual distribution, but the processing model here deviates sharply from the request-reply paradigm under which most applications today operate.
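As promised, a minimal sketch of the read/write separation described under relaxing of data constraints. The endpoint names are hypothetical, and the replication that keeps the replicas eventually consistent is assumed to happen elsewhere:

```python
import random

# Hypothetical database endpoints: one primary for writes, replicas for reads
PRIMARY = "primary.db.example.com"
REPLICAS = ["replica-1.db.example.com", "replica-2.db.example.com"]

def route_statement(sql: str) -> str:
    # Writes must go to the primary; reads may go to any replica, which
    # relaxes the immediacy-of-access constraint (eventual consistency)
    if sql.lstrip().upper().startswith(("INSERT", "UPDATE", "DELETE")):
        return PRIMARY
    return random.choice(REPLICAS)

print(route_statement("SELECT name FROM players"))        # a replica
print(route_statement("UPDATE players SET rank = 1"))     # the primary
```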
DEVOPS CAN MAKE THIS HAPPEN

I hate to sound off too much on the “devops” trumpet, but one of the primary ways in which devops will be of significant value in the future is exactly in this type of practical implementation. Only by recognizing that many architectural patterns are applicable not only to application architecture but to infrastructure architecture can we start to apply a whole lot of “lessons that have already been learned” by developers and architects to emerging infrastructure architectural models. This abstraction and application of well-understood patterns from application design and architecture will be invaluable in designing the new network: the next iteration of network theory and implementation that will allow it to scale along with the applications it is delivering.

Related blogs & articles:
Cloud is not Rocket Science but it is Computer Science
Implementing SOA Patterns: The Router
Implementing SOA Patterns: The Service Firewall
Implementing SOA Patterns: Input/Output Validator
Lori MacVittie - interstitial request pattern (AJAX)
Business-Layer Load Balancing
I Find Your Lack of Win Disturbing
Cloud Computing: Vertical Scalability is Still Your Problem
Vertical Scalability Cloud Computing Style
Scalability Only One Half the Reliability Equation
Automating scalability and high availability services
Service Virtualization Helps Localize Impact of Elastic Scalability
Web 2.0: Integration, APIs, and Scalability
To Take Advantage of Cloud Computing You Must Unlearn, Luke.
Statistics Collection and Management Pack Scalability

Cloud is not Rocket Science but it is Computer Science
That doesn’t mean it isn’t hard – it means it’s a different kind of hard. For many folks in IT, it’s likely that somewhere in their home hangs a diploma. It might be a BA, it might be a BS, and you might even find one (or two) “Master of Science” as well. Interestingly enough, none of those diplomas indicate anything other than the level of education (Bachelor or Master) and the type (Arts or Science). But we all majored in something, and for many of the people who end up in IT that something was Computer Science. There was not, after all, an option to earn an “MS of Application Development” or a “BS of Devops.” While many higher education institutions offer students the opportunity to emphasize a particular sub-field of Computer Science, that’s not what the final degree is in. It’s almost always a derivation of Computer Science.

Yet when someone – anyone, regardless of technological competency – asks you what you do, you don’t reply “I’m a computer scientist.” You reply “I’m a sysadmin” or “I’m a network architect” or “I’m a Technical Marketing Manager” (which, in the technological mecca of the midwest that is Green Bay, gets some very confused expressions in response). We don’t describe ourselves as “computer scientists” even though by education that’s what we are. And what we practice is, no matter what our focus is at the moment, computer science. The scripts, the languages, the compilers, the technology – they’re just tools. They’re a means to an end.

CLOUD is COMPUTER SCIENCE

The definition of computer science includes the word “computer” as a means to limit the field of study. It is not intended to limit the field to a focus on computers, and ultimately we should probably call it “computing science” because that would free us from the artificial focus on specific computing components.

Computer science or computing science (sometimes abbreviated CS) is the study of the theoretical foundations of information and computation, and of practical techniques for their implementation and application in computer systems. It is frequently described as the systematic study of algorithmic processes that create, describe, and transform information. [emphasis added]
-- Wikipedia, “Computer Science”

Interestingly enough, Christofer Hoff recently made this same observation in a perhaps more roundabout but absolutely valid arrangement of words:

Cloud is only rocket science if you’re NASA and using the Cloud for rocket science. Else, for the rest of us, it’s an awesome platform upon which we leverage various opportunities to improve the way in which we think about and implement the practices and technology needed to secure the things that matter most to us. [emphasis added]
-- /Hoff, “Hoff’s 5 Rules Of Cloud Security…”

Hoff is speaking specifically to security, but you could just as easily replace “secure” with “deliver” or “integrate” or “automate.” It’s not about the platform; it’s about the way in which we leverage and think about and implement solutions. It is, at its core, about an architecture: a way of manipulating data and delivering it to a person so that it can become information. It’s computing science – the way in which we combine and apply compute resources, whether network or storage or server, to solve a particular (business) problem. That is, in a nutshell, the core of what cloud computing really is.
It’s “computer science” with a focus on architecting a system by which the computing resources necessary to secure, optimize, and deliver applications can be achieved most efficiently.

COMPONENTS != CLOUD

Virtualization, load balancing, and server time-sharing are not original concepts. Nor are the myriad infrastructure components that make up the network, the application delivery network, and the storage network. Even most of the challenges are not really “new”; they’re just instantiations of existing challenges (integration, configuration management, automation, and IP address management) made larger and more complex by the sheer volume of systems being virtualized, connected, and networked.

What is new are the systems and architectures that tie these disparate technologies together to form a cohesive operating environment in which self-service IT is a possibility and the costs of managing all those components are much reduced. Cloud is about the infrastructure, and about how the rest of the infrastructure and the applications collaborate, integrate, and interact with the ecosystem in order to deliver applications that are available, fast, secure, efficient, and affordable.

The cost efficiency of cloud comes from its multi-tenant model – sharing resources. The operational efficiency, however, comes from the integration and collaborative nature of its underlying infrastructure. It is the operational aspects of cloud computing that make self-service IT possible, that enable point-and-click provisioning of services. That infrastructure is comprised of common components that are not new or unfamiliar to IT, but the way in which it interacts and collaborates with its downstream and upstream components is new. That’s the secret sauce, the computer science of cloud.

WHY is THIS IMPORTANT to REMEMBER

It is easy to forget that the networks and application architectures that make up a data center or a cloud are founded upon the basics we learned from computer science. We talk about things like “load balancing algorithms” and choosing the best one to meet business or technical needs, but we don’t really consider what that means: the configuration decisions we’re making are ultimately choices between well-known and broadly applicable algorithms, some of which carry very real availability and performance implications. When we try to automate capacity planning (elastic scalability) we’re really talking about codifying a decision problem in algorithmic fashion. We may not necessarily use formal statements and proofs to explain the choice of one algorithm over another, or the choice to design a system this way instead of that, but that formality and the analysis of our choices is something that’s been going on all along, albeit perhaps subconsciously.
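To make the capacity planning example concrete, here’s a minimal sketch of elastic scalability codified as a decision problem; the thresholds and instance limits are purely illustrative assumptions, not anyone’s recommended defaults:

```python
# Hypothetical policy values; real ones come from capacity planning
SCALE_UP_THRESHOLD = 0.80    # average utilization above this: add capacity
SCALE_DOWN_THRESHOLD = 0.30  # average utilization below this: remove capacity
MIN_INSTANCES, MAX_INSTANCES = 2, 20

def desired_instances(avg_utilization: float, current: int) -> int:
    # The "algorithm" behind auto-scaling is just a codified decision:
    # given the observed load, how many instances should be running?
    if avg_utilization > SCALE_UP_THRESHOLD and current < MAX_INSTANCES:
        return current + 1
    if avg_utilization < SCALE_DOWN_THRESHOLD and current > MIN_INSTANCES:
        return current - 1
    return current
```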
The phrase “it isn’t rocket science” is generally used to imply that “it” isn’t difficult or doesn’t require special skills. Cloud is not rocket science, but it is computer science, and it will be necessary to dive back into some of the core concepts associated with computer science in order to design the core systems and architectures that as a whole are called “cloud.” We (meaning you) are going to have to make some decisions, and many of them will be impacted – whether consciously or not – by the core foundational concepts of computer science. Recognizing this can do a lot to avoid the headaches of trying to solve problems that are, well, unsolvable, and to point you in the direction of existing solutions that will serve you well as you continue down the path to dynamic infrastructure maturity, a.k.a. cloud.