SOA Delivery
Layer 7 Switching + Load Balancing = Layer 7 Load Balancing
Modern load balancers (application delivery controllers) blend traditional load-balancing capabilities with advanced, application-aware layer 7 switching to support the design of a highly scalable, optimized application delivery network. Here's the difference between the two technologies, and the benefits of combining the two into a single application delivery controller.

LOAD BALANCING

Load balancing is the process of balancing load (application requests) across a number of servers. The load balancer presents to the outside world a "virtual server" that accepts requests on behalf of a pool (also called a cluster or farm) of servers and distributes those requests across all servers based on a load-balancing algorithm. All servers in the pool must contain the same content. Load balancers generally use one of several industry-standard algorithms to distribute requests. Some of the most common standard load balancing algorithms are:

round-robin
weighted round-robin
least connections
weighted least connections

Load balancers are used to increase the capacity of a web site or application, ensure availability through failover capabilities, and to improve application performance. A sketch of two of these algorithms appears below.
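To make the algorithm list concrete, here is a minimal sketch of how round-robin and least-connections selection might work. The Server class and the pool are illustrative only; no vendor's actual API is being shown.

```python
import itertools

class Server:
    def __init__(self, name, weight=1):
        self.name = name
        self.weight = weight
        self.active_connections = 0

def weighted_round_robin(pool):
    """Cycle through servers, visiting each one in proportion to its weight."""
    expanded = [s for s in pool for _ in range(s.weight)]
    return itertools.cycle(expanded)

def least_connections(pool):
    """Pick the server currently handling the fewest connections."""
    return min(pool, key=lambda s: s.active_connections)

pool = [Server("web1", weight=3), Server("web2", weight=1)]
rr = weighted_round_robin(pool)
print([next(rr).name for _ in range(4)])  # ['web1', 'web1', 'web1', 'web2']
print(least_connections(pool).name)
```

Weighted least connections simply combines the two ideas, dividing each server's connection count by its weight before taking the minimum.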
LAYER 7 SWITCHING

Layer 7 switching takes its name from the OSI model, indicating that the device switches requests based on layer 7 (application) data. Layer 7 switching is also known as "request switching", "application switching", and "content-based routing". A layer 7 switch presents to the outside world a "virtual server" that accepts requests on behalf of a number of servers and distributes those requests based on policies that use application data to determine which server should service which request. This allows the application infrastructure to be specifically tuned/optimized to serve specific types of content. For example, one server can be tuned to serve only images, another for execution of server-side scripting languages like PHP and ASP, and another for static content such as HTML, CSS, and JavaScript. Unlike load balancing, layer 7 switching does not require that all servers in the pool (farm/cluster) have the same content. In fact, layer 7 switching expects that servers will have different content, thus the need to more deeply inspect requests before determining where they should be directed. Layer 7 switches are capable of directing requests based on URI, host, HTTP headers, and anything in the application message. The latter capability is what gives layer 7 switches the ability to perform content-based routing for ESBs and XML/SOAP services.

LAYER 7 LOAD BALANCING

By combining load balancing with layer 7 switching, we arrive at layer 7 load balancing, a core capability of all modern load balancers (a.k.a. application delivery controllers). Layer 7 load balancing combines the standard load-balancing features of a load balancer with the content awareness of layer 7 switching to provide failover and improved capacity for specific types of content. This allows the architect to design an application delivery network that is highly optimized to serve specific types of content but is also highly available. Layer 7 load balancing allows for additional features offered by application delivery controllers to be applied based on content type, which further improves performance by executing only those policies that are applicable to the content. For example, data security in the form of data scrubbing is likely not necessary on JPG or GIF images, so it need only be applied to HTML and PHP.

Layer 7 load balancing also allows for increased efficiency of the application infrastructure. For example, only two highly tuned image servers may be required to meet application performance and user concurrency needs, while three or four optimized servers may be necessary to meet the same requirements for PHP or ASP scripting services. Being able to separate out content based on type, URI, or data allows for better allocation of physical resources in the application infrastructure, as the sketch below illustrates.
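Putting the two halves together, a layer 7 load balancer first switches on application data to pick a pool, then applies a standard algorithm within that pool. The pool names, sizes, and URI rules below are hypothetical, chosen to mirror the image/script example above.

```python
import itertools

# Pools sized to their workload: images need fewer servers, scripting needs more.
POOLS = {
    "image-servers":  itertools.cycle(["img1", "img2"]),
    "script-servers": itertools.cycle(["app1", "app2", "app3", "app4"]),
    "static-servers": itertools.cycle(["web1", "web2"]),
}

def route(uri):
    """Layer 7 switching: inspect the request, then load balance within the pool."""
    path = uri.lower()
    if path.endswith((".jpg", ".jpeg", ".gif", ".png")):
        pool = "image-servers"
    elif path.endswith((".php", ".asp")):
        pool = "script-servers"
    else:
        pool = "static-servers"          # HTML, CSS, JavaScript, etc.
    return pool, next(POOLS[pool])

print(route("/products/photo.jpg"))   # ('image-servers', 'img1')
print(route("/cart/checkout.php"))    # ('script-servers', 'app1')
```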
Load Balancing as an ESB Service

Most people, upon hearing the term "load balancing", immediately think of web and application servers deployed at the edge of the network. After all, that's where load balancing is most often used - to ensure that a public-facing web site is always as available and fast as possible. What many architects don't consider, however, is that in the process of deploying a SOA (Service Oriented Architecture) those same web and application servers end up residing deeper in the data center, away from the edge of the network. These web and application servers are hosting the services that make up a public-facing application or site, but aren't necessarily afforded the same consideration in terms of availability as the initial entry point into that application. This situation is compounded by the fact that there may be an ESB (Enterprise Service Bus) orchestrating that application, and that services critical to the application are not afforded the same measure of reliability as those closer to the edge of the network. While most ESBs are capable of load balancing these critical services, this capability is often limited and lacking the more robust and dynamic features offered by application delivery controllers (ADC).

Consider a public-facing service that takes advantage of an ESB: the Compliance and Shipping Services are critical to the public-facing service, as are the other services provided by the ESB. And yet in a typical implementation only the public-facing service (the Order Management Service) will be afforded the reliability and optimization provided by an application delivery controller. It may be the case that the ESB is load balancing the Compliance and Shipping services, as simple load balancing capabilities are provided by most ESBs. While you might find the load balancing capabilities adequate, consider that most ESBs lack the health monitoring capabilities of application delivery controllers in addition to being unable to balance the load based on real-time factors such as number of connections and response time, making it difficult to optimize your SOA.

The lessons learned by application delivery vendors in the past have not yet been incorporated into ESBs. Just because a server responds to an ICMP ping or can successfully open a TCP socket does not mean the application - or service - is actually running and returning valid responses. An ADC is capable of monitoring services at the application level, ensuring that not only is the service running but that it's also returning valid responses. This is something most ESBs are not capable of providing, which reduces the effectiveness of such health checks and can result in a service being treated as available even if it's malfunctioning. When it's really important, i.e. when services are critical to a critical application - such as a customer/public-facing application - then it's likely you should ensure that those back-end services are always available and optimized. An application delivery controller isn't limited to hanging out in the DMZ or on the edge of the network. In fact, an ADC can just as easily slide into your SOA and provide the same high availability, optimization, and failover services as it does for public-facing applications.
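The difference between connectivity checks and application-level health monitoring can be shown in a few lines. This is a minimal sketch, not any ADC's actual monitor implementation; the hostname, port, and expected response text are hypothetical.

```python
import socket
import urllib.request

def tcp_check(host, port, timeout=2.0):
    """The naive check: can a TCP socket be opened? Says nothing about the service."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def application_check(url, expected_text, timeout=2.0):
    """The ADC-style check: does the service return a valid response?"""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read().decode("utf-8", errors="replace")
            return resp.status == 200 and expected_text in body
    except OSError:
        return False

# A node can pass the TCP check while the service behind it returns only errors;
# only the application-level check catches that, so only it should gate routing.
print(tcp_check("shipping.internal", 8080))
print(application_check("http://shipping.internal:8080/status", "<status>OK</status>"))
```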
By integrating an application delivery controller into the depths of the data center you can let the ESB do what it's best at doing: transformation, message enrichment, and reliable messaging, and remove the burden of performing tasks that are better suited to an ADC than an ESB, such as load balancing, protocol optimization, and health monitoring. The addition of an ADC to assist the ESB and ensure reliability also provides a layer of abstraction that fits nicely into your SOA and aligns with one of SOA's primary goals: agility. SOA implementations are by their nature distributed. Composing applications from distributed services achieves agility and reuse, but introduces potential show-stopping problems usually experienced at the edge of the network into the heart of the data center. It is important to consider the impact of failure of a service on the application(s) that may be using that service, and if that application is critical, then its dependent services should also be treated as critical and worthy of the protection of an application delivery controller.

Imbibing: Orange Juice

Technorati tags: MacVittie, F5, ESB, SOA, application delivery, service oriented application delivery, load balancing
The Stealthy Ascendancy of JSON

While everyone was focused on cloud, JSON has slowly but surely been taking over the application development world.

It looks like the debate between XML and JSON may be coming to a close, with JSON poised to take the title of preferred format for web applications. If you need statistics to be convinced, consider that ProgrammableWeb indicated that its "own statistics on ProgrammableWeb show a significant increase in the number of JSON APIs over 2009/2010. During 2009 there were only 191 JSON APIs registered. So far in 2010 [August] there are already 223!" Today there are 1262 JSON APIs registered, which means a growth rate of 565% in the past eight months, nearly catching up to XML, which currently lists 2162 APIs. At this rate, JSON will likely overtake XML as the preferred format by the end of 2011.

This is significant to both infrastructure vendors and cloud computing providers alike, because it indicates a preference for a programmatic model that must be accounted for when developing services, particularly those in the PaaS (Platform as a Service) domain. PaaS has yet to grab developers' mindshare, and it may be that support for JSON will be one of the ways in which that mindshare is attracted. Consider the results of the "State of Web Development 2010" survey from Web Directions in which developers were asked about their cloud computing usage; only 22% responded in the affirmative to utilizing cloud computing. But of those 22% that do leverage cloud computing, the providers they use are telling: PaaS represents a mere 7.35% of developers' use of cloud computing, with storage (Amazon S3) and IaaS (Infrastructure as a Service) garnering 26.89% of responses. Google App Engine is the dominant PaaS platform at the moment, most likely owing to the fact that it is primarily focused on JavaScript, UI, and other utility-style services as opposed to Azure's middleware and definitely more enterprise-class focused services. SaaS, too, is failing to recognize the demand from developers and the growing ascendancy of JSON. Consider this exchange on the Salesforce.com forums regarding JSON: "Come on salesforce lets get this done. We need to integrate, we need this [JSON]."

If JSON continues its steady rise into ascendancy, PaaS and SaaS providers alike should be ready to support JSON-style integration, as its growth pattern indicates it is not going away but is instead picking up steam. Providers able to support JSON for PaaS and SaaS will have a competitive advantage over those that do not, especially as they vie for the hearts and minds of developers, which are, after all, their core constituency.

THE IMPACT

What the steady rise of JSON should trigger for providers and vendors alike is a need to support JSON as the means by which services are integrated, invoked, and data exchanged. Application delivery, service-provider, and Infrastructure 2.0 focused solutions need to provide APIs that are JSON compatible and which are capable of handling the format to provide core infrastructure services such as firewalling and data scrubbing duties. The increasing use of JSON-based APIs to integrate with external, third-party services continues to grow, and the demand for enterprise-class services to support JSON will continue to rise as well.
There are drawbacks, and this steady movement toward JSON has in some cases a profound impact on the infrastructure and architectural choices made by IT organizations, especially in terms of providing for consistency of services across what is likely a very mixed-format environment. Identity and access management and security services may not be prepared to handle JSON APIs nor provide the same services as they have for XML, which through long-established usage and efforts comes with its own set of standards. Including social networking "streams" in applications and web sites is now as common as including images, but changes to APIs may make basic security chores difficult. Consider that Twitter - very quietly - has moved to supporting JSON only for its Streaming API. Organizations that were, as well they should, scrubbing such streams to prevent both embarrassing as well as malicious code from being integrated unknowingly into their sites may have suddenly found that infrastructure providing such services no longer worked:

"API providers and developers are making their choice quite clear when it comes to choosing between XML and JSON. A nearly unanimous choice seems to be JSON. Several API providers, including Twitter, have either stopped supporting the XML format or are even introducing newer versions of their API with only JSON support. In our ProgrammableWeb API directory, JSON seems to be the winner. A couple of items are of interest this week in the XML versus JSON debate. We had earlier reported that come early December, Twitter plans to stop support for XML in its Streaming API." --JSON Continues its Winning Streak Over XML, ProgrammableWeb (Dec 2010)

Similarly, caching and acceleration services may be confused by a change from XML to JSON; from a format that was well understood and for which solutions were enabled with parsing capabilities to one that is not.

IT'S THE DATA, NOT the API

The fight between JSON and XML is one we continue to see in a general sense. See, it isn't necessarily the API that matters, in the end, but the data format (the semantics) used to exchange that data which matters. XML is considered unstructured, though in practice it's far more structured than JSON in the sense that there are meta-data standards for XML that constrain security, identity, and even application formats. JSON, however, although having been included natively in ECMA v5 (JSON data interchange format gets ECMA standards blessing), has very few standards aside from those imposed by frameworks and toolkits such as jQuery. This will make it challenging for infrastructure vendors supporting services that target application data - data scrubbing, web application firewall, IDS, IPS, caching, advanced routing - to continue to effectively deliver such applications without recognizing JSON as an option. The API has become little more than a set of URIs, and nearly all infrastructure directly related to application delivery is more than capable of handling them. It is the data, however, that presents a challenge and which makes the developers' choice of formats so important in the big picture. It isn't just the application and integration that is impacted, it's the entire infrastructure and architecture that must adapt to support the data format. The World Doesn't Care About APIs - but it does care about the data, about the model. Right now, it appears that model is more than likely going to be presented in a JSON-encoded format.
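What "scrubbing" a JSON payload means in practice can be shown with a short sketch. This is a toy illustration of the concept, not any product's scrubbing engine; the field names and payload are made up.

```python
import html
import json

def scrub(value):
    """Recursively neutralize markup in every string of a decoded JSON structure."""
    if isinstance(value, str):
        return html.escape(value)              # <script> becomes &lt;script&gt;
    if isinstance(value, list):
        return [scrub(v) for v in value]
    if isinstance(value, dict):
        return {k: scrub(v) for k, v in value.items()}
    return value                               # numbers, booleans, null pass through

raw = '{"user": "mallory", "text": "<script>alert(1)</script> nice post!"}'
safe = scrub(json.loads(raw))
print(json.dumps(safe))
# {"user": "mallory", "text": "&lt;script&gt;alert(1)&lt;/script&gt; nice post!"}
```

An infrastructure service that could only parse XML would pass the same payload through untouched, which is exactly the gap described above.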
Related blogs & articles:

JSON data interchange format gets ECMA standards blessing
JSON Continues its Winning Streak Over XML
JSON versus XML: Your Choice Matters More Than You Think
I am in your HTTP headers, attacking your application
The Web 2.0 API: From collaborating to compromised
Would you risk $31,000 for milliseconds of application response time?
Stop brute force listing of HTTP OPTIONS with network-side scripting
The New Distribution of The 3-Tiered Architecture Changes Everything
Are You Scrubbing the Twitter Stream on Your Web Site?
Infrastructure 2.0 + Cloud + IT as a Service = An Architectural Parfait

Infrastructure 2.0 ≠ cloud computing ≠ IT as a Service. There is a difference between Infrastructure 2.0 and cloud. There is also a difference between cloud and IT as a Service. But they do go together, like a parfait. And everybody likes a parfait…

The introduction of the newest member of the cloud computing buzzword family is "IT as a Service." It is understandably causing some confusion because, after all, isn't that just another way to describe "private cloud"? No, actually it isn't. There's a lot more to it than that, and it's very applicable to both private and public models. Furthermore, equating "cloud computing" to "IT as a Service" does both as big a disservice as making synonyms of "Infrastructure 2.0" and "cloud computing." These three [ concepts | models | technologies ] are highly intertwined and in some cases even interdependent, but they are not the same. In the simplest explanation possible: infrastructure 2.0 enables cloud computing, which enables IT as a service. Now that we've got that out of the way, let's dig in.

ENABLE DOES NOT MEAN EQUAL TO

One of the core issues seems to be the rush to equate "enable" with "equal". There is a relationship between these three technological concepts, but they are in no wise equivalent, nor should they be treated as such. Like SOA, the differences between them revolve primarily around the level of abstraction and the layers at which they operate. Not the layers of the OSI model or the technology stack, but the layers of a data center architecture. Let's start at the bottom, shall we?

INFRASTRUCTURE 2.0

At the very lowest layer of the architecture is Infrastructure 2.0. Infrastructure 2.0 is focused on enabling dynamism and collaboration across the network and application delivery network infrastructure. It is the way in which traditionally disconnected (from a communication and management point of view) data center foundational components are imbued with the ability to connect and collaborate. This is primarily accomplished via open, standards-based APIs that provide a granular set of operational functions that can be invoked from a variety of programmatic methods such as orchestration systems, custom applications, and via integration with traditional data center management solutions. Infrastructure 2.0 is about making the network smarter both from a management and a run-time (execution) point of view, but in the case of its relationship to cloud and IT as a Service the view is primarily focused on management. Infrastructure 2.0 includes the service-enablement of everything from routers to switches, from load balancers to application acceleration, from firewalls to web application security components to server (physical and virtual) infrastructure. It is, distilled to its core essence, API-enabled components.

CLOUD COMPUTING

Cloud computing is the closest to SOA in that it is about enabling operational services in much the same way as SOA was about enabling business services. Cloud computing takes the infrastructure layer services and orchestrates them together to codify an operational process that provides a more efficient means by which compute, network, storage, and security resources can be provisioned and managed. This, like Infrastructure 2.0, is an enabling technology. Alone, these operational services are generally discrete and are packaged up specifically as the means to an end – on-demand provisioning of IT services. Cloud computing is the service-enablement of operational services and also carries along the notion of an API.
In the case of cloud computing, this API serves as a framework through which specific operations can be accomplished in a push-button-like manner.

IT as a SERVICE

At the top of our technology pyramid, as is likely obvious at this point, we are building up to the "pinnacle" of IT by laying more aggressively focused layers atop one another: we have IT as a Service. IT as a Service, unlike cloud computing, is designed to be consumed not only by other IT-minded folks, but also by (allegedly) business folks. IT as a Service broadens the provisioning and management of resources and begins to include not only operational services but those services that are more, well, businessy, such as identity management and access to resources. IT as a Service builds on the services provided by cloud computing, which is often called a "cloud framework" or a "cloud API", and provides the means by which resources can be provisioned and managed. Now that sounds an awful lot like "cloud computing", but the abstraction is a bit higher than what we expect with cloud. Even in a cloud computing API we are still interacting more directly with operational and compute-type resources. We're provisioning, primarily, infrastructure services, but we are doing so at a much higher layer and in a way that makes it easy for both business and application developers and analysts to do so. An example is probably in order at this point.

THE THREE LAYERS in the ARCHITECTURAL PARFAIT

Let us imagine a simple "application" which itself requires only one server and which must be available at all times. That's the "service" IT is going to provide to the business. In order to accomplish this seemingly simple task, there's a lot that actually has to go on under the hood, within the bowels of IT.

LAYER ONE

Consider, if you will, what fulfilling that request means. You need at least two servers and a load balancer, you need a server and some storage, and you need – albeit unknown to the business user – firewall rules to ensure the application is only accessible to those whom you designate. So at the bottom layer of the stack (Infrastructure 2.0) you need a set of components that match these functions, and they must all be enabled with an API (or at a minimum be able to be automated via traditional scripting methods). Now the actual task of configuring a load balancer is not just a single API call. Ask RackSpace, or GoGrid, or Terremark, or any other cloud provider. It takes multiple steps to authenticate and configure – in the right order – that component. The same is true of many components at the infrastructure layer: the APIs are necessarily granular enough to provide the flexibility necessary to be combined in a way as to be customizable for each unique environment in which they may be deployed. So what you end up with is a set of infrastructure services that comprise the appropriate API calls for each component based on the specific operational policies in place.
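As a sketch of what such an infrastructure service might look like, consider the composition below. The client class, method names, addresses, and credentials are entirely hypothetical, stand-ins for whatever management API a given load balancer actually exposes; the shape of the problem (many granular calls, in a required order, wrapped into one service) is the point.

```python
class LoadBalancerAPI:
    """Hypothetical client for a load balancer's management API."""
    def authenticate(self, user, secret): ...
    def create_pool(self, name): ...
    def add_member(self, pool, host, port): ...
    def create_virtual_server(self, name, address, pool): ...

def provision_balanced_service(api, app, servers):
    """One 'infrastructure service': a single logical operation composed of
    several granular API calls invoked in the order the component requires."""
    api.authenticate("ops-user", "****")
    pool = f"{app}-pool"
    api.create_pool(pool)
    for host, port in servers:
        api.add_member(pool, host, port)
    api.create_virtual_server(f"{app}-vs", "192.0.2.10", pool)

provision_balanced_service(LoadBalancerAPI(), "orders",
                           [("10.0.0.11", 80), ("10.0.0.12", 80)])
```

The operational policies of a particular environment live inside the wrapper; the layer above only ever sees provision_balanced_service.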
LAYER TWO

At the next layer up you're providing even more abstract frameworks. The "cloud API" at this layer may provide services such as "auto-scaling" that require a great deal of configuration and registration of components with other components. There's automation and orchestration occurring at this layer of the IT Service Stack, as it were, that is much more complex but narrowly focused than at the previous infrastructure layer. It is at this layer that the services become more customized and able to provide business- and customer-specific options. It is also at this layer where things become more operationally focused, with the provisioning of "application resources" comprising perhaps the provisioning of both compute and storage resources. This layer also lays the foundation for metering and monitoring (cause you want to provide visibility, right?), which essentially overlays, i.e. makes a service of, multiple infrastructure resource monitoring services.

LAYER THREE

At the top layer is IT as a Service, and this is where systems become very abstracted and get turned into the IT King "A La Carte" Menu that is the ultimate goal according to everyone who's anyone (and a few people who aren't). This layer offers an interface to the cloud in such a way as to make self-service possible. It may not be Infrabook or even very pretty, but as long as it gets the job done, cosmetics are just enhancing the value of what exists in the first place. IT as a Service is the culmination of all the work done at the previous layers to fine-tune services until they are at the point where they are consumable – in the sense that they are easy to understand and require no real technical understanding of what's actually going on. After all, a business user or application developer doesn't really need to know how the server and storage resources are provisioned, just in what sizes and how much it's going to cost. IT as a Service ultimately enables the end user – whomever that may be – to easily "order" IT services to fulfill the application-specific requirements associated with an application deployment. That means availability, scalability, security, monitoring, and performance.

A DYNAMIC DATA CENTER ARCHITECTURE

One of the first questions that should come to mind is: why does it matter? After all, one could cut out the "cloud computing" layer and go straight from infrastructure services to IT as a Service. While that's technically true, it eliminates one of the biggest benefits of a layered and highly abstracted architecture: agility. By presenting each layer to the layer above as services, we are effectively employing the principles of a service-oriented architecture and separating the implementation from the interface. This provides the ability to modify the implementation without impacting the interface, which means less downtime and very little – if any – modification in layers above the layer being modified. This translates into, at the lowest level, vendor agnosticism and the ability to avoid vendor lock-in. If two components, say a Juniper switch and a Cisco switch, are enabled with the means by which they can be exposed as services, then it becomes possible to switch the two at the implementation layer without requiring the changes to trickle upward through the interface and into the higher layers of the architecture. It's polymorphism applied to a data center operation rather than a single object's operations, to put it in developer's terms. It's SOA applied to a data center rather than an application, to put it in an architect's terms. It's an architectural parfait and, as we all know, everybody loves a parfait, right?
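The Juniper/Cisco example reads naturally as code. This is a minimal sketch of the idea, assuming hypothetical vendor command syntax in the print statements; the point is only that the orchestration layer codes to the interface, never to the implementation.

```python
from abc import ABC, abstractmethod

class SwitchService(ABC):
    """The interface the layers above consume; implementations can vary freely."""
    @abstractmethod
    def create_vlan(self, vlan_id: int, name: str) -> None: ...

class CiscoSwitch(SwitchService):
    def create_vlan(self, vlan_id, name):
        print(f"[cisco] vlan {vlan_id} ; name {name}")          # illustrative only

class JuniperSwitch(SwitchService):
    def create_vlan(self, vlan_id, name):
        print(f"[juniper] set vlans {name} vlan-id {vlan_id}")  # illustrative only

def provision_network(switch: SwitchService):
    # Swapping Cisco for Juniper requires no change to this layer:
    # the implementation varies, the interface does not.
    switch.create_vlan(100, "app-tier")

provision_network(CiscoSwitch())
provision_network(JuniperSwitch())
```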
Related blogs & articles:

Applying Scalability Patterns to Infrastructure Architecture
The Other Hybrid Cloud Architecture
The New Distribution of The 3-Tiered Architecture Changes Everything
Infrastructure 2.0: Aligning the network with the business (and ...
Infrastructure 2.0: As a matter of fact that isn't what it means
Infrastructure 2.0: Flexibility is Key to Dynamic Infrastructure
Infrastructure 2.0: The Diseconomy of Scale Virus
Lori MacVittie - Infrastructure 2.0
Infrastructure 2.0: Squishy Name for a Squishy Concept
Pay No Attention to the Infrastructure Behind the Cloudy Curtain
Making Infrastructure 2.0 reality may require new standards
The Inevitable Eventual Consistency of Cloud Computing
Cloud computing is not Burger King. You can't have it your way. Yet.
Impact of Load Balancing on SOAPy and RESTful Applications

A load balancing algorithm can make or break your application's performance and availability.

It is a (wrong) belief that "users" of cloud computing, and before that "users" of corporate data center infrastructure, didn't need to understand any of that infrastructure. Caution: proceed with infrastructure ignorance at the (very real) risk of your application's performance and availability. Think I'm kidding? Stefan's SOA & Enterprise Architecture Blog has a detailed and very explanatory post on Load Balancing Strategies for SOA Infrastructures that may change your mind.

This post grew, apparently, out of some (perceived) bad behavior on the part of a load balancer in a SOA infrastructure. Specifically, the load balancer configuration was overwhelming the very services it was supposed to be load balancing. Before we completely blame the load balancer, Stefan goes on to explain that the root of the problem lay in the load balancing algorithm used to distribute requests across the services. Specifically, the load balancer was configured to use a static round-robin algorithm and to apply source-IP-address-based affinity (persistence) while doing so. The result is that one instance of the service was constantly sent requests while the others remained idle and available. Stefan explains how the load balancing algorithm was changed to utilize a dynamic ratio algorithm that takes into consideration the state of each service (CPU and memory available) and removed the server affinity requirement. The problem wasn't the load balancer, per se. The load balancer was acting exactly as it was configured to act. The problem lay deeper: in understanding the interaction between the network, the application network, and the services themselves. Services, particularly stateless services as offered by SOA and REST-based APIs today, do not generally require persistence. In cases where they do require persistence, that persistence needs to be based on application-layer information, such as an API key or user (usually available in a cookie).

But this problem isn't unique to SOA. Consider, if you will, the effect that such an unaware distribution might have on any one of the popular social networking sites offering RESTful APIs for integration. Imagine that all Twitter API requests ended up distributed to one server in Twitter's infrastructure. It would fall over quickly, no doubt about that, because the requests are distributed without any consideration for current load and almost, one could say, blindly. Stefan points this out as he continues to examine the effect of load balancing algorithms on his SOA infrastructure: "Secondly, the static round-robin algorithm does not take in effect, which state each cluster node has. So, for example if one cluster node is heavily under load, because it processes some complex orders, and this results in 100% cpu load, then the load balancer will not recognize this but route lots of other requests to this node causing overload and saturation." Load balancing algorithms that do not take into account the current state of the server and application, i.e. they are not context-aware, are not appropriate for today's dynamic application architectures. Such algorithms are static, brittle, and blind when it comes to distributing load efficiently and will ultimately result in an uneven request load that is likely to drive an application to downtime.
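The difference Stefan describes is easy to see in miniature. A hedged sketch: the nodes, metrics, and weighting below are invented for illustration, and a real dynamic-ratio implementation would pull live CPU and connection data from monitors rather than from static fields.

```python
import itertools

class Node:
    def __init__(self, name, cpu_used, connections):
        self.name = name
        self.cpu_used = cpu_used        # fraction of CPU in use, 0.0 - 1.0
        self.connections = connections  # current open connections

nodes = [Node("svc1", 0.95, 340), Node("svc2", 0.20, 25), Node("svc3", 0.35, 60)]

# Static round robin is blind to state: the saturated svc1 keeps getting work.
static_rr = itertools.cycle(nodes)

def dynamic_pick(pool):
    """Dynamic-ratio-style choice: weigh real-time CPU and connection load."""
    return min(pool, key=lambda n: 0.7 * n.cpu_used + 0.3 * (n.connections / 500))

print(next(static_rr).name)      # svc1, even at 95% CPU
print(dynamic_pick(nodes).name)  # svc2, the least loaded node
```

Add source-IP affinity on top of the static algorithm and the effect Stefan observed follows directly: every request from the same client network lands on the same node, loaded or not.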
THE APPLICATION SHOULD BE A PART OF THE ALGORITHM

It is imperative in a distributed application architecture like SOA or REST that the application network infrastructure, i.e. the load balancer, be able to take into consideration the current load on any given server before distributing a request. If one node in the (pool|farm|cluster) is processing a complex order that consumes most of the CPU resources available, the load balancer should not continue to send it requests. This requires that the load balancer, the application delivery controller, be aware of the application, its environment, as well as the network and the user. It must be able to make a decision, in real time, about where to direct any given request based on all the variables available. That includes CPU resources, what the request is, and even who the user/application is.

For example, Twitter uses a system of inbound rate limiting on API calls to help manage the load on its infrastructure. Part of that equation could be the calling application. HTTP as a transport protocol contains a somewhat surprisingly rich array of information in its headers that can be parsed and inspected and made a part of the load balancing equation in any environment. This is particularly useful to sites like Twitter where multiple "applications" (clients) are making use of the API. Twitter can easily require the use of a custom HTTP header that includes the application name and utilize that as part of its decision-making processes. Like RESTful APIs, SOAP envelopes are full of application specifics that provide data to the load balancer, if it's context-aware, that can be utilized to determine how best to distribute a request. The name of the operation being invoked, for example, can be used to not only load balance at the service level, but at the operation level. That granularity can be important when operations vary in their consumption of resources. This application-layer information, in conjunction with current load and connections on the server, provides a wealth of information as to how best, i.e. most efficiently, to distribute any given request. But if the folks in charge of configuring the load balancer aren't aware of the impact of algorithms on the application and its infrastructure, you can end up in a situation much like that described in Stefan's blog on the subject.
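What "making the application part of the algorithm" might look like, reduced to a sketch: route on a custom header and on the invoked operation name. The header name, operation names, and pool names are all hypothetical; Twitter's actual headers and any real SOAP service's operations would differ.

```python
def pick_pool(headers, soap_operation=None):
    """Context-aware distribution using application-layer data."""
    client = headers.get("X-Client-App", "unknown")   # hypothetical custom header
    if client == "mobile-app":
        return "mobile-optimized-pool"
    if soap_operation == "GenerateReport":
        return "heavy-compute-pool"   # costly operations get dedicated capacity
    return "general-pool"

print(pick_pool({"X-Client-App": "mobile-app"}))       # mobile-optimized-pool
print(pick_pool({}, soap_operation="GenerateReport"))  # heavy-compute-pool
```

Within whichever pool is chosen, a state-aware selection like the one in the previous sketch then picks the actual node.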
CLOUD WILL MAKE THIS SITUATION WORSE

Cloud computing won't solve this problem and, in fact, it will probably make it worse. The belief that the infrastructure should be "hidden" from the user (that's you) means that configuration options – like the load balancing algorithm – aren't available to you as a user/deployer of cloud-based applications. Even though load balancing is going to be used to scale your application, you have no clue or control over how that's going to occur. That's why it's important that you ask questions of your provider on this subject. You need to know what algorithm is being used and how requests are distributed so you can determine how that's going to impact your application and its performance once it's deployed. You can't – or shouldn't – assume that the load balancing provided is going to magically distribute requests perfectly across your scaled application, because it wasn't configured with your application in mind.

If you deploy an application – particularly a SOA or RESTful one – you may find that with scalability comes poor performance or even unavailable applications because of the configuration of that infrastructure you "aren't supposed to worry about." Applications are not islands; they aren't deployed stand-alone, even though the virtualization of applications is making it seem like that's the case. The delivery of applications requires collaboration between a growing number of components in the data center, and load balancing is one of the key components that can make or break your application's performance and availability.

Related blogs & articles:

Five questions you need to ask about load balancing and the cloud
Dr. Dobb's Journal: Coding in the Cloud
Cloud Computing: Vertical Scalability is Still Your Problem
Server Virtualization versus Server Virtualization
SOA & Web 2.0: The Connection Management Challenge
The Impact of the Network on AJAX
Have a can of Duh! It's on me
Intro to Load Balancing for Developers – The Algorithms
Not All Virtual Servers are Created Equal
SOA, Stacks, and Standards

For the past week there's been a great deal of grumbling surrounding the idea of finding a "standard SOA stack". Rich Seeley did some digging into the issue, and raised the ire of David Linthicum, who claims that anyone searching for the elusive SOA stack just doesn't get it. Bradley F. Shimmin, principal analyst of application infrastructure at Current Analysis LLC, agreed that standardization on a single Web services stack is unlikely given competing stacks from different vendors and the heterogeneous environments of most customers: "I don't think that will ever happen. I don't see how it could happen. It's like assuming that software will never get versioned."

It appears that there is a disjunct across industries regarding the definition of a "stack". Anyone who's grown up in the networking industry understands the term "stack" to generally refer to the implementation of a network architecture designed in layers, like TCP/IP and the OSI model. Hence the use of the term "layer X" to describe a variety of network products, such as Layer 4 load balancing and Layer 7 switching. There are many implementations of the TCP/IP stack, and certainly no "standard" stack to which customers flock when considering a network deployment. In fact, many companies make it their business to build and tweak and tune their product's TCP/IP stack until it's screaming fast, and bank (literally) on the performance of that custom TCP/IP stack. As long as they comply with the appropriate RFCs and, more importantly, actually interoperate with other vendors' implementations, all is good.

The problem with a "standard" SOA stack is, in a nutshell, that the well-defined and accepted "layers" associated with TCP/IP and OSI don't exist. There have been multiple attempts to define a SOA stack, but thus far there is no consensus regarding what layers should be included. While it's generally accepted that there's a transport layer (HTTP, FTP, SMTP, JMS), a messaging layer (SOAP), a description layer (WSDL), and a discovery layer (UDDI, but increasingly vendors are demanding something more robust), there are a ton of other "layers" that potentially could be included: AAA, transaction management, coordination, and orchestration are just a few possibilities. Once you leave the relatively safe "core" layers of the SOA stack, you begin to limit the ability to compose an architecture from different products and potentially from different vendors. That's the danger of taking the idea of a SOA stack too far, and one of the reasons David Linthicum (rightly) sounds vehemently opposed to the concept.

It's a relatively safe assumption that no single vendor's "stack" will be adopted, and that's okay. History tells us that as long as competing stacks are interoperable, that's a good thing. It would also be dangerous to start defining a SOA stack that goes beyond the core layers necessary to provide connectivity and interoperable integration. Doing so limits the flexibility and interoperability of a SOA, and that's exactly the opposite of what a SOA is intended to provide. Even if we grant that there should be (and is) a core SOA stack comprised of the transport, messaging, description, and discovery layers, still there is no reason to expect or even want a "standard" stack from a single vendor. The definition of a SOA is fluid and unique to each organization, so there's really no way to define a standard stack above and beyond the core layers for SOA that meets everyone's needs and goals.

Imbibing: Water (yeah, you read that right.
Got a problem with that?)

Technorati tags: MacVittie, SOA, SOAP, XML, standards, F5
Applying Scalability Patterns to Infrastructure Architecture

Too often software design patterns are overlooked by network and application delivery network architects, but these patterns are often equally applicable to addressing a broad range of architectural challenges in the application delivery tier of the data center.

The "High Scalability" blog is fast becoming one of my favorite reads. Last week did not disappoint with a post highlighting a set of scalability design patterns that was, apparently, inspired by yet another High Scalability post on "6 Ways to Kill Your Servers: Learning to Scale the Hard Way."

This particular post caught my attention primarily because although I've touched on many of these patterns in the past, I've never thought to call them what they are: scalability patterns. That's probably a side effect of forgetting that building an architecture of any kind is at its core computer science, and thus algorithms and design patterns are applicable to both micro- and macro-architectures, such as those used when designing a scalable architecture. This is actually more common than you'd think, as it's rarely the case that a network guy and a developer sit down and discuss scalability patterns over beer and deep fried cheese curds (hey, I live in Wisconsin and it's my blog post so just stop making faces until you've tried it). Developers and architects sit over there and think about how to design a scalable application from the perspective of its components – databases, application servers, middleware, etc… Network architects sit over here and think about how to scale an application from the perspective of network components – load balancers, trunks, VLANs, and switches. The thing is that the scalability patterns leveraged by developers and architects can almost universally be abstracted and applied to the application delivery network – the set of components integrated as a means to ensure availability, performance, and security of applications. That's why devops is so important and why devops has to bring dev into ops as much as it's necessary to bring some ops into dev. There needs to be more cross-over, more discussion, between the two groups, if not an entirely new group, in order to leverage the knowledge and skills that each has in new and innovative ways.

ABSTRACT and APPLY

So the aforementioned post is just a summary of a longer and more detailed post, but for purposes of this post I think the summary will do, with the caveat that the original, "Scalability patterns and an interesting story..." by Jesper Söderlund, is a great read that should definitely be on your "to read" list in the very near future. For now, let's briefly touch on the scalability patterns and sub-patterns Jesper described with some commentary on how they fit into scalability from a network and application delivery network perspective. The original text from the High Scalability blog leads each pattern below.

Load distribution - Spread the system load across multiple processing units

This is a horizontal scaling strategy that is well understood. It may take the form of "clustering" or "load balancing" but in both cases it is essentially an aggregation coupled with a distributed processing model.
The secret sauce is almost always in the way in which the aggregation point (strategic point of control) determines how best to distribute the load across the "multiple processing units."

load balancing / load sharing - Spreading the load across many components with equal properties for handling the request

This is what most people think of when they hear "load balancing"; it's just that at the application delivery layer we think in terms of directing application requests (usually HTTP, but it can be just about any application protocol) to equal "servers" (physical or virtual) that handle the request. This is a "scaling out" approach that is most typically associated today with cloud computing and auto-scaling: launch additional clones of applications as virtual instances in order to increase the total capacity of an application. The load balancing distributes requests across all instances based on the configured load balancing algorithm.

Partitioning - Spreading the load across many components by routing an individual request to a component that owns that specific data

This is really where the architecture comes in and where efficiency and performance can be dramatically increased in an application delivery architecture. Rather than each instance of an application being identical to every other one, each instance (or pool of instances) is designated as the "owner". This allows devops to tweak configurations of the underlying operating system, web and application server software for the specific type of request being handled. This is, also, where the difference between "application switching" and "load balancing" becomes abundantly clear, as "application switching" is used as a means to determine where to route a particular request, which is/can be then load balanced across a pool of resources. It's a subtle distinction but an important one when architecting not only efficient and fast but resilient and reliable delivery networks.

Vertical partitioning - Spreading the load across the functional boundaries of a problem space, separate functions being handled by different processing units

When it comes to routing application requests we really don't separate by function unless that function is easily associated with a URI. The most common implementation of vertical partitioning at the application switching layer will be by content. Example: creating resource pools based on the Content-Type HTTP header: images in pool "image servers" and content in pool "content servers". This allows for greater optimization of the web/application server based on the usage pattern and the content type, which can often also be related to a range of sizes. This also, in a distributed environment, allows architects to leverage, say, cloud-based storage for static content while maintaining dynamic content (and its associated data stores) on-premise. This kind of hybrid cloud strategy has been postulated as one of the most common use cases since the first wispy edges of cloud were seen on the horizon.

Horizontal partitioning - Spreading a single type of data element across many instances, according to some partitioning key, e.g. hashing the player id and doing a modulus operation, etc. Quite often referred to as sharding.

This sub-pattern is in line with the way in which persistence-based load balancing is accomplished, as well as the handling of object caching. This also describes the way in which you might direct requests received from specific users to designated instances that are specifically designed to handle their unique needs or requirements, such as the separation of "gold" users from "free" users based on some partitioning key, which in HTTP land is often a cookie containing the relevant data. A sketch of the hash-and-modulus idea follows.
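The hash-the-key-then-modulus operation mentioned in the pattern description is only a few lines. The shard names are hypothetical; md5 is used here simply because, unlike Python's built-in hash(), it is stable across processes, which a partitioning key must be.

```python
import hashlib

SHARDS = ["db0", "db1", "db2", "db3"]

def shard_for(partition_key):
    """Horizontal partitioning: hash the key, take the modulus, pick the shard."""
    digest = hashlib.md5(partition_key.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# The same key always lands on the same shard - the property that makes
# persistence-based load balancing and sharding work at all.
print(shard_for("player:42"))
print(shard_for("player:42") == shard_for("player:42"))  # True
```

The same function works whether the key is a player id, an API key, or the value of a "gold"/"free" cookie.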
Queuing and batch - Achieve efficiencies of scale by processing batches of data, usually because the overhead of an operation is amortized across multiple requests

I admit defeat in applying this sub-pattern to application delivery. I know, you're surprised, but this really is very specific to middleware, and aside from the ability to leverage queuing for Quality of Service (QoS) at the delivery layer, this one is just not fitting in well. If you have an idea how this fits, feel free to let me know – I'd love to be able to apply all the scalability patterns and sub-patterns to a broader infrastructure architecture.

Relaxing of data constraints - Many different techniques and trade-offs with regards to the immediacy of processing / storing / access to data fall in this strategy

This one takes us to storage virtualization and tiering and the way in which data storage and access is intelligently handled in varying properties based on usage and prioritization of the content. If one relaxes the constraints around access times for certain types of data, it is possible to achieve a higher-efficiency use of storage by subjugating some content to secondary and tertiary tiers which may not have the same performance attributes as your primary storage tier. And make no mistake, storage virtualization is a part of the application delivery network – has been since its inception – and as cloud computing and virtualization have grown, so has the importance of a well-defined storage tiering strategy. We can bring this back up to the application layer by considering that a relaxation of data constraints with regards to immediacy of access can be applied by architecting a solution that separates data reads from writes. This implies eventual consistency, as data updated/written to one database must necessarily be replicated to the databases from which reads are, well, read, but that's part of relaxing a data constraint. This is a technique used by many large, social sites such as Facebook and Plenty of Fish in order to scale the system to the millions upon millions of requests it handles in any given hour.

Parallelization - Work on the same task in parallel on multiple processing units

I'm not going to be able to apply this one either, unless it was in conjunction with optimizing something like MapReduce and SPDY. I've been thinking hard about this one, and the problem is the implication that "same task" is really the "same task", and that processing is distributed. That said, if the actual task can be performed by multiple processing units, then an application delivery controller could certainly be configured to recognize that a specific URL should be essentially sent to some other proxy/solution that performs the actual distribution, but the processing model here deviates sharply from the request-reply paradigm under which most applications today operate.

DEVOPS CAN MAKE THIS HAPPEN

I hate to sound off too much on the "devops" trumpet, but one of the primary ways in which devops will be of significant value in the future is exactly in this type of practical implementation.
Only by recognizing that many architectural patterns are applicable to not only application but infrastructure architecture can we start to apply a whole lot of "lessons that have already been learned" by developers and architects to emerging infrastructure architectural models. This abstraction and application from well-understood patterns in application design and architecture will be invaluable in designing the new network; the next iteration of network theory and implementation that will allow it to scale along with the applications it is delivering.

Related blogs & articles:

Cloud is not Rocket Science but it is Computer Science
Implementing SOA Patterns: The Router
Implementing SOA Patterns: The Service Firewall
Implementing SOA Patterns: Input/Output Validator
Lori MacVittie - interstitial request pattern (AJAX)
Business-Layer Load Balancing
I Find Your Lack of Win Disturbing
Cloud Computing: Vertical Scalability is Still Your Problem
Vertical Scalability Cloud Computing Style
Scalability Only One Half the Reliability Equation
Automating scalability and high availability services
Service Virtualization Helps Localize Impact of Elastic Scalability
Web 2.0: Integration, APIs, and Scalability
To Take Advantage of Cloud Computing You Must Unlearn, Luke.
Statistics Collection and Management Pack Scalability
Your Bus Needs a New Driver

Three reasons why an ADC should be an essential component of your SOA initiative.

An ESB (Enterprise Service Bus) can be a valuable component of your SOA initiative. It provides orchestration, service virtualization, content-based routing, and asynchronous messaging capabilities. All good things, and in some cases (not all) necessary for your organization-specific SOA initiative to succeed. But don't fall into the trap of using an ESB to perform duties for which it was truly not designed. Nearly all ESBs overlap in functionality with application delivery controllers (ADC), but their implementation of such functionality is rudimentary and unlikely to suffice as even adequate in most complex processing environments. There are three reasons why you should consider deploying an ADC to provide these particular services rather than rely on the basic functionality provided by your ESB.

1. Load balancing

While the most basic of load balancing algorithms are supported by ESBs, they are exactly that - the basics. Load balancing has always been the purview of the application delivery controller, and these devices provide a plethora of options to ensure that not only are services load balanced, but that they are load balanced in a way that improves performance and ensures availability. While the ADC has traditionally been deployed at the edge of the network, on the perimeter, there is no reason why it cannot also be deployed inside the data center, as a service on your ESB providing the most advanced load balancing capabilities and assurances of availability possible.

2. Failover

In the same vein as load balancing, ESBs provide only the most basic of failover options - and then usually only at the broker or node level. While an ESB is capable of ensuring availability of nodes through a distributed computing model, it is not capable of providing failover for services which are deployed outside its domain of control. Additionally, most ESBs providing failover support require that the backup nodes be hardcoded, introducing unnecessary configuration complexity and increasing the time and effort necessary to scale your implementation. While the ESB may be able to ensure its own availability, it can do very little if anything to ensure the availability of services it is orchestrating. An ADC, however, is purpose-built to provide failover capabilities and ensure availability of those services and is well versed in easing the process of scaling systems.

3. Advanced Health Checking

While the ESB may be able to provide load balancing and even content-based routing, these products do not take into consideration the health of the service being load balanced. ADCs have long recognized that though a service may be accessible, still it may be returning errors that result in the same end as being unavailable. While an ESB will continue to route to a service regardless of whether the service is actually returning valid content, an ADC is capable of validating the content being returned and ensuring that traffic is routed to an available and correctly functioning service. This eliminates unnecessary error handling and delayed processing of messages within the bus and ensures a better-performing infrastructure.

An ADC cannot take the place of an ESB, but neither can an ESB take the place of an ADC. Use the best tool for the job and be not fooled by the offering of rudimentary features in your ESB. If assurance of availability is a must, then design your SOA infrastructure with an ADC up front.
Your transactions - and SOA - will thank you for it.

Imbibing: Coffee

Technorati tags: application delivery, SOA, F5, ESB, load balancing
Port Knocking: What are you hiding in there?

I read with interest an article on port knocking as a mechanism for securing SOA services on CIO.com. If you aren't familiar with port knocking (I wasn't) then you'll find it somewhat interesting. From Nicholas Petreley's "There is More to SOA Security Than Authorization and Authentication":

For the sake of argument, let's say you have an SOA server component for your custom client software that uses port 4000. Port knocking can close off port 4000 (and every other port) to anyone who doesn't know the "secret method" for opening it. Any cracker who scans your server for open ports will never discover that you have an SOA service available on that port. All ports will appear unresponsive, which makes your server appear to offer no services at all. Ironically, your client gains access to port 4000 in a way similar to the way crackers discover existing open ports. As described above, port scanners step through all available ports sequentially, knocking on each one to see if there's an answer. By default, a port knocking-enabled firewall never answers on any port. The secret to unlocking any given port is in the non-sequential order your client uses to check for open ports. For example, your client software might check ports 22, 8000, 45, 1056, in that order. Each time, there will be no answer. But the server will recognize that your device - running the legitimate client software - knocked on just the right ports in the right order, like the key to a combination lock. Having gotten the right combination, the firewall will open port 4000 to the authenticated device and only to that device. Port 4000 will continue to look closed and unused to the rest of the world.

A great description is also available here along with client and server side software.
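Reduced to code, the client side of the scheme described above is nothing more than a sequence of deliberately failed connection attempts. This is a toy sketch of the concept only - real implementations vary, and the hostname here is a placeholder.

```python
import socket
import time

KNOCK_SEQUENCE = [22, 8000, 45, 1056]   # the "combination" from the example
SERVICE_PORT = 4000                      # the port the knock is meant to open

def knock(host, sequence, delay=0.3):
    """Try a TCP connection to each port in order. Every attempt is expected
    to fail; a knock-aware firewall only watches for the sequence."""
    for port in sequence:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(0.5)
        try:
            s.connect((host, port))
        except OSError:
            pass                          # no answer is the expected answer
        finally:
            s.close()
        time.sleep(delay)

knock("soa.example.com", KNOCK_SEQUENCE)
# Only after the correct knock would a connection to SERVICE_PORT succeed,
# and only from this client's source address.
```

Note what the sketch makes obvious: every legitimate client must carry this extra logic and spend these extra round trips before its first real request, which is precisely the performance and support cost argued below.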
At first I thought "this is way cool". Then I thought about it some more and thought "Wow. That's going to destroy performance, increase development and support costs, and put a big target on your services. Still cool from a technical perspective, but from an enterprise architecture point of view? Not so cool."

While the initial value of port knocking - making it more difficult to find services to attack in the first place - seems obvious, there are better ways to secure access to services than by hiding them and requiring modifications to clients (or requiring custom clients in the first place) to access them. The description from Nicholas includes "running the legitimate client software", which implies there is some method of identifying legitimate - and therefore illegitimate - clients. There are much less intrusive methods of determining the legitimacy of clients, such as client certificates, custom HTTP headers, and, for SOA-specific applications, digital signatures. The use of port knocking necessarily adds additional requests, which degrades the responsiveness of applications simply because it requires more time to "get into" the application. SOA applications aren't all that speedy in the first place, so adding more latency into the equation is certainly not going to improve user adoption or satisfaction with the application in terms of performance. Requiring special clients increases the support costs for any application as well as introduces variability that can delay development and deployment, such as OS platform support. Will all the clients be .NET? Java? What versions? What about patching and upgrades? What do the port knocking libraries support? How secure are they? Do they even exist?

It also destroys ubiquitous web access to services through a browser, which eliminates the reusability benefits of services. If you need a special client it's not likely that browser-based access is going to be possible. And what about changes to your infrastructure? Port knocking requires a port-knocking-enabled firewall or server, and it almost certainly will need to be deployed on the outside perimeter, in front of your enterprise firewall, because all ports need to be "open and answerable" in order to port forward the client to the "next door" in the sequence. This changes your network infrastructure and requires that you add yet another point of failure in the security chain.

SOA has some well-defined security that goes further than just authentication and authorization in WS-Security 1.1. The use of encryption, digital signatures, and SSL goes a long way toward securing your SOA. Hiding the existence of services makes it harder to find them, as the article points out. But attackers aren't stupid; if you go to extreme lengths to hide those services then they must be worth finding. Will it stop script kiddies? Certainly. Will it stop the hard-core organized attack squads? Absolutely not. Hiding your services with port knocking is a lot like turning off all the lights and pretending not to be home. Thieves generally don't go after the well-lit, occupied homes. They like the empty ones, or the ones just pretending to be empty.
The death of SOA has been greatly exaggerated

Amidst the hype of cloud computing and virtualization has been the publication of several research notes regarding SOA. Adoption, they say, is slowing. Oh noes! Break out the generators, stock up on water and canned food!

An article from JavaWorld quotes research firm Gartner as saying: "The number of organizations planning to adopt SOA for the first time decreased to 25 percent; it had been 53 percent in last year's survey. Also, the number of organizations with no plans to adopt SOA doubled from 7 percent in 2007 to 16 percent in 2008." This dramatic falloff has been happening since the beginning of 2008, Gartner said.

Some have reacted with much drama to the news, as if the reports indicate that SOA has lost its shine and is disappearing into the realm of legacy technology along with COBOL, fat clients, and CORBA. Not true at all. The reports indicate a drop in adoption of SOA, not the use of SOA. That should be unsurprising. At some point the number of organizations who have implemented SOA should reach critical mass, and the number of new organizations adopting the technology will slow down simply because there are fewer of them than there are folks who have already adopted SOA.

As Don pointed out when this discussion came up, the economy is factoring in heavily for IT and technology, and the percentages cited by Gartner are not nearly as bad as they look when applied to real numbers. For example, if you ask 100 organizations about their plans for SOA and 16 say "we're not doing anything with it next year", that doesn't sound nearly as impressive as 16%, especially considering that means that 84% are going to be doing something with SOA next year. As with most surveys and polls, it's all about how the numbers are presented. Statistics are the devil's playground.

It is also true that most organizations don't consider that by adopting or piloting cloud computing in the next year they will likely be taking advantage of SOA. Whether it's because their public cloud computing provider requires the use of Web Services (SOA) to deploy and manage applications in the cloud, or they are building a private cloud environment and will utilize service-enabled APIs and SOA to integrate virtualization technology with application delivery solutions, SOA remains an integral part of the IT equation.

SOA simply isn't the paradigm shift it was five years ago. Organizations who've implemented SOA are still using it, and it's still growing in their organizations as they continue to build new functionality and features for their applications, as they integrate new partners and distributors and applications from inside and outside the data center. As organizations continue to get comfortable with SOA and their implementations, they will inevitably look to governance and management and delivery solutions with which to better manage the architecture. SOA is not dead yet; it's merely reached the beginning of its productive life, and if the benefits of SOA are real (and they are) then organizations are likely to start truly realizing the return on their investments.

Related articles by Zemanta:

HP puts more automation into SOA governance
Gartner reports slowdown in SOA adoption
Gartner picks tech top 10 for 2009
SOA growth projections shrinking