load-balancing
9 Topics

Multiple method persistence
Hello. I need to set up load balancing for a videoconferencing application, which is quite complex: I don't just need to ensure session persistence for a single user, but for multiple users participating in the same conference. According to my understanding of the reference documentation, I need to use a universal persistence profile (or possibly a hash persistence profile, which differs only in how the lookup value is hashed) and write an iRule such as:

    when HTTP_REQUEST {
        # extract roomID from the room parameter in the query string
        set roomID [getfield [URI::query [HTTP::uri] room] "@" 1]
        if { $roomID != "" } {
            persist uie $roomID 3600
            log local0. "Using Jitsi room ID $roomID for persistence: [persist lookup uie $roomID]"
        }
    }

Once the corresponding persistence profile is assigned to the virtual server, it works as expected. However, I also have to ensure persistence for authentication requests, this time with more classical requirements, i.e. every authentication request for a given user must reach the same pool node. I first considered using a fallback persistence profile (cookie, SSL, or source address) so as to keep the iRule simple. However, the documentation discourages using fallback persistence for this purpose:

If Fallback persistence becomes the chosen persistence method, a Default persistence entry will not be created for the client connection until the Fallback persistence idle timeout period expires. Because of this, Fallback persistence may appear to override Default persistence and may not be a good choice. See Recommendations, following, for additional information.

So I added another clause in my iRule, still using the uie method but with the client address as the lookup key, hence reinventing simple source address persistence:

    if { [HTTP::path] starts_with "/Shibboleth.sso" } {
        persist uie [IP::client_addr] 3600
        log local0. "Using client IP address for persistence: [persist lookup uie [IP::client_addr]]"
    }

According to the documentation, I may be able to mix persistence methods in a single iRule (one of the examples given there mixes the source_addr and cookie methods), but some of those methods (ssl, msrdp, cookie) also require a corresponding persistence profile assigned to the virtual server, whereas I already use a universal persistence profile. So basically, I'm a bit lost among the multiple options, especially the relationship between persistence profiles and persistence methods, and I have a few questions:

- Is there any recommended practice for using multiple persistence methods in a single iRule?
- If only the ssl and cookie methods require a corresponding profile, what is the benefit of using a universal persistence profile instead of just assigning the persistence iRule to the virtual server?
- If I assign both a cookie persistence profile and a persistence iRule using the uie method to the same virtual server, how will persistence work?

I hope I have been clear enough 🙂 Thanks for your interest.
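For reference, the two clauses described above can live in a single iRule, both using the uie method so that one universal persistence profile (referencing the iRule) on the virtual server covers both cases. This is only a minimal sketch assembled from the poster's own snippets; the room parameter, timeout values, and Shibboleth path are taken from the question and not verified against any particular deployment:

    when HTTP_REQUEST {
        # authentication traffic: persist on the client address so every
        # authentication request from a given user reaches the same node
        if { [HTTP::path] starts_with "/Shibboleth.sso" } {
            persist uie [IP::client_addr] 3600
            return
        }
        # conference traffic: persist on the room ID so that all participants
        # of the same conference land on the same pool member
        set roomID [getfield [URI::query [HTTP::uri] room] "@" 1]
        if { $roomID != "" } {
            persist uie $roomID 3600
        }
    }

Because both branches use persist uie, a single universal persistence profile attached to the virtual server should be sufficient in principle; no cookie or SSL persistence profile is required for this particular combination.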
2.5 bad ways to implement a server load balancing architecture

I'm in a bit of a mood after reading a JavaWorld article on server load balancing that presents some fairly poor ideas on architectural implementations. It's not the concepts that are necessarily wrong; they will work. It's the architectures offered as a method of load balancing that made me do a double-take and say "What?" I started reading this article because it was part 2 of a series on load balancing and this installment focused on application layer load balancing. You know, layer 7 load balancing. Something we at F5 just might know a thing or two about. But you never know where and from whom you'll learn something new, so I was eager to dive in and learn something. I learned something alright. I learned a couple of bad ways to implement a server load balancing architecture.

TWO LOAD BALANCERS?

The first indication I wasn't going to be pleased with these suggestions came with the description of a "popular" load-balancing architecture that included two load balancers: one for the transport layer (layer 4) and another for the application layer (layer 7).

In contrast to low-level load balancing solutions, application-level server load balancing operates with application knowledge. One popular load-balancing architecture, shown in Figure 1, includes both an application-level load balancer and a transport-level load balancer.

Even the most rudimentary, entry-level load balancers on the market today - software and hardware, free and commercial - can handle both transport and application layer load balancing. There is absolutely no need to deploy two separate load balancers to handle two different layers in the stack. This is a poor architecture, introducing unnecessary management and architectural complexity as well as additional points of failure into the network architecture. It's bad for performance because it introduces additional hops and points of inspection through which application messages must flow. To give the author credit, he does recognize this and offers up a second option to counter the negative impact of the "additional network hops."

One way to avoid additional network hops is to make use of the HTTP redirect directive. With the help of the redirect directive, the server reroutes a client to another location. Instead of returning the requested object, the server returns a redirect response such as 303.

I found it interesting that the author cited an HTTP response code of 303, which is rarely returned in conjunction with redirects. More often a 302 is used. But it is valid, if not a bit odd. That's not the real problem with this one, anyway. The author claims "The HTTP redirect approach has two weaknesses." That's true, it has two weaknesses - and a few more as well. He correctly identifies that this approach does nothing for availability and exposes the infrastructure, which is a security risk. But he fails to mention that using HTTP redirects introduces additional latency because it requires additional requests that must be made by the client (increasing network traffic), and that it is further incapable of providing any other advanced functionality at the load balancing point because it essentially turns the architecture into a variation of a DSR (direct server return) configuration. A quick sketch of this redirect pattern appears below.

THAT'S ONLY 2 BAD WAYS, WHERE'S THE .5?

The half-bad way comes from the fact that the solutions are presented as a Java-based solution. They will work in the sense that they do what the author says they'll do, but they won't scale.
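As an aside, here is roughly what that redirect-based pattern boils down to. This is a minimal sketch written as an iRule purely to illustrate the mechanism (in the article it is the application server itself issuing the redirect), and the backend host names are made up:

    when HTTP_REQUEST {
        # pick a backend by some trivial rule (here, a coin flip) and
        # bounce the client to it directly with a redirect
        if { int(rand() * 2) == 0 } {
            HTTP::redirect "http://app1.example.com[HTTP::uri]"
        } else {
            HTTP::redirect "http://app2.example.com[HTTP::uri]"
        }
    }

Every response of this kind hands the client a real server's name or address, which is exactly the exposure problem noted above, and each redirect costs the client an extra request/response round trip before any useful work is done.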
Consider this: the reason you're implementing load balancing is to scale, because one server can't handle the load. A solution that involves putting a single server - with the same limitations on connections and session tables - in front of two servers with essentially twice the capacity of the load balancer gains you nothing. The single server may be able to handle 1.5 times (if you're lucky) what the servers serving applications may be capable of, due to the fact that the burden of processing application requests has been offloaded to the application servers, but you're still limited in the number of concurrent users and connections you can handle because it's limited by the platform on which you are deploying the solution.

An application server acting as a cluster controller or load balancer simply doesn't scale as well as a purpose-built load balancing solution because it isn't optimized to be a load balancer and its resource management is limited to that of a typical application server. That's true whether you're using a software solution like Apache mod_proxy_balancer or a hardware solution. So if you're implementing this type of solution to scale an application, you aren't going to see the benefits you think you are, and in fact you may see a degradation of performance due to the introduction of additional hops, additional processing, and poorly designed network architectures.

I'm all for load balancing, obviously, but I'm also all for doing it the right way. And these solutions are just not the right way to implement a load balancing solution unless you're trying to learn the concepts involved or are in a computer science class in college. If you're going to do something, do it right. And doing it right means taking into consideration the goals of the solution you're trying to implement. The goals of a load balancing solution are to provide availability and scale, neither of which the solutions presented in this article will truly achieve.
Persistent and Persistence, What's the Difference?

The English language is one of the most expressive, and confusing, in existence. Words can have different meanings based not only on context, but on placement within a given sentence. Add in the twists that come from technical jargon and suddenly you've got words meaning completely different things. This is evident in the use of persistent and persistence. While the conceptual basis of persistence and persistent are essentially the same, in reality they refer to two different technical concepts.

Both persistent and persistence relate to the handling of connections. The former is often used as a general description of the behavior of HTTP and, necessarily, TCP connections, though it is also used in the context of database connections. The latter is most often related to TCP/HTTP connection handling but almost exclusively in the context of load-balancing.

Persistent

Persistent connections are connections that are kept open and reused. The most commonly implemented form of persistent connections are HTTP, with database connections a close second. Persistent HTTP connections were implemented as part of the HTTP 1.1 specification as a method of improving the efficiency of HTTP in general.

Related Links:
- HTTP 1.1 RFC
- Persistent Connection Behavior of Popular Browsers
- Persistent Database Connections
- Apache Keep-Alive Support
- Cookies, Sessions, and Persistence

Before HTTP 1.1 a browser would generally open one connection per object on a page in order to retrieve all the appropriate resources. As the number of objects in a page grew, this became increasingly inefficient and significantly reduced the capacity of web servers while causing browsers to appear slow to retrieve data. HTTP 1.1 and the Keep-Alive header in HTTP 1.0 were aimed at improving the performance of HTTP by reusing TCP connections to retrieve objects. They made the connections persistent such that they could be reused to send multiple HTTP requests using the same TCP connection.

Similarly, this notion was implemented by proxy-based load-balancers as a way to improve performance of web applications and increase capacity on web servers. Persistent connections between a load-balancer and web servers are usually referred to as TCP multiplexing. Just like browsers, the load-balancer opens a few TCP connections to the servers and then reuses them to send multiple HTTP requests.

Persistent connections, both in browsers and load-balancers, have several advantages:

- Less network traffic due to less TCP setup/teardown. It requires no less than 7 exchanges of data to set up and tear down a TCP connection, thus each connection that can be reused reduces the number of exchanges required, resulting in less traffic.
- Improved performance. Because subsequent requests do not need to set up and tear down a TCP connection, requests arrive faster and responses are returned quicker. TCP has built-in mechanisms, for example window sizing, to address network congestion. Persistent connections give TCP the time to adjust itself appropriately to current network conditions, thus improving overall performance. Non-persistent connections are not able to adjust because they are opened and almost immediately closed.
- Less server overhead. Servers are able to increase the number of concurrent users served because each user requires fewer connections through which to complete requests.
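To make the mechanics concrete, persistent connections are negotiated with ordinary HTTP headers. Under HTTP/1.0 the client has to ask for reuse explicitly with the Keep-Alive mechanism mentioned above; under HTTP/1.1 connections are persistent by default and are only torn down when one side sends Connection: close. A minimal exchange (the host name is purely illustrative) looks like this:

    GET /styles/site.css HTTP/1.1
    Host: www.example.com
    Connection: keep-alive

    HTTP/1.1 200 OK
    Content-Type: text/css
    Content-Length: 1024
    Connection: keep-alive

The same TCP connection is then reused for the next request instead of being torn down and re-established.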
Persistence

Persistence, on the other hand, is related to the ability of a load-balancer or other traffic management solution to maintain a virtual connection between a client and a specific server. Persistence is often referred to in the application delivery networking world as "stickiness", while in the web and application server demesne it is called "server affinity". Persistence ensures that once a client has made a connection to a specific server, subsequent requests are sent to the same server. This is very important for maintaining state and session-specific information in some application architectures and for the handling of SSL-enabled applications.

Examples of Persistence:
- Hash Load Balancing and Persistence
- LTM Source Address Persistence
- Enabling Session Persistence
- 20 Lines or Less #7: JSessionID Persistence

When the first request is seen by the load-balancer it chooses a server. On subsequent requests the load-balancer will automatically choose the same server to ensure continuity of the application or, in the case of SSL, to avoid the compute-intensive process of renegotiation. This persistence is often implemented using cookies but can be based on other identifying attributes such as IP address. Load-balancers that have evolved into application delivery controllers are capable of implementing persistence based on any piece of data in the application message (payload), headers, or in the transport protocol (TCP) and network protocol (IP) layers.

Some advantages of persistence are:

- Avoid renegotiation of SSL. By ensuring that SSL-enabled connections are directed to the same server throughout a session, it is possible to avoid renegotiating the keys associated with the session, which is compute and resource intensive. This improves performance and reduces overhead on servers.
- No need to rewrite applications. Applications developed without load balancing in mind may break when deployed in a load-balanced architecture because they depend on session data that is stored only on the original server on which the session was initiated. Load-balancers capable of session persistence ensure that those applications do not break by always directing requests to the same server, preserving the session data without requiring that applications be rewritten.

To Summarize

So persistent connections are connections that are kept open so they can be reused to send multiple requests, while persistence is the process of ensuring that connections and subsequent requests are sent to the same server through a load-balancer or other proxy device. Both are important facets of communication between clients, servers, and mediators like load-balancers, and they increase the overall performance and efficiency of the infrastructure as well as improving the end-user experience.
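As a concrete illustration of the kind of persistence described above, here is a minimal iRule sketch that persists on an application session cookie. The cookie name (JSESSIONID) and the timeout are assumptions, and a matching universal persistence profile would need to be attached to the virtual server:

    when HTTP_REQUEST {
        # if the application has already issued a session cookie, use it as the
        # persistence key so the client keeps reaching the same pool member
        # for the life of the session
        set sess [HTTP::cookie value "JSESSIONID"]
        if { $sess ne "" } {
            persist uie $sess 1800
        }
    }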
Layer 7 Switching + Load Balancing = Layer 7 Load Balancing

Modern load balancers (application delivery controllers) blend traditional load-balancing capabilities with advanced, application-aware layer 7 switching to support the design of a highly scalable, optimized application delivery network. Here's the difference between the two technologies, and the benefits of combining the two into a single application delivery controller.

LOAD BALANCING

Load balancing is the process of balancing load (application requests) across a number of servers. The load balancer presents to the outside world a "virtual server" that accepts requests on behalf of a pool (also called a cluster or farm) of servers and distributes those requests across all servers based on a load-balancing algorithm. All servers in the pool must contain the same content. Load balancers generally use one of several industry-standard algorithms to distribute requests. Some of the most common standard load balancing algorithms are:

- round-robin
- weighted round-robin
- least connections
- weighted least connections

Load balancers are used to increase the capacity of a web site or application, to ensure availability through failover capabilities, and to improve application performance.

LAYER 7 SWITCHING

Layer 7 switching takes its name from the OSI model, indicating that the device switches requests based on layer 7 (application) data. Layer 7 switching is also known as "request switching", "application switching", and "content-based routing". A layer 7 switch presents to the outside world a "virtual server" that accepts requests on behalf of a number of servers and distributes those requests based on policies that use application data to determine which server should service which request. This allows the application infrastructure to be specifically tuned/optimized to serve specific types of content. For example, one server can be tuned to serve only images, another for execution of server-side scripting languages like PHP and ASP, and another for static content such as HTML, CSS, and JavaScript.

Unlike load balancing, layer 7 switching does not require that all servers in the pool (farm/cluster) have the same content. In fact, layer 7 switching expects that servers will have different content, thus the need to more deeply inspect requests before determining where they should be directed. Layer 7 switches are capable of directing requests based on URI, host, HTTP headers, and anything in the application message. The latter capability is what gives layer 7 switches the ability to perform content-based routing for ESBs and XML/SOAP services.

LAYER 7 LOAD BALANCING

By combining load balancing with layer 7 switching, we arrive at layer 7 load balancing, a core capability of all modern load balancers (a.k.a. application delivery controllers). Layer 7 load balancing combines the standard load balancing features of a load balancer to provide failover and improved capacity for specific types of content. This allows the architect to design an application delivery network that is highly optimized to serve specific types of content but is also highly available. Layer 7 load balancing allows additional features offered by application delivery controllers to be applied based on content type, which further improves performance by executing only those policies that are applicable to the content. For example, data security in the form of data scrubbing is likely not necessary on JPG or GIF images, so it need only be applied to HTML and PHP.
Layer 7 load balancing also allows for increased efficiency of the application infrastructure. For example, only two highly tuned image servers may be required to meet application performance and user concurrency needs, while three or four optimized servers may be necessary to meet the same requirements for PHP or ASP scripting services. Being able to separate out content based on type, URI, or data allows for better allocation of physical resources in the application infrastructure.
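A minimal iRule sketch of the kind of content-based switching described above follows; the pool names are hypothetical, and the mapping of content types to pools would obviously be tuned to the actual infrastructure:

    when HTTP_REQUEST {
        # send images to the image-optimized servers, scripts to the
        # application servers, and everything else to the static pool
        switch -glob [string tolower [HTTP::path]] {
            "*.jpg" -
            "*.gif" -
            "*.png" { pool pool_images }
            "*.php" -
            "*.asp" { pool pool_app_scripting }
            default { pool pool_static }
        }
    }

Each pool can then be sized and tuned independently, which is precisely the resource-allocation benefit noted above.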
The Disadvantages of DSR (Direct Server Return)

I read a very nice blog post yesterday discussing some of the traditional pros and cons of load-balancing configurations. The author comes to the conclusion that if you can use direct server return, you should. I agree with the author's list of pros and cons; DSR is the least intrusive method of deploying a load-balancer in terms of network configuration. But there are quite a few disadvantages missing from the author's list.

Author's List of Disadvantages of DSR

The disadvantages of Direct Routing are:
- Backend server must respond to both its own IP (for health checks) and the virtual IP (for load balanced traffic).
- Port translation or cookie insertion cannot be implemented.
- The backend server must not reply to ARP requests for the VIP (otherwise it will steal all the traffic from the load balancer).
- Prior to Windows Server 2008 some odd routing behavior could occur.
- In some situations either the application or the operating system cannot be modified to utilise Direct Routing.

Some additional disadvantages:

- Protocol sanitization can't be performed. This means vulnerabilities introduced due to manipulation or lax enforcement of RFCs and protocol specifications can't be addressed.
- Application acceleration can't be applied. Even the simplest of acceleration techniques, e.g. compression, can't be applied because the traffic is bypassing the load-balancer (a.k.a. application delivery controller).
- Implementing caching solutions becomes more complex. With a DSR configuration, the routing that makes it so easy to implement requires that caching solutions be deployed elsewhere, such as via WCCP on the router. This requires additional configuration and changes to the routing infrastructure, and introduces another point of failure as well as an additional hop, increasing latency.
- Error/Exception/SOAP fault handling can't be implemented. In order to address failures in applications such as missing files (404) and SOAP Faults (500), it is necessary for the load-balancer to inspect outbound messages. In a DSR configuration this ability is lost, which means errors are passed directly back to the user without the ability to retry a request, write an entry in the log, or notify an administrator.
- Data Leak Prevention can't be accomplished. Without the ability to inspect outbound messages, you can't prevent sensitive data (SSNs, credit card numbers) from leaving the building.
- Connection Optimization functionality is lost. TCP multiplexing can't be accomplished in a DSR configuration because it relies on separating client connections from server connections. This reduces the efficiency of your servers and minimizes the value added to your network by a load balancer.

There are more disadvantages than you're likely willing to read, so I'll stop there. Suffice to say that the problem with the suggestion to use DSR whenever possible is that if you're an application-aware network administrator you know that most of the time, DSR isn't the right solution, because it restricts the ability of the load-balancer (application delivery controller) to perform additional functions that improve the security, performance, and availability of the applications it is delivering. DSR is well-suited, and always has been, to UDP-based streaming applications such as audio and video delivered via RTSP. However, in the increasingly sensitive environment that is application infrastructure, it is necessary to do more than just "load balancing" to improve the performance and reliability of applications.
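To make one of those lost capabilities concrete, the error-handling point depends entirely on the load-balancer seeing the response. A minimal full-proxy iRule sketch is shown below (the status threshold and log facility are illustrative); in a DSR deployment there is simply no response event to act on, because the server replies directly to the client:

    when HTTP_RESPONSE {
        # only possible when responses flow back through the proxy;
        # with DSR the response bypasses the device entirely
        if { [HTTP::status] >= 500 } {
            log local0. "Server [IP::server_addr] returned [HTTP::status] for a proxied request"
            # a friendlier error page could be substituted here, or the
            # request could be retried against another pool member
        }
    }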
Additional application delivery techniques are an integral component of a well-performing, efficient application infrastructure. DSR may be easier to implement and, in some cases, may be the right solution. But in most cases, it's going to leave you simply serving applications, instead of delivering them. Just because you can, doesn't mean you should.
Bursting the Cloud

The cloud computing craze is leading to some interesting new terms. Cloudware and cloudbursting are two terms I particularly like for their ability to describe specific computing models based on cloud computing. Today we're going to look at cloudbursting, which is basically a new twist on an old concept. Cloudbursting appears to be an attempt to marry the traditional safe enterprise computing model with cloud computing; in essence, bursting into the cloud when necessary or using the cloud when additional compute resources are required temporarily.

Jeff at the Amazon Web Services Blog talks about the inception of this term as applied to the latter and describes it in his blog post as a method used by Thomas Brox Røst to regenerate a number of dynamic pages in 5 hours rather than the 7 hours that would be required if he had attempted such a feat internally. His approach is further described on The High Scalability Blog. Cloudbursting can also be used to shoulder the burden of some of an application's processing. For example, basic application functionality could be provided from within the cloud while more critical (e.g. revenue-generating) applications continue to be served from within the controlled enterprise data center. This assumes that only a portion of consumers will actually be interacting with the data-driven side of a web site (customer management, process visibility, etc.) while the greater portion will simply be browsing around on the non-interactive, as it were, side of the site.

Bursting has traditionally been applied to resource allocation and automated provisioning/de-provisioning of resources, historically focused on bandwidth. Today, in the cloud, it is being applied to resources such as servers, application servers, application delivery systems, and other infrastructure required to provide on-demand computing environments that expand and contract as necessary, without manual intervention. This requires the ability to automate the cloud's data center. Data center automation in a cloud computing environment, regardless of the opacity of the model, requires more than simple workflow systems. It requires on-demand control and management over all devices in the delivery chain, from the storage to the application and web servers to the load-balancers and acceleration offerings that deliver the applications to end-users. This is more akin to data center orchestration than it is automation, as it requires that many moving parts and pieces be coordinated in order to perform a highly complex set of tasks seamlessly and with as little manual intervention as possible. This is one of the foundational requirements of a cloud computing infrastructure: on-demand, automated scalability.

Data center automation is nothing new. Hosting and service providers have long automated their data centers in order to reduce the cost of customer acquisition and management, and to improve efficiency of provisioning and de-provisioning processes. These benefits can also be realized inside the data center, regardless of the model being employed. The same automation required for smooth, cost-effective management of a cloud computing data center can be utilized to achieve smooth, cost-effective management of an enterprise data center. The hybrid application deployment model involving cloud computing requires additional intelligence on the part of the application delivery network.
The application delivery network must be able to understand what is being requested and where it resides; it must be able to intelligently route requests. This, too, is a fundamental attribute of cloud computing infrastructure: intelligence. When distributing an application across multiple locations, whether local servers or remote data centers or "in the cloud", it becomes necessary for a controlling node to properly route those requests based on application data. In a less sophisticated model, global load balancing could be substituted as a means of directing requests to the appropriate site, a task for which global load balancers seem a perfect fit.

A hybrid approach like cloudbursting seems to be particularly appealing. Enterprises seem reluctant to move business critical applications into the cloud at this juncture but are likely more willing to assign responsibility to an outsourced provider for less critical application functionality with variable volume requirements, which fits well with an on-demand resource bursting model. Cloudbursting may be one solution that makes everyone happy.
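A minimal sketch of what that kind of intelligence can look like at the application delivery tier: an iRule that keeps revenue-generating paths on the local pool and spills other traffic to a cloud-hosted pool when local capacity is exhausted. The pool names and the path are hypothetical, and a real deployment would need health monitors and capacity thresholds tuned to the environment:

    when HTTP_REQUEST {
        # keep the business-critical application in the enterprise data center
        if { [HTTP::path] starts_with "/checkout" } {
            pool pool_local_critical
            return
        }
        # burst everything else to the cloud when the local pool has no
        # available members left to take the traffic
        if { [active_members pool_local_general] < 1 } {
            pool pool_cloud_burst
        } else {
            pool pool_local_general
        }
    }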
Can the future of application delivery networks be found in neural network theory?

I spent a big chunk of time a few nights ago discussing neural networks with my oldest son over IM. It's been a long time since I've had reason to dig into anything really related to AI (artificial intelligence) and at first I was thinking how cool it would be to be back in college just exploring topics like that. Then, because I was trying to balance a conversation with my oldest while juggling my (fussy) youngest on my lap, I thought no, no it wouldn't.

Artificial neural networks (ANN) are good for teaching a system how to recognize patterns, discern complex mathematical relationships, and make predictions based on a variety of inputs. It learns by trying and trying again until the output matches what is expected given a sample (training) data set. That learning process requires feedback; feedback that is often given via backpropagation. Backpropagation can be tricky, but essentially it's the process of determining how far off the output is from the expected output, and then propagating that back into the network so it can essentially learn from its mistakes. Just like us.

If you guessed that this was going to tie back into application delivery, you guessed correctly. An application delivery network is not a neural network, but it often has many of the same properties, such as using something similar to a hidden layer (the application delivery controller) to make decisions about application messages, such as to which server to distribute them and how to best optimize those messages. More interestingly, perhaps, is the ability to backpropagate errors and information through the application delivery network such that the application delivery network automatically adjusts itself and makes different decisions for subsequent requests. If the application delivery network is enabled with a services-based API, for example, it can be integrated into applications to provide valuable feedback regarding the state of that application and the messages it receives to the application delivery controller, which can then be adjusted to reflect changes in the state of that application. This is how we change the weights of individual servers in the load balancing algorithms, in what is somewhat akin to modifying the weights of the connections between neurons in a neural net. But it's merely a similarity now; it's not a real ANN, as it's missing some key attributes and behaviors that would make it one.

When you look at the way in which an application delivery network is deployed and how it acts, you can (or at least I can) see the possibilities of employing a neural network model in building an even smarter, more adaptable delivery network. Right now we have engineers that deploy, configure, and test application delivery networks for specific applications like Oracle, Microsoft, and BEA. It's an iterative process in which they continually tweak the configuration of the solutions that make up an application delivery network based on feedback such as response time, size of messages, and load on individual servers. When they're finished, they've documented an Application Ready Network with a configuration that is tuned for optimal performance and scalability for that application and that can easily be deployed by customers. But the feedback loop for this piece is mostly manual right now, and we only have so many engineers available for the hundreds of thousands of applications out there. And that's not counting all the in-house developed applications that could benefit from a similar process.
And our environment is not your environment. In the future, it would be awesome if application delivery networks acted more like neural networks, incorporating the feedback themselves based on designated thresholds (response time must be less than X, load on the server must not exceed Y) and tweaking themselves until they met their goals, all based on the applications and environment unique to the organization.

It's close; an intelligent application delivery controller is able to use thresholds for response time and size of application messages to determine to which server an individual request should be sent. And it can incorporate feedback through the use of service-based APIs integrated with the application. But it's not necessarily modifying its own configuration permanently based on that information; it doesn't have a "learning mode" like so many application firewall and security solutions. That's an important piece we're missing - the ability to learn the behavior of an application in a specific environment and adjust automatically to that unique configuration. Like learning that in your environment a specific application task runs faster on server X than it does on servers Y and Z, so it always sends that task to server X. We can do the routing via layer 7 switching, but we can't (yet) deduce what that routing should be from application behavior and automatically configure it.

We've come a long way since the early days of load balancing, where the goal was simply to distribute requests across machines equally. We've learned how to intelligently deliver applications, not just distribute them, in the years since the web was born. So it's not completely crazy to think that in the future the concepts used to build neural networks will be used to build application delivery neural networks. At least I don't think it is. But then crazy people don't think they're crazy, do they?
I do not think that word means what you think it means

Greg Ferro over at My Etherealmind has a, for lack of a better word, interesting entry in his Network Dictionary on the term "Application Delivery Controller." He says:

Application Delivery Controller (ADC) - Historically known as a "load balancer", until someone put a shiny chrome exhaust and new buttons on it and so it needed a new marketing name. However, the Web Application Firewall and Application Acceleration / Optimisation that are in most ADC are not really load balancing so maybe its alright. Feel free to call it a load balancer when the sales rep is on the ground, guaranteed to upset them.

I take issue with this definition, primarily because an application delivery controller (ADC) is different from a load-balancer in many ways, and most of them aren't just "shiny chrome exhaust and new buttons". He's right that web application firewalls and web application acceleration/optimization features are also included, but application delivery controllers do more than just load-balancing these days. Application delivery controller is not just a "new marketing name"; it's a new name because "load balancing" doesn't properly describe the functionality of the products that fall under the ADC moniker today.

First, load-balancing is not the same as layer 7 switching. The former is focused on distribution of requests across a farm or pool of servers whilst the latter is about directing requests based on application layer data such as HTTP headers or application messages. An application delivery controller is capable of performing layer 7 switching, something a simple load-balancer is not. When the two are combined you get layer 7 load-balancing, which is a very different beast than the simple load-balancing offered in the past and often offered today by application server clustering technologies, ESB (enterprise service bus) products, and solutions designed primarily for load-balancing. Layer 7 load balancing is the purview of application delivery controllers, not load-balancers, because it requires application fluency and run-time inspection of application messages - not packets, mind you, but messages. That's an important distinction, but one best left for another day.

The core functionality of an application delivery controller is load-balancing, as this is the primary mechanism through which high-availability and failover is provided. But a simple load-balancer does little more than take requests and distribute them based on simple algorithms; it does not augment the delivery of applications by offering additional features such as L7 rate shaping, application security, acceleration, message security, and dynamic inspection and manipulation of application data.

Second, a load balancer isn't a platform; an application delivery controller is. It's a platform to which tasks generally left to the application can be offloaded, such as cookie encryption and decryption, input validation, transformation of application messages, and exception handling. A load balancer can't dynamically determine the client link speed and then determine whether compression would improve or degrade performance, and either apply it or not based on that decision. A simple load balancer can't inspect application messages and determine whether it's a SOAP fault or not, and then, once it's determined it is, execute logic that handles that exception. An application delivery controller is the evolution of load balancing to something more; to application delivery.
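As one concrete example of the kind of message-level awareness described above, here is a rough sketch of response inspection for SOAP faults. The buffer size and fault markers are assumptions, and a real deployment would scope this to the relevant virtual server or URI space rather than every response:

    when HTTP_RESPONSE {
        # only buffer responses that could plausibly carry a SOAP envelope
        if { [HTTP::header "Content-Type"] contains "xml" } {
            HTTP::collect 1048576
        }
    }
    when HTTP_RESPONSE_DATA {
        # a simple marker test; a real rule might parse the envelope more carefully
        if { ([HTTP::payload] contains "<soap:Fault") or ([HTTP::payload] contains "<faultcode>") } {
            log local0. "SOAP fault returned by [IP::server_addr]"
            # exception-handling logic (notify, rewrite, retry) would go here
        }
        HTTP::release
    }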
If you really believe that an application delivery controller is just a marketing name for a load-balancer then you haven't looked into the differences or how an ADC can be an integral part of a secure, fast, and available application infrastructure in a way that load-balancers never could. Let me 'splain. No, there is too much. Let me sum up. A load balancer is a paper map. An ADC is a Garmin or a TomTom.
Reliability does not come from SOA Governance

An interesting InformationWeek article asks whether SOA intermediaries such as "enterprise service bus, design-time governance, runtime management, and XML security gateways" are required for an effective SOA. It further posits that SOA governance is a must for any successful SOA initiative. As usual, the report (offered free courtesy of IBM) focuses on SOA infrastructure that, while certainly fitting into the categories of SOA intermediary and governance, does very little to assure stability and reliability of those rich Internet applications and composite mashups being built atop the corporate SOA.

Effective SOA Requires Intermediaries (via InformationWeek):

In addition to attracting new customers with innovative capabilities, it's equally important for businesses to offer stable, trusted services that are capable of delivering the high quality of service that users now demand. Without IT governance, the Web-oriented world of rich Internet applications and composite mashups can easily become unstable and unreliable. To improve your chances for success, establish discipline through a strong IT governance program where quality of service, security, and management issues are of equal importance.

As is often the case, application delivery infrastructure is relegated to "cloud" status; it's depicted as a cloud within the SOA or network and obscured, as though it has very little to do with the successful delivery of services and applications. Application delivery infrastructure is treated on par with layer 2-3 network infrastructure: dumb boxes whose functionality and features have little to do with application development, deployment, or delivery and are therefore beneath the notice of architects and developers alike.

SOA intermediaries, while certainly a foundational aspect of a strong, reliable SOA infrastructure, are only part of the story. Reliability of services can't be truly offered by SOA intermediaries, nor can it be provided by traditional layer 2-3 (switches, routers, hubs) network infrastructure. A dumb load-balancer cannot optimize inter-service communication to ensure higher capacity (availability and reliability) and better performance. A traditional layer 2/3 switch cannot inspect XML/SOAP/JSON messages and intelligently direct those messages to the appropriate ESB or service pool. But neither can SOA intermediaries provide reliability and stability of services. Like ESB load-balancing and availability services, SOA intermediaries are largely incapable of ensuring the reliable delivery of SOA applications and services because their tasks are focused on runtime governance (authentication, authorization, monitoring, content-based routing) and their load-balancing and network-focused delivery capabilities are largely on par with those of traditional layer 2-3 network infrastructure.

High-availability and failover functionality is rudimentary at best in SOA intermediaries. The author mentions convergence and consolidation of the SOA intermediary market, but that same market has yet to see the issue of performance and reliability truly addressed by any SOA intermediary. Optimization and acceleration services, available to web applications for many years, have yet to be offered to SOA by these intermediaries. That's perfectly acceptable, because it's not their responsibility.
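To make the routing point above concrete before going on, content-based routing of the sort a layer 2/3 switch cannot do might look roughly like this at the application delivery tier; the header inspected and the pool names are purely illustrative:

    when HTTP_REQUEST {
        # direct SOAP requests to the ESB or service pool that owns them,
        # based on the advertised SOAP action rather than IP or port alone
        set action [string tolower [HTTP::header "SOAPAction"]]
        if { $action contains "orderservice" } {
            pool pool_order_esb
        } elseif { $action contains "inventoryservice" } {
            pool pool_inventory_esb
        } else {
            pool pool_default_services
        }
    }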
When it comes to increasing capacity of services, ensuring quality of service, and intelligently managing the distribution of requests, the answer is not a SOA intermediary or a traditional load-balancer; that requires an application delivery network with an application-fluent application delivery controller at its core. The marriage of Web 2.0 and SOA has crossed the threshold. It's reality. SOA intermediaries are not designed with the capacity and reliability needs of a large-scale Web 2.0 (or any other web-based) application in mind. That chore is left to the "network cloud" in which application delivery currently resides. But it should be its own "cloud", its own distinct part of the overall architecture. And it ought to be considered as part of the process rather than an afterthought.

SOA governance solutions can do very little to improve the capacity, reliability, and performance of SOA and applications built atop that SOA. A successful SOA depends on more than governance and SOA intermediaries; it depends on a well-designed architecture that necessarily includes consideration for the reliability, scalability, and security of both services and the applications - Web 2.0 or otherwise - that will take advantage of those services. That means incorporating an intelligent, dynamic application delivery infrastructure into your SOA before reliability becomes a problem.