Now Shipping: the C2200 Chassis, the Smallest Model in the Viprion Series, the Industry's Only Chassis-Based ADC
F5 Networks Japan K.K. is pleased to announce the C2200, a new two-slot compact chassis in the Viprion series and a new product that extends the benefits of the F5 Synthesis architecture model. Joining the existing midrange C2400, the higher-end C4480, and the flagship C4800, the C2200 delivers the same functionality as its predecessors in a compact, space-saving form factor at a more affordable price point.

The key points are as follows:
- At 2RU (rack units), the smallest chassis in the Viprion series
- Supports the latest midrange blades, the B2150 and B2250
- Holds up to two blades, which means up to 40 vCMP virtual instances can be provisioned
- Requires TMOS (the supported software) version 11.5.0 or later

For details, please see the Viprion product page, which also offers datasheets with full specifications and a platform comparison chart.

The Viprion C2200 adds scalable processing power while preserving the ability to upgrade the system as user needs change, delivering both the performance and the scale that business-critical application services require. Using F5's Virtual Clustered Multiprocessing (vCMP®) technology, it efficiently consolidates application services and underutilized application delivery controllers (ADCs) to provide the highest-density multi-tenant solution.

Until now, in environments where major infrastructure growth was not anticipated, many customers have continued to run existing Viprion chassis that accommodate four or eight blades with only one or two blades installed. For customers planning capacity on this smaller scale, the C2200 makes it possible to offer the same scalability and virtualization solutions in a smaller footprint and with a lower initial investment. Please consider the new Viprion C2200!

The product is shipping now. For more information, please contact F5 Networks Japan K.K. (https://interact.f5.com/JP-Contact.html) or your local distributor.
The Three Axioms of Application Delivery
#fasterapp If you know these three axioms, you'll know application delivery when you see it.

Like most technology jargon, certain terms and phrases end up mangled, conflated, and generally misapplied as they gain traction in the wider market. Cloud is merely the latest incarnation of this phenomenon, and there will be others in the future. Guaranteed. Of late, the term "application delivery" has been creeping into the vernacular, perhaps because cloud has necessarily pushed it to the fore. Cloud purports to eliminate the "concern" of infrastructure and allows IT to focus on ... you guessed it, the application. Which in turn means the delivery of applications is becoming more and more pervasive in the strategic vocabulary of the market.

But like cloud and its predecessors, the term application delivery is somewhat vague and without definition. I am not going to define it, in case you were wondering, because quite frankly I've watched its expansion and transformation over the past decade and understand that application delivery is not static. As new technology and deployment models arise, new techniques and architectures must also arise to meet the challenges that come along with those applications.

But how, then, do you know what is and is not application delivery? If it can morph and grow and transform with time and technology, then anything can be considered application delivery, right? Not entirely. Application delivery, after all, is about an end-to-end process: a request is sent to an application and subsequently fulfilled and returned to the originator of the request. Depending on the application, this process may be simple or exceedingly complex, requiring authentication, logging, verification, the interaction of multiple services and, one hopes, a wealth of security services ensuring that what is delivered is what was intended and desired, and is not carrying along something malicious. A definition comprising these concepts would either be so broad as to be meaningless or so narrow that it left no room to adapt to future technologies. Neither is acceptable, in my opinion. A much better way to understand what is (and conversely what is not) application delivery is to learn three simple axioms that define the core concepts upon which application delivery is based.

APPLICATION-CENTRIC
"Applications are not servers, hypervisors, or operating systems."

Applications are not servers. They are not the physical or virtual server upon which they are deployed and from which they draw core resources. They are not the web and application servers on which they rely for application-layer protocol support. They are not the network stack from which they derive their IP address or TCP connection characteristics. They are uniquely separate entities that must be managed individually.

The concrete example of this axiom in action is health monitoring of applications. Too many times we see load balancing services configured with health-checking options focused on IP, TCP, or HTTP parameters: ping checks, TCP half-open checks, HTTP status checks. None of these options is relevant to whether the application is available and executing correctly. A ping check assures us the network is operating and the OS is responding. A TCP half-open check tells us the network stack is operating properly. An HTTP status check tells us the web or application server is running and accepting requests. But none of these even touches on whether the application is executing and responding correctly.
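A monitor that instead asks the application a question only a correctly functioning application can answer closes that gap. Here is a minimal sketch in tmsh (F5's TCL-based shell), assuming a hypothetical /healthz URI and an invented response string; neither is an F5 default:

```
# Hypothetical application-level monitor: the URI, Host header, and
# expected response string are invented for illustration.
create ltm monitor http app_health send "GET /healthz HTTP/1.1\r\nHost: app.example.com\r\nConnection: close\r\n\r\n" recv "status: ok"
```

With a monitor like this, a pool member is marked available only when the application itself answers correctly, not merely when its server accepts connections.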
Similarly, applications are not ports, and security services must be able to secure the application, not merely its operating environment. Applications are not – or should not be – defined by their network characteristics, and neither should they be secured based on those parameters. Applications are not servers, hypervisors, or operating systems. They are individual entities that must be managed individually, from a performance, availability, and security perspective.

MITIGATE OPERATIONAL RISK
"Availability, performance, and security are not separate operational challenges."

In most IT organizations the people responsible for security are not responsible for performance or availability, and vice versa. While devops tries to bridge the gap between application- and operations-focused professionals, we may need to intervene first and unify operations. These three operational concerns are intertwined and interrelated; they are fraternal triplets. A DDoS attack is a security matter, but it has – or likely will have – a profound impact on both performance and availability. Availability has an impact on performance, both positive and negative. And too often performance concerns result in the avoidance of security that can ultimately return to bite availability in the derriere. Application delivery recognizes that all three components of operational risk are inseparable and must be viewed as a holistic concern. Each challenge should be addressed with the others in mind, and with the understanding that changes in one will impact the others.

OPERATE WITHIN CONTEXT
"Application delivery decisions cannot be made efficiently or effectively in a vacuum."

Finally, application delivery recognizes that decisions regarding application performance, security, and availability cannot be made in a vacuum. What improves performance for a mobile client accessing an application over the Internet may actually impair performance for a mobile client accessing the application over the internal data center network. The authentication methods appropriate for a remote PC desktop are unlikely to be applicable to the same user requesting access from a smartphone. The various components of context provide the means by which the appropriate policies are enforced and applied at the right time, to the right client, for the right application. It is context that provides the unique set of parameters that enfolds any given request. We cannot base decisions solely on user, because a user may migrate during the day from one client device to another, and from one location to another. We cannot base decisions solely on device, because network conditions and type may change as the user roams from home to the office and out to lunch, moving seamlessly between mobile carrier network and WiFi. We cannot base decisions solely on application, because the means and location of the client may change its behavior and impact delivery in a negative way.
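To make the role of context concrete, here is a hypothetical iRule sketch that branches on two pieces of context, device type and client network, rather than on any single parameter. The pool names are invented for illustration:

```
# Hypothetical sketch: route on combined context (device type plus
# client network), never on one parameter alone. Pool names are invented.
when HTTP_REQUEST {
    set agent [string tolower [HTTP::header "User-Agent"]]
    if { $agent contains "mobile" } {
        # Same user and device, different network context:
        # internal WiFi versus carrier network
        if { [IP::addr [IP::client_addr] equals 10.0.0.0/8] } {
            pool mobile_internal_pool
        } else {
            pool mobile_carrier_pool
        }
    } else {
        pool desktop_pool
    }
}
```

The point is not the specific branching but that user, device, network, and application state are all available at the same decision point.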
When you put these axioms into action, the result is application delivery: a comprehensive, holistic, and highly strategic approach to delivering applications. It is impossible to say that application delivery is any particular set of five products delivered as a solution, because whether those products actually comprise an application delivery network depends on whether they deliver on the promise of these three axioms of application delivery.

Red Herring: Hardware versus Services
In a service-focused, platform-based infrastructure offering, the form factor is irrelevant.

One of the most difficult aspects of cloud, virtualization, and the rise of platform-oriented data centers is the separation of services from their implementation. This is SOA applied to infrastructure, and it is for some reason a foreign concept to most operational IT folks – with the occasional exception of developers. But sometimes even developers are challenged by the notion, especially when it begins to include network hardware.

ARE YOU SERIOUS?

The headline read: WAN Optimization Hardware versus WAN Optimization Services. I read no further, because I was struck by the wrongness of the declaration in the first place. I'm certain that if I had read the entire piece I would have found it focused on the operational and financial benefits of leveraging WAN optimization as a service as opposed to deploying hardware (or software, a la virtual network appliances) in multiple locations. And while I've got a few things to say about that, too, today is not the day for that debate. Today is for focusing on the core premise of the headline: that hardware and services are somehow at odds. Today is for exposing the fallacy of a premise that is part of the larger transformational challenge with which IT organizations are faced as they journey toward IT as a Service and a dynamic data center.

This transformational challenge, often referenced by cloud and virtualization experts, requires a change in thinking as well as culture. It requires a shift from thinking of solutions as boxes with plugs and ports toward viewing them as services with interfaces and APIs. It does not matter one whit whether those services are implemented using hardware or software (or perhaps even a combination of the two, a la a hybrid infrastructure model). What does matter is the interface, the API, the accessibility, as Google's Steve Yegge emphatically put it in his recent from-the-gut, not-meant-to-be-public rant. What matters is that a product is also a platform, because as Yegge so insightfully noted:

A product is useless without a platform, or more precisely and accurately, a platform-less product will always be replaced by an equivalent platform-ized product.

A platform is accessible; it has APIs and interfaces via which developers (consumer, partner, customer) can access the functions and features of the product (services) to integrate, instruct, and automate in a more agile, dynamic architecture. Which brings us back to the red herring known generally as "hardware versus services."

HARDWARE is FORM-FACTOR. SERVICE is INTERFACE.

This misstatement implies that hardware is incapable of delivering services. This is simply not true, no more true than a claim that only software can deliver services. That's because intrinsically nothing is actually a service – unless it is enabled to be one. Unless it is, as today's vernacular is wont to say, a platform. Delivering X as a service can be achieved via hardware as well as software. One need only look at the varied load balancing service offerings of cloud providers to understand that both hardware and software can be service-enabled with equal alacrity, if unequal results in features and functionality. As long as the underlying platform provides the means by which services and their requisite interfaces can be created, the distinction between hardware and "services" is non-existent.
The definition of "service" neither includes nor precludes the use of hardware as the underlying implementation. Indeed, the value of a "service" is that it provides a consistent interface that abstracts (and therefore insulates) the service consumer from the underlying implementation. A true "service" ensures minimal disruption as well as continued compatibility in the face of upgrade and enhancement cycles. It provides flexibility and decreases the risk of lock-in to any given solution, because the implementation can be completely changed without requiring significant changes to the interface.

This is the transformational challenge that IT faces: to stop thinking of solutions in terms of deployment form factors and instead start looking at them with an eye toward the services they provide. Because ultimately IT needs to offer them "as a service" (which is a delivery and deployment model, not a form factor) to achieve the push-button IT envisioned by the term "IT as a Service."

F5 Friday: Platform versus Product
There's a significant difference between a platform and a product, especially when it comes to architecting a dynamic data center.

In the course of nearly a thousand blogs it's quite likely you've seen BIG-IP referenced as a platform, and almost never as a product. There's a reason for that, and it's one that is increasingly important as organizations begin to look at some major transformations to their data center architecture. It's not that BIG-IP isn't a product; ultimately, of course, it is in the traditional sense of the word. But it's also a platform – an infrastructure platform – designed specifically to allow the deployment of application delivery-related services in a modular fashion. In the most general way, modern browsers are both products and platforms, as they provide an application framework through which additional plug-ins (modules) can be deployed. BIG-IP is similar to this model, with the noted exception that its internal application framework is intended for use by F5 engineers to develop new functionality and integrate existing functionality as "plug-ins" within the core architectural framework we call TMOS™.

There are myriad reasons why this distinction is important. Primary among them is that a unified internal architecture implies internal, high-speed interconnects that allow inbound and outbound data to be shared across modules (plug-ins) without incurring the overhead of network-layer communication. Many developers can explain the importance of zero-copy operations as it relates to performance. Those who can't will still likely be able to describe the difference between pass-by-reference and pass-by-value which, in many respects, has similar performance implications: the former simply passes a pointer to a memory location while the latter makes a copy. It's akin to the difference between collaborative editing in Google Docs and tracking revisions in Word via e-mail – the former acts on a single, shared copy while the latter passes around the entire document. Obviously, working on the same document at the same time is more efficient and ultimately faster than passing around a complete copy and waiting for it to return, marked up with changes.

FROM THEORY to PRACTICE

This theory translates well to the architectural principles behind TMOS and the BIG-IP platform: inbound and outbound data is shared across modules (plug-ins) in order to reduce the overhead associated with traditional network-based architectures that chain multiple products together. While the end result may be similar, performance will certainly suffer, and the loss of context incurred by architectural chaining may negatively impact the effectiveness (not to mention capabilities) of security-related functions.

The second piece of the platform puzzle is programmatic interfaces for external, i.e. third-party, development. This is the piece of the puzzle that makes a platform extensible. TMOS provides for this with iRules, a programmatic scripting language that can be used to do, well, just about anything you want to do to inbound and outbound traffic. Whether it's manipulating HTML, JSON, or HTTP headers or inspecting and modifying IP packets (disclaimer: we are not responsible for the anger of your security and/or network team if you do this without their involvement), iRules allows you to deploy unique functionality for just about any situation you can think of.
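For a flavor of what that looks like, here is a minimal illustrative iRule that inspects and modifies headers in both directions; the header values chosen are examples, not prescribed settings:

```
# Minimal illustrative iRule: inspect and modify HTTP headers inline.
when HTTP_REQUEST {
    # Replace whatever the client sent with the address we actually observed
    HTTP::header remove "X-Forwarded-For"
    HTTP::header insert "X-Forwarded-For" [IP::client_addr]
}
when HTTP_RESPONSE {
    # Mask the backend server's identity on the way out
    HTTP::header replace "Server" "webserver"
}
```

Both events run on the same platform, in the same context, with no additional network hop between them.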
Most often these capabilities are used to mitigate emergent threats – such as the THC SSL renegotiation vulnerability – but they are also used to perform a variety of operational and application-specific tasks, such as redirection and holistic error-handling (see the sketch below). And of course, who could forget my favorite, the random dice roll iRule. While certainly not of value to most organizations, such efforts can be good for learning. (That's my story and I'm sticking to it.)
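To make the error-handling use case concrete, here is a hedged sketch; the maintenance URI is an invented example, and it assumes that page is served from somewhere healthy (otherwise the redirect would loop):

```
# Hedged sketch of holistic error-handling: intercept server errors
# before the client ever sees a raw 5xx. The URI is an invented example.
when HTTP_RESPONSE {
    if { [HTTP::status] >= 500 } {
        # Discard the failed response and send the client elsewhere
        HTTP::respond 302 Location "/maintenance.html"
    }
}
```

Because the failed response never reaches the client, no stack trace or raw error page leaks out.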
TMOS is a full proxy, and is unique in its ability to inspect and control entire application conversations. This enables F5 to offer an integrated, operationally consistent solution that can act on the real-time context of the user, network, and application across a variety of security, performance, and availability concerns. That means access control and application security, as well as load balancing and DNS services, leverage the same operational model, the same types of policies, and the same environment across all services regardless of location or form factor. iRules can simultaneously interact with DNS and WAF policies, assuming both BIG-IP GTM and BIG-IP ASM are deployed on the same instance. The zero-copy nature of the high-speed bus that interconnects the switching backplane and the individual modules ensures the highest levels of performance without requiring a traversal of the network.

Because of the lack of topological control in cloud computing environments – public and private – the need for an application delivery platform is increasing. The volatility in IP topology holds not only for server and storage infrastructure but increasingly for the network as well, making the architecture of a holistic application delivery network from individually chained components more and more difficult, if not impossible. A platform with the ability to scale out and across both physical and virtual instances while simultaneously sharing configuration to ensure operational consistency is a key component of a successful cloud-based initiative, whether private, public, or a combination of both. A platform provides the flexibility and extensibility required to meet head-on the challenges of highly dynamic environments while ensuring the ability to enforce policies that directly address and mitigate operational risk (security, performance, availability). A product, without the extensibility and programmatic nature of a platform, is unable to meet these same challenges. Context is lost in the traversal of the network, and performance is always negatively impacted when multiple network-based connections must be made. A platform maintains context and performance while allowing the broadest measure of flexibility in deploying the right solutions at the right time.

Related blogs and articles:
- At the Intersection of Cloud and Control…
- What is a Strategic Point of Control Anyway?
- F5 Friday: Performance, Throughput and DPS
- Cloud Computing: Architectural Limbo
- Operational Risk Comprises More Than Just Security
- All F5 Friday Posts on DevCentral
- Why Single-Stack Infrastructure Sucks
- Your load balancer wants to take a level of fighter and wizard

The Future of Cloud: Infrastructure as a Platform

Cloud needs to become a platform, and that means its underlying infrastructure must also embrace the platform paradigm.

There's been a spate of articles, blogs, and mentions of OpenFlow in the past few months. IBM was the latest entry into the OpenFlow game, releasing an OpenFlow-enabled RackSwitch G8264, an update of a 64-port, 10 Gigabit Ethernet switch IBM put out a year ago. Interest in the specification appears to be growing, and not just because it's got the prefix-du-jour as part of its name, implying everything to everyone – free, extensible, interoperable, etc. While all those modifiers are indeed interesting and, to some, a highly important facet of the would-be standard, there's something else about it that is driving its popularity. That something else can be summed up with the statement: "infrastructure as a platform."

THE WEB 2.0 LESSON. AGAIN.

The importance of turning infrastructure into a platform can be evidenced by commentary on Web 2.0, a.k.a. social networking, applications and their success or failure to garner mind-share. Recently, a high-profile engineer at Google mistakenly posted a lengthy and refreshingly blunt commentary on what he views as Google's failure to recognize the importance of platform to successful offerings in today's demanding marketplace. To Google's credit, once the erroneous posting was discovered, it decided to let it stand, and thus we are able to glean some insight about the importance of platform to today's successful offerings:

While Yegge doesn't have a lot of good things to say about Amazon and its founder Jeff Bezos, he does note that Bezos – unlike Google – understands that its not just about developing interesting products, but that it takes a platform to create a great product.
-- SiliconFilter, "Google Engineer: 'Google+ is a Prime Example of Our Complete Failure to Understand Platforms'"

This insight is not restricted to software developers and engineers; the rising interest in PaaS (Platform as a Service) and the continued siren's song that it will dominate the cloud landscape in the future are tied to the same premise: it is the availability of a robust platform that makes or breaks solutions today, not features or functions or price. It is the ability to be successful by building, as Yegge says in his post, "an entire constellation of products by allowing other people to do the work."

Lest you think this concept applicable only to software, let me remind you of Nokia CEO Stephen Elop's somewhat blunt assessment of his company's failure to recognize this truth:

The battle of devices has now become a war of ecosystems, where ecosystems include not only the hardware and software of the device, but developers, applications, ecommerce, advertising, search, social applications, location-based services, unified communications and many other things. Our competitors aren't taking our market share with devices; they are taking our market share with an entire ecosystem. This means we're going to have to decide how we either build, catalyse or join an ecosystem.
-- DevCentral F5 Friday, "A War of Ecosystems"

Interestingly, 47% of respondents surveyed by Zenoss/Cloud.com for its Cloud Computing Outlook 2011 indicated use of PaaS in 2011. Like SaaS, PaaS has some wiggle room in its definition, but its general popularity seems to indicate that yes, indeed, platform is an important factor.
OpenFlow essentially provides this capability, turning infrastructure into a platform and enabling extensibility and customization that could not be achieved otherwise. It turns a piece of infrastructure into a giant backplane for new functions, features, and services. It introduces, allegedly, dynamism into what is typically a static network. It is what IaaS promised to be but has so far failed to achieve.

CLOUD as a PLATFORM

The takeaway for cloud and infrastructure providers is that organizations want platforms. Developers want platforms. Operations wants platforms (see Puppet and Chef as examples of operational platforms). It's about enabling an ecosystem that encourages innovation, i.e. new features, functions, and services, without requiring the wheel to be reinvented. It's about drag and drop, figuratively speaking, in the realm of infrastructure: bringing the ability to deploy new services atop a platform that provides the basics. OpenFlow promises just such capabilities for infrastructure, much the same way Facebook provides these basics for game and application developers, and mobile platforms offer the same for devices and operating systems. It's about enabling an ecosystem in which organizations can focus not on the core infrastructure but on the custom functionality and process automation that delivers efficiency to IT across operations and development alike.

"The beauty of this is it gives more flexibility and control to the network," said Shaughnessy [marketing manager for system networking at IBM], "so you could actually adjust the way the traffic flows go through your network dynamically based on what's going on with your applications."
-- IBM releases OpenFlow-enabled switch

It enables flexibility in the network – the means to deploy more dynamism in traffic policy enforcement and shaping – and it ties back to cloud with its ability to impart multi-tenant capabilities to infrastructure without completely modifying the internal architecture of components, a major obstacle for many network-focused devices. OpenFlow is not a panacea; there are myriad reasons why it may not be appropriate as the basis for architecting the cloud platform foundation required to support future initiatives. But it is a prime example of the kind of platform-focused capabilities organizations desire as they move ahead in their journey to IT as a Service. The cloud on which organizations will be able to build their future data center architecture will be a platform, from the bottom (infrastructure) through the middle (development) to the top (operations). What cloud and infrastructure providers must do is replicate the Facebook experience at the infrastructure layer. Infrastructure as a platform is the next step in the evolution of cloud computing.

Related blogs and articles:
- IT Services: Creating Commodities out of Complexity
- IBM releases OpenFlow-enabled switch
- The Cloud Configuration Management Conundrum
- IT as a Service: A Stateless Infrastructure Architecture Model
- If a Network Can't Go Virtual Then Virtual Must Come to the Network
- You Can't Have IT as a Service Until IT Has Infrastructure as a Service
- This is Why We Can't Have Nice Things
- WILS: Automation versus Orchestration
- The Infrastructure Turk: Lessons in Services
- Putting the Cloud Before the Horse

Top-to-Bottom is the New End-to-End
End-to-end is a popular term in marketing circles to describe a feature that acts across an entire "something." In the case of networking solutions, this generally means the feature acts from client to server. For example, end-to-end protocol optimization means the solution optimizes the protocol from the client all the way to the server, using whatever industry-standard and, if applicable, proprietary techniques are available. But end-to-end is not necessarily an optimal approach – not from a performance perspective, not from a CAPEX or OPEX perspective, and certainly not from a dynamism perspective. The better option – the more optimal, cost-efficient, and context-aware solution – is a top-to-bottom one.

WHAT'S WRONG with END-to-END?

"End-to-end optimization" is generally focused on one or two specific facets of a connection between the client and the server. WAN optimization, for example, focuses on the network connection and the data, typically reducing the data in size through de-duplication technologies so that it transfers (or at least appears to transfer) faster. Web application acceleration focuses on HTTP and web application data in much the same way, optimizing the protocol and trying to reduce the amount of data that must be transferred as a means to speed up page load times. Web application acceleration often employs techniques that leverage the client's browser cache, which makes it an end-to-end solution. Similarly, end-to-end security for web-based applications is almost always implemented through SSL, which encrypts the data traversing the network from the client to the server and back.

Now, the problem is that each of these "end-to-end" implementations is a separate solution, usually deployed as a network device, a software solution or, more recently, a virtual network appliance. Taking our examples from above, that means this "end-to-end" optimization architecture comprises three separate and distinct solutions. They are deployed individually, which means each one has to process the data and optimize the relevant protocols individually, as islands of functionality. Each is a "hop" in the network and incurs the expected latency penalty of the processing required. Each is separately managed and, what's worse, each has no idea the others exist. They each execute in an isolated, non-context-aware environment.

Also problematic is that you must be concerned with the order of operations when implementing such an architecture. SSL encryption should not be applied until after application acceleration has been applied, and WAN optimization (data de-duplication) should occur before compression or protocol optimization is employed. The wrong order can reduce the effectiveness of the optimization techniques and can, in some cases, render them inert.

THE TOP-to-BOTTOM OPTION

The top-to-bottom approach is still about taking in raw data from an application and emitting optimized data. The difference is that the data is optimized from top (the application layer) to bottom (the network layer) via a unified platform. A top-to-bottom approach respects the rules for applying security and optimization techniques in the appropriate order of operations, but the data never leaves the platform.
Rather than stringing together multiple security and optimization solutions in an end-to-end chain of intermediaries, the "chain" is internal, via a high-speed interconnect that both eliminates the negative performance impact of chaining proxies and maintains context across each step. In most end-to-end architectures, only the solution closest to the user has the user endpoint's context – information about the user, the user's connection, and the user's environment – and it does not share that information with other solutions as the data is passed along the chain. Similarly, only the solution closest to the application has the application endpoint's context: status, condition of the network, and capacity. A top-to-bottom, unified approach maintains context across all three components – user endpoint, application endpoint, and network – and allows each optimization, acceleration, and security function to leverage that context to apply the right policy at the right time based on that information.

This is particularly useful for "perimeter"-deployed solutions, such as WAN optimization, that by design must be one of the last (or the last) solutions in the chain of intermediaries in order to perform data de-duplication. Such solutions rarely have visibility into the full context of a request and response, and are therefore limited in how optimization features can be applied to the data. A top-to-bottom approach mitigates this obstacle by ensuring that WAN optimization has complete visibility into the contextual metadata for the request and response, and can therefore apply optimization policies dynamically based on full transactional context. Because the data never leaves a unified application delivery platform, the traditional performance penalties associated with chaining multiple solutions together – network transfer time, TCP connection setup and teardown – are remediated.

From an operational viewpoint, a top-to-bottom approach leveraging a unified application delivery platform decreases the operational costs associated with managing multiple solutions and controls the complexity associated with managing multiple configurations and policies across multiple deployments. A unified application delivery approach uses the same device and the same management mechanisms (GUI, CLI, and scripting) to configure and manage the solution. It also reduces the physical components necessary, as it eliminates the need for a one-to-one relationship between solution and hardware, which simplifies the architecture and removes multiple points of failure in the data path.

TOP-to-BOTTOM is END-to-END only BETTER

Both end-to-end and top-to-bottom ultimately perform the same task: securing, optimizing, and accelerating the delivery of data between a client and an application. Top-to-bottom actually is an intelligent form of end-to-end; it simply consolidates and centralizes the multiple components of the end-to-end "chain" onto a single, unified platform.

Is PaaS Just Outsourced Application Server Platforms?
There's a growing focus on PaaS (Platform as a Service), particularly as Microsoft rolls out Azure and VMware continues to push forward with its SpringSource acquisition. Amazon, though generally labeled IaaS (Infrastructure as a Service), is also a player with its SimpleDB, SQS (Simple Queue Service), and, more recently, SNS (Simple Notification Service). There's also Force.com, SaaS (Software as a Service) giant Salesforce.com's incarnation of a "platform," as well as Google's App Engine. As with "cloud" in general, the definition of PaaS varies and depends entirely on whom you're speaking to at the moment.

What's interesting about SpringSource, Azure, and many other PaaS offerings is that, as far as the customer is concerned, they're very much like application server platforms. The biggest difference is, of course, that the customer need not be concerned with the underlying management and scalability. The application, however, is still the customer's problem. That's not dissimilar from what enterprise-class organizations build out in their own data centers using traditional application server platforms like .NET and JavaEE. The application server platform is, well, a platform, in which multiple applications are deployed in their own cozy little isolated containers. You might even recall that JavaEE containers are called, yeah, "virtual machines." And even though Force.com and Google App Engine are proprietary platforms (and generally unavailable for deployment elsewhere), they still bear many of the characteristic marks of an application server platform.