XML Threat Prevention
Where should security live? The divide between operations and application developers is pretty wide, especially when it comes to defining who should be ultimately responsible for application security. Mike Fratto and I have often had lively discussions (read: arguments) about whether security is the responsibility of the developer or of the network and security administrators. It's wholly inappropriate to recreate any of those discussions here, as they often devolve into including the words your mother said not to use in public. 'Nuff said. The truth is that when XML enters the picture, the responsibility for securing that traffic has to be borne by both the network/security administrators and the developers. While there is certainly good reason to expect that developers are doing simple security checks for buffer overflows, length restrictions on incoming data, and strong typing, the fact is that there are some attacks in XML that are completely impractical to check for in the code. Let's take a couple of attack types as examples.

XML Entity Expansion
This attack is a million laughs, or at least a million or more bytes of memory. Applications need to parse XML in order to manipulate it, so the first thing that happens when XML hits an application is that it is parsed - before the developer even has a chance to check it. In an application server this is generally done before the arguments to the specific operation being invoked are marshaled - meaning it is the application server, not the developer, that is responsible for handling this type of attack. These messages can be used to force recursive entity expansion or other repeated processing that exhausts server resources. The most common example of this type of attack is the "billion laughs" attack, which is widely available. The CPU is monopolized while the entities are being expanded, and each entity takes up X amount of memory - eventually consuming all available resources and effectively preventing legitimate traffic from being processed. It's essentially a DoS (Denial of Service) attack. (The payload itself is tiny: a DTD that defines one small entity and then a chain of entities that each double the previous one, closed with "]>" and followed by a single reference such as &ha128; that expands into an enormous string when parsed.) It is accepted that almost all traditional DoS attacks (ping of death, teardrop, etc...) should be handled by a perimeter security device - a firewall or an application delivery controller - so why should a DoS attack that is perpetrated through XML be any different? It shouldn't. This isn't a developer problem, it's a parser problem, and for the most part developers have little or no control over the behavior of the parser used by the application server. The application admin, however, can configure most modern parsers to prevent this type of attack - but that's assuming that their parser is modern and can be configured to handle it. Of course, then you have to wonder what happens if that arbitrary limit inhibits processing of valid traffic. Yeah, it's a serious problem.

SQL Injection
SQL Injection is one of the most commonly perpetrated attacks via web-based applications. It consists basically of inserting SQL code into string-based fields which the attacker thinks (or knows) will be passed to a database as part of an SQL query. This type of attack can easily be accomplished via XML as well, simply by inserting the appropriate SQL code into a string element. Aha! The developer can stop this one, you're thinking. After all, the developer has the string and builds the SQL that will be executed, so he can just check for it before he builds the string and sends it off for execution.
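For concreteness, here is a minimal sketch of that kind of in-application check, written in Python. The parameter name and the blacklist are invented for illustration and are nowhere near complete, which is rather the point of the paragraph that follows.

import re

# naive blacklist of SQL fragments; every new attack pattern means another entry
SQL_BLACKLIST = re.compile(r"('|--|;|\b(union|select|insert|update|delete|drop)\b)", re.IGNORECASE)

def is_suspicious(value: str) -> bool:
    # True if the string element looks like it carries SQL
    return bool(SQL_BLACKLIST.search(value))

def lookup_orders(customer_name: str):
    if is_suspicious(customer_name):
        raise ValueError("rejected input")
    # the query itself should still be parameterized, never concatenated
    query = "SELECT * FROM orders WHERE customer = %s"
    return query, (customer_name,)

Every parameter of every call to the database needs the same treatment, which is exactly the maintenance burden described next.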
While this is certainly true, there are myriad combinations of SQL commands that might induce the database to return more data than it should, or to return sensitive data not authorized to the user. This extensive list of commands and combinations of commands would need to be searched for in each and every parameter used to create an SQL query and on every call to the database. That's a lot of extra code and a lot of extra processing - which is going to slow down the application and impede performance. And when a new attack is discovered, each and every function and application needs to be updated, tested, and re-deployed. I'm fairly certain developers have better ways to spend their time than updating parameter checking in every function in every application they have in production. And we won't even talk about third-party applications and the dangers inherent in that scenario. One of the goals of SOA is engendering reuse, and this is one of the best examples of taking advantage of reuse in order to ensure consistent behavior between applications and to reduce the lengthy development cycle required to update, test, and redeploy whenever a new attack is discovered. By placing the onus for keeping this kind of attack from reaching the server on an edge device such as an application firewall (like F5's), updates to address new attacks are immediately applied to all applications and there is no need to recode and redeploy applications. Although there are some aspects of security that are certainly best left to the developer, there are other aspects of security that are better deployed in the network. It's the most effective plan in terms of effort, cost, and consistent behavior where applications are concerned. Imbibing: Mountain Dew Technorati tags: security, application security, application firewall, XML, developers, networking, application delivery
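Before moving on, it is worth showing what the parser-side defense against entity expansion can look like in practice. This is a minimal sketch assuming a Python service and the defusedxml library; the abbreviated payload and the library choice are illustrative, and most modern parsers expose equivalent limits.

from defusedxml import ElementTree as SafeET
from defusedxml.common import DefusedXmlException

# an abbreviated "laughs" payload: each entity doubles the previous one
EVIL = b"""<?xml version="1.0"?>
<!DOCTYPE payload [
  <!ENTITY ha0 "ha">
  <!ENTITY ha1 "&ha0;&ha0;">
  <!ENTITY ha2 "&ha1;&ha1;">
  <!ENTITY ha3 "&ha2;&ha2;">
]>
<payload>&ha3;</payload>"""

def parse_untrusted(xml_bytes):
    try:
        return SafeET.fromstring(xml_bytes)
    except DefusedXmlException:
        # DTD and entity tricks are refused before application code ever runs
        return None

print(parse_untrusted(EVIL))   # None: the document is rejected, not expanded

The caveat above still applies: whatever limit the parser or an upstream device enforces has to be chosen so that legitimate, deeply structured documents are not rejected along with the attacks.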
IT as a Service: A Stateless Infrastructure Architecture Model
The dynamic data center of the future, enabled by IT as a Service, is stateless. One of the core concepts associated with SOA – and one that failed to really take hold, unfortunately – was the ability to bind, i.e. invoke, a service at run-time. WSDL was designed to loosely couple services to clients, whether they were systems, applications or users, in a way that was dynamic. The information contained in the WSDL provided everything necessary to interface with a service on-demand without requiring the hard-coded integration techniques used in the past. The theory was you’d find an appropriate service, hopefully in a registry (UDDI-based), grab the WSDL, set up the call, and then invoke the service. In this way, the service could “migrate” because its location and invocation-specific meta-data was in the WSDL, not hard-coded in the client, and the client could “reconfigure”, as it were, on the fly. There are myriad reasons why this failed to really take hold (notably that IT culture inhibited the enforcement of a strong and consistent governance strategy) but the idea was and remains sound. The goal of a “stateless” architecture, as it were, remains a key characteristic of what is increasingly being called IT as a Service – or “private” cloud computing. TODAY: STATEFUL INFRASTRUCTURE ARCHITECTURE The reason the concept of a “stateless” infrastructure architecture is so vital to a successful IT as a Service initiative is the volatility inherent in both the application and network infrastructure needed to support such an agile ecosystem. IP addresses, often used to bypass the latency induced by resolution of host names at run-time from DNS calls, tightly couple systems together – including network services. Routing and layer 3 switching use IP addresses to create a virtual topology of the architecture and ensure the flow of data from one component to the next, based on policy or pre-determined routes, as meets the needs of the IT organization. It is those policies that in many cases can be eliminated and replaced with a more service-oriented approach that provisions resources on-demand, in real-time. This eliminates the “state” of an application architecture by removing delivery dependencies on myriad policies hard-coded throughout the network. Policies are inexorably tied to configurations, which are the infrastructure equivalent of state in the infrastructure architecture. Because of the reliance on IP addresses imposed by the very nature of network and Internet architectural design, we’ll likely never reach full independence from IP addresses. But we can move closer to a “stateless” run-time infrastructure architecture inside the data center by considering those policies that can be eliminated and instead invoked at run-time. Not only would such an architecture remove the tight coupling between policies and infrastructure, but also between applications and the infrastructure tasked with delivering them. In this way, applications could more easily be migrated across environments, because they are not tightly bound to the networking and security policies deployed on infrastructure components across the data center. The pre-positioning of policies across the infrastructure requires codifying topological and architectural meta-data in a configuration. That configuration requires management; it requires resources on the infrastructure – storage and memory – while the device is active. It is an extra step in the operational process of deploying, migrating and generally managing an application.
It is “state” and it can be reduced – though not eliminated – in such a way as to make the run-time environment, at least, stateless and thus more motile. TOMORROW: STATELESS INFRASTRUCTURE ARCHITECTURE What’s needed to move from a state-dependent infrastructure architecture to one that is more stateless is to start viewing infrastructure functions as services. Services can be invoked, they are loosely coupled, they are independent of solution and product. Much in the same way that stateless application architectures address the problems associated with persistence that impede real-time migration of applications across disparate environments, so too do stateless infrastructure architectures address the same issues inherent in policy-based networking – policy persistence. While standardized APIs and common meta-data models can alleviate much of the pain associated with migration of architectures between environments, they still assume the existence of specific types of components (unless, of course, a truly service-oriented model is used in which services, not product functions, are encapsulated). Such a model extends the coupling between components and in fact can “break” if said service does not exist. Conversely, a stateless architecture assumes nothing; it does not assume the existence of any specific component but merely indicates the need for a particular service as part of the application session flow, a need that can be fulfilled by any appropriate infrastructure providing such a service. This allows the provider more flexibility as they can implement the service without exposing the underlying implementation – exactly as a service-oriented architecture intended. It further allows providers – and customers – to move fluidly between implementations without concern, as only the service need exist. The difficulty is determining what services can be de-coupled from infrastructure components and invoked on-demand, at run-time. This is not just an application concern; it becomes an infrastructure component concern as well, as each component in the flow might invoke an upstream – or downstream – service depending on the context of the request or response being processed. Assuming that such services exist and can be invoked dynamically through a component- and implementation-agnostic mechanism, it is then possible to eliminate many of the pre-positioned, hard-coded policies across the infrastructure and instead invoke them dynamically. Doing so reduces the configuration management required to maintain such policies, as well as eliminating complexity in the provisioning process, which must, necessarily, include policy configuration across the infrastructure in a well-established and integrated enterprise-class architecture. Assuming as well that providers have implemented support for similar services, one can begin to see that the migratory issues are more easily redressed and the complications caused by needing to pre-provision services and address policy persistence during migration mostly eliminated. SERVICE-ORIENTED THINKING One way of accomplishing such a major transformation in the data center – from policy to service-oriented architecture – is to shift our thinking from functions to services. It is not necessarily efficient to simply transplant a software service-oriented approach to infrastructure, because the demands on performance and aversion to latency make a dynamic, run-time binding to services unappealing.
It also requires a radical change in infrastructure architecture by adding the components and services necessary to support such a model – registries and the ability of infrastructure components to take advantage of them. An in-line, transparent invocation method for infrastructure services offers the same flexibility and motility for applications and infrastructure without imposing performance or additional dependency constraints on implementers. But to achieve a stateless infrastructure architectural model, one must first shift their thinking from functions to services and begin to visualize a data center in which application requests and responses carry the need for particular downstream and upstream services with them, rather than relying entirely on hard-coded policies stored in component configurations. It is unlikely that in the near term we can completely eliminate the need for hard-coded configuration; we’re just nowhere near that level of dynamism and may never be. But for many services – particularly those associated with run-time delivery of applications – we can achieve the stateless architecture necessary to realize a more mobile and dynamic data center. Now Witness the Power of this Fully Operational Feedback Loop Cloud is the How not the What Challenging the Firewall Data Center Dogma Cloud-Tiered Architectural Models are Bad Except When They Aren’t Cloud Chemistry 101 You Can’t Have IT as a Service Until IT Has Infrastructure as a Service Let’s Face It: PaaS is Just SOA for Platforms Without the Baggage The New Distribution of The 3-Tiered Architecture Changes Everything

Layer 7 Switching + Load Balancing = Layer 7 Load Balancing
Modern load balancers (application delivery controllers) blend traditional load-balancing capabilities with advanced, application-aware layer 7 switching to support the design of a highly scalable, optimized application delivery network. Here's the difference between the two technologies, and the benefits of combining the two into a single application delivery controller. LOAD BALANCING Load balancing is the process of balancing load (application requests) across a number of servers. The load balancer presents to the outside world a "virtual server" that accepts requests on behalf of a pool (also called a cluster or farm) of servers and distributes those requests across all servers based on a load-balancing algorithm. All servers in the pool must contain the same content. Load balancers generally use one of several industry-standard algorithms to distribute requests. Some of the most common standard load balancing algorithms are round robin, weighted round robin, least connections, and weighted least connections. Load balancers are used to increase the capacity of a web site or application, ensure availability through failover capabilities, and to improve application performance. LAYER 7 SWITCHING Layer 7 switching takes its name from the OSI model, indicating that the device switches requests based on layer 7 (application) data. Layer 7 switching is also known as "request switching", "application switching", and "content based routing". A layer 7 switch presents to the outside world a "virtual server" that accepts requests on behalf of a number of servers and distributes those requests based on policies that use application data to determine which server should service which request. This allows for the application infrastructure to be specifically tuned/optimized to serve specific types of content. For example, one server can be tuned to serve only images, another for execution of server-side scripting languages like PHP and ASP, and another for static content such as HTML, CSS, and JavaScript. Unlike load balancing, layer 7 switching does not require that all servers in the pool (farm/cluster) have the same content. In fact, layer 7 switching expects that servers will have different content, thus the need to more deeply inspect requests before determining where they should be directed. Layer 7 switches are capable of directing requests based on URI, host, HTTP headers, and anything in the application message. The latter capability is what gives layer 7 switches the ability to perform content-based routing for ESBs and XML/SOAP services. LAYER 7 LOAD BALANCING By combining load balancing with layer 7 switching, we arrive at layer 7 load balancing, a core capability of all modern load balancers (a.k.a. application delivery controllers). Layer 7 load balancing combines the standard features of a load balancer to provide failover and improved capacity for specific types of content. This allows the architect to design an application delivery network that is highly optimized to serve specific types of content but is also highly available. Layer 7 load balancing allows for additional features offered by application delivery controllers to be applied based on content type, which further improves performance by executing only those policies that are applicable to the content. For example, data security in the form of data scrubbing is likely not necessary on JPG or GIF images, so it need only be applied to HTML and PHP.
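Here is a minimal sketch of that combination, written in Python for readability. The pool names and routing rules are invented for illustration; on an application delivery controller this logic is expressed as device configuration or an iRule rather than application code.

from collections import defaultdict

# layer 7 switching: choose a pool based on what is being requested
POOLS = {
    "images":  ["img1:80", "img2:80"],
    "scripts": ["php1:80", "php2:80", "php3:80"],
    "static":  ["web1:80", "web2:80"],
}

active_connections = defaultdict(int)   # shared load balancing state

def select_pool(uri):
    if uri.endswith((".jpg", ".gif", ".png")):
        return "images"
    if uri.endswith((".php", ".asp")):
        return "scripts"
    return "static"

def select_server(uri):
    # layer 7 load balancing: least connections within the chosen pool
    pool = POOLS[select_pool(uri)]
    server = min(pool, key=lambda s: active_connections[s])
    active_connections[server] += 1
    return server

print(select_server("/catalog/item.php"))   # one of the script servers

The routing decision and the balancing decision are separate steps, which is why the two capabilities combine so naturally in a single device.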
Layer 7 load balancing also allows for increased efficiency of the application infrastructure. For example, only two highly tuned image servers may be required to meet application performance and user concurrency needs, while three or four optimized servers may be necessary to meet the same requirements for PHP or ASP scripting services. Being able to separate out content based on type, URI, or data allows for better allocation of physical resources in the application infrastructure.

Simplifying Application Architecture in a Dynamic Data Center through Virtualization
Application architecture has never really been easy, but the introduction of virtualization may make it even less easy – unless you plan ahead. Most applications today maintain at least two if not three or more "tiers" within their architecture. Web (presentation), application server (business logic), and database (data) are the three most common "tiers" to an application, web-based or otherwise, with the presence of middleware (queues and buses) being an optional fourth tier, depending on the application and its integration needs. It's never been an easy task to ensure that the web servers know where the application servers are, that the application servers know where the middleware is, and that the middleware knows where the databases are during a deployment. Most of this information is manually configured as pools of connections defined by IP addresses. IP addresses that, with the introduction of virtualization and cloud computing, may be dynamic. Certainly not as dynamic as one would expect in a public cloud environment, but dynamic nonetheless. A high frequency of change is really no more disruptive to such an IP-dependent environment than a single change, as each change requires specific configuration modifications, additions or, in the case of elasticity, removals. It is only somewhat ironic that the same technology that introduces the potential for problems is the same technology that offers a simple solution: virtualization. Only this virtualization is not the one-to-many server-style virtualization, but rather the many-to-one network virtualization that has existed since the advent of proxy-based solutions. VIRTUAL SERVICE ENDPOINTS If considered before deployment – such as during design and implementation – then the use of virtual service endpoints as implemented through network virtualization techniques can dramatically improve the ability of application architectures to scale up and down and deal with any mobility within the infrastructure that might occur. Rather than directing any given tier to the next directly, all that is necessary is to direct the tier to a virtual service endpoint (an IP-port combination) on the appropriate infrastructure and voila! Instant scalability. The endpoint on the infrastructure, such as an application delivery controller, can ensure the scalability and availability (as well as security and other operationally necessary functions) by acting on that single "virtual service endpoint". Each tier scales elastically of its own accord, based on demand, without disrupting the connectivity and availability of other tiers. No other tier need care or even be aware of changes in the IP address assignments of other tiers, because it sees the entire tier as being the "virtual service endpoint" all the time. It stabilizes integration challenges by eliminating the problems associated with changing IP addresses in each tier of the architecture. It also has the benefit of eliminating the service-IP sprawl that is often associated with clustering or other proxy-based solutions that must also scale along with demand and consume more and more (increasingly) valuable IP addresses. This decoupling is also an excellent form of abstraction that enables versioning and upgrades to occur without disruption, and it can be used to simultaneously support multiple versions of the same interface simply by applying some application (page level) routing at the point of ingress (the virtual service endpoint) rather than using redirects or rewrites in the application itself.
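A small sketch of the difference, using invented addresses and tier names: in the coupled model each tier enumerates the members of the next tier, while in the decoupled model each tier knows only a single virtual service endpoint.

# tightly coupled: every tier carries the IP addresses of the next tier
coupled_config = {
    "web_tier": {"app_servers": ["10.1.2.11:8080", "10.1.2.12:8080", "10.1.2.13:8080"]},
    "app_tier": {"db_servers":  ["10.1.3.21:5432", "10.1.3.22:5432"]},
}

# decoupled: each tier points at one virtual service endpoint; the application
# delivery tier maps that endpoint to whatever instances exist at the moment
decoupled_config = {
    "web_tier": {"app_service": "10.1.0.10:8080"},
    "app_tier": {"db_service":  "10.1.0.20:5432"},
}

Elastic scale then happens behind the virtual endpoints; the consuming tiers never see membership changes.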
The ability to leverage network virtualization to create virtual service endpoints through which application architectures can be simplified and scaled should be seriously considered during the architectural design phase to ensure applications are taking full advantage of its benefits. Once virtual service endpoints are employed, there are a variety of other functions and capabilities that may be able to further simplify or extend application architectures such as two-factor authentication and OTP (one time password) options. The future is decoupled – from internal application design to application integration to the network. Employing a decoupling-oriented approach to enterprise architecture should be a focus for architects to enable IT to take advantage of the benefits and eliminate potential operational challenges.

F5 Friday: Programmability and Infrastructure as Code
#SDN #ADN #cloud #devops What does that mean, anyway? SDN and devops share some common themes. Both focus heavily on the notion of programmability in network devices as a means to achieve specific goals. For SDN it’s flexibility and rapid adaptation to changes in the network. For devops, it’s more a focus on the ability to treat “infrastructure as code” as a way to integrate into automated deployment processes. Each of these notions is just different enough to mean that systems supporting one don’t automatically support the other. An API focused on management or configuration doesn’t necessarily provide the flexibility of execution exhorted by SDN proponents as a significant benefit to organizations. And vice-versa. INFRASTRUCTURE as CODE Devops is a verb, it’s something you do. Optimizing application deployment lifecycle processes is a primary focus, and to do that many would say you must treat “infrastructure as code.” Doing so enables integration and automation of deployment processes (including configuration and integration) that enables operations to scale along with the environment and demand. The result is automated best practices, the codification of policy and process that assures repeatable, consistent and successful application deployments. F5 supports the notion (and has since 2003 or so) of infrastructure as code in two ways: iControl and iApp. iControl, the open, standards-based API for the entire BIG-IP platform, remains the primary integration point for partners and customers alike. Whether it’s inclusion in Opscode Chef recipes, or pre-packaged solutions with systems from HP, Microsoft, or VMware, iControl offers the ability to manage the control plane of BIG-IP from just about anywhere. iControl is service-enabled and has been accessed and integrated through more programmatic languages than you can shake a stick at. Python, PERL, Java, PHP, C#, PowerShell… if it can access web-based services, it can communicate with BIG-IP via iControl. iApp, a later addition to the BIG-IP platform, is best-practice application delivery service deployment codified. iApps are service- and application-oriented, enabling operations and consumers of IT as a Service to more easily deploy requisite application delivery services without requiring intimate knowledge of the hundreds of individual network attributes that must be configured. iApp is also used in conjunction with iControl to better automate and integrate application delivery services into an IT as a Service environment. Using iApp to codify performance and availability policies based on application and business requirements, consumers – through pre-integrated solutions – can simply choose an appropriate application delivery “profile” along with their application to ensure not only deployment but production success. Infrastructure as code is an increasingly important view to take of the provisioning and deployment processes for network and application delivery services as they enable more consistent, accurate policy configuration and deployment.
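As a concrete illustration of what infrastructure as code looks like from a deployment script, here is a minimal, hypothetical sketch in Python. It registers a newly provisioned server with a load balancing pool through a management REST API; the host, credentials, URI path and payload are illustrative assumptions patterned after iControl-style interfaces, not a literal schema.

import requests

BIGIP = "https://bigip.example.com"        # hypothetical management address
AUTH = ("admin", "admin-password")          # assumption: basic authentication

def add_pool_member(pool, member):
    """Add a new member to an existing pool as part of an automated deployment."""
    url = f"{BIGIP}/mgmt/tm/ltm/pool/{pool}/members"   # illustrative REST-style path
    resp = requests.post(url, auth=AUTH, json={"name": member}, verify=False)
    resp.raise_for_status()

# called from a deployment recipe once the new instance passes its health check
add_pool_member("app_pool", "10.1.2.14:8080")

Whether the call is made from a Chef recipe, a build pipeline or an orchestration engine, the point is the same: the configuration step becomes repeatable code rather than a manual change.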
Consider research from Dimension Data that found the “total number of configuration violations per device has increased from 29 to 43 year over year -- and that the number of security-related configuration errors (such as AAA Authentication, Route Maps and ACLS, Radius and TACACS+) also increased. AAA Authentication errors in particular jumped from 9.3 per device to 13.6, making it the most frequently occurring policy violation.” The ability to automate a known “good” configuration and policy when deploying application and network services can decrease the risk of these violations and ensure a more consistent, stable (and ultimately secure) network environment. PROGRAMMABILITY Less with “infrastructure as code” (devops) and more so with SDN comes the notion of programmability. On the one hand, this notion squares well with the “infrastructure as code” concept, as it requires infrastructure to be enabled in such a way as to provide the means to modify behavior at run time, most often through support for a common standard (OpenFlow is the darling standard du jour for SDN). For SDN, this tends to focus on the forwarding information base (FIB), but broader applicability has been noted at times, and no doubt will continue to gain traction. The ability to “tinker” with emerging and experimental protocols, for example, is one application of programmability of the network. Rather than wait for vendor support, it is proposed that organizations can deploy and test support for emerging protocols through OpenFlow-enabled networks. While this capability is likely not really something large production networks would undertake, still, the notion that emerging protocols could be supported on demand, rather than on a vendor-driven timeline, is often desirable. Consider support for SIP, before UCS became nearly ubiquitous in enterprise networks. SIP is a message-based protocol, requiring deep content inspection (DCI) capabilities to extract AVP codes as a means to determine routing to specific services. Long before SIP was natively supported by BIG-IP, it was supported via iRules, F5’s event-driven network-side scripting language. iRules enabled customers requiring support for SIP (for load balancing and high-availability architectures) to program the network by intercepting, inspecting, and ultimately routing based on the AVP codes in SIP payloads. Over time, this functionality was productized and became a natively supported protocol on the BIG-IP platform. Similarly, iRules enables a wide variety of dynamism in application routing and control by providing a robust environment in which to programmatically determine which flows should be directed where, and how. Leveraging programmability in conjunction with DCI affords organizations the flexibility to do – or do not – as they so desire, without requiring them to wait for hot fixes, new releases, or new products. SDN and ADN – BIRDS of a FEATHER The very same trends driving SDN at layers 2-3 are the same ones that have been driving ADN (application delivery networking) for nearly a decade. Five trends in networking are driving the transition to software defined networking and programmability. They are:
• User, device and application mobility;
• Cloud computing and service;
• Consumerization of IT;
• Changing traffic patterns within data centers;
• And agile service delivery.
The trends stretch across multiple markets, including enterprise, service provider, cloud provider, massively scalable data centers -- like those found at Google, Facebook, Amazon, etc. -- and academia/research. And they require dynamic network adaptability and flexibility and scale, with reduced cost, complexity and increasing vendor independence, proponents say.
-- Five needs driving SDNs Each of these trends applies equally to the higher layers of the networking stack, and each is addressed by a fully programmable ADN platform like BIG-IP. Mobile mediation, cloud access brokers, cloud bursting and balancing, context-aware access policies, granular traffic control and steering, and a service-enabled approach to application delivery are all part and parcel of an ADN. From devops to SDN to mobility to cloud, the programmability and service-oriented nature of the BIG-IP platform enables them all. The Half-Proxy Cloud Access Broker Devops is a Verb SDN, OpenFlow, and Infrastructure 2.0 Devops is Not All About Automation Applying ‘Centralized Control, Decentralized Execution’ to Network Architecture Identity Gone Wild! Cloud Edition Mobile versus Mobile: An Identity Crisis

Never attribute to technology that which is explained by the failure of people
#cloud Whether it’s Hanlon or Occam or MacVittie, the razor often cuts both ways. I am certainly not one to ignore the issue of complexity in architecture, nor do I dismiss lightly the risk introduced by cloud computing through increased complexity. But I am one who will point out absurdity when I see it, and especially when that risk is unfairly attributed to technology. Certainly the complexity introduced by attempts to integrate disparate environments, computing models, and networks will give rise to new challenges and introduce new risk. But we need to carefully consider whether the risk we discover is attributable to the technology or to simple failure by those implementing it. Almost all of the concepts and architectures being “discovered” in conjunction with cloud computing are far from original. They are adaptations, evolutions, and maturation of existing technology and architectures. Thus, it is almost always the case that when a “risk” of cloud computing is discovered it is not peculiar to cloud computing at all, and thus likely has its roots in implementation, not the technology. This is not to say there aren’t new challenges or risks associated with cloud computing; there are and will be cloud-specific risks that must be addressed (IP Identity Theft was heretofore unknown before the advent of cloud computing). But let’s not make mountains out of molehills by failing to recognize those “new” risks that actually aren’t “new” at all, but rather are simply being recognized by a wider audience due to the abundance of interest in cloud computing models. For example, I found this article particularly apocalyptic with respect to cloud and complexity on the surface. Digging into the “simple scenario”, however, revealed that the meltdown referenced was nothing new, and certainly wasn’t a technological problem – it was another instance of lack of control, of governance, of oversight, and of communication. The risk is being attributed to technology, but is more than adequately explained by the failure of people. The Hidden Risk of a Meltdown in the Cloud Ford identifies a number of different possibilities. One example involves an application provider who bases its services in the cloud, such as a cloud-based advertising service. He imagines a simple scenario in which the cloud operator distributes the service between two virtual servers, using a power balancing program to switch the load from one server to the other as conditions demand. However, the application provider may also have a load balancing program that distributes the customer load. Now Ford imagines the scenario in which both load balancing programs operate with the same refresh period, say once a minute. When these periods coincide, the control loops start sending the load back and forth between the virtual servers in a positive feedback loop. Could this happen? Yes. But consider for a moment how it could happen. I see three obvious possibilities:
1. IT has completely abdicated its responsibility for governing foundational infrastructure services like load balancing and allowed the business or developers to run amok without regard for existing services.
2. IT has failed to communicate its overarching strategy and architecture with respect to high-availability and scale in inter-cloud scenarios to the rest of the IT organization, i.e. IT has failed to maintain control (governance) over infrastructure services.
3. The left hand of IT and the right hand of IT have been severed from the body of IT and geographically separated with no means to communicate. Furthermore, each hand of IT wholeheartedly believes that the other is incompetent and will fail to properly architect for high-availability and scalability, thus requiring each hand to implement such services as required to achieve high-availability.
While the third possibility might make a better “made for SyFy tech-horror” flick, the reality is likely somewhere between 1 and 2. This particular scenario, and likely others, is not peculiar to cloud. The same lack of oversight in a traditional architecture could lead to the same catastrophic cascade described by Ford in the aforementioned article. Given a load balancing service in the application delivery tier, and a cluster controller in the application infrastructure tier, the same cascading feedback loop could occur, causing a meltdown and inevitably downtime for the application in question. Astute observers will conclude that an IT organization in which both a load balancing service and a cluster controller are used to scale the same application has bigger problems than duplicated services and a failed application. This is not a failure of technology, nor is it caused by excessive complexity or lack of transparency within cloud computing environments. It’s a failure to communicate, to control, to oversee the technical implementation of business requirements through architecture. That’s a likely conclusion before we even start considering an inter-cloud model with two completely separate cloud providers sharing access to virtual servers deployed in one or the other – maybe both? Still, the same analysis applies – such an architecture would require willful configuration and knowledge of how to integrate the environments. Which ultimately means a failure on the part of people to communicate. THE REAL PROBLEM The real issue here is failure to oversee – control – the integration and use of cloud computing resources by the business and IT. There needs to be a roadmap that clearly articulates what services should be used and in what environments. There needs to be an understanding of who is responsible for what services, where they connect, with whom they share information, and by whom they will (and can) be accessed. Maybe I’m just growing jaded – but we’ve seen this lack of roadmap and oversight before. Remember SOA? It ultimately failed to achieve the benefits promised not because the technology failed, but because the implementations were generally poorly architected and governed. A lack of oversight and planning meant duplicated services that undermined the success promised by pundits. The same path lies ahead with cloud. Failure to plan and architect and clearly articulate proper usage and deployment of services will undoubtedly end with the same disillusioned dismissal of cloud as yet another over-hyped technology. Like SOA, the reality of cloud is that you should never attribute to technology that which is explained by the failure of people. BFF: Complexity and Operational Risk The Pythagorean Theorem of Operational Risk At the Intersection of Cloud and Control… What is a Strategic Point of Control Anyway? The Battle of Economy of Scale versus Control and Flexibility Hybrid Architectures Do Not Require Private Cloud Control, choice, and cost: The Conflict in the Cloud Do you control your application network stack? You should.
The Wisdom of Clouds: In Cloud Computing, a Good Network Gives You Control...

The API is the Center of the Application (Integration) Universe
#mobile #fasterapp #ccevent Today, at least. Tomorrow, who knows? Some have tried to distinguish between “mobile cloud” and “cloud” by claiming the former is the use of the web browser on a mobile device to access services while the latter uses device-native applications. Like all things cloud, the marketing fluff is purposefully obfuscating and sweeping under the rug the technology required to make things work for consumers, whether those consumers be your kids or IT professionals. Infrastructure is not eliminated when organizations take to the cloud, nor do the constraints of web-based protocols and methodologies become irrelevant when Bob uses a service to store photos of his kid’s piano recital on Flickr. The applications and web browsers on a mobile device are using the same technology, the same protocols, suffering under the same constraints as the rest of us in wireline land. If developers are as smart as they are lazy (and I say that as a compliment, because it is the laziness of developers that more often than not leads to innovation) they have already moved to an API-centric model in which web site and device-native app interfaces both leverage the same APIs. This isn’t just a social integration phenomenon – it isn’t just about Twitter and Facebook and Google. API usage and demand is growing, and it is not expected to stop any time soon. When developers were asked about their desire to connect to services (assuming service = API), the overwhelming response was that they would like to connect to “everything, if it were easy.” (API Integration Pain Survey Results) The API is rapidly becoming (if it isn’t already) the center of the application (integration) universe. This unfortunately has the potential to cause confusion and chaos in the data center. When a single API is consumed by multiple clients – mobile, remote, applications, partners, etc. – solutions unique to each quickly seem to make their way into the code to deal with “exceptions” and “peculiarities” inherent to the client platform. That’s inefficient and, when one considers the growing number of platforms and form factors associated with mobile communications alone, it is not scalable from a people and process perspective. But the reality is that these exceptions and peculiarities – often caused by a lack of feature parity across form factors and platforms – must be addressed somewhere, and that somewhere is unfortunately almost unilaterally determined to be the application. Do we need to treat mobile devices differently? In terms of performance and delivery concerns, yes. But that’s where we leverage the application delivery tier to differentiate by device to ensure delivery. That’s the beauty of an abstracted, service-enabled data center – there’s an intelligent and agile layer of application delivery services that mediates between clients (regardless of their form factor) and services to ensure that delivery needs (security, performance, and availability) are met, in part by addressing the unique characteristics and reality of access via mobile devices. ABSTRACT and ISOLATE This is exactly the type of problem application delivery is designed to address. Multiple clients, multiple networks, all accessing the same application service or API but requiring specific authentication, security, and delivery characteristics to ensure that operational risk is mitigated in the most efficient manner possible.
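A minimal sketch of the idea, assuming a proxy-style mediation layer written in Python; the client classes, limits and header names are invented for illustration, and on an application delivery controller this logic would be expressed as policy rather than application code.

RATE_LIMITS = {"mobile": 60, "partner": 600, "web": 300}   # requests per minute, illustrative

def classify_client(headers):
    # one API, many consumers: classify the client before applying delivery policy
    if "X-Partner-Key" in headers:
        return "partner"
    ua = headers.get("User-Agent", "").lower()
    if any(token in ua for token in ("mobile", "android", "iphone")):
        return "mobile"
    return "web"

def mediate(headers, current_rate):
    client = classify_client(headers)
    if current_rate > RATE_LIMITS[client]:
        return "throttle"    # protect the API without touching its code
    return "forward"         # every client class ends up at the same backend API

print(mediate({"User-Agent": "Mozilla/5.0 (iPhone)"}, 45))   # forward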
Such mediation includes the ability to throttle services based on user and client, a common approach used by mega-sites such as Twitter. It includes the ability to provide single sign-on capabilities to all clients, regardless of platform and form factor, and support for enterprise-grade authentication integration to the same API or application service. It includes leveraging the appropriate security policies to ensure inbound and outbound security of data regardless of client, such that corporate data is not infected and spread to other consumers. A flexible, scalable application delivery tier addresses the problem of a single API being utilized by a variety of clients in a way that precludes the need to codify specific functionality on a per-platform or form-factor basis in the application logic itself, making the API simpler and easier to maintain as well as test and upgrade. It makes APIs and application services more scalable in terms of people and processes, which in turn makes the development and deployment process more efficient and able to focus on new services rather than constantly modifying and updating existing ones. Service-oriented architecture may have begun in the application demesne as a means to abstract and isolate services such that they could more easily be integrated, maintained, and changed without disruption, but the concept is applicable to the data center as a whole. By leveraging SOA concepts at the data center architecture level, the entire technological landscape of the business can be transformed into one that is ultimately more adaptable, more scalable, and more secure. I’ll be at CloudConnect 2012 and we’ll discuss the subject of cloud and performance a whole lot more at the show! Sessions Facebook Wins “Worst API” in Developer Survey API Integration Pain Survey Results IT Survey: Businesses Embrace APIs for Apps Integration, Not Social The Pythagorean Theorem of Operational Risk At the Intersection of Cloud and Control… Operational Risk Comprises More Than Just Security IT Services: Creating Commodities out of Complexity The Three Axioms of Application Delivery The Magic of Mobile Cloud

Infrastructure Architecture: Whitelisting with JSON and API Keys
Application delivery infrastructure can be a valuable partner in architecting solutions…. AJAX and JSON have changed the way in which we architect applications, especially with respect to their ascendancy to rule the realm of integration, i.e. the API. Policies are generally focused on the URI, which has effectively become the exposed interface to any given application function. It’s REST-ful, it’s service-oriented, and it works well. Because we’ve taken to leveraging the URI as a basic building block, as the entry point into an application, it affords the opportunity to optimize architectures and make more efficient use of the compute power available for processing. This is an increasingly important point, as capacity has become a focal point around which cost and efficiency are measured. By offloading functions to other systems when possible, we are able to increase the useful processing capacity of any given application instance and ensure a higher ratio of valuable processing to resources is achieved. The ability of application delivery infrastructure to intercept, inspect, and manipulate the exchange of data between client and server should not be underestimated. A full-proxy based infrastructure component can provide valuable services to the application architect that can enhance the performance and reliability of applications while abstracting functionality in a way that alleviates the need to modify applications to support new initiatives. AN EXAMPLE Consider, for example, a business requirement specifying that only certain authorized partners (in the integration sense) are allowed to retrieve certain dynamic content via an exposed application API. There are myriad ways in which such a requirement could be implemented, including requiring authentication and subsequent tokens to authorize access – likely the most common means of providing such access management in conjunction with an API. Most of these options require several steps, however, and interaction directly with the application to examine credentials and determine authorization to requested resources. This consumes valuable compute that could otherwise be used to serve requests. An alternative approach would be to provide authorized consumers with a more standards-based method of access that includes, in the request, the very means by which authorization can be determined. Taking a lesson from the credit card industry, for example, an algorithm can be used to determine the validity of a particular customer ID or authorization token. An API key, if you will, that is not stored in a database (which would require a lookup) but rather is algorithmic and therefore able to be verified as valid without needing a specific lookup at run-time. Assuming such a token or API key were embedded in the URI, the application delivery service can then extract the key, verify its authenticity using an algorithm, and subsequently allow or deny access based on the result. This architecture is based on the premise that the application delivery service is capable of responding with the appropriate JSON in the event that the API key is determined to be invalid. Such a service must therefore be network-side scripting capable. Assuming such a platform exists, one can easily implement this architecture and enjoy the improved capacity and resulting performance boost from the offload of authorization and access management functions to the infrastructure.
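A minimal sketch of such an algorithmic key in Python; the key format, secret and JSON error shape are invented for illustration, and on a network-side scripting platform the same check would be written in the device's scripting language.

import hashlib, hmac, json

SECRET = b"shared-issuing-secret"   # known only to the issuer and the delivery tier

def issue_key(customer_id):
    # key = customer id plus a truncated HMAC; validity is computable, no database lookup
    sig = hmac.new(SECRET, customer_id.encode(), hashlib.sha256).hexdigest()[:16]
    return f"{customer_id}.{sig}"

def is_valid_key(api_key):
    try:
        customer_id, sig = api_key.rsplit(".", 1)
    except ValueError:
        return False
    expected = hmac.new(SECRET, customer_id.encode(), hashlib.sha256).hexdigest()[:16]
    return hmac.compare_digest(sig, expected)

def gate(api_key):
    if not is_valid_key(api_key):
        # respond on behalf of the application with a well-formed JSON error
        return json.dumps({"error": "invalid_api_key", "data": None})
    return None   # None means: pass the request through to the web/application tier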
The request flow then looks like this:
1. A request is received by the application delivery service.
2. The application delivery service extracts the API key from the URI and determines validity.
3. If the API key is not legitimate, a JSON-encoded response is returned.
4. If the API key is valid, the request is passed on to the appropriate web/application server for processing.
Such an approach can also be used to enable or disable functionality within an application, including live streams. Assume a site that serves up streaming content, but only to authorized (registered) users. When requests for that content arrive, the application delivery service can dynamically determine, using an embedded key or some portion of the URI, whether to serve up the content or not. If it deems the request invalid, it can return a JSON response that effectively “turns off” the streaming content, thereby eliminating the ability of non-registered (or non-paying) customers to access live content. Such an approach could also be useful in the event of a service failure; if content is not available, the application delivery service can easily turn off the stream and/or respond to the request, providing feedback to the user that is valuable in reducing their frustration with AJAX-enabled sites that too often simply “stop working” without any kind of feedback or message to the end user. The application delivery service could, of course, perform other actions based on the in/validity of the request, such as directing the request to be fulfilled by a service generating older or non-dynamic streaming content, using its ability to perform application-level routing. The possibilities are quite extensive and implementation depends entirely on the goals and requirements to be met. Such features become more appealing when they are, through their capabilities, able to intelligently make use of resources in various locations. Cloud-hosted services may be more or less desirable for use in an application, and thus leveraging application delivery services to either enable or reduce the traffic sent to such services may be financially and operationally beneficial. ARCHITECTURE is KEY The core principle to remember here is that ultimately infrastructure architecture plays (or can and should play) a vital role in designing and deploying applications today. With the increasing interest in and use of cloud computing and APIs, it is rapidly becoming necessary to leverage resources and services external to the application as a means to rapidly deploy new functionality and support for new features. The abstraction offered by application delivery services provides an effective, cross-site and cross-application means of enabling what were once application-only services within the infrastructure. This abstraction and service-oriented approach reduces the burden on the application as well as its developers. The application delivery service is almost always the first service in the oft-times lengthy chain of services required to respond to a client’s request. Leveraging its capabilities to inspect and manipulate as well as route and respond to those requests allows architects to formulate new strategies and ways to provide their own services, as well as leveraging existing and integrated resources for maximum efficiency, with minimal effort. Related blogs & articles: HTML5 Going Like Gangbusters But Will Anyone Notice?
Web 2.0 Killed the Middleware Star The Inevitable Eventual Consistency of Cloud Computing Let’s Face It: PaaS is Just SOA for Platforms Without the Baggage Cloud-Tiered Architectural Models are Bad Except When They Aren’t The Database Tier is Not Elastic The New Distribution of The 3-Tiered Architecture Changes Everything Sessions, Sessions Everywhere

The Infrastructure Turk: Lessons in Services
#devops #cloud If your goal is IT as a Service, then at some point you have to actually service-enable the policies that govern IT infrastructure. My eldest shared the story of “The Turk” recently and it was a fine example of how appearances can be deceiving – and of the power of abstraction. If you aren’t familiar with the story, let me briefly share before we dive in to how this relates to infrastructure and, specifically, IT as a Service. The Turk, the Mechanical Turk or Automaton Chess Player was a fake chess-playing machine constructed in the late 18th century. The Turk was in fact a mechanical illusion that allowed a human chess master hiding inside to operate the machine. With a skilled operator, the Turk won most of the games played during its demonstrations around Europe and the Americas for nearly 84 years, playing and defeating many challengers including statesmen such as Napoleon Bonaparte and Benjamin Franklin. Although many had suspected the hidden human operator, the hoax was initially revealed only in the 1820s by the Londoner Robert Willis. [2] -- Wikipedia, “The Turk” The Automaton was actually automated in the sense that the operator was able to, via mechanical means, move the arm of the Automaton and thus give the impression the Automaton was moving pieces around the board. The operator could also nod and shake its head and offer rudimentary facial expressions. But the Automaton was not making decisions in any way, shape or form. The operator made the decisions and did so quite well, defeating many a chess champion of the day. [ You might also recall this theme appeared in the “Wizard of Oz”, wherein the Professor sat behind a “curtain” and “automated” what appeared to the inhabitants to be the great Wizard of Oz. ] The Turk was never really automated in the sense that it could make decisions and actually play chess. Unlike Watson, the centuries-old Automaton was never imbued with the ability to dynamically determine what moves to make itself. This is strikingly similar to modern “automation” and in particular the automation being enabled in modern data centers today. While automated configuration and set up of components and applications is becoming more and more common, the actual decisions and configuration are still handled by operators who push the necessary levers and turn the right knobs to enable infrastructure to react. IT as a SERVICE needs POLICIES as well as RESOURCES We need to change this model. We need to automate the Automaton in a way that enables automated provisioning initiated by the end-user, i.e. the application owner. We need infrastructure and ultimately operational services not only to configure and manage infrastructure, but to provision it. More importantly, end-users need to be able to provision the appropriate infrastructure services (policies) as well. Right now, devops is doing a great job enabling deployment automation; that is, creating scripts and recipes that are repeatable with respect to provisioning the appropriate infrastructure resources necessary to successfully deploy an application. But what we aren’t doing (yet) is enabling those as services. We’re currently the 18th century version of the Automaton, when what we want is the 21st century equivalent – automation from top to bottom (or underneath, as the analogy would require). What we’ve done thus far is put a veneer over what is still a very manual process.
Ops still determines the configuration on a per-application basis and subsequently customizes the configurations before pushing out the script. Certainly that script reduces operational costs and time whenever additional capacity is required for that application, as it becomes possible to simply replicate the configuration, but it does not alleviate the need for manual configuration in the first place. Nor does it leave room for end-users to tweak or otherwise alter the policies that govern myriad operational functions across network, storage, and server infrastructure that have a direct impact – for good and for ill – on the performance, security, and stability of applications. End users must still wait for the operator hidden inside the Automaton to make a move. IT as a Service needs services. And not just services for devops, but services for end users, for the consumers of IT. The application owner, the business stakeholder, the admin. These services need to not only take into consideration the basic provisioning of the resources required, but the policies that govern them. The intelligence behind the Automaton needs to be codified and encapsulated in a way that makes it as reusable as the basic provisionable resources. We need to provision not only resources – an IP address, network bandwidth, and the pool of resources from which applications are served and scale – but also the policies governing access, security, and even performance. These policies are at the heart of what IT provides for its consumers; the security that enables compliance and protects applications from intrusions and downtime, the dynamic adjustments required to keep applications performing within specified business requirements, the thresholds that determine the ebb and flow of compute capacity required to keep the application available. These policies should be service-enabled and provisionable by the end-user, by the consumers of IT services. The definitions of cloud computing, from wherever they originate, tend to focus on resources and lifecycle management of those resources. If one construes that to include applicable policies as well, then we are on the right track. But if we do not, then we need to consider from a more strategic point of view what is required of a successful application deployment. It is not just the provisioning of resources, but policies, as well, that make a deployment successful. The Automaton is a great reminder of the power of automation, but it is just as powerful a reminder of the failure to encapsulate the intelligence and decision-making capabilities required. In the 18th century it was nearly impossible to imagine a mechanical system that could make intelligent, real-time decisions. That’s one of the reasons the Automaton was such a fascinating and popular exhibition. The revelation of the Automaton was a disappointment, because it revealed that under the hood, that touted mechanical system was still relying on manual and very human intelligence to function. If we do not pay attention to this lesson, we run the risk of the dynamic data center also being exposed as a hoax one day, still primarily enabled by manual and very human processes. Service-enablement of policy lifecycle management is a key component to liberating the data center and an integral part of enabling IT as a Service.
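To make "service-enabled policy" concrete, here is a small hypothetical sketch of what an application owner might submit to a self-service catalog and what the IT service layer would do with it; the service names, fields and thresholds are invented for illustration.

# a provisioning request that carries policy, not just resources
provision_request = {
    "application": "order-api",
    "resources": {"instances": 4, "virtual_endpoint": "auto-assign"},
    "policies": {
        "availability": {"algorithm": "least-connections", "health_check": "/status"},
        "performance":  {"max_response_time_ms": 200, "scale_out_at_utilization": 0.75},
        "security":     {"waf_profile": "standard-web", "tls": "required"},
    },
}

def apply_policy(app, domain, policy):
    # stand-in for the real work: translating declared policy into device configuration
    print(f"configuring {domain} policy for {app}: {policy}")

def provision(request):
    for domain, policy in request["policies"].items():
        apply_policy(request["application"], domain, policy)

provision(provision_request)

The application owner declares intent; the operator hidden inside the machine is replaced by codified policy lifecycle management.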
Resolution to the Case (For & Against) X-Driven Scalability in Cloud Computing Environments
The Cloud Configuration Management Conundrum
IT as a Service: A Stateless Infrastructure Architecture Model
If a Network Can’t Go Virtual Then Virtual Must Come to the Network
You Can’t Have IT as a Service Until IT Has Infrastructure as a Service
This is Why We Can’t Have Nice Things
The Consumerization of IT: The OpsStore
An Aristotlean Approach to Devops and Infrastructure Integration
The Impact of Security on Infrastructure Integration
Infrastructure 2.0 + Cloud + IT as a Service = An Architectural Parfait

Forget Hyper-Scale. Think Hyper-Local Scale.
It’s kind of like thinking globally but acting locally…

While I rail against the use of the too-vague and cringe-inducing descriptor “workload” with respect to scalability and cloud computing, it is perhaps at least bringing to the fore an important distinction that needs to be made: that of the impact of different compute resource utilization patterns on scalability. What categorizing workloads has done is to separate “types” of processing and resource needs: some applications require more I/O, some less. Others are CPU hogs, while others chew up memory at an alarming rate. Applications have different resource utilization needs across the network, storage, and compute spectrum, and those needs have a profound impact on their scalability. This leads to models in which some applications scale better horizontally and others vertically.

Unfortunately, there are very few “pure” applications that can be dissected down to a simple model in which it is simply a case of providing more “X” as a means to scale. It is more often the case that some portions of the application are more network intensive while others require more compute. Functional partitioning is certainly a better option for scaling out such applications, but it is an impractical design methodology during development, as the overhead resulting from separation of duties at the functional level requires a more service-oriented approach – one that is not currently aligned with modern web application development practices.

Yet we see the need on a daily basis for hyper-scalability of applications. Applications are being pushed to their resource limits with more users, more devices, and more environments in which they must perform without delay. The “one size fits all” scalability model offered by cloud computing providers today is inadequate as a means to scale applications rapidly and nearly infinitely without overrunning budgets. This is because along with resource consumption patterns come constraints on concurrency. Cloud computing offers an easy button for this problem – auto-scalability. Concurrency demands are easily met: just spin up another instance. While certainly one answer, it can be an expensive one, and it’s absolutely an inefficient model of scalability.

HYPER-LOCAL SCALE

The lesson we should learn from cloud computing and hyper-scalability demands is that different functional processing scales, well, differently. The resource consumption patterns of one functional process may differ dramatically from another, and both need to be addressed in order to efficiently scale the application. If it is impractical to functionally separate “workloads” in the design and development process, then it is necessary to do so during the deployment phase, leveraging the contextual clues that identify the type of workload being invoked – i.e. hyper-local scale.

Hyper-local scalability requires leveraging scalability domains: functional workload divisions that are similar in nature and require similar resources to scale. Scalability domains allow functional partitioning as a scalability pattern to be applied to an application without requiring function-level separation (and all the management, maintenance, and deployment headaches that go along with it). Scalability domains are discrete pools of similar processing workloads, partitioned as part of the architecture, that allow specific configuration and architectural techniques to be applied to the underlying network and platform to increase the performance and scalability of those workloads.
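As a concrete (if simplified) illustration, the sketch below assumes an infrastructure service capable of application-layer routing and classifies requests into hypothetical scalability domains based on contextual clues in the URI; each domain maps to its own pool with its own scaling policy. The paths, pool names, and thresholds are illustrative assumptions only.

    # Hypothetical sketch: routing requests into scalability domains by URI.
    SCALABILITY_DOMAINS = {
        "reporting": {"pool": "cpu-heavy-pool", "scale_on": "cpu",     "max_instances": 12},
        "uploads":   {"pool": "io-heavy-pool",  "scale_on": "disk_io", "max_instances": 6},
        "web":       {"pool": "general-pool",   "scale_on": "conns",   "max_instances": 4},
    }

    def classify(uri):
        # Contextual clue: the URI prefix hints at the workload's resource profile.
        if uri.startswith("/reports"):
            return "reporting"
        if uri.startswith("/upload"):
            return "uploads"
        return "web"

    def route(uri):
        # Select the pool serving the workload's scalability domain.
        domain = classify(uri)
        pool = SCALABILITY_DOMAINS[domain]["pool"]
        print(f"{uri} -> domain '{domain}' -> pool '{pool}'")
        return pool

    if __name__ == "__main__":
        for uri in ("/reports/quarterly", "/upload/video", "/catalog/item/42"):
            route(uri)

Each pool can then be configured and scaled independently – the CPU-heavy reporting domain on instances and thresholds suited to it, the general web traffic on a smaller, cheaper footprint.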
This is the notion of hyper-local scalability: an architectural scaling strategy that leverages scalability domains to isolate the functional partitions that require hyper-scale from those that do not. In doing so, highly efficient scalability domains can be used to scale up or out the functional partitions that require it, while allowing other functional partitions to scale at a more nominal rate and therefore incur lower costs. Consider the notion a form of offload: high-resource-impact processing is offloaded to another instance, increasing the resources available on the first instance and, with them, its concurrency. The offloaded processing can hyper-scale as necessary in a purpose-configured instance at higher efficiency, resulting in better concurrency. Where a traditional scalability pattern – effectively replication – may have required ten instances to meet demand, a hyper-localized scalability pattern may require only six or seven, with most of those serving the high-resource-consuming processing. Fewer instances result in lower costs whilst simultaneously improving performance and efficiency.

INFRASTRUCTURE REQUIRED

Hyper-localized scalability architectures can leverage a variety of infrastructure scalability patterns, but they depend on infrastructure and its ability to perform application-layer routing and switching, and that dependency is paramount to success. Today’s demanding business and operational requirements cannot be met with the simple scalability strategies of yesterday. Not only are legacy strategies based on assumptions of infinite resources and budgets, they are inherently rooted in legacy application design, in which functional partitioning was not only difficult, it was nearly impossible without the aid of methodologies like SOA. Web applications are uniquely positioned in that they are well suited to partitioning strategies, whether at the functional, type, or session layer. The contextual data that web applications share with infrastructure capable of intercepting, inspecting, and acting upon that data means more modern, architecture-based scaling strategies can be put into play. Doing so affords organizations the means to achieve higher efficiency and utilization rates while in turn improving the performance, resiliency, and availability of those applications.

These strategies require infrastructure services and an understanding of the resource needs and usage patterns of the application, as well as the ability to implement that architecture using infrastructure services and platform optimization. It’s a job for devops.

Service Virtualization Helps Localize Impact of Elastic Scalability
Intercloud: Are You Moving Applications or Architectures?
IT as a Service: A Stateless Infrastructure Architecture Model
Load Balancing Fu: Beware the Algorithm and Sticky Sessions
Infrastructure Scalability Pattern: Sharding Sessions
Infrastructure Scalability Pattern: Partition by Function or Type
It’s 2am: Do You Know What Algorithm Your Load Balancer is Using?
Lots of Little Virtual Web Applications Scale Out Better than Scaling Up
Sessions, Sessions Everywhere
Choosing a Load Balancing Algorithm Requires DevOps Fu