standards
BIG-IP Configuration Object Naming Conventions
George posted an excellent blog on hostname nomenclature a while back, but something we haven't discussed much in this space is a naming convention for BIG-IP configuration objects. Last week, DevCentral community user Deon posted a question on exactly that. Sometimes standards exist just for the sake of having one, but in most cases, and particularly in this case, having a standard is a very good thing. Forum señor hoolio and MVP hamish weighed in with some good advice:

[app name]_[protocol]_[object type]

Examples:

www.example.com_http_vs
www.example.com_http_pool
www.example.com_http_monitor

As hoolio pointed out in the forum, each object now has a description field, so the metadata capability is there to record identifying information (knowledge base IDs, troubleshooting info, application owners), but having an object name that is quickly searchable and identifiable to operational staff is key.

Hamish had a slightly different format for virtuals:

[fqdn]_[port]

For network virtuals, I've always made the network part of the name, as hamish also recommends in his guidance:

"Network VS's tend to be named net-net.num.dot.ted-masklen, e.g. net-0.0.0.0-0 is the default address. Where they conflict (e.g. two defaults depending on src vlan), it gets an extra descriptor between net- and the IP address, e.g. net-wireless-0.0.0.0-0 (default network VS for a wireless VLAN). I don't currently have any network VS's for specific ports, but they'd be something like net-0.0.0.0-0-port."

Your Turn

What standards do you use? Share in the comments section below, or post to the forum thread.
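For teams that script their configuration builds, a convention like this is easy to enforce programmatically. Here is a minimal Python sketch, purely illustrative rather than any F5 tooling; the helper name and the set of object types are assumptions made for the example.

```python
# Illustrative only: a small helper that composes BIG-IP object names
# following the [app name]_[protocol]_[object type] convention discussed above.

VALID_OBJECT_TYPES = {"vs", "pool", "monitor", "profile", "irule"}

def object_name(app, protocol, object_type):
    """Build a config object name like 'www.example.com_http_pool'."""
    object_type = object_type.lower()
    if object_type not in VALID_OBJECT_TYPES:
        raise ValueError(f"unknown object type: {object_type}")
    return f"{app}_{protocol.lower()}_{object_type}"

if __name__ == "__main__":
    for obj in ("vs", "pool", "monitor"):
        print(object_name("www.example.com", "HTTP", obj))
    # www.example.com_http_vs
    # www.example.com_http_pool
    # www.example.com_http_monitor
```

Generating names this way keeps them consistent across build scripts and makes them trivially greppable by operations staff, which is the real payoff of the convention.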
SOAP vs REST: The war between simplicity and standards

SOA is, at its core, a design and development methodology. It embraces reuse through decomposition of business processes and functions into core services. It enables agility by wrapping services in an accessible interface that is decoupled from its implementation. It provides a standard mechanism for application integration that can be used internally or externally. It is, as they say, what it is.

SOA is not necessarily SOAP, though until the recent rise of social networking and Web 2.0 there was little real competition against the rising standard. But of late the adoption of REST and its use on the web-facing side of applications has begun to push around the incumbent. We still aren't sure who swung first. We may never know, and at this point it's irrelevant: there's a war out there, as SOAP and REST duke it out for dominance of SOA.

At the core of the argument is this: SOAP is weighted down by the very standards designed to promote interoperability (WS-I), security (WS-Security), and reliability (WS-Reliability). REST is a lightweight compared to its competitor, with no standards at all. Simplicity is its siren call, and it's being heard even in the far corners of corporate data centers. A February 2007 Evans Data survey found a 37% increase in those implementing or considering REST, with 25% considering REST-based Web Services as a simpler alternative to SOAP-based services. And that was last year, before social networking really exploded and the integration of Web 2.0 sites via REST-based services took over the face of the Internet.

It was postulated then that WOA (Web Oriented Architecture) was the public face of SOA (Service Oriented Architecture): REST on the outside was the way to go, but SOAP on the inside was nearly sacrosanct. Apparently that thought, while not wrong in theory, didn't take into account the fervor with which developers hold dear their beliefs regarding everything from language to operating system to architecture. The downturn in the economy hasn't helped, either, as REST certainly is easier and faster to implement, even with the plethora of development tools and environments available to carry all the complex WS-* standards that go along with SOAP like some sort of technology bellhop. Developers have turned to the standard-less option because it seems faster, cheaper, and easier. And honestly, we really don't like being told how to do things. I don't, and didn't, back in the day when the holy war was between structured and object-oriented programming.

While REST certainly has its advantages, standard-less development can, in the long run, be much more expensive to maintain and manage than standards-focused competing architectures. The argument that standards-based protocols and architectures are difficult because there's more investment required to learn the basics as well as the associated standards is essentially a red herring. Without standards there is often just as much investment in learning data formats (are you using XML? JSON? CSV? Proprietary formats? WWW-URL encoded?) as there is in learning standards. Without standards there is necessarily more documentation required, which cuts into development time. Then there's testing: functional and vulnerability testing necessarily has to be customized, because testing tools can't predict what format or protocol you might be using.
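To make the simplicity argument concrete, here is a hedged sketch of the same "get customer" operation as a bare REST call and as a hand-built SOAP request, using Python's requests library; the endpoint URLs, namespace, and message shapes are hypothetical.

```python
# Illustrative only: the same "get customer" operation as a REST call and as a
# hand-rolled SOAP request. The URLs, namespace, and message shapes are hypothetical.
import requests

# REST: the request is just a URL plus standard HTTP semantics; the payload is JSON.
rest_response = requests.get("https://api.example.com/customers/42")
customer = rest_response.json()

# SOAP: the same operation wrapped in a WS-* style XML envelope posted to one endpoint.
soap_envelope = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
               xmlns:cus="http://example.com/customer">
  <soap:Body>
    <cus:GetCustomer>
      <cus:CustomerId>42</cus:CustomerId>
    </cus:GetCustomer>
  </soap:Body>
</soap:Envelope>"""
soap_response = requests.post(
    "https://api.example.com/soap/CustomerService",
    data=soap_envelope,
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "http://example.com/customer/GetCustomer"},
)
```

Neither snippet is more correct than the other; the point is simply that the REST call leans on conventions already baked into HTTP, while the SOAP call carries its contract along with it, which is exactly the trade-off at issue.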
And let's not forget the horror that is integration, and how proprietary application protocols made it a booming software industry replete with toolkits, libraries, and third-party packages just to get two applications to play nice together. Conversely, standards that are confusing and complex lengthen the implementation cycle, but they make integration and testing, as well as long-term maintenance, much less painful and less costly.

Arguing simplicity versus standards is ridiculous in the war between REST and SOAP, because simplicity without standards is just as detrimental to the costs and manageability of an application as standards without simplicity.

Related articles by Zemanta:

RESTful .NET
Has social computing changed attitudes toward reuse?
The death of SOA has been greatly exaggerated
Web 2.0: Integration, APIs, and scalability
Performance Impact: Granularity of Services
Welcome to The Phygital World

Standards for 'Things'

That thing, next to the other thing, talking to this thing needs something to make it interoperate properly. That's the goal of the Industrial Internet Consortium (IIC), which hopes to establish common ways that machines share information and move data. IBM, Cisco, GE and AT&T have all teamed up to form the Industrial Internet Consortium (IIC), an open membership group established with the task of breaking down technology silo barriers to drive better big data access and improved integration of the physical and digital worlds. The Phygital World.

The IIC will work to develop a 'common blueprint' that machines and devices from all manufacturers can use to share and move data. These standards won't just be limited to internet protocols, but will also include metrics like storage capacity in IT systems, various power levels, and data traffic control. Sensors are getting standards. Soon.

As more of these chips get installed on street lights, thermostats, engines, soda machines and even into our own bodies, the IIC will focus on testing IoT applications, producing best practices and standards, influencing global IoT standards for Internet and industrial systems, and creating a forum for sharing ideas. Exploring new worlds, so to speak. I think it's nuts that we're in an age where we are trying to figure out how the blood sensor talks to the fridge sensor, which notices there is no more applesauce and auto-orders from the local grocery to have it delivered that afternoon. Almost there.

Initially, the new group will focus on 'industrial Internet' applications in manufacturing, oil and gas exploration, healthcare and transportation. In those industries, vendors often don't make it easy for hardware and software solutions to work together. The IIC is saying, 'we all have to play with each other.' That will become critically important when your embedded sleep monitor/dream recorder notices your blood sugar levels rising, indicating that you're about to wake up, which kicks off a series of workflows that start the coffee machine, heat and distribute the hot water, and display the day's news and weather on the refrigerator's LCD screen. Any minute now.

It will probably be a little while (years) before these standards can be created and approved, but when they are, they'll help developers of hardware and software create solutions that are compatible with the Internet of Things. The end result will be the full integration of sensors, networks, computers, cloud systems, large enterprises, vehicles, businesses and hundreds of other entities that are 'connected.' With London cars getting stolen using electronic gadgets and connected devices as common as electricity by 2025, securing the Internet of Things should be one of the top priorities facing the consortium.

ps

Related:

Consortium Wants Standards for 'Internet of Things'
AT&T, Cisco, GE, IBM and Intel form Industrial Internet Consortium for IoT standards
IBM, Cisco, GE & AT&T form Industrial Internet Consortium
The "Industrial" Internet of Things and the Industrial Internet Consortium
The Internet of Things Will Thrive by 2025
Securing the Internet of Things: is the web already breaking up?
Connected Devices as Common as Electricity by 2025
The ABCs of the Internet of Things
Some Predictions About the Internet of Things and Wearable Tech From Pew Research
Car-Hacking Goes Viral In London

Technorati Tags: iot, things, internet of things, standards, security, sensors, nouns, silva, f5
FedRAMP Federates Further

FedRAMP (Federal Risk and Authorization Management Program), the government's cloud security assessment plan, announced late last week that Amazon Web Services (AWS) is the first agency-approved cloud service provider. The accreditation covers all AWS data centers in the United States. Amazon becomes the third vendor to meet the security requirements detailed by FedRAMP.

FedRAMP is the result of the US Government's work to address security concerns related to the growing practice of cloud computing, and it establishes a standardized approach to security assessment, authorization and continuous monitoring for cloud services and products. By creating industry-wide security standards and focusing more on risk management, as opposed to strict compliance with reporting metrics, officials expect to improve data security as well as simplify the processes agencies use to purchase cloud services. FedRAMP is looking toward full operational capability later this year.

As both the cloud and the government's use of cloud services grew, officials found many inconsistencies in requirements and approaches as each agency began to adopt the cloud. Launched in 2012, FedRAMP's goal is to bring consistency to the process and also give cloud vendors a standard way of providing services to the government. And with the government's cloud-first policy, which requires agencies to consider moving applications to the cloud as a first option for new IT projects, this should streamline the process of deploying to the cloud. This is an 'approve once, use many' approach, reducing the cost and time required to conduct redundant, individual agency security assessments. AWS's certification is for 3 years.

FedRAMP provides an overall checklist for handling risks associated with Web services that would have a limited or serious impact on government operations if disrupted. Cloud providers must implement these security controls to be authorized to provide cloud services to federal agencies. The government will forbid federal agencies from using a cloud service provider unless the vendor can prove that a FedRAMP-accredited third-party organization has verified and validated the security controls. Once approved, the cloud vendor does not need to be 're-evaluated' by every government entity that might be interested in its solution. There may be instances where additional controls are added by agencies to address specific needs.

The BIG-IP Virtual Edition for AWS includes options for traffic management, global server load balancing, application firewall, web application acceleration, and other advanced application delivery functions.

ps

Related:

Cloud Security With FedRAMP
FedRAMP Ramps Up
FedRAMP achieves another cloud security milestone
Amazon wins key cloud security clearance from government
CLOUD SECURITY ACCREDITATION PROGRAM TAKES FLIGHT
FedRAMP comes fraught with challenges
F5 iApp template for NIST Special Publication 800-53
Now Playing on Amazon AWS - BIG-IP
Connecting Clouds as Easy as 1-2-3
F5 Gives Enterprises Superior Application Control with BIG-IP Solutions for Amazon Web Services

Technorati Tags: f5, fedramp, government, cloud, service providers, risk, standards, silva, compliance, cloud security, aws, amazon
Infrastructure 2.0: The Feedback Loop Must Include Applications

Greg Ness calls it "connectivity intelligence", but what we're really talking about is the ability of network infrastructure to both be agile itself and enable IT agility at the same time. Brittle, inflexible infrastructures - whether they are implemented in hardware or software or both - are not agile enough to deal with an evolving, dynamic application architecture. Greg says in a previous post:

"The static infrastructure was not architected to keep up with these new levels of change and complexity without a new layer of connectivity intelligence, delivering dynamic information between endpoint instances and everything from Ethernet switches and firewalls to application front ends. Empowered with dynamic feedback, the existing deployed infrastructure can evolve into an even more responsive, resilient and flexible network and deliver new economies of scale."

The issue I see is this: it's all too network focused. Knowing that a virtual machine instance came online and needs an IP address, security policies, and to be added to a VLAN on the switch is very network-centric. Necessary, but network-centric. The VM came online for a reason, and that reason is most likely an application-specific one.

Greg has referred several times to the Trusted Computing Group's IF-MAP specification, which provides the basics through which connectivity intelligence could certainly be implemented if vendors could all agree to implement it. The problem with IF-MAP and, indeed, most specifications that come out of a group of network-focused organizers is that they are, well, network-focused. In fact, reading through IF-MAP I found many similarities between its operations (functions) and those found in the more application-focused security standard, SAML. While IF-MAP allows for custom data to be included, which could be used by application vendors to IF-MAP-enable application servers through which more application-specific details could be included in the dynamic infrastructure feedback loop, that's not as agile as it could be, because it doesn't provide a simple, standard mechanism through which application developers can integrate application-specific details into that feedback loop. And yet that's exactly what we need to complete this dynamic feedback loop and create a truly flexible, agile infrastructure, because the applications are endpoints; they, too, need to be managed and secured and integrated into the Infrastructure 2.0 world.

While I agree with Greg that IP address management in general and managing a constantly changing heterogeneous infrastructure is a nightmare that standards like IF-MAP might certainly help IT wake up from, there's another level of managing the dynamic environments associated with cloud computing and virtualization that generally isn't addressed by very network-specific standards like IF-MAP: the application layer. In order for a specification like IF-MAP to address the application layer, application developers would need to integrate the code necessary to act as part of an IF-MAP-enabled infrastructure (that is, become an IF-MAP client). That's because knowing that a virtual machine just came online is one thing; understanding which application it is, what application policies need to be applied, and what application-specific processing might be necessary in the rest of the infrastructure is another. It's all contextual, and based on variables we can't know ahead of time.
This can't be determined before the application is actually written, so it can't be something written by vendors and shipped as a "value add". Application security and switching policies are peculiar to the application; they're unique, and the only way we, as vendors, can provide that integration without foreknowledge of that uniqueness is to abstract applications to a general use case. That completely destroys the concept of agility because it doesn't take into consideration the application environment as it is at any given moment in time. It results in static, brittle integration that is essentially no more useful than SNMP would be if it were integrated into an application.

We can all sit around and integrate with VMware, and Hyper-V, and Xen. We can learn to speak IF-MAP (or some other common standard) and integrate with DNS and DHCP servers, with network security devices and with layer 2-3 switches. But we are still going to have to manually manage the applications that are ultimately the reason for the existence of such virtualized environments. While getting our infrastructure up to speed so that it is easier and less costly to manage is necessary, let's not forget about the applications we also still have to manage.

Dynamic feedback is great, and we have, today, the ability to enable pieces of that dynamic feedback loop. Customers can, today, use tools like iControl and iRules to build a feedback loop between their application delivery network and applications, regardless of whether those applications are in a VM or a Java EE container, or on a Microsoft server. But this feedback is specific to one vendor, and doesn't necessarily include the rest of the infrastructure.

Greg is talking about general dynamic feedback at the network layer. He's specifically (and understandably) concerned with network agility, not application agility. That's why he calls it Infrastructure 2.0 and not application-something 2.0. Greg points as an example to the constant levels of change introduced by virtual machines coming on and off line and the difficulties inherent in trying to manage that change via static, Infrastructure 1.0 products. That's all completely true and needs to be addressed by infrastructure vendors. But we also need to consider how to enable agility at the application layer, so the feedback loop that drives security and routing and switching and acceleration and delivery configurations in real time can adapt to conditions within and around the applications we are trying to manage in the first place.

It's all about the application in the end. Endpoints - whether internal or external to the data center - are requesting access and IP addresses for one reason: to get a resource served by an application. That application may be TCP-based, it may be HTTP-based, it may be riding on UDP. Regardless of the network-layer transport mechanisms, it's still an application - a browser, a server-side web application, a SOA service - and its unique needs must be considered in order for the feedback loop to be complete. How else will you know which application just came online or went offline? How do you know what security to apply if you don't know what you might be trying to secure? Somehow the network-centric standards that might evolve from a push to a more agile infrastructure must broaden their focus and consider how an application might integrate with such standards, or what information applications might provide as part of the dynamic feedback loop that will drive a more agile infrastructure.
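To make the iControl-style feedback loop concrete, here is a minimal, hypothetical sketch of an application instance registering and deregistering itself with the application delivery tier as it starts up and shuts down. It uses the modern iControl REST interface rather than the SOAP interface of this era, and the management address, credentials, and pool name are placeholders, not a prescribed integration.

```python
# Illustrative only: an application instance announcing itself to the application
# delivery tier when it comes online, so the ADC's view of the pool tracks reality.
# Assumes a BIG-IP reachable over iControl REST; host, credentials, and pool name
# are hypothetical placeholders.
import requests

BIGIP = "https://192.0.2.10"          # management address (placeholder)
AUTH = ("admin", "admin-password")    # placeholder credentials
POOL = "~Common~www.example.com_http_pool"

def register_instance(ip, port):
    """Add this instance as a pool member as part of its startup sequence."""
    url = f"{BIGIP}/mgmt/tm/ltm/pool/{POOL}/members"
    payload = {"name": f"{ip}:{port}", "address": ip}
    resp = requests.post(url, json=payload, auth=AUTH, verify=False)
    resp.raise_for_status()

def deregister_instance(ip, port):
    """Remove the member again as part of a clean shutdown."""
    url = f"{BIGIP}/mgmt/tm/ltm/pool/{POOL}/members/~Common~{ip}:{port}"
    requests.delete(url, auth=AUTH, verify=False).raise_for_status()
```

The specifics matter less than the shape of the loop: the application itself, not just the hypervisor or the switch, is telling the infrastructure what changed.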
Any emerging standard upon which Infrastructure 2.0 is built must somehow be accessible and developer-friendly, take into consideration application-specific resources as well as network resources, and provide a standard means by which information about the application, information that can drive the infrastructure to adapt to its unique needs, can be shared. If it doesn't, we're going to end up with the same fractured "us versus them" siloed infrastructure we've had for years.

That's no longer reasonable. The network and the application are inexorably linked now, thanks to cloud computing and the Internet in general. Managing thousands of instances of an application will be as painful as managing thousands of IP addresses. As Greg points out, that doesn't work very well right now, and it's costing us a lot of money, time, and effort. We know where this ends up, because we've seen it happen already. The same diseconomies of scale that affect TCP/IP are going to affect application management. We should be proactive in addressing the management issues that will arise from trying to manage thousands of applications and services, rather than waiting until that problem, too, can no longer be ignored.
What Do Database Connectivity Standards and the Pirate's Code Have in Common?

A: They're both more what you'd call "guidelines" than actual rules.

An almost irrefutable fact of application design today is the need for a database, or at a minimum a data store – i.e. a place to store the data generated and manipulated by the application. A second reality is that despite the existence of database access "standards", no two database solutions support exactly the same syntax and protocols. Connectivity standards like JDBC and ODBC exist, yes, but like SQL they are variable, resulting in implementations just different enough to effectively cause vendor lock-in at the database layer. You simply can't take an application developed to use an Oracle database and point it at a Microsoft or IBM database and expect it to work. Life's like that in the development world. Database connectivity "standards" are a lot like the pirate's Code, described well by Captain Barbossa in Pirates of the Caribbean as "more what you'd call 'guidelines' than actual rules."

It shouldn't be a surprise, then, to see the rise of solutions that address this problem, especially in light of an increasing awareness of (in)compatibility at the database layer and its impact on interoperability, particularly as it relates to cloud computing. Forrester analyst Noel Yuhanna recently penned a report on what is being called the Database Compatibility Layer (DCL). The focus of DCLs at the moment is on migration across database platforms because, as Noel points out, migrations are complex, time-consuming, and costly:

"Database migrations have always been complex, time-consuming, and costly due to proprietary data structures and data types, SQL extensions, and procedural languages. It can take up to several months to migrate a database, depending on database size, complexity, and usage of these proprietary features. A new technology has recently emerged for solving this problem: the database compatibility layer, a database access layer that supports another database management system's (DBMS's) proprietary extensions natively, allowing existing applications to access the new database transparently."
-- Simpler Database Migrations Have Arrived (Forrester Research Report)

Anecdotally, having been on the implementation end of such a migration, I can't disagree with the assessment. Whether the right answer is to sit down and force some common standards on database connectivity or to build a compatibility layer is a debate for another day. Suffice to say that right now the former is unlikely given the penetration and pervasiveness of existing database connectivity, so the latter is probably the most efficient and cost-effective solution. After all, any changes in the core connectivity would require the same level of application modification as a migration; not an inexpensive proposition at all.

According to Forrester, a Database Compatibility Layer (DCL) is a "database layer that supports another DBMS's proprietary SQL extensions, data types, and data structures natively. Existing applications can transparently access the newly migrated database with zero or minimal changes." By extension, this should also mean that an application could easily access one database and a completely different one using the same code base (assuming zero changes, of course). For the sake of discussion, let's assume that a DCL exists that exhibits just that characteristic – complete interoperability at the connectivity layer. Not just for migration, which is of course the desired use, but for day-to-day use.
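To ground the thought experiment, here is a toy sketch of what such a compatibility shim looks like from the application's point of view, using Python's own DB-API drivers. The backend names and keyword arguments are illustrative, and a real DCL would also have to translate proprietary SQL extensions, data types, and procedural code, which this does not.

```python
# Illustrative only: a toy "compatibility layer" that hides which backend is in use
# behind one connect() call. Real DCLs go much further (translating proprietary SQL
# extensions, data types, and procedural code); this sketch only shows the idea.
import sqlite3

def connect(backend, **kwargs):
    """Return a DB-API connection for the requested backend."""
    if backend == "sqlite":
        return sqlite3.connect(kwargs.get("database", ":memory:"))
    if backend == "postgres":
        import psycopg2          # assumes the driver is installed
        return psycopg2.connect(**kwargs)
    raise ValueError(f"unsupported backend: {backend}")

conn = connect("sqlite")
cur = conn.cursor()
cur.execute("SELECT 1")
print(cur.fetchone())
```

Notably, even this thin layer leaks: parameter styles and SQL dialects still differ between drivers, which is exactly the "guidelines, not rules" gap a true DCL would have to close.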
What would that mean for cloud computing providers – both internal and external?

ENABLING IT as a SERVICE

Based on our assumption that a DCL exists and is implemented by multiple database solution vendors, a veritable cornucopia of options for moving enterprise architectures toward IT as a Service becomes available – more than might be obvious at first. Consider that applications have variable needs in terms of performance, redundancy, disaster recovery, and scalability. Some applications require higher performance, others just need a nightly or even weekly backup, and some, well, some just aren't so important that you couldn't rely on standard IT operations backups to restore them if something goes wrong. In some cases the applications might have varying needs based on the business unit deploying them. The same application used by finance, for example, might have different requirements than the one used by developers. How could that be? Because the developers may only be using that application for integration or testing while finance is using it for realz. It happens.

What's more interesting, however, is how a DCL could enable a more flexible, service-oriented style buffet of database choices, especially if the organization used different database solutions to support different transactional, availability, and performance goals. If a universal DCL (or near universal, at least) existed, business stakeholders – together with their IT counterparts – could pick and choose the database "service" they wished to employ based not only on the technical characteristics and operational support but also on the costs and business requirements. It would also allow them to "migrate" over time as applications became more critical, without requiring a massive investment in upgrading or modifying the application to support a different back-end database. Obviously I'm picking just a few examples that may or may not be applicable to every organization. The bigger thing here, I think, is the flexibility in architecture and design that is afforded by such a model, one that balances costs with operational characteristics.

Monitoring of database resource availability, too, could be greatly simplified by such a layer, providing solutions that are natively supported by upstream devices responsible for availability at the application layer – which ultimately depends on the database, but is often an ignored component because of the complexity currently inherent in supporting such a varied set of connectivity standards.

It should also be obvious that this model would work for a PaaS-style provider who is not tied to any given database technology. A PaaS-style vendor today must either invest effort in developing and maintaining a services layer for database connectivity or restrict customers to a single database service. The latter is fine if you're creating a single-stack environment such as Microsoft Azure, but not so fine if you're trying to build a more flexible set of offerings to attract a wider customer base. Again, same note as above: providers would have a much more flexible set of options if they could rely upon what is effectively a single database interface regardless of the specific database implementation. More important for providers, perhaps, is the migration capability noted by the Forrester report in the first place, as one of the inhibitors of moving existing applications to a cloud computing provider is support for the same database across both enterprise and cloud computing environments.
While services layers are certainly a means to the same end, such layers are not universally supported. There's no "standard" for them, not even a set of best-practice guidelines, and the resulting application code suffers exactly the same issues as the use of proprietary database connectivity: lock-in. You can't pick one up and move it to the cloud, or to another database, without changing some code. Granted, a services layer is more efficient in this sense, as it serves as an architectural strategic point of control at which connectivity is aggregated and thus database implementation specifics are abstracted from the application. That means the database can be changed without impacting end-user applications; only the services layer need be modified. But even that approach is problematic for packaged applications that rely upon database connectivity directly and do not support such service layers. A DCL, ostensibly, would support packaged and custom applications alike, if it were implemented properly in all commercial database offerings.

CONNECTIVITY CARTEL

And therein lies the problem – if it were implemented properly in all commercial database offerings. There is a risk here of a connectivity cartel arising, where database vendors form alliances with other database vendors to support a DCL while "locking out" vendors whom they have decided do not belong. Because the DCL depends on supporting "proprietary SQL extensions, data types, and data structures natively", there may be a need for database vendors to collaborate in order to properly support those proprietary features. If collaboration is required, it is possible to deny that collaboration as a means to control who plays in the market. It's also possible for a vendor to slightly change some proprietary feature in order to "break" the others' support. And of course the sheer volume of work necessary for a database vendor to support all other database vendors could overwhelm smaller database vendors, leaving them with no real way to support everyone else.

The idea of a DCL is an interesting one, and it has its appeal as a means to forward compatibility for migration – both temporary and permanent. Will it gain in popularity? For the latter, perhaps, but for the former? Less likely. The inherent difficulties and scope of supporting such a wide variety of databases natively will certainly inhibit any such efforts. Solutions such as a RESTful interface, a la PHP REST SQL, or a JSON-HTTP based solution like DBSlayer may be more appropriate in the long run if they were to be standardized. And by standardized I mean standardized with industry-wide, agreed-upon specifications. Not more of the "more what you'd call 'guidelines' than actual rules" that we already have.

Database Migrations are Finally Becoming Simpler
Enterprise Information Integration | Data Without Borders
Review: EII Suites | Don't Fear the Data
The Database Tier is Not Elastic
Infrastructure Scalability Pattern: Sharding Sessions
F5 Friday: THE Database Gets Some Love
The Impossibility of CAP and Cloud
Sessions, Sessions Everywhere
Cloud-Tiered Architectural Models are Bad Except When They Aren't
Interoperability between clouds requires more than just VM portability

The issue of application state and connection management is one often discussed in the context of cloud computing and virtualized architectures. That's because the stress placed on existing static infrastructure due to the potentially rapid rate of change associated with dynamic application provisioning is enormous and, as is often pointed out, existing "Infrastructure 1.0" systems are generally incapable of reacting in a timely fashion to such changes occurring in real time.

The most basic of concerns continues to revolve around IP address management. This is a favorite topic of Greg Ness at Infrastructure 2.0 and has been subsequently addressed in a variety of articles and blogs since the concepts of cloud computing and virtualization have gained momentum. The Burton Group has addressed this issue with regard to interoperability in a recent post, positing that perhaps changes are needed (agreed) to support emerging data center models. What is interesting is that the blog supports the notion of modifying existing core infrastructure standards (IP) to support the dynamic nature of these new models, and also posits that interoperability is essentially enabled simply by virtual machine portability.

From The Burton Group's "What does the Cloud Need? Standards for Infrastructure as a Service":

"First question is: How do we migrate between clouds? If we're talking System Infrastructure as a Service, then what happens when I try to migrate a virtual machine (VM) between my internal cloud running ESX (say I'm running VDC-OS) and a cloud provider who is running XenServer (running Citrix C3)? Are my cloud vendor choices limited to those vendors that match my internal cloud infrastructure? Well, while it's probably a good idea, there are published standards out there that might help. Open Virtualization Format (OVF) is a meta-data format used to describe VMs in standard terms. While the format of the VM is different, the meta-data in OVF can be used to facilitate VM conversion from one format to the other, thereby enabling interoperability.

...

Another biggie is application state and connection management. When I move a workload from one location to another, the application has made some assumptions about where external resources are and how to get to them. The IP address the application or OS use to resolve DNS names probably isn't valid now that the VM has moved to a completely different location. That's where Locator ID Separation Protocol (LISP -- another overloaded acronym) steps in. The idea with LISP is to add fields to the IP header so that packets can be redirected to the correct location. The 'ID' and 'locator' are separated so that the packet with the 'ID' can be sent to the 'locator' for address resolution. The 'locator' can change the final address dynamically, allowing the source application or OS to change locations as long as they can reach the 'locator'." [emphasis added]

If LISP sounds eerily familiar to some of you, it should. It's the same basic premise behind UDDI and the process of dynamically discovering the "location" of service endpoints in a service-based architecture. Not exactly the same, but the core concepts are the same. The most pressing issue with proposing LISP as a solution is that it focuses only on the problems associated with moving workloads from one location to another, with the assumption that the new location is, essentially, a physically disparate data center, and not simply a new location within the same data center; an issue LISP does not even consider.
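The locator/identifier idea has an application-layer analogue that is easy to sketch: resolve a dependency by stable name each time it is used, instead of caching the address it happened to have when the image was built. A minimal illustration follows; the service name and port are hypothetical placeholders.

```python
# Illustrative only: look up a dependency by stable name at request time instead of
# caching the IP address it happened to have when the VM image was built. In practice
# the name would be something like db.example.internal; localhost keeps this runnable.
import socket

def current_endpoint(service_name="localhost", port=5432):
    """Resolve the service's locator each time it is needed."""
    addrinfo = socket.getaddrinfo(service_name, port, proto=socket.IPPROTO_TCP)
    family, socktype, proto, _, sockaddr = addrinfo[0]
    return sockaddr  # (ip, port) that is valid *now*, wherever the workload runs

print(current_endpoint())
```

It's a small example of the larger point: the workload's assumptions about where its resources live have to move with it, or be re-resolved after it lands.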
That it also ignores other application networking infrastructure that requires the same information - that is, the new location of the application or resource - is also disconcerting, but not a roadblock; it's merely a speed bump in the road to implementation. We'll come back to that later; first let's examine the emphasized statement that seems to imply that simply migrating a virtual image from one provider to another equates to interoperability between clouds - specifically IaaS clouds.

I'm sure the author didn't mean to imply that it's that simple; that all you need is to be able to migrate virtual images from one system to another. I'm sure there's more to it, or at least I'm hopeful that this concept was expressed so simply in the interests of brevity rather than completeness, because there's a lot more to porting any application from one environment to another than just the application itself.

Applications, and therefore virtual images containing applications, are not islands. They are not capable of doing anything without a supporting infrastructure - application and network - and some of that infrastructure is necessarily configured in such a way as to be peculiar to the application - and vice versa. We call it an "ecosystem" for a reason; because there's a symbiotic relationship between applications and their supporting infrastructure that, when separated, degrades or even destroys the usability of that application. One cannot simply move a virtual machine from one location to another, regardless of the interoperability of virtualization infrastructure, and expect things to magically work unless all of the required supporting infrastructure has also been migrated just as seamlessly. And this infrastructure isn't just hardware and network infrastructure; authentication and security systems, too, are an integral part of an application deployment.

Even if all the necessary components were themselves virtualized (and I am not suggesting this should be the case at all), simply porting the virtual instances from one location to another is not enough to assure interoperability, as the components must be able to collaborate, which requires connectivity information. Which brings us back to the problems associated with LISP and its focus on external discovery and location.

There's just a lot more to interoperability than pushing around virtual images, regardless of what those images contain: application, data, identity, security, or networking. Portability between virtual images is a good start, but it certainly isn't going to provide the interoperability necessary to ensure the seamless transition from one IaaS cloud environment to another.

RELATED ARTICLES & BLOGS

Who owns application delivery meta-data in the cloud?
More on the meta-data menagerie
The Feedback Loop Must Include Applications
How VM sprawl will drive the urgency of the network evolution
The Diseconomy of Scale Virus
Flexibility is Key to Dynamic Infrastructure
The Three Horsemen of the Coming Network Revolution
As a Service: The Many Faces of the Cloud
The Great Client-Server Architecture Myth

The webification of applications over the years has led to the belief that client-server as an architecture is dying. But very few beliefs about architecture have been further from the truth.

The belief that client-server was dying - or at least falling out of favor - was primarily due to the fact that early browser technology was used only as a presentation mechanism. The browser did not execute application logic, did not participate in application logic, and acted more or less like a television: smart enough to know how to display data but not smart enough to do anything about it. But the sudden explosion of Web 2.0 style applications and REST APIs has changed all that, and client-server is very much in style again, albeit with a twist.

Developers no longer need to write the core of a so-called "fat client" from the ground up. The browser or a framework such as Adobe AIR or Microsoft's Silverlight provides the client-side platform on which applications are developed and deployed. These client-side platforms have become very similar in nature to their server-side cousins, application servers, taking care of the tedious tasks associated with building and making connections to servers, parsing data, and even storing user-specific configuration data.

Even traditional thin-client applications are now packing on the pounds, using AJAX and various JavaScript libraries to provide both connectivity and presentation components to developers in the same fashion that AIR and Silverlight provide a framework for developers to build richer, highly interactive applications. These so-called RIAs (Rich Internet Applications) are, in reality, thin clients that are rapidly gaining weight.

One of the core reasons client-server architecture is being reinvigorated is the acceptance of standards. As developers have moved toward not only HTTP as the de facto transport protocol but also HTML, DHTML, CSS, and JavaScript as primary client-side technologies, so have device makers accepted these technologies as the "one true way" to deliver applications to multiple clients from a single server-side architecture. It's no longer required that a client be developed for every possible operating system and device combination. A single server-side application can serve any and all clients capable of communicating via HTTP, rendering HTML, DHTML, and CSS, and executing client-side scripts. Standards, they are good things after all.

Client-server architectures are not going away. They have simply morphed from an environment-specific model to an environment-agnostic model that is much more efficient in terms of development costs and ability to support a wider range of users, but they are still based on the same architectural principles. Client-server as a model works and will continue to work as long as the infrastructure over which such applications are delivered continues to mature and recognizes that while one application may be capable of being deployed and utilized from any device, the environments over which it is delivered may impact the performance and security of that application. The combination of fatter applications and increasing client-side application logic execution means more opportunities for exploitation as well as greater potential for degraded performance.
Because client-server applications are now agnostic and capable of being delivered and used on a variety of devices and clients, they are not specifically optimized for any given environment, and developers do not necessarily have access to the network and transport layer components they would need in order to optimize them. These applications are written specifically not to care, and yet the device, the location of the user, and the network over which the application is delivered are all relevant to application performance and security. The need for context-aware application delivery is more important now than ever, as the same application may be served to the same user but rendered in a variety of different client environments and in a variety of locations. All these variables must be accounted for in order to deliver these fat-client RIAs in the most secure, performant fashion regardless of where the user may be, over what network the application is being delivered, and what device the user may be using at the time.
Making Infrastructure 2.0 reality may require new standards

Managing a heterogeneous infrastructure is difficult enough, but managing a dynamic, ever-changing heterogeneous infrastructure that must be stable enough to deliver dynamic applications makes the former look like a walk in the park. Part of the problem is certainly the inability to manage heterogeneous network infrastructure devices from a single management system.

SNMP (Simple Network Management Protocol), the only truly interoperable network management standard used by infrastructure vendors for over a decade, is not robust enough to deal with the management nightmare rapidly emerging for cloud computing vendors. It's called "Simple" for a reason, after all. And even if it weren't, SNMP, while interoperable with network management systems like HP OpenView and IBM's Tivoli, is not standardized at the configuration level. Each vendor generally provides its own customized MIB (Management Information Base). Customized, which roughly translates to "proprietary"; if not in theory then in practice. MIBs are not interchangeable, they aren't interoperable, and they aren't very robust. Generally they're used to share information and are not capable of being used to modify device configuration. In other words, SNMP and customized MIBs are just not enough to support efficient management of a very large heterogeneous data center.

As Greg Ness pointed out in his latest blog post on Infrastructure 2.0, the diseconomies of scale in the IP address management space are applicable more generally to the network management space. There's just no good way today to efficiently manage the kind of large, heterogeneous environment required of cloud computing vendors. SNMP wasn't designed for this kind of management any more than TCP/IP was designed to handle the scaling needs of today's applications.

While some infrastructure vendors, F5 among them, have seen fit to provide a standards-based management and configuration framework, none of us are really compatible with the others in terms of methodology. The way in which we, for example, represent a pool, a VIP (Virtual IP address), or a VLAN (Virtual LAN) is not the same way Cisco or Citrix or Juniper represent the same network objects. Indeed, our terminology may even be different: we use "pool", other ADC vendors use "farm" or "cluster" to represent the same concept. Add virtualization to the mix and yet another set of terms is added, often conflicting with those used by network infrastructure vendors. "Virtual server" means something completely different when used by an application delivery vendor than it does when used by a virtualization vendor like VMware or Microsoft.

And the same tasks must be accomplished regardless of which piece of the infrastructure is being configured. VLANs, IP addresses, gateways, routes, pools, nodes, and other common infrastructure objects must be managed and configured across a variety of implementations. Scaling the management of these disparate devices and solutions is quickly becoming a nightmare for those trying to build out large-scale data centers, whether they are large enterprises, cloud computing vendors, or service providers.

In a response to Cloud Computing and Infrastructure 2.0, "johnar" points out:

Companies are forced to either roll the dice on single-vendor solutions for simplicity, or fill the voids with their own home-brew solutions and therefore assume responsibility for a lot of very complex code that is tightly coupled with ever-changing vendor APIs and technology.
The same technology that vendors tout as their differentiator is what is causing the integrators grey hair. Because we all "do it different" with our modern-day equivalents of customized MIBs, it is difficult to integrate all the disparate nodes that make up a full application delivery network and infrastructure into a single, cohesive, efficient management mechanism. We're standards-based, but we aren't based on a single management standard. And as "johnar" points out, it seems unlikely that we'll "unite for data center peace" any time soon: "Unlike ratifying a new Ethernet standard, there's little motivation for ADC vendors to play nice with each other."

I think there is motivation and reason for us to play nice with each other in this regard. Disparate, competitive vendors came together in the past to ratify Ethernet standards, which led to interoperability and simpler management as we built out the infrastructure that makes the web work today. If we can all agree that application delivery controllers (ADCs) are an integral part of Infrastructure 2.0 (and I'm betting we all can), then in order to further adoption of ADCs in general and make it possible for customers to choose based on features and functionality, we must make an effort to come together and consider standardizing a management model across the industry. And if we're really going to do it right, we need to encourage other infrastructure vendors to agree on a common base network management model to further simplify management of large heterogeneous network infrastructures. A VLAN is a VLAN regardless of whether it's implemented in a switch, an ADC, or on a server.

If a lack of standards might hold back adoption or prevent the ability of vendors to compete for business, then that's a damn good motivating factor right there for us to unite for data center peace. If Microsoft, IBM, BEA, and Oracle were able to unite and agree upon a single web services interoperability standard (which they were, the result of which is WS-I), then it is not crazy to think that F5 and its competitors can come together and agree upon a single, standards-based management interface that will drive Infrastructure 2.0 to become reality.

Major shifts in architectural paradigms often require new standards. That's where we got all the WS-* specifications and that's where we got all the 802.x standards: major architectural paradigm shifts. Cloud computing and the pervasive webification of, well, everything is driving yet another major architectural paradigm shift. And that may very well mean we need new standards to move forward and make the shift as painless as possible for everyone.
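To make the idea of a shared management model concrete, here is a purely hypothetical sketch of the kind of vendor-neutral object model such a standard might define. None of these names come from an existing specification; each vendor would supply an adapter that translates the common objects into its own configuration calls.

```python
# Purely hypothetical: a vendor-neutral representation of the objects the article says
# every ADC must expose (pool/farm/cluster, VIP, VLAN-attached members), which a
# per-vendor adapter would translate into its own API calls. No real standard defines
# these names; they exist only to illustrate the idea of a common management model.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Member:
    address: str
    port: int

@dataclass
class Pool:                      # "pool", "farm", or "cluster", depending on the vendor
    name: str
    members: List[Member] = field(default_factory=list)

@dataclass
class VirtualServer:
    name: str
    address: str
    port: int
    pool: Pool

class VendorAdapter:
    """Each vendor would implement this against its own configuration API."""
    def apply(self, vip: VirtualServer) -> None:
        raise NotImplementedError
```

The value isn't in any particular field list; it's that a management system could be written once against the common model and left alone while adapters absorb each vendor's terminology and API churn.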
The HTTP 2.0 War has Just Begun

#stirling Microsoft takes on Google as the war to win the standard for an overdue overhaul of HTTP starts to pick up steam.

RFC 1945 – "Hypertext Transfer Protocol -- HTTP/1.0" – was published in May 1996. In June of 1999, RFC 2616 – "Hypertext Transfer Protocol -- HTTP/1.1" – was published. In the ensuing 13 years there have been no substantial changes to the HTTP standard. None. Nada. Zilch. Even as the size and number of objects has ballooned over that time, and the overall composition of web pages has grown increasingly complex, there has been no substantial effort to improve upon the now entrenched HTTP standard. Even as sites struggled to maintain availability and performance in the face of exploding usage growth – fueled by mobile device proliferation and increasingly affordable access enabling everything from plants to cows to users to "get online" – HTTP 1.1 remained the standard for web-everything, despite growing evidence that it simply wasn't the most optimal means of connecting users with the resources they expect and, increasingly, demand. AJAX and Web 2.0 gave us better interactive models that alleviated some of the pain associated with performance problems, but as that model took hold and video became the medium du jour, even its advantages became unable to produce acceptable results.

And then Google introduced SPDY. The first shot in the HTTP 2.0 war. Now Microsoft has fired back with "Speed+Mobility" and the battle appears about to be fully engaged.

Although SPDY has been out and about for some time, it only recently made it to the status of "Internet-Draft" in the RFC system, being officially published in February 2012. Along comes March 2012, and Microsoft has (sort of) countered with Speed+Mobility. What will be interesting as the battle progresses is to see which other organizations and vendors will side with which version (if not both). Invariably other organizations will want to be able to claim to have been co-authors of whichever standard becomes, well, the standard, but choosing sides so early in a war is hardly appropriate, especially when the technical details are still (as of this writing) missing from Microsoft's proposal.

RIP-REPLACE versus UPGRADE

It's also not clear how Speed+Mobility will "retain as much compatibility as possible with the existing Web infrastructure" – a noble and laudable sentiment, to be sure – while still adopting most of the core concepts included in SPDY. From the HTTP Speed+Mobility RFC:

It [the session layer] would maintain the integrity of the layered architecture.
It would use an upgrade mechanism similar to that of WebSockets. This would enable compatibility with existing proxies and connection models, without creating a mandatory dependency on TLS. [Same as SPDY]
The protocol would define two types of frames: data and control. [Same as SPDY]
The session layer would enable negotiation of multiple simultaneous streams for HTTP requests with minimal overhead. [Same as SPDY]
The session layer would allow for prioritizing delivery of content to ensure highest value traffic is delivered first.

There's not much in the Speed+Mobility RFC on which to base a technical impact assessment on infrastructure (existing proxies and other HTTP mediating devices like load balancers), but what Microsoft appears to be saying is that it wants to leverage the concepts introduced by Google with SPDY (acknowledging their performance and, ultimately, scaling benefits) without leaving the familiar world of HTTP.
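The framing and multiplexing concepts the two proposals share are easy to sketch. The following is a conceptual toy, not SPDY's or Speed+Mobility's actual wire format: control and data frames carry a stream ID and a priority, so many requests can share one connection and higher-value responses go out first.

```python
# Conceptual toy only: control/data frames carrying a stream id and priority, so many
# requests can share one connection and important responses can jump the queue. This
# models the ideas listed above, not either proposal's actual wire format.
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Frame:
    priority: int                 # lower value = delivered first
    stream_id: int
    is_control: bool = field(compare=False, default=False)
    payload: bytes = field(compare=False, default=b"")

class Session:
    """One connection multiplexing many streams."""
    def __init__(self):
        self._queue = []

    def send(self, frame: Frame) -> None:
        heapq.heappush(self._queue, frame)

    def next_frame(self) -> Frame:
        return heapq.heappop(self._queue)   # highest-priority frame goes out next

session = Session()
session.send(Frame(priority=5, stream_id=1, payload=b"GET /logo.png"))
session.send(Frame(priority=1, stream_id=3, payload=b"GET /index.html"))
print(session.next_frame().payload)         # b'GET /index.html'
```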
That's actually important, assuming it can be done, because SPDY requires significant changes to existing infrastructure – network and server – in order to operate, and it is not inherently interoperable with HTTP. Despite this, SPDY interest and inquiries are beginning to become more frequent, which means it's getting the attention it deserves. As the only kid on the block that really addresses the performance issues inherent in HTTP (especially with respect to mobile devices), that's no surprise: the investment in new solutions to support SPDY would ostensibly see a return in the form of scalability on the server side, by requiring fewer server resources to support as many if not more users.

But SPDY isn't so far along (see previous note) as to be a clear front runner. It's still too new, despite the interest, to have garnered widespread support or mindshare, and despite Google's ubiquitous status as a household term for search, it isn't necessarily synonymous with web standards. Chrome may be gaining on IE, but in the minds of most users, IE is still synonymous with web browsing. Microsoft also has a serious advantage over Google in its relationship with the enterprise and IT, and in its more intimate understanding of data center infrastructure, as is evident from its blog on the introduction of its proposal:

"We think that rapid adoption of HTTP 2.0 is important. To make that happen, HTTP 2.0 needs to retain as much compatibility as possible with the existing Web infrastructure. Awareness of HTTP is built into nearly every switch, router, proxy, load balancer, and security system in use today. If the new protocol is 'HTTP' in name only, upgrading all of this infrastructure would take too long. By building on existing web standards, the community can set HTTP 2.0 up for rapid adoption throughout the web."
-- Speed and Mobility: An Approach for HTTP 2.0 to Make Mobile Apps and the Web Faster

Google, while not necessarily openly hostile to the enterprise or to the infrastructure vendors who'd need to support SPDY, certainly appears indifferent to the impact of a rip-and-replace protocol model. That's not to say Google's approach isn't feasible or desirable. Indeed, in some cases a "rip-and-replace" strategy is the only way to clean out the cobwebs that otherwise seem to hang onto technology for years after it's been superseded and superseded again. Think COBOL, which in some industries is still under active development, augmented by a hundred other technologies designed to work around the reality that it's an aged, outdated technology that for various reasons we are unable to simply walk away from.

TAKE a SIDE ALREADY, WILL YOU?!

Nope. Not gonna take a side yet – if ever. Personal preferences aside (which are hard to have at this point without more technical details from Microsoft), the decision whether an organization eventually wants to go with SPDY or Speed+Mobility will not negatively impact mediating devices at all. In fact, the existence of both would not negatively impact such devices, because of their strategic location in the network. The existence of all three – SPDY, S+M, HTTP – would not negatively impact these devices as long as they were able to support all three, which seems more likely than simply choosing a side. There will be a need to support both – and likely all three (do I hear a fourth?) – protocols moving forward. Regardless of who wins this particular war and comes out crowned HTTP 2.0 champion, there will still be a need to implement support across infrastructure vendors.
There will be a transitory period during which browsers, servers, and infrastructure all must "get up to speed" (ha!) and will do so at different rates, making the need for intermediating devices critical. Just as is the case with the migration from IPv4 to IPv6, intermediating application delivery solutions provide the means by which organizations with substantial infrastructure investments can maintain the value of those investments while moving forward to support emerging standards. Being able to translate, for example, between SPDY and HTTP today would be a significant boon for organizations, as it requires no changes to what is likely an extensive application and server infrastructure. Similarly, assuming a pilot of Speed+Mobility, if the application delivery tier can support it, it can mediate – translate – and provide an opportunity to support users via either standard without radically disrupting the application server infrastructure. A full-proxy-based application delivery infrastructure is full of advantages, after all.

I like SPDY. I like its approach, and I actually admire Google's chutzpah in diverging from HTTP as a solution, recognizing perhaps the inherent tendency to be more concerned with backwards compatibility than with improving upon the model. But I like what Microsoft is saying from an enterprise perspective, because honestly, replacing an entire infrastructure architecture to support one protocol out of many is not an appealing option, no matter the benefits. Both approaches have merit, and the bigger story is that an overhaul of HTTP is necessary - and long overdue.

Web App Performance: Think 1990s.
Network versus Application Layer Prioritization
Oops! HTML5 Does It Again
Fire and Ice, Silk and Chrome, SPDY and HTTP
Grokking the Goodness of MapReduce and SPDY
Google SPDY Protocol Would Require Mass Change in Infrastructure
What Does Mobile Mean, Anyway?
Moore's (Traffic) Law