standards

BIG-IP Configuration Object Naming Conventions

George posted an excellent blog on hostname nomenclature a while back, but something we haven't discussed much in this space is a naming convention for BIG-IP configuration objects. Last week, DevCentral community user Deon posted a question on exactly that. Sometimes there are standards just for the sake of having one, but in most cases, and particularly in this case, having a standard is a very good thing. Señor Forum hoolio and MVP hamish weighed in with some good advice:

[app name]_[protocol]_[object type]

Examples:

www.example.com_http_vs
www.example.com_http_pool
www.example.com_http_monitor

As hoolio pointed out in the forum, each object now has a description field, so the metadata capability is there to establish identifying information (knowledge base IDs, troubleshooting info, application owners), but having an object name that is quickly searchable and identifiable to operational staff is key. Hamish had a slightly different format for virtuals:

[fqdn]_[port]

For network virtuals, I've always made the network part of the name, as hamish also recommends in his guidance: network VSs tend to be named net-<network.address.in.dots>-<masklen>, e.g. net-0.0.0.0-0 is the default address. Where they conflict (e.g. two defaults depending on source VLAN), it gets an extra descriptor between net- and the IP address, e.g. net-wireless-0.0.0.0-0 (the default network VS for a wireless VLAN). I don't currently have any network VSs for specific ports, but they'd be something like net-0.0.0.0-0-<port>.

Your Turn

What standards do you use? Share in the comments section below, or post to the forum thread.

SOAP vs REST: The war between simplicity and standards
SOA is, at its core, a design and development methodology. It embraces reuse through decomposition of business processes and functions into core services. It enables agility by wrapping services in an accessible interface that is decoupled from its implementation. It provides a standard mechanism for application integration that can be used internally or externally. It is, as they say, what it is. SOA is not necessarily SOAP, though until the recent rise of social networking and Web 2.0 there was little real competition against the rising standard. But of late the adoption of REST and its use on the web-facing side of applications has begun to push around the incumbent. We still aren't sure who swung first. We may never know, and at this point it's irrelevant: there's a war out there, as SOAP and REST duke it out for dominance of SOA. At the core of the argument is this: SOAP is weighted down by the very standards designed to promote interoperability (WS-I), security (WS-Security), and reliability (WS-Reliability). REST is a lightweight compared to its competitor, with no standards at all. Simplicity is its siren call, and it's being heard even in the far corners of corporate data centers. A February 2007 Evans Data survey found a 37% increase in those implementing or considering REST, with 25% considering REST-based Web Services a simpler alternative to SOAP-based services. And that was last year, before social networking really exploded and the integration of Web 2.0 sites via REST-based services took over the face of the Internet. It was postulated then that WOA (Web Oriented Architecture) was the face of SOA (Service Oriented Architecture): that REST on the outside was the way to go, but SOAP on the inside was nearly sacrosanct. Apparently that thought, while not wrong in theory, didn't take into account the fervor with which developers hold dear their beliefs regarding everything from language to operating system to architecture.
The downturn in the economy hasn't helped, either, as REST certainly is easier and faster to implement, even with the plethora of development tools and environments available to carry all the complex WS-* standards that go along with SOAP like some sort of technology bellhop. Developers have turned to the standard-less option because it seems faster, cheaper, and easier. And honestly, we really don't like being told how to do things. I don't, and didn't, back in the day when the holy war was between structured and object-oriented programming. While REST has its advantages, certainly, standard-less development can, in the long run, be much more expensive to maintain and manage than standards-focused competing architectures. The argument that standards-based protocols and architectures are difficult because there's more investment required to learn the basics as well as the associated standards is essentially a red herring. Without standards there is often just as much investment in learning data formats (are you using XML? JSON? CSV? Proprietary formats? WWW-URL encoded?) as there is in learning standards. Without standards there is necessarily more documentation required, which cuts into development time. Then there's testing: functional and vulnerability testing that necessarily has to be customized because testing tools can't predict what format or protocol you might be using. And let's not forget the horror that is integration, and how proprietary application protocols made it a booming software industry replete with toolkits and libraries and third-party packages just to get two applications to play nice together. Conversely, standards that are confusing and complex lengthen the implementation cycle, but make integration and testing, as well as long-term maintenance, much less painful and less costly.
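The data-format question above is concrete: the same "get customer" call looks entirely different depending on whether a team chose SOAP's standardized envelope or an ad hoc REST/JSON shape. A minimal sketch for comparison (the operation, element names, and URL path are invented for illustration, not taken from any real WSDL or API):

```python
import json
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def soap_get_customer(customer_id: str) -> bytes:
    """The request as a SOAP 1.1 envelope: the structure and namespace
    are dictated by the standard, not by the two teams integrating."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, "GetCustomer")  # operation name from a hypothetical WSDL
    ET.SubElement(op, "CustomerId").text = customer_id
    return ET.tostring(env)

def rest_get_customer(customer_id: str) -> tuple[str, str]:
    """The REST version: just a URL and an ad hoc JSON body -- simpler,
    but its shape is whatever the two teams happened to agree on."""
    url = f"/customers/{customer_id}"
    payload = json.dumps({"id": customer_id})
    return url, payload
```

The SOAP version costs more up front but is self-describing to any WS-* tooling; the REST version costs nothing up front but pushes the format-learning and documentation burden onto every future integrator, which is exactly the trade-off argued above.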
Arguing simplicity versus standards is ridiculous in the war between REST and SOAP, because simplicity without standards is just as detrimental to the costs and manageability of an application as standards without simplicity.

Related articles by Zemanta

RESTful .NET
Has social computing changed attitudes toward reuse?
The death of SOA has been greatly exaggerated
Web 2.0: Integration, APIs, and scalability
Performance Impact: Granularity of Services

Welcome to The Phygital World
Standards for 'Things'

That thing, next to the other thing, talking to this thing needs something to make it interoperate properly. That's the goal of the Industrial Internet Consortium (IIC), which hopes to establish common ways that machines share information and move data. IBM, Cisco, GE and AT&T have all teamed up to form the IIC, an open membership group that's been established with the task of breaking down technology silo barriers to drive better big data access and improved integration of the physical and digital worlds. The Phygital World. The IIC will work to develop a 'common blueprint' that machines and devices from all manufacturers can use to share and move data. These standards won't just be limited to internet protocols, but will also include metrics like storage capacity in IT systems, various power levels, and data traffic control. Sensors are getting standards. Soon. As more of these chips are installed on street lights, thermostats, engines, soda machines and even in our own bodies, the IIC will focus on testing IoT applications, producing best practices and standards, influencing global IoT standards for Internet and industrial systems, and creating a forum for sharing ideas. Exploring new worlds, so to speak. I think it's nuts that we're in an age where we are trying to figure out how the blood sensor talks to the fridge sensor, which notices there is no more applesauce and auto-orders from the local grocery to have it delivered that afternoon. Almost there. Initially, the new group will focus on 'industrial Internet' applications in manufacturing, oil and gas exploration, healthcare and transportation. In those industries, vendors often don't make it easy for hardware and software solutions to work together. The IIC is saying, 'we all have to play with each other.'
That will become critically important when your embedded sleep monitor/dream recorder notices your blood sugar levels rising, indicating that you're about to wake up, which kicks off a series of workflows that start the coffee machine, heat and distribute the hot water, and display the day's news and weather on the refrigerator's LCD screen. Any minute now. It will probably be a little while (years) before these standards can be created and approved, but when they are, they'll help developers of hardware and software create solutions that are compatible with the Internet of Things. The end result will be the full integration of sensors, networks, computers, cloud systems, large enterprises, vehicles, businesses and hundreds of other entities that are 'connected.' With London cars getting stolen using electronic gadgets and connected devices as common as electricity by 2025, securing the Internet of Things should be one of the top priorities facing the consortium.

ps

Related:

Consortium Wants Standards for 'Internet of Things'
AT&T, Cisco, GE, IBM and Intel form Industrial Internet Consortium for IoT standards
IBM, Cisco, GE & AT&T form Industrial Internet Consortium
The "Industrial" Internet of Things and the Industrial Internet Consortium
The Internet of Things Will Thrive by 2025
Securing the Internet of Things: is the web already breaking up?
Connected Devices as Common as Electricity by 2025
The ABCs of the Internet of Things
Some Predictions About the Internet of Things and Wearable Tech From Pew Research
Car-Hacking Goes Viral In London

FedRAMP Federates Further
FedRAMP (Federal Risk and Authorization Management Program), the government's cloud security assessment program, announced late last week that Amazon Web Services (AWS) is the first agency-approved cloud service provider. The accreditation covers all AWS data centers in the United States. Amazon becomes the third vendor to meet the security requirements detailed by FedRAMP. FedRAMP is the result of the US government's work to address security concerns related to the growing practice of cloud computing, and it establishes a standardized approach to security assessment, authorization and continuous monitoring for cloud services and products. By creating industry-wide security standards and focusing more on risk management, as opposed to strict compliance with reporting metrics, officials expect to improve data security as well as simplify the processes agencies use to purchase cloud services. FedRAMP is looking toward full operational capability later this year. As both the cloud and the government's use of cloud services grew, officials found that there were many inconsistencies in requirements and approaches as each agency began to adopt the cloud. Launched in 2012, FedRAMP's goal is to bring consistency to the process and give cloud vendors a standard way of providing services to the government. And with the government's cloud-first policy, which requires agencies to consider moving applications to the cloud as a first option for new IT projects, this should streamline the process of deploying to the cloud. This is an 'approve once, use many' approach, reducing the cost and time required to conduct redundant, individual agency security assessments. AWS's certification is for three years. FedRAMP provides an overall checklist for handling risks associated with Web services that would have a limited or serious impact on government operations if disrupted.
Cloud providers must implement these security controls to be authorized to provide cloud services to federal agencies. The government will forbid federal agencies from using a cloud service provider unless the vendor can prove that a FedRAMP-accredited third-party organization has verified and validated its security controls. Once approved, the cloud vendor does not need to be re-evaluated by every government entity that might be interested in its solution, though there may be instances where agencies add controls to address specific needs. The BIG-IP Virtual Edition for AWS includes options for traffic management, global server load balancing, application firewall, web application acceleration, and other advanced application delivery functions.

ps

Related:

Cloud Security With FedRAMP
FedRAMP Ramps Up
FedRAMP achieves another cloud security milestone
Amazon wins key cloud security clearance from government
Cloud Security Accreditation Program Takes Flight
FedRAMP comes fraught with challenges
F5 iApp template for NIST Special Publication 800-53
Now Playing on Amazon AWS - BIG-IP
Connecting Clouds as Easy as 1-2-3
F5 Gives Enterprises Superior Application Control with BIG-IP Solutions for Amazon Web Services

Infrastructure 2.0: The Feedback Loop Must Include Applications
Greg Ness calls it "connectivity intelligence," but it seems what we're really talking about is the ability of network infrastructure both to be agile itself and to enable IT agility at the same time. Brittle, inflexible infrastructures - whether they are implemented in hardware or software or both - are not agile enough to deal with an evolving, dynamic application architecture. Greg says in a previous post: The static infrastructure was not architected to keep up with these new levels of change and complexity without a new layer of connectivity intelligence, delivering dynamic information between endpoint instances and everything from Ethernet switches and firewalls to application front ends. Empowered with dynamic feedback, the existing deployed infrastructure can evolve into an even more responsive, resilient and flexible network and deliver new economies of scale. The issue I see is this: it's all too network-focused. Knowing that a virtual machine instance came online and needs an IP address, security policies, and to be added to a VLAN on the switch is very network-centric. Necessary, but network-centric. The VM came online for a reason, and that reason is most likely an application-specific one. Greg has referred several times to the Trusted Computing Group's IF-MAP specification, which provides the basics through which connectivity intelligence could certainly be implemented if vendors could all agree to implement it. The problem with IF-MAP and, indeed, most specifications that come out of a group of network-focused organizations is that they are, well, network-focused. In fact, reading through IF-MAP I found many similarities between its operations (functions) and those found in the more application-focused security standard, SAML.
IF-MAP does allow custom data to be included, which application vendors could use to IF-MAP-enable application servers and thereby feed more application-specific details into the dynamic infrastructure feedback loop. But that's not as agile as it could be, because it doesn't provide a simple, standard mechanism through which application developers can integrate application-specific details into that feedback loop. And yet that's exactly what we need to complete this dynamic feedback loop and create a truly flexible, agile infrastructure, because applications are endpoints too; they also need to be managed, secured and integrated into the Infrastructure 2.0 world. While I agree with Greg that IP address management in general, and managing a constantly changing heterogeneous infrastructure in particular, is a nightmare that standards like IF-MAP might certainly help IT wake up from, there's another level of managing the dynamic environments associated with cloud computing and virtualization that generally isn't addressed by very network-specific standards like IF-MAP: the application layer. In order for a specification like IF-MAP to address the application layer, application developers would need to integrate the code necessary to act as part of an IF-MAP enabled infrastructure - that is, to become an IF-MAP client. That's because knowing that a virtual machine just came online is one thing; understanding which application it is, what application policies need to be applied, and what application-specific processing might be necessary in the rest of the infrastructure is another. It's all contextual, and based on variables we can't know ahead of time. This can't be determined before the application is actually written, so it can't be something written by vendors and shipped as a "value add".
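To make the gap concrete, here's a sketch of the kind of event an application-aware feedback loop would need. The network-level fields are what an IF-MAP-style client can publish today; the app_metadata field stands in for the application-specific context argued for above. All names, addresses and policy labels are invented, and the real IF-MAP transport is out of scope:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class VMEvent:
    """Network-level facts an IF-MAP-style client can already publish."""
    ip: str
    mac: str
    vlan: int
    # The application-level context the infrastructure is missing: without
    # it, the network knows a VM appeared but not which policies it needs.
    app_metadata: dict = field(default_factory=dict)

def publish(event: VMEvent) -> str:
    """Serialize the event as the 'custom data' payload of a hypothetical
    IF-MAP publish call (the actual protocol exchange is omitted)."""
    return json.dumps(asdict(event))

event = VMEvent(
    ip="10.0.4.17", mac="00:1b:63:84:45:e6", vlan=40,
    app_metadata={"app": "billing-web", "tier": "frontend",
                  "policies": ["waf-strict", "tls-required"]},
)
```

The point of the sketch is the asymmetry: the first three fields can be filled in by any hypervisor or DHCP integration, but only the application (or its developer) can fill in the last one.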
Application security and switching policies are peculiar to the application; they're unique, and the only way we, as vendors, can provide that integration without foreknowledge of that uniqueness is to abstract applications to a general use case. That completely destroys the concept of agility, because it doesn't take into consideration the application environment as it is at any given moment in time. It results in static, brittle integration that is essentially no more useful than SNMP would be if it were integrated into an application. We can all sit around and integrate with VMware, and Hyper-V, and Xen. We can learn to speak IF-MAP (or some other common standard) and integrate with DNS and DHCP servers, with network security devices and with layer 2-3 switches. But we are still going to have to manually manage the applications that are ultimately the reason for the existence of such virtualized environments. While getting our infrastructure up to speed so that it is easier and less costly to manage is necessary, let's not forget about the applications we also still have to manage. Dynamic feedback is great, and we have, today, the ability to enable pieces of that dynamic feedback loop. Customers can, today, use tools like iControl and iRules to build a feedback loop between their application delivery network and applications, regardless of whether those applications are in a VM, a Java EE container, or on a Microsoft server. But this feedback is specific to one vendor, and doesn't necessarily include the rest of the infrastructure. Greg is talking about general dynamic feedback at the network layer. He's specifically (and understandably) concerned with network agility, not application agility. That's why he calls it Infrastructure 2.0 and not application-something 2.0.
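The shape of that application-to-delivery-network feedback loop can be sketched generically. This is a stand-in for the vendor-specific version (iControl/iRules on one side, the application on the other); the class and method names are illustrative, not the real iControl API:

```python
class DeliveryController:
    """Toy stand-in for an application delivery controller: tracks which
    pool members should receive traffic."""
    def __init__(self):
        self.enabled: dict[str, bool] = {}

    def set_member_state(self, member: str, healthy: bool) -> None:
        self.enabled[member] = healthy

def on_app_event(adc: DeliveryController, member: str, event: str) -> None:
    """The application's side of the feedback loop: the app reports
    conditions the network could never infer from packets alone."""
    if event in ("draining", "overloaded", "maintenance"):
        adc.set_member_state(member, False)
    elif event == "ready":
        adc.set_member_state(member, True)
```

The interesting part is who originates the event: the application itself, reporting state ("overloaded", "draining") that no amount of network-layer inspection would reveal in time.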
Greg points as an example to the constant levels of change introduced by virtual machines coming on and off line, and the difficulties inherent in trying to manage that change via static, Infrastructure 1.0 products. That's all completely true and needs to be addressed by infrastructure vendors. But we also need to consider how to enable agility at the application layer, so the feedback loop that drives security, routing, switching, acceleration and delivery configurations in real time can adapt to conditions within and around the applications we are trying to manage in the first place. It's all about the application in the end. Endpoints - whether internal or external to the data center - are requesting access and IP addresses for one reason: to get a resource served by an application. That application may be TCP-based, it may be HTTP-based, it may be riding on UDP. Regardless of the network-layer transport mechanisms, it's still an application - a browser, a server-side web application, a SOA service - and its unique needs must be considered in order for the feedback loop to be complete. How else will you know which application just came online or went offline? How do you know what security to apply if you don't know what you might be trying to secure? Somehow the network-centric standards that might evolve from a push to a more agile infrastructure must broaden their focus and consider how an application might integrate with those standards, and what information applications might provide as part of this dynamic feedback loop that will drive a more agile infrastructure. Any emerging standard upon which Infrastructure 2.0 is built must somehow be accessible and developer-friendly, take into consideration application-specific resources as well as network resources, and provide a standard means of sharing the application-specific information that can drive the infrastructure to adapt to each application's unique needs.
If it doesn't, we're going to end up with the same fractured "us versus them" siloed infrastructure we've had for years. That's no longer reasonable. The network and the application are inexorably linked now, thanks to cloud computing and the Internet in general. Managing thousands of instances of an application will be as painful as managing thousands of IP addresses. As Greg points out, that doesn't work very well right now, and it's costing us a lot of money, time and effort. We know where this ends up, because we've seen it happen already. The same diseconomies of scale that affect TCP/IP are going to affect application management. We should be proactive in addressing the management issues that will arise with trying to manage thousands of applications and services, rather than waiting until they, too, can no longer be ignored.

What Do Database Connectivity Standards and the Pirate's Code Have in Common?
A: They're both more what you'd call "guidelines" than actual rules. An almost irrefutable fact of application design today is the need for a database, or at a minimum a data store - i.e., a place to store the data generated and manipulated by the application. A second reality is that despite the existence of database access "standards", no two database solutions support exactly the same syntax and protocols. Connectivity standards like JDBC and ODBC exist, yes, but like SQL they are variable, resulting in just-slightly-different-enough implementations to effectively cause vendor lock-in at the database layer. You simply can't take an application developed to use an Oracle database and point it at a Microsoft or IBM database and expect it to work. Life's like that in the development world. Database connectivity "standards" are a lot like the pirate's Code, described well by Captain Barbossa in Pirates of the Caribbean as "more what you'd call 'guidelines' than actual rules." It shouldn't be a surprise, then, to see the rise of solutions that address this problem, especially in light of an increasing awareness of (in)compatibility at the database layer and its impact on interoperability, particularly as it relates to cloud computing. Forrester analyst Noel Yuhanna recently penned a report on what is being called a Database Compatibility Layer (DCL). The focus of the DCL at the moment is on migration across database platforms because, as Noel points out, migrations are expensive and time-consuming. Database migrations have always been complex, time-consuming, and costly due to proprietary data structures and data types, SQL extensions, and procedural languages. It can take up to several months to migrate a database, depending on database size, complexity, and usage of these proprietary features.
A new technology has recently emerged for solving this problem: the database compatibility layer, a database access layer that supports another database management system’s (DBMS’s) proprietary extensions natively, allowing existing applications to access the new database transparently. -- Simpler Database Migrations Have Arrived (Forrester Research Report) Anecdotally, having been on the implementation end of such a migration I can’t disagree with the assessment. Whether the right answer is to sit down and force some common standards on database connectivity or build a compatibility layer is a debate for another day. Suffice to say that right now the former is unlikely given the penetration and pervasiveness of existing database connectivity, so the latter is probably the most efficient and cost-effective solution. After all, any changes in the core connectivity would require the same level of application modification as a migration; not an inexpensive proposition at all. According to Forrester a Database Compatibility Layer (DCL) is a “database layer that supports another DBMS’s proprietary SQL extensions, data types, and data structures natively. Existing applications can transparently access the newly migrated database with zero or minimal changes.” By extension, this should also mean that an application could easily access one database and a completely different one using the same code base (assuming zero changes, of course). For the sake of discussion let’s assume that a DCL exists that exhibits just that characteristic – complete interoperability at the connectivity layer. Not just for migration, which is of course the desired use, but for day to day use. What would that mean for cloud computing providers – both internal and external? 
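To make the idea concrete, here is a deliberately tiny illustration of the kind of dialect difference a compatibility layer absorbs: SQL Server's SELECT TOP n versus PostgreSQL's LIMIT n. A real DCL handles data types, procedural extensions, and far more than a regex ever could; this rewrite rule is purely illustrative:

```python
import re

def tsql_to_postgres(sql: str) -> str:
    """Toy compatibility shim: rewrite one SQL Server idiom (SELECT TOP n)
    into its PostgreSQL equivalent (LIMIT n). Statements without the
    idiom pass through unchanged."""
    m = re.match(r"(?is)^\s*SELECT\s+TOP\s+(\d+)\s+(.*)$", sql)
    if not m:
        return sql
    n, rest = m.group(1), m.group(2).rstrip(" ;")
    return f"SELECT {rest} LIMIT {n};"
```

Multiply this one rule by thousands of syntax, type, and procedural-language differences and the Forrester estimate of months-long migrations starts to look conservative, which is exactly the work a DCL promises to absorb.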
ENABLING IT as a SERVICE

Based on our assumption that a DCL exists and is implemented by multiple database solution vendors, a veritable cornucopia of options becomes available for moving enterprise architectures toward IT as a Service. Consider that applications have variable needs in terms of performance, redundancy, disaster recovery, and scalability. Some applications require higher performance; others just need a nightly or even weekly backup; and some, well, some are just not so important that you can't use other IT operations backups to restore them if something goes wrong. In some cases the applications might have varying needs based on the business unit deploying them. The same application used by finance, for example, might have different requirements than the same one used by developers. How could that be? Because the developers may only be using that application for integration or testing while finance is using it for realz. It happens. What's more interesting, however, is how a DCL could enable a more flexible, service-oriented buffet of database choices, especially if the organization used different database solutions to support different transactional, availability, and performance goals. If a universal (or near-universal) DCL existed, business stakeholders - together with their IT counterparts - could pick and choose the database "service" they wished to employ based not only on the technical characteristics and operational support but also on the costs and business requirements. It would also allow them to "migrate" over time as applications became more critical, without requiring a massive investment in upgrading or modifying the application to support a different back-end database. Obviously I'm picking just a few examples that may or may not be applicable to every organization.
The bigger thing here, I think, is the flexibility in architecture and design that is afforded by such a model, which balances costs against operational characteristics. Monitoring of database resource availability, too, could be greatly simplified by such a layer, enabling solutions natively supported by the upstream devices responsible for availability at the application layer. Application availability ultimately depends on the database, but the database is often an ignored component because of the complexity currently inherent in supporting such a varied set of connectivity standards. It should also be obvious that this model would work for a PaaS-style provider who is not tied to any given database technology. A PaaS-style vendor today must either invest effort in developing and maintaining a services layer for database connectivity or restrict customers to a single database service. The latter is fine if you're creating a single-stack environment such as Microsoft Azure, but not so fine if you're trying to build a more flexible set of offerings to attract a wider customer base. Again, same note as above: providers would have a much more flexible set of options if they could rely upon what is effectively a single database interface regardless of the specific database implementation. More importantly for providers, perhaps, is the migration capability noted by the Forrester report in the first place, as one of the inhibitors of moving existing applications to a cloud computing provider is support for the same database across both enterprise and cloud computing environments. While services layers are certainly a means to the same end, such layers are not universally supported. There's no "standard" for them, not even a set of best-practice guidelines, and the resulting application code suffers exactly the same issue as the use of proprietary database connectivity: lock-in. You can't pick an application up and move it to the cloud, or to another database, without changing some code.
Granted, a services layer is more efficient in this sense, as it serves as an architectural strategic point of control at which connectivity is aggregated, abstracting the database implementation and its specifics from the application. That means the database can be changed without impacting end-user applications; only the services layer need be modified. But even that approach is problematic for packaged applications that rely upon database connectivity directly and do not support such service layers. A DCL, ostensibly, would support both packaged and custom applications if it were implemented properly in all commercial database offerings.

CONNECTIVITY CARTEL

And therein lies the problem: if it were implemented properly in all commercial database offerings. There is a risk here of a connectivity cartel arising, where database vendors form alliances with other database vendors to support a DCL while "locking out" vendors whom they have decided do not belong. Because the DCL depends on supporting "proprietary SQL extensions, data types, and data structures natively", database vendors may need to collaborate in order to properly support those proprietary features. If collaboration is required, it is possible to deny that collaboration as a means to control who plays in the market. It's also possible for a vendor to slightly change some proprietary feature in order to "break" the others' support. And of course the sheer volume of work necessary for a database vendor to support all other database vendors could overwhelm smaller database vendors, leaving them with no real way to support everyone else. The idea of a DCL is an interesting one, and it has its appeal as a means of forward compatibility for migration, both temporary and permanent. Will it gain in popularity? For the latter, perhaps, but for the former? Less likely.
The inherent difficulties and scope of supporting such a wide variety of databases natively will certainly inhibit any such efforts. Solutions such as a RESTful interface, a la PHP REST SQL, or a JSON-over-HTTP solution like DBSlayer may be more appropriate in the long run if they were to be standardized. And by standardized I mean standardized with industry-wide, agreed-upon specifications. Not more of the "more what you'd call 'guidelines' than actual rules" that we already have.

Database Migrations are Finally Becoming Simpler
Enterprise Information Integration | Data Without Borders
Review: EII Suites | Don't Fear the Data
The Database Tier is Not Elastic
Infrastructure Scalability Pattern: Sharding Sessions
F5 Friday: THE Database Gets Some Love
The Impossibility of CAP and Cloud
Sessions, Sessions Everywhere
Cloud-Tiered Architectural Models are Bad Except When They Aren't

Interoperability between clouds requires more than just VM portability
The issue of application state and connection management is one often discussed in the context of cloud computing and virtualized architectures. That's because the stress placed on existing static infrastructure by the potentially rapid rate of change associated with dynamic application provisioning is enormous and, as is often pointed out, existing "Infrastructure 1.0" systems are generally incapable of reacting in a timely fashion to such changes occurring in real time. The most basic of concerns continues to revolve around IP address management. This is a favorite topic of Greg Ness at Infrastructure 2.0 and has been subsequently addressed in a variety of articles and blogs since the concepts of cloud computing and virtualization gained momentum. The Burton Group has addressed this issue with regard to interoperability in a recent post, positing that perhaps changes are needed (agreed) to support emerging data center models. What is interesting is that the blog supports the notion of modifying existing core infrastructure standards (IP) to support the dynamic nature of these new models, and also posits that interoperability is essentially enabled simply by virtual machine portability. From The Burton Group's "What does the Cloud Need? Standards for Infrastructure as a Service": First question is: How do we migrate between clouds? If we're talking System Infrastructure as a Service, then what happens when I try to migrate a virtual machine (VM) between my internal cloud running ESX (say I'm running VDC-OS) and a cloud provider who is running XenServer (running Citrix C3)? Are my cloud vendor choices limited to those vendors that match my internal cloud infrastructure? Well, while it's probably a good idea, there are published standards out there that might help. Open Virtualization Format (OVF) is a meta-data format used to describe VMs in standard terms.
While the format of the VM is different, the meta-data in OVF can be used to facilitate VM conversion from one format to the other, thereby enabling interoperability. ... Another biggie is application state and connection management. When I move a workload from one location to another, the application has made some assumptions about where external resources are and how to get to them. The IP address the application or OS uses to resolve DNS names probably isn't valid now that the VM has moved to a completely different location. That's where Locator ID Separation Protocol (LISP -- another overloaded acronym) steps in. The idea with LISP is to add fields to the IP header so that packets can be redirected to the correct location. The "ID" and "locator" are separated so that the packet with the "ID" can be sent to the "locator" for address resolution. The "locator" can change the final address dynamically, allowing the source application or OS to change locations as long as they can reach the "locator". [emphasis added] If LISP sounds eerily familiar to some of you, it should. It's the same basic premise behind UDDI and the process of dynamically discovering the "location" of service end-points in a service-based architecture. Not exactly the same, but the core concepts are the same. The most pressing issue with proposing LISP as a solution is that it focuses only on the problems associated with moving workloads from one location to another, with the assumption that the new location is, essentially, a physically disparate data center rather than simply a new location within the same data center; that possibility LISP does not even consider. That it also ignores other application networking infrastructure that requires the same information - that is, the new location of the application or resource - is also disconcerting, but it's not a roadblock; it's merely a speed-bump on the road to implementation.
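The locator/ID separation idea described above is easy to sketch in code. The following toy resolver is purely illustrative (the class and method names are invented, and real LISP operates on IP headers, not Python dicts): a stable endpoint ID maps to a changeable locator, so a migrating workload only has to update its mapping.

```python
# Toy illustration of locator/ID separation: endpoint IDs (EIDs) stay
# stable while routing locators (RLOCs) change as a workload migrates.
# Names and structure are invented for illustration only.

class MapResolver:
    """Keeps the EID -> RLOC mapping that senders query."""

    def __init__(self):
        self._mappings = {}

    def register(self, eid, rloc):
        # Called by the new site when a workload arrives there.
        self._mappings[eid] = rloc

    def resolve(self, eid):
        # Senders look up the current locator for a stable endpoint ID.
        return self._mappings[eid]


resolver = MapResolver()
resolver.register("app-01", "203.0.113.10")    # workload in data center A
assert resolver.resolve("app-01") == "203.0.113.10"

# The workload migrates; only the mapping changes, the EID stays the same.
resolver.register("app-01", "198.51.100.25")   # now in data center B
assert resolver.resolve("app-01") == "198.51.100.25"
```

The same lookup-indirection pattern is why LISP feels familiar to anyone who has used UDDI: a stable name resolved, at request time, to a moving endpoint.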
We'll come back to that later; first let's examine the emphasized statement that seems to imply that simply migrating a virtual image from one provider to another equates to interoperability between clouds - specifically IaaS clouds. I'm sure the author didn't mean to imply that it's that simple, that all you need is to be able to migrate virtual images from one system to another. I'm sure there's more to it, or at least I'm hopeful that this concept was expressed so simply in the interests of brevity rather than completeness, because there's a lot more to porting any application from one environment to another than just the application itself. Applications, and therefore virtual images containing applications, are not islands. They are not capable of doing anything without a supporting infrastructure - application and network - and some of that infrastructure is necessarily configured in such a way as to be peculiar to the application - and vice-versa. We call it an "ecosystem" for a reason: because there's a symbiotic relationship between applications and their supporting infrastructure that, when separated, degrades or even destroys the usability of that application. One cannot simply move a virtual machine from one location to another, regardless of the interoperability of virtualization infrastructure, and expect things to magically work unless all of the required supporting infrastructure has also been migrated just as seamlessly. And this infrastructure isn't just hardware and network infrastructure; authentication and security systems, too, are an integral part of an application deployment. Even if all the necessary components were themselves virtualized (and I am not suggesting this should be the case at all), simply porting the virtual instances from one location to another is not enough to assure interoperability, as the components must be able to collaborate, which requires connectivity information.
Which brings us back to the problems associated with LISP and its focus on external discovery and location. There's just a lot more to interoperability than pushing around virtual images regardless of what those images contain: application, data, identity, security, or networking. Portability between virtual images is a good start, but it certainly isn't going to provide the interoperability necessary to ensure the seamless transition from one IaaS cloud environment to another.

RELATED ARTICLES & BLOGS
Who owns application delivery meta-data in the cloud?
More on the meta-data menagerie
The Feedback Loop Must Include Applications
How VM sprawl will drive the urgency of the network evolution
The Diseconomy of Scale Virus
Flexibility is Key to Dynamic Infrastructure
The Three Horsemen of the Coming Network Revolution
As a Service: The Many Faces of the Cloud

The Great Client-Server Architecture Myth
The webification of applications over the years has led to the belief that client-server as an architecture is dying. But very few beliefs about architecture have been further from the truth. The belief that client-server was dying - or at least falling out of favor - was primarily due to the fact that early browser technology was used only as a presentation mechanism. The browser did not execute application logic, did not participate in application logic, and acted more or less like a television: smart enough to know how to display data but not smart enough to do anything about it. But the sudden explosion of Web 2.0-style applications and REST APIs has changed all that, and client-server is very much in style again, albeit with a twist. Developers no longer need to write the core of a so-called "fat client" from the ground up. The browser or a framework such as Adobe AIR or Microsoft's Silverlight provides the client-side platform on which applications are developed and deployed. These client-side platforms have become very similar in nature to their server-side cousins, application servers, taking care of the tedious tasks associated with building and making connections to servers, parsing data, and even storage of user-specific configuration data. Even traditional thin-client applications are now packing on the pounds, using AJAX and various JavaScript libraries to provide both connectivity and presentation components to developers in the same fashion that AIR and Silverlight provide a framework for developers to build richer, highly interactive applications. These so-called RIAs (Rich Internet Applications) are, in reality, thin clients that are rapidly gaining weight. One of the core reasons client-server architecture is being reinvigorated is the acceptance of standards.
As developers have moved not only toward HTTP as the de facto transport protocol but also toward HTML, DHTML, CSS, and JavaScript as primary client-side technologies, so have device makers accepted these technologies as the "one true way" to deliver applications to multiple clients from a single server-side architecture. It is no longer required that a client be developed for every possible operating system and device combination. A single server-side application can serve any and all clients capable of communicating via HTTP, rendering HTML, DHTML, and CSS, and executing client-side scripts. Standards, they are good things after all. Client-server architectures are not going away. They have simply morphed from an environment-specific model to an environment-agnostic model that is much more efficient in terms of development costs and ability to support a wider range of users, but they are still based on the same architectural principles. Client-server as a model works and will continue to work as long as the infrastructure over which such applications are delivered continues to mature and recognizes that, while one application may be capable of being deployed and utilized from any device, the environments over which it is delivered may impact the performance and security of that application. The combination of fatter applications and increasing client-side application logic execution means more opportunities for exploitation as well as the potential for degradation of performance. Because client-server applications are now agnostic, capable of being delivered to and used on a variety of devices and clients, they are not specifically optimized for any given environment, and developers do not necessarily have access to the network and transport layer components they would need in order to optimize them.
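The "one server-side application, any HTTP client" model described above can be sketched with simple content negotiation. This is an illustrative fragment, not any particular framework's API: the same handler returns raw JSON to a script-driven RIA and rendered HTML to a plain browser, keyed off the HTTP Accept header.

```python
import json

# One server-side handler, many client types: a hypothetical sketch of
# serving the same resource to an HTML-rendering browser and to a
# JSON-consuming RIA, based on what the client says it accepts.

def render_resource(data, accept_header):
    if "application/json" in accept_header:
        # RIA / AJAX clients get structured data to render themselves.
        return json.dumps(data)
    # Browsers get server-rendered HTML.
    rows = "".join(f"<li>{k}: {v}</li>" for k, v in data.items())
    return f"<ul>{rows}</ul>"


account = {"user": "alice", "plan": "gold"}
assert render_resource(account, "application/json") == '{"user": "alice", "plan": "gold"}'
assert render_resource(account, "text/html").startswith("<ul>")
```

The point of the sketch is the single code path: the application logic is written once, and only the final presentation step varies per client.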
These applications are written specifically not to care, and yet the device, the location of the user, and the network over which the application is delivered are all relevant to application performance and security. The need for context-aware application delivery is more important now than ever, as the same application may be served to the same user but rendered in a variety of different client environments and in a variety of locations. All these variables must be accounted for in order to deliver these fat-client RIAs in the most secure, performant fashion regardless of where the user may be, over what network the application is being delivered, and what device the user may be using at the time.

Making Infrastructure 2.0 reality may require new standards
Managing a heterogeneous infrastructure is difficult enough, but managing a dynamic, ever-changing heterogeneous infrastructure that must be stable enough to deliver dynamic applications makes the former look like a walk in the park. Part of the problem is certainly the inability to manage heterogeneous network infrastructure devices from a single management system. SNMP (Simple Network Management Protocol), the only truly interoperable network management standard used by infrastructure vendors for over a decade, is not robust enough to deal with the management nightmare rapidly emerging for cloud computing vendors. It's called "Simple" for a reason, after all. And even if it weren't, SNMP, while interoperable with network management systems like HP OpenView and IBM's Tivoli, is not standardized at the configuration level. Each vendor generally provides its own customized MIB (Management Information Base). Customized, which roughly translates to "proprietary": if not in theory, then in practice. MIBs are not interchangeable, they aren't interoperable, and they aren't very robust. Generally they're used to share information and are not capable of being used to modify device configuration. In other words, SNMP and customized MIBs are just not enough to support efficient management of a very large heterogeneous data center. As Greg Ness pointed out in his latest blog post on Infrastructure 2.0, the diseconomies of scale in the IP address management space apply more generally to the network management space. There's just no good way today to efficiently manage the kind of large, heterogeneous environment required of cloud computing vendors. SNMP wasn't designed for this kind of management any more than TCP/IP was designed to handle the scaling needs of today's applications.
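To see why SNMP works for monitoring but is a poor fit for configuration management, it helps to remember what an agent actually exposes: a flat, ordered table of OID/value pairs, walked one entry at a time with GET/GETNEXT. The sketch below is a toy stand-in for an agent (the two OIDs shown are standard MIB-II scalars, sysName.0 and ifNumber.0, but the class itself is invented for illustration, and simple string ordering stands in for real OID ordering):

```python
# A toy SNMP-style agent: a flat table of OID -> value pairs. There is
# no object model here, just scalars keyed by dotted numbers, which is
# why cross-vendor *configuration* via vendor-specific MIBs never worked.

class ToyAgent:
    def __init__(self, mib):
        # Ordered OID table, as a real agent would expose for walking.
        self._mib = dict(sorted(mib.items()))

    def get(self, oid):
        return self._mib[oid]

    def get_next(self, oid):
        # GETNEXT semantics: the next OID in order, used to "walk" a tree.
        # (Raises IndexError past the end, i.e. end-of-walk in a real agent.)
        oids = list(self._mib)
        return oids[oids.index(oid) + 1]


agent = ToyAgent({
    "1.3.6.1.2.1.1.5.0": "edge-lb-01",   # sysName.0
    "1.3.6.1.2.1.2.1.0": 4,              # ifNumber.0
})
assert agent.get("1.3.6.1.2.1.1.5.0") == "edge-lb-01"
assert agent.get_next("1.3.6.1.2.1.1.5.0") == "1.3.6.1.2.1.2.1.0"
```

Everything a manager learns arrives as isolated scalars like these; there is no shared, writable model of a "pool" or a "VLAN" to configure against.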
While some infrastructure vendors, F5 among them, have seen fit to provide a standards-based management and configuration framework, none of us is really compatible with the others in terms of methodology. The way in which we, for example, represent a pool, a VIP (Virtual IP address), or a VLAN (Virtual LAN) is not the same way Cisco or Citrix or Juniper represents the same network objects. Indeed, our terminology may even differ: we use "pool," while other ADC vendors use "farm" or "cluster" to represent the same concept. Add virtualization and yet another set of terms enters the mix, often conflicting with those used by network infrastructure vendors. "Virtual server" means something completely different when used by an application delivery vendor than it does when used by a virtualization vendor like VMware or Microsoft. And the same tasks must be accomplished regardless of which piece of the infrastructure is being configured. VLANs, IP addresses, gateways, routes, pools, nodes, and other common infrastructure objects must be managed and configured across a variety of implementations. Scaling the management of these disparate devices and solutions is quickly becoming a nightmare for those trying to build out large-scale data centers, whether they are large enterprises, cloud computing vendors, or service providers. In a response to Cloud Computing and Infrastructure 2.0, "johnar" points out: Companies are forced to either roll the dice on single-vendor solutions for simplicity, or fill the voids with their own home-brew solutions and therefore assume responsibility for a lot of very complex code that is tightly coupled with ever-changing vendor APIs and technology. The same technology that vendors tout as their differentiator is what is causing the integrators grey hair.
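In practice, integrators bridge this gap with exactly the kind of home-brew shim "johnar" describes: a translation table mapping each vendor's term for a concept onto one canonical model. A minimal sketch ("pool," "farm," and "cluster" are the terms cited above; the canonical name is an invented choice, not any product's actual object model):

```python
# Hypothetical normalization shim: every vendor-specific name for the
# same load-balancing object maps to one canonical term, so management
# code can be written once against the canonical model.

CANONICAL = {
    "pool": "pool",      # F5's term
    "farm": "pool",      # another ADC vendor's term for the same concept
    "cluster": "pool",   # yet another vendor's term, same concept again
}

def normalize(obj_type):
    """Translate a vendor-specific object type to the canonical model."""
    return CANONICAL[obj_type.lower()]


assert normalize("farm") == "pool"
assert normalize("Cluster") == "pool"
```

The table is trivial; the pain is that each integrator maintains their own version of it, coupled to every vendor's shifting API, which is precisely the argument for standardizing the model itself.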
Because we all "do it differently" with our modern-day equivalents of customized MIBs, it is difficult to integrate all the disparate nodes that make up a full application delivery network and infrastructure into a single, cohesive, efficient management mechanism. We're standards-based, but we aren't based on a single management standard. And as "johnar" points out, it seems unlikely that we'll "unite for data center peace" any time soon: "Unlike ratifying a new Ethernet standard, there's little motivation for ADC vendors to play nice with each other." I think there is motivation and reason for us to play nice with each other in this regard. Disparate, competitive vendors came together in the past to ratify Ethernet standards, which led to interoperability and simpler management as we built out the infrastructure that makes the web work today. If we can all agree that application delivery controllers (ADCs) are an integral part of Infrastructure 2.0 (and I'm betting we all can), then in order to further adoption of ADCs in general and make it possible for customers to choose based on features and functionality, we must make an effort to come together and consider standardizing a management model across the industry. And if we're really going to do it right, we need to encourage other infrastructure vendors to agree on a common base network management model to further simplify management of large heterogeneous network infrastructures. A VLAN is a VLAN regardless of whether it's implemented in a switch, an ADC, or on a server. If a lack of standards might hold back adoption or prevent vendors from competing for business, then that's a damn good motivating factor right there for us to unite for data center peace.
If Microsoft, IBM, BEA, and Oracle were able to unite and agree upon web services interoperability (which they did; the result was WS-I and its interoperability profiles), then it is not crazy to think that F5 and its competitors can come together and agree upon a single, standards-based management interface that will make Infrastructure 2.0 a reality. Major shifts in architectural paradigms often require new standards. That's where we got all the WS-* specifications and that's where we got all the 802.x standards: major architectural paradigm shifts. Cloud computing and the pervasive webification of, well, everything is driving yet another major architectural paradigm shift. And that may very well mean we need new standards to move forward and make the shift as painless as possible for everyone.

The Inevitable Eventual Consistency of Cloud Computing
An IDC survey highlights the reasons why private clouds will mature before public, leading to the eventual consistency of public and private cloud computing frameworks. Network Computing recently reported on a very interesting research survey from analyst firm IDC. This one was interesting because it delved into concerns regarding public cloud computing in a way that most research surveys haven't, including asking respondents to weight their concerns as they relate to application delivery from a public cloud computing environment. The results? Security, as always, tops the list. But close behind are application delivery related concerns such as availability and performance. Network Computing – IDC Survey: Risk In The Cloud: While growing numbers of businesses understand the advantages of embracing cloud computing, they are more concerned about the risks involved, as a survey released at a cloud conference in Silicon Valley shows. Respondents showed greater concern about the risks associated with cloud computing surrounding security, availability and performance than support for the pluses of flexibility, scalability and lower cost, according to a survey conducted by the research firm IDC and presented at the Cloud Leadership Forum IDC hosted earlier this week in Santa Clara, Calif. “However, respondents gave more weight to their worries about cloud computing: 87 percent cited security concerns, 83.5 percent availability, 83 percent performance and 80 percent cited a lack of interoperability standards.” The respondents rated the risks associated with security, availability, and performance higher than the always-associated benefits of public cloud computing of lower costs, scalability, and flexibility.
All of which ultimately results in a reluctance to adopt public cloud computing and is likely driving these organizations toward private cloud computing, because public cloud can't or won't at this point address these challenges, but private cloud computing can and is – by architecting a collection of infrastructure services that can be leveraged by (internal) customers on an application-by-application (and sometimes request-by-request) basis.

PRIVATE CLOUD will MATURE FIRST

What will ultimately bubble up and become more obvious to public cloud providers is customer demand. Clouderati like James Urquhart and Simon Wardley often refer to this process as commoditization or standardization of services. These services – at the infrastructure layer of the cloud stack – will necessarily be driven by customer demand; by the market. Because customers right now are not fully exercising public cloud computing as they would their own private implementation – replete with infrastructure services, business-critical applications, and adherence to business-focused service level agreements – public cloud providers are at a bit of a disadvantage. The market isn't telling them what they want and need, so public cloud providers are left to fend for themselves. Or they may be pandering necessarily to the needs and demands of the few customers that have fully adopted their platform as their data center du jour. Internal to the organization there is a great deal more going on than some would like to admit. Organizations have long since abandoned even the pretense of caring about the definition of "cloud" and whether or not there exists such a thing as "private" cloud, and have forged their way forward past "virtualization plus" (a derogatory and dismissive term often used to describe such efforts by some public cloud providers) and into the latter stages of the cloud computing maturity model.
Internal IT organizations can and will solve the "infrastructure as a service" conundrum because they necessarily have a smaller market to address. They have customers, but it is a much smaller, better-defined set of customers they must support, and thus they are able to iterate over the development processes and integration efforts necessary to get there much more quickly and with less disruption. Their goal is to provide IT as a service, offering a repertoire of standardized application and infrastructure services that can easily be extended to support new infrastructure services. They are, in effect, building their own cloud frameworks (stacks) upon which they can innovate and extend as necessary. And as they do so they are standardizing, whether by conscious effort or as a side effect of defining their frameworks. But they are doing it, regardless of those who might dismiss their efforts as "not real cloud." When you get down to it, enterprise IT isn't driven by adherence to some definition put forth by pundits. It's driven by the need to provide business value to its customers at the best possible "profit margin" it can. And it's doing so faster than public cloud providers because it can.

WHEN CLOUDS COLLIDE - EVENTUAL CONSISTENCY

What that means is that in a relatively short amount of time, as measured by technological evolution at least, the "private clouds" of customers will have matured to the point that they will be ready to adopt a private/public (hybrid) model and really take advantage of the public, cheap, compute-on-demand that's so prevalent in today's cloud computing market; not just using public clouds as inexpensive development or test playgrounds, but integrating them as part of a global application delivery strategy. The problem then is aligning the models and APIs and frameworks that have grown up in each of the two types of clouds.
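The "eventual consistency" in the heading above is borrowed from distributed data stores, where replicas may disagree temporarily but converge once updates propagate. A toy last-write-wins sketch of that data-store sense (all names here are invented, purely for illustration):

```python
# Toy last-write-wins replicas: each side accepts writes independently,
# then an anti-entropy sync makes them converge on the newest version.

class Replica:
    def __init__(self):
        self.store = {}          # key -> (version, value)

    def write(self, key, value, version):
        self.store[key] = (version, value)

    def sync_from(self, other):
        # Keep whichever version is newer, per key.
        for key, (ver, val) in other.store.items():
            if key not in self.store or self.store[key][0] < ver:
                self.store[key] = (ver, val)


a, b = Replica(), Replica()
a.write("framework", "private-api", version=1)   # divergent updates...
b.write("framework", "common-api", version=2)

a.sync_from(b)                                   # ...converge after sync
b.sync_from(a)
assert a.store == b.store == {"framework": (2, "common-api")}
```

The analogy the text draws is that cloud frameworks will converge the same way: divergence first, then reconciliation toward whichever model "wins."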
Like the concept of "eventual consistency" with regard to data, databases, and replication across clouds (intercloud), the same "eventual consistency" theory will apply to cloud frameworks. Eventually there will be a standardized (consistent) set of infrastructure services and network services and frameworks through which such services are leveraged. Oh, at first there will be chaos and screaming and gnashing of teeth as the models bump heads, but as more organizations and providers work together to find the common ground between them, they'll find that, just like the peanut butter and chocolate in a Reese's Peanut Butter Cup, the two disparate architectures can "taste better together." The question that remains is which standardization will be the one with which others must become consistent. Without consistency, interoperability and portability will remain little more than a pipe dream. Will it be standardization driven by the customers, a la the Enterprise Buyer's Cloud Council? Or will it be driven by providers in an "if you don't like what we offer, go elsewhere" market? Or will it be driven by a standards committee comprised primarily of vendors with a few "interested third parties"?

Related Posts

from tag interoperability:
Despite Good Intentions PaaS Interoperability Still Only Skin Deep
Apple iPad Pushing Us Closer to Internet Armageddon
Cloud, Standards, and Pants
Approaching cloud standards with end-user focus only is full of fail
Interoperability between clouds requires more than just VM portability
Who owns application delivery meta-data in the cloud?
Cloud interoperability must dig deeper than the virtualization layer

from tag standards:
How Coding Standards Can Impair Application Performance
The Dynamic Infrastructure Mashup
The Great Client-Server Architecture Myth
Infrastructure 2.0: Squishy Name for a Squishy Concept
Can You Teach an Old Developer New Tricks?
(more..)
del.icio.us Tags: MacVittie,F5,cloud computing,standards,interoperability,integration,hybrid cloud,private cloud,public cloud,infrastructure