standards
BIG-IP Configuration Object Naming Conventions
George posted an excellent blog on hostname nomenclature a while back, but something we haven't discussed much in this space is a naming convention for BIG-IP configuration objects. Last week, DevCentral community user Deon posted a question on exactly that. Sometimes a standard exists just for the sake of having one, but in most cases, and particularly in this case, having standards is a very good thing. Señor Forum, hoolio, and MVP hamish weighed in with some good advice:

[app name]_[protocol]_[object type]

Examples:
www.example.com_http_vs
www.example.com_http_pool
www.example.com_http_monitor

As hoolio pointed out in the forum, each object now has a description field, so the metadata capability is there to establish identifying information (knowledge base IDs, troubleshooting info, application owners), but having an object name that is quickly searchable and identifiable to operational staff is key. Hamish offered a slightly different format for virtuals:

[fqdn]_[port]

For network virtuals, I've always made the network part of the name, as hamish also recommends in his guidance: network VS's tend to be named net-net.num.dot.ed-masklen, e.g. net-0.0.0.0-0 is the default address. Where they conflict (e.g. two defaults depending on source VLAN), it gets an extra descriptor between net- and the IP address, e.g. net-wireless-0.0.0.0-0 (default network VS for a wireless VLAN). I don't currently have any network VS's for specific ports, but they'd be something like net-0.0.0.0-0-port.

Your Turn
What standards do you use? Share in the comments section below, or post to the forum thread.
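To make the convention concrete, here is a minimal sketch in Python. The helper, its set of object types, and the validation rule are purely illustrative assumptions (nothing BIG-IP itself requires); it simply shows how a team might generate convention-compliant names instead of typing them by hand:

```python
# Illustrative only: a tiny helper that builds object names following the
# [app name]_[protocol]_[object type] convention discussed above. The object
# types and the validation rule are assumptions, not anything mandated by BIG-IP.
OBJECT_TYPES = {"vs", "pool", "monitor", "profile", "irule"}

def object_name(app: str, protocol: str, obj_type: str) -> str:
    """Return a convention-compliant object name, e.g. www.example.com_http_pool."""
    if obj_type not in OBJECT_TYPES:
        raise ValueError(f"unknown object type: {obj_type}")
    return f"{app}_{protocol}_{obj_type}"

if __name__ == "__main__":
    for t in ("vs", "pool", "monitor"):
        print(object_name("www.example.com", "http", t))
    # www.example.com_http_vs
    # www.example.com_http_pool
    # www.example.com_http_monitor
```

Centralizing name construction like this also gives you one obvious place to attach the description-field metadata hoolio mentioned.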
SOAP vs REST: The war between simplicity and standards
SOA is, at its core, a design and development methodology. It embraces reuse through decomposition of business processes and functions into core services. It enables agility by wrapping services in an accessible interface that is decoupled from its implementation. It provides a standard mechanism for application integration that can be used internally or externally. It is, as they say, what it is. SOA is not necessarily SOAP, though until the recent rise of social networking and Web 2.0 there was little real competition against the rising standard. But of late the adoption of REST and its use on the web-facing side of applications has begun to push around the incumbent. We still aren't sure who swung first. We may never know, and at this point it's irrelevant: there's a war out there, as SOAP and REST duke it out for dominance of SOA. At the core of the argument is this: SOAP is weighted down by the very standards designed to promote interoperability (WS-I), security (WS-Security), and reliability (WS-Reliability). REST is a lightweight compared to its competitor, with no standards at all. Simplicity is its siren call, and it's being heard even in the far corners of corporate data centers. A February 2007 Evans Data survey found a 37% increase in those implementing or considering REST, with 25% considering REST-based web services as a simpler alternative to SOAP-based services. And that was last year, before social networking really exploded and the integration of Web 2.0 sites via REST-based services took over the face of the Internet. It was postulated then that WOA (Web Oriented Architecture) was the face of SOA (Service Oriented Architecture): that REST on the outside was the way to go, but SOAP on the inside was nearly sacrosanct. Apparently that thought, while not wrong in theory, didn't take into account the fervor with which developers hold dear their beliefs regarding everything from language to operating system to architecture. The downturn in the economy hasn't helped, either, as REST certainly is easier and faster to implement, even with the plethora of development tools and environments available to carry all the complex WS-* standards that go along with SOAP like some sort of technology bellhop. Developers have turned to the standard-less option because it seems faster, cheaper, and easier. And honestly, we really don't like being told how to do things. I don't, and didn't, back in the day when the holy war was between structured and object-oriented programming. While REST has its advantages, certainly, standard-less development can, in the long run, be much more expensive to maintain and manage than standards-focused competing architectures. The argument that standards-based protocols and architectures are more difficult because there's more investment required to learn the basics as well as the associated standards is essentially a red herring. Without standards there is often just as much investment in learning data formats (are you using XML? JSON? CSV? Proprietary formats? WWW-URL encoded?) as there is in learning standards. Without standards there is necessarily more documentation required, which cuts into development time. Then there's testing: functional and vulnerability testing that necessarily has to be customized because testing tools can't predict what format or protocol you might be using.
And let's not forget the horror that is integration, and how proprietary application protocols made it a booming software industry replete with toolkits and libraries and third-party packages just to get two applications to play nice together. Conversely, standards that are confusing and complex lengthen the implementation cycle, but they make integration and testing, as well as long-term maintenance, much less painful and less costly. Arguing simplicity versus standards is ridiculous in the war between REST and SOAP, because simplicity without standards is just as detrimental to the costs and manageability of an application as are standards without simplicity.

Related articles by Zemanta:
RESTful .NET
Has social computing changed attitudes toward reuse?
The death of SOA has been greatly exaggerated
Web 2.0: Integration, APIs, and scalability
Performance Impact: Granularity of Services

Welcome to The Phygital World
Standards for 'Things'
That thing, next to the other thing, talking to this thing needs something to make it interoperate properly. That's the goal of the Industrial Internet Consortium (IIC), which hopes to establish common ways that machines share information and move data. IBM, Cisco, GE and AT&T have all teamed up to form the Industrial Internet Consortium (IIC), an open membership group that's been established with the task of breaking down technology silo barriers to drive better big data access and improved integration of the physical and digital worlds. The Phygital World. The IIC will work to develop a 'common blueprint' that machines and devices from all manufacturers can use to share and move data. These standards won't just be limited to internet protocols, but will also include metrics like storage capacity in IT systems, various power levels, and data traffic control. Sensors are getting standards. Soon. As more of these chips are installed on street lights, thermostats, engines, soda machines and even into our own bodies, the IIC will focus on testing IoT applications, producing best practices and standards, influencing global IoT standards for Internet and industrial systems, and creating a forum for sharing ideas. Exploring new worlds, so to speak. I think it's nuts that we're in an age where we are trying to figure out how the blood sensor talks to the fridge sensor, which notices there is no more applesauce and auto-orders from the local grocery to have it delivered that afternoon. Almost there. Initially, the new group will focus on 'industrial Internet' applications in manufacturing, oil and gas exploration, healthcare and transportation. In those industries, vendors often don't make it easy for hardware and software solutions to work together. The IIC is saying, 'we all have to play with each other.' That will become critically important when your embedded sleep monitor/dream recorder notices your blood sugar levels rising, indicating that you're about to wake up, which kicks off a series of workflows that start the coffee machine, heat and distribute the hot water, and display the day's news and weather on the refrigerator's LCD screen. Any minute now. It will probably be a little while (years) before these standards can be created and approved, but when they are, they'll help developers of hardware and software create solutions that are compatible with the Internet of Things. The end result will be the full integration of sensors, networks, computers, cloud systems, large enterprises, vehicles, businesses and hundreds of other entities that are 'connected.' With London cars getting stolen using electronic gadgets and connected devices as common as electricity by 2025, securing the Internet of Things should be one of the top priorities facing the consortium.

ps

Related:
Consortium Wants Standards for 'Internet of Things'
AT&T, Cisco, GE, IBM and Intel form Industrial Internet Consortium for IoT standards
IBM, Cisco, GE & AT&T form Industrial Internet Consortium
The "Industrial" Internet of Things and the Industrial Internet Consortium
The Internet of Things Will Thrive by 2025
Securing the Internet of Things: is the web already breaking up?
Connected Devices as Common as Electricity by 2025
The ABCs of the Internet of Things
Some Predictions About the Internet of Things and Wearable Tech From Pew Research
Car-Hacking Goes Viral In London

Technorati Tags: iot, things, internet of things, standards, security, sensors, nouns, silva, f5

FedRAMP Federates Further
FedRAMP (Federal Risk and Authorization Management Program), the government's cloud security assessment plan, announced late last week that Amazon Web Services (AWS) is the first agency-approved cloud service provider. The accreditation covers all AWS data centers in the United States. Amazon becomes the third vendor to meet the security requirements detailed by FedRAMP. FedRAMP is the result of the US Government's work to address security concerns related to the growing practice of cloud computing, and it establishes a standardized approach to security assessment, authorization and continuous monitoring for cloud services and products. By creating industry-wide security standards and focusing more on risk management, as opposed to strict compliance with reporting metrics, officials expect to improve data security as well as simplify the processes agencies use to purchase cloud services. FedRAMP is looking toward full operational capability later this year. As both the cloud and the government's use of cloud services grew, officials found that there were many inconsistencies in requirements and approaches as each agency began to adopt the cloud. Launched in 2012, FedRAMP's goal is to bring consistency to the process but also give cloud vendors a standard way of providing services to the government. And with the government's cloud-first policy, which requires agencies to consider moving applications to the cloud as a first option for new IT projects, this should streamline the process of deploying to the cloud. This is an 'approve once, use many' approach, reducing the cost and time required to conduct redundant, individual agency security assessments. AWS's certification is for 3 years. FedRAMP provides an overall checklist for handling risks associated with web services that would have a limited or serious impact on government operations if disrupted. Cloud providers must implement these security controls to be authorized to provide cloud services to federal agencies. The government will forbid federal agencies from using a cloud service provider unless the vendor can prove that a FedRAMP-accredited third-party organization has verified and validated the security controls. Once approved, the cloud vendor does not need to be 're-evaluated' by every government entity that might be interested in its solution. There may be instances where additional controls are added by agencies to address specific needs. The BIG-IP Virtual Edition for AWS includes options for traffic management, global server load balancing, application firewall, web application acceleration, and other advanced application delivery functions.

ps

Related:
Cloud Security With FedRAMP
FedRAMP Ramps Up
FedRAMP achieves another cloud security milestone
Amazon wins key cloud security clearance from government
CLOUD SECURITY ACCREDITATION PROGRAM TAKES FLIGHT
FedRAMP comes fraught with challenges
F5 iApp template for NIST Special Publication 800-53
Now Playing on Amazon AWS - BIG-IP
Connecting Clouds as Easy as 1-2-3
F5 Gives Enterprises Superior Application Control with BIG-IP Solutions for Amazon Web Services

Technorati Tags: f5, fedramp, government, cloud, service providers, risk, standards, silva, compliance, cloud security, aws, amazon

Infrastructure 2.0: The Feedback Loop Must Include Applications
Greg Ness calls it "connectivity intelligence," but it seems what we're really talking about is the ability of network infrastructure to both be agile itself and enable IT agility at the same time. Brittle, inflexible infrastructures - whether they are implemented in hardware or software or both - are not agile enough to deal with an evolving, dynamic application architecture. Greg says in a previous post: The static infrastructure was not architected to keep up with these new levels of change and complexity without a new layer of connectivity intelligence, delivering dynamic information between endpoint instances and everything from Ethernet switches and firewalls to application front ends. Empowered with dynamic feedback, the existing deployed infrastructure can evolve into an even more responsive, resilient and flexible network and deliver new economies of scale. The issue I see is this: it's all too network-focused. Knowing that a virtual machine instance came online and needs an IP address, security policies, and to be added to a VLAN on the switch is very network-centric. Necessary, but network-centric. The VM came online for a reason, and that reason is most likely an application-specific one. Greg has referred several times to the Trusted Computing Group's IF-MAP specification, which provides the basics through which connectivity intelligence could certainly be implemented if vendors could all agree to implement it. The problem with IF-MAP and, indeed, most specifications that come out of a group of network-focused organizers is that they are, well, network-focused. In fact, reading through IF-MAP I found many similarities between its operations (functions) and those found in the more application-focused security standard, SAML. While IF-MAP allows for custom data to be included, which could be used by application vendors to IF-MAP-enable application servers through which more application-specific details could be included in the dynamic infrastructure feedback loop, that's not as agile as it could be because it doesn't allow for a simple, standard mechanism through which application developers can integrate application-specific details into that feedback loop. And yet that's exactly what we need to complete this dynamic feedback loop and create a truly flexible, agile infrastructure, because the applications are endpoints; they, too, need to be managed and secured and integrated into the Infrastructure 2.0 world. While I agree with Greg that IP address management in general and managing a constantly changing heterogeneous infrastructure is a nightmare that standards like IF-MAP might certainly help IT wake up from, there's another level of managing the dynamic environments associated with cloud computing and virtualization that generally isn't addressed by very network-specific standards like IF-MAP: the application layer. In order for a specification like IF-MAP to address the application layer, application developers would need to integrate the code necessary to act as part of an IF-MAP-enabled infrastructure (that is, become an IF-MAP client). That's because knowing that a virtual machine just came online is one thing; understanding which application it is, what application policies need to be applied, and what application-specific processing might be necessary in the rest of the infrastructure is another. It's all contextual, and based on variables we can't know ahead of time.
This can't be determined before the application is actually written, so it can't be something written by vendors and shipped as a "value add". Application security and switching policies are peculiar to the application; they're unique, and the only way we, as vendors, can provide that integration without foreknowledge of that uniqueness is to abstract applications to a general use case. That completely destroys the concept of agility because it doesn't take into consideration the application environment as it is at any given moment in time. It results in static, brittle integration that is essentially no more useful than SNMP would be if it were integrated into an application. We can all sit around and integrate with VMWare, and Hyper-V, and Xen. We can learn to speak IF-MAP (or some other common standard) and integrate with DNS and DHCP servers, with network security devices and with layer 2-3 switches. But we are still going to have to manually manage the applications that are ultimately the reason for the existence of such virtualized environments. Getting our infrastructure up to speed so that it is easier and less costly to manage is necessary, but let's not forget about the applications we also still have to manage. Dynamic feedback is great, and we have, today, the ability to enable pieces of that dynamic feedback loop. Customers can, today, use tools like iControl and iRules to build a feedback loop between their application delivery network and applications, regardless of whether those applications are in a VM or a Java EE container, or on a Microsoft server. But this feedback is specific to one vendor, and doesn't necessarily include the rest of the infrastructure. Greg is talking about general dynamic feedback at the network layer. He's specifically (and understandably) concerned with network agility, not application agility. That's why he calls it infrastructure 2.0 and not application something 2.0. Greg points as an example to the constant levels of change introduced by virtual machines coming online and offline and the difficulties inherent in trying to manage that change via static, infrastructure 1.0 products. That's all completely true and needs to be addressed by infrastructure vendors. But we also need to consider how to enable agility at the application layer, so the feedback loop that drives security and routing and switching and acceleration and delivery configurations in real time can adapt to conditions within and around the applications we are trying to manage in the first place. It's all about the application in the end. Endpoints - whether internal or external to the data center - are requesting access and IP addresses for one reason: to get a resource served by an application. That application may be TCP-based, it may be HTTP-based, it may be riding on UDP. Regardless of the network-layer transport mechanisms, it's still an application - a browser, a server-side web application, a SOA service - and its unique needs must be considered in order for the feedback loop to be complete. How else will you know which application just came online or went offline? How do you know what security to apply if you don't know what you might be trying to secure? Somehow the network-centric standards that might evolve from a push to a more agile infrastructure must broaden their focus and consider how an application might integrate with such standards, or what information applications might provide as part of this dynamic feedback loop that will drive a more agile infrastructure.
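To ground the idea, here is a hypothetical sketch (in Python) of what the application side of such a feedback loop might look like. The registration endpoint, the payload fields, and the existence of an infrastructure-side listener are all assumptions made purely for illustration; this is not iControl, IF-MAP, or any real API.

```python
# Hypothetical sketch of the application side of a dynamic feedback loop: when
# an application instance starts, it announces application-level context (not
# just an IP address) to an imaginary registration endpoint that the delivery
# infrastructure watches. Endpoint URL, payload fields, and the listener itself
# are assumptions for illustration only.
import json
import socket
import urllib.request

def announce_instance(registry_url: str) -> None:
    payload = {
        "application": "order-service",        # which app just came online
        "version": "2.3.1",                    # could drive version-specific routing rules
        "instance": socket.gethostname(),      # where it is running right now
        "port": 8443,
        "security_profile": "pci-web",         # app-specific policy the infrastructure should apply
        "health_check": "/healthz",            # how upstream devices should monitor it
    }
    req = urllib.request.Request(
        registry_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:  # infrastructure-side listener is assumed to exist
        print("registered:", resp.status)

# announce_instance("https://registry.example.com/instances")
```

The point is only that the announcement carries application context (name, version, policy, health check) rather than just an IP address.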
Any emerging standard upon which Infrastructure 2.0 is built must somehow be accessible and developer-friendly, take into consideration application-specific resources as well as network resources, and provide a standard means of sharing the application-specific information that can drive the infrastructure to adapt to each application's unique needs. If it doesn't, we're going to end up with the same fractured "us versus them" siloed infrastructure we've had for years. That's no longer reasonable. The network and the application are inexorably linked now, thanks to cloud computing and the Internet in general. Managing thousands of instances of an application will be as painful as managing thousands of IP addresses. As Greg points out, that doesn't work very well right now and it's costing us a lot of money, time and effort. We know where this ends up, because we've seen it happen already. The same diseconomies of scale that affect TCP/IP are going to affect application management. We should be more proactive in addressing the management issues that will arise from trying to manage thousands of applications and services, rather than waiting until that problem, too, can no longer be ignored.

What Do Database Connectivity Standards and the Pirate's Code Have in Common?
A: They're both more what you'd call "guidelines" than actual rules. An almost irrefutable fact of application design today is the need for a database, or at a minimum a data store - i.e. a place to store the data generated and manipulated by the application. A second reality is that despite the existence of database access "standards", no two database solutions support exactly the same syntax and protocols. Connectivity standards like JDBC and ODBC exist, yes, but like SQL they are variable, resulting in implementations just different enough to effectively cause vendor lock-in at the database layer. You simply can't take an application developed to use an Oracle database and point it at a Microsoft or IBM database and expect it to work. Life's like that in the development world. Database connectivity "standards" are a lot like the pirate's Code, described well by Captain Barbossa in Pirates of the Caribbean as "more what you'd call 'guidelines' than actual rules." It shouldn't be a surprise, then, to see the rise of solutions that address this problem, especially in light of an increasing awareness of (in)compatibility at the database layer and its impact on interoperability, particularly as it relates to cloud computing. Forrester analyst Noel Yuhanna recently penned a report on what is being called the Database Compatibility Layer (DCL). The focus of the DCL at the moment is on migration across database platforms because, as Noel points out, such migrations are complex, time-consuming and very costly.

Database migrations have always been complex, time-consuming, and costly due to proprietary data structures and data types, SQL extensions, and procedural languages. It can take up to several months to migrate a database, depending on database size, complexity, and usage of these proprietary features. A new technology has recently emerged for solving this problem: the database compatibility layer, a database access layer that supports another database management system's (DBMS's) proprietary extensions natively, allowing existing applications to access the new database transparently.
-- Simpler Database Migrations Have Arrived (Forrester Research report)

Anecdotally, having been on the implementation end of such a migration, I can't disagree with the assessment. Whether the right answer is to sit down and force some common standards on database connectivity or to build a compatibility layer is a debate for another day. Suffice to say that right now the former is unlikely given the penetration and pervasiveness of existing database connectivity, so the latter is probably the most efficient and cost-effective solution. After all, any changes in the core connectivity would require the same level of application modification as a migration - not an inexpensive proposition at all. According to Forrester, a Database Compatibility Layer (DCL) is a "database layer that supports another DBMS's proprietary SQL extensions, data types, and data structures natively. Existing applications can transparently access the newly migrated database with zero or minimal changes." By extension, this should also mean that an application could easily access one database and a completely different one using the same code base (assuming zero changes, of course). For the sake of discussion let's assume that a DCL exists that exhibits just that characteristic - complete interoperability at the connectivity layer. Not just for migration, which is of course the desired use, but for day-to-day use.
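As a rough illustration of what "complete interoperability at the connectivity layer" would feel like to an application developer, here is a minimal Python sketch. It uses the standard DB-API pattern purely as an analogy; a real DCL would also have to absorb proprietary SQL extensions, data types, and procedural code, which nothing below attempts, and the backend registry shown is hypothetical.

```python
# A minimal sketch of the idea behind a database compatibility layer (DCL):
# application code targets one interface, and the backend is chosen by
# configuration. Python's DB-API is used here purely as an analogy; a real DCL
# would also translate proprietary SQL dialects, which this does not attempt.
import sqlite3

def get_connection(backend: str):
    """Return a DB-API connection for the configured backend (hypothetical registry)."""
    if backend == "sqlite":
        return sqlite3.connect(":memory:")
    # elif backend == "postgres":
    #     import psycopg2  # same DB-API surface, different dialect underneath
    #     return psycopg2.connect(...)
    raise ValueError(f"unknown backend: {backend}")

def list_customers(conn):
    # Application logic sees cursors and rows, not files or vendor wire protocols.
    cur = conn.cursor()
    cur.execute("SELECT name FROM customers ORDER BY name")
    return [row[0] for row in cur.fetchall()]

if __name__ == "__main__":
    conn = get_connection("sqlite")
    conn.execute("CREATE TABLE customers (name TEXT)")
    conn.executemany("INSERT INTO customers VALUES (?)", [("Acme",), ("Initech",)])
    print(list_customers(conn))
```

The application logic in list_customers never changes; only the connection factory knows which backend is in play, which is the property the post assumes of a universal DCL.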
What would that mean for cloud computing providers – both internal and external? ENABLING IT as a SERVICE Based on our assumption that a DCL exists and is implemented by multiple database solution vendors, a veritable cornucopia of options becomes a lot more available for moving enterprise architectures toward IT as a Service than might be at first obvious. Consider that applications have variable needs in terms of performance, redundancy, disaster recovery, and scalability. Some applications require higher performance, others just need a nightly or even weekly backup and some, well, some are just not that important that you can’t use other IT operations backups to restore if something goes wrong. In some cases the applications might have varying needs based on the business unit deploying them. The same application used by finance, for example, might have different requirements than the same one used by developers. How could that be? Because the developers may only be using that application for integration or testing while finance is using it for realz. It happens. What’s more interesting, however, is how a DCL could enable a more flexible service-oriented style buffet of database choices, especially if the organization used different database solutions to support different transactional, availability, and performance goals. If a universal DCL (or near universal at least) existed, business stakeholders – together with their IT counterparts – could pick and choose the database “service” they wished to employ based on not only the technical characteristics and operational support but also the costs and business requirements. It would also allow them to “migrate” over time as applications became more critical, without requiring a massive investment in upgrading or modifying the application to support a different back-end database. Obviously I’m picking just a few examples that may or may not be applicable to every organization. The bigger thing here, I think, is the flexibility in architecture and design that is afforded by such a model that balances costs with operational characteristics. Monitoring of database resource availability, too, could be greatly simplified from such a layer, providing solutions that are natively supported by upstream devices responsible for availability at the application layer, which ultimately depends on the database but is often an ignored component because of the complexity currently inherent in supporting such a varied set of connectivity standards. It should also be obvious that this model would work for a PaaS-style provider who is not tied to any given database technology. A PaaS-style vendor today must either invest effort in developing and maintaining a services layer for database connectivity or restrict customers to a single database service. The latter is fine if you’re creating a single-stack environment such as Microsoft Azure but not so fine if you’re trying to build a more flexible set of offerings to attract a wider customer base. Again, same note as above. Providers would have a much more flexible set of options if they could rely upon what is effectively a single database interface regardless of the specific database implementation. More importantly for providers, perhaps, is the migration capability noted by the Forrester report in the first place, as one of the inhibitors of moving existing applications to a cloud computing provider is support for the same database across both enterprise and cloud computing environments. 
While services layers are certainly a means to the same end, such layers are not universally supported. There's no "standard" for them, not even a set of best practice guidelines, and the resulting application code suffers exactly the same issues as with the use of proprietary database connectivity: lock-in. You can't pick one up and move it to the cloud, or to another database, without changing some code. Granted, a services layer is more efficient in this sense as it serves as an architectural strategic point of control at which connectivity is aggregated and thus database implementation and specifics are abstracted from the application. That means the database can be changed without impacting end-user applications; only the services layer need be modified. But even that approach is problematic for packaged applications that rely upon database connectivity directly and do not support such service layers. A DCL, ostensibly, would support packaged and custom applications if it were implemented properly in all commercial database offerings.

CONNECTIVITY CARTEL
And therein lies the problem - if it were implemented properly in all commercial database offerings. There is a risk here of a connectivity cartel arising, where database vendors form alliances with other database vendors to support a DCL while "locking out" vendors whom they have decided do not belong. Because the DCL depends on supporting "proprietary SQL extensions, data types, and data structures natively", there may be a need for database vendors to collaborate as a means to properly support those proprietary features. If collaboration is required, it is possible to deny that collaboration as a means to control who plays in the market. It's also possible for a vendor to slightly change some proprietary feature in order to "break" the others' support. And of course the sheer volume of work necessary for a database vendor to support all other database vendors could overwhelm smaller database vendors, leaving them with no real way to support everyone else. The idea of a DCL is an interesting one, and it has its appeal as a means to forward compatibility for migration - both temporary and permanent. Will it gain in popularity? For the latter, perhaps, but for the former? Less likely. The inherent difficulties and scope of supporting such a wide variety of databases natively will certainly inhibit any such efforts. Solutions such as a RESTful interface, a la PHP REST SQL, or a JSON-HTTP based solution like DBSlayer may be more appropriate in the long run if they were to be standardized. And by standardized I mean standardized with industry-wide and agreed-upon specifications. Not more of the "more what you'd call 'guidelines' than actual rules" that we already have.

Database Migrations are Finally Becoming Simpler
Enterprise Information Integration | Data Without Borders
Review: EII Suites | Don't Fear the Data
The Database Tier is Not Elastic
Infrastructure Scalability Pattern: Sharding Sessions
F5 Friday: THE Database Gets Some Love
The Impossibility of CAP and Cloud
Sessions, Sessions Everywhere
Cloud-Tiered Architectural Models are Bad Except When They Aren't

Making Infrastructure 2.0 reality may require new standards
Managing a heterogeneous infrastructure is difficult enough, but managing a dynamic, ever changing heterogeneous infrastructure that must be stable enough to deliver dynamic applications makes the former look like a walk in the park. Part of the problem is certainly the inability to manage heterogeneous network infrastructure devices from a single management system. SNMP (Simple Network Management Protocol), the only truly interoperable network management standard used by infrastructure vendors for over a decade, is not robust enough to deal with the management nightmare rapidly emerging for cloud computing vendors. It's called "Simple" for a reason, after all. And even if it weren't, SNMP, while interoperable with network management systems like HP OpenView and IBM's Tivoli, is not standardized at the configuration level. Each vendor generally provides their own customized MIB (Management Information Base). Customized, which roughly translates to "proprietary"; if not in theory then in practice. MIBs are not interchangeable, they aren't interoperable, and they aren't very robust. Generally they're used to share information and are not capable of being used to modify device configuration. In other words, SNMP and customized MIBs are just not enough to support efficient management of a very large heterogeneous data center. As Greg Ness pointed out in his latest blog post on Infrastructure 2.0, the diseconomies of scale in the IP address management space are applicable more generally to the network management space. There's just no good way today to efficiently manage the kind of large, heterogeneous environment required of cloud computing vendors. SNMP wasn't designed for this kind of management any more than TCP/IP was designed to handle the scaling needs of today's applications. While some infrastructure vendors, F5 among them, have seen fit to provide a standards-based management and configuration framework, none of us are really compatible with the other in terms of methodology. The way in which we, for example, represent a pool or a VIP (Virtual IP address), or a VLAN (Virtual LAN) is not the same way Cisco or Citrix or Juniper represent the same network objects. Indeed, our terminology may even be different; we use pool, other ADC vendors use "farm" or "cluster" to represent the same concept. Add virtualization to the mix and yet another set of terms is added to the mix, often conflicting with those used by network infrastructure vendors. "Virtual server" means something completely different when used by an application delivery vendor than it does when used by a virtualization vendor like VMWare or Microsoft. And the same tasks must be accomplished regardless of which piece of the infrastructure is being configured. VLANs, IP addresses, gateway, routes, pools, nodes, and other common infrastructure objects must be managed and configured across a variety of implementations. Scaling the management of these disparate devices and solutions is quickly becoming a nightmare for vendors involved in trying to build out large-scale data centers, whether those are large enterprises or cloud computing vendors or service providers. In a response to Cloud Computing and Infrastructure 2.0, "johnar" points out: Companies are forced to either roll the dice on single-vendor solutions for simplicity, or fill the voids with their own home-brew solutions and therefore assume responsibility for a lot of very complex code that is tightly coupled with ever-changing vendor APIs and technology. 
The same technology that vendors tout as their differentiator is what is causing the integrators' grey hair. Because we all "do it different" with our modern-day equivalents of customized MIBs, it is difficult to integrate all the disparate nodes that make up a full application delivery network and infrastructure into a single, cohesive, efficient management mechanism. We're standards-based, but we aren't based on a single management standard. And as "johnar" points out, it seems unlikely that we'll "unite for data center peace" any time soon: "Unlike ratifying a new Ethernet standard, there's little motivation for ADC vendors to play nice with each other." I think there is motivation and reason for us to play nice with each other in this regard. Disparate, competitive vendors came together in the past to ratify Ethernet standards, which led to interoperability and simpler management as we built out the infrastructure that makes the web work today. If we can all agree that application delivery controllers (ADCs) are an integral part of Infrastructure 2.0 (and I'm betting we all can), then in order to further adoption of ADCs in general and make it possible for customers to choose based on features and functionality, we must make an effort to come together and consider standardizing a management model across the industry. And if we're really going to do it right, we need to encourage other infrastructure vendors to agree on a common base network management model to further simplify management of large heterogeneous network infrastructures. A VLAN is a VLAN regardless of whether it's implemented in a switch, an ADC, or on a server. If a lack of standards might hold back adoption or prevent vendors from competing for business, then that's a damn good motivating factor right there for us to unite for data center peace. If Microsoft, IBM, BEA, and Oracle were able to unite and agree upon a single web services interoperability standard (which they were, the result of which is WS-I), then it is not crazy to think that F5 and its competitors can come together and agree upon a single, standards-based management interface that will drive Infrastructure 2.0 to become reality. Major shifts in architectural paradigms often require new standards. That's where we got all the WS-* specifications and that's where we got all the 802.x standards: major architectural paradigm shifts. Cloud computing and the pervasive webification of, well, everything is driving yet another major architectural paradigm shift. And that may very well mean we need new standards to move forward and make the shift as painless as possible for everyone.
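As a thought experiment only (this is not a proposal from the post, and none of the names below correspond to any vendor's actual API), such a common management model could start as nothing more than a shared, vendor-neutral representation of the objects everyone already has, with each vendor supplying an adapter behind it. A minimal Python sketch:

```python
# A hypothetical sketch of a vendor-neutral management model: common object
# definitions plus a per-vendor adapter interface. Class and method names are
# invented for illustration; no real ADC or switch API is being described.
from dataclasses import dataclass, field
from typing import List, Protocol

@dataclass
class Vlan:
    name: str
    tag: int

@dataclass
class Pool:                      # "pool", "farm", or "cluster", depending on the vendor
    name: str
    members: List[str] = field(default_factory=list)   # "ip:port" strings

class InfrastructureAdapter(Protocol):
    """Each vendor would implement this against its own native interface."""
    def create_vlan(self, vlan: Vlan) -> None: ...
    def create_pool(self, pool: Pool) -> None: ...

def provision(adapter: InfrastructureAdapter) -> None:
    # The same provisioning logic runs unchanged against any conforming adapter.
    adapter.create_vlan(Vlan(name="app-tier", tag=120))
    adapter.create_pool(Pool(name="www.example.com_http_pool",
                             members=["10.0.120.10:80", "10.0.120.11:80"]))
```

The provisioning logic at the bottom is the payoff: it never changes no matter whose adapter is plugged in, which is exactly the kind of "manage once" property the post argues the industry should standardize on.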
Interoperability between clouds requires more than just VM portability
The issue of application state and connection management is one often discussed in the context of cloud computing and virtualized architectures. That's because the stress placed on existing static infrastructure due to the potentially rapid rate of change associated with dynamic application provisioning is enormous and, as is often pointed out, existing "infrastructure 1.0" systems are generally incapable of reacting in a timely fashion to such changes occurring in real time. The most basic of concerns continues to revolve around IP address management. This is a favorite topic of Greg Ness at Infrastructure 2.0 and has been subsequently addressed in a variety of articles and blogs since the concepts of cloud computing and virtualization have gained momentum. The Burton Group has addressed this issue with regard to interoperability in a recent post, positing that perhaps changes are needed (agreed) to support emerging data center models. What is interesting is that the blog supports the notion of modifying existing core infrastructure standards (IP) to support the dynamic nature of these new models and also posits that interoperability is essentially enabled simply by virtual machine portability. From The Burton Group's "What does the Cloud Need? Standards for Infrastructure as a Service": First question is: How do we migrate between clouds? If we're talking System Infrastructure as a Service, then what happens when I try to migrate a virtual machine (VM) between my internal cloud running ESX (say I'm running VDC-OS) and a cloud provider who is running XenServer (running Citrix C3)? Are my cloud vendor choices limited to those vendors that match my internal cloud infrastructure? Well, while it's probably a good idea, there are published standards out there that might help. Open Virtualization Format (OVF) is a meta-data format used to describe VMs in standard terms. While the format of the VM is different, the meta-data in OVF can be used to facilitate VM conversion from one format to the other, thereby enabling interoperability. ... Another biggie is application state and connection management. When I move a workload from one location to another, the application has made some assumptions about where external resources are and how to get to them. The IP address the application or OS use to resolve DNS names probably isn't valid now that the VM has moved to a completely different location. That's where Locator ID Separation Protocol (LISP -- another overloaded acronym) steps in. The idea with LISP is to add fields to the IP header so that packets can be redirected to the correct location. The "ID" and "locator" are separated so that the packet with the "ID" can be sent to the "locator" for address resolution. The "locator" can change the final address dynamically, allowing the source application or OS to change locations as long as they can reach the "locator". [emphasis added] If LISP sounds eerily familiar to some of you, it should. It's the same basic premise behind UDDI and the process of dynamically discovering the "location" of service end-points in a service-based architecture. Not exactly the same, but the core concepts are the same. The most pressing issue with proposing LISP as a solution is that it focuses only on the problems associated with moving workloads from one location to another, with the assumption that the new location is, essentially, a physically disparate data center and not simply a new location within the same data center - a scenario LISP does not even consider.
That it also ignores other application networking infrastructure that requires the same information - that is, the new location of the application or resource - is also disconcerting, but it's not a roadblock, merely a speed bump on the road to implementation. We'll come back to that later; first let's examine the emphasized statement that seems to imply that simply migrating a virtual image from one provider to another equates to interoperability between clouds - specifically IaaS clouds. I'm sure the author didn't mean to imply that it's that simple; that all you need is to be able to migrate virtual images from one system to another. I'm sure there's more to it, or at least I'm hopeful that this concept was expressed so simply in the interests of brevity rather than completeness, because there's a lot more to porting any application from one environment to another than just the application itself. Applications, and therefore virtual images containing applications, are not islands. They are not capable of doing anything without a supporting infrastructure - application and network - and some of that infrastructure is necessarily configured in such a way as to be peculiar to the application, and vice versa. We call it an "ecosystem" for a reason: there's a symbiotic relationship between applications and their supporting infrastructure that, when separated, degrades or even destroys the usability of that application. One cannot simply move a virtual machine from one location to another, regardless of the interoperability of virtualization infrastructure, and expect things to magically work unless all of the required supporting infrastructure has also been migrated just as seamlessly. And this infrastructure isn't just hardware and network infrastructure; authentication and security systems, too, are an integral part of an application deployment. Even if all the necessary components were themselves virtualized (and I am not suggesting this should be the case at all), simply porting the virtual instances from one location to another is not enough to assure interoperability, as the components must be able to collaborate, which requires connectivity information. Which brings us back to the problems associated with LISP and its focus on external discovery and location. There's just a lot more to interoperability than pushing around virtual images, regardless of what those images contain: application, data, identity, security, or networking. Portability between virtual images is a good start, but it certainly isn't going to provide the interoperability necessary to ensure the seamless transition from one IaaS cloud environment to another.

RELATED ARTICLES & BLOGS
Who owns application delivery meta-data in the cloud?
More on the meta-data menagerie
The Feedback Loop Must Include Applications
How VM sprawl will drive the urgency of the network evolution
The Diseconomy of Scale Virus
Flexibility is Key to Dynamic Infrastructure
The Three Horsemen of the Coming Network Revolution
As a Service: The Many Faces of the Cloud

The Great Client-Server Architecture Myth
The webification of applications over the years has led to the belief that client-server as an architecture is dying. But very few beliefs about architecture have been further from the truth. The belief that client-server was dying - or at least falling out of favor - was primarily due to the fact that early browser technology was used only as a presentation mechanism. The browser did not execute application logic, did not participate in application logic, and acted more or less like a television: smart enough to know how to display data but not smart enough to do anything about it. But the sudden explosion of Web 2.0 style applications and REST APIs has changed all that, and client-server is very much in style again, albeit with a twist. Developers no longer need to write the core of a so-called "fat client" from the ground up. The browser or a framework such as Adobe AIR or Microsoft's Silverlight provides the client-side platform on which applications are developed and deployed. These client-side platforms have become very similar in nature to their server-side cousins, application servers, taking care of the tedious tasks associated with building and making connections to servers, parsing data, and even storage of user-specific configuration data. Even traditional thin-client applications are now packing on the pounds, using AJAX and various JavaScript libraries to provide both connectivity and presentation components to developers in the same fashion that AIR and Silverlight provide a framework for developers to build richer, highly interactive applications. These so-called RIAs (Rich Internet Applications) are, in reality, thin clients that are rapidly gaining weight. One of the core reasons client-server architecture is being reinvigorated is the acceptance of standards. As developers have moved toward not only HTTP as the de facto transport protocol but HTML, DHTML, CSS, and JavaScript as primary client-side technologies, so have device makers accepted these technologies as the "one true way" to deliver applications to multiple clients from a single server-side architecture. It's no longer required that a client be developed for every possible operating system and device combination. A single server-side application can serve any and all clients capable of communicating via HTTP and rendering HTML, DHTML, CSS, and executing client-side scripts. Standards, they are good things after all. Client-server architectures are not going away. They have simply morphed from an environment-specific model to an environment-agnostic model that is much more efficient in terms of development costs and ability to support a wider range of users, but they are still based on the same architectural principles. Client-server as a model works and will continue to work as long as the infrastructure over which such applications are delivered continues to mature and recognizes that while one application may be capable of being deployed and utilized from any device, the environments over which it is delivered may impact the performance and security of that application.
Because client-server applications are now agnostic and capable of being delivered and used on a variety of devices and clients, they are not specifically optimized for any given environment, and developers do not necessarily have access to the network and transport layer components they would need in order to optimize them. These applications are written specifically to not care, and yet the device, the location of the user, and the network over which the application is delivered are all relevant to application performance and security. The need for context-aware application delivery is more important now than ever, as the same application may be served to the same user but rendered in a variety of different client environments and in a variety of locations. All these variables must be accounted for in order to deliver these fat-client RIAs in the most secure, performant fashion regardless of where the user may be, over what network the application is being delivered, and what device the user may be using at the time.

Let's Face It: PaaS is Just SOA for Platforms Without the Baggage
At some point in the past few years SOA apparently became a four-letter word (as opposed to just a TLA that leaves a bad taste in your mouth) or folks are simply unwilling – or unable – to recognize the parallels between SOA and cloud computing . This is mildly amusing given the heavy emphasis of services in all things now under the “cloud computing” moniker. Simeon Simeonov was compelled to pen an article for GigaOM on the evolution/migration of cloud computing toward PaaS after an experience playing around with some data from CrunchBase. He came to the conclusion that if only there were REST-based web services (note the use of the term “web services” here for later in the discussion) for both MongoDB and CrunchBase his life would have been a whole lot easier. For an application developer, as opposed to an infrastructure developer, all these vestiges of decades-old operating system architecture add little value. In fact, they cause deployment and operational headaches—lots of them. If I had taken almost any other approach to the problem using the tools I’m familiar with I would have performed HTTP operations against the REST-based web services interface for CrunchBase and then used HTTP to send the data to MongoDB. My code would have never operated against a file or any other OS-level construct directly. […] Most assume that server virtualization as we know it today is a fundamental enabler of the cloud, but it is only a crutch we need until cloud-based application platforms mature to the point where applications are built and deployed without any reference to current notions of servers and operating systems. -- Simeon Simeonov “The next reincarnation of cloud computing” Now I’m certainly not going to disagree with Simeon on his point that REST-based web services for data sources would make life a whole lot easier for a whole lot of people. I’m not even going to disagree with his assertion that PaaS is where cloud is headed. What needs to be pointed out is what he (and a lot of other people) are describing is essentially SOA minus the standards baggage. You’ve got the notion of abstraction in the maturation of platforms removing the need for developers to reference servers or operating systems (and thus files). You’ve got ubiquity in a standards-based transport protocol (HTTP) through which such services are consumed. You’ve got everything except the standards baggage. You know them, the real four-letter words of SOA: SOAP, WSDL, WSIL and, of course, the stars of the “we hate SOA show”, WS-everything. But the underlying principles that were the foundation and the vision of SOA – abstraction of interface from implementation, standards-based communication channels, discrete chunks of reusable logic – are all present in Simeon’s description. If they are not spelled out they are certainly implied by his frustration with a required interaction with file system constructs, desiring instead some higher level abstracted interface through which the underlying implementation is obscured from view. CLOUDS AREN’T CALLED “as a SERVICE” for NOTHING Whether we’re talking about compute, storage, platform, or infrastructure as a service the operative word is service. It’s a services-based model, a service-oriented model. It’s a service-oriented architecture that’s merely moved down the stack a bit, into the underlying and foundational technologies upon which applications are built. 
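As a small illustration of the pattern Simeonov describes, the whole data path can be expressed as HTTP operations with no files or other OS-level constructs in sight. The endpoints below are invented for the example (they are not CrunchBase's or MongoDB's actual APIs, and a REST front end for the document store is assumed):

```python
# Illustrative sketch: moving data from one REST-style service to another using
# nothing but HTTP. The URLs and the existence of a REST front end for the
# document store are assumptions for the sake of the example.
import json
import urllib.request

SOURCE_API = "https://api.example.com/companies?category=cloud"   # hypothetical data service
STORE_API = "https://docs.example.com/collections/companies"      # hypothetical document-store REST endpoint

def fetch_companies() -> list:
    with urllib.request.urlopen(SOURCE_API) as resp:
        return json.loads(resp.read().decode("utf-8"))

def store_company(doc: dict) -> None:
    req = urllib.request.Request(
        STORE_API,
        data=json.dumps(doc).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)

if __name__ == "__main__":
    for company in fetch_companies():
        store_company(company)   # no files, no servers, no OS-level constructs in the application's view
```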
Instead of building business services we’re talking about building developer services – messaging services, data services, provisioning services. Services, services, and more services. Move down the stack again and when we talk about devops and automation or cloud and orchestration we’re talking about leveraging services – whether RESTful or SOAPy – to codify operational and datacenter level processes as a means to shift the burden of managing infrastructure from people to technology. Infrastructure services that can be provisioned on-demand, that can be managed on-demand, that can apply policies on-demand. PaaS is no different. It’s about leveraging services instead of libraries or adapters or connectors. It’s about platforms – data, application, messaging – as a service. And here’s where I’ll diverge from agreeing with Simeon, because it shouldn’t matter to PaaS how the underlying infrastructure is provisioned or managed, either. I agree that virtualization isn’t necessary to build a highly scalable, elastic and on-demand cloud computing environment. But whether that data services is running on bare-metal, or on a physical server supported by an operating system, or on a virtual server should not be the concern of the platform services. Whether elastic scalability of a RabbitMQ service is enabled via virtualization or not is irrelevant. It is exactly that level of abstraction that makes it possible to innovate at the next layer, for PaaS offerings to focus on platform services and not the underlying infrastructure, for developers to focus on application services and not the underlying platforms. Thus his musings on the migration of IaaS into PaaS are ignoring that for most people, “cloud” is essentially a step pyramid, with each “level” in that pyramid being founded upon a firm underlying layer that exposes itself as services. SOA IS ALIVE and LIVING UNDER an ASSUMED NAME for ITS OWN PROTECTION If we return to the early days of SOA you’ll find this is exactly the same prophetic message offered by proponents riding high on the “game changing” technology of that time. SOA promised agility through abstraction, reuse through a services-oriented approach to composition, and relieving developers of the need to be concerned with how and where a services was implemented so they could focus instead on innovating new solutions. That’s the same thing that all the *aaS are trying to provide – and with many of the same promises. The “cloud” plays into the paradigm by introducing elastic scalability, multi-tenancy, and the notion of self-service for provisioning that brings the financial incentives to the table. The only thing missing from the “as a service” paradigm is a plethora of standards and the bad taste they left in many a developer’s mouth. And it is that facet of SOA that is likely the impetus for refusing to say the “S” word in close proximity to cloud and *aaS. The conflict, the disagreement, the confusion, the difficulties, the lack of interoperability that nearly destroyed the interoperability designed in the first place – all the negatives associated with SOA come to the fore upon hearing that TLA instead of its underlying concepts and architectural premises. Premises which, if you look around hard enough, you’ll find still very much in use and successfully doing exactly what it promised to do. Simeon himself does not appear to disagree with the SOA-aaS connection. In a Twitter conversation he said, “I still have scars from the early #SOA days. 
Shouldn't we start with something simpler for PaaS?" To which I would now say "but we are". After all, it wasn't - and isn't - SOA that was so darn complex; it was its myriad complex and often competing standards. A rose by any other name, and all that. We can refuse to use the acronym, but that doesn't change the fact that the core principles we're applying (successfully, I might add) are, in fact, service-oriented.