standards
Welcome to the Phygital World
Standards for 'Things'

That thing, next to the other thing, talking to this thing needs something to make it interoperate properly. That's the goal of the Industrial Internet Consortium (IIC), which hopes to establish common ways that machines share information and move data. IBM, Cisco, GE and AT&T have teamed up to form the IIC, an open membership group established to break down technology silos, drive better big data access and improve integration of the physical and digital worlds. The Phygital World.

The IIC will work to develop a 'common blueprint' that machines and devices from all manufacturers can use to share and move data. These standards won't be limited to internet protocols; they will also cover metrics like storage capacity in IT systems, various power levels, and data traffic control. Sensors are getting standards. Soon.

As more of these chips get installed on street lights, thermostats, engines, soda machines and even into our own bodies, the IIC will focus on testing IoT applications, producing best practices and standards, influencing global IoT standards for Internet and industrial systems, and creating a forum for sharing ideas. Explore new worlds, so to speak. I think it's nuts that we're in an age where we are trying to figure out how the blood sensor talks to the fridge sensor, which notices there is no more applesauce and auto-orders from the local grocery to have it delivered that afternoon. Almost there.

Initially, the new group will focus on 'industrial Internet' applications in manufacturing, oil and gas exploration, healthcare and transportation. In those industries, vendors often don't make it easy for hardware and software solutions to work together. The IIC is saying, 'we all have to play with each other.' That will become critically important when your embedded sleep monitor/dream recorder notices your blood sugar levels rising, indicating that you're about to wake up, which kicks off a series of workflows that start the coffee machine, heat and distribute the hot water, and display the day's news and weather on the refrigerator's LCD screen. Any minute now.

It will probably be a little while (years) before these standards can be created and approved, but when they are, they'll help developers of hardware and software create solutions that are compatible with the Internet of Things. The end result will be the full integration of sensors, networks, computers, cloud systems, large enterprises, vehicles, businesses and hundreds of other entities that are 'connected.' With London cars getting stolen using electronic gadgets, and connected devices expected to be as common as electricity by 2025, securing the Internet of Things should be one of the top priorities facing the consortium.

ps

Related:
Consortium Wants Standards for 'Internet of Things'
AT&T, Cisco, GE, IBM and Intel form Industrial Internet Consortium for IoT standards
IBM, Cisco, GE & AT&T form Industrial Internet Consortium
The "Industrial" Internet of Things and the Industrial Internet Consortium
The Internet of Things Will Thrive by 2025
Securing the Internet of Things: is the web already breaking up?
Connected Devices as Common as Electricity by 2025
The ABCs of the Internet of Things
Some Predictions About the Internet of Things and Wearable Tech From Pew Research
Car-Hacking Goes Viral In London
FedRAMP Federates Further

FedRAMP (Federal Risk and Authorization Management Program), the government's cloud security assessment program, announced late last week that Amazon Web Services (AWS) is the first agency-approved cloud service provider. The accreditation covers all AWS data centers in the United States, and Amazon becomes the third vendor to meet the security requirements detailed by FedRAMP.

FedRAMP is the result of the US Government's work to address security concerns related to the growing practice of cloud computing. It establishes a standardized approach to security assessment, authorization and continuous monitoring for cloud services and products. By creating industry-wide security standards and focusing more on risk management, as opposed to strict compliance with reporting metrics, officials expect to improve data security as well as simplify the processes agencies use to purchase cloud services. FedRAMP is looking toward full operational capability later this year.

As both the cloud and the government's use of cloud services grew, officials found many inconsistencies in requirements and approaches as each agency began to adopt the cloud. Launched in 2012, FedRAMP's goal is to bring consistency to the process and to give cloud vendors a standard way of providing services to the government. And with the government's cloud-first policy, which requires agencies to consider moving applications to the cloud as a first option for new IT projects, this should streamline the process of deploying to the cloud. It is an 'approve once, use many' approach, reducing the cost and time required to conduct redundant, individual agency security assessments. AWS's certification is good for three years.

FedRAMP provides an overall checklist for handling risks associated with Web services that would have a limited or serious impact on government operations if disrupted. Cloud providers must implement these security controls to be authorized to provide cloud services to federal agencies. The government will forbid federal agencies from using a cloud service provider unless the vendor can prove that a FedRAMP-accredited third-party organization has verified and validated the security controls. Once approved, the cloud vendor does not need to be re-evaluated by every government entity that might be interested in its solution, though there may be instances where agencies add controls to address specific needs.

The BIG-IP Virtual Edition for AWS includes options for traffic management, global server load balancing, application firewall, web application acceleration, and other advanced application delivery functions.

ps

Related:
Cloud Security With FedRAMP
FedRAMP Ramps Up
FedRAMP achieves another cloud security milestone
Amazon wins key cloud security clearance from government
CLOUD SECURITY ACCREDITATION PROGRAM TAKES FLIGHT
FedRAMP comes fraught with challenges
F5 iApp template for NIST Special Publication 800-53
Now Playing on Amazon AWS - BIG-IP
Connecting Clouds as Easy as 1-2-3
F5 Gives Enterprises Superior Application Control with BIG-IP Solutions for Amazon Web Services
F5 Friday: Speedy SPDY

#ADO, #Stirling, #fasterapp: a SPDY implementation that is as fast and adaptable as needed.

**I originally wrote this more than a month ago. Coworkers have covered this topic extensively, but I thought I'd still get it posted for those who read my blog and missed it.

Remember the days when Internet connections were inherently slow, and browser usage required extreme patience? For many people – from certain geographic regions to mobile phone Internet users – that world of waiting has come around again, and they're not as patient as people used to be, largely because instant communication has become the standard and expectations have risen. As with all recurring themes, there are new solutions coming along to resolve these problems, and F5 is staying on top of them, helping IT to better serve the needs of the business and the customer.

In November of 2009, Google announced the SPDY protocol to improve the performance of browser-server communications. Since then, implementations of SPDY have cropped up in both Chrome and Firefox, which according to w3schools.com together comprise over 70% of the global browser market. The problem is that web server and web application server implementations lag far behind client adoption. While SPDY by default drops back to HTTP if either client or server lacks a SPDY implementation, there are clear-cut benefits to SPDY that IT is missing out on. This is the result of a convergence of issues that will eventually be resolved on their own, most notably that it is easy to get two open source browsers to support your standard and attain market penetration, but much harder to convince tens of thousands of IT folks to disrupt their normal operations to implement a standard that isn't strictly necessary for most of them. Eventually, SPDY support will come pre-packaged in most web servers, and if it is something your organization needs, those web servers will be the first choice for new projects. Until then, clients with slow connections (including all mobile clients) will suffer longer delivery timeframes.

What is required is a solution that allows for SPDY support without disrupting the flow of normal operations. Something that can be implemented quickly and easily, without the hassle of dropping web servers, installing modules, making configuration changes, etc. And of course that solution should be comprehensive enough to serve the most demanding environments. As of now, that requirement is fulfilled by F5. F5 WebAccelerator now supports SPDY, acting as a proxy for any of the servers you choose to turn SPDY support on for.

In the normal course of SPDY operations, the client and the server exchange information about whether they support SPDY or not, and if both do not, then HTTP is used for communication between the browser and the web server. BIG-IP WebAccelerator acts as a proxy for web servers. It terminates the connection, responds that the server behind it does indeed support SPDY, then translates requests from the browser into HTTP before passing them to the server, and translates responses from the server into SPDY before passing them to the client. The net result is that on the slowest part of the connection – the Internet and the wireless device "last mile" – SPDY is being used, while there are zero changes to the application infrastructure. And because the BIG-IP product family specializes in per-application configuration, you can pick and choose which applications running behind a BIG-IP device actually support SPDY, should the need arise.
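To make that gateway pattern concrete, here is a minimal Python sketch (my own illustration, not F5 code): a proxy terminates the optimized client-side protocol and speaks plain HTTP/1.1 to an unmodified origin. Real SPDY framing is far more involved; the translate_* stubs, the origin host and the listening port below are placeholders, not anything from the product.

```python
# Sketch of the "terminate new protocol at the proxy, keep HTTP on the origin"
# pattern described above. The origin never changes; only the device in front
# of it does.
import asyncio

ORIGIN_HOST, ORIGIN_PORT = "127.0.0.1", 8080   # hypothetical unmodified HTTP/1.1 origin


def translate_client_frame_to_http(frame: bytes) -> bytes:
    """Placeholder: a real gateway would decode SPDY frames and header blocks
    into an HTTP/1.1 request here."""
    return frame


def translate_http_to_client_frame(response: bytes) -> bytes:
    """Placeholder for the reverse translation (HTTP/1.1 -> SPDY frames)."""
    return response


async def handle_client(reader, writer):
    request = await reader.read(65536)                 # one client request (toy)
    http_request = translate_client_frame_to_http(request)

    # Server side stays plain HTTP/1.1: no changes to the origin needed.
    origin_reader, origin_writer = await asyncio.open_connection(ORIGIN_HOST, ORIGIN_PORT)
    origin_writer.write(http_request)
    await origin_writer.drain()
    response = await origin_reader.read(65536)
    origin_writer.close()

    writer.write(translate_http_to_client_frame(response))
    await writer.drain()
    writer.close()


async def main():
    server = await asyncio.start_server(handle_client, "0.0.0.0", 8443)
    async with server:
        await server.serve_forever()


if __name__ == "__main__":
    asyncio.run(main())
```

Even in toy form, the point is visible: the "last mile" gets the new protocol while the application infrastructure behind the proxy keeps speaking the HTTP it already knows.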
Combined with the whole collection of other optimizations that WebAccelerator implements, the performance of web applications to any device can greatly benefit without retrofitting the entire network.

Related:
The HTTP 2.0 War has Just Begun
The Four V's of Big Data
The "All of the Above" Approach to Improving Application Performance
Mobile Apps. New Game, New (and Old) Rules
F5 Friday: Ops First Rule
The HTTP 2.0 War has Just Begun

#stirling Microsoft takes on Google as the war to win the standard for an overdue overhaul of HTTP starts to pick up steam.

RFC 1945 – "Hypertext Transfer Protocol -- HTTP/1.0" – was published in May 1996. In June of 1999, RFC 2616 – "Hypertext Transfer Protocol -- HTTP/1.1" – was published. In the ensuing 13 years there have been no substantial changes to the HTTP standard. None. Nada. Zilch. Even as the size and number of objects has ballooned over that time, and the overall composition of web pages has grown increasingly complex, there have been no substantial efforts to improve upon the now entrenched HTTP standard. Even as sites struggled to maintain availability and performance in the face of exploding usage growth – fueled by mobile device proliferation and increasingly affordable access enabling everything from plants to cows to users to "get online" – HTTP 1.1 remained the standard for web-everything, despite the growing evidence that it simply wasn't the most optimal means of connecting users with the resources they expect and, increasingly, demand. AJAX and Web 2.0 gave us better interactive models that alleviated some of the pain associated with performance problems, but as that model took hold and video became the medium du jour, even its advantages have become unable to produce acceptable results.

And then Google introduced SPDY. The first shot in the HTTP 2.0 war. Now Microsoft has fired back with "Speed+Mobility" and the battle appears about to be fully engaged. Although SPDY has been out and about for some time, it only recently made it to the status of "Internet-Draft" in the RFC system, being officially published in Feb 2012. Along comes March 2012, and Microsoft has (sort of) countered with Speed+Mobility. What will be interesting as the battle progresses is to see which other organizations and vendors will side with which version (if not both). Invariably other organizations will want to be able to claim to have been co-authors of whichever standard becomes, well, the standard, but choosing sides so early in a war is hardly appropriate, especially when the technical details are still (as of this writing) missing from Microsoft's proposal.

RIP-REPLACE versus UPGRADE

It's also not clear how Speed+Mobility will "retain as much compatibility as possible with the existing Web infrastructure" – a noble and laudable sentiment, to be sure – while still adopting most of the core concepts included in SPDY. From the HTTP Speed+Mobility RFC:

- It [the session layer] would maintain the integrity of the layered architecture.
- It would use an upgrade mechanism similar to that of WebSockets. This would enable compatibility with existing proxies and connection models, without creating a mandatory dependency on TLS.
- [Same as SPDY] The protocol would define two types of frames: data and control.
- [Same as SPDY] The session layer would enable negotiation of multiple simultaneous streams for HTTP requests with minimal overhead.
- [Same as SPDY] The session layer would allow for prioritizing delivery of content to ensure highest value traffic is delivered first.

(A toy sketch of these shared framing concepts follows below.) There's not much in the Speed+Mobility RFC on which to base a technical impact assessment on infrastructure (existing proxies and other HTTP-mediating devices like load balancers), but what Microsoft appears to be saying is that it wants to leverage the concepts introduced by Google with SPDY (acknowledging their performance and, ultimately, scaling benefits) without leaving the familiar world of HTTP.
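Here is that toy sketch, in Python and entirely my own illustration rather than the wire format of either proposal: the shared session-layer ideas both drafts borrow from SPDY are control and data frame types, multiple streams multiplexed over one connection, and priority-ordered delivery. The header layout, type values and field names are invented purely to show the shape of the idea.

```python
# Toy framing sketch: control/data frames, stream IDs for multiplexing,
# and a priority field for "highest value traffic first". Not SPDY, not
# Speed+Mobility, just the shared concepts.
import struct
from dataclasses import dataclass

CONTROL, DATA = 0x1, 0x0    # frame types (values are illustrative)


@dataclass
class Frame:
    frame_type: int     # CONTROL or DATA
    stream_id: int      # many streams share one TCP connection
    priority: int       # higher value delivered first
    payload: bytes

    def encode(self) -> bytes:
        # [type:1][priority:1][stream_id:4][length:4][payload] -- made-up layout
        header = struct.pack("!BBII", self.frame_type, self.priority,
                             self.stream_id, len(self.payload))
        return header + self.payload


def decode(buf: bytes) -> Frame:
    ftype, prio, sid, length = struct.unpack("!BBII", buf[:10])
    return Frame(ftype, sid, prio, buf[10:10 + length])


# Three requests interleaved on one connection, sent highest priority first.
frames = [
    Frame(DATA, stream_id=1, priority=0, payload=b"GET /big-image"),
    Frame(DATA, stream_id=3, priority=7, payload=b"GET /critical.css"),
    Frame(CONTROL, stream_id=0, priority=7, payload=b"SETTINGS"),
]
wire = b"".join(f.encode() for f in sorted(frames, key=lambda f: -f.priority))
assert decode(frames[0].encode()).payload == b"GET /big-image"
```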
That compatibility is actually important, assuming it can be done, because SPDY requires significant changes to existing infrastructure – network and server – in order to operate, and it is not inherently interoperable with HTTP. Despite this, SPDY interest and inquiries are becoming more frequent, which means it's getting the attention it deserves. Being the only kid on the block to really address the performance issues inherent in HTTP (especially with respect to mobile devices), that's no surprise, as the investment in new solutions to support SPDY would ostensibly see a return in the form of scalability on the server side, requiring fewer server resources to support as many if not more users. But SPDY isn't so far along (see previous note) as to be a clear front runner. It's still too new, despite the interest, to have garnered widespread support or mindshare, and despite Google's ubiquitous status as a household term for search, it isn't necessarily synonymous with web standards. Chrome may be gaining on IE, but in the minds of most users, IE is still synonymous with web browsing.

Microsoft also has a serious advantage over Google in its relationship with the enterprise and IT, and in its more intimate understanding of data center infrastructure, as is evident from its blog post introducing its proposal:

"We think that rapid adoption of HTTP 2.0 is important. To make that happen, HTTP 2.0 needs to retain as much compatibility as possible with the existing Web infrastructure. Awareness of HTTP is built into nearly every switch, router, proxy, load balancer, and security system in use today. If the new protocol is 'HTTP' in name only, upgrading all of this infrastructure would take too long. By building on existing web standards, the community can set HTTP 2.0 up for rapid adoption throughout the web." -- Speed and Mobility: An Approach for HTTP 2.0 to Make Mobile Apps and the Web Faster

Google, while not necessarily openly hostile to the enterprise or the infrastructure vendors who'd need to support SPDY, certainly appears indifferent to the impact of a rip-and-replace protocol model. That's not to say Google's approach isn't feasible or desirable. Indeed, in some cases a rip-and-replace strategy is the only way to clean out the cobwebs that otherwise seem to hang onto technology for years after it has been superseded and superseded again. Think COBOL, which in some industries is still under active development, augmented by a hundred other technologies designed to work around the reality that it's an aged, outdated technology that for various reasons we are unable to simply walk away from.

TAKE a SIDE ALREADY, WILL YOU?!

Nope. Not gonna take a side yet – if ever. Personal preferences aside (which are hard to have at this point without more technical details from Microsoft), the decision whether an organization eventually wants to go with SPDY or Speed+Mobility will not negatively impact mediating devices. In fact, the existence of both would not negatively impact such devices because of their strategic location in the network. The existence of all three – SPDY, S+M, HTTP – would not negatively impact these devices as long as they were able to support all three, which seems more likely than simply choosing a side. There will be a need to support both – and likely all three (do I hear a fourth?) – protocols moving forward. Regardless of who wins this particular war and comes out crowned HTTP 2.0 champion, there will still be a need to implement support across infrastructure vendors.
There will be a transitory period during which browsers and servers and infrastructure all must "get up to speed" (ha!), and they will do so at different rates, making the need for intermediating devices critical. Just as with the migration from IPv4 to IPv6, intermediating application delivery solutions provide the means by which organizations with substantial infrastructure investments can maintain the value of those investments while moving forward to support emerging standards. Being able to translate, for example, between SPDY and HTTP today would be a significant boon for organizations, as it requires no changes to what is likely an extensive application and server infrastructure. Similarly, assuming a pilot of Speed+Mobility, if the application delivery tier can support it, it can mediate – translate – and provide an opportunity to support users via either standard without radically disrupting the application server infrastructure. A full-proxy based application delivery infrastructure is full of advantages, after all.

I like SPDY. I like its approach, and I actually admire Google's chutzpah in diverging from HTTP as a solution, recognizing perhaps the inherent tendency to be more concerned with backwards compatibility than with improving upon the model. But I like what Microsoft is saying from an enterprise perspective because, honestly, replacing an entire infrastructure architecture to support one protocol out of many is not an appealing option, no matter the benefits. Both approaches have merit, and the bigger story is that an overhaul of HTTP is necessary - and long overdue.

Related:
Web App Performance: Think 1990s.
Network versus Application Layer Prioritization
Oops! HTML5 Does It Again
Fire and Ice, Silk and Chrome, SPDY and HTTP
Grokking the Goodness of MapReduce and SPDY
Google SPDY Protocol Would Require Mass Change in Infrastructure
What Does Mobile Mean, Anyway?
Moore's (Traffic) Law
New Communications = Multiplexification

I wrote a good while back about the need to translate all the various storage protocols into one that could take root and simplify the lives of IT. None of the ones currently being hawked seem to be making huge inroads in the datacenter; all have some uses, none is unifying. Those peddling the latest, greatest thing of course want to sell you on their protocol because they hope to be The One, but it's not about selling, it's about usefulness. At the time, FCoE was the new thing. I don't get much chance to follow storage like I used to, but I haven't heard of anything new since the furor over FCoE started to calm down, so I presume the market is still sitting there, with NAS split between two protocols and block storage split between many.

There is a similar fragmentation trend going on in networking at the moment too. There have always been a zillion transport standards, and as long as the upper layers can be uniform, working out how to fit your cool new satellite link into Ethernet is a simple problem from the IT perspective. Either the vendor solves the issue or they fail due to lack of usefulness. But higher layers are starting to see fragmentation, in the form of SPDY, Speed+Mobility, etc. In both of these cases, HTTP is being supplanted by something that requires configuration differences and is not universally supported by clients. And yet the benefits are such that IT is paying attention. IPv6 is causing similar issues at the lower layers, and it is worth mentioning here for a reason. The key, as Lori and I have both written, is that IT cannot afford to rework everything at once to support these new standards, but feels an imperative (for IP address space with IPv6, for web app performance with the HTTP-layer changes) to implement them whenever possible.

The best solution to these problems – where upgrading has its costs and failing to upgrade has other costs – is to implement a gateway. F5's IPv6 Gateway is one solution (other vendors have them too - I'll talk about the one I know here, but assume it applies to the others and verify that with your vendor) that's easy to talk about because it is being utilized in IT shops to do just that. With the gateway implemented, sitting in front of your DC, it translates between IPv6 and IPv4, meaning that the datacenter can be converted at a sane pace, and support for IPv4 is not a separate stack that must be maintained while client adoption catches up. If a connection comes in to the gateway as IPv4 and the server speaks IPv4, the connection is passed through. The same occurs if both client and server support IPv6. If the client and server have a mismatch, the gateway translates between them. That means you get support the day a gateway is deployed, and over time you can convert your systems while maintaining support for all clients.

This type of solution works handily for protocols like SPDY too, offering the ability to say a server supports SPDY when in fact it doesn't; the gateway does, and it translates between SPDY and HTTP. Deploying a SPDY gateway gives instant SPDY support to web (and application) servers behind the gateway, buying IT time to reconfigure those web servers to actually support SPDY. SPDY accelerates everything on the client side, and HTTP is only used on the faster server side where the network is dedicated. Faster has an asterisk by it though. What if the app or web server is at a remote site? You're going right back out onto the Internet and using HTTP unoptimized.
In those cases – and other cases where network response time is slow - something is needed on the backend to keep those performance gains without finding the next bottleneck as soon as the SPDY gateway is deployed. F5 uses several technologies to improve backend communications performance, and other vendors have similar solutions (though ours are better – biased though I may be). For F5's part, secure tunnels, WAN optimization, and a very relevant feature of BIG-IP LTM called OneConnect all work together to minimize backend traffic.

OneConnect is a cool little feature that minimizes the connections from the BIG-IP to the backend server by pooling and reusing them. This does several things, but importantly, it takes the setup and teardown time for connections out of the picture. So if a (non-SPDY) client makes four connections to get its data, the BIG-IP merges them with other requests to the same server and essentially multiplexes them. Funny thing is, this is one of the features of SPDY on the other side, with the primary difference that SPDY is client focused (it merges connections from the client) and OneConnect is server focused (it merges connections to the server). The client side is "all connections from this client", while the server side is "all connections to this server (regardless of client)", but otherwise they are very similar. This enters interesting territory, because now we're essentially multi-multi-plexing. But we're not. Here's the sequence of events, using only a couple of clients and a generic server/application farm (a toy connection-reuse sketch follows at the end of this post):

1. SPDY comes into the gateway as a single stream from the client.
2. The gateway translates it into HTTP's multiple streams.
3. BIG-IP identifies the server the request is for.
4. If a connection exists to the server, BIG-IP passes the request through the existing connection.
5. When responses are sent, this process is handled in reverse: responses come in over OneConnect and go out SPDY encoded.

There is only a brief period of time where native HTTP is being communicated, and presumably the SPDY gateway and the BIG-IP are in very close proximity. The result is application communications that are optimized end-to-end, but the only changes to your application architecture are configuring the SPDY gateway and OneConnect. Not too bad for a problem that normally requires modification of each web and application server that will support SPDY. As alluded to above, if the application servers are remote from the SPDY gateway, the benefits are even more pronounced, just due to latency on the back end. All the benefits of both SPDY and OneConnect, and you will be done before lunch. Far better than loading modules into every web server or upgrading every app server. Alternatively, you could continue to support only HTTP, but watching the list of clients that transparently support SPDY, the net result of doing so is very likely to be that customers gravitate to your competitors, whose websites seem to be faster.

Related:
The Four V's of Big Data
The "All of the Above" Approach to Improving Application Performance
Google SPDY Accelerates Mobile Web
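As promised, here is a minimal sketch of the connection-reuse idea behind OneConnect (my own Python illustration, not how BIG-IP implements it): requests arriving on many client-side connections are funneled over a small, reused pool of backend keep-alive connections, so backend setup and teardown drop out of the picture. The BackendPool class, the pool size and the backend host name are all hypothetical.

```python
# Sketch: pool and reuse server-side connections so many clients share a few
# backend connections instead of each opening their own.
import queue
import http.client

BACKEND = "app.example.com"          # hypothetical origin server


class BackendPool:
    """Reuse idle HTTP/1.1 keep-alive connections to one backend."""

    def __init__(self, host: str, size: int = 4):
        self.idle: "queue.Queue[http.client.HTTPConnection]" = queue.Queue()
        for _ in range(size):
            self.idle.put(http.client.HTTPConnection(host))

    def fetch(self, path: str) -> bytes:
        conn = self.idle.get()           # borrow an existing connection
        try:
            conn.request("GET", path)
            return conn.getresponse().read()
        finally:
            self.idle.put(conn)          # return it for the next request


pool = BackendPool(BACKEND)
# Requests from any number of different clients funnel through the same few
# backend connections, with no per-client connection setup on the server side.
for client_request in ["/", "/style.css", "/app.js", "/logo.png"]:
    pool.fetch(client_request)
```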
BIG-IP Configuration Object Naming Conventions

George posted an excellent blog on hostname nomenclature a while back, but something we haven't discussed much in this space is a naming convention for the BIG-IP configuration objects. Last week, DevCentral community user Deon posted a question on exactly that. Sometimes there are standards just for the sake of having one, but in most cases, and particularly in this case, having standards is a very good thing. Señor Forum hoolio and MVP hamish weighed in with some good advice:

[app name]_[protocol]_[object type]

Examples:
www.example.com_http_vs
www.example.com_http_pool
www.example.com_http_monitor

As hoolio pointed out in the forum, each object now has a description field, so the metadata capability is there to establish identifying information (knowledge base IDs, troubleshooting info, application owners), but having an object name that is quickly searchable and identifiable to operational staff is key. Hamish had a slight alternative format for virtuals:

[fqdn]_[port]

For network virtuals, I've always made the network part of the name, as hamish also recommends in his guidance: "network VS's tend to be named net-net.num.dot.ed-masklen. e.g. net-0.0.0.0-0 is the default address. Where they conflict (e.g. two defaults depending on src VLAN), it gets an extra descriptor between net- and the ip address. e.g. net-wireless-0.0.0.0-0 (Default network VS for a wireless VLAN). I don't currently have any network VS's for specific ports. But they'd be something like net-0.0.0.0-0-port." (A tiny name-builder sketch follows at the end of this post.)

Your Turn

What standards do you use? Share in the comments section below, or post to the forum thread.
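To make the convention above concrete, here's a tiny Python helper sketch; the bigip_object_name function is my own illustrative name, not anything shipped with BIG-IP, and it simply reproduces the [app name]_[protocol]_[object type] pattern from the forum thread.

```python
# Build searchable, self-describing BIG-IP object names per the convention above.
def bigip_object_name(app: str, protocol: str, object_type: str) -> str:
    return f"{app}_{protocol}_{object_type}"


for obj in ("vs", "pool", "monitor"):
    print(bigip_object_name("www.example.com", "http", obj))
# www.example.com_http_vs
# www.example.com_http_pool
# www.example.com_http_monitor
```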
The Inevitable Eventual Consistency of Cloud Computing

An IDC survey highlights the reasons why private clouds will mature before public, leading to the eventual consistency of public and private cloud computing frameworks.

Network Computing recently reported on a very interesting research survey from analyst firm IDC. This one was interesting because it delved into concerns regarding public cloud computing in a way that most research surveys haven't done, including asking respondents to weight their concerns as they relate to application delivery from a public cloud computing environment. The results? Security, as always, tops the list. But close behind are application delivery related concerns such as availability and performance.

From Network Computing – IDC Survey: Risk In The Cloud: "While growing numbers of businesses understand the advantages of embracing cloud computing, they are more concerned about the risks involved, as a survey released at a cloud conference in Silicon Valley shows. Respondents showed greater concern about the risks associated with cloud computing surrounding security, availability and performance than support for the pluses of flexibility, scalability and lower cost, according to a survey conducted by the research firm IDC and presented at the Cloud Leadership Forum IDC hosted earlier this week in Santa Clara, Calif. However, respondents gave more weight to their worries about cloud computing: 87 percent cited security concerns, 83.5 percent availability, 83 percent performance and 80 percent cited a lack of interoperability standards."

The respondents rated the risks associated with security, availability, and performance higher than the always-associated benefits of public cloud computing: lower costs, scalability, and flexibility. That ultimately results in a reluctance to adopt public cloud computing, and it is likely driving these organizations toward private cloud computing, because public cloud can't or won't at this point address these challenges, but private cloud computing can and is – by architecting a collection of infrastructure services that can be leveraged by (internal) customers on an application by application (and sometimes request by request) basis.

PRIVATE CLOUD will MATURE FIRST

What will ultimately bubble up and become more obvious to public cloud providers is customer demand. Clouderati like James Urquhart and Simon Wardley often refer to this process as commoditization or standardization of services. These services – at the infrastructure layer of the cloud stack – will necessarily be driven by customer demand; by the market. Because customers right now are not fully exercising public cloud computing as they would their own private implementation – replete with infrastructure services, business critical applications, and adherence to business-focused service level agreements – public cloud providers are at a bit of a disadvantage. The market isn't telling them what they want and need, so public cloud providers are left to fend for themselves. Or they may be pandering necessarily to the needs and demands of a few customers that have fully adopted their platform as their data center du jour. Internal to the organization there is a great deal more going on than some would like to admit.
Organizations have long since abandoned even the pretense of caring about the definition of "cloud" and whether or not there exists such a thing as "private" cloud, and have forged their way forward past "virtualization plus" (a derogatory and dismissive term often used to describe such efforts by some public cloud providers) and into the latter stages of the cloud computing maturity model. Internal IT organizations can and will solve the "infrastructure as a service" conundrum because they necessarily have a smaller market to address. They have customers, but it is a much smaller and better-defined set of customers which they must support, and thus they are able to iterate over the development processes and integration efforts necessary to get there much quicker and without as much disruption. Their goal is to provide IT as a service, offering a repertoire of standardized application and infrastructure services that can easily be extended to support new infrastructure services. They are, in effect, building their own cloud frameworks (stack) upon which they can innovate and extend as necessary. And as they do so they are standardizing, whether by conscious effort or as a side-effect of defining their frameworks. But they are doing it, regardless of those who might dismiss their efforts as "not real cloud." When you get down to it, enterprise IT isn't driven by adherence to some definition put forth by pundits. It's driven by the need to provide business value to its customers at the best possible "profit margin" it can. And it's doing so faster than public cloud providers because it can.

WHEN CLOUDS COLLIDE - EVENTUAL CONSISTENCY

What that means is that in a relatively short amount of time, as measured by technological evolution at least, the "private clouds" of customers will have matured to the point that they will be ready to adopt a private/public (hybrid) model and really take advantage of that public, cheap, compute-on-demand that's so prevalent in today's cloud computing market. Not just use public clouds as inexpensive development or test playgrounds, but integrate them as part of a global application delivery strategy. The problem then is aligning the models and APIs and frameworks that have grown up in each of the two types of clouds. Like the concept of "eventual consistency" with regard to data and databases and replication across clouds (intercloud), the same "eventual consistency" theory will apply to cloud frameworks. Eventually there will be a standardized (consistent) set of infrastructure services and network services and frameworks through which such services are leveraged. Oh, at first there will be chaos and screaming and gnashing of teeth as the models bump heads, but as more organizations and providers work together to find the common ground between them, they'll find that just like the peanut butter and chocolate in a Reese's Peanut Butter Cup, the two disparate architectures can "taste better together."

The question that remains is which standardization will be the one with which others must become consistent. Without consistency, interoperability and portability will remain little more than a pipe dream. Will it be standardization driven by the customers, a la the Enterprise Buyer's Cloud Council? Or will it be driven by providers in an "if you don't like what we offer go elsewhere" market? Or will it be driven by a standards committee comprised primarily of vendors with a few "interested third parties"?
Related Posts from tag interoperability:
Despite Good Intentions PaaS Interoperability Still Only Skin Deep
Apple iPad Pushing Us Closer to Internet Armageddon
Cloud, Standards, and Pants
Approaching cloud standards with end-user focus only is full of fail
Interoperability between clouds requires more than just VM portability
Who owns application delivery meta-data in the cloud?
Cloud interoperability must dig deeper than the virtualization layer

Related Posts from tag standards:
How Coding Standards Can Impair Application Performance
The Dynamic Infrastructure Mashup
The Great Client-Server Architecture Myth
Infrastructure 2.0: Squishy Name for a Squishy Concept
Can You Teach an Old Developer New Tricks?
Now Witness the Power of this Fully Operational Feedback Loop

It's called a feedback loop, not a feedback black hole. One of the key components of a successful architecture designed to mitigate operational risk is the ability to measure, monitor and make decisions based on collected "management" data. Whether it's a simple load balancing decision based on the availability of an application or more complex global application delivery traffic steering that factors in location, performance, availability and business requirements, neither can be successful unless the components making decisions have the right information upon which to take action.

Monitoring and management is likely one of the least sought after tasks in the data center. It's not all that exciting, and it often involves (please don't be frightened by this) integration. Agent-based, agentless, standards-based. Monitoring the health and performance of resources is critical to understanding how well an "application" is performing on a daily basis. It's the foundational data used for capacity planning, for determining whether an application is under attack, and for enabling the dynamism required of a dynamic, intelligent infrastructure supportive of today's operational goals.

YOU CAN'T REACT to WHAT you CAN'T SEE

We talk a lot about standards and commoditization and how both can enable utility-style computing as well as the integration necessary at the infrastructure layers to improve the overall responsiveness of IT. But we don't talk a lot about what that means in terms of monitoring and management of resource "health" – performance, capacity and availability. The ability of any load-balancing service depends upon the ability to determine the status of an application; in an operationally mature architecture, that includes the status of all components related to the delivery of that application, including other application services such as middleware, databases and external application services. When IT has control over all components, traditional agent-based approaches work well to provide that information. When IT does not have control over all components, as is increasingly the case, it cannot collect that data nor access it in real time. If the infrastructure components upon which successful application delivery relies cannot "see" how any given resource is performing, let alone whether it's available or not, there is a failure to communicate that ultimately leads to poor decision making on the part of the infrastructure.

We know that in a highly virtualized or cloud computing model of application deployment it's important to monitor the health of the resource, not the "server", because the "server" has become little more than a container, a platform upon which a resource is deployed and made available. With the possibility of a resource "moving", it is even more imperative that operations monitor resources. Consider IT organizations that may desire to leverage more PaaS (Platform as a Service) to drive application development efforts forward faster. Monitoring and management of those resources must occur at the resource layer; IT has no control over or visibility into the underlying platforms – which is kind of the point in the first place.

YOU CAN'T MAKE DECISIONS without FEEDBACK

The feedback from the resource must come from somewhere. Whether that's an agent (which doesn't play well with a PaaS model) or some other mechanism (which is where we're headed in this discussion) is not as important as getting there in the first place.
If we're going to architect highly responsive and dynamic data centers, we must share all the relevant information in a way that enables decision-making components (strategic points of control) to make the right decisions. To do that, resources, specifically applications and application-related resources, must provide feedback. This is a job for devops if ever there was one. Not the ops who apply development principles like Agile to their operational tasks, but developers who integrate operational requirements and needs into the resources they design, develop and ultimately deploy.

We already see efforts to standardize APIs designed to promote security awareness and information through efforts like CloudAudit. We see efforts to standardize and commoditize APIs that drive operational concerns like provisioning with OpenStack. But what we don't see is an effort to standardize and commoditize even the simplest of health monitoring methods. No simple API, no suggestion of what data might be common across all layers of the application architecture that could provide the basic information necessary for infrastructure services to take action appropriately. The feedback regarding the operational status of an application resource is critical in ensuring that infrastructure is able to make the right decisions at the right time regarding each and every request. It's about promoting dynamic equilibrium in the architecture; an equilibrium that leads to efficient resource utilization across the data center while simultaneously providing for the best possible performance and availability of services. (A minimal sketch of what such a health feed might look like follows at the end of this post.)

MORE OPS in the DEV

It is critical that developers not only understand but take action regarding the operational needs of the service delivery chain. It is critical because in many situations the developers will be the only ones with the means to enable the collection of the very data upon which the successful delivery of services relies. While infrastructure, and specifically application delivery services, are capable of collaborating with applications to retrieve health-related data and subsequently parse the information into actionable data, the key is that the data be available in the first place. That means querying the application service – whether application or middleware and beyond – directly for the data needed to make the right decisions. This type of data is not standard, it's not out of the box, and it's not built into the platforms upon which developers build and deploy applications. It must be enabled, and that means code. That means developers must provide the implementation of the means by which the data is collected; ultimately one hopes this results in a standardized health-monitoring collection API jointly specified by ops and dev. Together.

Related:
Cloud Control Does Not Always Mean 'Do it yourself'
Operational Risk Comprises More Than Just Security
On Cloud, Integration and Performance
How to Build a Silo Faster: Not Enough Ops in your Devops
Infrastructure 2.0 + Cloud + IT as a Service = An Architectural Parfait
Cloud Chemistry 101
Infrastructure 2.0 Isn't Just For Cloud Computing
Infrastructure 2.0 Is the Beginning of the Story, Not the End
Will DevOps Fork?
The Zero-Product Property of IT
The New Network
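Here is that minimal sketch, in Python, of what an application-level health feed might look like. This is purely illustrative: the /health path, the JSON field names and the port are my own inventions, not a published standard, which is exactly the gap the post is pointing at. The idea is only that the application itself exposes status, capacity and performance signals that a load balancer or other strategic point of control could poll.

```python
# Sketch of an application-exposed health feed an infrastructure service could poll.
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

START = time.time()


def resource_health() -> dict:
    # In a real service these values would come from the application itself:
    # queue depths, database connectivity, recent latency, remaining capacity.
    return {
        "status": "available",        # available | degraded | unavailable
        "capacity_pct": 42,           # how full this instance believes it is
        "avg_response_ms": 87,        # recent performance signal
        "uptime_s": int(time.time() - START),
    }


class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/health":
            self.send_error(404)
            return
        body = json.dumps(resource_health()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8081), HealthHandler).serve_forever()
```

Whatever the eventual standard looks like, the design point stands: the data has to be enabled in the application by developers before any infrastructure service can act on it.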
Useful Cloud Advice, Part Two. Applications

This is the second part of this series talking about things you need to consider, and where cloud usage makes sense given the current state of cloud evolution. The first one, Cloud Storage, can be found here. The point of the series is to help you figure out what you can do now, and what you have to consider when moving to the cloud. This will hopefully help you to consider your options when pressure from the business or management to "do something" mounts. Once again, our definition of cloud is Infrastructure as a Service (IaaS) - "VM containers" - not SOA or other variants of Cloud. For our purposes, we'll also assume "public cloud". The reasoning here is simple: if you're implementing internal cloud, you're likely already very virtualized, and you don't have the external vendor issues, so you don't terribly need this advice – though some of it will still apply to you, so read on anyway.

Related Articles and Blogs:
Maybe Ubuntu Enterprise Cloud Makes Cloud Computing Too Easy
Cloud Balancing, Cloud Bursting, and Intercloud
Bursting the Cloud
The Impossibility of CAP and Cloud
Amazon Makes the Cloud Sticky
Cloud, Standards, and Pants
The Inevitable Eventual Consistency of Cloud Computing
Infrastructure 2.0 + Cloud + IT as a Service = An Architectural ...
Cloud Computing Makes Servers Obsolete
Cloud Computing's Other Achilles' Heel: Software Licensing
The InfoSec Conundrum. Keep Playing Until You Lose.

Lori and I received the new BlackBerry smartphones that F5 ordered for us last week, and we have spent about a week familiarizing ourselves with all that has changed since our several-year-old ones came out. There is certainly a lot of change. The social media add-ons bundled into these phones are much nicer than the ones we had installed on our older phones, texting has its own app rather than being part of the email package, the screen is more crisp, and photo quality is light-years ahead of previous incarnations, though it still doesn't compete with high-end digital cameras. Oh yeah, and it takes calls too.

One little bundled application was Word Mole, a game where you try to pick words out of a six by six array of letters. You can use any letters, the words have to be in the game's dictionary, and the larger the word, the more points. And the less common the letters, the more points you are awarded. The game is surprisingly (for me) addicting, and takes very little time to play – each level is timed to two minutes, so you can complete a level in two minutes or less. And since you get extra points for any time left on the clock, most levels take less than half a minute.

An interesting implementation choice in this game is that you "win" by "losing later" than your last time. There is no finale to the game; you just keep going through levels (with occasional breaks to do a little something else that is speed based) until you don't get enough points before the timer runs out. Since the timer is set to two minutes, and the required number of points goes up with each completed level, it is pretty much inevitable that you're going to lose. The only question is how far you will get before you fail.

Word Mole Menu, compliments of Crackberry.com

And that got me thinking about how we deal with information security, even today. It is not generally a question of if you will get compromised; we approach InfoSec as though you will fail, and the only questions are "when?" and "did you do enough to try and stop it?" That just is not a viable way to run a business over the long term. Particularly not with the sanctions and pressures governments are putting on the victims of hackers. Organizations are under increasing pressure as if they were the culprit, whilst the ne'er-do-wells are sometimes apprehended, sometimes not, sometimes hiding away in countries that will not pursue them. Disclosure laws are good - you should warn people if their identity has been compromised - but looking to see if an organization is "culpable"? Even if a company was stupid enough to have zero information security in place, that is akin to a company failing to lock its front door and being robbed. While stupid, and an insurer may have an issue with it, the authorities certainly wouldn't blame the company for having forgotten to lock the door (though a lenient judge may give the robber a lighter sentence for it). They didn't ask to be robbed.

And yet we are still running on "do all you can to protect…", where "protect" has the known double meaning of "customer data from hackers" and "the organization from exposure if a hacker gets in". Eventually, your organization will fail. "Guaranteed failure with minimized risk" is not the answer, and it leaves us in an untenable position, both as organizations and as customers of said organizations. So I've pointed out the obvious; next you want to know my answers. I wish I had them.
We here at F5 have some great products to support you and lend better protection, but the problem is much more comprehensive than that. It requires international relations, standards on when you've crossed the line from innocent poking around to outright lawbreaking, and governments the world over willing to track down and prosecute the evildoers. For some reason we (the world in general) are much more accepting of criminals who steal from a keyboard than of those who steal with a brick through a window, and it is far past time for that to end. But ending it will not be easy or quick.

I'd start by forging solid international agreements on what constitutes violation of an organization's presence on the Internet. Certainly attempting to connect to one port is not a violation, but if that connection includes a malicious script, or the attempt is to connect to a thousand different ports, it is a different story altogether. From there, punishment guidelines must be agreed upon, and enforcement… enforced. There is very little in this world that I truly believe needs international cooperation on a grand scale, but the Internet is everywhere, and ingress/egress points in different countries are not only common, they are largely the norm in some parts of the world, so any attempt at Internet regulation must involve cooperation among nations on a massive scale, making it a tough problem. Still, it is a worthy effort. There are enforcement techniques available to force international cooperation, assuming the other parts mentioned above are taken care of. Cutting off rogue countries that harbor Internet lawbreakers is entirely possible on an international scale, as are several other enforcement tools.

Let's stop treating InfoSec like a game of Word Mole. Because when you lose in InfoSec, someone generally gets fired, even if everything was done right, and that impacts people in a very real way. That doesn't even touch upon the harsh treatment the compromised corporations suffer at the hands of press, bloggers, and governments, and the impact that has on the overall organization.