web acceleration
Web acceleration profile not caching
Dear All,

I have a BIG-IP with LTM and ASM provisioned, and I applied a Web Acceleration profile with the default settings (version 11.5.1) to a virtual server. A lot of traffic is passing through, but I simply do not see any traffic being cached inside the RAM Cache. I also analyzed the traffic, and the HTTP reply packets don't contain the new Age header value.

In the Web Acceleration profile counters I notice a lot of misses, but when I analyze the traffic in Wireshark I see a lot of objects (css, png, jpg, json, js) that I believe are all cacheable. What I would also like to achieve is to alter the cache timer for all objects passing through the F5 towards the clients by inserting the new Age header. In some objects I notice Cache-Control: private, must-revalidate, max-age=0, but most of the object responses don't have any caching header at all. All the objects have response code 200, so that is also not the issue here.

Test results show that the Age header is never altered as configured in the Web Acceleration profile. As I said before, the profile is very basically configured: no filter, Ignore Headers set to All (I also tried None), Insert Age Header enabled, aging rate = 9, and a caching time of 100 days. I have never seen anything in the cache.

I would like to know why none of these objects are cacheable and why the F5 is not changing the age time (cache expiration date). Perhaps there is something that needs to be configured within the application itself to make it cacheable? I hope someone has the answer over here.
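A few hedged observations rather than a definitive answer: responses carrying Cache-Control: private or max-age=0 are normally not stored by the Web Acceleration cache, and the Insert Age Header setting generally only shows up on responses actually served from cache, so both symptoms are consistent with the origin's headers keeping everything out of the cache. If the application cannot be changed, one workaround people use is an iRule that overrides the origin's caching policy for known-static objects before it reaches the caching logic. The sketch below is untested against 11.5.1; the extension list and the 86400-second lifetime are placeholders to adjust, not F5 guidance.

    when HTTP_REQUEST {
        # Flag requests for the static object types we are willing to override.
        set cache_override 0
        switch -glob -- [string tolower [HTTP::path]] {
            "*.css" -
            "*.js" -
            "*.png" -
            "*.jpg" -
            "*.gif" { set cache_override 1 }
        }
    }

    when HTTP_RESPONSE {
        # Only touch successful responses for flagged objects.
        if { $cache_override && [HTTP::status] == 200 } {
            # Strip the restrictive policy the origin sent (private, max-age=0)
            # and replace it with something the cache can act on.
            HTTP::header remove Cache-Control
            HTTP::header remove Pragma
            HTTP::header insert Cache-Control "public, max-age=86400"
        }
    }

If objects still do not appear in the cache after something like this, comparing the profile statistics against the Cache-Control values actually on the wire is the first thing worth checking.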
Web Acceleration profile and changing HTTP/1.1 to 1.0

Hi, I am not an HTTP expert, but this behavior is quite a surprise to me - or maybe it's perfectly OK?

Setup (tested on v11.2.0 HF7): VS with a Web Acceleration profile attached (based on optimized-caching), no OneConnect, no HTTP Compression profile.

Request from client:

    GET /images/wwfr.png HTTP/1.1
    Host: cache.test.com
    User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:39.0) Gecko/20100101 Firefox/39.0
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
    Accept-Language: pl,en-US;q=0.7,en;q=0.3
    Accept-Encoding: gzip, deflate
    Connection: keep-alive

Request from BIG-IP to the backend server:

    GET /images/wwfr.png HTTP/1.0
    Host: cache.test.com
    User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:39.0) Gecko/20100101 Firefox/39.0
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
    Accept-Language: pl,en-US;q=0.7,en;q=0.3
    Accept-Encoding: gzip, deflate
    Connection: keep-alive

If I disable the Web Acceleration profile, the request from BIG-IP to the server is HTTP/1.1 again. So why this change?

Another issue - I have an iRule attached to the VS to disable caching of the response on the client side:

    when HTTP_RESPONSE {
        # Tell the client not to store the response
        HTTP::header insert Cache-control no-store
    }

What surprised me is that BIG-IP is taking this header into account when deciding whether the server response is cacheable. That is quite strange, as the header is inserted on the client side, so it should not influence the server-side decision - or am I wrong here?

Piotr
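Two hedged notes rather than definitive answers. First, the HTTP/1.0 downgrade toward the pool member is commonly reported when the caching module is active (it appears to prefer complete, non-chunked responses it can store whole), so it is likely expected behaviour rather than a misconfiguration. Second, because headers inserted in HTTP_RESPONSE are evaluated by the cache, a workaround that is often suggested is to insert the client-facing header in HTTP_RESPONSE_RELEASE instead, which fires after the response has been processed. A minimal sketch, worth verifying on v11.2.0:

    when HTTP_RESPONSE_RELEASE {
        # Inserted at release time, this header travels to the client but
        # should no longer feed into the cacheability decision.
        HTTP::header insert Cache-Control "no-store"
    }

The design idea is simply to separate what the client is told from what the caching module is allowed to see; if your version behaves differently, an alternative is to key the insertion off whether the response came from cache.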
Cloud Storage Gateways. Short term win, but long term…?

In the rush to cloud, there are many tools and technologies out there that are brand new. I've covered a few (nowhere near a complete list), but it's interesting to see what is going on out there from a broad-spectrum view. I have talked a bit about Cloud Storage Gateways here, and I'm slowly becoming a fan of this technology for those who are considering storing in the cloud tier. There are a couple of good reasons to consider these products, and I was thinking about those reasons and their standing validity. Thought I'd share with you where I stand on them at this time, and what I see happening that might impact their value proposition.

The two vendors I have taken some time to research while preparing this blog for you are Nasuni and Panzura. No doubt there are plenty of others, but I'm writing you a blog here, not researching a major IT initiative, so I researched two of them to have some points of comparison, and leave the in-depth vendor selection research to you and your staff. These two vendors present similar base technology and very different additional feature sets. Both rely heavily upon local caching in the controller box, both work with multiple cloud vendors, and both claim to manage compression.

Nasuni delivers as a virtual appliance, includes encryption on your network before transmitting to the cloud, automated cloud provisioning, and caching that has timed updates to the cloud but can perform a forced update if the cache gets full. It presents the cloud storage you've provisioned as a NAS on your end.

Panzura delivers a hardware appliance that also presents the cloud as a NAS, works with multiple cloud vendors, handles encryption on-device, and claims global dedupe. I say claims, because "global" has a meaning that is "all", and in their case "all" means "all the storage we know about", not "all the storage you know". I would prefer a different term, but I get what they mean. Like everything else, they can't de-dupe what they don't control. They too present the cloud storage you've provisioned as a NAS on your end, but claim to accelerate CIFS and NFS also.

Panzura is also trying to make a big splash about speeding access to MS-Sharepoint, but honestly, as a TMM for F5, a company that makes two astounding products that speed access to Sharepoint and nearly everything else on the Internet (LTM and WOM), I'm not impressed by Sharepoint acceleration. In fact, our Sharepoint Application Ready Solution is here, and our list of Application Ready Solutions is here. Those are just complete architectures we support directly, and don't touch on what you can do with the products through Virtuals, iRules, profiles, and the host of other dials and knobs. I could go on and on about this topic, but that's not the point of this blog, so suffice it to say there are some excellent application acceleration and WAN Optimization products out there, so this point solution alone should not be a buying criterion.

There are some compelling reasons to purchase one of these products if you are considering cloud storage as a possible solution. Let's take a look at them.

Present cloud storage as a NAS – This is a huge benefit right now, but over time the importance will hopefully decrease as standards for cloud storage access emerge. Even if there is no actual standard that everyone agrees to, it will behoove smaller players to emulate the larger players that are allowing access to their storage in a manner that is similar to other storage technologies.
Encryption – As far as I can see this will always be a big driver. They're taking care of encryption for you, so you can sleep at night as they ship your files to the public cloud. If you're considering them for non-public cloud, this point may still be huge if your pipe to the storage is over the public Internet.

Local Caching – With current broadband bandwidths, this will be a large driver for the foreseeable future. You need your storage to be responsive, and local caching increases responsiveness; depending upon implementation, cache size, and how many writes you are doing, this could be a huge improvement.

De-duplication – I wish I had more time to dig into what these vendors mean by dedupe. Replacing duplicate files with a symlink is simplest and most resembles existing file systems, but it is also significantly less effective than partial-file de-dupe. Let's face it, most organizations have a lot more duplication lying around in files named Filename.Draft1.doc through Filename.DraftX.doc than they do in completely duplicate files. Check with the vendors if you're considering this technology to find out what you can hope to gain from their de-dupe. This is important for the simple reason that in the cloud, you pay for what you use. That makes de-duplication more important than it has historically been.

The largest caution sign I can see is vendor viability. This is a new space, and we have plenty of history with early entry players in a new space. Some will fold, some will get bought up by companies in adjacent spaces, some will be successful… at something other than Cloud Storage Gateways, and some will still be around in five or ten years. Since these products compress, encrypt, and de-dupe your data, and both of them manage your relationship with the cloud vendor, losing them is a huge risk. I would advise some due diligence before signing on with one – new companies in new market spaces are not always a risky proposition, but you'll have to explore the possibilities to make sure your company is protected. After all, if they're as good as they seem, you'll soon have more data running through them than you'll have free space in your data center, making eliminating them difficult at best.

I haven't done the research to say which product I prefer, and my gut reaction may well be wrong, so I'll leave it to you to check into them if the topic interests you. They would certainly fit well with an ARX, as I mentioned in that other blog post. Here's a sample architecture that would make "the Cloud Tier" just another piece of your virtual storage directory under ARX, complete with the automated tiering and replication capabilities that ARX owners thrive on. This sample architecture shows your storage going to a remote data center over EDGE Gateway, to the cloud over Nasuni, and to NAS boxes, all run through an ARX to make the client (which could be a server or a user – remember this is the NAS client) see a super-simplified, unified directory view of the entire thing. Note that this is theoretical; to my knowledge no testing has occurred between Nasuni and ARX, and usually (though certainly not always) the storage traffic sent over EDGE Gateway will be from a local NAS to a remote one, but there is no reason I can think of for this not to work as expected – as long as the Cloud Gateway really presents itself as a NAS. That gives you several paths to replicate your data, and still presents client machines with a clean, single-directory NAS that participates in ADS if required.
In this case Tier one could be NAS Vendor 1, Tier two NAS Vendor 2, your replication targets securely connected over EDGE Gateway, and Tier 3 (things you want to save but no longer need to replicate, for example) is the cloud as presented by the Cloud Gateway. The Cloud Gateway would arbitrate between actual file systems and whatever idiotic interface the cloud provider decided to present and tell you to deal with, while the ARX presents all of these different sources as a single-directory-tree NAS to the clients, handling tiering between them, access control, etc.

And yes, if you're not an F5 shop, you could indeed accomplish pieces of this architecture with other solutions. Of course, I'm biased, but I'm pretty certain the solution would not be nearly as efficient, cool, or let you sleep as well at night. Storage is complicated, but this architecture cleans it up a bit. And that's got to be good for you.

And all things considered, the only issue that is truly concerning is failure of a startup cloud gateway vendor. If another vendor takes one over, they'll either support it or provide a migration path; if they are successful at something else, you'll have plenty of time to move off of their storage gateway product. So only outright failure is a major concern.

Related Articles and Blogs
Panzura Launches ANS, Cloud Storage Enabled Alternative to NAS
Nasuni Cloud Storage Gateway
InfoSmack Podcasts #52: Nasuni (Podcast)
F5's BIG-IP Edge Gateway Solution Takes New Approach to Unifying, Optimizing Data Center Access
Tiering is Like Tables or Storing in the Cloud Tier
Databases in the Cloud Revisited

A few of us were talking on Facebook about high speed rail (HSR) and where/when it makes sense the other day, and I finally said that it almost never does. Trains lost out to automobiles precisely because they are rigid and inflexible, while population densities and travel requirements are highly flexible. That hasn't changed since the early 1900s, and isn't likely to in the future, so we should be looking at different technologies to answer the problems that HSR tries to address. And since everything in my universe is inspiration for either blogging or gaming, this led me to reconsider the state of cloud and the state of cloud databases in light of synergistic technologies (did I just use "synergistic technologies" in a blog? Arrrggghhh…).

There are several reasons why your organization might be looking to move out of a physical datacenter, or to have a backup datacenter that is completely virtual. Think of the disaster in Japan or hurricane Katrina. In both cases, having even the mission critical portions of your datacenter replicated to the cloud would keep your organization online while you recovered from all of the other very real issues such a disaster creates. In other cases, if you are a global organization, the cost of maintaining your own global infrastructure might well be more than utilizing a global cloud provider for many services… Though I've not checked, if I were CIO of a global organization today, I would be looking into it pretty closely, particularly since this option should continue to get more appealing as technology continues to catch up with hype.

Today though, I'm going to revisit databases, because like trains, they are in one place, and are rigid. If you've ever played with database Continuous Data Protection or near-real-time replication, you know this particular technology area has issues that are only now starting to see technological resolution. Over the last year, I have talked about cloud and remote databases a few times, talking about early options for cloud databases, and mentioning Oracle Goldengate – or praising Goldengate is probably more accurate.

Going to the west in the US? HSR is not an option.

The thing is that the options get a lot more interesting if you have Goldengate available. There are a ton of tools, both integral to database systems and third-party, that allow you to encrypt data at rest these days, and while it is not the most efficient access method, it does make your data more protected. Add to this capability the functionality of Oracle Goldengate – or, if you don't need heterogeneous support, any of the various database replication technologies available from Oracle, Microsoft, and IBM – and you can seamlessly move data to the cloud behind the scenes, without interfering with your existing database. Yes, initial configuration of database replication will generally require work on the database server, but once configured, most of them run without interfering with the functionality of the primary database in any way – though if it is one that runs inside the RDBMS, remember that it will use up CPU cycles at the least, and most will work inside of a transaction so that they can ensure transaction integrity on the target database, so know your solution.
Running inside the primary transaction is not necessary, and for many uses may not even be desirable, so if you want your commits to happen rapidly, something like Goldengate that spawns a separate transaction for the replica is a good option… Just remember that you then need to pay attention to alerts from the replication tool so that you don't end up with successful transactions on the primary not getting replicated because something goes wrong with the transaction on the secondary. But for DBAs, this is just an extension of their daily work, as long as someone is watching the logs.

With the advent of Goldengate, advanced database encryption technology, and products like our own BIG-IP WOM, you now have the ability to drive a replica of your database into the cloud. This is certainly a boon for backup purposes, but it also adds an interesting perspective to application mobility. You can turn on replication from your data center to the cloud or from cloud provider A to cloud provider B, then use VMotion to move your application VMs… And you're off to a new location. If you think you'll be moving frequently, this can all be configured ahead of time, so you can flick a switch and move applications at will.

You will, of course, have to weigh the impact of complete or near-complete database encryption against the benefits of cloud usage. Even if you use the adaptability of the cloud to speed encryption and decryption operations by distributing them over several instances, you'll still have to pay for that CPU time, so there is a balancing act that needs some exploration before you'll be certain this solution is a fit for you. And at this juncture, I don't believe putting unencrypted corporate data of any kind into the cloud is a good idea. Every time I say that, it angers some cloud providers, but frankly, cloud being new and by definition shared resources, it is up to the provider to prove it is safe, not up to us to take their word for it. Until then, encryption is your friend, both going to/from the cloud and at rest in the cloud. I say the same thing about Cloud Storage Gateways; it is just a function of the current state of cloud technology, not some kind of unreasoning bias.

So the key then is to make sure your applications are ready to be moved. This is actually pretty easy in the world of portable VMs, since the entire VM will pick up and move. The only catch is that you need to make sure users can get to the application at the new location. There are a ton of Global DNS solutions like F5's BIG-IP Global Traffic Manager that can get your users where they need to be, since your public-facing IPs will be changing when moving from organization to organization. Everything else should be set, since you can use internal IP addresses to communicate between your application VMs and database VMs. Utilizing some form of in-flight encryption and some form of acceleration for your database replication will round out the solution architecture, and leave you with a road map that looks more like a highway map than an HSR map. More flexible, more pervasive.
LTM Web Acceleration Profile Configuration

My organization and I are new to BIG-IP. Everything has been working quite well, but I do have a question about Web Acceleration profiles. For LTM, the iApps I have used so far create one by default. This seemed to be working satisfactorily, until the programmers began complaining about it. The previous load balancer my company used apparently did not do anything like F5's Web Acceleration profiles do, so the old ADC wasn't caching any items in that regard.

I got with the programmers and told them we can clear the cache when we make changes, we can reduce the maximum age, and we can limit which items are cached, but they don't like that either. They want the LTM to recognize that a document has changed without any cache clearing or anything like that. They want to be able to make changes to web pages (including to items the Web Acceleration profile caches) and have all machines see those changes instantly and automatically. Of course, this obviously needs to happen, and the web pages we serve up change often, but the only way I see of doing that is clearing the cache on the Web Acceleration profile.

Is there a provision for the LTM to check the web server for an item's last-modified date, so that if an item has changed the new version will be served up automagically instead of the old one? Do Web Acceleration profiles provide enough improvement to justify their use in a medium-sized environment? What do other companies regularly do?
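I am not aware of a setting that makes the cache re-check the origin's Last-Modified on every request, so a common compromise (sketched below with an illustrative extension list, untested in your environment) is to keep caching genuinely static assets while bypassing the cache for the content types the programmers edit, and to pair that with a short Maximum Age on the profile so anything that is cached still expires quickly. CACHE::disable is intended to skip caching for an individual request; verify its behaviour on your version before relying on it.

    when HTTP_REQUEST {
        # Let the Web Acceleration profile keep caching static assets, but
        # bypass the cache for everything else so edits show up immediately.
        switch -glob -- [string tolower [HTTP::path]] {
            "*.css" -
            "*.js" -
            "*.png" -
            "*.jpg" -
            "*.gif" {
                # Static asset: leave it to the cache.
            }
            default {
                # HTML, JSON, and anything else frequently edited: skip the cache.
                CACHE::disable
            }
        }
    }

The trade-off is that you keep the biggest wins (images, scripts, stylesheets) while page markup always comes from the origin, which is usually an easier sell to developers than manual cache clears.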
SPDY and Web Acceleration cannot be used together

If I enable both a SPDY and a Web Acceleration profile on a virtual server, the initial HTML page is downloaded, but all subsequent HTTP requests (JavaScript, CSS, images, etc.) return with size 0 and response status 0. If I switch off either of them, then it seems to work fine. Has anyone else encountered this?

Platform: BIG-IP 11.5.0 Build 1.0.227 Hotfix HF1
Is it time for a new Enterprise Architect?

After a short break to get some major dental rework done, I return to you with my new, sore mouth for a round of "Maybe we should have…" discussions.

In the nineties and early 21st century, positions were created in many organizations with titles like "chief architect", and often there was a group whose title was something like "IT Architect". These people made decisions that impacted one or all subsidiaries of an organization, trying to bring standardization to systems that had grown organically and were terribly complex. They ushered in standards, shared code between disparate groups, and made sure that AppDev and Network Ops and Systems Admins were all involved in projects that touched their areas. The work they did was important to the organization, and truly different than what had come before. Just like in the 20th century the concept of a "Commander of Army Group" became necessary because the armies being fielded were so large that you needed an overall commander to make sure the pieces were working together, the architect was there (albeit with far less power than an Army Group commander) to make sure all the pieces fit together. Through virtualization, they managed to keep the ball rolling, and direct things such that a commitment to virtualization was applied everywhere it made sense. Organizations without this role did much the same, but those with this role had a person responsible for making sure things moved along as smoothly as a major architecture change that impacts users, systems, apps, and networks can.

Steve Martin in Little Shop of Horrors

I worked on an enterprise architecture team for several years in the late 90s, and the work was definitely challenging, and often frustrating, but it was a role (at least at the insurer I worked for) that had an impact on cutting waste out of IT and building a robust architecture in apps, systems, and networks. The problem was that network and security staff were always a bit distanced from architecture. A couple of companies whose architects I hung out with (Southwestern Bell comes to mind) had managed to drive deep into the decision making process for all facets of IT, but most of us were left with systems and applications being primary, and having to go schmooze and beg to get influence in the network or security groups. Often we were seen as outsiders telling them what to do, which wasn't the case at all. For the team we were on, if one subsidiary had a rocking security bit, we wanted it shared across the other subsidiaries so they would all benefit from this work the organization had already paid for. It was tough work, and some days you went home feeling as if you'd accomplished nothing. But when it all came together, it was a great job to have. You saw almost every project the organization was working on, you got to influence their decisions, and you got to see the project implemented. It was a fun time.

Now we face a scenario in networking and network architecture that is very similar to that faced by applications back then. We have to make increasingly complex networking decisions about storage, app deployment, load distribution, and availability. And security plays a critical role in all of these choices, because if your platform is not secure, none of the applications running on it are.
We use the term "network architecture" a lot, and some of us even use it to describe all the possibilities – internal, SaaS providers, cross-datacenter WAN, the various cloud application/platform providers, and cloud storage… But maybe it is time to create a position that can juggle all of these balls and get applications to the right place. This person could work with business units to determine needs, provide them with options about deployment that stress strengths and weaknesses in terms of their application, and make sure that each application lives in a "happy place" where all of its needs are met, and the organization is served by the locality.

We here at F5, along with many other infrastructure vendors, are increasingly offering virtual versions of our products; in our case the goal is to allow you to extend the impact of our market leading ADC and File Virtualization appliances to virtualized and cloud environments. I won't speak for other vendors about why they're doing it; each has a tale to tell that I wouldn't do justice to. But the point of this blog is that all of these options raise questions like:

In the cloud, or reserve capacity in the cloud?
What impact does putting this application in the cloud have on WAN bandwidth?
Can we extend our application firewall security functionality to protect this application if it is sent out to the cloud?
Would an internal virtualized deployment be a better fit for the volume of in-datacenter database accesses that this particular application makes?
Can we run this application from multiple datacenters and share the backend systems somehow, and if so, what is the cost?

These are the exact types of questions that a dedicated architect, specialized in deployment models, could ask and dig to find the answers to. It would be just like the other architecture team members, but more focused on getting the most out of where an application is deployed and minimizing the impacts of choices one application team makes upon everyone else. I think it's time.

A network architect worries mostly about the internal network, and perhaps some of the items above, so we should use a different title. I know it's been abused in the past, but extranet architect might be a good title. Since they would increasingly need to be able to interface with business units and explain choices and impacts, I think I prefer application locality architect… But that makes light of some of the more technical aspects of the job, like setting up load balancing in a cloud – or at least seeing to it that someone is.

Like other architecture jobs, it would be a job of influence, not command. The role is to find the best solution given the parameters of the problem, and then sell the decision makers on why it is the right choice. But that approach works well for all the other enterprise architect jobs; it just takes a certain type of personality to get it done. Nothing new there, so knowledge of all of the options available would become the largest requirement… how the costs of a cloud deployment at vendor X compare to the costs of a virtual deployment, what the impact of cloud-based applications is on the WAN (given application parameters, of course), etc. There are a ton of really smart people in IT, so finding someone capable of digesting and utilizing all of that information may be easier than finding someone who can put up with "You may have the right solution, but for political reasons, we're going to do this really dumb thing instead" with equanimity.
And for those of you who already have a virtualization or cloud architect… Well, that's just a bit limiting if you have multiple platform choices and multiple deployment avenues. Just like there were application architects and enterprise architecture used their services, so it would be with this role and those specialized architects.
How to get the applications Assigned to a Web Acceleration Profile? (Possible API Bug)

I am trying to work out which applications a specific web acceleration profile is connected to. I am using the SOAP interface (ruby gem).

When I make a get_application request (from the SOAP documentation):

    interfaces['LocalLB.ProfileWebAcceleration'].get_application(["/MYPARTITION/my_app_cache_profile"])

I get an empty response object:

    []

I would expect this to contain "/MYPARTITION/my_app.app".

Verification (profile is connected): I can confirm that this profile is connected to an app by looking for it from the app:

    system_session.get_active_folder "/MYPARTITION/my_app.app"
    interfaces['LocalLB.ProfileWebAcceleration'].get_list

    ["/MYPARTITION/my_app.app/my_app_cache_profile"]

NB: This does not help me, because if you apply a profile to an app after creation the path is "/MYPARTITION/my_app_cache_profile", so running get_list on the app path will return 0 results even though there is a caching profile assigned.
More Complexity, New Problems, Sounds Like IT!

It is a very cool world we live in, where technology is concerned. We're looking at a near future where your excess workload, be it applications or storage, can be shunted off to a cloud. Your users have more power in their hands than ever before, and are chomping at the bit to use it on your corporate systems. IBM recently announced a memory/storage breakthrough that will make Flash disks look like 5.25 inch floppies. While we can't know what tomorrow will bring, we can certainly know that the technology will enable us to be more adaptable, responsive, and (yes, I'll say it) secure. Whether we actually are or not is up to us, but the tools will be available.

Of course, as has been the case for the last thirty years, those changes will present new difficulties. Enabling technology creates issues… which create opportunity for emerging technology. But we have to live through the change, and deal with making things sane.

In the near future, you will be able to send backup and replication data to the cloud, reducing your on-site storage and storage administration needs by a huge volume. You can today, in fact, with products like F5's ARX Cloud Extender. You will also be able to grant access to your applications from an increasing array of endpoint devices; again, you can do it today, with products like F5's APM for VPN access and ASM for application security, but recent surveys and events in the security space should be spurring you to look more closely into these areas. SaaS is cool again in many areas where it had been ruled out – like email – to move the expense of relatively standardized, high volume applications out of the datacenter and into the hands of trusted vendors. You can get email "in the cloud" or via traditional SaaS vendors.

That's just some of the changes coming along, and guess who is going to implement these important changes and be responsible for making them secure, fast, and available? That would be IT.

To frame the conversation, I'm going to pillage some of Lori's excellent graphics and we'll talk about what you'll need to cover as your environment changes. I won't use the one showing little F5 balls on all of the strategic points of control, but if we have one. First, the points of business value and cost containment possible on the extended datacenter network. Notice that this slide is couched in terms of "how can you help the business". Its genius is that Lori drew an architecture and then inserted business-relevant bits into it, so you can equate what you do every day to helping the business. Next up is the actual Strategic Points of Control slide, where we can see the technological equivalency of these points.

So these few points, where you can hook into the existing infrastructure, offer you enhanced control of your network – storage, global, WAN, LAN, Internet clients – by putting tools into place that will act upon the data passing through them and contain policies and programmability that give you unprecedented automation. The idea here is that we are stepping beyond traditional deployments, to virtualization, remote datacenters, cloud, varied clients, and ever-increasing storage (and cloud storage, of course), while current service levels and security will be expected to be maintained. That's a tall order, and stepping up the stack a bit to put strategic points of control into the network helps you manage the change without killing yourself or implementing a million specialized apps, policies, and procedures just to keep order and control costs.
At the Global Strategic Point of Control, you can direct users to a working instance of your application, even if the primary application is unavailable and users must be routed to a remote instance. At this same place, you can control access to restricted applications, and send unauthorized individuals to a completely different server than the application they were trying to access. That's the tip of the iceberg, with load balancing to local strategic points of control being one of the other uses that is beyond the scope of this blog.

The Local Strategic Point of Control offers performance, reliability, and availability in the guise of load balancing; security in the form of content-based routing and application security – before the user has hit the application server – and encryption of sensitive data flowing internally and/or externally, without placing encryption burdens on your servers.

The Storage Strategic Point of Control offers up tiering and storage consolidation through virtual directories, heterogeneous security administration, and abstraction of the NAS heads. By utilizing this point of control between the user and the file services, automation can act across vendors and systems to balance load and consolidate data access. It also reduces management time for endpoint staff, as the device behind a mount/map point can be changed without impacting users.

Remote site VPN extension and DMZ rules consolidation can happen at the global strategic point of control at the remote site, offering a more hands-off approach to satellite offices. Note that WAN Optimization occurs across the WAN, over the local and global strategic points of control. Web Application Optimization also happens at the global or local strategic point of control, on the way out to the endpoint device.

What's not shown is a large unknown in cloud usage – how to extend the control you have over the LAN out to the cloud via the WAN. Some things are easy enough to cover by sending users to a device in your datacenter and then redirecting to the cloud application, but this can be problematic if you're not careful about redirection and bookmarks. Also, it has not been possible for symmetric tools like WAN Optimization to be utilized in this environment. Virtual appliances like BIG-IP LTM VE are resolving that particular issue, extending much of the control you have in the datacenter out to the cloud.

I've said before, the times are still changing; you'll have to stay on top of the new issues that confront you as IT transforms yet again, trying to stay ahead of the curve.

Related Blogs:
Like Load Balancing WAN Optimization is a Feature of Application ...
Is it time for a new Enterprise Architect?
Virtual Infrastructure in Cloud Computing Just Passes the Buck
The Cloud Computing – Application Acceleration Connection
F5 Friday: Secure, Scalable and Fast VMware View Deployment
Smart Energy Cloud? Sounds like fun.
WAN Optimization is not Application Acceleration
The Three Reasons Hybrid Clouds Will Dominate
F5 Friday: BIG-IP WOM With Oracle Products
Oracle Fusion Middleware Deployment Guides
Introducing: Long Distance VMotion with VMWare
Load Balancers for Developers – ADCs Wan Optimization Functionality
Cloud Control Does Not Always Mean 'Do it yourself'
Best Practices Deploying IBM Web Sphere 7 Now Available
Useful IT. Bringing Health Record Transfer into the 21st Century.

I read the Life as a Healthcare CIO blog on occasion, mostly because, as a former radiographer, health care records integration and other non-diagnostic IT use in healthcare is a passing interest of mine. Within the last hospital I worked at, the systems didn't communicate – not even close, as in there was no effort to make them do so. This intrigues me, as since entering IT I have watched technology uptake in healthcare slowly ramp up on a curve well behind the rest of the business world.

Oh, make no mistake, technology has been in overdrive on the equipment used, but things like systems interoperability and utilizing technology to make doctors', nurses', and techs' lives easier are just slower in the medical world. A huge chunk of the resistance is grounded in a very common sense philosophy: "When people's lives are on the line you do not rush willy-nilly to the newest gadget." No one in healthcare says it that way – at least not to my knowledge – but that's the essence of what they think. I can think of a few businesses that could use that same mentality applied occasionally with a slightly different twist: "When the company's viability is on the line…" But that's a different blog.

Even with this very common-sense resistance, there has been a steady acceleration of uptake in technology use for things like patient records and prescriptions. It has been interesting to watch, as someone on the outside with plenty of experience with the way hospitals worked when their systems were all silos. Healthcare IT is to be commended for things like electronic prescription pads and instant transfer of (now nearly all electronic) X-rays to those who need them to care for the patient.

Applying the "this can help with little impact on critical care" or even "this can help with positive impact on critical care and little risk of negative impact" viewpoint as a counter to the above-noted resistance has produced some astounding results. A friend of mine from my radiographer days is manager of a cardiac cath lab, and talking with him is just fun. "Dude, ninety percent of the pups coming out of Radiology schools can't set an exposure!" is evidence that diagnostic tools are continuing to take advantage of technology – in this case auto-detecting X-ray exposure limits. He has more glowing things to say about the non-diagnostic growth of technology within any given organization. But outside the organization? Well, that's a completely different story.

The healthcare organization wants to keep your records safe and intact, and rarely even wants to let you touch them. That's just a case of the "intact" bit. Some people might want their records to not contain some portion – like their blood alcohol level when brought to the ER – and some people might inadvertently lose some portion of the record. While they're more than happy to send them on a referral, and willing to give you a copy if you're seeking a second opinion, these records all have one archaic quality: paper.

If I want to buy a movie, I can go to Netflix, sign up, and stream it (at least many of them) to watch. If I want my medical records transferred to a specialist so I can get treatment before my left eye oozes out of its socket, they have to be copied, verified, and mailed. If they're short, or my eye is on the verge of falling out right this instant, then they might be faxed. But the bulk of records are mailed. Even overnight is another day lost in the treatment cycle.
Recently – the last couple of years – there has been a movement to replicate the records delivery process electronically. As time goes on, more and more of your medical records are being stored digitally. It saves room, saves time, and makes it easier for a doctor to "request" your record should he need it in a hurry. It also makes it easier to track accidental or even intentional changes in records. While it didn't happen as often as fear-mongers and ambulance chasers want you to believe, of course there are deletions and misplacements in the medical records of the 300 million US citizens. An electronic system never forgets, so while something as simple as a piece of paper falling out of a record could forever change it, in electronic form that can't happen. Even an intentional deletion can be "deleted" in the sense of not showing up, but still be there, stored with your other information so that changes can be checked should the need ever arise.

The inevitable offshoot of electronic records is the ability to communicate them between hospitals. If you're in the ER in Tulsa, and your normal doctor is in Manhattan, getting your records quickly and accurately could save your life. So it made sense that as the percentage of new records that were electronic grew, someone would start to put together a way to communicate them. No doubt you're familiar with the debate about national health information databases; a centralized location for records is a big screaming target from many people's perspectives, while it is a potentially life-saving technological advancement to others (they're both right, but I think the infosec crowd has the stronger argument). But a smart group of people put together a project to facilitate doing electronically exactly what is being done today physically.

The process is that the patient (or another doctor) requests the records be sent, they are pulled out, copied, mailed or faxed, and then a follow-up or "record received" communication occurs to ensure that the source doctor got your records where they belong. Electronically this equates to the same thing, but instead of "selected" you get "looked up", and instead of "mailed or faxed" you get "sent electronically". There's a lot more to it, but that's the gist of The Direct Project.

There are several reasons I got sucked into reading about this project. From a former healthcare worker's perspective, it's very cool to see non-diagnostic technology making a positive difference in healthcare; from a patient perspective, I would like the transfer of records to be as streamlined as possible; from the InfoSec perspective (I did a couple of brief stints in InfoSec), I like that it is not a massive database, but rather a "faster transit" mechanism; and from an F5 perspective, the possibilities for our gear to help make this viable were in my mind while reading.

While Dr. Halamka has a lot of interesting stuff on his blog, this is one topic where I followed the links and read the information. It's a pretty cool initiative, and what may seem very limiting in their scope assumptions holds true to The Direct Project's idea of replacing the transfer mechanism and not creating a centralized database. While they're not specifying formats to use during said transfer, they do list some recommended reading on that topic. What they do have is a registry of people who can receive records, and a system for transferring data over the wire.
They worry about DNS-style health-care provider lookups, transfer protocols, and encryption, which is certainly a large enough chunk for them to bite off, and then they show how they fit into the larger nation-wide healthcare electronic records efforts going on. I hope they get it right, and that the system they're helping to build results in near-instantaneous, secure records transfers; but many inventions are a product of the time and society in which they live, and even if The Direct Project fails, something like it will eventually succeed. If you're in Healthcare IT, this is certainly a way to add value to the organization, and worth checking out.

Meanwhile, I'm going to continue to delve into their work and the work of other organizations they've linked to, and see if there isn't a way F5 can help. After all, we can compress, dedupe, and encrypt communications on the wire, and the entire system is about on-the-wire communications, so it seems like a perfectly logical route to explore. Though the patient care guy in me will be reading up as much as the IT guy, because healthcare was a very rewarding field that seriously needed a bit more non-diagnostic technology when I was doing it.