browser
Security Sidebar: I Can See Your Browsing History
Is there any expectation of browsing privacy on the Internet any more? Well, there shouldn't be. A few years ago, Internet browsers were widely known to have vulnerabilities that allowed websites to search a user's browsing history. Websites could use a combination of JavaScript and Cascading Style Sheet (CSS) features to figure out which websites you had visited. In 2010, researchers at the University of California, San Diego found that several pornographic sites would search a user's browser history to see if the user had visited other pornographic sites. But it wasn't just the porn industry viewing user habits. These same researchers found several news sites, finance sites, and sports sites doing the same thing.

Over time, browser security updates were supposed to have fixed these vulnerabilities...and they did, for a while. But recently, security researchers have uncovered new vulnerabilities that allow this behavior once again. One new attack uses the requestAnimationFrame function to determine how long it takes a browser to render a webpage (a simplified sketch of the idea appears below). Simply stated, if the page renders quickly, the user has probably visited it before. You get the idea.

There are ways to work around these browser history vulnerabilities. The primary workaround is to make sure you never have any browser history. You can clear all your history when you close your browser (in fact, most browsers can do this automatically). While this might keep someone from knowing your browsing history, it can also prove to be very inconvenient. After all, if you clear your history...well, you lose your history. Let's be honest, it's nice to have your browser remember the sites you've visited. What a pain to reestablish your user identity on all the websites you like to hit, right?

So why is your browsing history so interesting? Many companies want to target you with ads and other marketing initiatives based on your browsing habits. They also want to sell your browsing habits to other interested parties. I could also talk about how the government might use this information to spy on (er, help) you, but I'll refrain for now. Allan Friedman, a research scientist at George Washington University, recently said that websites are very likely searching your browser history to determine the selling price for a particular item. They might offer you a better deal if they find that you've been shopping their competitors for the same item. Likewise, they might charge more if they find nothing related to that purchase in your browser history. Justin Brookman, a director at the Center for Democracy and Technology, echoed this sentiment when he said browsing history could come at a cost. For example, if you have been shopping on a high-end retail site, you will likely see advertisements for higher-priced businesses displayed in your browser.

Another way this could affect your daily life is in the area of smartphone geolocation. Your smartphone broadcasts location information every few seconds, and businesses can use this information to send marketing emails (coupons, daily deals, etc.) when they know you are close by. Currently, there is no federal law that prohibits this behavior. As long as businesses aren't lying about what they are doing, it's perfectly legal. Don't be surprised when you conveniently get a "check out our great deals" email from the store you just passed by.
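For the curious, here is a minimal, hypothetical sketch of the measurement primitive behind that requestAnimationFrame trick: repeatedly repaint a link pointing at a candidate URL and record how long each frame takes. The frame count, the assumption that visited links have been styled to be expensive to repaint, and whatever threshold you would compare against are all illustrative choices, not working exploit code, and modern browsers ship mitigations against this class of probe.

    // Hypothetical sketch of timing page rendering with requestAnimationFrame.
    // Assumes the page has styled a:visited links so that repainting them is
    // measurably slower than repainting unvisited ones.
    function measureFrameTimes(url: string, frames = 10): Promise<number[]> {
      const link = document.createElement("a");
      link.href = url;
      link.textContent = "probe";
      document.body.appendChild(link);

      return new Promise((resolve) => {
        const times: number[] = [];
        let last = performance.now();

        const onFrame = (now: number) => {
          times.push(now - last); // how long the browser spent on the previous frame
          last = now;
          // Force a repaint of the link on every frame.
          link.style.color = times.length % 2 === 0 ? "blue" : "navy";
          if (times.length < frames) {
            requestAnimationFrame(onFrame);
          } else {
            link.remove();
            resolve(times);
          }
        };
        requestAnimationFrame(onFrame);
      });
    }

    // Usage: compare average frame times for a candidate URL against a URL the
    // user has certainly never visited; consistently slower frames hint "visited".
    measureFrameTimes("https://example.com/").then((t) =>
      console.log("average frame time (ms):", t.reduce((a, b) => a + b, 0) / t.length)
    );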
Ours is a really cool, technology-filled world...and it's kind of scary at the same time.

Ask the Expert – Why Web Fraud Protection?
Corey Marshall, Security Solution Architect, explains why the browser is a new threat vector into an organization's applications and infrastructure. This universal client can be the weakest link in the access chain, and malicious characters are focusing on it as an avenue to steal information. Web fraud can be detrimental to users and organizations alike, and Corey explains some specific business scenarios along with F5 fraud protection services that can provide visibility into behavior anomalies and protect the client side against data leakage.

ps

Related:
Ask the Expert – Are WAFs Dead?
Ask the Expert – Why SSL Everywhere?
F5 Web Fraud Protection

Is the URL headed for the endangered technology list?
Jeremiah Owyang, Senior Analyst, Social Computing, Forrester Research, tweeted recently on the subject of Chrome, Google's new open source browser. Jeremiah postulates:

Chrome is a nod to the future, the address bar is really a search bar. URLs will be an anachronism.

That's an interesting prediction, predicated on the ability of a browser to translate search terms into destinations on the Internet. Farfetched? Not at all. After all, there already exists a layer of obfuscation between a URL and an Internet destination; one that translates host names into IP addresses, hiding the complexity and difficulty of remembering IP addresses from the end-user. And apparently Chrome is already well on its way to sending URLs the way of the dodo bird, otherwise we wouldn't be having this conversation.

But IP addresses, though obfuscated and hidden from view for most folks, aren't an anachronism any more than the engine of a car is. Its complexity, too, is hidden from view and concern for most folks. We don't need to know how the engine gets started, just that turning the key will get it started. In similar fashion, most folks don't need to know how clicking on a particular URL gets them to the right place; they just need to know to click on it. Operating technology doesn't necessarily require understanding of how it works, and the layer of abstraction we place atop technology to make it usable by the majority doesn't necessarily make the underlying technology an anachronism, although in this case Jeremiah may be right - at least from the viewpoint that using URLs as a navigation mechanism may become an anachronism. URLs will still be necessary; they are part of the foundation of how the web works. But IP addresses are also necessary, and so is the technology that bridges the gap between IP addresses and host names, namely DNS.

More interesting, I think, is that Jeremiah is looking into his crystal ball and seeing the first stages of Web 3.0, where context and content, rather than a list of hyperlinks, are the primary vehicle that drives your journey through the web. Where SEO is king, and owning a keyword will be as important as, if not more important than, a brand. The move to a semantic web necessarily eliminates the importance of URLs as a visible manifestation, but not as the foundational building blocks of how that web is tied together.

To be fair to other browsers, the address bar in Firefox 3 also acts like a search bar. If I type in my name, it automatically suggests several sites tied to my identity and takes me by default to this blog. Similarly, a simple search for "big-ip" automatically takes me to F5's product page on BIG-IP. That's because my default search engine is Google, and it's taking me to the first-ranked page for the search results. This isn't Web 3.0, not yet, but it's one of the first visible manifestations we have of what the web will eventually become.

That's what I mean about keywords becoming the new brand. Just as "Band-Aid", which is really a brand name, became a term used to describe all bandages, the opposite will happen - and quickly - in a semantic web where keywords and phrases are automatically translated into URLs. SEO today understands the importance of search terms and keywords, but it's largely a supporting cog in a much larger wheel of marketing efforts. That won't be true when search really is king, rather than just the crown prince. But URLs will still be necessary.
After all, the technology that ties keywords and search terms to URLs requires that URLs exist in the first place, and once you get to a site you still have to navigate it. So while I'm not convinced that URLs will become a complete anachronism, they may very well become virtualized. Just like everything else today.

Et Tu, Browser?
Friends, foes, Internet-denizens … lend me your browser. Were you involved in any of the DDoS attacks that occurred over the past twelve months? Was your mom? Sister? Brother? Grandfather? Can you even answer that question with any degree of certainty?

The reality is that the reason for attacks on the web is subtly shifting to theft not necessarily of data, but of resources. While the goal may still be to obtain personal credentials for monetary gain, it is far more profitable to rip hundreds or thousands of credentials from a single source than to grab them one at a time. From a miscreant's point of view, the return on investment is simply much higher targeting a site than it is targeting you directly.

But that doesn't mean you're off the hook. In fact, quite the opposite. There are other, just as nefarious, purposes to which your resources can be directed, including inadvertently participating in a grand-scale DDoS attack for what is nowadays called "hacktivism." In both cases, you are still a victim, but you may not be aware of it: the goal is to stealth-install the means by which your compute resources can be harnessed to perpetrate an attack, and it may not be caught by the security you have in place (you do have some in place, right?).

You can't necessarily count on immunity from infection because you only visit "safe sites." That's because one of the ways in which attackers leverage your compute resources is not through installation of adware or other malware, but directly through JavaScript loaded via infected sites. At issue is the possible collision between web application and browser security.

attackers are recommending to develop a system by which people are lured to some other content, such as pornography, but by visiting the website would invisibly launch the DDOS JavaScript tool.
-- Researchers say: DDoS "Low Orbit Ion Cannon" attackers could be easily traced

Now consider the number of serious vulnerabilities reported by WhiteHat Security during the fall of 2010. Consider the rate across social networking sites. Assume an attacker managed to exploit one of those vulnerabilities and plant the DDoS JavaScript tool such that unsuspecting visitors end up playing a role in a DDoS attack.

It gets worse, as far as the potential impact goes. The recent revelation of a new SSL/TLS vulnerability (BEAST) includes a precondition that JavaScript be injected into the browser. CSRF (Cross-Site Request Forgery) is a fairly common method of managing such a trick, and is listed by WhiteHat in the aforementioned report as having increased to 24% of all vulnerabilities. So, too, is XSS (Cross-Site Scripting), which ranks even higher in WhiteHat's list, tying "information leakage" for the number one spot at 64%.

In order to execute their attack, Rizzo and Duong use BEAST (Browser Exploit Against SSL/TLS) against a victim who is on a network on which they have a man-in-the-middle position. Once a victim visits a high-value site, such as PayPal, that uses TLS 1.0, and logs in and receives a cookie, they inject the client-side BEAST code into the victim's browser. This can be done through the use of an iframe ad or just loading the BEAST JavaScript into the victim's browser.
-- New Attack Breaks Confidentiality Model of SSL, Allows Theft of Encrypted Cookies

Such an attack is designed to steal high-value data such as might be stored in an encrypted cookie used to conduct transactions with PayPal or an online banking service.
Depending on the level of protection at the web application layer, the delivery of such JavaScript may go completely unnoticed. Most web application security focuses on verifying that user input, not application responses, is free from infection (a rough sketch of what response-side checking might look like appears at the end of this post). And too many consumers believe running anti-virus scanning solutions is enough to detect and prevent infection in general, not realizing that a dynamically injected JavaScript (something many sites do all the time to monitor performance and enable real-time interaction) may, in fact, be "malicious" or at the very least an attempt at resource theft.

How do you stop a browser that essentially stabs you in the back by accepting, without question, questionable content? Without layering additional security on the browser that parses through each and every piece of content delivered, there isn't a whole lot you can do, other than turning off the ability to execute JavaScript, which today essentially renders the Internet useless.

GO to the SOURCE

If we look at the source of browser infections, we invariably find the only viable, reasonable, effective answer is to eliminate the source. When you have a pandemic you figure out what's causing it and you go to the source. Yes, you treat the symptoms of the victims if possible, but what you really want to do is locate and whack the source so it stops spreading.

In "Perceptions about Network Security" (Ponemon Institute, June 2011), survey respondents reported that the top three sources of a breach were insider abuse (52%), malicious software download (48%), and malware from a website (43%). Interestingly, 29% indicated the breach resulted from malicious content coming from a social networking site. When that is added to malware from a website (of which social networking sites are certainly a type), the combined source tops the chart, with 72% of all breaches being a direct result of the failure of a website to secure itself, essentially allowing itself to become a carrier of an outbreak.

Certainly, if you have control over the desktops, laptops, and mobile devices from which clients will interact with your web site or network, and you have the capability to deploy policies on those clients that can aid in securing and protecting them, you should. But that capability is rapidly dwindling with the introduction of a vast host of clients with wildly different OS footprints and the incompatibility of client-side, OS-specific agents and apps capable of supporting a holistic client-side security strategy. Enforcing policies regarding interaction with corporate resources is really your best and most complete option. As with a DDoS attack, you are unlikely to be able to stop the infection of a client. You can, however, stop the spread and keep your corporate resources from becoming a carrier.

The more organizations that attend to their own house's security and protection, the better off end-users will likely be. Reducing the sources of the pandemic of client-side infections will reduce the risk not only to your own organization and users, but to others. And if we can all reduce the potential sources down to sites that rely on users specifically visiting an infected site, the client-side mechanisms in place to protect users against known malware distribution sites will get us further toward a safer and more enjoyable Internet.
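Here is that rough sketch of response-side checking: a deliberately simplified, hypothetical check that inspects outbound HTML for script content the page is not expected to carry. The allowlist, the function name, and the idea of running it as a response filter are assumptions made for illustration; real inspection engines do far more, and regular expressions are a crude way to handle HTML.

    // Hypothetical sketch: flag outbound HTML responses containing scripts that
    // are not on an expected allowlist. Illustrative only; not a substitute for
    // a real web application firewall or content inspection engine.
    const expectedScriptSources = new Set([
      "/js/app.js",
      "https://cdn.example.com/analytics.js", // assumed "known good" sources
    ]);

    interface ScanResult {
      inlineScripts: number;       // <script> blocks with no src attribute
      unexpectedSources: string[]; // src values not on the allowlist
    }

    function scanHtmlResponse(html: string): ScanResult {
      const unexpectedSources: string[] = [];
      let inlineScripts = 0;

      // Crude regex "parsing" for the sake of a short example.
      const scriptTags = html.match(/<script\b[^>]*>/gi) ?? [];
      for (const tag of scriptTags) {
        const src = /src\s*=\s*["']([^"']+)["']/i.exec(tag)?.[1];
        if (!src) {
          inlineScripts++;
        } else if (!expectedScriptSources.has(src)) {
          unexpectedSources.push(src);
        }
      }
      return { inlineScripts, unexpectedSources };
    }

    // Usage: a response filter could log or block when a page suddenly carries
    // inline script, or references sources it never referenced before.
    const result = scanHtmlResponse(
      "<html><script src='http://evil.example/loic.js'></script></html>"
    );
    if (result.unexpectedSources.length > 0 || result.inlineScripts > 0) {
      console.warn("response contains unexpected script content", result);
    }

The point is simply that responses, not just requests, deserve inspection.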
Related:
When the Data Center is Under Siege Don't Forget to Watch Under the Floor
The Many Faces of DDoS: Variations on a Theme or Two
Spanish police website hit by Anonymous hackers (June 2011)
What We Learned from Anonymous: DDoS is now 3DoS
Custom Code for Targeted Attacks
Defense in Depth in Context
The Big Attacks are Back…Not That They Ever Stopped
(IP) Identity Theft in Cloud Computing Environments
If Security in the Cloud Were Handled Like Car Accidents

Automatically detecting client speed
We used to spend a lot of cycles worrying about detecting user agents (i.e. browsers) and redirecting clients to the pages written specifically for that browser. You know, back when browser incompatibility was a way of life. Yesterday.

Compatibility is still an issue, but most web developers either use third-party JavaScript libraries to handle detection and incompatibility issues or avoid the particular features that cause problems. One thing still seen at times, however, is the "choose high bandwidth or low bandwidth" entry page, particularly on sites laden with streaming video and audio, whose playback is highly sensitive to the effects of jitter and thus needs a fatter pipe over which to stream. Web site designers necessarily include the "choose your speed" page because they can't reliably determine client speed. Invariably, some user on a poor connection is going to choose high bandwidth anyway, and then e-mail or call to complain about poor service. Because that's how people are.

So obviously we still have a need to detect client speed, but the code and method of doing so in the web application would be prohibitively complex and consume time and resources better spent elsewhere. But we'd still like to direct the client to the appropriate page without asking, because we're nice that way - or more likely we just want to avoid the phone call later. That would be a huge motivator for me, but I'm like that. I hate phones. Whatever the reason, detecting client speed is valuable for directing users to appropriate content as well as providing other functionality, such as compression. Compression is itself a resource-consuming function, and applying compression in some situations can actually degrade performance, effectively negating the improvement in response time gained by decreasing the size of the data to be transferred.

If you've got an intelligent application delivery platform in place, you can automatically determine client speed and direct requests based on that speed without needing to ask the client for input. Using iRules, just grab the round-trip time (or bandwidth) and rewrite the URI accordingly:

    when HTTP_REQUEST {
        if { [TCP::rtt] >= 1000 } {
            HTTP::uri "/slowsite.html"
        }
    }

If you don't want to automatically direct the client, you could use this information to add a message to your normal "choose your bandwidth" page that lets the client know their connection isn't so great and perhaps they should choose the lower-bandwidth option. This is also good for collecting statistics, if you're interested, on the types of connections your customers and users are on. That can help you decide whether you even need the choice page, and maybe lead to supporting only one option - making the development and maintenance of your site and its video/audio all that much more streamlined.

Does This Application Make My Browser Look Fat?
Web applications that count on the advantage of not having a bloated desktop footprint need to keep one eye on the scale…

A recent article on CloudAve that brought back the "browser versus native app" debate caught my eye last week. After reading it, I realized the author is really focusing on the piece of the debate that dismisses SaaS and browser-based applications in general based on the disparity in functionality between them and their "bloated desktop" cousins.

Why do I have to spend money on powerful devices when I can get an experience almost similar to what I get from native apps? More importantly, the rate of innovation on browser based apps is much higher than what we see in the traditional desktop software. […] Yes, today's SaaS applications still can't do everything a desktop application can do for us. But there is a higher rate of innovation on the SaaS side and it is just a matter of time before they catch up with desktop applications on the user experience angle.
-- When You Can Innovate With browser, Why Do You Need Native Apps?, Krishnan Subramanian

I don't disagree with this assessment, and Krishnan's article is a good one: a reminder that when you move from one paradigm to another it takes time to "catch up." This is true with cloud computing in general. We're only at the early stages of maturity, after all, so comparing the "infrastructure services" available from today's cloud computing implementations with well-established datacenters is a bit unfair. But while I don't disagree with Krishnan, his discussion reminded me that there's another piece of this debate that needs to be examined, especially in light of the impact on the entire application delivery infrastructure as the capabilities of browsers (and browser-based applications) to reproduce a desktop experience mature. At what point do we stop looking at browser-based applications as "thin" clients and start viewing them for what they must certainly become to support the kind of user experience we want from cloud and web applications: bloated desktop applications?

The core fallacy here is that SaaS (or any other "cloud" or web application) is not a desktop application. It is. Make no mistake. The technology that makes that interface interactive and integrated is almost all enabled on the desktop, via the use of a whole lot of client-side scripting. Just because it's loaded and executing from within a browser doesn't mean it isn't executing on the desktop. It is. It's using your desktop's compute resources and your desktop's network connection, and it's a whole lot more bloated than anything Tim Berners-Lee could have envisioned back in the early days of HTML and the "World Wide Web."

In fact, I'd argue that with the move to Web 2.0 and its heavy reliance on client-side scripting to implement what is really presentation-layer logic, the term "web application" became a misnomer. Previously, when the interface and functionality relied solely on HTML and were assembled completely on the web side of the equation, these were correctly called "web" applications. Today? Today they're very nearly a perfected client-server, three-tiered architectural implementation: presentation layer on the client, application and data on the server. That the network is the Internet instead of the LAN changes nothing; it simply introduces additional challenges into the delivery chain.
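To make that three-tier point concrete, here is a tiny, hypothetical sketch of the pattern described above: the "web application" the user sees is client-side code executing on the desktop, fetching data from a server API and assembling the presentation layer locally. The endpoint, the response shape, and the element id are invented for illustration.

    // Hypothetical sketch: the presentation layer runs in the browser (on the
    // desktop), while the server is reduced to an application/data tier that
    // returns JSON. The URL and the shape of the response are assumptions.
    interface Order {
      id: string;
      total: number;
    }

    async function renderOrders(containerId: string): Promise<void> {
      // The "application and data" tiers live behind this API call...
      const response = await fetch("/api/orders", { credentials: "same-origin" });
      const orders: Order[] = await response.json();

      // ...but the presentation logic executes here, on the client's CPU.
      const container = document.getElementById(containerId);
      if (!container) return;
      container.innerHTML = orders
        .map((o) => `<li>Order ${o.id}: $${o.total.toFixed(2)}</li>`)
        .join("");
    }

    void renderOrders("order-list");

Architecturally, nothing here differs from a classic client-server program except that the client arrives over HTTP each time it is needed.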