firefox
7 Topics

Firefox 58.x/59.0 with the WebGUI is switching back to Common partition randomly
I have noticed that when using the latest Mozilla Firefox versions (58.x and 59.0) with the WebGUI from TMOS/BIG-IP version 12.1.3, editing objects in partitions other than Common causes the partition to switch randomly back to Common in the WebGUI. This then causes errors when you try to save, as the object is no longer referencing the correct partition (i.e., now Common). As we have no issue with IE or Chrome, it seems to be something new to Firefox, but it has been driving me nuts. I have switched to using Chrome as a workaround. Has anyone else seen this effect, or am I going mad? I tried looking for other questions about this but found nothing related.
F5 Access Policy Manager and Firefox Browser version 43 and 47+

Firefox Browser version 43 has new plug-in signing requirements. F5 will be providing Engineering Hotfixes for BIG-IP versions 12.0.0, 11.6.0, and 11.5.3, which will include an F5 Access Policy Manager plug-in signed by Mozilla for Microsoft Windows and Linux platforms. With F5 officially supporting Firefox version 34, this is a "best efforts" approach to alleviate any disruptions brought about by Firefox version 43 and the upcoming Firefox version 44, related to plug-in signing requirements (Feature Enhancement ID: 564253).

If issues are uncovered with versions of Firefox greater than version 34 after installing the appropriate Engineering Hotfix, it is recommended that users be guided to use Microsoft Internet Explorer on Windows and Safari on Mac, as detailed in this DevCentral post. Another option is to use BIG-IP Edge Client for these two platforms. For Linux, there is a CLI client available for network access.

These Engineering Hotfix releases are short-term fixes. A more permanent solution will be available in an upcoming release of BIG-IP; specifics will also be available in the aforementioned DevCentral post. We will make the Engineering Hotfixes available for customers who create a support case with F5 Support. This Engineering Hotfix should be good for up to Firefox 46, and F5 will need to have Mozilla sign the plug-in again for Firefox 47+. This is just how Firefox plug-in signing currently works.

January 7, 2016 Update: While we (F5) are making progress in getting the Engineering Hotfixes out, we are currently working through some issues seen with the Mozilla add-on submission tool. Once that is resolved, we expect to be able to provide an ETA for the Engineering Hotfixes. F5 is working on this with urgency.

January 8, 2016 Update: We (F5) have the issue with the Mozilla add-on tools resolved, so we can provide a target ETA of January 15, 2016 (Friday) for the Engineering Hotfixes for the 3 versions of BIG-IP mentioned in this post.

January 14, 2016 Update: We have run into a few issues that need to be addressed, so we will need a few more days to have the Engineering Hotfixes available. Again, F5 is working on this with urgency.

January 21, 2016 Update: We now have an Engineering Hotfix for BIG-IP 11.6.0, based on BIG-IP 11.6.0 Hotfix 6. Again, to get it, customers should create a support case with F5 Support. We are still planning to provide Engineering Hotfixes for BIG-IP 12.0.0 and 11.5.3 soon.

January 25, 2016 Update: We now have an Engineering Hotfix for BIG-IP 12.0.0, based on BIG-IP 12.0.0 Hotfix 1. Again, to get it, customers should create a support case with F5 Support. We are still planning to provide an Engineering Hotfix for BIG-IP 11.5.3 soon.

January 26, 2016 Update: We have an Engineering Hotfix for BIG-IP 11.5.3, based on BIG-IP 11.5.3 Hotfix 2. Customers should create a support case with F5 Support. F5 will target releasing Engineering Hotfixes before Firefox 47 is available.

May 9, 2016 Update: F5 is currently working on Engineering Hotfixes for the various BIG-IP versions for Firefox 47+ that will work for all Firefox versions (even Firefox 46 and earlier). Mozilla is allowing plug-in signing for all (*) versions of Firefox again. We do not have the releases ready for customers yet but expect to have them shortly. Once they are available, we will announce it here and provide them initially through F5 Support, so customers can get them via a support ticket. Shortly after they are available via F5 Support, we will provide them on https://downloads.f5.com.
May 16, 2016 Update: F5 has Engineering Hotfixes for 11.5.4 HF1 and 11.6.0 HF6 available. These should work with all versions* of Firefox (including Firefox 47 Beta builds). For now, customers should create a support ticket with F5 Support to get the Engineering Hotfixes. We will provide them on https://downloads.f5.com shortly, and we are also working on Engineering Hotfixes for 12.0.0 HF2 and 11.6.1.

May 21, 2016 Update: F5 has an Engineering Hotfix for 12.0.0 HF2 available. It should work with all versions* of Firefox (including Firefox 47 Beta builds). For now, customers should create a support ticket with F5 Support to get the Engineering Hotfixes. We will provide it on https://downloads.f5.com shortly, and we are also working on Engineering Hotfixes for 11.6.1 and 12.1.0.

May 31, 2016 Update: F5 has an Engineering Hotfix for 11.6.1 available. For now, customers should create a support ticket with F5 Support to get the Engineering Hotfixes.

*These Engineering Hotfixes should work on all versions of Firefox, including 47+, until Firefox removes its NPAPI support. To address that, we have another DevCentral post here: https://devcentral.f5.com/s/articles/addressing-security-loopholes-of-third-party-browser-plug-ins
SSL error (ssl_error_bad_mac_read) between LTM and Firefox

We have noticed that recent versions of Firefox (36+) are frequently giving SSL errors [ssl_error_bad_mac_read] when talking to our LTM. The LTM is used as a reverse proxy for a website and does SSL bridging. The error happens sporadically on some web pages, but some other web pages give it pretty constantly. The error happens with all tested flavors of SSL/TLS: SSLv3, TLS 1.0, TLS 1.2. The error does not happen with IE, Chrome, or previous versions of Firefox (before 36). The error does not happen if we bypass the LTM and connect directly to the website with any version of TLS. Has anybody already seen this issue? What could the problem be? Any help will be appreciated.

UPDATE 1: If I disable in Firefox all ciphers except 3DES+SHA, everything works well.

UPDATE 2: I have three different VIPs on our LTM that use different SSL certificates. I tested all of them with Firefox. In all cases TLS 1.2 with the cipher suite TLS_RSA_WITH_AES_128_CBC_SHA (0x002f) was negotiated. In two cases the SSL connections fail with a "bad mac" error. In the third case, I have been unable to reproduce the issue.

UPDATE 3: According to Wireshark captures, the SSL connection sometimes fails right after the handshake, but sometimes it fails later, after transferring some amount of HTTP data. Looks like a bug in crypto libraries.

UPDATE 4: Tested the LTM with an OpenSSL client using TLS 1.2 and the AES128-SHA cipher. Got similar behavior with an intermittent decryption error: error:1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or bad record mac
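For anyone who wants to reproduce UPDATE 4's test, the stock OpenSSL command-line client can negotiate exactly that protocol and cipher (AES128-SHA is OpenSSL's name for TLS_RSA_WITH_AES_128_CBC_SHA). A sketch, with a placeholder hostname:

```
openssl s_client -connect ltm-vip.example.com:443 -tls1_2 -cipher AES128-SHA
```

If the problem reproduces, the decryption error above should appear intermittently, either right after the handshake or partway through a transfer.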
Clickjacking Protection Using X-FRAME-OPTIONS Available for Firefox

But browser support is only half the solution; don't forget to implement the server side, too.

Clickjacking, unlike more well-known (and understood) web application vulnerabilities, has been given scant attention despite its risks and its usage. Earlier this year, for example, it was used as an attack on Twitter, but never really discussed as being a clickjacking attack. Maybe because, aside from rewriting applications to prevent CSRF (adding nonces and validation of the same to every page) or adding framekillers, there just haven't been many other options to prevent the attack technique from being utilized against users. It is also one of the more convoluted attack methods out there, so it would be silly to expect non-technical media to understand it, let alone explain how it works to their readers.

There is, however, a solution on the horizon. IE8 has introduced an opt-in measure that allows developers – or whoever might be in charge of network-side scripting implementations – to prevent clickjacking on vulnerable pages using a custom HTTP header that keeps them from being "framed" inappropriately: X-FRAME-OPTIONS. The behavior is described in the aforementioned article as:

If the X-FRAME-OPTIONS value contains the token DENY, IE8 will prevent the page from rendering if it will be contained within a frame. If the value contains the token SAMEORIGIN, IE will block rendering only if the origin of the top level-browsing-context is different than the origin of the content containing the X-FRAME-OPTIONS directive. For instance, if http://shop.example.com/confirm.asp contains a DENY directive, that page will not render in a subframe, no matter where the parent frame is located. In contrast, if the X-FRAME-OPTIONS directive contains the SAMEORIGIN token, the page may be framed by any page from the exact http://shop.example.com origin.

But that's only IE8, right? Well, natively, yes. But a development version of NoScript has been released that supports the X-FRAME-OPTIONS header and will provide the same protections as are natively achieved in IE8. The problem is that this is only half the equation: the X-FRAME-OPTIONS header needs to exist before the browser can act on it and complete the preventive measure against clickjacking. As noted in the Register, "some critics have contended the protection will be ineffective because it will require millions of websites to update their pages with proprietary code."

That's not entirely true, as there is another option that will provide support for X-FRAME-OPTIONS without updating pages/applications/sites with proprietary code: network-side scripting. The "proprietary" nature of custom HTTP headers is also debatable, as support for Firefox was provided quickly via NoScript, and if the technique is successful it will likely be adopted by other browser creators.

HOW TO ADD X-FRAME-OPTIONS TO YOUR APPLICATION – WITH or WITHOUT CODE CHANGES

Step 1: Add the custom HTTP header "X-FRAME-OPTIONS" with a value of "DENY" or "SAMEORIGIN" before returning a response to the client.

Really, that's it. The browser takes care of the rest for you. OWASP has a great article on how to implement a ClickjackFilter for JavaEE, and there are sure to be many more blogs and articles popping up describing how one can implement such functionality in their language of choice. Even without such direct "how-to" articles and code samples, it is merely a matter of adding a new custom HTTP header – examples of which ought to be easy enough to find.
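As a rough sketch of what such a servlet filter can look like (this is a minimal illustration of the idea, not the OWASP code; the class name and the "mode" init-param are my own assumptions):

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Sets X-FRAME-OPTIONS on every response that passes through the filter.
public class ClickjackFilter implements Filter {

    private String mode = "DENY"; // or "SAMEORIGIN"

    @Override
    public void init(FilterConfig config) throws ServletException {
        // Optionally override the mode via an init-param in web.xml.
        String configured = config.getInitParameter("mode");
        if (configured != null) {
            mode = configured;
        }
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response,
                         FilterChain chain) throws IOException, ServletException {
        // Set the header before the response body is committed.
        ((HttpServletResponse) response).setHeader("X-FRAME-OPTIONS", mode);
        chain.doFilter(request, response);
    }

    @Override
    public void destroy() {
        // Nothing to clean up.
    }
}
```

Map the filter to the URLs you want protected in web.xml, as with any other servlet filter, and every matching response carries the header.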
Similarly, a solution can be implemented using network-side scripting that requires no modification to applications. In fact, this can be accomplished via iRules in just one line of code:

```
when HTTP_RESPONSE { HTTP::header insert "X-FRAME-OPTIONS" "DENY" }
```

(use "SAMEORIGIN" instead of "DENY" if pages from the same origin are allowed to frame the content). I believe the mod_rewrite network-side script would be as simple, but as I am not an expert in mod_rewrite I will hope someone who is will leave an appropriate example as a comment or write up a blog/article and leave a pointer to it (a minimal Apache sketch follows the related links below). A good reason to utilize the agility of network-side scripting solutions in this case is that it is not necessary to modify each application requiring protection, which takes time to implement, test, and deploy. An even better reason is that a single network-side script can protect all applications, regardless of language and deployment platform, without a lengthy development and deployment cycle.

Regardless of how you add the header, it would be a wise idea to add it as a standard part of your secure-code deployment requirements (you do have those, don't you?), because it doesn't hurt anything for the custom HTTP header to exist, and visitors using X-FRAME-OPTIONS-enabled browsers/solutions will be a lot safer than without it.

Related reading:
Stop brute force listing of HTTP OPTIONS with network-side scripting
Jedi Mind Tricks: HTTP Request Smuggling
I am in your HTTP headers, attacking your application
Understanding network-side scripting
9 ways to use network-side scripting to architect faster, scalable, more secure applications
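On the mod_rewrite point above: to the best of my knowledge, inserting response headers in Apache httpd is actually the job of mod_headers rather than mod_rewrite, and the equivalent one-liner in a server or virtual-host config would be along these lines (pick one value, as with the iRule):

```
# Requires mod_headers; use DENY or SAMEORIGIN as appropriate
Header always set X-Frame-Options "SAMEORIGIN"
```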
DevCentral questions broken in firefox?

Using Firefox 35, and for some days now it seems DevCentral questions don't function fully. It does automatically load more questions when I scroll down, but when I try to subscribe to or report a question I get sent back to the front page. I have NoScript allowing everything from DevCentral. Anyone else with the same problem?
Searching DevCentral Just Got Easier

I recently received an internal iRule email, and one of our folks created a search provider for Firefox to search DevCentral. Lori quickly responded and asked if we could get this posted to DevCentral. Why not, if it will help the community, so I took a look. Then it occurred to me that a while back I created a search provider definition based on the OpenSearch specification. For some reason, on our last site refresh, the links in our website were removed, so the browser didn't natively pick them up. I fixed that, so now you can add DevCentral as a native search target in your browser of choice. Here's a little background on OpenSearch, how I implemented it on DevCentral, and how to set it up in your browser.

OpenSearch

OpenSearch is a format that can be used to describe a search engine so that it can be accessed and used by search client applications such as web browsers. It's basically just an XML file that you put on your webserver; by adding a hidden tag to your application pages, a browser is able to automatically discover the search pages on your site.

Creating the Search Engine

The first step is to create the search engine definition file. In this case, I called that file OpenSearch.xml. The format for that file is defined here. For DevCentral, you can view the definition directly at OpenSearch.xml.
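To give a feel for the format, here is a generic illustration per the OpenSearch 1.1 specification; this is not DevCentral's actual OpenSearch.xml, and the search URL template is a placeholder:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
  <ShortName>DevCentral</ShortName>
  <Description>Search DevCentral</Description>
  <!-- {searchTerms} is replaced by the browser with the user's query -->
  <Url type="text/html" template="https://devcentral.f5.com/search?q={searchTerms}"/>
</OpenSearchDescription>
```

The "hidden tag" that lets the browser auto-discover it is a link element in the page head pointing at the definition file:

```xml
<link rel="search" type="application/opensearchdescription+xml"
      title="DevCentral" href="/OpenSearch.xml"/>
```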
Is the URL headed for the endangered technology list?

Jeremiah Owyang, Senior Analyst, Social Computing, Forrester Research, tweeted recently on the subject of Chrome, Google's new open source browser. Jeremiah postulates: Chrome is a nod to the future, the address bar is really a search bar. URLs will be an anachronism.

That's an interesting prediction, predicated on the ability of a browser to translate search terms into destinations on the Internet. Farfetched? Not at all. After all, there already exists a layer of obfuscation between a URL and an Internet destination: one that translates host names into IP addresses, hiding the complexity and difficulty of remembering IP addresses from the end user. And apparently Chrome is already well on its way to sending URLs the way of the dodo bird; otherwise we wouldn't be having this conversation.

But IP addresses, though obfuscated and hidden from view for most folks, aren't an anachronism any more than the engine of a car. Its complexity, too, is hidden from view and concern for most folks. We don't need to know how the engine gets started, just that turning the key will get it started. In similar fashion, most folks don't need to know how clicking on a particular URL gets them to the right place; they just need to know to click on it. Operating technology doesn't necessarily require understanding of how it works, and the layer of abstraction we place atop technology to make it usable by the majority doesn't necessarily make the underlying technology an anachronism, although in this case Jeremiah may be right - at least from the viewpoint that using URLs as a navigation mechanism may become an anachronism.

URLs will still be necessary; they are part of the foundation of how the web works. But IP addresses are also necessary, and so is the technology that bridges the gap between IP addresses and host names, namely DNS.

More interesting, I think, is that Jeremiah is looking into his crystal ball and seeing the first stages of Web 3.0, where context and content are the primary vehicles that drive your journey through the web rather than a list of hyperlinks. Where SEO is king, and owning a keyword will be as important, if not more so, than brand. The move to a semantic web necessarily eliminates the importance of URLs as a visible manifestation, but not as the foundational building blocks of how that web is tied together.

To be fair to other browsers, the address bar in Firefox 3 also acts like a search bar. If I type in my name, it automatically suggests several sites tied to my identity and takes me by default to this blog. Similarly, a simple search for "big-ip" automatically takes me to F5's product page on BIG-IP. That's because my default search engine is Google, and it's taking me to the first-ranked page for the search results. This isn't Web 3.0, not yet, but it's one of the first visible manifestations we have of what the web will eventually become.

That's what I mean about keywords becoming the new brand. Just as "bandaid", which is really a brand name, became a term used to describe all bandages, the opposite will happen - and quickly - in a semantic web where keywords and phrases are automatically translated into URLs. SEO today understands the importance of search terms and keywords, but it's largely a supporting cog in a much larger wheel of marketing efforts. That won't be true when search really is king rather than just the crown prince. But URLs will still be necessary.
After all, the technology that ties keywords and search terms to URLs requires that URLs exist in the first place, and once you get to a site you still have to navigate it. So while I'm not convinced that URLs will become a complete anachronism, they may very well become virtualized. Just like everything else today.