BIG-IP Edge Client 2.0.2 for Android
Earlier this week F5 released our BIG-IP Edge Client for Android with support for the new Amazon Kindle Fire HD. You can grab it instantly from Amazon for your Android device. By supporting the BIG-IP Edge Client on Kindle Fire products, F5 is helping businesses secure personal devices connecting to the corporate network and helping end users be more productive, making it a great fit for BYOD deployments.

The BIG-IP® Edge Client™ for Android 4.x (Ice Cream Sandwich) or later devices secures and accelerates mobile device access to enterprise networks and applications using SSL VPN and optimization technologies. Access is provided as part of an enterprise deployment of F5 BIG-IP® Access Policy Manager™, Edge Gateway™, or FirePass™ SSL-VPN solutions.

BIG-IP® Edge Client™ for Android 4.x (Ice Cream Sandwich) devices features:
- Provides accelerated mobile access when used with F5 BIG-IP® Edge Gateway
- Automatically roams between networks to stay connected on the go
- Full Layer 3 network access to all your enterprise applications and files
- Supports multi-factor authentication with client certificate
- Custom URL scheme support to create Edge Client configurations and to start and stop Edge Client

BEFORE YOU DOWNLOAD OR USE THIS APPLICATION YOU MUST AGREE TO THE EULA HERE: http://www.f5.com/apps/android-help-portal/eula.html

BEFORE YOU CONTACT F5 SUPPORT, PLEASE SEE: http://support.f5.com/kb/en-us/solutions/public/2000/600/sol2633.html

If you have an iOS device, you can get the F5 BIG-IP Edge Client for Apple iOS, which supports the iPhone, iPad, and iPod touch. We are also working on a Windows 8 client, which will be ready for the Windows 8 general availability.
ps

Resources:
- F5 BIG-IP Edge Client Samsung
- F5 BIG-IP Edge Client Rooted
- F5 BIG-IP Edge Client
- F5 BIG-IP Edge Portal for Apple iOS
- F5 BIG-IP Edge Client for Apple iOS
- F5 BIG-IP Edge apps for Android
- Securing iPhone and iPad Access to Corporate Web Applications – F5 Technical Brief
- Audio Tech Brief - Secure iPhone Access to Corporate Web Applications
- iDo Declare: iPhone with BIG-IP

The Challenges of SQL Load Balancing
#infosec #iam Load balancing databases is fraught with operational and business challenges.

While cloud computing has brought to the forefront of our attention the ability to scale through duplication, i.e. horizontal scaling or “scale out” strategies, this strategy tends to run into challenges the deeper into the application architecture you go. It works well at the web and application tiers, but a duplicative strategy tends to fall on its face when applied to the database tier. Concerns over consistency abound, with many simply choosing to throw out the concept of consistency and adopting instead an “eventually consistent” stance, in which it is assumed that data in a distributed database system will eventually become consistent and cause minimal disruption to application and business processes. Some argue that eventual consistency is not “good enough” and cite additional concerns with respect to the failure of such strategies to adequately address failures. Thus there are a number of vendors, open source groups, and pundits who spend time attempting to address both concerns. The result is database load balancing solutions.

For the most part such solutions are effective. They leverage master-slave deployments – typically used to address failure and which can automatically replicate data between instances (with varying levels of success when distributed across the Internet) – and attempt to intelligently distribute SQL-bound queries across two or more database systems. The most successful of these architectures is the read-write separation strategy, in which all SQL transactions deemed “read-only” are routed to one database while all “write”-focused transactions are directed to another.
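The read-write separation just described can be sketched as a small router: writes always go to the single master, while reads rotate across replicas. This is a minimal illustration of the pattern, not any vendor's implementation; the endpoint names are hypothetical, and the caller is assumed to already know whether a statement reads or writes.

```python
import itertools

class ReadWriteRouter:
    """Sketch of read-write separation: writes go to the master,
    reads are spread across replica endpoints (hypothetical names)."""

    def __init__(self, master, replicas):
        self.master = master
        self._replicas = itertools.cycle(replicas)

    def endpoint_for(self, is_read):
        # Writes must hit the single consistent master; reads may
        # tolerate eventual consistency on a replica.
        return next(self._replicas) if is_read else self.master

router = ReadWriteRouter("db-master:3306", ["db-r1:3306", "db-r2:3306"])
print(router.endpoint_for(is_read=False))  # -> db-master:3306
print(router.endpoint_for(is_read=True))   # -> db-r1:3306 (first replica in rotation)
```

Note that the hard part is not the rotation but the `is_read` decision itself, which is where the credential and parsing problems discussed below come in.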
Such foundational separation allows higher-layer architectures to be implemented, such as geography-based read distribution, in which read-only transactions are further distributed across geographically dispersed database instances, all of which ultimately act as “slaves” to the single master database that processes all write-focused transactions. This results in an eventually consistent architecture, but one which manages to mitigate the disruptive aspects of eventually consistent architectures by ensuring that the most important transactions – write operations – are, in fact, consistent. Even so, there are issues, particularly with respect to security.

MEDIATION inside the APPLICATION TIERS

Generally speaking, mediating solutions are a good thing – when they’re external to the application infrastructure itself, i.e. the traditional three tiers of an application. The problem with mediation inside the application tiers, particularly at the data layer, is the same for infrastructure as it is for software solutions: credential management.

See, databases maintain their own set of users, roles, and permissions. Even as applications have been able to move toward a more shared set of identity stores, databases have not. This is in part due to the nature of data security and the need for granular permission structures, down to the cell in some cases, including transactional security that allows some users to update, delete, or insert while others are granted a different subset of permissions.

But more difficult to overcome is the tight coupling of identity to connection for databases. With web protocols like HTTP, identity is carried along at the protocol level. This means it can be transient across connections, because it is often stuffed into an HTTP header via a cookie or stored server-side in a session – tied not to the connection but to identifying information. At the database layer, identity is tightly coupled to the connection.
The connection itself carries the credentials with which it was opened. This gives rise to problems for mediating solutions – not just load balancers, but also software solutions such as ESB (enterprise service bus) and EII (enterprise information integration) styled solutions. Any device or software which attempts to aggregate database access for any purpose eventually runs into the same problem: credential management. This is particularly challenging for load balancing when applied to databases.

LOAD BALANCING SQL

To understand the challenges with load balancing SQL you need to remember that there are essentially two models of load balancing: transport layer and application layer. At the transport layer, i.e. TCP, connections are only temporarily managed by the load balancing device. The initial connection is “caught” by the load balancer and a decision is made, based on transport layer variables, where it should be directed. Thereafter, for the most part, there is no interaction at the load balancer with the connection other than to forward it on to the previously selected node. At the application layer, the load balancing device terminates the connection and interacts with every exchange. This affords the load balancing device the opportunity to inspect the actual data or application layer protocol metadata in order to determine where the request should be sent.

Load balancing SQL at the transport layer is less problematic than at the application layer, yet it is at the application layer that the most value is derived from database load balancing implementations. That’s because it is at the application layer that distribution based on “read” or “write” operations can be made. But to accomplish this requires that the SQL be inline, that is, that the SQL being executed is actually included in the code and then executed via a connection to the database. If your application uses stored procedures, this method will not work for you.
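Why inline SQL is a prerequisite becomes clearer with even a naive classifier of the kind an application-layer load balancing service would need to apply. The sketch below is illustrative only – a real deployment needs a full SQL parser – and the statement examples in the comments show why keyword matching alone breaks down.

```python
# Naive read/write classification of inline SQL, of the kind an
# application-layer load balancer must perform. Sketch only.
READ_PREFIXES = ("select", "show", "describe", "explain")

def looks_read_only(sql: str) -> bool:
    """Classify by leading keyword; simple but easily fooled."""
    keyword = sql.lstrip().split(None, 1)[0].lower()
    return keyword in READ_PREFIXES

assert looks_read_only("SELECT id FROM users WHERE id = 1")
assert not looks_read_only("UPDATE users SET name = 'x' WHERE id = 1")

# Why keyword matching is not enough:
# - "SELECT ... INTO archive_tbl" starts with SELECT but writes, so it
#   would be wrongly routed to a read-only replica.
# - "WITH recent AS (SELECT ...) SELECT ..." is read-only but starts
#   with WITH, so it would needlessly be sent to the master.
# - "EXEC usp_do_stuff" is opaque: the read/write behavior lives inside
#   the stored procedure, which is why inline SQL is a prerequisite.
```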
It is important to note that many packaged enterprise applications rely upon stored procedures and are thus not able to leverage this style of load balancing as a scaling option. Which of these methods is used to access your databases will depend on your application and on how your organization has agreed to protect your data. The use of inline SQL affords the developer greater freedom, at the cost of security, increased programming (to prevent the inherent security risks), difficulty in optimizing data and indices to adapt to changes in the volume of data, and deployment burdens. However, there is lively debate on the merits of both access methods and how to overcome their inherent risks. The OWASP group has identified injection attacks as among the easiest to exploit and the most damaging in impact.

Application-layer distribution also requires that the load balancing service parse MySQL or T-SQL (Microsoft’s Transact-SQL). Databases, of course, are designed to parse these string-based commands and are optimized to do so. Load balancing services are generally not designed to parse these languages and, depending on the implementation of their underlying parsing capabilities, may incur significant performance penalties in doing so. Regardless of those issues, there are still an increasing number of organizations who view SQL load balancing as a means to achieve a more scalable data tier. Which brings us back to the challenge of managing credentials.

MANAGING CREDENTIALS

Many solutions attempt to address the issue of credential management by simply duplicating credentials locally; that is, they create a local identity store that can be used to authenticate requests against the database. Ostensibly the credentials match those in the database (or the identity store used by the database, such as can be configured for MSSQL) and are kept in sync. This obviously poses an operational challenge similar to that of any distributed system: synchronization and replication.
Such processes are not easily (if at all) automated, and rarely is the same level of security and permissions available in the local identity store as is available in the database. What you generally end up with is a very loose “allow/deny” set of permissions on the load balancing device that opens the door for exploitation, along with caching of credentials that can lead to unauthorized access to the data source.

This also leads to potential security risks from attempting to apply to SQL connections some of the same optimization techniques offered by application delivery solutions for TCP connections. For example, TCP multiplexing (sharing connections) is a common means of reusing web and application server connections to reduce latency (by eliminating the overhead associated with opening and closing TCP connections). Similar techniques at the database layer have been used by application servers for many years; connection pooling is not uncommon and is essentially duplicated at the application delivery tier through features like SQL multiplexing. Both connection pooling and SQL multiplexing incur security risks, as shared connections require shared credentials. So either every access to the database uses the same credentials (a significant negative when considering the loss of an audit trail), or we return to managing duplicate sets of credentials – one set at the application delivery tier and another at the database – which, as noted earlier, incurs additional management and security risks.

YOU CAN’T WIN FOR LOSING

Ultimately the decision to load balance SQL must be a combination of business and operational requirements. Many organizations successfully leverage load balancing of SQL as a means to achieve very high scale.
Generally speaking, the resulting solutions – such as those often touted by eBay – are based on sound architectural principles such as sharding, are designed as a strategic solution rather than a tactical response to operational failures, and rarely involve inspection of inline SQL commands. Rather, they are based on the ability to discern which database should be accessed given the function being invoked or the type of data being accessed, and then use a traditional database connection to connect to the appropriate database. This does not preclude the use of application delivery solutions as part of such an architecture, but rather indicates a need to collaborate across the various application delivery and infrastructure tiers to determine a strategy most likely to maintain high availability, scalability, and security across the entire architecture.

Load balancing SQL can be an effective means of addressing database scalability, but it should be approached with an eye toward its potential impact on security and operational management.

Related:
- What are the pros and cons to keeping SQL in Stored Procs versus Code
- Mission Impossible: Stateful Cloud Failover
- Infrastructure Scalability Pattern: Sharding Streams
- The Real News is Not that Facebook Serves Up 1 Trillion Pages a Month…
- SQL injection – past, present and future
- True DDoS Stories: SSL Connection Flood
- Why Layer 7 Load Balancing Doesn’t Suck
- Web App Performance: Think 1990s

SSL Read Error
I have a strange problem that I'm trying to sort out. I have a vendor (Mandrill) that is POSTing a webhook to a site that I have sitting behind my BIG-IP. The BIG-IP is managing the certificate for the client; there is no server-side cert. This system supports several vendors posting to the same site. This one is slightly different in that it is posting a JSON payload as an encoded URL form field versus just a JSON post. Anyway, the vendor keeps failing on "SSL read: error:00000000:lib(0):func(0):reason(0), errno 104". Since the F5 is hosting the SSL, and since I've tried everything else, I've sort of run out of ideas beyond posting here and looking for some ideas on where to look. If I download the content and manually post with curl, it works. I've seen this error elsewhere; I'm wondering if there's something I need to allow for given their sending system?

ASM_REQUEST_BLOCKING and email notification
I am trying to send an email notification directly from the ASM when the blocking response page is presented. There is a post similar to this which I now cannot find, but it seemed geared more towards sending an snmp trap rather than sending an email notification.

Background and setup info:
- Big-IP version: 10.2.0 HF2
- ASM SMTP options configured
- Using ASM security policy with "Trigger ASM iRule Event" checked
- ASM iRule assigned as a Virtual Server resource
- ASM iRule name: ASM_iRule_app1
- ASM iRule content:

    when ASM_REQUEST_BLOCKING {
        log local0. "ASM_BLOCK_app1 - Request for Support ID: ts.request.id has been blocked"
    }

- /config/user_alert.conf entry information:

    alert ASM_BLOCK_app1 "ASM_BLOCK_APP1" {
        emailtoaddress="user1@domain.com,pager2@site.com"
        fromaddress="ASM_ALERTS"
        body="The ASM Blocking response page was just presented for an app1page request"
    }

Questions:
1. Is an "snmptrap OID=" line required in the user_alert.conf file for each alert created? Based on the Solutions articles I've found, that appears to be the case. (I would like to send an email alert without creating an snmptrap message.)
2. How can I add the Support ID to both the /var/log/ltm entry and the email that is sent by the alert daemon? (My thought is that I can add ". ts.request.id" to the end of the "body" line in the user_alert.conf entry.)
3. Has anyone successfully implemented something similar?
4. Does anyone know if this has been requested as a feature in a future release, so that email notifications can be configured from the web UI when the blocking response page is presented?

ASM & Splunk integration
Hi, I have installed and configured Splunk for F5 and am able to get LTM self-IP, source-IP, etc. logs on the Splunk server. Could you kindly provide any document or help on integrating ASM with Splunk? Does it require an iRule to be configured on the ASM? Thank you in advance!

Access policy evaluation is already in progress for your current session.
I have an access policy to provide single sign-on amongst a set of Windows (NTLM) authenticated web sites. I have set up a 'Logout URI Include' in the 'Configurations' section of the Access Profile. For the most part logging in and out works quite well. Occasionally, however, logging out throws up some problems. The following events usually do it:
- Hitting the logout URI when it takes longer than the timeout value to display the page, then browsing to another page.
- Staying on the logout page for a long time, then trying to visit the site again.
- Starting to hit other pages on the site during the logout timeout period.

The following message is displayed by the APM. It's not terribly friendly, and although clicking on the link will let you log in again, it will create unnecessary APM sessions:

    Access policy evaluation is already in progress for your current session. You may see this message, if you are using a different browser tab than the one where you started the access policy initially. Please continue to finish your access policy in the previous browser tab, and close this current window immediately. If you have reached to this message due to some other error, click here for creating a new session.

Has anyone got any suggestions on why this is happening, where I can find more info on problems with APM sessions and logging out, or how to avoid this problem?

Regards,
Darryl

kerberos and ntlm authentication using APM
Hi, I have set up the SharePoint 2010 iApp using NTLM authentication and it is working well (using the F5 login page). However, I now have a requirement to use Kerberos authentication as well as NTLM. In effect, if Kerberos is not present, then NTLM should be used as the default. Another requirement is that if a user is already logged into their Windows 7 workstation, their credentials should be silently passed to the F5 to allow Kerberos authentication "transparently", without the user having to see a login page.

Currently I have read many documents, but settled on the "Access Policy Manager, Single Sign-On Configuration Guide" for v11.3 (HF3). This details the NTLM setup nicely, and also a "client based certificate" setup using Kerberos. Whilst this is instructive, it does not actually help, as my scenario does not involve client-side certificates (unless I am mistaken). I have created a Kerberos SSO config and am at the stage of editing the access policy, but it is at this point where I seem to have a lot of choices and not much documentation.

Has anyone done this already who could offer me any pointers? As a first step, I would like to just get Kerberos SSO working; then I could work on getting both NTLM and Kerberos. Any links to documentation, or even better a similar example, would be extremely appreciated.

thanks
Sc0tt....

ASM response logging rate limit
I created an ASM policy recently, and it works as expected with one exception. In the logging, on many of the requests, when you click on HTTP Response you see a message that states "Response rate limit was reached" or "Content type is not supported for response logging". I can't seem to find any information anywhere about these messages. Has anyone here seen either of these messages in ASM? Any tips would be appreciated. My system is a BIG-IP 3900 running version 11.2.1 (Build 1148.0). Thanks

Order of operations
I'm looking at deploying ASM in our environment, and we are pretty heavy on using iRules for our virtual servers. I'm looking for best practice on deploying ASM with iRules... Is the best practice to not define a default pool for the HTTP class, so that the HTTP traffic will flow through the ASM module and, if allowed, then through the iRule, which would forward traffic as configured? Is it best practice to enable ASM in the iRule via the ASM::enable command? Or "it depends"? :) Thanks in advance for your advice.

Big-IP ASM and websockets
Hi, I'm trying to let WebSocket (ws://) connections run through ASM; the backend application is based on socket.io/node.js. It seems that connections are falling back to xhr-polling, which means that the WebSocket connection couldn't initialize properly. Does anybody have experience with WebSockets on ASM?

Regards,
Jo