20 Lines or Less #1
Yesterday I got an idea for what I think will be a cool new series that I wanted to bring to the community via my blog. I call it "20 Lines or Less". My thought is to pose a simple question: "What can you do via an iRule in 20 lines or less?". Each week I'll find some cool examples of iRules doing fun things in less than 21 lines of code, not counting white space or comments, round them up, and post them here. Not only will this give the community lots of cool examples of what iRules can do with relative ease, but I'm hoping it will continue to show just how flexible and light-weight this technology is - not to mention just plain cool.

I invite you to follow along, learn what you can and please, if you have suggestions, contributions, or feedback of any kind, don't hesitate to comment, email, IM, whatever. You know how to get a hold of me...please do. ;) I'd love to have a member-contributed version of this once a month or quarter or ... whatever, if you guys start feeding me your cool, short iRules.

Ok, so without further ado, here we go. The inaugural edition of 20 Lines or Less. For this first edition I wanted to highlight some of the things that have already been contributed by the awesome community here at DevCentral. So I pulled up the Code Share and started reading. I was quite happy to see that I couldn't even get halfway through the list of awesome iRule contributions before I found 5 entries that were neat and under 20 lines (these are actually almost all under 10 lines of code - wow!). Kudos to the contributors. I'll grab another bunch next week to keep highlighting what we've got already!

Cipher Strength Pool Selection

Ever want to check the type of encryption your users are using before allowing them into your most secure content? Here's your solution.

when HTTP_REQUEST {
   log local0. "[IP::remote_addr]: SSL cipher strength is [SSL::cipher bits]"
   if { [SSL::cipher bits] < 128 }{
      pool weak_encryption_pool
   } else {
      pool strong_encryption_pool
   }
}

Clone Pool Based On URI

Need to clone some of your traffic to a second pool, based on the incoming URI? Here you go...

when HTTP_REQUEST {
   if { [HTTP::uri] starts_with "/clone_me" } {
      pool real_pool
      clone pool clone_pool
   } else {
      pool real_pool
   }
}

Cache No POST

Have you been looking for a way to avoid sending those POST responses to your RAMCache module? You're welcome.

when HTTP_REQUEST {
   if { [HTTP::method] equals "POST" } {
      CACHE::disable
   } else {
      CACHE::enable
   }
}

Access Control Based on IP

Here's a great example of blocking unwelcome IP addresses from accessing your network and only allowing those client IPs that you have deemed trusted.

when CLIENT_ACCEPTED {
   if { [matchclass [IP::client_addr] equals $::trustedAddresses] }{
      #Uncomment the line below to turn on logging.
      #log local0. "Valid client IP: [IP::client_addr] - forwarding traffic"
      forward
   } else {
      #Uncomment the line below to turn on logging.
      #log local0. "Invalid client IP: [IP::client_addr] - discarding"
      discard
   }
}

Content Type Tracking

If you're looking to keep track of the different types of content you're serving, this iRule can help in a big way.

# First, create a statistics profile named "ContentType" with the following entries:
#   HTML
#   Images
#   Scripts
#   Documents
#   Stylesheets
#   Other
# Now associate this statistics profile with the virtual server, then apply the following iRule.
# To view the results, go to Statistics -> Profiles - Statistics
when HTTP_RESPONSE {
   switch -glob [HTTP::header "Content-type"] {
      image/* { STATS::incr "ContentType" "Images" }
      text/html { STATS::incr "ContentType" "HTML" }
      text/css { STATS::incr "ContentType" "Stylesheets" }
      *javascript { STATS::incr "ContentType" "Scripts" }
      text/vbscript { STATS::incr "ContentType" "Scripts" }
      application/pdf { STATS::incr "ContentType" "Documents" }
      application/msword { STATS::incr "ContentType" "Documents" }
      application/*powerpoint { STATS::incr "ContentType" "Documents" }
      application/*excel { STATS::incr "ContentType" "Documents" }
      default { STATS::incr "ContentType" "Other" }
   }
}

There you have it, the first edition of "20 Lines or Less"! I hope you enjoyed it...I sure did. If you've got feedback or examples to be featured in future editions, let me know.

#Colin
Back to Basics: The Many Faces of Load Balancing Persistence

Finally! It all makes sense now! Thanks to cloud and the very generic "sticky sessions", many more people are aware of persistence as it relates to load balancing. It's a critical capability of load balancing without which stateful applications (which is most of them, including VDI, most web applications, and data analysis tools) would simply fail to scale. Persistence is, in general, like the many moods of Spock. They all look pretty much the same from the outside - ensure that a user, once connected, continues to be connected to the same application instance to ensure access to whatever state is stored in that instance. But though they act the same (and Spock's expression appears the same) deep down, where it counts, persistence is very different depending on how it's implemented. It requires different processing, different inspection, different data, even. Understanding these differences is important because each one has a different impact on performance.

The Many Faces of Persistence

There are several industry de facto standard types of persistence: simple, SSL, and cookie. Then there are more advanced forms of persistence: SIP, WTS, Universal and Hash. Generally speaking, the de facto standard types of persistence are applicable for use with just about any web application. The more advanced forms of persistence are specific to a protocol or rely on a capability that is not necessarily standardized across load balancing services. Without further ado, let's dive in!

Simple Persistence

Simple persistence is generally based on network characteristics, like source IP address. It can also include the destination port, to give the load balancer a bit more capacity in terms of simultaneously supported applications. Best practices avoid simple persistence to avoid recurrence of the mega-proxy problem, which had a tendency to overwhelm application instances. Network load balancing uses a form of simple persistence.

SSL Session ID Persistence

SSL Session ID persistence became necessary when SSL was broadly accepted as the de facto means of securing traffic in flight for web applications. Because SSL sessions need to be established and are very much tied to a session between client and server, failing to "stick" SSL-secured sessions results in renegotiation of the session, which takes a noticeable amount of time and annoys end-users. To avoid unnecessary renegotiation, load balancers use the SSL Session ID to ensure sessions are properly routed to the application instance to which they first connected.

Cookie Persistence

Cookie persistence is a technique invented by F5 (shameless plug) that uses the HTTP cookie header to persist connections across a session. Most application servers insert a session id into responses that is used by developers to access data stored in the server session (shopping carts, etc...). This value is used by load balancing services to enable persistence. This technique avoids the issues associated with simple persistence because the session id is unique.

Universal Persistence

Universal persistence is the use of any piece of data (network, application protocol, payload) to persist a session. This technique requires the load balancer to be able to inspect and ultimately extract any piece of data from a request or response. This technique is the basis for application-specific persistence solutions addressing popular applications like SIP, WTS, and more recently, VMware View.
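As a concrete illustration, here is a minimal universal persistence sketch as an iRule. The header name and the 1800-second timeout are assumptions made for this example, not something prescribed by the article; any value the load balancer can extract would work the same way.

when HTTP_REQUEST {
   # Persist on a hypothetical application session header; any
   # extractable value (header, payload field, etc.) could be used.
   if { [HTTP::header exists "X-Session-Id"] } {
      persist uie [HTTP::header "X-Session-Id"] 1800
   }
}

The same pattern, with a different extraction expression, is what the productized application-specific profiles discussed next replace.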
SIP, WTS, Username Persistence

Session Initiation Protocol (SIP) and Windows Terminal Server (WTS) persistence are application-specific persistence techniques that use data unique to a session to persist connections. Username persistence is a similar technique designed to address the needs of VDI - specifically VMware View solutions - in which sessions are persisted (as one might expect) based on username. When a type of persistence becomes very commonly used, it is often moved from being a customized, universal persistence implementation to a native, productized persistence profile. This improves performance and scalability by removing the need to inspect and extract the values used to persist sessions from the data flow, and results in an application-specific persistence type, such as SIP or WTS.

Hash Persistence

Hash persistence is the use of multiple values within a request to enable persistence. To avoid problems with simple persistence, for example, a hash value may be created based on Source IP, Destination IP, and Destination Port. While not necessarily unique to every session, this technique results in a more even distribution of load across servers. Non-unique value-based persistence techniques (simple, hash) are generally used with stateless applications or streaming content (video, audio) as a means to more evenly distribute load. Unique value-based persistence techniques (universal, application-specific, SSL ID) are generally used with stateful applications that depend on the client being connected to the same application instance through the session's life. Cookie persistence can be used with both techniques, provided the application is web based and uses HTTP headers for each request (Web Sockets breaks this technique). A minimal sketch of the hash technique follows.
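For illustration only, a hash persistence sketch as an iRule; the three-tuple key mirrors the example above, while the crc32 hashing and the 600-second timeout are assumptions made for the sketch:

when CLIENT_ACCEPTED {
   # Hash the source IP, destination IP and destination port into a
   # single key; not unique per session, but evenly distributed.
   persist uie [crc32 "[IP::client_addr][IP::local_addr][TCP::local_port]"] 600
}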
Introducing a RESTful interface for iControl

iControl isn’t just SOAP anymore… No, iControl isn’t getting lazy. While taking it easy is an important part of life, I’m talking about the other kind of REST. REST, or “REpresentational State Transfer” for you technically inclined, is a style of architectural principles with which you can design web services that focus on a system’s resources. It also defines how resource states are addressed and transferred over the network. REST is really a “style” of getting and setting resources and doesn’t define the underlying communications. Most implementations out there make use of HTTP and JSON as a content format, which is what we’ve chosen to do as well. There are plenty of articles on the web that compare and contrast SOAP and REST, so I won’t get into those here. I’m also not going to go really deep into the principles of REST, as you can find dozens of those easily with a web search. In this article, I’ll discuss how we’ve chosen to implement our REST interface for iControl and give you some examples on how to use it. Oh, and don’t get any ideas that our SOAP based interface is going anywhere. We are creating our REST interfaces as an alternate method for performing automation and monitoring. We don’t currently have plans on ceasing development on our SOAP interface.

iControl-REST

The REST interface for iControl was introduced in BIG-IP version 11.4. For this release, we are considering the feature as “Early Access”. Call it beta or whatever you want. But what that really means is that it will change in our next release. We are using this release for feedback from the users out there to find out what works and what doesn’t. Several key features are not implemented yet (for example versioning) which we have targeted for an upcoming product release when we finalize the implementation. Ok, with that said, now we can get into the details. iControl-REST, like its SOAP counterpart, is implemented on HTTPS and uses the same authentication and authorization roles for user access. We support the following HTTP commands:

GET - for retrieving (ie. querying the status of a Pool)
POST - for creating (ie. creating a Pool Member)
PUT - for updating (ie. changing the load balancing method on a Pool)
DELETE - for deleting (ie. deleting a Virtual Server)

And the format of the requests and responses is JSON.

Starting the iControl REST Service (icrd)

Run the “modify sys service icrd” TMSH command to add and start the iControl REST service. Notice the nice “EA” warning.

root@(BIG-IP1)(…)(tmos) # modify sys service icrd add
WARNING: This early-access feature comes with minimal documentation and testing. Version control is not implemented; therefore any scripts that you write using this API version may not work with subsequent releases.
root@(BIG-IP1)(…)(tmos) #

Once the service is running, you can use the “show sys service”, “stop sys service” and “start sys service” TMSH commands to monitor and control the status of the iControl REST service.

Writing Your First Script

Since iControl-REST is just using HTTPS, you can use any scripting technology your heart desires. For this article, we’ll use the command line tool cURL. It’s cross platform, so you should be able to wrap a bash script for Unix, Mac, etc. or PowerShell for Windows around it very easily. Use the “-u” parameter to pass in the user credentials, the “-X” parameter to specify the HTTP method, and the URI for the resource you wish to access.
curl -k -u user:pass -X http-method uri

The URI format is as follows:

Module URI
https://management-ip/mgmt/tm/module
This accesses all of the sub-modules and/or components under the given module (ltm, gtm, etc).

Sub-module URI
https://management-ip/mgmt/tm/module/sub-module
This accesses all of the sub-modules and/or components under the given sub-module.

Component URI
https://management-ip/mgmt/tm/module[/sub-module]/component
This accesses the details about the given component.

The tmsh Traffic Management Shell Reference documents the hierarchy of modules and components, and identifies the details of each component.

Example

This curl command will query all of the information about the ltm module.

$ curl -k -u user:pass -H "Content-Type: application/json" -X GET https://bigip_ip/mgmt/tm/ltm
{
    "currentItemCount": 22,
    "items": [
        { "reference": { "link": "https://bigip_ip/mgmt/tm/ltm/auth" } },
        { "reference": { "link": "https://bigip_ip/mgmt/tm/ltm/data-group" } },
        { "reference": { "link": "https://bigip_ip/mgmt/tm/ltm/dns" } }
        ...
    ],
    "kind": "tm:ltm:ltmstate",
    "pageIndex": 1,
    "partition": "/Common/",
    "selfLink": "https://bigip-ip/mgmt/tm/ltm",
    "startIndex": 1,
    "totalItems": 22,
    "totalPages": 1
}
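To sketch the other direction (creation), a POST with a JSON body follows the same pattern. The pool name, member, and JSON field names below are assumptions made for illustration; since this release is Early Access, verify the exact attribute names against the user guide before relying on them.

curl -k -u user:pass -H "Content-Type: application/json" -X POST \
    -d '{"name":"example_pool","members":[{"name":"10.1.1.10:80","address":"10.1.1.10"}]}' \
    https://bigip_ip/mgmt/tm/ltm/pool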
User Guide

For this release, we have provided a User Guide to assist in getting started with using the new REST interface. It can be downloaded from this link: iControlRest-UserGuide.pdf. We would love to hear your feedback on using iControl-REST. Any and all comments and questions should be posted to the iControl group.

-Joe

iCall - All New Event-Based Automation System

The community has long requested the ability to affect change to the BIG-IP configuration by some external factor, be it an iRules trigger, a process or system failure event, or even monitor results. Well, rest easy folks, among the many features arriving with BIG-IP version 11.4 is iCall, a completely new event-based granular internal automation system. iCall gives you comprehensive control over BIG-IP configuration, leveraging the TMSH control plane and seamlessly integrating the data plane as well.

Components

The iCall system has three components: events, handlers, and scripts. At a high level, an event is "the message," some named object that has context (key value pairs), scope (pool, virtual, etc), origin (daemon, iRules), and a timestamp. Events occur when specific, configurable, pre-defined conditions are met. A handler initiates a script and is the decision mechanism for event data. There are three types of handlers:

Triggered - reacts to a specific event
Periodic - reacts to a timer
Perpetual - runs under the control of a daemon

Finally, there are scripts. Scripts perform the action as a result of event and handler. The scripts are TMSH Tcl scripts organized under the /sys icall section of the system.

Flow

Basic flows for iCall configurations start with an event followed by a handler kicking off a script. A more complex example might start with a periodic handler that kicks off a script that generates an event that another handler picks up and generates another script. These flows are shown in the image below.

A Brief Example

We'll release a few tech tips on the development aspect of iCall in the coming weeks, but in the interim here's a prime use case. Often an event will happen during which an operator will want to grab a tcpdump of the interesting traffic, but the reaction time isn't quick enough. Enter iCall! First, configure an alert in /config/user_alert.conf for a pool member down:

alert local-http-10-2-80-1-80-DOWN "Pool /Common/my_pool member /Common/10.2.80.1:80 monitor status down" {
   exec command="tmsh generate sys icall event tcpdump context { { name ip value 10.2.80.1 } { name port value 80 } { name vlan value internal } { name count value 20 } }"
}

You'll need one of these stanzas for each pool member you want to monitor in this way. Next, create the iCall script:

modify script tcpdump {
    app-service none
    definition {
        set date [clock format [clock seconds] -format "%Y%m%d%H%M%S"]
        foreach var { ip port count vlan } {
            set $var $EVENT::context($var)
        }
        exec tcpdump -ni $vlan -s0 -w /var/tmp/${ip}_${port}-${date}.pcap -c $count host $ip and port $port
    }
    description none
    events none
}

Finally, create the iCall handler to trigger the script:

sys icall handler triggered tcpdump {
    script tcpdump
    subscriptions {
        tcpdump {
            event-name tcpdump
        }
    }
}

Ready. Set. Go! That's one example of a triggered handler.
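For contrast, here is a periodic handler sketch, assuming a script named pool_check already exists; the handler name, script name, and 300-second interval are all made up for this example, and the exact property names should be verified against the iCall wiki:

sys icall handler periodic pool_check {
    # Run the pool_check script every five minutes.
    interval 300
    script pool_check
}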
We have many more examples of perpetual and periodic handlers in a variety of use cases in the newly created iCall wiki, and the codeshare is pre-populated with several use cases for your immediate use and testing. Get ready to jump aboard the iCall automation/orchestration train!

The Challenges of SQL Load Balancing

#infosec #iam Load balancing databases is fraught with many operational and business challenges. While cloud computing has brought to the forefront of our attention the ability to scale through duplication, i.e. horizontal scaling or “scale out” strategies, this strategy tends to run into challenges the deeper into the application architecture you go. Working well at the web and application tiers, a duplicative strategy tends to fall on its face when applied to the database tier. Concerns over consistency abound, with many simply choosing to throw out the concept of consistency and adopting instead an “eventually consistent” stance, in which it is assumed that data in a distributed database system will eventually become consistent and cause minimal disruption to application and business processes. Some argue that eventual consistency is not “good enough” and cite additional concerns with respect to the failure of such strategies to adequately address failures. Thus there are a number of vendors, open source groups, and pundits who spend time attempting to address both components. The result is database load balancing solutions.

For the most part such solutions are effective. They leverage master-slave deployments – typically used to address failure and which can automatically replicate data between instances (with varying levels of success when distributed across the Internet) – and attempt to intelligently distribute SQL-bound queries across two or more database systems. The most successful of these architectures is the read-write separation strategy, in which all SQL transactions deemed “read-only” are routed to one database while all “write” focused transactions are distributed to another. Such foundational separation allows for higher-layer architectures to be implemented, such as geographically based read distribution, in which read-only transactions are further distributed by geographically dispersed database instances, all of which act ultimately as “slaves” to the single, master database which processes all write-focused transactions. This results in an eventually consistent architecture, but one which manages to mitigate the disruptive aspects of eventually consistent architectures by ensuring the most important transactions – write operations – are, in fact, consistent. Even so, there are issues, particularly with respect to security.
MEDIATION inside the APPLICATION TIERS

Generally speaking, mediating solutions are a good thing - when they're external to the application infrastructure itself, i.e. the traditional three tiers of an application. The problem with mediation inside the application tiers, particularly at the data layer, is the same for infrastructure as it is for software solutions: credential management. See, databases maintain their own set of users, roles, and permissions. Even as applications have been able to move toward a more shared set of identity stores, databases have not. This is in part due to the nature of data security and the need for granular permission structures down to the cell, in some cases, and including transactional security that allows some to update, delete, or insert while others may be granted a different subset of permissions. But more difficult to overcome is the tight coupling of identity to connection for databases. With web protocols like HTTP, identity is carried along at the protocol level. This means it can be transient across connections, because it is often stuffed into an HTTP header via a cookie or stored server-side in a session - again, not tied to connection but to identifying information. At the database layer, identity is tightly coupled to the connection. The connection itself carries along the credentials with which it was opened. This gives rise to problems for mediating solutions. Not just load balancers, but software solutions such as ESB (enterprise service bus) and EII (enterprise information integration) styled solutions. Any device or software which attempts to aggregate database access for any purpose eventually runs into the same problem: credential management. This is particularly challenging for load balancing when applied to databases.

LOAD BALANCING SQL

To understand the challenges with load balancing SQL you need to remember that there are essentially two models of load balancing: transport and application layer. At the transport layer, i.e. TCP, connections are only temporarily managed by the load balancing device. The initial connection is "caught" by the load balancer and a decision is made, based on transport layer variables, where it should be directed. Thereafter, for the most part, there is no interaction at the load balancer with the connection, other than to forward it on to the previously selected node. At the application layer the load balancing device terminates the connection and interacts with every exchange. This affords the load balancing device the opportunity to inspect the actual data or application layer protocol metadata in order to determine where the request should be sent.

Load balancing SQL at the transport layer is less problematic than at the application layer, yet it is at the application layer that the most value is derived from database load balancing implementations. That's because it is at the application layer where distribution based on "read" or "write" operations can be made. But to accomplish this requires that the SQL be inline, that is, that the SQL being executed is actually included in the code and then executed via a connection to the database. If your application uses stored procedures, then this method will not work for you. It is important to note that many packaged enterprise applications rely upon stored procedures, and are thus not able to leverage load balancing as a scaling option. Depending on your app or how your organization has agreed to protect your data will determine which of these methods are used to access your databases. The use of inline SQL affords the developer greater freedom at the cost of security, increased programming (to prevent the inherent security risks), difficulty in optimizing data and indices to adapt to changes in volume of data, and deployment burdens. However, there is lively debate on the values of both access methods and how to overcome the inherent risks. The OWASP group has identified injection attacks as the easiest exploitation with the most damaging impact. This also requires that the load balancing service parse MySQL or T-SQL (the Microsoft Transact Structured Query Language). Databases, of course, are designed to parse these string-based commands and are optimized to do so. Load balancing services are generally not designed to parse these languages and, depending on the implementation of their underlying parsing capabilities, may actually incur significant performance penalties to do so.
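To make that parsing burden concrete, here is a deliberately naive iRule sketch of read/write separation for MySQL. Everything about it is an assumption made for illustration: it ignores the MySQL login handshake (in which the server speaks first), assumes every collected client packet is a COM_QUERY, and keys only on a leading SELECT, so it is nowhere near production-grade - which is exactly the point about the complexity real products must absorb.

when CLIENT_ACCEPTED {
    TCP::collect
}
when CLIENT_DATA {
    # MySQL framing: 3-byte length, 1-byte sequence number, then the
    # command byte; 0x03 (COM_QUERY) is followed by the query text.
    binary scan [TCP::payload] @4c cmd
    if { $cmd == 3 } {
        set query [string trimleft [string tolower [string range [TCP::payload] 5 end]]]
        if { [string match "select*" $query] } {
            pool read_pool
        } else {
            pool write_pool
        }
    }
    TCP::release
    TCP::collect
}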
Regardless of those issues, still there are an increasing number of organizations who view SQL load balancing as a means to achieve a more scalable data tier. Which brings us back to the challenge of managing credentials.

MANAGING CREDENTIALS

Many solutions attempt to address the issue of credential management by simply duplicating credentials locally; that is, they create a local identity store that can be used to authenticate requests against the database. Ostensibly the credentials match those in the database (or identity store used by the database, such as can be configured for MSSQL) and are kept in sync. This obviously poses an operational challenge similar to that of any distributed system: synchronization and replication. Such processes are not easily (if at all) automated, and rarely is the same level of security and permissions available on the local identity store as is available in the database. What you generally end up with is a very loose "allow/deny" set of permissions on the load balancing device that actually opens the door for exploitation, as well as caching of credentials that can lead to unauthorized access to the data source. This also leads to potential security risks from attempting to apply some of the same optimization techniques to SQL connections as are offered by application delivery solutions for TCP connections. For example, TCP multiplexing (sharing connections) is a common means of reusing web and application server connections to reduce latency (by eliminating the overhead associated with opening and closing TCP connections). Similar techniques at the database layer have been used by application servers for many years; connection pooling is not uncommon and is essentially duplicated at the application delivery tier through features like SQL multiplexing. Both connection pooling and SQL multiplexing incur security risks, as shared connections require shared credentials. So either every access to the database uses the same credentials (a significant negative when considering the loss of an audit trail) or we return to managing duplicate sets of credentials - one set at the application delivery tier and another at the database, which as noted earlier incurs additional management and security risks.

YOU CAN'T WIN FOR LOSING

Ultimately the decision to load balance SQL must be a combination of business and operational requirements. Many organizations successfully leverage load balancing of SQL as a means to achieve very high scale. Generally speaking, the resulting solutions - such as those often touted by e-Bay - are based on sound architectural principles such as sharding and are designed as a strategic solution, not a tactical response to operational failures, and they rarely involve inspection of inline SQL commands. Rather they are based on the ability to discern which database should be accessed given the function being invoked or type of data being accessed, and then use a traditional database connection to connect to the appropriate database. This does not preclude the use of application delivery solutions as part of such an architecture, but rather indicates a need to collaborate across the various application delivery and infrastructure tiers to determine a strategy most likely to maintain high availability, scalability, and security across the entire architecture. Load balancing SQL can be an effective means of addressing database scalability, but it should be approached with an eye toward its potential impact on security and operational management.
Related reading:

What are the pros and cons to keeping SQL in Stored Procs versus Code
Mission Impossible: Stateful Cloud Failover
Infrastructure Scalability Pattern: Sharding Streams
The Real News is Not that Facebook Serves Up 1 Trillion Pages a Month…
SQL injection – past, present and future
True DDoS Stories: SSL Connection Flood
Why Layer 7 Load Balancing Doesn’t Suck
Web App Performance: Think 1990s.
20 Lines or Less #64: VIP to VIP redirection, URL separation and Redirect Rewriting

What could you do with your code in 20 Lines or Less? That's the question I like to ask for the DevCentral community, and every time I go looking to find cool new examples that show just how flexible and powerful iRules can be without getting in over your head. If there's one thing I learned on the road recently, traipsing around to some awesome user groups, it's that there's nearly no end to the uses for iRules in the real world. I've seen things ranging from the simplest redirect rules all the way up to some pretty gnarly complex implementations, and everything in between. It's awesome to see iRules of many incarnations doing their thing, and it continues to remind me just how many ways people are using this technology. To that end, this week brings another set of iRules examples showcasing a few more ways in which the community is putting iRules to work. Thanks to the wicked community doing that thing that you all do, we've got Virtual to Virtual redirection, rewriting mid redirection, as well as separation of URL/URI content. Not only that, but this week examples in less than 20 lines wasn't enough of a challenge. Instead I'm delivering 3 examples in less than 10 lines of code each. It's the super minified version of the 20LoL, for those efficiency minded folks. Let's dig in:

Using an iRule to redirect traffic to different Virtual Servers

iRules and redirection is a match that has been made time and time again for years. A new twist on this ability, however, came about with the capability for iRules to direct traffic from one Virtual Server to another. This is often referred to as VIP targeting VIP. This can be useful for many reasons, not the least of which is toggling profiles based on request info, which is something I've had a bit of experience with lately for some Web Accelerator testing…but more about that later. User wbbigdave posed this question about directing traffic from one VS to another to create the ultimate VIP to rule them all. Fortunately for his VIP ruling needs this is completely possible, and Richard gave a simple example of how.

when CLIENT_ACCEPTED {
   if {[TCP::remote_port] == 80} {
      virtual HTTP_virtual
   }
}

Yet another redirect question

User drizzle is looking for a way to do some URI rewriting in the midst of a redirection. The situation is that they need to respond to a user with the requested URI modified to replace a given portion of the URI with a separate string. This is easy to do in an iRule, of course, and Michael Yates was kind enough to provide a great example of exactly how this works.

when HTTP_REQUEST {
   switch -glob [string tolower [HTTP::host][HTTP::uri]] {
      "some.host.com/singleuri*" {
         HTTP::respond 301 Location "https://some.other.host.com[string map {"/singleuri" "/anotheruri"} [HTTP::uri]]"
      }
   }
}

Looking to separate URL from URI

In this final example user Snowman (yeah…awesome name, right?) was looking for a way to split up and examine different portions of an HTTP URI. There are a few options to go about this depending on whether or not you want to inspect the query string or just the HTTP path, etc. After a bit of tweaking it seems that they were able to come up with a solution that worked for them, using lindex and split to effectively tokenize the HTTP::path variable, and then make decisions based on that list.
Very handy stuff to keep around, as this is broadly useful logic.

when HTTP_REQUEST {
   set host [HTTP::host]
   set uri_list [split [string tolower [HTTP::path]] /]
   if { [lindex $uri_list 2] equals "admin"} {
      HTTP::respond 302 Location "https://$host/user"
   }
}
BIG-IP LTM VE: Transfer your iRules in style with the iRule Editor

The new LTM VE has opened up the possibilities for writing, testing and deploying iRules in a big way. It’s easier than ever to get a test environment set up in which you can break things (er, develop) to your heart’s content. This is fantastic news for us iRulers that want to be doing the newest, coolest stuff without having to worry about breaking a production system. That’s all well and good, but what the heck do you do to get all of your current stuff onto your test system? There are several options, ranging from copy and paste (shudder) to actual config copies and the like, which all work fine. Assuming all you’re looking for, though, is to transfer over your iRules, like me, the easiest way I’ve found is to use the iRule editor’s export and import features. It makes it literally a few clicks and super easy to get back up and running in the new environment.

First, log into your existing LTM system with your iRule editor (you are using the editor, right? Of course you are…just making sure). You’ll see a screen something like this (right) with a list of a bagillionty iRules on the left and their cool, color coded awesomeness on the right. You can go through and select iRules and start moving them manually, but there’s really no need. All you need to do is go up to the File –> Archive –> Export option and let it do its magic. All it’s doing is saving text files to your local system to archive off all of your iRuley goodness. Once that’s done, you can then spin up your new LTM VE and get logged in via the iRule editor over there. Connect via the iRule editor, and go to File –> Archive –> Import, shown below. Once you choose the import option you’ll start seeing your iRules popping up in the left-hand column, just like you’re used to. This will take a minute depending on how many iRules you have archived (okay, so I may have more than a few iRules in my collection…) but it’s generally pretty snappy.

One important thing to note at this point, however, is that all of your iRules are bolded with an asterisk next to them. This means they are not saved in their current state on the LTM. If you exit at this point, you’ll still be iRuleless, and no one wants that. Luckily Joe thought of that when building the iRule editor, so all you need to do is select File –> Save All, and you’ll be most of the way home. I say most of the way because there will undoubtedly be some errors that you’ll need to clean up. These will be config based errors, like pools that used to exist on your old system and don’t now, etc. You can either go create the pools in the config or comment out those lines. I tend to try and keep my iRules as config agnostic as possible while testing things, so there aren’t a ton of these, but some of them always crop up. The editor makes these easy to spot and fix though. The name of the iRule that’s having a problem will stay bolded and any errors in that particular code will be called out (assuming you have that feature turned on) so you can pretty quickly spot them and fix them. This entire process took me about 15 minutes, including cleaning up the code in certain iRules to at least save properly on the new system, and I have a bunch of iRules, so that’s a pretty generous estimate. It really is quick, easy and painless to get your code onto an LTM VE and get hacking (er, coding). An added side benefit, but a cool one, is that you now have your iRules backed up locally.
Not only does this mean you’re double plus sure that they won’t be lost, but it means the next time you want to deploy them somewhere, all you have to do is import from the editor. So if you haven’t yet, go download your BIG-IP LTM VE and get started. I can’t recommend it enough. Also make sure to check out some of the really handy DC content that shows you how to tweak it for more interfaces, or Joe’s supremely helpful guide on how to use a single VM to run an entire client/LTM/server setup. Wicked cool stuff. Happy iRuling.

#Colin
Which TLS algorithm should I use?

TLS is vital to the internet, but there have been some TLS protocol level attacks lately. I spent a lot of time investigating all these attacks to find out what F5 should do. After the recent rash of attacks, I thought there was a lot of confusion and not a lot of easily accessible information available to the average system administrator who just wants to make sure his application and his customers are safe. This post will describe the attacks, help you to sort them out, and make a recommendation for BIG-IP users. The attack descriptions have been simplified quite a bit; reference links are included if you would like to dig deeper.

Ciphers in TLS

In 2001 the AES standard was established by the US NIST to replace the aging 56 bit DES algorithm. AES has fared very well over the years. It's a block cipher with 128, 192 and 256 bit key sizes. Even 128 bits is still considered strong enough for most uses; the latest attack from 2011 still requires 2^126.1 operations, which is close enough to brute force. But there's been a rash of attacks against the TLS protocol lately that have left many people wondering what cipher they should use. It's not that AES itself is a problem, it's that the specific mechanism of cipher block chaining (CBC) used by the TLS protocol has been weakened in a few different ways. CBC feeds each cipher block back onto the next block and ensures that similar blocks of plaintext are not encrypted into the same ciphertext. There's a great article on Wikipedia of electronic code book (ECB) vs other encryption modes that includes a description of cipher block chaining.

Side note: Most of these attacks attempt to retrieve an application authentication cookie. These attacks do not retrieve your RSA secret key, nor do they obtain the TLS session key; those are still kept secret. Once an attacker has a user's authentication cookie, he can impersonate that user until the cookie expires. Your application authentication cookie should have the HTTPOnly and Secure attributes set so that an attacker can't easily obtain it. If your app doesn't have those attributes set on your auth cookie, the BIG-IP can easily add them. The below attacks all assume these are set and the attacker can't get the cookies easily.
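As a sketch of how a BIG-IP can add those attributes with an iRule (applying them blindly to every response cookie is a simplification you would tune per application):

when HTTP_RESPONSE {
    # Mark every cookie the app sets as HttpOnly and Secure so scripts
    # and plain-HTTP requests cannot read or leak it.
    foreach cookieName [HTTP::cookie names] {
        HTTP::cookie httponly $cookieName enable
        HTTP::cookie secure $cookieName enable
    }
}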
BEAST Attack
F5 Projected Threat Level - Low

The BEAST attack from Thai Duong and Juliano Rizzo from 2011 demonstrated a previously known flaw in TLS that allows a chosen plaintext attack. An attacker who can capture your network traffic and get code into your browser (called a man in the browser or MITB) could force your browser to repeatedly encrypt the same information along with some chosen plaintext. The BEAST researchers showed that the attacker is able to recover secret information through careful analysis of the encrypted messages. Fortunately, it only affects TLS1.0, and the latest browsers have implemented a record splitting workaround. Even though the server isn't involved, the server side solution is to move up to TLS1.1 or 1.2, because these newer versions of the protocol initialize the cipher differently. F5 thinks this has a relatively low threat potential because of the complexity of the attack and because modern browsers now have workarounds. Users with old browsers may still be vulnerable though. References: Thai Duong's blog post, Tor project's analysis.

CRIME Attack
F5 Projected Threat Level - Low

The CRIME attack from 2012 was a similar attack from the same authors that also allows an attacker to decrypt plaintext. In this attack, the MITB sends many requests with small changes that are compressed before being encrypted. The attacker watches the traffic and derives information about the plaintext from the size of the compressed blocks. The solution is to disable TLS compression (note this is not HTTP content compression). If your client or server does not support TLS compression then this attack is not feasible. Most recent browsers have already disabled TLS compression. BIG-IP does not currently support TLS1.x compression, although HTTP compression is supported. F5 thinks this has a relatively low threat potential because of the complexity of the attack and because BIG-IP does not support TLS compression. Reference: ARS article.

Lucky 13 Attack
F5 Projected Threat Level - Low

The Lucky 13 attack (AlFardan and Patterson) from February 2013 includes a MITB, but also has an active listener who modifies the browser ciphertext and waits for the response from the server. By timing the response, the attacker learns which ciphertext blocks have correct padding. The attacker uses this information leak to eventually recover the plaintext of the message. I know of no server side workaround except to disable CBC mode. A BIG-IP uses hardware for TLS offload, and we believe it is significantly less vulnerable to timing attacks. F5 thinks this has a relatively low threat potential because of the complexity of the attack and because of the hardware acceleration of the BIG-IP. References: Paper, ARS article, F5 blog entry.

RC4 Attack
F5 Projected Threat Level - Low

Stream based ciphers like the older RC4 algorithm operate byte by byte and are immune to all of the above attacks. In the wake of the above attacks -- especially BEAST -- many people recommended using RC4. However, RC4 has numerous statistical biases that may result in leakage of plaintext. In March 2013, djb (Daniel J Bernstein of the University of Chicago) revealed that with more than 2^24 samples of ciphertext he could reliably statistically obtain the plaintext. With this attack, a MITB attacker would repeatedly send the same message until enough ciphertext was obtained. Strangely enough, while TLS renegotiation may help in the CBC attacks, in this case key renegotiation is bad, because the attacker wants to collect as many different ciphertexts of the same information as possible. I know of no solution to this attack except to disable RC4. F5 thinks this has a relatively low threat potential at the moment. This attack does have the potential to become practical. If it's ever seen in the wild, I would definitely recommend disabling RC4. Reference: djb's slides, ARS article.

TIME Attack
F5 Projected Threat Level - Medium

In mid March 2013, Tal Be'ery presented a refinement of the CRIME attack called TIME. CRIME worked against the compressed request, but TIME works against the compressed HTTP response. He also showed that a MITB could conduct the attack without a separate listener on the network, a significant improvement in the attack. Javascript in the browser measures the timing and infers the size of the HTTP response compressed block. Disabling HTTP compression will mitigate this attack. He recommends browser implementers make some changes to protect users. He also recommends using the X-Frame-Options header in your app. This can easily be added by a BIG-IP.
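A minimal sketch of adding that header with an iRule (SAMEORIGIN is an assumption; pick the policy your app actually needs):

when HTTP_RESPONSE {
    # Add clickjacking protection if the application did not set it.
    if { !([HTTP::header exists "X-Frame-Options"]) } {
        HTTP::header insert "X-Frame-Options" "SAMEORIGIN"
    }
}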
This appears to be the most practical of all the attacks listed above. It's possible that someday we could see this attack in the wild. Therefore, F5 currently considers this a medium threat. Reference: slides, blog.

To summarize:

Attack       BIG-IP Workaround
BEAST        Use TLS1.1 or TLS1.2
CRIME        Disable TLS compression (not enabled on BIG-IP)
Lucky13      Use server side hardware (default on BIG-IP) or disable CBC mode ciphers
RC4 attack   Disable RC4
TIME         Disable HTTP compression, enable X-Frame protection

There is no alternative to CBC mode in the TLS specification. RFC 5288 describes a newer mode called Galois Counter Mode (GCM) that should fix the problem, but very few browsers support AES-GCM.

So what cipher to use?

Disclaimer: This is speaking from my experience and what I've researched. F5 Networks doesn't usually make recommendations about ciphers. You should do your own research before changing defaults. If more clients supported it, my recommendation would be to use TLS1.1 or TLS1.2 AES-CBC mode until we have new modes like GCM. BIG-IP v11.x or later can be set to use TLS1.2 or TLS1.1 ciphers with a cipher string like "DEFAULT:!SSLv3:!TLSv1:!RC4". SOL13171 can be used to help you configure the BIG-IP. You'll want to test with your clients; many clients don't support the newer ciphers yet.
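As a sketch, applying that cipher string to an existing client SSL profile from tmsh might look like this (the profile name is made up; check SOL13171 for the exact procedure on your version):

tmsh modify ltm profile client-ssl my_clientssl_profile ciphers "DEFAULT:!SSLv3:!TLSv1:!RC4"

Existing connections aren't affected; new handshakes will negotiate from the narrowed cipher list.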
If you can't disable TLS1.0 because you need the client support, then you have to choose between demonstrably weak RC4 and TLS1.0 AES-CBC, which may be vulnerable to BEAST. Most browsers have implemented the mitigation for the theoretical BEAST attack. The attack against RC4 proves it to be weak, so my recommendation would be to select AES-CBC mode with "DEFAULT:!SSLv3:!RC4". Some popular websites have gone to RC4 over AES. I understand this choice, as a BEAST attack will take minutes to find your cookies, where an RC4 attack will take millions of messages before the entire message is cracked. However, the RC4 attack can only get better, and someone may figure out how to narrow that down and use fewer messages to obtain the cookies. The BEAST attack is mitigated in many browsers. At the moment, I can't recommend disabling HTTP compression to foil the TIME attack. If that attack were seen in the wild, I would definitely recommend disabling HTTP compression.

F5 ASM mitigations

Almost all of these attacks currently rely on a MITB. Usually an attacker gets a MITB through a cross site request forgery (CSRF) flaw in an application. The ASM module of BIG-IP has CSRF protection that mitigates this risk. ASM also has a domain cookie protection feature. If an attacker does obtain the authentication cookie, he must also obtain the rotating ASM cookie that contains a signature of all the other cookies. These features should make it significantly harder for an attacker to obtain cookies using any of the methods above.

Conclusion

I'd like to thank my generous F5 colleagues who helped me research these attacks. It's been a fascinating journey into crypto. You can be assured that F5 will continue to keep an eye on these attacks and will continue to work with browsers and researchers to implement safe and secure protocols.

Making Android SSL Work Correctly.

Connecting via #SSL in #Android. Correctly and securely. Mimic browser functionality with untrusted certs and hostname mismatches. #f5

Okay, for those of you who've been waiting, this is the blog. For those who haven't, welcome; we're going to talk about https connections (specifically) on Android (TM) from within a native app, connecting to sites that Android views as possible security risks. Easy like learning to drive if your face was in your navel. But I've tried to make the issues and the code as clean and clear as possible for you. Hope it is of some assistance. I'm going to take it slow and address all of the issues, even the seemingly unrelated ones. My goal with this blog is to help those who are just getting started on Android too, so I'll use a lot of headings, giving you the chance to skip sections you are already familiar with. Otherwise, grab a cup of coffee (or a spot of tea, or a glass of water, whatever suits you), sit back, and read. Feedback is, as always, welcome. I'm not one of the Android designers, and had to admit long ago I wasn't perfect, so anything to improve this blog is welcome.

Android Info

We all basically know what Android is, and if you've been developing for it for very long, you know the quirks. There are a few major ones that I'll mention up-front so simple gotchas and oddities later on don't catch you off guard when I discuss them. The first, and one that is very important to the general version of this solution, is pretty straight-forward. You cannot stop execution of code and wait for user input. Purists would argue with me, and I've even seen the core Android developers claim that the system supported modal dialogs because of some design features, but modal as in "stop and wait for input" Android does not even pretend to support. It is a cooperative multitasking environment, and you're not allowed to hog a ton of time idling while other apps are crying for cycles. If your app takes too long to do anything, Android will close it. If you try long-running things like database queries or network connections (important for our solution) on the main thread, Android will close the app. Remember that, because it comes up again later.

Ignoring the "cannot stop execution" part, and focusing on the "long-running processes" part, we are then forced to move all long-running things like queries and network connections out of the core application thread and into a separate one. Android did a great job of making this relatively painless. There is good documentation, everything works as expected, and there are several different ways to do it based upon your needs. You can just spawn a Java thread and let it go, you can use an AsyncTask, which allows limited interaction with the UI thread (meaning you can directly post updates relevant to the background task), and there are services. The thing is that both AsyncTask and Thread are stopped when your application is stopped. For some uses, that's okay. For most, it is not. If you want that long-running SSL communication task to complete before exiting, then a Service is your tool. Service has an interface you can use to send it commands, and when it finishes a command, it exits on its own. This gives you data consistency, even if the application is terminated for some reason (and trust me if you don't already know, apps are "terminated" for some pretty superfluous reasons).
In our code, this is the way we went, but the source I am introducing here works with any of these approaches, simply because it isn't the execution mechanism that matters, it is the SSL code that matters.

SSL Info

SSL has been around forever, and Java support for SSL has been too. Every time a new platform comes along, though, you get people asking how to implement SSL on the platform, and you get people suggesting various ways to disable SSL on the platform. This blog is not one of those. It is one way that you can make SSL act like it does in your browser, without compromising security.

Note: Security geek friends, the following is far from a definitive treatise on SSL, it is aimed at helping developers use it more effectively. Do not pick at my generalizations, I'm aiming at more secure Android apps, not training security peeps.

When an SSL connection is made, one of the first things that occurs is the SSL handshake. This includes the server coughing up its certificate for the client to determine if the server certificate is valid. This is the point in most Java developers' careers that their hair starts to turn grey. The process of validating a certificate is baked into the java.net libraries, and at the level we need to care about consists of three steps. First is to get an instance of javax.net.ssl.HostnameVerifier, and use it to check if the name on the certificate matches the hostname advertised through DNS. A mismatch of these two items may imply a man-in-the-middle attack, so the check is there. The thing is, nearly every test environment on the planet also has a host name mismatch, because they're temporary. In the browser, a dialog pops up and says "Host Name Mismatch, continue? (Y/N)" and you choose. In code, you cannot have the browser do that for you, so you'll have to handle it - and we'll see how in a bit.

Aside from the hostname verifier, the certificate has to be checked for validity. There is a "store" (generally an encrypted file) on the Android device (and really any device using Java for SSL) that has a list of valid "root certificates". These are the certificates that are definitive and known good; there are relatively few of them in the world (compared to the pool of all certificates), and they are used to issue other certificates. It costs money to get a trusted cert for production use, so in test environments most companies do not; so again, if you're testing, you'll see problems that might not exist in the real world. A production certificate that you would use is "signed" by someone, and theirs was "signed" by someone, etc., all the way up to one of the root certificates. If you can get back to a root by looking at the signatures, that's called the certificate chain, and it means the certificate is good. But there are two catches. First, Android has not consistently included all of the world's largest root certificates in the certificate store. "Android" devices being sold by many different companies - who can change the certificate store - doesn't help. So sometimes a valid certificate will trace up the chain correctly, but at the top of the chain the root certificate is not in the store, so the entire chain - and by extension the server you're connecting to - is "invalid" in Android's eyes. Second, as alluded to above, in a test environment you can generate "self-signed" certificates. These are valid certificates that do not point to a root; the chain ends with them.
These are never deemed valid in Android, nor in most other SSL implementations, simply because anyone could self-sign and there is no way to track it back to a root that guarantees that signing. There are other things that are checked - is the certificate expired, has it been revoked by someone further up the chain, etc. - but the killer for Android developers is the "break in the chain", either by lack of inclusion of a root certificate or by self-signing.

Android With SSL

So in the normal processing of SSL, you can implement the validation interfaces and change normal function - we're going to stick with the two that we use in this source, but there can be more. We'll look at the problem in a couple of different ways, and in the end show you how to use the code developed to pop up a descriptive dialog and ask the user if they want to continue. This is exactly what browsers (including the default Android browser) do for you. Looking around online there are several excellent examples of handling this problem with dialogs. Android doesn't support this model well at all. If you recall, up above I pointed out that you can't stop code execution on Android, meaning you can't ask the user from within, say, HostnameVerifier.verify(), because you must return true or false from this routine. False says "don't allow this connection", and true says "yes, allow this connection". But in Android, since you cannot use a modal dialog to stop execution and wait for an answer, if you do throw up a dialog, the code continues, and whatever the last value of your return variable was, that's what the answer will be. This is not likely to meet your design goals. At this point, since we've hit the first fiddly bit we have to solve, I'll say point blank that those people who are suggesting that you simply always return true from this routine are not the ones to listen to. This protection is there for a reason; do not lightly take it away from your users, or you will eventually regret it. A very similar scenario exists with the certificate validation classes - in our case, X509Certificates - you either throw an exception, or the act of exiting the routine implies acceptance and allows the connection to go through. Needless to say, in Android, before the dialog to ask a user's preference is drawn, the routine would have exited.

So What To Do?

There are several decent work-arounds for this problem. We'll discuss three of them that cover common scenarios.

Solution #1: If you are only ever going to hit the same servers all the time (like you would in a VPN), and have access to the tablets/phones that will be doing the connecting, then you should look up how to add certs to the Android device. This will give you a solid solution with zero loss of security and essentially no code complexity. I will not show you how to achieve this one in this blog, simply because it is the best documented of the three solutions.

Solution #2: If you can ask the user ahead of time (this is my scenario) if they want to ignore errors for a given server, then you can save it in a database and use that information to tell the routine how to behave.

Solution #3: If you cannot ask the user ahead of time, and want to be able to dynamically go to many sites, then you'll have a little more work to do - notably subclassing the verification classes you'll need, and installing them as the defaults for your kind of communication (TLS in our case). I'll detail this below, but the overview is: (1) Return false or throw an exception.
(2) Send a message to the UI loop that says "connection failed, bad cert, retry?". (3) In the main UI loop, pop up a dialog; while it will not be blocking, you will be notified when the user clicks OK or Cancel (assuming you're the listener). If the answer is "yes, proceed", tell your background thread to retry, this time returning true from HostnameVerifier and not throwing exceptions in the X509 processing. I loathe this solution, because it makes you build the entire connection twice, which is always bad form – and doubly so on phones with data plans, which are a huge chunk of the Android market. But short of writing extensive code that could become a project of its own, this is the best solution for dynamic access to websites available at this time. It essentially adapts the blocking-dialog model to the Android model.

Source – Solution #2

The key to this solution and the next is the over-ridden classes for host name verification and certificate validation. We'll delve into them first, then into how to install them, then into how to set them up to give the SSL processing code the correct answers. We'll also need a database, but we'll discuss that afterward, because in theory you could use other mechanisms to implement that bit, depending upon the rest of your architecture.

HostnameVerifier is the easier of the two. The interface in the javax.net.ssl library has only one routine – verify(). We'll implement that routine with a few assumptions, and I'll talk about the assumptions afterward.

[Diagram of relevant program flow]

The diagram above is simply for reference. Some people understand more easily with a diagram, so I built you one with just the parts we're going to care about. So here is the source for our host name verifier. It is really pretty simple – we're just implementing the interface and using our own criteria to determine whether a host name mismatch should cause us to abandon the connection:

import android.content.Context;
import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.SSLSession;

public class ModifiedHostNameVerifier implements HostnameVerifier {
    boolean bAccepted;

    public ModifiedHostNameVerifier(Context m) {
        bAccepted = false;
    }

    public void setAccepted(boolean acc) {
        bAccepted = acc;
    }

    // verify is only called to compare the hostname from DNS to the SSL
    // certificate's issued-to name. Do what the user wants and return.
    // NOTE: in the more generic case where the user is connecting all over
    // the place, you'll need to get the default behavior, same as we do in
    // the X509 code below – otherwise all calls not set to accepted will be
    // blocked, even ones with matching hostnames.
    public boolean verify(String string, SSLSession ssls) {
        return bAccepted;
    }
}

I mentioned caveats; well, as you can see, whether the mismatch should be accepted is decided entirely outside of the class – the class itself knows nothing about it. But we have to have it, or the standard rule of "host names don't match, let's get out of here" will apply. As mentioned in the comments, you'll want to get a pointer to the default verifier before this one is installed, so you can use it for most validations – except in cases (like ours) that only need to know what the user said and don't have to care about the normal case where hostnames match. A call in the constructor that grabs the default verifier and saves it to a member (just as we do in the X509 code) gives you a reference to use for normal processing before returning the value of bAccepted.
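Here's a minimal sketch of that more generic variant. The class name is my own invention; the key move is capturing the platform default via HttpsURLConnection.getDefaultHostnameVerifier() before this verifier is installed:

import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLSession;

public class FallbackHostNameVerifier implements HostnameVerifier {
    // Capture the default *before* installing this verifier,
    // or you'll capture yourself.
    private final HostnameVerifier defaultVerifier =
            HttpsURLConnection.getDefaultHostnameVerifier();
    private boolean bAccepted = false;

    public void setAccepted(boolean acc) {
        bAccepted = acc;
    }

    public boolean verify(String hostname, SSLSession session) {
        // Matching hostnames pass exactly as they always have.
        if (defaultVerifier.verify(hostname, session)) {
            return true;
        }
        // Mismatch: defer to whatever the user told us ahead of time.
        return bAccepted;
    }
}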
The original iteration of this code made the accept-or-not determination right inside the verify() method. The reason it had to move out is simple – connections have timeouts. We were spending so much time on that determination inside this class and the X509 class that the SSL connection would time out and close. Not exactly what we wanted, so I did it this way, which conveniently for blogging purposes is also easier to follow. The determination is made, and the accepted flag set, before the call to connect (which eventually calls this class's verify()). More on that after this next class...

The X509TrustManager handles validating certificate chains. It cares not what you're doing; it merely validates that the certificate is valid. We need to add a caveat that says "if the default trust manager doesn't think this chain is valid, fall back to our own acceptance criteria." And that is indeed what we do. Again, we use a boolean to tell us how to handle this scenario, which keeps the code both fast in execution and clean for learning.

import android.util.Log;
import java.security.KeyStore;
import java.security.KeyStoreException;
import java.security.NoSuchAlgorithmException;
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import javax.net.ssl.TrustManager;
import javax.net.ssl.TrustManagerFactory;
import javax.net.ssl.X509TrustManager;

public class ModifiedX509TrustManager implements X509TrustManager {
    X509TrustManager defaultTrustMgr;
    boolean accepted;

    public ModifiedX509TrustManager() {
        TrustManager managers[] = { null };
        try {
            String algo = TrustManagerFactory.getDefaultAlgorithm();
            TrustManagerFactory tmf = TrustManagerFactory.getInstance(algo);
            // A null KeyStore initializes the factory with the platform's
            // default trust store (where the recognized trusted certs live).
            tmf.init((KeyStore) null);
            managers = tmf.getTrustManagers();
        } catch (NoSuchAlgorithmException e) {
            e.printStackTrace();
        } catch (KeyStoreException e) {
            e.printStackTrace();
        }
        // Find the platform's default X509 trust manager and keep a reference.
        for (int i = 0; i < managers.length; i++) {
            if (managers[i] instanceof X509TrustManager) {
                defaultTrustMgr = (X509TrustManager) managers[i];
                return;
            }
        }
    }

    public void checkClientTrusted(X509Certificate[] chain, String authType) throws CertificateException {
        defaultTrustMgr.checkClientTrusted(chain, authType);
    }

    public void setAccepted(boolean acc) {
        accepted = acc;
    }

    public void checkServerTrusted(X509Certificate[] chain, String authType) throws CertificateException {
        // First, check with the default trust manager.
        try {
            defaultTrustMgr.checkServerTrusted(chain, authType);
            // If there is no exception, return happily.
        } catch (CertificateException ce) {
            if (accepted) {
                // YES. They've accepted it in the setup dialog.
                return;
            } else {
                Log.e("Device Not trusted!", "Check Settings.");
                throw ce;
            }
        }
    }

    public X509Certificate[] getAcceptedIssuers() {
        return defaultTrustMgr.getAcceptedIssuers();
    }
}

The first thing we do (in the constructor) is use the TrustManagerFactory (the class that instantiates and manages trust managers for different types of keys), initialized with the platform's default KeyStore (where the recognized trusted certs are stored). We ask for the default algorithm (PKIX on most systems) rather than hard-coding one. Some would tell you that this is unsafe and you should name the algorithm explicitly, but I am of the opinion that we want the default for X509 validation, since we don't much care beyond that. So we grab it, and we store a class-level reference to the default trust manager for use later. That really is the whole point of overriding the constructor.

It is safe to ignore the implementation of checkClientTrusted(). We have to implement it, but don't need it (unless you're doing client-certificate connections...), so it simply delegates to the default trust manager's implementation of the routine. setAccepted() is called from our source before the connect call is made, to tell this class whether the user has agreed to accept certificates that aren't perfect. It simply sets a class-level variable that we use in the key routine described next. Where does that answer come from in the first place? In our app, from a saved per-server preference – a lightweight sketch of that follows.
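The article's app keeps the per-server answer in a database, but any persistence mechanism will do. Here is a minimal stand-in using Android's SharedPreferences – the class name and keys are mine, purely illustrative:

import android.content.Context;
import android.content.SharedPreferences;

public class SslDecisionStore {
    private final SharedPreferences prefs;

    public SslDecisionStore(Context context) {
        prefs = context.getSharedPreferences("ssl_decisions", Context.MODE_PRIVATE);
    }

    // Remember the user's answer for one host.
    public void rememberDecision(String host, boolean accepted) {
        prefs.edit().putBoolean("accept_" + host, accepted).apply();
    }

    // Default to "not trusted" for any host we haven't asked about.
    public boolean isAccepted(String host) {
        return prefs.getBoolean("accept_" + host, false);
    }
}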
checkServerTrusted() is the key routine in this class. It tries to validate with the default trust manager, and if that fails, it uses the value set with setAccepted() to determine whether it should let the connection go through. If the default trust manager threw an exception (finding fault with the cert or its chain), and the user said "no, don't accept bad connections" when we asked, this routine throws a CertificateException that stops us from connecting. In all other scenarios, it returns normally and allows the connection to go through.

getAcceptedIssuers() is another routine the interface requires but that we didn't want to modify, since our accepted/not-accepted decision is not based on adding new certificates. You might want to modify this routine if you are going to manage the certificates you (or the user) have explicitly trusted.

Okay, that's the worst of it. Now we have two implementations of the interfaces that will do what we want, but we still have to tell the system to use our classes instead of the ones currently in use. That's relatively easy; here's the method that does it (it comes from our connection class and is called before HttpsURLConnection.connect()):

private void trustOurHosts() throws Exception {
    // Create a host name verifier
    mhnv = new ModifiedHostNameVerifier(this);

    // Create a trust manager that recognizes our acceptance criteria
    mod509 = new ModifiedX509TrustManager();
    TrustManager[] trustOurCerts = { mod509 };

    // Install the trust manager and host name verifier
    try {
        SSLContext sc = SSLContext.getInstance("TLS");
        sc.init(null, trustOurCerts, new java.security.SecureRandom());
        HttpsURLConnection.setDefaultSSLSocketFactory(sc.getSocketFactory());
        HttpsURLConnection.setDefaultHostnameVerifier(mhnv);
    } catch (Exception e) {
        // NoSuchAlgorithmException or KeyManagementException – either way,
        // we can't continue, so pass the exception on.
        e.printStackTrace();
        throw e;
    }
}

This simply tells the SSL context associated with TLS to use our trust manager when its socket factory creates a new socket (in sc.init), then tells HttpsURLConnection to use the socket factory from that context when it opens a socket, and finally tells HttpsURLConnection to use our hostname verifier. In the event of an exception, we're unable to connect reliably and need to get out, so we pass the exception on.

Finally, we have to call this routine, tell our implementations whether or not to accept errors, and actually create an HTTPS connection. Almost there – here's the code:

if (mod509 == null)
    trustOurHosts();

We don't want to waste a bunch of time creating and destroying the handling classes, so we create them once and keep references to them, only recreating them if a reference is null.

mod509.setAccepted(bigIp.isAccepted());
mhnv.setAccepted(bigIp.isAccepted());

Then, each time we need to start a new connection with the device, we call setAccepted() on both of our classes. For our purposes we could lump host name mismatch and invalid cert into a single question; you might want to keep two different values for users to check.

HttpTransportBasicAuth httpt = new HttpTransportBasicAuth("https://" + host + "/iControl/iControlPortal.cgi", param1, param2);

And finally, in our case, we use HttpTransportBasicAuth from kSOAP2 Android to connect. We have also tested this solution with the more generic SSL connection mechanism:

HttpsURLConnection https = (HttpsURLConnection) ourURL.openConnection();
https.connect();

And it worked just fine.
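One helper worth sketching before we move on: whichever dialog you eventually show the user, it should describe what's actually wrong. This method is my own illustrative addition (not part of the original source) for pulling displayable fields out of a rejected chain, for example from inside checkServerTrusted()'s catch block:

import java.security.cert.X509Certificate;

public final class CertChainDescriber {
    // Summarize a rejected chain for display in an "Allow connection?" dialog.
    public static String describeChain(X509Certificate[] chain) {
        StringBuilder sb = new StringBuilder();
        for (X509Certificate cert : chain) {
            sb.append("Subject: ").append(cert.getSubjectX500Principal().getName()).append('\n');
            sb.append("Issuer:  ").append(cert.getIssuerX500Principal().getName()).append('\n');
            sb.append("Expires: ").append(cert.getNotAfter()).append('\n');
        }
        return sb.toString();
    }
}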
Last but not least

Okay, so this solution works if you already know how the user is going to respond. But what if you don't, and you want your behavior to mimic the browser, yet for whatever reason cannot use the browser with Intents to achieve your goals? Well, I have not implemented this, but I knew while looking into the problem that plenty of people would be interested in this solution. Here's my answer.

1. Do all of the steps above, except set "accepted" to false for both class implementations.
2. Make the connect call; in the cases where there are problems, it will throw an exception.
3. Return to the UI thread with an "oh my, exception type X occurred" message.
4. In the UI thread, respond to this message by throwing up an "Allow connection?" DialogFragment, with your calling class as the OnClickListener.
5. In onClick, if they said "no, do not trust", just move on with whatever the app was doing. If they said "yes, do trust", use their answers to set up a new call to the thread that calls connect(). You now have actual values for the setAccepted() calls.

As I said above, I don't like this approach, because all of the overhead of establishing a connection occurs twice, but it is pretty clean code-wise, and short of stopping execution it is the best solution we have... for now. One thing Android has been great at is addressing issues in a timely manner; this one just has to impact enough developers to get raised as an issue.

And that is that. There's a lot here, both for you to consume and for me to write. Feel free to contact me with questions/corrections/issues, and I'll update this or lend a hand as much as possible. It certainly works like a champ for our app; hopefully it does for yours also. The people out there offering all sorts of insecure and downright obfuscated answers to this problem could send you batty – hopefully this treatment is a bit more concise and helps you not only implement the solution but also understand the problem domain.

My caveat: none of us is an island any more. I wish I had a list of references to the bazillion websites and StackOverflow threads that eventually led me to this implementation, but alas, I was focused on solving the problem, not documenting the path. So I'll offer a hearty "thank you" to the entire Android community, and acknowledge that while this solution is my own, parts of it certainly came from others.
iControl REST 101: Modifying Objects

So far in this series we've shown you how to connect to iControl REST via cURL, how to list objects on the device (probably the most used command in most APIs, including ours), and how to add objects. So you're currently able to get a system up and running and configure new services on your BIG-IP. What if, however, you want to modify things that are already running? For that, you need a modify command. Enter PUT.

Remember that in REST-based architectures the API determines what type of action you're performing – and thereby the arguments and structure it should expect to parse – from the type of HTTP(S) transaction. For us, a GET is a list, a POST is an add, and to modify things, you use PUT. This tells the system that you're going to reference an object that already exists and modify some of its contents. It is important that the API back end can tell the difference: if you just did another POST with only the arguments you wanted to change, it would fail, because you wouldn't meet the minimum requirements for creating the object. Without a PUT, you'd have to list the object, save all of its configuration options, modify just the one you wanted to change in your in-memory structure, delete the object on the system, and then POST all of that data – including the one or two modified fields – back to the device. That's way too much work. Instead, let's just learn how to modify, shall we?

Fortunately it's simple. All you need to do is format your cURL request with a PUT and the data you want to modify, and you're all set. First let's list the object we want to modify, so you can see what it looks like as it sits on the box currently. For that we go back to our list command:

curl -sk -u admin:admin https://dev.rest.box/mgmt/tm/net/self/cw_test2 | jq .

So that's what the object looks like. Now what if we want to make that existing self IP address available on more than one port? We can use the "allowService" attribute to do this: by setting it to "all", we allow that IP to answer on any port rather than just a specific one, opening up our config a bit. So we have the object we want to modify and the attribute we want to modify on it. All we need now is to send the command to do the work. Fortunately this is pretty easy, as I mentioned before. You use the same cURL structure as always, with -sk and user:pass supplied. It looks nearly identical to the add command, with two changes: we use the "PUT" method instead of "POST", and we supply only the one item we're modifying about the object. It looks like this:

curl -sk -u admin:admin https://dev.rest.box/mgmt/tm/net/self/cw_test2 -H "Content-Type: application/json" -X PUT -d "{\"allowService\":\"all\"}" | jq .

Notice that I'm still piping the output to jq here. That's because the response from the API for most commands such as POST and PUT is actually quite useful: running the command above returns the entire, new output of the object, including the piece that got modified. The relevant line now reads "allowService": "all", just like we wanted.

So there you have it – modifying objects on the BIG-IP remotely doesn't get much easier than that. Armed with this knowledge on top of what we've covered in previous installments, you'll be able to tackle 90% of the challenges you might encounter.
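If you'd rather make the same change from code than from the command line, here's a minimal Java sketch of the equivalent PUT. The host, credentials, and object are the same illustrative values used above; it assumes Java 8's java.util.Base64 for the Basic auth header, and – unlike cURL's -k flag – it assumes your client already trusts the BIG-IP's certificate:

import java.io.OutputStream;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.net.ssl.HttpsURLConnection;

public class RestPutDemo {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://dev.rest.box/mgmt/tm/net/self/cw_test2");
        HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();

        // Basic auth, equivalent to cURL's -u admin:admin.
        String auth = Base64.getEncoder()
                .encodeToString("admin:admin".getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + auth);
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);

        // Send only the attribute we want to change.
        try (OutputStream os = conn.getOutputStream()) {
            os.write("{\"allowService\":\"all\"}".getBytes(StandardCharsets.UTF_8));
        }

        // The response body is the full, updated object.
        System.out.println("Response code: " + conn.getResponseCode());
    }
}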
For our next and penultimate edition of this series, we'll cover the delete command, which should round out the methods you'll need for general use of the new iControl REST API.