encryption
BIG-IP Edge Client 2.0.2 for Android
Earlier this week F5 released our BIG-IP Edge Client for Android with support for the new Amazon Kindle Fire HD. You can grab it off Amazon instantly for your Android device. By supporting the BIG-IP Edge Client on Kindle Fire products, F5 is helping businesses secure personal devices connecting to the corporate network and helping end users be more productive, which makes it a great fit for BYOD deployments.

The BIG-IP® Edge Client™ for all Android 4.x (Ice Cream Sandwich) or later devices secures and accelerates mobile device access to enterprise networks and applications using SSL VPN and optimization technologies. Access is provided as part of an enterprise deployment of F5 BIG-IP® Access Policy Manager™, Edge Gateway™, or FirePass™ SSL-VPN solutions.

BIG-IP® Edge Client™ for all Android 4.x (Ice Cream Sandwich) Devices Features:
- Provides accelerated mobile access when used with F5 BIG-IP® Edge Gateway
- Automatically roams between networks to stay connected on the go
- Full Layer 3 network access to all your enterprise applications and files
- Supports multi-factor authentication with client certificate
- You can use a custom URL scheme to create Edge Client configurations, and to start and stop the Edge Client

BEFORE YOU DOWNLOAD OR USE THIS APPLICATION YOU MUST AGREE TO THE EULA HERE: http://www.f5.com/apps/android-help-portal/eula.html
BEFORE YOU CONTACT F5 SUPPORT, PLEASE SEE: http://support.f5.com/kb/en-us/solutions/public/2000/600/sol2633.html

If you have an iOS device, you can get the F5 BIG-IP Edge Client for Apple iOS, which supports the iPhone, iPad and iPod Touch. We are also working on a Windows 8 client, which will be ready for the Win8 general availability.

ps

Resources:
- F5 BIG-IP Edge Client
- Samsung F5 BIG-IP Edge Client
- Rooted F5 BIG-IP Edge Client
- F5 BIG-IP Edge Portal for Apple iOS
- F5 BIG-IP Edge Client for Apple iOS
- F5 BIG-IP Edge apps for Android
- Securing iPhone and iPad Access to Corporate Web Applications – F5 Technical Brief
- Audio Tech Brief - Secure iPhone Access to Corporate Web Applications
- iDo Declare: iPhone with BIG-IP

Technorati Tags: F5, infrastructure 2.0, integration, cloud connect, Pete Silva, security, business, education, technology, application delivery, ipad, cloud, context-aware, iPhone, web, internet, hardware, audio, whitepaper, apple, iTunes

Your SSL Secrets Uncovered
Get Started with SSL Orchestrator

SSL, and its brethren TLS, is becoming more prevalent as a way to secure IP communications on the internet. It's not just financial, health care or other sensitive sites; even search engines routinely use the encryption protocol. This can be good or bad. Good, in that all communications are scrambled from prying eyes, but potentially hazardous if attackers are hiding malware inside encrypted traffic. If the traffic is encrypted and simply passed through, inspection engines are unable to intercept that traffic for a closer look the way they can with clear-text communications, and the entire 'defense-in-depth' strategy with IPS systems and NGFWs loses effectiveness.

F5 BIG-IP can solve these SSL/TLS challenges with an advanced threat protection system that enables organizations to decrypt encrypted traffic within the enterprise boundaries, send it to an inspection engine, and gain visibility into outbound encrypted communications to identify and block zero-day exploits. In this case, only the interesting traffic is decrypted for inspection, not all of the wire traffic, thereby conserving processing resources on the inspecting device. You can dynamically chain services based on a context-based policy to deploy security efficiently. This solution is supported across the existing F5 BIG-IP v12 family of products with F5 SSL Orchestrator and is integrated with solutions like FireEye NX, Cisco ASA FirePOWER and Symantec DLP. Here I'll show you how to complete the initial setup.

A few things to know first, from a licensing perspective: the F5 SSL visibility solution can be deployed using either the BIG-IP system or the purpose-built SSL Orchestrator platform. Both have the same SSL intercept capabilities, with different licensing requirements. To deploy using BIG-IP, you'll need BIG-IP LTM for SSL offload, traffic steering, and load balancing, plus the SSL forward proxy for outbound SSL visibility. Optionally, you can also consider the URL filtering subscription to enforce corporate web use policies and/or the IP Intelligence subscription for reputation-based web blocking. For the purpose-built solution, all you'll need is the F5 Security SSL Orchestrator hardware appliance.

The initial setup addresses URL filtering, SSL bypass, and the F5 iApps template. URL filtering allows you to select specific URL categories that should bypass SSL decryption. Normally this is done for concerns over user privacy, or for categories that contain items (such as software update tools) that may rely on specific SSL certificates being presented as part of a verification process.

Before configuring URL filtering, we recommend updating the URL database. This must be performed from the BIG-IP system command line. Make sure you can reach download.websense.com on port 80 via the BIG-IP system, and from the BIG-IP LTM command line, type the following commands:

modify sys url-db download-schedule urldb download-now false
modify sys url-db download-schedule urldb download-now true

To list all the URL categories supported by the BIG-IP system, run the following command:

tmsh list sys url-db url-category | grep url-category

Next, you'll want to configure data groups for SSL bypass. You can choose to exempt SSL offloading based on various parameters like source IP address, destination IP address, subnet, hostname, protocol, URL category, IP intelligence category, and IP geolocation. This is achieved by configuring the SSL bypass in the iApps template, which calls the data groups in the TCP service chain classifier rules. A data group is a simple group of related elements, represented as key-value pairs. The following example provides configuration steps for creating a URL category data group to bypass HTTPS traffic for financial websites.
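A minimal sketch of that data group, created from tmsh (hedged: the data-group name here is illustrative, and the category string must match one of the names returned by the url-db listing above):

# create a string-type data group naming the URL categories to bypass
create ltm data-group internal urlcat_finance_bypass type string
# add the financial category as a record (the key alone is enough)
modify ltm data-group internal urlcat_finance_bypass records add { /Common/Financial_Data_and_Services { } }
# verify
list ltm data-group internal urlcat_finance_bypass

The iApp's TCP service chain classifier rules can then reference this data group to exempt matching traffic from decryption.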
For the BIG-IP system deployment, download the latest release of the iApps template and import it to the BIG-IP system. Extract (unzip) the ssl-intercept-12.1.0-1.5.7.zip template (or any newer version available) and follow the steps to import it via the BIG-IP web configuration utility. From there, you'll configure your particular inspection engine and simply follow the BIG-IP admin UI through the iApp questionnaire. You'll need to select and/or fill in different values in the wizard to enable the SSL orchestration functionality. We have deployment guides for the detailed specifics, and from there you'll be able to send your now-unencrypted traffic to your inspection engine for a more secure network.

ps

Resources:
- Ponemon Report: Application Security in the Changing Risk Landscape
- IDC Report: The Blind State of Rising SSL Traffic

Android Encrypted Databases
The Android development community, as might be expected, is a pretty vibrant community with a lot of great contributors helping people out. Since Android is largely based upon Java, there is a lot of skills reusability between the Java client dev community and the Android dev community. As I mentioned before, encryption as a security topic is perhaps the weakest link in that community at this time. Perhaps that's understandable, but since that phone/tablet could end up in someone else's hands much more easily than your desktop or even laptop, it is something that needs a lot more attention from business developers.

When I set out to write my first complex app for Android, I determined to report back to you from time to time about what needed better explanation or more intuitive solutions. Much has been done in the realm of "making it easier" - except for security topics, which still rank pretty low on the priority list. So using encrypted SQLite databases is the topic of this post. If you think it's taking an inordinate amount of time for me to complete this app, consider that I'm doing it outside of work. This blog was written during work hours, but all of the rest of the work is squeezed into two hours a night on the nights I'm able to dedicate time. Which is far from every night.

For those of you who are not developers, here's the synopsis so you don't have to paw through code with us: it's not well documented, but it's possible, with some caveats. I wouldn't use this method for large databases that need indexes over them, but for securing critical data it works just fine. At the end I propose a far better solution that is outside the purview of app developers and would pretty much have to be implemented by the SQLite team. Okay, only developers left? Good.

In my research, there were very few useful suggestions for designing secure databases. They fall into three categories:

1. Use the NDK to write a variant of SQLite that encrypts at the file level. For most Android developers this isn't an option, and I'm guessing the SQLite team wouldn't be thrilled about you mucking about with their database - it serves a lot more apps than yours.

2. Encrypt the entire SD card through the OS and then store the DB there. This one works, but slows down the entire tablet/phone because you've now (again) mucked with resources used by other apps. I will caveat that if you can get your users to do this, it is the currently available solution that allows indices over encrypted data.

3. Use one of several early-beta DB encryption tools. I was uncomfortable doing this with production systems. You may feel differently, particularly after some of them have matured.

I didn't like any of these options, so I did what we've had to do in the past when a piece of data was so dangerous in the wrong hands it needed encrypting: I wrote an interface to the DB that encrypts and decrypts as data is inserted and removed. In Android, the only oddity you won't find in other Java environments - or that you can more easily get around in other Java environments - is filling list boxes from the database. For that I had to write a custom provider that takes care of on-the-fly decryption and insertion into the list. My solution follows. There are a large variety of ways that you could solve this problem in Java; this one is where I went because (a) I don't have a lot of rows for any given table, and (b) the data does not need to be indexed.
If either of these items is untrue for your implementation, you'll either have to modify this implementation or find an alternate solution.

So first, the encryption handler. Note that in this sample, I chose to encode encrypted arrays of bytes as Strings. I do not guarantee this will work for your scenario, and suggest you keep them as arrays of bytes until after decryption. Also note that this sample was built from a working one by obfuscating what the actual source did and making some modifications to simplify the example. It was not tested after the final round of simplification, but should be correct throughout.

package com.company.monitor;

import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

import android.util.Base64;

public class DBEncryptor {

    private static byte[] key;
    // "AES" gives the provider default transformation (typically ECB/PKCS5Padding)
    private static String cypherType = "AES";

    public DBEncryptor(String localPass) {
        // save the encoded key for future use
        // - note that this keeps it in memory, and is not strictly safe
        key = encode(localPass.getBytes()).getBytes();
        String keyCopy = new String(key);
        // AES-128 needs exactly 16 key bytes; repeat the pass until long enough...
        while (keyCopy.length() < 16)
            keyCopy = keyCopy + keyCopy;
        // ...then truncate to exactly 16 bytes.
        byte keyA[] = keyCopy.getBytes();
        key = new byte[16];
        System.arraycopy(keyA, 0, key, 0, 16);
    }

    public String encode(byte[] s) {
        return Base64.encodeToString(s, Base64.URL_SAFE);
    }

    public byte[] decode(byte[] s) {
        return Base64.decode(s, Base64.URL_SAFE);
    }

    public byte[] getKey() {
        // return a copy of the key.
        return key.clone();
    }

    public String encrypt(String toEncrypt) throws Exception {
        // Create your Secret Key Spec, which defines the key transformations
        SecretKeySpec skeySpec = new SecretKeySpec(key, cypherType);
        // Get the cipher
        Cipher cipher = Cipher.getInstance(cypherType);
        // Initialize the cipher
        cipher.init(Cipher.ENCRYPT_MODE, skeySpec);
        // Encrypt the string into bytes
        byte[] encryptedBytes = cipher.doFinal(toEncrypt.getBytes());
        // Convert the encrypted bytes back into a string
        String encrypted = encode(encryptedBytes);
        return encrypted;
    }

    public String decrypt(String encryptedText) throws Exception {
        // Get the secret key spec
        SecretKeySpec skeySpec = new SecretKeySpec(key, cypherType);
        // create an AES Cipher
        Cipher cipher = Cipher.getInstance(cypherType);
        // Initialize it for decryption
        cipher.init(Cipher.DECRYPT_MODE, skeySpec);
        // Get the decoded bytes
        byte[] toDecrypt = decode(encryptedText.getBytes());
        // And finally, do the decryption.
        byte[] clearText = cipher.doFinal(toDecrypt);
        return new String(clearText);
    }
}

So what we are essentially doing is deriving a 16-byte key from the base-64 encoded passphrase, encrypting the string's bytes using standard Java crypto classes, and then base-64 encoding the resulting ciphertext so it can be handled as a String. We simply reverse the process to decrypt a string. Note that this class is also useful if you're storing values in the Properties file and wish them to be encrypted, since it simply operates on strings. The value you pass in to create the key needs to be something that is unique to the user or tablet. When it comes down to it, this is your password, and it should be treated as such (hence why I changed the parameter name to localPass). For seasoned Java developers, there's nothing new on Android at this juncture. We're just encrypting and decrypting data.
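Before wiring the encryptor into the database, a quick round-trip sanity check helps. This is just a sketch - the passphrase and strings are placeholders, and since DBEncryptor uses android.util.Base64 it has to run on a device or emulator (in an Activity or an instrumented test), not a desktop JVM:

public void encryptorRoundTrip() throws Exception {
    // the passphrase should really be something unique to the user or device
    DBEncryptor enc = new DBEncryptor("some-device-unique-secret");

    String cipherText = enc.encrypt("4111 1111 1111 1111");
    String clearText = enc.decrypt(cipherText);

    // cipherText is URL-safe base-64 text, so it stores cleanly in a BLOB
    // (or even TEXT) column; clearText round-trips to the original input.
    android.util.Log.d("EncryptorDemo", cipherText + " -> " + clearText);
}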
Next, we leave the realm of other Java platforms, because the database is SQLite, which is not generally what you're writing Java against outside of Android. Bear with me while we go over this class. The SQLite database class follows. Of course this would need heavy modification to work with your database, but the skeleton is here. Note that not all fields have to be encrypted. You can mix and match, no problems at all. That is one of the things I like about this solution: if I need an index for any reason, I can create an unencrypted field of a type other than blob and index on it.

package com.company.monitor;

import android.content.ContentValues;
import android.content.Context;
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteDatabase.CursorFactory;
import android.database.sqlite.SQLiteOpenHelper;

public class DBManagernames extends SQLiteOpenHelper {

    public static final String TABLE_NAME = "Map";
    public static final String COLUMN_ID = "_id";
    public static final String COLUMN_LOCAL = "Local";
    public static final String COLUMN_WORLD = "World";

    private static int indexId = 0;
    private static int indexLocal = 1;
    private static int indexWorld = 2;

    private static final String DATABASE_NAME = "Mappings.db";
    private static final int DATABASE_VERSION = 1;

    // SQL statement to create the DB
    private static final String DATABASE_CREATE = "create table " + TABLE_NAME
            + "(" + COLUMN_ID + " integer primary key autoincrement, "
            + COLUMN_LOCAL + " BLOB not null, "
            + COLUMN_WORLD + " BLOB not null);";

    public DBManagernames(Context context, CursorFactory factory) {
        super(context, DATABASE_NAME, factory, DATABASE_VERSION);
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        db.execSQL(DATABASE_CREATE);
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        // TODO Auto-generated method stub
        // Yeah, this isn't implemented in production yet either.
        // It's low on the list, but definitely "on the list"
    }

    // Assumes DBEncryptor was used to convert the fields of name before calling insert
    public void insertToDB(DBNameMap name) {
        ContentValues cv = new ContentValues();
        cv.put(COLUMN_LOCAL, name.getName().getBytes());
        cv.put(COLUMN_WORLD, name.getOtherName().getBytes());
        getWritableDatabase().insert(TABLE_NAME, null, cv);
    }

    // returns the encrypted values to be manipulated with the decryptor.
    public DBNameMap readFromDB(Integer index) {
        SQLiteDatabase db = getReadableDatabase();
        DBNameMap hnm = new DBNameMap();
        Cursor cur = null;
        try {
            cur = db.query(TABLE_NAME, null, "_id='" + index.toString() + "'",
                    null, null, null, COLUMN_ID);
            // cursors consistently return before the first element. Move to the first.
            cur.moveToFirst();
            byte[] name = cur.getBlob(indexLocal);
            byte[] othername = cur.getBlob(indexWorld);
            hnm = new DBNameMap(new String(name), new String(othername), false);
        } catch (Exception e) {
            System.out.println(e.toString());
            // Do nothing - we want to return the empty host name map.
        }
        return hnm;
    }

    // NOTE: This routine assumes "String name" is the encrypted version of the string.
    public DBNameMap getFromDBByName(String name) {
        SQLiteDatabase db = getReadableDatabase();
        Cursor cur = null;
        String check = null;
        try {
            // Note - the production version of this routine actually uses the "where"
            // field to get the correct element instead of looping the table.
            // This is here for your debugging use.
            cur = db.query(TABLE_NAME, null, null, null, null, null, null);
            // walk every row (isAfterLast, not isLast, so the final row is checked too)
            for (cur.moveToFirst(); !cur.isAfterLast(); cur.moveToNext()) {
                check = new String(cur.getBlob(indexLocal));
                if (check.equals(name))
                    return new DBNameMap(check, new String(cur.getBlob(indexWorld)), false);
            }
            // not found - return the empty map
            return new DBNameMap();
        } catch (Exception e) {
            System.out.println(e.toString());
            return new DBNameMap();
        }
    }

    // used by our list adapter - coming next in the blog.
    public Cursor getCursor() {
        try {
            return getReadableDatabase().query(TABLE_NAME, null, null, null, null, null, null);
        } catch (Exception e) {
            System.out.println(e.toString());
            return null;
        }
    }

    // This is used in our list adapter for mapping to fields.
    public String[] listColumns() {
        return new String[] { COLUMN_LOCAL };
    }
}

I am not including the DBNameMap class, as it is a simple container that has two string fields and maps one name to another.

Finally, we have the list provider. Android requires that you populate lists with a provider, and has several base ones to work with. The problem with SimpleCursorAdapter is that it assumes an unencrypted database, and we just invested a ton of time making the DB encrypted. There are several possible solutions to this problem, and I present the one I chose here. I extended ResourceCursorAdapter and implemented decryption right in the routines, leaving not much to do in the list-population section of my activity but to assign the correct adapter.

package com.company.monitor;

import android.content.Context;
import android.database.Cursor;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.ResourceCursorAdapter;
import android.widget.TextView;

public class EncryptedNameAdapter extends ResourceCursorAdapter {

    private String pw;

    public EncryptedNameAdapter(Context context, int layout, Cursor c, boolean autoRequery) {
        super(context, layout, c, autoRequery);
    }

    public EncryptedNameAdapter(Context context, int layout, Cursor c, int flags) {
        super(context, layout, c, flags);
    }

    // This class must know what the encryption key is for the DB before filling the list,
    // so this call must be made before the list is populated. The first call after the
    // constructor works.
    public void setPW(String pww) {
        pw = pww;
    }

    @Override
    public View newView(Context context, Cursor cur, ViewGroup parent) {
        LayoutInflater li = (LayoutInflater) context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
        return li.inflate(R.layout.my_list_entry, parent, false);
    }

    @Override
    public void bindView(View arg0, Context arg1, Cursor arg2) {
        // Get an encryptor/decryptor for our data.
        DBEncryptor enc = new DBEncryptor(pw);
        // Get the TextView we're placing the data into.
        TextView tvLocal = (TextView) arg0.findViewById(R.id.list_entry_name);
        // Get the bytes from the cursor
        byte[] bLocal = arg2.getBlob(arg2.getColumnIndex(DBManagernames.COLUMN_LOCAL));
        // Convert bytes to a string
        String local = new String(bLocal);
        try {
            // decrypt the string
            local = enc.decrypt(local);
        } catch (Exception e) {
            System.out.println(e.toString());
            // local holds the encrypted version at this point, fix it.
            // We'll return an empty string for simplicity
            local = new String();
        }
        tvLocal.setText(local);
    }
}

The EncryptedNameAdapter can be set as the source for any listbox just like most examples set an ArrayAdapter as the source. Of course, it helps if you've put some data in the database first.
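Tying the pieces together, insert and read look something like this. Again, a hedged sketch: it assumes the DBEncryptor, DBManagernames and DBNameMap classes shown above, and the names and passphrase are placeholders:

public void storeAndFetch(android.content.Context ctx, String pass) throws Exception {
    DBEncryptor enc = new DBEncryptor(pass);
    DBManagernames db = new DBManagernames(ctx, null);

    // encrypt both fields before insert - the table only ever sees ciphertext
    db.insertToDB(new DBNameMap(enc.encrypt("intranet"),
                                enc.encrypt("intranet.example.com"), false));

    // read row 1 back; readFromDB returns the stored (encrypted) values
    DBNameMap row = db.readFromDB(1);
    String local = enc.decrypt(row.getName());
    String world = enc.decrypt(row.getOtherName());
    android.util.Log.d("DBDemo", local + " maps to " + world);
}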
That's it for this time. There's a lot more going on with this project, and I'll present my solution for SSL certificate verification some time in the next couple of weeks, but for now, if you need to encrypt some fields of a database, this is one way to get it done. Ping me on any of the social media outlets or here in the comments if you know of a more elegant or less resource-intensive solution; I'm always up for learning more. And please, if you find an error, let me know so others don't have problems - it was likely introduced in the transition to something I was willing to throw out here publicly. I've done my best not to introduce any, but I always get a bit paranoid if I changed anything after my last debug session - and I did, to simplify and sanitize.

Databases in the Cloud Revisited
A few of us were talking on Facebook about high speed rail (HSR) and where/when it makes sense the other day, and I finally said that it almost never does. Trains lost out to automobiles precisely because they are rigid and inflexible, while population densities and travel requirements are highly flexible. That hasn't changed since the early 1900s, and isn't likely to in the future, so we should be looking at different technologies to answer the problems that HSR tries to address. And since everything in my universe is inspiration for either blogging or gaming, this led me to reconsider the state of cloud and the state of cloud databases in light of synergistic technologies (did I just use "synergistic technologies" in a blog? Arrrggghhh…).

There are several reasons why your organization might be looking to move out of a physical datacenter, or to have a backup datacenter that is completely virtual. Think of the disaster in Japan or hurricane Katrina. In both cases, having even the mission-critical portions of your datacenter replicated to the cloud would keep your organization online while you recovered from all of the other very real issues such a disaster creates. In other cases, if you are a global organization, the cost of maintaining your own global infrastructure might well be more than utilizing a global cloud provider for many services… Though I've not checked, if I were CIO of a global organization today, I would be looking into it pretty closely, particularly since this option should continue to get more appealing as technology continues to catch up with hype.

Today though, I'm going to revisit databases, because like trains, they are in one place and are rigid. If you've ever played with database Continuous Data Protection or near-real-time replication, you know this particular technology area has issues that are only now starting to see technological resolution. Over the last year, I have talked about cloud and remote databases a few times, discussing early options for cloud databases and mentioning Oracle GoldenGate - or praising GoldenGate is probably more accurate. Going to the west in the US? HSR is not an option. The thing is that the options get a lot more interesting if you have GoldenGate available.

There are a ton of tools, both integral to database systems and third-party, that allow you to encrypt data at rest these days, and while it is not the most efficient access method, it does make your data more protected. Add to this capability the functionality of Oracle GoldenGate - or, if you don't need heterogeneous support, any of the various database replication technologies available from Oracle, Microsoft, and IBM - and you can seamlessly move data to the cloud behind the scenes, without interfering with your existing database. Yes, initial configuration of database replication will generally require work on the database server, but once configured, most of these tools run without interfering with the functionality of the primary database in any way - though if it is one that runs inside the RDBMS, remember that it will use up CPU cycles at the least, and most will work inside of a transaction so that they can ensure transaction integrity on the target database, so know your solution.
Running inside the primary transaction is not necessary, and for many uses may not even be desirable, so if you want your commits to happen rapidly, something like GoldenGate that spawns a separate transaction for the replica is a good option… Just remember that you then need to pay attention to alerts from the replication tool so that you don't end up with successful transactions on the primary not getting replicated because something went wrong with the transaction on the secondary. But for DBAs, this is just an extension of their daily work, as long as someone is watching the logs.

With the advent of GoldenGate, advanced database encryption technology, and products like our own BIG-IP WOM, you now have the ability to drive a replica of your database into the cloud. This is certainly a boon for backup purposes, but it also adds an interesting perspective to application mobility. You can turn on replication from your data center to the cloud, or from cloud provider A to cloud provider B, then use vMotion to move your application VMs… and you're off to a new location. If you think you'll be moving frequently, this can all be configured ahead of time, so you can flick a switch and move applications at will.

You will, of course, have to weigh the impact of complete or near-complete database encryption against the benefits of cloud usage. Even if you use the adaptability of the cloud to speed encryption and decryption operations by distributing them over several instances, you'll still have to pay for that CPU time, so there is a balancing act that needs some exploration before you'll be certain this solution is a fit for you. And at this juncture, I don't believe putting unencrypted corporate data of any kind into the cloud is a good idea. Every time I say that, it angers some cloud providers, but frankly, cloud being new and by definition shared resources, it is up to the provider to prove it is safe, not up to us to take their word for it. Until then, encryption is your friend, both going to/from the cloud and at rest in the cloud. I say the same thing about Cloud Storage Gateways; it is just a function of the current state of cloud technology, not some kind of unreasoning bias.

So the key then is to make sure your applications are ready to be moved. This is actually pretty easy in the world of portable VMs, since the entire VM will pick up and move. The only catch is that you need to make sure users can get to the application at the new location. There are a ton of global DNS solutions like F5's BIG-IP Global Traffic Manager that can get your users where they need to be, since your public-facing IPs will be changing when moving from organization to organization. Everything else should be set, since you can use internal IP addresses to communicate between your application VMs and database VMs. Utilizing some form of in-flight encryption and some form of acceleration for your database replication will round out the solution architecture, and leave you with a road map that looks more like a highway map than an HSR map. More flexible, more pervasive.

4 things you can do in your code now to make it more scalable later
No one likes to hear that they need to rewrite or re-architect an application because it doesn't scale. I'm sure no one at Twitter thought they'd need to be overhauling their architecture because it gained popularity as quickly as it did. Many developers, especially in the enterprise space, don't worry about the kind of scalability that sites like Twitter or LinkedIn need to concern themselves with, but they still need to be (or at least should be) concerned with scalability in general, and with the effects of inserting an application into a high-scalability environment, such as one fronted by a load balancer or application delivery controller.

There are some very simple things you can do in your code, while you're developing an application, that can ease the transition into a high-availability architecture and that will eventually lead to a faster, more scalable application. Here are four things you can do now - and why - to make your application fit better into a high availability environment in the future and avoid rewriting or re-architecting your solutions later.

1. Don't assume your application is always responsible for cookie encryption

Encrypting cookies in today's privacy-lax environment that is the Internet is the responsible thing to do. In the first iterations of your application you will certainly be responsible for handling the encryption and decryption of cookies, but later on, when the application is inserted into a high-availability environment and there exists an application delivery controller (ADC), that functionality can be offloaded to the ADC. Offloading the responsibility for encryption and decryption of cookies to the ADC improves performance because the ADC employs hardware acceleration. To make it easier to offload this responsibility to an ADC in the future while still supporting it early on, use a configuration flag to indicate whether you should decrypt or encrypt cookies before examining them. That way you can simply change the configuration flag later on and immediately take advantage of a performance boost from the network infrastructure.

2. Don't assume the client IP is accurate

If you need to use/store/access the client's IP address, don't assume the traditional HTTP header is accurate. Early on it certainly will be, but when the application is inserted into a high availability environment and a full-proxy solution is sitting in front of your application, it won't be. A full proxy mediates between client and server, which means it is the client when talking to the server, so its IP address becomes the "client IP". Almost all full proxies insert the real client IP address into the X-Forwarded-For HTTP header, so you should always check that header before checking the client IP address. If there is an X-Forwarded-For value, you'll more than likely want to use it instead of the client IP address. This simple check, sketched below, should alleviate the need to make changes to your application when it's moved into a high availability environment.
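As a rough illustration (a sketch, not F5 code - the class name and the simple first-entry parse are my own; X-Forwarded-For can carry a comma-separated chain of addresses, and the left-most entry is the original client):

import javax.servlet.http.HttpServletRequest;

public final class ClientIp {

    /** Prefer X-Forwarded-For (inserted by a full proxy); fall back to the socket peer. */
    public static String from(HttpServletRequest request) {
        String xff = request.getHeader("X-Forwarded-For");
        if (xff != null && xff.length() > 0) {
            // "client, proxy1, proxy2" - the left-most address is the original client
            return xff.split(",")[0].trim();
        }
        // no header present - no proxy in the path yet, so the peer address is the client
        return request.getRemoteAddr();
    }
}

In production you'd also want to trust the header only when the request arrives from your own proxy tier, since clients can set X-Forwarded-For themselves.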
3. Don't use relative paths

Always use the FQDN (fully qualified domain name) when referencing images, scripts, etc. inside your application. Furthermore, use different host names for different content types - i.e. images.example.com and scripts.example.com. Early on, all the hosts will probably point to the same server, but by ensuring that you're using the FQDN now, architecting that high availability environment later becomes much easier. While any intelligent application delivery controller can perform layer 7 switching on any part of the URI and arrive at the same architecture, it's much more efficient to load balance and route application data based on the host name. By using the FQDN and separating host names by content type, you can later optimize and tune specific servers for delivery of that content, or use the CNAME trick to improve parallelism and performance in request-heavy applications.

4. Separate out API rate limiting functionality

If you're writing an application with an API for integration later, separate out the rate limiting functionality. Initially you may need it, but when the application is inserted into a high-availability environment with an intelligent application delivery controller, the controller can take over that functionality and spare your application from having to reject requests that exceed the set limits. As with cookie encryption, use a configuration flag to determine whether you should check this limitation, so it can easily be turned on and off at will. By offloading the responsibility for rate limiting to an application delivery controller, you remove the need for the server to waste resources (connections, RAM, cycles) on requests it won't respond to anyway. This improves the capacity of the server, and thus your application, making it more efficient and more scalable.

By thinking about the ways in which your application will need to interact with a high availability infrastructure later, and adjusting your code to take that into consideration, you can save yourself a lot of headaches when your application is inserted into that infrastructure. That means less rewriting of applications, less troubleshooting, and fewer servers needed to scale up quickly to meet demand. Happy coding!

SANS Top 25 Epic Fail: CWE-319
If you've taken the time to read over the "Top 25 Most Dangerous Programming Errors" published by SANS recently, you may (or may not) have noticed that CWE-319 is an anomaly, one that developers and security professionals should easily pick out in a game of "which one of these is not like the other".

CWE-319: If your software sends sensitive information across a network, such as private data or authentication credentials, that information crosses many different nodes in transit to its final destination. Attackers can sniff this data right off the wire, and it doesn't require a lot of effort. All they need to do is control one node along the path to the final destination, control any node within the same networks of those transit nodes, or plug into an available interface. Trying to obfuscate traffic using schemes like Base64 and URL encoding doesn't offer any protection, either; those encodings are for normalizing communications, not scrambling data to make it unreadable.

Prevention and Mitigations:
- Architecture and Design: Secret information should not be transmitted in cleartext. Encrypt the data with a reliable encryption scheme before transmitting.
- Implementation: When using web applications with SSL, use SSL for the entire session from login to logout, not just for the initial login page.
- Operation: Configure servers to use encrypted channels for communication, which may include SSL or other secure protocols.

1. This is not a "programming error"

The first problem with the inclusion of this "error" on the list is that it is not a programming error. It may be a poor design, architectural, or deployment decision, but it is not an "error". While not necessarily a problem with the actual weakness described, the misnomer is frustrating and undermines the rest of the list, most of which are actual errors in coding practices that need to be addressed. SSL can be easily enabled by any customer, regardless of how the web application is written. Using SSL has always been suggested as part of a secure architecture, and it is organizations not using SSL that bear the burden of failure to implement this simple security scheme, not necessarily developers. Trying to force software vendors to force SSL on their customers is an end run around the sad fact that most organizations fail to implement proper encryption when necessary.

2. Mitigation through encryption can disrupt security systems internally

SSL-enabled servers require that the organization obtain and manage the appropriate server-side certificates. SSL usage is the responsibility of the organization deploying the software, not the software vendor. Ensuring the web application works correctly when deployed using SSL may be the vendor's responsibility, but configuring it that way is clearly a matter of architectural choice on the part of the organization deploying the software. It is likely that this remediation solution was intended to direct developers to always use HTTPS instead of HTTP when loading URLs, rather than using relative paths. This requires rework on the part of web application developers to obtain the host name dynamically before constructing the proper URL, and it also requires organizations to ensure an environment that supports SSL - which puts the onus of a secure implementation squarely back on the organization, not the vendor.
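As an aside, the "SSL for the entire session" mitigation quoted above usually reduces, in application terms, to refusing to serve anything over cleartext. A minimal servlet-filter sketch (my own illustration, not from the SANS list; the class name is made up):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class RequireTlsFilter implements Filter {

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        if (!request.isSecure()) {
            // rebuild the URL on https:// and redirect, preserving any query string
            StringBuffer url = request.getRequestURL(); // e.g. http://host/path
            String qs = request.getQueryString();
            response.sendRedirect("https" + url.substring("http".length())
                    + (qs == null ? "" : "?" + qs));
            return;
        }
        chain.doFilter(req, res);
    }

    public void init(FilterConfig config) { }
    public void destroy() { }
}

Map the filter over /* in web.xml and every page, not just the login page, rides over SSL. None of that changes the architectural concerns that follow, though.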
The ramifications of implementing SSL from client all the way to server can include the inadvertent elimination of the ability of other security systems - IDS, IPS, WAF - to perform their tasks, unless they are specifically configured to decrypt, then examine the requests and responses, and then re-encrypt the session before sending it on to the appropriate server. This requires re-architecture on the part of the organization, and careful consideration of the security of the systems on which such keys and/or certificates will be stored. This is important, as the compromise of any system storing the keys and/or certificates may lead to the "bad guys" obtaining these important pieces of security architecture, thus rendering any application or system relying upon them insecure.

3. Encrypted malicious data is still malicious

A very wise man once told me that malicious data encrypted is still malicious. Using SSL encryption certainly keeps the "bad guys" from looking at and capturing sensitive data, but as noted in issue number 2, it also keeps security devices from inspecting the exchange in their goal of detecting and preventing malicious data from getting near the web application or web server, where it is likely to do harm. The "bad guys" have the same level of access to those means as do normal users; this does nothing to prevent the insertion of malicious data, but does make it more difficult to detect and prevent - unless the application requires client-side certificates, which opens yet another can of worms and can seriously degrade the flexibility of the application in supporting a wide variety of end-user devices. The result, no matter how it is implemented, is security theater at its finest.

CWE-319 should not have been included on a list of top "programming errors", and the remediation solutions offered fail to recognize that the majority of the burden of implementation is on the organization, not the software vendor. They fail to recognize the impact of the suggested implementations on the application and the supporting infrastructure, and they are likely to cause more problems than they solve. The blind adoption of this list as a requirement for procurement by the state of New York, and likely others soon to follow, is little more than a grand gesture designed to send a message not to vendors, but to its customers and, likely, the courts. Certainly, requiring software to be certified against this list could be considered due diligence in any lawsuit resulting from the inadvertent leak of sensitive information, thereby proving no negligence on the part of the organization and therefore no liability.

While enabling SSL communications is certainly a good idea, it is important to remember that it - like other encryption schemes - is merely obfuscation. It will blindly transport malicious data as easily as it does legitimate data, and failure to adjust internal architectures to deal with SSL across all required security and application delivery devices does little to enhance security in any real, meaningful way.

Related articles by Zemanta:
- Secure gmail account by turning on https permanently
- Windows encryption programs open to kernel hack

Ask the Expert – Why SSL Everywhere?
Kevin Stewart, Security Solution Architect, talks about the paradigm shift in the way we think about IT network services, particularly SSL and encryption. Gone are the days when clear text roamed freely on the internal network; organizations are looking to bring SSL all the way to the application, which brings complexity. Kevin explains some of the challenges of encrypting all the way to the application and ways to address this increasing trend. SSL is not just about protecting data in motion; it's also about privacy.

ps

Related:
- Ask the Expert – Are WAFs Dead?
- RSA2015 – SSL Everywhere (feat Holmes)
- AWS re:Invent 2015 – SSL Everywhere…Including the Cloud (feat Stanley)
- F5 SSL Everywhere Solutions

Technorati Tags: f5, ssl, encryption, pki, big-ip, security, privacy, silva, video

Who Took the Cookie from the Cookie Jar … and Did They Have Proper Consent?
Cookies as a service, enabled via infrastructure services, provide an opportunity to improve your operational posture.

Fellow DevCentral blogger Robert Haynes posted a great look at a UK law regarding cookies. Back in May, a new law went into effect regarding "how cookies and other 'cookie-like' objects are stored on users' devices." If you haven't heard about it, don't panic - there's a one-year grace period before enforcement begins and those £500,000 fines are handed out. The clock is ticking, however. What do the new regulations say? Well, essentially, whereas cookies could previously be stored with what I, as a non-lawyer, would term implied consent - i.e. the cookies you set are listed along with their purpose and how to opt out in some interminable privacy policy on your site - you are now going to have to obtain a more active and informed consent to store cookies on a user's device. -- The UK Cookie Law

Robert goes on to explain that the solution to this problem requires (1) capturing cookies and (2) implementing some mechanism to allow users to grant consent. Mind you, this is not a trivial task. There are logic considerations - not all cookies are set at the same time - as well as logistical issues - how do you present a request for consent? Once consent is granted, where do you store that? In a cookie? That you must gain consent to store? Infinite loops are never good. And of course, what do you do if consent is not granted but the application depends on that cookie existing? To top it all off, the process of gathering consent requires modification to application behavior, which means new code, testing and eventually deployment. Infrastructure services may present an alternative approach that is less disruptive technologically, but does not necessarily address the business or logic ramifications resulting from such a change.

COOKIES as a SERVICE

Cookies as a Service, a.k.a. cookie gateways, wherein cookie authorization and management are handled by an intermediate proxy, is likely best able to mitigate the expense and time associated with modifying applications to meet the new UK regulation. As Robert describes, he's currently working on a network-side scripting solution to meet the UK requirements that leverages a full-proxy architecture's ability to mediate for applications and communicate directly with clients before passing requests on to the application. Not only is this a valid approach to managing privacy regulations, it's also a good means of providing additional security against attacks that leverage cookies either directly or indirectly.

Cross-site scripting, browser vulnerabilities and other attacks that bypass the same-origin policy of modern web browsers - sometimes by design, to circumvent restrictions on integration methods - as well as piggy-backing on existing cookies as a means to gain unauthorized access to applications, are all potential dangers of cookies. By leveraging encryption of cookies in conjunction with transport layer security, i.e. SSL, organizations can better protect both users and applications from unintended security consequences.

Implementing a cookie gateway should make complying with regulations like the new UK policy a less odious task. By centralizing cookie management on an intermediate device, cookies can be easily collected and displayed along with the appropriate opt-out / consent policies without consuming application resources or requiring every application to be modified to include the functionality to do so.
AN INFRASTRUCTURE SERVICE

This is one of the (many) ways in which an infrastructure service hosted in "the network" can provide value to both application developers and business stakeholders. Such a reusable infrastructure-hosted service can be leveraged to provide services to all applications and users simultaneously, dramatically reducing the time and effort required to support a new initiative like this one. Reusing an infrastructure service also reduces the possibility of human error during application modification, which can drag out the deployment lifecycle and delay time to market. In the case of meeting the requirements of the new UK regulations, that delay could become costly.

According to a poll of CIOs regarding their budgets for 2010, the Society for Information Management found that "Approximately 69% of IT spending in 2010 will be allocated to existing systems, while about 31% will be spent on building and buying new systems. This ratio remains largely the same compared to this year's numbers." If we are to be more responsive to new business initiatives and flip that ratio - such that we are spending less on maintaining existing systems and more on developing new systems and methods of management (i.e. cloud computing) - we need to leverage strategic points of control within the network to provide services that minimize the resources, time and money spent on existing systems. Infrastructure services such as cookie gateways provide the opportunity to enhance security, comply with regulations and eliminate costs in application development that can be reallocated to new initiatives and projects. We need to start treating the network and its unique capabilities as assets to be leveraged and services to be enabled, instead of as a fat, dumb pipe.

Related:
- Understanding network-side scripting
- This is Why We Can't Have Nice Things
- When the Data Center is Under Siege Don't Forget to Watch Under the Floor
- IT as a Service: A Stateless Infrastructure Architecture Model
- You Can't Have IT as a Service Until IT Has Infrastructure as a Service
- F5 Friday: Eliminating the Blind Spot in Your Data Center Security Strategy
- The UK Cookie Law

The New Certificate 2048 My Performance
SSL is a cryptographic protocol used to secure communications over the internet. SSL ensures secure end-to-end transmission and is implemented in every web browser. It can also be used to secure email, instant messaging and VoIP sessions. The encryption and decryption of SSL is computationally intensive and can put a strain on server resources like CPU. Currently, most server SSL certificates are of 1024-bit key length, and the National Institute of Standards and Technology (NIST) is recommending a transition to 2048-bit key lengths by Jan 1st, 2011.

SSL and its brethren, TLS (Transport Layer Security), provide the security and encryption necessary for secure communications over the internet, and particularly for creating an encrypted link between the browser and the web server. You will see 'https' in your browser address bar when visiting a site that is SSL enabled. The strength of SSL is tied to the size of the Public Key Infrastructure (PKI) key. Key length or key size (1024-bit, 2048-bit, 4096-bit) indicates the strength of the encryption algorithm: the longer the key, the harder it is to break. In order to enable an SSL connection, the server needs to have a digital certificate installed. If you have multiple servers, each requiring SSL, then each server must have a digital certificate.

Transactions handled over SSL can require substantial computational power to establish the connection (handshake) and then to encrypt and decrypt the transferred data. If you need the same performance as for non-secured data, then additional computing power (CPU) is needed. SSL processing can be up to 5 times more computationally expensive than clear text in order to deliver the same level of performance, no matter which vendor is providing the hardware. This can have significant, detrimental ramifications for server performance. SSL offload takes much of that computing burden off the servers and places it on dedicated SSL hardware. SSL offload allows organizations to migrate 100% of their communications to SSL for greater security, consolidation of certificates, centralized management, and reduction of cost, and it allows for selective content encryption and encrypted cookies, along with the ability to inspect and modify encrypted traffic. SSL offloading can relieve the web server of the processing burden of encrypting and/or decrypting traffic sent via SSL.

Customers, vendors and the industry as a whole will soon face the challenge of deciding what to do regarding their SSL strategy. Those who have valid 1024-bit certificates need to understand the ramifications of the switch, and that the next time they go to renew their certificates, they will be forced to buy 2048-bit certificates. This will drastically affect their SSL capacity on both the servers and the load balancer. There is a significant increase in the computational power needed going from 1024-bit to 2048-bit, and an exponential drop-off in performance when doubling key sizes, regardless of the platform or vendor. Most CAs, like Entrust, have already stopped issuing 1024-bit certificates, and VeriSign will stop doing so in 4-5 months. Since many certificate vendors are now only issuing 2048-bit certificates, customers might not understand the potential impact on SSL performance capacity. The overall performance impact of 2048-bit keys on the servers, if you don't offload, will increase significantly. This can be a challenge when you have hundreds of servers providing content. Existing certificates issued with 1024-bit encryption will not stop working.
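To get a feel for that drop-off on your own hardware, here is a small JDK-only sketch (my own, not F5 code) that times RSA private-key signing - roughly the expensive, server-side step of an RSA handshake - at both key lengths. Expect the 2048-bit loop to take several times longer:

import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class RsaCostDemo {

    // time 'ops' private-key signatures with the given key pair, in milliseconds
    static long signMillis(KeyPair kp, int ops) throws Exception {
        Signature sig = Signature.getInstance("SHA256withRSA");
        byte[] msg = "sample handshake payload".getBytes();
        long start = System.nanoTime();
        for (int i = 0; i < ops; i++) {
            sig.initSign(kp.getPrivate());
            sig.update(msg);
            sig.sign();
        }
        return (System.nanoTime() - start) / 1000000L;
    }

    public static void main(String[] args) throws Exception {
        for (int bits : new int[] { 1024, 2048 }) {
            KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
            gen.initialize(bits);
            KeyPair kp = gen.generateKeyPair();
            System.out.println(bits + "-bit: " + signMillis(kp, 500) + " ms for 500 ops");
        }
    }
}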
If you still have valid certificates but need to ensure you are delivering 2048-bit certificates to users (or are required to due to regulatory requirements), one option, as mentioned in Lori's blog, is to install the 2048-bit certificate on your BIG-IP LTM for the offload performance capabilities and then use your existing 1024-bit keys from BIG-IP LTM to the back-end server farm. Simply import the server certificates directly into BIG-IP. This means that the SSL certificates that would normally go on each server can be centrally stored and managed by LTM, thereby reducing the cost of the certificates needed as well as the cost of any specialized server software/hardware required. This keeps the load off the servers, potentially eliminating any performance issues, and allows you to stay current with NIST guidelines while still providing an end-to-end SSL connection for your web applications. This is a huge advantage over commodity hardware with no SSL offload capabilities.

BIG-IP LTM has specialized SSL chips which are dedicated to and optimized for SSL encryption and decryption. These chips provide the ability to maintain performance levels even at longer key lengths, whereas in commodity hardware the computational load of SSL decreases overall system performance, impacting user experience and other server tasks. The F5 SSL Acceleration Module removes all the bottlenecks for secure, wire-speed processing - including concurrent users, bulk throughput, and new transactions per second - and supports certificates up to 4096 bits. The fully loaded F5 VIPRION chassis is the most powerful SSL-offloading engine on the market today and, along with the BIG-IP LTM Virtual Edition (VE), provides a powerful solution to the SSL challenge. By front-ending BIG-IP VE farms with a VIPRION, you can assign load balancing or SSL offloading to a dedicated ADC. The same approach can remedy access to legacy systems that might not support 2048-bit certificates or cannot be upgraded due to business restrictions or other rationale. By deploying an F5 BIG-IP device with a 2048-bit certificate in front of the legacy systems, back-end encryption can be accomplished using existing 1024-bit certificates. F5 does support 4096-bit keys, future-proofing support for longer keys down the road and offering backwards and forwards compatibility, but unless there is a strong business case, 2048-bit keys are recommended for optimal performance and protection.

ps

20 Lines or Less #18
What could you do with your code in 20 Lines or Less? That's the question I ask (almost) every week, and every week I go looking to find cool new examples that show just how flexible and powerful iRules can be without getting in over your head. Back with more cool examples of what iRules can do in a scant 20 lines of code, this week's 20LoL brings you three different HTTP goodies.

Fully Decode URI
http://devcentral.f5.com/s/Wiki/default.aspx/iRules/FullyDecodeURI.html

This cool example from the wiki shows how you can ensure that your URI is not just decoded once - which can leave stray encoded characters behind - but fully decoded. This is done with a while loop and a check comparing the decoded URI with the last pass. Take a look.

when HTTP_REQUEST {
    # decode original URI.
    set tmpUri [HTTP::uri]
    set uri [URI::decode $tmpUri]

    # repeat decoding until the decoded version equals the previous value.
    while { $uri ne $tmpUri } {
        set tmpUri $uri
        set uri [URI::decode $tmpUri]
    }
    HTTP::uri $uri

    # log local0. "Original URI: [HTTP::uri]"
    # log local0. "Fully decoded URI: $uri"
}

HTTP Track Unanswered Requests

If you're looking for a way to determine how many HTTP requests are still open, or unanswered, then this is the iRule for you.

when HTTP_REQUEST {
    STATS::incr StatsDemo inFlight
    if { [STATS::get StatsDemo inFlight] > [STATS::get StatsDemo inFlightMax] } {
        STATS::set StatsDemo inFlightMax [STATS::get StatsDemo inFlight]
    }
}

when HTTP_RESPONSE {
    STATS::incr StatsDemo inFlight -1
}

Cookie Encryption Gateway

If you're looking to encrypt and decrypt ALL cookies going in and out of a virtual in one fell swoop, then here's your solution. Normal profile configuration requires you to name each cookie that's going to be encrypted. This iRule allows you to add or remove cookies from your application at will while being sure they're always going to be secured.

when RULE_INIT {
    # Exposed passphrase, but this key can be synchronized to the peer LTM
    set ::passphrase "secret"

    # Private passphrase, but it isn't synchronized. On LTM failover to
    # its peer, applications relying on the encrypted cookies will break.
    # set ::passphrase [AES::key]
}

when HTTP_REQUEST {
    foreach { cookieName } [HTTP::cookie names] {
        HTTP::cookie decrypt $cookieName $::passphrase
    }
}

when HTTP_RESPONSE {
    foreach { cookieName } [HTTP::cookie names] {
        HTTP::cookie encrypt $cookieName $::passphrase
    }
}

There they are, this week's 20 Lines or Less examples. This series has been awesome fun so far, and I'm constantly impressed with the kinds of things that can be done in less than 21 lines of code. If you have any ideas, suggestions, comments, etc., please let me know. I'd love to hear your thoughts. Until next time, code hard.

#Colin