encryption
BIG-IP Edge Client 2.0.2 for Android
Earlier this week F5 released our BIG-IP Edge Client for Android with support for the new Amazon Kindle Fire HD. You can grab it from Amazon instantly for your Android device. By supporting the BIG-IP Edge Client on Kindle Fire products, F5 is helping businesses secure personal devices connecting to the corporate network and helping end users be more productive, so it's perfect for BYOD deployments. The BIG-IP® Edge Client™ for Android 4.x (Ice Cream Sandwich) or later devices secures and accelerates mobile device access to enterprise networks and applications using SSL VPN and optimization technologies. Access is provided as part of an enterprise deployment of F5 BIG-IP® Access Policy Manager™, Edge Gateway™, or FirePass™ SSL-VPN solutions.

Features of the BIG-IP® Edge Client™ for Android 4.x (Ice Cream Sandwich) and later devices:
- Provides accelerated mobile access when used with F5 BIG-IP® Edge Gateway
- Automatically roams between networks to stay connected on the go
- Full Layer 3 network access to all your enterprise applications and files
- Supports multi-factor authentication with client certificates
- Lets you use a custom URL scheme to create Edge Client configurations and to start and stop the Edge Client

BEFORE YOU DOWNLOAD OR USE THIS APPLICATION YOU MUST AGREE TO THE EULA HERE: http://www.f5.com/apps/android-help-portal/eula.html
BEFORE YOU CONTACT F5 SUPPORT, PLEASE SEE: http://support.f5.com/kb/en-us/solutions/public/2000/600/sol2633.html

If you have an iOS device, you can get the F5 BIG-IP Edge Client for Apple iOS, which supports the iPhone, iPad, and iPod Touch. We are also working on a Windows 8 client, which will be ready for the Windows 8 general availability.

ps

Resources:
- F5 BIG-IP Edge Client
- Samsung F5 BIG-IP Edge Client
- Rooted F5 BIG-IP Edge Client
- F5 BIG-IP Edge Portal for Apple iOS
- F5 BIG-IP Edge Client for Apple iOS
- F5 BIG-IP Edge apps for Android
- Securing iPhone and iPad Access to Corporate Web Applications – F5 Technical Brief
- Audio Tech Brief - Secure iPhone Access to Corporate Web Applications
- iDo Declare: iPhone with BIG-IP

Technorati Tags: F5, infrastructure 2.0, integration, cloud connect, Pete Silva, security, business, education, technology, application delivery, ipad, cloud, context-aware, iPhone, web, internet, hardware, audio, whitepaper, apple, iTunes

Encryption error - SAML assertion: response is not encrypted
We are trying to configure our APM with Azure SAML authentication. Login succeeds, but we then get an error, and the logs show the following:

modules/Authentication/Saml/SamlSPAgent.cpp: 'verifyAssertionSignature()': 5374: Verification of SAML signature #2 succeeded
----------------------- SAML2Websak_act_saml_auth_ag failed to parse assertion, error: Response is not encrypted
......................
a6559abf: Following rule 'fallback' from item 'SAML Auth' to ending 'Deny'

As a result, the login is denied. Is this related to the certificate or to RSA encryption? We have tried various options, but it always comes back to the same error.

Your SSL Secrets Uncovered
Get Started with SSL Orchestrator

SSL and its successor TLS are becoming more prevalent for securing IP communications on the internet. It's not just financial, health care, or other sensitive sites; even search engines routinely use the encryption protocol. This can be good or bad. Good, in that all communications are scrambled from prying eyes, but potentially hazardous if attackers are hiding malware inside encrypted traffic. If the traffic is encrypted and simply passed through, inspection engines are unable to intercept that traffic for a closer look the way they can with clear-text communications, and the entire 'defense-in-depth' strategy built on IPS systems and NGFWs loses effectiveness.

F5 BIG-IP can solve these SSL/TLS challenges with an advanced threat protection system that enables organizations to decrypt encrypted traffic within the enterprise boundaries, send it to an inspection engine, and gain visibility into outbound encrypted communications to identify and block zero-day exploits. In this case, only the interesting traffic is decrypted for inspection, not all of the wire traffic, thereby conserving processing resources on the inspecting device. You can dynamically chain services based on a context-based policy to deploy security efficiently. This solution is supported across the existing F5 BIG-IP v12 family of products with F5 SSL Orchestrator and is integrated with solutions such as FireEye NX, Cisco ASA FirePOWER, and Symantec DLP. Here I'll show you how to complete the initial setup.

A few things to know beforehand: from a licensing perspective, the F5 SSL visibility solution can be deployed using either the BIG-IP system or the purpose-built SSL Orchestrator platform. Both have the same SSL intercept capabilities with different licensing requirements. To deploy using BIG-IP, you'll need BIG-IP LTM for SSL offload, traffic steering, and load balancing, plus the SSL forward proxy for outbound SSL visibility. Optionally, you can also consider the URL filtering subscription to enforce corporate web-use policies and/or the IP Intelligence subscription for reputation-based web blocking. For the purpose-built solution, all you need is the F5 Security SSL Orchestrator hardware appliance.

The initial setup addresses URL filtering, SSL bypass, and the F5 iApps template. URL filtering allows you to select specific URL categories that should bypass SSL decryption. Normally this is done out of concern for user privacy, or for categories that contain items (such as software update tools) that may rely on specific SSL certificates being presented as part of a verification process. Before configuring URL filtering, we recommend updating the URL database. This must be performed from the BIG-IP system command line. Make sure you can reach download.websense.com on port 80 from the BIG-IP system. Then, from the BIG-IP LTM command line, type the following commands:

modify sys url-db download-schedule urldb download-now false
modify sys url-db download-schedule urldb download-now true

To list all the URL categories supported by the BIG-IP system, run the following command:

tmsh list sys url-db url-category | grep url-category

Next, you'll want to configure data groups for SSL bypass. You can choose to exempt traffic from SSL interception based on various parameters such as source IP address, destination IP address, subnet, hostname, protocol, URL category, IP Intelligence category, and IP geolocation. This is achieved by configuring the SSL bypass in the iApps template, which references the data groups in the TCP service chain classifier rules.
A data group is a simple group of related elements, represented as key-value pairs. The following example describes the configuration steps for creating a URL category data group to bypass HTTPS traffic to financial websites (a rough tmsh sketch appears at the end of this article). For the BIG-IP system deployment, download the latest release of the iApps template and import it to the BIG-IP system. Extract (unzip) the ssl-intercept-12.1.0-1.5.7.zip template (or any newer version available) and follow the steps to import it into the BIG-IP web configuration utility. From there, you'll configure your particular inspection engine and then simply follow the iApp questionnaire in the BIG-IP admin UI. You'll need to select and/or fill in different values in the wizard to enable the SSL orchestration functionality. We have deployment guides covering the detailed specifics, and from there you'll be able to send your now-unencrypted traffic to your inspection engine for a more secure network.

ps

Resources:
- Ponemon Report: Application Security in the Changing Risk Landscape
- IDC Report: The Blind State of Rising SSL Traffic
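As a rough illustration of the data group step referenced above, here is a hedged tmsh sketch that creates a string data group holding a URL category to bypass. The data group name and the category path are placeholders (assumptions, not values the iApp requires); verify the exact category names on your system with the tmsh list command shown earlier, and confirm how your iApp version expects the data group to be named and referenced.

# Hypothetical example only - the data group name and category path are placeholders.
tmsh create ltm data-group internal ssl_bypass_urlcat_financial type string records add { "/Common/Financial_Data_and_Services" { } }

# Review the data group before referencing it in the SSL intercept iApp's classifier rules.
tmsh list ltm data-group internal ssl_bypass_urlcat_financial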
cookie encryption passphrase

I realize this is a pretty basic question, so don't skewer me. I want to enable cookie encryption, which seems like a very painless process, but I'm curious what the cookie encryption passphrase is used for. Will it need to be given out to users? What is its purpose, and when is it used? Is further configuration needed on other devices for it? Any and all help is always appreciated.

ICAP Over HTTPS
We have some conflicting requirements: applications that require end-to-end encryption must also send uploaded files via ICAP to our Content Analysis platform. The Content Analysis platform sandboxes and scans files for malicious content and supports ICAP over HTTPS on port 11344. However, as far as I can tell, the F5 ASM only supports sending traffic to ICAP over HTTP on port 1344 (or other HTTP ports). Is anyone aware of a workaround to do ICAP over HTTPS so that these files are never sent in the clear? This is critical if we are going to meet customer requirements. Can any F5 employees chime in on whether this is a planned future feature? We are currently on 12.1.2.

Android Encrypted Databases
The Android development community, as might be expected, is a pretty vibrant community with a lot of great contributors helping people out. Since Android is largely based upon Java, there is a lot of skill reuse between the Java client development community and the Android development community. As I mentioned before, encryption as a security topic is perhaps the weakest link in that community at this time. Since a phone or tablet could end up in someone else's hands much more easily than your desktop or even laptop, it is something that needs a lot more attention from business developers. When I set out to write my first complex app for Android, I determined to report back to you from time to time about what needed better explanation or more intuitive solutions. Much has been done in the realm of "making it easier", except for security topics, which still rank pretty low on the priority list. So using encrypted SQLite databases is the topic of this post. If you think it's taking an inordinate amount of time for me to complete this app, consider that I'm doing it outside of work. This blog was written during work hours, but all of the rest of the work is squeezed into two hours a night on the nights I'm able to dedicate time. Which is far from every night.

For those of you who are not developers, here's the synopsis so you don't have to paw through code with us: it's not well documented, but it's possible, with some caveats. I wouldn't use this method for large databases that need indexes over them, but for securing critical data it works just fine. At the end I propose a far better solution that is outside the purview of app developers and would pretty much have to be implemented by the SQLite team.

Okay, only developers left? Good. In my research, there were very few useful suggestions for designing secure databases. They fall into three categories:
1. Use the NDK to write a variant of SQLite that encrypts at the file level. For most Android developers this isn't an option, and I'm guessing the SQLite team wouldn't be thrilled about you mucking about with their database – it serves a lot more apps than yours.
2. Encrypt the entire SD card through the OS and then store the DB there. This one works, but it slows down the entire tablet or phone because you've now (again) mucked with resources used by other apps. I will caveat that if you can get your users to do this, it is the currently available solution that allows indices over encrypted data.
3. Use one of several early-beta DB encryption tools. I was uncomfortable doing this with production systems. You may feel differently, particularly after some of them have matured.

I didn't like any of these options, so I did what we've had to do in the past when a piece of data was so dangerous in the wrong hands that it needed encrypting: I wrote an interface to the DB that encrypts and decrypts data as it is inserted and removed. In Android, the only oddity you won't find in other Java environments – or can more easily get around in other Java environments – is filling list boxes from the database. For that I had to write a custom adapter that takes care of on-the-fly decryption and insertion into the list. My solution follows. There is a large variety of ways you could solve this problem in Java; this is where I went because I don't have a lot of rows in any given table and the data does not need to be indexed.
If either of those conditions is untrue for your implementation, you'll either have to modify this implementation or find an alternate solution.

So first, the encryption handler. Note that in this sample I chose to encode encrypted arrays of bytes as Strings. I do not guarantee this will work for your scenario, and I suggest you keep them as arrays of bytes until after decryption. Also note that this sample was built from a working one by obfuscating what the actual source did and making some modifications to simplify the example. It was not tested after the final round of simplification, but should be correct throughout.

package com.company.monitor;

import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

import android.util.Base64;

public class DBEncryptor {

    private static byte[] key;
    // The published sample obfuscated this value; "AES" matches the comments and the 16-byte key below.
    private static final String cypherType = "AES";

    public DBEncryptor(String localPass) {
        // save the encoded key for future use
        // - note that this keeps it in memory, and is not strictly safe
        key = encode(localPass.getBytes()).getBytes();
        String keyCopy = new String(key);
        while (keyCopy.length() < 16)
            keyCopy = keyCopy + keyCopy;

        byte keyA[] = keyCopy.getBytes();
        if (keyA.length > 16) {
            // Truncate to the 16 bytes AES-128 expects.
            key = new byte[16];
            System.arraycopy(keyA, 0, key, 0, 16);
        } else {
            key = keyA;
        }
    }

    public String encode(byte[] s) {
        return Base64.encodeToString(s, Base64.URL_SAFE);
    }

    public byte[] decode(byte[] s) {
        return Base64.decode(s, Base64.URL_SAFE);
    }

    public byte[] getKey() {
        // return a copy of the key.
        return key.clone();
    }

    public String encrypt(String toEncrypt) throws Exception {
        // Create your Secret Key Spec, which defines the key transformations
        SecretKeySpec skeySpec = new SecretKeySpec(key, cypherType);
        // Get the cipher
        Cipher cipher = Cipher.getInstance(cypherType);
        // Initialize the cipher
        cipher.init(Cipher.ENCRYPT_MODE, skeySpec);
        // Encrypt the string into bytes
        byte[] encryptedBytes = cipher.doFinal(toEncrypt.getBytes());
        // Convert the encrypted bytes back into a string
        String encrypted = encode(encryptedBytes);
        return encrypted;
    }

    public String decrypt(String encryptedText) throws Exception {
        // Get the secret key spec
        SecretKeySpec skeySpec = new SecretKeySpec(key, cypherType);
        // create an AES Cipher
        Cipher cipher = Cipher.getInstance(cypherType);
        // Initialize it for decryption
        cipher.init(Cipher.DECRYPT_MODE, skeySpec);
        // Get the decoded bytes
        byte[] toDecrypt = decode(encryptedText.getBytes());
        // And finally, do the decryption.
        byte[] clearText = cipher.doFinal(toDecrypt);
        return new String(clearText);
    }
}

So what we are essentially doing is encrypting the string with standard Java crypto classes and then base-64 encoding the encrypted bytes so they can be stored as a string (the passphrase used to build the key is also base-64 encoded); we simply reverse the process to decrypt a string. Note that this class is also useful if you're storing values in a Properties file and wish them to be encrypted, since it simply operates on strings. The value you pass in to create the key needs to be something that is unique to the user or the tablet. When it comes down to it, this is your password, and it should be treated as such (hence why I changed the parameter name to localPass). For seasoned Java developers, there's nothing new on Android at this juncture. We're just encrypting and decrypting data.
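Before moving on, here is a minimal, hypothetical round trip through the class above; the passphrase and sample value are placeholders, and in a real app the passphrase would come from the user or a credential store rather than a literal.

try {
    DBEncryptor enc = new DBEncryptor("user-supplied-passphrase"); // placeholder passphrase
    String secret = "555-867-5309";                // value we want to protect
    String stored = enc.encrypt(secret);           // base-64 ciphertext, safe to store as a string
    String recovered = enc.decrypt(stored);        // should equal the original value
    System.out.println(recovered.equals(secret));  // prints true if the round trip worked
} catch (Exception e) {
    // encrypt()/decrypt() declare "throws Exception"; handle or log appropriately.
    e.printStackTrace();
}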
Next, we leave the realm of other Java platforms, because the database is SQLite, which is not generally what you're writing Java against outside of Android. Bear with me while we go over this class. The SQLite database class follows; of course it would need heavy modification to work with your database, but the skeleton is here. Note that not all fields have to be encrypted. You can mix and match with no problems at all, and that is one of the things I like about this solution: if I need an index for any reason, I can create an unencrypted field of a type other than BLOB and index on it.

package com.company.monitor;

import android.content.ContentValues;
import android.content.Context;
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteDatabase.CursorFactory;
import android.database.sqlite.SQLiteOpenHelper;

public class DBManagernames extends SQLiteOpenHelper {

    public static final String TABLE_NAME = "Map";
    public static final String COLUMN_ID = "_id";
    public static final String COLUMN_LOCAL = "Local";
    public static final String COLUMN_WORLD = "World";

    private static int indexId = 0;
    private static int indexLocal = 1;
    private static int indexWorld = 2;

    private static final String DATABASE_NAME = "Mappings.db";
    private static final int DATABASE_VERSION = 1;

    // SQL statement to create the DB
    private static final String DATABASE_CREATE = "create table " + TABLE_NAME + "("
            + COLUMN_ID + " integer primary key autoincrement, "
            + COLUMN_LOCAL + " BLOB not null, "
            + COLUMN_WORLD + " BLOB not null);";

    public DBManagernames(Context context, CursorFactory factory) {
        super(context, DATABASE_NAME, factory, DATABASE_VERSION);
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        db.execSQL(DATABASE_CREATE);
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        // TODO Auto-generated method stub
        // Yeah, this isn't implemented in production yet either. It's low on the list, but definitely "on the list".
    }

    // Assumes DBEncryptor was used to convert the fields of name before calling insert
    public void insertToDB(DBNameMap name) {
        ContentValues cv = new ContentValues();
        cv.put(COLUMN_LOCAL, name.getName().getBytes());
        cv.put(COLUMN_WORLD, name.getOtherName().getBytes());
        getWritableDatabase().insert(TABLE_NAME, null, cv);
    }

    // Returns the encrypted values to be manipulated with the decryptor.
    public DBNameMap readFromDB(Integer index) {
        SQLiteDatabase db = getReadableDatabase();
        DBNameMap hnm = new DBNameMap();
        Cursor cur = null;
        try {
            cur = db.query(TABLE_NAME, null, "_id='" + index.toString() + "'", null, null, null, COLUMN_ID);
            // Cursors consistently return before the first element. Move to the first.
            cur.moveToFirst();
            byte[] name = cur.getBlob(indexLocal);
            byte[] othername = cur.getBlob(indexWorld);
            hnm = new DBNameMap(new String(name), new String(othername), false);
        } catch (Exception e) {
            System.out.println(e.toString());
            // Do nothing - we want to return the empty name map.
        }
        return hnm;
    }

    // NOTE: This routine assumes "String name" is the encrypted version of the string.
    public DBNameMap getFromDBByName(String name) {
        SQLiteDatabase db = getReadableDatabase();
        Cursor cur = null;
        String check = null;
        try {
            // Note - the production version of this routine actually uses the "where" parameter to get the
            // correct element instead of looping the table. This is here for your debugging use.
            cur = db.query(TABLE_NAME, null, null, null, null, null, null);
            for (cur.moveToFirst(); (!cur.isLast()); cur.moveToNext()) {
                check = new String(cur.getBlob(indexLocal));
                if (check.equals(name))
                    return new DBNameMap(check, new String(cur.getBlob(indexWorld)), false);
            }
            if (cur.isLast())
                return new DBNameMap();
            return new DBNameMap(cur.getString(indexLocal), cur.getString(indexWorld), false);
        } catch (Exception e) {
            System.out.println(e.toString());
            return new DBNameMap();
        }
    }

    // Used by our list adapter - coming next in the blog.
    public Cursor getCursor() {
        try {
            return getReadableDatabase().query(TABLE_NAME, null, null, null, null, null, null);
        } catch (Exception e) {
            System.out.println(e.toString());
            return null;
        }
    }

    // This is used in our list adapter for mapping to fields.
    public String[] listColumns() {
        return new String[] { COLUMN_LOCAL };
    }
}

I am not including the DBNameMap class, as it is a simple container with two String fields that maps one name to another.

Finally, we have the list adapter. Android requires that you populate lists through an adapter, and it has several base ones to work with. The problem with SimpleCursorAdapter is that it assumes an unencrypted database, and we just invested a ton of time making the DB encrypted. There are several possible solutions to this problem, and I present the one I chose here. I extended ResourceCursorAdapter and implemented decryption right in its routines, leaving not much to do in the list-population section of my activity but to assign the correct adapter.

package com.company.monitor;

import android.content.Context;
import android.database.Cursor;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.ResourceCursorAdapter;
import android.widget.TextView;

public class EncryptedNameAdapter extends ResourceCursorAdapter {

    private String pw;

    public EncryptedNameAdapter(Context context, int layout, Cursor c, boolean autoRequery) {
        super(context, layout, c, autoRequery);
    }

    public EncryptedNameAdapter(Context context, int layout, Cursor c, int flags) {
        super(context, layout, c, flags);
    }

    // This class must know what the encryption key is for the DB before filling the list,
    // so this call must be made before the list is populated. The first call after the constructor works.
    public void setPW(String pww) {
        pw = pww;
    }

    @Override
    public View newView(Context context, Cursor cur, ViewGroup parent) {
        LayoutInflater li = (LayoutInflater) context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
        return li.inflate(R.layout.my_list_entry, parent, false);
    }

    @Override
    public void bindView(View arg0, Context arg1, Cursor arg2) {
        // Get an encryptor/decryptor for our data.
        DBEncryptor enc = new DBEncryptor(pw);
        // Get the TextView we're placing the data into.
        TextView tvLocal = (TextView) arg0.findViewById(R.id.list_entry_name);
        // Get the bytes from the cursor
        byte[] bLocal = arg2.getBlob(arg2.getColumnIndex(DBManagernames.COLUMN_LOCAL));
        // Convert bytes to a string
        String local = new String(bLocal);
        try {
            // decrypt the string
            local = enc.decrypt(local);
        } catch (Exception e) {
            System.out.println(e.toString());
            // local holds the encrypted version at this point; fix it.
            // We'll return an empty string for simplicity.
            local = new String();
        }
        tvLocal.setText(local);
    }
}

The EncryptedNameAdapter can be set as the source for any list box just like most examples set an ArrayAdapter as the source. Of course, it helps if you've put some data in the database first; a rough sketch of wiring it all together follows.
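Here is a minimal, hypothetical sketch of how the pieces above could be wired together inside an Activity. The layout and view ids (R.layout.my_list_entry, R.id.name_list) and the userPassphrase variable are assumptions for illustration only; since encrypt() declares a checked exception, the insert is wrapped in try/catch.

DBEncryptor enc = new DBEncryptor(userPassphrase);          // userPassphrase is a placeholder
DBManagernames db = new DBManagernames(this, null);

// insertToDB() expects already-encrypted values, so encrypt before inserting.
try {
    db.insertToDB(new DBNameMap(enc.encrypt("localName"), enc.encrypt("worldName"), false));
} catch (Exception e) {
    e.printStackTrace();
}

// Feed the list through the decrypting adapter.
EncryptedNameAdapter adapter = new EncryptedNameAdapter(this, R.layout.my_list_entry, db.getCursor(), false);
adapter.setPW(userPassphrase);                               // must be set before the list renders
ListView lv = (ListView) findViewById(R.id.name_list);       // R.id.name_list is a placeholder id
lv.setAdapter(adapter);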
That's it for this time. There's a lot more going on with this project, and I'll present my solution for SSL certificate verification sometime in the next couple of weeks, but for now, if you need to encrypt some fields of a database, this is one way to get it done. Ping me on any of the social media outlets or here in the comments if you know of a more elegant or less resource-intensive solution; I'm always up for learning more. And please, if you find an error, it was likely introduced in the transition to something I was willing to throw out here publicly, but let me know so others don't have problems. I've done my best not to introduce any, but I always get a bit paranoid if I changed the code after my last debug session – and I did, to simplify and sanitize it.

encryption with AES/CRYPTO - how to securely store the encryption key
Dear All, I need to encrypt/decrypt some sensitive data which is permanently stored in a data group. Is there a way to store the encryption key so that it remains accessible from an iRule but at the same time is not present in the code? I anticipate that absolute security is problematic here (if such a thing exists at all :)), but what would be the most secure way of doing this on BIG-IP? The ideal scenario would be to generate a key programmatically and store it somewhere on the BIG-IP file system (or in a separate admin partition) so that it is accessible to a specific iRule (ideally just one rule) but is not accessible from the GUI/CLI. The iRule could then be signed with a certificate stored on the HSM, and any modifications to the iRule would be captured in the audit log, syslog, and eventually the SIEM run by our SOC. The key needs to be hidden if not from all user accounts, then at least from all except one "break-glass" account whose use and credentials would be strictly controlled (administratively). Or maybe I'm trying to reinvent the wheel and it is possible to simply use the HSM to store symmetric keys? Any thoughts would be very much appreciated!

Databases in the Cloud Revisited
A few of us were talking on Facebook the other day about high speed rail (HSR) and where and when it makes sense, and I finally said that it almost never does. Trains lost out to automobiles precisely because they are rigid and inflexible, while population densities and travel requirements are highly flexible. That hasn't changed since the early 1900s and isn't likely to in the future, so we should be looking at different technologies to answer the problems that HSR tries to address. And since everything in my universe is inspiration for either blogging or gaming, this led me to reconsider the state of cloud and the state of cloud databases in light of synergistic technologies (did I just use "synergistic technologies" in a blog? Arrrggghhh…).

There are several reasons why your organization might be looking to move out of a physical datacenter, or to have a backup datacenter that is completely virtual. Think of the disaster in Japan or Hurricane Katrina. In both cases, having even the mission-critical portions of your datacenter replicated to the cloud would keep your organization online while you recovered from all of the other very real issues such a disaster creates. In other cases, if you are a global organization, the cost of maintaining your own global infrastructure might well be more than utilizing a global cloud provider for many services… Though I've not checked, if I were CIO of a global organization today, I would be looking into it pretty closely, particularly since this option should continue to get more appealing as technology catches up with the hype.

Today, though, I'm going to revisit databases, because like trains, they sit in one place and are rigid. If you've ever played with database Continuous Data Protection or near-real-time replication, you know this particular technology area has issues that are only now starting to see technological resolution. Over the last year, I have talked about cloud and remote databases a few times, discussing early options for cloud databases and mentioning Oracle GoldenGate – or praising GoldenGate is probably more accurate.

Going to the west in the US? HSR is not an option.

The thing is that the options get a lot more interesting if you have GoldenGate available. There are a ton of tools, both integral to database systems and third-party, that allow you to encrypt data at rest these days, and while it is not the most efficient access method, it does make your data more protected. Add to this capability the functionality of Oracle GoldenGate – or, if you don't need heterogeneous support, any of the various database replication technologies available from Oracle, Microsoft, and IBM – and you can seamlessly move data to the cloud behind the scenes, without interfering with your existing database. Yes, initial configuration of database replication will generally require work on the database server, but once configured, most of these tools run without interfering with the functionality of the primary database in any way – though if it is one that runs inside the RDBMS, remember that it will use up CPU cycles at the least, and most will work inside of a transaction so that they can ensure transaction integrity on the target database, so know your solution.
Running inside the primary transaction is not necessary, and for many uses may not even be desirable, so if you want your commits to happen rapidly, something like GoldenGate that spawns a separate transaction for the replica is a good option… Just remember that you then need to pay attention to alerts from the replication tool so that you don't end up with successful transactions on the primary that never get replicated because something went wrong with the transaction on the secondary. But for DBAs, this is just an extension of their daily work, as long as someone is watching the logs.

With the advent of GoldenGate, advanced database encryption technology, and products like our own BIG-IP WOM, you now have the ability to drive a replica of your database into the cloud. This is certainly a boon for backup purposes, but it also adds an interesting perspective to application mobility. You can turn on replication from your data center to the cloud, or from cloud provider A to cloud provider B, then use vMotion to move your application VMs… and you're off to a new location. If you think you'll be moving frequently, this can all be configured ahead of time, so you can flick a switch and move applications at will. You will, of course, have to weigh the impact of complete or near-complete database encryption against the benefits of cloud usage. Even if you use the adaptability of the cloud to speed encryption and decryption operations by distributing them over several instances, you'll still have to pay for that CPU time, so there is a balancing act that needs some exploration before you'll be certain this solution is a fit for you. And at this juncture, I don't believe putting unencrypted corporate data of any kind into the cloud is a good idea. Every time I say that, it angers some cloud providers, but frankly, the cloud being new and by definition a shared resource, it is up to the provider to prove it is safe, not up to us to take their word for it. Until then, encryption is your friend, both going to and from the cloud and at rest in the cloud. I say the same thing about Cloud Storage Gateways; it is just a function of the current state of cloud technology, not some kind of unreasoning bias.

So the key, then, is to make sure your applications are ready to be moved. This is actually pretty easy in the world of portable VMs, since the entire VM will pick up and move. The only catch is that you need to make sure users can get to the application at the new location. There are a ton of global DNS solutions, like F5's BIG-IP Global Traffic Manager, that can get your users where they need to be, since your public-facing IPs will change when moving from organization to organization. Everything else should be set, since you can use internal IP addresses to communicate between your application VMs and database VMs. Utilizing some form of in-flight encryption and some form of acceleration for your database replication will round out the solution architecture and leave you with a road map that looks more like a highway map than an HSR map. More flexible, more pervasive.

4 things you can do in your code now to make it more scalable later
No one likes to hear that they need to rewrite or re-architect an application because it doesn't scale. I'm sure no one at Twitter thought they'd need to overhaul their architecture because it gained popularity as quickly as it did. Many developers, especially in the enterprise space, don't worry about the kind of scalability that sites like Twitter or LinkedIn need to concern themselves with, but they still need to be (or at least should be) concerned with scalability in general and with the effects of inserting an application into a high-scalability environment, such as one fronted by a load balancer or application delivery controller. There are some very simple things you can do in your code, while you're developing an application, that can ease the transition into a high-availability architecture and eventually lead to a faster, more scalable application. Here are four things you can do now - and why - to make your application fit better into a high-availability environment in the future and avoid rewriting or re-architecting your solutions later.

1. Don't assume your application is always responsible for cookie encryption

Encrypting cookies in the privacy-lax environment that is today's Internet is the responsible thing to do. In the first iterations of your application you will certainly be responsible for handling the encryption and decryption of cookies, but later on, when the application is inserted into a high-availability environment and an application delivery controller (ADC) sits in front of it, that functionality can be offloaded to the ADC. Offloading the responsibility for encryption and decryption of cookies to the ADC improves performance because the ADC employs hardware acceleration. To make it easier to offload this responsibility to an ADC in the future while still supporting it early on, use a configuration flag to indicate whether you should decrypt or encrypt cookies before examining them. That way you can simply change the configuration flag later on and immediately take advantage of a performance boost from the network infrastructure.

2. Don't assume the client IP is accurate

If you need to use, store, or access the client's IP address, don't assume the address you see on the incoming connection is accurate. Early on it certainly will be, but when the application is inserted into a high-availability environment and a full-proxy solution sits in front of your application, it won't be. A full proxy mediates between client and server, which means it is the client when talking to the server, so its IP address becomes the "client IP". Almost all full proxies insert the real client IP address into the X-Forwarded-For HTTP header, so you should always check that header before falling back to the connection's client IP address. If there is an X-Forwarded-For value, you'll more than likely want to use it instead. This simple check, sketched below, should alleviate the need to make changes to your application when it's moved into a high-availability environment.
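Here is a minimal sketch of that check for a Java servlet environment. It assumes the standard javax.servlet API and a proxy you trust; real deployments should also decide how to handle comma-separated proxy chains and spoofed headers from untrusted sources.

import javax.servlet.http.HttpServletRequest;

public final class ClientIpUtil {

    // Prefer the X-Forwarded-For header inserted by a full proxy or ADC;
    // fall back to the TCP peer address when the header is absent.
    public static String getClientIp(HttpServletRequest request) {
        String forwardedFor = request.getHeader("X-Forwarded-For");
        if (forwardedFor != null && !forwardedFor.isEmpty()) {
            // The header may contain a chain "client, proxy1, proxy2";
            // the left-most entry is the original client.
            return forwardedFor.split(",")[0].trim();
        }
        return request.getRemoteAddr();
    }

    private ClientIpUtil() { }
}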
3. Don't use relative paths

Always use the FQDN (fully qualified domain name) when referencing images, scripts, and other assets inside your application. Furthermore, use different host names for different content types, e.g. images.example.com and scripts.example.com. Early on, all the hosts will probably point to the same server, but ensuring that you're using FQDNs now makes architecting that high-availability environment much easier later. While any intelligent application delivery controller can perform layer 7 switching on any part of the URI and arrive at the same architecture, it's much more efficient to load balance and route application data based on the host name. By using FQDNs and separating host names by content type, you can later optimize and tune specific servers for delivery of that content, or use the CNAME trick to improve parallelism and performance in request-heavy applications.

4. Separate out API rate limiting functionality

If you're writing an application with an API for integration later, separate out the rate-limiting functionality. Initially you may need it, but when the application is inserted into a high-availability environment with an intelligent application delivery controller, the controller can take over that functionality and spare your application from having to reject requests that exceed the set limits. As with cookie encryption, use a configuration flag to determine whether you should enforce this limit, so it can easily be turned on and off at will (a small sketch of this flag pattern appears at the end of this post). By offloading the responsibility for rate limiting to an application delivery controller, you remove the need for the server to waste resources (connections, RAM, cycles) on requests it won't respond to anyway. This improves the capacity of the server, and thus your application, making it more efficient and more scalable.

By thinking about the ways in which your application will need to interact with a high-availability infrastructure later, and adjusting your code to take that into consideration, you can save yourself a lot of headaches when your application is inserted into that infrastructure. That means less rewriting of applications, less troubleshooting, and fewer servers needed to scale up quickly to meet demand. Happy coding!
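As a rough illustration of the configuration-flag idea from items 1 and 4, here is a hypothetical sketch. The property names, defaults, and the simplistic in-memory counter are assumptions; a production rate limiter would need per-client keys, time windows, and whatever thread-safety the surrounding framework requires.

import java.util.Properties;
import java.util.concurrent.atomic.AtomicInteger;

public class ApiGate {

    private final boolean enforceRateLimit;       // set to false once an ADC owns rate limiting
    private final boolean decryptCookiesLocally;  // set to false once an ADC owns cookie crypto
    private final int maxRequestsPerWindow;
    private final AtomicInteger requestCount = new AtomicInteger();

    public ApiGate(Properties config) {
        // Placeholder property names - read them from whatever config source the app already uses.
        enforceRateLimit = Boolean.parseBoolean(config.getProperty("api.rateLimit.enabled", "true"));
        decryptCookiesLocally = Boolean.parseBoolean(config.getProperty("cookies.decryptLocally", "true"));
        maxRequestsPerWindow = Integer.parseInt(config.getProperty("api.rateLimit.max", "100"));
    }

    // Returns true if the request may proceed; always true when the ADC handles rate limiting.
    public boolean allowRequest() {
        if (!enforceRateLimit) {
            return true;
        }
        return requestCount.incrementAndGet() <= maxRequestsPerWindow;
    }

    public boolean shouldDecryptCookies() {
        return decryptCookiesLocally;
    }
}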
F5 Labs 2019 TLS Telemetry Report Summary

Encryption standards are constantly evolving, so it is important to stay up to date with best practices. The 2019 F5 Labs TLS Telemetry Summary Report, by David Warburton with additional contributions from Remi Cohen and Debbie Walkowski, expands the scope of our research to bring you deeper insights into how encryption on the web is evolving. We look into which ciphers and SSL/TLS versions are being used to secure the Internet's top websites and, for the first time, examine the use of digital certificates on the web and look at supporting protocols (such as DNS) and application-layer headers. On average, almost 86% of all page loads over the web are now encrypted with HTTPS. This is a win for consumer privacy and security, but it also poses a problem for those scanning web traffic. In our research we found that 71% of phishing sites in July 2019 were using secure HTTPS connections with valid digital certificates. This means we have to stop training users to "look for the HTTPS at the start of the address", since attackers are using deceptive URLs to emulate secure connections for their phishing and malware sites. Read our report for details and recommendations on how to bolster your HTTPS connections.