How Malware Evades Detection
Malware loves encryption because it lets it sneak around undetected. The F5 Labs 2018 Phishing & Fraud Report explains how malware tricks users and evades detection. By cloning legitimate emails from well-known companies, attackers are improving the quality of phishing emails and fooling more unsuspecting victims. They then disguise the malware installed during phishing attacks from traditional traffic inspection devices by phoning home to encrypted sites. Let's light up how evasion happens, and get your F5 Labs 2018 Phishing & Fraud Report today. ps

Your SSL Secrets Uncovered
Get Started with SSL Orchestrator

SSL and its successor TLS are becoming more prevalent for securing IP communications on the internet. It's not just financial, health care or other sensitive sites; even search engines routinely use the encryption protocol. This can be good or bad. Good, in that all communications are scrambled from prying eyes, but potentially hazardous if attackers are hiding malware inside encrypted traffic. If the traffic is encrypted and simply passed through, inspection engines are unable to intercept that traffic for a closer look like they can with clear text communications. The entire 'defense-in-depth' strategy with IPS systems and NGFWs loses effectiveness. F5 BIG-IP can solve these SSL/TLS challenges with an advanced threat protection system that enables organizations to decrypt encrypted traffic within the enterprise boundaries, send it to an inspection engine, and gain visibility into outbound encrypted communications to identify and block zero-day exploits. In this case, only the interesting traffic is decrypted for inspection, not all of the wire traffic, thereby conserving processing resources of the inspecting device. You can dynamically chain services based on a context-based policy to efficiently deploy security. This solution is supported across the existing F5 BIG-IP v12 family of products with F5 SSL Orchestrator and is integrated with solutions such as FireEye NX, Cisco ASA FirePOWER and Symantec DLP. Here I'll show you how to complete the initial setup. A few things to know prior: from a licensing perspective, the F5 SSL visibility solution can be deployed using either the BIG-IP system or the purpose-built SSL Orchestrator platform. Both have the same SSL intercept capabilities with different licensing requirements. To deploy using BIG-IP, you'll need BIG-IP LTM for SSL offload, traffic steering, and load balancing, and the SSL forward proxy for outbound SSL visibility.
Optionally, you can also consider the URL filtering subscription to enforce corporate web use policies and/or the IP Intelligence subscription for reputation-based web blocking. For the purpose-built solution, all you'll need is the F5 Security SSL Orchestrator hardware appliance.

The initial setup addresses URL filtering, SSL bypass, and the F5 iApps template. URL filtering allows you to select specific URL categories that should bypass SSL decryption. Normally this is done over concerns for user privacy or for categories that contain items (such as software update tools) that may rely on specific SSL certificates being presented as part of a verification process. Before configuring URL filtering, we recommend updating the URL database. This must be performed from the BIG-IP system command line. Make sure you can reach download.websense.com on port 80 from the BIG-IP system, then from the BIG-IP LTM command line type the following commands:

modify sys url-db download-schedule urldb download-now false
modify sys url-db download-schedule urldb download-now true

To list all the URL categories supported by the BIG-IP system, run the following command:

tmsh list sys url-db url-category | grep url-category

Next, you'll want to configure data groups for SSL bypass. You can choose to exempt SSL offloading based on various parameters: source IP address, destination IP address, subnet, hostname, protocol, URL category, IP intelligence category, and IP geolocation. This is achieved by configuring the SSL bypass in the iApps template, which calls the data groups in the TCP service chain classifier rules. A data group is a simple group of related elements, represented as key-value pairs. For example, you might create a URL category data group to bypass HTTPS traffic for financial websites. For the BIG-IP system deployment, download the latest release of the iApps template and import it to the BIG-IP system.
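For illustration only, a data group for that financial-category bypass might be created from tmsh roughly as sketched below. The data-group name and the exact category path here are my own assumptions; the real category names come from the `tmsh list sys url-db url-category` output above, and the deployment guide is authoritative for how the iApp expects the records formatted.

```shell
# Hypothetical sketch: a string data group holding one URL category,
# to be referenced by the SSL bypass classifier rules (names illustrative).
tmsh create ltm data-group internal ssl_bypass_urlcat type string \
    records add { /Common/Financial_Data_and_Services { } }

# Verify the data group contents
tmsh list ltm data-group internal ssl_bypass_urlcat
```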
Extract (unzip) the ssl-intercept-12.1.0-1.5.7.zip template (or any newer version available) and follow the steps to import it into the BIG-IP web configuration utility. From there, you'll configure your particular inspection engine by following the iApp questionnaire in the BIG-IP admin UI. You'll need to select and/or fill in different values in the wizard to enable the SSL orchestration functionality. We have deployment guides for the detailed specifics, and from there you'll be able to send your now-unencrypted traffic to your inspection engine for a more secure network. ps

Resources: Ponemon Report: Application Security in the Changing Risk Landscape IDC Report: The Blind State of Rising SSL Traffic

Ask the Expert – Why SSL Everywhere?
Kevin Stewart, Security Solution Architect, talks about the paradigm shift in the way we think about IT network services, particularly SSL and encryption. Gone are the days when clear text roams freely on the internal network; organizations are looking to bring SSL all the way to the application, which brings complexity. Kevin explains some of the challenges of encrypting all the way to the application and ways to address this increasing trend. SSL is not just about protecting data in motion, it's also about privacy. ps

Related: Ask the Expert – Are WAFs Dead? RSA2015 – SSL Everywhere (feat Holmes) AWS re:Invent 2015 – SSL Everywhere…Including the Cloud (feat Stanley) F5 SSL Everywhere Solutions

Technorati Tags: f5, ssl, encryption, pki, big-ip, security, privacy, silva, video

Wireless network considerations for the enterprise
The announcement of Telstra's plans to roll out a new WiFi network providing 8,000 new WiFi hotspots around Australia is no doubt welcome news to individuals and businesses alike. New modems will be provided to two million homes and businesses to serve as one interconnected public WiFi network, literally laying the foundations for a more connected nation and advanced economy. According to the latest research by Telsyte, the rollout of Wi-Fi networks is competing with dedicated mobile broadband devices. In addition, more than 80 per cent of businesses with more than 20 employees operate Wi-Fi networks giving people's devices access to the Internet at work. For today's mobile workforce, ensuring wireless network security can be a serious challenge for businesses. Administrators face an ever-growing need to protect critical company resources from increasingly sophisticated cyber attacks. When employees access private corporate data over a wireless network, the data may be compromised by unauthorised viewers if the user is not shielding the connection from outsiders, for example, via password-protected access. As such, businesses need to consider the following options to ensure their data remains secure while offering wireless network access.

1. Use a VPN

Requiring users to connect to the WiFi network through a VPN ensures any data that passes over the network is encrypted, securing your data from external threats. With iOS 7, Apple introduced a great way to accomplish this with their Per App VPN. Per App VPN allows iOS to control which applications have access to the VPN tunnel. This gives organisations the ability to designate which applications are corporate apps and treat everything else as personal.

2. Encryption is key

Encryption is the process of transforming information using an algorithm (referred to as a cipher) to make it unreadable to anyone except those possessing special knowledge (usually referred to as a key).
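To make the cipher-and-key definition concrete, here is a minimal sketch in plain Java (not tied to any product mentioned above; the class and method names are my own) showing how a cipher, AES in GCM mode here, plus a key turns readable text into ciphertext that only a holder of the same key can reverse:

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

// Illustrative only: symmetric encryption of a string with AES-GCM.
public class CipherDemo {
    private static final int IV_LEN = 12;    // 96-bit nonce, conventional for GCM
    private static final int TAG_BITS = 128; // authentication tag length

    public static SecretKey newKey() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128); // 128-bit AES key
        return kg.generateKey();
    }

    // Returns Base64(nonce || ciphertext) so the nonce travels with the message.
    public static String encrypt(SecretKey key, String plaintext) throws Exception {
        byte[] iv = new byte[IV_LEN];
        new SecureRandom().nextBytes(iv);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, iv));
        byte[] ct = c.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));
        byte[] out = new byte[iv.length + ct.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return Base64.getEncoder().encodeToString(out);
    }

    public static String decrypt(SecretKey key, String message) throws Exception {
        byte[] in = Base64.getDecoder().decode(message);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, in, 0, IV_LEN));
        byte[] pt = c.doFinal(in, IV_LEN, in.length - IV_LEN);
        return new String(pt, StandardCharsets.UTF_8);
    }
}
```

The random nonce is prepended to the ciphertext so the receiver can decrypt; anyone tapping the wireless network sees only the Base64 blob.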
Encryption is especially important for wireless communications because wireless networks are easier to "tap" than their hard-wired counterparts. Encryption is essential when carrying out any kind of sensitive transaction, such as financial transactions or confidential communications. Network devices can offload encryption processing to the network layer, eliminating the overhead required on individual servers.

3. Turn on two-factor authentication

Two-factor authentication (TFA) has been around for many years, and the concept far pre-dates computers. The application of a keyed padlock and a combination lock to secure a single point would technically qualify as two-factor authentication: "something you have," a key, and "something you know," a combination. It essentially involves setting up a two-step process to verify the identity of someone trying to gain access to a network.

You'll Shoot Your Eye Out…
…is probably one of the most memorable lines of any holiday classic. Of course I'm referring to A Christmas Story, where a young Ralphie tries to convince his parents, teachers and Santa that the Red Ryder BB Gun is the perfect present. I don't know if there was a warning label on the 1940s edition box, but it is a good reminder from a security perspective that often we, meaning humans, are our own worst enemy when it comes to protecting ourselves. Every year about 100 or so homes burn down due to fried turkeys. A frozen one with ice crystals dropped straight in, or the ever-famous too much oil that overflows and toasts everything it touches. Even with the warnings and precautions, humans still take the risk. Warning: You can get burned badly. As if the RSA breach wasn't warning enough about the perils of falling for a phishing scam, we now learn that the South Carolina Department of Revenue breach was also due to an employee, and it only takes one, clicking a malicious email link. That curiosity led to over 3.8 million Social Security numbers, 3.3 million bank accounts, thousands of credit cards, and 1.9 million dependents' records being exposed. While the single click started it all, 2-factor authentication was not required and the stored info was not encrypted, so there is a lot of human error to go around. Plus a lot of blame being tossed back and forth, another well-used human trait: deflection. Warning: Someone else may not protect your information. While working the SharePoint Conference 2012 in Vegas a couple weeks ago, I came across an interesting kiosk that lets you take a picture and post it online for free to any number of social media sites.
It says 'Post a picture online for free,' but there didn't seem to be a Warning: 'You are also about to potentially share your sensitive social media credentials or email, which might also be tied to your bank account, with this freestanding machine that you know nothing about.' I'm sure if that was printed somewhere, bettors would think twice about that risk. If you prefer not to enter social media info, you can always have the image emailed to you (to then share), but that also (obviously) requires you to enter that information. While logon info might not be stored, email is. Yet another reason to get a throwaway email address. I'm always amazed at all the ways various companies try to make it so easy for us to offer up our information…and many of us do without considering the risks. In 2010, there were a number of photo kiosks that were spreading malware. Warning: They are computers, after all, and connected to the internet. Insider threats are also getting a lot of attention these days, with some statistics indicating that 33% of malicious or criminal attacks come from insiders. In August, an insider at Saudi Aramco released a virus that infected about 75% of the employee desktops. It is considered one of the most destructive computer sabotages inflicted upon a private company. And within the last 2 days, we've learned that the White House issued an Executive Order to all government agencies informing them of new standards and best practices around gathering, analyzing and responding to insider threats. This could be actual malicious, disgruntled employees, those influenced by a get-rich-quick scheme from an outsider, or just 'compromised' employees, like someone getting a USB drive from a friend and inserting it into a work computer. It could even be simple misuse by accident. In any event, intellectual property or personally identifiable information is typically the target. Warning: Not everyone is a saint.
The Holidays are still Happy, but wear your safety glasses, don't click questionable links even from friends, don't enter your logon credentials into a stray kiosk, and remember that a third of your staff is a potential threat. And if you are in NYC for the holidays, a limited run of "Ralphie to the Rescue!" A Christmas Story, The Musical is playing at the Lunt-Fontanne Theatre until Dec 30th. ps

References
How One Turkey Fryer Turned Into A 40-foot Inferno That Destroyed Two Cars And A Barn
S.C. tax breach began when employee fell for spear phish
5 Stages of a Data Breach
Thinking about Security from the Inside Out
Obama issues insider threat guidance for gov't agencies
National Insider Threat Policy and Minimum Standards for Executive Branch Insider Threat Programs
Insiders Big Threat to Intellectual Property, Says Verizon DBIR
Negligent Insiders and Malicious Attacks Continue to Pose Security Threat
Infographic: Protect Yourself Against Cybercrime
The Exec-Disconnect on IT Security
"Ralphie to the Rescue!" A Christmas Story, The Musical Opens On Broadway Nov. 19

BIG-IP Edge Client 2.0.2 for Android
Earlier this week F5 released our BIG-IP Edge Client for Android with support for the new Amazon Kindle Fire HD. You can grab it off Amazon instantly for your Android device. By supporting BIG-IP Edge Client on Kindle Fire products, F5 is helping businesses secure personal devices connecting to the corporate network and helping end users be more productive, so it's perfect for BYOD deployments. The BIG-IP® Edge Client™ for all Android 4.x (Ice Cream Sandwich) or later devices secures and accelerates mobile device access to enterprise networks and applications using SSL VPN and optimization technologies. Access is provided as part of an enterprise deployment of F5 BIG-IP® Access Policy Manager™, Edge Gateway™, or FirePass™ SSL-VPN solutions.

BIG-IP® Edge Client™ for all Android 4.x (Ice Cream Sandwich) Devices Features:
Provides accelerated mobile access when used with F5 BIG-IP® Edge Gateway
Automatically roams between networks to stay connected on the go
Full Layer 3 network access to all your enterprise applications and files
Supports multi-factor authentication with client certificate
You can use a custom URL scheme to create Edge Client configurations, start and stop Edge Client

BEFORE YOU DOWNLOAD OR USE THIS APPLICATION YOU MUST AGREE TO THE EULA HERE: http://www.f5.com/apps/android-help-portal/eula.html
BEFORE YOU CONTACT F5 SUPPORT, PLEASE SEE: http://support.f5.com/kb/en-us/solutions/public/2000/600/sol2633.html

If you have an iOS device, you can get the F5 BIG-IP Edge Client for Apple iOS, which supports the iPhone, iPad and iPod Touch. We are also working on a Windows 8 client, which will be ready for the Win8 general availability.
ps

Resources
F5 BIG-IP Edge Client
Samsung F5 BIG-IP Edge Client
Rooted F5 BIG-IP Edge Client
F5 BIG-IP Edge Portal for Apple iOS
F5 BIG-IP Edge Client for Apple iOS
F5 BIG-IP Edge apps for Android
Securing iPhone and iPad Access to Corporate Web Applications – F5 Technical Brief
Audio Tech Brief - Secure iPhone Access to Corporate Web Applications
iDo Declare: iPhone with BIG-IP

Technorati Tags: F5, infrastructure 2.0, integration, cloud connect, Pete Silva, security, business, education, technology, application delivery, ipad, cloud, context-aware, iPhone, web, internet, hardware, audio, whitepaper, apple, iTunes

Android Encrypted Databases
The Android development community, as might be expected, is a pretty vibrant community with a lot of great contributors helping people out. Since Android is largely based upon Java, there is a lot of skill reusability between the Java client dev community and the Android dev community. As I mentioned before, encryption as a security topic is perhaps the weakest link in that community at this time. That may be understandable, but since that phone/tablet could end up in someone else's hands much more easily than your desktop or even laptop, it is something that needs a lot more attention from business developers. When I set out to write my first complex app for Android, I determined to report back to you from time to time about what needed better explanation or more intuitive solutions. Much has been done in the realm of "making it easier," except for security topics, which still rank pretty low on the priority list. So using encrypted SQLite databases is the topic of this post. If you think it's taking an inordinate amount of time for me to complete this app, consider that I'm doing it outside of work. This blog was written during work hours, but all of the rest of the work is squeezed into two hours a night on the nights I'm able to dedicate time. Which is far from every night. For those of you who are not developers, here's the synopsis so you don't have to paw through code with us: It's not well documented, but it's possible, with some caveats. I wouldn't use this method for large databases that need indexes over them, but for securing critical data it works just fine. At the end I propose a far better solution that is outside the purview of app developers and would pretty much have to be implemented by the SQLite team. Okay, only developers left? Good. In my research, there were very few useful suggestions for designing secure databases. They fall into three categories:

1. Use the NDK to write a variant of SQLite that encrypts at the file level.
For most Android developers this isn't an option, and I'm guessing the SQLite team wouldn't be thrilled about you mucking about with their database – it serves a lot more apps than yours.

2. Encrypt the entire SD card through the OS and then store the DB there. This one works, but it slows down the entire tablet/phone because you've now (again) mucked with resources used by other apps. I will caveat that if you can get your users to do this, it is the currently available solution that allows indices over encrypted data.

3. Use one of several early-beta DB encryption tools. I was uncomfortable doing this with production systems. You may feel differently, particularly after some of them have matured.

I didn't like any of these options, so I did what we've had to do in the past when a piece of data was so dangerous in the wrong hands it needed encrypting: I wrote an interface to the DB that encrypts and decrypts as data is inserted and removed. In Android, the only oddity you won't find in other Java environments (or can more easily get around in other Java environments) is filling list boxes from the database. For that I had to write a custom provider that took care of on-the-fly decryption and insertion into the list. My solution follows. There is a large variety of ways you could solve this problem in Java; this is the route I took because I don't have a lot of rows for any given table, and the data does not need to be indexed. If either of these items is untrue for your implementation, you'll either have to modify this implementation or find an alternate solution. So first, the encryption handler. Note that in this sample, I chose to encode encrypted arrays of bytes as Strings. I do not guarantee this will work for your scenario, and suggest you keep them as arrays of bytes until after decryption.
Also note that this sample was built from a working one by obfuscating what the actual source did and making some modifications to simplify the example. It was not tested after the final round of simplification, but should be correct throughout.

package com.company.monitor;

import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import android.util.Base64;

public class DBEncryptor {
    private static byte[] key;
    private static String cypherType = "AES";

    public DBEncryptor(String localPass) {
        // save the encoded key for future use
        // - note that this keeps it in memory, and is not strictly safe
        key = encode(localPass.getBytes()).getBytes();
        String keyCopy = new String(key);
        while (keyCopy.length() < 16)
            keyCopy = keyCopy + keyCopy;
        byte keyA[] = keyCopy.getBytes();
        if (keyA.length > 16) {
            // trim the key down to the 16 bytes AES expects
            key = new byte[16];
            System.arraycopy(keyA, 0, key, 0, 16);
        }
    }

    public String encode(byte[] s) {
        return Base64.encodeToString(s, Base64.URL_SAFE);
    }

    public byte[] decode(byte[] s) {
        return Base64.decode(s, Base64.URL_SAFE);
    }

    public byte[] getKey() {
        // return a copy of the key.
        return key.clone();
    }

    public String encrypt(String toEncrypt) throws Exception {
        // Create your Secret Key Spec, which defines the key transformations
        SecretKeySpec skeySpec = new SecretKeySpec(key, cypherType);
        // Get the cipher
        Cipher cipher = Cipher.getInstance(cypherType);
        // Initialize the cipher
        cipher.init(Cipher.ENCRYPT_MODE, skeySpec);
        // Encrypt the string into bytes
        byte[] encryptedBytes = cipher.doFinal(toEncrypt.getBytes());
        // Convert the encrypted bytes back into a string
        String encrypted = encode(encryptedBytes);
        return encrypted;
    }

    public String decrypt(String encryptedText) throws Exception {
        // Get the secret key spec
        SecretKeySpec skeySpec = new SecretKeySpec(key, cypherType);
        // create an AES Cipher
        Cipher cipher = Cipher.getInstance(cypherType);
        // Initialize it for decryption
        cipher.init(Cipher.DECRYPT_MODE, skeySpec);
        // Get the decoded bytes
        byte[] toDecrypt = decode(encryptedText.getBytes());
        // And finally, do the decryption.
        byte[] clearText = cipher.doFinal(toDecrypt);
        return new String(clearText);
    }
}

So what we are essentially doing is base-64 encoding the string to be encrypted, and then encrypting the base-64 value using standard Java crypto classes. We simply reverse the process to decrypt a string. Note that this class is also useful if you're storing values in the Properties file and wish them to be encrypted, since it simply operates on strings. The value you pass in to create the key needs to be something that is unique to the user or tablet. When it comes down to it, this is your password, and should be treated as such (hence why I changed the parameter name to localPass). For seasoned Java developers, there's nothing new on Android at this juncture; we're just encrypting and decrypting data. Next it does leave the realm of other Java platforms, because the database is utilizing SQLite, which is not generally what you're writing Java to outside of Android. Bear with me while we go over this class. The SQLite database class follows.
Of course this would need heavy modification to work with your database, but the skeleton is here. Note that not all fields have to be encrypted. You can mix and match, no problem at all. That is one of the things I like about this solution: if I need an index for any reason, I can create an unencrypted field of a type other than blob and index on it.

package com.company.monitor;

import android.content.ContentValues;
import android.content.Context;
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteDatabase.CursorFactory;
import android.database.sqlite.SQLiteOpenHelper;

public class DBManagerNames extends SQLiteOpenHelper {
    public static final String TABLE_NAME = "Map";
    public static final String COLUMN_ID = "_id";
    public static final String COLUMN_LOCAL = "Local";
    public static final String COLUMN_WORLD = "World";

    private static int indexId = 0;
    private static int indexLocal = 1;
    private static int indexWorld = 2;

    private static final String DATABASE_NAME = "Mappings.db";
    private static final int DATABASE_VERSION = 1;

    // SQL statement to create the DB
    private static final String DATABASE_CREATE = "create table " + TABLE_NAME + "("
            + COLUMN_ID + " integer primary key autoincrement, "
            + COLUMN_LOCAL + " BLOB not null, "
            + COLUMN_WORLD + " BLOB not null);";

    public DBManagerNames(Context context, CursorFactory factory) {
        super(context, DATABASE_NAME, factory, DATABASE_VERSION);
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        db.execSQL(DATABASE_CREATE);
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        // TODO Auto-generated method stub
        // Yeah, this isn't implemented in production yet either.
        // It's low on the list, but definitely "on the list"
    }

    // Assumes DBEncryptor was used to convert the fields of name before calling insert
    public void insertToDB(DBNameMap name) {
        ContentValues cv = new ContentValues();
        cv.put(COLUMN_LOCAL, name.getName().getBytes());
        cv.put(COLUMN_WORLD, name.getOtherName().getBytes());
        getWritableDatabase().insert(TABLE_NAME, null, cv);
    }

    // returns the encrypted values to be manipulated with the decryptor.
    public DBNameMap readFromDB(Integer index) {
        SQLiteDatabase db = getReadableDatabase();
        DBNameMap hnm = new DBNameMap();
        Cursor cur = null;
        try {
            cur = db.query(TABLE_NAME, null, "_id='" + index.toString() + "'", null, null, null, COLUMN_ID);
            // cursors consistently return before the first element. Move to the first.
            cur.moveToFirst();
            byte[] name = cur.getBlob(indexLocal);
            byte[] othername = cur.getBlob(indexWorld);
            hnm = new DBNameMap(new String(name), new String(othername), false);
        } catch (Exception e) {
            System.out.println(e.toString());
            // Do nothing - we want to return the empty host name map.
        }
        return hnm;
    }

    // NOTE: This routine assumes "String name" is the encrypted version of the string.
    public DBNameMap getFromDBByName(String name) {
        SQLiteDatabase db = getReadableDatabase();
        Cursor cur = null;
        String check = null;
        try {
            // Note - the production version of this routine actually uses the "where" field to get the correct
            // element instead of looping the table. This is here for your debugging use.
            cur = db.query(TABLE_NAME, null, null, null, null, null, null);
            for (cur.moveToFirst(); (!cur.isLast()); cur.moveToNext()) {
                check = new String(cur.getBlob(indexLocal));
                if (check.equals(name))
                    return new DBNameMap(check, new String(cur.getBlob(indexWorld)), false);
            }
            if (cur.isLast())
                return new DBNameMap();
            return new DBNameMap(cur.getString(indexLocal), cur.getString(indexWorld), false);
        } catch (Exception e) {
            System.out.println(e.toString());
            return new DBNameMap();
        }
    }

    // used by our list adapter - coming next in the blog.
    public Cursor getCursor() {
        try {
            return getReadableDatabase().query(TABLE_NAME, null, null, null, null, null, null);
        } catch (Exception e) {
            System.out.println(e.toString());
            return null;
        }
    }

    // This is used in our list adapter for mapping to fields.
    public String[] listColumns() {
        return new String[] {COLUMN_LOCAL};
    }
}

I am not including the DBNameMap class, as it is a simple container that has two string fields and maps one name to another. Finally, we have the list provider. Android requires that you populate lists with a provider, and has several base ones to work with. The problem with the SimpleCursorAdapter is that it assumes an unencrypted database, and we just invested a ton of time making the DB encrypted. There are several possible solutions to this problem, and I present the one I chose here. I extended ResourceCursorAdapter and implemented decryption right in the routines, leaving not much to do in the list population section of my activity but to assign the correct adapter.

package com.company.monitor;

import android.content.Context;
import android.database.Cursor;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.ResourceCursorAdapter;
import android.widget.TextView;

public class EncryptedNameAdapter extends ResourceCursorAdapter {
    private String pw;

    public EncryptedNameAdapter(Context context, int layout, Cursor c, boolean autoRequery) {
        super(context, layout, c, autoRequery);
    }

    public EncryptedNameAdapter(Context context, int layout, Cursor c, int flags) {
        super(context, layout, c, flags);
    }

    // This class must know what the encryption key is for the DB before filling the list,
    // so this call must be made before the list is populated. The first call after the constructor works.
    public void setPW(String pww) {
        pw = pww;
    }

    @Override
    public View newView(Context context, Cursor cur, ViewGroup parent) {
        LayoutInflater li = (LayoutInflater) context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
        return li.inflate(R.layout.my_list_entry, parent, false);
    }

    @Override
    public void bindView(View arg0, Context arg1, Cursor arg2) {
        // Get an encryptor/decryptor for our data.
        DBEncryptor enc = new DBEncryptor(pw);
        // Get the TextView we're placing the data into.
        TextView tvLocal = (TextView) arg0.findViewById(R.id.list_entry_name);
        // Get the bytes from the cursor
        byte[] bLocal = arg2.getBlob(arg2.getColumnIndex(DBManagerNames.COLUMN_LOCAL));
        // Convert bytes to a string
        String local = new String(bLocal);
        try {
            // decrypt the string
            local = enc.decrypt(local);
        } catch (Exception e) {
            System.out.println(e.toString());
            // local holds the encrypted version at this point, fix it.
            // We'll return an empty string for simplicity
            local = new String();
        }
        tvLocal.setText(local);
    }
}

The EncryptedNameAdapter can be set as the source for any listbox just like most examples set an ArrayAdapter as the source. Of course, it helps if you've put some data in the database first. That's it for this time. There's a lot more going on with this project, and I'll present my solution for SSL certificate verification some time in the next couple of weeks, but for now if you need to encrypt some fields of a database, this is one way to get it done. Ping me on any of the social media outlets or here in the comments if you know of a more elegant or less resource-intensive solution; I'm always up for learning more. And please, if you find an error, it was likely introduced in the transition to something I was willing to throw out here publicly, but let me know so others don't have problems.
I've done my best not to introduce any, but I always get a bit paranoid if I changed the code after my last debug session, and I did, to simplify and sanitize.

F5 Friday: Secure Data in Flight
#bigdata #infosec Fat apps combined with SSL Everywhere strategies suggest a need for more powerful processing in the application delivery tier.

According to Netcraft, who tracks these kinds of things, SSL usage doubled between 2008 and 2011. That's a good thing, as it indicates an upswing in adherence to security best practices that say "SSL Everywhere" just makes good sense. The downside is overhead, which, despite improvements in processing power and support for specific cryptographic processing in hardware, still exists. How much overhead depends on the size of the data and the specific cryptographic algorithms chosen. SSL is one of those protocols whose overhead and performance impact vary with the size of the data. With data less than 32KB, overhead is primarily incurred during session negotiation. After 32KB, bulk encryption becomes the issue. The problem is that a server is likely going to feel both, because it has to negotiate the session, and the average response size for web applications today is well above the 32KB threshold, with most pages serving up 41KB in HTML alone, not counting scripts, images, and other objects. It turns out that about 70% of the total processing time of an HTTPS transaction is spent in SSL processing. As a result, a more detailed understanding of the key overheads within SSL processing was required. By presenting a detailed description of the anatomy of SSL processing, we showed that the major overhead incurred during SSL processing lies in the session negotiation phase when small amount of data are transferred (as in banking transactions). On the other hand, when the data exchanged in the session crosses over 32K bytes, the bulk data encryption phase becomes important.
-- Anatomy and Performance of SSL Processing [pdf]

An often overlooked benefit of improvements in processing power is that just as they help improve processing of SSL on servers, so too do they help boost the processing of SSL on intermediate devices such as application delivery controllers. On such devices, where complete control over the network and operating system stacks is possible, even greater performance benefits are derived from advances in processing power. Those benefits are also seen in other processing on such devices, such as compression and intelligent traffic management. Another benefit of more processing power and improvements in core bus architectures is the ability to do more with less, which enables consolidation of application delivery services onto a shared infrastructure platform like BIG-IP. From traffic management to acceleration, from network to application firewall services, from DNS to secure remote access – hardware improvements from the processor to the NIC to the switching backplane offer increased performance as well as increased utilization across multiple functions, which, in and of itself, improves performance by eliminating multiple hops in the application delivery chain. Each hop removed improves performance because the latency associated with managing flows and connections is eliminated.

Introducing BIG-IP 4200v

The BIG-IP 4200v hardware platform takes advantage of this, and the result is better performance with a lower power footprint (80 PLUS Gold certified power supplies) that improves security across all managed applications. Consolidation further reduces power consumption by eliminating redundant services and establishes a strategic point of control through which multiple initiatives can be realized, including unified secure remote access, an enhanced security posture, and increased server utilization by leveraging offload services at the application delivery tier.
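As an aside on the 32KB crossover the quoted paper describes, a toy cost model makes it concrete. The constants here are illustrative assumptions, picked only so that handshake and bulk-encryption costs break even at 32KB – they are not measured values:

```java
// Toy model of where an HTTPS transaction spends its SSL time.
// HANDSHAKE_MS and BULK_MS_PER_KB are illustrative assumptions, chosen
// so the two costs break even at the 32KB threshold cited in the paper.
public class SslCostModel {
    static final double HANDSHAKE_MS = 4.0;                    // per-session negotiation
    static final double BULK_MS_PER_KB = HANDSHAKE_MS / 32.0;  // per-KB bulk encryption

    // Fraction of total SSL time spent negotiating the session for a
    // response of the given size in KB.
    static double handshakeFraction(double responseKb) {
        double bulkMs = responseKb * BULK_MS_PER_KB;
        return HANDSHAKE_MS / (HANDSHAKE_MS + bulkMs);
    }
}
```

Under these assumptions, a small banking-style response of 4KB spends roughly 90% of its SSL time in session negotiation, while a 41KB page – the average HTML size mentioned above – already spends the majority of it in bulk encryption, which is why a server delivering typical web pages feels both costs.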
A single, unified application delivery platform offers many benefits, not the least of which is visibility into all operational components: security, performance, and availability. BIG-IP 4200v supports provisioning of BIG-IP Analytics (AVR) in conjunction with other BIG-IP service modules, enabling breadth and depth of traffic management analytics across all shared services. This latest hardware platform provides mid-size enterprises and service providers with the performance and capacity required to implement more comprehensive application delivery services that address operational risk.

BIG-IP Hardware – Product Information Page
BIG-IP Hardware – Datasheet
Hardware Acceleration Critical Component for Cost-Conscious Data Centers
When Did Specialized Hardware Become a Dirty Word?
Data Center Feng Shui: SSL
Why should I care about the hardware?
F5 Friday: What’s Inside an F5?
F5 Friday: Have You Ever Played WoW without a Good Graphics Card?

Of Escalators and Network Traffic
Escalators are an interesting first-world phenomenon. While not strictly necessary anywhere, they still turn up all over in most first-world countries. The key to their popularity is, no doubt, the fact that they move traffic much more quickly than an elevator, and offer the option of walking to increase the speed to destination even more. One thing about escalators is that they’re always either going up or down, in contrast to an elevator, which changes direction with each trip. The same could be said of network traffic. It is definitely moving on the up escalator, with no signs of slackening. The increasing number of devices not just online, but accessing information both inside and outside the confines of the enterprise, has brought with it a large increase in traffic. Combine that with increases in new media both inside and outside the enterprise, and you have a spike in growth that the world may never see again. And we’re in the middle of it. Let’s just take a look at a graph of Internet usage portrayed in a bit of back-and-forth between Rob Beschizza of Boing Boing and Wired magazine. This graphic only goes to 2010, and you can clearly see that the traffic growth is phenomenal. (Side note: Mr. Beschizza’s blog entry is worth reading, as he dissects arguments that the web is dead.) As this increase impacts an organization, there is a series of steps that generally occurs on the path to Application Delivery Networking, and it’s worth recapping here (note, the order can vary):

1. An application is not performing. Application load balancing is brought in to remedy the problem. This step may be repeated, with load balancing widely deployed, before…
2. Internet connections are overloaded. Link load balancing is brought in to remedy the problem.
3. Once the enterprise side is running acceptably, it turns out that wireless devices – particularly cell phones – are slow. Application Acceleration is brought in to solve the problem.
4. Application security becomes an issue – either for purchased packages exposed to the world, or internally developed code. A web application firewall is used to solve the problem.
5. Remote backups or replication start to slow the systems as more and more data is collected. WAN Optimization is generally brought in to address the problem.
6. For storefronts and other security-enabled applications, encryption becomes a burden on CPUs – particularly in a virtualized environment. Encryption offloading is brought in to solve the problem.
7. Traffic management and access control quickly follow – addressed with management tools and SSL VPN.

That is where things generally sit right now. There are other bits, but most organizations haven’t finished going this far, so we’ll skip them for now. The problem that has even the most forward-thinking companies mostly paused here is complexity. There’s a lot going on in your application network at this point, and the pause to regain control and insight is necessary. An over-arching solution to the complexity these steps introduce – some way to control all of this burgeoning architecture from a central location – is, while not strictly necessary, a precursor to taking further advantage of the infrastructure available within the datacenter (notice that I have not discussed multi-datacenter or datacenter-to-cloud in this post). Some vendors – like F5 (just marketing here) – offer a platform that allows control of these knobs and features, while other organizations will have to look to products like Tivoli or OpenView to tie the parts together. And while we’re centralizing the management of the application infrastructure, it’s time to consider that separate datacenter or the cloud as a future location to include in the mix. Can the toolset you’re building look beyond the walls of the datacenter and meet your management and monitoring needs? Can it watch multiple cloud vendors?
What metrics will you need, and can your tools get them today, or will you need more management? All stuff to ask while taking that breather. There’s a lot of change going on, and it’s always a good idea to know where you’re going in the long run while you’re fighting fires in the short run. The cost of failing to ask these questions is limited capability to achieve goals in the future – e.g., more firefighting. And IT works hard enough; let’s not make it harder than it needs to be. And don’t hesitate to call your sales rep. They want to give you information about products and try to convince you to buy theirs; it’s what they do. While I can’t speak for other companies, if you get on the phone with an F5 SE, you’ll find that they know their stuff, and can offer help that ranges from defining future needs to meeting current ones. To you IT pros, I say: keep making the business run like they don’t know you’re there. And since they won’t generally tell you, I’ll say “thank you” for them. They have no idea how hard their life would be sans IT.

Advanced Load Balancing For Developers. The Network Dev Tool
It has been a while since I wrote an installment of Load Balancing for Developers, and I think it has been too long, but never fear, this is the grand-daddy of Load Balancing for Developers blogs, covering a useful bit of information about Application Delivery Controllers that you might want to take advantage of. For those who have joined us since my last installment, feel free to check out the entire list of blog entries (along with related blog entries) here, though I assure you that this installment, like most of the others, does not require you to have read those that went before. ZapNGo! is still a growing enterprise, now with several dozen complex applications and a high-availability architecture that spans datacenters and the cloud. The organization relies upon its web properties to generate revenue, and those properties have been going along fine with your Application Delivery Controller (ADC) architecture. Now, though, you’re seeing a need to centralize administration of a whole lot of functions. What worked fine separately for one or two applications is no longer working so well now that you have several development teams and several dozen applications, and you need to find a way to bring the growing inter-relationships under control before maintenance and hidden dependencies swamp you in a cascading mess of disruption. With maintenance taking a growing portion of your application development man-hours, and a reasonably well positioned test environment configured with a virtual ADC to mimic your production environment, all you need now is a way to cut those maintenance man-hours and reduce the amount of repetitive work required to create or update an application. Particularly update an application, because that is a constant problem, where creating is less frequent. With many of the threats that would have your ZapNGo application known as ZapNGone eliminated, now it is efficiencies you are after. And believe it or not, these too are available in an ADC.
Not all ADCs are created equal, but this discussion will stay on topics that most ADCs can handle, and I’ll mention it when I stray from generic into specific – which I will do in one case, because only one vendor supports one of the tools you can use, but all of the others should be supported by whatever ADC vendor you have, though as always, check with your vendor directly first, since I’m not an expert in the inner workings of every one. There is a lot that many organizations do for themselves, and the array of possibilities is long – from implementing load balancing in source code to security checks in the application, the boundaries of what is expected of developers are shaped by an organization, its history, and its chosen future direction. At ZapNGo, the team has implemented a virtual test environment that mirrors production as closely as possible, so that code can be implemented and tested in the way it will be used. They use an ADC for load balancing, so that they don’t have to rewrite the same code over and over, and they have a policy of utilizing a familiar subset of ADC functionality on all applications that face the public. The company is successful and growing, but as always happens to companies in that situation, the pressures upon them are changing just by virtue of their growth. There are more new people who don’t yet have intimate knowledge of the code base, network topology, security policies – whatever their area of expertise is. There are more lines of code to maintain, while new projects are being brought up at a more rapid pace and with higher priorities. (I’ve twice lived through the “Everything is high priority? Well this is highest priority!” syndrome while working in IT. Thankfully, most companies grow out of that fast when it’s pointed out that if everything is priority #1, nothing is.)
Timelines to complete projects – be they new development, bug fixes, or enhancements – are stretching longer and longer as the percentage of gurus in the company goes down and the complexity of the code and the architecture it runs on goes up. So what is a development manager to do to increase productivity? Teaming newer developers with people who’ve been around since the beginning is helping, but those seasoned developers are a smaller and smaller percentage of the workforce, while the volume of work has slowly removed them from some of the many products now under management. Adopting coding standards and standardized libraries helps increase experience portability between projects, but doesn’t do enough. Enter offloading to the ADC. Some things just don’t have to be done in code, and if they don’t have to be, at this stage in the company’s growth, IT management at ZapNGo (that’s you!) decides they won’t be. There just isn’t time for non-essential development anymore. Utilizing a policy management tool and/or an Application Firewall on the ADC can improve security without increasing the code base, for example. And that shaves hours off of maintenance projects, while standardizing on one or a few implementations that are simply selected on the ADC. Implementing Web Application Acceleration protocols on the ADC means that less in-code optimization has to occur. Performance is no longer purely the role of developers (though of course it is still a concern – no Web Application Acceleration tool can make a loop that runs for five minutes run faster); they can allow the Web Application Acceleration tool to shrink the amount of data being sent to the users’ browser. Utilizing a WAN Optimization ADC tool to improve the performance of bulk copies or backups to a remote datacenter or cloud storage… The list goes on and on.
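As a tiny illustration of the sort of work being offloaded – this is generic java.util.zip, not F5 code – compressing a typical repetitive HTML response shrinks it dramatically, and doing it at the ADC means no application cycles spent and no in-code changes:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Sketch of the compression an acceleration tier applies to response
// bodies so the application server doesn't have to.
public class CompressionOffload {
    static byte[] gzip(byte[] body) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
                gz.write(body);
            }
            return bos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    static byte[] gunzip(byte[] compressed) {
        try (GZIPInputStream gz =
                     new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            for (int n; (n = gz.read(buf)) > 0; ) {
                bos.write(buf, 0, n);  // what the browser does on receipt
            }
            return bos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Markup-heavy pages compress well because they repeat the same tags over and over; the same principle, applied at the ADC, is what lets you drop the equivalent optimization work from the application code base.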
The key is that the ADC enables a lot of opportunities for App Dev to be more responsive to the needs of the organization by moving repetitive tasks to the ADC and standardizing them. And a heaping bonus is that it also does that for Operations with a different subset of functionality, meaning one toolset gives both App Dev and Operations a bit more time out of their day for servicing important organizational needs. Some would say this is all part of DevOps, some would say it is not. I leave those discussions to others; all I care about is that it can make your apps more secure, fast, and available, while cutting down on workload. And if your ADC supports an SSL VPN, your developers can work from home when necessary. Or more likely, if your code is your IP, a subset of your developers can. Making ZapNGo more responsive, easier to maintain, and more adaptable to the changes coming next week/month/year. That’s what ADCs do. And they’re pretty darned good at it. That brings us to the one bit that I have to caveat with F5 only, and that is iApps. An iApp is a constructed configuration tool that asks a few questions and then deploys all the bits necessary to set up an ADC for a particular application. Why do I mention it here? Well, if you have dozens of applications with similar characteristics, you can create an iApp Template and use it to rapidly bring new applications or new instances of applications online. And since it is abstracted, these iApp templates can be designed such that AppDev, or even the business owner, is able to operate them. Meaning less time worrying about what network resources will be available, how they’re configured, and waiting for Operations to have time to implement them (in an advanced ADC that is being utilized to its maximum in a complex application environment, this can be hundreds of networking objects to configure – all encapsulated into a form). Less time on the project timeline, more time for the next project.
Or for the post-deployment party. One of the two. That’s it for the F5-only bit. And knowing that all of these items are standardized means fewer things to get mis-configured, and more surety that it will all work right the first time. As with all of these articles, that offers you the most important benefit… A good night’s sleep.