java
iControl Library For Java With Source
Included are the binary distributions of the iControl library for Java and Apache Axis. These releases correspond to BIG-IP versions 11.3.0 through 13.1.0:

iControl Assembly for Java 11.3.0
iControl Assembly for Java 11.4.0
iControl Assembly for Java 11.4.1
iControl Assembly for Java 11.5.0
iControl Assembly for Java 11.6.0
iControl Assembly for Java 12.0.0
iControl Assembly for Java 12.1.0
iControl Assembly for Java 13.0.0
iControl Assembly for Java 13.1.0

Also included is the source distribution of the iControl library for Java with Apache Axis. These releases correspond to BIG-IP versions through 12.1.0. The source code for this project is no longer maintained, but it remains available in our f5-icontrol-library-java repository on GitHub.

iControl Assembly Java Source 11.4.0 (older release, not available on GitHub)
iControl Assembly Java Source 11.4.1
iControl Assembly Java Source 11.5.0
iControl Assembly Java Source 11.6.0
iControl Assembly Java Source 12.1.0

SSL Trust Provider for Java
I've blogged about Self-signed Server Certificates and how they can cause havoc with client Java applications. Well, I put the call out there for someone to provide a solution, and a very slick one has arrived!

XTrustProvider.java:

/*
 * The contents of this file are subject to the "END USER LICENSE AGREEMENT FOR F5
 * Software Development Kit for iControl"; you may not use this file except in
 * compliance with the License. The License is included in the iControl
 * Software Development Kit.
 *
 * Software distributed under the License is distributed on an "AS IS"
 * basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See
 * the License for the specific language governing rights and limitations
 * under the License.
 *
 * The Original Code is iControl Code and related documentation
 * distributed by F5.
 *
 * Portions created by F5 are Copyright (C) 1996-2004 F5 Networks
 * Inc. All Rights Reserved. iControl (TM) is a registered trademark of
 * F5 Networks, Inc.
 *
 * Alternatively, the contents of this file may be used under the terms
 * of the GNU General Public License (the "GPL"), in which case the
 * provisions of GPL are applicable instead of those above. If you wish
 * to allow use of your version of this file only under the terms of the
 * GPL and not to allow others to use your version of this file under the
 * License, indicate your decision by deleting the provisions above and
 * replace them with the notice and other provisions required by the GPL.
 * If you do not delete the provisions above, a recipient may use your
 * version of this file under either the License or the GPL.
 */

import java.security.AccessController;
import java.security.InvalidAlgorithmParameterException;
import java.security.KeyStore;
import java.security.KeyStoreException;
import java.security.PrivilegedAction;
import java.security.Security;
import java.security.cert.X509Certificate;

import javax.net.ssl.ManagerFactoryParameters;
import javax.net.ssl.TrustManager;
import javax.net.ssl.TrustManagerFactorySpi;
import javax.net.ssl.X509TrustManager;

public final class XTrustProvider extends java.security.Provider {

    private final static String NAME = "XTrustJSSE";
    private final static String INFO =
        "XTrust JSSE Provider (implements trust factory with truststore validation disabled)";
    private final static double VERSION = 1.0D;

    public XTrustProvider() {
        super(NAME, VERSION, INFO);

        AccessController.doPrivileged(new PrivilegedAction() {
            public Object run() {
                put("TrustManagerFactory." + TrustManagerFactoryImpl.getAlgorithm(),
                    TrustManagerFactoryImpl.class.getName());
                return null;
            }
        });
    }

    public static void install() {
        if (Security.getProvider(NAME) == null) {
            Security.insertProviderAt(new XTrustProvider(), 2);
            Security.setProperty("ssl.TrustManagerFactory.algorithm",
                TrustManagerFactoryImpl.getAlgorithm());
        }
    }

    public final static class TrustManagerFactoryImpl extends TrustManagerFactorySpi {

        public TrustManagerFactoryImpl() { }

        public static String getAlgorithm() { return "XTrust509"; }

        protected void engineInit(KeyStore keystore) throws KeyStoreException { }

        protected void engineInit(ManagerFactoryParameters mgrparams)
                throws InvalidAlgorithmParameterException {
            throw new InvalidAlgorithmParameterException(
                XTrustProvider.NAME + " does not use ManagerFactoryParameters");
        }

        protected TrustManager[] engineGetTrustManagers() {
            return new TrustManager[] { new X509TrustManager() {
                public X509Certificate[] getAcceptedIssuers() { return null; }
                public void checkClientTrusted(X509Certificate[] certs, String authType) { }
                public void checkServerTrusted(X509Certificate[] certs, String authType) { }
            }};
        }
    }
}

Calling Application:

...
XTrustProvider.install();
...

This file is up in CodeShare for those who are cut+paste challenged B-). Hat tip to Exnihilo for posting this solution!

-Joe

[Listening to: Ob-La-Di, Ob-La-Da - The Beatles - The White Album (03:08)]

Android Encrypted Databases
The Android development community, as might be expected, is a pretty vibrant community with a lot of great contributors helping people out. Since Android is largely based upon Java, there is a lot of skills reusability between the Java client dev community and the Android dev community. As I mentioned before, encryption as a security topic is perhaps the weakest link in that community at this time. Perhaps, but since that phone/tablet could end up in someone else's hands much more easily than your desktop or even laptop, it is something that needs a lot more attention from business developers.

When I set out to write my first complex app for Android, I determined to report back to you from time to time about what needed better explanation or more intuitive solutions. Much has been done in the realm of "making it easier", except for security topics, which still rank pretty low on the priority list. So using encrypted SQLite databases is the topic of this post. If you think it's taking an inordinate amount of time for me to complete this app, consider that I'm doing it outside of work. This blog was written during work hours, but all of the rest of the work is squeezed into two hours a night on the nights I'm able to dedicate time. Which is far from every night.

For those of you who are not developers, here's the synopsis so you don't have to paw through code with us: it's not well documented, but it's possible, with some caveats. I wouldn't use this method for large databases that need indexes over them, but for securing critical data it works just fine. At the end I propose a far better solution that is outside the purview of app developers and would pretty much have to be implemented by the SQLite team.

Okay, only developers left? Good. In my research, there were very few useful suggestions for designing secure databases. They fall into three categories:

1. Use the NDK to write a variant of SQLite that encrypts at the file level. For most Android developers this isn't an option, and I'm guessing the SQLite team wouldn't be thrilled about you mucking about with their database – it serves a lot more apps than yours.

2. Encrypt the entire SD card through the OS and then store the DB there. This one works, but it slows the function of the entire tablet/phone down because you've now (again) mucked with resources used by other apps. I will caveat that if you can get your users to do this, it is the currently available solution that allows indices over encrypted data.

3. Use one of several early-beta DB encryption tools. I was uncomfortable doing this with production systems. You may feel differently, particularly after some of them have matured.

I didn't like any of these options, so I did what we've had to do in the past when a piece of data was so dangerous in the wrong hands that it needed encrypting: I wrote an interface to the DB that encrypts and decrypts as data is inserted and removed. In Android, the only oddity you won't find in other Java environments – or that you can more easily get around in other Java environments – is filling list boxes from the database. For that I had to write a custom provider that took care of on-the-fly decryption and insertion into the list.

My solution follows. There are a large variety of ways you could solve this problem in Java; this one is where I went because:

- I don't have a lot of rows for any given table.
- The data does not need to be indexed.
If either of these items is untrue for your implementation, you'll either have to modify this implementation or find an alternate solution.

So first the encryption handler. Note that in this sample, I chose to encode encrypted arrays of bytes as Strings. I do not guarantee this will work for your scenario, and I suggest you keep them as arrays of bytes until after decryption. Also note that this sample was built from a working one by obfuscating what the actual source did and making some modifications to simplify the example. It was not tested after the final round of simplification, but should be correct throughout.

package com.company.monitor;

import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

import android.util.Base64;

public class DBEncryptor {

    private static byte[] key;
    // Algorithm used for the SecretKeySpec and Cipher below.
    private static String cypherType = "AES";

    public DBEncryptor(String localPass) {
        // save the encoded key for future use
        // - note that this keeps it in memory, and is not strictly safe
        key = encode(localPass.getBytes()).getBytes();
        String keyCopy = new String(key);
        while (keyCopy.length() < 16)
            keyCopy = keyCopy + keyCopy;

        // Take exactly 16 bytes so we end up with a valid 128-bit AES key.
        byte keyA[] = keyCopy.getBytes();
        if (keyA.length >= 16) {
            key = new byte[16];
            System.arraycopy(keyA, 0, key, 0, 16);
        }
    }

    public String encode(byte[] s) {
        return Base64.encodeToString(s, Base64.URL_SAFE);
    }

    public byte[] decode(byte[] s) {
        return Base64.decode(s, Base64.URL_SAFE);
    }

    public byte[] getKey() {
        // return a copy of the key.
        return key.clone();
    }

    public String encrypt(String toEncrypt) throws Exception {
        // Create your Secret Key Spec, which defines the key transformations
        SecretKeySpec skeySpec = new SecretKeySpec(key, cypherType);

        // Get the cipher
        Cipher cipher = Cipher.getInstance(cypherType);

        // Initialize the cipher
        cipher.init(Cipher.ENCRYPT_MODE, skeySpec);

        // Encrypt the string into bytes
        byte[] encryptedBytes = cipher.doFinal(toEncrypt.getBytes());

        // Convert the encrypted bytes back into a string
        String encrypted = encode(encryptedBytes);

        return encrypted;
    }

    public String decrypt(String encryptedText) throws Exception {
        // Get the secret key spec
        SecretKeySpec skeySpec = new SecretKeySpec(key, cypherType);

        // create an AES Cipher
        Cipher cipher = Cipher.getInstance(cypherType);

        // Initialize it for decryption
        cipher.init(Cipher.DECRYPT_MODE, skeySpec);

        // Get the decoded bytes
        byte[] toDecrypt = decode(encryptedText.getBytes());

        // And finally, do the decryption.
        byte[] clearText = cipher.doFinal(toDecrypt);

        return new String(clearText);
    }
}

So what we are essentially doing is encrypting the string with standard Java crypto classes and then base-64 encoding the encrypted bytes so they can be stored as a String; we simply reverse the process to decrypt. Note that this class is also useful if you're storing values in the Properties file and wish them to be encrypted, since it simply operates on strings. The value you pass in to create the key needs to be something that is unique to the user or tablet. When it comes down to it, this is your password, and it should be treated as such (hence why I changed the parameter name to localPass).

For seasoned Java developers, there's nothing new on Android at this juncture. We're just encrypting and decrypting data. Next it does leave the realm of other Java platforms, because the database is utilizing SQLite, which is not generally what you're writing Java to outside of Android. Bear with me while we go over this class. The SQLite database class follows. Of course this would need heavy modification to work with your database, but the skeleton is here.
Note that not all fields have to be encrypted. You can mix and match, no problems at all. That is one of the things I like about this solution: if I need an index for any reason, I can create an unencrypted field of a type other than blob and index on it.

package com.company.monitor;

import android.content.ContentValues;
import android.content.Context;
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteDatabase.CursorFactory;
import android.database.sqlite.SQLiteOpenHelper;

public class DBManagernames extends SQLiteOpenHelper {

    public static final String TABLE_NAME = "Map";
    public static final String COLUMN_ID = "_id";
    public static final String COLUMN_LOCAL = "Local";
    public static final String COLUMN_WORLD = "World";

    private static int indexId = 0;
    private static int indexLocal = 1;
    private static int indexWorld = 2;

    private static final String DATABASE_NAME = "Mappings.db";
    private static final int DATABASE_VERSION = 1;

    // SQL statement to create the DB
    private static final String DATABASE_CREATE = "create table " + TABLE_NAME + "("
        + COLUMN_ID + " integer primary key autoincrement, "
        + COLUMN_LOCAL + " BLOB not null, "
        + COLUMN_WORLD + " BLOB not null);";

    public DBManagernames(Context context, CursorFactory factory) {
        super(context, DATABASE_NAME, factory, DATABASE_VERSION);
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        db.execSQL(DATABASE_CREATE);
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        // TODO Auto-generated method stub
        // Yeah, this isn't implemented in production yet either. It's low on the list, but definitely "on the list"
    }

    // Assumes DBEncryptor was used to convert the fields of name before calling insert
    public void insertToDB(DBNameMap name) {
        ContentValues cv = new ContentValues();
        cv.put(COLUMN_LOCAL, name.getName().getBytes());
        cv.put(COLUMN_WORLD, name.getOtherName().getBytes());
        getWritableDatabase().insert(TABLE_NAME, null, cv);
    }

    // returns the encrypted values to be manipulated with the decryptor.
    public DBNameMap readFromDB(Integer index) {
        SQLiteDatabase db = getReadableDatabase();
        DBNameMap hnm = new DBNameMap();
        Cursor cur = null;
        try {
            cur = db.query(TABLE_NAME, null, "_id='" + index.toString() + "'",
                null, null, null, COLUMN_ID);
            // cursors consistently return before the first element. Move to the first.
            cur.moveToFirst();
            byte[] name = cur.getBlob(indexLocal);
            byte[] othername = cur.getBlob(indexWorld);
            hnm = new DBNameMap(new String(name), new String(othername), false);
        } catch (Exception e) {
            System.out.println(e.toString());
            // Do nothing - we want to return the empty host name map.
        }
        return hnm;
    }

    // NOTE: This routine assumes "String name" is the encrypted version of the string.
    public DBNameMap getFromDBByName(String name) {
        SQLiteDatabase db = getReadableDatabase();
        Cursor cur = null;
        String check = null;
        try {
            // Note - the production version of this routine actually uses the "where" field to get the
            // correct element instead of looping the table. This is here for your debugging use.
            cur = db.query(TABLE_NAME, null, null, null, null, null, null);
            for (cur.moveToFirst(); (!cur.isLast()); cur.moveToNext()) {
                check = new String(cur.getBlob(indexLocal));
                if (check.equals(name))
                    return new DBNameMap(check, new String(cur.getBlob(indexWorld)), false);
            }
            if (cur.isLast())
                return new DBNameMap();
            return new DBNameMap(cur.getString(indexLocal), cur.getString(indexWorld), false);
        } catch (Exception e) {
            System.out.println(e.toString());
            return new DBNameMap();
        }
    }

    // used by our list adapter - coming next in the blog.
    public Cursor getCursor() {
        try {
            return getReadableDatabase().query(TABLE_NAME, null, null, null, null, null, null);
        } catch (Exception e) {
            System.out.println(e.toString());
            return null;
        }
    }

    // This is used in our list adapter for mapping to fields.
    public String[] listColumns() {
        return new String[] { COLUMN_LOCAL };
    }
}

I am not including the DBNameMap class, as it is a simple container that has two String fields and maps one name to another.

Finally, we have the list provider. Android requires that you populate lists with a provider, and has several base ones to work with. The problem with the SimpleCursorAdapter is that it assumes an unencrypted database, and we just invested a ton of time making the DB encrypted. There are several possible solutions to this problem, and I present the one I chose here. I extended ResourceCursorAdapter and implemented decryption right in the routines, leaving not much to do in the list population section of my activity but to assign the correct adapter.

package com.company.monitor;

import android.content.Context;
import android.database.Cursor;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.ResourceCursorAdapter;
import android.widget.TextView;

public class EncryptedNameAdapter extends ResourceCursorAdapter {

    private String pw;

    public EncryptedNameAdapter(Context context, int layout, Cursor c, boolean autoRequery) {
        super(context, layout, c, autoRequery);
    }

    public EncryptedNameAdapter(Context context, int layout, Cursor c, int flags) {
        super(context, layout, c, flags);
    }

    // This class must know what the encryption key is for the DB before filling the list,
    // so this call must be made before the list is populated. The first call after the constructor works.
    public void setPW(String pww) {
        pw = pww;
    }

    @Override
    public View newView(Context context, Cursor cur, ViewGroup parent) {
        LayoutInflater li = (LayoutInflater) context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
        return li.inflate(R.layout.my_list_entry, parent, false);
    }

    @Override
    public void bindView(View arg0, Context arg1, Cursor arg2) {
        // Get an encryptor/decryptor for our data.
        DBEncryptor enc = new DBEncryptor(pw);

        // Get the TextView we're placing the data into.
        TextView tvLocal = (TextView) arg0.findViewById(R.id.list_entry_name);

        // Get the bytes from the cursor
        byte[] bLocal = arg2.getBlob(arg2.getColumnIndex(DBManagernames.COLUMN_LOCAL));

        // Convert bytes to a string
        String local = new String(bLocal);
        try {
            // decrypt the string
            local = enc.decrypt(local);
        } catch (Exception e) {
            System.out.println(e.toString());
            // local holds the encrypted version at this point, fix it.
            // We'll return an empty string for simplicity
            local = new String();
        }
        tvLocal.setText(local);
    }
}

The EncryptedNameAdapter can be set as the source for any listbox just like most examples set an ArrayAdapter as the source; a bare-bones wiring sketch follows below. Of course, it helps if you've put some data in the database first. That's it for this time.
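Here is that wiring sketch: a minimal Activity that builds the database helper, hands its cursor to the adapter, and sets the passphrase before the list is drawn. The activity name, the R.layout.name_list and R.id.name_list identifiers, and the passphrase plumbing are assumptions for illustration only, not part of the project described above.

import android.app.Activity;
import android.os.Bundle;
import android.widget.ListView;

public class NameListActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Hypothetical layout containing a ListView with the id name_list.
        setContentView(R.layout.name_list);

        // Open (or create) the database and wrap its cursor in the decrypting adapter.
        DBManagernames db = new DBManagernames(this, null);
        EncryptedNameAdapter adapter =
                new EncryptedNameAdapter(this, R.layout.my_list_entry, db.getCursor(), false);

        // The adapter needs the passphrase before the list is populated.
        adapter.setPW(getUserPassphrase());

        ListView list = (ListView) findViewById(R.id.name_list);
        list.setAdapter(adapter);
    }

    // Stand-in for however the app actually collects the user's passphrase;
    // hard-coding a real key here would defeat the purpose of encrypting the data.
    private String getUserPassphrase() {
        return "change-me";
    }
}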
There's a lot more going on with this project, and I'll present my solution for SSL certificate verification some time in the next couple of weeks, but for now if you need to encrypt some fields of a database, this is one way to get it done. Ping me on any of the social media outlets or here in the comments if you know of a more elegant or less resource-intensive solution; I'm always up for learning more. And please, if you find an error, it was likely introduced in the transition to something I was willing to throw out here publicly, but let me know so others don't have problems. I've done my best not to introduce any, but I always get a bit paranoid if I changed it after my last debug session – and I did, to simplify and sanitize.

2.5 bad ways to implement a server load balancing architecture
I'm in a bit of a mood after reading a Javaworld article on server load balancing that presents some fairly poor ideas on architectural implementations. It's not the concepts that are necessarily wrong; they will work. It's the architectures offered as a method of load balancing that made me do a double-take and say "What?"

I started reading this article because it was part 2 of a series on load balancing and this installment focused on application layer load balancing. You know, layer 7 load balancing. Something we at F5 just might know a thing or two about. But you never know where and from whom you'll learn something new, so I was eager to dive in and learn something. I learned something alright. I learned a couple of bad ways to implement a server load balancing architecture.

TWO LOAD BALANCERS?

The first indication I wasn't going to be pleased with these suggestions came with the description of a "popular" load-balancing architecture that included two load balancers: one for the transport layer (layer 4) and another for the application layer (layer 7).

In contrast to low-level load balancing solutions, application-level server load balancing operates with application knowledge. One popular load-balancing architecture, shown in Figure 1, includes both an application-level load balancer and a transport-level load balancer.

Even the most rudimentary, entry-level load balancers on the market today - software and hardware, free and commercial - can handle both transport and application layer load balancing. There is absolutely no need to deploy two separate load balancers to handle two different layers in the stack. This is a poor architecture, introducing unnecessary management and architectural complexity as well as additional points of failure into the network architecture. It's bad for performance because it introduces additional hops and points of inspection through which application messages must flow. To give the author credit, he does recognize this and offers up a second option to counter the negative impact of the "additional network hops."

One way to avoid additional network hops is to make use of the HTTP redirect directive. With the help of the redirect directive, the server reroutes a client to another location. Instead of returning the requested object, the server returns a redirect response such as 303.

I found it interesting that the author cited an HTTP response code of 303, which is rarely returned in conjunction with redirects. More often a 302 is used. But it is valid, if not a bit odd. That's not the real problem with this one, anyway. The author claims "The HTTP redirect approach has two weaknesses." That's true, it has two weaknesses - and a few more as well. He correctly identifies that this approach does nothing for availability and exposes the infrastructure, which is a security risk. But he fails to mention that using HTTP redirects introduces additional latency because it requires additional requests that must be made by the client (increasing network traffic), and that it is further incapable of providing any other advanced functionality at the load balancing point because it essentially turns the architecture into a variation of a DSR (direct server return) configuration.

THAT'S ONLY 2 BAD WAYS, WHERE'S THE .5?

The half bad way comes from the fact that the solutions are presented as Java-based solutions. They will work in the sense that they do what the author says they'll do, but they won't scale.
Consider this: the reason you're implementing load balancing is to scale, because one server can't handle the load. A solution that involves putting a single server - with the same limitations on connections and session tables - in front of two servers with essentially twice the capacity of the load balancer gains you nothing. The single server may be able to handle 1.5 times (if you're lucky) what the servers serving applications may be capable of, because the burden of processing application requests has been offloaded to the application servers, but you're still limited in the number of concurrent users and connections you can handle because that's limited by the platform on which you are deploying the solution.

An application server acting as a cluster controller or load balancer simply doesn't scale as well as a purpose-built load balancing solution, because it isn't optimized to be a load balancer and its resource management is limited to that of a typical application server. That's true whether you're using a software solution like Apache mod_proxy_balancer or a hardware solution. So if you're implementing this type of solution to scale an application, you aren't going to see the benefits you think you are, and in fact you may see a degradation of performance due to the introduction of additional hops, additional processing, and poorly designed network architectures.

I'm all for load balancing, obviously, but I'm also all for doing it the right way. And these solutions are just not the right way to implement a load balancing solution unless you're trying to learn the concepts involved or are in a computer science class in college. If you're going to do something, do it right. And doing it right means taking into consideration the goals of the solution you're trying to implement. The goals of a load balancing solution are to provide availability and scale, neither of which the solutions presented in this article will truly achieve.

How to instrument your Java EE applications for a virtualized environment
If you're excited about the automation capabilities of cloud computing and virtualization, you are going to love this solution. In a virtualized environment where applications can ostensibly be popping up all over, and applications are no longer tied to specific servers, there is a need to automatically manage these application instances in a high-availability (load balanced) environment. What you need is the ability to automagically add and remove application instances from the application delivery controller (load balancer) so you don't have to worry about tying those applications down, which could reduce the benefits typically associated with virtualization.

If you aren't going to a fully virtualized and automated data center, you might be happy to know that you can still reap the benefits of this automated solution. Not only is this solution perfect for a virtualized environment, it's also just as great for a non-virtualized environment for automating availability of applications. Truth be told, the application and the solution don't care (nor should they) whether they're running in a virtual image or not; they merely "are". In a nutshell, when an application initializes, it adds itself to the appropriate application pool on the application delivery controller. When the application is destroyed, it removes itself. This means no matter where the application instance is living - in a virtual image, in a different servlet container, on a new server - it will automatically be "discovered" and immediately become part of the high availability pool of servers.

iControl, F5's service-enabled API providing configuration and management control of its solutions, can be used from within your Java EE application to enable automation of pool management on a BIG-IP application delivery controller. This solution uses the Java Servlet 2.3 ServletContextListener interface. The ServletContextListener interface can be used to listen and react to a variety of servlet events, including application lifecycle. In order to automate the addition and removal of an application from the appropriate BIG-IP pool, we'll be listening for two events: contextInitialized and contextDestroyed. In the former, we'll add the application to the appropriate pool, and in the latter, we'll remove it automatically.

This proactive approach to managing applications managed by BIG-IP LTM (Local Traffic Manager) also ensures that requests are not caught in between a monitor's health check interval, which can result in either an error or a second connection as part of a retry event within an iRule. This improves performance by ensuring that only active applications receive requests, and reduces connection attempts, which can improve the efficiency of high-volume applications. This is also an excellent method of automating availability for applications for which synthetic monitors are problematic.

You can read about the full solution with code in this article. Yes, they actually let me code from time to time. Happy coding!
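To make the lifecycle hooks concrete, here is a skeletal sketch of such a listener. The listener interface and its two callbacks are standard Servlet 2.3; the pool name, BIG-IP address, port, and the PoolClient helper are illustrative assumptions standing in for the iControl LocalLB Pool calls used in the full solution. The listener itself is registered with a <listener> element in web.xml.

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

// Adds this application instance to a BIG-IP pool on startup and removes it on shutdown.
public class PoolMembershipListener implements ServletContextListener {

    // Hypothetical wrapper standing in for the iControl LocalLB Pool interface.
    private final PoolClient pool = new PoolClient("bigip.example.com", "app_pool");

    public void contextInitialized(ServletContextEvent sce) {
        // The application is up: add this instance so it starts receiving traffic.
        pool.addMember(localAddress(), 8080);
        sce.getServletContext().log("Added " + localAddress() + " to pool " + pool.name());
    }

    public void contextDestroyed(ServletContextEvent sce) {
        // The application is going away: remove the member so no requests land on a dead instance.
        pool.removeMember(localAddress(), 8080);
        sce.getServletContext().log("Removed " + localAddress() + " from pool " + pool.name());
    }

    private String localAddress() {
        try {
            return java.net.InetAddress.getLocalHost().getHostAddress();
        } catch (java.net.UnknownHostException e) {
            return "127.0.0.1";
        }
    }

    // Placeholder type; a real implementation would issue the iControl SOAP calls here.
    static class PoolClient {
        private final String bigip;
        private final String poolName;

        PoolClient(String bigip, String poolName) {
            this.bigip = bigip;
            this.poolName = poolName;
        }

        String name() {
            return poolName;
        }

        void addMember(String address, int port) {
            // iControl LocalLB Pool add_member call against 'bigip' would go here.
        }

        void removeMember(String address, int port) {
            // iControl LocalLB Pool remove_member call against 'bigip' would go here.
        }
    }
}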
How AJAX can make a more agile enterprise

In general, we talk a lot about the benefits of SOA in terms of agility, aligning IT with the business, and risk mitigation. Then we talk about WOA (web oriented architecture) separately from SOA (service oriented architecture), but go on to discuss how the two architectures can be blended to create a giant application architecture milkshake that not only tastes good, but looks good.

AJAX (Asynchronous JavaScript and XML) gets lumped under the umbrella of "Web 2.0" technologies. It's neither WOA nor SOA, being capable of participating in both architectural models easily. Some might argue that AJAX, being bound to the browser and therefore the web, is WOA. But WOA and SOA are both architectural models, and AJAX can participate in both - it is neither one nor the other. It's seen as a tool; a means to an end, rather than as an enabling facet of either architectural model. It's seen as a mechanism for building interactive and more responsive user interfaces, as a cool tool to implement interesting tricks in the browser, and as yet another cross-browser-incompatible scripting technology that makes developers' lives miserable. But AJAX, when used to build enterprise applications, can actually enable and encourage a more agile application environment.

When AJAX is applied to user-interface elements to manipulate corporate data, the applications or scripts on the server side that interact with the GUI are often distilled into discrete blocks of functionality that can be reused in other applications and scripts in which that particular functionality is required. And thus services are born. Services that are themselves agile and thus enable broader agility within the application architecture. They aren't SOA services, at least that's what purists would say, but they are services, empowered with the same characteristics as their SOA-based cousins: reusable and granular.

The problem is that AJAX is still seen as an allen wrench in an architecture that requires screwdrivers. It's often viewed only in terms of building a user interface, and the services it creates or takes advantage of on the back end as being unequal to those specifically architected for inclusion in the enterprise SOA. Because AJAX drives the development of discrete services on the server side, it can be a valued assistant in decomposing applications into their composite services. It can force you to think about the services and the operations required, because AJAX necessarily interacts with granular functions of a service in a singular fashion. If we force AJAX development to focus only on the user interface, we lose some of the benefits we can derive from the design and development process by ignoring how well AJAX fits into the service-oriented paradigm. We lose the time and effort that goes into defining the discrete services that will be used by an AJAX-enabled component in the user interface, and the possibility of reusing those services in the broader SOA. An SOA necessarily compels us to ignore platform and language and concentrate on the service. Services deployed on a web server utilizing PHP or ASP or Ruby as their implementation language are no different than those deployed on heavy application servers using JSP or Java or .NET. They can and should be included in the architectural design process to ensure they can be reused when possible. AJAX forces you to think in a service-oriented way.
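To make that concrete, the sketch below shows the kind of discrete, single-purpose endpoint an AJAX component typically ends up calling. The servlet name, URL, and lookup logic are assumptions for illustration; the point is that the operation is granular enough to be reused by other front ends or folded into the broader SOA.

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// A single-purpose endpoint an AJAX component might call, e.g. GET /customerStatus?id=42.
public class CustomerStatusServlet extends HttpServlet {

    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String id = req.getParameter("id");

        // Hypothetical lookup; a real application would delegate to a DAO or service layer.
        String status = lookupStatus(id);

        // Return a small JSON payload the client-side script can consume directly.
        resp.setContentType("application/json");
        resp.getWriter().write("{\"id\":\"" + id + "\",\"status\":\"" + status + "\"}");
    }

    private String lookupStatus(String id) {
        return (id == null || id.length() == 0) ? "unknown" : "active";
    }
}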
The services required by an AJAX-enabled user-interface should be consistent with the enterprise's architectural model and incorporated into that architecture whenever possible in order to derive agility and reuse from those services. AJAX is inherently an agile technology. Recognizing that early and incorporating the services required by AJAX-enabled components can help build a more agile, more consistent, more SOA-like application infrastructure.

Mitigate Java Vulnerability with iRules
I got a request yesterday morning asking if there was a way to drop HTTP requests if a certain number was referenced in the Accept-Language header. The user referenced this post on Exploring Binary. The number, 2.2250738585072012e-308, causes the Java runtime and compiler to go into an infinite loop when converting it to double-precision binary floating-point. Not good. Twitter is ablaze on the issue, and there is a good discussion thread on Hacker News as well.

So how do you stop it? At first, this appeared to be a no-brainer: just copy that string and drop the request if it is found in that header, right? Well, there's a catch. A few, actually. This number can be represented in many ways:

Decimal point placement => 0.00022250738585072012e-304
Leading Zeroes => 00000000002.2250738585072012e-308
Trailing Zeroes => 2.225073858507201200000e-308
Leading Zeroes in the Exponent => 2.2250738585072012e-00308
Superfluous Digits past digit 17 => 2.2250738585072012997800001e-308

String match seemed the perfect fit for this, as I needed a few wildcards to sort this out. I started in the Tcl shell just to make sure all the use cases matched.
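For a quick sanity check of those variants outside the Tcl shell, the same normalization idea can be sketched in Java: pull out anything shaped like a floating-point literal with an exponent in the dangerous range, drop the decimal point and leading zeroes, and compare the significant digits. The class below is an illustrative assumption, not the production iRule, and it only covers representations like the ones listed above.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DoubleOfDeathCheck {

    // Significant digits of 2.2250738585072012e-308 with the decimal point removed.
    private static final String BAD_DIGITS = "22250738585072012";

    // Loose shape check: a mantissa followed by a negative exponent in the 300s.
    private static final Pattern FP_TOKEN =
            Pattern.compile("([0-9]*\\.?[0-9]+)[eE]-0*3[0-9]{2}");

    // Returns true if the text contains a token whose normalized digits start with the bad prefix.
    public static boolean looksDangerous(String text) {
        Matcher m = FP_TOKEN.matcher(text);
        while (m.find()) {
            // Drop the decimal point and leading zeroes so 0.000222...e-304 and 2.22...e-308 compare alike.
            String digits = m.group(1).replace(".", "").replaceFirst("^0+", "");
            if (digits.startsWith(BAD_DIGITS)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        String[] variants = {
            "2.2250738585072012e-308",
            "0.00022250738585072012e-304",
            "00000000002.2250738585072012e-308",
            "2.225073858507201200000e-308",
            "2.2250738585072012e-00308",
            "2.2250738585072012997800001e-308"
        };
        for (String v : variants) {
            System.out.println(v + " -> " + looksDangerous("Accept-Language: " + v));
        }
    }
}

On the BIG-IP, of course, the equivalent check is expressed in the iRule against the Accept-Language header rather than in Java.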
Despite Good Intentions PaaS Interoperability Still Only Skin Deep

Salesforce and Google have teamed up with VMware to promote cloud portability, but like beauty, that portability is only skin deep. VMware has been moving of late to form strategic partnerships that enable greater portability of applications across cloud computing providers. The latest is an announcement that Google and VMware have joined forces to allow Java application "portability" with Google's App Engine.

It is important to note that the portability resulting from this latest partnership and VMware's previous strategic alliance with Salesforce.com will be the ability to deploy Java-based applications within Google and Force.com's "cloud" environments. It is not about mobility, but portability. The former implies the ability to migrate from one environment to another without modification, while the latter allows for cross-platform (or in this case, cross-cloud) deployment. Mobility should require no recompilation, no retargeting of the application itself, while portability may, in fact, require both.

The announcements surrounding these partnerships are about PaaS portability and, even more limiting, target Java-based applications. In and of itself that's a good thing, as both afford developers a choice. But it is not mobility in the sense that Intercloud as a concept defines mobility and portability, and the choice afforded developers is only skin deep.

DevCentral Top5 02/04/2011
If your week has been anything like mine, then you've had plenty to keep you busy. While I'd like to think that your "busy" equates to as much time on DevCentral checking out the cool happenings while people get their geek on as mine does, I understand that's less than likely. Fortunately, though, there is a mechanism by which I can distribute said geeky goodness for your avid assimilation. I give to you, the DC Top 5:

iRuling the New FSE Crop
http://bit.ly/f1JIiM
Easily my favorite thing that happened this week was something I was fortunate enough to get to be a part of. A new crop of FSEs came through corporate this week to undergo a training boot camp that has been, from all accounts, a smashing success. A small part of this extensive readiness regimen was an iRules challenge issued unto the newly empowered engineers by yours truly. Through this means they were intended to learn about iRules, DevCentral, and the many resources available to them for researching and investigating any requirements and questions they may have. The results are in as of today and I have to say I'm duly impressed. I'll post the results next week but, for now, here's a taste of the challenge that was issued. Keep in mind these people range from a few weeks to maybe a couple months tops of experience with F5, let alone iRules or coding in general, so this was a tall order. The gauntlet was laid down and the engineers answered, and answered with vigor. Stay tuned for more to come.

Mitigate Java Vulnerabilities with iRules
http://bit.ly/gbnPOe
Jason put out a fantastic blog post this week showing how to thwart would-be JavaScript-abusing villains by way of iRules fu. Naturally I was interested, so I investigated further. It turns out there was a vuln that cropped up plenty last week dealing with a specific string (2.2250738585072012e-308) that has a nasty habit of making the Java runtime compiler go into an infinite loop and, eventually, pack up its toys and go home. This is, as Jason accurately portrayed, "Not good.". Luckily, though, iRules is able to leap to the rescue once more, as is its nature. By digging through the HTTP::request variable, Jason was able to quickly and easily strip out any possibly harmful instances of this string in the request headers. For more details on the problem, the process and the solution, click the link and have a read.

F5 Friday: 'IPv4 and IPv6 Can Coexist' or 'How to eat your cake and have it too'
http://bit.ly/ejYYSW
Whether it was the promise of eating cake or the timely topic of IPv4 trying to cling to its last moments of glory in a world hurtling quickly towards an IPv6 existence I don't know, but this one drew me in. Lori puts together an interesting discussion, as is often the case, in her foray into the "how can these two IP formats coexist" arena. With the reality of IPGeddon acting as the stick, the carrot of switching to an IPv6-compatible lifestyle seems mighty tasty for most businesses that want to continue being operational once the new order sets in. Time is quickly running out, as are the available IPv4 addresses, so the hour is nigh for decisions to be made. This is a look at one way in which you can exist in the brave new world of 128-bit addressing without having to reconfigure every system in your architecture. It's interesting, timely, and might just save you 128 bits' worth of headaches.
Deduplication and Compression – Exactly the same, but different
http://bit.ly/h8q0OS
There's something that got passed over last week because of an absolute overabundance of goodness that I wanted to bring up this week, as I felt it warranted some further review and discussion. That is Don's look at deduplication and compression. Taking the angle of the technologies being effectively the same is an interesting one to me. Certainly they aren't the same thing, right? Clearly one prevents data from being transmitted while the other minimizes the transmission necessary. That's different, right? Still, though, as I was reading I couldn't help but find myself nodding in agreement as Don laid out the similarities. Honestly, they really do accomplish the same thing, that is, minimizing what must pass through your network, even though they achieve it by different means. So which should you use when? How do they play together? Which is more effective for your environment? All excellent questions, and precisely why this post found its way into the Top5. Go have a look for yourself.

Client Cert Fingerprint Matching via iRules
http://bit.ly/gY2M69
Continuing in the fine tradition of the outright thieving of other peoples' code to mold into fodder for my writing, this week I bring to you an awesome snippet from the land down under. Cameron Jenkins out of Australia was kind enough to share his iRule for client cert fingerprint matching with the team. I immediately pounced on it as an opportunity to share another cool example of iRules doing what they do best: making stuff work. This iRule shows off an interesting way to compare cert fingerprints in an attempt to verify a cert's identity without needing to store the entirety of the cert and key. It's also useful for restricting access to a given list of certs. Very handy in some situations, and a wickedly simple iRule to achieve that level of functionality. Good on ya, Cameron, and thanks for sharing.

There you have it, another week, another 5 pieces of hawesome from DevCentral. See you next time, and happy weekend. #Colin

DevCentral Top5 12/10/2010
It's Friday! Your work week is wrapping up (most of you, anyway) or perhaps it's even over already, depending on your time zone. You're looking forward to your weekend and trying to wrap up the last few loose ends at work to get yourself set up for a solid Monday. One of those things should be this week's Top 5. With the 5 coolest things to grace DevCentral this week in hand, you can go spread the good word about how cool this stuff is to anyone that will listen. Well, that's what I do, anyway. You do with it what you will, but here it is just the same:

Talking Microsoft Lync Server Availability and Scalability
http://bit.ly/ifbSfw
This week we got to sit down, albeit some of us virtually, with part of the Microsoft team here at F5 and talk about Microsoft Lync Server. Honestly, we let them do most of the talking, which was good because they really knew their stuff. They talked about how to design deployments, how to ensure things are stable while scaling or optimally available despite the unexpected occurring, etc. Jeff's blog post does a good job of detailing the conversations' bullet points as well as providing some links to follow to read more. The real reason it's on the list, though, is that we recorded the live stream, complete with the F5 Microsoft folks taking questions from users via chat and answering them, and it was some good stuff. Take a peek.

F5 Friday: F5 ARX Cloud Extender Opens Cloud Storage
http://bit.ly/gtMHhz
I'm not one to talk about the cloud all that much, generally speaking. It's there, I realize it's there and that companies are dealing with it more and more, but frankly there are enough people talking about it that I leave it to them to dish out all the necessary info to keep you abreast. People like Lori and, in this case since it's dealing with storage as well, Don, are great at exactly that. In this joint post they talk about cloud-based storage, why it's designed horribly from the ground up, at least as far as someone trying to attach to it via conventional means is concerned, and how a cloud extender can help solve that problem. It's pretty interesting stuff, honestly, even for cloud talk. Take a look and see if you can help your company save some time, money, energy or any combination thereof by getting them connected to cloud-based storage easily.

DevCentral Weekly Podcast 157 – Security and Some Free Beer
http://bit.ly/ex9bUw
We do the DC podcast every Thursday (at 2PM PST for those that might want to tune in), so it's not normally something that I report on here unless there is something particularly interesting that crops up. It just so happens that this is such a week. We were lucky enough to wrangle not one, not two, but three, yes count them, three F5 security types (Andy Oehler, Chris Webber, and Jonathan George) to hang out with us on the podcast this week. We talked about, as you might imagine, security. More specifically, we talked about the new consolidated security group on DevCentral, how security is a moving target and continues to change with technology and the pressures being applied via standards and new requirements, and a whole lot more. We kind of squeezed a couple DevCentral topics in there at the end, but the meaty bits, the part with the guests who know their stuff, is right there at the front. Take a listen/look and see what these guys had to say about the security world from their perspective.
Java iControl Objects – Networking SelfIP
http://bit.ly/f9CWSF
You can always count on Joe to dish out some smooth, refreshing, less filling iControl goodness, and that's precisely what he's done again here. It's straight-forward, it's detailed, it contains chunky chunks of code and it shows you how to set up a self IP on your F5 device via iControl; this tech tip has it all. It also illustrates something that could come in quite handy if you're in the "I need to configure a bunch of boxes but don't want to log into each one and manually make the changes" setting that some people are in more than they care to speak of. If you're into that whole iControl thing, and I know I am, then definitely take a look. Heck, even if you're not using iControl this series is a solid one, and there's no time like the present to dive in.

20 Lines or Less #42 – Secret list … OF DOOM
http://bit.ly/e0NM2X
Always last, but never least, the 20 Lines or Less is back yet again, bringing you three tasty morsels of iRules goodness in so few lines of code that they could safely travel under the speed limit in most US states. This week I've got a follow-up from a post reaching way back that shows how a user eventually got in-line payload matching working to replace password data, some selective HTTP/HTTPS redirection on a single virtual, and a sweet bit of iRuling from our friendly FSEs that shows off how to perform some very handy rate limiting using the table command. All of these are cool examples, and weigh in at less than 21 lines of code each, which means anyone could grab them and get started right away. Maybe you'd rather tweak them to better suit your needs; either way is fine by me, just check them out, they're cool. The last example is so cool in fact that I'm breaking it down into a full tech tip for Monday, in a wicked cool new style that Jason and I have been talking about. But that's Monday; for now, take a look and check out some cool iRules.

There you have it, my favorite five from DevCentral this week. Be back next week for more, and as always, drop me a line if you have any comments, questions, suggestions or feedback of any sort. #Colin