pki
Lightboard Lessons: What's in a certificate?
When you visit a "https://" website, you exchange a digital certificate with the web server that hosts that website. But, what exactly is a digital certificate, and what's inside it? In this Lightboard Lesson video, John explores the details of what a digital certificate is and what information it holds. Enjoy! Related Resources: How RSA encryption works Managing SSL certificates for BIG-IP systems411Views0likes4CommentsAsk the Expert – Why SSL Everywhere?
Ask the Expert – Why SSL Everywhere?
Kevin Stewart, Security Solution Architect, talks about the paradigm shift in the way we think about IT network services, particularly SSL and encryption. Gone are the days when clear text roamed freely on the internal network; organizations now want to bring SSL all the way to the application, which adds complexity. Kevin explains some of the challenges of encrypting all the way to the application and ways to address this growing trend. SSL is not just about protecting data in motion; it's also about privacy. ps

Related:
Ask the Expert – Are WAFs Dead?
RSA2015 – SSL Everywhere (feat Holmes)
AWS re:Invent 2015 – SSL Everywhere…Including the Cloud (feat Stanley)
F5 SSL Everywhere Solutions

Technorati Tags: f5, ssl, encryption, pki, big-ip, security, privacy, silva, video
Security Sidebar: Google Leads The Way On Sunsetting SHA-1
SHA-1 (or Secure Hash Algorithm) is a cryptographic algorithm that was developed by the National Security Agency in the 1990s and is widely used in popular cryptographic protocols like Secure Sockets Layer (SSL) and Transport Layer Security (TLS). These protocols are designed to provide secure communications over the Internet. The SHA-1 algorithm is commonly used by Certificate Authorities (CAs) as a part of the overall Public Key Infrastructure (PKI). While this article is not meant to fully explain PKI, it is important to note that many CAs utilize the SHA-1 algorithm to digitally sign certificates for secure websites. These CA-issued certificates are critical for users who want to maintain a level of trust and security when accessing those secure websites.

If a user visits a secure website (https) and the digital certificate is not valid, it could mean that a bad guy is attempting to steal your information. Your Internet browser (Internet Explorer, Firefox, Chrome, Safari, etc.) will notice that the certificate is bad and it will alert you to the fact that you are about to engage in some non-secure communications. Each browser presents this information in a slightly different way, but they all give you an alert nonetheless. The screenshot below shows an example of Google Chrome attempting to access a secure website that has an invalid certificate.

On September 5, 2014, Google announced that their popular Chrome browser will sunset the SHA-1 algorithm. They claim that SHA-1 has been a weak digital signature algorithm for at least 9 years. One of the primary reasons for this weakness is the ease of collision attacks against SHA-1, thus prompting Google to declare it no longer safe for public consumption. Google is not alone in their current fear and loathing of SHA-1...most other browsers have stated their intention to deprecate SHA-1 as well. While everyone agrees that SHA-1 needs to be replaced, not everyone agrees on the process or timeline to do so. Starting in November 2014 (as in, like, next month), Google will methodically sunset the SHA-1 algorithm starting with Chrome version 39. Websites using HTTPS whose certificate chains use SHA-1 and are valid past January 1, 2017 will no longer appear to be fully trusted in Chrome.

Google Chrome has several different icon indicators in the address bar that display the overall trust factor of a given certificate chain for the website you are accessing. The first is a lock with a yellow triangle over it. This indicates a certificate chain that is "secure, but with minor errors." The next is a blank page icon. This indicates a certificate chain that is "neutral, lacking security." The last is a lock with a red X and a red strike-through text treatment in the URL scheme. This indicates a certificate chain that is "affirmatively insecure." Check out the above screenshot again...notice that it falls in the category of "affirmatively insecure" based on the red X and the red strike-through text in the URL.

So, how does all this fit together? Well, Google has announced that it will start displaying these various icons on websites that use the SHA-1 algorithm. The following table shows the details of the SHA-1 certificate expiration date and the related Chrome icon display in the address bar. Today, SHA-1 is used in over 98% of certificates issued worldwide. Likewise, Google Chrome accounts for 38% of all Internet browsers used today (as of August 2014...see the chart below).
When you combine the fact that Google Chrome accounts for almost 40% of all Internet browsers and SHA-1 is used in over 98% of all certificates worldwide, you can see why so many CAs are scrambling right now to reissue new certificates in very short order. As you update/validate the certificates in your organization, you will need to verify that legacy applications will support the new algorithm. Also, if you have externally hosted applications, you may need to issue new certificates so that users don't get those crazy browser warnings. This SHA-1 situation is just another reminder of the ever-changing technical world we live in. It's important to know what's out there, and it's important to stay as far ahead of it as possible.
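If you're wondering whether one of your own sites will trip these new indicators, a quick check of the certificate's signature algorithm and expiration date goes a long way. The sketch below is illustrative only: it uses the third-party "cryptography" package, the host name is a placeholder, and it inspects only the leaf certificate, whereas Chrome's policy evaluates the whole chain.

import ssl
from datetime import datetime
from cryptography import x509

host = "www.example.com"                               # placeholder
pem = ssl.get_server_certificate((host, 443))
cert = x509.load_pem_x509_certificate(pem.encode())

algo = cert.signature_hash_algorithm.name              # e.g. 'sha1' or 'sha256'
not_after = cert.not_valid_after
print(f"{host}: signed with {algo}, expires {not_after:%Y-%m-%d}")

if algo == "sha1" and not_after >= datetime(2017, 1, 1):
    print("Chrome 39+ will begin degrading the UI for this certificate.")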
Dispelling the New SSL Myth
Claiming SSL is not computationally expensive is like saying gas is not expensive when you don't have to drive to work every day.

My car is eight years old this year. It has less than 30,000 miles on it. Yes, you heard that right, less than 30,000 miles. I don't drive my car very often because, well, my commute is a short trip down two flights of stairs. I don't need to go very far when I do drive; it's only ten miles or so round trip to the grocery store. So from my perspective, gas isn't really very expensive. I may use a tank of gas a month, which works out to … well, it's really not even worth mentioning the cost. But for someone who commutes every day – especially someone who commutes a long distance every day – gas is expensive. It's a significant expense every month for them and they would certainly dispute my assertion that the cost of gas isn't a big deal. My youngest daughter, for example, would say gas is very expensive – but she's got a smaller pool of cash from which to buy gas, so, relatively speaking, we're both right.

The same is true for anyone claiming that SSL is not computationally expensive. The way in which SSL is used – the ciphers, the certificate key lengths, the scale – has a profound impact on whether or not "computationally expensive" is an accurate statement. And as usual, it's not just about speed – it's also about the costs associated with achieving that performance. It's about efficiency, and leveraging resources in a way that enables scalability. It's not the cost of gas alone that's problematic, it's the cost of driving, which also has to take into consideration factors such as insurance, maintenance, tires, parking fees and other driving-related expenses.

MYTH: SSL is NOT COMPUTATIONALLY EXPENSIVE TODAY
SSL is still computationally expensive. Improvements in processor speeds have, in some circumstances, made that expense less impactful. Circumstances are changing. Commoditized x86 hardware can in fact handle SSL a lot better today than it ever could before – when you're using 1024-bit keys and "easy" ciphers like RC4. Under such parameters it is true that commodity hardware may perform efficiently and scale up better than ever when supporting SSL. Unfortunately for proponents of SSL-on-the-server, 1024-bit keys are no longer the preferred option, and security professionals are likely well aware that "easy" ciphers are also "easy" pickings for miscreants.

In January 2011, NIST recommendations regarding the deployment of SSL went into effect. While NIST is not a standards body that can require compliance, it can and does force government and military compliance and has shown its influence with commercial certificate authorities. All commercial certificate authorities now issue only 2048-bit keys. This increase has a huge impact on the capacity of a server to process SSL and renders completely inaccurate the statement that SSL is not computationally expensive anymore. A typical server that could support 1500 TPS using 1024-bit keys will only support 1/5 of that (around 300 TPS) when supporting modern best practices, i.e. 2048-bit keys. Also of note is that NIST recommends ephemeral Diffie-Hellman - not RSA - for key exchange, and per the TLS 1.0 specification, AES or 3DES-EDE-CBC, not RC4. These are much less "easy" ciphers than RC4, but unfortunately they are also more computationally intense, which also has an impact on overall performance.
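To put the 1500-to-300 TPS figure in concrete terms, here's the back-of-the-envelope math. The per-server numbers are the ones cited above; the target handshake rate is a made-up example.

import math

tps_1024 = 1500          # handshakes/sec per server with 1024-bit keys (figure above)
tps_2048 = 300           # roughly one fifth of that with 2048-bit keys (figure above)
peak_handshakes = 6000   # hypothetical peak handshake rate for a site

print("Servers needed at 1024-bit:", math.ceil(peak_handshakes / tps_1024))   # 4
print("Servers needed at 2048-bit:", math.ceil(peak_handshakes / tps_2048))   # 20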
Key length and ciphers become important to the performance and capacity of SSL not just during the handshaking process, but in bulk-encryption rates. It is one thing to say a standard server deployed to support SSL can handle X handshakes (connections) and quite another to simultaneously perform bulk encryption on subsequent data responses. The size and number of those responses have a huge impact on the rate at which resources are consumed by SSL-related functions, and thus on the overall server's capacity. Larger data sets require more cryptographic attention that can drag down the rate of encryption – that means slower response times for users and higher resource consumption on servers, which decreases resources available for handshaking and server processing and cascades throughout the entire system to result in a reduction of capacity and poor performance.

Tweaked configurations, poorly crafted performance tests, and a failure to consider basic mathematical relationships may seem to indicate SSL is "not" computationally expensive, yet this contradicts most experience with deploying SSL on the server. Consider this question and answer in the SSL FAQ for the Apache web server:

Why does my webserver have a higher load, now that it serves SSL encrypted traffic?
SSL uses strong cryptographic encryption, which necessitates a lot of number crunching. When you request a webpage via HTTPS, everything (even the images) is encrypted before it is transferred. So increased HTTPS traffic leads to load increases.

This is not myth; this is a well-understood fact – SSL requires a higher computational load, which translates into higher consumption of resources. That consumption of resources increases with load. Having more resources does not change the consumption of SSL, it simply means that from a mathematical point of view the consumption rates relative to the total appear to be different. The "amount" of resources consumed by SSL (which is really the amount of resources consumed by cryptographic operations) is proportional to the total system resources available. The additional consumption of resources from SSL is highly dependent on the type and size of data being encrypted, the load on the server from both processing SSL and application requests, and on the volume of requests.

Interestingly enough, the same improvements in capacity and performance of SSL associated with "modern" processors and architecture are also applicable to intermediate SSL-managing devices. Both their specialized hardware (if applicable) and general-purpose CPUs significantly increase the capacity and performance of SSL/TLS encrypted traffic on such solutions, making their economy of scale much greater than that of server-side deployed SSL solutions.
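You can get a feel for the bulk-encryption cost described above on your own hardware with a few lines of Python. The sketch below uses the third-party "cryptography" package and AES-CBC (one of the cipher choices mentioned earlier); the payload sizes are arbitrary examples standing in for responses of different sizes.

import os, time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, iv = os.urandom(32), os.urandom(16)

for size in (16_384, 262_144, 4_194_304):          # 16 KB, 256 KB, 4 MB responses
    payload = os.urandom(size)                     # stand-in for an HTTP response body
    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    start = time.perf_counter()
    _ = encryptor.update(payload) + encryptor.finalize()
    elapsed = (time.perf_counter() - start) * 1000
    print(f"{size // 1024:>5} KB encrypted in {elapsed:.2f} ms")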
THE SSL-SERVER DEPLOYED DISECONOMY of SCALE
Certainly if you have only one or even two servers supporting an application for which you want to enable SSL, the costs are going to be significantly different than for an organization that may have ten or more servers comprising such a farm. It is not just the computational costs that make SSL deployed on servers problematic, it is also the associated impact on infrastructure and the cost of management. Reports that fail to factor in the associated performance and financial costs of maintaining valid certificates on each and every server – and the management / creation of SSL certificates for ephemeral virtual machines – are misleading. Such solutions assume a static environment and a deep pocket, or perhaps less than ethical business practices. Such tactics attempt to reduce the capital expense associated with external SSL intermediaries by increasing the operational expense of purchasing and managing large numbers of SSL certificates – including having a ready store that can be used for virtual machine instances. As the number of services for which you want to provide SSL-secured communication increases and the scale of those services grows, it becomes more costly to manage the required environment. Like IP address management in an increasingly dynamic environment, there is a diseconomy of scale that becomes evident as you attempt to scale the systems and processes involved.

DISECONOMY of SCALE #1: CERTIFICATE MANAGEMENT
Obviously the more servers you have, the more certificates you need to deploy. The costs associated with management of those certificates – especially in dynamic environments – continue to rise, and the possibility of missing an expiring certificate increases with the number of servers on which certificates are deployed. The promise of virtualization and cloud computing is to address the diseconomy of scale; the ability to provision a ready-to-function server, complete with the appropriate web or application stack serving up an application, for purposes of scale assumes that everything is ready. Unless you're willing to forgo properly provisioning SSL certificates, you cannot achieve this with a server-deployed SSL strategy. Each virtual image upon which a certificate is deployed must be pre-configured with the appropriate certificate and keys, and you can't launch the same one twice. This has the result of negating the benefits of a dynamically provisioned, scalable application environment and unnecessarily increases storage requirements, because images aren't small. Failure to recognize and address this management burden and its resulting impact on other areas of infrastructure (such as storage and scalability processes) means ignoring completely the actual real-world costs of a server-deployed SSL strategy.

It is always interesting to note the inability of web servers to support SSL for multiple hosts on the same server, i.e. virtual hosts:

Why can't I use SSL with name-based/non-IP-based virtual hosts?
The reason is very technical, and a somewhat "chicken and egg" problem. The SSL protocol layer stays below the HTTP protocol layer and encapsulates HTTP. When an SSL connection (HTTPS) is established Apache/mod_ssl has to negotiate the SSL protocol parameters with the client. For this, mod_ssl has to consult the configuration of the virtual server (for instance it has to look for the cipher suite, the server certificate, etc.). But in order to go to the correct virtual server Apache has to know the Host HTTP header field. To do this, the HTTP request header has to be read. This cannot be done before the SSL handshake is finished, but the information is needed in order to complete the SSL handshake phase.

Bingo! Because an intermediary terminates the SSL session and then determines where to route the requests, a variety of architectures can be more easily supported without the hassle of configuring each and every web server – which must be bound to an IP address to support SSL in a virtual host environment. This isn't just a problem for hosting/cloud computing providers; this is a common issue faced by organizations supporting different "hosts" across the domain for tracking, for routing, for architectural control. For example, api.example.com and www.example.com often end up on the same web server, but use different "hosts" for a variety of reasons. Each requires its own certificate and SSL configuration – and they must be bound to an IP address – making scalability, particularly auto-scalability, more challenging and more prone to the introduction of human error.
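A hypothetical illustration of the management problem: once certificates live on every server, something has to keep watching all of their expiration dates. The host names below are placeholders; a real farm would have dozens or hundreds of entries, plus whatever gets spun up dynamically.

import socket, ssl
from datetime import datetime, timezone

def days_until_expiry(host, port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            not_after = tls.getpeercert()["notAfter"]      # e.g. 'Jun  1 12:00:00 2026 GMT'
    expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(not_after), tz=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

server_farm = ["www.example.com", "api.example.com"]       # placeholder inventory
for host in server_farm:
    days = days_until_expiry(host)
    flag = "  <-- renew soon!" if days < 30 else ""
    print(f"{host}: {days} days left{flag}")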
The OpEx savings in a single year from SSL certificate costs alone could easily provide an ROI justification for the CapEx of deploying an SSL device, before even considering the costs associated with managing such an environment. CapEx is a one-time expense, while OpEx is recurring and expensive.

DISECONOMY of SCALE #2: CERTIFICATE/KEY SECURITY
The simplistic nature of the argument also fails to take into account the sensitive nature of keys and certificates and the regulatory compliance issues that may require hardware-based storage and management of those keys regardless of where they are deployed (FIPS 140-2 Level 2 and above). While there are secure and compliant HSMs (Hardware Security Modules) that can be deployed on each server, this requires serious attention and an increase in management effort and skills to deploy. The alternative is to fail to meet compliance (not acceptable for some) or to simply deploy the keys and certificates on commoditized hardware (which increases the risk of theft and could lead to far more impactful breaches). For some IT organizations to meet business requirements, they will have to rely on some form of hardware-based solution for certificate and key management, such as an HSM or FIPS 140-2 compliant hardware. The choices are to deploy on every server (note this may become very problematic when trying to support virtual machines) or to deploy on a single intermediary that can support all servers at the same time and scale without requiring additional hardware/software support.

DISECONOMY of SCALE #3: LOSS of VISIBILITY / SECURITY / AGILITY
SSL "all the way to the server" has a profound impact on the rest of the infrastructure, too, and on the scalability of services. Encrypted traffic cannot be evaluated or scanned or routed based on content by any upstream device. IDS and IPS and even so-called "deep packet inspection" devices upstream of the server cannot perform their tasks upon the traffic because it is encrypted. The solution is to deploy the certificates from every machine on those devices so that they can decrypt and re-encrypt the traffic. Obviously this introduces unacceptable amounts of latency into the exchange of data, but the alternative is to not scan or inspect the traffic, leaving the organization open to potential compromise. It is also important to note that encrypting "bad" traffic – e.g. malicious code, malware, phishing links, etc. – does not change the nature of that traffic. It's still bad; it's also now "hidden" from every piece of security infrastructure that was designed and deployed to detect and stop it.

A server-deployed SSL strategy eliminates visibility and control and the ability to rapidly address both technical and business-related concerns. Security is particularly negatively impacted. Emerging threats, such as a new worm or virus for which AV signatures have not yet been updated, can be immediately addressed by an intelligent intermediary – whether as a long-term solution or a stop-gap measure.
Vulnerabilities in security protocols themselves, such as the TLS man-in-the-middle attack, can be immediately addressed by an intelligent, flexible intermediary long before the actual solutions providing the service can be patched and upgraded.

A purely technical approach to architectural decisions regarding the deployment of SSL or any other technology is simply unacceptable in an IT organization that is actively trying to support and align itself with the business. Architectural decisions of this nature can have a profound impact on the ability of IT to subsequently design, deploy and manage business-related applications and solutions, and should not be made in a technical or business vacuum, without a full understanding of the ramifications.

The Anatomy of an SSL Handshake [Network Computing]
Get Ready for the Impact of 2048-bit RSA Keys [Network Computing]
SSL handshake latency and HTTPS optimizations [semicomplete.com]
Black Hat: PKI Hack Demonstrates Flaws in Digital Certificate Technology [DarkReading]
SSL/TLS Strong Encryption: FAQ [apache.org]
The Open Performance Testing Initiative
The Order of (Network) Operations
Congratulations! You do nothing faster than anyone else!
Data Center Feng Shui: SSL
WILS: SSL TPS versus HTTP TPS over SSL
F5 Friday: The 2048-bit Keys to the Kingdom
TLS Man-in-the-Middle Attack Disclosed Yesterday Solved Today with Network-Side Scripting
The Encrypted Elephant in the Cloud Room
#infosec Encrypting data in the cloud is tricky and defies long-held best practices regarding key management. New kid on the block Porticor aims to change that.

Anyone who's been around cryptography for a while understands that secure key management is a critical foundation for any security strategy involving encryption. Back in the day it was SSL, and an entire industry of solutions grew up specifically aimed at protecting the key to the kingdom – the master key. Tamper-resistant hardware devices are still required for some US Federal security standards under the FIPS banner, with specific security protections at the network and software levels providing additional assurance that the ever-important key remains safe. In many cases it's advised that the master key is not even kept on the same premises as the systems that use it. It must be locked up, safely, offsite; transported via a secure briefcase, handcuffed to a security officer and guarded by dire wolves. With very, very big teeth. No, I am not exaggerating. At least not much. The master key really is that important to the security of cryptography.

That's why encryption in the cloud is such a tough nut to crack. Where, exactly, do you store the keys used to encrypt those Amazon S3 objects? Where, exactly, do you store the keys used to encrypt disk volumes in any cloud storage service? Start-up Porticor has an answer, one that breaks (literally and figuratively) traditional models of key management and offers a pathway to a more secure method of managing cryptography in the cloud.

SPLIT-KEY ENCRYPTION
Porticor is a combination SaaS / IaaS solution designed to enable encryption of data at rest in IaaS environments with a focus on cloud, currently available on AWS and other clouds. It's a combination not just in deployment model – which is rapidly becoming the norm for cloud-based services – but in architecture, as well. To avoid violating best practices with respect to key management – i.e. you don't store the master key right next to the data it's been used to encrypt – Porticor has developed a technique it calls "Split-Key Encryption."

Data encryption comprises, you'll recall, the execution of an encryption algorithm on the data using a secret key, the result of which is ciphertext. The secret key is the, if you'll pardon the pun, secret to gaining access to that data once it has been encrypted. Storing it next to the data, then, is obviously a Very Bad Idea™, and as noted above the industry has already addressed the risk of doing so with a variety of solutions. Porticor takes a different approach by focusing on the security of the key not only from the perspective of its location but of its form. The secret master key in Porticor's system is actually a mathematical combination of a master key generated on a per-project (disk volumes or S3 objects) basis and a unique key created by the Porticor Virtual Key Management™ (PVKM™) system. The master key is half of the real key, and the PVKM-generated key the other half. Only by combining the two – mathematically – can you discover the true secret key needed to work with the encrypted data. The PVKM-generated key is stored in Porticor's SaaS-based key management system, while the master keys are stored in the Porticor virtual appliance, deployed in the cloud along with the data it's protecting.
The fact that the secret key can only be derived algorithmically from the two halves of the keys enhances security by making it impossible to find the actual encryption key from just one of the halves, since the math used removes all hints to the value of that key. It removes the risk of someone being able to recreate the secret key correctly unless they have both halves at the same time. The math could be a simple concatenation, but it could also be a more complicated algebraic equation. It could ostensibly be different for each set of keys, depending on the lengths to which Porticor wants to go to minimize the risk of someone being able to recreate the secret key correctly. Still, some folks might be concerned that the master key exists in the same environment as the data it ultimately protects. Porticor intends to address that by moving to a partially homomorphic key encryption scheme.

HOMOMORPHIC KEY ENCRYPTION
If you aren't familiar with homomorphic encryption, there are several articles I'd encourage you to read, beginning with "Homomorphic Encryption" by Technology Review, followed by Craig Stuntz's "What is Homomorphic Encryption, and Why Should I Care?" If you can't get enough of equations and formulas, then wander over to Wikipedia and read its entry on Homomorphic Encryption as well. Porticor itself has a brief discussion of the technology, but it is not nearly as deep as the aforementioned articles. In a nutshell (in case you can't bear to leave this page), homomorphic encryption is the fascinating property of some algorithms to work both on plaintext as well as on encrypted versions of the plaintext and come up with the same result. Executing the algorithm against encrypted data and then decrypting it gives the same result as executing the algorithm against the unencrypted version of the data.

So, what Porticor plans to do is apply homomorphic encryption to the keys, ensuring that the actual keys are no longer stored anywhere – unless you remember to tuck them away someplace safe or write them down. The algorithms for joining the two keys are performed on the encrypted versions of the keys, resulting in an encrypted symmetric key specific to one resource – a disk volume or S3 object. The resulting system ensures that:

- No keys are ever on a disk in plain form
- Master keys are never decrypted, and so they are never known to anyone outside the application owner themselves
- The "second half" of each key (stored in the PVKM) is also never decrypted, and is never even known to anyone (not even Porticor)
- Symmetric keys for a specific resource exist in memory only, are decrypted for use only when the actual data is needed, and are then discarded

This effectively eliminates one more argument against cloud – that keys cannot adequately be secured. In a traditional data encryption solution the only thing you need is the secret key to unlock the data. Using Porticor's split-key technology, you need both the PVKM key and the master key, and the means to recombine them. Layer atop that homomorphic key encryption to ensure the keys don't actually exist anywhere, and you have a rejoinder to the claim that secure data and cloud simply cannot coexist.
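The article doesn't spell out Porticor's actual math, but the general idea behind split-key (secret-splitting) schemes is easy to illustrate. In the toy sketch below the "algebraic equation" is a simple XOR: neither half alone says anything about the data key, yet combining the halves reproduces it exactly.

import os

data_key    = os.urandom(32)    # the symmetric key that actually encrypts the resource
pvkm_half   = os.urandom(32)    # random half held by the key-management service
master_half = bytes(a ^ b for a, b in zip(data_key, pvkm_half))   # half kept with the project

# Either half by itself is indistinguishable from random noise.
# Recombining them (in memory, only when the data is accessed) recovers the key:
recovered = bytes(a ^ b for a, b in zip(master_half, pvkm_half))
assert recovered == data_key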
In addition to the relative newness of the technique (and the nature of being untried at this point), the argument against homomorphic encryption of any kind is a familiar one: performance. Cryptography in general is by no means a fast operation, and there is more than a decade's worth of technology in the form of hardware acceleration (and associated performance tests) specifically designed to remediate the slow performance of cryptographic functions. Homomorphic encryption is noted to be excruciatingly slow, and the inability to leverage any kind of hardware acceleration in cloud computing environments offers no relief. Whether this performance penalty will be worth the additional level of security such a system adds is largely a matter of conjecture and highly dependent upon the balance between security and performance required by the organization.

Related blogs & articles:
Getting at the Heart of Security in the Cloud
Threat Assessment: Terminal Services RDP Vulnerability
The Cost of Ignoring 'Non-Human' Visitors
Identity Gone Wild! Cloud Edition
The New Certificate 2048 My Performance
SSL is a cryptographic protocol used to secure communications over the internet. SSL ensures secure end-to-end transmission and is implemented in every Web browser. It can also be used to secure email, instant messaging and VoIP sessions. The encryption and decryption of SSL is computationally intensive and can put a strain on server resources like CPU. Currently, most server SSL certificates use a 1024-bit key length, and the National Institute of Standards and Technology (NIST) is recommending a transition to 2048-bit key lengths by Jan 1st, 2011.

SSL and its brethren, TLS (Transport Layer Security), provide the security and encryption necessary for secure communications over the internet, and particularly for creating an encrypted link between the browser and web server. You will see 'https' in your browser address bar when visiting a site that is SSL enabled. The strength of SSL is tied to the size of the Public Key Infrastructure (PKI) key. Key length or key size (1024-bit, 2048-bit, 4096-bit) is measured in bits and is typically used to indicate the strength of the encryption algorithm; the longer the key length, the harder it is to decode. In order to enable an SSL connection, the server needs to have a digital certificate installed. If you have multiple servers, each requiring SSL, then each server must have a digital certificate.

Transactions handled over SSL can require substantial computational power to establish the connection (handshake) and then to encrypt and decrypt the transferred data. If you need the same performance as non-secured data, then additional computing power (CPU) is needed. SSL processing can be up to 5 times more computationally expensive than clear text in order to deliver the same level of performance, no matter which vendor is providing the hardware. This can have significant, detrimental ramifications for server performance. SSL offload takes much of that computing burden off the servers and places it on dedicated SSL hardware. SSL offload allows organizations to migrate 100% of their communications to SSL for greater security, consolidation of certificates, centralized management, and reduction of cost, and allows for selective content encryption and encrypted cookies, along with the ability to inspect and modify encrypted traffic. SSL offloading can relieve the Web server of the processing burden of encrypting and/or decrypting traffic sent via SSL.

Customers, vendors and the industry as a whole will soon face the challenge of what to do regarding their SSL strategy. Those who have valid 1024-bit certificates need to understand the ramifications of the switch, and that the next time they go to renew their certificates, they will be forced to buy 2048-bit certificates. This will drastically affect their SSL capacity on both the servers and the load balancer. There is a significant increase in needed computational power going from 1024-bit to 2048-bit, and an exponential drop-off in performance when doubling key sizes, regardless of the platform or vendor. Most CAs, like Entrust, have already stopped issuing 1024-bit certificates, and Verisign will stop doing so in 4-5 months. Since many certificate vendors are now only issuing 2048-bit certificates, customers might not understand the potential impact on SSL performance and capacity. If you don't offload, the overall performance impact of 2048-bit keys on the servers will increase significantly. This can be a challenge when you have hundreds of servers providing content. Existing certificates issued with 1024-bit encryption will not stop working.
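The drop-off is easy to measure for yourself. The sketch below (using the third-party "cryptography" package, not an F5 tool) times the RSA private-key operation that dominates the server's side of the handshake at both key sizes; the absolute numbers will vary with your hardware, but the ratio tells the story.

import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

for bits in (1024, 2048):
    key = rsa.generate_private_key(public_exponent=65537, key_size=bits)
    start = time.perf_counter()
    for _ in range(200):
        key.sign(b"handshake-sized payload", padding.PKCS1v15(), hashes.SHA256())
    elapsed = time.perf_counter() - start
    print(f"{bits}-bit key: {200 / elapsed:,.0f} private-key ops/sec")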
If you still have valid certificates but need to ensure you are delivering 2048-bit certificates to users (or due to regulatory requirements), one option, as mentioned in Lori's blog, is to install the 2048-bit certificate on your BIG-IP LTM for the offload performance capabilities and then use your existing 1024-bit keys from BIG-IP LTM to the back-end server farm. Simply import the server certificates directly into BIG-IP. This means that the SSL certificates that would normally go on each server can be centrally stored and managed by LTM, thereby reducing the cost of the certificates needed as well as the cost of any specialized server software/hardware required. This keeps the load off the servers, potentially eliminating any performance issues, and allows you to stay current with NIST guidelines while still providing an end-to-end SSL connection for your web applications.

This is a huge advantage over commodity hardware with no SSL offload capabilities. BIG-IP LTM has specialized SSL chips which are dedicated and optimized for SSL encryption and decryption. These chips provide the ability to maintain performance levels even at longer key lengths, whereas on commodity hardware the computational load of SSL decreases overall system performance, impacting user experience and other server tasks. The F5 SSL Acceleration Module removes all the bottlenecks for secure, wire-speed processing, including concurrent users, bulk throughput, and new transactions per second, along with supporting certificates up to 4096 bits. The fully loaded F5 VIPRION chassis is the most powerful SSL-offloading engine on the market today and, along with the BIG-IP LTM Virtual Edition (VE), provides a powerful solution to the SSL challenge. By front-ending BIG-IP VE farms with a VIPRION, you can assign load balancing or SSL offloading to a dedicated ADC.

The same approach can remedy access to legacy systems that might not support 2048-bit certificates or cannot be upgraded due to business restrictions or other rationale. By deploying an F5 BIG-IP device with a 2048-bit certificate in front of the legacy systems, back-end encryption can be accomplished using existing 1024-bit certificates. F5 does support 4096-bit keys, future-proofing support for longer keys down the road, and offers backwards and forwards compatibility, but unless there is a strong business case, 2048-bit keys are recommended for optimal performance and protection. ps
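To make the offload-and-re-encrypt pattern above concrete, here is a deliberately stripped-down sketch of the idea in Python: terminate TLS from the client with the newer certificate, then open a separate TLS session to the legacy back end. It is an illustration only, not F5 code; the file paths, addresses, and the lax back-end verification are all hypothetical, and a production deployment would use a purpose-built ADC such as BIG-IP rather than a script.

import socket, ssl, threading

FRONT_PORT = 8443
BACKEND = ("10.0.0.10", 443)                       # legacy server, hypothetical address

# Client-facing side: present the 2048-bit certificate.
front_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
front_ctx.load_cert_chain("/etc/pki/site-2048.crt", "/etc/pki/site-2048.key")

# Back-end side: re-encrypt to the existing 1024-bit/self-signed certificate.
back_ctx = ssl.create_default_context()
back_ctx.check_hostname = False                    # legacy cert won't match; sketch only
back_ctx.verify_mode = ssl.CERT_NONE

def pump(src, dst):
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)

def handle(client_tls):
    with socket.create_connection(BACKEND) as raw:
        with back_ctx.wrap_socket(raw, server_hostname=BACKEND[0]) as backend_tls:
            threading.Thread(target=pump, args=(backend_tls, client_tls), daemon=True).start()
            pump(client_tls, backend_tls)

listener = socket.create_server(("0.0.0.0", FRONT_PORT))
while True:
    conn, _ = listener.accept()
    client_tls = front_ctx.wrap_socket(conn, server_side=True)
    threading.Thread(target=handle, args=(client_tls,), daemon=True).start()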
Telecommute your way to a greener bottom line
For the past eight years I've been telecommuting, first for Network Computing Magazine and now for F5. In fact, Don and I have been telecommuters (or teleworkers, depending on whom you ask) for so long that our children don't realize that most people actually have to get dressed and go to work on a daily basis. Granted, that's because we happen to live (and want to stay) in that great technological mecca of the midwest (Green Bay) even though F5 is headquartered in Seattle, but F5, being the best high-tech company in the Pacific Northwest (really, I'm not just saying that), has employees who routinely telecommute despite living in the Seattle area.

Obviously there are personal benefits to telecommuting that cannot be measured, particularly if you have a family or hate to shower on a regular basis. But there are also plenty of disbenefits (that is too a word, I just made it up) that come from being "in the office" all the time, particularly with the lure of "getting just one more thing done" constantly in your face and at your fingertips. There are many corporate benefits, as well, and some that are often more far-reaching than just saving office space at corporate headquarters. The positive impact of the carbon emissions saved even by employees telecommuting one or two days a week should not be underestimated, especially given the number of employees who commute to the workplace and the length of time they spend doing so. Mindy S. Lubber at the Harvard Business Online Leading Green blog ponders the effects of physically commuting to work:

And it makes me wonder--are we really maximizing the impact of open work as a strategy to combat rising energy use, increased greenhouse gas emissions, and the greater climate change crisis? In my home state of Massachusetts, more than 3 million people commute by car each day--74 percent of those commuters driving alone. Every year, urban commuters in the U.S. waste 2.9 billion gallons of fuel idling in traffic--the equivalent of 58 fully-loaded supertanker ships.

But it's more than just environmental consciousness that is driving the march toward more telecommuting options. As Ted Samson of InfoWorld noted last year, there are many financial benefits to telecommuting to consider. For starters, the ITAC found that employers can realize an annual per-employee savings of $5,000 through implementing telecommuting programs. "Your organization could save one office for every three teleworkers (that's about $2,000 per teleworker per year, or $200,000 per 100 teleworkers)," according to the Canadian Telework Association (CTA). Case in point: through Sun's telecommute program, called Sun Open Work Practice, around 2,800 employees work from home three to five days a week; another 14,219 work remotely twice weekly, according to reports. The company says its efforts have resulted not only in 29,000 fewer tons of CO2 emissions, but also in $63 million reaped in the last fiscal year by cutting 6,660 office seats. With those kinds of green savings - both financial and environmental - the question has to be why more corporations aren't jumping on the telecommuting bandwagon.

THE TECHNOLOGY FACTOR
In the past, the cost and complexity of the PKI (Public Key Infrastructure) necessary to support corporate access via a VPN were often prohibitive and made telecommuting an unfavorable option for corporations.
But the advent of SSL VPNs reduced both the cost and complexity of providing secure remote access to corporate resources from remote locations and has virtually eliminated both cost and complexity as reasons not to implement a telecommuting policy. Even in the past few years, the ability of SSL VPNs to integrate with the rest of the corporate infrastructure and support connectivity beyond the desktop via Apple's iPhone and Windows Mobile devices has expanded and improved, making corporate connectivity a breeze no matter where a telecommuter or roaming employee might be.

THE HUMAN FACTOR
The bigger question is, of course, whether employees are good telecommuters or not. A significant drop in productivity can offset the savings realized by telecommuting, so it's a somewhat risky proposition. An SSL VPN is perfect for implementing a trial program for telecommuting. Because it requires no hardware or software at remote sites (client connections are proxied through a web-based client in almost all cases at the time the user logs in), there's less time and effort and money invested in giving employees a chance to try out telecommuting and see if it works for them - and you. All you need is an SSL VPN at corporate headquarters and you can implement a trial run to see what works best for you and your employees. Maybe it turns out to be an incentive program, or a reward for service - on par with how most employees accrue more vacation days the longer they are with the organization.

IT DOESN'T HAVE TO BE ALL OR NOTHING
Even if a telecommuting initiative doesn't work out, having an SSL VPN available will still turn out to be a good investment. Everyone has days when they're too sick to come into the office but could still work if they could just do it from home. Likewise, children get sick and need parents at home who could be working off and on rather than losing the entire day. Traveling employees can still have access to corporate resources if need be, which is another great use of the investment, whether it's used for a telecommuting initiative or not. SSL VPNs provide a wide variety of options for secure remote access regardless of the reasons why that access is required. Whether you're into green cash or green grass, there's a good reason to consider deploying (and using) an SSL VPN.