SSL Offload

Implementing ECC+PFS on LineRate (Part 1/3): Choosing ECC Curves and Preparing SSL Certificates
(Editor's note: the LineRate product has been discontinued for several years. 09/2023)

Overview

In case you missed it, Why ECC and PFS Matter: SSL offloading with LineRate details some of the reasons why ECC-based SSL has advantages over RSA cryptography for both performance and security. This article will generate all the necessary ECC certificates with the secp384r1 curve so that they may be used to configure a LineRate system for SSL offload.

Getting Started with LineRate

In order to appreciate the advantages of SSL/TLS offload available via LineRate as discussed in this article, let's take a closer look at how to configure SSL/TLS offloading on a LineRate system. This example will implement Elliptic Curve Cryptography and Perfect Forward Secrecy. SSL offloading will be added to an existing LineRate system that has one public-facing virtual IP (10.10.11.11) that proxies web requests to a real server on an internal network (10.10.10.1). The following diagram illustrates this configuration:

Figure 1: A high-level implementation of SSL offload

Overall, these steps will be completed in order to enable SSL offloading on the LineRate system:

1. Generate a private key specifying the secp384r1 elliptic curve
2. Obtain a certificate from a CA
3. Configure an SSL profile and attach it to the virtual IP

Note that this implementation will enable only ECDHE cipher suites. ECDH cipher suites are available, but they do not provide the PFS feature. Further, production deployments may need to consider additional types of SSL cryptography in order to allow backward compatibility for older clients.

Generating a Private Key for Elliptic Curve Cryptography

When considering the ECC curve to use for your environment, you may choose one from the currently available curves list in the LineRate documentation. It is important to be cognizant of which curves are supported by the browsers or applications your application targets.
Generally, the NIST P-256, P-384, and P-521 curves have the widest support. This example will use the secp384r1 (NIST P-384) curve, which provides security equivalent to a 7680-bit RSA key. The curves supported by your OpenSSL build can be listed by running the openssl ecparam -list_curves command, which may be important depending on which curve is chosen for your SSL/TLS deployment. Using OpenSSL, a private key is generated for use with ssloffload.lineratesystems.com. The ECC SECP curve over a 384-bit prime field (secp384r1) is specified:

openssl ecparam -genkey -name secp384r1 -out ssloffload.lineratesystems.com.key.pem

This command results in the following private key:

-----BEGIN EC PARAMETERS-----
BgUrgQQAIg==
-----END EC PARAMETERS-----
-----BEGIN EC PRIVATE KEY-----
MIGkAgEBBDD1Kx9hghSGCTujAaqlnU2hs/spEOhfpKY9EO3mYTtDmKqkuJLKtv1P
1/QINzAU7JigBwYFK4EEACKhZANiAASLp1bvf/VJBJn4kgUFundwvBv03Q7c3tlX
kh6Jfdo3lpP2Mf/K09bpt+4RlDKQynajq6qAJ1tJ6Wz79EepLB2U40fC/3OBDFQx
5gSjRp8Y6aq8c+H8gs0RKAL+I0c8xDo=
-----END EC PRIVATE KEY-----

Generating a Certificate Request (CSR) to Provide to the Certificate Authority (CA)

After the private key is obtained, a certificate signing request (CSR) can be created.
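Before creating the CSR, it is worth sanity-checking the freshly generated key. A minimal sketch using OpenSSL (the exact text output varies slightly between OpenSSL versions):

```shell
# Confirm the key is mathematically valid
openssl ec -in ssloffload.lineratesystems.com.key.pem -noout -check

# Print the key details; the "ASN1 OID" line should name secp384r1
openssl ec -in ssloffload.lineratesystems.com.key.pem -noout -text | grep -i oid
```

If either command fails, regenerate the key before proceeding.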
Using OpenSSL again, the following command is issued, filling out all relevant information in the successive prompts:

openssl req -new -key ssloffload.lineratesystems.com.key.pem -out ssloffload.lineratesystems.com.csr.pem

This results in the following CSR:

-----BEGIN CERTIFICATE REQUEST-----
MIIB3jCCAWQCAQAwga8xCzAJBgNVBAYTAlVTMREwDwYDVQQIEwhDb2xvcmFkbzET
MBEGA1UEBxMKTG91aXN2aWxsZTEUMBIGA1UEChMLRjUgTmV0d29ya3MxGTAXBgNV
BAsTEExpbmVSYXRlIFN5c3RlbXMxJzAlBgNVBAMTHnNzbG9mZmxvYWQubGluZXJh
dGVzeXN0ZW1zLmNvbTEeMBwGCSqGSIb3DQEJARYPYS5yYWdvbmVAZjUuY29tMHYw
EAYHKoZIzj0CAQYFK4EEACIDYgAEi6dW73/1SQSZ+JIFBbp3cLwb9N0O3N7ZV5Ie
iX3aN5aT9jH/ytPW6bfuEZQykMp2o6uqgCdbSels+/RHqSwdlONHwv9zgQxUMeYE
o0afGOmqvHPh/ILNESgC/iNHPMQ6oDUwFwYJKoZIhvcNAQkHMQoTCGNpc2NvMTIz
MBoGCSqGSIb3DQEJAjENEwtGNSBOZXR3b3JrczAJBgcqhkjOPQQBA2kAMGYCMQCn
h1NHGzigooYsohQBzf5P5KO3Z0/H24Z7w8nFZ/iGTEHa0+tmtGK/gNGFaSH1ULcC
MQCcFea3plRPm45l2hjsB/CusdNo0DJUPMubLRZ5mgeThS/N6Eb0AHJSjBJlE1fI
a4s=
-----END CERTIFICATE REQUEST-----

Obtaining a Certificate from a Certificate Authority (CA)

Rather than using a self-signed certificate, a test certificate is obtained from Entrust. After completing the certificate request and receiving the certificate from Entrust, a simple conversion to PEM format is needed.
This can be done with the following OpenSSL command:

openssl x509 -inform der -in ssloffload.lineratesystems.com.cer -out ssloffload.lineratesystems.com.cer.pem

This results in the following certificate:

-----BEGIN CERTIFICATE-----
MIIC5jCCAm2gAwIBAgIETUKHWzAKBggqhkjOPQQDAzBtMQswCQYDVQQGEwJVUzEW
MBQGA1UEChMNRW50cnVzdCwgSW5jLjEfMB0GA1UECxMWRm9yIFRlc3QgUHVycG9z
ZXMgT25seTElMCMGA1UEAxMcRW50cnVzdCBFQ0MgRGVtb25zdHJhdGlvbiBDQTAe
Fw0xNDA4MTExODQ3MTZaFw0xNDEwMTAxOTE3MTZaMGkxHzAdBgNVBAsTFkZvciBU
ZXN0IFB1cnBvc2VzIE9ubHkxHTAbBgNVBAsTFFBlcnNvbmEgTm90IFZlcmlmaWVk
MScwJQYDVQQDEx5zc2xvZmZsb2FkLmxpbmVyYXRlc3lzdGVtcy5jb20wdjAQBgcq
hkjOPQIBBgUrgQQAIgNiAASLp1bvf/VJBJn4kgUFundwvBv03Q7c3tlXkh6Jfdo3
lpP2Mf/K09bpt+4RlDKQynajq6qAJ1tJ6Wz79EepLB2U40fC/3OBDFQx5gSjRp8Y
6aq8c+H8gs0RKAL+I0c8xDqjgeEwgd4wDgYDVR0PAQH/BAQDAgeAMB0GA1UdJQQW
MBQGCCsGAQUFBwMBBggrBgEFBQcDAjA3BgNVHR8EMDAuMCygKqAohiZodHRwOi8v
Y3JsLmVudHJ1c3QuY29tL0NSTC9lY2NkZW1vLmNybDApBgNVHREEIjAggh5zc2xv
ZmZsb2FkLmxpbmVyYXRlc3lzdGVtcy5jb20wHwYDVR0jBBgwFoAUJAVL4WSCGvgJ
zPt4eSH6cOaTMuowHQYDVR0OBBYEFESqK6HoSFIYkItcfekqqozX+z++MAkGA1Ud
EwQCMAAwCgYIKoZIzj0EAwMDZwAwZAIwXWvK2++3500EVaPbwvJ39zp2IIQ98f66
/7fgroRGZ2WoKLBzKHRljVd1Gyrl2E3BAjBG9yPQqTNuhPKk8mBSUYEi/CS7Z5xt
dXY/e7ivGEwi65z6iFCWuliHI55iLnXq7OU=
-----END CERTIFICATE-----

Note that the certificate generation process with Elliptic Curve Cryptography is very similar to that of traditional cryptographic algorithms like RSA. The only real difference is in the generation of the private key, where an ECC curve is specified.

Continue the Configuration

Now that the certificates needed for Elliptic Curve Cryptography have been created, it is time to configure SSL offloading on LineRate. Part 2: Configuring SSL Offload on LineRate continues the demonstration of SSL offloading by importing the certificate information generated in this article and getting the system up and running.
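Before importing anything, it is also worth confirming that the certificate returned by the CA actually pairs with the private key generated earlier. A minimal check (file names match the examples above) compares digests of the public key extracted from each:

```shell
# Extract and hash the public key from both the certificate and the
# private key; the two digests must be identical
openssl x509 -in ssloffload.lineratesystems.com.cer.pem -noout -pubkey | openssl md5
openssl ec -in ssloffload.lineratesystems.com.key.pem -pubout | openssl md5
```

A mismatch usually means the CSR was generated from a different key than the one on hand.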
Stay Tuned!

Next week a demonstration on how to verify a correct implementation of SSL with ECC+PFS on LineRate will debut on DevCentral. The article will detail how to check for ECC SSL on the wire via Wireshark and in the browser. In the meantime, take some time to download LineRate and test out its SSL offloading capabilities. In case you missed any content, or would like to reference it again, here are the articles related to implementing SSL offload with ECC and PFS on LineRate:

Why ECC and PFS Matter: SSL offloading with LineRate
Implementing ECC+PFS on LineRate (Part 1/3): Choosing ECC Curves and Preparing SSL Certificates
Implementing ECC+PFS on LineRate (Part 2/3): Configuring SSL Offload on LineRate
Implementing ECC+PFS on LineRate (Part 3/3): Confirming the Operation of SSL Offloading

The New Certificate 2048 My Performance
SSL is a cryptographic protocol used to secure communications over the Internet. SSL ensures secure end-to-end transmission and is implemented in every web browser. It can also be used to secure email, instant messaging, and VoIP sessions. The encryption and decryption of SSL is computationally intensive and can put a strain on server resources like CPU. Currently, most server SSL certificates use a 1024-bit key length, and the National Institute of Standards and Technology (NIST) is recommending a transition to 2048-bit key lengths by January 1st, 2011.

SSL and its successor, TLS (Transport Layer Security), provide the security and encryption necessary for secure communications over the Internet, and particularly for creating an encrypted link between the browser and web server. You will see 'https' in your browser address bar when visiting a site that is SSL enabled. The strength of SSL is tied to the size of the Public Key Infrastructure (PKI) key. Key length or key size (1024-bit, 2048-bit, 4096-bit) is measured in bits and typically used to indicate the strength of the encryption algorithm; the longer the key length, the harder it is to decode. In order to enable an SSL connection, the server needs to have a digital certificate installed. If you have multiple servers, each requiring SSL, then each server must have a digital certificate.

Transactions handled over SSL can require substantial computational power to establish the connection (handshake) and then to encrypt and decrypt the transferred data. If you need the same performance as non-secured data, then additional computing power (CPU) is needed. SSL processing can be up to 5 times more computationally expensive than clear text at the same level of performance, no matter which vendor is providing the hardware. This can have significant, detrimental ramifications for server performance. SSL offload takes much of that computing burden off the servers and places it on dedicated SSL hardware.
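The CPU cost described above is easy to observe first-hand. OpenSSL ships a benchmark mode; the absolute numbers are entirely machine-dependent, but the drop in sign operations per second from 1024-bit to 2048-bit to 4096-bit keys illustrates the scale of the problem:

```shell
# Benchmark RSA private-key (sign) and public-key (verify) operations
# at several key lengths; -seconds shortens each timing run
openssl speed -seconds 1 rsa1024 rsa2048 rsa4096
```

On typical hardware the 2048-bit sign rate is a small fraction of the 1024-bit rate, which is exactly the capacity hit discussed below.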
SSL offload allows organizations to migrate 100% of their communications to SSL for greater security, consolidation of certificates, centralized management, and reduction of cost, and allows for selective content encryption and encrypted cookies, along with the ability to inspect and modify encrypted traffic. SSL offloading can relieve the web server of the processing burden of encrypting and/or decrypting traffic sent via SSL.

Customers, vendors, and the industry as a whole will soon face the challenge of what to do regarding their SSL strategy. Those who have valid 1024-bit certificates need to understand the ramifications of the switch: the next time they go to renew their certificates, they will be forced to buy 2048-bit certificates. This will drastically affect their SSL capacity on both the servers and the load balancer. There is a significant increase in needed computational power going from 1024-bit to 2048-bit, and an exponential drop-off in performance when doubling key sizes, regardless of the platform or vendor. Most CAs, like Entrust, have already stopped issuing 1024-bit certificates, and Verisign will stop doing so in 4-5 months. Since many certificate vendors are now only issuing 2048-bit certificates, customers might not understand the impact on their SSL performance capacity. The overall performance impact of 2048-bit keys on the servers, if you don't offload, will increase significantly. This can be a challenge when you have hundreds of servers providing content.

Existing certificates issued with 1024-bit encryption will not stop working. If you still have valid certificates but need to ensure you are delivering 2048-bit certificates to users (or due to regulatory requirements), one option, as mentioned in Lori's blog, is to install the 2048-bit certificate on your BIG-IP LTM for the offload performance capabilities and then use your existing 1024-bit keys from BIG-IP LTM to the back-end server farm.
Simply import the server certificates directly into BIG-IP. This means that the SSL certificates that would normally go on each server can be centrally stored and managed by LTM, thereby reducing the cost of the certificates needed as well as the cost of any specialized server software/hardware required. This keeps the load off the servers, potentially eliminating any performance issues, and allows you to stay current with NIST guidelines while still providing an end-to-end SSL connection for your web applications. This is a huge advantage over commodity hardware with no SSL offload capabilities.

BIG-IP LTM has specialized SSL chips which are dedicated and optimized for SSL encryption and decryption. These chips provide the ability to maintain performance levels even at longer key lengths, whereas in commodity hardware the computational load of SSL decreases overall system performance, impacting user experience and other server tasks. The F5 SSL Acceleration Module removes all the bottlenecks for secure, wire-speed processing, including concurrent users, bulk throughput, and new transactions per second, along with supporting certificates up to 4096 bits. The fully loaded F5 VIPRION chassis is the most powerful SSL-offloading engine on the market today and, along with the BIG-IP LTM Virtual Edition (VE), provides a powerful solution to the SSL challenge. By front-ending BIG-IP VE farms with a VIPRION, you can assign load balancing or SSL offloading to a dedicated ADC.

The same approach can remedy access to legacy systems that might not support 2048-bit certificates or cannot be upgraded due to business restrictions or other rationale. By deploying an F5 BIG-IP device with a 2048-bit certificate in front of the legacy systems, back-end encryption can be accomplished using existing 1024-bit certificates.
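When planning a migration like this, it helps to audit which key lengths are actually deployed today. A quick sketch with OpenSSL (the file name below is a placeholder for whichever certificate you are inspecting):

```shell
# Report the public key length of an existing certificate;
# look for "Public-Key: (1024 bit)" vs "(2048 bit)"
openssl x509 -in server.cer.pem -noout -text | grep "Public-Key"
```

Running this across your certificate inventory shows at a glance which servers will be affected at renewal time.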
F5 does support 4096-bit keys, future-proofing support for longer keys down the road, and offers backward and forward compatibility, but unless there is a strong business case, 2048-bit keys are recommended for optimal performance and protection.

ps

Load Balancing For Developers: Security and TCP Optimizations
It has been a while since I wrote a Load Balancing for Developers installment, and since they're pretty popular and there's still a lot about Application Delivery Controllers (ADCs) that is taken for granted in the networking industry but relatively unknown in the development world, I thought I'd throw one out about making your security more resilient with ADCs.

For those who are just joining this series, here's the full list of posts I've tagged as Load Balancing for Developers, though only the ones whose title starts with "Load Balancing for Developers" or "Advanced Load Balancing for Developers" were actually written from this perspective, utilizing our fictional web application Zap'N'Go! as an example. This post, like most of them, doesn't require that you read the other entries in the series, but if you're interested in the topic, they are all written from the developer's perspective, and only bring in the networking/ops portions where it makes sense.

So your organization has a truly successful web application called Zap'N'Go! that has taken the Internet by storm. Your hits are in the thousands an hour, and orders are rolling in. All was going well until your server couldn't keep up and you went to a load-balanced scenario so that multiple servers could share the load. The problem is that with the money you've generated off of Zap'N'Go!, you've bought a competitor and started several new web applications, set up a forum or portal for your customers to communicate with you and each other directly, and are using the old datacenter from the company you purchased as a redundant datacenter in case the worst should happen. And all of that means that you are suffering server (and VM) sprawl. The CPU cycles being eaten up by your applications are truly astounding, and you're looking into ways to drive them down.
Virtualization helped you to be more agile in responding to the requests of the business, but it also brings a lot of management overhead in making certain servers aren't overloaded with too high a virtual density. One of the cool bits about an ADC is that it does a lot more than load balance, and much of that can be utilized to improve application performance without re-architecting the entire system. While there are a lot of ways that an ADC can improve application performance, we'll look at a couple of easy ones here, and leave some of the more difficult or involved ones for another time. That keeps me in writing topics, and makes certain that I can give each one the attention it deserves in the space available.

The biggest and most obvious improvement in an ADC is of course load balancing. This blog assumes you already have an ADC in place, and load balancing was your primary reason for purchasing it. While I don't have market numbers in front of me, it is my experience that this is true of the vast majority of ADC customers. If you have overburdened web applications and have not looked into load balancing, before you go rewriting your entire system, take a look at the rest of this series. There really are options out there to help.

After that win, I think the biggest place – in a virtualized environment – that developers can reap benefits from an ADC is one that developers wouldn't normally think of. That's the reason for this series, so I suppose that would be a good thing. Nearly every application out there hits a point where SSL is enabled. That point may be simply the act of accessing it, or it may be when users go to the "shopping cart" section of the web site, but they all use SSL to protect sensitive user data being passed over the Internet. As a developer, you don't have to care too much about this fact. Pay attention to the protocol if you're writing at that level and to the ports if you have reason to, but beyond that you don't have to care.
Networking takes care of all of that for you. But what if you could put in a request to your networking group that would greatly improve performance without changing a thing in your code, and from a security perspective wouldn't change much – most companies would see it as not changing anything, while a few will want to talk about it first? What if you could make this change over lunch and users wouldn't know the difference?

Here's the background. SSL encryption is expensive in terms of CPU cycles. No doubt you know that; most developers have to face this issue head-on at some point. It takes a lot of power to do encryption, and while commodity hardware is now fast enough that it isn't a problem on a stand-alone server, in a VM environment the number of applications requesting SSL encryption on the same physical hardware is many times what it once was. That creates a burden that, at this time at least, often drags on the hardware. It's not the fault of any one application or a rogue programmer; it is the summation of the burdens placed by each application requiring SSL encryption.

One solution to this problem is to try to manage VM deployment such that encryption is only required by a couple of applications per physical server, but this is not a very appealing long-term solution as loads shift and priorities change. From a developer's point of view, do you trust the systems/network teams to guarantee your application is not sharing hardware with a zillion applications that all require SSL encryption? Over time, this is not going to be their number one priority, and when performance troubles crop up, the first place that everyone looks in an in-house developed app is at the development team. We could argue whether that's the right starting point or not, but it certainly is where we start.

Another, more generic solution is to take advantage of a non-development feature of your ADC. This feature is SSL termination.
Since the ADC sits between your application and the Internet, you can tell your ADC to handle encryption for your application, and then not worry about it again. If your network team sets this up for all of your applications, then you have no worries that SSL is burning up your CPU cycles behind your back.

Is there a negative? A minor one that most organizations (as noted above) just won't see as an issue: from the ADC to your application, communications will happen in the clear. If your application is internal, this really isn't a big deal at all. If you suspect a bad guy on your internal network, you have much more to worry about than whether communications between two boxes are in the clear. If your application is in the cloud, this concern is more realistic, but in that case SSL termination is limited in usefulness anyway because you can't know whether the other apps on the same hardware are utilizing it.

So you simply flick a switch on your ADC to turn on SSL termination, and then turn it off on your applications, and you have what the ADC industry calls "SSL offload". If your ADC is purpose-built hardware (like our BIG-IP), then there is encryption hardware in the box and you don't have to worry about the impact to the ADC of overloading it with SSL requests; it's built to handle the load. If your ADC is software or a VM (like our BIG-IP LTM VE), then you'll have to do a bit of testing to see what the tolerance level for SSL load is on the hardware you deployed it on – but you can ask the network staff to worry about all of that, once you've started the conversation.

Is this the only security-based performance boost you can get? No, but it is the easy one. Everything on the Internet remains encrypted, but your application is not burdening the server's CPU with encryption requests each time communications in or out occur. The other easy one is TCP optimizations. This one requires less talk because it is completely out of the realm of the developer.
Simply put, TCP is a well-designed protocol that sometimes gets bogged down communicating and has a lot of overhead in those situations. Turning on TCP optimizations in your ADC can reduce the overhead – more or less, depending upon what is on the other end of the communications network – and improve perceived performance, which honestly is one of the most important measures of web application availability. By making it seem to load faster, you've improved your customer experience, and nothing about your development has to change. TCP optimizations are not new, and thus the ones that are turned on when you activate the option on most ADCs are stable and won't disrupt most applications. Of course you should run a short test cycle with them enabled, just to be certain, but I would be surprised if you saw any issues. They're not unheard of, but they are very rare.

That's enough for now, I think. I don't want these to get so long that you wander off to develop some more. Keep doing what you do. And strive to keep your users from doing this. Slow apps anger users.

The Need For Speed. SSDs in the Enterprise.
#F5Friday SSDs speed more than just disk I/O on your servers.

If you're one of those geeks (or gamers) that squeezes every last ounce of performance out of their personal computing equipment, then you're well aware that the performance of Solid State Drives (SSDs) is far and away better than the performance of traditional Hard Disk Drives (HDDs). Simply put, because the disk does not have to spin up, the arm does not have to seek, and the head doesn't have to wait for the correct sector to pass under it, SSDs are faster. The ability to simply look up a storage location, in a manner very similar to how your computer looks in RAM, and return values through a conventional hard disk interface means they will likely always be faster than HDD technology.

They're more expensive though, and the bigger the drive, the bigger the cost difference per gigabyte. And even though SSDs have gone down in price since their introduction, HDDs have too, maintaining the disparity in prices. But that doesn't mean they're unobtainable, or that it significantly limits their uses where the enterprise is concerned. For systems that truly need SSDs, the cost differential is warranted. If you have the cache on your database reading/writing from SSD, for example, your database performance will go up significantly, making the ROI worthwhile for many organizations. And the same is very much true for other caching environments that require high-speed throughput. The ability to write out to disk at two or three times the rate of an HDD can greatly improve performance of high-throughput systems.

SSD and HDD, courtesy of MSystems and Wikipedia

That is why we at F5 recently introduced an SSD option for our new F5 BIG-IP 11000 platform. With the optional SSD drives, you can speed processing for such disk-intensive operations as encryption, compression, and, if you have BIG-IP WOM installed, de-duplication.
These processes are commonly offloaded to BIG-IP systems for the purpose of lightening the load on servers, and now SSDs can speed the processing on the BIG-IP. To be sure, many organizations don't need SSD drives; that's why they are optional in our configurations. Should your organization be one of those that does, however, now you have a solution.

By speeding these processes – which occur in-line during transport – you speed overall communications on whatever network you are utilizing, be that a WAN replication scenario or an internal LAN web services request. And that's important. When you are shipping information over the public Internet, encrypting it on the way out preserves server CPU cycles for the application, and SSDs stretch just how much can be offloaded, because performance is increased. If your network is overloaded, having compression and/or de-duplication is also a major bonus, but only if the device doing the work is fast enough to keep up. For those organizations with so much throughput that encryption, compression, and/or de-duplication are causing unwanted latency, SSDs in their BIG-IP are the answer.

Another solution from our broad selection of tools, all aimed at helping you deliver solid solutions to meet the needs of your business, and keeping the network secure, fast, and available.

How May I Speed and Secure Replication? Let Me Count the Ways.
Related Articles and Blogs:

SMB Storage Replication: Acceleration VS. More Bandwidth
Three Top Tips for Successful Business Continuity Planning
Data Replication for Backup Best Practices
Like a Matrushka, WAN Optimization is Nested
Load Balancing for Developers – ADC WAN Optimization Functionality