Quantifying Reputation Loss From a Breach
#infosec #security

Putting a value on reputation is not as hard as you might think…

It's really easy to quantify some of the costs associated with a security breach: number of customers impacted, times the cost of a first-class stamp, plus the cost of a sheet of paper, plus the cost of ink, divided by … you get the picture. Some of the costs are easier than others to calculate. Some are not, and others appear downright impossible. One of the "costs" often cited but rarely quantified is the cost to an organization's reputation. How does one calculate that?

Well, if folks sat down with the business people more often (the ones who live on the other side of the Myers-Briggs Mountain) we'd find it's not really as difficult to calculate as one might think. While IT folks analyze flows and packet traces, business folks analyze market trends and impacts – such as those arising from poor customer service. And if a breach of security isn't interpreted by the general populace as "poor customer service" then I'm not sure what is. While traditionally customer service is about how one treats the customer, increasingly that's expanding to include how one treats the customer's data. And that means security.

This question – "how much does it really cost?" – is one Jeremiah Grossman asks fairly directly in a recent blog, "Indirect Hard Losses":

"As stated by InformationWeek regarding a Ponemon Institute study on the Cost of a Data Breach, 'Customers, it seems, lose faith in organizations that can't keep data safe and take their business elsewhere.' The next logical question is how much?"

Jeremiah goes on to focus on revenue lost from web transactions after a breach, and that's certainly part of the calculation, but what about those losses that might have been but now will never be? How can we measure not only the loss of revenue (meaning a decrease in first-order customers) but the potential loss of revenue? That's harder, but just as important, as it more accurately represents the "reputation loss" often mentioned in passing but never assigned a concrete value (at least not publicly; some industries discreetly share such data with trusted members of the same industry, but seeing these numbers in the wild? Good luck!).

HERE COMES THE ALMOST SCIENCE

"20% of the businesses that lost data lost customers as a direct result. The impacts were most severe for companies with more than 100 employees. Almost half of them lost sales." – Rubicon Survey

One of the first things we have to calculate is influence, as that directly impacts reputation. It is the ability of even a single customer to influence a given number of others (negatively or positively) that makes up reputation. It's word of mouth, what people say about you, after all. If we turn to studies that focus more on marketing and sales and businessy things, we can find a lot of this data. It's a well-studied area. One study [1] indicates that a single dissatisfied customer will tell approximately 8-16 people. Each of those people has a circle of influence of about 250, with 25 of those being within an organization's primary target audience. Of all those told, 2% (1 in 50) will defect or avoid an organization upon hearing of the victim's dissatisfaction. That works out to a reach of roughly 2,000-4,000 people per dissatisfied customer and, at a 2% defection rate, a reputation impact of anywhere from 40-80 customers lost, existing and future, for every angry customer. So much for thinking 100 records stolen in a breach is small potatoes, eh? Thousands of existing and potential customers lost is nothing to sneeze at.
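To make the arithmetic above explicit, here is a minimal sketch in Python. The reach and defection figures come straight from the study cited above; the 100-record breach is just an example input.

```python
# Influence-based customer loss, using the figures cited above: each victim
# tells 8-16 people, each of whom has a circle of influence of roughly 250,
# and 2% of those reached defect or avoid the organization.

def influenced_losses(dissatisfied_customers, told_low=8, told_high=16,
                      circle_of_influence=250, defection_rate=0.02):
    """Return the (low, high) range of customers lost to word of mouth."""
    reach_low = told_low * circle_of_influence    # ~2,000 people reached
    reach_high = told_high * circle_of_influence  # ~4,000 people reached
    low = dissatisfied_customers * reach_low * defection_rate
    high = dissatisfied_customers * reach_high * defection_rate
    return low, high

# A breach affecting 100 customers:
low, high = influenced_losses(100)
print(f"Influenced customer loss: {low:,.0f} to {high:,.0f}")  # 4,000 to 8,000
```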
Now, here's where it gets a little harder, because you're going to have to talk to the businessy folks to get some values to attach to those losses. See, there are two numbers you still need: customer lifetime value (CLV) and the cost to replace a customer (which is higher than the cost to acquire a customer, but don't ask me why, I'm not a businessy folk).

Customer values are highly dependent upon industry. For example, based on 2010 FDIC data, the industry average annual customer value for a banking customer is $209 [2]. Facebook's annual revenue per user (ARPU) is estimated at $2.00 [3]. Estimates claim Google makes $9.85 annually off each Android user [4]. And Zynga's ARPU is estimated at $3.96 (based on a reported $0.33 monthly per-user revenue) [5]. This is why you actually have to talk to the businessy guys: they know what these values are, and you'll need them to plug into the influence calculation to come up with an at-least-it's-closer-than-guessing value. You also need to ask what the average customer lifetime is, so you can calculate the loss from dissatisfied and defecting customers.

Then you just need to start plugging in the numbers. Remember, too, that it's a model; an estimate. It's not a perfect valuation system, but it should give you some kind of idea of what the reputational impact from a breach would be, which is more than most folks have today. Even if you can't obtain the cost-to-replace value, try the model without it. Try a small breach, just for fun, say of 100 records. Let's use $4.00 as an annual customer value and a lifetime of ten years as an example.

Affected customer loss: 100 × ($4 × 10) = $4,000
Influenced customer loss: (100 × 40 influenced customers) × ($4 × 10) = 4,000 × $40 = $160,000
Total reputation cost: $164,000

Adding in the cost to replace can only make this larger, and serves very little purpose except to show that even what many consider a relatively small breach (in terms of records lost) can be costly.

WHY IS THIS VALUABLE?

The reason this is valuable is two-fold. First, it serves as the basis for a very logical and highly motivating business case for security solutions designed to prevent breaches. The problem with much of security is that it's intangible and incalculable; it is harder to put a monetary value on risk than it is to put a monetary value on solutions. Thus, the ability to perform a cost-benefit analysis based in part on "reputation loss" is difficult for security professionals and IT in general. The business needs to be able to justify investments, and to do that it needs hard numbers it can balance against. It is the security professionals who are so often called upon to explain the "risk" of a breach and loss of data to the business. Providing them with tangible data based on accepted business metrics and behavior offers a more concrete view of the costs – in money – of a breach. That gives IT the leverage, the justification, for investing in solutions such as web application firewalls and vulnerability scanning services that are designed to detect and ultimately prevent such breaches from occurring. It gives infosec some firm ground upon which to stand and talk in terms the business understands: dollar signs.
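To make the example calculation reproducible, here is a minimal sketch of the model in Python. The 40-customer influence multiplier is the low end of the range derived earlier; the customer value, lifetime, and record count are the example figures used above.

```python
def reputation_cost(records_breached, annual_customer_value, customer_lifetime_years,
                    influence_multiplier=40, cost_to_replace=0.0):
    """Estimate the reputation cost of a breach using the model described above."""
    lifetime_value = annual_customer_value * customer_lifetime_years
    affected_loss = records_breached * lifetime_value
    influenced_customers = records_breached * influence_multiplier
    influenced_loss = influenced_customers * lifetime_value
    replacement_cost = records_breached * cost_to_replace  # optional; often unavailable
    return affected_loss + influenced_loss + replacement_cost

# The example from the text: 100 records, $4 annual value, ten-year lifetime.
print(reputation_cost(100, 4.00, 10))  # 4,000 + 160,000 = 164,000
```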
[1] Putting a Price Tag on a Lost Customer
[2] Free Checking and Debit Incentives Post-Durbin
[3] Facebook's Annual Revenue Per User
[4] Each Android User Will Make Google $9.85 per Year in 2012
[5] Zynga Doubled ARPU From Last Year Even as Facebook Platform Changes Slowed Growth

Beware Using Internal Encryption as an IT Security Blanket
It certainly sounds reasonable: networks are moving toward a perimeter-less model, so the line between internal and external network is blurring. The introduction of cloud computing as overdraft protection (cloud-bursting) further blurs that perimeter, such that it's more a suggestion than a rule. That makes the idea of encrypting everything, whether it's on the internal or external network, seem a reasonable one. Or does it?

THE IMPACT ON OPERATIONS

A recent post posits that PCI Standard or Not, Encrypting Internal Network Traffic is a Good Thing. The arguments are valid, but there is a catch (there's always a catch). Consider this nugget from the article:

"Bottom line is everyone with confidential data to protect should enable encryption on all internal networks with access to that data. In addition, layer 2 security features should be enabled on the access switches carrying said data. Be sure to unencrypt your data streams before sending them to IPS, DLP, and other deep packet inspection devices. This is easy to say but in many cases harder to implement in practice. If you run into any issues feel free to post them here. I realize this is a controversial topic for security geeks (like myself) but given recent PCI breaches that took advantage of the above weaknesses, I have to error on the side of security. Sure more security doesn't always mean better security, but smarter security always equals better security, which I believe is the case here." [emphasis added]

It is the reminder to decrypt data streams before sending them to IPS, DLP, and other "deep packet inspection devices" that brings to light one of the issues with such a decision: complexity of operations and management. It isn't just the additional latency inherent in decrypting secured data streams – required for a large number of the devices in an architecture to perform their tasks – that's the problem, though that is certainly a concern. The larger problem is the operational inefficiency that comes from decrypting secured data at multiple points in the architecture.

See, there's this little thing called "keys" that has to be shared with every device in the data center that will decrypt data, and that means managing each of those key stores in its own right. Keys are the, well, key to the kingdom of data encryption, and if they are lost or stolen it can be disastrous to the security of all affected systems and applications. In better securing data in flight by encrypting all data on the internal network, an additional layer of insecurity is introduced that must be managed.

But let's pretend this additional security issue doesn't exist, that all systems on which these keys are stored are secure (ha!). Operations must still (a) configure every inline device to decrypt and re-encrypt the data stream and (b) manage the keys/certificates on every inline device. That's in addition to managing the keys/certificates on every endpoint for which data is destined. There's also the possibility that intermediate devices that must receive decrypted data – often fed via spanned/mirrored ports on a switch/router – will require a re-architecting of the network in order to implement such an architecture. Not only must each device be configured to decrypt and re-encrypt data streams, it must be configured to do so for every application that utilizes encryption on the internal network.
For an organization with only one or two applications this might not be so onerous a task, but for organizations using multiple applications, domains, and thus keys/certificates, deploying all those keys/certificates, configuring each device, and then managing them through the application lifecycle can certainly be a time-consuming process. This isn't a linear mathematics problem: for every key or certificate added, the cost of managing that information increases by the number of devices that must be in possession of that key/certificate.

INTERNAL ENCRYPTION CAN HIDE REAL SECURITY ISSUES

The real problem, as evinced by recent breaches of payment card processing vendors like Heartland Payment Systems, is not that data was or was not encrypted on the internal network, but that the systems through which that data was flowing were not secured. Attackers gained access through the systems, the ones we are pretending are secure for the sake of argument. Obviously, pretending they are secure is not a wise course of action. One cannot capture and sniff out unsecured data on an internal network without first being on the internal network. This is a very important point, so let me say it again: one cannot capture and sniff out unsecured data on an internal network without first being on the internal network. It would seem, then, that the larger issue here is the security of the systems and devices through which sensitive data must travel, and that encryption is really just a means of last resort for data traversing the internal network. Internal encryption is often a band-aid that merely covers up the real problem of insecure systems and poorly implemented security policies.

Granted, in many industries internal encryption is a requirement and must be utilized, but those industries also accept and grant IT the understanding that costs will be higher in order to implement such an architecture. The additional costs are built into the business model already. That's not necessarily true for most organizations, where operational efficiency is now just as high a priority as any other IT initiative.

The implementation of encryption on internal networks can also lead to a false sense of security. It is important to remember that encrypted tainted data is still tainted data; it is merely hidden from security systems, which are passive in nature unless the network is architected (or re-architected) such that the data is decrypted before being channeled through those solutions. Encryption hides data from prying eyes; it does nothing to ensure the legitimacy of the data. Simply initiating a policy of "all data on all networks must be secured via encryption" does not make an organization more secure, and in fact it may lead to a less secure organization as it becomes more difficult and costly to implement security solutions designed to dig deeper into the data and ensure it is legitimate traffic free of taint or malicious intent.

"Bottom line is everyone with confidential data to protect should enable encryption on all internal networks with access to that data."

The "bottom line" is that everyone with confidential data to protect – which is just about every IT organization out there – needs to understand the ramifications of enabling encryption across the internal network, both technically and from a cost/management perspective. Encryption of data on internal networks is not a bad thing to do at all, but it is also not a panacea.
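To put a rough shape on that management overhead, here is a minimal sketch; the application, device, and rotation counts are hypothetical, chosen only to show how the touchpoints multiply.

```python
# Rough model of key-management touchpoints when every inline device must
# decrypt and re-encrypt every application's traffic. All figures hypothetical.

def key_management_touchpoints(applications, inline_devices, rotations_per_year):
    """Each app's key/cert must be deployed to every inline device, and
    re-deployed on every rotation or renewal."""
    deployments = applications * inline_devices
    yearly_updates = deployments * rotations_per_year
    return deployments, yearly_updates

deployments, updates = key_management_touchpoints(applications=20,
                                                  inline_devices=6,
                                                  rotations_per_year=2)
print(f"{deployments} key deployments to maintain, ~{updates} updates per year")
# 120 key deployments to maintain, ~240 updates per year
```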
The benefits of implementing internal encryption need to be weighed against the costs and balanced against risk, not simply tossed blithely over the network like a security blanket.

PCI Standard or Not, Encrypting Internal Network Traffic is a Good Thing
The Real Meaning of Cloud Security Revealed
The Unpossible Task of Eliminating Risk
Damned if you do, damned if you don't
The IT Security Flowchart

Dear Slashdot: You get what you pay for
Open Source SSL Accelerator solution not as cost effective or well-performing as you think

o3 Magazine has a write-up on building an SSL accelerator out of Open Source components. It's a compelling piece, to be sure, that was picked up by Slashdot and discussed extensively. If o3 had stuck to its original goal – building an SSL accelerator on the cheap – it might have had better luck making its arguments. But it wanted to compare an Open Source solution to a commercial solution. That makes sense; the author was trying to show value in Open Source and that you don't need to shell out big bucks to achieve similar functionality. The problem is that there are very few – if any – commercial SSL accelerators on the market today. SSL acceleration has long been subsumed by load balancers/application delivery controllers, and therefore a direct comparison between o3's Open Source solution and any commercially available solution would have been irrelevant; comparing apples to chicken is a pretty useless thing to do. To the author's credit, he recognized this and therefore offered a complete Open Source solution that would more fairly be compared to existing commercial load balancers/application delivery controllers; specifically, he chose the BIG-IP 6900. The hardware platform was chosen, I assume, based on the SSL TPS rates to ensure a fairer comparison. Here's the author's description of the "full" Open Source solution:

"The Open Source SSL Accelerator requires a dedicated server running Linux. Which Linux distribution does not matter, Ubuntu Server works just as well as CentOS or Fedora Core. A multi-core or multi-processor system is highly recommended, with an emphasis on processing power and to a lesser degree RAM. This would be a good opportunity to leverage new hardware options such as Solid State Drives for added performance. The only software requirement is Nginx (Engine-X) which is an Open Source web server project. Nginx is designed to handle a large number of transactions per second, and has very well designed I/O subsystem code, which is what gives it a serious advantage over other options such as Lighttpd and Apache. The solution can be extended by combining a balancer such as HAproxy and a cache solution such as Varnish. These could be placed on the Accelerator in the path between the Nginx and the back-end web servers."

o3 specs out this solution as running around $5,000, which is less than 10% of the listed cost of a BIG-IP 6900. On the surface, this seems to be quite the deal. Why would you ever purchase a BIG-IP, or any other commercial load balancer/application delivery controller, based on the features/price comparison offered? Turns out there are quite a few reasons; reasons that were completely ignored by the author.

CHAINING PROXIES vs INTEGRATED SOLUTIONS

While the moving parts cited by the author (Nginx, Apache, HAproxy, Varnish) are all individually fine solutions, he suggests combining them to assemble a more complete application delivery solution that provides caching, Layer 7 inspection and transformation, and other advanced functionality. Indeed, combining these solutions does provide a deployment that is closer to the features offered by a commercial application delivery controller such as BIG-IP. Unfortunately, none of these Open Source components are integrated.
This necessitates an architecture based on chaining of proxies, whether they are deployed on the same hardware (as suggested by the author) or on separate devices; in path, of course, but physically separated. Chaining proxies incurs latency at every point in the process. If you chain proxies, you are going to incur latency and operational cost in the form of:

TCP connection setup and teardown processing
Inspection of application data (layer 7 inspection is rarely computationally inexpensive)
Execution of functionality (caching, security, acceleration, etc.)
Transfer of data between proxies (when deployed on the same device this is minimized)
Multiple log files

This network sprawl degrades response time by adding latency at every hop and actually defeats the purposes for which the components were deployed. The gains in performance achieved by offloading SSL to Nginx are almost immediately lost when multiple proxies are chained in order to provide the functionality required to match a commercial application delivery controller. A chained proxy solution adds complexity, obscures visibility (which impacts the ability to troubleshoot), and makes audit paths more difficult to follow.

Aggregated logging is never mentioned, but it is a serious consideration, especially where regulatory compliance enters the picture. The issue of multiple log files is one that has long plagued IT departments everywhere, as they often require manual aggregation and correlation – which incurs time and costs. A third-party solution is often required to support troubleshooting and transactional monitoring, which incurs additional costs in the form of acquisition, maintenance, and management not considered by the author.

Soft costs, too, are ignored by the author. The multiple Open Source intermediaries required to match a commercial solution must be configured individually, often by manually editing configuration files. Commercial solutions – and specifically BIG-IP – reduce the time and effort required to configure such solutions by offering myriad options for management: standards-based API, scripting, command line, GUI, application templates and wizards, a central management system, and integration with other standard data center management systems.

COMPRESSION SHOULD NEVER BE A BINARY CONFIGURATION

The author correctly identifies that offloading compression duties from back-end servers to an intermediary can result in improved performance of the application and greater efficiency of the servers. Nginx supports industry-standard gzip compression. The problem with this – and there is a problem – is that it is not always beneficial to apply compression. Years of extensive experience and testing show that the use of compression can actually degrade performance. Factors such as the size of the application payload, the type of content, and the speed of the network on which the application data will be transferred should all be considered when making the decision to compress or not compress. This intelligence, this context-awareness, is not offered by this Open Source solution. o3's solution is on or off, with nothing in between. In situations where images are being delivered over a LAN, for example, compression will not provide any significant performance benefit and in fact will likely degrade performance. Certainly Nginx could be configured to ignore images, but this does not solve the problem of the inherent uselessness of trying to compress content traversing a LAN and/or under a specific length.
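As an illustration of the kind of context-aware decision described above, here is a minimal sketch. This is not any particular product's logic; the thresholds and content types are hypothetical.

```python
# Hypothetical context-aware compression decision: compress only when content
# type, payload size, and network path suggest a real benefit.

ALREADY_COMPRESSED = {"image/jpeg", "image/png", "image/gif", "video/mp4",
                      "application/zip", "application/gzip"}

def should_compress(content_type, payload_bytes, client_on_lan, min_size=1400):
    """Return True only if gzip is likely to help for this response."""
    if client_on_lan:                        # LAN bandwidth dwarfs the CPU cost of gzip
        return False
    if payload_bytes < min_size:             # smaller than roughly one MTU: not worth it
        return False
    if content_type in ALREADY_COMPRESSED:   # recompressing gains little or nothing
        return False
    return True

print(should_compress("text/html", 48_000, client_on_lan=False))    # True
print(should_compress("image/jpeg", 250_000, client_on_lan=False))  # False
print(should_compress("text/html", 48_000, client_on_lan=True))     # False
```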
SECURITY

Another overlooked item is security. Not just application security, but full TCP/IP stack security. The Open Source solution could easily add mod_security to the list to achieve parity with the application security features available in commercial solutions. That does not, however, address the underlying stack security. The author suggests running on any standard Linux platform. To be sure, anyone building such a solution for deployment in a production environment will harden the base OS, potentially using SELinux to further lock down the system. No need to argue about this; it's assumed good administrators will harden such solutions. But what will not be done – and can't be done – is securing the system against network and application attacks: simple DoS, ARP poisoning, SYN floods, cookie tampering. The list of potential attacks against a system designed to sit in front of web and application servers is far longer than this, but even these commonly seen attacks will not be addressed by o3's Open Source solution. By comparison, mitigating these types of attacks is part and parcel of BIG-IP; no additional modules or functionality necessary.

Furthermore, the performance numbers provided by o3 for their solution seem to indicate that testing was accomplished using 512-bit key certificates. A single Opteron core can only process around 1,500 1024-bit RSA operations per second, which means an 8-core CPU could only perform approximately 12,000 1024-bit RSA ops per second – assuming that's all it was doing. 512-bit keys run around five times faster than 1024-bit. The author states: "The system had no problems handling over 26,590 TPS," a figure well beyond what 1024-bit operations could deliver on that hardware but within reach of 512-bit keys – which seems to indicate it was not using the industry-standard 1024-bit key. In fact, 512-bit key certificates are no longer supported by most CAs due to their weak key strength. Needless to say, if the testing used to determine the SSL TPS for BIG-IP were to use 512-bit keys, you'd see a marked increase in the number of SSL TPS in the data sheet.

YOU GET WHAT YOU PAY FOR

Look, o3 has put together a fairly cool and cheap solution that accomplishes many of the same tasks as a commercial application delivery controller. That's not the point. The point is that trying to compare a robust, integrated application delivery solution with a cobbled-together set of components designed to mimic similar functionality is silly. Not only that, the logic that claims it is more cost efficient is flawed. Is the o3 solution cheaper? Sure – as long as we look only at acquisition. If we look at the cost to application performance, to maintain the solution, to troubleshoot it, and to manage it, then no, no it isn't. You're trading immediate CAPEX cost savings for long-term OPEX cost outlays. And as is always the case, in every market, you get what you pay for. A $5,000 car isn't going to last as long or perform as well as the $50,000 car, and it isn't going to come with warranties and support, either. It will do what you want, at least for a while, but you're on your own when you take the cheap route. That said, you are welcome to do so. It is your data center, after all. Just be aware of what you're sacrificing and the potential issues with choosing the road less expensive.

Application Acceleration: To compress or not compress
Open Source SSL Accelerator
IT @ AnandTech: Intel Woodcrest, AMD's Opteron and Sun's UltraSparc T1: Server CPU Shoot-out

If I Had a Hammer…
Or Why Carr's Analogy is Wrong. Again.

Nicholas Carr envisioned compute resources being delivered in a manner similar to electricity. Though providers and consumers alike use that terminology to describe cloud computing billing and metering models, the reality is that we've just moved from a monthly server hosting model to a more granular hourly one, and the delivery model has not changed in any way as we've moved to this more "on-demand" model of IT resources. There's very little difference between choosing amongst a list of virtual "servers" and a list of physical "servers" with varying memory capacity and compute power. Instead of choosing "Brand X Server with a specific memory and CPU spec," you're choosing "generic image with a specific memory and CPU spec." You are still provisioning based on a concrete set of resources, though arguably the virtual kind can be much more easily modified than its physical predecessors. Still, you are provisioning – and ultimately paying for – a defined set of resources, and you're doing so every hour it remains active. You may provision the smallest amount of resources possible as a means to better perform capacity planning and keep costs lower, but you're still paying for unused resources no matter how you slice it (pun intended).
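To illustrate the gap between the electricity-style metering Carr envisioned and the instance-hour billing we actually have, here is a minimal sketch; the hourly rate and utilization figures are hypothetical, for illustration only.

```python
# Hypothetical comparison: paying for a provisioned instance by the hour
# versus paying only for the capacity actually consumed, electricity-style.

HOURS_PER_MONTH = 730

def instance_hour_bill(hourly_rate, hours_running=HOURS_PER_MONTH):
    """You pay for the whole instance every hour it is active, used or not."""
    return hourly_rate * hours_running

def metered_bill(hourly_rate, avg_utilization, hours_running=HOURS_PER_MONTH):
    """What an electricity-style model would charge: only the share consumed."""
    return hourly_rate * avg_utilization * hours_running

rate = 0.10          # $/hour for the provisioned instance (hypothetical)
utilization = 0.15   # average 15% of provisioned capacity actually used

print(f"Instance-hour bill: ${instance_hour_bill(rate):.2f}")         # $73.00
print(f"Usage-metered bill: ${metered_bill(rate, utilization):.2f}")  # $10.95
```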
Development Performance Metrics Will Eventually Favor Cost per Line of Code

It is true right now that, for the most part, virtualization changes the deployment of applications but not their development. Thus far this remains true, primarily because those with an interest in organizations moving to public cloud computing have reason to make it "easy" and painless, which means no changes to applications. But eventually there will be changes that are required, if not from cloud providers then from the organization that pays the bills.

One of the most often cited truisms of development is actually more of a lament on the part of systems administrators. The basic premise is that while Moore's Law holds true, it really doesn't matter, because developers' software will simply use all available CPU cycles and every bit and byte of memory. Basically, the belief is that developers don't care about writing efficient code because they don't have to – they have all the memory and CPU in the world to execute their applications. Virtualization hasn't changed that at all, as instances are simply sized for what the application needs (which is a lot, generally). It doesn't work the other way around. Yet. But it will, eventually, as customers demand – and receive – a true pay-per-use cloud computing model.

The premise of pay-for-what-you-use is a sound one, and it is indeed a compelling reason to move to public cloud computing. Remember that according to IDC analysts at Directions 2010, the primary driver for adopting cloud computing is all about "pay per use," with "monthly payments" also in the top four reasons to adopt cloud. Luckily for developers, cloud computing providers for the most part do not bill "per use"; they bill "per virtual machine instance."
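As a sketch of why code efficiency starts to show up in the bill once pricing tracks actual work rather than instance hours, consider the following; the throughput and rate figures are hypothetical.

```python
# Hypothetical: under a true pay-per-use model, inefficient code shows up
# directly as a higher cost per unit of work.

def cost_per_million_requests(instance_hourly_rate, requests_per_second):
    """Cost of serving one million requests at a given sustained throughput."""
    seconds_needed = 1_000_000 / requests_per_second
    hours_needed = seconds_needed / 3600
    return instance_hourly_rate * hours_needed

rate = 0.10  # $/hour (hypothetical)
print(f"Efficient code (500 req/s):  ${cost_per_million_requests(rate, 500):.4f}")
print(f"Inefficient code (50 req/s): ${cost_per_million_requests(rate, 50):.4f}")
# The slower implementation costs ten times as much per million requests.
```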