geolocation
8 Topics

The BIG-IP Application Security Manager Part 7: Geolocation
This is the seventh article in a 10-part series on the BIG-IP Application Security Manager (ASM). The first six articles in this series are:

What is the BIG-IP ASM?
Policy Building
The Importance of File Types, Parameters, and URLs
Attack Signatures
XML Security
IP Address Intelligence and Whitelisting

The ASM can do lots of great things to protect your application, not the least of which are the Geolocation features it offers. Geolocation enforcement allows you to configure which countries can access your web application. The ASM matches the client's IP address to its physical location, and if your security policy is configured to allow that location, it allows the client to access your application. If you need to block certain geographic locations (e.g. you are getting several attacks from a specific country), you can simply disallow that geolocation in your security policy. The ASM uses the layer 3 IP header to identify the client's IP address, but it can also be configured to use the X-Forwarded-For (XFF) address as the source address (we'll see this in a minute). If the XFF header is trusted, then the header's inner-most value will be used as the query input.

I know what you're probably thinking...in this crazy TCP/IP world, it's fairly easy to hide the true physical location of an IP address by using anonymity networks (like Tor), proxy servers, etc. And you would be right. So I'll simply say that it's important to keep this in mind as you allow or disallow certain locations in your security policy (i.e. don't think that just because you disallow a certain country you have completely blocked every person from that country). Fortunately, the BIG-IP ASM has specific geolocations identified for Anonymous Proxies. In addition, if you are concerned about clients from anonymous networks accessing your application, be sure to read all about IP Address Intelligence to find even more options for blocking these anonymous users!

BIG-IP Configuration

Let's see how the BIG-IP is set up to configure all this goodness. Navigate to Security >> Application Security >> Geolocation Enforcement and you will see the screen shown below. Notice that I selected a few unique geolocations for your viewing pleasure. I wanted to call these out since they are not typical country locations. N/A represents all the internal IP addresses that are not mapped to a country. Other represents external IP addresses that are not mapped to a specific country. Finally, as I mentioned earlier, Anonymous Proxy represents known servers that are acting as proxies (allowing clients to mask their true source IP address). As you can see, this interface keeps it really simple...you just scroll through the list of geolocations and add the ones you want blocked to the list of disallowed locations. Then hit Save...and don't forget to "Apply Policy" when you are finished!

Blocking Settings

If you have been reading the other articles in this ASM series, you already know about this next setting. In order to block, the ASM requires you to do more than just move a geolocation to the "disallowed" list...you have to configure the blocking settings as well. After you finish listing all the geolocations that you want to block, navigate to Security >> Application Security >> Blocking >> Settings and make sure "Access from disallowed Geolocation" is set to "Block" (you can set it to Learn and Alarm as well if you want).
The screenshot below gives you all the details.

X-Forwarded-For

Let's check out how the ASM can be configured to trust the XFF header, and then we'll get into the test to make sure all this works correctly. To trust the XFF header, you will need to configure the properties of the security policy itself. Navigate to Security >> Application Security >> Security Policies and then click on the policy you want to change. This will take you to the properties page of the policy. Be sure to view the "Advanced" configuration on this page (Basic is the default view). When you are in the Advanced Configuration view, you will see the "Trust XFF Header" setting at the bottom of the page. Enable this setting, add the name of the custom XFF header, and click "Add". I named mine "XFF_Geo" since we're doing this crazy geolocation stuff. As a reminder, don't forget to "Apply Policy" after making these changes...or any other changes for that matter. Now that the custom XFF header is configured and trusted, I can use this to build a GET request from any IP address I choose.

The Test...

I used Fiddler2 (awesome tool) to craft a GET request for the online auction site I've been using for this article series (https://auction.f5demo.com). You'll notice in the screenshot below that I used the XFF_Geo header to supply a different IP address than the one I normally use. In this case, I picked an address from Antarctica and sent the request...who knew they had IP addresses in Antarctica? If you'd rather script this request than use Fiddler2, a rough sketch appears at the end of this article.

The Results...

Prior to adding the Antarctica geolocation to the list of Disallowed locations, I took a screenshot of the GET request from the ASM logs (Security >> Event Logs >> Application >> Requests). As you can see in the screenshot below, the request came through just fine. You can even see at the bottom of the page that the ASM knows this address is from Antarctica. In fact, if you want, you can click "Disallow this Geolocation" right from this screen, and the ASM will move this location to the Disallowed locations for you (you still have to "Apply Policy" though). I will remind you, though...if you click the Disallow this Geolocation button, it will disallow every IP address from that country. I'm not saying don't do it, but just be aware. In fact, this could actually be a really helpful button if you are in the middle of an attack and need to quickly cut off access from a given location!

After I sent the first request from Fiddler2, I updated my security policy to disallow the Antarctica location. Then I sent the request again. Here's what the ASM caught. Notice that the violation is the exact setting from the Blocking Settings we looked at earlier.

Well, that wraps up the ASM Geolocation discussion. I hope you enjoyed learning about this really helpful feature. Come back next time for more fun with the ASM!

Update: Now that the article series is complete, I wanted to share the links to each article. If I add any more in the future, I'll update this list.

What is the BIG-IP ASM?
Policy Building
The Importance of File Types, Parameters, and URLs
Attack Signatures
XML Security
IP Address Intelligence and Whitelisting
Geolocation
Data Guard
Username and Session Awareness Tracking
Event Logging
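As promised above, here is a rough Python sketch of the same test request, for anyone who would rather script it than drive it through Fiddler2. The header name (XFF_Geo) and the demo URL come from this article; the source address, the relaxed TLS verification for a self-signed lab certificate, and the response handling are assumptions for illustration only, not a definitive test harness.

```python
import ssl
import urllib.error
import urllib.request

TEST_URL = "https://auction.f5demo.com/"
SPOOFED_CLIENT_IP = "203.0.113.10"   # stand-in address; use one that geolocates where you want to test

# Lab-only: relax certificate verification in case the demo site uses a
# self-signed certificate. Do not do this against production systems.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

# The custom, trusted XFF header carries the address the ASM should geolocate.
req = urllib.request.Request(TEST_URL, headers={"XFF_Geo": SPOOFED_CLIENT_IP})

try:
    with urllib.request.urlopen(req, timeout=10, context=ctx) as resp:
        print("Status:", resp.status)
        print(resp.read(300))   # peek at the body; an ASM block page will look different from the app's
except urllib.error.HTTPError as err:
    print("Request rejected with HTTP", err.code)
except urllib.error.URLError as err:
    print("Could not reach the test site:", err.reason)
```

Run it once before and once after disallowing the geolocation, and compare the responses with what shows up in the ASM request log.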
DNS Architecture in the 21st Century

It is amazing, if you stop and think about it, how much we utilize DNS services, and how little we think about them. Every organization out there is running DNS, and yet there is not a ton of traction in making certain your DNS implementation is the best it can be. Oh sure, we set up a redundant pair of DNS servers, and some of us (though certainly not all of us) have patched BIND to avoid major vulnerabilities. But have you really looked at how DNS is configured and what you'll need to keep your DNS moving along? If you're looking closely at IPv6 or DNSSEC, chances are that you have. If you're not looking into either of these, you probably aren't even aware that ISC – the non-profit responsible for BIND – is working on a new version. Or that great companies like Infoblox (fair disclosure, they're an F5 partner) are out there trying to make DNS more manageable.

With the move toward cloud computing and the need to keep multiple cloud providers available (generally so your app doesn't go offline when a cloud provider does, but at a minimum as a negotiation tool), and the increasingly virtualized nature of our application deployments, DNS is taking on a new importance. In particular, distributed DNS is taking on a new importance. What a company with three datacenters and two cloud providers must do today, only ISPs and a few very large organizations did ten years ago. And that complexity shows no signs of slacking. While the technology required to operate in a multiple-datacenter environment (whether those datacenters are in the cloud or on your premises) is available today, as I alluded to above, most of us haven't been paying attention. No surprise with the number of other issues on our plates, eh?

So here's a quick little primer to give you some ideas to start with when you realize you need to change your DNS architecture. It is not all-inclusive; the point is to give you ideas you can pursue to get started, not teach you all that some of the experts I spent part of last week with could offer.

- In a massively distributed environment, DNS will have to direct users to the correct location – which may not be static (Lori tells me the term for this is "hyper-hybrid")
- In an IPv6/IPv4 world, DNS will have to serve up both types of addresses, depending upon the requestor
- Increasingly, DNSSEC will be a requirement to play in the global naming game. While most orgs will go there with dragging feet, they will still go
- The failure of a cloud, or removal of a cloud from the list of options for an app (as elasticity contracts), will require dynamic changes in DNS. Addition will follow the same rules
- Multiple DNS servers in multiple locations will have to remain synched to cover a single domain

So the question is: where do you begin if you're like so many people and have vaguely looked into DNSSEC or DNS for IPv6, but haven't really stayed up on the topic? That's a good question. I was lucky enough to get two days' worth of firehose from a ton of experts – from developers to engineers configuring modern DNS and even a couple of project managers on DNS projects. I'll try to distill some of that data out for you. Where it is clearer to use a concrete example or specific terminology, that example will almost always be of my employer or a partner. From my perspective it is best to stick to examples I know best, and from yours, simply call your vendor and ask if they have similar functionality.
Massively distributed is tough if you are coming from a traditional DNS environment, because DNS alone doesn't do it. DNS load balancing helps, but so does the concept of a Wide IP. That's an IP that is flexible on the back end, but static on the front end. Just as in load balancing you have a single IP that directs users to multiple servers, a Wide IP is a single IP address that directs people to multiple locations. A Wide IP is a nice abstraction for actively load balancing not just between servers but between sites. It also allows DNS to be simplified when dealing with those multiple sites because it can route to the most appropriate instance of an application. Today, "most appropriate" is generally defined as geographically closest, but in some cases it can include things like "send our high-value customers to a different datacenter". There are a ton of other issues with this type of distribution, not the least of which are database integrity and primary sourcing, but I'm going to focus on the DNS bit today. Just remember that DNS is a tool to get users to your systems, like a map is a tool to get customers to your business. In the end, you still have to build the destination out.

DNS that supports both IPv4 and IPv6 will be mandatory for the foreseeable future, as new devices come online with IPv6 and old devices persist with IPv4. There are several ways to tackle this issue, from the obvious "leave IPv4 running and implement v6 DNS" to the less common "implement a solution that serves up both".

DNSSEC is another tough one. It adds complexity to what has always been a super-simplistic system. But it protects your corporate identity from those who would try to abuse it. That makes DNSSEC inevitable, IMO. Risk management wins over "it's complex" almost every time. There are plenty of DNSSEC solutions out there, but at this time DNSSEC implementations do not run BIND. The update ISC is working on might change that; we'll have to see.

The ability to change what's behind a DNS name dynamically is greatly assisted by the aforementioned Wide IPs. By giving a constant IP that has multiple variable IPs behind it, adding or removing those behind the Wide IP does not suffer the latency that DNS propagation requires. Elasticity of servers servicing a given DNS name becomes real simply by the existence of Wide IPs.

Keeping DNS servers synched can be painful in a dynamic environment. But if the dynamism is not in DNS address responses, but rather behind Wide IPs, this issue goes away also. The DNS servers will have the same set of name/address pairs that require changes only when new applications are deployed (servers are the norm for local DNS, but for Wide IP-based DNS, servers can come and go behind the DNS service with only insertion into local DNS, while a new application might require a new Wide IP and configuration behind it).

Okay, this got long really quickly. I'm going to insert an image or two so that there's a graphical depiction of what I'm talking about, then I'm going to cut it short. There's a lot more to say, but I don't want to bore you by putting it all in a single blog. You'll hear from me again on this topic though, guaranteed.
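To make the Wide IP idea a bit more concrete, here is a toy sketch of the decision it represents: one stable name on the front end, with the answer chosen per query from whichever sites currently host the application. The site names, regions, and addresses are invented for the example, and real GSLB (BIG-IP GTM, for example) also weighs health monitors, persistence, and policy; this only shows the shape of the logic.

```python
# Toy "Wide IP" resolver: sites can come and go behind the name without the
# name itself ever changing. All data below is invented for illustration.
SITES = {
    "us-east": {"addr": "192.0.2.10",   "region": "NA",   "up": True},
    "eu-west": {"addr": "198.51.100.7", "region": "EU",   "up": True},
    "apac":    {"addr": "203.0.113.99", "region": "APAC", "up": False},
}

def resolve_wide_ip(client_region: str) -> str:
    """Return the address to answer with for a client in client_region."""
    available = {name: site for name, site in SITES.items() if site["up"]}
    if not available:
        raise RuntimeError("no healthy sites to answer with")
    # Prefer a site in the client's own region; otherwise fall back to any
    # healthy site, so elasticity (sites appearing and disappearing) never
    # breaks the name itself.
    for site in available.values():
        if site["region"] == client_region:
            return site["addr"]
    return next(iter(available.values()))["addr"]

print(resolve_wide_ip("EU"))    # 198.51.100.7
print(resolve_wide_ip("APAC"))  # APAC is down, so a healthy site answers instead
```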
Related Articles and Blogs

- F5 Friday: Infoblox and F5 Do DNS and Global Load Balancing Right
- How to Have Your (VDI) Cake and Deliver it Too
- F5 BIG-IP Enhances VMware View 5.0 on FlexPod
- Let me tell you Where To Go
- Carrier Grade DNS: Not your Parents DNS
- Audio White Paper - High-Performance DNS Services in BIG-IP ...
- Enhanced DNS Services: For Administrators, Managers and Marketers
- The End of DNS As We Know It
- DNS is Like Your Mom
- F5 Video: DNS Express—DNS Die Another Day
F5 Friday: I am in UR HTTP Headers Sharing Geolocation Data

#DNS #bigdata #F5 #webperf How'd you like some geolocation data with that HTTP request?

Application developers are aware (you are aware, aren't you?) that when applications are scaled using most modern load balancing services, the IP address of the application requests actually belongs to the load balancing service. Application developers are further aware that this means they must somehow extract the actual client IP address from somewhere else, like the X-Forwarded-For HTTP header. Now, that's pretty much old news. Like I said, application developers are aware of this already.

What's new (and why I'm writing today) is the rising use of geolocation to support localized (and personalized) content. To do this, application developers need access to the geographic location indicated by either GPS coordinates or the IP address. In most cases, application developers have to get this information themselves. This generally requires integration with some service that can provide it, despite the fact that infrastructure like BIG-IP and its DNS services already has it and has paid the price (in terms of response time) to get it. Which means, ultimately, that applications pay the performance tax for geolocation data twice - once on the BIG-IP and once in the application.

Why, you are certainly wondering, can't the BIG-IP just forward that information in an HTTP header, just like it does the client IP address? Good question. The answer is that technically, there's no reason it can't. Licensing, however, is another story. BIG-IP includes, today, a database of IP addresses that locates clients, geographically, based on client IP address. The F5 EULA, today, allows customers to use this information for a number of purposes, including GSLB load balancing decisions, access control decisions with location-based policies, identification of threats by country, location blocking of application requests, and redirection of traffic based on the client's geographic location. However, all decisions had to be made on the BIG-IP itself, and geographic information could not be shared or transmitted to any other device. A new agreement now allows customers an option to use the geolocation data outside of BIG-IP, subject to fees and certain restrictions. That means BIG-IP can pass State, Province, or Region geographic data on to applications using an easily accessible HTTP header.

How does that work? Customers can now obtain a EULA waiver which permits certain off-box use cases. This allows customers to use the geolocation data included with BIG-IP in applications residing on a server or servers in an "off box" fashion. For example, location information may be embedded into an HTTP header or similar and then sent on to the server for it to perform some geolocation-specific action. Customers (existing or new) can contact their F5 sales representative to start the process of obtaining the waiver necessary to enable the legal use of this data in an off-box fashion.

All that's necessary from a technical perspective is to determine how you want to share the data with the application. For example, you (meaning you, BIG-IP owner, and you, application developer) will have to agree upon what HTTP header you'll want to use to share the data. Then voila! Developers have access to the data and can leverage it for existing or new applications to provide greater location-awareness and personalization.
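What that looks like on the application side is deliberately boring, which is the point. The minimal WSGI sketch below reads a region value that the BIG-IP has (hypothetically) inserted into a header named X-Geo-Region; that header name is only an example, since the whole arrangement hinges on the BIG-IP owner and the developer agreeing on one.

```python
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # WSGI exposes the "X-Geo-Region" request header as HTTP_X_GEO_REGION.
    # No call-out to a geolocation service; the delivery tier already did it.
    region = environ.get("HTTP_X_GEO_REGION", "unknown")
    body = ("Showing content localized for region: %s\n" % region).encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]

if __name__ == "__main__":
    # Listen behind the BIG-IP; the header is only trustworthy if the BIG-IP
    # (not the client) is the component setting it.
    make_server("0.0.0.0", 8080, app).serve_forever()
```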
If your organization has a BIG-IP (and that's a lot of organizations out there), check into this opportunity to reduce the performance tax on your applications that comes from double-dipping into geolocation data. Your users (especially your mobile users) will appreciate it.
Location-Aware Load Balancing

No, it's not global server load balancing or GeoLocation. It's something more... because knowing location is only half the battle, and the other half requires the ability to make on-demand decisions based on context.

In most cases today, global application delivery bases the decision about which location should service a given client on the location of the user, the availability of the application at each deployment location and, if the user is lucky, some form of performance-related service-level agreement. With the advent of concepts like cloud bursting and migratory applications that can be deployed at any number of locations at any given time based on demand, the ability to determine accurately not just the user's location but the physical location of the application as well is becoming increasingly important to addressing concerns regarding regulatory compliance. Making the equation more difficult is that these regulations vary from country to country and the focus of each varies greatly. In the European Union the focus is on privacy for the consumer, while in the United States the primary focus is on a combination of application location (export laws) and user location (access restrictions). These issues become problematic not just for application providers who want to tap into the global market, but for organizations whose employee and customer bases span the globe.

Many of the benefits of cloud computing are based on the ability to tap into cloud providers' inexpensive resources not just at any time it's needed for capacity (cloud bursting) but at any time that costs can be minimized (cloud balancing). These benefits are appealing, but they can quickly run organizations afoul of regulations governing data and application location. In order to maximize benefits, maintain compliance with regulations relating to the physical location of data and applications, and ensure availability and performance levels are acceptable to both the organization and the end user, some level of awareness must be present in the application delivery architecture. Awareness of location provides a flexible application delivery infrastructure with the ability to make on-demand decisions regarding where to route any given application request based on all the variables required; in other words, based on the context. Because of the flexible nature of deployment (or at least the presumed flexibility of application deployment) it would be a poor choice to hard-code such decisions so that users in location X are always directed to the application at location Y. Real-time performance and availability data must also be taken into consideration, as well as the capacity of each location.
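As a sketch of what "on-demand decisions based on context" might mean in code, the function below picks a deployment location for a request using the client's region, current availability, and remaining capacity rather than a hard-coded user-to-site mapping. Everything in it (the site list, capacity numbers, and preference order) is invented for illustration; a real implementation would live in the global application delivery tier, not in the application.

```python
# Hypothetical deployment locations for one application. "capacity" is the
# fraction of headroom left at each site; "region" is where it physically runs.
SITES = [
    {"name": "dc-virginia",  "region": "NA", "available": True,  "capacity": 0.40},
    {"name": "dc-frankfurt", "region": "EU", "available": True,  "capacity": 0.10},
    {"name": "cloud-east",   "region": "NA", "available": False, "capacity": 0.90},
]

def choose_site(client_region: str, min_capacity: float = 0.05):
    """Pick a site for this request based on current context, not a static rule."""
    candidates = [s for s in SITES if s["available"] and s["capacity"] >= min_capacity]
    if not candidates:
        return None
    # Prefer a site in the client's region; break ties on remaining capacity.
    candidates.sort(key=lambda s: (s["region"] != client_region, -s["capacity"]))
    return candidates[0]["name"]

print(choose_site("EU"))  # dc-frankfurt while it has headroom, otherwise a NA site
print(choose_site("NA"))  # dc-virginia, since cloud-east is currently unavailable
```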
The Four V's of Big Data

#stirling #bigdata #ado #interop "Big data" focuses almost entirely on data at rest. But before it was at rest, it was transmitted over the network. That ultimately means trouble for application performance.

The problem of "big data" is highly dependent upon whom you are speaking to. It could be an issue of security, of scale, of processing, of transferring from one place to another. What's rarely discussed as a problem is that all that data got where it is in the same way: over a network and via an application. What's also rarely discussed is how it was generated: by users. If the amount of data at rest is mind-boggling, consider the number of transactions and users that must be involved to create that data in the first place – and how that must impact the network. Which in turn, of course, impacts the users and applications creating it. It's a vicious cycle, when you stop and think about it.

This cycle shows no end in sight. The amount of data being transferred over networks, according to Cisco, is only going to grow at a staggering rate – right along with the number of users and variety of devices generating that data. The impact on the network will be increasing amounts of congestion and latency, leading to poorer application performance and greater user frustration.

MITIGATING the RISKS of BIG DATA SIDE EFFECTS

Addressing that frustration and improving performance is critical to maintaining a vibrant and increasingly fickle user community. A Yotta blog detailing the business impact of site performance (compiled from a variety of sources) indicates a serious risk to the business. According to its compilation, a delay of 1 second in page load time results in:

- 7% Loss in Conversions
- 11% Fewer Pages Viewed
- 16% Decrease in Customer Satisfaction

This delay is particularly noticeable on mobile networks, where latency is high and bandwidth is low – a deadly combination for those trying to maintain service level agreements with respect to application performance. But users accessing sites over the LAN or Internet are hardly immune from the impact; the increasing pressure on networks inside and outside the data center inevitably results in failures to perform – and frustrated users who are as likely to abandon and never return as mobile users are.

Thus, the importance of optimizing the delivery of applications amidst potentially difficult network conditions is rapidly growing. The definition of "available" is broadening and now includes performance as a key component. A user considers a site or application "available" if it responds within a specific time interval – and that time interval is steadily decreasing. Optimizing the delivery of applications while taking into consideration the network type and conditions is no easy task, and it requires a level of intelligence (to apply the right optimization at the right time) that can only be achieved by a solution positioned in a strategic point of control – at the application delivery tier.

Application Delivery Optimization (ADO)

Application delivery optimization (ADO) is a comprehensive, strategic approach to addressing performance issues, period. It is not a focus on mobile, or on cloud, or on wireless networks. It is a strategy that employs visibility and intelligence at a strategic point of control in the data path, enabling solutions to apply the right type of optimization at the right time to ensure individual users are assured the best performance possible given their unique set of circumstances.
The underpinnings of ADO are both technological and topological, leveraging location along with technologies like load balancing, caching, and protocols to improve performance on a per-session basis. The difficulty in executing an overarching, comprehensive ADO strategy is addressing the variables of myriad environments, networks, devices, and applications with the fewest number of components possible, so as not to compound the problems by introducing more latency due to additional processing and network traversal. A unified platform approach to ADO is necessary to ensure minimal impact from the solution on the results. ADO must therefore support topology and technology in such a way as to ensure the flexible application of any combination as may be required to mitigate performance problems on demand.

Topologies

- Symmetric Acceleration
- Front-End Optimization (Asymmetric Acceleration)

Lengthy debate has surrounded the advantages and disadvantages of symmetric and asymmetric optimization techniques. The reality is that both are beneficial to optimization efforts. Each approach has varying benefits in specific scenarios, as each approach focuses on specific problem areas within the application delivery chain. Neither is necessarily appropriate for every situation, nor will either one necessarily resolve performance issues in which the root cause lies outside the approach's intended domain expertise. A successful application delivery optimization strategy is to leverage both techniques when appropriate.

Technologies

- Protocol Optimization
- Load Balancing
- Offload
- Location

Whether the technology is new – SPDY – or old – hundreds of RFC standards improving on TCP – it is undeniable that technology implementation plays a significant role in improving application performance across a broad spectrum of networks, clients, and applications. From improving upon the way in which existing protocols behave to implementing emerging protocols, from offloading computationally expensive processing to choosing the best location from which to serve a user, the technologies of ADO achieve the best results when applied intelligently and dynamically, taking into consideration real-time conditions across the user-network-server spectrum.

ADO cannot effectively scale as a solution if it focuses on only one or two of its component solutions. It must necessarily address what is a polyvariable problem with a polyvariable solution: one that can apply the right set of technological and topological solutions to the problem at hand. That requires a level of collaboration across ADO solutions that is almost impossible to achieve unless the solutions are tightly integrated. A holistic approach to ADO is the most operationally efficient and effective means of realizing performance gains in the face of increasingly hostile network conditions.

- Mobile versus Mobile: 867-5309
- Identity Gone Wild! Cloud Edition
- Network versus Application Layer Prioritization
- Performance in the Cloud: Business Jitter is Bad
- The Three Axioms of Application Delivery
- Fire and Ice, Silk and Chrome, SPDY and HTTP
- The HTTP 2.0 War has Just Begun
- Stripping EXIF From Images as a Security Measure
F5 Friday: Hyperlocalize Applications for Everyone

Desktops aren't GPS-enabled, but don't let that stop you from providing hyperlocal information to all your fans.

(Image from the Macmillan buzzword dictionary)

Two people are sitting in an Internet-enabled café. Let's call the café Starbucks. One of them is using an iPhone or iPad while having a Hoffachino to find out what's going on in the area. One of them is using a laptop to do the same. One of these two people is likely to get more accurate responses with less work. Which one is it? Yeah, the Apple fanboi. To be fair, it could be a BlackBerry fanboi or any other GPS-enabled smartphone user. The point is that it's much easier to hyperlocalize applications targeting smartphones because of their innate location-awareness supplied by built-in GPS. But why restrict your hyperlocal application to just mobile devices?

For developers the answer is simple: the data required to hyperlocalize an application, i.e. the location of the user, is simply easier to get from a mobile device like the iPhone, iPad, or BlackBerry than it is from a desktop browser. Seriously. The API calls for the application are simple, and adding the data to either the request or to HTTP headers is just as simple. Google Gears offers an implementation compliant with the W3C Geolocation API specification that provides one solution, though obtaining accurate coordinates without a GPS-enabled endpoint may be trickier for the developer. While geolocation is an integral component of mobile device SDKs, there's no complement on the desktop that provides this data. The result is that desktop users are either treated as second-class Internet citizens (how's that for irony?) and not provided with a hyperlocalized interface to the application, or they have to jump through a series of hoops to get to the same data that is offered up to mobile users automagically.

DYNAMIC INFRASTRUCTURE to the RESCUE

One of the key characteristics of dynamic infrastructure is its ability to integrate and collaborate with other infrastructure components, both at provisioning and at run-time. Part of those dynamic, run-time capabilities is the means to intercept, inspect, and, if desired, modify the requests and responses that traverse the entirety of the application or service delivery chain. Couple the ability to perform such a task with the ability to grab location-based data gleaned from the client IP address, and suddenly you have the information necessary to hyperlocalize all applications, not just those coming from a GPS-enabled smartphone or device. There are a number of ways in which such a technique could be applied. For example, if applications are already hyperlocalized based on region, one could redirect users to the appropriate hyperlocalized location using a simple network-side script like iRules; the sketch below illustrates the idea.
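Here is a stand-in sketch of that idea in Python (rather than an actual iRule), just to make the logic visible: derive a region from the client address, then send the user to the regional instance of the application. The lookup stub, the region-to-URL table, and the hostnames are all invented for the example; on a BIG-IP the geolocation lookup and the redirect would happen in the network-side script itself, using the platform's own location database.

```python
# Region-based redirection sketch. All names and addresses below are
# placeholders for illustration only.
REGIONAL_SITES = {
    "NA": "https://us.example.com/",
    "EU": "https://eu.example.com/",
}
DEFAULT_SITE = "https://www.example.com/"

def region_for_ip(client_ip: str) -> str:
    """Stub: in real life this is a geolocation database lookup, not a prefix test."""
    return "EU" if client_ip.startswith("198.51.100.") else "NA"

def hyperlocal_redirect(client_ip: str) -> str:
    """Return the URL this client should be redirected to."""
    region = region_for_ip(client_ip)
    return REGIONAL_SITES.get(region, DEFAULT_SITE)

print(hyperlocal_redirect("198.51.100.25"))  # https://eu.example.com/
print(hyperlocal_redirect("192.0.2.44"))     # https://us.example.com/
```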
F5 Friday: A Network Heatwave That's Good For Operations

The grab bag of awesome that is network-side scripting is, in general, often overlooked. Generally speaking, "network gear" isn't flexible, nor is it adaptable, and it certainly isn't extensible. But when you put network-side scripting into the mix, suddenly what was inflexible and static becomes extensible and dynamic. If you've ever said "I wish that thing could do X", well, in the case of application delivery it probably can – you just have to learn how. The how, in the case of F5, is iRules.

iRules is network-side scripting, so it's executing "in the network", as it were, as data is traversing from client to server and server to client. iRules lets you intercept data (it's event-driven) and then, well, what do you want to do to it? You can transform it, apply conditional policies, log it, search it, reject it. Being network-side means you have context, and with context you can do a lot of application-, location-, and client-specific "things" to data and requests and sessions. Being "in the network" means you have access to the full network stack. If you need IP header information, you can get that. If you need application-specific information from within the request or response, you can get that. The entire stack is available to inspect and can ultimately be used to instruct BIG-IP.

iRules figures prominently in F5's vision of cloud computing as one component of the dynamic control plane necessary to realize the benefits of cloud computing and virtualization. It figures prominently in agile operations and the ability to respond rapidly to datacenter events, and it's integral to emerging switching and routing architectures based on content and context, such as message-based load balancing and content-based routing.

I don't often get a chance to cook up iRules myself any more, so those who do – and are masters at it – make me just a bit jealous. And it'd be hard to find someone in our kitchen who makes me greener than DevCentral's own Colin Walker. He's done it again, and I can't say enough good things about this solution. Not only is it hot (pun intended), it's highly applicable to a variety of uses that go beyond just generating eye-candy for operations. What makes it really interesting is the use of an external service to generate a dynamic view of real-time operations. Colin is leveraging Google's charting API and relying on the confidence of geolocation data based on our integration with Quova to enable a real-time visual display of the locations from which HTTP requests are being received by a BIG-IP. It's dynamic integration, it's Infrastructure 2.0, it's devops, it's just ... cool.
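Colin's actual iRule isn't reproduced here, but the data-gathering half of the idea is easy to sketch: count incoming requests per country of origin and hand the totals to whatever visualization you like. The Python sketch below assumes you already have a per-request country value (for example, one the BIG-IP resolved with its geolocation lookup and logged), shows only the tallying, and deliberately leaves out the charting-API call.

```python
# Tally request origins by country -- the raw material for a "heatwave" style
# view of where traffic is coming from. The sample list stands in for a log
# feed in which each request already carries a resolved country code.
from collections import Counter

request_countries = ["US", "DE", "US", "BR", "DE", "US", "JP"]  # stand-in log feed

totals = Counter(request_countries)

for country, count in totals.most_common():
    # In the real solution these totals feed a charting service; here we just
    # print a crude text view so the sketch stays self-contained.
    print(f"{country}: {'#' * count} ({count})")
```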
Cloud Needs Context-Aware Provisioning

Devops needs to be able to SELECT COMPUTE_RESOURCES from CLOUD where LOCATION in (APPLICATION SPECIFIC RESTRICTIONS).

The awareness of the importance of context in application delivery, and especially in the "new network", is increasing, and that's a good thing. It's a necessary evolution in networking as both users and applications become increasingly mobile. But what might not be evident is the need for more awareness of context during the provisioning, i.e. deployment, process. A desire to shift the burden of managing infrastructure does not mean a desire for ignorance of that infrastructure, nor does it imply acquiescence to a complete lack of control. But today that's partially what one can expect from cloud computing.

While the fear of applications being deployed on "any old piece of hardware anywhere in the known universe" is not entirely a reality, the possibility of having no control over where an application instance might be launched – and thus where corporate data might reside – is one that may prevent some industries and individual organizations from choosing to leverage public cloud computing. This is another one of those "risks" that tips the scales of risk versus benefit to the "too risky" side, primarily because there are legal implications that make organizations nervous. The legal ramifications of deploying applications – and their data – in random geographic locations around the world differ based on what entity has jurisdiction over the application owner. Or do they? That's one of the questions that remains to be answered to the satisfaction of many and which, in many cases, has led to a decision to stay away from cloud computing.

"According to the DPA, clouds located outside the European Union are per se unlawful, even if the EU Commission has issued an adequacy decision in favor of the foreign country in question (for example, Switzerland, Canada or Argentina)."
-- German DPA Issues Legal Opinion on Cloud Computing

Back in January, Paul Miller published a piece on jurisdiction and cloud computing, exploring some of the similar legal juggernauts that exist with cloud computing:

"While cloud advocates tend to present 'the cloud' as global, seamless and ubiquitous, the true picture is richer and complicated by laws and notions of territoriality developed long before the birth of today's global network. What issues are raised by today's legislative realities, and what are cloud providers — and their customers — doing in order to adapt?"
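The pseudo-SQL at the top of this post can be taken almost literally. The sketch below filters candidate provisioning locations down to those an application's own restrictions permit; the location list, jurisdiction labels, and the application policy are all invented for illustration, and a real policy would need legal review rather than a three-line filter.

```python
# Literal-minded sketch of:
#   SELECT COMPUTE_RESOURCES FROM CLOUD WHERE LOCATION IN (APPLICATION SPECIFIC RESTRICTIONS)
# All locations, jurisdictions, and policy values below are invented examples.
CLOUD_LOCATIONS = [
    {"id": "provider-a-us-east", "jurisdiction": "US"},
    {"id": "provider-a-eu-west", "jurisdiction": "EU"},
    {"id": "provider-b-apac",    "jurisdiction": "SG"},
]

APP_POLICY = {"allowed_jurisdictions": {"EU"}}  # e.g. EU personal data stays in the EU

def allowed_locations(policy, locations):
    """Return only the locations this application may be provisioned in."""
    return [loc for loc in locations
            if loc["jurisdiction"] in policy["allowed_jurisdictions"]]

print([loc["id"] for loc in allowed_locations(APP_POLICY, CLOUD_LOCATIONS)])
# ['provider-a-eu-west']
```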