access control
The IP Address – Identity Disconnect
The advent of virtualization brought about awareness of the need to decouple applications from IP addresses. The same holds true on the client side – perhaps even more so than in the data center. I could quote The Prisoner, but that would be so cliché, wouldn't it? Instead, let me ask a question: just which IP address am I? Am I the one associated with the gateway that proxies my mobile phone's web access? Or am I the one currently assigned to my laptop – the one that will change tomorrow because today I am in California and tomorrow I'll be home? Or am I the one assigned to me when I'm connected via an SSL VPN to corporate headquarters? If you're tying identity to IP addresses then you'd better be a psychiatrist in addition to your day job, because most users have multiple IP address disorder.

IP addresses are often used as part of an identification process. After all, a web application needs some way to identify a user that isn't supplied by the user. There's a level of trust inherent in an IP address that doesn't exist with my name or any other user-supplied piece of data because, well, it's user supplied. An IP address is assigned or handed out dynamically by an unemotional, uninvolved technical process that does not generally attempt to deceive, dissemble, or trick anyone with the data. An IP address is simply a number.

But given the increasingly dynamic nature of data centers, of cloud computing, and of users accessing web-based services via multiple devices – sometimes at the same time – it seems a bad idea to base any part of identification on an IP address that could, after all, change in five minutes. IP addresses are no longer guaranteed in the data center – that's the premise of much of the work around IF-MAP, dynamic connectivity, and Infrastructure 2.0 – so why do we assume they would be guaranteed on the client side? Ridonculous!

The decoupling of IP address from identity seems a foregone conclusion. It's simply not useful anymore. Add to this the fact that IPv4 address depletion truly is a serious problem – the NRO announced recently that less than 10% of all public IPv4 addresses are still available – and it seems an appropriate time to decouple applications and infrastructure from relying on client IP addresses as a form of identification.
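To make that concrete, here is a minimal sketch – the secret, function names, and token format are all invented for illustration – of identifying a user by a signed session token instead of by source IP, so the identity survives an address change mid-session:

```python
import hashlib
import hmac
from typing import Optional

SESSION_KEY = b"replace-with-a-real-secret"  # hypothetical shared secret

def issue_token(user_id: str) -> str:
    """Issue an opaque, signed session token bound to the user, not the IP."""
    sig = hmac.new(SESSION_KEY, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}.{sig}"

def identify(token: str) -> Optional[str]:
    """Recover the user from the token; the client IP never enters the check."""
    try:
        user_id, sig = token.rsplit(".", 1)
    except ValueError:
        return None
    expected = hmac.new(SESSION_KEY, user_id.encode(), hashlib.sha256).hexdigest()
    return user_id if hmac.compare_digest(sig, expected) else None

# The laptop that was in California yesterday presents the same token today,
# from a different address, and is still the same identity.
print(identify(issue_token("alice")))  # alice
```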
The Problem with Consumer Cloud Services...

…is that they're consumer #cloud services. While we're all focused heavily on the challenges of managing BYOD in the enterprise, we should not overlook or understate the impact of consumer-grade services within the enterprise. Just as employees bring their own devices to the table, so too do they bring a smattering of consumer-grade "cloud" services to the enterprise.

Such services are generally woefully inappropriate for enterprise use. They are focused on serving a single consumer, with authentication and authorization models that support that focus. There are no roles, generally no group membership, and there's certainly no oversight from any mediating authority other than the service provider. This is problematic for enterprises because it eliminates the ability to manage access for large groups of people, to ensure authority to access based on employee role and status, and it provides no means of integration with existing ID management systems. Integrating consumer-oriented cloud services into enterprise workflows and systems is a Sisyphean task.

Cloud services replicating what have traditionally been considered enterprise-class services, such as CRM and ERP, are designed with the need to integrate. Consumer-oriented services are designed with the notion of integration with other consumer-grade services, not enterprise systems. They lack even the most rudimentary enterprise-class concepts such as RBAC, group-based policy, and managed access. SaaS supporting traditionally enterprise-class concerns such as CRM and e-mail has begun to enable the integration with the enterprise necessary to overcome what is, according to a survey conducted by CloudConnect and Everest Group, the number two inhibitor of cloud adoption amongst respondents.

The lack of integration points into consumer-grade services is problematic for both IT and the service provider. For the enterprise, there is a need to integrate with, and control the processes associated with, consumer-grade cloud services. As with many SaaS solutions, the ability to collaborate with data-center-hosted services as a means to integrate with existing identity and access control services is paramount to assuaging the concerns raised by the more lax approach to access and identity in consumer-grade services. Integration capabilities – APIs – that enable enterprises to exert even rudimentary control over access are a must for consumer-grade SaaS looking to find a path into the enterprise. Not only is it a path to monetization (enterprise organizations are a far more consistent source of revenue than ads or income derived from the sale of personal data), it also provides the opportunity to overcome the stigma associated with consumer-grade services that has already resulted in "bans" on such offerings within large organizations.

There are fundamentally three functions consumer-grade SaaS needs to offer to entice enterprise customers:

1. Control over AAA. Enterprises need the ability to control who accesses services and to correlate access with authoritative sources of identity and role. That means the ability to coordinate a log-in process that relies primarily upon corporate IT systems to assert access rights, and the capability of the cloud service to accept that assertion as valid. APIs, SAML, and other identity management techniques are invaluable tools in enabling this integration (a rough sketch of such an assertion check appears at the end of this article).
Alternatively, enterprise-grade management within the tools themselves can provide the level of control required by enterprises to ensure compliance with a variety of security and business-oriented requirements.

2. Monitoring. Organizations need visibility into what employees (or machines) may be storing "in the cloud" and what data is being exchanged with which systems. This visibility is necessary for a variety of reasons, with regulatory compliance most often cited.

3. Mobile Device Management (MDM) and Security. Because one of the most alluring aspects of consumer cloud services is nearly ubiquitous access from any device and any location, the ability to integrate #1 and #2 via MDM and mobile-friendly security policies is paramount to enabling (willing) enterprise adoption of consumer cloud services.

While most of the "consumerization" of IT tends to focus on devices, "bring your own services" should also be a very real concern for IT. And if consumer cloud service providers think about it, they'll realize there's a very large market opportunity for them to support the needs of enterprise IT while maintaining their gratis offerings to consumers.
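As a rough illustration of the AAA handoff described in item 1 – not any particular provider's API – here is a minimal Python sketch in which a cloud service accepts a signed assertion of user and group claims from a corporate identity provider and maps groups onto its own roles. The shared secret, claim names, and role mapping are all hypothetical:

```python
import hashlib
import hmac
import json
from typing import Optional

SHARED_SECRET = b"negotiated-with-the-corporate-idp"          # hypothetical
GROUP_TO_ROLE = {"crm-admins": "admin", "sales": "editor"}    # hypothetical mapping

def verify_assertion(payload: bytes, signature: str) -> Optional[dict]:
    """Accept the corporate IdP's assertion only if its signature checks out."""
    expected = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return None
    return json.loads(payload)

def authorize(claims: dict, required_role: str) -> bool:
    """Map corporate groups to service roles; no local user database involved."""
    roles = {GROUP_TO_ROLE.get(group) for group in claims.get("groups", [])}
    return required_role in roles

payload = json.dumps({"user": "alice@example.com", "groups": ["sales"]}).encode()
signature = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
claims = verify_assertion(payload, signature)
print(authorize(claims, "editor"))  # True: access follows the corporate directory
```

The point is not the signing mechanics but where authority lives: the corporate systems assert who the user is and which groups they belong to, and the service merely honours that assertion.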
DNSSEC – the forgotten security asset?

An interesting article from CIO Online last month explained how DNS had been used to identify over 700 instances of a managed service provider's customers being infected with malware. The MSP was able to detect the malware using DNS. As the article points out, a thirty-year-old technology was being used to defeat twenty-first-century computer problems. In short, DNS may be a viable means of identifying infections within networks more quickly, because the attackers rely on DNS just as much as the security applications do.

DNS, however, still comes with its own unique approach to security. The signature-checking procedures outlined in the Domain Name System Security Extensions (DNSSEC) specifications were deemed adequate for the protocols surrounding domain resolution. But while the signed records allow responses to be authenticated, the data is not encrypted, meaning it is not confidential. The other problem with DNSSEC is that in the event of a Distributed Denial of Service (DDoS) DNS amplification attack on a DNS server, the processing of validation requests adds to processor usage and contributes to slowdown. DNSSEC does, however, provide protection against cache poisoning and other malicious activities, and it remains part of the network security arsenal.

At F5, our solution to the DNSSEC load problem was to integrate DNSSEC into our BIG-IP Global Traffic Manager. The traffic manager handles all of the processing overhead created during a DDoS DNS amplification attack. The result is that the DNS server can be left to function with no performance limitation. On top of this, the F5 solution is fully compliant with international DNSSEC regulations imposed by governments, organisations and domain registrars.

While DNSSEC may seem mature, even dated in its security specifications, the correct application of technology such as F5's BIG-IP Global Traffic Manager delivers peace of mind over security, performance, resources and centralised management of your DNS.
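As a small illustration of the validation step whose overhead is discussed above, the sketch below (which assumes the third-party dnspython library and an upstream resolver that actually performs DNSSEC validation) queries a name with the DNSSEC-OK flag set and reports whether the resolver marked the answer as authenticated:

```python
# A minimal sketch; requires the third-party dnspython package (pip install dnspython)
# and a configured upstream resolver that validates DNSSEC.
import dns.flags
import dns.resolver

def resolved_with_dnssec(name: str) -> bool:
    """Ask the resolver for `name` and report whether the response carried
    the Authenticated Data (AD) bit, i.e. the signature chain validated."""
    resolver = dns.resolver.Resolver()
    # Advertise DNSSEC-OK via EDNS so signatures are requested and validated.
    resolver.use_edns(0, dns.flags.DO, 1232)
    answer = resolver.resolve(name, "A")
    return bool(answer.response.flags & dns.flags.AD)

if __name__ == "__main__":
    # True only if the configured resolver validated the chain of signatures.
    print(resolved_with_dnssec("example.com"))
```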
Videos from F5's recent Agility customer / partner conference in London

A week or so ago, F5 in EMEA held our annual customer / partner conference in London. I meant to do a little write-up sooner, but after an incredibly busy conference week I flew to F5's HQ in Seattle and didn't get round to posting there either. So... better late than never?

One of the things we wanted to do at Agility was take advantage of the DevCentral team's presence at the event. They pioneered social media as a community tool, kicking off F5's DevCentral community (now c. 100,000 strong) in something like 2004. They are very experienced and knowledgeable about how to use rich media to get a message across. So we thought we'd ask them to do a few videos with F5's customers and partners about what drives them and how F5 fits in. Some of them are below, and all of them can be found here.
The IT Optical Illusion

Everyone has likely seen the optical illusion of the vase in which, depending on your focus, you either see a vase or two faces. This particular optical illusion is probably the best allegorical image for IT – and in particular cloud computing – I can imagine. Depending on your focus within IT, you're concentrating on either – to borrow some terminology from SOA – design-time or run-time management of the virtualized systems and infrastructure that make up your data center. That focus determines which aspect of management you view as most critical, and unfortunately makes it difficult to see the "big picture": both are critical components of a successful cloud computing initiative.

I realized how endemic to the industry this "split" is while prepping for today's "Connecting On-Premise and On-Demand with Hybrid Clouds" panel at the Enterprise Cloud Summit @ Interop, on which I have the pleasure to sit with some very interesting – but differently focused – panelists. See, as soon as someone starts talking about "connectivity" the focus almost immediately drops to... the network. Right. That actually makes a great deal of sense and it is, absolutely, a critical component of building out a successful hybrid cloud computing architecture. But that's only half of the picture, the design-time picture. What about run-time? What about the dynamism of cloud computing and virtualization? The fluid, adaptable infrastructure? You know, the connectivity that's required at the application layers, like access control and request distribution and application performance.

Part of the reason you're designing a hybrid architecture is to retain control. Control over when those cloud resources are used, and how, and by whom. In most cloud computing environments today, at least public ones, there's no way for you to maintain that control because the infrastructure services are simply not in place to do so. Yet. At least I hope "yet"; one wishes to believe that some day they will be there. But today, they are not.

Thus, in order to maintain control over those resources there needs to be a way to manage the run-time connectivity between the corporate data center (over which you have control) and the public cloud computing environment (which you do not). That's going to take some serious architecture work, and it's going to require infrastructure services from infrastructure capable of intercepting requests, inspecting each request in the context of the user and the resource requested, and applying policies and processes to ensure that only those clients you want to access those resources can access them, while those you prefer not access them are denied.

It will become increasingly important that IT be able to view its network in terms of both design-time and run-time connectivity if it is going to successfully incorporate public cloud computing resources into its corporate cloud computing – or traditional – network and application delivery network strategy.
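That run-time control ultimately reduces to a policy decision made wherever requests are intercepted. The sketch below is purely illustrative – the resource names, groups, and rules are invented, and it stands in for whatever policy engine actually sits in the request path – but it shows the shape of the decision: given who the user is and what they want, allow it, deny it, or keep it in the data center:

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    groups: set          # resolved from the corporate identity store
    resource: str        # e.g. "reports", "payroll"
    target: str          # "cloud" or "datacenter"

# Hypothetical policy: which groups may reach which resources in the public cloud.
CLOUD_POLICY = {
    "reports": {"analysts", "finance"},
    "payroll": set(),    # never served from the public cloud
}

def decide(req: Request) -> str:
    """Return 'allow', 'deny', or 'redirect' for a request bound for the cloud."""
    if req.target != "cloud":
        return "allow"                      # in-house resources follow local policy
    allowed = CLOUD_POLICY.get(req.resource, set())
    if not allowed:
        return "redirect"                   # keep it in the corporate data center
    return "allow" if req.groups & allowed else "deny"

print(decide(Request("bob", {"analysts"}, "reports", "cloud")))   # allow
print(decide(Request("eve", {"marketing"}, "payroll", "cloud")))  # redirect
```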
Data Diversity: a leading UK tech writer posts…

To change things up a little, TechView has invited tech luminary Adrian Bridgwater to share a few thoughts on Big Data and related management issues. More about Adrian can be found at the end of this post. He can be found in many places online, including Twitter and at ComputerWeekly.com. Thanks for contributing, Adrian!

Data Diversity: protecting the species inside the "infinite variety"

Big data, complex event processing, multi-core systems and real-time events are spiralling upwards and outwards into a combined vortex of increased interconnectivity, with the resultant data now traveling across a multiplicity of network protocols – so are we headed for a fall, or do we have the application and services management layers in place to be able to survive in this new ecosystem?

Let's approach this question organically and ecologically. I first came across the term "infinite variety" while watching the naturalist David Attenborough present the BBC series Life On Earth, but I believe it's actually a turn of phrase attributable to Shakespeare's Antony and Cleopatra. Either way, terms like the so-called "big data" have come to the fore as we now see data streams and data sets evolving into an (almost) infinite number of new forms. Data is interconnecting and (in some cases) self-populating through automation controls.

The question now is whether we have taken the trouble to prepare and architect for this new data-enriched landscape in which we find ourselves. After all, huge swathes of data without management and analysis functions... are just huge swathes of data. The "value" fulcrum tips in our favour when, and only when, we can evidence some degree of control and power over the data before us.

But it's not just about data management, is it? The problem here (somewhat predictably perhaps) comes back to data security in the first instance. You may have already read about (Chris) Hoff's Law on F5 DevCentral, which states that: "If your security practices suck in the physical realm, you'll be delighted by the surprising lack of change when you move to cloud." We can further postulate that the corollary or upshot of Hoff's Law is that if your data security practices DO NOT suck in the physical realm, you'll most likely be concerned by the inability to continue that practice when you move to cloud and the world of big data.

So, tying our two themes together: we have the "infinite variety" of human data usage and its constantly evolving new data streams across new form factors and devices as they emerge – and we also have security and the need to protect our species and the lifeblood and food source (in this case data) that it thrives upon. In the real world we arm ourselves with defences and build a stronghold inside which we can live and go about our normal functions. In the virtual world, we should perhaps treat encryption techniques and anti-malware controls as body armour, but recognise that we still need deeper controls, such as software appliances with functionality like multi-tenant support, to really batten down the hatches. From this point, we can explore new ecosystems and planets and populate our world further without fear of disease or plague – or (if that's one analogy too far for you) at least we can keep our data locked down and get on with life.

Adrian's bio: Adrian Bridgwater is a freelance journalist specialising in cross-platform software application development as well as all related aspects of software engineering and project management.
Adrian is a regular writer and blogger with Computer Weekly, Dr. Dobb's Journal and others, covering the application development landscape to detail the movers, shakers and start-ups that make the industry the vibrant place that it is. His journalistic creed is to bring forward-thinking, impartial technology editorial to a professional (and hobbyist) software audience around the world. His mission is to objectively inform, educate and challenge – and through this champion better coding capabilities and ultimately better software engineering.
Policy is key for protection in the cloud era

Today, companies host mission-critical systems such as email in the cloud – systems which contain customer details and company-confidential information, and without which company operations would grind to a halt. Although cloud providers were forced to reconsider their security and continuity arrangements after the large cloud outages and security breaches last year, cloud users still face a number of challenges. Unless organisations work with a small, specialist provider, it is unlikely that they can guarantee where their data is stored, or verify the data handling policies of the cloud provider in question. Organisations frequently forget that their in-house data policies simply will not be exported to the cloud along with their data. Authentication, authorisation and accounting (AAA) services are often cited as major concerns for companies using cloud services. Organisations need assurance of due process in data handling, or else a way to remove the problem so that they lose no sleep over cloud.

Aside from problems with location, one of the main problems with cloud is that it does not lend itself to static security policy. For example, one of the most popular uses of cloud is cloudbursting, where excess traffic is directed to cloud resources to avoid overwhelming in-house servers, to spread traffic more economically, or to spread the load when several tasks of high importance are being carried out at once. Firm policies about what kind of data can be moved to the cloud, at what capacity threshold, and any modifications which need to be made to that data all need to be considered in a very short space of time. All of this needs to be accomplished whilst keeping data secure in transit, and with minimal management to avoid overloading IT managers at already busy times. Furthermore, organisations need to consider AAA concerns, making sure that data is kept in the right hands at all times.

Organisations need to secure applications regardless of location, and to do this they need to be able to extend policy to the cloud to make sure that data stays safe wherever it is. Using application delivery control enables companies to control all inbound and outbound application traffic, allowing them to export AAA services to the cloud. They should also make sure that they have a guarantee of secure tunnelling (i.e. via VPNs), which will ensure that data is secure in transit, as well as confirming that only the right users have access to it. Using some kind of secure sign-on, such as two-factor authentication, can also make sure that the right users are correctly authorised.

In future, organisations may begin to juggle multiple cloud environments, balancing data between them for superior resilience, business continuity and pricing – often referred to as 'supercloud' – and this can be extremely complex. As company usage of cloud becomes more involved, managing and automating key processes will become more important, so that cloud is an asset rather than a millstone around the neck of IT departments.
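To make that cloudbursting policy concrete, here is a minimal sketch – the threshold, data classifications and function names are all invented for illustration – of the decision an organisation might codify before any traffic is allowed to burst to a public cloud:

```python
from dataclasses import dataclass

BURST_THRESHOLD = 0.80                       # burst only above 80% in-house utilisation
CLOUD_SAFE_CLASSES = {"public", "internal"}  # "confidential" data never bursts

@dataclass
class Workload:
    name: str
    data_class: str     # "public", "internal" or "confidential"
    utilisation: float  # current in-house capacity utilisation, 0.0 - 1.0

def should_burst(workload: Workload) -> bool:
    """Burst to the cloud only when capacity demands it and policy permits it."""
    return (workload.utilisation > BURST_THRESHOLD
            and workload.data_class in CLOUD_SAFE_CLASSES)

print(should_burst(Workload("web-frontend", "public", 0.92)))        # True
print(should_burst(Workload("payroll-batch", "confidential", 0.95))) # False: policy wins
```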
VMworld 2012 Europe - Strobel's Scribblings, Part I

The first of what will be a series of reports from Barcelona... F5's Frank Strobel wraps up Day Zero's events:

----------

VMworld EMEA 2012 – more exciting news from F5

On the evening prior to the start of the 2012 edition of VMworld EMEA, the F5 team is getting ready for another successful event – this time in beautiful Barcelona, Spain. No offense, Copenhagen, but the combination of sunshine, tapas, sangria, and the Mediterranean has you beat.

Earlier today we held a vmLIVE session with over 700 VMware channel partners in attendance (a new record for us!) interested in learning about what F5 can deliver in support of the Mobile Secure Desktop. Clearly, this is a hot topic and one that we will focus on during VMworld EMEA with a theater presentation in the Solution Exchange (Enhancing the User Experience for Multi-Pod VMware View Deployments – Tuesday, October 9th, 12:30pm) and our live demo in the booth. If you are evaluating VMware View for your VDI needs, you might want to consider paying us a visit to learn more.

Also today, we held a joint breakout session with VMware during the TAP pre-event day, presenting on the VMware vCloud Automated Networking Framework: Network Extensibility (TEX1899) together with Ravi Neelakant. Charlie Cano delivered another standing-room-only performance. Those who have seen Charlie present before know why he draws large crowds. You will have a chance on Thursday to witness Charlie's presentation skills during his own breakout session (SPO2069 - Solving the Application Provisioning Nightmare: Integrating vSphere and vCloud Director with Your Application Delivery Networking Services).

Last but not least, stay tuned for more exciting news coming from F5 tomorrow. You don't want to miss that one for sure. So, feel free to come by F5's stand, G100, to check out our latest solutions and to participate in our really cool motorcycle racing game. And, as always, there are cool prizes to be had…

Viva Espana, Viva VMworld!
Of Escalators and Network Traffic

Escalators are an interesting first-world phenomenon. While not strictly necessary anywhere, they still turn up all over the place in most first-world countries. The key to their popularity is, no doubt, the fact that they move traffic much more quickly than an elevator, and offer the option of walking to increase the speed to destination even more. One thing about escalators is that they're always going either up or down, in contrast to an elevator, which changes direction with each trip.

The same could be said of network traffic. It is definitely moving on the up escalator, with no signs of slackening. The increasing number of devices not just online, but accessing information both inside and outside the confines of the enterprise, has brought with it a large increase in traffic. Combine that with increases in new media both inside and outside the enterprise, and you have a spike in growth that the world may never see again. And we're in the middle of it. Let's just take a look at a graph of Internet usage portrayed in a bit of back-and-forth between Rob Beschizza of Boing Boing and Wired magazine. The graphic only goes to 2010, and you can clearly see that the traffic growth is phenomenal. (Side note: Mr. Beschizza's blog entry is worth reading, as he dissects arguments that the web is dead.)

As this increase impacts an organization, there is a series of steps that generally occurs on the path to Application Delivery Networking, and it's worth recapping here (note: the order can vary).

- First, an application is not performing. Application load balancing is brought in to remedy the problem. This step may be repeated, with load balancing widely deployed before...
- Next, Internet connections are overloaded. Link load balancing is brought in to remedy the problem.
- Once the enterprise side is running acceptably, it turns out that wireless devices – particularly cell phones – are slow. Application acceleration is brought in to solve the problem.
- Application security becomes an issue – either for purchased packages exposed to the world, or internally developed code. A web application firewall is used to solve the problem.
- Remote backups or replication start to slow the systems as more and more data is collected. WAN optimization is generally brought in to address the problem.
- For storefronts and other security-enabled applications, encryption becomes a burden on CPUs – particularly in a virtualized environment. Encryption offloading is brought in to solve the problem.
- Traffic management and access control quickly follow – addressed with management tools and SSL VPN.

That is where things generally sit right now. There are other bits, but most organizations haven't finished going this far, so we'll skip them for now. The problem that has even the most forward-thinking companies mostly paused here is complexity. There's a lot going on in your application network at this point, and the pause to regain control and insight is necessary. An over-arching solution to that complexity – some way to control all of this burgeoning architecture from a central location – is, while not strictly necessary, a precursor to further taking advantage of the infrastructure available within the datacenter (notice that I have not discussed multi-data center or datacenter-to-cloud in this post).
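As a purely illustrative sketch of what that central vantage point might look like at its simplest – the metric names and thresholds below are invented, and a real deployment would pull them from the devices themselves – a collector that compares per-tier metrics against configured ceilings gives a feel for the kind of single point of control being described:

```python
# Hypothetical per-tier ceilings a central collector might enforce;
# names and values are invented for illustration.
THRESHOLDS = {
    "lb_active_connections": 50_000,
    "waf_blocks_per_min": 500,
    "ssl_offload_cpu_pct": 85,
    "wan_opt_queue_depth": 1_000,
}

def out_of_range(metrics: dict) -> list:
    """Return the names of any metrics that exceed their configured ceiling."""
    return [name for name, ceiling in THRESHOLDS.items()
            if metrics.get(name, 0) > ceiling]

sample = {
    "lb_active_connections": 48_200,
    "waf_blocks_per_min": 640,       # an unusual spike worth investigating
    "ssl_offload_cpu_pct": 62,
    "wan_opt_queue_depth": 120,
}
print(out_of_range(sample))  # ['waf_blocks_per_min']
```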
Some vendors – like F5 (just marketing here) – offer a platform that allows control of these knobs and features, while other organizations will have to look to products like Tivoli or OpenView to tie the parts together. And while we're centralizing the management of the application infrastructure, it's time to consider that separate datacenter or the cloud as a future location to include in the mix. Can the toolset you're building look beyond the walls of the datacenter and meet your management and monitoring needs? Can it watch multiple cloud vendors? What metrics will you need, and can your tools get them today, or will you need more management? All stuff to ask while taking that breather. There's a lot of change going on, and it's always a good idea to know where you're going in the long run while you're fighting fires in the short run. The cost of failing to ask these questions is a limited capability to achieve goals in the future – e.g., more firefighting. And IT works hard enough; let's not make it harder than it needs to be.

And don't hesitate to call your sales rep. They want to give you information about products and try to convince you to buy theirs; it's what they do. While I can't speak for other companies, if you get on the phone with an F5 SE, you'll find that they know their stuff, and can offer help that ranges from defining future needs to meeting current ones.

To you IT pros, I say: keep making the business run like they don't know you're there. And since they won't generally tell you, I'll say "thank you" for them. They have no idea how hard their life would be sans IT.
F5 Friday: Never Outsource Control

Extending identity management into the cloud

The focus of several questions I was asked at Interop involved identity management and application access in a cloud computing environment. This makes sense; not all applications that will be deployed in a public cloud environment are going to be "customer" or "market" focused. Some will certainly be departmental or business-unit applications designed to be used by employees, and thus require a certain amount of access control and integration with existing identity management stores, like Active Directory. Interestingly, F5 isn't the only one that thinks identity and access management needs to be addressed for cloud computing initiatives to succeed.

It's important to not reinvent the wheel when it comes to moving to the cloud, especially as it pertains to identity and access management. Brown [Timothy Brown, senior vice president and distinguished engineer of security management for CA] said that before moving to the cloud it's important that companies have a plan for managing identities, roles and relationships. Users should extend existing identity management systems. The cloud, however, brings together complex systems and opens the door for more collaboration, meaning more control is necessary. Brown said simple role systems don't always work; dynamic ones are required. [emphasis added]
– "10 Things to Consider Before Moving to the Cloud", CRN, 2010

Considering that "control" and "security", both of which are closely tied to identity management, were the top two concerns of organizations in an InformationWeek Analytics cloud computing survey, this is simply good advice. The problem is how do you do that? Replicate your Active Directory forest? Maybe just a branch or two? There are overarching systems that can handle that replication, of course, but do you really want your corporate directory residing in the cloud? Probably not. What you really want is to leverage your existing identity management systems where they reside – in the corporate data center – and use their authentication and authorization information to allow or deny access to cloud-based applications.
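A minimal sketch of that pattern follows – not any particular product's mechanism, and the secret, claim fields and lifetime are all hypothetical. The corporate side authenticates the user against the on-premises directory and mints a short-lived, signed assertion; the cloud application only checks the signature and expiry, and never holds a copy of the directory:

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"held-on-premises-and-shared-only-with-the-app"  # hypothetical
ASSERTION_LIFETIME = 300  # seconds

def mint_assertion(user: str, groups: list) -> dict:
    """Issued on-premises after the user authenticates against the corporate
    directory; the directory itself never leaves the data center."""
    payload = json.dumps({
        "user": user,
        "groups": groups,
        "expires": int(time.time()) + ASSERTION_LIFETIME,
    }).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "signature": signature}

def cloud_app_accepts(assertion: dict) -> bool:
    """The cloud-hosted application checks only the signature and expiry;
    it stores no passwords and replicates no directory."""
    payload = assertion["payload"].encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, assertion["signature"]):
        return False
    return json.loads(payload)["expires"] > time.time()

print(cloud_app_accepts(mint_assertion("alice", ["finance"])))  # True
```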