From BYOD to BYOE - what is the reality?
I – and others – have written frequently on the subject of BYOD; for example, in this blog post. Workforces are becoming ever more mobile, and so the question for IT decision-makers, administrators and CIOs is: WHO, accessing through WHAT smart device, gets to WHERE on the corporate network? I know from experience with customers that, for IT departments, the process is initially all about accomplishing the above safely and quickly. Despite all the BYOD noise, it’s clear that, for many IT departments, the use of private equipment is still in the future. An IDC study even says that the BYOD hype is over (please make use of Google Translate - this page is in German). At the same time, I wonder whether the majority of employees even want to use their personal tablets or smartphones for business purposes. Even for smart devices, there is a work/life balance.

Now that BYOD is much-discussed as a topic and is widely understood, the analyst community, in the shape of Gartner, sees a new trend: Bring Your Own Everything. It’s easier for me to deal with today’s reality. For me, the IT department reality is this: how do we implement secure solutions for our employees that enable mobility and flexibility, and thus gain a competitive advantage? Employees who can work from anywhere, with anytime access to the corporate network, mean more efficiency and productivity for the company, even if smartphones are turned off once in a while to ensure work/life balance is maintained. Employees these days can enjoy tremendous flexibility – many are free from working in a set office during core business hours.

The companies best prepared for a mobile device policy that makes sense for the business – i.e. one that satisfies financial, legal and security requirements – are employing Mobile Device Management, or MDM, in the business model. MDM allows organisations to extend full network-like controls to devices outside the network and, in doing so, addresses security concerns, especially around access to corporate applications. IT teams have an enormous task on their hands with initiatives such as BYOD. Only by providing the optimum infrastructure can secure, fast access and increased flexibility be achieved, for the company and for its employees.

Will you be DDoS attacked?
The threat posed by DDoS attacks is ever-growing, and something I have talked about on numerous occasions at security conferences this year. As it continues to be a topic which interests and concerns the industry as a whole, I decided to write down my predictions for what 2013 will bring, and why I think DNS reflection attacks (and other amplification attacks) will play an increasingly prominent part in DDoS attacks in the future. For those of you I have spoken to on the topic before, it’s a theme I regularly stress.

The major driver of these types of DNS attacks is the decreasing number of bots available for rent. One explanation is that the authorities have been more effective at closing down major botnets. With fewer bots now available, hacktivists and other cyber criminals are finding new ways to amplify their attacks.

So how does a DNS reflection attack work? It’s actually quite simple, and is based around amplifying the data you generate by reflecting it via an open DNS resolver. Imagine that you send a DNS query with a packet size of 40 bytes to a DNS server and get back 2,500 bytes in the DNS response. That sounds like a pretty good deal, right? Now, what if you spoofed the source IP to reflect the attack against your target/victim via the open DNS resolver? You can see where this is leading… The DNS resolver will generate a huge amount of data and send it to the spoofed IP address. Because DNS uses UDP, a stateless protocol, there is no real source address verification. This means you can easily spoof the address and achieve an amplified packet size in the attack.

I believe DNS reflection attacks will be a preferred tool for three simple reasons:

1) In the list of the top ten AS numbers with the most open DNS resolvers, you find around 20,000 open DNS resolvers (*)

2) You can amplify an attack by a factor of 250, and it requires little bandwidth from the cyber criminals. The more bots you are in control of, the bigger the effect

3) As the attack is reflected, the open DNS resolver very often has little logging turned on, so the cyber criminals can easily hide behind it

Over the last two years, we have seen an increasing number of attacks using this technique, and it has been very effective for cyber criminals. A few attacks have recorded speeds of up to 35 Gbps - more than enough to take out an average company’s internet connection. One thing to remember, however, is that very often the DDoS attack is just a smoke screen for a more sophisticated attack that can potentially cost the company even more money. The problem here is finding the needle in the haystack. How do your security products cope with the influx of traffic during a DDoS attack? More importantly, can they find things like SQL injection attacks in the storm of traffic?

So how can you protect your business in the light of such threats? The approach is very often layered, which means that you need to combine defence against network-layer DDoS attacks (L2-L4) with defence against application-layer DDoS attacks (L5-L7). I believe that a combination of on-premise equipment for detecting network-based DDoS attacks and attacks at the application level allows you to close the window for cyber criminals and more efficiently stop any attack at the network and application layer; a small example of what that can look like follows below. To answer the question in the headline, the risk of being “DDoS attacked” has never been greater.
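To make that layered-defence point concrete, here is a minimal sketch of the kind of DNS iRule that could sit on a BIG-IP in front of your resolvers. It is a sketch only: it assumes the DNS services module is provisioned and the rule is attached to a listener, and it simply refuses ANY queries, the type reflection attacks favour because it returns the largest response for the smallest request.

    when DNS_REQUEST {
        # ANY queries are the classic amplification vector: a ~40-byte request
        # can return thousands of bytes. A resolver that is not meant to serve
        # the open internet has little reason to answer them at all.
        if { [DNS::question type] eq "ANY" } {
            log local0. "Dropping ANY query for [DNS::question name] from [IP::client_addr]"
            DNS::drop
        }
    }

Dropping, rather than answering with an error, also means the spoofed “source” receives nothing at all, so the device cannot itself be recruited as an amplifier.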
DDoS attacks have become the de facto standard for online protest, and they will continue to be used by hacktivists to make themselves heard, whether for political, ideological, financial or religious reasons. Our job is to ensure we continue to build the best solutions to prevent such attacks. Feel free to reach out to any of our system engineers to discuss the best way to protect your business!

References:
(*) HostExploit’s World Hosts Report, Q3 2012

Keeping the data flowing in this mobile world
It seems to me that you cannot check any technology news website these days without being bombarded by news about mobility, bring your own device (BYOD) and the arrival of 4G networks, bringing superfast internet access to millions of mobile workers across the country. But while most users just care about getting the latest handset and being able to get a decent mobile connection when out and about, the service providers behind the scenes have a lot more to deal with. The increasing number of subscribers, the “always on” nature of today’s devices and the vast amount of data they create are causing headaches for service providers. It’s something we know all about here at F5.

As the features available to mobile users become richer and more advanced, they place additional strain on the networks. The trouble is that many of the networks, and the infrastructure they run on, are old, built before these superfast networks and advanced mobile devices were available. This means that, sometimes, performance and security can be compromised. Some applications will not perform at their optimal speed if there is heavy traffic on the network, while many applications from unofficial sources could pose a security threat to the user, the business and the service provider.

That’s why we believe in an application-orientated approach to security, with centralised management and policy controls. This means you can tailor policies and protection for each application to each individual organisation, while centralising the management means a reduction in time and money spent on configuring policies and pushing them out. Essentially, the key is to ensure that what goes on behind the scenes is seamless and easy to manage, so users get a fast, reliable and secure mobile service and providers don’t have to stress about delivering on those promises.

Our latest developments in the firewall market can help mobile service providers; click here to find out how: http://www.f5.com/about/news/press/2013/20130619/

A brief history of (F5) time
John McAdam, F5’s CEO and Chairman, was the first keynote speaker at Agility 2013. He talked about the history of the company, much of which is well known, but drew particular attention to a couple of items: the LineRate acquisition, whose technology is to do with application-layer SDN, and the opening of F5’s London and New York International Technology Centres. The latter drives massive preference for F5 amongst our customers: once they visit and try our technology in (effectively) their own environments, in tandem with other partner technologies, they see why F5 is generally the right choice.

And then TMOS – our platform. We remain the only full-proxy strategic control point in the data centre, and we aim to be the same in cloud and hybrid environments too. The secret sauce here is the platform’s modularity; everything F5 makes works on the same platform, so a web application firewall can sit on a BIG-IP device alongside Local Traffic Manager.

Wrapping up, John drew attention to our customer satisfaction levels; currently at 9.2/10, this is a source of tremendous pride for the company and a key performance indicator right up to board level. What follows – and I will cover as many as I can, in-between video shoots with customers and partners – are more in-depth slots on security, connections, scale and F5’s place in helping customers overcome these challenges.

Value-added strategic partnerships: increase efficiency through manageability
In today’s interconnected and globalised world, it is more important than ever that IT infrastructure components interact to optimise the data centre. There is good business logic for this: better usage and collaboration mean operating and capital expenditure costs are reduced, and this chimes well with the ‘do more with less’ mantra so popular in boardrooms throughout Europe at the moment. This efficiency drive means many IT business unit managers have to be creative in order to drive the business forward with the same or fewer resources.

There are many ways to reach a goal, but one of the more obvious is to optimise what you already have and make sure that you are going to be able to use it for the foreseeable future. This is as true for IT as it is for the machinery that manufactures footwear or the system you have for answering the phones. Manageability is therefore becoming a key word in IT. That’s not to say it has been overlooked until now, but it is much more important than it was, as many organisations, in their drive for efficiency, are phasing out point solutions that perform different aspects of – overall – the same task. Platforms that offer multiple functions with common manageability are becoming more popular as a way to roll out new services more quickly and flexibly.

At F5, we have taken this approach for a number of years. We work with, and are certified by, the world’s leading technology companies to provide our customers with exactly this overall concept, whether you’re implementing VMware or Microsoft Exchange. Here’s a very simple example – a Tech Fact from the independent customer surveyors TechValidate, regarding Microsoft environments: 62% of companies surveyed (the entire survey covers 364 anonymous F5 customers) confirmed that they were able to reduce CAPEX by 10% or more using their F5 solutions.

Take a look at how we can support you in achieving your efficiency goals with our strategic technology alliances.

Videos from F5's recent Agility customer / partner conference in London
A week or so ago, F5 in EMEA held our annual customer / partner conference in London. I meant to do a little write-up sooner, but after an incredibly busy conference week I flew to F5's HQ in Seattle and didn't get round to posting from there either. So... better late than never?

One of the things we wanted to do at Agility was take advantage of the DevCentral team's presence at the event. They pioneered social media as a community tool, kicking off F5's DevCentral community (now c. 100,000 strong) in something like 2004. They are very experienced and knowledgeable about how to use rich media to get a message across. So we thought we'd ask them to do a few videos with F5's customers and partners about what drives them and how F5 fits in. Some of them are below, and all of them can be found here.

The UK Cookie Law – <place your own bad pun here>
Many of you will be aware that on the 26th of May the law that applies to how cookies and other ‘cookie-like’ objects are stored on users’ devices changed. Whilst the Information Commissioner’s Office has indicated that there will be a one-year grace period before enforcement begins, it seems wise to start addressing the issue now, so that a) you’ve got time to test and implement your chosen solution, and b) you can’t say I didn’t tell you so when they slap a £500,000 fine on you.

What do the new regulations say? Well, essentially, whereas cookies could previously be stored with what I, as a non-lawyer, would term implied consent – i.e. the cookies you set are listed, along with their purpose and how to opt out, in some interminable privacy policy on your site – you are now going to have to obtain a more active and informed consent to store cookies on a user’s device. I’m not going to use this post to debate the rights and wrongs of this (there are plenty of forums out there doing just that), and anyway, the last time I was trusted to administer a website you did it in vi (or emacs if you were that way inclined). I’m far more interested in making life as easy as possible for our customers.

Solving this problem means you are going to have to first capture the cookies that your site is using, and then build a mechanism to allow users to grant consent. Whilst you could implement this in your application server code, I bet you can guess where I think it makes sense to address it. I’ve started mucking about with some iRules to capture and log cookies as they are set, and to produce a consent page and a (guess what) cookie-based method of recording who has authorised cookie use for future reference; a first sketch of the capture part is below. I’d be really interested in your views on this, so leave a comment.
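To give you a taste, here is a first, minimal sketch of the capture half, assuming a standard HTTP virtual server with the iRule attached. It only inventories the cookie names your applications set; the consent page and the opt-in recording would be built on top of it.

    when HTTP_REQUEST {
        # Remember the Host header; it is not available in the response event.
        set http_host [HTTP::host]
    }
    when HTTP_RESPONSE {
        # Log the name of every cookie the application tries to set, building
        # an inventory to work from before a consent step is added.
        foreach set_cookie [HTTP::header values "Set-Cookie"] {
            set cookie_name [lindex [split $set_cookie "="] 0]
            log local0. "$http_host sets cookie: $cookie_name"
        }
    }

From there, a check in HTTP_REQUEST for a consent cookie, with a redirect to a consent page when it is absent, would complete the picture.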
R.I.P Barnaby Jack!

It's sad to hear that Barnaby Jack has passed away, just 35 years old. He was found dead in his apartment in San Francisco on the 25th of July. To me, BJ was special in many ways: a security expert / white hat hacker whose work I happily followed, and a good source of inspiration.

Born on the 22nd of November 1977 in New Zealand, he became known for many big discoveries in the industry, everything from making an ATM spit out hundred dollar bills (http://www.youtube.com/watch?v=qwMuMSPW3bU) to remotely taking over D-Link routers by modifying the downloaded firmware binary and injecting code for remote execution. The latter was when I first met him in person, in April 2008 in Mallorca, Spain. We both worked for a different company at the time, and BJ was presenting his latest research on the D-Link router to all the SEs at a European SE conference. Though this was something he had discovered in his spare time, it really illustrated the passion and knowledge he brought to his everyday work, trying to make sure that the product that shipped was 99.9% secure and contained no flaws in the hardware or software.

For the last several years he ran his own security research company, focusing on security issues with medical equipment like insulin pumps and pacemakers. This Black Hat was going to be his show… His latest research was into pacemakers and their weaknesses, and he was going to demonstrate how you could remotely turn one off from within 30 metres. It's sad to say that I will never get to hear him give that presentation, but I hope his work lives on through someone else.

You will be missed by the security community, and this Black Hat will be dedicated to you, at least in my mind.

Cloud Computing, Economics and a Universal Truth
I was reading an interesting article in ZDNet about the economics of cloud computing and was struck by a universal truth: to deliver a service for the lowest cost, you need to make maximum use of your resources for the maximum amount of time. This is the principle that drives a wider range of designs than you’d perhaps imagine, e.g.:

- Storage subsystem designs
- IT outsourcing
- Cloud computing

All of these are designed in such a way as to smooth the peaks of demand, reducing the resource needed to supply the service. Whilst the RAM cache of a storage controller might be somewhat different to a team of network engineers in a NOC, they are all there to service a workload. Just as it’s more economical to provide memory cache to deal with peak I/O workloads, rather than hundreds of extra disk spindles, the cost of running a 24x7x365 NOC with enough staff to cover sickness, holidays and training is better shared by multiple organisations than each providing its own. The same works for cloud computing, where cloud service providers rely on providing enough compute resource to meet their average requirements, rather than the theoretical maximum if every customer used their resources all at once.

Whilst all this was running around my head (I prefer to believe that it’s this high-value deep-thinking time that results in my somewhat modest productivity, rather than my easily distracted nature), F5 quietly announced availability of Version 11 of BIG-IP. There are loads of new and pretty cool things in this release, which we promise to bombard you about over the coming months, but one that stood out was our new Scale N architecture, which lets you break away from traditional two-node clustering into a world where workloads can migrate between members of a pool of application delivery controllers. So if, for example, your downloads site hits a huge peak, you could migrate your Outlook Web Access workload off to one application delivery controller and your e-commerce site to another. If one device fails, its workload can be spread around multiple peers. Suddenly we seem to have a way to smooth those peaks and meet even the toughest SLAs.
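As a toy illustration of that peak-smoothing arithmetic – a Tcl sketch with invented numbers, nothing to do with actual F5 code – imagine ten workloads that idle at one unit of demand but burst to 100 units five per cent of the time. Provisioning each for its own peak needs far more capacity than provisioning the pool for its combined peak:

    set workloads 10
    set samples 1000
    set sum_of_peaks 0
    for {set t 0} {$t < $samples} {incr t} { set combined($t) 0 }
    for {set w 0} {$w < $workloads} {incr w} {
        set peak 0
        for {set t 0} {$t < $samples} {incr t} {
            # Mostly idle, occasionally bursting: 1 unit, or 100 with 5% chance.
            set demand [expr {rand() < 0.05 ? 100 : 1}]
            if {$demand > $peak} { set peak $demand }
            set combined($t) [expr {$combined($t) + $demand}]
        }
        set sum_of_peaks [expr {$sum_of_peaks + $peak}]
    }
    set peak_of_sum 0
    for {set t 0} {$t < $samples} {incr t} {
        if {$combined($t) > $peak_of_sum} { set peak_of_sum $combined($t) }
    }
    puts "capacity if every workload is provisioned alone: $sum_of_peaks"
    puts "capacity if the workloads share a pool:          $peak_of_sum"

Run it a few times and the standalone figure sits stubbornly at 1,000 while the pooled figure hovers around a few hundred; that gap is where the cloud provider’s economics (and Scale N’s workload migration) live.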
Infosec 2013 - Businesses Unprepared for DNS Reflection Threat, Despite Biggest Attack in History

Further to my previous posts on DDoS attacks (and particularly the recent Spamhaus attack), we thought that Infosec 2013 would offer the perfect audience from which to gauge whether businesses are prepared for, and even understand, what I predict will be the biggest threat to enterprises this year. I have to say, I’m quite surprised at the results.

Only 10 per cent of the security professionals we surveyed could describe accurately how DNS reflection attacks work, and only 11 per cent would be completely confident that the day-to-day operations of their business would not be disrupted should they be hit by such an attack. Interestingly, 83 per cent of respondents revealed they are less than fully confident that their organisation has consistent security and availability policies across their entire IT infrastructure.

And yet there are a number of concerns associated with suffering a DDoS attack. 22% of respondents highlighted reputational damage as a top concern, with 20% worrying about the impact on customers and 16% about data loss. More than one in ten respondents picked out revenue loss as one of their top three concerns.

The results speak for themselves, but businesses need to take note and prioritise security, or run the risk of allowing cyber criminals to access data or hacktivists to target them with DDoS attacks. Businesses need to react to the threat of DDoS attacks, and particularly DNS reflection attacks. It’s crucial that we get on the front foot when it comes to tackling cyber crime and try to limit the damage. Both the scale and the method of the Spamhaus attacks should have acted as a wake-up call, but the research suggests that many security professionals would still struggle to deal effectively with this new breed of DDoS attack, despite fearing data loss, reputational damage and the impact on their customers.

As organisations continue to move their applications to the cloud as a way to increase infrastructure agility and reduce costs, it’s vital that they close off any back doors to would-be attackers. Conventional firewalls are failing in the face of increasingly complex internet threats; more intelligence has to be built into corporate networks to ensure their security can handle the newest threats. This includes being able to seamlessly configure and automate security so that the entire IT environment is protected, regardless of the mix of on-premise, cloud or hybrid infrastructures.