Cloning. Boomeranging. Trojan clouds. Startup CloudPassage takes aim at emerging attack surfaces, but it’s still more about process than it is product.

Before we go one paragraph further, let’s set something straight: this is not a “cloud is insecure” or “cloud security – oh noes!” post.

Cloud is involved, yes, but it’s not necessarily the source of the problem – that would be virtualization and processes (or a lack thereof). Emerging attack methods and botnet propagation techniques can just as easily be problematic for a virtualization-based private cloud as they are for public cloud. That’s because the problem isn’t necessarily with cloud; it’s with an underlying poor server security posture that is made potentially many times more dangerous by the ease with which vulnerabilities can be propagated and migrated across and between environments.

That said, a recent discussion with a cloud startup called CloudPassage has given me pause to reconsider some of the ancillary security issues that are being discovered as a result of the use of cloud computing. Are there security issues with cloud computing? Yes. Are they enabled (or made worse) by cloud computing models? Yes. Does that mean cloud computing is off-limits? No. It still comes down to proper security practices being extended into the cloud and potentially new architectural solutions for addressing the limitations imposed by today’s compute-on-demand-focused offerings. The question is whether proper security practices and processes can be automated through new solutions or through devops tools like Chef and Puppet, or whether they require manual adherence to secure processes.


We’ve talked about “cloud” security before and I still hold to two things: first, there is no such thing as “cloud” security and second, cloud providers are, in fact, acting on security policies designed to secure the networks and services they provide.

That said, there are in fact several security-related issues that arise from the way in which we might use public cloud computing, and it is those issues we need to address. These issues are not peculiar to public cloud computing per se, but some are specifically related to the way in which we might use – and govern – cloud computing within the enterprise. These emerging attack methods may give some credence to the fears of cloud security. Unfortunately for those who might think that adds another checkmark in the “con” column for cloud, it isn’t all falling on the shoulders of the cloud or the provider; in fact, a large portion of the problem falls squarely in the lap of IT professionals.


One of the oft-cited benefits of virtualization is the ability to easily propagate a “gold image.” That’s true, but consider what happens when that “gold image” contains a rootkit. A trojan. A botnet controller. Trojans, malware, and viruses are just as easily propagated via virtualization as web and application servers and their configuration. The bad guys make a lot of money these days by renting out botnet controllers, and if they can enlist your cloud-hosted services in that endeavor to do most of the work for them, they’re making money for nothing and getting bots for free.

Solution: Constant vigilance and server vulnerability management. Ensure that guest operating system images are free of vulnerabilities, hardened, patched, and up to date. Include identity management issues, such as accounts created for development that should not be active in production or those accounts assigned to individuals who no longer need access.
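As a sketch of the identity-management side of that vigilance, the check below flags login-capable accounts in a guest image that aren’t on a production allowlist. The allowlist, shell list, and passwd parsing here are illustrative assumptions, not any particular product’s logic:

```python
APPROVED = {"root", "appsvc"}  # hypothetical production allowlist
NOLOGIN = {"/usr/sbin/nologin", "/sbin/nologin", "/bin/false"}

def stale_accounts(passwd_text, approved=APPROVED):
    """Given the text of an /etc/passwd file, return the names of
    login-capable accounts that are not on the approved list."""
    stale = []
    for line in passwd_text.splitlines():
        if not line or line.startswith("#"):
            continue
        fields = line.split(":")
        name, shell = fields[0], fields[6]
        if shell not in NOLOGIN and name not in approved:
            stale.append(name)
    return stale
```

Run against each image before promotion, a non-empty result (say, a leftover `devtest` account with a real shell) is a gate failure, not a warning.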


Public cloud computing is also often touted as a great way to reduce the time and costs associated with development. Just fire up a cloud instance, develop away, and when it’s ready you can migrate that image into your data center production environment. So what if that image is compromised while it’s in the public cloud? Exactly – the compromised image, despite all your security measures, is now inside your data center, ready to propagate itself.

Solution: Same as with cloning, but with additional processes that require a vulnerability scan of an image before it’s placed into the production environment, and vice versa.


How the heck could an image in the public cloud be compromised, you ask? Vulnerabilities in the base operating system used to create the image. It is as easy as point and click to create a new server in a public cloud such as EC2, but that server – the operating system – may not be hardened, patched, or anywhere near secured when it’s created. That’s your job. Public cloud computing implies a shared customer-provider security model, one in which you, as the customer, must actively participate.

“…the customer should assume responsibility and management of, but not limited to, the guest operating system… and associated application software...”

“it is possible for customers to enhance security and/or meet more stringent compliance requirements with the addition of… host based firewalls, host based intrusion detection/prevention, encryption and key management.” [emphasis added]

-- Amazon Web Services: Overview of Security Processes (August 2010)

Unfortunately, most public cloud providers’ terms of service prohibit actively scanning servers in the public cloud for such vulnerabilities. No, you can’t just run Nessus out there, because the scanning process is likely to have a negative impact on the performance of other customers’ services sharing that hardware. So you’re left with a potentially vulnerable guest operating system – the security of which you are responsible for according to the terms of service, yet cannot adequately explore with traditional security assessment tools. Wouldn’t you like to know, after all, whether the default guest operating system configuration allows or disallows null passwords for SSH? Wouldn’t you like to know whether vulnerable Apache modules – like mod_python – are installed? Patched? Up to date?

Catch-22, isn’t it? Especially if you’re considering migrating that image back into the data center at some point.
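When the image itself can be inspected locally (for instance, on a copy pulled back for offline analysis), even simple configuration checks can answer questions like the SSH one above. A minimal, hypothetical sketch – not a substitute for a real vulnerability scanner:

```python
def permits_empty_passwords(sshd_config_text):
    """Return True if an sshd_config explicitly permits empty SSH
    passwords. OpenSSH defaults PermitEmptyPasswords to 'no' and honors
    the first occurrence of a keyword, so only an explicit leading
    'yes' flags the image."""
    for line in sshd_config_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if not line:
            continue
        parts = line.split(None, 1)
        if len(parts) == 2 and parts[0].lower() == "permitemptypasswords":
            return parts[1].strip().lower() == "yes"
    return False  # directive absent: OpenSSH default is 'no'
```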


One aspect of these potential points of exploitation is that organizations can’t necessarily extend all the security practices of the data center into the public cloud.

If an organization routinely scans and hardens server operating systems – even virtualized ones – it can’t continue that practice in the cloud environment. Lack of control over the topology (architecture) and the limitations of modern security infrastructure – such components often cannot handle the dynamic IP addressing inherent in cloud computing – make deploying a security infrastructure in a public cloud computing environment today nearly impossible.

There is also a lack of control over processes. The notion that developers can simply fire up an image “out there” and later bring it back into the data center without any type of governance is, in fact, a problem. This is the other side of devops – the side where developers are expected to step into operations and not only find but subsequently address vulnerabilities in the server images they may be using in the cloud.

Joe McKendrick, discussing the latest survey regarding cloud adoption from CA Technologies, writes:

The survey finds members of security teams top the list as the primary opponents for both public and private clouds (44% and 27% respectively), with sizeable numbers of business unit leaders/managers also sharing that attitude (23% and 18% respectively).

Overall, 53% are uncomfortable with public clouds, and 31% are uncomfortable with private clouds. Security and control remain perceived barriers to the cloud. Executives are primarily concerned about security (68%) and poor service quality (40%), while roughly half of all respondents consider risk of job loss and loss of control as top deterrents.

-- Joe McKendrick, “Cloud divide: senior executives want cloud, security and IT managers are nervous”

Control, intimately tied to the ability to secure and properly manage the performance and availability of applications regardless of where they may be deployed, remains high on the list of cloud computing concerns.

One answer is found in Amazon’s security white paper above – deploy host-based solutions. But the lack of topological control and inability of security infrastructure to deal with a dynamic environment (they’re too tied to IP addresses, for one thing) make that a “sounds good in theory, fails in practice” solution.

A startup called CloudPassage, coming out of stealth today, has a workable solution. It’s host-based, yes, but it’s a new kind of host-based solution – one that was developed specifically to address the restrictions of a public cloud computing environment that prevent that control as well as the challenges that arise from the dynamism inherent in an elastic compute deployment.


If you’ve seen the movie “Robots” then you may recall that the protagonist, Rodney, was significantly influenced by the mantra, “See a need, fill a need.” That’s exactly what CloudPassage has done; it “fills a need” for new tools to address cloud computing-related security challenges. The need is real, and while there may be many other ways to address this problem – including tighter governance by IT over public cloud computing use and tried-and-true manual operational deployment processes – CloudPassage presents a compelling way to “fill a need.”

CloudPassage is trying to fill that need with two new solutions designed to help discover and mitigate many of the risks associated with vulnerable server operating systems deployed in, and moving between, cloud computing environments. Its first solution – focused on server vulnerability management (SVM) – comprises three components:

Halo Daemon

The Halo Daemon is added to the operating system, and because it is tightly integrated it is able to perform tasks such as server vulnerability assessment without violating public cloud computing terms of service regarding scanning. It runs silently – no ports are open, no APIs are exposed, there is no interface. It communicates periodically via a secured, message-based system residing in the second component: the Halo Grid.

Halo Grid

The “Grid” collects data from and sends commands to the Halo Daemon(s). It allows for centralized management of all deployed daemons via the third component, the Halo Portal. 

Halo Portal

The Halo Portal, powered by a cloud-based farm of servers, is where operations can scan deployed servers for vulnerabilities and implement firewall rules to further secure inter- and intra-server communications.

Technically speaking, CloudPassage is a SaaS provider that leverages a small-footprint daemon, integrated into the guest operating system, to provide centralized vulnerability assessments and configuration of host-based security tools in “the cloud” (where “the cloud” is private, public, or hybrid). The use of a message-based, queuing-style integration system is intriguing. Discussions around how, exactly, Infrastructure 2.0 and Intercloud-based integration could be achieved have often come down to similar architectures. It will be interesting to see how well CloudPassage’s Grid scales out and whether or not it can maintain performance and timeliness of configuration under heavier load, common concerns regarding queuing-based architectures.
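The outbound-only, message-based pattern described here can be sketched roughly as follows. The class and message shapes are illustrative assumptions, not CloudPassage’s actual API; the point is that the daemon opens no listening ports and only ever initiates connections to the grid:

```python
import queue

class Grid:
    """Stand-in for the central message service: one queue of pending
    commands for daemons, one queue of results coming back."""
    def __init__(self):
        self.commands = queue.Queue()
        self.results = queue.Queue()

class Daemon:
    """A per-server agent that polls outbound; it never listens."""
    def __init__(self, grid, server_id):
        self.grid = grid
        self.server_id = server_id

    def poll_once(self):
        """One outbound poll: fetch a command if any, 'run' it,
        and post the result back to the grid."""
        try:
            cmd = self.grid.commands.get_nowait()
        except queue.Empty:
            return None  # nothing to do this interval
        result = {"server": self.server_id, "command": cmd, "status": "done"}
        self.grid.results.put(result)
        return result
```

In a real deployment the two queues would live behind a secured remote service and the daemon would poll on a timer; the scaling question raised above is exactly how those central queues behave as the daemon count grows.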

The second solution from CloudPassage is its Halo Firewall. The firewall is deployed like any other host-based firewall, but it can be managed via the Halo Daemon as well. One of the exciting facets of this firewall and its management method is that it eliminates the need to “touch” every server upon which the firewall is deployed. It allows you to group-manage host-based firewalls in a simple way through the Halo Portal. Using a simple GUI, you can easily create groups, define firewall policies, and deploy to all servers assigned to the group. What’s happening under the covers is the creation of iptables rules and a push of that configuration to all firewall instances in a group.
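A rough sketch of what that under-the-covers step might look like: a group policy rendered into iptables commands that would then be pushed to every member of the group. The policy format and function below are hypothetical, used only to illustrate the idea:

```python
def render_iptables(policy):
    """Render a simple allowlist policy into iptables commands,
    default-deny inbound, with stateful return traffic allowed."""
    rules = [
        "iptables -P INPUT DROP",
        "iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT",
    ]
    for rule in policy["allow"]:
        rules.append(
            f"iptables -A INPUT -p {rule['proto']} --dport {rule['port']} "
            f"-s {rule.get('source', '0.0.0.0/0')} -j ACCEPT"
        )
    return rules

# Hypothetical policy for a "web servers" group: HTTPS from anywhere,
# SSH only from the internal network.
web_group = {"allow": [
    {"proto": "tcp", "port": 443},
    {"proto": "tcp", "port": 22, "source": "10.0.0.0/8"},
]}
```

Because the rules are generated from the group policy rather than hand-edited per host, adding a server to the group is all it takes to give it the correct ruleset.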

What ought to make ops and security folks even happier about such a solution is that cloning a server results in an automatic update: the cloned server is secured automatically based on the group from which its parent was derived, and all rules that might be impacted by that addition to the group are automagically updated.

This piece of the puzzle is what’s missing from most modern security infrastructure – the ability to automatically update or modify configuration based on the current operating environment: context, if you will. This is yet another dynamic data center concern that has long eluded many: how to automatically identify and incorporate newly provisioned applications/virtual machines into the application delivery flow. Security, load balancing, acceleration, authentication – all these “things” that are a part of application delivery must be updated when an application is provisioned, or shut down. Part of the core problem with extending security practices into dynamic environments like cloud computing is that so many security infrastructure solutions are not Infrastructure 2.0 enabled; they are still tightly coupled to IP addresses and require a static network topology, something not assured in dynamically provisioned environments. CloudPassage has implemented an effective means by which at least a portion of the security side of application delivery concerns related to extreme dynamism can be more easily addressed.

As of launch, CloudPassage supports Linux-based operating systems with plans to expand to Windows-based offerings in the future.


Last update: 24-Jan-2011