#infosec #apt Advanced persistent threats are the new black in security. A more context-aware architecture may help avoid compromise and the ensuing ambush.
Meet the new attack, same as the old attack. That’s because it is an old attack. Really. It’s an attack that’s already been executed, the results of which have lain dormant, waiting for the highest bidder to lease them out. Advanced persistent threats, or APTs, are not new, but because of their longevity they are only beginning to receive the attention they deserve. An APT is so named because the exploit mechanism is deposited long before it is used: systems are compromised not to steal information or resources today, but down the road. Trojans and other malware are embedded in compromised systems and later leased out for use in attacks – either for their resources or for the access to internal networks they provide.
This class of threat was recently identified as responsible for the attack that hit RSA and is thought to have affected 760 other organizations, many of them high-profile targets such as Google, Microsoft, and a significant percentage of Fortune 100 companies.
Hundreds of Organizations Targeted in Attack That Hit RSA
The attack that hit RSA earlier this year appears to have hit computer systems at other organizations as well. Information obtained from an undisclosed source suggests that at least 760 other organizations were compromised in the same set of attacks. The organizations listed have computer systems that were found to be checking in for instructions with the same infrastructure that was used in the RSA attack. Some of those listed are Internet service providers (ISPs) and are probably listed because subscribers were infected. Some of those on the list are anti-virus companies and may appear on the list because they deliberately infected systems with malware in an attempt to reverse-engineer it. The attack used more than 300 command and control networks located primarily in China and South Korea.
One of the most commonly suggested mitigations is to simply deny access to resources and applications based on country, as the majority of command and control networks were located in China and South Korea. While this is certainly a valid approach, many organizations – particularly those with a highly global customer or employee base – simply cannot afford to cut off entire countries from access. But the threat specifically from those regions is real and should be addressed. To ignore it is to invite peril.
A less disruptive (and less drastic) solution may be found in a more dynamic, context-aware infrastructure architecture. Rather than establishing a wholly negative security posture, i.e. deny all traffic from country X and Y, it may be advantageous to simply subject requests and traffic coming from specific countries or domains to additional scrutiny. While many organizations apply such scrutiny by default, it can be a source of undesirable latency in communications and is therefore either not universally applied or applied with less depth. Thus it may be desirable to pre-screen either all requests or merely a subset selected based on geo-location or domain.
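The triage described above can be sketched as a simple routing policy. This is a minimal illustration, not a production implementation: the country codes, pool names, and the hypothetical `route_request` function are placeholders for whatever geo-location data and pool definitions a real application delivery controller would supply.

```python
# Sketch of a geo-based triage policy: rather than denying entire
# countries outright, requests from higher-risk sources are routed to a
# deeper-inspection pool, while everything else goes straight to the
# application. All names and values here are illustrative assumptions.

HIGH_SCRUTINY_COUNTRIES = {"CN", "KR"}  # example set, not a recommendation
HIGH_SCRUTINY_DOMAINS = {"suspicious.example"}  # hypothetical watch list

def route_request(client_country: str, client_domain: str) -> str:
    """Return the name of the pool that should handle this request."""
    if client_country in HIGH_SCRUTINY_COUNTRIES:
        return "deep-inspection-pool"   # security infrastructure first
    if client_domain in HIGH_SCRUTINY_DOMAINS:
        return "deep-inspection-pool"
    return "application-pool"           # direct to application servers
```

The point of the sketch is the shape of the decision, not the lists themselves: the policy subjects a subset of traffic to extra scrutiny instead of imposing a blanket deny.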
The goal in both cases is to architect a data flow that enables deeper scrutiny only when necessary, avoiding performance penalties for the rest. Achieving this requires leveraging a strategic point of control that can make routing decisions based on the identified variables.
A context-aware, dynamic application delivery controller provides the means by which such routing can easily be achieved.
While application delivery controllers are generally associated with load balancing – and specifically load balancing of applications – they can also be used to dynamically route traffic to both applications and infrastructure. This capability enables both infrastructure and application resources to scale and supports a more efficient, intelligent architecture.
Using the inspection capabilities of an application delivery controller – typically positioned topologically at the edge of the network, as one of the first network components to receive inbound traffic – policies can be enforced that route requests inbound from identified networks, domains, users, or even devices to either the application or the security infrastructure for further analysis. The same inspection capability further allows inbound requests to be tracked such that once a particular session has been “cleared” by both the application delivery controller and the security infrastructure, it can be routed directly to the application, eliminating the additional latency associated with deeper analysis by security components.
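The clearance-tracking idea can be sketched as a small state table keyed by session. Everything here is an assumption for illustration: the TTL value, the pool names, and the session-identifier scheme would all come from the actual deployment, and a real controller would track this state in its own session tables rather than a Python dict.

```python
import time

# Sketch of session "clearance" tracking: a session is first routed
# through the security tier; once it has been marked cleared, subsequent
# requests within a TTL window skip deep inspection and go directly to
# the application. Names and the TTL are illustrative assumptions.

CLEARED_TTL = 600.0  # seconds a clearance remains valid (assumption)
_cleared = {}        # session id -> monotonic time when cleared

def mark_cleared(session_id: str) -> None:
    """Record that the security infrastructure has vetted this session."""
    _cleared[session_id] = time.monotonic()

def route(session_id: str) -> str:
    """Return the pool for this request based on clearance state."""
    cleared_at = _cleared.get(session_id)
    if cleared_at is not None and time.monotonic() - cleared_at < CLEARED_TTL:
        return "application-pool"       # already vetted, skip security tier
    return "security-inspection-pool"   # send for deeper analysis first
```

The design choice worth noting is the TTL: clearance should expire, so a long-lived session is periodically re-screened rather than trusted indefinitely.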
While certainly not a panacea, a more capable infrastructure architecture provides the opportunity to better screen inbound traffic – particularly traffic identified as carrying a higher risk of a malicious payload – without imposing a performance penalty on traffic that poses no identified risk or that has already been evaluated and cleared as legitimate.
There are certainly other data flow processes and architectures that can be designed to assist in identifying the source of advanced persistent threats. The key is a flexible infrastructure that can provide the visibility of context associated with every request and the control required to trigger and enforce such a flow.