Security Model
F5 Distributed Cloud - Mitigation for Cross Tenant Origin Exposure (CTOE)
F5 Distributed Cloud (XC) offers a suite of powerful features designed to simplify the lives of administrators and engineers. A key aspect of this ease of use comes from shared objects, such as the Regional Edge proxies, which use well-known public IP addresses. While this shared infrastructure enhances scalability and efficiency, it can also present risk if leveraged by attackers; in this case, cross tenant origin exposure (CTOE).

For instance:

- Customer(x) has tenant(x) in XC with a load balancer pointing to their public IP origin servers. These may sit behind a perimeter firewall NAT or be actual public IPs on the servers themselves.
- The customer's perimeter firewall is configured to deny all inbound traffic to the public IP for site1.example.com.
- The perimeter firewall is configured to allow inbound traffic to the public IP for site1.example.com only from XC IPs, which is a well-known, publicly documented shared IP range (see the XC Proxy IPs reference doc).

This setup is generally considered a minimum best practice because it restricts traffic to only those sources originating from XC. However, depending on the organization's risk appetite, this level of security may be insufficient.

The Risk

Another account, tenant(y), within Distributed Cloud could create a load balancer and point it at the public IP or DNS name of the origin pools for tenant(x). The attacker must know or learn the actual origin server's IP or network segment to perform this attack, and that discovery is fairly trivial; there are many approaches.

In addition, what if the origin pool in tenant(x) points to a DNS name that resolves to public IPs? This is common with SaaS API gateways, such as those from AWS and Azure to name a few, where the gateways all use the same DNS name respective to their cloud. Same DNS = same IPs = easy to learn or guess origin IPs. A common flow where a customer uses XC for WAF/WAAP and a third-party SaaS solution for an API gateway might be Client -> XC (LB/WAAP) -> API GW (public IP) -> API.

In this default configuration, an attacker could learn the customer's public NAT IP and add it to their own origin pool. They can then launch attacks from tenant(y) that are sourced from the XC IPs and therefore allowed by the customer(x) perimeter firewall.

Mitigation

There are at least four ways to mitigate this risk.

1. L7 header - If the origin servers (on-premises or SaaS) have something in front of them that is "L7 aware", or can themselves be configured to do header validation, a custom HTTP request header can be injected into the flow by the load balancer in tenant(x). Tenant(y) would not know or be able to see this header. Traffic without the header would still make it all the way to the L7-aware service before being dropped, so while this suffices for an L7 DoS or other L7 attack, it does not help with an L3/4 attack, which could still make its way through the infrastructure. (A minimal origin-side sketch of this check, combined with the mTLS option below, follows this list.)

2. mTLS - A unique differentiator for F5 XC is the ability to use server-side mTLS. If the customer has the capability on the web server or service, or on something in front of it similar to the previous L7 header example, we can add an additional layer of source validation using mutual certificate authentication (mTLS). Even a self-signed certificate adds a lot of value here: no cert = no layer 7 access to the app or service. This does not prevent an L3/4 attack, but it will prevent unwanted application access.

3. Customer Edge (CE) proxies are deployable software that creates a private mesh back to our Application Delivery Network (ADN). These come with additional cost and need to be deployed at each location, thus creating a private mesh or overlay network that is unavailable outside of the tenant. In this scenario, attacker traffic could still reach the public IP of (or in front of) the CE before being dropped, protecting the application itself but still potentially allowing bad L3/4 traffic.

4. Private Link is a paid add-on to XC that enables connectivity between XC, clients, and resources. It offers many advantages, particularly when addressing regulatory and other security compliance requirements. Perimeter firewall rules can be simplified to allow traffic exclusively from Private Links, which are accessible only from the designated tenancy. Private Link can mitigate L3-L7 attacks because the link is entirely private by design (see the XC Private Link overview).
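To make the first two options more concrete, here is a minimal origin-side sketch, assuming a Python origin service. The header name, shared-secret value, certificate file names, and port are hypothetical placeholders rather than anything prescribed by XC or this article; in practice, the custom header would be injected by the HTTP load balancer in tenant(x), and the client certificate would be presented by XC's server-side mTLS configuration.

```python
# Sketch of an origin-side guard illustrating mitigations 1 (header validation)
# and 2 (server-side mTLS). Header name, secret, file names, and port are
# hypothetical placeholders, not XC-defined values.
import ssl
from http.server import BaseHTTPRequestHandler, HTTPServer

EXPECTED_HEADER = "X-XC-Shared-Secret"              # header the tenant(x) LB injects
EXPECTED_VALUE = "replace-with-a-long-random-value"  # placeholder shared secret

class GuardedHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Mitigation 1: reject any request that lacks the header only the
        # legitimate tenant's load balancer adds. Tenant(y) cannot see or guess it.
        if self.headers.get(EXPECTED_HEADER) != EXPECTED_VALUE:
            self.send_error(403, "Missing or invalid origin header")
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello from the protected origin\n")

def main():
    server = HTTPServer(("0.0.0.0", 8443), GuardedHandler)

    # Mitigation 2: require a client certificate signed by a CA we trust
    # (even a self-signed/private CA). No cert = no layer 7 access.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="server.pem", keyfile="server.key")
    ctx.load_verify_locations(cafile="trusted-client-ca.pem")
    ctx.verify_mode = ssl.CERT_REQUIRED
    server.socket = ctx.wrap_socket(server.socket, server_side=True)

    server.serve_forever()

if __name__ == "__main__":
    main()
```

Either check alone narrows who can reach the application through the shared XC data path; combined, a request must both complete the mutual TLS handshake and carry the injected header before it ever touches application logic. Neither stops L3/4 traffic from arriving, which is why the remaining options and the DDoS note below still matter.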
A Word on L3/4 DDoS

L3/4 attacks were brought up several times above when discussing the technicalities of each mitigation method. While an L3/4 attack is not always distributed by nature, most are. One very important point to keep in mind is that XC natively provides L3/4 DDoS mitigation at our Regional Edges. Even in the examples above where "attack" traffic could make it all the way to the app, or at least to the perimeter, a true DDoS would be picked up by our Regional Edges and automatically mitigated.

Conclusion

In today's interconnected cloud ecosystems, mitigating CTOE attacks is crucial to maintaining service availability and performance. By understanding the vulnerabilities that stem from cross-cloud communications and applying best practices, organizations can safeguard their systems from exploitation. As we continue to expand our cloud footprints, proactive security measures are not only necessary but must evolve alongside the complexity of the environments we manage. Effective CTOE prevention is an essential part of ensuring a resilient, high-performing network in this cloud-driven world.

Like this article? Please drop a like or line below!

Verify, but Never Trust?
Much is being written lately about so-called "Zero Trust Model" security, which prompts me to ask, "Since when did we security folk trust anyone?" On the NIST site, you'll find a thorough report NIST commissioned from Forrester. A main theme of this report is that the old security axiom of "trust, but verify" is now obsolete. Hardened perimeters, once successfully traversed, leave infrastructures that trust the user and traffic implicitly, to their unending peril.

What does all this mean for those of us tasked with security? Well, it's not a new concept, just a new label. We have known for years that the notion of a perimeter in the data center is evaporating, largely due to the increasingly browser-driven nature of all apps and threats moving up the stack to the application. The network "perimeter" is largely intact, but with seemingly everything of importance transported via HTTP (and increasingly TLS-encrypted), our infrastructures may as well be open at the network level.

Let's consider the fundamental tenets set forth in the report linked above:

- Zero Trust is applicable for every organization/industry.
- Zero Trust is technology and vendor agnostic.
- Zero Trust is scalable.
- Zero Trust protects civil liberties by protecting personal/confidential data.

First, if we're in security, we should be considering how Zero Trust applies to and can help improve our organization's security posture. We should be evangelizing this new way of thinking internally, in an effort to educate all aspects of the organization: networking, platform, application development, and any other team that may have a vested stake. Since Zero Trust is vendor- and technology-agnostic, it's incumbent upon everyone to evaluate current technologies, solutions, and architectures to determine whether current implementations adhere to a Zero Trust model. No one piece of technology or one vendor will bring you to Zero Trust nirvana.

Next, we must consider what is meant by "scalable" in this context. F5 has long been in the business of highly scalable solutions, whether for offloading encryption, web application security, access management, or good old-fashioned load balancing. However, that's only part of what is meant by scalable here. Does our implementation of a Zero Trust model scale across the organization? Does it apply to both internal and external users and applications? Is access to data cumbersome and overwhelmed by security controls? Does it consider all paths to sensitive data?

On that last question, regarding paths to data, we hit upon the most important tenet above: the protection of data. In the end, "data wants to be free", and it is up to the security measures in place to ensure that it still travels freely, but only to those individuals who are properly authorized. This implies that web-based access paths (Internet and intranet apps), along with other non-HTTP paths such as drive mounts or direct database access, must all be considered and properly secured. Protecting data then requires good access management, good input validation, and at-rest data encryption. In order to be scalable, these security measures must be more or less frictionless from a UX perspective. These are high bars, indeed.
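As a language-level illustration only of that "verify on every path" idea, here is a minimal sketch assuming a hypothetical Python data-access layer; the table name, token scheme, and helper names are made up for the example and are not ASM, APM, or BIG-IP functionality.

```python
# Minimal illustration of "never trust, always verify" at the application layer.
# Hypothetical names throughout; not a BIG-IP/ASM/APM API.
import hmac
import sqlite3

def token_is_valid(presented_token: str, expected_token: str) -> bool:
    # Verify the caller on every request -- no implicit trust for "internal" traffic.
    return hmac.compare_digest(presented_token, expected_token)

def fetch_user(conn: sqlite3.Connection,
               presented_token: str,
               expected_token: str,
               user_id: str):
    if not token_is_valid(presented_token, expected_token):
        raise PermissionError("request not authorized")
    # Input validation: a parameterized query keeps attacker-supplied input out
    # of the SQL text, closing one common path (SQL injection) to the data.
    return conn.execute(
        "SELECT id, name FROM users WHERE id = ?", (user_id,)
    ).fetchone()
```

The point of the sketch is simply that every access path to the data, not just the perimeter, performs its own verification; platform controls then enforce the same discipline for the paths application code never sees.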
The BIG-IP platform is uniquely instrumented to deliver business applications and facilitate a Zero Trust model. Whether it is providing good input validation to prevent data exfiltration via CSRF or SQL injection with Application Security Manager (ASM), or integrating diverse access management mechanisms via Access Policy Manager (APM) without the need for any special clients or portals, BIG-IP has a part to play in your Zero Trust implementation.

Zero Trust is nothing new; we have been working for years to improve our application-layer defenses through better coding, better frameworks, and new web technologies. What Zero Trust does provide is a codified framework to measure our success in developing highly secure and scalable infrastructures.

Has your organization begun considering Zero Trust Model security? What challenges are you seeing, and how are F5 technologies factoring in (or not) along the way to overcoming those challenges? I look forward to your comments below.