Security Best Practices for F5 Products
My colleagues previously wrote this article as security best practice guidance for BIG-IP and BIG-IQ. This is an updated overview of key recommendations, not an exhaustive list of steps for securing an F5 product. I’ve also included updates to keep it relevant, including the newly published product hardening guides: K53108777: Hardening your F5 system and K45321906: Harden your BIG-IQ system, along with the two newly published K000156803: Hardening NGINX and NGINX Plus and K000156807: Secure the AOM subsystem.

Regarding BIG-IP, the F5 SIRT team recently collaborated with the F5 iHealth team to create new diagnostic heuristics that align with these hardening best practices. These heuristics are now included in the Security tab of QKViews under a new "Security Best Practices" panel. You can also filter the alerts in the Diagnostics tab to show "Security_best_practices".

Beyond these resources, there is extensive documentation available on MyF5 detailing specific steps for configuring functionality, though many articles are version-specific due to changes and enhancements across major releases. The most relevant links for configuration can usually be found within the hardening guides listed above.

Additionally, F5 documentation occasionally refers to the "control-plane" and "data-plane." The control-plane includes all methods for managing a device or installation, such as the Web UI (TMUI), iControl REST, iControl SOAP, SSH, and related daemons like big3d and bigd. The data-plane, on the other hand, refers to all constructs that handle user traffic, such as Virtual Servers, NATs, SNATs, and other similar components. References to these terms in the rest of this article use those definitions.

Step 1: Minimize access to the control-plane

It is crucial to implement sound security practices for any system, especially those in privileged network positions like BIG-IP or edge firewalls. One fundamental principle is keeping the control-plane off the internet whenever possible, with limited exceptions such as big3d communications between BIG-IP DNS and BIG-IP LTM devices that may traverse the internet. Ideally, access to the control-plane should be restricted solely to authorized IT staff. Measures should be taken to control access to control-plane services (such as SSH, HTTP, and SNMP) to ensure traffic only comes from expected hosts, as outlined in K13092: Overview of securing access to the BIG-IP system and 10 Settings to Lock Down Your BIG-IP.

Adding pre-login and post-login banners is another effective security step, as they can help enforce security policies—such as informing users that activities are logged—or notify users of system updates like scheduled maintenance. Guidance for configuring banners can be found in K6068: Configuring a pre-login or post-login message banner for the BIG-IP or Enterprise Manager system and K71515276: Configuring a pre-login or post-login message banner for the BIG-IQ system.

Ideally, control-plane access should be managed via a management DMZ, and additional restrictions on lateral movement within the DMZ can be enforced through micro-segmentation or the use of on-device controls. For BIG-IP, these on-device controls were notably enhanced in version 14.1 and above with a robust management interface firewall. Access to the management DMZ itself should be through a jump box or VPN with 2FA enabled.
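As a concrete illustration of the source-address restrictions described above, the allow lists for the Configuration utility and the SSH daemon can be set from tmsh. This is a minimal sketch only: the management subnet shown is a placeholder, and you should confirm the exact syntax for your version against K5380 and K13309 before applying it.

```sh
# Restrict the Configuration utility (TMUI) to a trusted management subnet
# (203.0.113.0/24 is a placeholder - substitute your own management network)
tmsh modify /sys httpd allow replace-all-with { 203.0.113.0/24 }

# Apply a matching restriction to the SSH daemon
# (sshd allow entries also accept wildcard forms such as 203.0.113.*)
tmsh modify /sys sshd allow replace-all-with { 203.0.113.* }

# Persist the change to the stored configuration
tmsh save /sys config
```

Get the subnet wrong and you will be reduced to console access, so double-check it before saving.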
Jump boxes provide a dedicated and secure environment for administrative tasks, offering substantial protection against attacks like XSS and CSRF, because administrators will use them solely for device administration rather than general browsing or other activities. In the absence of this infrastructure, using a local virtual machine or dedicated browser for administrative duties is still recommended to mitigate risks from phishing-delivered XSS and CSRF attacks. While changes to network design to accommodate a management DMZ may take time, the on-device management interface firewall can be implemented independently, along with a mandate for more secure administrative environments.

Several articles provide guidance for minimizing access to the control-plane, including K5380: Specify allowable IP ranges for SSH access, K11719: Mitigating risk from SSH brute-force login attacks, K13309: Restricting access to the Configuration utility by source IP address (11.x–17.x), K9908: Configure an automatic logout for idle sessions, and K75211108: Configure automatic logout for idle sessions on the BIG-IQ system. Furthermore, articles like K80425458: Modifying the list of ciphers and MAC and key exchange algorithms used by the SSH service on BIG-IP or BIG-IQ systems, K92748202: Restrict access to the BIG-IQ management interface using network firewall rules, and K31401771: Restricting access to the BIG-IQ or F5 iWorkflow user interface by source IP address provide additional strategies for securing critical management interfaces.

Step 2: BIG-IP Management and Self IPs

To enhance security, ensure that all Self IPs are set to "Lockdown None" to prevent the exposure of control-plane services unless explicitly required. If a service such as big3d (port 4353) needs to be exposed, carefully restrict access to only the specific ports required. For dedicated management VLANs and non-routable HA VLANs, the "Allow Default" setting can be used, though it is recommended to allow only specific ports whenever possible for tighter access control. Relevant guidance can be found in K17333: Overview of port lockdown behavior (12.x–17.x), K39403510: Managing the port lockdown configuration on the BIG-IQ system, and K15612: Connectivity requirements for the BIG-IQ system.

Out-of-band management via a dedicated interface or VLAN is strongly recommended for optimal security. This can be implemented using the hardware platform’s dedicated management interface or a dedicated management VLAN on production interfaces when a dedicated management interface is unavailable, such as in single-NIC cloud deployments.

Step 3: Hardening the BIG-IP

To improve security, consider using a Hardware Security Module (HSM) for storing sensitive information such as SSL keys. Options like an onboard FIPS HSM or NetHSM offer a high level of protection, while the built-in SecureVault functionality can provide additional security by making SSL key recovery more difficult for unauthorized users who gain access to the BIG-IP’s control plane. For more details about SecureVault, F5 offers a knowledge base article: K73034260: Overview of the BIG-IP system Secure Vault feature. Additionally, reduce your attack surface by provisioning modules only as needed instead of upfront, which can also decrease the frequency of applicable Security Advisories.
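As a hedged sketch of trimming provisioned modules from tmsh (the module shown is only an example; confirm what your deployment actually uses before deprovisioning anything, as provisioning changes can restart services):

```sh
# Review which modules are currently provisioned
tmsh list /sys provision

# Deprovision a module that is licensed but not in use (asm is just an example)
tmsh modify /sys provision asm level none

# Persist the change
tmsh save /sys config
```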
For further access restriction, Appliance mode is another option designed to limit BIG-IP administrative access, making the system behave more like a typical network appliance rather than a multi-user UNIX device (K12815: Overview of Appliance mode).

For authentication, the BIG-IP control-plane should integrate with enterprise-grade AAA solutions such as RADIUS, TACACS+, or LDAP, as these bring administrative accounts under pre-existing enterprise security practices. However, note that the root and admin passwords are available as fallback authentication, so these should be configured with strong, secure passwords. Guidance for setting up AAA solutions can be found in articles such as K8811: Configuring TACACS+ authentication for BIG-IP administrative users, K11072: Configuring LDAP remote authentication for Active Directory, and K17403: Configuring RADIUS authentication for administrative users, as well as corresponding BIG-IQ articles like K31586420: Configuring the BIG-IQ system to use TACACS+ based authentication and authorization, K00153876: Enabling LDAP remote authentication for Advanced Shell access to the BIG-IQ system, and K51458353: Configuring the BIG-IQ system to use RADIUS authentication.

If remote authentication is not being used, it is essential to enforce a strong password policy for local accounts on the BIG-IP or BIG-IQ systems. Several articles on MyF5 provide detailed instructions for locking down authentication on F5 devices, including K15497: Configuring a secure password policy for the BIG-IP system, K13121: Changing system maintenance account passwords, K4139: Configuring the BIG-IP system to enforce the use of strict passwords, K32203233: The root and admin accounts are now subject to the enforcement restrictions of the secure password policy, K12173: Overview of BIG-IP administrative access controls, and K49507549: Configuring a secure password policy for the BIG-IQ system.

For systems running BIG-IP 15.0.0 or later, remote APM authentication can be used to manage control-plane access while also implementing two-factor or multi-factor authentication (2FA/MFA) using the APM system. For further details, see https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-local-traffic-manager-implementations/implementing-apm-system-authentication.html.

Step 4: Monitoring

To maintain comprehensive security and monitoring, it is recommended to configure off-box syslog, ideally directed to a SIEM, to ensure you have a reliable and immutable record of events such as configuration changes, potential indicators of compromise, and system issues. Alerts based on these logs can be set up to monitor critical events in real time. Additionally, consider utilizing SNMP traps and polling to keep track of system performance and load while monitoring for potential attack indicators against the data-plane, such as denial of service (DoS) attacks.

Regularly uploading qkviews to iHealth is another beneficial practice—unless restricted by enterprise security policies—as iHealth’s built-in heuristics can identify potential device misconfigurations, vulnerabilities specific to your version, hardware, or configuration, and any indicators of compromise within your system. This process can be automated via BIG-IQ, which also has the capability to automate regular configuration snapshots. For enhanced awareness of system access, refer to resources such as K13426: Monitoring login attempts (11.x–17.x) and K08662997: Monitoring login attempts on the BIG-IQ system.
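As a minimal sketch of the off-box logging recommendation above, a remote syslog destination can be added from tmsh; the object name, address, and port below are placeholders for your own SIEM or collector, and the exact syntax should be verified against the MyF5 documentation for your version.

```sh
# Forward system logs to a remote syslog server / SIEM
# (siem-collector and 192.0.2.50 are placeholders)
tmsh modify /sys syslog remote-servers add { siem-collector { host 192.0.2.50 remote-port 514 } }

# Persist the change
tmsh save /sys config
```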
Step 5: Maintaining

It is highly recommended to run a recent software release, ideally within the last two LTS (Long-Term Support) branches, as F5 continuously enhances functionality to address new attack vectors and ensure rapid adoption of security fixes. While some customers opt for engineering hotfixes to resolve specific issues, it is advised to migrate back to a mainline branch as soon as the necessary fixes are incorporated, to minimize time-to-patch for newly discovered defects or vulnerabilities. Useful references include K9957: Creating a custom RSS feed to view new and updated documents, K2200: Most recent versions of F5 software, K9502: BIG-IP hotfix and point release matrix, and K15113: BIG-IQ hotfix and point release matrix.

To stay informed about significant vulnerabilities, customers should subscribe to the F5 Security mailing list to receive alerts for critical vulnerabilities, including Quarterly Security Notifications (QSNs) and out-of-band notifications for high-impact third-party vulnerabilities. For more information about the QSN process and scheduling, consult K67091411: Guidance for Quarterly Security Notifications and K9970: Subscribe to email notifications regarding F5 products and security announcements.

Additionally, reporting software issues—whether security-related or not—ensures the continuous improvement of F5 software. Any issues reported to F5 allow developers to address them promptly, facilitating early fixes. Resources such as K4602: Overview of the F5 security vulnerability response policy and K4918: Overview of the F5 critical issue hotfix policy provide more insight into how F5 handles reported vulnerabilities.

Regular backups of your devices are another critical aspect of maintaining security and stability. Backups ensure you have a reliable, uncompromised configuration to restore in case a device needs reimaging. BIG-IQ can assist in automating this process, but it is crucial to thoroughly test and validate backup scripts to ensure they capture valid data and do not unintentionally delete necessary files during backup rotation.

Step 6: Recovery

Although compromise is relatively uncommon, adhering to the outlined security steps and best practices can significantly reduce the likelihood of it occurring. However, preparation is critical to ensuring a successful recovery should a compromise take place. Since recovery efforts often involve multiple departments within an organization, having a documented recovery plan is essential.

At a minimum, the plan should address key areas such as how to isolate the compromised device. For example, if a device pair is compromised, should a potentially compromised box remain online and serve customers despite serious implications like PCI or GDPR noncompliance? Does your application delivery design allow you to continue serving customers after losing a device pair, or should you activate Disaster Recovery? The plan should also define when and how devices can be reintroduced into service. If company policy requires devices to be held for forensic analysis, ensure you have spare devices available to maintain uninterrupted service. Include steps for reimaging devices from scratch and recovering configurations from backups, as well as revoking and replacing potentially compromised SSL keys. Additionally, consider other secrets that might need to be replaced, such as RADIUS, TACACS, or SNMP credentials.
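As a hedged sketch of the backup and restore steps such a plan depends on, a BIG-IP configuration can be captured in a UCS archive from tmsh; the file name and passphrase are placeholders, and restores should be rehearsed before you ever need them in anger.

```sh
# Create a UCS archive of the current configuration, encrypted with a passphrase
tmsh save /sys ucs /var/local/ucs/pre-incident-backup.ucs passphrase "example-passphrase"

# On a freshly reimaged device, restore that archive
tmsh load /sys ucs /var/local/ucs/pre-incident-backup.ucs passphrase "example-passphrase"
```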
Although this level of preparation may seem burdensome, having these discussions in advance is far easier than making critical, service-impacting decisions under pressure. Moreover, your recovery plan should not be limited to only your F5 systems but should account for broader infrastructure. For additional guidance, refer to K11438344: Considerations and guidance when you suspect a security compromise on a BIG-IP system.

Step 7: Secure Against Brute Force and Application Attacks

Protecting your F5 system is only part of securing your network; it is equally important to protect the applications and application servers that sit behind it. F5 systems can be configured in numerous ways to provide protection not only for the system itself but also for your applications.

Starting at the lower layers, protections can be implemented using TCP profiles or by adding additional modules like F5’s Advanced Firewall Manager (AFM). AFM is a high-performance, stateful, full-proxy network firewall designed to safeguard data centers from incoming threats. It supports widely used protocols such as HTTP/S, SMTP, DNS, SIP, and FTP. For further guidance, consult resources such as K25301105: Mitigate HTTP SLOWRead attacks, K37718515: Investigating BIG-IP AFM attack vector logs and tuning the DoS Vector Attack Type, and K41305885: BIG-IP AFM DoS vectors.

At higher layers, HTTP applications can be protected using a Web Application Firewall (WAF). F5 offers several WAF solutions, including Distributed Cloud, NGINX App Protect WAF, and Advanced WAF/ASM. With the increasing complexity of web applications, adding a WAF has become essential. A WAF provides significant mitigation capabilities and can be configured to protect against emerging attacks, offering robust defenses against threats such as authentication attacks and brute-force attempts. For additional information, refer to K07359270: Succeeding with application security, K15405450: Overview of web scraping detection, K18650749: Configuring brute force attack protection (13.1.0 and later), and K14199: Determining if the BIG-IP ASM system has detected and prevented a Slow HTTP POST DDoS attack. Implementing these layers of protection ensures comprehensive security for both your F5 systems and the applications they support.

Step 8: Prevent Data Leakage

The BIG-IP system offers several HTTP protections even without utilizing a Web Application Firewall (WAF). For example, HTTP cookies can be encrypted to prevent the exposure of sensitive data, ensuring better security for client-server communication. Additionally, the BIG-IP system can be configured to remove sensitive HTTP response headers that might otherwise reveal information about the backend server, thereby reducing the risk of information leakage. Furthermore, an HTTP profile can be configured to enable Layer 7 inspections, ensuring that clients remain RFC compliant. These features collectively help safeguard against the leakage of sensitive data and enhance the overall security of HTTP transactions. For additional details, refer to resources such as K6917: Overview of BIG-IP persistence cookie encoding, K14784: Configuring cookie encryption within the HTTP profile, K23254150: Configuring cookie encryption for BIG-IP persistence cookies from the cookie persistence profile, and K40243113: Overview of the HTTP profile.
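As one hedged example of the cookie encryption mentioned above, the relevant settings live on the HTTP profile and can be managed from tmsh; the profile name, cookie name, and secret here are placeholders, and K14784 is the authoritative reference for the procedure.

```sh
# Create an HTTP profile that inherits from the default http profile
tmsh create /ltm profile http secure-http defaults-from http

# Encrypt a named application cookie so its value is not exposed to clients
# (JSESSIONID and the secret are placeholders)
tmsh modify /ltm profile http secure-http encrypt-cookies add { JSESSIONID } encrypt-cookie-secret "example-secret"
```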
Summary

As noted earlier, this list is not exhaustive and should be considered within the context of your organization's existing guidelines for securing, monitoring, and maintaining systems, as well as any disaster recovery plans in place. While the technical details may evolve over time as F5’s product offerings expand—whether with BIG-IP or the NGINX suite—the overarching principles of system security will largely remain constant. To assist with these efforts, there is a wealth of documentation available on MyF5 that outlines specific technical steps, additional resources, and best practices for securing systems. A few key references include K67091411: Guidance for Quarterly Security Notifications, K9970: Subscribing to email notifications regarding F5 products, K27404821: Using F5 iHealth to diagnose vulnerabilities, K11438344: Considerations and guidance when you suspect a security compromise on a BIG-IP system, K53108777: Hardening your F5 system, K45321906: Harden your BIG-IQ system, and K000156803: Hardening NGINX and NGINX Plus.

Understanding The TikTok Ban, Salt Typhoon and More | AppSec Monthly January Ep.27
In this episode of AppSec Monthly, our host MegaZone is joined by m_heath, Merlyn Albery-Speyer, and AubreyKingF5 as they dive into the latest cybersecurity news. We explore the complexities of the TikTok ban, the impact of geopolitical decisions on internet freedom, and the nuances of data sovereignty. Our experts also discuss the implications of recent breaches by Chinese state actors and the importance of using end-to-end encrypted apps to protect your data. Additionally, we shed light on the fascinating history of internet control and how it continues to evolve with emerging technologies. Stay tuned until the end for insights on the upcoming VulnCon 2025 and how you can participate. Don’t forget to subscribe for more AppSec insights!

WAF evasion techniques for Command Injection
Let’s talk about Command Injection; I’m going to talk about this specifically from the perspective of Web Application Firewalls (like BIG-IP Advanced WAF, BIG-IP Next WAF, F5 Distributed Cloud WAF and so on), but these concepts are generally applicable anywhere user input is used to construct commands run on the system, directly or indirectly.

So, what is Command Injection? To quote OWASP, who put it very nicely:

Command injection is an attack in which the goal is execution of arbitrary commands on the host operating system via a vulnerable application. Command injection attacks are possible when an application passes unsafe user supplied data (forms, cookies, HTTP headers etc.) to a system shell. In this attack, the attacker-supplied operating system commands are usually executed with the privileges of the vulnerable application. Command injection attacks are possible largely due to insufficient input validation. This attack differs from Code Injection, in that code injection allows the attacker to add their own code that is then executed by the application. In Command Injection, the attacker extends the default functionality of the application, which execute system commands, without the necessity of injecting code.

Like I say, in this case I’m going to talk about command injection to web applications, but they can happen in almost any piece of software that works on untrusted user input.

Perhaps the most famous example of a command injection vulnerability is Shellshock, a suite of vulnerabilities in the Unix Bash shell, and if you needed any proof that they can be hard things to find as a defender, Shellshock lived, undiscovered (or at least undisclosed, we’ve no way of proving that no malicious entities knew of the bug!) for 25 years, from 1989 to 2014, in one of the most widely used pieces of software in the world. The original Shellshock vulnerability involved a maliciously crafted environment variable containing (malicious) commands after a function definition, e.g.:

env x='() { :;}; echo vulnerable' bash -c "echo test"

On a vulnerable system, running the above commands would display “vulnerable” because of Bash continuing to execute the (injected) commands following the function definition. Injecting a command here requires the use of two specific characters plus the command: the semi-colon and space characters. If you imagine a web application passing commands to bash on a vulnerable system, you’ll see that it would be possible to block this attack simply by blocking requests containing semi-colon or space (or indeed having a signature for the full function definition and trailing semi-colon of “{ :;};”).

Bypassing protections

Any time there is a WAF in front of a vulnerable system – and sometimes even when there isn’t – an attacker must try to evade the rules preventing them from simply injecting their chosen command. You’ll often see this when attackers or scanners are looking for SQL injection vulnerabilities in web applications, replacing characters like ' with %27 or the space with %20 (and many other tricks), or using chunks of existing text with the SUBSTRING() function to construct queries without having to use the actual text. Many of the same tricks work for command injection vulnerabilities, and I’d like to talk about a specific example here because it’s one I hadn’t considered until it turned up in some real life traffic.

Bypassing WAF signatures using Environment Variables

Remember I just said you could construct SQL queries using sub-strings of existing text?
Well, if your target system is Windows-based and you know there’s a command injection vulnerability but you’re unable to exploit it due to character blocks or similar restrictions, then good news! Windows environment variables might be what you’re looking for.

Environment variables exist in most operating systems, and Microsoft ones are no exception – they date back to DOS and were one of the enhancements Microsoft brought to the table over and above CP/M (unlike the 8.3 filenames, which came right from CP/M!); their behaviour has been pretty much the same throughout, and there have always been a number of ‘default’ environment variables like PATH, TMP & TEMP. Current versions of Windows add a number of additional default environment variables like PROGRAMDATA, PROGRAMFILES etc.

Windows also allows the shell to return just part of an environment variable’s value using the following syntax, where the first parameter is the zero-based start position and the second is the number of characters to return:

%VARIABLE:~offset,length%

How is this useful to us, you ask? Let’s say you know you can inject a command, but you need a space in your command line; you want to inject “ping 127.0.0.1” but the space is dropped or the request is blocked by a WAF looking for “ping <IP>”. Well, then you just need an environment variable you know will have a space in it! %PROGRAMFILES%, by default, is going to be set to C:\Program Files on most systems, which has a space right there in the middle! All we need to do to get to it is use it as %PROGRAMFILES:~10,1%, for example:

ping%PROGRAMFILES:~10,1%127.0.0.1

Go ahead, fire up a command prompt, and try it out! You could even construct the whole command that way:

%PROGRAMFILES:~3,1%%SYSTEMROOT:~4,2%%PROGRAMFILES:~6,1%%PROGRAMFILES:~10,1%127.0.0.1

Again, fire up a command prompt and give that a try!

Protect against bypasses with BIG-IP Advanced WAF

Now here’s the good news: ASM includes signatures, by default, for all of those useful Windows environment variables (and the same for many other systems, too), so if you were to try the above on a vulnerable system with the right signatures in the policy, you’d still be blocked. All these signatures are part of the Predictable Resource Location Signatures signature set, so you’ll want to make sure you either have all signatures of Medium or above, or at least this set, assigned to your policy.

Summary

Command Injection is a huge topic, much bigger than I can talk about in one blog post here, but hopefully this shows you one way an attacker might try to evade protection in front of a vulnerable Windows system, and some ways in which you can protect it — BIG-IP Advanced WAF or F5 Distributed Cloud WAF both have signatures for this kind of evasion.

Enhancing Software Security with Rust: A Solution to Common Vulnerabilities
Introduction

The digital landscape is continually evolving, with cybersecurity threats growing both in sophistication and number. Among these threats, memory safety vulnerabilities stand out, contributing to a significant portion of software security issues today. Supported by guidance from CISA, NSA, and many other security minded individuals, there is an urgent need for programming practices that inherently mitigate memory safety risks.

Addressing Memory Safety with Rust

Rust is an open-source programming language renowned for its dedication to safety and performance. It effectively addresses common challenges related to memory safety and concurrency without sacrificing execution speed. In this article, we will explore the top three Common Weakness Enumerations (CWEs) from the 2023 Known Exploited Vulnerabilities list: Use After Free, Heap-based Buffer Overflow, and Out-of-bounds Write. All these CWEs directly relate to memory safety problems. Throughout this article, we will demonstrate how Rust’s unique capabilities serve as effective safeguards against these widespread concerns.

Note: While Rust also mitigates other critical issues such as double-free, dangling pointers, and concurrency issues like race conditions, deadlocks, and improper synchronization, these will not be covered in detail in this article.

CWE-416: Use After Free

This vulnerability occurs when a program continues to use a memory location after it has been freed, potentially leading to application crashes or, in more severe scenarios, arbitrary code execution. Languages that require manual memory management, such as C and C++, are typically vulnerable to this issue, as developers must explicitly manage memory allocation and deallocation.

Rust uniquely addresses CWE-416 through its ownership, borrowing, and lifetimes systems, catching potential vulnerabilities at compile time.

Ownership rules enforce that each piece of data is owned by a single entity. When this owner or piece of data goes out of scope, Rust automatically deallocates the memory associated with it, thereby eliminating the risk of accessing freed memory.

Borrowing allows functions to access data via references without taking ownership, a process carefully scrutinized by the borrow checker. This component of the compiler ensures that all borrowed references adhere strictly to lifetime rules, preventing them from outliving the data they reference, thereby avoiding use-after-free vulnerabilities.

Lifetimes specify the scope for which a reference is valid, enabling the compiler to track and manage the lifespan of data throughout the program. By requiring explicit lifetime annotations where necessary, Rust enforces a clear contract for how long data can be safely borrowed, further strengthening its memory safety by preventing dangling references that could lead to vulnerabilities.

CWE-122: Heap-based Buffer Overflow

Heap-based buffer overflows occur when data exceeds its allocated memory in the heap, potentially allowing attackers to read or write memory they shouldn't have access to. This can result in crashing the application or enabling arbitrary code execution. Such vulnerabilities are particularly prevalent in languages like C and C++, which do not automatically enforce bounds checking.

Rust effectively addresses CWE-122 through multiple security measures, including a type system, the principle of immutability by default, and robust memory safety abstractions.
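Before looking at those measures, here is a minimal, hedged sketch of the compile-time protection just described for CWE-416; the variable names are arbitrary, and the interesting part is the commented-out line, which the borrow checker rejects outright:

```rust
fn main() {
    let data = String::from("sensitive");
    let reference = &data; // an immutable borrow of `data`

    // drop(data); // uncommenting this fails to compile with
    //             // "cannot move out of `data` because it is borrowed" -
    //             // the borrow checker stops use-after-free at compile time,
    //             // before the program ever runs

    println!("{reference}"); // the borrow is still live here, so `data` must be too
}
```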
Type systems are critical for security, and Rust's system exemplifies this by being both statically and strongly typed. Static typing ensures all data types are defined before runtime, allowing the compiler to catch type errors early and mitigate related vulnerabilities. Strong typing in Rust requires explicit type conversions, guarding against unsafe coercions that could lead to issues like buffer overflows. Additionally, Rust enforces runtime bounds checking, which actively prevents heap-based buffer overflows and out-of-bounds writes by causing errors to panic rather than fail silently or behave unpredictably. Together, these features enhance security not only by enforcing strict type safety and data integrity but also by ensuring reliable and predictable error handling.

Immutability by default ensures that all data is immutable unless explicitly declared mutable. This design significantly reduces the risk of unintended data modifications that could lead to buffer overflows, and it prevents many common programming errors associated with memory corruption.

Memory safety abstractions provide high-level constructs such as Vec<T> for managing dynamic arrays and Box<T> for smart pointers. These abstractions come with built-in bounds checking, which is enforced at runtime. When data operations exceed their allocated bounds, Rust ensures that these operations result in controlled runtime panics, thus preventing unsafe memory access and preserving application integrity. Additionally, Rust promotes using iterators when working with collections. Iterators are both safe and efficient because they abstract away the need for manual bounds checking. This not only simplifies the code but also eliminates a common source of errors associated with direct index access, further enhancing safety and performance.

CWE-787: Out-of-bounds Write

This vulnerability involves writing data past the bounds of allocated memory, which can corrupt data, crash the application, or lead to code execution. It predominantly affects languages like C and C++, where bounds checking is not enforced automatically by the language and requires manual oversight by developers.

Rust addresses CWE-787 through its robust memory safety protocols, including automatic bounds checking for all memory write operations. This ensures that data stays within safe operational limits at both compilation and runtime stages, preventing potential security breaches. Additionally, features like the Option enum and fearless concurrency further safeguard against out-of-bounds writes by enforcing strict data handling and thread-safe access.

Automatic bounds checking applies to all memory write operations, effectively preventing data from being written outside allocated segments. This safety measure operates during both compilation and runtime, where Rust ensures safe failure modes through structured error handling and panics rather than allowing undefined behavior.

The Option enum is special in Rust, and its use ensures that data is managed safely, helping developers avoid The Billion Dollar Mistake. Unlike many other languages, Rust does not have null values for any data type, and using the Option enum requires developers to explicitly handle the cases of Some (data present) and None (data absent), promoting deliberate and safe data access patterns. This forces developers to handle cases which may otherwise go undefined in other languages.
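A minimal sketch of the Option-based access pattern described above, contrasted with plain indexing (the vector and index are arbitrary examples):

```rust
fn main() {
    let values = vec![10, 20, 30];

    // Plain indexing is bounds-checked at runtime: values[5] would panic
    // with "index out of bounds" rather than reading or writing stray memory.

    // .get() instead returns an Option, so the out-of-bounds case must be
    // handled explicitly before the value can be used:
    match values.get(5) {
        Some(v) => println!("value: {v}"),
        None => println!("index 5 is out of bounds; handled without a crash"),
    }
}
```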
Fearless concurrency is another defining feature of Rust that guarantees thread-safe data access, effectively eliminating the risk of data races that could lead to out-of-bounds writes. This is achieved through Rust’s ownership and borrowing rules (described earlier), which ensure that data is accessed by only one thread at a time unless explicitly shared in a thread-safe manner. By leveraging these strict concurrency controls, Rust allows developers to build highly concurrent applications without the typical safety compromises seen in other languages, enhancing both performance and security and avoiding defects that are difficult to detect and reproduce.

Conclusion

The future of programming, particularly in systems and kernel development, is trending towards languages that provide strong memory safety guarantees. Rust's integration into system programming and even parts of the Linux kernel highlights a significant shift in software development paradigms. While Rust represents the future of secure programming, it's crucial to recognize the enduring legacy of languages like C. The Linux kernel and widely-used software such as OpenSSL and NGINX are predominantly written in C, illustrating that an immediate wholesale transition to Rust across all development sectors isn't practical. However, as we move forward, Rust's role in fostering more secure software is poised to expand, with its focus on memory safety becoming a cornerstone of modern system software.

The adoption of memory-safe languages like Rust isn't just about addressing current vulnerabilities; it's about reshaping software development practices to prioritize security from the ground up. This evolution marks a future where software inherently withstands a wide range of cybersecurity threats, greatly enhancing the resilience of our digital infrastructure against new challenges.

Coordinated Vulnerability Disclosure: A Balanced Approach
The world of vulnerability disclosure encompasses, and affects, many different parties – security researchers, vendors, customers, consumers, and even random bystanders who may be caught in the blast radius of a given issue. The security professionals who manage disclosures must weigh many factors when considering when and what to disclose. There are risks to disclosing an issue when there is no fix yet available, possibly making more malicious actors aware of the issue when those affected have limited options. Conversely, there are also risks to not disclosing an issue for an extended period when malicious actors may already know of it, yet those affected remain blissfully unaware of their risk. This is but one factor to be considered.

Researchers and Vendors

The relationship between security researchers and product vendors is sometimes perceived as contentious. I’d argue that’s largely due to the exceptions that make headlines – because they’re exceptions. When some vendor tries to silence a researcher through legal action, blocking a talk at a conference, stopping a disclosure, etc., those moves make for sensational stories simply because they are unusual and extreme. And those vendors are clearly not familiar with the Streisand Effect.

The reality is that security researchers and vendors work together every day, with mutual respect and professionalism. We’re all part of the security ecosystem, and, in the end, we all have the same goal – to make our digital world a safer, more secure place for everyone. As a security engineer working for a vendor, you never want to have someone point out a flaw in your product, but you’d much rather be approached by a researcher and have the opportunity to fix the vulnerability before it is exploited than to become aware of it because it was exploited.

Sure, this is where someone will say that vendors should be catching the issues before the product ships, etc. In a perfect world that would be the case, but we don’t live in a perfect world. In the real world, resources are finite. Every complex product will have flaws because humans are involved, especially products that have changed and evolved over time. No matter how much testing you do, for any product of sufficient complexity, you can never be certain that every possibility has been covered. Furthermore, many products developed 10 or 20 years ago are now being used in scenarios that could not be conceived of at the time of their design. For example, the disintegration of the corporate perimeter and the explosion of remote work has exposed security shortcomings in a wide range of enterprise technologies.

As they say, hindsight is 20/20. Defects often appear obvious after they’ve been discovered but may have slipped by any number of tests and reviews previously. That is, until a security researcher brings a new way of thinking to the task and uncovers the issue. For any vendor who takes security seriously, that’s still a good thing in the end. It helps improve the product, protects customers, and improves the overall security of the Internet.

Non sequitur. Your facts are uncoordinated.

When researchers discover a new vulnerability, they are faced with a choice of what to do with that discovery. One option is to act unilaterally, disclosing the vulnerability directly. From a purely mercenary point of view, they might make the highest return by taking the discovery to the dark web and selling it to anyone willing to pay, with no regard to their intentions.
Of course, this option brings with it both moral and legal complications. It arguably does more to harm the security of our digital world overall than any other option, and there is no telling when, or indeed if, the vendor will become aware of the issue for it to be fixed.

Another drastic, if less mercenary, option is Full Disclosure - aka the ‘Zero-Day’ or ‘0-day’ approach. Dumping the details of the vulnerability on a public forum makes them freely available to all, both defenders and attackers, but leaves no time for advance preparation of a fix, or even mitigation. This creates a race between attackers and defenders which, more often than not, is won by the attackers. It is nearly always easier, and faster, to create an exploit for a vulnerability and begin distributing it than it is to analyze a vulnerability, develop and test a fix, distribute it, and then patch devices in the field.

Both approaches may, in the long term, improve Internet security as the vulnerabilities are eventually fixed. But in the short- and medium-terms they can do a great deal of harm to many environments and individual users as attackers have the advantage and defenders are racing to catch up. These disclosure methods tend to be driven primarily by monetary reward, in the first case, or by some personal or political agenda, in the second case – dropping a 0-day to embarrass a vendor, government, etc. Now, Full Disclosure does have an important role to play, which we’ll get to shortly.

Mutual Benefit

As an alternative to unilateral action, there is Coordinated Disclosure: working with the affected vendor(s) to coordinate the disclosure, including providing time to develop and distribute fixes, etc. Coordinated Disclosure can take a few different forms, but before I get into that, a slight detour.

Coordinated Disclosure is the current term of art for what was once called ‘Responsible Disclosure’, a term which has generally fallen out of favor. The word ‘responsible’ is, by its nature, judgmental. Who decides what is responsible? For whom? To whom? The reality is it was often a way to shame researchers – anyone who didn’t work with vendors in a specified way was ‘irresponsible’. There were many arguments in the security community over what it meant to be ‘responsible’, for both researchers and vendors, and in time the industry moved to the more neutrally descriptive term of ‘Coordinated Disclosure’.

Coordinated Disclosure, in its simplest form, means working with the vendor to agree upon a disclosure timeline and to, well, coordinate the process of disclosure. The industry standard is for researchers to give vendors a 90-day period in which to prepare and release a fix before the disclosure is made, though this may vary with different programs: the period may be as short as 60 days or as long as 120 days, and often includes modifiers for different conditions such as active exploitation, Critical Severity (CVSS) issues, etc.

There is also the option of private disclosure, wherein the vendor notifies only customers directly. This may happen as a prelude to Coordinated Disclosure. There are tradeoffs to this approach – on the one hand it gives end users time to update their systems before the issues become public knowledge, but on the other hand it can be hard to notify all users simultaneously without missing anyone, which would put those unaware at increased risk. The more people who know about an issue, the greater the risk of the information finding its way to the wrong people, or premature disclosure.
Private disclosure without subsequent Coordinated Disclosure has several downsides. As already stated, there is a risk that not all affected users will receive the notification. Future customers will have a harder time being aware of the issues, and often scanners and other security tools will also fail to detect the issues, as they’re not in the public record. The lack of CVE IDs also means there is no universal way to identify the issues. There’s also a misguided belief that private disclosure will keep the knowledge out of the wrong hands, which is just an example of ‘security by obscurity’, and rarely effective. It’s more likely to instill a false sense of security, which is counter-productive.

Some vendors may have bug bounty programs which include detailed reporting procedures, disclosure guidelines, etc. Researchers who choose to work within the bug bounty program are bound by those rules, at least if they wish to receive the bounty payout from the program. Other vendors may not have a bug bounty program but still have ways for researchers to officially report vulnerabilities. If you can’t find a way to contact a given vendor, or aren’t comfortable doing so for any reason, there are also third-party reporting programs such as the Vulnerability Information and Coordination Environment (VINCE), or reporting directly to the Cybersecurity & Infrastructure Security Agency (CISA). I won’t go into detail on these programs here, as that could be an article of its own – perhaps I will tackle that in the future.

As an aside, at the time of writing, F5 does not have a bug bounty program, but the F5 SIRT does regularly work with researchers for coordinated disclosure of vulnerabilities. Guidelines for reporting vulnerabilities to F5 are detailed in K4602: Overview of the F5 security vulnerability response policy. We do provide an acknowledgement for researchers in any resulting Security Advisory.

Carrot and Stick

Coordinated disclosure is not all about the researcher; the vendor has responsibilities as well. The vendor is being given an opportunity to address the issue before it is disclosed. They should not see this as a burden or an imposition; the researcher is under no obligation to give them this opportunity. This is the ‘carrot’ being offered by the researcher. The vendor needs to act with some urgency to address the issue in a timely fashion, to deliver a fix to their customers before disclosure. The researcher is not to blame if the vendor is given a reasonable time to prepare a fix and fails to do so.

The ’90-day’ guideline should be considered just that, a guideline. The intention is to ensure that vendors take vulnerability reports seriously and make a real effort to address them. Researchers should use their judgment, and if they feel that the vendor is making a good faith effort to address the issue but needs more time to do so, especially for a complex issue or one that requires fixing multiple products, etc., it is not unreasonable to extend the disclosure deadline. If the end goal is truly improving security and protecting users, and all parties involved are making a good faith effort, reasonable people can agree to adjust deadlines on a case-by-case basis. But there should still be some reasonable deadline; remember that it is an undisclosed vulnerability which could be independently discovered and exploited at any time – if not already – so a little firmness is justified. Even good intentions can use a little encouragement.
That said, the researcher also has a stick for the vendors who don’t bite the carrot – Full Disclosure. For vendors who are unresponsive to vulnerability reports, who respond poorly to such reports (threats, etc.), or who do not make a good faith effort to fix issues in a timely manner, this is the alternative of last resort. If the researcher has made a good faith effort at Coordinated Disclosure but has been unable to do so because of the vendor, then the best way to get the word out about the issue is Full Disclosure. You can’t coordinate unless both parties are willing to do so in good faith. Vendors who don’t understand that it is in their best interest to work with researchers may eventually learn that it is after dealing with Full Disclosure a few times. Full Disclosure is rarely, if ever, a good first option, but if Coordinated Disclosure fails, and the choice becomes No Disclosure vs. Full Disclosure, then Full Disclosure is the best remaining option.

In All Things, Balance

Coordinated disclosure seeks to balance the needs of the parties mentioned at the start of this article – security researchers, vendors, customers, consumers, and even random bystanders. Customers cannot make informed decisions about their networks unless vendors inform them, and that’s why we need vulnerability disclosures. You can’t mitigate what you don’t know about. And the reality is no one has the resources to keep all their equipment running the latest software release all the time, so updates get prioritized based on need. Coordinated disclosure gives the vendor time to develop a fix, or at least a mitigation, and make it available to customers before the disclosure, thus allowing customers to rapidly respond to the disclosure and patch their networks before exploits are widely developed and deployed, keeping more users safe.

The coordination is about more than just the timing; vendors and researchers will work together on the messaging of the disclosure, often withholding details in the initial publication to provide time for patching before disclosing information which makes exploitation easier. Crafting a disclosure is always a balancing act between disclosing enough information for customers to understand the scope and severity of the issue and not disclosing information which is more useful to attackers than to defenders.

The Needs of the Many

Coordinated disclosure gets researchers the credit for their work, allows vendors time to develop fixes and/or mitigations, gives customers those resources to apply when the issue is disclosed to them, protects customers by enabling patching faster than other disclosure methods, and ultimately results in a safer, more secure Internet for all. In the end, that’s what we’re all working for, isn’t it?

I encourage vendors and researchers alike to view each other as allies and not adversaries, and to give each other the benefit of the doubt rather than presume some nefarious intent. Most vendors and researchers are working toward the same goals of improved security. We’re all in this together. If you’re looking for more information on handling coordinated disclosure, you might check out The CERT Guide to Coordinated Vulnerability Disclosure.