Can SSM Agent run on Ec2 with BEST license?
I am setting up F5 on AWS, using a BEST-licensed AMI from the Marketplace, and I want to be able to manage the instance via Systems Manager. For EC2 instances to communicate via SSM, I must install the ssm-agent, which is not installed on the Marketplace AMI. However, I have discovered that the BEST AMI has FIPS protection: installing the ssm-agent triggers critical warnings, and my system becomes unavailable after a reboot. So far, the articles here point to "downgrading" to a license that does not have FIPS as the only way to disable it entirely. However, WAF is a requirement for me, and it only appears to be available in the BEST license. Is there a license that has Web Application Firewall but no (or a less restrictive) FIPS, or a way to allow SSM on a FIPS-protected machine? It is the ssm commands installed in /usr/bin that trigger the alert.

Export Requests or Security Analytics from F5 Distributed Cloud
Wrote this code and thought I would share. You will need Python 3 installed, and may need to use "pip" to install the "requests" package. Parameters can be displayed using the "-h" argument. A valid API Token is required for access to your tenant. One required filter is the Load Balancer name, and additional filters can be added to further confine the output. Times are in UTC, just as the API requires and as displayed in the JSON event view in the GUI. Log entries are written to the specified file in JSON format, exactly as they come from the API.

Example execution:

python3 xc-log-api-extract.py test-api.json security my-tenant-name my-namespace my-api-token my-load-balancer-name 2025-01-13T17:15:00.000Z 2025-01-14T17:15:00.000Z

Here is the help page:

python3 xc-log-api-extract.py -h
usage: xc-log-api-extract.py [-h] [-srcip SRCIP] [-action ACTION] [-asorg ASORG] [-asnumber ASNUMBER] [-policy POLICY]
                             outputfilename {access,security} tenant namespace apitoken loadbalancername starttime endtime

Python program to extract XC logs

positional arguments:
  outputfilename      File to write JSON log messages to
  {access,security}   logtype to query
  tenant              Tenant name
  namespace           Namespace in tenant
  apitoken            API Token to use for accessing log data, created in Administration/IAM/Service Credentials, type "API Token"
  loadbalancername    Load Balancer name to filter on (required)
  starttime           yyyy-mm-ddThh:mm:ss.sssZ
  endtime             yyyy-mm-ddThh:mm:ss.sssZ

options:
  -h, --help          show this help message and exit
  -srcip SRCIP        Optional filter by Source IP
  -action ACTION      Optional filter by action (allow, block)
  -asorg ASORG        Optional filter by as_org
  -asnumber ASNUMBER  Optional filter by as_number
  -policy POLICY      Optional filter by policy_hits.policy_hits.policy

DeVon Jarvis, v1.2 2025/01/21

Enjoy!
DeVon Jarvis
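The script itself is not attached here, but as a rough illustration of the approach described above, a minimal sketch of the core request might look like the following. The endpoint path, query syntax and authorization header shown are assumptions based on the Distributed Cloud data API layout and should be verified against your tenant's API documentation before use.

# Minimal sketch of pulling security events from F5 Distributed Cloud.
# ASSUMPTIONS: endpoint path, query format and auth header are illustrative
# only - check them against the tenant's API documentation.
import json
import requests

TENANT = "my-tenant-name"              # hypothetical values, as in the example above
NAMESPACE = "my-namespace"
API_TOKEN = "my-api-token"
LB_NAME = "my-load-balancer-name"

url = (f"https://{TENANT}.console.ves.volterra.io"
       f"/api/data/namespaces/{NAMESPACE}/app_security/events")   # assumed path

payload = {
    "query": f'{{vh_name="{LB_NAME}"}}',           # filter on the load balancer
    "start_time": "2025-01-13T17:15:00.000Z",      # times in UTC, as the API requires
    "end_time": "2025-01-14T17:15:00.000Z",
}

resp = requests.post(url,
                     headers={"Authorization": f"APIToken {API_TOKEN}"},
                     json=payload,
                     timeout=30)
resp.raise_for_status()

# Write the events to file in JSON form, as they come back from the API
with open("test-api.json", "w") as f:
    json.dump(resp.json(), f, indent=2)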
Mitigating OWASP API Security Top 10 risks using F5 NGINX App Protect

This 2019 API Security article covers a summary of the OWASP API Security Top 10 – 2019 categories, and the newly published 2023 API Security article covers an introduction to the newest edition, the OWASP API Security Top 10 – 2023. Here we will deep-dive into some of those common risks and show how we can protect our applications against these vulnerabilities using F5 NGINX App Protect.

Excessive Data Exposure
Problem Statement: As shown below in one of the demo application APIs, Personally Identifiable Information (PII) data, like Credit Card Numbers (CCN) and U.S. Social Security Numbers (SSN), is visible in responses, which is highly sensitive. We must hide these details to prevent personal data exploits.
Solution: To prevent this vulnerability, we will use the DataGuard feature in NGINX App Protect, which validates all response data for sensitive details and will either mask the data or block those requests, as per the configured settings. First, we will configure DataGuard to mask the PII data as shown below and will apply this configuration. Next, if we resend the same request, we can see that the CCN/SSN numbers are masked, thereby preventing data breaches. If needed, we can update the configuration to block this vulnerability, after which all incoming requests for this endpoint will be blocked. If you open the security log and filter with this support ID, we can see that the request is either blocked or the PII data is masked, as per the DataGuard configuration applied in the above section.
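The post shows this configuration as screenshots; purely as an illustration, a DataGuard section in an App Protect declarative policy can look roughly like the sketch below. The exact field names and their behaviour should be verified against the NGINX App Protect WAF declarative policy schema for your version.

{
  "policy": {
    "name": "dataguard_masking_policy",
    "template": { "name": "POLICY_TEMPLATE_NGINX_BASE" },
    "applicationLanguage": "utf-8",
    "enforcementMode": "blocking",
    "data-guard": {
      "enabled": true,
      "maskData": true,
      "creditCardNumbers": true,
      "usSocialSecurityNumbers": true
    }
  }
}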
Injection
Problem Statement: Customer login pages built without secure coding practices may have flaws, and intruders can use those flaws to exploit credential validation using different types of injections, like SQLi, command injections, etc. In our demo application, we have found an exploit which allows us to bypass credential validation using SQL injection (by using the username "' OR true --" and any password), thereby getting administrative access, as below:
Solution: NGINX App Protect has a database of signatures that match this type of SQLi attack. By configuring the WAF policy in blocking mode, NGINX App Protect can identify and block this attack, as shown below. If you check the security log with this support ID, we can see that the request is blocked because of the SQL injection risk, as below.

Insufficient Logging & Monitoring
Problem Statement: Appropriate logging and monitoring solutions play a pivotal role in identifying attacks and also in finding the root cause of any security issue. Without these solutions, applications are fully exposed to attackers and SecOps is completely blind to identifying details of users and resources being accessed.
Solution: NGINX provides different options to track logging details of applications for end-to-end visibility of every request, both from a security and a performance perspective. Users can change configurations as per their requirements and can also configure different logging mechanisms with different levels. Check the links below for more details on logging:
https://www.nginx.com/blog/logging-upstream-nginx-traffic-cdn77/
https://www.nginx.com/blog/modsecurity-logging-and-debugging/
https://www.nginx.com/blog/using-nginx-logging-for-application-performance-monitoring/
https://docs.nginx.com/nginx/admin-guide/monitoring/logging/
https://docs.nginx.com/nginx-app-protect-waf/logging-overview/logs-overview/

Unrestricted Access to Sensitive Business Flows
Problem Statement: By using the power of automation tools, attackers can now break through tough levels of protection. The inability of APIs to detect automated bot tools not only causes business loss, but it can also adversely impact the services for genuine users of an application.
Solution: NGINX App Protect has best-in-class bot detection technology and can detect and label automation tools in different categories, like trusted, untrusted, and unknown. Depending on the configuration applied in the policy, requests generated from these tools are either blocked or alerted on. Below is an example that shows how requests generated from the Postman automation tool get blocked. By filtering the security log with this support ID, we can see that the request is blocked because of an untrusted bot.

Lack of Resources & Rate Limiting
Problem Statement: APIs do not have any restrictions on the size or number of resources that can be requested by the end user. The scenarios mentioned above can lead to poor API server performance, Denial of Service (DoS), and brute force attacks.
Solution: NGINX App Protect provides different ways to rate limit requests as per user requirements. A simple rate-limiting use case configuration, which blocks requests after the limit is reached, is demonstrated below.

Conclusion: In short, this article covered some common API vulnerabilities and showed how NGINX App Protect can be used as a mitigation solution to prevent these OWASP API security risks.

Related resources for more information or to get started:
F5 NGINX App Protect
OWASP API Security Top 10 2019
OWASP API Security Top 10 2023

Kubernetes architecture options with F5 Distributed Cloud Services
Summary F5 Distributed Cloud Services (F5 XC) can both integrate with your existing Kubernetes (K8s) clusters and/or host a K8s workload itself. Within these distinctions, we have multiple architecture options. This article explores four major architectures in ascending order of sophistication and advantages. Architecture #1: External Load Balancer (Secure K8s Gateway) Architecture #2: CE as a pod (K8s site) Architecture #3: Managed Namespace (vK8s) Architecture #4: Managed K8s (mK8s) Kubernetes Architecture Options As K8s continues to grow, options for how we run K8s and integrate with existing K8s platforms continue to grow. F5 XC can both integrate with your existing K8s clusters and/or run a managed K8s platform itself. Multiple architectures exist within these offerings too, so I was thoroughly confused when I first heard about these possibilities. A colleague recently laid it out for me in a conversation: "Michael, listen up: XC can either integrate with your K8s platform, run inside your K8s platform, host virtual K8s (Namespace-aaS), or run a K8s platform in your environment." I replied, "That's great. Now I have a mental model for differentiating between architecture options." This article will overview these architectures and provide 101-level context: when, how, and why would you implement these options? Side note 1: F5 XC concepts and terms F5 XC is a global platform that can provide networking and app delivery services, as well as compute (K8s workloads). We call each of our global PoP's a Regional Edge (RE). RE's are highly meshed to form the backbone of the global platform. They connect your sites, they can expose your services to the Internet, and they can run workloads. This platform is extensible into your data center by running one or more XC Nodes in your network, also called a Customer Edge (CE). A CE is a compute node in your network that registers to our global control plane and is then managed by a customer as SaaS. The registration of one or more CE's creates a customer site in F5 XC. A CE can run on a hypervisor (VMWare/KVM/Etc), a Hyperscaler (AWS, Azure, GCP, etc), baremetal, or even as a k8s pod, and can be deployed in HA clusters. XC Mesh functionality provides connectivity between sites, security services, and observability. Optionally, in addition, XC App Stack functionality allows a large and arbitrary number of managed clusters to be logically grouped into a virtual site with a single K8s mgmt interface. So where Mesh services provide the networking, App Stack services provide the Kubernetes compute mgmt. Our first 2 architectures require Mesh services only, and our last two require App Stack. Side note 2: Service-to-service communication I'm often asked how to allow services between clusters to communicate with each other. This is possible and easy with XC. Each site can publish services to every other site, including K8s sites. This means that any K8s service can be reachable from other sites you choose. And this can be true in any of the architectures below, although more granular controls are possible with the more sophisticated architectures. I'll explore this common question more in a separate article. Architecture 1: External Load Balancer (Secure K8s Gateway) In a Secure Kubernetes Gateway architecture, you have integration with your existing K8s platform, using the XC node as the external load balancer for your K8s cluster. In this scenario, you create a ServiceAccount and kubeconfig file to configure XC. 
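As a rough illustration only (the exact RBAC permissions XC requires should be confirmed in the F5 documentation; read access to services and endpoints is shown here as a plausible minimum), the ServiceAccount side of that setup might look like this:

# Sketch: read-only ServiceAccount for XC service discovery.
# ASSUMPTION: verify the exact resources/verbs XC needs against the F5 docs.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: xc-discovery
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: xc-discovery-read
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "nodes", "pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: xc-discovery-read
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: xc-discovery-read
subjects:
  - kind: ServiceAccount
    name: xc-discovery
    namespace: kube-system

A kubeconfig referencing a token for this ServiceAccount is then what gets supplied to XC for discovery.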
The XC node then performs service discovery against your K8s API server. I've covered this process in a previous article, but the advantage is that you can integrate with existing K8s platforms. This allows exposing both NodePort and ClusterIP services via the XC node. XC is not hosting any workloads in this architecture, but it is exposing your services to your local network, or remote sites, or the Internet. In the diagram above, I show a web application being accesssed from a remote site (and/or the Internet) where the origin pool is a NodePort service discovered in a K8s cluster. Architecture 2: Run a site within a K8s cluster (K8s site type) Creating a K8s site is easy - just deploy a single manifest found here. This file deploys multiple resources in your cluster, and together these resources work to provide the services of a CE, and create a customer site. I've heard this referred to as "running a CE inside of K8s" or "running your CE as a pod". However, when I say "CE node" I'm usually referring to a discreet compute node like a VM or piece of hardware; this architecture is actually a group of pods and related resources that run within K8s to create a XC customer site. With XC running inside your existing cluster, you can expose services within the cluster by DNS name because the site will resolve these from within the cluster. Your service can then be exposed anywhere by the F5 XC platform. This is similar to Architecture 1 above, but with this model, your site is simply a group of pods within K8s. An advantage here is the ability to expose services of other types (e.g. ClusterIP). A site deployed into a K8s cluster will only support Mesh functionality and does not support AppStack functionality (i.e., you cannot run a cluster within your cluster). In this architecture, XC acts as a K8s ingress controller with built-in application security. It also enables Mesh features, such as publishing of other sites' services on this site, and publishing of this site's discovered services on other sites. Architecture 3: vK8s (Namespace-as-a-Service) If the services you use include AppStack capabilities, then architectures #3 and #4 are possible for you. In these scenarios, our XC node actually runs your K8s on your workloads. We are no longer integrating XC with your existing K8s platform. XC is the platform. A simple way to run K8s workloads is to use a virtual k8s (vK8s) architecture. This could be referred to as a "managed Namespace" because by creating a vK8s object in XC you get a single namespace in a virtual cluster. Your Namespace can be fully hosted (deployed to RE's) or run on your VM's (CE's), or both. Your kubeconfig file will allow access to your Namespace via the hosted API server. Via your regular kubectl CLI (or via the web console) you can create/delete/manage K8s resources (Deployments, Services, Secrets, ServiceAccounts, etc) and view application resource metrics. This is great if you have workloads that you want to deploy to remote regions where you do not have infrastructure and would prefer to run in F5's RE's, or if you have disparate clusters across multiple sites and you'd like to manage multiple K8s clusters via a single centralized, virtual cluster. Best practice guard rails for vK8s With a vK8s architecture, you don't have your own cluster, but rather a managed Namespace. So there are some restrictions (for example, you cannot run a container as root, bind to a privileged port, or to the Host network). 
You cannot create CRDs, ClusterRoles, PodSecurityPolicies, or Namespaces, so K8s operators are not supported. In short, you don't have a managed cluster, but a managed Namespace on a virtual cluster.

Architecture 4: mK8s (Managed K8s)
In a managed K8s (mK8s, also known as physical K8s or pK8s) deployment, we have an enterprise-level K8s distribution that is run at your site. This means you can use XC to deploy/manage/upgrade the K8s infrastructure, but you manage the Kubernetes resources. The benefits include what is typical for 3rd-party K8s mgmt solutions, but also some key differentiators:
multi-cloud, with automation for Azure, AWS, and GCP environments
consumed by you as SaaS
enterprise-level traffic control
natively allows a large and arbitrary number of managed clusters to be logically managed with a single K8s mgmt interface
You can enable kubectl access against your local cluster and disable the hosted API server, so your kubeconfig file can point to a global URL or a local endpoint on-prem. Another benefit of mK8s is that you are running a full K8s cluster at your site, not just a Namespace in a virtual cluster. The restrictions that apply to vK8s (see above) do not apply to mK8s, so you could run privileged pods if required, use Operators that make use of ClusterRoles and CRDs, and perform other tasks that require cluster-wide access.

Traffic management controls with mK8s
Because your workloads run in a cluster managed by XC, we can apply more sophisticated and native policies to K8s traffic than with the non-managed clusters in the earlier architectures:
Service isolation can be enforced within the cluster, so that pods in a given namespace cannot communicate with services outside of that namespace, by default.
More service-to-service controls exist so that you can decide, with more granularity, which services can reach other services.
Egress control can be natively enforced for outbound traffic from the cluster, by namespace, labels, IP ranges, or other methods. E.g.: Svc A can reach myapi.example.com but no other Internet service.
WAF policies, bot defense, L3/4 policies, etc.—all of the policies that you have typically applied with network firewalls, WAFs, etc.—can be applied natively within the platform.
This architecture took me a long time to understand, and longer to fully appreciate. But once you have run your workloads natively on a managed K8s platform that is connected to a global backbone and capable of performing network and application delivery within the platform, the security and traffic mgmt benefits become very compelling.

Conclusion: As K8s continues to expand, management solutions for your clusters make it possible to secure your K8s services, whether they are managed by XC or exist in disparate clusters. With F5 XC as a global platform consumed as a service—not a discrete installation managed by you—the available architectures here are unique and can therefore accommodate the diverse (and changing!) ways we see K8s run today.

Related Articles
Securely connecting Kubernetes Microservices with F5 Distributed Cloud
Multi-cluster Multi-cloud Networking for K8s with F5 Distributed Cloud - Architecture Pattern
Multiple Kubernetes Clusters and Path-Based Routing with F5 Distributed Cloud

Mitigating OWASP Web Application Security Top 10 – 2021 risks using F5 Distributed Cloud Platform
Overview:
In the early 90s, applications were in a dormant phase, with JavaScript and XML dominating the technology. The first web applications were introduced in 1999, after the release of the Java language in 1995. Later, the adoption of new languages like Ajax, HTML, Node, Angular, SQL, Go, Python, etc. and the availability of web application frameworks boosted application development, deployment, and release to production. With evolving software technologies, modern web applications are becoming more and more innovative, providing users with a brand new experience and a remarkably easy interface. These leading-edge technologies also expose novel attack surfaces, which has made web applications a primary target for intruders/hackers. Safeguarding applications against all these common exploits is a necessary step in protecting backend application data. The Open Worldwide Application Security Project (OWASP) is one of the security practices that helps protect applications against such issues. This article is the first part of the series and covers OWASP's evolution, its importance, and an overview of the Top 10 categories.

Before diving into the OWASP Web Application Security Top 10, let's travel back to the era of the 1990s and identify the challenges that application customers, developers and users were facing. Below are some of them:
Rapid and diversified cyber-attacks had become a major concern, and monitoring/categorizing them was difficult
Product owners were concerned about application security and availability and were in desperate need of a checklist/report to understand their application security posture
Developers were looking for recommendations to develop code securely before running into security flaws in production
There was no consolidated repo to manage, document and provide research insights for every security vulnerability

After running into the above concerns, people across the globe came together in 2001 and formed OWASP, an international open-source community. It is a non-profit foundation with people from different backgrounds like developers, evangelists, security experts, etc. The main agenda of this community is to solve application-related issues by providing:
A regularly updated "OWASP Top 10" report which provides insights into the latest top 10 security issues in web applications; the report also provides security recommendations to protect against these issues
Consolidated monitoring and tracking of application vulnerabilities
Events, trainings and conferences around the world to discuss, solve and provide preventive recommendations for the latest security issues
Security tools, research papers, libraries, cheat sheets, books, presentations and videos covering application security testing, secure development, and secure code review

OWASP WEB SECURITY TOP 10 2021:
With the rapid increase of cyber-attacks, and because of its dynamic report updates, OWASP gained immense popularity and is considered one of the top security practices that application companies follow to protect their modern applications against known security issues. Periodically OWASP releases its Top 10 vulnerabilities report, and below are the latest Top 10 - 2021 categories with their summaries:

A01:2021-Broken Access Control
Access controls enforce policy such that users cannot act outside of their intended permissions. Also called authorization, access control allows or denies access to your application's features and resources.
Misuse of access control enables unauthorized access to sensitive information, privilege escalation and illegal file executions. Check this article on protection against broken access vulnerabilities

A02:2021-Cryptographic Failures
In the 2017 OWASP Top 10 report, this category was known as Sensitive Data Exposure; it focuses on failures related to cryptography leading to exposure of sensitive data. Check this article on cryptographic failures

A03:2021-Injection
An application is vulnerable to injection if user data and schema are not validated by the application. Some of the common injections are XSS, SQL, NoSQL, OS command, Object Relational Mapping (ORM), etc., causing data breaches and loss of revenue. Check this article on safeguarding against injection exploits

A04:2021-Insecure Design
During the development cycle, some phases might be reduced in scope, which leads to some of these vulnerabilities. Insecure Design represents the weaknesses, i.e. the lack of security controls, which are not tracked in other categories throughout the development cycle. Check this article on design flaws and mitigation

A05:2021-Security Misconfiguration
This occurs when security best practices are overlooked, allowing attackers to get into the system by exploiting the loopholes. XML External Entities (XXE), which was previously its own Top 10 category, is now a part of security misconfiguration. Check this article on protection against misconfiguration vulnerabilities

A06:2021-Vulnerable and Outdated Components
Applications used in enterprises are prone to threats such as code injection, buffer overflow, command injection and cross-site scripting from unsupported, out-of-date open-source components and known exploited vulnerabilities. Using components with security issues makes the application itself vulnerable. Intruders will take advantage of these defects and exploit the deprecated packages, thereby gaining access to backend applications. Check this article on finding outdated components

A07:2021-Identification and Authentication Failures
Confirmation of the user's identity, authentication, authorization and session management is critical to protect applications against authentication-related attacks. Apps without valid authorization, use of default credentials, and the inability to detect bot traffic are some of the scenarios in this category. Check this article on identifying and protecting against bots

A08:2021-Software and Data Integrity Failures
Software and data integrity failures occur when updates are pushed to the deployment pipeline without verifying their integrity. Insecure Deserialization, which was a separate category in OWASP 2017, has now become part of this larger category. Check this article on software failures protection

A09:2021-Security Logging and Monitoring Failures
As a best practice, always log all incoming request details and monitor the application for fraudulent transactions, invalid logins, etc. to identify whether there are any attacks or breaches. Applications without logging capabilities provide opportunities for attackers to exploit the application and may lead to many security concerns. Without logging and monitoring we won't be able to validate the application traffic and can't identify the source of a breach.
Check this article for identifying logging issues

A10:2021-Server-Side Request Forgery
A Server-Side Request Forgery (SSRF) attack is a technique which allows intruders to abuse a server-side application vulnerability and make malicious requests to internal-only resources. The attacker exploits this flaw by modifying/crafting a URL which forces the server to retrieve and disclose sensitive information. Check this article which focuses on SSRF mitigation

NOTE: This is the overview article of this OWASP series; check the links below to prevent these vulnerabilities using the F5 Distributed Cloud Platform.

OWASP Web Application Security Series:
Broken access mitigation
Cryptographic failures
Injection mitigation
Insecure design mitigation
Security misconfiguration prevention
Vulnerable and outdated components
Identification failures prevention
Software failures mitigation
Security logging issues prevention
SSRF Mitigation

F5 Distributed Cloud JA4 detection for enhanced performance and detection
JA4+ is a suite of network fingerprinting methods. These methods are both human and machine readable to facilitate more effective threat-hunting and analysis. The use cases for these fingerprints include scanning for threat actors, malware detection, session hijacking prevention, compliance automation, location tracking, DDoS detection, grouping of threat actors, reverse shell detection, and many more.

Introduction
In a previous article, Identity-Aware decisions with JA4+, we discussed using JA4 fingerprints with BIG-IP. In this article, we explore the use of JA4 in F5 Distributed Cloud. A very useful use case for JA4 in F5 Distributed Cloud is explained in F5 App Connect and NetApp S3 Storage – Secured Scalable AI RAG. Let's go through the steps of getting JA4 fingerprints applied to a traffic sample.

Implementation
In this example we are using an NGINX instance deployed via F5 Distributed Cloud Distributed Apps.
Deploy Virtual K8s through Distributed Apps.
Create a service policy with the matching JA4 fingerprints to block. The JA4 Database can be found here: JA4 Database

Service policy creation
From the Distributed Cloud UI > Distributed Apps > Manage > Service Policies > Service Policies
Add Service Policy
Add name: ja4-service-policy
Under rules, select Custom rules and then click Configure
Click Add item
Update the below: Add name, Actions. Show advanced fields in the client section.
TLS Fingerprint Matcher: JA4 TLS Fingerprint
Click Configure JA4 TLS Fingerprint
Click Add item and match the needed JA4 fingerprint. In our case, we are blocking the curl and wget fingerprints.
Click Apply to save, then Save and Exit.

Now we attach the service policy to our HTTP load balancer.
Manage > HTTP Loadbalancer > Click Manage configurations
Click Edit Configurations
In the Common Security Controls section, select Apply Service Policies and click Edit Configurations.
Select the configured policy, then Apply.

Testing
From Firefox browser
From Ubuntu using curl

Observing logs from F5 Distributed Cloud
From HTTP Loadbalancers > select the created load balancer and click Security Monitoring
Click Security Events to check the requests
You can see the events with the request and client information
From the Action column, you can select Explain with AI to gain further information and recommendations.

We now have the service policy configured and attached. It can also be attached to other components for client identification.

Related Content
F5 App Connect and NetApp S3 Storage – Secured Scalable AI RAG | DevCentral
Fingerprint TLS Clients with JA4 on F5 BIG-IP using iRules
JA4 Part 2: Detecting and Mitigating Based on Dynamic JA4 Reputation | DevCentral
Identity-Aware decisions with JA4+ | DevCentral
Setting Up A Basic Customer Edge To Run vk8s in F5 Distributed Cloud App Stack | DevCentral

BIG-IP Telemetry Streaming to Azure
Steps First important point is that you have to use the REST-API for configuring Telemetry Streaming - there isn't a way to provision using TMSH or the GUI. The way it is done is by POSTing a json declaration to BIG-IP Telemetry Streaming’s declarative REST API endpoint. For Azure, the details are here: https://clouddocs.f5.com/products/extensions/f5-telemetry-streaming/latest/setting-up-consumer.html#microsoft-azure-log-analytics I like to use AS3 where possible so I provide the AS3 code snippets, but I'll also show the config on the GUI as well. The steps are: Download and install AS3 and Telemetry Streaming Create Azure Sentinel workspace Send TS declaration base AS3 declaration adding AFW logs adding ASM logs adding LTM logs This article is a result of using TS in Azure for nearly 3 years, over which time I've gained a good understanding of how it works. However, there are updates and changes all the time, so I'd welcome any feedback if any part of the article is incorrect or out of date. Download and install AS3 and Telemetry Streaming To create necessary configuration needed for streaming to Sentinel, you firstly need to download the iControl LX plug-ins. These are available in the F5 github repository for AS3 and Telemetry Streaming as RPM files. Links are: Telemetry Streaming: F5Networks/f5-telemetry-streaming: F5 Telemetry Streaming (github.com) AS3: F5Networks/f5-appsvcs-extension: F5 Application Services 3 Extension (github.com) On the right hand side of the github page you'll see a link to the latest release: - it's the RPM file you need (usually the biggest size file!). I download the files to my PC and then import them using the GUI in: iApps/Package Management LX: Some key points: Your BIG-IP needs to be on at least 13.1 to send a declaration to TS and AS3 your account must have the Administrator role - it's usually recommended to use the 'admin' username. I use a REST API client to send declarations. I use Insomnia but Postman is another popular alternative. Setting up Azure Workspace You can create logging to Azure from the BIG-IP using a pre-built F5 connector available as part of Azure Sentinel. Alternatively, you can just setup a Log Analytics Workspace and stream the logs into it. I'll explain both methods: Using Azure Sentinel To create a Sentinel instance, you need to first create a Log Analytics Workspace. Then add Sentinel to the workspace. If there are no workspaces defined in the region you are adding Sentinel, the ARM template will prompt you to add one. Once created, you can add Sentinel to it: Once created you need the workspace credentials to allow the BIG-IP to connect and send data. Azure Workspace Credentials To be able to send logs into the Azure workspace, you need 2 important pieces of data - firstly the "Log Analytics Workspace ID", and then the "Primary key". F5 provide a data connector for Sentinel which is an easy way to get this information. On the Sentinel page select the 'Content Management' / 'Content Hub' blade (1), search for 'f5' and then select the 'F5 Advanced WAF Integration via Telemetry Streaming' connector (3). Click on the 'Install' button (3): Once installed, on the blade menu, select "Configuration" and "Data connectors". You should see a connector called "F5 BIG-IP". If you select this, and then click "Open connector page": This will then tell you the Workspace ID and the Primary Key you need (in section "Configuration"). 
The connector is a handy tool within Sentinel as it monitors and shows you the status of the telemetry coming into Azure, which is needed for the two workbooks that were also added as part of the Content Hub installation you did in the previous step. We will see this working later...

Using Log Analytics only
Sentinel is a SIEM solution which 'sits' on top of Log Analytics. If you don't need Sentinel's features, then BIG-IP Telemetry Streaming works fine with just a Log Analytics Workspace. Create the workspace from the Azure portal, ideally in the same region as the BIG-IP devices to avoid inter-region data transfer costs when sending data to the workspace if you are using network isolation. In the Azure Portal search bar type "Log Analytics workspaces" and + Create. All that is needed is a name and region. Once created, navigate to "Settings" and "Agents". In the section "Log Analytics agent instructions" you will see the Workspace ID and the Primary Key you need for the TS declaration:

Using MSI
Telemetry Streaming v1.11 added support for sending data to Azure with an Azure Managed Service Identity (MSI). An MSI is a great way of maintaining secure access between Azure objects by leveraging Entra ID (formerly Azure AD) to grant access without needing keys. The Primary Workspace key may be regenerated at some point (this may be part of the customer's key rotation policies), and if this happens, TS will stop, as Azure will reject the incoming telemetry connection from BIG-IP. To use the MSI, create it in the Azure Portal and assign it to the Virtual Machine running the BIG-IP (Security/Identity). I would recommend creating a user-assigned MSI rather than a system one: the system identity is restricted to a single resource and only for the lifetime of that resource, while a user MSI can be assigned to multiple BIG-IP machines. Once created, assign the following role on the Log Analytics Workspace (in the "Access Control" blade of the LAW): "Log Analytics Contributor".

Send TS declaration
We can now send the TS declaration. The endpoint you need to reach is:

POST https://{{BIG_IP_DEVICE_IP}}/mgmt/shared/telemetry/declare
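For example, with a REST client or curl the declaration file can be posted directly to that endpoint. This is only a sketch, assuming basic authentication with the admin account and a self-signed management certificate (hence -k); a token-based Authorization header works as well.

# Hypothetical example: ts_declaration.json contains the declaration shown below
curl -sk -u admin:<password> \
  -H "Content-Type: application/json" \
  -X POST "https://<BIG_IP_DEVICE_IP>/mgmt/shared/telemetry/declare" \
  -d @ts_declaration.json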
Before I give the declaration, there are a few issues I found using TS in Azure which I need to explain...

System Logging Issue
The telemetry logs from the BIG-IP create what are known as "Custom Logs" in the LAW. These are explained in more detail at the end of this article, but the most important thing about them is that they have a limit of 500 columns for each Log Type. This was originally causing issues, as BIG-IP was creating a set of columns for all the properties of each named item and very soon the 500 limit was reached. F5 had already spotted this issue and fixed it in v1.24 with an option "format" with value "propertyBased" on the Consumer class (ref: https://clouddocs.f5.com/products/extensions/f5-telemetry-streaming/latest/setting-up-consumer.html#additions-to-the-azure-log-analytics-consumer). However, I found that when ASM is enabled on the BIG-IP, each signature update download was creating a set of additional columns in the System log, which eventually took it over the 500 limit again. This has now been fixed by F5 with TS v1.37.0 and a new "format" value of "propertyBasedV2". This allows asmAttackSignatures to go under Log Type F5Telemetry_asmAttackSignatures instead of F5Telemetry_system.

If you see any logs either not appearing, or logs stopping on Azure, you can check for this error with the following Kusto query:

Operation | where OperationCategory contains "Ingestion"

AVR Logging Issue
When ASM is enabled, it automatically enables AVR as well. This creates an AVR log which has a LOT of data in it. I've noticed that the AVR log is both excessive and can also exceed the 500-column limit due to the mishmash of data in it. Therefore, in my declaration I have made use of the excludeData option in the TS Listener to remove some of the log sources - the column/field 'Entity_s' identifies the source of the data:
DNS_Offbox_All - this generates a log for every DNS request which is made. If your BIG-IP is acting as a DNS cache for your environment, this very quickly becomes a massive log.
ProcessCpuUtil - again, this creates a load of additional columns which record the CPU utilisation of every running process on the device. Might be useful to some... not to me!
TcpStat - this logs TCP events against each TCP profile on the BIG-IP (whether used or not) every 5 minutes. If you have a busy device, they quickly flood the log.
IruleEvents - shows data associated with iRules (times triggered, event counts, etc.). I had no use for this data. I use iRules but did not need statistics on how many times an iRule was used.
ACL_STAGE and ACL_FORCE - these seemed to be pointless logs related to AFM, not really giving any information which isn't already in the AFM logs. It was duplicated data of no value.
There were also a number of other AVR logs which did not seem to create any meaningful data for me. These were: ServerHealth, GtmWideip, BOT DEFENSE EVENT, InterfaceHealth, AsmBypassInfo, FwNatTransSrc, FwNatTransDest. I have therefore excluded these log types. This is not an exhaustive list of entity types in the AVR logs, but hopefully omitting these will (a) reduce your log sizes and (b) prevent the 500-column issue. If you want to analyse what different types (entities) of logs are in the AVR log, the following Kusto query can be run:

F5Telemetry_AVR_CL | summarize count() by Entity_s

This will show the number of AVR logs by each Entity type (source). You can then run a query for a specific type, analyse the content, and decide whether to filter it or not.

Declaration
OK - after all that, here is my declaration. It contains the following:
A Telemetry System class - this is needed to generate the device system logging, which goes into a log called "F5Telemetry_system_CL".
A System Poller - this collects system data at a set interval (60 seconds is a reasonable setting here which produces logs that are not too large but with good granularity of data). The System Poller also allows us to filter logs using excludeData. We exclude the following:
asmAttackSignatures - as explained above, these should no longer appear in System logs, but this is just to make sure!
diskLatency - this is a large set of columns storing the disk stats. As we are using VMs in Azure, this info is available within the Azure IaaS service, so I did not see any point in collecting it again at the VM level, especially as the latency is a function of the selected machine type in Azure.
location - this is just the SNMP location, a waste of a column name.
description - this is just the SNMP description, a waste of a column name.
A Telemetry Listener - this is used to listen to and collect event logs it receives on the specified port from configured BIG-IP system services, including LTM, ASM, AFM and AVR.
A Telemetry Push Consumer - this is used to push the collected data to Azure. It is here we use the workspace ID and the primary key we collected in the above steps. { "class": "Telemetry", "controls": { "class": "Controls", "logLevel": "info", "debug": false }, "telemetry-system-azure": { "class": "Telemetry_System", "trace": false, "allowSelfSignedCert": true, "host": "localhost", "port": 8100, "protocol": "http", "systemPoller": [ "telemetry-systemPoller-azure" ] }, "telemetry-systemPoller-azure": { "class": "Telemetry_System_Poller", "interval": 60, "actions": [ { "excludeData": {}, "locations": { "system": { "asmAttackSignatures": true, "diskLatency": true, "tmstats": true, "location": true, "description": true } } } ] }, "telemetry-listener-azure": { "class": "Telemetry_Listener", "port": 6514, "enable": true, "trace": false, "match": "", "actions": [ { "setTag": { "tenant": "`T`", "application": "`A`" }, "enable": true }, { "excludeData": {}, "ifAnyMatch": [ { "Entity": "DNS_Offbox_All" }, { "Entity": "ProcessCpuUtil" }, { "Entity": "TcpStat" }, { "Entity": "IruleEvents" }, { "Entity": "ACL_STAGE" }, { "Entity": "ACL_FORCE" }, { "Entity": "ServerHealth" }, { "Entity": "GtmWideip" }, { "Entity": "BOT DEFENSE EVENT" }, { "Entity": "InterfaceHealth" }, { "Entity": "AsmBypassInfo" }, { "Entity": "FwNatTransSrc" }, { "Entity": "FwNatTransDest" } ], "locations": { "^.*$": true } } ] }, "telemetry-pushConsumer-azure": { "class": "Telemetry_Consumer", "type": "Azure_Log_Analytics", "format": "propertyBasedV2", "trace": false, "workspaceId": "{{LOG_ANALYTICS_WORKSPACE_ID}}", "passphrase": { "cipherText": "{{LOG_ANALYTICS_PRIMARY_KEY}}" }, "useManagedIdentity": false } } Note: If you are using an MSI managed identity, the consumer changes to this: "telemetry-pushConsumer-azure": { "class": "Telemetry_Consumer", "type": "Azure_Log_Analytics", "format": "propertyBasedV2", "trace": false, "useManagedIdentity": true } You need to look for a "200 OK" response to come back from the REST client. The logs for Telemetry go into: /var/log/restnoded/restnoded.log and will alert if there are errors in connectivity from the BIG-IP into LAW. Adding Non-System Logs To add logs from the Security managers on BIG-IP (AFM, ASM ..etc) you need to create a few AS3 resources to handle the internal routing of logs from the various managers into the telemetry listener just created above. The resources are: a Log Publisher for the security log profile to link to. a Log Destination formatter to a high speed link (HSL) pool, with a format type of "splunk" a Log Destination HSL to a pool which maps to an internal address using TCP port 6514 an LTM Pool which uses a local address. a TCP Virtual Server (vIP) on tcp/6514 with the local address. an iRule for the vIP to remap traffic onto the loopback address (where it will be picked up by the TS listener. 
"irule-telemetryLocalRule": { "class": "iRule", "remark": "Telemetry Streaming", "iRule": { "base64": "d2hlbiBDTElFTlRfQUNDRVBURUQgcHJpb3JpdHkgNTAwIHsNCiAgbm9kZSAxMjcuMC4wLjEgNjUxNA0KfQ==" } }, "logDestination-telemetryHsl": { "class": "Log_Destination", "type": "remote-high-speed-log", "protocol": "tcp", "pool": { "use": "pool-telemetry" } }, "logDestination-telemetry": { "class": "Log_Destination", "type": "splunk", "forwardTo": { "use": "logDestination-telemetryHsl" } }, "logPublisher-telemetry": { "class": "Log_Publisher", "destinations": [ { "use": "logDestination-telemetry" } ] }, "pool-telemetry": { "class": "Pool", "remark": "Telemetry Streaming to Azure Sentinel", "monitors": [ ], "members": [ { "serverAddresses": [ "255.255.255.254" ], "adminState": "enable", "servicePort": 6514 } ] }, "vip-telemetryLocal": { "class": "Service_TCP", "virtualAddresses": [ "255.255.255.254" ], "iRules": [ "irule-telemetryLocalRule" ], "pool": "pool-telemetry", "remark": "Telemetry Streaming", "addressStatus": true, "virtualPort": 6514 } The iRule is base64 encoded in the AS3 declaration above but is just this: when CLIENT_ACCEPTED priority 500 { node 127.0.0.1 6514 } Now you have a Log Publisher which routes to a local pool mapping to the loopback of the BIG-IP. The TS Listener will then pick this up (notice the port in the TS Declaration object "telemetry-listener-azure" matches the Log High Speed Logging Destination pool (6514)). Loopback Issue When creating the virtual server above, tmm errors are observed which prevent logging via the Telemetry virtual server iRule as it rejects remapping to the loopback. The following log is seen in /var/log/ltm : testf5 err tmm1[6506]: 01220001:3: TCL error: /Common/Shared/irule-telemetryLocalRule - disallow self or loopback connection (line 1)TCL error (line 1) (line 1) invoked from within "node 127.0.0.1 6514" Ref: After an upgrade, iRules using the loopback address may fail and log TCL errors (f5.com) To fix this, change the following db value: tmsh modify sys db tmm.tcl.rule.node.allow_loopback_addresses value true tmsh save sys config Adding AFM logs The Advanced Firewall Manager allows firewall policy to be defined at a number of points (called "contexts") in the flow of traffic through the F5. A global policy can be applied, or a policy can be added at the Self-IP, Route Domain, or Virtual Server level. What is important to realize is that there is a pre-built Security Logging Profile for all policies operating at the 'Global' context - called global-network. If your policy is applied as a global policy, you have to change this profile to get logging into Azure. The profile is here under Security / Event Logs / Logging Profiles: Click on the 'global-network' profile and in the "Network Firewall" tab set the publisher to the one you have built above. You can also decide what to log - at the least you should log any policy drops or rejects: For any AFM policies added at any other context, you can create your own logging profile The logs produced go into the Azure custom log: F5Telemetry_AFM_CL. A log is produced for every firewall event with the column "action_s" recording the rule match action (Accept, Drop or Reject). Adding ASM, DDoS and IDPS logs Logging for the Application Security Manager (ASM), Protocol Inspection (IDPS) and DoS Protection features are all via a Security Logging Profile which is then assigned to the virtual server. 
"security-loggingProfile": { "class": "Security_Log_Profile", "application": { "localStorage": false, "remoteStorage": "splunk", "protocol": "tcp", "servers": [ { "address": "127.0.0.1", "port": "6514" } ], "storageFilter": { "requestType": "all" } }, "network": { "publisher": { "use": "logPublisher-telemetry" }, "logRuleMatchAccepts": false, "logRuleMatchRejects": true, "logRuleMatchDrops": true, "logIpErrors": true, "logTcpErrors": true, "logTcpEvents": true }, "dosApplication": { "remotePublisher": { "use": "logPublisher-telemetry" } }, "dosNetwork": { "publisher": { "use": "logPublisher-telemetry" } }, "protocolDnsDos": { "publisher": { "use": "logPublisher-telemetry" } }, "protocolInspection": { "publisher": { "use": "logPublisher-telemetry" }, "logPacketPayloadEnabled": true } }, In the example above we are enabling logging for the ASM in the "application" property. An important configuration here is server setting. ASM logging only works if the address used here is 127.0.0.1 and port tcp/6514. In the GUI it looks like this: We have also enabled logging for DoS and IDS/IPS (Protocol Inspection). This is more straightforward as it just references the Log Publisher we created earlier: To assign the various Security features to the virtual server, we use the Security Policy tab and as we mentioned, this is also where we assign the Security Log Profile we created earlier: An example AS3 code snippet for a HTTP virtual server matching what you see in the GUI above is shown below: "vip-testapi": { "class": "Service_HTTPS", "virtualAddresses": [ "172.16.255.254" ], "shareAddresses": false, "profileHTTP": { "use": "http" }, "remark": "Test API", "addressStatus": true, "allowVlans": [ "vlan001" ], "virtualPort": 443, "redirect80": false, "snat": "auto", "policyWAF": { "use": "policy-test" }, "profileDOS": { "use": "dos" }, "profileProtocolInspection": { "use": "protocol_inspection_http" }, "securityLogProfiles": [ { "bigip": "security-loggingProfile" } ] } The securityLogProfiles property references the logging profile we created above. Note that an "Application Security Policy" (property: policyWAF) can only be enabled when the virtual server is of type: Service_HTTP or Service_HTTPS and has a HTTP profile assigned (property: profileHTTP). The outputted logs from the various security managers end up in the following logs: Advanced Firewall Manager (AFM) F5Telemetry_AFM_CL | where isnotempty(acl_policy_name_s) Application Security Manager (ASM) F5Telemetry_ASM_CL DoS Protection F5Telemetry_AVR_CL | where Entity_s contains "DosVisibility" or Entity_s contains "AfmDosStat" Protocol Inspection F5Telemetry_AFM_CL | where isnotempty(insp_id_s) Adding DNS logs If you are using the BIG-IP as an DNS (formally GTM) for GSLB Wide IP load balancing, you will probably want to see the GSLB requests logged in Azure. I found a couple of issues with this... Firstly, the DNS logging profile does not support the "splunk" format which the log destination needs to be for the AFM logging. 
If you create a separate log destination for "syslog" format, this creates a separate log in Azure called "F5Telemetry_event_CL" which just dumps the raw data in a "data_s" column like this: Therefore, what I have done is created an GTM iRule which can be added to the GSLB Listener and used to generate request/response DNS logs into the F5Telemetry_LTM_CL log: when DNS_REQUEST priority 50 { set hostname [info hostname] set ldns [IP::client_addr] set vs_name [virtual name] set q_name [DNS::question name] set q_type [DNS::question type] set now [clock seconds] set ts [clock format $now -format {%a, %d %b %Y %H:%M:%S %Z}] if { $q_type == "A" or $q_type == "AAAA" } { set hsl_reqlog [HSL::open -proto TCP -pool "/Common/Shared/pool-telemetry"] HSL::send $hsl_reqlog "event_source=\"dns_request_logging\",hostname=\"$hostname\",client_ip=\"$ldns\",server_ip=\"\",http_method=\"\",http_uri=\"\",virtual_name=\"$vs_name\",dns_query_name=\"$q_name\",dns_query_type=\"$q_type\",dns_query_answer=\"\",event_timestamp=\"$ts\"\n" unset hsl_reqlog -- } unset ldns vs_name q_name q_type now ts -- } when DNS_RESPONSE priority 50 { set hostname [info hostname] set ldns [IP::client_addr] set vs_name [virtual name] set q_name [DNS::question name] set q_type [DNS::question type] set q_answer [DNS::answer] set now [clock seconds] set ts [clock format $now -format {%a, %d %b %Y %H:%M:%S %Z}] if { $q_type == "A" or $q_type == "AAAA" } { set hsl_reslog [HSL::open -proto TCP -pool "/Common/Shared/pool-telemetry"] HSL::send $hsl_reslog "event_source=\"dns_response_logging\",hostname=\"$hostname\",client_ip=\"$ldns\",server_ip=\"\",http_method=\"\",http_uri=\"\",virtual_name=\"$vs_name\",dns_query_name=\"$q_name\",dns_query_type=\"$q_type\",dns_query_answer=\"$q_answer\",event_timestamp=\"$ts\"\n" unset hsl_reslog -- } unset ldns vs_name q_name q_type q_answer now ts -- } Just add this to the GTM Listener and ensure you don't have the DNS logging profile enabled in the DNS profile: Here are the logs (nicely formatted!): The event_source_s column is set to "dns_request_logging" and "dns_response_logging" to distinguish them from the LTM request logs in this log. Adding LTM logs LTM logs in Sentinel are sent to the custom log F5Telemetry_LTM_CL and are the output from the Request Logging service in BIG-IP. This creates a log for every HTTP request (and optionally the response) which is made through a Virtual Server which has a HTTP profile applied and which also includes a Request Logging profile. Request Logging uses the High Speed Log (HSL) to send the logs directly out of TMM. We already setup a HSL log destination in our base AS3 declaration so we can use this. The request log is very flexible in what you want to record and fields are detailed here: Reference: Configuring request logging using the Request Logging profile (f5.com) I find a useful field is $TIME_USECS which is added to the Microtimestamp column. This is useful as it can be used to tie together the request with the response when troubleshooting. 
Here is the AS3 code snippet for adding a Request Logging Profile: "profile-ltmRequestLog": { "class": "Traffic_Log_Profile", "requestSettings": { "requestEnabled": true, "requestProtocol": "mds-tcp", "requestPool": { "use": "pool-telemetry" }, "requestTemplate": "event_source=\"request_logging\",hostname=\"$BIGIP_HOSTNAME\",client_ip=\"$CLIENT_IP\",server_ip=\"$SERVER_IP\",dest_ip=\"$VIRTUAL_IP\",dest_port=\"$VIRTUAL_PORT\",http_method=\"$HTTP_METHOD\",http_uri=\"$HTTP_URI\",virtual_name=\"$VIRTUAL_NAME\",event_timestamp=\"$DATE_HTTP\",Microtimestamp=\"$TIME_USECS\"" }, "responseSettings": { "responseEnabled": true, "responseProtocol": "mds-tcp", "responsePool": { "use": "pool-telemetry" }, "responseTemplate": "event_source=\"response_logging\",hostname=\"$BIGIP_HOSTNAME\",client_ip=\"$CLIENT_IP\",server_ip=\"$SERVER_IP\",http_method=\"$HTTP_METHOD\",http_uri=\"$HTTP_URI\",virtual_name=\"$VIRTUAL_NAME\",event_timestamp=\"$DATE_HTTP\",http_statcode=\"$HTTP_STATCODE\",http_status=\"$HTTP_STATUS\",Microtimestamp=\"$TIME_USECS\",response_ms=\"$RESPONSE_MSECS\"" } } Note that it references the High Speed Logging pool we created earlier. If you want to add the template in the BIG-IP GUI, below is the formatted text to add to the template field. Make sure 'Request Logging' and 'Response Logging' is enabled, the HSL Protocol is TCP, and the Pool Name is the pool we created earlier (called 'pool-telemetry' in my example): Request Settings / Template: event_source="request_logging",hostname="$BIGIP_HOSTNAME",client_ip="$CLIENT_IP",server_ip="$SERVER_IP",dest_ip="$VIRTUAL_IP",dest_port="$VIRTUAL_PORT",http_method="$HTTP_METHOD",http_uri="$HTTP_URI",virtual_name="$VIRTUAL_NAME",event_timestamp="$DATE_HTTP",Microtimestamp="$TIME_USECS" Response Settings / Template: event_source="response_logging",hostname="$BIGIP_HOSTNAME",client_ip="$CLIENT_IP",server_ip="$SERVER_IP",http_method="$HTTP_METHOD",http_uri="$HTTP_URI",virtual_name="$VIRTUAL_NAME",event_timestamp="$DATE_HTTP",http_statcode="$HTTP_STATCODE",http_status="$HTTP_STATUS",Microtimestamp="$TIME_USECS",response_ms="$RESPONSE_MSECS" Sending syslog to Azure Some errors on the system may not show up in the standard telemetry logging tables - in particular TLS errors due to certificate issues (reported by the pkcs11d daemon) do not generate logs. To aid reporting, we can redirect syslog for any logs of a particular level (e.g. warning and above) and push them to the localhost on port 6514 - they are then picked up by the Telemetry System listener and pushed out to Azure Log Analytics. (tmos)# edit /sys syslog all-properties this opens up the settings in the vi editor. in the edited section remove the line: include none replace with below: include " filter f_remote_loghost { level(warning..emerg); }; destination d_remote_loghost { udp(\"127.0.0.1\" port(6514)); }; log { source(s_syslog_pipe); filter(f_remote_loghost); destination(d_remote_loghost); }; " then write-quit vi (type ':wq') you should get the prompt: Save changes? (y/n/e) select 'y' finally save the config: (tmos)# save /sys config This will create a new custom log called F5Telemetry_syslog_CL which contains the syslog message. The messages are send in raw format, so need a bit of kusto manipulation. 
The following KQL extracts the data into columns holding the reporting process/daemon, the severity, the hostname, and the log text:

F5Telemetry_syslog_CL
| extend processName = extract(@'([\w-]+)\[\d+\]:', 1, data_s)
| extend message_s = extract(@'\[\d+\]: (.*)', 1, data_s)
| extend severity = extract(@'(\w+)\s[\w-]+\[\d+\]', 1, data_s)
| extend severity_s = replace_strings(
    severity,
    dynamic(['err', 'emerg']),          // Lookup strings
    dynamic(['error', 'emergency'])     // Replacements
  )
| project TimeGenerated, Severity = severity_s, Process = processName, ['Log Message'] = message_s, Hostname = tostring(split(hostname_s, ".")[0])
| order by TimeGenerated desc

The output looks like this:

The Azure Log Collector API
Telemetry Streaming leverages the Azure HTTP Data Collector API as a client and uses the exposed REST API to send formatted log data. All data in Log Analytics is stored as a record with a particular record type. TS formats the data as multiple records in JSON format with appropriate headers to direct the data into specific logs. An individual record is created for each record in the request payload. The data sent into the Azure Monitor HTTP Data Collector API via Telemetry Streaming is formatted to place records whose Type is equal to the LogType value specified, appended with _CL. For example, the Telemetry System Listener creates logs with a logType of "F5Telemetry_system", which outputs all records into a custom log in the Log Analytics Workspace called F5Telemetry_system_CL.

Reference: https://learn.microsoft.com/en-us/previous-versions/azure/azure-monitor/logs/data-collector-api

Note: Please be aware that the API has been deprecated and will no longer be functional as of 14/09/2026. Hopefully TS will be updated accordingly.
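As a final sanity check that telemetry is actually arriving while the integration remains in place, a quick query across the custom tables used in this article (only a suggestion; adjust the table list to whatever you have enabled) can be run in the workspace:

union withsource=TableName F5Telemetry_system_CL, F5Telemetry_LTM_CL, F5Telemetry_ASM_CL, F5Telemetry_AFM_CL, F5Telemetry_AVR_CL
| where TimeGenerated > ago(1h)
| summarize Events = count(), LastSeen = max(TimeGenerated) by TableName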
Mitigating OWASP Web Application Risk: Broken Access attacks using F5 Distributed Cloud Platform

This article is a continuation of the OWASP series and covers Broken Access Control. Check here for the overview article.

Introduction to Broken Access Control attacks:
Access controls enforce policy such that users cannot act outside of their intended permissions. Also called authorization, access control allows or denies access to your application's features and resources. Misuse of access control enables:
Unauthorized access to sensitive information.
Privilege escalation.
Illegal file executions.
There are many ways to infiltrate application servers using broken access controls, and we are going to focus on the two scenarios below and how to mitigate them.

Scenario 1: Broken access + SQL injection attack
Instead of logging in with valid credentials, the attacker uses a SQL injection attack to log in as another standard or higher-privileged user, like admin. We can also call this broken authentication, because the attacker authenticated to the system using an injection attack without providing valid credentials. For this demo I am using OWASP Juice Shop (reference links at the bottom for more info).
Step 1: Follow the steps suggested in Article1 to configure the HTTP load balancer and WAF in the cloud console. Make sure the WAF is configured in Monitoring mode to generate the attack.
Step 2: Open a browser and navigate to the login page of the application load balancer. In the Email field provide "' OR true --" and any password, as below:
Step 3: Validate that you can log in to the application as administrator, as below:

Scenario 2: File upload vulnerability
Any file which has the capability to harm the server is a malicious file. For example, a PHP file which uses dangerous PHP functions like exec() can be considered a malicious file, as these functions can execute OS commands and can remotely provide control of the application server. Suppose there is file upload functionality in the web application and only files with a jpeg extension are allowed to be uploaded. Failing to properly enforce access restrictions on file properties can lead to broken access control attacks, giving attackers a way to upload potentially dangerous files with different extensions. For this demo I am using DVWA as the vulnerable testing application (reference links at the bottom for more info).
Step by step process:
Step 1: Open a notepad editor, paste the below contents and save it to the desktop as malicious.php
Step 2: Open a browser and navigate to the application load balancer URL. Log in to the DVWA application using admin/password as the credentials. Click the "File Upload" option on the left side of the menu section.
Step 3: This page is used to upload images with extensions .jpeg, .png, .gif etc., but this demo application doesn't have file restrictions enabled, allowing attackers to upload any file extension. Click the "Choose File" button and upload the .php file created above.
Step 4: Note the location displayed in the message, open the URL in the browser and validate that we can see all the users available, as below.
NOTE: Since this is just a demo environment, I'm using the same F5 Distributed Cloud load balancer for both demo applications by changing the IP and ports in the F5 Distributed Cloud origin pool as per my needs. That's why you can see both apps are accessible using the juiceshop domain.

Solution: To mitigate these attacks, navigate to the Firewall section and in the "App Firewall" configuration make sure "Enforcement Mode" is set to "Blocking" as below:
Next, in the browser try to generate the above scenarios and validate that your request is blocked as below.
Login mitigation:
Illegal file upload mitigation:
Illegal file execution mitigation:
In the Distributed Cloud Console, expand the security event and check the WAF section to understand the reason why the request was blocked.

Conclusion: As shown above, OWASP Top 10 Broken Access Control attacks can be mitigated by configuring the WAF in "Blocking" mode. For further information click the links below:
OWASP - Broken access control
File Upload Vulnerability
OWASP Juice Shop
DVWA

F5xC Migration
Hey Amigos,

Need some advice. I am implementing F5xC on our infra and migrating applications; however, I ran into a small problem and need guidance. There's an on-prem application sitting behind a Citrix LB with the SSL offloaded directly onto the backend members, i.e. SSL passthrough configured. We have to migrate this app behind F5xC, with the SSL certificate on F5xC as well. I have the below concerns:
Would this solution work if we get the SSL cert from the server itself and deploy it on F5xC?
Has anyone implemented this sort of solution before? If yes, can you share your observations?
There's no test env, so I can't really test this in non-prod. This has to be implemented in prod directly, hence the precautions :)