waf
F5 BIG-IP Advanced WAF – DoS profile configuration options.
F5 BIG-IP Advanced WAF is the perfect tool for detecting and preventing application-layer Distributed Denial-of-Service (DDoS) attacks against a web application. This article reviews the configuration options of the DoS profile, also known as the Advanced WAF anti-DDoS feature, used to stop those attacks.
Need to restrict access to URLs
Hello team, I have a new site, https://xyz.com, that needs to be published to the internet. We are planning to launch its services in phases. For the first phase I have received a set of 29 URI paths (these are wildcard URI paths, i.e. https://xyz.com/asdf/xyz/morning*) that need to be accessible from internet public IPv4 and public IPv6 addresses. Any other URI path than these 29 should be redirected to https://oldapplication.com when accessed from internet public IPv4 and public IPv6 addresses. Access to https://xyz.com from internal organization private IPs should be allowed without any URI path restriction. Please advise how I can achieve the above requirement using an iRule, an LTM policy, or the WAF. Thanks in advance
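One way to approach this is with an iRule and two data groups. Below is a minimal sketch, assuming you create a string-type data group (here called allowed_uri_paths) containing the 29 permitted URI prefixes and an address-type data group (here called internal_networks) containing your internal ranges; both names and the redirect target are illustrative, not existing objects:

    when HTTP_REQUEST {
        # Internal clients: no URI restriction
        if { [class match [IP::client_addr] equals internal_networks] } {
            return
        }
        # External clients: allow only URIs that start with an entry in the allow-list data group
        if { [class match [HTTP::uri] starts_with allowed_uri_paths] } {
            return
        }
        # Everything else from the internet goes to the old application
        HTTP::redirect "https://oldapplication.com"
    }

The same logic can also be expressed as an LTM policy if you prefer to avoid iRules; the data-group approach simply keeps the 29 prefixes out of the rule body so they can be updated without editing the iRule.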
cannot find Security -> Application Security: Headers: Cookie List
Hello F5 Community, My WAF trial VM runs on 17.1.1.4. I cannot find Security -> Application Security : Headers : Cookie List in the WAF GUI. Has that feature been removed, or has it moved to a new location? I searched the internet but could not find an answer. Security -> Application Security : Security Policies : Policy -> HTTP Message Protection -> Cookies is also empty.
GitOps: Declarative Infrastructure and Application Delivery with NGINX App Protect

First things first: what is GitOps? In a nutshell, GitOps is a practice (Git Operations) that lets you use Git and a code repository as your configuration source of truth (declarative Infrastructure as Code and Application Delivery as Code), coupled with various supporting tools. The state of your Git repository syncs with your infrastructure and application states. As the operations team runs daily operations (CRUD - "Create, Read, Update and Delete") leveraging the philosophy of DevOps, they are no longer required to store configuration manifests in various configuration systems; the Git repo is the source of truth. Typically, the target systems or infrastructures run on a Kubernetes-based platform. For further details and a better explanation of GitOps, please refer to the links below.
https://codefresh.io/learn/gitops/
https://www.atlassian.com/git/tutorials/gitops

Why practice GitOps?
I have been managing my lab environment for many years. I use my lab for technology research, customer demos/proofs of concept, application testing, and code development. Because of the constant changes to my environment (its agile and dynamic nature), especially across multiple versions of my Kubernetes platform, I have been spending too much time updating, changing, building, deploying and testing various cloud-native apps. Commands like docker build, kubectl, istioctl and git have been used constantly and repetitively to operate the environment. Hence, I practice GitOps for my Kubernetes infrastructure. Of course, these tasks can be automated and orchestrated with tools such as Ansible, Terraform, Chef and Puppet, and you may not need GitOps to achieve a similar outcome. I adopted the GitOps practice partly to learn the new "language" and to experience first-hand the full benefits of GitOps. Here are some of the learnings and operational experience I have gained managing F5's NGINX App Protect and the many demo apps it protects, which may benefit you and give you some insight into running your own GitOps; you can adapt your own GitOps workflow from here. For details on F5's NGINX App Protect, please refer to https://www.nginx.com/products/nginx-app-protect/

Key architecture decisions of my GitOps workflow
- Modular architecture - allows me to swap technologies in and out without rework ("Lego blocks").
- Must reduce my operational work - save time, avoid repetitive tasks, write once and deploy many.
- Centralize all my configuration manifests - a single source of truth. Currently my configuration exists everywhere (jump hosts, local laptop, cloud storage, etc.) and I had lost track of which configuration was the latest.
- Must be simple, modern, easy to understand and as native as possible.

Use case and desired outcome
- Build and keep up to date the NGINX Plus Ingress Controller with NGINX App Protect in my Kubernetes environment.
- Build and keep up to date NGINX App Protect's attack signatures and Threat Campaign signatures.
- Zero downtime/impact to apps protected by NGINX App Protect across its frequent release, update and patch cycles.

My problem statement
I need to ensure that my infrastructure (Kubernetes Ingress Controller) and web application firewall (NGINX App Protect) are kept up to date with ease.
For example, when there are new NGINX Ingress and NGINX App Protect updates (e.g., a new version, attack signatures or threat campaign signatures), I would like to seamlessly push the changes out to NGINX Ingress and NGINX App Protect (as it protects my backend apps) without impacting the applications it protects.

GitOps workflow
Start small, and start with a clear workflow. Below is a depiction of the overall GitOps workflow, with NGINX App Protect as the target application. Before describing the full GitOps workflow, let us look at the deployment options for NGINX App Protect.

NGINX App Protect deployment models
There are four common deployment models for NGINX App Protect:
1. Edge - external load balancer and proxies (global enforcement)
2. Dynamic module inside the Ingress Controller (per service/URI/Ingress resource enforcement)
3. Per-service proxy model - Kubernetes service tier (per service enforcement)
4. Per-pod proxy model - proxy embedded in the pod (per endpoint enforcement)
The pipeline demonstrated in this article works with NGINX App Protect deployed either as the Ingress Controller (2) or in the per-service proxy model (3). For the purposes of this article, NGINX App Protect is deployed at the Ingress Controller at the entry point to Kubernetes (the Kubernetes edge proxy).

For those who prefer video:
Part 1/3 – GitOps with NGINX Plus Ingress and NGINX App Protect - Overview
Part 2/3 – GitOps with NGINX Plus Ingress and NGINX App Protect – Demo in Action
Part 3/3 – GitOps with NGINX Plus Ingress and NGINX App Protect - WAF Security Policy Management

Description of the GitOps workflow
1. Operations (myself) updates the nginx-ingress + NAP image repo (e.g. .gitlab-ci.yml in nginx-plus-ingress) via VS Code, merges the code and commits the changes to the repo stored in GitLab.
2. The GitLab CI/CD pipeline is triggered and the build, test and deploy jobs start.
3. The pipeline clones the kubernetes-ingress repo from https://github.com/nginxinc/kubernetes-ingress/, checks out the latest kubernetes-ingress version and builds a new image with the DockerfilewithAppProtectForPlus dockerfile. [Ensure you have an appropriate NGINX App Protect license in place.]
4. As part of the build process, the pipeline triggers Trivy container security scanning for vulnerabilities (the open-source scanner from Aqua Security). Upon completion of the static binary scan, the pipeline uploads the scan report back to the repo for continuous security improvement.
5. The pipeline pushes the image to a private repository. The private image repo is configured to perform a nightly container security scan (Clair Scanner) - leveraging multiple scanning tools as a check and balance.
6. The pipeline clones the nginx-ingress deployment repo (my-kubernetes-apps) and updates the nginx-ingress deployment manifest with the latest build image tag (see the excerpt of the manifest below).
7. GitLab triggers a webhook to ArgoCD to refresh/sync the desired application state in ArgoCD with the deployment state in Kubernetes. By default, ArgoCD syncs with Kubernetes every 3 minutes; the webhook triggers an immediate sync.
8. ArgoCD (deployed on an independent K3s cluster) fetches the new code from the repo and detects the changes. ArgoCD automates deployment of the desired application state to the specified target environment; it tracks updates to Git branches or tags, or can be pinned to a specific version of the manifest at a Git commit (a sketch of what the ArgoCD Application resource could look like is shown after this list).
9. Kubernetes triggers an image pull from the private repo and performs a rolling update, ensuring zero interruption to existing traffic: new nginx-ingress pods spin up and traffic moves to them before the old pods are terminated.
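As referenced in step 8, here is a minimal sketch of what the ArgoCD Application resource driving the sync could look like. This is illustrative only - the path, target revision and namespaces are assumptions, not the exact values used in this demo:

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: nginx-plus-ingress
      namespace: argocd
    spec:
      project: default
      source:
        # deployment repo that the pipeline updates in step 6
        repoURL: https://github.com/fbchan/my-kubernetes-apps.git
        targetRevision: master
        path: nginx-ingress
      destination:
        # API endpoint of the managed cluster (in-cluster address shown; the demo targets an external cluster)
        server: https://kubernetes.default.svc
        namespace: nginx-ingress
      syncPolicy:
        automated:
          prune: true
          selfHeal: true

With automated sync enabled, the GitLab webhook (or the periodic 3-minute poll) is enough for ArgoCD to roll the new image tag out to the cluster.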
Depending on the environment and the organisation's maturity, a successful build, test and deployment onto the DEV environment can then be promoted to the production environment. Note: DAST scanning (ZAP Scanner) is not shown in this demo; it is currently run offline and is not automated. ArgoCD and GitLab are integrated with Slack notifications, and events are reported into a Slack channel via webhook. NGINX App Protect events are sent to an ELK stack for visibility and analytics.

Snippet showing where GitLab CI updates the nginx-plus-ingress deployment manifest (Flow #6). Each new image build is tagged with <branch>-<hash>-<version>.

...
spec:
  imagePullSecrets:
    - name: regcred
  serviceAccountName: nginx-ingress
  containers:
    - image: reg.foobz.com.au/apps/nginx-plus-ingress:master-f660306d-1.9.1
      imagePullPolicy: IfNotPresent
      name: nginx-plus-ingress
...

GitLab CI/CD pipeline
A successful run of the CI/CD pipeline builds, tests and scans the container image, pushes it to the private repository, and commits the code change to the nginx-plus-ingress repo. Note: the Trivy scanning report is uploaded/committed back to the same repo. To prevent GitLab CI from triggering another build process (a "pipeline loop"), that commit is tagged with [skip ci].

ArgoCD continuous deployment
ArgoCD constantly (by default every 3 minutes) syncs the desired application state with my Kubernetes cluster. It ensures that the configuration manifests stored in the Git repository are always synchronised with the target environment. My train-dev apps are protected by nginx-ingress + NGINX App Protect, with a specific (per service/URI enforcement) NGINX App Protect policy applied to this service. nginx-ingress + NGINX App Protect is deployed as an Ingress Controller in Kubernetes, and the pod-template-hash label is tracked by ArgoCD.

$ kubectl -n nginx-ingress get pod --show-labels
NAME                             READY   STATUS    RESTARTS   AGE   LABELS
nginx-ingress-776b64dc89-pdtv7   1/1     Running   0          8h    app=nginx-ingress,pod-template-hash=776b64dc89
nginx-ingress-776b64dc89-rwk8w   1/1     Running   0          8h    app=nginx-ingress,pod-template-hash=776b64dc89

Please refer to the video links above for the full demo in action.

References
Tools involved:
NGINX Plus Ingress Controller - https://www.nginx.com/products/nginx-ingress-controller/
NGINX App Protect - https://www.nginx.com/products/nginx-app-protect/
Gitlab - https://gitlab.com
Trivy Scanner - https://github.com/aquasecurity/trivy
ArgoCD - https://argoproj.github.io/argo-cd/
Harbor Private Repository - https://goharbor.io/
Clair Scanner - https://github.com/quay/clair
Slack - https://slack.com
DAST Scanner - https://www.zaproxy.org/
Elasticsearch, Logstash and Kibana (ELK) - https://www.elastic.co/what-is/elk-stack, https://github.com/464d41/f5-waf-elk-dashboards
K3S - https://k3s.io/

Source repos used for this demonstration:
Repo for building the nginx-ingress + NGINX App Protect image - https://github.com/fbchan/nginx-plus-ingress.git
Repo for the deployment manifests of the nginx-ingress controller with the NGINX App Protect policy - https://github.com/fbchan/my-kubernetes-apps.git

Summary
GitOps is perhaps a new buzzword. It may or may not make sense in your environment, but it definitely makes sense for me. It integrates well with NGINX App Protect and allows me to constantly update and push new code changes into my environment with ease. A few months down the road, when I need to update nginx-ingress and NGINX App Protect, I just trigger a CI job and everything works like magic. Your mileage may vary.
Experience leads me to suggest: start small and simple by "GitOps-ing" one of your apps that requires frequent changes, then learn, revise and continuously improve from there. The outcome GitOps provides will ease your operational burden and help you "do more with less". The ease of integrating nginx-ingress and NGINX App Protect into your declarative infrastructure and application delivery with GitOps, together with F5's industry-leading web application firewall protection, will help reduce your organisation's risk exposure to external and internal application threats.
High CPU utilization (100%)
I observed high CPU utilization (100%) on an F5 device with ASM resource provisioning set to Nominal. I checked the client-side and server-side throughput and both are normal, but the management interface throughput is very high, and I noticed this happens during the same time period each day for the last 30 days. What could be the reason for this spike? Many thanks in advance for your time and consideration.
HSTS is not working
Hi there, We have an iRule configured on a VIP that redirects to a maintenance page if the user accesses a wrong URL. On that maintenance page HSTS is not working, but if we access the right URL then HSTS works. We have enabled HSTS in the HTTP profile, and that profile is attached to the same VIP as the iRule. Is there any way to enable HSTS on the maintenance page, or any other remediation for this issue?

if { $DEBUG } {
    log local0. "TEST - Source IP address: [IP::client_addr]"
}
switch -glob $uri_ext {
    "/httpfoo*"  { set uri_int [string map {"/httpfoo" "/adapter_plain"} $uri_ext] }
    "/httptest*" { set uri_int [string map {"/httptest" "/adapter_plain"} $uri_ext] }
    default {
        HTTP::respond 200 content [ifile get ifile_service_unavailable_html]
        set OK 0
    }
}

Many thanks in advance.
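One likely explanation (an assumption based on the symptom, not something confirmed in the post) is that the HSTS header inserted by the HTTP profile only applies to responses proxied back from a server, not to responses generated locally on the BIG-IP with HTTP::respond. If that is the case, a simple workaround is to add the header explicitly when serving the maintenance page - a minimal sketch, with an illustrative max-age value that should be aligned with the HTTP profile setting:

    default {
        # Serve the maintenance page and add HSTS to the locally generated response
        HTTP::respond 200 content [ifile get ifile_service_unavailable_html] \
            "Strict-Transport-Security" "max-age=16070400; includeSubDomains"
        set OK 0
    }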
About Vulnerability Countermeasures
Thank you for your assistance. I would like to know whether the following product is effective as a countermeasure for these vulnerabilities. Product name: F5 Rules for AWS WAF - Common Vulnerabilities and Exposures. Target vulnerabilities: CVE-2021-26691, CVE-2021-26690, CVE-2020-35452. We apologize for the inconvenience, but we would appreciate it if you could check on this as soon as possible. Thank you in advance for your cooperation.
Using the WAF instead of a jump server for ssh-tunneling?
Hello everyone, This is how it works at the moment: we go from server A, in the internal network, with a public IP, via SSH to a jump server in the DMZ. From the jump server we then go on to server B in the secure zone. I am relatively new to this and have been given the task of finding out whether the WAF can replace the jump server. We use Advanced Web Application Firewall on an r2600 with BIG-IP 17.1.1.3. Is this possible, and what do we need for it? Thank you in advance for your help! Best regards.
Irule to allow specific IPs
I have a site, abc.com, and am trying to achieve the requirements below:
1) If the URI is / it should redirect to abc.com/xyz - open for all.
2) If the URI is /rdp_xyz_tshoot it should be accessible only from the internal network (here we can use a data group list).
As this site is migrated to Akamai, they have a requirement to use the iRule below:

when HTTP_REQUEST {
    if { [HTTP::header exists True-Client-IP] } {
        set trueclientip [HTTP::header True-Client-IP]
        HTTP::header replace X-Forwarded-For $trueclientip
    }
}

Reason for the Akamai iRule above: normally the True-Client-IP header contains the real client IP when requests come from Akamai. It is passed unmodified as part of the request to the pool member, so your backend servers can look for that header and use its value. However, if you want the F5 to translate it into the X-Forwarded-For header, you can use an iRule to convert the Akamai True-Client-IP header to the X-Forwarded-For header.

We are trying the iRule below, which is not working:

when HTTP_REQUEST {
    if { ([HTTP::uri] starts_with "/rdp_xyz_tshoot") && (not[class match [IP::client_addr] equals allowed_IPs]) } {
        reject
    }
    if { [HTTP::uri] == "/" } {
        HTTP::redirect "https://[HTTP::host]/abc_login.jsp"
    }
}

Please help
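A minimal sketch of the second iRule with the most likely syntax issue fixed - in Tcl, "not[class match ...]" without a space does not parse, so the negation is written here with the ! operator instead. Everything else is kept as in the original post; note that if requests reach the VIP via Akamai, [IP::client_addr] will usually be an Akamai edge address, so you may instead want to match on the True-Client-IP header value:

when HTTP_REQUEST {
    # Block the troubleshooting path unless the client is in the allowed_IPs data group
    if { ([HTTP::uri] starts_with "/rdp_xyz_tshoot") && !([class match [IP::client_addr] equals allowed_IPs]) } {
        reject
        return
    }
    # Redirect the bare root URI to the landing page
    if { [HTTP::uri] eq "/" } {
        HTTP::redirect "https://[HTTP::host]/abc_login.jsp"
    }
}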