James_Jinwon_Lee
F5 Employee

In the previous article, we explained how NetSecOps and DevSecOps teams can manage their application security policies to prevent advanced attacks from external networks. In advanced persistent hacking, however, attackers sometimes exploit application vulnerabilities and deliver advanced malware to operators through phishing emails. This is an old technique, but it is still effective and widely used by APT (Advanced Persistent Threat) groups. If attackers obtain a DevOps operator's ID and password using such malware, they can access a Kubernetes or OpenShift cluster through the normal login process and easily bypass even advanced WAF (Web Application Firewall) solutions deployed in front of the cluster. Once an attacker obtains a user ID and password for the Kubernetes or OpenShift cluster, the attacker can also access each application running inside the cluster.

Since most SecOps teams deploy only basic security functions inside the Kubernetes or OpenShift cluster, a hacker who has logged in to the cluster can attack other applications in the same cluster without hitting any security barrier. F5 Container Ingress Service is not designed to stop this sort of attack within the cluster. To overcome this challenge, we have another tool: NGINX App Protect. NGINX App Protect delivers Layer 7 visibility and granular control for applications while enabling advanced application security policies. With an NGINX App Protect deployment, DevSecOps can ensure only legitimate traffic is allowed while all other unwanted traffic is blocked. NGINX App Protect can monitor traffic traversing namespace boundaries between pods and provide advanced Layer 7 protection for East-West traffic.

Solution Overview

This article will cover how NGINX App Protect can protect the critical applications in an OpenShift environment against an attack originating within the same cluster.

Detecting advanced application attacks inside the cluster is beneficial for the DevSecOps team, but it can increase the complexity of security operations. To provide the required level of protection for a critical application, an NGINX App Protect instance should be installed as a 'pod proxy' or a 'service proxy' for that application. This means a customer may need multiple NGINX App Protect instances to protect all of their applications, which at first glance looks like a dramatic increase in the complexity of security-related operations.

Security automation is the recommended way to overcome this added operational complexity. In this use case, we use Red Hat Ansible as our security automation tool. With Red Hat Ansible, users can automate their incident response process with their existing security solutions, which can dramatically reduce the security team's response time from hours to minutes. We use Ansible and Elasticsearch to provide all the required security automation processes in this demo.

(Figure: solution overview)

With all these combined technologies, the solution provides WAF protection for the critical applications deployed in the OpenShift cluster. Once it detects an application-based attack from the same cluster subnet, it immediately blocks the attack and deletes the compromised pod using a pre-defined security automation playbook.


The workflow is organized as shown below:

  1. Malware from a phishing email infects the developer's laptop.
  2. The attacker steals the developer's ID/password using the malware. In this demo, the stolen ID is 'dev_user'.
  3. The attacker logs in to the 'Test App' in the 'dev-test01' namespace, owned by 'dev_user'.
  4. The attacker scans the internal subnet of the OpenShift cluster and finds the 'critical-app' application pod.
  5. The attacker launches a web-based attack against 'critical-app'.
  6. NGINX App Protect protects 'critical-app', so the attack traffic is blocked immediately.
  7. NGINX exports the alert details to the external Elasticsearch system.
  8. If the alert meets a pre-defined condition, Elasticsearch triggers the pre-defined Ansible playbook.
  9. The Ansible playbook accesses OpenShift and deletes the compromised 'Test App' pod automatically.


* Since this demo focuses on an attack inside the OpenShift cluster, it does not include Step 1 and Step 2 (the phishing email).


Understanding the ‘Security Automation’ process

‘Security Automation’ is the key part of this demo because organizations don’t want to respond to each WAF alert manually, one by one. Manual incident-response processes are time-consuming and inefficient, especially in a modern-app environment with hundreds of container-based applications. In this demo, Red Hat Ansible and Elasticsearch handle the security automation. Below is a brief workflow of the security automation in this use case.


(Figure: security automation workflow)

In this use case, F5 Advanced WAF is deployed in front of the OpenShift cluster and inserts an X-Forwarded-For header into each session. Since F5 Advanced WAF inserts the X-Forwarded-For header into every packet that comes from the external network, a packet without an X-Forwarded-For header most likely comes from the internal network.
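This internal-versus-external heuristic can be sketched in a few lines. The helper below is hypothetical and not part of the demo code; "N/A" is the value the NGINX logs use when the header is absent:

```python
def classify_origin(headers: dict) -> str:
    """F5 Advanced WAF inserts an X-Forwarded-For header into every
    request arriving from outside the cluster, so a request without
    one most likely originated on the internal network."""
    xff = headers.get("X-Forwarded-For", "N/A")
    return "external" if xff != "N/A" else "internal"
```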


NGINX App Protect is installed as a ‘pod proxy’ alongside the critical application we want to protect. Because NGINX App Protect runs as a pod proxy, all traffic must pass through it to reach the ‘critical-app’ application.


If NGINX App Protect detects any malicious activity, it sends the alert details to the external Elasticsearch system.


When a new alert arrives from NGINX App Protect, Elasticsearch analyzes its details. If the alert meets the conditions below, Elasticsearch sends a notification to Logstash:

  • The source IP address of the alert is part of the OpenShift cluster subnet.
  • The WAF alert severity is Critical.
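Taken together, the two conditions amount to a simple predicate. Below is a minimal Python sketch of that decision (hypothetical code; the 10.128.0.0/14 pod subnet is the one scanned later in the demo):

```python
import ipaddress

CLUSTER_SUBNET = ipaddress.ip_network("10.128.0.0/14")  # demo cluster pod subnet

def should_trigger(alert: dict) -> bool:
    """Return True when a WAF alert warrants the automated response:
    the source IP lies inside the cluster and the severity is Critical."""
    src = ipaddress.ip_address(alert["ip_client"])
    return src in CLUSTER_SUBNET and alert["severity"] == "Critical"
```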

Once Logstash receives the notification from Elasticsearch, it creates an ip.txt file containing the source IP address of the attack, and then executes the pre-defined Ansible playbook.


The Ansible playbook reads the ip.txt file and extracts the IP address from it. Ansible then accesses OpenShift, finds the compromised pod using that source IP address, and automatically deletes both the compromised pod and the ip.txt file.

Creating the Ansible Playbook

Red Hat Ansible is an automation tool that enables network and security automation with enterprise-ready functions. F5 and Red Hat have a strategic partnership and deliver joint use cases for our customer base. By integrating Ansible with F5 solutions, organizations get single-pane-of-glass management for network and security automation. In this use case, we implement an automated security response process with an Ansible playbook that runs when F5 NGINX App Protect detects malicious activity in the OpenShift cluster. Below is the Ansible playbook that executes the incident response process for the attacker's compromised pod.

ansible_ocp.yaml
 
---
- hosts: localhost
  gather_facts: false
 
  tasks:
  - name: Login to OCP cluster
    k8s_auth:
      host: https://yourocpdomain:6443
      username: kubeadmin
      password: your_ocp_password
      validate_certs: no
    register: k8s_auth_result
 
  - name: Extract IP Address
    command: cat /yourpath/ip.txt
    register: badpod_ip
 
  - name: Extract App Label from OpenShift
    shell: |
      sudo oc get pods -A -o json --field-selector status.podIP={{ badpod_ip.stdout }} |
      grep "\"app\":" |
      awk '{print $2}' |
      sed 's/,//'
    register: app_label
 
  - name: Delete Malicious Deployments
    shell: |
      sudo oc delete all --selector app={{ app_label.stdout }} -A
    register: delete_pod
  
  - name: Delete IP and Info File
    command: rm -rf /yourpath/ip.txt
 
  - name: OCP Service Deletion Completed
    debug:
      msg: "{{ delete_pod.stdout }}"

 

Configuring Elasticsearch Watcher and Logstash

To trigger the Ansible playbook for security automation, SOC analysts first need to validate the alert from NGINX App Protect. Depending on the alert details, the SOC analyst might want to execute a different playbook. For example, if the alert is related to a credential stuffing attack, the SOC analysts may want to block the user's application access; but if the alert is related to a known IP blacklist, the analyst might want to block that IP address at the firewall. To support these requirements, the security team needs a tool that can monitor the security alerts and trigger the required actions based on them.

Elasticsearch Watcher is a feature of the commercial version of Elasticsearch that lets users create actions based on conditions, which are periodically evaluated using queries on the data.

  1. Configuring the Watcher in Kibana

* You need an Elastic Platinum license or Eval license to use this feature on the Kibana.

* Go to Kibana UI.

* Management -> Watcher -> Create -> Create advanced watcher

* Copy and paste the JSON code below

watcher_ocp.json
 
{
  "trigger": {
    "schedule": {
      "interval": "1m"
    }
  },
  "input": {
    "search": {
      "request": {
        "search_type": "query_then_fetch",
        "indices": [
          "nginx-*"
        ],
        "rest_total_hits_as_int": true,
        "body": {
          "query": {
            "bool": {
              "must": [
                {
                  "match": {
                    "outcome_reason": "SECURITY_WAF_VIOLATION"
                  }
                },
                {
                  "match": {
                    "x_forwarded_for_header_value": "N/A"
                  }
                },
                {
                  "range": {
                    "@timestamp": {
                      "gte": "now-1h",
                      "lte": "now"
                    }
                  }
                }
              ]
            }
          }
        }
      }
    }
  },
  "condition": {
    "compare": {
      "ctx.payload.hits.total": {
        "gt": 0
      }
    }
  },
  "actions": {
    "logstash_logging": {
      "webhook": {
        "scheme": "http",
        "host": "localhost",
        "port": 1234,
        "method": "post",
        "path": "/{{watch_id}}",
        "params": {},
        "headers": {},
        "body": "{{ctx.payload.hits.hits.0._source.ip_client}}"
      }
    },
    "logstash_exec": {
      "webhook": {
        "scheme": "http",
        "host": "localhost",
        "port": 9001,
        "method": "post",
        "path": "/{{watch_id}}",
        "params": {},
        "headers": {},
        "body": "{{ctx.payload.hits.hits[0].total}}"
      }
    }
  }
}

2. Configuring the 'logstash.conf' file. Below is the final version of the 'logstash.conf' file.

Please note that you have to start Logstash with 'sudo' privileges.

logstash.conf
 
input {
    syslog {
        port => 5003
        type => nginx
        }
 
    http {
        port => 1234
        type => watcher1
        }
 
    http {
        port => 9001
        type => ansible1
        }
}
 
filter {
    if [type] == "nginx" {

        grok {
            match => {
                "message" => [
                    ",attack_type=\"%{DATA:attack_type}\"",
                    ",blocking_exception_reason=\"%{DATA:blocking_exception_reason}\"",
                    ",date_time=\"%{DATA:date_time}\"",
                    ",dest_port=\"%{DATA:dest_port}\"",
                    ",ip_client=\"%{DATA:ip_client}\"",
                    ",is_truncated=\"%{DATA:is_truncated}\"",
                    ",method=\"%{DATA:method}\"",
                    ",policy_name=\"%{DATA:policy_name}\"",
                    ",protocol=\"%{DATA:protocol}\"",
                    ",request_status=\"%{DATA:request_status}\"",
                    ",response_code=\"%{DATA:response_code}\"",
                    ",severity=\"%{DATA:severity}\"",
                    ",sig_cves=\"%{DATA:sig_cves}\"",
                    ",sig_ids=\"%{DATA:sig_ids}\"",
                    ",sig_names=\"%{DATA:sig_names}\"",
                    ",sig_set_names=\"%{DATA:sig_set_names}\"",
                    ",src_port=\"%{DATA:src_port}\"",
                    ",sub_violations=\"%{DATA:sub_violations}\"",
                    ",support_id=\"%{DATA:support_id}\"",
                    ",unit_hostname=\"%{DATA:unit_hostname}\"",
                    ",uri=\"%{DATA:uri}\"",
                    ",violation_rating=\"%{DATA:violation_rating}\"",
                    ",vs_name=\"%{DATA:vs_name}\"",
                    ",x_forwarded_for_header_value=\"%{DATA:x_forwarded_for_header_value}\"",
                    ",outcome=\"%{DATA:outcome}\"",
                    ",outcome_reason=\"%{DATA:outcome_reason}\"",
                    ",violations=\"%{DATA:violations}\"",
                    ",violation_details=\"%{DATA:violation_details}\"",
                    ",request=\"%{DATA:request}\""
                ]
            }
            break_on_match => false
        }

        mutate {
            split => { "attack_type" => "," }
            split => { "sig_ids" => "," }
            split => { "sig_names" => "," }
            split => { "sig_cves" => "," }
            split => { "sig_set_names" => "," }
            split => { "threat_campaign_names" => "," }
            split => { "violations" => "," }
            split => { "sub_violations" => "," }

            remove_field => [ "date_time", "message" ]
        }

        if [x_forwarded_for_header_value] != "N/A" {
            mutate { add_field => { "source_host" => "%{x_forwarded_for_header_value}" } }
        } else {
            mutate { add_field => { "source_host" => "%{ip_client}" } }
        }

        geoip {
            source => "source_host"
            database => "/etc/logstash/GeoLite2-City.mmdb"
        }
    }
}
 
output {
 
if [type] == 'nginx' {
         elasticsearch {
                hosts => ["127.0.0.1:9200"]
                index => "nginx-%{+YYYY.MM.dd}"
        }
}
 
if [type] == 'watcher1' {
  file {
    path => "/yourpath/ip.txt"
    codec => line { format => "%{message}"}
  }
}
 
if [type] == 'ansible1' {
  exec {
          command => "ansible-playbook /yourpath/ansible_ocp.yaml"
  }
}
}
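The grok patterns in the filter above can be approximated in plain Python, which is handy for testing a captured log line offline before wiring it into Logstash. This is a rough sketch; the sample line is illustrative, with field names taken from the grok filter:

```python
import re

# Illustrative fragment of an NGINX App Protect security log line
SAMPLE = (',attack_type="Command Execution",ip_client="10.128.2.38",'
          'severity="Critical",outcome_reason="SECURITY_WAF_VIOLATION",'
          'x_forwarded_for_header_value="N/A"')

def parse_nap_log(line: str) -> dict:
    """Extract key="value" pairs the same way the grok patterns do."""
    return dict(re.findall(r'(\w+)="([^"]*)"', line))
```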

Simulating the demo

Start the Kibana Watcher and Logstash services before proceeding with this step.


Kubeadmin Console

Please make sure you're logged in to the OCP cluster using a cluster-admin account, and confirm that the 'critical-app' is running correctly.

j.lee$ oc whoami
kube:admin

j.lee$
j.lee$ oc get projects

NAME                                               DISPLAY NAME   STATUS

critical-app                                                      Active
default                                                           Active
dev-test02                                                        Active
kube-node-lease                                                   Active
kube-public                                                       Active
kube-system                                                       Active
openshift                                                         Active
openshift-apiserver                                               Active
openshift-apiserver-operator                                      Active
openshift-authentication                                          Active
openshift-authentication-operator                                 Active
openshift-cloud-credential-operator                               Active

j.lee$ oc get pods -o wide

NAME                               READY   STATUS    RESTARTS   AGE   IP            NODE                                             NOMINATED NODE   READINESS GATES

critical-app-v1-5c6546765f-wjhl9   2/2     Running   1          85m   10.129.2.71   ip-10-0-180-68.ap-southeast-1.compute.internal   <none>           <none>

j.lee$

dev_user Console

  1. Please make sure you're logged in to the OCP cluster using the 'dev_user' account, and confirm that the 'dev-test' app is running correctly.
PS C:\Users\ljwca\Documents\ocp> oc whoami
dev_user

PS C:\Users\ljwca\Documents\ocp>
PS C:\Users\ljwca\Documents\ocp> oc get projects

NAME         DISPLAY NAME   STATUS

dev-test02                  Active

PS C:\Users\ljwca\Documents\ocp>
PS C:\Users\ljwca\Documents\ocp> oc get pods -o wide

NAME                           READY   STATUS    RESTARTS   AGE   IP            NODE                                              NOMINATED NODE   READINESS GATES

dev-test-v1-674f467644-t94dc   1/1     Running   0          6s    10.128.2.38   ip-10-0-155-159.ap-southeast-1.compute.internal   <none>           <none>

2. Log in to the 'dev-test' container using the OCP remote shell command

PS C:\Users\ljwca\Documents\ocp> oc rsh dev-test-v1-674f467644-t94dc
$
$ uname -a
Linux dev-test-v1-674f467644-t94dc 4.18.0-193.14.3.el8_2.x86_64 #1 SMP Mon Jul 20 15:02:29 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

3. Network scanning

This step may take 1-2 hours to complete.

$ nmap -sP 10.128.0.0/14
Starting Nmap 7.80 ( https://nmap.org ) at 2020-09-29 17:20 UTC
Nmap scan report for ip-10-128-0-1.ap-southeast-1.compute.internal (10.128.0.1)
Host is up (0.0025s latency).
Nmap scan report for ip-10-128-0-2.ap-southeast-1.compute.internal (10.128.0.2)
Host is up (0.0024s latency).
Nmap scan report for 10-128-0-3.metrics.openshift-authentication-operator.svc.cluster.local (10.128.0.3)
Host is up (0.0023s latency).
Nmap scan report for 10-128-0-4.metrics.openshift-kube-scheduler-operator.svc.cluster.local (10.128.0.4)
Host is up (0.0027s latency).
.
.
.


After the scan completes, you will find the 'critical-app' in the list.


4. Application Scanning for the target

You can find the open service ports on the target using nmap.

$ nmap 10.129.2.71
Starting Nmap 7.80 ( https://nmap.org ) at 2020-09-29 17:23 UTC
Nmap scan report for 10-129-2-71.critical-app.critical-app.svc.cluster.local (10.129.2.71)
Host is up (0.0012s latency).
Not shown: 998 closed ports
PORT     STATE SERVICE
80/tcp   open  http
8888/tcp open  sun-answerbook
 
Nmap done: 1 IP address (1 host up) scanned in 0.12 seconds
$

However, you will see a 403 error when you try to access the server on port 80. This happens because the default Apache access control only allows traffic from the NGINX App Protect instance.

$ curl http://10.129.2.71/
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>403 Forbidden</title>
</head><body>
<h1>Forbidden</h1>
<p>You don't have permission to access this resource.</p>
<hr>
<address>Apache/2.4.46 (Debian) Server at 10.129.2.71 Port 80</address>
</body></html>
$

Now, you can see the response through port 8888.

$ curl http://10.129.2.71:8888/
<html>
<head>
<title>
Network Operation Utility - NSLOOKUP
</title>
</head>
<body>
    <font color=blue size=12>NSLOOKUP TOOL</font><br><br>
    <h2>Please type the domain name into the below box.</h2>
    <h1>
    <form action="/index.php" method="POST">
        <p>
        <label for="target">DNS lookup:</label>
        <input type="text" id="target" name="target" value="www.f5.com">
        <button type="submit" name="form" value="submit">Lookup</button>
        </p>
    </form>
    </h1>
    <font color=red>This site is vulnerable to Web Exploit. Please use this site as a test purpose only.</font>
</body>
</html>
$

5. Performing the Command Injection attack.

$ curl -d "target=www.f5.com|cat /etc/passwd&form=submit" -X POST http://10.129.2.71:8888/index.php
<html><head><title>SRE DevSecOps - East-West Attack Blocking</title></head><body><font color=green size=10>NGINX App Protect Blocking Page</font><br><br>Please consult with your administrator.<br><br>Your support ID is: 878077205548544462<br><br><a href='javascript:history.back();'>[Go Back]</a></body></html>$
$
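The injection succeeds (or would, without NGINX App Protect blocking it) because the lookup tool passes the 'target' field straight to a shell. Below is a minimal Python sketch of that vulnerable pattern, for illustration only; the demo app itself is PHP, and 'echo' stands in for the nslookup call:

```python
import subprocess

def run_tool_unsafe(target: str) -> str:
    """Vulnerable pattern: user input is concatenated into a shell
    command, so a payload like 'www.f5.com|cat /etc/passwd' pipes into
    and runs the injected command instead of just resolving a domain."""
    return subprocess.run("echo resolving " + target, shell=True,
                          capture_output=True, text=True).stdout

print(run_tool_unsafe("example.com|echo INJECTED"))  # the injected command runs
```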

6. Verify the logs in the Kibana dashboard

You should be able to see the NGINX App Protect alerts on your Elasticsearch.


(Figure: Kibana dashboard showing the NGINX App Protect alerts)

7. Verify that Ansible terminates the compromised pod

Ansible deletes the compromised pod.

(Figure: Ansible deleting the compromised pod)

Summary

Today’s cyber threats are getting more and more sophisticated. Attackers keep trying to find the weakest link in a company’s infrastructure and then move from there toward the company’s data. In most cases, the weakest link in the organization is a human, and the company stores its critical data in applications. This is why attackers use phishing emails to compromise a user’s laptop and then leverage it to access the application.


While F5 works closely with key alliance partners such as Cisco and FireEye to stop advanced malware at the first stage, NGINX App Protect can act as another layer of defence for the application, protecting the organization's data. F5, Red Hat, and Elastic have developed this new, automated protection mechanism. This use case allows the DevSecOps team to easily deploy an advanced security layer in their OpenShift cluster.

If you want to learn more about this use case, please visit the official F5 Business Development GitHub link here.

Version history
Last update: 07-Jan-2021 17:44