NGINX Plus
AWS CloudFormation EC2 UserData example to install NGINX Plus on Ubuntu 20.04 LTS using AWS SecretsManager
Introduction

There are a number of ways to automate NGINX Plus instance creation on AWS - you can create a custom AMI or build a container image. Tools like Ansible, Terraform, Pulumi, etc. can also install and configure NGINX Plus. This example was tested with a CloudFormation template that creates an Ubuntu 20.04 LTS EC2 instance and then uses a UserData script to install NGINX Plus. A critical part of an NGINX Plus install is to copy your NGINX Plus SSL certificate and key - required for access to the NGINX Plus private repository - into /etc/ssl/nginx/. There are a few different ways to achieve this, but the AWS Secrets Manager service provides a lot of audit, access control, and security capabilities. You can use the snippets below in your AWS CloudFormation templates to retrieve the NGINX certificate and key from AWS, configure the NGINX repository on your EC2 instance, and then download and install the software.

Prerequisites

Store your NGINX Plus certificate and key in AWS Secrets Manager - in my example I'm storing the certificate and key in separate secret objects. Create an IAM role and policy to allow your EC2 instance to access the secrets. My policy looks like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "secretsmanager:GetResourcePolicy",
                "secretsmanager:GetSecretValue",
                "secretsmanager:DescribeSecret",
                "secretsmanager:ListSecretVersionIds"
            ],
            "Resource": "arn:aws:secretsmanager:*:<account ID>:secret:*"
        }
    ]
}

Obviously you may want to alter the scope of the resource access. Attach this policy to a role and assign the role to your EC2 instance in the CFT.

Create a parameter (you could skip this and just add the name to the instance profile in the next step, but parameters can be useful if you want to change things at deploy time):

NginxInstanceRole:
  Description: Role for instance
  Type: String
  Default: EC2Secrets

Reference the parameter in the EC2 instance profile:

NGInstanceProfile:
  Type: AWS::IAM::InstanceProfile
  Properties:
    InstanceProfileName: !Sub "nginx-instance-profile-${AWS::StackName}"
    Path: /
    Roles:
      - !Ref NginxInstanceRole

Reference the instance profile in the EC2 block:

# EC2 instance which will have access for http and ssh
EC2Instance:
  Type: 'AWS::EC2::Instance'
  Properties:
    InstanceType: !Ref InstanceType
    SubnetId: !Ref SubnetID
    SecurityGroupIds:
      - !Ref WebSecurityGroupID
      - !Ref AdminSecurityGroupID
    KeyName: !Ref KeyPairName
    ImageId: !Ref InstanceImageId
    IamInstanceProfile: !Ref NginxInstanceRole

UserData Block

Now, in the UserData section of your EC2 instance spec, the following commands will:

- Install the jq package (to parse some output)
- Install the AWS CLI
- Pull the secrets using 'aws secretsmanager get-secret-value'
- Use the jq package and tr to get the files into the correct format (there may be a better way to manage this)
- Follow the standard NGINX Plus install (add the NGINX repo, etc.)
- Start the NGINX Plus service

UserData:
  Fn::Base64: !Sub |
    #!/bin/bash -xe
    sudo apt-get update -y # good practice to update existing packages
    sudo apt-get install -y awscli
    sudo apt install -y jq
    sudo mkdir /etc/ssl/nginx
    # install the key and secret
    aws secretsmanager get-secret-value --secret-id nginxcert --region ${AWS::Region} | jq --raw-output '.SecretString' | tr -d '"{}' | sudo tee /etc/ssl/nginx/nginx-repo.crt
    aws secretsmanager get-secret-value --secret-id nginxkey --region ${AWS::Region} | jq --raw-output '.SecretString' | tr -d '"{}' | sudo tee /etc/ssl/nginx/nginx-repo.key
    # Add the repo
    sudo wget https://cs.nginx.com/static/keys/nginx_signing.key && sudo apt-key add nginx_signing.key
    sudo wget https://cs.nginx.com/static/keys/app-protect-security-updates.key && sudo apt-key add app-protect-security-updates.key
    sudo apt-get install -y apt-transport-https lsb-release ca-certificates
    printf "deb https://pkgs.nginx.com/plus/ubuntu `lsb_release -cs` nginx-plus\n" | sudo tee /etc/apt/sources.list.d/nginx-plus.list
    sudo wget -P /etc/apt/apt.conf.d https://cs.nginx.com/static/files/90pkgs-nginx
    # Install and start
    sudo apt-get update -y # good practice to update existing packages
    sudo apt-get install nginx-plus -y # install web server
    sudo systemctl start nginx.service # start webserver
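For reference, the two secrets the script reads (nginxcert and nginxkey) can be created ahead of time with the AWS CLI. This is only a sketch - it assumes the certificate and key files sit in the current directory and that the region matches the one your stack deploys into:

# store the NGINX Plus repository certificate and key as two separate secrets
aws secretsmanager create-secret --name nginxcert \
    --secret-string file://nginx-repo.crt --region us-east-1
aws secretsmanager create-secret --name nginxkey \
    --secret-string file://nginx-repo.key --region us-east-1

Stored this way, the SecretString is the raw PEM text, so the tr cleanup step in the UserData script is effectively a no-op.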
This should produce a working NGINX Plus install. Comment below if you'd like a complete working CFT example linked to this article.

How to persist 'real' IP into Kubernetes

Summary

How do you preserve the "real" IP address to a Pod (container) that is running in a Kubernetes cluster? Recently Eric Chen and I got asked this question by a customer and we'll share how we solved this problem.

What's the Problem?

In Kubernetes a "pod" is not sitting on your external network; instead it is sitting comfortably on its own network. The issue is that using NodePort to expose the pod will source NAT (SNAT) the traffic, so the pod will see another internal IP address rather than the client's. You could use an "externalTrafficPolicy" to preserve the IP address, but that has some other limitations (you have to steer traffic to the node where the pod is running).

Possible Solutions

X-Forwarded-For

The obvious solution to preserve the source IP address is to have an external load balancer insert an X-Forwarded-For header into HTTP requests and have the pod use the HTTP header value instead of the client IP address. The load balancer would change a request from:

GET /stuff HTTP/1.1
host: mypod.example.com

to:

GET /stuff HTTP/1.1
host: mypod.example.com
X-Forwarded-For: 192.0.2.10

This works well for HTTP traffic, but does not work for TCP traffic.

Proxy Protocol

Proxy Protocol is a method of preserving the source IP address over a TCP connection. Instead of manipulating the traffic at Layer 7 (modifying the application traffic), we will manipulate the TCP traffic to add a prefix to the connection with the source IP address information. At the TCP level this would change the request to:

PROXY TCP[IP::version] [IP::remote_addr] [IP::local_addr] [TCP::remote_port] [TCP::local_port]
GET /stuff HTTP/1.1
host: mypod.example.com

Note that the application payload is not modified, but has data prepended to it. Some implications of proxy protocol:

- In the case of encrypted traffic, there is no need for the load balancer to terminate the SSL/TLS connection.
- The destination of the proxy protocol traffic must be able to remove the prefix prior to processing the application traffic (otherwise it would believe the connection was somehow corrupted). I.e., both endpoints of a connection need to support proxy protocol.

Customer's Problem Statement

The customer wanted to use both a BIG-IP at the edge of their Kubernetes cluster AND use NGINX as the Ingress Controller within Kubernetes. The BIG-IP would be responsible for providing L4 TCP load balancing to NGINX, which would be acting as an L7 HTTP/HTTPS Ingress Controller. Using the solutions outlined earlier we were able to provide a solution that met these requirements:

- BIG-IP utilizes proxy protocol to preserve the source IP address over a TCP connection. BIG-IP can be the sender of proxy protocol with this iRule.
- NGINX is configured to receive proxy protocol connections from the BIG-IP and uses X-Forwarded-For headers to preserve the source IP address to the pod.

To deploy the solution we leveraged Container Ingress Services to deploy the proxy protocol configuration and bring traffic to the NGINX Ingress Controller. The NGINX Ingress Controller was configured to use proxy protocol for connections originating from the BIG-IP.

Conclusion

Using F5 BIG-IP and NGINX we were able to provide a solution to the customer's requirements and provide better visibility into the "real" IP address of their clients.

Better together - F5 Container Ingress Services and NGINX Plus Ingress Controller Integration
Introduction The F5 Container Ingress Services (CIS) can be integrated with the NGINX Plus Ingress Controllers (NIC) within a Kubernetes (k8s) environment. The benefits are getting the best of both worlds, with the BIG-IP providing comprehensive L4 ~ L7 security services, while leveraging NGINX Plus as the de facto standard for micro services solution. This architecture is depicted below. The integration is made fluid via the CIS, a k8s pod that listens to events in the cluster and dynamically populates the BIG-IP pool pointing to the NIC's as they scale. There are a few components need to be stitched together to support this integration, each of which is discussed in detail over the proceeding sections. NGINX Plus Ingress Controller Follow this (https://docs.nginx.com/nginx-ingress-controller/installation/building-ingress-controller-image/) to build the NIC image. The NIC can be deployed using the Manifests either as a Daemon-Set or a Service. See this ( https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/ ). A sample Deployment file deploying NIC as a Service is shown below, apiVersion: apps/v1 kind: Deployment metadata: name: nginx-ingress namespace: nginx-ingress spec: replicas: 3 selector: matchLabels: app: nginx-ingress template: metadata: labels: app: nginx-ingress #annotations: #prometheus.io/scrape: "true" #prometheus.io/port: "9113" spec: serviceAccountName: nginx-ingress imagePullSecrets: - name: abgmbh.azurecr.io containers: - image: abgmbh.azurecr.io/nginx-plus-ingress:edge name: nginx-plus-ingress ports: - name: http containerPort: 80 - name: https containerPort: 443 #- name: prometheus #containerPort: 9113 securityContext: allowPrivilegeEscalation: true runAsUser: 101 #nginx capabilities: drop: - ALL add: - NET_BIND_SERVICE env: - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name args: - -nginx-plus - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret - -ingress-class=sock-shop #- -v=3 # Enables extensive logging. Useful for troubleshooting. #- -report-ingress-status #- -external-service=nginx-ingress #- -enable-leader-election #- -enable-prometheus-metrics Notice the ‘- -ingress-class=sock-shop’ argument, it means that the NIC will only work with an Ingress that is annotated with ‘sock-shop’. The absence of this annotation makes NIC the default for all Ingress created. Below shows the counterpart Ingress with the ‘sock-shop’ annotation. apiVersion: extensions/v1beta1 kind: Ingress metadata: name: sock-shop-ingress annotations: kubernetes.io/ingress.class: "sock-shop" spec: tls: - hosts: - socks.ab.gmbh secretName: wildcard.ab.gmbh rules: - host: socks.ab.gmbh http: paths: - path: / backend: serviceName: front-end servicePort: 80 This Ingress says if hostname is socks.ab.gmbh and path is ‘/’, send traffic to a service named ‘front-end’, which is part of the socks application itself. The above concludes Ingress configuration with the NIC. F5 Container Ingress Services The next step is to leverage the CIS to dynamically populate the BIG-IP pool with the NIC addresses. Follow this ( https://clouddocs.f5.com/containers/v2/kubernetes/kctlr-app-install.html ) to deploy the CIS. 
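One prerequisite worth calling out: the CIS Deployment below pulls the BIG-IP credentials from a Secret named bigip-login. A minimal sketch of creating it, assuming the kube-system namespace used by the manifest (the username and password values are placeholders):

kubectl create secret generic bigip-login \
    --namespace kube-system \
    --from-literal=username=admin \
    --from-literal=password='<bigip-password>'

The key names username and password must match the secretKeyRef entries in the Deployment that follows.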
A sample Deployment file is shown below, apiVersion: extensions/v1beta1 kind: Deployment metadata: name: k8s-bigip-ctlr-deployment namespace: kube-system spec: # DO NOT INCREASE REPLICA COUNT replicas: 1 template: metadata: name: k8s-bigip-ctlr labels: app: k8s-bigip-ctlr spec: # Name of the Service Account bound to a Cluster Role with the required # permissions serviceAccountName: bigip-ctlr containers: - name: k8s-bigip-ctlr image: "f5networks/k8s-bigip-ctlr" env: - name: BIGIP_USERNAME valueFrom: secretKeyRef: # Replace with the name of the Secret containing your login # credentials name: bigip-login key: username - name: BIGIP_PASSWORD valueFrom: secretKeyRef: # Replace with the name of the Secret containing your login # credentials name: bigip-login key: password command: ["/app/bin/k8s-bigip-ctlr"] args: [ # See the k8s-bigip-ctlr documentation for information about # all config options # https://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/latest "--bigip-username=$(BIGIP_USERNAME)", "--bigip-password=$(BIGIP_PASSWORD)", "--bigip-url=https://x.x.x.x:8443", "--bigip-partition=k8s", "--pool-member-type=cluster", "--agent=as3", "--manage-ingress=false", "--insecure=true", "--as3-validation=true", "--node-poll-interval=30", "--verify-interval=30", "--log-level=INFO" ] imagePullSecrets: # Secret that gives access to a private docker registry - name: f5-docker-images # Secret containing the BIG-IP system login credentials - name: bigip-login Notice the following arguments below. They tell the CIS to consume AS3 declaration to configure the BIG-IP. According to PM, CCCL(Common Controller Core Library) – used to orchestrate F5 BIG-IP, is getting removed this sprint for the CIS 2.0 release. '--manage-ingress=false' means CIS is not doing anything for Ingress resources defined within the k8s, this is because that CIS is not the Ingress Controller, NGINX Plus is, as far as k8s is concerned. The CIS will create a partition named k8s_AS3 on the BIG-IP, this is used to hold L4~7 configuration relating to the AS3 declaration. The best practice is also to manually create a partition named 'k8s' (in our example), where networking info will be stored (e.g., ARP, FDB). "--bigip-url=https://x.x.x.x:8443", "--bigip-partition=k8s", "--pool-member-type=cluster", "--agent=as3", "--manage-ingress=false", "--insecure=true", "--as3-validation=true", To apply AS3, the declaration is embedded within a ConfigMap applied to the CIS pod. kind: ConfigMap apiVersion: v1 metadata: name: as3-template namespace: kube-system labels: f5type: virtual-server as3: "true" data: template: | { "class": "AS3", "action": "deploy", "persist": true, "declaration": { "class": "ADC", "id":"1847a369-5a25-4d1b-8cad-5740988d4423", "schemaVersion": "3.16.0", "Nginx_IC": { "class": "Tenant", "Nginx_IC_vs": { "class": "Application", "template": "https", "serviceMain": { "class": "Service_HTTPS", "virtualAddresses": [ "10.1.0.14" ], "virtualPort": 443, "redirect80": false, "serverTLS": { "bigip": "/Common/clientssl" }, "clientTLS": { "bigip": "/Common/serverssl" }, "pool": "Nginx_IC_pool" }, "Nginx_IC_pool": { "class": "Pool", "monitors": [ "https" ], "members": [ { "servicePort": 443, "shareNodes": true, "serverAddresses": [] } ] } } } } } They are telling the BIG-IP to create a tenant called ‘Nginx_IC’, a virtual named ‘Nginx_IC_vs’ and a pool named ‘Nginx_IC_pool’. The CIS will update the serverAddresses with the NIC addresses dynamically. Now, create a Service to expose the NIC’s. 
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
  labels:
    cis.f5.com/as3-tenant: Nginx_IC
    cis.f5.com/as3-app: Nginx_IC_vs
    cis.f5.com/as3-pool: Nginx_IC_pool
spec:
  type: ClusterIP
  ports:
    - port: 443
      targetPort: 443
      protocol: TCP
      name: https
  selector:
    app: nginx-ingress

Notice the labels: they match the AS3 declaration, and this allows the CIS to populate the NICs' addresses into the correct pool. Also notice the kind of the manifest, 'Service'; this means only a Service is created, not an Ingress, as far as k8s is concerned. On the BIG-IP, the following should be created. The end product is below. Please note that this article is focused solely on the control plane, that is, how to get the CIS to populate the BIG-IP with the NICs' addresses. The specific mechanisms to deliver packets from the BIG-IP to the NICs on the data plane are not discussed, as they are decoupled from the control plane. For data plane specifics, please take a look here ( https://clouddocs.f5.com/containers/v2/ ). Hope this article helps to lift the veil on some integration mysteries.

NGINX ingress proxy for Consul Service Mesh
Disclaimer: This blog post is an abridged (NGINX specific) version of the original blog by John Eikenberry at HashiCorp. Consul service mesh provides service-to-service connection authorization and encryption using mutual Transport Layer Security (mTLS). Secure mesh networks require ingress points to act as gateways to enable external traffic to communicate with the internal services. NGINX supports mTLS and can be configured as a robust ingress proxy for your mesh network. For Consul service mesh environments, you can use Consul-template to configure NGINX as a native ingress proxy. This can be beneficial when performance is of utmost importance. Below you can walk through a complete example setup. The examples use consul-template to dynamically generate the proxy configuration and the required certificates for the NGINX to communicate directly with the services inside the mesh network. This blog post is only meant to demonstrate features and may not be a production ready or secure deployment. Each of the services are configured for demonstration purposes and will run in the foreground and output its logs to that console. They are all designed to run from files in the same directory. In this blog post, we use the term ingress proxy to describe Nginx. When we use the term sidecar proxy, we mean the Consul Connect proxy for the internal service. Common Infrastructure The example setup below requires Consul, NGINX, and Python. The former is available at the link provided, NGINX and Python should be easily installable with the package manager on any Linux system. The Ingress proxy needs a service to proxy to, for that you need Consul, a service (E.g., a simple webserver), and the Connect sidecar proxy to connect it to the mesh. The ingress proxy will also need the certificates to make the mTLS connection. Start Consul Connect, Consul’s service mesh feature, requires Consul 1.2.0 or newer. First start your agent. $ consul agent -dev Start a Webserver For the webserver you will use Python’s simple built-in server included in most Linux distributions and Macs. $ python -m SimpleHTTPServer Python’s webserver will listen on port 8000 and publish an index.html by default. For demo purposes, create an index.html for it to publish. $ echo "Hello from inside the mesh." > index.html Next, you’ll need to register your “webserver” with Consul. Note, the Connect option registers a sidecar proxy with the service. $ echo '{ "service": { "name": "webserver", "connect": { "sidecarservice": {} }, "port": 8000 } }' > webserver.json $ consul services register webserver.json You also need to start the sidecar. $ consul connect proxy -sidecar-for webserver Note, the service is using the built-in sidecar proxy. In production you would probably want to consider using Envoy or NGINX instead. Create the Certificate File Templates To establish a connection with the mesh network, you’ll need to use consul-template to fetch the CA root certificate from the Consul servers as well as the applications leaf certificates, which Consul will generate. You need to create the templates that consul-template will use to generate the certificate files needed for NGINX ingress proxy. The template functions caRoots and caLeaf require consul-template version 0.23.0 or newer. Note, the name, “nginx” used in the leaf certificate templates. It needs to match the name used to register the ingress proxy services with Consul below. ca.crt NGINX requires the CA certificate. 
$ echo '{{range caRoots}}{{.RootCertPEM}}{{end}}' > ca.crt.tmpl

Nginx cert.pem and cert.key

NGINX requires the certificate and key to be in separate files.

$ echo '{{with caLeaf "nginx"}}{{.CertPEM}}{{end}}' > cert.pem.tmpl
$ echo '{{with caLeaf "nginx"}}{{.PrivateKeyPEM}}{{end}}' > cert.key.tmpl

Setup The Ingress Proxies

The NGINX proxy service needs to be registered with Consul and have a templated configuration file. In each of the templated configuration files, the connect function is called and returns a list of the services with the passed name "webserver" (matching the registered service above). The list Consul creates is used to create the list of back-end servers to which the ingress proxy connects. The ports are set to different values so you can have both proxies running at the same time. First, register the NGINX ingress proxy service with the Consul servers.

$ echo '{ "service": { "name": "nginx", "port": 8081 } }' > nginx-service.json
$ consul services register nginx-service.json

Next, configure the NGINX configuration file template. Set it to listen on the registered port and route to the Connect-enabled servers retrieved by the Connect call.

nginx-proxy.conf.tmpl

$ cat > nginx-proxy.conf.tmpl << EOF
daemon off;
master_process off;
pid nginx.pid;
error_log /dev/stdout;
events {}
http {
  access_log /dev/stdout;
  server {
    listen 8081 default_server;
    location / {
      {{range connect "webserver"}}
      proxy_pass https://{{.Address}}:{{.Port}};
      {{end}}
      # these refer to files written by templates above
      proxy_ssl_certificate cert.pem;
      proxy_ssl_certificate_key cert.key;
      proxy_ssl_trusted_certificate ca.crt;
    }
  }
}
EOF

Consul Template

The final piece of the puzzle, tying things together, is the consul-template configuration file! It is written in HCL, the HashiCorp Configuration Language, and lays out the command used to run the proxy, the template files, and their destination files.

nginx-ingress-config.hcl

$ cat > nginx-ingress-config.hcl << EOF
exec {
  command = "/usr/sbin/nginx -p . -c nginx-proxy.conf"
}
template {
  source = "ca.crt.tmpl"
  destination = "ca.crt"
}
template {
  source = "cert.pem.tmpl"
  destination = "cert.pem"
}
template {
  source = "cert.key.tmpl"
  destination = "cert.key"
}
template {
  source = "nginx-proxy.conf.tmpl"
  destination = "nginx-proxy.conf"
}
EOF

Running and Testing

You are now ready to run the consul-template managed NGINX ingress proxy. When you run consul-template, it will process each of the templates, fetching the certificate and server information from Consul as needed, and render them to their destination files on disk. Once all the templates have been successfully rendered, it will run the command starting the proxy. Run the consul-template instance that manages NGINX.

$ consul-template -config nginx-ingress-config.hcl

Now with everything running, you are finally ready to test the proxies.

$ curl http://localhost:8081
Hello from inside the mesh!
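Before moving on, it can be instructive to look at what consul-template actually renders: the generated nginx-proxy.conf contains one concrete proxy_pass per Connect-enabled instance of the webserver service. A sketch of the rendered server block (the sidecar address and port are placeholders and will differ in your environment):

server {
  listen 8081 default_server;
  location / {
    # rendered from the {{range connect "webserver"}} block in the template
    proxy_pass https://127.0.0.1:21000;
    proxy_ssl_certificate cert.pem;
    proxy_ssl_certificate_key cert.key;
    proxy_ssl_trusted_certificate ca.crt;
  }
}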
Conclusion

F5 technologies work very well in your Consul environments. In this blog post you walked through setting up NGINX to work as a proxy to provide ingress to services contained in a Consul service mesh. Please reach out to me and the F5-HashiCorp alliance team here if you have any questions, feature requests, or any feedback to make this solution better.

L7 DoS Protection with NGINX App Protect DoS

Intro

The NGINX security module ecosystem becomes more and more solid. The current App Protect WAF offering is now extended by the App Protect DoS protection module. App Protect DoS inherits and extends the state-of-the-art behavioral L7 DoS protection that was initially implemented on BIG-IP and now protects thousands of workloads around the world. In this article, I'll give a brief explanation of the underlying ML-based DDoS prevention technology and demonstrate a few examples of how precisely it stops various L7 DoS attacks.

Technology

It is important to emphasize the difference between the general volumetric-based protection approach that most of the market uses and the ML-based technology that powers App Protect DoS. Volumetric-based DDoS protection is an old and well-known mechanism to prevent DDoS attacks. As the name says, such a mechanism counts the number of requests sharing the same source or destination, then simply drops or applies rate-limiting after some threshold is crossed. For instance, requests sourced from the same IP are dropped after 100 RPS, requests going to the same URL after 200 RPS, and rate-limiting kicks in after 500 RPS for the entire site. Obviously, the major drawback of such an approach is that the selection criterion is too rough. It can cause erroneous drops of valid user requests and overall service degradation. The phenomenon when a security measure blocks good requests is called a "false positive".

App Protect DoS implements much more intelligent techniques to detect and fight off DDoS attacks. At a high level, it monitors all ongoing traffic and builds a statistical model, in other words a baseline, in a process called "learning". The learning process almost never stops, therefore the baseline automatically adjusts to the current web application layout, pattern of use, and traffic intensity. This is important because it drastically reduces maintenance cost and reaction time for the solution. There is no more need to manually customize the protection configuration for every application or traffic change.

Infinite learning produces a legitimate question: why can't the system learn attack traffic as a baseline, and how does it detect an attack then? To answer this question, let us define what a DDoS attack is. A DDoS attack is a traffic stream that intends to deny or degrade access to a service. Note that the definition above doesn't focus on the amount of traffic. 'Low and slow' DDoS attacks can hurt a service as severely as volumetric ones do. Traffic is only considered malicious when the service level degrades. So, this means that attack traffic can become a baseline, but it is not a big deal since the protected service doesn't suffer.

Now only the "service degradation" term separates us from the answer. How does App Protect DoS measure service degradation? As humans, we usually measure the quality of a web service in delays: the longer it takes to get a response, the more we swear. App Protect DoS mimics human behavior by measuring latency for every single transaction and calculates the level of stress for a service. If overall stress crosses a threshold, App Protect DoS declares an attack. Think of it: a service degradation triggers an attack signal, not a traffic volume. Nice! The attack is detected for a solid reason. What happens next? First of all, the learning process stops and rolls back to a moment when the stress level was low.
The statistical model of the traffic that was collected during peacetime becomes a baseline for anomaly detection. App Protect DoS keeps monitoring the traffic during an attack and uses machine learning to identify the exact request pattern that causes a service degradation. Opposed to old-school volumetric techniques it doesn’t use just a single parameter like source IP or URL, but actually builds as accurate as possible signature of entire request that causes harm. The overall number of parameters that App Protect DoS extracts from every request is in the dozens. A signature usually contains about a dozen including source IP, method, path, headers, payload content structure, and others. Now you can see that App Protect DoS accuracy level is insane comparing to volumetric vectors. The last part is mitigation. App Protect DoS has a whole inventory of mitigation tools including accurate signatures, bad actor detection, rate-limiting or even slowing down traffic across the board, which it usesto return service. The strategy of using those is convoluted but the main objective is to be as accurate as possible and make no harm to valid users. In most cases, App Protect DoS only mitigates requests that match specific signatures and only when the stress threshold for a service is crossed. Therefore, the probability of false positives is vanishingly low. The additional beauty of this technology is that it almost doesn’t require any configuration. Once enabled on a virtual server it does all the job "automagically" and reports back to your security operation center. The following lines present a couple of usage examples. Demo Demo topology is straightforward. On one end I have a couple of VMs. One of them continuously generates steady traffic flow simulating legitimate users. The second one is supposed to generate various L7 DoS attacks pretending to be an attacker. On the other end, one VM hosts a demo application and another one hosts NGINX with App Protect DoS as a protection tool. VM on a side runs ELK cluster to visualize App Protect DoS activity. Workflow of a demo aims to showcase a basic deployment example and overall App Protect DoS protection technology. First, I’ll configure NGINX to forward traffic to a demo application and App Protect DoS to apply for DDoS protection. Then a VM that simulates good users will send continuous traffic flow to App Protect DoS to let it learn a baseline. Once a baseline is established attacker VM will hit a demo app with various DoS attacks. While all this battle is going on our objective is to learn how App Protect DoS behaves, and that good user's experience remains unaffected. Similar to App Protect WAF App Protect DoS is implemented as a separate module for NGINX. It installs to a system as an apt/yum package. Then hooks into NGINX configuration via standard “load_module” directive. load_module modules/ngx_http_app_protect_dos_module.so; Once loaded protection enables under either HTTP, server, or location sections. Depending on what would you like to protect. app_protect_dos_enable [on|off] By default, App Protect DoS takes a protection configuration from a local policy file “/etc/nginx/BADOSDefaultPolicy.json” { "mitigation_mode" : "standard", "use_automation_tools_detection": "on", "signatures" : "on", "bad_actors" : "on" } As I mentioned before App Protect DoS doesn’t require complex config and only takes four parameters. 
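Putting the directives above together, a complete minimal configuration can be quite small. The sketch below is only an illustration and makes assumptions: the upstream address and server_name are placeholders, and the policy in use is the default /etc/nginx/BADOSDefaultPolicy.json shown above.

load_module modules/ngx_http_app_protect_dos_module.so;

events {}

http {
    server {
        listen 80;
        server_name app.example.com;         # placeholder name for the protected application

        location / {
            app_protect_dos_enable on;       # behavioral L7 DoS protection applies to this location
            proxy_pass http://10.1.20.10:80; # placeholder address of the demo application
        }
    }
}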
Moreover, default policy covers most of the use cases therefore, a user only needs to enable App Protect DoS on a protected object. The next step is to simulate good users’ traffic to let App Protect DoS learn a good traffic pattern. I use a custom bash script that generates about 6-8 requests per second like an average surfing activity. While inspecting traffic and building a statistical model of good traffic App Protect DoS sends logs and metrics to Elasticsearch so we can monitor all its activity. The dashboard above represents traffic before/after App Protect DoS, degree of application stress, and mitigations in place. Note that the rate of client-side transactions matches the rate of server-side transactions. Meaning that all requests are passing through App Protect DoS and there are not any mitigations applied. Stress value remains steady since the backend easily handles the current rate and latency does not increase. Now I am launching an HTTP flood attack. It generates several thousands of requests per second that can easily overwhelm an unprotected web server. My server has App Protect DoS in front applying all its’ intelligence to fight off the DoS attack. After a few minutes of running the attack traffic, the dashboard shows the following situation. The attack tool generated roughly 1000RPS. Two charts on the left-hand side show that all transactions went through App Protect DoS and were reaching a demo app for a couple of minutes causing service degradation. Right after service stress has reached a threshold an attack was declared (vertical red line on all charts). As soon as the attack has been declared App Protect DoS starts to apply mitigations to resume the service back to life. As I mentioned before App Protect DoS tries its best not to harm legitimate traffic. Therefore, it iterates from less invasive mitigations to more invasive. During the first several seconds when App Protect DoS just detected an attack and specific anomaly signature is not calculated yet. App Protect DoS applies an HTTP redirect to all requests across the board. Such measure only adds a tiny bit of latency for a web browser but allows it to quickly filter out all not-so-intelligent attack tools that can’t follow redirects. In less than a minute specific anomaly signature gets generated. Note how detailed it is. The signature contains 11 attributes that cover all aspects: method, path, headers, and a payload. Such a level of granularity and reaction time is not feasible neither for volumetric vectors nor a SOC operator armed with a regex engine. Once a signature is generated App Protect DoS reduces the scope of mitigation to only requests that match the signature. It eliminates a chance to affect good traffic at all. Matching traffic receives a redirect and then a challenge in case if an attacker is smart enough to follow redirects. After few minutes of observation App Protect DoS identifies bad actors since most of the requests come from the same IP addresses (right-bottom chart). Then switches mitigation to bad actor challenge. Despite this measure hits all the same traffic it allows App Protect DoS to protect itself. It takes much fewer CPU cycles to identify a target by IP address than match requests against the signature with 11 attributes. From now on App Protect DoS continues with the most efficient protection until attack traffic stops and server stress goes away. The technology overview and the demo above expose only a tiny bit of App Protect DoS protection logic. 
A whole lot more of it comes into play for more complicated attacks. However, the results look impressive. None of the volumetric protection mechanisms, or even a human SOC operator, can provide such accurate mitigation within such a short reaction time. It is only possible when a machine fights a machine.

NGINX as an HTTP Load Balancer
Quick Intro

You've probably used NGINX as a web server, and you might be wondering how it works as a load balancer. We don't need a different NGINX file or to install a different package to make it a load balancer; NGINX works as a load balancer out of the box as long as we specify that we want it to work as one. In this article, we're going to introduce the NGINX HTTP load balancer. If you need more details, please refer to the official NGINX documentation. Note that NGINX also supports TCP and UDP load balancing and health monitors, which are not covered here.

NGINX as a WebServer by default

Once NGINX is installed, it works as a web server by default:

The response above came from our client-facing NGINX load-balancer-to-be, which is currently just a web server. Let's make it an HTTP load balancer, shall we?

NGINX as a Load Balancer

Disable WebServer

Let's comment out the default file in /etc/nginx/sites-enabled to disable our local web server, just in case:

Then I reload the config:

If we try to connect now, it won't work because we disabled the default page:

Now we're ready to create the load balancer's file!

Creating the Load Balancer's file

Essentially, the default NGINX config file (/etc/nginx/nginx.conf) already has an http block which references the /etc/nginx/conf.d directory. With that in mind, we can pretty much create our file in /etc/nginx/conf.d/ and whatever is in there will be in HTTP context:

Within the upstream directive, we add our back-end servers along with the port they're listening on. These are the servers the NGINX load balancer will forward requests to. Lastly, we create a server config for our listener with proxy_pass pointing to the upstream name (backends). We now reload NGINX again:

Lab tests

NGINX has a couple of different load balancing methods, and round robin (potentially weighted) is the default one.

Round robin test

I tried the first time:

Second time:

Third time:

Fourth time it goes back to server 1:

That's because the default load balancing method for NGINX is round robin.

Weight test

Now I've added weight=2 and I expect that server 2 will proportionally take 2x more requests than the rest of the servers:

Once again I reload the configuration after the changes:

First request goes to Server 3:

Next one to Server 2:

Then again to Server 2:

And lastly Server 1:

Administratively shutting down a server

We can also administratively shut down server 2 for maintenance, for example, by adding the down keyword:

When we issue the requests, they only go to server 1 or 3 now:

Other Load Balancing methods

Least connections

Least connections sends requests to the server with the least number of active connections, taking into consideration any optionally configured weight:

If there's a tie, round robin is used, taking into account the optionally configured weights. For more information on the least connections method, please click here.

IP Hash

The request is sent to the server as a result of a hash calculation based on the first 3 octets of the client IP address or the whole IPv6 address. This method makes sure requests from the same client are always sent to the same server, unless it's unavailable. Note: a hash from a server that is marked down is preserved for when it comes back up. For more information on the IP hash method, please click here.

Custom or Generic Hash

We can also define a key for the hash itself. In the example below, the key is the variable $request_uri, which represents the URI present in the HTTP request sent by the client:

Least Time (NGINX Plus only)

This method is not supported in the NGINX free version.
In this case, NGINX Plus picks the server with the lowest average latency + number of active connections. The lowest average latency is calculated based on an argument passed to least_time with the following values:

- header: the time it took to receive the first byte from the server
- last_byte: the time it took to receive the full response from the server
- last_byte inflight: the time it took to receive the full response from the server, taking into account incomplete requests

In the example above, NGINX Plus will make its load balancing decision based on the time it took to receive the full HTTP response from our server and will also include incomplete requests in the calculation. Note that /etc/nginx/conf.d/ is the default directory for NGINX config files. In the case above, as I installed NGINX directly from the Debian APT repository, it also added sites-enabled. Some Linux distributions do add it to make it easier for those who use Apache. For more information on the least time method, please click here.

Random

In this method, requests are passed on to randomly selected servers, but it's only ideal for environments where multiple load balancers are passing on requests to the same back-end servers. Note that the least_time parameter in this method is only available on NGINX Plus. For more details about this method, please click here.
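Since the configuration in this walkthrough appears only in screenshots, here is a consolidated sketch of what a file such as /etc/nginx/conf.d/load-balancer.conf could look like, combining the upstream, weight, and down examples described above (the server addresses and file name are placeholders):

upstream backends {
    # round robin is the default; uncomment one of the lines below to switch methods
    # least_conn;
    # ip_hash;
    # hash $request_uri;
    server 10.1.1.11:80;
    server 10.1.1.12:80 weight=2;  # proportionally takes twice the requests
    server 10.1.1.13:80 down;      # administratively out of rotation
}

server {
    listen 80;
    location / {
        proxy_pass http://backends;
    }
}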
API calls authentication is essential for API security and billing. Authentication helps to reduce load by dropping anonymous calls and provides clear view on per user/group usage information since every call bears an identity marker. NGINX Controller provides an easy method for API owners to setup authentication for calls that traverse NGINX Plus instances as API gateways. What is API authentication and how is it different from authorization? API authentication it is an action when API gateway verifies an identity of a call by checking an identity marker (token, credentials, ...) in the call body. Authorization in turn is usually based on authentication. Authorization mechanisms extract an identity marker from a call and check if this identity is allowed to make the call or not. There are many approaches to authenticate an API call: HTTP Basic: API call carries clear text credentials in HTTP Authorization header. E.g. "Authorization: Basic dXNlcjpwYXNzd29yZA==" API key: API call carries an API key in request (multiple injection points possible) E.g: "GET /endpoint?token=dXNlcjpwYXNzd29yZA" Oauth: Complex open source standard for access delegation. When oAuth is in use API consumer obtains cryptographically signed JWT (json web token) token from an external identity provider and places it in the call. Server in turn uses JWK (json web key) obtained from the same identity provider to verify token signature and make sure data in JWT is true. As you may already know from previous articles NGINX Controller doesn't process traffic on its own but it configures NGINX Plus instances which run as API gateways to apply all necessary actions and policies to the traffic. The picture below shows how all these pieces to work together. Picture 1. Controller can setup two approaches for authenticating API calls: API key based oAuth (JWT) based This article covers procedures needed to configure both of supported API call authentication methods. As a prerequisite I assume you already have NGINX Plus and Controller setup along with at least one API published (if not please take a look at the previous article for details) Assume API owner developed an API and wants to make it avaliable for authenticated users only. Owner knows that customers have different use cases therefore different authentication methods fit each use case better. So it is required to authenticate users by API key or by JWT token. As discussed in previous article NGINX Controller abstracts API gateway configuration with higher level concepts for ease of configuration. The abstractions are shown on picture below. Picture 2. Therefore API definition, gateway, and workload group form a data path, the way how calls get accepted and where they get forwarded if all policies are passed. Policies contain necessary verifications/actions which apply for every API call traversing the data path. Picture 3. Amongst others there is authentication policy which allows to authenticate API calls. As shown on picture 2 the policy applies to published API instance which in turn represents data path for the traffic. Therefore the policy affects every call which flies through. Usually every authentication method fits one use case better then others. E.g. robots/bots better go with API key because process of obtaining of an JWT token from an authorization server is complex and requires a tools which bot/robot may not have. For human situation is opposite. 
It is much more natural for a user to type a username/password and get a token in exchange under the covers than to copy/paste a long API key into every call. Therefore OAuth fits better here.

The steps below cover configuration of both supported authentication methods: API key and OAuth 2.0. Assume Company_1 has bought access to an API. As a customer, Company_1 wants to consume the API automatically with robots and also allow its employees to make requests manually. In order to authenticate employees using OAuth and robots with an API key, two different identity providers need to be configured on the Controller.

First we create a provider for the employees. For NGINX Plus to verify a JWT token in a call, a JWK key is required. There are two ways to supply the JWK key for the provider: upload a file or reference it as a web URL. In the case of a reference, NGINX Plus automatically refreshes the key periodically. These two approaches are shown in the two pictures below.

A second provider is used to authenticate Customer_1 robots with a simple API key. There are two options for creating API keys in a provider. The first is to create them manually. The second is to upload a CSV file containing user account credentials.

user@linux$ cat api-clients.csv
CUSTOMER_1_ROBOT_1000,2b31388ccbcb4605cb2b77447120c27ecd7f98a47af9f17107f8f12d31597aa2
CUSTOMER_1_ROBOT_1001,71d8c4961e228bfc25cb720e0aa474413ba46b49f586e1fc29e65c0853c8531a
CUSTOMER_1_ROBOT_1002,fc979b897e05369ebfd6b4d66b22c90ef3704ef81e4e88fc9907471b0d58d9fa
CUSTOMER_1_ROBOT_1003,e18f4cacd6fc4341f576b3236e6eb3b5decf324552dfdd698e5ae336f181652a
CUSTOMER_1_ROBOT_1004,3351ac9615248518348fbddf11d9c597967b1e526bd0c0c20b2fdf8bfb7ae30a

The next step is to assign an authentication policy to a published API. Each authentication policy may include only one client group, therefore we need two of them: one to authenticate employees with JWT and one for robots with an API token. The policy for robots specifies the provider and the location where the API token is placed. Along with the query string used in our example, header, cookie, and bearer token locations are also supported. The policy for employees specifies the provider and the JWT location.

NOTE: (Limitation) Policies in an environment have an AND operand between them. This means an environment can have only one authentication policy; otherwise the identity requirements from both of them would need to be satisfied for a call to pass.

Once the policy for employees is set up and the config has been pushed to the NGINX Plus instance, call authentication is in place and we can now test it. First, let us make sure unauthenticated calls are rejected. I use Postman as the API client. As you can see, a request without a JWT token is rejected with a 401 "Unauthorized" response code. Now I obtain a valid token from the identity provider and insert it in the same call. A call with a valid JWT token successfully passes authentication and brings the response back!

Now we can try to replace the authentication policy with the policy for robots and conduct the same tests. I am emulating a robot with a console tool which cannot act as an OAuth client to retrieve a JWT token, so the robots simply append the API key to the query string. An API call without any token is blocked.

ubuntu@ip-10-1-1-7:~$ http https://prod.httpbin.internet.lab/uuid
HTTP/1.1 401 Unauthorized
Connection: keep-alive
Content-Length: 40
Content-Type: application/json
Date: Wed, 18 Dec 2019 00:12:26 GMT
Server: nginx/1.17.6

{
    "message": "Unauthorized",
    "status": 401
}

An API call with a valid API key in the query string is allowed.
ubuntu@ip-10-1-1-7:~$ http https://prod.httpbin.internet.lab/uuid?token=2b31388ccbcb4605cb2b77447120c27ecd7f98a47af9f17107f8f12d31597aa2
HTTP/1.1 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Connection: keep-alive
Content-Length: 53
Content-Type: application/json
Date: Wed, 18 Dec 2019 00:12:57 GMT
Server: nginx/1.17.6

{
    "uuid": "b57f6b72-7730-4d0e-bbb7-533af8e2a4c0"
}
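For completeness, the earlier employee (JWT) test can also be driven from the console instead of Postman. A rough sketch, with a placeholder bearer token (the real JWT would come from the identity provider):

ubuntu@ip-10-1-1-7:~$ http https://prod.httpbin.internet.lab/uuid 'Authorization:Bearer <JWT obtained from the identity provider>'

While the JWT policy is active, this call should return 200 OK with a valid token and 401 Unauthorized without one.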
Therefore, even such a complex feature implementation as API call authentication becomes much easier, yet flexible, when managed by NGINX Controller. Hope this overview was useful. Good luck!

Guarding privacy and security using API Gateways: Implement the Phantom Token Flow using NGINX and the Curity Identity Server

By Damian Curry, NGINX Business Development Technical Director, F5, and Michal Trojanowski, Product Marketing Engineer, Curity

----

In today's world, APIs are ubiquitous, whether in communication between back-end services or from front ends to back ends. They serve all kinds of purposes, come in different flavors, and can return data in various formats. The possibilities are countless. Still, they all share one common trait: an API needs to be secure. Secure access to an API should be paramount for any company exposing one, especially if the APIs are available externally and consumed by third-party clients. To help organizations address the critical topic of API security, OWASP has provided guidelines to ensure safety. In 2019, OWASP released a "top 10" compilation of the most common security vulnerabilities in APIs.

OAuth and JWT as a Sign of Mature API Security

Modern, mature APIs are secured using access tokens issued according to the OAuth standard or ones that build on top of it. Although OAuth does not require the use of the JSON Web Token (JWT) format for access tokens, per se, it is common practice to use them. Signed JWTs (JWTs signed using JSON Web Signing, JWS) have become a popular solution because they include built-in mechanisms that protect their integrity. Although signed JWTs are a reliable mechanism where the integrity of the data is concerned, they are not as good in regards to privacy. A signed JWT (JWS) protects the integrity of the data, while an encrypted one (JWE) also protects its privacy.

A JSON Web Token consists of claims which can carry information about the resource owner - the user who has granted access to their resources - or even about the API itself. Anyone in possession of a signed JWT can decode it and read the claims inside. This can produce various issues:

- You have to be really careful when putting claims about the user or your API in the token. If any Personally Identifiable Information (PII) ends up in there, this data essentially becomes public. Striving to keep PII private should be the goal of your organization. What is more, in some countries privacy is protected by laws like GDPR or CCPA, and it can be an offense not to keep PII confidential.
- Developers of apps consuming your access tokens can start depending on the contents of your tokens. This can lead to a situation where making updates to the contents of your access tokens breaks integrations with your API. Such a situation would be an inconvenience both for your company and your partners. An access token should be opaque to the consumers of your API, but this can be hard to enforce outside of your organization.

By-reference Tokens and the Phantom Token Flow

When there is a problem, there must be a solution. In fact, there are two ways to address the aforementioned problems:

- Encryption
- Use by-reference (AKA handle) tokens

To protect the contents of JWTs from being visible, the token can be encrypted (thus creating a JWE inside the JWS). Encrypted content will be safe as long as you use a strong encryption algorithm and safeguard the keys used for encryption. To use JWEs, though, you need a Public Key Infrastructure (PKI), and setting one up can be a big deal. If you do not use PKI, you need another mechanism to exchange keys between the Authorization Server and the client. As JWEs also need to be signed, your APIs have to be able to access, over TLS, the keys used to verify and decrypt tokens. These requirements add considerable complexity to the whole system.
Another solution is to use by-reference tokens - an opaque string that serves as a reference to the actual data kept securely by the Authorization Server. Using opaque tokens protects the privacy of the data normally kept in tokens, but now your services, which process requests, need some other way to get the information that was earlier available in the JWT's claims. Here, the Token Introspection standard can prove useful. Using a standardized HTTP call to the Authorization Server, the API can exchange the opaque token for a set of claims in JSON format. Exchanging an opaque token for a set of claims comes to the API just as easily as extracting the data from decoding a JWT. Because the API connects securely to the Authorization Server and trusts it, the response does not have to be signed, as is the case with JWTs.

Still, in a world dominated by microservices, a problem may arise with the proposed solution. Have a look at the diagram below. Note that every service which processes the request needs to perform the introspection. When there are many services processing one request, the introspection must be done repeatedly, or else the APIs would have to pass unsigned JSON around and trust the caller with the authorization data - which can lead to security issues. A situation in which every service calls the Authorization Server to introspect the token can put excess load on the Authorization Server, which will have to return the same set of claims many times. It can also overload the network between the services and the Authorization Server.

If your API consists of numerous microservices, there is probably some kind of reverse proxy or API gateway standing in front of all of them, e.g., an instance of NGINX. Such a gateway is capable of performing the introspection flow for you. This kind of setup is called a Phantom Token Flow. The Authorization Server issues opaque tokens and serves them to the client. The client uses the token as any other token - by appending it to the request in the Authorization header. When the gateway receives a request, it extracts the opaque token from the header and performs the introspection flow, but it is a bit different from the flow performed by APIs.

According to the specification, when an API introspects a token, it receives a JSON document with the claims that would usually end up in a JWT. For an API this is enough, as it trusts the Authorization Server and just needs the data associated with the access token; the claims do not have to be signed. For the API gateway, though, it's not enough. The gateway needs to exchange the opaque token for the set of data, but it will be passing this data to the downstream services together with the request. The APIs must be sure that the claims they receive from the gateway have not been tampered with. That's why some Authorization Servers allow returning a JWT from the introspection endpoint. The API gateway can introspect the opaque token, but in the response it gets a JWT that corresponds to the access token. This JWT can then be added to the request's Authorization header, and the downstream APIs can use it as any other JWT. Thanks to this solution, APIs can securely pass the access token between themselves and at the same time have simple access to the claims, without needing to contact the Authorization Server. Using the Phantom Token Flow allows you to leverage the power of JWTs inside your infrastructure while at the same time keeping high levels of security and privacy outside of your organization.
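To make the exchange concrete, the introspection call the gateway makes is a plain form-encoded HTTP POST as defined in RFC 7662. The sketch below uses the introspection path that appears in the configuration later in this article; the host, client credentials, and token values are placeholders:

POST /oauth/v2/introspection HTTP/1.1
Host: curity.example.com
Authorization: Basic <base64 of client_id:client_secret>
Content-Type: application/x-www-form-urlencoded

token=<opaque access token>

HTTP/1.1 200 OK
Content-Type: application/json

{ "active": true, "sub": "user123", "scope": "read", "exp": 1735689600 }

In the Phantom Token setup described next, the gateway asks the server to return the corresponding signed JWT instead of this plain JSON and forwards that JWT to the downstream services.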
The network workload is greatly reduced because now only the gateway needs to contact the Authorization Server - the introspection is done only once for every request. Also, the amount of work the gateway needs to do is limited - it does not have to parse the body or decode the JWT in order to decide whether the token is valid or has not expired. The gateway can depend only on the status code of the response from the Authorization Server. An OK response means that the token is valid and has not expired.

The amount of network traffic can be reduced even further. The API gateway can cache the Authorization Server's response for as long as the server tells it to (which will usually be the lifetime of the access token). If the opaque reference token is globally unique, it works as an ideal cache key, and the mapping in the cache can even be shared across any API.

Implementing the Phantom Token Flow using NGINX and the Curity Identity Server

At Curity, we have created an open-source module for NGINX to facilitate implementing the Phantom Token Flow. The module takes care of performing the introspection and keeping the result in the cache (if caching is enabled). As values of the opaque token are globally unique, the cache can be global as well; there is no need to keep a reference on a per-client basis. The NGINX module is easy to use, as all parameters can be set using standard NGINX configuration directives. The module can be built from source, but there are binaries available for a few different Linux distributions and NGINX versions. You can find them in the releases section on GitHub.

Installing and Configuring the Module

Once downloaded, the module needs to be enabled in your instance of NGINX, unless you want to create your own build of NGINX with all your modules embedded. To enable the module, add this directive in the main context of your configuration:

load_module modules/ngx_curity_http_phantom_token_module.so;

Then you can apply the Phantom Token Flow to chosen servers and locations. Here's an example configuration that enables the Phantom Token Flow for an /api endpoint. The configuration assumes that:

- Your NGINX instance runs on a different host (nginx.example.com) than your Curity Identity Server instance (curity.example.com).
- The API that will eventually process the request listens on example.com/api.
- The Curity Identity Server's introspection endpoint is /oauth/v2/introspection.
- Responses from introspection are cached by NGINX.

http {
    proxy_cache_path /path/to/cache/cache levels=1:2 keys_zone=my_cache:10m max_size=10g inactive=60m use_temp_path=off;

    server {
        server_name nginx.example.com;

        location /api {
            proxy_pass https://example.com/api;
            phantom_token on;
            phantom_token_client_credential "client_id" "client_secret";
            phantom_token_introspection_endpoint curity;
        }

        location curity {
            proxy_pass "https://curity.example.com/oauth/v2/introspection";
            proxy_cache_methods POST;
            proxy_cache my_cache;
            proxy_cache_key $request_body;
            proxy_ignore_headers Set-Cookie;
        }
    }
}

Now your NGINX gateway is capable of performing the Phantom Token Flow!

Conclusion

The Phantom Token Flow is a recommended practice for dealing with access tokens that are available publicly. The flow is especially effective when there are many microservices processing one request, which would otherwise have to introspect the tokens on their own. Using Phantom Tokens enhances your security and the privacy of your users, which can never be underestimated.
With the help of NGINX as your gateway and the Curity Identity Server as the Authorization Server, implementing the Phantom Token Flow is as simple as loading a module in your NGINX server! Sign up now for a free API Gateway Webinar to learn more about guarding privacy and security with Curity and NGINX.

Accelerating Digital Transformation in Banking and Financial Services
Introduction
A recent survey from Forrester's Business Technographics shows that 33% of BFSI tech leaders are currently undertaking a digital transformation within their organizations. That is 13 points ahead of the average across industries. Still, many enterprises worry that they aren't moving fast enough. For banking and financial services organizations, there is intense pressure to transform in order to remain competitive in an age of disruption. Evolving regulatory requirements, rapidly advancing technology, increasing customer demands, COVID-19, and competition from fintechs are all forcing financial services firms to rethink the way they operate.

Digital Transformation Challenges
This digital transformation imperative requires banking and financial services organizations to improve their technical capabilities. But true transformation demands more than just new technologies. It requires strategic vision and commitment from the top of the organization to rethink and retool its culture, its processes, and its technology. Admittedly, the financial industry has a long history of limited collaboration, lack of transparency, and resistance to change, favoring instead confidentiality, siloed organizational structures, and risk aversion. For many years, that heritage enabled financial services firms to succeed, but those cultural, behavioral, and organizational hurdles are hard to overcome precisely because they are so entrenched.

New processes and technology are also necessary for digital transformation. Traditional development practices are common in the industry and are built on segmented, monolithic team structures that lack the agility required to achieve transformation. Additionally, very few firms possess the infrastructure and application architectures required to innovate rapidly.

The Benefits of an Open Approach
Digital transformation is not merely about adopting new technologies but also about establishing new cultural practices and 'ways of working' within the IT organization. By taking an open approach to architecture, process, and culture, you can transform the way your entire organization operates.

Modular architecture
To create a more modular environment, banking and financial services institutions will require integration across the entire legacy network, as well as integration with partner systems, networks, and other external services such as Software-as-a-Service (SaaS) solutions. An open and composable architecture gives customers access to a growing range of best-of-breed technologies from industry leaders, consumable with a frictionless "single-stack" feel.

Agile process
In the open organization model, collaboration is key. Modern, agile practices establish common goals and empower teams to move forward together. According to the Harvard Business Review article "Reassessing Digital Transformation: The Culture and Process Change Imperative", financial services firms were more likely than those in other industries to say that DevOps was important, and were also more likely to have implemented agile development, project management processes, CI/CD, and DevOps. These new processes are necessary as financial services firms seek faster time to value and leverage microservices to effect this change.
Open culture
Open organizations are more transparent, inclusive, adaptive, collaborative, and community focused. When you view digital transformation as a continuous process, and emphasize the importance of culture in parallel to (not at the expense of) technology and process, you are positioning your organization for a successful transformation.

Technologies that Enable Digital Transformation
The pandemic has accelerated the need for digital transformation in the BFSI segment. Not only have workforces become remote, but person-to-person contact has become less frequent. Financial organizations have had to scale up infrastructure and security to support a remote workforce while simultaneously scaling to support a fully remote customer base. Inherent in this approach is a hybrid cloud strategy that allows resources to be scaled up or down to meet application needs. Architectural design and practices must also align with these new cloud infrastructures, and the requirement for speed has to be balanced against the absolute necessity of security and availability. A few key best practices have helped BFSI organizations balance these competing demands:

• Establish a foundation of resilience by adopting site reliability engineering (SRE) concepts.
• Rapidly deploy new services based on market demand.
• Consolidated, consistent, and controlled security and access, including identity management, intrusion protection, anti-virus, and predictive threat capabilities.
• Application performance (response time and latency), on-demand scalability, and disaster recovery and backup.
• Automation for efficiency and speed of delivery, with consistency in operations and tools, continuous integration and continuous delivery (CI/CD).
• System-wide business monitoring, reporting, and alerting.

An Open Architecture with F5 and Red Hat
Now that we have established the open approach for implementing a financial services platform and the capabilities needed for a successful digital transformation, we can examine the architecture required to support it. It starts on the path toward site reliability engineering (SRE). In the SRE model, the operations team and the business give developers free rein to deploy new code, but only until the error budget is exceeded. At that point, development stops, and all efforts are redirected to technical debt. As shown in Figure 1, it boils down to five areas an SRE team should focus on to achieve this balance.

Figure 1. Enabling SRE Best Practices

Together, F5, Red Hat, Elasticsearch, and other ecosystem partners can deliver a suite of technologies to fulfill the extension and transformation of an existing architecture into an agile financial services platform.

Figure 2. SRE Microservice Architecture with F5, Red Hat, and Elasticsearch

The following describes the most fundamental components of Figure 2 in more detail, each enabling the SRE best practices:

1. Red Hat OpenShift Container Platform (container PaaS) provides a modular, scalable, cloud-ready, enterprise open-source platform. It includes a rich set of features to build and deploy containerized solutions and a comprehensive PaaS management portal that together extend the underlying Kubernetes platform.
2. Combining BIG-IP and NGINX, this architecture allows SRE to optimize the balance between agility and stability by implementing blue-green and targeted canary deployments.
This is a good way to release beta features to users, gather their feedback, and test your ideas in a production environment with reduced risk.
3. BIG-IP combined with NGINX Plus also gives SRE the flexibility to adapt to changing conditions in the application environments and address the needs of NetOps, DevOps, DevSecOps, and application developers.
4. ELK is used to analyze and visualize application performance through a centralized dashboard, which lets end users easily correlate North-South traffic with East-West traffic for end-to-end performance visibility.
5. F5's WAF offerings, including F5 Advanced WAF and NGINX App Protect, deployed across hybrid clouds, protect OpenShift clusters against exploits of web application vulnerabilities as well as malware attempting to move laterally.
6. Equally important is integration with Red Hat Ansible, which enables automated configuration of security policy enforcement for immediate remediation.
7. All of this is built into the CI/CD pipeline so that any future changes to the application are built and deployed automatically.

Conclusion
Digital transformation has been accelerated by the dual challenges of COVID-19 and the emergence of fintech. Traditional BFSI organizations have had to respond to these enormous challenges by accelerating their deployment timelines and adopting agile processes without compromising security and availability. These practices also dovetail with the greater adoption of microservices architectures that allow application services to scale up and scale out. F5 and NGINX help this transformation by providing world-class performance and security combined with a flexible microservices ADC (NGINX Plus). This hybrid architecture allows Kubernetes deployments to become 'production grade'.

Configuring NGINX API micro-gateway to support Open Banking's Advanced FAPI security profile
Introduction
In my last article, Integrating NGINX Controller API Management with PingFederate to secure financial services API transactions, we saw how to configure NGINX Controller to perform basic JWT authorization against PingFederate, configured as OIDC IdP / OAuth Authorization Server. One weakness of the basic JWT authentication mechanism is the lack of context: anyone presenting a valid JWT is allowed to perform the actions granted by the token, even if the sender is not the original OAuth client that was issued the token. This opens an avenue for attackers to use JWTs stolen from their rightful owners. Ideally, a mechanism for restricting the usage of a JWT to its original requestor is needed, and this type of protection is specifically required for the API calls presenting the highest risk, such as financial API calls. For example, Financial-grade API (FAPI) Security Profile 1.0 - Part 2: Advanced (Read and Write API Security Profile) specifies that:

Authorization server:
shall only issue sender-constrained access tokens;
shall support MTLS as mechanism for constraining the legitimate senders of access tokens.

Uncertainty of resource server handling of access tokens:
The protected resources that conform to this document shall not accept a bearer access token. They shall only support sender-constrained access tokens via MTLS.

It is therefore useful to examine the configuration NGINX needs, in its micro-gateway deployment mode, to perform the function of a resource server in cases requiring the Advanced FAPI security profile.

Setup
A high-level diagram of the lab environment used to demonstrate this setup is found below: The roles performed by each network element are described below:

Authentication and API flow
The workflow is very similar to the one described in my last article, with the key differences highlighted below:

1. The user logs into the Third Party Provider application ("client") and creates a new funds transfer.
2. The TPP application redirects the user to the OAuth Authorization Server / OIDC IdP, PingFederate.
3. The user provides their credentials to PingFederate and gets access to the consent management screen, where the required "payments" scope will be listed.
4. If the user agrees to give consent to the TPP client to make payments out of their account, PingFederate generates an authorization code (and an ID Token) and redirects the user to the TPP client.
5. The TPP client opens an MTLS connection to the IdP, authenticates itself with a client certificate, exchanges the authorization code for a sender-constrained access token, and attaches it as a bearer token to the /domestic-payments call sent to the API gateway over an MTLS session authenticated with the same client certificate.
6. The API gateway terminates the MTLS session and obtains the client certificate, authenticates the access token by downloading the JSON Web Keys from PingFederate, checks that the hashed client certificate matches the value found in the token, and grants conditional access to the backend application.
7. The Kubernetes Ingress receives the API call and performs WAF security checks via NGINX App Protect.
8. The API call is forwarded to the backend server pod.

Examining the differences between the workflows, it becomes apparent that the extra actions the NGINX API micro-gateway has to perform to support this advanced security use case are MTLS termination and client certificate hash verification.
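Both of these come on top of standard JWT validation, which NGINX Plus performs by fetching the signing keys from the Authorization Server. As a point of reference, a minimal sketch of that part is shown below; the JWKS path and the backend_api upstream name are assumptions and should be checked against your own PingFederate instance and configuration. The full micro-gateway configuration in the next section layers the MTLS and thumbprint checks on top of this.

# Minimal JWT validation sketch (NGINX Plus auth_jwt module).
# The JWKS URI and upstream name below are assumptions.
location /api {
    auth_jwt "bank-api";              # validate the bearer token on every request
    auth_jwt_key_request /_jwks;      # fetch signing keys via the subrequest below
    proxy_pass http://backend_api;
}

location = /_jwks {
    internal;
    proxy_pass https://pingfederate.example.com/pf/JWKS;
}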
NGINX API micro-gateway configuration
The full configuration is available on DevCentral's Code Share: Configure NGINX microgateway for MTLS termination and client certificate hash verification. I will highlight below the most relevant parts of the configuration.

MTLS termination

server {
    server_name api.bank.f5lab;
    listen 443 ssl;

    ssl_certificate /etc/nginx/f5lab.crt;
    ssl_certificate_key /etc/nginx/f5lab.key;
    ssl_session_cache off;
    ssl_prefer_server_ciphers off;

    ssl_client_certificate /etc/nginx/updated_ca.crt;
    ssl_verify_client on;
    ssl_verify_depth 10;

A detailed explanation of each of these commands can be found in the ngx_http_ssl_module user guide.

JWT client certificate hash verification
To compute and validate the client certificate hash, we will use an njs script (more information on the njs scripting language and its installation process can be found here). The njs script used (named "x5t.js" in our case) is shown below:

function validate(r) {
    var clientThumbprint = require("crypto")
        .createHash("sha256")
        .update(Buffer.from(r.variables.ssl_client_raw_cert.replace(/(\n|----|-BEGIN|-END| CERTIFICATE-)/gm, ''), 'base64'))
        .digest("base64url");
    return clientThumbprint === r.variables.jwt_cnf_fingerprint ? '1' : '0';
}

export default { validate }

Importing the "x5t.js" script in the main nginx configuration is done with:

js_import /etc/nginx/x5t.js;

We populate the $jwt_cnf_fingerprint variable (available to the njs script as "r.variables.jwt_cnf_fingerprint") by extracting the 'x5t#S256' value from the JWT:

auth_jwt_claim_set $jwt_cnf_fingerprint 'cnf' 'x5t#S256';

The "validate" function of "x5t.js" will then compare the value of the $jwt_cnf_fingerprint variable extracted from the JWT with the computed SHA-256 hash of the client certificate and set the validation result in the $thumbprint_match variable:

js_set $thumbprint_match x5t.validate;

Lastly, we make the decision to accept or block the client's access based on the validation result:

if ($thumbprint_match != 1) {
    return 403 'Access denied because client SSL certificate thumbprint does not match jwt_cnf_fingerprint';
}

Conclusion
Supporting MTLS termination and client certificate hash validation against sender-constrained JWTs issued by Authorization Servers such as PingFederate enables the NGINX API micro-gateway to support Open Banking's Advanced FAPI security profile.

Resources
The UDF lab environment used to build this configuration can be found here.
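As a closing note on the excerpt above: once the thumbprint check has passed, the micro-gateway still has to forward the request toward the Kubernetes ingress and can optionally pass selected token claims upstream. The sketch below shows one way this could look; the header names, the ingress hostname, and the use of the auth_jwt module's automatic $jwt_claim_sub variable are illustrative assumptions, not part of the published configuration.

# Illustrative continuation of the /api location (names are assumptions).
location /api {
    # ... MTLS, JWT validation and thumbprint checks as shown above ...

    # Forward a couple of token values to the ingress / backend as headers.
    proxy_set_header X-JWT-Subject $jwt_claim_sub;
    proxy_set_header X-Client-Cert-Thumbprint $jwt_cnf_fingerprint;

    # Hand the request off to the Kubernetes ingress protected by NGINX App Protect.
    proxy_pass https://k8s-ingress.bank.f5lab;
}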