Adopting SRE practices with F5: Targeted Canary deployment

In the last article, we covered a blue-green deployment in depth. Another approach to improving availability in support of SRE SLOs is the canary deployment. In some cases, swapping out the entire deployment via a blue-green environment may not be desirable. In a canary deployment, you upgrade an application on a subset of the infrastructure and allow a limited set of users to access the new version. This approach lets you test the new software under production load for a limited set of user connections, evaluate how well it meets users’ needs, and assess whether new features function as designed.

This article focuses on how we can use F5 technologies (BIG-IP and NGINX Plus) to implement a canary deployment in an OpenShift environment.

Solution Overview

The solution combines F5 Container Ingress Services (CIS) with NGINX Plus for a microservices environment. BIG-IP provides comprehensive L4-L7 security services for north-south (N-S) traffic into, out of, and between OpenShift clusters, while NGINX Plus serves as a micro-gateway to manage and secure east-west (E-W) traffic inside the cluster. This architecture is depicted below.

Stitching these technologies together, this architecture enables the targeted canary use case. The “targeted” model takes canary deployment one step further by routing users to different application versions based on user identity or their respective risk tolerance levels. It uses the following workflow:

1. The BIG-IP Access Policy Manager (APM) authenticates each user before their traffic enters the OpenShift cluster

2. BIG-IP identifies users belonging to the ring 1, 2, or 3 user groups, and injects a group-based identifier into the X-Request-ID HTTP header

3. This user identification is passed on to the NGINX Plus micro-gateway, which directs users to the correct microservice versions

Each of the above components is discussed, with implementation details, in the following sections.

APM provides user authentication

BIG-IP APM sits in the N-S traffic flow to authenticate and identify users before their network traffic enters the cluster. To achieve this, we need to:

  • Create an APM policy as shown below

  • Attach the above policy to HTTPS virtual server (manually, or using AS3 override)

Note that in our demonstration, we simplified the deployment to two user groups: 1) a user group “Test1” for ring 1, representing early adopters who voluntarily preview releases; and 2) a user group “User1” for ring 2, representing general users who consume the application after it has passed through the early adopters. The same steps can be followed to configure three rings as needed.

We use the AS3 override function of CIS to attach the APM policy, so that CIS remains the source of truth. The AS3 override functionality allows us to alter the existing BIG-IP configuration using AS3 with a user-defined ConfigMap, without affecting the existing Kubernetes resources.

To do so, we need to add a new argument to the CIS deployment file. The following flag enables the AS3 override functionality:

--override-as3-declaration=<namespace>/<user_defined_configmap_name>
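For context, this flag lives in the CIS container's argument list. A sketch of the relevant part of a CIS deployment manifest is shown below; the deployment name, image tag, and other arguments are placeholders for illustration, and only the last argument matters here. The ConfigMap reference matches the example used later in this article (name f5-override-as3-declaration in the default namespace):

```yaml
# Hypothetical excerpt of a CIS deployment spec; only the args list is relevant.
spec:
  template:
    spec:
      containers:
        - name: k8s-bigip-ctlr
          image: f5networks/k8s-bigip-ctlr:latest
          args:
            # ...existing CIS arguments (BIG-IP URL, credentials, etc.)...
            - --override-as3-declaration=default/f5-override-as3-declaration
```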

An example of a user-defined ConfigMap that attaches the APM policy to the HTTPS virtual server (created by an OpenShift route) is shown below:

apiVersion: v1
kind: ConfigMap
metadata:
  name: f5-override-as3-declaration
  namespace: default
data:
  template: |
    {
      "declaration": {
        "openshift_AS3": {
                "Shared": {
                    "bookinfo_https_dc1": {
                        "policyIAM":
                        {
                          "bigip": "/Common/bookinfo"
                        }
                    }
                }
            }
        }
    }

Next, we run the following command to create the configmap:

oc create -f f5-override-as3-declaration.yaml

Note: Restart the CIS deployment after deploying the configmap.
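The restart can be done with a rollout command, assuming the CIS deployment is named k8s-bigip-ctlr and lives in the kube-system namespace (adjust both for your cluster):

```shell
# Restart CIS so it picks up the AS3 override ConfigMap
oc rollout restart deployment/k8s-bigip-ctlr -n kube-system

# Wait for the new pod to become ready
oc rollout status deployment/k8s-bigip-ctlr -n kube-system
```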

Now, when a user tries to access the Bookinfo application, they are first authenticated with BIG-IP APM:

BIG-IP injects user identification into HTTP header

After the user is authenticated, BIG-IP creates a user identification and passes it on to the NGINX Plus micro-gateway, which directs users to the correct microservice version. It does so by mapping the user to a group and injecting a group-based value into the X-Request-ID HTTP header (read by NGINX as $http_x_request_id).
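One way to sketch this header injection is with an iRule attached to the virtual server. The sketch below is purely illustrative: the APM session variable that holds the user or group name, and the exact tag values, depend on your APM policy (here we assume the logon name identifies the group, per the Test1/User1 demo groups):

```tcl
# Hypothetical iRule sketch: tag each request with a ring identifier
when HTTP_REQUEST {
    # Assumption: the APM policy exposes the logon name in this session variable
    set user [ACCESS::session data get "session.logon.last.username"]
    HTTP::header remove X-Request-ID
    if { $user contains "Test1" } {
        # Early adopters (ring 1) get a tag that NGINX maps to the canary pool
        HTTP::header insert X-Request-ID "ring1-test1"
    } else {
        HTTP::header insert X-Request-ID "ring2-user1"
    }
}
```

The word-dash-word tag format matches the regex that the NGINX Plus configuration later uses to extract the identifier.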

Steps to configure the BIG-IP:

  1. Create a policy with the rule shown below:
  2. Attach the policy to the HTTPS virtual server (manually, or using AS3 override)

NGINX Plus steers traffic to different versions

NGINX Plus running inside OpenShift cluster will extract the user information from the HTTP header http_x_request_id, and steer traffic to the different versions of the Bookinfo review page accordingly.

In the example below, we use a ConfigMap to configure the NGINX Plus pod that acts as the reverse proxy for the review services.

##################################################################################################
# Configmap Review Services
##################################################################################################
apiVersion: v1
kind: ConfigMap
metadata:
 name: bookinfo-review-conf
data:
 review.conf: |-

   log_format elk_format_review 'time=[$time_local] client_ip=$remote_addr virtual=$server_name client_port=$remote_port xff_ip=$remote_addr lb_server=$upstream_addr http_host=$host http_method=$request_method http_request_uri=$request_uri status_code=$status content_type="$sent_http_content_type" content_length="$sent_http_content_length" response_time=$request_time referer="$http_referer" http_user_agent="$http_user_agent" x-request-id=$myid ';

   upstream reviewApp {
      server reviews-v1:9080;
   }

   upstream reviewApp_test {
      server reviews-v1:9080;
      server reviews-v2:9080;
      server reviews-v3:9080;
   }

   # map to different upstream backends based on header
   map $http_x_request_id $pool {
      ~*test.* "reviewApp_test";
      default "reviewApp";
   }
   server {
      listen 5000;
      server_name review;

      #error_log /var/log/nginx/internalApp.error.log info;
      access_log syslog:server=10.69.33.1:8516 elk_format_review;
      #access_log /var/tmp/nginx-access.log elk_format_review;

      set $myid $http_x_request_id;
      if ($http_x_request_id ~* "(\w+)-(\w+)" ) {
        set $myid $2;
      }

      location / {
       proxy_pass http://$pool;
      }
   }
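The map and regex logic above can be mirrored in a short Python sketch (purely illustrative, not part of the deployment) to make the routing decision explicit:

```python
import re

def pick_upstream(x_request_id):
    # Mirrors: map $http_x_request_id $pool { ~*test.* "reviewApp_test"; default "reviewApp"; }
    if x_request_id and re.search(r"test", x_request_id, re.IGNORECASE):
        return "reviewApp_test"
    return "reviewApp"

def extract_id(x_request_id):
    # Mirrors: if ($http_x_request_id ~* "(\w+)-(\w+)") { set $myid $2; }
    m = re.search(r"(\w+)-(\w+)", x_request_id or "")
    return m.group(2) if m else x_request_id

print(pick_upstream("ring1-test1"))  # -> reviewApp_test (canary upstream)
print(pick_upstream("ring2-user1"))  # -> reviewApp (default upstream)
print(extract_id("ring1-test1"))     # -> test1 (logged as x-request-id)
```

Inside the cluster, the same behavior can be checked against the proxy itself, e.g. curl -H "X-Request-ID: ring1-test1" http://review:5000/ (hostname and port per the server block above).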

NGINX Plus directs the user traffic to the right version of the services:

  • If it is “User1” or a normal user, traffic is forwarded to the ring 2 upstream, which serves the current version of the application

  • If it is “Test1” or an early adopter, traffic is forwarded to the ring 1 upstream, which serves the newer version of the same application

Summary

Today’s enterprises increasingly rely on different expertise and skill sets (DevOps, DevSecOps, and app developers) working with NetOps teams to manage the sprawling application services that drive their accelerated digital transformation. By combining BIG-IP and NGINX Plus, this architecture gives SREs the flexibility to adapt to changing conditions in their application environments, which means we can deliver services that meet the needs of a broader set of application stakeholders. We can use BIG-IP to define global service control and security for NetOps or SecOps, while using NGINX Plus to extend more granular, application-specific controls to DevOps or app developers.

So, go ahead: go to the DevCentral GitHub repo, download the source code behind our technologies, and follow the guide to try it out in your environment.

Published Dec 07, 2020
Version 1.0
