Adopting Site Reliability Engineering with F5

Foreword

The Site Reliability Engineering (SRE) role is common in cloud-first enterprises and is becoming more widespread in traditional IT teams. This article kicks off a series that looks at the concepts that give SRE its shape, outlines the primary tools and best practices that make it possible, and explores some common use cases around Continuous Deployment (CD) strategy, visibility, and security.

While SRE and DevOps share much common ground, there are significant differences between them. DevOps is a loose set of practices, guidelines, and culture designed to break down silos across development, IT operations, network, and security teams. DevOps does not tell you how to run operations at a detailed level.

On the other hand, SRE, a discipline pioneered by Google, brings an opinionated framework to the problem of how to run operations effectively. If you think of DevOps as a philosophy, you can argue that SRE implements much of what DevOps describes. After all, SRE only works if the tools and technologies exist to enable it.

Balancing Release Velocity and Reliability 

SRE aims to find the balance between feature velocity and reliability, which are often treated as opposing goals. Changes to software carry risk, yet those changes are necessary for the business to succeed. Instead of advocating against change, SRE uses Service Level Objectives (SLOs) and error budgets to measure the impact of releases on reliability. The goal is to ship software as quickly as possible while meeting the reliability targets users expect.
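To make the trade-off concrete: an error budget is simply the complement of the SLO, the fraction of time the service is allowed to be unreliable. A worked example for an availability SLO:

    error budget = 1 - SLO
    for a 99.9% availability SLO over a 30-day window:
    (1 - 0.999) x 30 days x 24 h x 60 min = ~43.2 minutes of acceptable downtime

As long as releases leave the budget unspent, they can keep shipping; once the budget is exhausted, the emphasis shifts from new features to reliability work.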

While there are many ways an SRE-focused IT team might optimize the balance between agility and stability, two deployment models stand out for their broad applicability and relative ease of execution:

Blue-green deployment 

For SRE, availability is currently the most common SLO. If delivering new software to users without interrupting their access is truly required, engineering work is needed: load balancing and fractional-release measures such as blue-green or canary deployments minimize downtime, and recovery time is a factor too.

The idea behind blue-green deployment is that your blue environment is your existing production environment carrying live traffic. In parallel, you provision a green environment that is identical to the blue environment except for the new version of your code. As you prepare a new release, deployment and the final stage of testing take place in the environment that is not live: in this example, green (here, a new OpenShift cluster). When it's time to deploy, you route production traffic from the blue environment to the green environment.

This technique can eliminate downtime due to application deployment. It also reduces risk: if something unexpected happens with the new version on green, you can roll back immediately by reverting traffic to the original blue environment.
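On a BIG-IP, the cut-over itself can be as small as a pool selection. The iRule sketch below illustrates one way to do it; the data group name active_env_dg and the pool names blue_pool and green_pool are hypothetical placeholders, not names from the architecture above:

    when HTTP_REQUEST {
        # active_env_dg is a one-record data group holding "blue" or "green";
        # flipping the record performs the cut-over, and flipping it back
        # is the rollback, with no configuration reload required
        if { [class match "green" equals active_env_dg] } {
            pool green_pool
        } else {
            pool blue_pool
        }
    }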

When you need to manipulate traffic with more flexibility and reliability across different clusters, clouds, or geographic locations, the F5 DNS Load Balancer Cloud Service comes into the picture. This global server load balancing (GSLB) offering is delivered as SaaS. It provides automatic failover, load balancing across multiple locations, increased reliability by avoiding a single point of failure, and increased performance by directing traffic to the optimal site.

This allows SRE to move fast while still maintaining enterprise-grade reliability.

Targeted canary deployment 

Another approach to meeting an availability SLO is canary deployment. In some cases, swapping out the entire deployment via a blue-green environment may not be desirable. In a canary deployment, you upgrade the application on a subset of the infrastructure and allow a limited set of users to access the new version. This approach lets you test the new software under production-like load, evaluate how well it meets users' needs, and assess whether new features are profitable.

One approach, often used with Azure DevOps, is the ring deployment model. Users fall into three general buckets based on their different risk profiles:

  1. Ring 1 - Canaries, who voluntarily test bleeding-edge features as soon as they are available. 
  2. Ring 2 - Early adopters, who voluntarily preview releases considered more refined than the canary bits. 
  3. Ring 3 - Users who consume the product once it has passed through the canaries and early adopters.

Developers can promote new versions of the same application (versions 1.2, 1.1, and 1.0) to targeted users (rings 1, 2, and 3, respectively) without involving, or waiting on, the infrastructure operations team (NoOps).

To match each user with the right version, you may simply use the IP address, authenticate directly at the backend, or add an authentication layer in front of the backend. F5 technologies can enable this targeted canary use case:

  • BIG-IP APM, in the north-south (N-S) path, authenticates users, identifies them as ring 1, 2, or 3, and injects that identification into an HTTP header. 
  • The identification is passed on to the NGINX Plus micro-gateway, which directs users to the correct microservice version.
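To illustrate the routing step, the iRule sketch below selects an application version by the ring header injected upstream; the header name X-User-Ring and the pool names are assumptions for illustration. In the architecture described above, the NGINX Plus micro-gateway performs the equivalent routing inside the cluster:

    when HTTP_REQUEST {
        # Route by the ring identification injected upstream by BIG-IP APM:
        # ring 1 (canaries) -> version 1.2, ring 2 (early adopters) -> 1.1,
        # everyone else -> 1.0
        switch [HTTP::header value "X-User-Ring"] {
            1 { pool app_v1_2_pool }
            2 { pool app_v1_1_pool }
            default { pool app_v1_0_pool }
        }
    }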

By combining BIG-IP and NGINX, this architecture gives SRE the flexibility to adapt: NetOps and SecOps define the baseline service control and security, while more granular controls and enhanced security are extended to the developer team (DevOps).

The need for observability 

For SRE, monitoring sits at the heart of implementing SLOs in practice. You can't understand what you can't see. A classic and common approach to monitoring is to watch for a specific value or condition and trigger an alert when that value is exceeded or that condition occurs. One key monitoring output is logging, which is recorded for diagnostic or forensic purposes. The ELK stack, a collection of three open-source projects (Elasticsearch, Logstash, and Kibana), gives IT project stakeholders multi-system and multi-application log aggregation and analysis capabilities.

ELK can also be used to analyze and visualize application metrics through a centralized dashboard.

With general visibility in place, tracking can be enabled to add specificity to what is being observed. Taking advantage of an iRule on BIG-IP, NetOps can generate a UUID and insert it into the HTTP header of every request arriving at BIG-IP. All traffic access logs containing UUIDs, from both BIG-IP and NGINX, are sent to the ELK server, where information such as user location and response time by user location can be validated. Through the dashboard, end users can easily correlate north-south traffic (processed by BIG-IP) with east-west traffic (processed by NGINX Plus inside the cluster) for end-to-end performance visibility.
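A minimal sketch of such an iRule is shown below. The header name X-Request-ID, the identifier format (UUID-like rather than strictly RFC 4122), and the Logstash pool name elk_logstash_pool are assumptions for illustration:

    when HTTP_REQUEST {
        # Tag every request with a UUID-like trace id (if not already present)
        # so that N-S and E-W log entries can be correlated in ELK
        if { ![HTTP::header exists "X-Request-ID"] } {
            set uuid [format "%04x%04x-%04x-%04x-%04x-%04x%04x%04x" \
                [expr {int(rand()*65536)}] [expr {int(rand()*65536)}] \
                [expr {int(rand()*65536)}] [expr {int(rand()*65536)}] \
                [expr {int(rand()*65536)}] [expr {int(rand()*65536)}] \
                [expr {int(rand()*65536)}] [expr {int(rand()*65536)}]]
            HTTP::header insert "X-Request-ID" $uuid
        }
        # Ship an access-log line to Logstash via high-speed logging (HSL);
        # elk_logstash_pool is a pool pointing at the Logstash listener
        set hsl [HSL::open -proto TCP -pool elk_logstash_pool]
        HSL::send $hsl "[IP::client_addr] [HTTP::method] [HTTP::uri] [HTTP::header value X-Request-ID]\n"
    }

On the NGINX Plus side, logging the same header (for example, via the $http_x_request_id variable in a custom log_format) gives both tiers a joinable key.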

In turn, tracking performance metrics opens up the possibility of defining SLOs grounded in real measurements.

With observability, security is possible 

Security incidents will always occur, so it's essential to integrate security into observability. What's most important is giving reliability engineers the tools to identify a security problem, work around it, and fix it as quickly as possible.

With the right set of tools, you can build custom, auto-generated dashboards and tooling that expose the collected information to engineers in a way that makes it much easier to sort through everything and determine the root cause of a security problem. A Kibana dashboard, for example, lets engineers investigate an incident, apply filters, and quickly pinpoint suspicious traffic and its source.

In concert, F5 Advanced WAF and NGINX App Protect let SRE protect applications against software vulnerabilities and common attacks from both inside and outside microservice clusters. When BIG-IP Advanced WAF or NGINX App Protect detects suspicious traffic, it sends an alert with details to the ELK stack, which indexes and processes the data and then executes a pre-defined Ansible playbook to enforce security policy in Kubernetes or NGINX App Protect for immediate remediation.

In this way, SRE not only identifies anomalies but also rectifies them by enforcing security policy along the data path: detect once, protect everywhere.

What’s next? 

This article serves as the introduction to, and first entry in, this SRE series. In the coming articles, we will dive deep into each of the use cases to showcase the technical details of how F5 technologies and capabilities help SRE bring together DevOps, NetOps, and SecOps to develop safeguards and implement best practices.

To learn more about developing a business case for SRE in your organization, please reach out to an F5 Business Development representative. For technical details and additional information, see this DevCentral GitHub repo.

Published Oct 06, 2020
Version 1.0