By Damian Curry, NGINX Business Development Technical Director, F5, and Michal Trojanowski, Product Marketing Engineer, Curity 

----
In today’s world, APIs are ubiquitous, whether in communication between backend services or from front ends to back ends. They serve all kinds of purposes, come in different flavors, and can return data in various formats. The possibilities are countless. Still, they all share one common trait: an API needs to be secure. Secure access to an API should be paramount for any company exposing one, especially if the API is available externally and consumed by third-party clients. To help organizations address this critical topic, OWASP publishes guidelines on API security. In 2019, OWASP released a top-10 compilation of the most common security vulnerabilities in APIs.

OAuth and JWT as a Sign of Mature API Security 

Modern, mature APIs are secured using access tokens issued according to the OAuth standard or standards that build on top of it. Although OAuth does not require the JSON Web Token (JWT) format for access tokens, using it is common practice. Signed JWTs (JWTs signed using JSON Web Signing, JWS) have become a popular solution because they include built-in mechanisms that protect their integrity. However, while signed JWTs reliably protect the integrity of the data, they are not as good with regard to privacy.
 

[Image] A signed JWT (JWS) protects the integrity of the data, while an encrypted one (JWE) also protects its privacy.

 

A JSON Web Token consists of claims, which can carry information about the resource owner – the user who has granted access to their resources – or even about the API itself. Anyone in possession of a signed JWT can decode it and read the claims inside. This can lead to various issues:

  • You have to be very careful when putting claims about the user or your API into the token. If any Personally Identifiable Information (PII) ends up there, this data essentially becomes public. Keeping PII private should be a goal for your organization. What is more, in some countries privacy is protected by laws like the GDPR or CCPA, and failing to keep PII confidential can be an offense.
  • Developers of apps consuming your access tokens can start depending on their contents. This can lead to a situation where updating the contents of your access tokens breaks integrations with your API – an inconvenience both for your company and your partners. An access token should be opaque to the consumers of your API, but this can be hard to enforce outside your organization.
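To make the exposure concrete: the payload of a signed JWT can be read with nothing more than a base64url decode – no key or signature check required. A minimal sketch in Python (the token below is a fabricated example, not a real credential):

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Read the claims of a JWT without any key or signature check.
    This is exactly what any third party holding the token can do."""
    payload_b64 = token.split(".")[1]
    # base64url decoding requires padding to a multiple of 4
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# A fabricated signed JWT whose payload carries PII
header = base64.urlsafe_b64encode(b'{"alg":"RS256","typ":"JWT"}').rstrip(b"=")
claims = base64.urlsafe_b64encode(
    b'{"sub":"alice","email":"alice@example.com"}').rstrip(b"=")
token = b".".join([header, claims, b"fake-signature"]).decode()

print(decode_jwt_payload(token))  # {'sub': 'alice', 'email': 'alice@example.com'}
```

Note that the (fake) signature is never inspected – the claims leak regardless of whether the token verifies.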

By-reference Tokens and the Phantom Token Flow 

Fortunately, there are two ways to address these problems:

  1. Encrypt the token contents
  2. Use by-reference (a.k.a. handle) tokens

To protect the contents of a JWT from being visible, the token can be encrypted, nesting the signed JWT (JWS) inside a JWE. Encrypted content remains safe as long as you use a strong encryption algorithm and safeguard the encryption keys. To use JWEs, though, you need a Public Key Infrastructure (PKI), and setting one up can be a significant undertaking. If you do not use PKI, you need another mechanism to exchange keys between the Authorization Server and the client. Because JWEs also need to be signed, your APIs must be able to access, over TLS, the keys used to verify and decrypt tokens. These requirements add considerable complexity to the whole system.

Another solution is to use by-reference tokens – opaque strings that serve as references to the actual data kept securely by the Authorization Server. Using opaque tokens protects the privacy of the data normally kept in tokens, but now the services that process requests need some other way to obtain the information that was previously available in the JWT’s claims. Here, the Token Introspection standard proves useful: using a standardized HTTP call to the Authorization Server, the API can exchange the opaque token for a set of claims in JSON format. For the API, exchanging an opaque token for a set of claims is just as easy as decoding a JWT. Because the API connects to the Authorization Server over a secure, trusted channel, the response does not have to be signed, as is the case with JWTs.
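The introspection call itself is just a form-encoded POST carrying the token, authenticated with the API’s client credentials (per RFC 7662). A sketch of what such a request looks like, with hypothetical endpoint and credential values:

```python
import base64
from urllib.parse import urlencode
from urllib.request import Request

def build_introspection_request(endpoint: str, opaque_token: str,
                                client_id: str, client_secret: str) -> Request:
    """Build a standard RFC 7662 introspection request: a form-encoded
    POST carrying the token, authenticated with HTTP Basic credentials."""
    body = urlencode({"token": opaque_token}).encode()
    basic = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    return Request(
        endpoint,
        data=body,
        headers={
            "Content-Type": "application/x-www-form-urlencoded",
            "Authorization": f"Basic {basic}",
        },
        method="POST",
    )

# Hypothetical values; sending this (e.g. with urllib.request.urlopen) returns
# JSON such as {"active": true, "sub": "alice", "scope": "read"} for a live token.
req = build_introspection_request(
    "https://curity.example.com/oauth/v2/introspection",
    "opaque-token-123", "api-client", "secret")
```

An `"active": false` response is all a caller gets for an invalid or expired token – no claims leak.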

Still, in a world dominated by microservices, a problem arises with this solution. Have a look at the diagram below: every service that processes the request needs to perform the introspection.

[Diagram: every microservice handling the request introspects the token with the Authorization Server separately]

When many services process one request, the introspection must be repeated – otherwise the APIs would have to pass around unsigned JSON and trust the caller with the authorization data, which can lead to security issues. A situation in which every service calls the Authorization Server to introspect the token puts excess load on the Authorization Server, which has to return the same set of claims many times, and can also overload the network between the services and the Authorization Server.

If your API consists of numerous microservices, there is probably some kind of a reverse proxy or an API gateway standing in front of all of them, e.g., an instance of NGINX. Such a gateway is capable of performing the introspection flow for you.

[Diagram: the API gateway performs the introspection once and forwards the result to the downstream services]

This kind of setup is called the Phantom Token Flow. The Authorization Server issues opaque tokens and serves them to the client. The client uses the token like any other – by appending it to the request in the Authorization header. When the gateway receives a request, it extracts the opaque token from the header and performs the introspection flow, though a bit differently from the flow performed by APIs. According to the specification, when an API introspects a token, it receives a JSON document with the claims that would usually end up in a JWT. For an API this is enough: it trusts the Authorization Server and just needs the data associated with the access token, so the claims do not have to be signed.

For the API gateway, though, it’s not enough. The gateway needs to exchange the opaque token for the set of data, but it will be passing this data to the downstream services together with the request, and the APIs must be sure that the claims they receive from the gateway have not been tampered with. That’s why some Authorization Servers allow the introspection endpoint to return a JWT. The API gateway introspects the opaque token, but in the response it gets a JWT that corresponds to the access token. This JWT can then be placed in the request’s Authorization header, and the downstream APIs can use it like any other JWT. Thanks to this solution, APIs can securely pass the access token between themselves while still having simple access to the claims, without needing to contact the Authorization Server.
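Conceptually, the gateway-side logic reduces to a small header rewrite. A simplified sketch, where the `introspect` callable stands in for the round trip to the Authorization Server (an assumption for illustration, not the NGINX module’s actual API):

```python
from typing import Callable, Optional

def phantom_token_filter(headers: dict,
                         introspect: Callable[[str], Optional[str]]) -> dict:
    """Replace an opaque bearer token with the JWT returned by introspection.
    Returns the rewritten headers, or raises if the token is not active."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        raise PermissionError("missing bearer token")
    opaque = auth[len("Bearer "):]
    jwt = introspect(opaque)          # one round trip to the Authorization Server
    if jwt is None:                   # inactive or expired token
        raise PermissionError("invalid token")
    rewritten = dict(headers)
    rewritten["Authorization"] = f"Bearer {jwt}"
    return rewritten                  # forwarded to the downstream API

# Stubbed introspection: maps a known opaque token to a (fake) JWT string
fake_store = {"opaque-abc": "eyJhbGciOi...signed-jwt"}
out = phantom_token_filter({"Authorization": "Bearer opaque-abc"},
                           fake_store.get)
print(out["Authorization"])  # Bearer eyJhbGciOi...signed-jwt
```

Downstream services see only the signed JWT and never learn the opaque token exists – hence the name “phantom” token.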

Using the Phantom Token Flow allows you to leverage the power of JWTs inside your infrastructure while at the same time keeping high levels of security and privacy outside of your organization. 

The network workload is greatly reduced because only the gateway contacts the Authorization Server – the introspection is done only once per request. The amount of work the gateway does is also limited: it does not have to parse the body or decode the JWT to decide whether the token is valid and unexpired; it can rely on the status code of the Authorization Server’s response alone. An OK response means the token is valid and has not expired. Network traffic can be reduced even further: the API gateway can cache the Authorization Server’s response for as long as the server allows (usually the lifetime of the access token). Because the opaque reference token is globally unique, it works as an ideal cache key, and the mapping in the cache can even be shared across APIs.
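The caching idea can be illustrated with a small in-process TTL cache keyed by the opaque token (in the setup described below, NGINX’s own proxy cache plays this role; this sketch only shows the principle):

```python
import time

class IntrospectionCache:
    """Cache introspection results keyed by the opaque token, honoring the
    TTL the Authorization Server reports (typically the token lifetime)."""
    def __init__(self):
        self._store = {}  # opaque token -> (jwt, expiry timestamp)

    def get(self, opaque: str):
        entry = self._store.get(opaque)
        if entry is None:
            return None
        jwt, expiry = entry
        if time.monotonic() >= expiry:   # expired: must introspect again
            del self._store[opaque]
            return None
        return jwt

    def put(self, opaque: str, jwt: str, ttl_seconds: float):
        self._store[opaque] = (jwt, time.monotonic() + ttl_seconds)

cache = IntrospectionCache()
cache.put("opaque-abc", "jwt-for-abc", ttl_seconds=300)
print(cache.get("opaque-abc"))   # jwt-for-abc (served without a network call)
print(cache.get("unknown"))      # None -> gateway must introspect
```

Because the key is globally unique, a hit is always correct regardless of which client or which API the request came through.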

Implementing the Phantom Token Flow using NGINX and the Curity Identity Server 

At Curity, we have created an open-source module for NGINX that facilitates implementing the Phantom Token Flow. The module takes care of performing the introspection and keeping the result in a cache (if caching is enabled). Because opaque token values are globally unique, the cache can be global as well; there is no need to keep a reference on a per-client basis. The module is easy to use, as all parameters can be set with standard NGINX configuration directives. It can be built from source, but binaries are available for several Linux distributions and NGINX versions.

You can find them in the releases section on GitHub. 

Installing and Configuring the Module 

Once downloaded, the module needs to be enabled in your NGINX instance (unless you create your own build of NGINX with all your modules compiled in). To enable the module, add this directive in the main context of your configuration:

load_module modules/ngx_curity_http_phantom_token_module.so; 

Then you can apply the phantom token flow to chosen servers and locations. Here’s an example configuration that enables the Phantom Token Flow for an /api endpoint. The configuration assumes that: 

  • Your NGINX instance runs on a different host (nginx.example.com) than your Curity Identity Server instance (curity.example.com). 
  • The API that will eventually process the request listens on example.com/api.
  • The Curity Identity Server’s introspection endpoint is /oauth/v2/introspection.
  • Responses from introspection are cached by NGINX.
http {
    proxy_cache_path /path/to/cache/cache levels=1:2 keys_zone=my_cache:10m max_size=10g inactive=60m use_temp_path=off;

    server {
        server_name nginx.example.com;

        location /api {
            proxy_pass https://example.com/api;

            phantom_token on;
            phantom_token_client_credential "client_id" "client_secret";
            phantom_token_introspection_endpoint curity;
        }

        location curity {
            proxy_pass "https://curity.example.com/oauth/v2/introspection";

            proxy_cache_methods POST;
            proxy_cache my_cache;
            proxy_cache_key $request_body;
            proxy_ignore_headers Set-Cookie;
        }
    }
}

 

Now, your NGINX gateway is capable of performing the Phantom Token Flow! 

Conclusion 

The Phantom Token Flow is a recommended practice for dealing with access tokens that are publicly available. The flow is especially effective when many microservices process one request and would otherwise have to introspect the tokens on their own. Using Phantom Tokens enhances your security and the privacy of your users, which should never be underestimated. With NGINX as your gateway and the Curity Identity Server as the Authorization Server, implementing the Phantom Token Flow is as simple as loading a module in your NGINX server!

Sign up now for a free API Gateway Webinar to learn more about guarding privacy and security with Cu...

 
Last update: 18-Feb-2021 12:30