Where You Rate-Limit APIs Matters

Seriously, let’s talk about this because architecture is a pretty important piece of the scalability puzzle.


Rate limiting is not a new concept. We used to call it “quality of service” to make it sound nicer, but the reality is that when you limited bandwidth availability based on application port or protocol, you were rate limiting.

Today we have to apply that principle to APIs (which are almost always RESTful HTTP requests) because just as there was limited bandwidth in the network back in the day, there are limited resources available to any given server. This is not computer science 301, so I won’t dive into the system-level details as to why TCP sockets are tied to file handles and thus limit the number of concurrent connections available. Nor will I digress into the gory details of memory management and CPU scheduling algorithms that ultimately support the truth of operational axiom #2: as load increases, performance decreases.

Anyone who has tried to do anything on a very loaded system has experienced this truth.

So, now we’ve got modern app architectures that rely primarily on APIs. Whether invoked from a native mobile or web-based client, APIs are the way we exchange data these days. We scale APIs like we scale most HTTP-based resources: we stick a load balancer in front of two or more servers and algorithmically determine how to distribute requests. It works, after all; you can see it working every day across the Internet. Chances are if you’re doing anything with an app, it’s been touched by a load balancer.

Now, I mention that because it’ll be important later. Right now, let’s look a bit closer at API rate limiting.

The way API rate limiting works in general is that each client is allowed X requests per time_interval. The time interval might be minutes, hours, or days. It might even be seconds. The reason for this is to prevent any given client (user) from consuming so many resources (memory, CPU, database) that the system can no longer respond to other users.

It’s an attempt to keep the server from being overwhelmed and falling over.

That’s why we scale.
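
To make the mechanics concrete, here’s a minimal sketch of that “X requests per time_interval” bookkeeping: a fixed-window counter held in memory. The names and numbers are illustrative, not taken from any particular product.

    // Minimal fixed-window rate limiter (illustrative): each client gets
    // LIMIT requests per WINDOW_MS milliseconds. In production this state
    // would live in a shared store (Redis, a database), not process memory.
    const WINDOW_MS = 60 * 1000;   // one-minute window (illustrative)
    const LIMIT = 100;             // allowed requests per window (illustrative)
    const counters = new Map();    // clientId -> { count, windowStart }
    function checkRateLimit(clientId, now = Date.now()) {
      let entry = counters.get(clientId);
      if (!entry || now - entry.windowStart >= WINDOW_MS) {
        entry = { count: 0, windowStart: now };   // start a fresh window
        counters.set(clientId, entry);
      }
      entry.count += 1;
      return {
        allowed: entry.count <= LIMIT,
        limit: LIMIT,
        remaining: Math.max(LIMIT - entry.count, 0),
        resetSeconds: Math.ceil((entry.windowStart + WINDOW_MS - now) / 1000)
      };
    }

The accounting itself is trivial; the interesting question is where that counter lives and who pays the cost of consulting it on every request.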

The way API rate limiting is often implemented is that the app, upon receiving a request, checks with a service (or directly with a data source) to figure out whether the request should be fulfilled, based on user-defined quotas and current usage.

This is the part where I let awkward silence fill the room while you consider the implication of the statement.

In an attempt to keep from overwhelming servers with API requests, that same server is tasked with determining whether or not the request should be fulfilled.
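
In code, that coupled pattern looks something like the following Express-style middleware. lookupQuota is a hypothetical stand-in for whatever database or service call holds the usage counters; the point is that the metering happens in the same process that serves the request.

    // Illustrative only: the app meters every request before it can do any real work.
    const express = require('express');
    const app = express();
    // Hypothetical helper: in practice this queries Redis, a database, or a quota service.
    async function lookupQuota(clientId) {
      return { allowed: true, remaining: 99 };
    }
    app.use(async (req, res, next) => {
      const clientId = req.get('X-Api-Key') || req.ip;  // however clients are identified
      const quota = await lookupQuota(clientId);        // extra work on every single request
      if (!quota.allowed) {
        return res.status(429).send('Too Many Requests');
      }
      next();  // only now does the actual API handler run
    });
    app.get('/api/things', (req, res) => res.json({ things: [] }));
    app.listen(3000);

Every request pays for that quota lookup on the very server the limit is supposed to protect.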

Now, I know that many API rate limiting strategies are used solely to keep data sources from being overwhelmed. Servers, after all, scale much more easily than their database counterparts.

Still, you’re consuming resources on a server unnecessarily. You’re also incurring some pretty heavy architectural debt by coupling metering and processing logic together (part of the argument for microservices and decomposition but that’s another post) and making it very difficult to change how that rate limiting is enforced in the future. Because it’s coupled with the app.

If you recall back to the beginning of this post, I mentioned there is almost always (I’d be willing to bet on it) a load balancer upstream from the servers in question. It is upstream logically and often physically, too, and it is, by its nature, capable of managing many, many, many more connections (sockets) than a web server, because that’s what it’s designed to do.

So if you move the rate limiting logic from the server to the load balancer, you get back resources, reduce architectural debt, and keep some agility in case you want to rapidly change rate limiting logic in the future. After all, changing that logic in one or two instances of a load balancer is far less disruptive than making code changes to the app (and all the testing and verification and scheduling that may require).

Now, as noted in this article laying out “Best Practices for a Pragmatic RESTful API,” there are no standards for API rate limiting. There are, however, suggested best practices and conventions that revolve around the use of custom HTTP headers:

At a minimum, include the following headers (using Twitter's naming conventions as headers typically don't have mid-word capitalization):

  • X-Rate-Limit-Limit - The number of allowed requests in the current period
  • X-Rate-Limit-Remaining - The number of remaining requests in the current period
  • X-Rate-Limit-Reset - The number of seconds left in the current period

And of course when a client has reached the limit, be sure to respond with HTTP status code 429 Too Many Requests, which was introduced in RFC 6585.
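
Put together, a client that has exhausted its quota might see a response along these lines (the values are illustrative; a JSON body carrying the same information is a common addition):

    HTTP/1.1 429 Too Many Requests
    X-Rate-Limit-Limit: 100
    X-Rate-Limit-Remaining: 0
    X-Rate-Limit-Reset: 1740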

Now, if you’ve got a smart load balancer, one that is capable of actually interacting with requests and responses (not just URIs or pre-defined headers, but one that can actually reach all the way into the TCP payload, if you want) and is enabled with some sort of scripting language (like TCL or node.js), then you can move API rate limiting logic to a load balancer-hosted service and stop consuming valuable compute.

Inserting custom headers using node.js (as we might if we were using iRules LX on a BIG-IP load balancing service) is pretty simple. The following is not actual code (I mean it is, but it’s not something I’ve tested). This is just an example of how you can grab limits (from a database, a file, another service) and then insert those into custom headers.

   // Look up this user's current limits (from a database, a file, another service)
   // and surface them to the app as custom headers. api_user_limit_lookup() is a
   // placeholder for however you fetch the quota.
   var limits = api_user_limit_lookup();
   req.headers["X-Rate-Limit-Limit"] = limits.limit;
   req.headers["X-Rate-Limit-Remaining"] = limits.remaining;
   req.headers["X-Rate-Limit-Reset"] = limits.resettime;

You can also simply refuse to fulfill the request and return the suggested HTTP status code (or any other, if your app is expecting something else), or send back a response with a JSON payload that contains the same information. As long as you’ve got an agreed-upon method of informing the client, you can pretty much make this API rate limiting service do what you want.
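
Sticking with the hypothetical lookup from the sketch above, that rejection path might look something like this (generic Node-style response calls, not any particular product’s API):

    // Illustrative: if the quota is exhausted, answer on behalf of the server
    // instead of forwarding the request. forwardToServer() is a hypothetical
    // stand-in for passing the request along to the pool.
    if (limits.remaining <= 0) {
      res.statusCode = 429;  // Too Many Requests (RFC 6585)
      res.setHeader("Content-Type", "application/json");
      res.setHeader("X-Rate-Limit-Limit", String(limits.limit));
      res.setHeader("X-Rate-Limit-Remaining", "0");
      res.setHeader("X-Rate-Limit-Reset", String(limits.resettime));
      res.end(JSON.stringify({ error: "rate limit exceeded", reset_in_seconds: limits.resettime }));
    } else {
      forwardToServer(req, res);  // under the limit: pass the request to the servers
    }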

Why in the network? 


There are three good reasons why you should move API rate limiting logic upstream, into the load balancing proxy:

1. Eliminates technical debt

     If you’ve got rate limiting logic coupled in with app logic, you’ve got technical debt you don’t need. You can lift and shift that debt upstream without having to worry about how changes in rate limiting strategy will impact the app.

2. Efficiency gains

     You’re offloading logic upstream, which means all of your servers’ compute resources are dedicated to the app itself. You can better predict capacity needs and scale without having to compensate for requests that are unequal consumers of resources.

3. Security

     It’s well understood that application layer (request-response) attacks are on the rise, including denial of service. By leveraging an upstream proxy with greater capacity for connections, you can stop those attacks in their tracks, because they never get anywhere near the actual server.


As with almost all things app and API today, architecture matters more than algorithms. Where you execute logic matters in the bigger scheme of performance, security, and scale.

Published Aug 11, 2016
Version 1.0


1 Comment

  • Subrun

    Hello,

    I have an application hosted behind F5 that was returning 504 Gateway Timeout errors when the application team ran a load test.

    The test ramped up to 150 users within an hour, in 10-minute intervals, against the application hosted behind F5. This is a web application, for your information.

    What could be the problem in this case?

    How can I find out what the existing limit is?