Understanding how Linux reacts to memory pressure using vmstat

Introduction

This article is a quick, hands-on example of how a typical Linux system reacts when memory pressure builds up quickly.

I'll use vmstat to watch the free, buff and cache counters and see how they react to a sharp increase in memory usage.
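vmstat reports these values in KiB by default. If you'd like something easier to read while following along, the procps-ng vmstat that most distros ship accepts a unit flag and a wide-output flag:

    vmstat -S M 1     # report in MiB instead of KiB, refreshing every second
    vmstat -w 1       # wide output, so the columns line up nicely on modern terminals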

FYI, my test box has 4 GB of memory; you can check yours with either of these commands:
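    free -h                         # human-readable summary of RAM and swap
    grep MemTotal /proc/meminfo     # total usable RAM as seen by the kernel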

To quickly increase memory usage, I wrote a simple C program that allocates 3 GB of RAM to an array of integers.

It then waits 5 seconds before freeing the memory, giving us some time to look at memory usage while the allocation is still held.

Here's the C code. This is a minimal sketch of it; the key detail is the loop that touches every element, which forces the kernel to actually commit the pages:
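    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define ALLOC_SIZE (3UL * 1024 * 1024 * 1024)   /* 3 GB */

    int main(void)
    {
        size_t count = ALLOC_SIZE / sizeof(int);

        /* Allocate 3 GB worth of integers. */
        int *array = malloc(ALLOC_SIZE);
        if (array == NULL) {
            perror("malloc");
            return 1;
        }

        /* Touch every element so the kernel actually commits the pages;
         * an untouched malloc() would barely move the vmstat counters. */
        for (size_t i = 0; i < count; i++)
            array[i] = (int)i;

        printf("3 GB allocated, sleeping for 5 seconds...\n");
        sleep(5);

        /* Give the memory back so we can watch the counters recover. */
        free(array);
        printf("Memory freed.\n");

        return 0;
    }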

I then compiled the code using the gcc command (assuming the source file is saved as mem.c):
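    gcc -o mem mem.c    # produces the ./mem binary used in the next step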

On the left terminal I'm executing the watch -n 1 vmstat command, and on the right terminal I'm running my C program with ./mem.

I kept recording for a couple of seconds after the program finished running, i.e. after the memory is freed, so we can see how the counters recover from the memory pressure.

I encourage you to go through the above animation twice: once observing free, and a second time observing buff/cache.

free

For the first pass, let's focus on free.

free shows the amount of physical memory that is completely unused (100% idle).

Notice that it starts out at about 2.7 GB (2759796 KB) and sharply drops to just 107 MB (107876 KB) within 8 seconds.

As soon as the memory is freed, it sharply rises back to about 3 GB (3039684 KB).

buff/cache

In a healthy Linux system with plenty of memory, buff/cache is kept high in order to make the most of the available RAM.

Cache is the size of the page cache, and buffers is the size of the in-memory block I/O buffers.

Since kernel 2.4.10, the page cache and the buffer cache have been unified, so we look at them together here.
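Both values are read from /proc/meminfo, so you can peek at the raw numbers directly (depending on your vmstat version, cache may also fold in some reclaimable slab):

    grep -E '^(Buffers|Cached):' /proc/meminfo    # the kernel counters behind buff and cache, in KiB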

This is also why the top command shows them together as a single buff/cache figure.

To Linux, idle RAM is wasted RAM when it could instead be used to cache frequently read data or to buffer data for faster retrieval.

For this reason, Linux uses a lot of buff/cache, so in reality the total amount of memory available on a Linux box is roughly the sum of free and buff/cache.
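Recent kernels expose a more precise version of that estimate as MemAvailable, and free reports it in its available column:

    grep MemAvailable /proc/meminfo    # the kernel's estimate of memory available for new workloads
    free -h                            # the same figure shows up in the "available" column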

In this particular case, the significant drop in buff/cache indicates that Linux no longer has the luxury of keeping that much memory for caching and is instead prioritising our ./mem program.

As soon as the memory is freed, buff and cache immediately start building up again, and 6 seconds later the values are much higher.

If memory continued to be a problem, Linux could opt to use its swap space, and as pages are swapped in and out of disk we would see the si/so columns increase.

Appendix - OOM Killer

Since we're talking about memory here, it's worth mentioning that when memory is critically low, or the shortage is threatening the stability of the system, Linux can trigger its Out of Memory (OOM) Killer functionality.

Its job is to kill process(es) until enough memory is freed so that the system can function smoothly.

The OOM Killer selects the process(es) whose termination it considers will do the least damage to the system's functionality.

The kernel maintains an oom_score for each process, which can be read from /proc/<PID>/oom_score:
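    cat /proc/$$/oom_score    # oom_score of the current shell; substitute any PID you like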

The lower the score, the less likely the OOM Killer is to pick that process, as it is using less memory.

The higher the oom_score value, the higher the chances of the process getting killed by the OOM Killer in an out-of-memory situation.

As most processes on my Linux box are barely using any memory, most of them get a score of 0 (143 out of 147 processes).

To get those numbers, I listed every process's oom_score and used sort -u to sort the entries and remove duplicates.
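Something along these lines reproduces it (the first command counts how many processes sit at each score, and the second is the sort -u listing of distinct values):

    cat /proc/[0-9]*/oom_score | sort -n | uniq -c    # count of processes per score
    cat /proc/[0-9]*/oom_score | sort -u              # distinct scores only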

Published May 25, 2020
Version 1.0
