Memory Overcommit Settings

Today I delved into the underworld of Linux memory allocation, in particular into overcommitting memory (RAM).

After a couple of X11 hangs I decided I needed to learn a little more about the various settings that ship as stock with the Linux kernel, to try to tame them and at least reduce, if not stop, these annoying hangs followed by reboots!

Most applications ask for more memory than they actually need to start up. Some of this is down to bad software design, and some of it is the application expecting to need that much at some point in the future: a sort of “this is my worst-case RAM requirement, and I’ll tell you now before we start!”

The stock Linux kernel settings simply agree to each application’s request without checking whether the actual resources, or the hardware, could supply the total requested memory in that worst-case scenario, partly because most applications never need what they ask for. But what happens when they do…

To see how your memory looks right now, under the ‘default’ settings, enter the following in a terminal:

sudo cat /proc/meminfo

We can see lots of lines but the four we’re interested in are:

MemTotal: The total amount of physical RAM available on your system.

MemFree: The total amount of physical RAM not being used for anything.

CommitLimit: The maximum amount of memory, counting both RAM and SWAP, that the kernel is willing to promise to running and newly requested applications (not necessarily directly related to the actual physical RAM amount; we will see why later).

Committed_AS: The total amount of memory that would be required right now, in the worst-case scenario, if all the applications actually used what they asked for at startup!
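
A quick way to pull out just those four lines rather than scrolling the whole file (your numbers will obviously differ):

grep -E 'MemTotal|MemFree|CommitLimit|Committed_AS' /proc/meminfo    # no sudo needed, /proc/meminfo is world-readable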

If the application(s) really did need what they originally asked for, an out-of-memory situation, or ‘OOM’, would occur. The OOM-killer would then kick in and try to free up actual memory by killing running processes it thinks will help. By then, though, a kernel panic (or at best an X11 hang) might already have happened, leaving a frozen system (a blue-screen in MS terms), or the OOM-killer might have killed a critical system process.

To stop the OOM-killer’s semi-random selection from killing off a critical system process, or it failing to kick in before a kernel panic, we can change the following:

vm.overcommit_ratio=100: The percentage of physical RAM that counts towards the commit limit; any SWAP you have always counts in full on top of it. (E.g. with RAM=1gb & SWAP=1gb, overcommit_ratio=100 means 2gb could be promised to applications; overcommit_ratio=50 would mean only 1.5gb, half the RAM plus all the SWAP, leaving half a gigabyte that could never be handed out, which would obviously not be a sensible choice. See the sketch after these two settings for the arithmetic.)

vm.overcommit_memory=2: This tells the kernel never to promise more than the limit determined by overcommit_ratio= (plus SWAP). Requests beyond that limit are refused outright, so in practice the OOM-killer should no longer have any work to do.
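
As a rough sketch of the arithmetic, you can recompute the limit yourself from /proc and compare it with the CommitLimit the kernel reports (the kernel also subtracts huge pages from RAM before applying the ratio, which this simple version ignores):

ratio=$(cat /proc/sys/vm/overcommit_ratio)               # percentage of RAM that counts
ram_kb=$(awk '/^MemTotal/ {print $2}' /proc/meminfo)     # physical RAM in kB
swap_kb=$(awk '/^SwapTotal/ {print $2}' /proc/meminfo)   # SWAP in kB
echo "$(( swap_kb + ram_kb * ratio / 100 )) kB"          # should roughly match CommitLimit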

We can change the above settings by entering the following into a terminal:

sudo sync    — this flushes any file data cached in RAM out to disk now

sudo sh -c "sync; echo 3 > /proc/sys/vm/drop_caches"    — this drops all caches from RAM

sudo cat /proc/meminfo    — check that Committed_AS is below CommitLimit

sudo sysctl -w vm.overcommit_ratio=99    — count 99% of physical RAM towards the commit limit

sudo sysctl -w vm.overcommit_memory=2    — never promise applications more memory than the limit set by the ratio above
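
Once both are set, it is worth a quick check that the kernel has picked up the new limit (your figures will differ):

grep -E 'CommitLimit|Committed_AS' /proc/meminfo    # CommitLimit should now be 99% of RAM plus all of your SWAP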

So now when we try to open a memory-hungry application, or we already have too many applications open, the new application is refused with a notification such as ‘file manager failed to fork’, or a message that it failed to start because there isn’t enough memory available. An application could, in theory, start with the memory available right now but keep requiring more until the system becomes unusable and hangs or crashes. A web browser is a good example: it opens with only one tab, but during the day you open a dozen more, and at some point memory would be exhausted.
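
If you want to see the refusal for yourself without risking real work, one option (assuming the stress utility is installed from your distribution’s repositories) is to ask for more memory than your CommitLimit allows; under these settings the allocation should simply be refused rather than waking the OOM-killer:

stress --vm 1 --vm-bytes 8G --vm-keep    # 8G is just an example, pick a size comfortably above your CommitLimit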

By using the two tweaks above we end up with a system that cannot promise applications more memory than it physically has. This stops the hangs and kernel panics that render the entire system useless and potentially lose the last bits of information you were typing in; instead it simply tells you that there is no more memory and that you need to go and buy more RAM!

We now know our system will just tell us there’s no more memory for that new application to open. Since we like this behaviour and want these settings to survive power cycles (reboots), we add them to:

sudo gedit /etc/sysctl.conf    — I use gedit, but nano, vi etc all work

Add vm.overcommit_ratio=99 and vm.overcommit_memory=2 on separate lines at the bottom of that file (just the settings themselves, without the ‘sudo sysctl -w’ prefix) and save. Mine looks like this:

#system tweaks
vm.swappiness=5
vm.vfs_cache_pressure=50
vm.overcommit_ratio=99
vm.overcommit_memory=2

(I use 99% just to give a little allowance).
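
To make the saved file take effect immediately, rather than waiting for the next reboot, you can reload it:

sudo sysctl -p    # re-reads /etc/sysctl.conf and applies the values in it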

Of course you could instead increase the size of your SWAP partition, since CommitLimit is a total of RAM + SWAP (remembering that SWAP lives on disk and is therefore much slower than RAM), so you can open all those tabs or applications without getting ‘failed to fork’ messages. Or you could add SWAP if you haven’t got any already.
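
If you’d rather not repartition, here is a rough sketch of adding a 2gb swap file instead (the 2G size and the /swapfile path are only examples):

sudo fallocate -l 2G /swapfile    # reserve the space (some filesystems need dd instead for swap files)
sudo chmod 600 /swapfile          # swap must not be readable by other users
sudo mkswap /swapfile             # format it as swap space
sudo swapon /swapfile             # enable it now; add it to /etc/fstab to keep it after a reboot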

“But I have an SSD and SWAP is bad”, well yes it is if you are constantly using it because you only have 1gb of RAM! If you have 4+gb of RAM then, depending on what you use your system for, SWAP on an SSD acts as a final safety net: under stock settings it can save you from a kernel panic, and with the settings above it stops the constant ‘failed to fork’ messages. If that message is still a regular sight after these changes, I’d suggest you buy more RAM!

NB: The default is vm.overcommit_memory=0, which in short means the kernel keeps no strict tabs on the memory actually available: it applies only a rough heuristic, agrees to almost every request for memory from applications, and leaves the OOM-killer to clean up afterwards, in my experience followed by hangs and reboots…
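
One quick way to confirm which mode and ratio your system is actually running with at any time:

sysctl vm.overcommit_memory vm.overcommit_ratio    # 0 = heuristic default, 1 = always overcommit, 2 = strict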

Feel free to contact me; the above is a condensed and simplified explanation for those still learning.

3 thoughts on “Memory Overcommit Settings”

  1. Pingback: Creating a SWAP partition | Iain V Linux
  2. Hello there, my server has been falling over a lot due to oom killer. I tried this suggested solution, however my server went down as a result. I was getting fork: cannot allocate memory…

    Initially I had vm.swappiness=0. I changed it to 10 and added in vm.swappiness=5
    vm.overcommit_memory=2
    vm.overcommit_ratio=100

    When saving the file, I got invalid argument for vm.overcommit_ratio = 100. And then I noticed the server was down. Anything I typed in the console returned: fork: unable to allocate memory. I managed to get into safe mode on my server to change the file back, and now the server is up and running again. Can you assist me on what I did wrong? And maybe educate me more on the matter as I believe I am just going about it quite blindly. Thank you

    • I’m sorry it’s taken so long to reply, I’ve been very busy! By now you may have sorted the issues, however there may be a couple of things going on. First, servers tend to run for long periods and it may be that yours is running an old kernel which does not support the strict overcommit setting, I think 2.4 and prior? Second, =2 counts your SWAP partition as well as RAM towards the limit, and I’m assuming you have SWAP? For reference, the alternative overcommit_memory= settings are: =0 (the default heuristic, allows most overcommit) and =1 (always overcommit, never check). Maybe one of those would solve the invalid argument for you?
      I’d also suggest lowering the commit_ratio to maybe =80 to see if it’s the kernel panicking.
