Required information
Distribution and lxcfs version: any, with active lxcfs mounts inside the container (the focus is /proc/meminfo)
Issue description
Inside a container, used memory is reported higher than it would be on classic hosts and bare-metal systems in the same situation.
Slab memory grows steadily as the system is used, which is normal: that memory is reclaimable and will be freed in favour of new allocations. But an LXC container ends up showing it as part of USED memory, which differs from regular host behaviour. On a regular host, reclaimable slab memory is counted into the Available (MemAvailable) value; it is not completely free (it is in use), but it is available to users and the system whenever they need it.
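For reference, modern free(1) on a host already counts SReclaimable into its buff/cache column, so "used" shrinks accordingly. A minimal sketch of that arithmetic over /proc/meminfo-style input (the sample numbers below are made up for illustration):

```shell
# free(1)-style columns (kB) from meminfo-style input:
#   buff/cache = Buffers + Cached + SReclaimable
#   used       = MemTotal - MemFree - buff/cache
free_columns() {
    awk '/^MemTotal:/{t=$2} /^MemFree:/{f=$2} /^Buffers:/{b=$2}
         /^Cached:/{c=$2} /^SReclaimable:/{s=$2}
         END{bc=b+c+s; printf "used=%d buff/cache=%d\n", t-f-bc, bc}'
}

# Made-up sample values (kB):
printf 'MemTotal: 8000000\nMemFree: 1000000\nBuffers: 200000\nCached: 3000000\nSReclaimable: 1500000\n' | free_columns
# prints: used=2300000 buff/cache=4700000
# On a live system: free_columns < /proc/meminfo
```

Note how 1.5 GB of reclaimable slab is charged to buff/cache, not to used; the complaint in this issue is that the container's meminfo does not get the same treatment.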
Steps to reproduce
Prepare a directory containing a significant number of files (10 GB or more, for example)
run watch 'free -m; head /proc/meminfo' # free is based on /proc/meminfo, which lxcfs synthesizes from memory.stat
clear all buffer/slab memory with: echo 3 | sudo tee /proc/sys/vm/drop_caches
run du -hd1 /directory/path # in another terminal; this will grow slab memory
inspect how the values change: USED memory rises by a significant amount (available memory drops by a significant amount, roughly SReclaimable plus more), and it should not.
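While running the steps above, a small helper makes the drift easier to watch than eyeballing full free output. This is only a sketch; it reads the same /proc/meminfo fields free uses (the sample file below is made up so the snippet runs anywhere):

```shell
# Print the fields this issue is about on one line, so repeated runs show
# SReclaimable growing while MemAvailable drops.
meminfo_line() {
    awk '/^(MemFree|MemAvailable|Slab|SReclaimable):/ {printf "%s%s ", $1, $2}
         END {print ""}' "${1:-/proc/meminfo}"
}

# Demo on a made-up sample file:
printf 'MemFree: 1000000\nMemAvailable: 2000000\nSlab: 1800000\nSReclaimable: 1500000\n' > /tmp/meminfo.sample
meminfo_line /tmp/meminfo.sample
rm -f /tmp/meminfo.sample

# Inside the container: while sleep 1; do meminfo_line; done
```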
Information to attach
I am not an expert on all the details of memory accounting: meminfo, memory.stat, memory.current, buffers, caches, inodes, active_file, inactive_file ... all of this is very complex underneath, and even free -m sometimes calculates things differently in different environments/Linux flavours. But:
I had a concrete problem:
the LXC container shows more and more memory usage and sends me ALARMS about it
while in reality no real memory is taken, only memory that will be cleared on request (well tested: slab-reclaimable frees itself in favour of, for example, a test memory request: dd if=/dev/zero bs=128M count=10 status=progress | tail | sleep 25)
I tested this problem on various systems over at least 15 days, with helper scripts that show the growth/change of the values
After all the struggling, testing and complexity, everything led to a simple conclusion: slab_reclaimable should definitely be added to MemAvailable
I created a patch for this and tested it for a few days; no obvious problem was seen with it. In my tests the patched version had no disadvantages, just the advantage that it SOLVES the main problem of this issue.
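The gist of the proposed behaviour, expressed as arithmetic over the container's meminfo (a sketch only, not the actual lxcfs C code; sample numbers are made up):

```shell
# Proposed behaviour: fold reclaimable slab back into the advertised
# available figure, i.e. MemAvailable' = MemAvailable + SReclaimable
adjusted_available() {
    awk '/^MemAvailable:/{a=$2} /^SReclaimable:/{s=$2}
         END{printf "MemAvailable: %d kB\n", a+s}'
}

printf 'MemAvailable: 2000000\nSReclaimable: 1500000\n' | adjusted_available
# prints: MemAvailable: 3500000 kB
# On a live container: adjusted_available < /proc/meminfo
```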