When running the install.sh script, it determines the number of vCPUs by the following line:
available_vcpu=`egrep 'processor' /proc/cpuinfo | wc -l`
Later in the script, that value is used to determine the number of jobs to run for the 'make' command:
make -j`echo $((${available_vcpu} - 1))`
When running in Kubernetes/OpenShift, "egrep 'processor' /proc/cpuinfo | wc -l" gives the total number of processors on the node running the build, since /proc/cpuinfo is not scoped by the cgroup. The result is that make runs with as many jobs as there are cores (72 in our case), which blows the RAM usage through the roof.
I was able to fix this by manually changing the value to 2 in the Dockerfile, but if this could be added as an argument to install.sh (or made overridable via an environment variable), it would allow the build to work much better in a k8s-based environment.
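Something along these lines in install.sh would keep the current detection as the default while allowing an override (BUILD_VCPU is only an example name here, not something the script reads today):

# Hypothetical sketch of the requested override; BUILD_VCPU is just an example
# variable name, not something install.sh currently reads.
available_vcpu="${BUILD_VCPU:-$(egrep 'processor' /proc/cpuinfo | wc -l)}"
make -j"$((available_vcpu - 1))"

The Dockerfile could then set e.g. BUILD_VCPU=2 instead of patching the script.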
Thanks,
Sean
You could also try the nproc command, which is smarter about reporting the number of cores assigned to a process. But I don't know whether it's available on all platforms or how it behaves on Kubernetes.
Thanks for the responses. We tried nproc, but sadly it is not namespace-aware, so it too reports the total number of cores available on the node. We added the following to the Dockerfile to hardcode the value:
RUN cd /opt/mdt-dialout-collector && sed -i 's/available_vcpu=.*/available_vcpu=2/g' ./install.sh
Which does the trick, but it took a while to figure out why the builds were blowing up the memory before we got here.
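For what it's worth, one way install.sh could detect the limit itself (just a sketch, not something the script does today) would be to read the CPU quota from the cgroup filesystem instead of /proc/cpuinfo:

# Hypothetical sketch: derive the vCPU count from the container's cgroup v2 CPU quota.
# /sys/fs/cgroup/cpu.max holds "<quota> <period>" in microseconds, or "max <period>" when unlimited.
if [ -r /sys/fs/cgroup/cpu.max ]; then
    read -r quota period < /sys/fs/cgroup/cpu.max
    if [ "$quota" != "max" ]; then
        available_vcpu=$(( (quota + period - 1) / period ))   # round up to whole CPUs
    fi
fi
# Fall back to the current behaviour when no quota is found.
available_vcpu="${available_vcpu:-$(egrep 'processor' /proc/cpuinfo | wc -l)}"

On cgroup v1 the equivalent values live in cpu.cfs_quota_us and cpu.cfs_period_us, so a robust version would need to check both.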