How to Debug High Memory Usage in Docker 🧠

Is your Docker-only server about to run out of memory? That was my situation, until I decided to investigate and find the root cause. 🦥


A while ago, I noticed that the memory usage of my DigitalOcean droplet was consistently above 90%, which I consider too high. 🎢

I hadn't added any new services recently, and even though I removed some stacks I wasn't using, the impact on memory was relatively small. I was wondering how I could reduce my server's memory usage before running out of it. 🤔
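
A quick way to confirm the overall picture on the host itself, independent of Docker, is the good old free command:

free -h
A quick check of the host's total, used and available memory, in human-readable units.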

After searching the Internet, I came across a simple command that helped me determine which containers were taking up too much memory: docker stats.

docker stats
A useful command to remember if you have multiple Docker containers on your server. 😉

If you type the command in your terminal, it will output a large table like the one below with the following statistics:

  • The container ID and name
  • CPU utilization percentage
  • Memory usage, limit and percentage
  • The amount of data the container has sent and received over the network interface
  • The amount of data the container has read from and written to disk
  • The number of processes or threads the container has created
CONTAINER ID   NAME                                                                  CPU %     MEM USAGE / LIMIT   MEM %     NET I/O           BLOCK I/O         PIDS
4036d8b5cec0   www_benjaminrancourt_ghost.1.lozlwwfcsjrdf94y9yl5q9tbe                0.07%     137.1MiB / 256MiB   53.56%    18.4MB / 12.2MB   32.6MB / 561kB    11  
2a913e0898bb   phpmyadmin_phpmyadmin.1.ycvp02lpvshqiwrpg0xzk7ehm                     0.00%     40.54MiB / 128MiB   31.67%    3.31MB / 27MB     7.94MB / 16.4MB   7   
daee5991d4be   phpmyadmin_mysql.1.txa6wbp2rj7fn9xhgnkr0dfb6                          0.24%     679.5MiB / 1GiB     66.35%    122MB / 947MB     78MB / 3.27GB     46  
ed326dda28b9   uptime-kuma_uptime-kuma.1.0tut77m69lyez7z8uwtqo55et                   0.58%     83.83MiB / 128MiB   65.49%    1.34GB / 93.1MB   814MB / 1.91GB    12  
1fcecf3bbe69   ghost_jardin_ghost.1.m9psm95qontpamrdu3ukqh2mq                        0.00%     93.48MiB / 256MiB   36.52%    738kB / 73.5kB    28.4MB / 41kB     11  
92f0264ea93c   mealie_mealie.1.j550w0meluc8yws6zlc1xsras                             0.31%     259.5MiB / 512MiB   50.69%    3.76MB / 9.9MB    34.4MB / 66.2MB   23  
35a16db2c564   phpmyadmin_mariadb.1.rym9ra2c9s6m6f9gi9co3vjpt                        0.02%     249.8MiB / 512MiB   48.79%    1.89MB / 2.81MB   58.4MB / 2.28GB   9   
f03d490eb35c   unami_umami.1.j8wyl0basbzh6h10gs4kdv18r                               0.02%     118.7MiB / 256MiB   46.38%    8.98MB / 10.8MB   33.3MB / 8.19kB   25  
ff362460c71c   portainer_portainer.1.hoxq3gmdpk1n5ytx1w7bq6v4d                       0.01%     41.09MiB / 128MiB   32.10%    8.13MB / 40.7MB   39.2MB / 37.5MB   6   
e33a94ad44cc   traefik_traefik.v47gxhgyo1hy62l702cdw2si0.i4vk6r11foyqxlt4lnobx6h03   0.02%     45.1MiB / 128MiB    35.24%    842MB / 1.48GB    9.29MB / 0B       10
A live stream of stats from the containers currently on my server. 🏄‍♂️
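
By default, docker stats shows every running container, but you can also pass one or more container names or IDs to watch only the ones you care about. For example, with one of my own container names:

docker stats www_benjaminrancourt_ghost.1.lozlwwfcsjrdf94y9yl5q9tbe
Only the containers you name are included in the live stream.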

If you find the previous output too verbose, you can pass a custom template to the --format option:

docker stats --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.MemPerc}}"
The format option can be used to display only the metrics that are useful to you.
NAME                                                                  CPU %     MEM USAGE / LIMIT   MEM % 
www_benjaminrancourt_ghost.1.lozlwwfcsjrdf94y9yl5q9tbe                0.06%     142.2MiB / 256MiB   55.57%
phpmyadmin_phpmyadmin.1.ycvp02lpvshqiwrpg0xzk7ehm                     0.01%     40.55MiB / 128MiB   31.68%
phpmyadmin_mysql.1.txa6wbp2rj7fn9xhgnkr0dfb6                          0.60%     679.6MiB / 1GiB     66.37%
uptime-kuma_uptime-kuma.1.0tut77m69lyez7z8uwtqo55et                   1.15%     84.57MiB / 128MiB   66.07%
ghost_jardin_ghost.1.m9psm95qontpamrdu3ukqh2mq                        0.00%     93.48MiB / 256MiB   36.52%
mealie_mealie.1.j550w0meluc8yws6zlc1xsras                             0.60%     259.5MiB / 512MiB   50.69%
phpmyadmin_mariadb.1.rym9ra2c9s6m6f9gi9co3vjpt                        0.03%     249.8MiB / 512MiB   48.79%
unami_umami.1.j8wyl0basbzh6h10gs4kdv18r                               0.02%     118.8MiB / 256MiB   46.39%
portainer_portainer.1.hoxq3gmdpk1n5ytx1w7bq6v4d                       0.00%     41.61MiB / 128MiB   32.51%
traefik_traefik.v47gxhgyo1hy62l702cdw2si0.i4vk6r11foyqxlt4lnobx6h03   0.05%     43.85MiB / 128MiB   34.26%
Much simpler, right? 😅
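
The command also keeps refreshing until you stop it. If you only want a single snapshot, for example to sort the containers by memory percentage, the --no-stream flag prints one sample and exits. Something along these lines should do the trick:

docker stats --no-stream --format "{{.MemPerc}}\t{{.Name}}" | sort -rn
A one-off snapshot, sorted so that the most memory-hungry containers appear first.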

After seeing a table similar to the one above, I was able to identify the faulty container and take action by deleting it.

By monitoring the stats, I was also able to better adjust the Docker limits and reservations for each of my containers.

# Inside each service definition of the stack file
deploy:
  resources:
    limits:
      cpus: '0.750'
      memory: 64M
    reservations:
      cpus: '0.001'
      memory: 32M
An example of the resources I now set on each of my containers.
The memory usage of my droplet. You can see that the actions I took during the evening paid off.
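
If, like me, you run your containers as Swarm stacks, redeploying a stack is enough to apply the new limits and reservations. The stack name and Compose file below are only placeholders for your own setup:

docker stack deploy --compose-file docker-compose.yml my-stack
Redeploying a stack so that the updated resources take effect.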

The next target that I'd really like to act on is my MySQL container:

NAME                                                                  CPU %     MEM USAGE / LIMIT   MEM %     PIDS
phpmyadmin_mysql.1.txa6wbp2rj7fn9xhgnkr0dfb6                          0.32%     682.2MiB / 1GiB     66.62%    47
phpmyadmin_mariadb.1.rym9ra2c9s6m6f9gi9co3vjpt                        0.02%     249.9MiB / 512MiB   48.81%    10

It's crazy when you compare the MySQL and MariaDB statistics side by side, right? MySQL really requires too many resources to do its job. Unfortunately, Ghost recently chose to support only MySQL, so I'll be tied to it for a long time... 😞
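
One thing I may try in the meantime is to rein in MySQL's memory through its configuration. The InnoDB buffer pool and the performance schema are two of its larger memory consumers, and both can be tuned with mysqld flags in the Compose file. The values below are only an illustration, not something I have validated on my own setup:

# Hypothetical mysqld flags, passed through the service's command in the Compose file
command: --innodb-buffer-pool-size=128M --performance-schema=OFF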

What about you, have you found other ways to improve the memory usage of your containers? 🤗