Diagnosing Docker Disk Usage

April 21, 2018    devops docker

I’ve been using Docker in production for over a year in conjunction with Rancher 1.6, and the biggest issue so far has been disk usage.

Dangling images are one of the culprits, so I run a container called Janitor that cleans up unused Docker images at least once a day.
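If you don’t want to run a dedicated cleanup container, the same daily cleanup can be sketched with Docker’s built-in prune command (Docker 1.13+) and a cron entry. The file name and schedule below are just examples, not Janitor’s actual configuration:

```shell
# /etc/cron.d/docker-cleanup (hypothetical file name):
# remove dangling images every day at 03:00
0 3 * * * root docker image prune -f > /dev/null 2>&1
```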

Sometimes Janitor is not enough, though. The easiest way to find out where to reclaim some disk space is with ncdu, which is like Midnight Commander for disk usage:

sudo apt-get install ncdu

Then you just run it in a terminal or SSH session:

ncdu

You get to browse the entire file system. Here is what it showed inside /var/lib/docker:

    5.3 GiB [##########] /tmp
    2.4 GiB [####      ] /aufs
   71.1 MiB [          ] /volumes
   45.5 MiB [          ] /containers
   23.0 MiB [          ] /image
  144.0 KiB [          ] /network
   20.0 KiB [          ] /plugins
e   4.0 KiB [          ] /trust
e   4.0 KiB [          ] /swarm
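Two ncdu flags worth knowing: `-x` keeps the scan on a single filesystem, and `-o`/`-f` let you export a scan and browse it later, which is handy on busy servers. A sketch (the output file name is arbitrary):

```shell
# Scan /var/lib/docker without leaving the filesystem and save the results
ncdu -x -o docker-scan.json /var/lib/docker

# Browse the saved scan at any time, without rescanning
ncdu -f docker-scan.json
```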

You can also use du:

root@localhost# du -ah /var | sort -rh | head -20
85G	/var
84G	/var/lib
72G	/var/lib/docker/tmp
72G	/var/lib/docker
13G	/var/lib/mesos/slave
13G	/var/lib/mesos
7.9G	/var/lib/mesos/slave/slaves/286668e2-1efa-41ba-b25b-6675aeffbde9-S16/frameworks
7.9G	/var/lib/mesos/slave/slaves/286668e2-1efa-41ba-b25b-6675aeffbde9-S16
7.9G	/var/lib/mesos/slave/slaves
4.4G	/var/lib/mesos/slave/store/docker/layers
4.4G	/var/lib/mesos/slave/store/docker
4.4G	/var/lib/mesos/slave/store
3.3G	/var/lib/docker/tmp/GetImageBlob961335265
3.3G	/var/lib/docker/tmp/GetImageBlob960365863
3.3G	/var/lib/docker/tmp/GetImageBlob940363171
3.3G	/var/lib/docker/tmp/GetImageBlob861421802
3.3G	/var/lib/docker/tmp/GetImageBlob797050430
3.3G	/var/lib/docker/tmp/GetImageBlob779335335
3.3G	/var/lib/docker/tmp/GetImageBlob753987945
3.3G	/var/lib/docker/tmp/GetImageBlob715284328

As you can see, the Docker tmp directory is filling up with leftover GetImageBlob files from image pulls.
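To cross-check against Docker’s own accounting, you can also ask the daemon how much space images, containers, and volumes are claiming (available since Docker 1.13):

```shell
# Summarize disk usage as seen by the Docker daemon
docker system df

# Add -v for a per-image / per-container / per-volume breakdown
docker system df -v
```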

One quick solution is to run docker system prune with the -a flag, which removes stopped containers, unused networks, and all unused images (not just dangling ones):

sudo docker system prune -a
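If removing every unused image is too aggressive (the next deploy will have to re-pull everything), the narrower prune subcommands let you clean one resource type at a time:

```shell
# Remove only dangling images (untagged layers)
docker image prune

# Remove stopped containers
docker container prune

# Remove unused volumes (on recent versions, system prune skips
# volumes unless you pass --volumes)
docker volume prune
```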

After running it, I got most of my disk space back:

ubuntu:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            2.0G     0  2.0G   0% /dev
tmpfs           396M  5.5M  390M   2% /run
/dev/xvda1       12G  3.0G  8.6G  26% /