What happened

The docker daemon won't start, and it prints error logs to the console.

The Logs:
me@mymachine> level=fatal msg="Error starting daemon: error initializing graphdriver: open /var/lib/docker/devicemapper/devicemapper/data: no such file or directory"
    docker-storage-setup[55961]: Rounding up size to full physical extent 20.00 MiB
    docker-storage-setup[55961]: Volume group "centos" has insufficient free space (0 extents): 5 required.
    systemd[1]: docker-storage-setup.service: main process exited, code=exited, status=5/NOTINSTALLED
    systemd[1]: Failed to start Docker Storage Setup.
    systemd[1]: Unit docker-storage-setup.service entered failed state.

From the logs you can see that when we tried to restart the docker daemon, it was not able to open ``/var/lib/docker/devicemapper/devicemapper/data``.

Process to Investigate Issue

These steps can also be used to verify the storage used by device mappings for docker images under /var/lib/docker/devicemapper.

First, we ran ls -al on the /var/lib/docker/devicemapper/devicemapper directory, but nothing was shown:

me@mymachine> ls -al /var/lib/docker/devicemapper/devicemapper
total 0

But when we ran du -h to see how much space that directory was actually using, we got:

me@mymachine> du -h /var/lib/docker/
80G  /var/lib/docker/devicemapper/devicemapper


This led us to the discovery of the device mapper block devices with:

me@mymachine> lsblk 
loop0                             7:0    0  100G  0 loop
└─docker-253:3-269883968-pool   253:2    0  100G  0 dm


me@mymachine> dmsetup ls
docker-253:3-269883968-pool	(253:2)

These showed us which device mappings actually existed; we then correlated them with the device nodes listed by:

me@mymachine> ls -al /dev/mapper/docker-*
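The pool name itself is informative: it appears to encode docker-<major>:<minor>-<inode>-pool, where major:minor identify the filesystem holding /var/lib/docker and the inode is that of the devicemapper directory. A small sketch decoding the name seen above (this reading of the naming scheme is our assumption, not something stated by docker):

```shell
# Decode docker's pool name (assumed format: docker-<major>:<minor>-<inode>-pool;
# the value below is taken from the dmsetup output above).
pool='docker-253:3-269883968-pool'
decoded=$(printf '%s\n' "$pool" \
  | sed -E 's/^docker-([0-9]+):([0-9]+)-([0-9]+)-pool$/major=\1 minor=\2 inode=\3/')
echo "$decoded"
```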

N.B. Executing docker info against a running docker daemon will give you information about its storage configuration:

    me@mymachine> docker info
    Storage Driver: devicemapper
    Pool Name: docker-253:3-269883968-pool
    Pool Blocksize: 65.54 kB
    Backing Filesystem: xfs
    Data file: /dev/loop0
    Metadata file: /dev/loop1
    Data loop file: /var/lib/docker/devicemapper/devicemapper/data
    Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
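If you need those paths in a script, they can be pulled out of the docker info output. A sketch, run here against a captured sample of the output above rather than a live daemon:

```shell
# Extract the data loop file path from captured `docker info` output
# (the sample text mirrors the fields shown above).
info='Storage Driver: devicemapper
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata'
data_file=$(printf '%s\n' "$info" | awk -F': ' '/^Data loop file/ { print $2 }')
echo "$data_file"
```

Against a live daemon you would pipe docker info straight into the awk instead of using the captured sample.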

We found all the processes that were holding device mappings open for docker using:

me@mymachine> grep docker /proc/*/mounts | awk -F/ '{ print $3 }'
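Each matching line is prefixed by grep with the file it came from, so splitting on '/' puts the PID in the third field. Demonstrated on a sample line (the mount entry shown is illustrative):

```shell
# grep prints '/proc/<pid>/mounts:<mount entry>'; split on '/', field 3 is the PID.
line='/proc/12345/mounts:/dev/mapper/docker-253:3-269883968-pool /var/lib/docker/devicemapper xfs rw 0 0'
pid=$(printf '%s\n' "$line" | awk -F/ '{ print $3 }')
echo "$pid"
```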

It is also possible on Red Hat and Ubuntu based systems to run systemd-cgls, which prints the cgroup tree, including your docker container processes, nicely to the console.

Before proceeding any further, a quick Google search turned up three docker issues that seemed strongly related to the one we were hitting.

Found it!

Our investigation led us to conclude that the docker daemon had died or been forcibly killed, so its containers never received the kill signal that would have caused them to unmount the device mappings created by volume shares with the docker host machine.

We are using the default loopback storage backend, which builds a device-mapper pool on top of loopback-mounted sparse files and allocates the docker containers' filesystems from that pool.

As a consequence, when we tried to restart the docker daemon it was not able to open /var/lib/docker/devicemapper/devicemapper/data because it was still mounted by the previous docker containers' volume shares (which do not show up with basic ls and lsof commands).
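Our assumption for why these mounts are invisible to ls and lsof on the host is that each container holds them inside its own mount namespace. Comparing namespace ids shows whether another process shares your view of the mounts:

```shell
# Each process's mount namespace has an id, readable as a symlink target under
# /proc/<pid>/ns/mnt. Processes see each other's mounts only when these ids
# match; container processes normally carry a different id from the host shell.
self_ns=$(readlink /proc/self/ns/mnt)
for p in /proc/[0-9]*; do
  ns=$(readlink "$p/ns/mnt" 2>/dev/null) || continue   # may need root to read
  if [ "$ns" != "$self_ns" ]; then
    echo "${p#/proc/}: different mount namespace ($ns)"
  fi
done
echo "self: $self_ns"
```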

Process to Resolve Issue

To recap: the device mappings were never unmounted because the docker daemon exited before it could send the kill signals to its containers. This could happen if, for example, systemd killed the docker daemon service because the docker binary and supporting files were upgraded while they were running.

N.B. this is destructive - you will lose all data under /var/lib/docker/devicemapper

You can kill the remaining docker container processes however you see fit.

Then unmount and remove the device mappings with the following command:

me@mymachine> for dm in /dev/mapper/docker-*; do umount "$dm"; dmsetup remove "$dm"; done
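Since the loop is destructive, it can be worth previewing what it will touch first. The sketch below runs the same glob-and-guard logic against a scratch directory standing in for /dev/mapper, so it is safe to run anywhere; MAPPER_DIR is a name of our invention, and you would point it at the real /dev/mapper to preview for real:

```shell
# Dry-run of the cleanup loop against a scratch stand-in for /dev/mapper.
MAPPER_DIR=$(mktemp -d)                             # stand-in for /dev/mapper
touch "$MAPPER_DIR/docker-253:3-269883968-pool"     # fake device node
planned=''
for dm in "$MAPPER_DIR"/docker-*; do
  [ -e "$dm" ] || continue                          # glob may match nothing
  planned="$planned$dm "
  echo "would run: umount $dm && dmsetup remove $dm"
done
rm -rf "$MAPPER_DIR"
```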