Docker - Unable to remove dead container

I can’t delete a dead container; it reappears after restarting the Docker service.

 docker ps -a
 CONTAINER ID        STATUS
 11667ef16239        Dead

Then

 docker rm -f 11667ef16239 

After that, docker ps -a showed no containers:

 docker ps -a
 CONTAINER ID        STATUS

However, when I restart the docker service,

 service docker restart 

And run docker ps -a again:

 docker ps -a
 CONTAINER ID        STATUS
 11667ef16239        Dead
+73 · docker · Jun 12 '15 at 1:43
18 answers

Most likely an error occurred while the daemon was trying to clean up the container, and it is now stuck in this "zombie" state.

I'm afraid your only option is to manually clear it:

 $ sudo rm -rf /var/lib/docker/<storage_driver>/11667ef16239.../ 

Where <storage_driver> is the name of your driver (aufs, overlay, btrfs, or devicemapper).
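A cautious way to script this manual cleanup (a sketch: `state_dir` is a hypothetical helper, and the exact directory name may be the full 64-character ID rather than the short one shown, so inspect before deleting):

```shell
# Build the path to a container's leftover state under the storage driver.
# The driver name would normally come from: docker info --format '{{.Driver}}'
state_dir() {
  driver="$1"
  cid="$2"
  printf '/var/lib/docker/%s/%s\n' "$driver" "$cid"
}

state_dir devicemapper 11667ef16239
# prints /var/lib/docker/devicemapper/11667ef16239
# inspect first:  sudo ls -d "$(state_dir devicemapper 11667ef16239)"*
# then, with the daemon stopped:  sudo rm -rf <that path>
```

The `ls -d` step before any `rm -rf` matters here: deleting the wrong directory under /var/lib/docker can corrupt other containers.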

+40 · Jun 12 '15 at 1:52

These days things have changed a bit: to get rid of these dead containers, you can try unmounting the locked filesystems to free them.

So, if you get a message like this

 Error response from daemon: Cannot destroy container elated_wozniak: Driver devicemapper failed to remove root filesystem 656cfd09aee399c8ae8c8d3e735fe48d70be6672773616e15579c8de18e2a3b3: Device is Busy 

just run this

 umount /var/lib/docker/devicemapper/mnt/656cfd09aee399c8ae8c8d3e735fe48d70be6672773616e15579c8de18e2a3b3 

and you can usually remove the container after this
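The two steps above can be scripted. This is a sketch: `busy_mount_path` is a hypothetical helper, and it assumes the devicemapper layout shown in the error message (other storage drivers mount under different paths):

```shell
# Extract the 64-hex-character filesystem ID from a "Device is Busy" error
# and derive the devicemapper mount path that needs unmounting.
busy_mount_path() {
  local msg="$1" id
  id=$(printf '%s\n' "$msg" | grep -oE '[0-9a-f]{64}' | head -n1)
  [ -n "$id" ] && printf '/var/lib/docker/devicemapper/mnt/%s\n' "$id"
}

err='Driver devicemapper failed to remove root filesystem 656cfd09aee399c8ae8c8d3e735fe48d70be6672773616e15579c8de18e2a3b3: Device is Busy'
busy_mount_path "$err"
# prints /var/lib/docker/devicemapper/mnt/656cfd09aee399c8ae8c8d3e735fe48d70be6672773616e15579c8de18e2a3b3
```

After that, `sudo umount "$(busy_mount_path "$err")"` and retry the docker rm.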

+44 · Aug 25 '15 at 14:45

You can also remove dead containers with this command

 docker rm $(docker ps --all -q -f status=dead) 

But I really don't know why or how dead containers get created. Whenever I get dead containers, it seems related to this issue: https://github.com/typesafehub/mesos-spark-integration-tests/issues/34
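One caveat with the one-liner above: when there are no dead containers, the $( ) substitution expands to nothing and docker rm errors out. A dry-run variant using GNU xargs -r (which skips the command entirely on empty input) avoids that; drop the `echo` to actually remove:

```shell
# Print the removal command instead of running it (dry run).
# Guarded so the snippet is a no-op on machines without docker installed.
if command -v docker >/dev/null 2>&1; then
  docker ps --all -q -f status=dead | xargs -r echo docker rm
fi
```

Note that `-r` / `--no-run-if-empty` is a GNU extension; on BSD xargs the empty-input case is already a no-op.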

[Update] With Docker 1.13, we can easily remove both dangling images and unwanted containers:

 $ docker system df     # shows used space, similar to the unix tool df
 $ docker system prune  # removes all unused data
+38 · Dec 30 '15 at 4:37

I had the following error while deleting a dead container (docker 17.06.1-ce on CentOS 7):

 Error response from daemon: driver "overlay" failed to remove root filesystem for <some-id>: remove /var/lib/docker/overlay/<some-id>/merged: device or resource busy 

Here is how I fixed it:

1. Check which other processes also use docker resources.

$ grep docker /proc/*/mountinfo

which outputs something like this, where the number after /proc/ is the pid:

 /proc/10001/mountinfo:179...
 /proc/10002/mountinfo:149...
 /proc/12345/mountinfo:159 149 0:36 / /var/lib/docker/overlay/...

2. Check the process name for each pid above

 $ ps -p 10001 -o comm=
 dockerd
 $ ps -p 10002 -o comm=
 docker-containe
 $ ps -p 12345 -o comm=
 nginx    <<<-- This is suspicious!!!

So nginx with pid 12345 also seems to be using /var/lib/docker/overlay/..., which is why we cannot delete the associated container and get the device or resource busy error. (This can happen when nginx shares the same mount namespace as the docker containers, which keeps the mount from being released.)

3. Stop nginx, then delete the container:

 $ sudo service nginx stop
 $ docker rm <container-id>
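Steps 1 and 2 can be combined into one small helper. This is a sketch assuming Linux's /proc and a procps-style ps; the function name `mount_holders` is mine, not a standard tool:

```shell
# List PIDs (and command names) of processes whose mount namespace
# references a given path, so the process blocking removal stands out.
mount_holders() {
  pattern="$1"
  grep -l "$pattern" /proc/[0-9]*/mountinfo 2>/dev/null |
  while read -r f; do
    pid=${f#/proc/}
    pid=${pid%/mountinfo}
    printf '%s\t%s\n' "$pid" "$(ps -p "$pid" -o comm= 2>/dev/null)"
  done
}

mount_holders /var/lib/docker/overlay
# any PID listed here that is not dockerd/containerd is a candidate to stop
```

On a box with the problem above, this would print the nginx pid alongside dockerd's, pointing straight at the process to stop.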
+22 · Sep 06 '17 at 7:03

I had the same problem and neither of the answers above helped.

What helped me was simply creating the missing directories and then deleting the container:

 mkdir /var/lib/docker/devicemapper/mnt/656cfd09aee399c8ae8c8d3e735fe48d70be6672773616e15579c8de18e2a3b3
 mkdir /var/lib/docker/devicemapper/mnt/656cfd09aee399c8ae8c8d3e735fe48d70be6672773616e15579c8de18e2a3b3-init
 docker rm 656cfd09aee399c8ae8c8d3e735fe48d70be6672773616e15579c8de18e2a3b3
+11 · Oct 12 '15 at 13:58

Removing the container by force helped me.

docker rm -f <id_of_the_dead_container>

Notes

Remember that this command may produce the following error:

 Error response from daemon: Driver devicemapper failed to remove root filesystem <id_of_the_dead_container>: Device is Busy

Despite this message, the mount for your dead container's devicemapper device does get removed. That is, you will no longer be able to access this path:

/var/lib/docker/devicemapper/mnt/<id_of_the_dead_container>

+10 · Dec 07 '16 at 11:05
 grep 656cfd09aee399c8ae8c8d3e735fe48d70be6672773616e15579c8de18e2a3b3 /proc/*/mountinfo 

then find the pid in the matching /proc/<pid>/mountinfo path and kill that process

+6 · Dec 25 '17 at 2:26

Try the following commands. They always work for me.

 # docker volume rm $(docker volume ls -qf dangling=true)
 # docker rm $(docker ps -q -f 'status=exited')

After executing the above commands, restart docker:

 # service docker restart 
+5 · Sep 06 '17 at 10:40

I tried the suggestions above but they did not work.

Then:

  1. I tried docker system prune -a ; this did not work the first time
  2. I rebooted the system
  3. I tried docker system prune -a again. This time it worked: it prints a warning message and at the end asks "Are you sure you want to continue?". Answer y. It takes a while, but eventually the dead containers disappear.
  4. Check with docker ps -a

IMPORTANT - this is a nuclear option, as it destroys all containers + images

+5 · Jun 01 '18 at 14:49

I tried all of the above (except for rebooting the machine / restarting docker).

So here is my docker rm error:

 $ docker rm 08d51aad0e74
 Error response from daemon: driver "devicemapper" failed to remove root filesystem for 08d51aad0e74060f54bba36268386fe991eff74570e7ee29b7c4d74047d809aa: remove /var/lib/docker/devicemapper/mnt/670cdbd30a3627ae4801044d32a423284b540c5057002dd010186c69b6cc7eea: device or resource busy

Then I did the following:

 $ grep docker /proc/*/mountinfo | grep 958722d105f8586978361409c9d70aff17c0af3a1970cb3c2fb7908fe5a310ac
 /proc/20416/mountinfo:629 574 253:15 / /var/lib/docker/devicemapper/mnt/958722d105f8586978361409c9d70aff17c0af3a1970cb3c2fb7908fe5a310ac rw,relatime shared:288 - xfs /dev/mapper/docker-253:5-786536-958722d105f8586978361409c9d70aff17c0af3a1970cb3c2fb7908fe5a310ac rw,nouuid,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota

This must be the PID of the process keeping it busy: 20416 (the item after /proc/).

So I did ps -p and, to my surprise, found:

 [devops@dp01app5030 SeGrid]$ ps -p 20416
   PID TTY          TIME CMD
 20416 ?        00:00:19 ntpd

That was a real WTF moment. So I turned to Google and found this: https://github.com/docker/for-linux/issues/124

Turns out I had to restart the ntp daemon, and this solved the problem!

+4 · Aug 31 '18 at 17:00

There are many answers here, but none of them relate to the (quick) solution that worked for me.

I am using Docker version 1.12.3, build 6b644ec.

I just ran docker rmi <image-name> on the image the dead container came from. docker ps -a then showed that the dead container was completely gone.

Then, of course, I just pulled the image again and re-ran the container.

I have no idea how it ended up in that state, but there it is...

+1 · Oct 16 '18 at 14:11

Try this; it worked for me:

 $ docker ps -a
 CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
 4f13b53be9dd        5b0bbf1173ea        "/opt/app/netjet..."     5 months ago        Dead                                    appname_chess
 $ docker rm $(docker ps --all -q -f status=dead)
 Error response from daemon: driver "devicemapper" failed to remove root filesystem for 4f13b53be9ddef3e9ba281546aef1c544805282971f324291a1dc91b50eeb440: failed to remove device 487b4b73c58d19ef79201cf6d5fcd6b7316e612e99c14505a6bf24399cad9795-init: devicemapper: Error running DeleteDevice dm_task_run failed
 $ su
 # cd /var/lib/docker/containers
 [root@localhost containers]# ls -l
 total 0
 drwx------. 1 root root 312 Nov 17 08:58 4f13b53be9ddef3e9ba281546aef1c544805282971f324291a1dc91b50eeb440
 [root@localhost containers]# rm -rf 4f13b53be9ddef3e9ba281546aef1c544805282971f324291a1dc91b50eeb440
 # systemctl restart docker
+1 · Nov 17 '18 at 14:15

Try it, it worked for me:

 docker rm -f <container_name>    # e.g. docker rm -f 11667ef16239
+1 · Apr 28 '19 at 16:10

For Windows:

 del D:\ProgramData\docker\containers\{CONTAINER ID}
 del D:\ProgramData\docker\windowsfilter\{CONTAINER ID}

Then restart Docker Desktop.

+1 · Aug 27 '19 at 5:45

Running CentOS 7 and Docker 1.8.2, I was unable to use Zgr3doo's solution of umounting via devicemapper (I think I got a response that the volume was not mounted/found).

I think I also hit a situation similar to sk8terboi87 ツ's answer: I believe the message was that the volumes could not be unmounted, and it listed the specific volumes it was trying to unmount while removing the dead containers.

What worked for me was to first stop docker and then manually delete the directories. I was able to determine which ones they were from the error output of the previous removal command.

Sorry for the vague description above; I found this question a few days after I dealt with the dead containers. However, I noticed a similar situation today:

 $ sudo docker stop fervent_fermi; sudo docker rm fervent_fermi
 fervent_fermi
 Error response from daemon: Cannot destroy container fervent_fermi: Driver devicemapper failed to remove root filesystem a11bae452da3dd776354aae311da5be5ff70ac9ebf33d33b66a24c62c3ec7f35: Device is Busy
 Error: failed to remove containers: [fervent_fermi]
 $ sudo systemctl stop docker
 $ sudo rm -rf /var/lib/docker/devicemapper/mnt/a11bae452da3dd776354aae311da5be5ff70ac9ebf33d33b66a24c62c3ec7f35
 $

I noticed that with this approach, docker re-created the container under a different name:

 a11bae452da3 trend_av_docker "bash" 2 weeks ago Dead compassionate_ardinghelli 

Perhaps this is because the container was run with restart=always; the container ID matches the ID of the container whose volume I forcibly deleted. This new container could be removed without issue:

 $ sudo docker rm -v compassionate_ardinghelli
 compassionate_ardinghelli
0 · Mar 21 '16 at 15:38

Try killing it first and then deleting it :) i.e.

 docker kill $(docker ps -q)

0 · Jun 07 '18 at 9:20

Try this; it worked for me on CentOS:

1) docker container ls -a gives you the list of containers; check the status of the ones you want to get rid of.

2) docker container rm -f 97af2da41b2b. I'm not a big fan of the force flag, but if a health check is still running, just run the command again or list the containers again.

3) Continue until all the dead containers are cleared.

0 · Sep 27 '18 at 9:47
  1. To remove all dead containers: docker rm -f $(docker ps --all -q -f status=dead)

  2. To remove all exited containers: docker rm -f $(docker ps --all -q -f status=exited)

In my case the -f flag was necessary.

0 · May 14 '19 at 4:50


