Pods stuck in Terminating status

I tried to delete a ReplicationController with 12 pods and saw that some of the pods were stuck in the Terminating state.

My Kubernetes cluster consists of one control plane node and three worker nodes installed on Ubuntu virtual machines.

What could be the cause of this problem?

 NAME        READY   STATUS        RESTARTS   AGE
 pod-186o2   1/1     Terminating   0          2h
 pod-4b6qc   1/1     Terminating   0          2h
 pod-8xl86   1/1     Terminating   0          1h
 pod-d6htc   1/1     Terminating   0          1h
 pod-vlzov   1/1     Terminating   0          1h
+152
kubernetes rook-storage
Feb 17 '16 at 10:18
10 answers

You can use the following command to force-delete the pod:

 kubectl delete pod <PODNAME> --grace-period=0 --force --namespace <NAMESPACE> 
+312
Jul 04 '16 at 7:13

Force-delete the pod:

 kubectl delete pod --grace-period=0 --force --namespace <NAMESPACE> <PODNAME> 

The --force flag is required.

+47
Jan 19 '17 at 22:33

Remove the finalizers block from the resource's YAML (pod, deployment, ds, etc.):

 "finalizers": [ "foregroundDeletion" ] 
+16
Feb 27 '18 at 10:31

The practical answer is that you can always delete the stuck pod by running:

 kubectl delete pod NAME --grace-period=0 

The historical answer: in version 1.1 there was an issue where pods were sometimes left stranded in the Terminating state if their node was uncleanly removed from the cluster.
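If you suspect that is what happened, a rough sketch of how to confirm and clean up (not part of the original answer; <NODENAME> is a placeholder):

 kubectl get nodes                 # look for nodes stuck in NotReady status
 kubectl delete node <NODENAME>    # deleting the dead node object lets the control plane garbage-collect its pods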

+11
Feb 18 '16 at 4:22

I found this command simpler:

 for p in $(kubectl get pods | grep Terminating | awk '{print $1}'); do kubectl delete pod $p --grace-period=0 --force;done 

It will delete all pods in the Terminating state in the default namespace.

+3
Mar 09 '19 at 17:53

In my case, the --force option did not quite work. I could still see the pod! It was stuck in the Terminating/Unknown state. So after running

 kubectl delete pods <pod> -n redis --grace-period=0 --force 

I ran

 kubectl patch pod <pod> -p '{"metadata":{"finalizers":null}}' 
+3
May 09 '19 at 17:12

If --grace-period=0 alone does not work, you can do:

 kubectl delete pods <pod> --grace-period=0 --force 
+1
Mar 29 '18 at 21:06

I recently stumbled upon this when deleting the rook-ceph namespace - it was stuck in the Terminating state.

The only thing that helped was to remove the kubernetes finalizer by calling the k8s API directly with curl, as suggested here.

  • kubectl get namespace rook-ceph -o json > tmp.json
  • remove the kubernetes finalizer in tmp.json (leave an empty array: "finalizers": [] )
  • run kubectl proxy in another terminal for authentication and issue the following curl request against the returned port
  • curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json 127.0.0.1:8001/k8s/clusters/c-mzplp/api/v1/namespaces/rook-ceph/finalize
  • namespace is gone

A detailed teardown of rook ceph is described here.
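As a variation on the same idea, the finalize subresource can be called through kubectl instead of curl, so you do not need kubectl proxy at all. A sketch, assuming jq is installed and the cluster is reached via a plain API path (no Rancher-style /k8s/clusters/... prefix):

 kubectl get namespace rook-ceph -o json \
   | jq '.spec.finalizers = [] | .metadata.finalizers = []' \
   | kubectl replace --raw /api/v1/namespaces/rook-ceph/finalize -f -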

+1
Sep 19 '18 at 19:26

I came across this recently when freeing up resources in my cluster. Here is the command to delete them all:

 kubectl get pods --all-namespaces | grep Terminating | while read line; do
   pod_name=$(echo $line | awk '{print $2}')
   name_space=$(echo $line | awk '{print $1}')
   kubectl delete pods $pod_name -n $name_space --grace-period=0 --force
 done

Hope this helps someone who reads this.

+1
Jan 23 '19 at 0:25

The initial question was "What could be the cause of this problem?", and that is discussed at https://github.com/kubernetes/kubernetes/issues/51835 and https://github.com/kubernetes/kubernetes/issues/65569; see also https://www.bountysource.com/issues/33241128-unable-to-remove-a-stopped-container-device-or-resource-busy

This is caused by a Docker mount leaking into another mount namespace.

You can log in to the node (the host) to investigate.

 minikube ssh
 docker container ps | grep <id>
 docker container stop <id>
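To see which processes on the node are still holding the container's mount (the leak described above), a rough diagnostic sketch, assuming you already have the container id from docker container ps:

 grep <id> /proc/*/mountinfo    # the PID in each matching path is a process still referencing the mount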
0
Jul 14 '19 at 0:26


