To apply the new configuration, a new pod has to be created (the old one will be deleted).
If your pod was created automatically by a Deployment or DaemonSet, this happens automatically every time you update the YAML of the resource. It will not happen if your resource has spec.updateStrategy.type=OnDelete.
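As a minimal illustration (deployment.yaml and my-deployment are placeholder names), applying the updated manifest triggers the rollout, and you can watch its progress:

kubectl apply -f deployment.yaml
kubectl rollout status deployment/my-deployment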
If the problem was caused by an error inside the Docker image that you have since fixed, you must update the pods manually; you can use a rolling update for this purpose. If the new image has the same tag, you can simply delete the broken pod (see below).
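A sketch of such a rolling update (my-deployment, my-container and the image reference are placeholders): point the Deployment at the fixed image, or, if the tag did not change, restart the rollout (kubectl rollout restart is available in kubectl 1.15+):

kubectl set image deployment/my-deployment my-container=registry.example.com/my-image:v2
kubectl rollout restart deployment/my-deployment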
In case of node failure, the pod will be recreated on another node after a short time; the old pod will be deleted once the broken node recovers. It is worth noting that this will not happen if your pod was created by a DaemonSet or StatefulSet.
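To check where the pod was rescheduled and the state of the failed node (illustrative commands, no specific names assumed):

kubectl get pods -o wide
kubectl get nodes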
In any case, you can manually delete the broken pod:
kubectl delete pod <pod_name>
Or all pods in the CrashLoopBackOff state:
kubectl delete pod $(kubectl get pods | awk '$3 == "CrashLoopBackOff" {print $1}')
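If you want to preview which pods this one-liner would delete, you can run the inner command on its own first:

kubectl get pods | awk '$3 == "CrashLoopBackOff" {print $1}'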
If the node is completely dead, you can add the options --grace-period=0 --force to remove only the record of this pod from the cluster API, without waiting for confirmation from the node.
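For example (the pod name is a placeholder):

kubectl delete pod <pod_name> --grace-period=0 --force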
kvaps