I've recently been experimenting with Kubernetes, and I wanted to test failover between containers by running a replication controller whose containers crash as soon as they are used (which triggers a restart).
I adapted the bashttpd project for this:
https://github.com/Chronojam/bashttpd
(I changed it so that it serves the container's hostname and then exits.)
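For reference, the replication controller looks roughly like this (image name, labels, and port are placeholders for what I actually use):

apiVersion: v1
kind: ReplicationController
metadata:
  name: chronojam-serve-once
spec:
  replicas: 6
  selector:
    app: serve-once
  template:
    metadata:
      labels:
        app: serve-once
    spec:
      containers:
      - name: serve-once
        # placeholder image; it runs the adapted bashttpd, which serves
        # the pod's hostname once and then exits
        image: chronojam/bashttpd-serve-once
        ports:
        - containerPort: 8080   # assumed port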
That setup works fine, except that the restarts are too slow for what I'm trying to do: the first couple of requests succeed, then the service stops responding for a while, and it only starts working again once the containers have restarted. (Ideally I'd like to see no interruption at all when hitting the service.)
I think (but I'm not sure) that the back-off delay mentioned here is to blame:
https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/pod-states.md#restartpolicy
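For what it's worth, I haven't set restartPolicy explicitly, so the pod template falls back to the default, which for a replication controller is Always; in the template it would sit here:

    spec:
      # Always is the default (and the only value a replication
      # controller accepts); the back-off on repeated crashes described
      # in the linked doc applies to these restarts
      restartPolicy: Always
      containers:
      - name: serve-once
        image: chronojam/bashttpd-serve-once   # placeholder image name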
Some output:
NAME                         READY   STATUS    RESTARTS   AGE
chronojam-blog-a23ak         1/1     Running   0          6h
chronojam-blog-abhh7         1/1     Running   0          6h
chronojam-serve-once-1cwmb   1/1     Running   7          4h
chronojam-serve-once-46jck   1/1     Running   7          4h
chronojam-serve-once-j8uyc   1/1     Running   3          4h
chronojam-serve-once-r8pi4   1/1     Running   7          4h
chronojam-serve-once-xhbkd   1/1     Running   4          4h
chronojam-serve-once-yb9hc   1/1     Running   7          4h
chronojam-tactics-is1go      1/1     Running   0          5h
chronojam-tactics-tqm8c      1/1     Running   0          5h
<h3> chronojam-serve-once-j8uyc </h3>
<h3> chronojam-serve-once-r8pi4 </h3>
<h3> chronojam-serve-once-yb9hc </h3>
<h3> chronojam-serve-once-46jck </h3>
You'll also notice that even though there should be 2 more healthy pods in there, the responses stop coming back after the 4th.
So my question is twofold:
1) Can I lower that back-off delay?
2) Why is my service not sending my requests to the healthy containers?
Remarks:
I think that perhaps the web server itself doesn't come up quickly enough to accept requests, so Kubernetes considers those pods healthy and keeps sending requests there (which then return nothing, because the process hasn't started yet?).
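In case it's relevant: I haven't defined any readiness probe on these containers, so I assume the pods count as ready as soon as the container starts. A sketch of the kind of probe I imagine would be needed (path and port are guesses, not something the adapted bashttpd necessarily serves):

        # hypothetical readiness probe on the serve-once container;
        # without one, the container is treated as ready as soon as it
        # is running
        readinessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 1
          periodSeconds: 2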