After many years of running Node / Rails applications on bare metal, I've been used to running as many applications as I wanted on a single machine (say, a 2GB DigitalOcean droplet can easily serve 10 applications without trouble, given proper optimization and/or fairly low traffic).
The thing is, with Kubernetes the game seems completely different. I set up a "getting started" cluster with two standard VMs (3.75GB each).
A deployment is configured with the following resource constraints:

    resources:
      requests:
        cpu: "64m"
        memory: "128Mi"
      limits:
        cpu: "128m"
        memory: "256Mi"
Then the following appears in the node description:

    Namespace  Name  CPU Requests  CPU Limits  Memory Requests  Memory Limits
    ---------  ----  ------------  ----------  ---------------  -------------
    default    api   64m (6%)      128m (12%)  128Mi (3%)       256Mi (6%)
What does this 6% mean?
I tried lowering the CPU limit, e.g. to 20m ... and the application won't start (obviously, not enough resources). The docs say this value is a fraction of a CPU. So would that be 20% of the 3.75GB machine? Then where does this 6% come from?
Then I increased the node pool size to n1-standard-2, and the same pod now effectively accounts for 3% of the node. That sounds logical, but what does the percentage actually refer to?
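To make my confusion concrete, here is my working assumption (purely my own back-of-the-envelope math, not something I found in the docs): the percentage seems to be the request or limit divided by the node's total CPU, with 1 vCPU = 1000 millicores, truncated. The numbers would then line up like this:

```python
# Assumption (mine, not from the docs): the percentages shown next to
# requests/limits are millicores divided by the node's CPU, truncated.
# 1 vCPU = 1000 millicores ("m"); an n1-standard-1 has 1 vCPU,
# an n1-standard-2 has 2 vCPU.

def pct(millicores: int, node_millicores: int) -> int:
    """Percentage of the node's CPU, truncated like kubectl seems to do."""
    return int(millicores / node_millicores * 100)

print(pct(64, 1000))   # CPU request on a 1-vCPU node: 6.4% -> 6
print(pct(128, 1000))  # CPU limit on a 1-vCPU node: 12.8% -> 12
print(pct(64, 2000))   # CPU request on a 2-vCPU node: 3.2% -> 3
```

This matches the 6% / 12% I saw on the smaller node and the 3% after resizing, but I'd like confirmation of whether it's really computed against the node's total or its allocatable CPU.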
I also wonder which metrics I should be looking at for this.
When the application starts it needs a fair amount of memory, but afterwards it uses only a small fraction of that 6%. So I feel like I'm either not understanding something or misusing all of this.
Thanks for any expert advice / tips to help me understand this better. Best