I have a t2.micro EC2 instance running at about 2% CPU. I know from other posts that the CPU usage shown in top differs from the CPU reported in CloudWatch, and that the CloudWatch value is the one to trust.
However, I see very different values for memory usage between top, CloudWatch, and NewRelic.
The instance has 1 GB of RAM, and top shows ~300 MB of Apache processes plus ~100 MB of other processes, yet the total memory usage reported by top is 800 MB. Am I right to assume the remaining ~400 MB is OS / system overhead?
However, CloudWatch reports 700 MB of usage, and NewRelic reports 200 MB of usage (even though NewRelic itself reports 300 MB of Apache processes elsewhere, so I'm inclined to ignore its number).
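My working guess (just an assumption on my part, not something I've confirmed in either tool's documentation) is that the tools disagree mainly over whether Linux buffers/cache count as "used" memory. Here's a rough Python sketch I put together to compare the two accountings straight from /proc/meminfo:

```python
#!/usr/bin/env python3
# Compare two common ways of counting "used" memory from /proc/meminfo.
# Assumption: top-style "used" includes buffers/cache, while a monitoring
# agent may subtract them; neither formula is taken from CloudWatch or NewRelic.

def read_meminfo():
    """Parse /proc/meminfo into a dict of values in kB."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key.strip()] = int(value.strip().split()[0])
    return info

m = read_meminfo()
total = m["MemTotal"]

# "Used" counting buffers/cache as consumed (total - free).
used_incl_cache = total - m["MemFree"]

# "Used" treating buffers/cache as reclaimable, i.e. not really consumed.
used_excl_cache = used_incl_cache - m.get("Buffers", 0) - m.get("Cached", 0)

print(f"Total:            {total / 1024:.0f} MB")
print(f"Used incl. cache: {used_incl_cache / 1024:.0f} MB ({100 * used_incl_cache / total:.0f}%)")
print(f"Used excl. cache: {used_excl_cache / 1024:.0f} MB ({100 * used_excl_cache / total:.0f}%)")
```

If that guess is right, it would explain why top (800 MB) and CloudWatch (700 MB) are close to each other while NewRelic (200 MB) is much lower.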
The CloudWatch memory usage metric often exceeds 80%, and I would like to know what the actual value is, so I know when to scale if necessary, or how to reduce memory usage.
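For the scaling decision specifically, the number I'm tentatively treating as the "actual" usage is based on MemAvailable (exposed in /proc/meminfo on kernels 3.14 and later); a minimal sketch, where the 80% threshold is just my own hypothetical trigger, not a CloudWatch alarm:

```python
#!/usr/bin/env python3
# Compute memory usage from MemAvailable and compare it against a
# hypothetical 80% threshold -- my own trigger, not a CloudWatch formula.

ALERT_THRESHOLD = 0.80  # hypothetical scale-out / investigate threshold

meminfo = {}
with open("/proc/meminfo") as f:
    for line in f:
        key, value = line.split(":", 1)
        meminfo[key.strip()] = int(value.strip().split()[0])  # kB

total = meminfo["MemTotal"]
available = meminfo["MemAvailable"]  # estimate of memory usable without swapping
usage = 1.0 - available / total

print(f"Memory usage: {usage:.0%} "
      f"({(total - available) / 1024:.0f} MB of {total / 1024:.0f} MB)")
if usage > ALERT_THRESHOLD:
    print("Above threshold -- time to scale or trim Apache's memory footprint")
```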
Here's a recent memory profile; it looks like something is using more memory over time (the big drops are Apache restarts, or maybe GC?).
Screenshot of memory usage in the last 12 days