Even if you stop using Cache yourself, it can still be used by the framework or by libraries. I have the same problem (interestingly, in Free mode the memory limit is 1024 MB, while in Shared mode it is reduced to 512 MB).
As far as I can see, the memory value that Azure shows in the portal is very close to the System.Diagnostics.Process.GetCurrentProcess().PrivateMemorySize value.
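For reference, a minimal sketch of how that counter (and the working-set counters mentioned further below) can be read; PrivateMemorySize64 is the 64-bit-safe variant of the PrivateMemorySize property, and the output format here is my own:

```csharp
using System;
using System.Diagnostics;

class MemoryCheck
{
    static void Main()
    {
        // Read the memory counters of the current process.
        using (Process p = Process.GetCurrentProcess())
        {
            // Private bytes: memory the process cannot share with others;
            // this is what appears to match the Azure portal figure.
            Console.WriteLine("Private bytes: {0:N0}", p.PrivateMemorySize64);
            Console.WriteLine("Working set:   {0:N0}", p.WorkingSet64);
            Console.WriteLine("Peak WS:       {0:N0}", p.PeakWorkingSet64);
        }
    }
}
```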
I'm currently experimenting with the caching settings to cap the cache's memory usage:
<system.web>
  <caching>
    <cache privateBytesLimit="250000000" privateBytesPollTime="00:00:15"/>
  </caching>
</system.web>
A few days ago I set it to 300 MB, but a few minutes ago the site was suspended again :(, so I dropped it to 250 MB.
Either way, this is a very opaque, strange and "wrong" solution, imho.
UPDATE
This morning it was suspended again. I have temporarily switched to Standard mode with a Small instance (1.7 GB of RAM).
My WorkingSet counter is now around 200 MB (with a PeakWorkingSet of 330 MB). BUT! The GC CollectionCount grows about 8 times faster (roughly 1800 Gen0 collections instead of 250, in less than a day).
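The per-generation counts above can be sampled directly from code with GC.CollectionCount, which is the framework API behind these counters; the loop and output format here are my own:

```csharp
using System;

class GcStats
{
    static void Main()
    {
        // GC.CollectionCount(n) returns how many times generation n
        // has been collected since the process started. Comparing these
        // counts over time shows how aggressively the GC is running.
        for (int gen = 0; gen <= GC.MaxGeneration; gen++)
        {
            Console.WriteLine("Gen{0} collections: {1}", gen, GC.CollectionCount(gen));
        }
    }
}
```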
My current theory is that in Shared mode, web sites run inside a "large" virtual machine with lots of memory, so the garbage collector simply doesn't need to run very often, which leads to garbage living longer and higher memory consumption.
I don't have access to my development machine right now to verify this, but we plan to convert the site into a Cloud Services web role ASAP, with an Extra Small instance (the cost is comparable to the total cost of the web site)...