As the load on our Azure website increased (along with the complexity of the work being performed), we noticed that we were running into problems with CPU usage. CPU utilization gradually creeps up over the course of several hours, even while traffic remains fairly stable. Over time, if the Azure statistics are to be believed, we somehow manage to use >60 seconds of CPU per instance (it's not entirely clear how this is measured), and response times start to increase dramatically.
If I restart the web server, CPU usage drops immediately and then starts creeping up again. In the image below, for example, you can see the CPU creeping up, followed by a restart (circled in red), and then the CPU dropping back to normal.

I strongly suspect this is a problem somewhere in my own code, but I'm scratching my head over how to track it down. So far, every attempt to reproduce it in my dev or test environments has been unsuccessful. Almost all of the advice on profiling IIS/C# performance seems to assume either direct access to the machine in question, or at least a Cloud Service instance, rather than an Azure website.
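Since I can't attach a normal profiler, the closest workaround I've come up with is some lightweight in-app instrumentation. This is only a sketch of what I have in mind (the filter name and trace format are my own invention, not something from our codebase): a global MVC action filter that logs wall-clock time and the process-wide CPU time consumed while each action ran, so I can at least see whether the creep correlates with particular actions.

```csharp
// Sketch only: a global MVC action filter that traces wall-clock and
// process CPU time per request. Names (CpuLoggingFilter, the Items keys)
// are hypothetical.
using System;
using System.Diagnostics;
using System.Web.Mvc;

public class CpuLoggingFilter : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // Snapshot wall-clock and process-wide CPU time at the start of the action.
        filterContext.HttpContext.Items["sw"] = Stopwatch.StartNew();
        filterContext.HttpContext.Items["cpu"] = Process.GetCurrentProcess().TotalProcessorTime;
    }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        var sw = (Stopwatch)filterContext.HttpContext.Items["sw"];
        var cpuStart = (TimeSpan)filterContext.HttpContext.Items["cpu"];
        var cpuDelta = Process.GetCurrentProcess().TotalProcessorTime - cpuStart;

        Trace.TraceInformation(
            "{0} took {1} ms wall, ~{2} ms CPU (process-wide)",
            filterContext.ActionDescriptor.ActionName,
            sw.ElapsedMilliseconds,
            cpuDelta.TotalMilliseconds);
    }
}

// Registered globally, e.g. in FilterConfig:
// filters.Add(new CpuLoggingFilter());
```

The obvious weakness is that TotalProcessorTime is process-wide, so overlapping requests blur the per-request numbers, but over several hours the trend per action might still stand out in the traces.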
I know this is a bit of a long shot, but... any suggestions, either as to what it might be, or how to narrow it down?
(We use C# 5.0, .NET 4.5.1, ASP.NET MVC 5.2.0, Web API 2.2, EF 6.1.1, Azure Service Bus, Azure SQL Database, Azure Cache for caching, and async code paths throughout.)
Edit 8/5/14 - I've tried some of the suggestions below, but when the site is really busy, i.e. at ~100% CPU, any attempt to capture a mini-dump or GC dump results in a 500 error with the message "Out of memory". The few times I was able to capture a mini-dump or GC dump, they didn't show anything particularly interesting, at least as far as I could tell. (For example, the most interesting thing in the GC dump was half a dozen or so >100 KB strings - they seem to be related to the bundling subsystem in some way, so I suspect they're just cached ScriptBundle or StyleBundle instances.)
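Given that full dumps keep failing under load, one lighter-weight fallback I'm considering (again just a sketch, with class and field names of my own choosing) is a background timer, started from Application_Start, that traces a handful of process counters every minute, so I can at least see which resource climbs alongside the CPU:

```csharp
// Sketch only: periodic tracing of a few process counters as a cheap
// alternative to full dumps. ProcessStatsLogger is a hypothetical name.
using System;
using System.Diagnostics;
using System.Threading;

public static class ProcessStatsLogger
{
    // Keep a reference so the timer isn't garbage-collected.
    private static Timer _timer;

    public static void Start()
    {
        _timer = new Timer(_ =>
        {
            var p = Process.GetCurrentProcess();
            int workers, iocp;
            ThreadPool.GetAvailableThreads(out workers, out iocp);

            Trace.TraceInformation(
                "CPU={0:F0}s Threads={1} FreeWorkers={2} FreeIOCP={3} Gen2={4} WS={5}MB",
                p.TotalProcessorTime.TotalSeconds,
                p.Threads.Count,
                workers,
                iocp,
                GC.CollectionCount(2),
                p.WorkingSet64 / (1024 * 1024));
        }, null, TimeSpan.Zero, TimeSpan.FromMinutes(1));
    }
}
```

If, say, thread count or Gen 2 collections grow in lockstep with the CPU creep, that would at least point at a leak of threads or long-lived objects rather than a purely algorithmic hot spot.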