Depending on how many servers you have and on your current cache architecture, it may be worth evaluating a cache at the server level (or in-process). You can use this as a backup cache, and it is especially useful when reaching the main store (the database) is either resource-intensive or slow.
When I did this, I used the cache-aside pattern for the primary cache and a read-through design for the secondary, in which a lock around the secondary also ensures the database is not hammered with the same query. In this architecture, a primary cache miss results in at most one database hit per requested entity per server (or process).
So, the main workflow:

1) Try to get the value from the primary/shared cache
* If found, return it
* If not, continue
2) Check the in-process cache for the value
* If found, return it (optionally seeding the primary cache)
* If not, continue
3) Acquire a lock on the cache key (and double-check the in-process cache, in case it was populated by another thread)
4) Fetch the object from primary persistence (the database)
5) Cache it in-process and return it
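The steps above can be sketched roughly like this (Python for illustration; the original answer is about a .NET stack, and the `primary` client and `load_from_db` callable are assumptions standing in for a shared cache and a database lookup):

```python
import threading


class TwoLevelCache:
    """Illustrative sketch of the workflow above: a shared primary cache
    backed by an in-process secondary cache, with a per-key lock so each
    process makes at most one database call per missing key."""

    def __init__(self, primary, load_from_db):
        self.primary = primary            # shared cache client (hypothetical get/set API)
        self.load_from_db = load_from_db  # the slow/expensive lookup
        self.local = {}                   # in-process (secondary) cache
        self.locks = {}                   # one lock per cache key
        self.locks_guard = threading.Lock()

    def _lock_for(self, key):
        # Create-or-fetch the per-key lock atomically.
        with self.locks_guard:
            return self.locks.setdefault(key, threading.Lock())

    def get(self, key):
        # 1) Try the primary/shared cache.
        value = self.primary.get(key)
        if value is not None:
            return value
        # 2) Check the in-process cache, optionally re-seeding the primary.
        value = self.local.get(key)
        if value is not None:
            self.primary.set(key, value)
            return value
        # 3) Lock on the cache key, then double-check the in-process cache
        #    in case another thread populated it while we waited.
        with self._lock_for(key):
            value = self.local.get(key)
            if value is not None:
                return value
            # 4) Fetch from primary persistence (the database).
            value = self.load_from_db(key)
            # 5) Cache in-process and return; a later primary miss will
            #    re-seed the primary cache via step 2.
            self.local[key] = value
            return value
```

A lock per key (rather than one global lock) means requests for different entities never block each other; only concurrent misses on the same key serialize.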
I did this with injectable wrappers: my cache layers implement the same IRepository interface as the underlying repository, and StructureMap injects the correct cache stack. This keeps the cache behavior flexible, focused, and easy to maintain, even though the actual behavior is fairly complex.
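That wrapper (decorator) approach might look like this sketch, again in Python for illustration; the `Repository` class and its method are hypothetical stand-ins for the IRepository interface, and the final composition line is what a DI container such as StructureMap would do for you:

```python
from abc import ABC, abstractmethod


class Repository(ABC):
    """Stand-in for the IRepository interface mentioned above."""

    @abstractmethod
    def get_by_id(self, entity_id):
        ...


class DatabaseRepository(Repository):
    """Innermost layer: a real implementation would query the database."""

    def get_by_id(self, entity_id):
        return {"id": entity_id}


class CachingRepository(Repository):
    """A cache layer implementing the same interface it wraps, so layers
    can be stacked and swapped without callers noticing."""

    def __init__(self, inner: Repository):
        self.inner = inner
        self.cache = {}

    def get_by_id(self, entity_id):
        if entity_id not in self.cache:
            self.cache[entity_id] = self.inner.get_by_id(entity_id)
        return self.cache[entity_id]


# Composition a DI container would perform behind the scenes:
repo: Repository = CachingRepository(DatabaseRepository())
```

Because every layer exposes the same interface, you can stack an in-process cache over a shared-cache layer over the database repository, and callers only ever depend on `Repository`.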