Our current caching implementation caches large amounts of data in report objects (in some cases, 50 MB).
We've moved that cache to the file system and use ProtoBuf for serialization and de-serialization. This works well; however, we are now experimenting with a Redis cache. Below is an example of how much longer it takes with Redis than with the file system. (Note: in the example below, using protobuf instead of JsonConvert brings the set time down to 15 seconds and the get time down to 4 seconds when storing the byte array directly; see the protobuf sketch after the example.)
// Extremely SLOW – caching using Redis (JsonConvert to serialize/de-serialize)
IDatabase cache = Connection.GetDatabase();
// 23 seconds!
cache.StringSet("myKey", JsonConvert.SerializeObject(bigObject));
// 5 seconds!
BigObject redisResult = JsonConvert.DeserializeObject<BigObject>(cache.StringGet("myKey"));

// FAST - caching using file system (protobuf to serialize/de-serialize)
IDataAccessCache fileCache = new DataAccessFileCache();
// .5 seconds
fileCache.SetCache("myKey", bigObject);
// .5 seconds
BigObject fileResult = fileCache.GetCache<BigObject>("myKey");
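For reference, here is a minimal sketch of the protobuf-to-byte-array variant mentioned in the note above. It assumes the protobuf-net library (ProtoBuf.Serializer) alongside StackExchange.Redis, that BigObject is decorated with [ProtoContract]/[ProtoMember], and that Connection and bigObject are the same objects used in the example.

// Sketch: protobuf-net serialization to a byte[] stored directly in Redis.
// Assumes: using System.IO; using ProtoBuf; using StackExchange.Redis;
IDatabase cache = Connection.GetDatabase();

// Set: serialize to a byte[] and store the raw bytes (RedisValue accepts byte[]).
byte[] payload;
using (var ms = new MemoryStream())
{
    Serializer.Serialize(ms, bigObject);
    payload = ms.ToArray();
}
cache.StringSet("myKey", payload);

// Get: read the raw bytes back and deserialize with protobuf-net.
byte[] stored = cache.StringGet("myKey");
BigObject redisResult;
using (var ms = new MemoryStream(stored))
{
    redisResult = Serializer.Deserialize<BigObject>(ms);
}

Even with this approach, the round trip presumably remains dominated by moving roughly 50 MB over the network and allocating the byte array, which would be consistent with the 15-second set / 4-second get figures above.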
Thanks in advance for your help.
P.S. I could not find the answer in similar questions such as Large Object Caching - LocalCache Performance
or
Caching large objects, reducing the impact of search time