The easiest approach is to use a single lock for the entire cache and require every access to a cache entry to acquire that lock first.
In the code example above, at line 31, you would acquire the lock and check whether the result is still missing, in which case you would go on to compute and cache it. Something like this:
lock = threading.Lock()
...
except KeyError:
    with lock:
        if key in self.cache:
            v = self.cache[key]
        else:
            v = self.cache[key] = f(*args, **kwargs), time.time()
The example above stores a separate cache per function in a dictionary, so you would also need to store a separate lock per function.
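One way to keep the per-function cache and its lock together is to bundle them in a small wrapper class. This is a sketch of that idea, not the original code; the class and attribute names are illustrative, and the cache stores `(value, timestamp)` tuples as in the snippet above:

```python
import threading
import time


class Memoized:
    """Memoizing wrapper: each wrapped function owns its cache and its lock."""

    def __init__(self, f):
        self.f = f
        self.cache = {}               # key -> (value, timestamp)
        self.lock = threading.Lock()  # one lock per wrapped function

    def __call__(self, *args, **kwargs):
        key = (args, tuple(sorted(kwargs.items())))
        try:
            return self.cache[key][0]
        except KeyError:
            with self.lock:
                # Re-check under the lock: another thread may have
                # computed and stored the value while we waited.
                if key not in self.cache:
                    self.cache[key] = self.f(*args, **kwargs), time.time()
                return self.cache[key][0]
```

Because every call on the same wrapped function serializes on `self.lock` during a miss, two threads never compute the same key twice, but they also block each other even for unrelated keys.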
If you used this code in a highly contended environment, though, it would probably be unacceptably inefficient, since threads would have to wait on each other even when they are not computing the same value. You could probably improve this by storing a lock per key in the cache. You would then also need a global lock guarding access to the lock store, or there would be a race condition when creating the per-key locks.
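The per-key scheme could be sketched as a decorator like the following. It is an illustration of the idea, not code from the original answer; all names are made up. The global `locks_guard` is held only briefly while creating or looking up a per-key lock, so unrelated computations no longer serialize on one big lock:

```python
import functools
import threading
import time


def memoize(f):
    """Thread-safe memoization with one lock per cache key (a sketch)."""
    cache = {}                       # key -> (value, timestamp)
    key_locks = {}                   # key -> threading.Lock
    locks_guard = threading.Lock()   # guards key_locks: avoids a race
                                     # when two threads create a lock
                                     # for the same key at once

    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        key = (args, tuple(sorted(kwargs.items())))
        try:
            return cache[key][0]
        except KeyError:
            with locks_guard:
                lock = key_locks.setdefault(key, threading.Lock())
            with lock:
                # Re-check: another thread holding this key's lock
                # may have finished the computation already.
                if key not in cache:
                    cache[key] = f(*args, **kwargs), time.time()
                return cache[key][0]

    return wrapper
```

With this layout, two threads computing different keys proceed in parallel, while two threads requesting the same missing key still end up with a single computation.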