Can someone help me understand how the cache variable still exists even after the _cachedf function returns?
This is down to the Python garbage collector's reference counting. The cache dictionary stays alive because the _cachedf function object holds a reference to it (in its closure), and the caller holds a reference to _cachedf. When you call that function again, you are still using the same function object that was originally created, so you still have access to the same cache.
You will not lose the cache until all references to it are gone; you can drop your own reference with the del statement.
For instance:
```
>>> import time
>>> def cached(f):
...     cache = {}
```
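The fragment above is truncated; a minimal sketch of how such a closure-based cache might be completed (the wrapper name and the `x * 2` body of `foo` are illustrative assumptions, not the asker's exact code) is:

```python
import time

def cached(f):
    cache = {}  # lives on in wrapper's closure after cached() returns

    def wrapper(*args):
        if args not in cache:
            cache[args] = f(*args)  # slow path: compute and remember
        return cache[args]          # fast path: reuse the stored result

    return wrapper

def foo(x):
    time.sleep(0.1)  # simulate an expensive computation
    return x * 2

cached_foo = cached(foo)
print(cached_foo(2))  # first call runs foo, prints 4
print(cached_foo(2))  # answered from the cache, prints 4
```

The inner `wrapper` is the only thing that still references `cache`, which is exactly why the dictionary survives for as long as you hold on to the returned function.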
For the record, what you are trying to achieve is called memoization, and there is a more complete memoize decorator available from the decorator pattern page, which does the same thing but is implemented as a class. Your code and that decorator are essentially equivalent, except that the class-based decorator also checks arguments for hashability before caching them.
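As a rough illustration of that idea (not the decorator page's actual code; the class name `Memoize` and the hashability handling here are assumptions), a class-based memoizing decorator stores the cache on the instance and falls back to a plain call when the arguments cannot be dictionary keys:

```python
import functools

class Memoize:
    """Class-based memoizing decorator: the instance itself holds the cache."""

    def __init__(self, func):
        self.func = func
        self.cache = {}
        functools.update_wrapper(self, func)

    def __call__(self, *args):
        try:
            hash(args)  # unhashable contents (e.g. lists) make this fail
        except TypeError:
            return self.func(*args)  # uncacheable: just call through
        if args not in self.cache:
            self.cache[args] = self.func(*args)
        return self.cache[args]

@Memoize
def slow_add(a, b):
    return a + b

print(slow_add(1, 2))  # computed, then cached under the key (1, 2)
print(slow_add(1, 2))  # served from the cache
```

The `try: hash(args)` probe is what "checks hashability before caching" amounts to in practice, since a tuple is only hashable when everything inside it is.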
Edit (2017-02-02): @SiminJie comments that cached(foo)(2) always incurs the delay.
This is because cached(foo) returns a new function with a fresh cache every time it is called. So cached(foo)(2) creates a brand-new (empty) cache and then invokes the resulting function immediately.
Since the cache is empty, the lookup finds nothing and the underlying function is re-run. Instead, do cached_foo = cached(foo) once, and then call cached_foo(2) several times; only the first call will incur the delay. It also works as expected when used as a decorator:
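To make the pitfall concrete, here is a small sketch (this `foo` is a stand-in for any slow function; the `calls` list is added purely to count how often the real function runs):

```python
def cached(f):
    cache = {}
    def wrapper(*args):
        if args not in cache:
            cache[args] = f(*args)
        return cache[args]
    return wrapper

calls = []

def foo(x):
    calls.append(x)  # record each real execution
    return x * 10

# Wrong: each cached(foo) builds a brand-new, empty cache,
# so the underlying foo runs every single time.
cached(foo)(2)
cached(foo)(2)
print(len(calls))  # 2 -- no caching happened

# Right: build the cached wrapper once, then reuse it.
cached_foo = cached(foo)
cached_foo(2)
cached_foo(2)
print(len(calls))  # 3 -- only one additional real call
```

Applying `cached` as a decorator is just the "right" pattern done automatically: the wrapping happens exactly once, at definition time.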
```
@cached
def my_long_function(arg1, arg2):
    return long_operation(arg1, arg2)

my_long_function(1, 2)
```
If you are new to decorators, check out this answer to understand what this code means.