Note: I've folded this code into larch, so I no longer use it myself. I'm leaving it here in case it is of use to someone else. However, PyPI already has several LRU implementations that might be useful to you.
A pure-Python implementation of an LRU cache. Basically, it is a mapping of keys to values with a maximum size (expressed in number of keys), which drops the key that has been unused for the longest time if the number of keys grows too large.
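The eviction behaviour described above can be sketched with Python's `collections.OrderedDict`, which keeps keys in insertion order and lets you move a key to the end on each access. This is an illustrative sketch of the general technique, not this project's actual implementation:

```python
from collections import OrderedDict


class LRUCache:
    """A minimal LRU cache sketch: a mapping with a maximum number of keys."""

    def __init__(self, max_size):
        self.max_size = max_size
        self._data = OrderedDict()  # least recently used keys come first

    def __getitem__(self, key):
        value = self._data[key]
        self._data.move_to_end(key)  # mark key as most recently used
        return value

    def __setitem__(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.max_size:
            # drop the key that has been unused for the longest time
            self._data.popitem(last=False)

    def __contains__(self, key):
        return key in self._data
```

For example, with `max_size=2`, reading a key refreshes it, so a later insert evicts the other, older key:

```python
cache = LRUCache(max_size=2)
cache["a"] = 1
cache["b"] = 2
cache["a"]      # touching "a" makes "b" the least recently used key
cache["c"] = 3  # evicts "b"
```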
This implementation attempts to be reasonably fast, but in my hubris, I have not benchmarked it against any of the implementations found on PyPI. (Please do that and tell me if I'm an idiot.)