Python Forum

Full Version: Speeding up code using cache
I'm good with Python and use functions to process large datasets from files, then return the results. I've realized these input files and other parameters don't change often, so I could store the returned results for reuse.

I've used @functools.lru_cache before, but I'm facing a challenge now. Even though file paths (passed as input to my functions) remain the same, sometimes I need to ignore the cached values if the file has been updated since the last call. I don't want to complicate my functions by checking for this.
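To illustrate the problem being described (the file format and function here are invented for the example): lru_cache keys only on the call arguments, so the same path keeps returning the old cached result even after the file's contents change.

```python
import functools
import os
import tempfile

@functools.lru_cache(maxsize=None)
def load_total(path):
    # Example "expensive" processing: sum one integer per line.
    with open(path) as f:
        return sum(int(line) for line in f)

with tempfile.TemporaryDirectory() as d:
    p = os.path.join(d, "data.txt")
    with open(p, "w") as f:
        f.write("1\n2\n3\n")
    first = load_total(p)   # computed: 6, and cached under key (p,)

    with open(p, "w") as f:
        f.write("10\n20\n")
    second = load_total(p)  # still 6 -- the cache only sees the unchanged path
```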

I'm looking for a caching solution that can handle this. If anyone knows an existing solution, I'd love to hear it, so I don't have to create one from scratch.
I think creating your own is the way to go here. lru_cache has no method to invalidate individual entries (only cache_clear(), which empties the whole cache). Even if it did, you'd still have to stat the file and implement the logic to decide when to invalidate them.