cached_requests is a Python library that provides a simple and effective caching layer for your web requests. It's built on top of the popular requests library and is designed to be a drop-in replacement for requests.Session.
- Persistent Caching: Save responses to a configurable backend to speed up repeated requests.
- Multiple Cache Backends: Out-of-the-box support for filesystem caching, with easy extension to other backends.
- Automatic Cache Eviction: Set a time-to-live (TTL) for your cached responses.
- Configurable Eviction Policies: Choose from LRU (Least Recently Used), LFU (Least Frequently Used), and FIFO (First-In, First-Out) eviction policies.
- Easy to Use: A simple, intuitive API that gets out of your way.
- Offline Mode: Force the session to only use the cache, without making any network requests.
Install cached_requests using pip:
```shell
pip install cached_requests
```

Here's a simple example of how to use cached_requests:

```python
from cached_requests import CacheSession, CacheConfig
from cached_requests.backend import FileCacheBackend
from datetime import timedelta

# Create a new session with a cache config
config = CacheConfig(
    cache_backend=FileCacheBackend(cache_dir='.cache'),
    refresh_after=timedelta(hours=1)
)
requests = CacheSession(config=config)

# Make a request
response = requests.get('https://api.github.com')

# The response is now cached. Subsequent requests to the same URL
# will be served from the cache.
cached_response = requests.get('https://api.github.com')

print(response.json())
```

You can configure the behavior of cached_requests by passing a CacheConfig object to the CacheSession constructor.
Here are the available options for CacheConfig:
- cache_backend: An instance of a cache backend. Defaults to None.
- hash_request_fn: A function to hash the request object. Defaults to a function that hashes the method, URL, params, data, and headers.
- refresh_after: A timedelta object that specifies how long a cached response is valid.
- refresh_on_error: If True, the cache will be refreshed if a cached response resulted in an error.
- force_refresh: If True, the cache will be ignored and all requests will be made to the network.
- offline_only: If True, the session will only use the cache and will raise a ConnectionError if a request is not in the cache.
- max_cache_files_count: The maximum number of files to store in the cache. If the limit is exceeded, files will be evicted based on the cache_eviction_policie.
- cache_eviction_policie: The cache eviction policy to use. Can be CacheEvictionPolicie.LRU, CacheEvictionPolicie.LFU, or CacheEvictionPolicie.FIFO.
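As a fuller illustration, several of these options can be combined in one config. The sketch below uses only the parameters documented above; the assumption that CacheEvictionPolicie is importable from the top-level package is mine and should be checked against the actual package layout:

```python
from datetime import timedelta

from cached_requests import CacheConfig, CacheSession, CacheEvictionPolicie
from cached_requests.backend import FileCacheBackend

# Cache to disk for 30 minutes, retry on previously failed responses,
# keep at most 500 cached files, and evict least-recently-used entries first.
config = CacheConfig(
    cache_backend=FileCacheBackend(cache_dir='.cache'),
    refresh_after=timedelta(minutes=30),
    refresh_on_error=True,
    max_cache_files_count=500,
    cache_eviction_policie=CacheEvictionPolicie.LRU,
)
session = CacheSession(config=config)
```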
cached_requests comes with a FileCacheBackend and a SQLiteCache. You can also create your own cache backend by inheriting from CacheBackend.
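The CacheBackend interface itself isn't shown here, so the following is only a shape sketch: a hypothetical in-memory backend with get/set/delete methods keyed by a request hash. The method names and signatures are assumptions, not the actual CacheBackend contract — check the base class before subclassing it for real:

```python
class InMemoryCacheBackend:
    """Illustrative only: stores serialized responses in a dict, keyed by
    a request hash. A real backend would inherit from
    cached_requests.backend.CacheBackend and implement its actual
    abstract methods."""

    def __init__(self):
        self._store = {}

    def get(self, key):
        # Return the cached payload, or None on a cache miss.
        return self._store.get(key)

    def set(self, key, value):
        self._store[key] = value

    def delete(self, key):
        self._store.pop(key, None)


backend = InMemoryCacheBackend()
backend.set("abc123", b"cached response body")
print(backend.get("abc123"))  # b'cached response body'
print(backend.get("missing"))  # None
```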
This is the default cache backend. It stores responses as files on the local filesystem.
```python
from cached_requests.backend import FileCacheBackend

# By default, this will store files in a `.cache` directory
# in your current working directory
fs_cache = FileCacheBackend()
```

This backend stores responses in a SQLite database. This can be more efficient than the FileCacheBackend if you are storing a large number of responses.
```python
from cached_requests.backend import SQLiteCache

# By default, this will create a `.cache.sqlite` file
# in your current working directory
sqlite_cache = SQLiteCache()
```

You can also use a context manager to temporarily change the configuration:
```python
with requests.configure(force_refresh=True):
    # This request will bypass the cache
    response = requests.get('https://api.github.com')
```

cached_requests supports the following cache eviction policies:
- LRU (Least Recently Used): Evicts the least recently used items first.
- LFU (Least Frequently Used): Evicts the least frequently used items first.
- FIFO (First-In, First-Out): Evicts the oldest items first.
You can set the eviction policy using the cache_eviction_policie parameter in CacheConfig.
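The policies themselves live inside the library, but the difference between them is easy to see with a toy model. The sketch below is not cached_requests code; it evicts from a size-capped OrderedDict under LRU and FIFO rules:

```python
from collections import OrderedDict


class TinyCache:
    """Toy cache capped at `max_items`; `policy` is 'lru' or 'fifo'."""

    def __init__(self, max_items, policy="lru"):
        self.max_items = max_items
        self.policy = policy
        self._data = OrderedDict()

    def get(self, key):
        value = self._data.get(key)
        if value is not None and self.policy == "lru":
            # A hit refreshes recency under LRU; FIFO ignores access order.
            self._data.move_to_end(key)
        return value

    def set(self, key, value):
        self._data[key] = value
        if len(self._data) > self.max_items:
            # Evict the oldest entry: least recently *used* under LRU,
            # least recently *inserted* under FIFO.
            self._data.popitem(last=False)


lru = TinyCache(2, policy="lru")
lru.set("a", 1); lru.set("b", 2)
lru.get("a")            # touching "a" makes "b" the LRU entry
lru.set("c", 3)         # evicts "b" under LRU
print(list(lru._data))  # ['a', 'c']

fifo = TinyCache(2, policy="fifo")
fifo.set("a", 1); fifo.set("b", 2)
fifo.get("a")           # access does not matter under FIFO
fifo.set("c", 3)        # evicts "a", the first inserted
print(list(fifo._data)) # ['b', 'c']
```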
Contributions are welcome! If you have a feature request or bug report, please open an issue on GitHub; pull requests are also welcome.
cached_requests is licensed under the MIT License. See the LICENSE file for more details.