Cache
grelmicro.cache
Cache.
CacheBackend
Bases: Protocol
Protocol for cache storage backends.
All methods are async because backends typically perform I/O.
Backends are pure key-value stores: TTL, eviction, and statistics
are managed by TTLCache.
get
async
get(*, key: str) -> bytes | None
Get raw bytes by key.
Returns None if the key is missing or expired.
set
async
set(*, key: str, value: bytes, ttl: float) -> None
Store raw bytes with a TTL in seconds.
delete
async
delete(*, key: str) -> None
Delete a key (no-op if absent).
clear
async
clear() -> None
Remove all entries managed by this backend.
CacheError
Bases: GrelmicroError
Base cache error.
CacheInfo
dataclass
CacheInfo(
hits: int,
misses: int,
maxsize: int,
currsize: int,
evictions: int,
)
Cache statistics snapshot.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| `hits` | Number of cache hits. **TYPE:** `int` |
| `misses` | Number of cache misses. **TYPE:** `int` |
| `maxsize` | Maximum number of entries (`0` means unbounded). **TYPE:** `int` |
| `currsize` | Current number of tracked entries. **TYPE:** `int` |
| `evictions` | Number of entries evicted to make room. **TYPE:** `int` |
hits
instance-attribute
hits: int
misses
instance-attribute
misses: int
maxsize
instance-attribute
maxsize: int
currsize
instance-attribute
currsize: int
evictions
instance-attribute
evictions: int
CacheSerializer
Bases: Protocol[T]
Protocol for cache serialization strategies.
Any object implementing dumps and loads can be used
as a TTLCache serializer.
dumps
dumps(value: T) -> bytes
Serialize a value to bytes.
loads
loads(data: bytes) -> T
Deserialize bytes to a value.
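Because the protocol is just `dumps`/`loads`, a custom serializer needs no base class. A minimal sketch for caching plain strings (the class name `Utf8StrSerializer` is hypothetical):

```python
class Utf8StrSerializer:
    """Sketch of a CacheSerializer[str]: encodes strings as UTF-8 bytes."""

    def dumps(self, value: str) -> bytes:
        return value.encode("utf-8")

    def loads(self, data: bytes) -> str:
        return data.decode("utf-8")
```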
CacheSettingsValidationError
CacheSettingsValidationError(error: ValidationError | str)
Raised when cache settings fail validation.
JsonSerializer
Serialize values as JSON bytes.
Uses orjson when available (roughly 7x faster than stdlib),
otherwise falls back to the standard library json module.
Suitable for dicts, lists, and other JSON-native types.
datetime objects are serialized to ISO 8601 strings but
deserialized back as strings (not datetime).
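The datetime caveat can be seen with the standard library `json` module alone, which is one of the two engines JsonSerializer may use:

```python
import json
from datetime import datetime, timezone

stamp = datetime(2024, 1, 1, tzinfo=timezone.utc)

# On the way in, the datetime becomes an ISO 8601 string.
data = json.dumps({"at": stamp.isoformat()}).encode()

# On the way out, it stays a string -- JSON has no datetime type.
restored = json.loads(data)
```

If you need real `datetime` objects back, use PickleSerializer or PydanticSerializer (Pydantic validates ISO 8601 strings back into `datetime` fields).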
PickleSerializer
PickleSerializer(*, protocol: int = HIGHEST_PROTOCOL)
Bases: Generic[T]
Serialize values using Python pickle.
Supports any picklable Python object. Fast and transparent, but produces opaque binary data.
Warning
Pickle can execute arbitrary code during deserialization. Only use with trusted data sources.
| PARAMETER | DESCRIPTION |
|---|---|
| `protocol` | Pickle protocol version. Defaults to the highest available protocol. **TYPE:** `int` |
Initialize the pickle serializer.
dumps
dumps(value: T) -> bytes
Serialize a value to bytes.
loads
loads(data: bytes) -> T
Deserialize bytes to a value.
PydanticSerializer
PydanticSerializer(model: type[T])
Bases: Generic[T]
Serialize values using Pydantic's TypeAdapter.
Uses Pydantic's Rust-based serializer for fast, type-safe
roundtrips. Works with BaseModel, dataclass,
TypedDict, and any type supported by TypeAdapter.
This is the fastest serialization option (benchmarked at roughly 2x faster than pickle for Pydantic models).
| PARAMETER | DESCRIPTION |
|---|---|
| `model` | The type to serialize/deserialize. Can be any type supported by `TypeAdapter`. **TYPE:** `type[T]` |
Initialize the Pydantic serializer.
dumps
dumps(value: T) -> bytes
Serialize a value to JSON bytes via TypeAdapter.
loads
loads(data: bytes) -> T
Deserialize JSON bytes to a typed value via TypeAdapter.
TTLCache
TTLCache(
maxsize: int = 0,
ttl: float = 60,
*,
backend: CacheBackend | None = None,
serializer: CacheSerializer[T] | None = None,
)
Bases: Generic[T]
Cache with per-entry TTL and optional LRU eviction.
Delegates storage to a CacheBackend (in-memory, Redis, etc.).
TTLCache handles maxsize enforcement, LRU eviction, serialization,
and statistics on top of the backend.
When no backend is provided, the registered default is used
(see MemoryCacheBackend or RedisCacheBackend).
The type parameter T represents the cached value type.
Defaults to Any when unspecified (TTLCache()).
Use TTLCache[User](serializer=PydanticSerializer(User))
for typed caching.
| RAISES | DESCRIPTION |
|---|---|
| `ValueError` | If `maxsize` is negative or `ttl` is not positive. |
Initialize the cache.
| PARAMETER | DESCRIPTION |
|---|---|
| `maxsize` | Maximum number of entries. **TYPE:** `int` |
| `ttl` | Default TTL in seconds for all entries. **TYPE:** `float` |
| `backend` | The cache storage backend. By default, the registered cache backend is used. **TYPE:** `CacheBackend \| None` |
| `serializer` | Serialization strategy for cached values. Any object implementing the `CacheSerializer` protocol can be used. Built-in options: `JsonSerializer`, `PickleSerializer`, `PydanticSerializer`. **TYPE:** `CacheSerializer[T] \| None` |
get
async
get(key: str, default: T | None = None) -> T | None
Get a value by key.
Returns the default if the key is missing or expired. A hit promotes the key in LRU order.
| PARAMETER | DESCRIPTION |
|---|---|
| `key` | The cache key. **TYPE:** `str` |
| `default` | Value to return if the key is missing or expired. **TYPE:** `T \| None` |
set
async
set(key: str, value: T, ttl: float | None = None) -> None
Set a value with an optional per-entry TTL override.
If the cache is full (maxsize > 0), evicts the least recently used entry before storing.
| PARAMETER | DESCRIPTION |
|---|---|
| `key` | The cache key. **TYPE:** `str` |
| `value` | The value to store. Must be bytes or serializable. **TYPE:** `T` |
| `ttl` | Per-entry TTL override in seconds. Uses the default TTL if `None`. **TYPE:** `float \| None` |
| RAISES | DESCRIPTION |
|---|---|
| `ValueError` | If `ttl` is not positive. |
| `TypeError` | If `value` is not bytes and no serializer is set. |
delete
async
delete(key: str) -> None
Delete a key from the cache.
No-op if the key does not exist.
| PARAMETER | DESCRIPTION |
|---|---|
| `key` | The cache key to delete. **TYPE:** `str` |
clear
async
clear() -> None
Remove all entries from the cache.
cached
Cached Decorator.
P
module-attribute
P = ParamSpec('P')
R
module-attribute
R = TypeVar('R')
cached
cached(
cache: TTLCache,
*,
key_maker: Callable[
[
Callable[..., Any],
tuple[Any, ...],
dict[str, Any],
],
str,
]
| None = None,
skip: Callable[[Any], bool] | None = None,
typed: bool = False,
lock: _LockType = None,
) -> Callable[[Callable[P, R]], Callable[P, R]]
Cache decorator for sync and async functions.
Automatically detects whether the decorated function is sync or async and wraps it accordingly.
The decorated function exposes cache_info() and
cache_clear() helpers (matching functools.lru_cache).
cache_clear() is always a coroutine (must be awaited).
| PARAMETER | DESCRIPTION |
|---|---|
| `cache` | The `TTLCache` instance to store results in. **TYPE:** `TTLCache` |
| `key_maker` | Optional custom key generation function. Receives the decorated function, its positional arguments, and its keyword arguments, and returns the cache key string. **TYPE:** `Callable[[Callable[..., Any], tuple[Any, ...], dict[str, Any]], str] \| None` |
| `skip` | Optional predicate receiving the function result. When it returns `True`, the result is not cached. **TYPE:** `Callable[[Any], bool] \| None` |
| `typed` | If `True`, arguments of different types are cached separately (as in `functools.lru_cache`). **TYPE:** `bool` |
| `lock` | Protect against duplicate work on a cache miss: when the cache does not have the value, only one caller runs the function and the other callers wait for the result. You can also pass a custom context manager for global locking, which uses a single lock shared by all keys. **TYPE:** `_LockType` |
| RETURNS | DESCRIPTION |
|---|---|
| `Callable[[Callable[P, R]], Callable[P, R]]` | A decorator that caches function results. |
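A `key_maker` matching the signature above is an ordinary function of `(func, args, kwargs)`. A minimal sketch that builds deterministic keys (sorting keyword arguments so call order does not change the key):

```python
from typing import Any, Callable


def make_key(
    func: Callable[..., Any],
    args: tuple[Any, ...],
    kwargs: dict[str, Any],
) -> str:
    """Build a deterministic cache key from a function and its arguments.

    Sorting kwargs makes f(a=1, b=2) and f(b=2, a=1) share one key.
    """
    parts = [func.__qualname__, *map(repr, args)]
    parts += [f"{name}={value!r}" for name, value in sorted(kwargs.items())]
    return ":".join(parts)
```

The name `make_key` is illustrative; pass it as `cached(cache, key_maker=make_key)`. Note that `repr`-based keys assume arguments have stable, meaningful reprs.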
grelmicro.cache.memory
Memory Cache Backend.
MemoryCacheBackend
MemoryCacheBackend()
In-memory cache backend.
Stores entries in a Python dict with lazy TTL expiry. Suitable for testing and single-process applications.
Initialize the memory cache backend.
get
async
get(*, key: str) -> bytes | None
Get raw bytes by key.
Returns None if the key is missing or expired. Expired entries are removed lazily on access.
set
async
set(*, key: str, value: bytes, ttl: float) -> None
Store raw bytes with a TTL in seconds.
delete
async
delete(*, key: str) -> None
Delete a key (no-op if absent).
clear
async
clear() -> None
Remove all entries.
grelmicro.cache.redis
Redis Cache Backend.
RedisCacheBackend
RedisCacheBackend(
url: RedisDsn | str | None = None, *, prefix: str = ""
)
Redis cache storage backend.
Pure key-value storage with per-entry TTL handled natively by Redis (SETEX). Keys are prefixed for isolation.
Must be used as an async context manager to manage the connection lifecycle.
Initialize the Redis cache backend.
| PARAMETER | DESCRIPTION |
|---|---|
| `url` | The Redis URL. If not provided, the URL will be taken from the environment variables `REDIS_URL` or `REDIS_HOST`, `REDIS_PORT`, `REDIS_DB`, and `REDIS_PASSWORD`. **TYPE:** `RedisDsn \| str \| None` |
| `prefix` | Prefix prepended to all Redis keys to avoid conflicts with other keys. By default no prefix is added. **TYPE:** `str` |
get
async
get(*, key: str) -> bytes | None
Get raw bytes by key.
Returns None if the key is missing or expired.
set
async
set(*, key: str, value: bytes, ttl: float) -> None
Store raw bytes with a TTL in seconds.
delete
async
delete(*, key: str) -> None
Delete a key (no-op if absent).
clear
async
clear() -> None
Remove all entries matching the configured prefix.
Uses SCAN to iterate keys without blocking Redis, then deletes in batches.