Cache

grelmicro.cache

Cache.

CacheBackend

Bases: Protocol

Protocol for cache storage backends.

All methods are async because backends typically perform I/O. Backends are pure key-value stores with per-entry TTL; maxsize enforcement, LRU eviction, and statistics are managed by TTLCache.

get async

get(*, key: str) -> bytes | None

Get raw bytes by key.

Returns None if the key is missing or expired.

set async

set(*, key: str, value: bytes, ttl: float) -> None

Store raw bytes with a TTL in seconds.

delete async

delete(*, key: str) -> None

Delete a key (no-op if absent).

clear async

clear() -> None

Remove all entries managed by this backend.

CacheError

Bases: GrelmicroError

Base cache error.

CacheInfo dataclass

CacheInfo(
    hits: int,
    misses: int,
    maxsize: int,
    currsize: int,
    evictions: int,
)

Cache statistics snapshot.

ATTRIBUTE DESCRIPTION
hits

Number of cache hits.

TYPE: int

misses

Number of cache misses.

TYPE: int

maxsize

Maximum number of entries (0 means unlimited).

TYPE: int

currsize

Current number of tracked entries.

TYPE: int

evictions

Number of entries evicted to make room.

TYPE: int

hits instance-attribute

hits: int

misses instance-attribute

misses: int

maxsize instance-attribute

maxsize: int

currsize instance-attribute

currsize: int

evictions instance-attribute

evictions: int

CacheSerializer

Bases: Protocol[T]

Protocol for cache serialization strategies.

Any object implementing dumps and loads can be used as a TTLCache serializer.

dumps

dumps(value: T) -> bytes

Serialize a value to bytes.

loads

loads(data: bytes) -> T

Deserialize bytes to a value.
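Because this is a structural protocol, any object with matching dumps/loads methods qualifies; for instance, a plain UTF-8 string serializer (the Utf8Serializer name is hypothetical, for illustration):

```python
class Utf8Serializer:
    """Sketch of a CacheSerializer[str]: stores plain text as UTF-8 bytes."""

    def dumps(self, value: str) -> bytes:
        # Serialize a value to bytes.
        return value.encode("utf-8")

    def loads(self, data: bytes) -> str:
        # Deserialize bytes back to a value.
        return data.decode("utf-8")


serializer = Utf8Serializer()
payload = serializer.dumps("héllo")
print(serializer.loads(payload))  # héllo
```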

CacheSettingsValidationError

CacheSettingsValidationError(error: ValidationError | str)

Bases: CacheError, SettingsValidationError

Raised when cache settings fail validation.

JsonSerializer

Serialize values as JSON bytes.

Uses orjson when available (roughly 7x faster than stdlib), otherwise falls back to the standard library json module.

Suitable for dicts, lists, and other JSON-native types. datetime objects are serialized to ISO 8601 strings but deserialized back as strings (not datetime).

dumps

dumps(value: JSONEncodable) -> bytes

Serialize a value to JSON bytes.

loads

loads(data: bytes) -> JSONDecodable

Deserialize JSON bytes to a value.
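The datetime caveat can be reproduced with the stdlib json module (the fallback when orjson is unavailable): the value round-trips as an ISO 8601 string, not a datetime.

```python
import json
from datetime import datetime, timezone

stamp = datetime(2024, 1, 2, 3, 4, 5, tzinfo=timezone.utc)

# Serialize: datetime -> ISO 8601 string (stdlib json needs a default hook;
# orjson handles datetime natively).
payload = json.dumps({"at": stamp}, default=lambda d: d.isoformat()).encode()

# Deserialize: the value comes back as a plain string, not a datetime.
restored = json.loads(payload)
print(restored["at"])  # 2024-01-02T03:04:05+00:00
print(type(restored["at"]).__name__)  # str
```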

PickleSerializer

PickleSerializer(*, protocol: int = HIGHEST_PROTOCOL)

Bases: Generic[T]

Serialize values using Python pickle.

Supports any picklable Python object. Fast and transparent, but produces opaque binary data.

Warning

Pickle can execute arbitrary code during deserialization. Only use with trusted data sources.

PARAMETER DESCRIPTION
protocol

Pickle protocol version. Defaults to the highest available protocol.

TYPE: int DEFAULT: HIGHEST_PROTOCOL

Initialize the pickle serializer.

dumps

dumps(value: T) -> bytes

Serialize a value to bytes.

loads

loads(data: bytes) -> T

Deserialize bytes to a value.
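The behavior maps directly onto the stdlib pickle module; a sketch of the roundtrip and the protocol parameter:

```python
import pickle

# HIGHEST_PROTOCOL mirrors PickleSerializer's default protocol argument.
value = {"ids": [1, 2, 3], "active": True}
payload = pickle.dumps(value, protocol=pickle.HIGHEST_PROTOCOL)

restored = pickle.loads(payload)
print(restored == value)  # True
# Only unpickle data you trust: loads() can execute arbitrary code.
```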

PydanticSerializer

PydanticSerializer(model: type[T])

Bases: Generic[T]

Serialize values using Pydantic's TypeAdapter.

Uses Pydantic's Rust-based serializer for fast, type-safe roundtrips. Works with BaseModel, dataclass, TypedDict, and any type supported by TypeAdapter.

This is the fastest serialization option (benchmarked at roughly 2x faster than pickle for Pydantic models).

PARAMETER DESCRIPTION
model

The type to serialize/deserialize. Can be any type supported by pydantic.TypeAdapter.

TYPE: type[T]

Initialize the Pydantic serializer.

dumps

dumps(value: T) -> bytes

Serialize a value to JSON bytes via TypeAdapter.

loads

loads(data: bytes) -> T

Deserialize JSON bytes to a typed value via TypeAdapter.

TTLCache

TTLCache(
    maxsize: int = 0,
    ttl: float = 60,
    *,
    backend: CacheBackend | None = None,
    serializer: CacheSerializer[T] | None = None,
)

Bases: Generic[T]

Cache with per-entry TTL and optional LRU eviction.

Delegates storage to a CacheBackend (in-memory, Redis, etc.). TTLCache handles maxsize enforcement, LRU eviction, serialization, and statistics on top of the backend.

When no backend is provided, the registered default is used (see MemoryCacheBackend or RedisCacheBackend).

The type parameter T represents the cached value type. Defaults to Any when unspecified (TTLCache()). Use TTLCache[User](serializer=PydanticSerializer(User)) for typed caching.

RAISES DESCRIPTION
ValueError

If maxsize is negative or ttl is not positive.

Initialize the cache.

PARAMETER DESCRIPTION
maxsize

Maximum number of entries. 0 means unlimited. Only enforced locally (not by the backend).

TYPE: int DEFAULT: 0

ttl

Default TTL in seconds for all entries.

TYPE: float DEFAULT: 60

backend

The cache storage backend.

By default, the registered cache backend is used.

TYPE: CacheBackend | None DEFAULT: None

serializer

Serialization strategy for cached values.

Any object implementing the CacheSerializer protocol (dumps / loads methods) can be used.

Built-in options:

  • PickleSerializer(): Any picklable object.
  • JsonSerializer(): JSON-native types (dict, list, etc.).
  • PydanticSerializer(Model): Type-safe Pydantic roundtrips.
  • None: Raw bytes only (no serialization).

TYPE: CacheSerializer[T] | None DEFAULT: None

get async

get(key: str, default: T | None = None) -> T | None

Get a value by key.

Returns the default if the key is missing or expired. A hit promotes the key in LRU order.

PARAMETER DESCRIPTION
key

The cache key.

TYPE: str

default

Value to return if the key is missing or expired.

TYPE: T | None DEFAULT: None

set async

set(key: str, value: T, ttl: float | None = None) -> None

Set a value with an optional per-entry TTL override.

If the cache is full (maxsize > 0), evicts the least recently used entry before storing.

PARAMETER DESCRIPTION
key

The cache key.

TYPE: str

value

The value to store. Must be bytes or serializable.

TYPE: T

ttl

Per-entry TTL override in seconds. Uses the default TTL if None.

TYPE: float | None DEFAULT: None

RAISES DESCRIPTION
ValueError

If ttl is not positive.

TypeError

If value is not bytes and no serializer is set.

delete async

delete(key: str) -> None

Delete a key from the cache.

No-op if the key does not exist.

PARAMETER DESCRIPTION
key

The cache key to delete.

TYPE: str

clear async

clear() -> None

Remove all entries from the cache.

cache_info

cache_info() -> CacheInfo

Return a snapshot of cache statistics.

cached

Cached Decorator.

P module-attribute

P = ParamSpec('P')

R module-attribute

R = TypeVar('R')

cached

cached(
    cache: TTLCache,
    *,
    key_maker: Callable[
        [
            Callable[..., Any],
            tuple[Any, ...],
            dict[str, Any],
        ],
        str,
    ]
    | None = None,
    skip: Callable[[Any], bool] | None = None,
    typed: bool = False,
    lock: _LockType = None,
) -> Callable[[Callable[P, R]], Callable[P, R]]

Cache decorator for sync and async functions.

Automatically detects whether the decorated function is sync or async and wraps it accordingly.

The decorated function exposes cache_info() and cache_clear() helpers (matching functools.lru_cache). cache_clear() is always a coroutine (must be awaited).

PARAMETER DESCRIPTION
cache

The TTLCache instance to store results in.

TYPE: TTLCache

key_maker

Optional custom key generation function. Receives (func, args, kwargs) and must return a string key.

TYPE: Callable[[Callable[..., Any], tuple[Any, ...], dict[str, Any]], str] | None DEFAULT: None

skip

Optional predicate receiving the function result. When it returns True the result is not cached.

TYPE: Callable[[Any], bool] | None DEFAULT: None

typed

If True, arguments of different types are cached separately (e.g. 3 vs 3.0).

TYPE: bool DEFAULT: False

lock

Protect against duplicate work on a cache miss. When the cache does not have the value, only one caller runs the function. The other callers wait for the result.

Set to True for per-key locking. Misses on different keys run in parallel. Misses on the same key run one at a time. The right lock type is created automatically (asyncio.Lock for async, threading.Lock for sync).

You can also pass a custom context manager for global locking. This uses a single lock shared by all keys.

TYPE: _LockType DEFAULT: None

RETURNS DESCRIPTION
Callable[[Callable[P, R]], Callable[P, R]]

A decorator that caches function results.
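A key_maker matching the documented (func, args, kwargs) -> str signature might look like the following sketch (the make_key helper is hypothetical, not part of grelmicro):

```python
from collections.abc import Callable
from typing import Any


def make_key(
    func: Callable[..., Any],
    args: tuple[Any, ...],
    kwargs: dict[str, Any],
) -> str:
    """Build a stable string key from the function and its arguments."""
    parts = [func.__qualname__]
    parts += [repr(a) for a in args]
    # Sort kwargs so keyword order does not change the key.
    parts += [f"{k}={v!r}" for k, v in sorted(kwargs.items())]
    return ":".join(parts)


def get_user(user_id: int, *, fresh: bool = False) -> dict[str, Any]:
    return {"id": user_id}


print(make_key(get_user, (42,), {"fresh": True}))
# get_user:42:fresh=True
```

It would then be passed as cached(cache, key_maker=make_key).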

grelmicro.cache.memory

Memory Cache Backend.

MemoryCacheBackend

MemoryCacheBackend()

In-memory cache backend.

Stores entries in a Python dict with lazy TTL expiry. Suitable for testing and single-process applications.

Initialize the memory cache backend.

get async

get(*, key: str) -> bytes | None

Get raw bytes by key.

Returns None if the key is missing or expired. Expired entries are removed lazily on access.

set async

set(*, key: str, value: bytes, ttl: float) -> None

Store raw bytes with a TTL in seconds.

delete async

delete(*, key: str) -> None

Delete a key (no-op if absent).

clear async

clear() -> None

Remove all entries.

grelmicro.cache.redis

Redis Cache Backend.

RedisCacheBackend

RedisCacheBackend(
    url: RedisDsn | str | None = None, *, prefix: str = ""
)

Redis cache storage backend.

Pure key-value storage with per-entry TTL handled natively by Redis (SETEX). Keys are prefixed for isolation.

Must be used as an async context manager to manage the connection lifecycle.

Initialize the Redis cache backend.

PARAMETER DESCRIPTION
url

The Redis URL.

If not provided, the URL is taken from the REDIS_URL environment variable, or assembled from REDIS_HOST, REDIS_PORT, REDIS_DB, and REDIS_PASSWORD.

TYPE: RedisDsn | str | None DEFAULT: None

prefix

Prefix prepended to all Redis keys to avoid conflicts with other keys.

By default no prefix is added.

TYPE: str DEFAULT: ''
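For example, the documented environment variables could be configured as follows (values illustrative; set either the full DSN or the individual parts):

```shell
# Either a full Redis DSN...
export REDIS_URL="redis://localhost:6379/0"

# ...or individual connection parts:
export REDIS_HOST="localhost"
export REDIS_PORT="6379"
export REDIS_DB="0"
export REDIS_PASSWORD="s3cret"
```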

get async

get(*, key: str) -> bytes | None

Get raw bytes by key.

Returns None if the key is missing or expired.

set async

set(*, key: str, value: bytes, ttl: float) -> None

Store raw bytes with a TTL in seconds.

delete async

delete(*, key: str) -> None

Delete a key (no-op if absent).

clear async

clear() -> None

Remove all entries matching the configured prefix.

Uses SCAN to iterate keys without blocking Redis, then deletes in batches.