Logging
The logging package provides a simple, easy-to-configure logging system. It adheres to the twelve-factor app methodology by directing logs to stdout, supports JSON formatting, and allows log-level configuration via environment variables.
Backend Selection
grelmicro supports three logging backends:
- stdlib (default) - Python's built-in logging module (no dependencies)
- Loguru - Feature-rich Python logging library
- structlog - Structured logging for Python
All backends produce identical JSON output structure (JSONRecordDict), making it easy to switch between them.
Dependencies

- stdlib (default): no additional dependencies; uses Python's built-in logging module.
- Loguru: pip install grelmicro[standard] (or pip install loguru directly)
- structlog: pip install grelmicro[structlog] (or pip install structlog orjson directly)
- OpenTelemetry support: pip install grelmicro[standard,opentelemetry] or pip install grelmicro[structlog,opentelemetry]
Configure Logging
Call the configure_logging function once at startup to set up the logging system.
from grelmicro.logging import configure_logging
configure_logging()
Settings
You can change the default settings using the following environment variables:
- LOG_BACKEND: Select the logging backend (stdlib, loguru, or structlog). Default: stdlib.
- LOG_LEVEL: Set the desired log level. Available options: DEBUG, INFO, WARNING, ERROR, CRITICAL. Default: INFO.
- LOG_FORMAT: Choose the log format. Options are TEXT and JSON, or you can provide a custom template. Default: JSON.
- LOG_TIMEZONE: IANA timezone for timestamps (e.g., UTC, Europe/Zurich, America/New_York). Default: UTC.
- LOG_JSON_SERIALIZER: JSON serializer to use (stdlib or orjson). Use orjson for better performance. Default: stdlib.
- LOG_OTEL_ENABLED: Enable OpenTelemetry trace context extraction. Default: auto-enabled if OpenTelemetry is installed.
Backend Selection
Select the backend using the LOG_BACKEND environment variable:
# Use stdlib (default, no dependencies)
LOG_BACKEND=stdlib
# Use loguru
LOG_BACKEND=loguru
# Use structlog
LOG_BACKEND=structlog
After calling configure_logging(), use the appropriate logger for your backend.

Loguru:

from loguru import logger

configure_logging()
logger.info("Hello, World!", user_id=123)

structlog:

import structlog

configure_logging()
log = structlog.get_logger()
log.info("Hello, World!", user_id=123)

stdlib:

import logging

configure_logging()
logger = logging.getLogger(__name__)
logger.info("Hello, World!", extra={"user_id": 123})
Timezone Support
The LOG_TIMEZONE setting controls the timezone used for all log timestamps in both JSON and TEXT formats. This is particularly useful when running applications across multiple regions or when you need logs in a specific timezone for compliance or debugging purposes.
JSON Format: Timestamps are ISO 8601 formatted with timezone offset
{"time":"2024-11-25T15:56:36.066922+01:00",...} // Europe/Zurich
{"time":"2024-11-25T14:56:36.066922+00:00",...} // UTC
TEXT Format: Timestamps are displayed in the format YYYY-MM-DD HH:MM:SS.mmm
2024-11-25 15:56:36.066 | INFO | ... // Europe/Zurich
2024-11-25 14:56:36.066 | INFO | ... // UTC
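The offset behavior above can be illustrated with Python's standard zoneinfo module, independent of grelmicro (the timestamps match the examples above):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# One instant in time, first rendered in UTC, then in Europe/Zurich.
instant = datetime(2024, 11, 25, 14, 56, 36, 66922, tzinfo=timezone.utc)

print(instant.isoformat())
# 2024-11-25T14:56:36.066922+00:00 (UTC)
print(instant.astimezone(ZoneInfo("Europe/Zurich")).isoformat())
# 2024-11-25T15:56:36.066922+01:00 (CET, UTC+1 in November)
```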
Structured Logging
When using JSON format, additional context can be passed to logger methods as keyword arguments. These will be captured in the ctx field:
"""Example: Structured logging with context."""
from loguru import logger
from grelmicro.logging import configure_logging
# Ensure clean state
logger.remove()
configure_logging()
logger.info("User logged in", user_id=123, ip_address="192.168.1.1")
Output:
{"time":"...","level":"INFO",...,"msg":"User logged in","ctx":{"user_id":123,"ip_address":"192.168.1.1"}}
Exceptions are automatically captured in the ctx field when using logger.exception() (loguru only):
"""Example: Exception logging with context."""
from loguru import logger
from grelmicro.logging import configure_logging
# Ensure clean state
logger.remove()
configure_logging()
try:
    1 / 0  # noqa: B018
except ZeroDivisionError:
    logger.exception("Operation failed", operation="divide")
Output:
{"time":"...","level":"ERROR",...,"msg":"Operation failed","ctx":{"operation":"divide","exception":"ZeroDivisionError: division by zero"}}
OpenTelemetry Integration
The logging system automatically integrates with OpenTelemetry for distributed tracing. When you install the opentelemetry extras and have an active span, trace_id and span_id are automatically added to your logs at the top level:
"""OpenTelemetry integration example."""
from loguru import logger
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import (
    ConsoleSpanExporter,
    SimpleSpanProcessor,
)
from grelmicro.logging import configure_logging
# Set up OpenTelemetry
provider = TracerProvider()
processor = SimpleSpanProcessor(ConsoleSpanExporter())
provider.add_span_processor(processor)
trace.set_tracer_provider(provider)
# Configure logging (auto-detects OpenTelemetry)
configure_logging()
# Get a tracer
tracer = trace.get_tracer(__name__)
# Logs inside spans will automatically include trace_id and span_id
with tracer.start_as_current_span("handle_request"):
    logger.info("Processing request", user_id=123, endpoint="/api/users")
    with tracer.start_as_current_span("database_query"):
        logger.info("Executing query", query="SELECT * FROM users")
    logger.info("Request completed", status="success")
Output:
{
  "time": "2026-01-27T16:00:00.000Z",
  "level": "INFO",
  "msg": "Processing request",
  "trace_id": "4bf92f3577b34da6a3ce929d0e0e4736",
  "span_id": "00f067aa0ba902b7",
  "ctx": {"user_id": 123, "endpoint": "/api/users"}
}
Trace fields follow the OpenTelemetry standard and are placed at the JSON root level (not in ctx) for compatibility with observability platforms like Jaeger, Zipkin, DataDog, and Grafana Tempo.
To disable: LOG_OTEL_ENABLED=false
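For reference, OpenTelemetry trace and span IDs are 128-bit and 64-bit integers rendered as 32 and 16 lowercase hex characters. A quick sketch using only the standard library (the sample IDs are the W3C Trace Context examples, not produced by a live span):

```python
# W3C Trace Context sample IDs, as integers.
trace_id = 0x4BF92F3577B34DA6A3CE929D0E0E4736
span_id = 0x00F067AA0BA902B7

# OpenTelemetry renders these as zero-padded lowercase hex strings,
# which is what appears in the trace_id/span_id log fields.
print(format(trace_id, "032x"))  # 4bf92f3577b34da6a3ce929d0e0e4736
print(format(span_id, "016x"))   # 00f067aa0ba902b7
```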
Production Deployment
For strict unbuffered output (12-factor compliance), set the PYTHONUNBUFFERED=1 environment variable in your container runtime.
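In a container image this can be baked in at build time; a minimal sketch (the base image tag, entrypoint, and LOG_* values are placeholders, not requirements):

```dockerfile
FROM python:3.12-slim

# Strict unbuffered stdout for 12-factor log collection.
ENV PYTHONUNBUFFERED=1

# Logging settings can be baked in here or injected at deploy time.
ENV LOG_FORMAT=JSON LOG_LEVEL=INFO

CMD ["python", "app.py"]
```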
Performance
Benchmark results comparing all backend and serializer combinations (50,000 iterations):
| Backend | Serializer | Ops/sec | vs Best |
|---|---|---|---|
| structlog | orjson | 236,173 | 100.0% |
| stdlib | orjson | 220,328 | 93.3% |
| structlog | stdlib | 181,909 | 77.0% |
| stdlib | stdlib | 169,107 | 71.6% |
| loguru | orjson | 150,303 | 63.6% |
| loguru | stdlib | 122,035 | 51.7% |
Key findings:
- structlog + orjson is the fastest combination
- stdlib + orjson is very close (93% of best) with minimal dependencies
- orjson provides ~20-30% speedup over stdlib json across all backends
- stdlib (default, no extra dependencies) performs well at 72% of best
Performance Recommendation
For high-throughput applications, use LOG_JSON_SERIALIZER=orjson with either structlog or stdlib backend. For most applications, the default stdlib backend with stdlib serializer provides excellent performance with zero dependencies.
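Following those recommendations, a high-throughput configuration might look like this (values taken from the benchmark table above):

```shell
export LOG_BACKEND=stdlib          # or structlog, the fastest combination
export LOG_JSON_SERIALIZER=orjson  # roughly 20-30% faster serialization
export LOG_FORMAT=JSON
```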
Run the benchmark yourself:
python benchmarks/logging_benchmark.py
Examples
Basic Usage
Here is a quick example of how to use the logging system:
from loguru import logger
from grelmicro.logging import configure_logging
configure_logging()
logger.debug("This is a debug message")
logger.info("This is an info message")
logger.warning("This is a warning message with context", user="Alice")
logger.error("This is an error message with context", user="Bob")
try:
    raise ValueError("This is an exception message")
except ValueError:
    logger.exception(
        "This is an exception message with context", user="Charlie"
    )
The console output on stdout will be:
{"time":"2024-11-25T15:56:36.066922+01:00","level":"INFO","thread":"MainThread","logger":"__main__:<module>:7","msg":"This is an info message"}
{"time":"2024-11-25T15:56:36.067063+01:00","level":"WARNING","thread":"MainThread","logger":"__main__:<module>:8","msg":"This is a warning message with context","ctx":{"user":"Alice"}}
{"time":"2024-11-25T15:56:36.067105+01:00","level":"ERROR","thread":"MainThread","logger":"__main__:<module>:9","msg":"This is an error message with context","ctx":{"user":"Bob"}}
{"time":"2024-11-25T15:56:36.067134+01:00","level":"ERROR","thread":"MainThread","logger":"__main__:<module>:14","msg":"This is an exception message with context","ctx":{"user":"Charlie","exception":"ValueError: This is an exception message"}}
FastAPI Integration
You can use the logging system with FastAPI as well:
from contextlib import asynccontextmanager
from fastapi import FastAPI
from loguru import logger
from grelmicro.logging import configure_logging
@asynccontextmanager
async def lifespan(app: FastAPI):
    # Ensure logging is configured during startup
    configure_logging()
    yield

app = FastAPI(lifespan=lifespan)

@app.get("/")
def root():
    logger.info("This is an info message")
    return {"Hello": "World"}
Warning
It is crucial to call configure_logging during the lifespan of the FastAPI application. Failing to do so may result in the FastAPI CLI resetting the logging configuration.
Different Log Formats
JSON Format (Default)
JSON format is ideal for production environments, log aggregation systems, and structured logging:
LOG_FORMAT=JSON
LOG_TIMEZONE=Europe/Zurich
"""Example: JSON format logging with timezone."""
from loguru import logger
from grelmicro.logging import configure_logging
# Ensure clean state
logger.remove()
configure_logging()
logger.info("Application started", version="1.0.0", environment="production")
Output:
{"time":"2024-11-25T15:56:36.066922+01:00","level":"INFO","thread":"MainThread","logger":"__main__:<module>:12","msg":"Application started","ctx":{"version":"1.0.0","environment":"production"}}
TEXT Format
TEXT format is more human-readable, ideal for local development and debugging:
LOG_FORMAT=TEXT
LOG_TIMEZONE=America/New_York
"""Example: TEXT format logging with timezone."""
from loguru import logger
from grelmicro.logging import configure_logging
# Ensure clean state
logger.remove()
configure_logging()
logger.info("Application started", version="1.0.0")
Output:
2024-11-25 09:56:36.066 | INFO | __main__:<module>:12 - Application started
Custom Format (Loguru only)
You can provide a custom loguru format template:
LOG_FORMAT="{level} | {message}"
"""Example: Custom format logging."""
from loguru import logger
from grelmicro.logging import configure_logging
# Ensure clean state
logger.remove()
configure_logging()
logger.info("Custom format example")
Output:
INFO | Custom format example
Note
Custom format strings only work with the loguru backend. When using structlog with a custom format, it falls back to the ConsoleRenderer.
JSON Record Structure
When using JSON format, log records follow this structure:
class JSONRecordDict:
    time: str            # ISO 8601 timestamp with timezone
    level: str           # Log level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
    msg: str             # Log message
    logger: str | None   # Logger name in format "module:function:line"
    thread: str          # Thread name
    trace_id: str        # Optional: OpenTelemetry trace ID (32 hex chars)
    span_id: str         # Optional: OpenTelemetry span ID (16 hex chars)
    ctx: dict[Any, Any]  # Optional context data (kwargs passed to logger)
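Because every backend emits this same structure, downstream tooling can parse records generically. A minimal sketch using a hand-written sample record in the JSONRecordDict shape (not output from a live logger):

```python
import json

# A sample JSON log line following the JSONRecordDict structure above.
line = (
    '{"time":"2024-11-25T15:56:36.066922+01:00","level":"INFO",'
    '"thread":"MainThread","logger":"myapp.service:process_data:42",'
    '"msg":"Processing complete","ctx":{"records_processed":1000}}'
)

record = json.loads(line)
# time, level, msg, and thread are always present;
# trace_id, span_id, and ctx may be absent, so use .get() for those.
print(record["level"], record["msg"])                   # INFO Processing complete
print(record.get("ctx", {}).get("records_processed"))   # 1000
print(record.get("trace_id"))                           # None (no active span)
```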
Example:
{
  "time": "2024-11-25T15:56:36.066922+01:00",
  "level": "INFO",
  "thread": "MainThread",
  "logger": "myapp.service:process_data:42",
  "msg": "Processing complete",
  "trace_id": "4bf92f3577b34da6a3ce929d0e0e4736",
  "span_id": "00f067aa0ba902b7",
  "ctx": {
    "records_processed": 1000,
    "duration_ms": 234
  }
}
Note
The trace_id and span_id fields only appear when OpenTelemetry integration is enabled and an active span exists.