Observability Examples¶
Practical examples for logging, tracing, and rich output.
Overview¶
The observability module provides structured logging, tracing decorators, and rich console output utilities for better debugging and monitoring.
Examples¶
1. Basic Logging¶
File: examples/observability/01_basic_logging.py
Demonstrates fundamental logging patterns.
Topics:
- Simple logging setup - Configure logging with one line
- Structured logging - Log with extra fields (JSON/console formats)
- Context management - Automatic field propagation with LogContext
- Rich console output - Colorful logs with syntax highlighting
- Multiple loggers - Separate loggers for different modules
- File logging - Log to files and console simultaneously
- Real-world API logging - Complete request/response logging pattern
Run:
python examples/observability/01_basic_logging.py
Key Functions:
- configure_logging() - One-line logging setup
- get_logger() - Get structured logger
- LogContext - Context manager for temporary fields
- bind_context() - Add persistent context fields
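bind_context() is the one key function not shown in the Quick Start below. A minimal sketch, assuming it accepts keyword fields that persist on subsequent records (unlike LogContext, whose fields only apply inside its with block):
from dspu.observability import configure_logging, get_logger, bind_context

configure_logging(level="INFO", format="console")
logger = get_logger(__name__)

# Assumed behavior: fields bound here appear on every later record,
# in contrast to LogContext's scoped, temporary fields.
bind_context(service="billing", deployment="prod")
logger.info("Worker started")   # includes service and deployment
logger.info("Job picked up")    # still includes them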
2. Tracing and Rich Output¶
File: examples/observability/02_tracing_and_rich_output.py
Shows tracing decorators and rich output utilities.
Topics:
- @timed decorator - Automatic function execution timing
- @traced decorator - Function entry/exit tracing with arguments
- @logged_errors decorator - Automatic error logging
- Async decorators - Support for async functions
- Combined decorators - Stack multiple decorators (see the sketch after the key functions below)
- Pretty JSON - Syntax-highlighted JSON output
- Tables - Display tabular data
- Trees - Hierarchical data visualization
- Progress bars - Single and multi-task progress tracking
- Object inspection - Rich object introspection
- Enhanced tracebacks - Beautiful error displays
- Real-world pipeline - Complete data processing example
Run:
python examples/observability/02_tracing_and_rich_output.py
Key Functions:
- @timed, @traced, @logged_errors - Decorators
- print_json(), print_table(), print_tree() - Rich output
- Progress, TaskProgress - Progress bars
- inspect_object(), print_traceback() - Debugging
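The combined-decorators topic above, as a minimal sketch. It uses only the decorator parameters documented on this page (log_args, reraise); the stacking order is an assumption, chosen so that error logging wraps the traced and timed call:
from dspu.observability import get_logger, timed, traced, logged_errors

logger = get_logger(__name__)

# Illustrative ordering: @logged_errors outermost, so a failure is
# logged after the call has already been traced and timed.
@logged_errors(reraise=True)
@traced(log_args=True)
@timed()
def fetch_report(report_id: str) -> dict:
    logger.info(f"Fetching {report_id}")
    return {"report_id": report_id, "rows": 42}

fetch_report("r-001")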
Quick Start¶
from dspu.observability import (
    configure_logging,
    get_logger,
    LogContext,
    timed,
    traced,
)

# 1. Configure logging
configure_logging(level="INFO", format="rich")

# 2. Get logger
logger = get_logger(__name__)

# 3. Log with structured fields
logger.info("User logged in", user_id=123, ip="192.168.1.1")

# 4. Use context for automatic fields
with LogContext(request_id="req-123"):
    logger.info("Processing request")  # Includes request_id

# 5. Time function execution
@timed()
def slow_function():
    return "done"

# 6. Trace function calls
@traced(log_args=True)
def process_order(order_id: str):
    logger.info(f"Processing {order_id}")
    return {"status": "success"}
Output Formats¶
The module supports three output formats:
1. Console Format (format="console")¶
Human-readable text:
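An illustrative line (the exact layout depends on the formatter):
2024-12-05 10:30:45 [INFO] myapp: User login user_id=123 ip=192.168.1.1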
2. JSON Format (format="json")¶
Structured NDJSON for log aggregation:
{"timestamp":"2024-12-05T10:30:45","level":"INFO","logger":"myapp","message":"User login","user_id":123,"ip":"192.168.1.1"}
3. Rich Format (format="rich")¶
Colorful output with syntax highlighting:
- Colored log levels
- Syntax-highlighted exceptions
- Clickable file paths
- Progress bars
Common Patterns¶
Pattern 1: API Request Logging¶
from dspu.observability import configure_logging, get_logger, LogContext
import time

configure_logging(level="INFO", format="json")
logger = get_logger(__name__)

def handle_request(request_id: str, path: str):
    with LogContext(request_id=request_id):
        start = time.time()
        logger.info("Request started", path=path)
        # Process request...
        duration_ms = (time.time() - start) * 1000
        logger.info("Request completed", duration_ms=duration_ms)
Pattern 2: Data Processing Pipeline¶
from dspu.observability import get_logger, timed, Progress

logger = get_logger(__name__)

@timed()
def load_data(filename: str) -> list:
    logger.info(f"Loading data from {filename}")
    with open(filename) as f:
        return f.readlines()

@timed()
def process_data(data: list) -> list:
    processed = []
    with Progress() as progress:
        task = progress.add_task("Processing", total=len(data))
        for item in data:
            processed.append(item.strip())  # Process item
            progress.update(task, advance=1)
    return processed

# Run pipeline
data = load_data("input.csv")
results = process_data(data)
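The rich output helpers from example 2 could then summarize the pipeline above. This sketch assumes print_table takes columns, rows, and title keyword arguments and print_json takes a plain object; those signatures are illustrative, not confirmed:
from dspu.observability import print_json, print_table

# Hypothetical argument names, modeled on the descriptions above.
print_table(
    columns=["Step", "Items"],
    rows=[["load_data", str(len(data))], ["process_data", str(len(results))]],
    title="Pipeline summary",
)
print_json({"input": "input.csv", "processed": len(results)})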
Pattern 3: Error Handling with Rich Tracebacks¶
from dspu.observability import get_logger, logged_errors, print_traceback

logger = get_logger(__name__)

@logged_errors(reraise=True)
def risky_operation():
    try:
        pass  # Risky code goes here
    except Exception:
        print_traceback(show_locals=True)  # Rich traceback
        raise
Advanced Usage¶
LoggingSetup for Config Files¶
For complex setups, use LoggingSetup class:
from dspu.observability import LoggingSetup

# Load from config file
setup = LoggingSetup(
    default_log_config_file="logging.json",
    log_level_env="LOG_LEVEL",  # Override from env
)
setup.setup()

# With stdout/stderr redirection
logger = setup.setup_with_diversion()
print("This goes to the logger!")
StreamToLogger for Print Capture¶
from dspu.observability import StreamToLogger
import logging

logger = logging.getLogger(__name__)

# Redirect print statements to logger
StreamToLogger.subvert(logger, reroute_stdout=True)
print("This is logged at INFO level")

# Restore
StreamToLogger.restore()
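Since reroute_stdout=True presumably swaps sys.stdout process-wide, it is safest to call StreamToLogger.restore() in a finally block, so an exception raised between subvert() and restore() cannot leave print output silently rerouted.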
Best Practices¶
✅ DO:
- Use structured logging with key-value pairs
- Add context (request_id, user_id) for tracing
- Use appropriate log levels (DEBUG, INFO, WARNING, ERROR)
- Use JSON format for production log aggregation
- Use rich format for local development
- Time performance-critical functions with @timed
❌ DON'T:
- Don't log sensitive data (passwords, tokens, credit cards)
- Don't use string interpolation (use structured fields instead)
- Don't overuse DEBUG level in production
- Don't log inside tight loops (use sampling)
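The string-interpolation rule in practice: a constant message plus structured fields keeps events groupable, while an f-string produces a different message on every call.
from dspu.observability import get_logger

logger = get_logger(__name__)
user_id, ip = 123, "192.168.1.1"

# Avoid: every call emits a distinct message string,
# so aggregation tools cannot group these events
logger.info(f"User {user_id} logged in from {ip}")

# Prefer: stable message, queryable structured fields
logger.info("User logged in", user_id=user_id, ip=ip)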