The timer decorator is one of the most practical and widely used decorator examples in Python: it wraps any function to automatically measure and log its execution time without modifying the original code. Built on time.perf_counter() (a high-resolution wall clock), it gives accurate, real-world performance insight, making it ideal for profiling data pipelines, ML model inference, web endpoints, batch jobs, or any performance-critical function. In 2026, timer decorators remain essential: they separate timing and observability from business logic, apply to many functions with a single @timer line, support exception-safe cleanup, and integrate seamlessly with logging, pandas/Polars workflows, and production monitoring. They turn verbose manual timing into a clean, reusable, zero-boilerplate solution.
Here’s a complete, practical guide to the timer decorator in Python: basic implementation, preserving metadata with @wraps, handling args/kwargs, real-world usage patterns, and modern best practices with type hints, logging, multiple runs, precision, and pandas/Polars integration.
Simple timer decorator — measures wall-clock time and prints it after execution.
import time
from functools import wraps
def timer(func):
    """Decorator that prints execution time of the function."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        end = time.perf_counter()
        print(f"{func.__name__} took {end - start:.6f} seconds")
        return result
    return wrapper
Usage — apply @timer to any function; it works transparently with arguments and return values.
@timer
def slow_sum(n: int) -> int:
    """Sum numbers from 0 to n-1."""
    return sum(range(n))
slow_sum(1_000_000)
# slow_sum took 0.045678 seconds (varies by machine)
Real-world pattern: timing pandas/Polars transformations — profile heavy operations without cluttering logic.
import pandas as pd
@timer
def group_and_sum(df: pd.DataFrame) -> pd.DataFrame:
    """Group by category and sum values."""
    return df.groupby('category').sum()
df = pd.DataFrame({'category': ['A','A','B'], 'value': [10,20,30]})
result = group_and_sum(df)
# group_and_sum took 0.001234 seconds
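A single run can be noisy, so averaging over several runs gives steadier numbers. A minimal sketch of a parameterized decorator (timer_avg is a hypothetical name, not a standard API); note that re-running a function with side effects this way may be unsafe:

```python
import time
from functools import wraps

def timer_avg(runs: int = 10):
    """Decorator factory: call the function `runs` times and print the mean time."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            timings = []
            result = None
            for _ in range(runs):
                start = time.perf_counter()
                result = func(*args, **kwargs)
                timings.append(time.perf_counter() - start)
            mean = sum(timings) / len(timings)
            print(f"{func.__name__}: mean {mean:.6f}s over {runs} runs")
            return result  # result of the last run
        return wrapper
    return decorator

@timer_avg(runs=5)
def square_sum(n: int) -> int:
    return sum(i * i for i in range(n))

square_sum(100_000)
```

For rigorous micro-benchmarks, prefer the standard-library timeit module, which also handles garbage collection for you.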
Best practices make timer decorators safe, readable, and useful:
- Always use @wraps(func): preserves the function's name, docstring, and annotations.
- Prefer time.perf_counter(): high-resolution wall-clock time.
- Modern tip: with the Polars lazy API, decorate .collect() or .sink_* calls to profile the expensive execution step.
- Add type hints (def timer(func: Callable) -> Callable): improves clarity and mypy checks.
- Use logging instead of print (logging.info(f"{func.__name__} took {elapsed:.6f}s")): better for production.
- Time multiple runs: average over 10–100 executions for accuracy.
- Use a contextmanager alternative for timing blocks of code.
- Combine with codetiming or pyinstrument for advanced profiling.
- Disable GC if needed; timeit does this automatically for micro-benchmarks.
- Test timing: assert elapsed time is roughly correct in integration tests.
- Stack with other decorators carefully: order matters (@timer above @log vs @log above @timer).
- Use time.process_time() for CPU-only timing when I/O and sleep should be excluded.
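The contextmanager alternative is handy when you want to time a block of statements rather than a whole function. A minimal sketch (timed is a hypothetical name for illustration):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label: str = "block"):
    """Time an arbitrary block of code; prints even if the block raises."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        print(f"{label} took {elapsed:.6f} seconds")

with timed("list build"):
    data = [i ** 2 for i in range(100_000)]
```

Unlike a decorator, this needs no function boundary, so it can wrap a few lines in the middle of a script.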
Use a @timer decorator when you want automatic, reusable, exception-safe timing across many functions — it’s cleaner than manual code and more flexible than inline timeit. In 2026, combine with logging, type hints, Polars profiling, and multiple runs for reliable performance insights. Master timer decorators, and you’ll profile, optimize, and monitor Python code effortlessly and accurately.
Next time you need to know how long a function takes — decorate it with @timer. It’s Python’s cleanest way to say: “Measure this automatically — no extra code inside.”