Timing a function in Python is a common need for performance analysis, benchmarking, optimization, and debugging, especially in data pipelines, ML training, web scraping, or production code where execution speed matters. The most accurate approach uses time.perf_counter() (a high-resolution monotonic clock, best for short durations) or time.process_time() (CPU time only, which ignores sleep and I/O waits). In 2026, timing remains essential, but manual start/end blocks are error-prone and verbose; decorators, context managers, and tools like timeit, functools, or third-party libraries (codetiming, pyinstrument) make timing clean, reusable, and exception-safe. Accurate timing helps you identify bottlenecks, compare implementations, and ensure scalability in pandas/Polars workflows.
Here’s a complete, practical guide to timing functions in Python: manual timing, decorator/context manager patterns, timeit module, real-world examples, and modern best practices with type hints, precision, multiple runs, and pandas/Polars integration.
Manual timing with perf_counter() — simple but repetitive; use for one-off measurements.
import time

def my_function():
    time.sleep(0.5)  # simulate work
    return sum(range(1_000_000))

start = time.perf_counter()
result = my_function()
end = time.perf_counter()
elapsed = end - start
print(f"Elapsed time: {elapsed:.6f} seconds")
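Before moving on to decorators, it helps to see the two clocks side by side. A minimal sketch (the sleepy_work name is illustrative) showing that perf_counter() counts the sleep as elapsed time while process_time() ignores it:

```python
import time

def sleepy_work() -> int:
    """Sleep briefly, then do a little CPU work."""
    time.sleep(0.2)
    return sum(range(100_000))

wall_start = time.perf_counter()
cpu_start = time.process_time()
sleepy_work()
wall_elapsed = time.perf_counter() - wall_start
cpu_elapsed = time.process_time() - cpu_start

print(f"Wall-clock time: {wall_elapsed:.3f} s")  # includes the 0.2 s sleep
print(f"CPU time:        {cpu_elapsed:.3f} s")   # excludes the sleep
```

On a typical machine the CPU time is a few milliseconds while the wall-clock time is at least 0.2 s, which is exactly why the choice of clock matters.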
Decorator-based timing — reusable, clean, wraps any function without modifying its code.
from functools import wraps
import time

def timer(func):
    """Decorator that prints the execution time of the function."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        end = time.perf_counter()
        print(f"{func.__name__} took {end - start:.6f} seconds")
        return result
    return wrapper

@timer
def slow_sum(n: int) -> int:
    return sum(range(n))

slow_sum(1_000_000)
# slow_sum took 0.045678 seconds (varies by machine)
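For production code, a logging-based variant of the same decorator is often preferable to print; the try/finally also makes it exception-safe, reporting the time even when the function raises. A minimal sketch (the log_timer name is my own, not a library API):

```python
import logging
import time
from functools import wraps

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def log_timer(func):
    """Like @timer, but logs elapsed time and survives exceptions."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed = time.perf_counter() - start
            logger.info("%s took %.6f seconds", func.__name__, elapsed)
    return wrapper

@log_timer
def slow_sum(n: int) -> int:
    return sum(range(n))

slow_sum(1_000_000)
```

Logging the timing in the finally block means a failing call still leaves a trace of how long it ran, which print-in-the-happy-path versions miss.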
Context manager timing — ideal for blocks of code, multiple statements, or non-function code.
from contextlib import contextmanager
import time

@contextmanager
def timed_block(label: str = "Block"):
    start = time.perf_counter()
    try:
        yield
    finally:
        end = time.perf_counter()
        print(f"{label} took {end - start:.6f} seconds")

# Usage
import pandas as pd

with timed_block("Data processing"):
    df = pd.read_csv('large.csv')
    cleaned = df.dropna().groupby('category').sum()
# Data processing took 2.345678 seconds
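When you need the measured value afterwards (for logging, assertions, or aggregating over runs) rather than an immediate print, a small class-based context manager can store it on itself. This Timer class is a sketch of the pattern, not a standard-library type:

```python
import time

class Timer:
    """Context manager that records elapsed wall-clock time on itself."""

    def __enter__(self):
        self.start = time.perf_counter()
        return self

    def __exit__(self, exc_type, exc, tb):
        self.elapsed = time.perf_counter() - self.start
        return False  # never suppress exceptions

with Timer() as t:
    total = sum(range(1_000_000))

print(f"Summation took {t.elapsed:.6f} seconds")
```

Because the elapsed time lives on the object, you can collect t.elapsed values across many runs and average them instead of parsing printed output.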
Real-world pattern: timing pandas/Polars operations — decorate or wrap heavy transformations for profiling.
import polars as pl

@timer
def process_large_file(path: str) -> pl.DataFrame:
    return (
        pl.scan_csv(path)
        .filter(pl.col('value') > 100)
        .group_by('category')
        .agg(pl.col('value').sum())
        .collect()
    )

result = process_large_file('huge_data.csv')
# process_large_file took 1.234567 seconds
Best practices make timing safe, accurate, and useful:
- Prefer time.perf_counter() for wall-clock time (real elapsed time).
- Use time.process_time() for CPU-only time (ignores sleep/I/O wait).
- Modern tip: with the Polars lazy API, time the .collect() or .sink_* calls to profile the expensive execution step.
- Add type hints (def timer(func: Callable) -> Callable) for clarity.
- Use @wraps to preserve the function's name and docstring.
- Time multiple runs: average over 10–100 iterations for accuracy (the timeit module excels here).
- Use timeit for micro-benchmarks: timeit.timeit(lambda: func(), number=1000).
- Avoid timing inside loops: measure the outer loop for realistic results.
- Use a context manager for non-function blocks, such as setup/teardown.
- Combine with logging: log timings instead of printing in production.
- Use codetiming or pyinstrument for advanced profiling.
- Disable GC during timing for consistency: gc.disable()/gc.enable().
- Test timing decorators: assert the reported elapsed time is roughly correct.
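The timeit advice above can be sketched as follows; timeit.repeat runs the statement in batches, and timeit disables garbage collection during timing by default, so the gc.disable() step is handled for you:

```python
import timeit

def slow_sum(n: int) -> int:
    return sum(range(n))

# Run the call 1,000 times per batch, 5 batches.
runs = timeit.repeat(lambda: slow_sum(10_000), number=1_000, repeat=5)

# Per-call time from the fastest batch (min is least affected
# by background noise on the machine).
best = min(runs) / 1_000
print(f"Best per-call time: {best:.8f} seconds")
```

Taking the minimum rather than the mean is the convention the timeit documentation itself recommends, since slower batches usually reflect interference from other processes, not your code.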
Timing functions with perf_counter, decorators, or context managers measures performance accurately and reproducibly. In 2026, prefer decorators for reusable timing, context managers for blocks, timeit for micro-benchmarks, Polars lazy profiling, and logging for production. Master timing, and you’ll identify bottlenecks, optimize pipelines, and ensure scalable, efficient Python code.
Next time you need to measure speed — time it properly. It’s Python’s cleanest way to say: “How long does this really take?”