Using timeit is the gold standard for accurately measuring the execution time of small Python code snippets — whether in interactive notebooks (%timeit) or regular scripts (the timeit module). It runs your code many times (auto-scaling up to millions of loops for tiny snippets), disables garbage collection during timing by default, and — in its %timeit form — reports mean ± standard deviation across repeats, greatly reducing noise from system load, caching, and one-off variations. In 2026, timing with timeit is non-negotiable for performance optimization, algorithm comparison, benchmarking against SLAs, profiling before/after refactors, and catching regressions in CI/CD pipelines.
Here’s a complete, practical guide to using timeit: notebook magic, script usage, controlling repetitions, real-world benchmarking patterns, common gotchas, and modern best practices with high-resolution timers, profilers, and visualization.
In Jupyter/IPython notebooks, the %timeit magic command is the quickest way — prepend it to any statement or expression. It auto-detects the best number of runs/loops and gives mean ± std dev.
%timeit sum(range(1000)) # Fast: ~15 µs ± 200 ns
%timeit -n 10000 -r 5 sum(range(1000)) # Control: 10k loops, 5 repeats
%%timeit
# Multi-line: %%timeit times the whole cell (it must be the cell's first line)
total = 0
for i in range(1000):
    total += i
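Outside a notebook, the same one-liner convenience is available from the command line — `python -m timeit` accepts the same -n/-r flags as the magic:

```shell
# Time a snippet from the shell; -n sets loops per repeat, -r sets repeats.
# Output looks like: "1000 loops, best of 5: 12.3 usec per loop"
python3 -m timeit -n 1000 -r 5 "sum(range(1000))"
```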
In regular scripts, use the timeit module — it’s more flexible and works outside notebooks. The timeit.timeit() function runs the setup code once, executes the statement number times, and returns the total elapsed seconds.
import timeit
def my_sum(n):
    total = 0
    for i in range(n):
        total += i
    return total
# Time a function call
time_taken = timeit.timeit(lambda: my_sum(1000), number=10000)
print(f"Total time for 10k calls: {time_taken:.4f} seconds")
print(f"Average per call: {time_taken / 10000 * 1e6:.2f} µs")
# With setup (e.g., create data once)
setup_code = "import random; data = [random.random() for _ in range(10000)]"
stmt = "sum(data)"
time_taken = timeit.timeit(stmt, setup=setup_code, number=1000)
print(f"Average sum time: {time_taken / 1000 * 1e6:.2f} µs")
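For less noisy numbers, timeit.repeat() runs the whole measurement several times and returns one total per repeat; the minimum is the estimate with the least outside interference:

```python
import timeit

setup_code = "import random; data = [random.random() for _ in range(10_000)]"

# Each element of `times` is the total seconds for 1000 executions
times = timeit.repeat("sum(data)", setup=setup_code, repeat=5, number=1000)
best = min(times) / 1000  # per-call time from the cleanest repeat
print(f"Best average sum time: {best * 1e6:.2f} µs")
```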
Real-world pattern: comparing algorithms or implementations — timeit helps you choose the fastest method objectively.
def list_sum(lst):
    return sum(lst)

def loop_sum(lst):
    total = 0
    for x in lst:
        total += x
    return total
data = list(range(100_000))
print("sum():", timeit.timeit(lambda: list_sum(data), number=100))
print("loop: ", timeit.timeit(lambda: loop_sum(data), number=100))
# sum() is usually 5–10× faster
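To see how a result scales, sweep the input size and collect per-call timings — a sketch (the sizes are arbitrary), whose numbers could be fed straight into a matplotlib plot:

```python
import timeit

timings = {}
for n in (1_000, 10_000, 100_000):
    data = list(range(n))
    # min of 3 repeats of 100 calls each, converted to per-call seconds
    timings[n] = min(timeit.repeat(lambda: sum(data), repeat=3, number=100)) / 100
    print(f"n={n:>7}: {timings[n] * 1e6:8.2f} µs")
```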
Best practices make timeit results accurate and actionable. Use time.perf_counter() (high-resolution, monotonic) for manual timing — it is what timeit uses internally as timeit.default_timer. Run many repetitions (number=10000+) and repeats (-r 7, or timeit.repeat()) — single runs are noisy due to OS scheduling, caching, and warm-up effects; take the minimum of the repeats as the least-noisy estimate. Remember that timeit disables garbage collection during timing by default — pass setup="gc.enable()" if you want GC overhead included in the measurement. Time only the hot path — use setup for one-time initialization (data creation, imports). Modern tip: use line_profiler or cProfile for line-by-line or call-graph profiling — timeit shows total time, profilers show where time is spent. Visualize results — plot timings vs. input size with matplotlib or seaborn to see scaling behavior. In production, integrate timing into tests or CI — assert latency < threshold, track regressions over commits. Combine with NumPy/Pandas vectorization — np.sum() on an array is often 10–100× faster than a Python loop. Use generators for large data — sum(x**2 for x in range(n)) avoids building an intermediate list.
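The CI idea above can be as simple as an assertion in the test suite — a sketch, where hot_path and the 500 µs budget are hypothetical stand-ins for your real function and SLA:

```python
import timeit

def hot_path():
    # hypothetical function under a latency budget
    return sum(i * i for i in range(1_000))

MAX_MICROS = 500.0  # hypothetical per-call budget; tune per CI runner

# Best-of-5 repeats of 1000 calls each, reduced to per-call seconds
best = min(timeit.repeat(hot_path, repeat=5, number=1_000)) / 1_000
assert best * 1e6 < MAX_MICROS, f"latency regression: {best * 1e6:.1f} µs"
```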
Timing with timeit is not optional — it’s how you prove your code is fast, not just correct. In 2026, time early and often, control repetitions, profile bottlenecks, and log percentiles. Master timeit, and you’ll write code that scales, meets performance goals, and stays fast after every change.
Next time you write a loop, function, or data pipeline — add timing. It’s Python’s cleanest way to ask: “Is this fast enough?” — and answer it reliably.