%lprun, the line-by-line profiling magic from the line_profiler package, is one of the most powerful tools for pinpointing performance bottlenecks in Python code. While cProfile shows function-level timing (calls, total time, per-call time), %lprun breaks execution down line by line, revealing exactly which statements inside a function consume the most time, how many times each line runs (Hits), the time per hit, and the percentage of total runtime. In 2026, line-level profiling with %lprun remains essential for optimizing tight loops, data processing, numerical code, and any function where micro-optimizations (vectorization, caching, algorithm tweaks) can yield 10–100× speedups.
Here’s a complete, practical guide to understanding and using %lprun output: what each column means, interpreting hotspots, real-world examples, and modern best practices for line-level profiling in notebooks and scripts.
Typical %lprun output looks like this (from profiling a nested loop function):
Timer unit: 1e-06 s
Total time: 2.270482 s
File: example.py
Function: my_function at line 5

Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
     5                                           def my_function(x, y):
     6         1          2.0      2.0      0.0      result = []
     7      1001        810.0      0.8      0.0      for i in range(x):
     8      1000       1230.0      1.2      0.1          row = []
     9   1001000    1027955.0      1.0     45.3          for j in range(y):
    10   1000000    1233758.0      1.2     54.3              row.append(i * j)
    11      1000       6725.0      6.7      0.3          result.append(row)
    12         1          2.0      2.0      0.0      return result
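The table above comes from profiling a function like the following; a minimal reconstruction (the file name example.py and the call arguments are illustrative):

```python
# In a notebook, output of this shape is produced by:
#   %load_ext line_profiler
#   %lprun -f my_function my_function(1000, 1000)
def my_function(x, y):
    result = []
    for i in range(x):          # outer loop: runs x times
        row = []
        for j in range(y):      # inner loop: runs x * y times in total
            row.append(i * j)
        result.append(row)
    return result
```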
Breakdown of key columns:
- Line # — line number in the source file where the statement lives
- Hits — how many times that line was executed
- Time — total time spent on that line (in timer units, here microseconds)
- Per Hit — average time per execution of that line
- % Time — percentage of total function runtime spent on that line
- Line Contents — the actual code on that line
Hotspots jump out immediately: here, lines 9–10 (the inner loop) dominate at roughly 99% of total time, showing the nested loop is the bottleneck, a classic target for vectorization or algorithm improvement.
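For this particular hotspot, a hedged sketch of the vectorized replacement (assuming NumPy is acceptable; np.outer happens to compute exactly the same i*j grid):

```python
import numpy as np

def my_function_vectorized(x, y):
    # np.outer builds the full x-by-y grid of i * j products in one
    # C-level call, eliminating both Python loops entirely.
    return np.outer(np.arange(x), np.arange(y))
```

Note the result is an ndarray rather than nested lists; call .tolist() on it if list output is required downstream.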
Real-world pattern: optimizing numerical or data-processing functions — %lprun quickly reveals whether loops, list appends, or conversions are slow.
# Install once: !pip install line_profiler
%load_ext line_profiler

def process_data(data):
    result = []
    for row in data:
        cleaned = [float(x) for x in row if x.strip()]
        result.append(sum(cleaned) / len(cleaned) if cleaned else 0)
    return result

data = [[str(i) for i in range(100)] for _ in range(1000)]
%lprun -f process_data process_data(data)
# Output will show list comprehension and sum/len as major time sinks
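Once the profile confirms the per-element float conversion and averaging are the sinks, a vectorized version is the natural fix. A minimal sketch, assuming every cell parses as a number (the original's blank-cell filtering would need a pre-clean pass or a masked array):

```python
import numpy as np

def process_data_vectorized(data):
    # One bulk string-to-float parse plus one bulk reduction,
    # instead of a Python-level loop per row.
    arr = np.asarray(data, dtype=float)
    return arr.mean(axis=1)
```

Re-profiling after a change like this typically shows the former hot lines collapsing to a fraction of a percent.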
Best practices make %lprun output actionable and reliable:
- Install line_profiler and load the extension (%load_ext line_profiler), then run %lprun -f function_name function_call(). No decorator is needed in a notebook; for standalone scripts, decorate the function with @profile and run kernprof -l script.py instead.
- Focus on hot functions first: let cProfile tell you which functions are expensive, then drill into their lines.
- Sort by % Time or Time; the highest percentages are your optimization targets.
- Look at Hits × Per Hit: lines with high hit counts and moderate per-hit time (inner loops) are prime candidates for vectorization.
- Modern tip: use scalene or pyinstrument for sampling-based profiling, which has lower overhead than line_profiler on long-running code.
- Visualize function-level data: snakeviz and gprof2dot turn cProfile output into call graphs and flame graphs that complement %lprun's line detail.
- In production, profile on representative data, since small inputs hide bottlenecks, and run without debug overhead (assertions, verbose logging) so timings reflect reality.
- Combine with timeit for top-level timing: %lprun shows line-level detail, while %timeit validates the overall speedup after a fix.
- Prefer NumPy/Pandas/Polars vectorization over loops; profiling often shows pure-Python loops running 10–100× slower.
- Use generators for large data to avoid materializing lists unnecessarily.
%lprun output turns vague slowdowns into precise line-level insights — find the real bottlenecks, optimize effectively, and prove your fixes work. In 2026, profile early and often, focus on high % Time lines, vectorize loops, and track performance over time. Master line profiling, and you’ll write code that runs fast — not just correct — at scale.
Next time your code feels slow — don’t guess. Run %lprun. It’s Python’s cleanest way to ask: “Which line is eating my time?” — and get an exact answer.