The itertools module is one of Python’s most powerful standard library tools for working with iterators and iterables — it provides fast, memory-efficient functions for combining, slicing, filtering, grouping, permuting, and generating sequences without writing verbose loops. In 2026, itertools remains essential — used constantly for data processing, combinatorial tasks, streaming pipelines, parallel iteration, and optimizing code in pandas, Polars, and production systems. Its functions are implemented in C, making them faster than equivalent Python loops, and many return lazy iterators — perfect for large or infinite data.
Here’s a complete, practical guide to the itertools module: core functions for combining, generating, slicing, filtering, and grouping, real-world patterns, and modern best practices with type hints, lazy evaluation, and scalability.
itertools.chain(*iterables) concatenates multiple iterables into one long iterator — lazy and memory-efficient, great for merging streams or flattening shallow nested structures.
import itertools
a = [1, 2, 3]
b = ["a", "b", "c"]
c = itertools.chain(a, b)
print(list(c)) # [1, 2, 3, 'a', 'b', 'c']
# Chain generators or files — no full lists created
def gen1(): yield from range(3)
def gen2(): yield from "xyz"
print(list(itertools.chain(gen1(), gen2()))) # [0, 1, 2, 'x', 'y', 'z']
itertools.product(*iterables, repeat=1) generates the Cartesian product — all possible combinations of items — as tuples. It’s the lazy version of nested loops.
colors = ["red", "green"]
sizes = ["S", "M", "L"]
for combo in itertools.product(colors, sizes):
    print(combo)
# Output (one tuple per line):
# ('red', 'S') ('red', 'M') ('red', 'L') ('green', 'S') ('green', 'M') ('green', 'L')
# With repeat for powers
print(list(itertools.product([0, 1], repeat=3))) # All 3-bit combinations
itertools.combinations(iterable, r) and itertools.permutations(iterable, r=None) generate combinations (order doesn’t matter) or permutations (order matters) — lazy and useful for combinatorial tasks.
items = ["A", "B", "C"]
print(list(itertools.combinations(items, 2))) # [('A', 'B'), ('A', 'C'), ('B', 'C')]
print(list(itertools.permutations(items, 2))) # [('A', 'B'), ('A', 'C'), ('B', 'A'), ('B', 'C'), ('C', 'A'), ('C', 'B')]
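Grouping, mentioned in the overview, is handled by itertools.groupby(iterable, key=None) — it batches *consecutive* items that share a key, so the input must already be sorted by that same key. A minimal sketch:

```python
import itertools

words = ["apple", "avocado", "banana", "blueberry", "cherry"]
# groupby only groups consecutive items — sort by the key first
words.sort(key=lambda w: w[0])
for letter, group in itertools.groupby(words, key=lambda w: w[0]):
    print(letter, list(group))
# a ['apple', 'avocado']
# b ['banana', 'blueberry']
# c ['cherry']
```

Each group is itself a lazy iterator that is invalidated once you advance to the next key, so materialize it (list(group)) if you need it later.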
Real-world pattern: parallel processing with zip() and itertools — combine sequences, filter lazily, or batch data for efficient pipelines.
from collections import Counter

# Pair IDs and values from two streams, count frequencies
ids = ["user1", "user2", "user1"]
values = [10, 20, 15]
paired = itertools.zip_longest(ids, values, fillvalue="unknown")  # pads the shorter stream if lengths differ
counts = Counter(id for id, _ in paired if id != "unknown")
print(counts) # Counter({'user1': 2, 'user2': 1})
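The batching mentioned above can be sketched with islice() — here as a hypothetical helper, batched_islice, that works on any iterator (Python 3.12+ ships itertools.batched with the same behavior built in):

```python
import itertools

def batched_islice(iterable, n):
    """Yield successive tuples of up to n items, lazily."""
    it = iter(iterable)
    # islice pulls at most n items per pass; an empty tuple means exhaustion
    while batch := tuple(itertools.islice(it, n)):
        yield batch

print(list(batched_islice(range(10), 4)))
# [(0, 1, 2, 3), (4, 5, 6, 7), (8, 9)]
```

Because the helper never holds more than one batch in memory, it works just as well on a generator or a file object as on a list.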
Best practices make itertools usage fast, readable, and scalable:
- Prefer itertools over nested loops or manual iteration — product() is clearer and faster than two nested for loops.
- Use lazy functions (chain, product, combinations) with generators — constant memory for large data.
- Add type hints for clarity — e.g. Iterator[tuple[str, int]] — they improve readability and mypy checks.
- Modern tip: combine with Polars for large tabular data — pl.concat([df1, df2]) or pl.DataFrame.join(...) often replaces manual itertools chaining.
- In production, wrap itertools over external data (files, APIs) in try/except — handle bad items gracefully.
- Use islice() for slicing generators — itertools.islice(gen, 1000) takes the first 1000 items lazily.
- Avoid materializing large iterators unnecessarily — use next(), takewhile, or for loops to consume only what you need.
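A quick sketch of the islice()/takewhile tips above, using a hypothetical infinite sensor_readings() generator (the name and data are illustrative, not from a real API):

```python
import itertools

def sensor_readings():
    """Simulated infinite stream of readings."""
    n = 0
    while True:
        yield n * 1.5
        n += 1

# Take the first 5 readings lazily — no risk of an infinite loop
first_five = list(itertools.islice(sensor_readings(), 5))
print(first_five)  # [0.0, 1.5, 3.0, 4.5, 6.0]

# Consume only while a condition holds, then stop pulling
low = list(itertools.takewhile(lambda x: x < 5, sensor_readings()))
print(low)  # [0.0, 1.5, 3.0, 4.5]
```

Both calls pull items one at a time from the generator, so memory stays constant even though the underlying stream never ends.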
The itertools module turns complex iteration into clean, fast, memory-efficient operations — combining, generating, slicing, and grouping with C-level speed. In 2026, use chain for merging, product for combinations, zip_longest for alignment, and type hints for safety. Master itertools, and you’ll process sequences, streams, and data with elegance and performance.
Next time you need to merge, pair, combine, or generate from multiple sources — reach for itertools. It’s Python’s cleanest way to say: “Handle these iterables together — efficiently.”