Bridge (separate “what you do” from “how it’s done”)
When to use
- You have two dimensions that vary independently: e.g., Export logic (plain vs versioned) and Storage backend (S3 vs local).
- You want to mix & match without combinatorial subclasses (no S3VersionedExporter, LocalPlainExporter, …).
- You need to evolve abstractions (features) and implementations (backends) separately.
Avoid when only one axis varies—use Adapter or Strategy instead.
Diagram (text)
Abstraction:   Exporter ── export(name, rows)
                  │
                  └─ holds a reference to
Implementor:   Storage (put(bucket, key, data))
                  ▲                 ▲
          S3Store (impl)     LocalStore (impl)
Refined Abstraction: VersionedExporter (adds prefix) → still uses Storage
Step-by-step idea
- Define the Implementor interface (Storage.put).
- Implement different Storage backends (S3, Local).
- Define the Abstraction (Exporter) that uses a Storage.
- Add Refined Abstractions (e.g., VersionedExporter) without touching storages.
Python example (≤40 lines, type-hinted)
from __future__ import annotations
from dataclasses import dataclass
from typing import Protocol, Iterable, Callable
from pathlib import Path
import json

class Storage(Protocol):
    def put(self, bucket: str, key: str, data: bytes) -> None: ...

@dataclass
class S3Store:
    c: object  # e.g., boto3 client
    def put(self, b: str, k: str, d: bytes) -> None: self.c.put_object(Bucket=b, Key=k, Body=d)

@dataclass
class LocalStore:
    root: Path
    def put(self, b: str, k: str, d: bytes) -> None:
        p = self.root / b / k; p.parent.mkdir(parents=True, exist_ok=True); p.write_bytes(d)

Encode = Callable[[Iterable[dict]], bytes]
def encode_json(rows: Iterable[dict]) -> bytes: return json.dumps(list(rows), separators=(",", ":")).encode()

@dataclass
class Exporter:  # Abstraction
    storage: Storage
    encode: Encode = encode_json
    bucket: str = "exports"
    def export(self, name: str, rows: Iterable[dict]) -> str:
        key = f"{name}.json"
        self.storage.put(self.bucket, key, self.encode(rows))
        return key

@dataclass
class VersionedExporter(Exporter):  # Refined Abstraction
    prefix: str = "2025-11-06"
    def export(self, name: str, rows: Iterable[dict]) -> str:
        key = f"{self.prefix}/{name}.json"
        self.storage.put(self.bucket, key, self.encode(rows))
        return key
Tiny pytest (cements it)
def test_bridge_mix_and_match(tmp_path):
    class FakeS3:
        def __init__(self): self.store = {}
        def put_object(self, Bucket, Key, Body): self.store[(Bucket, Key)] = Body
    s3 = S3Store(FakeS3())
    local = LocalStore(tmp_path)
    plain = Exporter(storage=s3)
    ver = VersionedExporter(storage=local, bucket="metrics", prefix="2025-11-06")
    k1 = plain.export("events", [{"a": 1}])
    k2 = ver.export("events", [{"a": 2}])
    assert ("exports", "events.json") in s3.c.store
    assert (tmp_path / "metrics" / "2025-11-06" / "events.json").exists()
    assert k1 == "events.json" and k2.endswith("events.json")
Trade-offs & pitfalls
- Pros: Clean cross-product flexibility; evolve features and backends separately; simpler tests.
- Cons: More types than a single class; adds indirection.
- Pitfalls:
- If only one axis varies, Bridge is overkill—use Adapter/Strategy.
- Letting Abstractions depend on vendor details: keep those inside Storage impls.
- Hidden coupling via filenames/metadata: define the contract (key format, encoding) clearly (see the sketch below).
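One way to tame the key-format coupling is to isolate key construction in a single overridable hook, so a refined abstraction changes only that method. A minimal sketch reusing Storage and encode_json from the example above; ContractExporter, DatedExporter, and key_for are illustrative names, not part of the example:

@dataclass
class ContractExporter:
    storage: Storage
    bucket: str = "exports"
    def key_for(self, name: str) -> str:  # the documented contract: "<name>.json"
        return f"{name}.json"
    def export(self, name: str, rows: Iterable[dict]) -> str:
        key = self.key_for(name)
        self.storage.put(self.bucket, key, encode_json(rows))
        return key

class DatedExporter(ContractExporter):
    def key_for(self, name: str) -> str:  # override only the key format; export() stays shared
        return f"2025-11-06/{name}.json"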
Pythonic alternatives
- Plain DI (what we did): pass a Storage object into services; don't sweat the "Bridge" name.
- Adapter + DI: use adapters for vendors (S3Store, GCSStore) and inject them; already "Bridge-like."
- Strategy for encoding: you can also swap encode (JSON/Parquet) independently without new classes (see the sketch below).
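For the encoding axis, any callable matching Encode (Callable[[Iterable[dict]], bytes]) can be injected. A small sketch assuming the classes from the example above; encode_ndjson and the /tmp/exports path are illustrative:

def encode_ndjson(rows: Iterable[dict]) -> bytes:
    # Another Encode strategy: one JSON object per line.
    return "".join(json.dumps(r, separators=(",", ":")) + "\n" for r in rows).encode()

# Same Exporter, different encoding; no new subclass needed.
exporter = Exporter(storage=LocalStore(Path("/tmp/exports")), encode=encode_ndjson)
exporter.export("events", [{"a": 1}, {"a": 2}])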
Mini exercise
Create GCSStore with upload_blob(bucket, key, data). Prove you can use VersionedExporter(storage=GCSStore(...)) with no changes to exporters. Optional: add a Parquet encoder and pass it into Exporter(encode=encode_parquet).
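One possible shape for the adapter, reading the exercise as wrapping a client that exposes upload_blob(bucket, key, data); that method name comes from the exercise wording, not from any real GCS SDK, and the Parquet encoder is left to you:

@dataclass
class GCSStore:
    client: object  # anything exposing upload_blob(bucket, key, data)
    def put(self, bucket: str, key: str, data: bytes) -> None:
        # Adapt the vendor call to the Storage protocol; exporters stay untouched.
        self.client.upload_blob(bucket, key, data)

# Mixes with any existing abstraction, no exporter changes:
# VersionedExporter(storage=GCSStore(client), bucket="metrics", prefix="2025-11-06")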
Checks (quick checklist)
- Abstraction references an Implementor interface (here, Storage).
- You can mix any Abstraction with any Implementor.
- No vendor details leak into Abstractions.
- Tests use different combinations to prove independence.
- Keep both sides small and cohesive.




