Chapter 6 — Units of Analysis and Scale Collapse

Many systems “work” and still fail—because they are applied at the wrong scale.

A team-level method used at org-level becomes bureaucracy. An org-level governance model used at team-level becomes control theater. A system designed for one unit of analysis collapses when moved to another without redesign.

This chapter gives you a rule:

A system is only valid for the unit of analysis it was designed to control.

The Failure This Chapter Prevents

Observable failure: a system succeeds locally but fails when scaled, copied, or mandated.

Symptoms:

  • A practice that worked for one team becomes painful across many teams
  • “Standardization” increases coordination cost and reduces speed
  • Teams comply with a system without believing in it
  • Leadership adds governance to compensate for drift
  • Local autonomy and global coherence fight constantly

Root cause:

  • The system’s assumptions match one unit of analysis, but it is applied to another.

What “Unit of Analysis” Means

The unit of analysis is the smallest boundary inside which the system’s logic is true and enforceable.

It defines:

  • who runs the system
  • who can enforce constraints
  • what “success” means
  • what coupling exists
  • how feedback loops work

This book uses five common units:

  • Individual
  • Team
  • Multi-team
  • Organization
  • Ecosystem / Market

A system that does not name its unit of analysis will be misapplied by default.

The Five Units and What Typically Changes

Individual

What dominates:

  • personal attention, habits, cognition
  • high autonomy, low coordination

System risks:

  • over-structuring
  • self-optimization that harms team interfaces

Common decision types optimized:

  • sequencing, focus, repair (personal bottlenecks)

Team

What dominates:

  • shared execution and local coordination
  • stable context and feedback

System risks:

  • ignoring external dependencies
  • optimizing team throughput while harming adjacent systems

Common decision types optimized:

  • scope, sequencing, ownership (inside the team)

Multi-team

What dominates:

  • dependency management, interface clarity
  • shared standards, integration risk

System risks:

  • coordination overload
  • governance replacing interface design

Common decision types optimized:

  • ownership, sequencing, repair (across boundaries)

Organization

What dominates:

  • investment allocation, strategy coherence
  • incentives, career ladders, portfolio control

System risks:

  • false uniformity
  • legibility pressure overriding truth
  • politics shaping artifacts

Common decision types optimized:

  • investment, priority, scope (portfolio-level)

Ecosystem / Market

What dominates:

  • competition, regulation, network effects
  • evolution and selection pressures

System risks:

  • internal optimization that ignores external reality
  • slow adaptation to market shifts

Common decision types optimized:

  • investment, diagnosis, repair (strategic adaptation)

Scale Collapse: The Most Common Patterns

Scale collapse is what happens when a system’s constraints, artifacts, and enforcement mechanisms do not survive a change in scale.

Pattern 1: The “Copy-Paste Team” Fallacy

A team has a successful system and assumes other teams can copy it.

What breaks:

  • different constraints
  • different coupling
  • different skill distribution
  • different external dependencies

Why it happens:

  • success creates narrative confidence
  • copying is cheaper than redesign

Correction:

  • systems must be revalidated at each unit of analysis
  • reuse concepts, not rules

Pattern 2: Standardization as a Substitute for Interfaces

Organizations standardize process because interfaces are unclear.

What breaks:

  • teams lose local adaptability
  • exceptions multiply
  • compliance becomes the main activity

Why it happens:

  • standardization is legible
  • interface design is hard and political

Correction:

  • at multi-team scale, prefer controlling interfaces over controlling methods
  • standardize contracts and boundaries, not rituals
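One way to read "standardize contracts and boundaries, not rituals" in code terms: fix the shape of what crosses a team boundary, and leave how each team produces it unconstrained. A minimal sketch, assuming an invented cross-team contract (the `ReleaseInterface` name and its methods are illustrative, not from this book):

```python
# Hypothetical cross-team contract: standardize the output each team
# must expose at its boundary, not the internal process that produces it.
from typing import Protocol


class ReleaseInterface(Protocol):
    """What any team must publish at its boundary; the 'how' stays local."""

    def changelog(self) -> str: ...
    def rollback(self) -> None: ...


# One team may run Scrum, another Kanban; both can satisfy the contract.
class TeamARelease:
    def changelog(self) -> str:
        return "v1.2: fixed auth bug"

    def rollback(self) -> None:
        pass  # local rollback procedure; invisible to other teams


# Structural typing: TeamARelease never inherits from ReleaseInterface,
# yet it conforms. The contract constrains the boundary, not the ritual.
release: ReleaseInterface = TeamARelease()
```

The design choice mirrors the correction above: governance checks conformance to the interface, not adherence to a method.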

Pattern 3: Metrics That Don’t Survive Aggregation

Team metrics don’t add up cleanly at org scale.

Example failures:

  • “velocity” used as org productivity
  • “story points” compared across teams
  • “utilization” used to predict throughput

Why it happens:

  • leadership needs legibility
  • aggregation is tempting

Correction:

  • change metrics with unit of analysis
  • enforce “no cross-team comparability” rules where needed
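The aggregation failure is easy to see with invented numbers (both teams and their point scales below are hypothetical):

```python
# Two teams calibrate story points differently, so summing or comparing
# raw points across them misleads at org scale.

# Team A sizes a typical task at 3 points; Team B sizes comparable work at 8.
team_a = {"tasks_done": 10, "points_per_task": 3}
team_b = {"tasks_done": 5, "points_per_task": 8}

points_a = team_a["tasks_done"] * team_a["points_per_task"]  # 30
points_b = team_b["tasks_done"] * team_b["points_per_task"]  # 40

# Org-level aggregation says Team B "produced more" (40 > 30),
# even though Team A finished twice as many comparable tasks.
assert points_b > points_a
assert team_a["tasks_done"] == 2 * team_b["tasks_done"]
```

The metric is valid inside each team's own calibration and stops being valid the moment the unit of analysis changes.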

Pattern 4: Authority Mismatch

A system is mandated at a level where enforcement is weak.

Example:

  • an org mandates a team-level practice but cannot enforce quality without creating heavy oversight

Failure mode:

  • surface compliance + hidden divergence
  • governance inflation

Correction:

  • enforcement must match authority: either delegate authority to the unit running the system, or accept the cost of governance (explicitly)

Pattern 5: Artifact Explosion

As scale grows, artifacts multiply.

What breaks:

  • teams spend time reconciling representations instead of making decisions
  • the system becomes reporting infrastructure

Correction:

  • enforce artifact minimalism: one artifact per recurring decision domain, with explicit “source of truth” rules

The “Scale Triangle”: Autonomy, Coherence, Legibility

At scale, systems face a three-way tradeoff:

  • Autonomy: teams can act locally
  • Coherence: the org moves in compatible directions
  • Legibility: leadership can understand and steer

Most bad systems try to maximize all three.

That produces:

  • heavy reporting (legibility)
  • heavy governance (coherence)
  • reduced local adaptability (autonomy loss)

You must choose what to sacrifice and admit it.

Choosing the Correct Unit of Analysis

Ask these questions:

  1. Where does the failure actually occur? Inside a team, between teams, or in portfolio decisions?
  2. Who has the authority to enforce constraints? If enforcement requires leadership, this is not a team-level system.
  3. Where is the coupling? If coupling is mostly cross-team, a team-only system will not fix it.
  4. What feedback loop matters? If learning requires market signals, team-only metrics will mislead.
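The questions above can be sketched as a rough diagnostic. This is a simplification, not a decision procedure from the book, and every field name and label below is invented:

```python
# A sketch of the four questions as a lookup, in the order the chapter
# asks them. All argument values ("portfolio", "cross-team", ...) are
# illustrative labels, not a taxonomy the book defines.

def suggest_unit(failure_location: str,
                 enforcer: str,
                 coupling: str,
                 feedback_source: str) -> str:
    """Map answers to the chapter's four questions to a candidate unit."""
    if feedback_source == "market":
        return "ecosystem/market"      # learning requires market signals
    if failure_location == "portfolio" or enforcer == "leadership":
        return "organization"          # enforcement needs leadership
    if coupling == "cross-team" or failure_location == "between-teams":
        return "multi-team"            # team-only systems cannot fix this
    if failure_location == "inside-team":
        return "team"
    return "individual"


# Example: failure shows up between teams and coupling is cross-team,
# so a team-only system will not fix it.
unit = suggest_unit("between-teams", "shared leads", "cross-team", "internal")
# → "multi-team"
```

The ordering matters: market feedback and leadership-only enforcement override local signals, which is exactly the authority-mismatch pattern described earlier.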

Misuse Model: How This Chapter Gets Misapplied

Misuse 1: “We’re unique, so nothing scales”

Teams use scale differences as an excuse to avoid shared constraints.

Correction:

  • don’t standardize rituals, standardize interfaces and decision outputs.

Misuse 2: Treating unit of analysis as org chart

The unit of analysis is about coupling and authority, not reporting lines.

Correction:

  • map actual dependency networks, not formal structure.

Misuse 3: Forcing identical artifacts across units

Organizations mandate a single artifact format for everyone.

Correction:

  • artifacts should be comparable only if the decision type and object of control are comparable.

The Non-Negotiable Rule Introduced Here

A system definition must include:

  • its unit of analysis
  • the authority boundary that enforces it
  • what happens when it is applied outside that unit

If it can be applied at any scale without modification, it is either trivial or dishonest.
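The three required elements can be expressed as a simple data structure; a definition that cannot fill all three fields is incomplete by this rule. The field names and the example below are my own, not the book's:

```python
# Sketch of the non-negotiable rule as a record: a system definition
# must carry its unit, its enforcing authority, and its out-of-unit
# failure mode. Field names are illustrative.
from dataclasses import dataclass


@dataclass
class SystemDefinition:
    name: str
    unit_of_analysis: str       # e.g. "team", "multi-team", "organization"
    authority_boundary: str     # who can actually enforce the constraints
    out_of_unit_behavior: str   # what breaks when applied at another scale


# Hypothetical example of a complete definition:
retro = SystemDefinition(
    name="weekly retrospective",
    unit_of_analysis="team",
    authority_boundary="the team itself",
    out_of_unit_behavior="mandated org-wide, it degrades into compliance theater",
)
```

Writing the third field forces the honesty the rule demands: a system with no out-of-unit failure mode is, per the rule above, either trivial or dishonest.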

Exit Condition for This Chapter

Before moving on, write:

  1. The unit of analysis for the system you are evaluating or designing
  2. The authority that enforces its constraints
  3. One example of how it would fail if scaled up or down
  4. One redesign you would need to make it valid at a different unit