Chapter 7 — Causality Models¶
A system is a machine built on a belief about causality: how actions produce outcomes.
If that belief is wrong for the situation, the system can be perfectly executed and still fail. You will get confident plans, clean artifacts, and consistent disappointment.
This chapter gives you a practical rule:
A mismatch between a system's causality model and the reality of the problem invalidates the system.
The Failure This Chapter Prevents¶
Observable failure: teams apply planning and governance systems that assume a predictable world to situations governed by feedback, constraints, or social dynamics.
Symptoms:
- plans “work on paper” and fail in execution
- postmortems repeat the same surprises
- the system produces certainty, not accuracy
- people become cynical about process (“we all know it won’t happen anyway”)
- the organization oscillates between rigid planning and chaotic firefighting
Root cause:
- the system encodes the wrong causality model.
What a Causality Model Is¶
A causality model is the implicit answer to:
- “If we do X, what happens next?”
- “What can we predict, and what must we learn?”
- “What kind of evidence changes our mind?”
Every system encodes one model by default. You either choose it consciously or inherit it accidentally.
This book uses five dominant models:
- Linear planning
- Feedback loops
- Constraints & flow
- Evolution / selection
- Socio-technical dynamics
Most real environments contain more than one, but one usually dominates the failure you’re addressing.
Model 1: Linear Planning¶
Core assumption¶
Cause → effect is sufficiently stable that you can plan sequences and expect them to hold.
When it fits¶
- well-understood work
- stable requirements
- low uncertainty
- repeatable execution
- clear ownership and interfaces
Typical systems built on it¶
- Gantt-style project planning
- phase gates
- standard operating procedures
- deterministic roadmaps
What it optimizes¶
- sequencing decisions
- scope control
- predictability under stable conditions
Failure modes¶
- overconfidence in estimates
- late discovery of errors
- brittle plans that discourage learning
Smell test: if the work keeps changing after you “plan it,” linear planning is being used outside its domain.
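The causal bet behind a linear plan can be made explicit as a dependency graph plus estimates; everything the plan promises follows from assuming those edges and durations hold. A minimal sketch in Python (the tasks and durations are invented for illustration):

```python
from graphlib import TopologicalSorter

# Hypothetical plan: task -> (duration in days, prerequisite tasks).
plan = {
    "design":  (5, []),
    "build":   (10, ["design"]),
    "test":    (4, ["build"]),
    "release": (1, ["test"]),
}

# A linear plan assumes this ordering is stable enough to schedule against.
order = list(
    TopologicalSorter({task: set(deps) for task, (_, deps) in plan.items()}).static_order()
)

earliest_finish = {}
for task in order:
    duration, deps = plan[task]
    start = max((earliest_finish[d] for d in deps), default=0)
    earliest_finish[task] = start + duration

print(order)            # ['design', 'build', 'test', 'release']
print(earliest_finish)  # design: 5, build: 15, test: 19, release: 20
```

When the smell test above fires, it is these assumptions, not the arithmetic, that have broken.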
Model 2: Feedback Loops¶
Core assumption¶
You cannot reliably predict outcomes; you must act, observe, and adjust.
When it fits¶
- product discovery
- strategy under uncertainty
- experimentation
- user behavior change
- market-dependent outcomes
Typical systems built on it¶
- hypothesis-driven development
- OKR variants with learning cycles
- continuous discovery
- experiment pipelines
What it optimizes¶
- investment decisions under uncertainty
- diagnosis and learning
- adaptation speed
Failure modes¶
- endless experimentation without commitment
- metrics theater (“we measure everything, decide nothing”)
- local learning that doesn’t translate into action
Smell test: if “learning” does not change priorities or scope, the loop is decorative.
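One way to keep the loop from being decorative is to build the decision into the artifact itself. A minimal sketch (the schema and the example experiment are illustrative assumptions, not a prescribed format): every record names the hypothesis, the evidence threshold agreed up front, and the commitment the result forces.

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    hypothesis: str           # what we believe will happen if we act
    metric: str               # the evidence that would change our mind
    success_threshold: float  # agreed before the experiment runs
    observed: float | None = None

    def decision(self) -> str:
        # The loop is only real if the result maps to a commitment.
        if self.observed is None:
            return "still running — no decision yet"
        return "commit and scale" if self.observed >= self.success_threshold else "kill or redesign"

# Hypothetical example.
exp = Experiment(
    hypothesis="Simplified onboarding raises week-1 retention",
    metric="week-1 retention",
    success_threshold=0.40,
    observed=0.33,
)
print(exp.decision())  # kill or redesign
```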
Model 3: Constraints & Flow¶
Core assumption¶
Throughput is governed by bottlenecks, queues, and capacity constraints, not by intention or effort.
When it fits¶
- delivery throughput problems
- reliability operations
- build/release pipelines
- support queues and escalations
- multi-stage handoffs
Typical systems built on it¶
- kanban with explicit WIP limits
- SRE incident management
- theory of constraints applied to delivery
- flow metrics and queue policies
What it optimizes¶
- sequencing decisions around bottlenecks
- repair decisions (remove constraints)
- predictability through reduced WIP and variance
Failure modes¶
- treating flow metrics as performance evaluation (gaming)
- local flow optimization that increases downstream load
- constraints applied without authority (paper rules)
Smell test: if work is “in progress” everywhere, your real system is uncontrolled WIP.
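The causal claim here is quantitative. For a reasonably stable system, Little's law relates the flow quantities: average cycle time ≈ WIP ÷ throughput, so starting more work lengthens everything instead of finishing anything sooner. A minimal sketch (the throughput figure and limits are made up):

```python
# Little's law for a stable system: cycle_time ≈ wip / throughput.
def expected_cycle_time(wip: int, throughput_per_week: float) -> float:
    return wip / throughput_per_week

# Hypothetical team finishing 5 items per week.
for wip in (10, 20, 40):
    weeks = expected_cycle_time(wip, throughput_per_week=5)
    print(f"WIP={wip:>2} -> ~{weeks:.0f} weeks per item")
# WIP=10 -> ~2 weeks, WIP=20 -> ~4, WIP=40 -> ~8:
# starting more work slows every item down.

def can_start(new_items: int, current_wip: int, wip_limit: int) -> bool:
    """A WIP limit is a queue policy: refuse to start work the system cannot flow."""
    return current_wip + new_items <= wip_limit
```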
Model 4: Evolution / Selection¶
Core assumption¶
Success emerges through variation, selection, and retention over time. You shape conditions more than you control outcomes.
When it fits¶
- scaling organizations
- platform ecosystems
- innovation portfolios
- architectural evolution
- competitive environments
Typical systems built on it¶
- portfolio bets with explicit kill criteria
- evolutionary architecture approaches
- internal platform product models
- innovation funnels with selection gates
What it optimizes¶
- investment decisions across uncertain futures
- adaptability and resilience
- avoidance of single-point bets
Failure modes¶
- “innovation theater” without real selection pressure
- too much variation (fragmentation)
- too much selection (premature standardization)
Smell test: if nothing ever dies, you are not selecting; you are accumulating.
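Selection pressure can be written down as a kill rule attached to each bet, agreed before the results arrive. A minimal sketch (the bets, metrics, and thresholds are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Bet:
    name: str
    kill_metric: str      # the evidence we agreed to judge the bet by
    kill_threshold: float
    observed: float

def review(portfolio: list[Bet]) -> None:
    # Variation is cheap; selection is the scarce act. Every review names what dies.
    for bet in portfolio:
        verdict = "retain" if bet.observed >= bet.kill_threshold else "KILL"
        print(f"{bet.name}: {bet.kill_metric}={bet.observed} (needs {bet.kill_threshold}) -> {verdict}")

# Hypothetical portfolio.
review([
    Bet("internal platform pilot", "teams onboarded", 5, observed=7),
    Bet("ml-powered search", "relevance uplift", 0.10, observed=0.02),
])
```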
Model 5: Socio-Technical Dynamics¶
Core assumption¶
Outcomes are shaped by incentives, authority, trust, identity, and power, interacting with technical constraints.
When it fits¶
- cross-team cooperation failures
- ownership ambiguity
- governance, compliance, risk management
- cultural and incentive-driven behavior
- any environment where conflict is avoided rather than resolved
Typical systems built on it¶
- RACI-like ownership models (when enforced)
- interface contracts with escalation rules
- decision rights frameworks
- governance mechanisms that define authority and defaults
What it optimizes¶
- ownership decisions
- conflict resolution pathways
- coordination cost reduction through clear boundaries
Failure modes¶
- systems that pretend politics doesn’t exist
- “consensus” systems that create veto power everywhere
- enforcement collapse when authority is unclear
Smell test: if the main blocker is “getting people to agree,” you are in socio-technical territory.
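Authority and defaults can be recorded as plainly as any technical interface. A minimal sketch (the decision areas, owners, and escalation path are placeholders): each area has exactly one decider and a default that applies when agreement stalls, so disagreement escalates instead of blocking.

```python
# Hypothetical decision-rights map: area -> (decider, default when agreement stalls).
DECISION_RIGHTS = {
    "api breaking changes": ("platform team", "reject unless a migration plan exists"),
    "incident severity":    ("on-call lead", "treat as SEV-2 until downgraded"),
}
ESCALATION = "engineering director"  # a named path, not "get everyone to agree"

def who_decides(area: str) -> str:
    decider, default = DECISION_RIGHTS.get(area, (ESCALATION, "escalate"))
    return f"{area}: {decider} decides; default = {default}"

print(who_decides("api breaking changes"))
print(who_decides("tooling budget"))  # unmapped areas escalate rather than stall
```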
Choosing the Right Model¶
Start from the failure and ask which statement is most true:
- Linear: “We mostly know what to do; we just need to execute consistently.”
- Feedback: “We don’t know what will work until we test and learn.”
- Flow: “We know what to do, but work doesn’t move.”
- Evolution: “We need to explore options and let winners emerge.”
- Socio-technical: “The hard part is authority, incentives, and coordination.”
Then confirm with evidence:
- repeated surprises → feedback or socio-technical
- chronic queues and stuck work → flow
- fragmentation and drift at scale → evolution
- stable repeatable work → linear
Model Mismatch: The Common Failure Combinations¶
Linear planning applied to feedback problems¶
Result:
- fixed roadmaps with fragile assumptions
- large batches, late learning, public failures
Correction:
- shorten cycles; make learning artifacts part of the system.
Feedback loops applied to flow problems¶
Result:
- more experimentation, same bottleneck
- “improvements” that don’t change throughput
Correction:
- identify the constraint; set WIP and queue policies.
Flow thinking applied to socio-technical problems¶
Result:
- dashboards and SLAs added, but ownership conflicts persist
- people route around the system
Correction:
- clarify authority and interfaces; define defaults and escalation.
Evolution thinking applied without selection¶
Result:
- endless pilots, tool sprawl, architectural fragmentation
Correction:
- add explicit selection gates and kill rules.
The Artifact Must Match the Model¶
Artifacts encode causality assumptions.
Examples:
- Linear planning artifacts: schedules, dependency graphs, milestones
- Feedback artifacts: hypotheses, experiment results, decision logs
- Flow artifacts: WIP limits, cumulative flow, queue policies, bottleneck maps
- Evolution artifacts: portfolio maps, bet tracking, kill criteria, standards lifecycle
- Socio-technical artifacts: ownership maps, decision rights, interface contracts, escalation rules
If you use the wrong artifact, you will observe the wrong reality.
Misuse Model: How This Chapter Gets Misapplied¶
Misuse 1: Treating models as categories rather than lenses¶
People argue “this is a flow problem” as if it were an identity, rather than a lens chosen for a specific failure.
Correction:
- choose the dominant model for the failure you’re addressing, not for the entire organization.
Misuse 2: Using “socio-technical” as an excuse¶
Teams label problems as political in order to avoid doing the engineering work.
Correction:
- if the bottleneck is measurable work stuck in queues, start with flow.
Misuse 3: Over-fitting to uncertainty¶
Teams treat everything as unknown to avoid commitment.
Correction:
- uncertainty does not eliminate the need for decisions; it changes the decision cadence and artifact type.
The Non-Negotiable Rule Introduced Here¶
A system definition must state:
- its dominant causality model
- what evidence is valid within that model
- what artifact represents the model’s truth
- how the system fails under model mismatch
If a system can’t name its causality assumptions, it will smuggle them in anyway.
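The rule lends itself to a mechanical check: a system definition missing any of these statements is rejected rather than silently assumed. A minimal sketch (the field names and example values are illustrative):

```python
from dataclasses import dataclass, fields

MODELS = {"linear", "feedback", "flow", "evolution", "socio-technical"}

@dataclass
class SystemDefinition:
    dominant_model: str    # which causality model the system assumes
    valid_evidence: str    # what counts as evidence within that model
    truth_artifact: str    # the artifact that represents the model's truth
    mismatch_failure: str  # how the system fails if the context shifts

    def validate(self) -> None:
        if self.dominant_model not in MODELS:
            raise ValueError(f"unknown causality model: {self.dominant_model!r}")
        for f in fields(self):
            if not getattr(self, f.name).strip():
                raise ValueError(f"missing causality assumption: {f.name}")

# Hypothetical example.
SystemDefinition(
    dominant_model="flow",
    valid_evidence="queue length, cycle time, and aging work items",
    truth_artifact="cumulative flow diagram with WIP limits",
    mismatch_failure="if the blocker is an ownership conflict, WIP limits will not move anything",
).validate()
```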
Exit Condition for This Chapter¶
Write:
- The dominant causality model for your failure (linear / feedback / flow / evolution / socio-technical)
- One piece of evidence that supports that choice
- The artifact type that best represents truth under that model
- One likely mismatch risk if the context changes