# Chapter 3: Why Smart People Design Bad Systems
Bad systems are rarely designed by incompetent people.
They are designed by capable people responding to real pressure, using mental shortcuts that work locally but fail structurally. The problem is not intelligence; it is that system design punishes the very behaviors most organizations reward: speed, confidence, persuasive narratives, and visible activity.
This chapter makes the failure mechanisms explicit so you can recognize them while they are happening.
## The Failure This Chapter Prevents
Observable failure: teams install systems that feel rigorous but degrade decision quality over time.
Symptoms:
- The system is “adopted” but rarely changes hard decisions
- Meetings become more frequent; commitments become less reliable
- Everyone can explain the process; no one can explain why it works
- The system produces artifacts that look official but cannot be challenged
- People defend the system’s identity rather than evaluating its outputs
Underlying pattern:
- People optimize for legitimacy and coherence instead of inspectability and constraint.
## The Core Dynamic: Intelligence Is Not the Constraint
System design fails not for lack of intelligence but because it is a socio-technical engineering problem:
- Social: incentives, status, power, conflict avoidance, identity
- Technical: interfaces, constraints, feedback loops, scaling limits
Smart people often fail here because:
- they are rewarded for persuasive abstraction
- they are punished for exposing uncertainty
Systems require the opposite:
- explicit assumptions
- explicit constraints
- explicit failure modes
## Failure Mechanism 1: Abstraction Drift
Abstraction is useful until it replaces contact with reality.
How it happens:
- A real failure occurs (missed deadlines, churn, conflict)
- People create a broad label (“alignment”, “execution”, “ownership”)
- The label becomes the problem definition
- The system is built to satisfy the label, not the failure
Warning signs:
- the problem statement contains only abstract nouns
- nobody can cite a specific incident
- “we all know what that means” appears frequently
Countermeasure:
- require an Observable Failure Statement that includes (a minimal schema is sketched below):
  - a concrete situation
  - a repeated symptom
  - a measurable consequence
  - who is affected
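A minimal sketch of that statement as a schema, assuming only the four fields listed above; the `ObservableFailure` class, the `is_concrete` check, and the example comments are illustrative rather than prescribed tooling:

```python
from dataclasses import dataclass, fields

@dataclass
class ObservableFailure:
    """One concrete, inspectable failure statement (illustrative schema)."""
    situation: str               # a concrete situation, e.g. "the Q3 checkout rewrite"
    repeated_symptom: str        # what keeps happening, e.g. "releases slip by 2+ weeks"
    measurable_consequence: str  # e.g. "three enterprise renewals delayed"
    affected: str                # who is affected, e.g. "support and the top 20 accounts"

ABSTRACT_LABELS = {"alignment", "execution", "ownership", "velocity"}

def is_concrete(statement: ObservableFailure) -> bool:
    """Reject statements that are empty or consist only of abstract labels."""
    for field in fields(statement):
        value = getattr(statement, field.name).strip().lower()
        if not value or value in ABSTRACT_LABELS:
            return False
    return True
```

If `is_concrete` returns `False`, the problem definition is still a label, not a failure, and the system should not be designed yet.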
## Failure Mechanism 2: Vocabulary Substitution
People replace decisions with words because words feel safer than commitments.
Examples:
- “Let’s align” instead of “Let’s pick a priority”
- “Let’s clarify scope” instead of “Let’s say no to X”
- “We need ownership” instead of “Name an owner and their authority”
Why it’s attractive:
- vocabulary scales socially
- decisions create winners and losers
What it causes:
- semantic debates
- ritualized artifacts
- avoidance of accountability
Countermeasure:
- translate abstract language into one of seven decision types (a translation sketch follows): priority, scope, ownership, sequencing, investment, diagnosis, repair
If you can’t translate it, you don’t have a decision problem yet—you have a political or emotional problem.
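A small sketch of that translation step, assuming only the seven decision types named above; the phrase-to-type mapping is invented for illustration and would be specific to your organization:

```python
from enum import Enum
from typing import Optional

class DecisionType(Enum):
    PRIORITY = "priority"
    SCOPE = "scope"
    OWNERSHIP = "ownership"
    SEQUENCING = "sequencing"
    INVESTMENT = "investment"
    DIAGNOSIS = "diagnosis"
    REPAIR = "repair"

# Illustrative translations of common vocabulary into the decision it is avoiding.
TRANSLATIONS = {
    "let's align": DecisionType.PRIORITY,         # really: pick a priority
    "let's clarify scope": DecisionType.SCOPE,    # really: say no to something
    "we need ownership": DecisionType.OWNERSHIP,  # really: name an owner and their authority
}

def translate(phrase: str) -> Optional[DecisionType]:
    """Return the decision hiding behind the phrase, or None if there is no decision yet."""
    return TRANSLATIONS.get(phrase.strip().lower())
```

A `None` result is the useful signal: it tells you that what you are facing is a political or emotional problem, not a decision problem.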
## Failure Mechanism 3: Framework Stacking
When a system doesn’t work, people often add another system on top.
Typical chain:
- OKRs + Agile + Architecture Review Board + “Alignment” cadence + dashboards
Why it fails:
- the systems optimize different decisions
- constraints conflict
- artifacts compete
- teams learn to perform compliance, not thinking
Warning signs:
- multiple artifacts represent the “same truth” differently
- teams spend more time reconciling artifacts than shipping or learning
- each system has defenders; none has measurable decision improvement
Countermeasure:
- insist on a single “source decision” for each recurring domain (a registry sketch follows this list)
- treat new systems as replacements unless proven otherwise
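One way to make the single-source-decision rule checkable is a registry that refuses a second system per domain; a minimal sketch, with hypothetical domain and system names:

```python
class SourceDecisionRegistry:
    """At most one system may own the source decision for each recurring domain."""

    def __init__(self) -> None:
        self._owners: dict[str, str] = {}  # domain -> owning system

    def register(self, domain: str, system: str) -> None:
        existing = self._owners.get(domain)
        if existing is not None and existing != system:
            # A second system in the same domain is a replacement, not an addition.
            raise ValueError(
                f"{domain!r} is already decided by {existing!r}; "
                f"adopt {system!r} only by retiring it"
            )
        self._owners[domain] = system

registry = SourceDecisionRegistry()
registry.register("quarterly priorities", "OKRs")
# registry.register("quarterly priorities", "alignment cadence")  # raises ValueError
```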
## Failure Mechanism 4: Local Optimization Disguised as Strategy
Smart people can make a system that works for their team and harms the whole.
Examples:
- a prioritization system that optimizes feature delivery but breaks platform reliability
- a velocity system that improves predictability but increases long-term coupling
- a discovery system that generates insights but starves delivery
Why it persists:
- local wins are visible
- systemic costs are delayed and diffused
Countermeasure:
- explicitly name the unit of analysis
- state what the system does not optimize
- add a misuse warning for “success that causes harm elsewhere” (a scope declaration is sketched below)
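A minimal declaration sketch for that countermeasure; the `SystemScope` class and the example values are illustrative:

```python
from dataclasses import dataclass

@dataclass
class SystemScope:
    """What a system optimizes, for whom, and at whose potential expense."""
    unit_of_analysis: str         # e.g. "one delivery team", not "the company"
    optimizes: str                # e.g. "feature lead time"
    does_not_optimize: list[str]  # e.g. ["platform reliability", "long-term coupling"]
    misuse_warning: str           # how local success could cause harm elsewhere

prioritization_system = SystemScope(
    unit_of_analysis="one delivery team",
    optimizes="feature lead time",
    does_not_optimize=["platform reliability", "long-term coupling"],
    misuse_warning="faster delivery here can push integration risk onto the platform team",
)
```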
## Failure Mechanism 5: Unpriced Tradeoffs
Every system makes tradeoffs; bad systems hide them.
Common hidden tradeoffs:
- speed vs correctness
- local autonomy vs global coherence
- innovation vs stability
- predictability vs adaptability
- throughput vs quality
Warning sign:
- the system is described as “best practice” with no cost model
Countermeasure:
- require a “cost of correctness” statement: what you are willing to lose to gain the decision improvement (a small sketch follows)
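A priced tradeoff can be as small as two explicit statements written side by side; a sketch with invented example values:

```python
from dataclasses import dataclass

@dataclass
class PricedTradeoff:
    """A tradeoff is only priced when both sides are written down."""
    gained: str  # the decision improvement you expect
    lost: str    # the cost of correctness: what you are willing to lose to get it

# "Best practice" with no cost model fails this test by construction: the `lost` field is empty.
architecture_review = PricedTradeoff(
    gained="fewer irreversible architecture mistakes",
    lost="roughly one extra week of latency on large changes",
)
```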
## Failure Mechanism 6: Confusing Legibility with Truth
Systems often become tools for making work legible to leadership rather than tools for making work effective.
Legibility pressure creates:
- simplified metrics
- sanitized narratives
- artifacts that look stable even when reality is volatile
Result:
- decision machines become reporting machines
- teams learn to optimize optics
Countermeasure:
- define whether the artifact is for (a purpose check is sketched below):
  - decision-making
  - coordination
  - reporting
If you try to make one artifact do all three, it will be gamed.
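A small check that forces the choice; treating "exactly one purpose" as the rule is one interpretation of the countermeasure above, and the names are illustrative:

```python
from enum import Enum

class ArtifactPurpose(Enum):
    DECISION_MAKING = "decision-making"
    COORDINATION = "coordination"
    REPORTING = "reporting"

def validate_purpose(purposes: set[ArtifactPurpose]) -> None:
    """An artifact asked to serve every purpose at once will be gamed; force a single choice."""
    if len(purposes) != 1:
        raise ValueError(
            f"pick exactly one purpose, got {sorted(p.value for p in purposes)}"
        )

validate_purpose({ArtifactPurpose.DECISION_MAKING})  # fine
# validate_purpose(set(ArtifactPurpose))             # raises: it would be gamed
```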
## Failure Mechanism 7: The Comfort of Completion
Smart people love closed forms: canvases, diagrams, tables.
The danger is that “filled out” feels like “done.”
Warning signs:
- the artifact is celebrated more than the decision it supports
- teams spend time perfecting format
- people ask for templates instead of constraints
Countermeasure:
- enforce an “artifact usefulness test” (sketched in code below):
  - Did this artifact change a real decision within a week?
  - If not, delete it or redesign it.
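The usefulness test as a function; the one-week window comes from the rule above, everything else is a sketch:

```python
from datetime import date, timedelta

def artifact_is_useful(created: date, decisions_changed_on: list[date]) -> bool:
    """Pass only if the artifact changed at least one real decision within a week of creation."""
    deadline = created + timedelta(days=7)
    return any(changed <= deadline for changed in decisions_changed_on)

# A canvas that changed a scope decision three days after it was written passes;
# one that changed nothing gets deleted or redesigned.
print(artifact_is_useful(date(2024, 3, 1), [date(2024, 3, 4)]))  # True
print(artifact_is_useful(date(2024, 3, 1), []))                  # False
```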
## Failure Mechanism 8: Hidden Authority Mismatch
A system can be logically correct and socially impossible.
Authority mismatches happen when:
- the system requires enforcing constraints no one can enforce
- ownership is declared without real decision rights
- escalation paths are undefined or ignored
Warning signs:
- “We all agreed but nothing changed”
- “That’s not my call”
- “Leadership will never go for that”
Countermeasure:
- treat the adoption path as part of the system's validity (a minimal sketch follows), and name:
  - who can run it
  - who can enforce it
  - what default applies when enforcement fails
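A minimal sketch of an adoption path declared alongside the system itself; the field names and example values are illustrative:

```python
from dataclasses import dataclass

@dataclass
class AdoptionPath:
    """Without these answers a system is logically correct and socially impossible."""
    can_run_it: str               # e.g. "any team lead"
    can_enforce_it: str           # e.g. "the engineering director"
    default_when_unenforced: str  # e.g. "last quarter's priorities stand unchanged"

def is_adoptable(path: AdoptionPath) -> bool:
    # Empty answers mean undefined authority, which predicts
    # "we all agreed but nothing changed".
    return all(value.strip() for value in
               (path.can_run_it, path.can_enforce_it, path.default_when_unenforced))
```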
## Failure Mechanism 9: No Misuse Model
Smart people avoid naming misuse because it implies distrust.
But systems without misuse models will be misused immediately, because:
- misuse aligns with incentives
- misuse reduces friction
- misuse is socially safer than confrontation
Countermeasure:
- every system must include (a sketch follows this list):
  - how it degrades when misapplied
  - what behaviors indicate misuse
  - one mitigation per misuse
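A sketch of a misuse model attached to a system definition; the `MisuseMode` fields mirror the three bullets above, and the rest is illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class MisuseMode:
    degradation: str  # how the system degrades when misapplied
    indicator: str    # what behavior signals the misuse
    mitigation: str   # exactly one mitigation for this misuse

@dataclass
class SystemDefinition:
    name: str
    misuse_modes: list[MisuseMode] = field(default_factory=list)

def has_misuse_model(system: SystemDefinition) -> bool:
    """A system without a misuse model will be misused immediately."""
    return bool(system.misuse_modes) and all(
        m.degradation and m.indicator and m.mitigation for m in system.misuse_modes
    )
```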
## The Non-Negotiable Rule Introduced Here
A system is not “good” because it is coherent.
A system is good only if:
- it improves a specific decision,
- under real constraints,
- while resisting predictable misuse.
Coherence without enforcement produces folklore.
## A Practical Diagnostic: The Bad System Smell Test
If you notice three or more of these, treat the system as suspect (a scoring sketch follows the list):
- It can be applied without naming a failure
- It produces artifacts that are not challenged
- It claims to optimize everything
- It relies on “buy-in” more than constraints
- It scales by vocabulary rather than by interfaces
- It measures activity more than decision outcomes
- It becomes identity (“we are an OKR company”)
- It cannot name how it breaks
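The smell test can be scored mechanically; a sketch in which each smell is a yes/no answer and three or more marks the system as suspect:

```python
SMELLS = [
    "can be applied without naming a failure",
    "produces artifacts that are not challenged",
    "claims to optimize everything",
    "relies on buy-in more than constraints",
    "scales by vocabulary rather than by interfaces",
    "measures activity more than decision outcomes",
    "has become identity ('we are an OKR company')",
    "cannot name how it breaks",
]

def smell_test(answers: dict[str, bool]) -> bool:
    """Return True when the system should be treated as suspect (three or more smells)."""
    return sum(answers.get(smell, False) for smell in SMELLS) >= 3
```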
## Exit Condition for This Chapter
You are ready to proceed if you can name, from your environment:
- One instance of abstraction drift
- One instance of vocabulary substitution
- One instance of framework stacking
- One system that lacks a misuse model
If you can’t find examples, don’t assume you’re healthy. Assume you’re not looking closely enough.