Chapter 11 — Deliberate System Invention¶
Most “new systems” are accidental: a meeting becomes a cadence, a document becomes a requirement, and soon everyone is complying with something nobody designed.
This chapter is about designing systems on purpose.
Deliberate system invention is not creativity. It is engineering:
- define the failure
- choose the decision to optimize
- choose what you can control
- produce inspectable artifacts
- enforce constraints
- model misuse
- define adoption
If any of these are missing, you’re not inventing a system. You’re inventing ceremony.
The Failure This Chapter Prevents¶
Observable failure: organizations invent local processes and frameworks that feel helpful but become fragile, unenforceable, or politically captured as they spread.
Symptoms:
- the “system” works only with its creator present
- adoption requires persuasion rather than enforcement
- artifacts multiply without decision clarity
- the system becomes permanent without review
- people comply superficially while routing around it
Root cause:
- system design happened implicitly, without a contract.
The System Contract (Required)¶
You may not call something a “system” unless you can fill this contract.
1) Target situation¶
Where will this system run?
- unit of analysis (team / multi-team / org / ecosystem)
- environment characteristics (stability, coupling, risk profile)
- why now (what changed or became intolerable)
2) Observable failure¶
Write the failure in 3–5 sentences:
- situation
- symptom
- consequence
- who is impacted
- frequency
If the failure is not observable, the system will optimize appearance.
3) Root-cause assumption¶
State your belief about why the failure persists.
Examples:
- “Work is stuck because WIP is uncontrolled and handoffs create queues.”
- “We miss strategy because investment decisions are made without kill criteria.”
- “Cross-team conflict persists because interfaces and ownership are ambiguous.”
This assumption is not guaranteed to be true; it is the hypothesis your system encodes.
4) Primary object of control¶
Choose 1–2 objects:
- goals
- work items
- interfaces
- domains
- constraints
- incentives
- information flow
If you pick more than 2, you’re designing an operating model; decompose into subsystems.
5) Decision to optimize¶
Pick one primary decision type:
- priority
- scope
- ownership
- sequencing
- investment
- diagnosis
- repair
This is the system’s purpose.
If you can’t choose one, your system will expand until it becomes political.
6) Artifact(s)¶
Specify what the system produces every time it runs.
Artifacts must be:
- inspectable
- challengeable
- stable enough to compare over time
- clearly tied to the decision
Examples:
- decision log entry
- ownership map
- interface contract
- priority stack with capacity allocation
- bottleneck map + WIP policy
- portfolio bet tracker with kill criteria
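The first example above, a decision log entry, can be captured as a structured record. A minimal Python sketch, assuming entries are kept append-only in a shared repository; the class, field names, and sample values are illustrative, not a prescribed schema:
```python
from dataclasses import dataclass
from datetime import date

# Illustrative sketch of one artifact: a decision log entry. Every field
# exists so the entry can be inspected, challenged, and compared with
# later entries of the same type.
@dataclass(frozen=True)  # frozen: past entries stay stable instead of being edited in place
class DecisionLogEntry:
    decision_type: str            # e.g. "priority", "ownership", "investment"
    question: str                 # the decision actually being made
    options_considered: list[str]
    choice: str
    rationale: str                # what makes the choice challengeable
    owner: str                    # who is accountable for the outcome
    decided_on: date
    revisit_by: date              # ties the entry to a future review, not a shelf

# Hypothetical entry for illustration only.
entry = DecisionLogEntry(
    decision_type="priority",
    question="Which initiative gets the freed-up platform capacity?",
    options_considered=["billing rewrite", "onboarding flow", "defer"],
    choice="onboarding flow",
    rationale="Largest measured drop-off; billing rewrite has no owner yet.",
    owner="team-onboarding",
    decided_on=date(2024, 5, 3),
    revisit_by=date(2024, 8, 1),
)
```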
7) Non-negotiable rule (constraint)¶
This is the power source.
Choose at least one constraint that:
- forces a real tradeoff
- has a default if not obeyed
- can be enforced within the unit of analysis
Examples:
- “Max 3 active initiatives; new work displaces old work.”
- “No work starts without an explicit owner and exit criteria.”
- “If a dependency isn’t resolved in 48 hours, the escalation path triggers automatically.”
- “If the decision is not made by Friday, default option A ships.”
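The last example above is easy to encode, which is a useful test of whether a constraint really carries a default. A minimal Python sketch; the option name and dates are placeholder assumptions:
```python
from datetime import date
from typing import Optional

# Illustrative sketch of the last example above: "if the decision is not
# made by Friday, default option A ships." The default is what turns the
# rule into a constraint rather than a request.
DEFAULT_OPTION = "option A"   # assumed placeholder

def resolve(explicit_choice: Optional[str], deadline: date, today: date) -> Optional[str]:
    """Return the decision in force: the explicit choice, the default once
    the deadline passes, or None while the decision is still legitimately open."""
    if explicit_choice is not None:
        return explicit_choice
    if today >= deadline:
        return DEFAULT_OPTION   # avoidance is no longer free
    return None                 # open, but only until the deadline

# Friday passes with no explicit choice: the default ships.
assert resolve(None, deadline=date(2024, 6, 7), today=date(2024, 6, 10)) == "option A"
```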
8) Misuse warning¶
Name how the system will be misused.
At minimum include:
- one misuse that turns it into reporting theater
- one misuse that turns it into ritual
- one misuse that turns it into power capture
Then specify one mitigation per misuse.
9) Adoption path¶
Define:
- who can successfully use it first
- minimal viable use (smallest real instance)
- time to first value (what changes quickly if it works)
- how it expands (and what redesign is required to scale)
Adoption is part of design. A system that cannot be adopted is not valid.
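Taken together, the nine parts of the contract fit in one structured record that can live alongside the artifacts it governs. A minimal Python sketch; field names mirror the headings above, and the only checks included are the rules the contract itself states:
```python
from dataclasses import dataclass

# Illustrative sketch: the contract as one record. Field names mirror the
# nine headings above; the checks are only the rules the contract states.
@dataclass
class SystemContract:
    target_situation: str            # unit of analysis, environment, why now
    observable_failure: str          # 3-5 sentences, written as observed behavior
    root_cause_assumption: str       # the hypothesis the system encodes
    objects_of_control: list[str]    # at most two
    decision_to_optimize: str        # exactly one primary decision type
    artifacts: list[str]             # what every run produces
    constraint: str                  # the non-negotiable rule
    default_on_violation: str        # what happens when the rule is not obeyed
    misuse_warnings: dict[str, str]  # misuse -> mitigation
    adoption_path: str               # first users, minimal viable use, expansion

    def __post_init__(self) -> None:
        if not 1 <= len(self.objects_of_control) <= 2:
            raise ValueError("pick 1-2 objects of control; more is an operating model")
        if len(self.misuse_warnings) < 3:
            raise ValueError("name at least three misuses: theater, ritual, capture")
```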
The Invention Procedure¶
This is how you fill the contract in practice.
Step 1: Write the failure and refuse to abstract it¶
Don’t write “alignment.” Write what happens:
- decisions reversed
- work stuck in queues
- repeated incidents
- conflicting priorities
Step 2: Name the decision you keep failing to make¶
Most system invention is actually decision invention.
Examples:
- “We are failing to decide what not to do” → scope/priority
- “We are failing to decide who owns the interface” → ownership
- “We are failing to decide what to fix vs tolerate” → repair
Step 3: Choose an object you can control this week¶
Avoid fantasy objects (incentives, the org chart) unless you actually own them.
Start with objects you can actually edit:
- work item definitions
- interface contracts
- WIP limits
- decision logs
- ownership maps
Step 4: Design the artifact first, then the cadence¶
Artifacts make thinking inspectable.
A cadence without an artifact becomes just another meeting.
Design:
- what the artifact contains
- how it is reviewed
- how it triggers action
- where it lives (single source of truth)
Step 5: Add the constraint and default¶
Ask:
- “How will people avoid this decision?”
- “What rule makes avoidance expensive?”
- “What default happens when avoidance occurs?”
A constraint without a default is a request.
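One way to make avoidance expensive is to encode the rule so the default fires mechanically. A minimal Python sketch of the earlier example “max 3 active initiatives; new work displaces old work”; the initiative names are invented placeholders:
```python
# Illustrative sketch of one avoidance-proof rule from the earlier examples:
# "max 3 active initiatives; new work displaces old work." Starting something
# new is allowed, but the tradeoff happens immediately instead of never.
MAX_ACTIVE = 3

def start_initiative(active: list[str], new: str) -> tuple[list[str], list[str]]:
    """Return the new active set plus whatever was displaced to make room."""
    displaced: list[str] = []
    active = active + [new]
    while len(active) > MAX_ACTIVE:
        displaced.append(active.pop(0))   # default: the oldest work is displaced
    return active, displaced

# Hypothetical initiative names for illustration.
active, displaced = start_initiative(["billing", "onboarding", "search"], "mobile app")
assert displaced == ["billing"]   # something old stops, by default
```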
Step 6: Simulate misuse¶
Run three rehearsals:
- how a busy team will weaken it
- how leadership will turn it into reporting
- how a political actor will capture it
If you can’t imagine misuse, you are not done.
Step 7: Define minimal viable use¶
Describe the smallest run that produces value:
- one decision cycle
- one artifact output
- one enforced constraint
If you can’t do it small, you can’t do it at scale.
Patterns for System Shapes¶
Most systems fall into a few shapes. Pick one that matches your decision and object of control.
Diagnostic systems¶
Purpose: improve diagnosis decisions.
Artifacts:
- incident timeline
- causal map
- decision log
Constraints:
- mandatory post-incident review
- required “next change” output
Allocation systems¶
Purpose: improve investment/priority decisions.
Artifacts:
- portfolio allocation table
- priority stack with capacity
Constraints:
- capacity caps
- explicit tradeoff rules
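For instance, a priority stack with a capacity cap reduces to a single pass over the stack. A minimal Python sketch, assuming capacity is tracked in one unit such as engineer-weeks; the names and numbers are illustrative. The design choice here is a strict cut line: the first item that does not fit, and everything below it, is explicitly unfunded rather than quietly deferred.
```python
# Illustrative sketch of the allocation artifact and its constraint: a priority
# stack walked top-down against a hard capacity cap.
CAPACITY = 10  # assumed unit, e.g. engineer-weeks available this cycle

def allocate(stack: list[tuple[str, int]]) -> tuple[list[str], list[str]]:
    """Return (funded, cut) given a ranked stack of (initiative, cost) pairs."""
    funded: list[str] = []
    remaining = CAPACITY
    for i, (name, cost) in enumerate(stack):
        if cost > remaining:
            return funded, [n for n, _ in stack[i:]]   # strict cut line
        funded.append(name)
        remaining -= cost
    return funded, []

# Hypothetical stack for illustration.
funded, cut = allocate([("onboarding", 6), ("billing", 5), ("search", 3)])
assert funded == ["onboarding"] and cut == ["billing", "search"]
```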
Boundary systems¶
Purpose: improve ownership decisions.
Artifacts:
- ownership map
- interface contracts
Constraints:
- no change without owner approval
- escalation defaults
Flow-control systems¶
Purpose: improve sequencing/repair.
Artifacts:
- WIP policy
- bottleneck map
Constraints:
- WIP limits
- stop-the-line rules
Selection systems¶
Purpose: improve decisions about which bets survive (evolution and adaptation).
Artifacts:
- bet tracker
- kill criteria
Constraints:
- sunset rules
- selection gates
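A bet tracker entry with kill criteria and a sunset date might look like the following minimal Python sketch; the bet, its thesis, and the dates are invented placeholders:
```python
from dataclasses import dataclass
from datetime import date

# Illustrative sketch of the selection artifacts: a bet that cannot exist
# without a kill criterion and a sunset date.
@dataclass
class Bet:
    name: str
    thesis: str
    kill_criterion: str   # observable condition under which the bet dies
    review_by: date       # sunset: the bet lapses unless explicitly renewed

    def status(self, today: date, criterion_met: bool) -> str:
        if criterion_met:
            return "kill"
        if today >= self.review_by:
            return "lapsed: renew explicitly or stop"
        return "active"

# Hypothetical bet for illustration.
bet = Bet(
    name="self-serve onboarding",
    thesis="Most churn happens before first value; self-serve reduces it.",
    kill_criterion="No week-1 retention improvement after two releases",
    review_by=date(2024, 9, 1),
)
assert bet.status(today=date(2024, 10, 1), criterion_met=False).startswith("lapsed")
```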
Pick one shape and keep it narrow. Hybrid systems are where complexity hides.
Misuse Model: The Most Common Invention Failures¶
Misuse 1: The system is just a meeting¶
The artifact is weak or absent, so the system becomes just a cadence.
Mitigation:
- no artifact, no meeting; the meeting exists to produce the artifact.
Misuse 2: The system controls everything¶
It becomes an operating model with no clear decision output.
Mitigation:
- one primary decision type; everything else is secondary.
Misuse 3: The system cannot be enforced¶
Constraints require authority you don’t have.
Mitigation:
- redesign to use controllable objects, or explicitly include the authority holder in the adoption path.
Misuse 4: The system is never allowed to die¶
It becomes permanent because removing it feels risky.
Mitigation:
- add expiry/review rules (“sunset unless renewed”).
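A minimal sketch of “sunset unless renewed”, assuming each system records when it was last renewed and a review interval; the 90-day cadence is an arbitrary placeholder:
```python
from datetime import date, timedelta

# Illustrative sketch of "sunset unless renewed": the default when the review
# date passes is retirement, not quiet permanence.
REVIEW_INTERVAL = timedelta(days=90)   # assumed cadence

def system_state(last_renewed: date, today: date) -> str:
    return "active" if today - last_renewed < REVIEW_INTERVAL else "retired by default"

assert system_state(last_renewed=date(2024, 1, 15), today=date(2024, 6, 1)) == "retired by default"
```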
The Non-Negotiable Rule Introduced Here¶
A system is invalid unless it contains:
- one optimized decision
- one inspectable artifact
- one enforceable constraint with a default
- one misuse model
- one adoption path
If any is missing, you are designing ceremony.
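As a quick gate, the rule can be checked mechanically before anyone is asked to adopt anything. A minimal Python sketch; the key names are assumptions, not a standard, and the constraint and its default are folded into one element as above:
```python
# Illustrative gate for the rule above: a proposal (a plain dict here) is
# rejected unless all five elements are present. Key names are assumptions.
REQUIRED = ("decision", "artifact", "constraint_with_default", "misuse_model", "adoption_path")

def missing_elements(proposal: dict) -> list[str]:
    """Return the missing elements; an empty list means it qualifies as a system."""
    return [key for key in REQUIRED if not proposal.get(key)]

assert missing_elements({"decision": "priority", "artifact": "priority stack"}) == [
    "constraint_with_default", "misuse_model", "adoption_path",
]
```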
Exit Condition for This Chapter¶
Produce a completed System Contract for one new system you might build.
You are done when you can answer:
- What failure does this reduce?
- What decision does it optimize?
- What does it directly control?
- What artifact does it produce?
- What constraint enforces it (and what default applies)?
- How will it be misused, and what mitigates that?
- Who can adopt it first, and what is minimal viable use?