Chapter 13 — Canonical Dimensions (Reference)
This chapter is a reference sheet you can apply to any framework, method, operating model, or “how we do things here” practice.
It exists to make systems inspectable.
If you cannot decompose a system across these dimensions, you do not understand it well enough to adopt, mandate, or criticize it.
The Canonical Dimension Set
Use these ten dimensions in order. They are designed to force decision clarity, explicit constraints, and misuse resistance. A minimal fill-in sketch covering all ten appears after the last dimension.
1) Problem frame
Purpose: anchor the system in a concrete failure.
Answer:
- What observable failure does this system reduce?
- Where does it appear: strategy, discovery, delivery, cooperation, evolution?
Red flags:
- “alignment” without a decision
- “execution” without a failure description
- “best practices” as justification
2) Primary object of control
Purpose: identify what the system directly manipulates.
Choose 1–2:
- goals
- work items
- interfaces
- domains
- constraints
- incentives
- information flow
Red flags:
- controlling outcomes directly
- controlling “communication” without a mechanism
- controlling too many objects (becoming an operating model)
3) Unit of analysis
Purpose: prevent scale collapse.
Choose one:
- individual
- team
- multi-team
- organization
- ecosystem/market
Red flags:
- “works at any scale”
- unclear enforcement authority
- copying team practices org-wide without redesign
4) Causality model
Purpose: match system logic to reality.
Choose one dominant model:
- linear planning
- feedback loops
- constraints & flow
- evolution / selection
- socio-technical dynamics
Red flags:
- deterministic plans in high uncertainty
- experimentation used to avoid commitment
- flow tools applied to authority conflicts
- “innovation” without selection pressure
5) Decision type optimized
Purpose: make the system’s purpose explicit.
Pick one primary decision type:
- priority
- scope
- ownership
- sequencing
- investment
- diagnosis
- repair
Red flags:
- claims to optimize everything
- outputs that aren’t decisions (just discussion)
- decision type unclear or shifting
6) Artifacts
Purpose: enforce inspectability.
Answer:
- What does the system produce every time it runs?
- Where is the artifact stored?
- Who can challenge it?
Common artifact types:
- map
- table
- score
- vocabulary
- contract
- canvas
- rule set
- decision log entry
Red flags:
- artifacts that are never used in later decisions
- artifacts optimized for reporting rather than truth
- artifacts too ambiguous to dispute
7) Vocabulary & boundary rules
Purpose: prevent semantic drift and vague thinking.
Answer:
- What terms must be defined precisely?
- What vague terms are disallowed or must be operationalized?
- What does the system refuse to do?
Red flags:
- key terms left to interpretation
- “alignment,” “value,” “impact” used without qualifiers
- no explicit “no” (no boundary rules)
8) Operating mode
Purpose: ensure the system can be run as intended.
Answer:
- Is it one-off or continuous?
- Is it slow and strategic, or fast and operational?
- Is it facilitation-heavy or solo-usable?
- What triggers a run (cadence, event, threshold)?
Red flags:
- requires constant facilitation to function
- cadence without a decision output
- unclear triggers (“whenever needed”)
9) Failure & misuse model
Purpose: make breakage predictable and manageable.
Answer:
- How does the system degrade when misapplied?
- What anti-patterns does it attract?
- What incentives drive misuse?
- What mitigations exist?
Red flags:
- “people are doing it wrong” as the only explanation
- no stated misuse modes
- system becomes identity (“we are an X company”)
10) Adoption path
Purpose: ensure the system can enter reality.
Answer:
- Who can use it first successfully?
- What is minimum viable use?
- What is the time to first value?
- What changes are required to scale it?
Red flags:
- requires org-wide buy-in before any value appears
- unclear enforcement authority
- adoption plan is “training”
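To keep a decomposition inspectable rather than spoken, the ten dimensions can be captured as one fill-in record per system. Below is a minimal sketch, assuming Python as the notation: the dimension names are taken from this chapter, while the dictionary format, the `decomposition_table` helper, and the "UNRESOLVED" marker are illustrative assumptions, not a prescribed tool.

```python
# Minimal sketch of a "filled table" artifact for one decomposition.
# Dimension names follow this chapter; the data format and the
# completeness check are illustrative assumptions.

DIMENSIONS = [
    "problem_frame",
    "primary_object_of_control",
    "unit_of_analysis",
    "causality_model",
    "decision_type_optimized",
    "artifacts",
    "vocabulary_and_boundary_rules",
    "operating_mode",
    "failure_and_misuse_model",
    "adoption_path",
]

def decomposition_table(entries: dict[str, str]) -> dict[str, str]:
    """Return a ten-row table, flagging any dimension left blank or skipped."""
    table = {}
    for dim in DIMENSIONS:
        value = entries.get(dim, "").strip()
        table[dim] = value if value else "UNRESOLVED -- answer before adopting"
    return table

# Example: a partial decomposition of a hypothetical weekly planning ritual.
example = decomposition_table({
    "problem_frame": "Work starts without an agreed priority; rework every cycle.",
    "primary_object_of_control": "work items",
    "unit_of_analysis": "team",
    "decision_type_optimized": "priority",
})
for dim, value in example.items():
    print(f"{dim}: {value}")
```

Any row still marked unresolved is a signal to stop: you do not yet understand the system well enough to adopt, mandate, or criticize it.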
The One-Sentence System Spec
When decomposing, always write this:
“This system reduces ___ failure by optimizing ___ decisions through control of ___, producing ___ artifacts, enforced by ___ constraints, at the ___ unit of analysis.”
If you can’t write it, you don’t have a system definition.
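One way to make that refusal concrete is to treat the spec as a template with six blanks and fail loudly when any blank is empty. The sketch below assumes Python and illustrative field names (failure, decision_type, object_of_control, artifact, constraint, unit_of_analysis); none of this code is prescribed by the chapter.

```python
# Minimal sketch: the one-sentence system spec as a fill-in template.
# Refusing to render an incomplete spec is the point; field names are
# illustrative assumptions, not part of the chapter.

SPEC_TEMPLATE = (
    "This system reduces {failure} failure by optimizing {decision_type} "
    "decisions through control of {object_of_control}, producing "
    "{artifact} artifacts, enforced by {constraint} constraints, "
    "at the {unit_of_analysis} unit of analysis."
)

REQUIRED = ("failure", "decision_type", "object_of_control",
            "artifact", "constraint", "unit_of_analysis")

def render_spec(**blanks: str) -> str:
    """Render the spec sentence, raising if any blank is missing or empty."""
    missing = [name for name in REQUIRED if not blanks.get(name, "").strip()]
    if missing:
        raise ValueError(f"No system definition yet; missing: {', '.join(missing)}")
    return SPEC_TEMPLATE.format(**blanks)

# Example: a hypothetical WIP-limit policy with every blank filled.
print(render_spec(
    failure="late, thrashing delivery",
    decision_type="sequencing",
    object_of_control="work items",
    artifact="board-state",
    constraint="WIP-limit",
    unit_of_analysis="team",
))
```

A half-filled call raises an error instead of producing a plausible-sounding sentence, which mirrors the rule above: no filled blanks, no system definition.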
Quick Diagnostic Prompts (Fast Use)
Use these prompts when you’re in a meeting and need immediate clarity.
- “What failure are we seeing that this solves?”
- “Which decision will this make easier?”
- “What does it control directly?”
- “What artifact comes out every time?”
- “What constraint gives it teeth?”
- “How will people avoid or game it?”
- “Who can actually enforce this?”
Misuse Model: How This Reference Is Misused
Misuse 1: Turning dimensions into jargon
People use the words without doing the work.
Correction:
- require an artifact output per decomposition (a filled table, not spoken labels).
Misuse 2: Treating decomposition as judgment
Decomposition is a specification, not a moral evaluation.
Correction:
- evaluate fit against your failure and constraints, not against abstract ideals.
Misuse 3: Over-detailing
Teams fill the dimensions with essays and lose the decision.
Correction:
- timebox decomposition; prefer crisp statements and mark uncertainties.
Exit Condition for This Chapter
You can treat this chapter as “installed” when you can decompose:
- one system you currently use, and
- one system you are considering adopting,
and write a one-sentence system spec for each.