Chapter 12 — System Review & Validation

Designing a system is not the hard part.

Keeping it valid over time is.

Systems drift. Incentives change. People route around constraints. Artifacts become performative. The environment that made the system useful shifts under it.

This chapter provides a review discipline that answers one question:

Should we keep, modify, subordinate, or remove this system?

The Failure This Chapter Prevents

Observable failure: systems persist after they stop improving decisions, because they have identity, inertia, and institutional defenders.

Symptoms:

  • “We do X” becomes cultural identity rather than a choice
  • the system becomes compliance theater
  • exceptions accumulate until the rule is meaningless
  • the system expands to cover more territory instead of staying effective
  • people fear removing the system because it feels like removing safety

Root cause:

  • no explicit validation criteria and no retirement mechanism.

Review Is Part of System Design

A system without review becomes:

  • ritual
  • ideology
  • bureaucracy

Validation must be built in because:

  • context changes
  • the system gets gamed
  • the system’s costs grow
  • the original failure may no longer be the dominant failure

If a system cannot be questioned, it is no longer a tool.

The Review Outputs (The Only Four Allowed)

A review must end with one of these decisions:

  1. Keep (system remains as-is)
  2. Modify (change object of control, artifact, constraints, cadence, or scope)
  3. Subordinate (keep it, but clarify precedence under another system)
  4. Remove (retire it; replace only if needed)

If the review ends with “we’ll revisit,” the system is already winning against you.
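
If you track reviews in tooling, the four outputs can be encoded as a closed set, so a review literally cannot be closed any other way. A minimal Python sketch; the names and the helper are illustrative, not something this chapter prescribes:

```python
# Illustrative sketch: the four allowed review outputs as a closed enumeration.
from enum import Enum

class ReviewDecision(Enum):
    KEEP = "keep"                # system remains as-is
    MODIFY = "modify"            # change object of control, artifact, constraints, cadence, or scope
    SUBORDINATE = "subordinate"  # keep it, but clarify precedence under another system
    REMOVE = "remove"            # retire it; replace only if needed

def close_review(outcome: str) -> ReviewDecision:
    """Reject any outcome that is not one of the four allowed decisions."""
    try:
        return ReviewDecision(outcome.strip().lower())
    except ValueError:
        raise ValueError(f"'{outcome}' is not a review output: "
                         "choose keep, modify, subordinate, or remove") from None

# close_review("we'll revisit")  -> ValueError
# close_review("Modify")         -> ReviewDecision.MODIFY
```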

Validation Criteria (The Test Suite)

A system is valid only if it passes all five criteria below.

1) Problem fit

  • Is the original observable failure still present?
  • Is it still the dominant failure?
  • Has the failure moved to a different frame (strategy ↔ delivery ↔ cooperation)?

A system that solved yesterday’s failure may be today’s drag.

2) Decision clarity

  • Is the optimized decision still explicit?
  • Can people name it consistently?
  • Are decisions actually being made, or merely discussed?

If outputs are not decisions, you’re paying for a meeting.

3) Artifact inspectability

  • Is the artifact produced consistently?
  • Can it be challenged by informed participants?
  • Does it reflect reality or optics?
  • Is it used in subsequent decisions?

Artifacts that are not used are reporting waste.

4) Constraint enforcement

  • Are constraints actually enforced?
  • Do defaults trigger when decisions aren’t made?
  • Are exceptions explicit and rare—or invisible and common?

If constraints aren’t enforced, the system exists only as narrative.

5) Misuse resistance

  • Are predictable misuse modes occurring?
  • Is the system being gamed?
  • Has it become identity or power leverage?

If misuse dominates, redesign or retire.
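
Taken together, the criteria behave like a conjunctive test suite: one failure is enough to force a decision. A checklist sketch; the field names are mine, and the pass/fail judgments still have to come from evidence:

```python
# Illustrative sketch: the five validation criteria as a conjunctive checklist.
from dataclasses import dataclass, fields

@dataclass
class ValidationResult:
    problem_fit: bool              # the original failure is still present and dominant
    decision_clarity: bool         # the optimized decision is explicit and actually made
    artifact_inspectability: bool  # artifacts are real, challengeable, and used downstream
    constraint_enforcement: bool   # constraints and defaults fire without nagging
    misuse_resistance: bool        # predictable misuse modes are not dominating

    def is_valid(self) -> bool:
        """Valid only if every criterion passes."""
        return all(getattr(self, f.name) for f in fields(self))

# ValidationResult(True, True, True, True, False).is_valid()  -> False
```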

The Review Procedure (Step-by-Step)

Step 1: Reconstruct the system’s intent

Write a one-sentence spec:

“This system exists to reduce ___ failure by optimizing ___ decisions through control of ___, producing ___ artifacts, enforced by ___ constraints.”

If you can’t write this, you can’t review it. You have an ambient ritual.
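
One way to make the reconstruction mechanical is to treat the blanks as required fields: if any blank cannot be filled, the review stops there. A sketch assuming nothing beyond the sentence template above:

```python
# Illustrative sketch: the one-sentence spec as required fields.
from dataclasses import dataclass

@dataclass
class SystemSpec:
    failure: str            # the observable failure the system reduces
    decision: str           # the decision it optimizes
    object_of_control: str  # what it controls
    artifact: str           # what it produces
    constraint: str         # how it is enforced

    def one_sentence(self) -> str:
        if any(not value.strip() for value in vars(self).values()):
            raise ValueError("empty blank: this is an ambient ritual, not a reviewable system")
        return (f"This system exists to reduce {self.failure} failure by optimizing "
                f"{self.decision} decisions through control of {self.object_of_control}, "
                f"producing {self.artifact} artifacts, enforced by {self.constraint} constraints.")
```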

Step 2: Reconfirm the observable failure with evidence

Collect at least one of:

  • incident timelines
  • cycle-time / queue evidence
  • repeated decision reversals
  • escalation logs
  • missed commitments and their causes

If you have only opinions, you have politics, not validation.

Step 3: Inspect real artifacts, not descriptions

Bring examples:

  • the last 3 artifacts produced by the system
  • the last 3 decisions the system claims to have produced

Then ask:

  • Were these artifacts used to commit to action?
  • Do they reflect what actually happened?
  • Can outsiders interpret them?

If artifacts don’t survive inspection, the system isn’t working.

Step 4: Test enforcement and defaults

Ask:

  • What happens when people don’t comply?
  • Has that happened recently?
  • Did the system respond automatically (defaults) or socially (nagging)?

If enforcement relies on reminders and heroics, the system is weak.
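
The difference between automatic and social enforcement can be made concrete. A sketch of a default that fires on its own when a decision is not made by its deadline; the rule itself is invented for the example:

```python
# Illustrative sketch: a default that applies automatically when no decision is
# made by the deadline, instead of relying on reminders and heroics.
from datetime import date
from typing import Optional

def resolve(decision: Optional[str], deadline: date, default: str,
            today: Optional[date] = None) -> str:
    """Return the explicit decision, or the pre-agreed default once the deadline passes."""
    today = today or date.today()
    if decision is not None:
        return decision
    if today > deadline:
        return default   # automatic enforcement: no nagging required
    return "pending"     # still inside the decision window

# resolve(None, deadline=date(2024, 3, 1), default="defer the feature",
#         today=date(2024, 3, 2))  -> "defer the feature"
```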

Step 5: Run the misuse audit

Use three lenses:

  • Busy-team misuse: how do people weaken it to save time?
  • Leadership misuse: how does it become reporting and control?
  • Political misuse: how does it become a weapon, veto, or shield?

For each misuse observed:

  • name the mechanism
  • name the incentive that drives it
  • name the mitigation (change constraint, artifact, or authority)
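
Recording every observed misuse with the same three fields keeps the audit comparable across the three lenses. A sketch with illustrative names and an invented example entry:

```python
# Illustrative sketch: one record per observed misuse, across the three lenses.
from dataclasses import dataclass
from enum import Enum

class Lens(Enum):
    BUSY_TEAM = "busy-team"    # weakened to save time
    LEADERSHIP = "leadership"  # turned into reporting and control
    POLITICAL = "political"    # used as a weapon, veto, or shield

@dataclass
class MisuseFinding:
    lens: Lens
    mechanism: str   # how the system is being misused
    incentive: str   # what drives the misuse
    mitigation: str  # change to constraint, artifact, or authority

audit = [
    MisuseFinding(Lens.BUSY_TEAM,
                  mechanism="artifact filled in after the fact",
                  incentive="time pressure",
                  mitigation="shrink the artifact to what the decision actually needs"),
]
```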

Step 6: Check compatibility with the system landscape

Systems rarely fail in isolation. They fail in collision.

Ask:

  • Does this system conflict with another system’s artifacts or constraints?
  • Is it duplicating a decision already owned elsewhere?
  • Should it be subordinate rather than primary?

If two systems both claim to be “the source of truth,” one will become theater.

Step 7: Decide: keep, modify, subordinate, or remove

Do not leave without a decision output.

What to Modify (The Levers)

Most modifications fall into a small set. Choose deliberately.

Modify the object of control

Example:

  • from “work items” to “interfaces” if the real failure is cross-team friction

Modify the artifact

Example:

  • from status reports to decision logs
  • from goal lists to allocation tables with capacity

Modify the constraint

Example:

  • add a WIP cap
  • add a default rule when decisions are not made
  • add expiry rules to prevent accumulation

Modify the unit of analysis

Example:

  • a team-level system moved to multi-team scope requires interface artifacts and explicit authority boundaries

Modify cadence and triggers

Example:

  • shift from time-based reviews to event-based triggers (incidents, thresholds)
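
As one concrete version of this lever, a review can be triggered by evidence rather than the calendar. A sketch; the trigger names and thresholds are invented:

```python
# Illustrative sketch: event-based review triggers instead of a fixed calendar slot.
def review_due(incidents_since_last_review: int, queue_depth: int,
               incident_threshold: int = 2, queue_threshold: int = 20) -> bool:
    """Trigger a review on evidence of failure, not on elapsed time."""
    return (incidents_since_last_review >= incident_threshold
            or queue_depth >= queue_threshold)

# review_due(incidents_since_last_review=3, queue_depth=5)  -> True
```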

Retirement Rules (Prevent Fossilization)

A system needs an exit mechanism. Otherwise it will never die.

Use one or more:

  • Sunset clause: system expires unless renewed on evidence
  • Kill criteria: defined conditions that trigger retirement or redesign
  • Cost cap: if the system consumes more than X time/effort, it must justify itself
  • Replacement rule: introducing a new system requires retiring or subordinating an old one

Retirement is not failure. It is evidence the organization can learn.
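
Exit mechanisms only work if they are written down as checkable conditions rather than intentions. A configuration sketch; the dates, criteria, and thresholds are placeholders:

```python
# Illustrative sketch: retirement rules as explicit, checkable conditions.
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class RetirementRules:
    sunset: date                                             # expires unless renewed on evidence
    kill_criteria: List[str] = field(default_factory=list)   # conditions that force retirement or redesign
    cost_cap_hours_per_month: float = 0.0                    # beyond this, the system must justify itself

    def must_justify_or_retire(self, today: date, observed: List[str],
                               hours_per_month: float) -> bool:
        return (today >= self.sunset
                or any(criterion in observed for criterion in self.kill_criteria)
                or hours_per_month > self.cost_cap_hours_per_month)

rules = RetirementRules(sunset=date(2025, 6, 30),
                        kill_criteria=["no decision changed in two consecutive cycles"],
                        cost_cap_hours_per_month=8)
```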

Misuse Model: How Review Becomes Theater

Misuse 1: Reviews become compliance audits

People check boxes rather than evaluating decision improvement.

Mitigation:

  • require evidence of decision outcomes, not process adherence.

Misuse 2: Reviews become blame sessions

People use reviews to punish teams or protect status.

Mitigation:

  • review the system as an engineered object; separate performance evaluation from system evaluation.

Misuse 3: Reviews produce “more process”

When systems fail, organizations add oversight.

Mitigation:

  • treat added governance as a cost; prefer changing constraints and artifacts first.

The Non-Negotiable Rule Introduced Here

A system must have a review cadence and a retirement mechanism.

If it cannot be removed, it is not a tool. It is an institution.

Exit Condition for This Chapter

Perform a review of one system currently in use.

You are done when you can produce:

  1. The one-sentence system spec (intent reconstruction)
  2. Evidence of the current failure state
  3. Examples of real artifacts and decisions
  4. A misuse audit with at least two observed misuses
  5. One decision: keep / modify / subordinate / remove
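
If it helps to keep the deliverables honest, they can be captured as a single review record whose decision field accepts only the four allowed outputs. A sketch with illustrative names:

```python
# Illustrative sketch: the five exit-condition deliverables as one review record.
from dataclasses import dataclass
from typing import List

ALLOWED_DECISIONS = {"keep", "modify", "subordinate", "remove"}

@dataclass
class ReviewRecord:
    spec_sentence: str            # 1. intent reconstruction
    failure_evidence: List[str]   # 2. evidence of the current failure state
    artifact_examples: List[str]  # 3. real artifacts and decisions
    misuse_findings: List[str]    # 4. observed misuses (at least two)
    decision: str                 # 5. keep / modify / subordinate / remove

    def is_complete(self) -> bool:
        return (bool(self.spec_sentence.strip())
                and bool(self.failure_evidence)
                and bool(self.artifact_examples)
                and len(self.misuse_findings) >= 2
                and self.decision in ALLOWED_DECISIONS)
```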