Facilitate a Value Management style interrogation of a plan, design, problem, or product decision. Use when the user wants a deeper successor to grill-me - on
references/evaluation-matrix.md
# Evaluation Matrix

Score options on explicit criteria. Never decide by vibe. The matrix exists so the user can see *which assumption* would have to flip for the recommendation to flip.

## Process

1. Pick criteria (defaults below; tailor with the user - ask one at a time).
2. Weight criteria 1-5 (5 = decisive). Default: all 3.
3. Score each option per criterion 1-5.
4. Compute weighted total. Show top-2 plus the dark-horse.
5. Run a sensitivity check: which single weight or score, if flipped, changes the winner?

## Default criteria - software

| Criterion | What you're scoring |
|---|---|
| Outcome value | How much of the basic function is delivered. |
| Implementation effort | Inverse of cost. 5 = trivial. |
| Reversibility | How fast can we undo if wrong (5 = config flip). |
| Risk surface | Inverse of risk; data-loss/permission/migration weigh heaviest. |
| Operational burden | On-call, runbooks, dashboards, manual ops. |
| User comprehension | Will a typical user grok this without training. |
| Testability | Can we prove correctness before ship. |
| Architecture fit | Conformance with existing patterns; deviation costs more. |
| Time to feedback | How fast we'll learn if the bet is right. |

Drop any criterion the user explicitly waives. Add domain-specific ones (compliance, latency budget, infra cost) when relevant.

## Default criteria - non-software

| Criterion | Notes |
|---|---|
| Outcome value | Movement toward the stated outcome. |
| Cost | Money + opportunity cost. |
| Time to value | Calendar weeks until benefit lands. |
| Stakeholder fit | Alignment with the people whose buy-in matters. |
| Reversibility | Cost to undo. |
| Risk | Severity x likelihood of negative outcomes. |
| Learning value | What we'll know we didn't know before. |
| Energy / morale | Will the team actually do it. |

## Template

```
Options: A=<no-build>, B=<config-only>, C=<lean build>, D=<full build>, E=<dark-horse>

Criterion              Weight  A  B  C  D  E
Outcome value               5  1  2  4  5  3
Implementation effort       3  5  5  3  1  4
Reversibility               4  5  5  3  1  4
Risk surface                4  5  4  3  1  3
Operational burden          3  5  4  3  1  4
User comprehension          2  3  4  4  3  2
Testability                 3  5  4  3  2  3
Architecture fit            3  5  3  4  3  2
Time to feedback            4  5  5  3  1  4

Weighted totals: (compute) - recommend top-2 + dark-horse.
```

## Sensitivity check

After the table, state:

- The winner.
- The single change that would unseat it (e.g. "if Reversibility drops to weight 2, D wins").
- Which assumption the user should challenge first.

## Pitfalls

- Don't pad with criteria that don't differentiate options.
- Don't let the sponsor weight; weight first, then reveal the winner.
- If two options tie, run a sacrifice test (cut the most expensive function in each) before adding more criteria.
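The "compute weighted total" and sensitivity steps can be sketched in code. A minimal illustration using the example weights and scores from the template table; the variable names and the brute-force weight-flip search are an assumed sketch, not part of the skill itself:

```python
# Weights and scores transcribed from the template table above.
weights = {
    "Outcome value": 5, "Implementation effort": 3, "Reversibility": 4,
    "Risk surface": 4, "Operational burden": 3, "User comprehension": 2,
    "Testability": 3, "Architecture fit": 3, "Time to feedback": 4,
}
scores = {  # option -> per-criterion scores, in the same order as `weights`
    "A": [1, 5, 5, 5, 5, 3, 5, 5, 5],
    "B": [2, 5, 5, 4, 4, 4, 4, 3, 5],
    "C": [4, 3, 3, 3, 3, 4, 3, 4, 3],
    "D": [5, 1, 1, 1, 1, 3, 2, 3, 1],
    "E": [3, 4, 4, 3, 4, 2, 3, 2, 4],
}
criteria = list(weights)

# Step 4: weighted totals, then rank options.
totals = {
    opt: sum(w * s for w, s in zip(weights.values(), scores[opt]))
    for opt in scores
}
ranking = sorted(totals, key=totals.get, reverse=True)
print("Totals:", totals)
print("Top-2:", ranking[:2])

# Step 5: sensitivity - does any single weight change (to any 1-5 value)
# unseat the winner? Totals are linear in each weight, so a trial total
# is just the old total plus (new_weight - old_weight) * score.
winner = ranking[0]
flips = []
for i, crit in enumerate(criteria):
    for w in range(1, 6):
        if w == weights[crit]:
            continue
        trial = {opt: totals[opt] + (w - weights[crit]) * scores[opt][i]
                 for opt in scores}
        if max(trial, key=trial.get) != winner:
            flips.append((crit, w, max(trial, key=trial.get)))

if flips:
    crit, w, who = flips[0]
    print(f"Fragile: if {crit} weight -> {w}, {who} wins")
else:
    print(f"Robust: no single weight change unseats {winner}")
```

With the template's illustrative numbers the ranking is stable, which is itself worth reporting to the user: a robust winner means the assumption to challenge first is a score, not a weight.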