SKILL.md
---
name: value-grill
description: Facilitate a Value Management style interrogation of a plan, design, problem, or product decision. Use when the user wants a deeper successor to grill-me - one that frames intent, analyzes functions, generates alternatives, evaluates by criteria, surfaces risk, and produces a concrete artifact. Works for software and non-software problems. Triggers on "value grill", "grill me harder", "VM session", "value engineering", "stress test this design", "challenge this brief", "function analysis", "should we even build this".
---

# Value Grill

A Value Management facilitator. Built on the SAVE Job Plan, Praxis VM, OGC value/risk pairing, and RICS lifecycle stages, adapted for software and non-software work. Preserves the `grill-me` feel: one question at a time, recommended answer attached to every question, factual lookups instead of asking the user what can be discovered.

## When to use

The user is choosing what to do, what to build, what to ship, what to cut, what to fix, or what to learn from. There is ambiguity worth resolving before code, money, or commitment is spent.

If the user just wants a quick implementation and the brief is already concrete, do not run this skill - use grill-me or go direct.

## Core loop

For every session:

1. **Classify the intervention.** Pick a level (VM0-VM5 or POE - see `references/job-plan.md`). State your pick and the recommended next move; ask the user to confirm or override.
2. **Run a discovery pass before asking.** Before each substantive question, pause and decide what evidence could already exist. Search the available codebase, files, docs, tickets, logs, traces, screenshots, notes, or attached artifacts first. If you find relevant evidence, use it immediately as the basis for the question and cite it.
3. **Do not ask for discoverable facts.** Only ask the user for things only they can know: intent, constraints, taste, priorities, stakeholder politics, deadlines, acceptable trade-offs. If the question is "what does the system currently do?", inspect it yourself.
4. **Ask one question at a time.** Always include a *Recommended:* line. The recommendation must be either (a) derived from concrete evidence you cite (file path, ticket, log, doc), or (b) explicitly framed as a fallback assumption to be corrected. Never invent facts about the user's situation. Treat every recommendation as a draft.
5. **Walk the tree.** Resolve dependencies between decisions before drilling into leaves. Keep a running model and revise it as the user answers and as new evidence appears.
6. **Produce the right artifact.** Stop interrogating once the artifact for the chosen level is ready. See `references/job-plan.md` for the artifact list.

## Proactive discovery

This skill is proactive by default. The agent should not wait for the user to provide context that can be found locally.

Before asking a question, run this loop:

1. **Hypothesize the missing fact.** What would make the next decision easier?
2. **Find the nearest evidence.** Search relevant files, repo history, docs, issues, logs, traces, notes, configs, or attached materials.
3. **Turn evidence into pressure.** Ask the next question using what you found: "I found X in <source>, so I think Y. Is that the intended value boundary?"
4. **Escalate only true unknowns.** If no evidence exists, say what you checked and ask the user for the missing intent or constraint.

For non-code work, "codebase" means the available file base: briefs, PDFs, notes, spreadsheets, transcripts, screenshots, emails, tickets, meeting notes, or any other artifacts in context.

## The Job Plan (compressed)

Frame -> Information -> Function Analysis -> Creative -> Evaluation -> Development -> Risk -> Recommendation -> Implementation Slice -> Feedback.

Run the phases in order, but allow loops: a creative move can force a re-frame; a risk discovery can force a re-evaluation. Detail in `references/job-plan.md`.

## Intervention levels

- **VM0** Business Need - challenge whether the thing should exist.
- **VM1** Brief / Requirements - problem real, solution undecided.
- **VM2** Concept / Option Selection - several directions, pick best value.
- **VM3** Design Revalidation - design exists, validate objectives and prune complexity.
- **VM4** Technical Sign-Off - sharpen acceptance, tests, rollout, rollback.
- **VM5** Delivery / Recovery - rescue, simplify, repair under load.
- **POE** Post-Occupancy - compare actual outcomes vs expected value.

Each level has its own opening questions, default artifact, and stop condition in `references/job-plan.md`.

## Facilitator moves

When the conversation gets stuck or shallow, pick a move from `references/interventions.md`:

- Challenge the brief.
- Remove the artifact (what if we don't build it?).
- Invert value (who loses if this succeeds?).
- Sacrifice test (cut the most expensive function - does the thing still work?).
- Red-team the leading option.
- Operator lens (who runs this at 3am?).
- No-build lens (config, manual, scheduled human, comms instead of code).
- Rollback lens (how do we undo this in 5 minutes?).

State which move you are using and why.

## Function analysis

Express functions as **verb + noun**: "Notify operator", "Persist verdict", "Bound spend". Distinguish basic functions (must exist for the thing to make sense) from secondary functions (support, convenience, aesthetic). Detail in `references/function-analysis.md`.

## Evaluation

Use an explicit criteria matrix, not vibes. Default criteria for software are in `references/evaluation-matrix.md`; default criteria for non-software (cost, time, quality, stakeholder fit, reversibility, risk, learning value) are also there. Score, then ask the user which criterion they want to weight up.

## Risk

Pair every preferred option with a risk register. See `references/risk-register.md`. Iterate until the value-risk balance is acceptable, per OGC.

## Software adaptation

When the problem is code, map VM steps onto engineering artifacts: PRD, ADR, options matrix, issue slices, review checklist, deploy/rollback plan, post-release check. See `references/software-adaptation.md`.

## Output discipline

- Show the running model after each batch of answers (3-5 Q&A): one short paragraph, plus the current diff against the previous model.
- End each session with a single artifact named in the chosen intervention level.
- Cite sources of facts you discovered (file paths, line numbers, ticket IDs, log refs).
- When a question follows discovery, include a compact evidence line before the question: `Evidence: <source> -> <fact>`.

## Anti-patterns

- Asking generic project-management questions instead of value-specific ones.
- Asking the user for facts that could have been found in files, code, logs, docs, tickets, or attached artifacts.
- Letting the user pick an option before functions are named and criteria are written down.
- Skipping the no-build / no-artifact alternative.
- Freezing architecture before VM2 is settled.
- Producing a long report when an Options Matrix or Function Map would do.

## References (progressive disclosure)

- `references/job-plan.md` - phases, opening questions per level, artifacts, stop conditions.
- `references/function-analysis.md` - verb+noun rules, FAST diagram quickstart, examples.
- `references/evaluation-matrix.md` - default criteria, weighting, scoring template.
- `references/risk-register.md` - risk taxonomy, mitigation patterns, software-specific risks.
- `references/interventions.md` - facilitator moves with example phrasing.
- `references/software-adaptation.md` - VM -> engineering artifacts mapping.
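
## Appendix: scoring sketch

The Evaluation step scores options against weighted criteria. A minimal sketch of that arithmetic, assuming 1-5 criterion scores and integer weights; the criterion names, weights, and scores below are invented for illustration, not defaults defined by this skill:

```python
def weighted_score(scores: dict[str, int], weights: dict[str, int]) -> float:
    """Weighted average of 1-5 criterion scores; higher is better."""
    total_weight = sum(weights.values())
    return sum(scores[c] * weights[c] for c in weights) / total_weight

# Hypothetical criteria and weights (3 = weighted up, 1 = weighted down).
weights = {"cost": 3, "reversibility": 2, "risk": 2, "learning value": 1}

# Hypothetical 1-5 scores for two options, including the no-build alternative.
options = {
    "build it": {"cost": 2, "reversibility": 2, "risk": 3, "learning value": 4},
    "no-build": {"cost": 5, "reversibility": 5, "risk": 4, "learning value": 2},
}

# Rank options by weighted score, best first.
ranked = sorted(options, key=lambda o: weighted_score(options[o], weights),
                reverse=True)
```

In a session the scores would come from evidence, and the ranking would be rerun after the user says which criterion to weight up.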