# Software Adaptation
How VM phases land as engineering artifacts. Use this when the problem is code-shaped.
## Phase -> artifact mapping
| VM phase | Engineering artifact |
|---|---|
| Frame | One-paragraph problem statement + linked issue/PR |
| Information | Inspection notes: file paths, line refs, log excerpts, telemetry queries, prior ADRs |
| Function analysis | Function map (verb+noun) tied to user roles and components |
| Creative | Options list including: no-build, config-only, UI-only, contract-only (backend), instrumentation-first, manual/operational, full build |
| Evaluation | Options Matrix with software criteria (see evaluation-matrix.md) |
| Development | PRD or ADR with diagrams + interface sketches |
| Risk | Risk register with detection + mitigation + owner (see risk-register.md) |
| Recommendation | ADR-style recommendation: status, context, decision, consequences, alternatives |
| Implementation slice | Issue list, each with: scope, exit criteria, observability, rollback, dependencies |
| Feedback / POE | Post-release check: telemetry diff vs predicted value, lessons captured into ADR |
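To make the less obvious rows concrete: a function map is verb + noun pairs tied to roles and components. A minimal TypeScript sketch, with every name invented for illustration:

```ts
// Hypothetical function-map entries (verb + noun). All names are
// placeholders, not a prescribed schema.
interface FunctionEntry {
  verb: string;                 // what the system does
  noun: string;                 // what it does it to
  level: "basic" | "secondary"; // basic = the reason the thing exists
  role: string;                 // user role that needs the function
  component: string;            // component that currently owns it
}

const functionMap: FunctionEntry[] = [
  { verb: "generate", noun: "report", level: "basic", role: "analyst", component: "report-service" },
  { verb: "schedule", noun: "export", level: "secondary", role: "analyst", component: "scheduler" },
];
```

Keeping `level` explicit makes it easy to spot components that map to no basic function, which the review checklist at the end leans on.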
## Information sources to inspect before asking
- Repo: directory layout, package boundaries, existing patterns nearby.
- Recent PRs touching the area; their reviews and rejected approaches.
- ADRs in `docs/adr/` (or equivalent).
- Issue tracker for the same surface - find prior attempts that failed.
- Logs / telemetry / traces - actual failure modes and frequencies.
- Tests - what's currently considered correct.
- Configuration / feature flags - capability already present but unused.
Cite exact paths and line numbers when you bring evidence to the user.
## Proactive discovery for software
Before asking a design or product question, run a small local investigation. The goal is not exhaustive research; it is to avoid asking the user for facts the repository can already answer.
Default sequence (a script sketch follows the list):
- Search names and concepts with `rg`.
- Read the nearest implementation and public API/index files.
- Check tests for the current behavioral contract.
- Check recent issues/PRs/docs if the task mentions them.
- Only then ask the user for intent or trade-off decisions.
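Here is that sequence as a runnable sketch, assuming `rg` (ripgrep) and git are on PATH; the search term `exportReport` and the paths are placeholders, not real project names:

```ts
// Discovery sketch: run the default search sequence before asking questions.
// "exportReport", src/, and src/report/ are illustrative placeholders.
import { execSync } from "node:child_process";

function run(cmd: string): string {
  try {
    return execSync(cmd, { encoding: "utf8" });
  } catch {
    return ""; // rg exits non-zero on no matches; treat that as "no evidence"
  }
}

// 1. Search names and concepts.
console.log(run(`rg -n "exportReport" src/`));
// 2. Find the nearest public API/index files to read next.
console.log(run(`rg -l "exportReport" -g "index.*" src/`));
// 3. Check tests for the current behavioral contract.
console.log(run(`rg -n "exportReport" -g "*.test.*" src/`));
// 4. Skim recent history on the touched area.
console.log(run(`git log --oneline -n 10 -- src/report/`));
```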
Question shape:

```
Evidence: `path/to/file.ts:42` already routes X through Y, and `path/to/test.ts:18` treats Z as required.
Question: Should the value boundary be "preserve Z while changing Y", or is Z now negotiable?
Recommended: preserve Z unless the current task explicitly de-scopes it.
```

Do not ask:
- "Where is this implemented?"
- "Does the app already support this?"
- "What tests cover this?"
- "What does production currently do?"
Inspect those first. Ask only what cannot be inferred: desired outcome, acceptable risk, priority, taste, rollout appetite, or whether a current behavior is intentional.
## Default creative options for software
When generating alternatives, deliberately include each of these:
- No-build - accept the status quo with measurement.
- Config-only - flip a flag, change a threshold, adjust a quota.
- Manual / operational - a human runs the process for two weeks; we measure pain.
- Comms-only - change documentation, error message, runbook.
- Instrumentation-first - add telemetry; decide later.
- Contract-only - change the API/event shape; let consumers adapt.
- UI-only - surface the existing capability; no backend change.
- Backend-only - fix the data model; UI catches up later.
- Lean build - minimum diff that delivers basic function.
- Full build - comprehensive solution.
Drop the ones that obviously don't fit, but force yourself to consider each.
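If option generation is tracked in code, an exhaustive record makes skipping a default impossible. A TypeScript sketch where the dispositions are purely illustrative:

```ts
// Record<OptionKind, string> will not compile until every default
// option has an explicit disposition, even if it is "rejected".
type OptionKind =
  | "no-build" | "config-only" | "manual-operational" | "comms-only"
  | "instrumentation-first" | "contract-only" | "ui-only"
  | "backend-only" | "lean-build" | "full-build";

const dispositions: Record<OptionKind, string> = {
  "no-build": "viable - measure the status quo for two weeks",
  "config-only": "rejected - no existing flag covers this path",
  "manual-operational": "viable - ops can run it weekly",
  "comms-only": "rejected - the error message is not the problem",
  "instrumentation-first": "viable - pairs well with no-build",
  "contract-only": "rejected - consumers cannot adapt this quarter",
  "ui-only": "rejected - the capability does not exist server-side",
  "backend-only": "viable - UI catches up next release",
  "lean-build": "viable - candidate for the matrix",
  "full-build": "viable - candidate for the matrix",
};
```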
## Software-specific evaluation criteria
(See evaluation-matrix.md for full template.)
The criteria most teams under-weight:
- Reversibility - config flips beat schema changes beat data migrations beat contract breaks.
- Time to feedback - a 1-day option that teaches > a 6-week option that delivers.
- Operational burden - a feature that needs a daily human action will rot.
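To show how these criteria land in a matrix, a hypothetical weighted score; criteria, weights, and 1-5 scores are placeholders, and the real template lives in evaluation-matrix.md:

```ts
// Hypothetical Options Matrix scoring. Higher is better on every
// criterion (operationalBurden scored high = low burden).
type Scores = Record<string, number>;

const weights: Scores = {
  reversibility: 3,
  timeToFeedback: 3,
  operationalBurden: 2,
  functionDelivered: 4,
};

const options: Record<string, Scores> = {
  configOnly: { reversibility: 5, timeToFeedback: 5, operationalBurden: 5, functionDelivered: 2 },
  leanBuild:  { reversibility: 3, timeToFeedback: 4, operationalBurden: 4, functionDelivered: 4 },
  fullBuild:  { reversibility: 2, timeToFeedback: 1, operationalBurden: 3, functionDelivered: 5 },
};

for (const [name, scores] of Object.entries(options)) {
  const total = Object.entries(weights)
    .reduce((sum, [criterion, w]) => sum + w * (scores[criterion] ?? 0), 0);
  console.log(`${name}: ${total}`);
}
```

With these illustrative numbers, config-only edges out lean build (48 vs 45) precisely because the commonly under-weighted criteria carry real weight.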
## ADR template (compact)

```
# ADR-NNNN: <title>
Status: Proposed | Accepted | Superseded by ADR-NNNN
Date: YYYY-MM-DD
## Context
<problem, constraints, evidence (cite paths/links)>
## Decision
<chosen option, in 2-4 sentences>
## Alternatives considered
- <option B>: rejected because <reason>
- <option C>: rejected because <reason>
## Consequences
- Positive: <what we gain>
- Negative / accepted risks: <what we lose, with detection signals>
- Reversal cost: <how to undo>
```
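A filled-in example, entirely hypothetical, to show the register each section should hit:

```
# ADR-0042: Queue report exports instead of generating inline
Status: Proposed
Date: 2024-01-15
## Context
Inline generation times out above ~50k rows (src/report/export.ts:120, hypothetical); error logs show ~3% of exports failing.
## Decision
Route exports through the existing job queue behind a flag; keep the inline path for small reports.
## Alternatives considered
- Raise the request timeout: rejected because it shifts the failure rather than removing it.
- Full async rewrite: rejected because reversal cost is high and feedback is slow.
## Consequences
- Positive: large exports complete; failures become retryable.
- Negative / accepted risks: delivery is delayed; detect via an export latency metric.
- Reversal cost: disable the flag; no schema change involved.
```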
## PRD template (compact)

```
# <Feature>
## Outcome (not feature)
<one measurable sentence>
## Functions delivered (verb + noun)
- Basic: ...
- Secondary: ...
- Explicitly NOT delivered: ...
## Users / stakeholders
<roles, with the one whose buy-in is decisive marked>
## Acceptance criteria
- ...
## Out of scope
- ...
## Rollout / rollback
- Flag, stages, kill switch, on-call playbook entry.
## Observability
- Metric, log, alert, dashboard link.
```

## Issue slice template

```
Title: <verb + noun, basic function level>
Scope: <single PR-sized, <=2 days>
Exit criteria: <how reviewer/QA confirms>
Observability: <what telemetry proves it works in prod>
Rollback: <flag/revert path>
Depends on: <issue IDs>
Risk pointer: <R<n> from register>
```
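A filled-in slice, again entirely hypothetical, at the intended granularity:

```
Title: Queue export jobs behind a flag
Scope: Add an enqueue path in the export handler; no consumer changes
Exit criteria: Integration test passes with the flag on; flag off leaves the legacy path unchanged
Observability: export queue depth metric; enqueue-failure log event
Rollback: Disable the flag
Depends on: ISSUE-123 (hypothetical)
Risk pointer: R3 from register
```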
## Review checklist (post-implementation)
- Every retained component maps to a basic or required function.
- Every public surface has at least one test.
- Every new failure mode has a metric or log.
- Rollback path is documented and rehearsed in writing.
- Cost surprises (tokens, egress, log volume) are bounded.
- ADR is updated; PRD is closed; issues link to the ADR.