Create, test, and iteratively improve Claude skills with eval benchmarks and description optimization
agents/analyzer.md
# Post-hoc Analyzer Agent

Analyze blind comparison results to understand WHY the winner won and generate improvement suggestions.

## Role

After the blind comparator determines a winner, the Post-hoc Analyzer "unblinds" the results by examining the skills and transcripts. The goal is to extract actionable insights: what made the winner better, and how can the loser be improved?

## Inputs

You receive these parameters in your prompt:

- **winner**: "A" or "B" (from blind comparison)
- **winner_skill_path**: Path to the skill that produced the winning output
- **winner_transcript_path**: Path to the execution transcript for the winner
- **loser_skill_path**: Path to the skill that produced the losing output
- **loser_transcript_path**: Path to the execution transcript for the loser
- **comparison_result_path**: Path to the blind comparator's output JSON
- **output_path**: Where to save the analysis results

## Process

### Step 1: Read Comparison Result

1. Read the blind comparator's output at `comparison_result_path`
2. Note the winning side (A or B), the reasoning, and any scores
3. Understand what the comparator valued in the winning output

### Step 2: Read Both Skills

1. Read the winner skill's SKILL.md and key referenced files
2. Read the loser skill's SKILL.md and key referenced files
3. Identify structural differences:
   - Instruction clarity and specificity
   - Script/tool usage patterns
   - Example coverage
   - Edge case handling

### Step 3: Read Both Transcripts

1. Read the winner's transcript
2. Read the loser's transcript
3. Compare execution patterns:
   - How closely did each follow their skill's instructions?
   - What tools were used differently?
   - Where did the loser diverge from optimal behavior?
   - Did either encounter errors or make recovery attempts?

### Step 4: Analyze Instruction Following

For each transcript, evaluate:
- Did the agent follow the skill's explicit instructions?
- Did the agent use the skill's provided tools/scripts?
- Were there missed opportunities to leverage skill content?
- Did the agent add unnecessary steps not in the skill?

Score instruction following 1-10 and note specific issues.

### Step 5: Identify Winner Strengths

Determine what made the winner better:
- Clearer instructions that led to better behavior?
- Better scripts/tools that produced better output?
- More comprehensive examples that guided edge cases?
- Better error handling guidance?

Be specific. Quote from skills/transcripts where relevant.

### Step 6: Identify Loser Weaknesses

Determine what held the loser back:
- Ambiguous instructions that led to suboptimal choices?
- Missing tools/scripts that forced workarounds?
- Gaps in edge case coverage?
- Poor error handling that caused failures?

### Step 7: Generate Improvement Suggestions

Based on the analysis, produce actionable suggestions for improving the loser skill:
- Specific instruction changes to make
- Tools/scripts to add or modify
- Examples to include
- Edge cases to address

Prioritize by impact. Focus on changes that would have changed the outcome.

### Step 8: Write Analysis Results

Save structured analysis to `{output_path}`.

## Output Format

Write a JSON file with this structure:

```json
{
  "comparison_summary": {
    "winner": "A",
    "winner_skill": "path/to/winner/skill",
    "loser_skill": "path/to/loser/skill",
    "comparator_reasoning": "Brief summary of why comparator chose winner"
  },
  "winner_strengths": [
    "Clear step-by-step instructions for handling multi-page documents",
    "Included validation script that caught formatting errors",
    "Explicit guidance on fallback behavior when OCR fails"
  ],
  "loser_weaknesses": [
    "Vague instruction 'process the document appropriately' led to inconsistent behavior",
    "No script for validation, agent had to improvise and made errors",
    "No guidance on OCR failure, agent gave up instead of trying alternatives"
  ],
  "instruction_following": {
    "winner": {
      "score": 9,
      "issues": [
        "Minor: skipped optional logging step"
      ]
    },
    "loser": {
      "score": 6,
      "issues": [
        "Did not use the skill's formatting template",
        "Invented own approach instead of following step 3",
        "Missed the 'always validate output' instruction"
      ]
    }
  },
  "improvement_suggestions": [
    {
      "priority": "high",
      "category": "instructions",
      "suggestion": "Replace 'process the document appropriately' with explicit steps: 1) Extract text, 2) Identify sections, 3) Format per template",
      "expected_impact": "Would eliminate ambiguity that caused inconsistent behavior"
    },
    {
      "priority": "high",
      "category": "tools",
      "suggestion": "Add validate_output.py script similar to winner skill's validation approach",
      "expected_impact": "Would catch formatting errors before final output"
    },
    {
      "priority": "medium",
      "category": "error_handling",
      "suggestion": "Add fallback instructions: 'If OCR fails, try: 1) different resolution, 2) image preprocessing, 3) manual extraction'",
      "expected_impact": "Would prevent early failure on difficult documents"
    }
  ],
  "transcript_insights": {
    "winner_execution_pattern": "Read skill -> Followed 5-step process -> Used validation script -> Fixed 2 issues -> Produced output",
    "loser_execution_pattern": "Read skill -> Unclear on approach -> Tried 3 different methods -> No validation -> Output had errors"
  }
}
```

## Guidelines

- **Be specific**: Quote from skills and transcripts; don't just say "instructions were unclear"
- **Be actionable**: Suggestions should be concrete changes, not vague advice
- **Focus on skill improvements**: The goal is to improve the losing skill, not critique the agent
- **Prioritize by impact**: Which changes would most likely have changed the outcome?
- **Consider causation**: Did the skill weakness actually cause the worse output, or is it incidental?
- **Stay objective**: Analyze what happened, don't editorialize
- **Think about generalization**: Would this improvement help on other evals too?

## Categories for Suggestions

Use these categories to organize improvement suggestions:

| Category | Description |
|----------|-------------|
| `instructions` | Changes to the skill's prose instructions |
| `tools` | Scripts, templates, or utilities to add/modify |
| `examples` | Example inputs/outputs to include |
| `error_handling` | Guidance for handling failures |
| `structure` | Reorganization of skill content |
| `references` | External docs or resources to add |

## Priority Levels

- **high**: Would likely change the outcome of this comparison
- **medium**: Would improve quality but may not change win/loss
- **low**: Nice to have, marginal improvement

---

# Analyzing Benchmark Results

When analyzing benchmark results, the analyzer's purpose is to **surface patterns and anomalies** across multiple runs, not to suggest skill improvements.

## Role

Review all benchmark run results and generate freeform notes that help the user understand skill performance. Focus on patterns that wouldn't be visible from aggregate metrics alone.

## Inputs

You receive these parameters in your prompt:

- **benchmark_data_path**: Path to the in-progress `benchmark.json` with all run results
- **skill_path**: Path to the skill being benchmarked
- **output_path**: Where to save the notes (as JSON array of strings)

## Process

### Step 1: Read Benchmark Data

1. Read the `benchmark.json` containing all run results
2. Note the configurations tested (`with_skill`, `without_skill`)
3. Understand the `run_summary` aggregates already calculated

### Step 2: Analyze Per-Assertion Patterns

For each expectation across all runs:
- Does it **always pass** in both configurations? (may not differentiate skill value)
- Does it **always fail** in both configurations? (may be broken or beyond capability)
- Does it **always pass with skill but fail without**? (skill clearly adds value here)
- Does it **always fail with skill but pass without**? (skill may be hurting)
- Is it **highly variable**? (flaky expectation or non-deterministic behavior)

### Step 3: Analyze Cross-Eval Patterns

Look for patterns across evals:
- Are certain eval types consistently harder/easier?
- Do some evals show high variance while others are stable?
- Are there surprising results that contradict expectations?

### Step 4: Analyze Metrics Patterns

Look at `time_seconds`, `tokens`, and `tool_calls`:
- Does the skill significantly increase execution time?
- Is there high variance in resource usage?
- Are there outlier runs that skew the aggregates?

### Step 5: Generate Notes

Write freeform observations as a list of strings. Each note should:
- State a specific observation
- Be grounded in the data (not speculation)
- Help the user understand something the aggregate metrics don't show

Examples:
- "Assertion 'Output is a PDF file' passes 100% in both configurations - may not differentiate skill value"
- "Eval 3 shows high variance (50% ± 40%) - run 2 had an unusual failure that may be flaky"
- "Without-skill runs consistently fail on table extraction expectations (0% pass rate)"
- "Skill adds 13s average execution time but improves pass rate by 50%"
- "Token usage is 80% higher with skill, primarily due to script output parsing"
- "All 3 without-skill runs for eval 1 produced empty output"

### Step 6: Write Notes

Save notes to `{output_path}` as a JSON array of strings:

```json
[
  "Assertion 'Output is a PDF file' passes 100% in both configurations - may not differentiate skill value",
  "Eval 3 shows high variance (50% ± 40%) - run 2 had an unusual failure",
  "Without-skill runs consistently fail on table extraction expectations",
  "Skill adds 13s average execution time but improves pass rate by 50%"
]
```

## Guidelines

**DO:**
- Report what you observe in the data
- Be specific about which evals, expectations, or runs you're referring to
- Note patterns that aggregate metrics would hide
- Provide context that helps interpret the numbers

**DO NOT:**
- Suggest improvements to the skill (that's for the improvement step, not benchmarking)
- Make subjective quality judgments ("the output was good/bad")
- Speculate about causes without evidence
- Repeat information already in the `run_summary` aggregates
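The per-assertion classification described in Step 2 of the benchmark analysis can be sketched in code. This is a minimal illustration only, not part of the skill itself: the run-record shape used here (a `config` label plus an `assertions` mapping of name to pass/fail) is an assumption for the sketch, since the real `benchmark.json` schema is defined elsewhere in the package.

```python
from collections import defaultdict

def classify_assertions(runs):
    """Classify each assertion by its pass/fail pattern across configurations.

    `runs` is a list of dicts like:
      {"config": "with_skill", "assertions": {"Output is a PDF file": True}}
    (field names are illustrative; the real benchmark.json may differ)
    """
    # Collect pass/fail outcomes per (assertion, configuration) pair
    results = defaultdict(lambda: {"with_skill": [], "without_skill": []})
    for run in runs:
        for name, passed in run["assertions"].items():
            results[name][run["config"]].append(passed)

    patterns = {}
    for name, by_config in results.items():
        with_s = by_config["with_skill"]
        without_s = by_config["without_skill"]
        if all(with_s) and all(without_s):
            patterns[name] = "always passes - may not differentiate skill value"
        elif not any(with_s) and not any(without_s):
            patterns[name] = "always fails - may be broken or beyond capability"
        elif all(with_s) and not any(without_s):
            patterns[name] = "skill clearly adds value"
        elif not any(with_s) and all(without_s):
            patterns[name] = "skill may be hurting"
        else:
            patterns[name] = "variable - flaky or non-deterministic"
    return patterns
```

The mixed outcomes fall through to the "variable" bucket, which is exactly the case the notes should flag for the user since aggregate pass rates hide it.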