A comprehensive collection of Agent Skills for context engineering, multi-agent architectures, and production agent systems.
skills/evaluation/SKILL.md
---
name: evaluation
description: This skill should be used when the user asks to "evaluate agent performance", "build test framework", "measure agent quality", "create evaluation rubrics", or mentions LLM-as-judge, multi-dimensional evaluation, agent testing, or quality gates for agent pipelines.
---

# Evaluation Methods for Agent Systems

Evaluate agent systems differently from traditional software because agents make dynamic decisions, are non-deterministic between runs, and often lack single correct answers. Build evaluation frameworks that account for these characteristics, provide actionable feedback, catch regressions, and validate that context engineering choices achieve intended effects.

## When to Activate

Activate this skill when:
- Testing agent performance systematically
- Validating context engineering choices
- Measuring improvements over time
- Catching regressions before deployment
- Building quality gates for agent pipelines
- Comparing different agent configurations
- Evaluating production systems continuously

## Core Concepts

Focus evaluation on outcomes rather than execution paths, because agents may find alternative valid routes to goals. Judge whether the agent achieves the right outcome via a reasonable process, not whether it followed a specific sequence of steps.

Use multi-dimensional rubrics instead of single scores because one number hides critical failures in specific dimensions.
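Such a rubric can be captured as a plain data structure. A minimal sketch, assuming illustrative dimension names and weights (nothing here is a prescribed schema):

```python
# Hypothetical rubric: each dimension carries a weight and a short level guide.
# Dimension names, weights, and level descriptions are illustrative assumptions.
RUBRIC = {
    "factual_accuracy":  {"weight": 0.35, "levels": "1.0 all claims match ground truth ... 0.0 fabricated"},
    "completeness":      {"weight": 0.25, "levels": "1.0 covers every requested aspect ... 0.0 off-topic"},
    "citation_accuracy": {"weight": 0.20, "levels": "1.0 every citation resolves ... 0.0 invented sources"},
    "source_quality":    {"weight": 0.10, "levels": "1.0 primary sources ... 0.0 unreliable sources"},
    "tool_efficiency":   {"weight": 0.10, "levels": "1.0 right tools, minimal calls ... 0.0 wasteful"},
}

# Weights should sum to 1.0 so the weighted aggregate stays on the 0.0-1.0 scale.
assert abs(sum(d["weight"] for d in RUBRIC.values()) - 1.0) < 1e-9
```

Keeping weights in the rubric itself makes re-weighting per use case a data change rather than a code change.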
Capture factual accuracy, completeness, citation accuracy, source quality, and tool efficiency as separate dimensions, then weight them for the use case.

Deploy LLM-as-judge for scalable evaluation across large test sets while supplementing with human review to catch edge cases, hallucinations, and subtle biases that automated evaluation misses.

**Performance Drivers: The 95% Finding**

Apply the BrowseComp research finding when designing evaluation budgets: three factors explain 95% of browsing agent performance variance.

| Factor | Variance Explained | Implication |
|--------|-------------------|-------------|
| Token usage | ~80% | More tokens = better performance |
| Number of tool calls | ~10% | More exploration helps |
| Model choice | ~5% | Better models multiply efficiency |

Act on these implications when designing evaluations:
- **Set realistic token budgets**: Evaluate agents with production-realistic token limits, not unlimited resources, because token usage drives 80% of variance.
- **Prioritize model upgrades over token increases**: Upgrading model versions provides larger gains than doubling token budgets on previous versions because better models use tokens more efficiently.
- **Validate multi-agent architectures**: The finding supports distributing work across agents with separate context windows, so evaluate multi-agent setups against single-agent baselines.

## Detailed Topics

### Evaluation Challenges

**Handle Non-Determinism and Multiple Valid Paths**

Design evaluations that tolerate path variation because agents may take completely different valid paths to reach goals. One agent might search three sources while another searches ten; both may produce correct answers. Avoid checking for specific steps.
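To make the contrast concrete, here is a hedged sketch of outcome-focused assertions versus a brittle path check; the `result` fields are hypothetical, not a real agent API:

```python
# Hypothetical agent run result: final answer plus a record of the path taken.
result = {
    "answer": "Paris",
    "sources_consulted": 7,
    "steps": ["search", "search", "synthesize"],
}

# Brittle: a path-based check fails whenever the agent finds another valid route.
# assert result["steps"] == ["search", "read", "synthesize"]  # do NOT do this

# Robust: outcome-based checks; the path is recorded but only informational.
assert result["answer"] == "Paris"
assert result["sources_consulted"] >= 1  # some exploration happened; exact count unconstrained
```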
Instead, define outcome criteria (correctness, completeness, quality) and score against those, treating the execution path as informational rather than evaluative.

**Test Context-Dependent Failures**

Evaluate across a range of complexity levels and interaction lengths because agent failures often depend on context in subtle ways. An agent might succeed on simple queries but fail on complex ones, work well with one tool set but fail with another, or degrade after extended interaction as context accumulates. Include simple, medium, complex, and very complex test cases to surface these patterns.

**Score Composite Quality Dimensions Separately**

Break agent quality into separate dimensions (factual accuracy, completeness, coherence, tool efficiency, process quality) and score each independently because an agent might score high on accuracy but low on efficiency, or vice versa. Then compute weighted aggregates tuned to use-case priorities. This approach reveals which dimensions need improvement rather than averaging away the signal.

### Evaluation Rubric Design

**Build Multi-Dimensional Rubrics**

Define rubrics covering key dimensions with descriptive levels from excellent to failed. Include these core dimensions and adapt weights per use case:

- Factual accuracy: Claims match ground truth (weight heavily for knowledge tasks)
- Completeness: Output covers requested aspects (weight heavily for research tasks)
- Citation accuracy: Citations match claimed sources (weight for trust-sensitive contexts)
- Source quality: Uses appropriate primary sources (weight for authoritative outputs)
- Tool efficiency: Uses the right tools a reasonable number of times (weight for cost-sensitive systems)

**Convert Rubrics to Numeric Scores**

Map dimension assessments to numeric scores (0.0 to 1.0), apply per-dimension weights, and calculate weighted overall scores.
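A minimal sketch of that weighted aggregation, with illustrative dimension names, weights, and threshold:

```python
def weighted_overall(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-dimension scores (each 0.0-1.0) into a weighted aggregate."""
    total_weight = sum(weights[d] for d in scores)
    return sum(scores[d] * weights[d] for d in scores) / total_weight

# Illustrative dimension scores and weights, not a prescribed schema.
scores = {"factual_accuracy": 0.9, "completeness": 0.7, "tool_efficiency": 0.5}
weights = {"factual_accuracy": 0.5, "completeness": 0.3, "tool_efficiency": 0.2}

overall = weighted_overall(scores, weights)  # 0.9*0.5 + 0.7*0.3 + 0.5*0.2 = 0.76
passed = overall >= 0.7  # a typical general-use threshold
```

Reporting the per-dimension `scores` alongside `overall`, as recommended above, is what makes the breakdown actionable.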
Set passing thresholds based on use-case requirements, typically 0.7 for general use and 0.9 for high-stakes applications. Store individual dimension scores alongside the aggregate because the breakdown drives targeted improvement.

### Evaluation Methodologies

**Use LLM-as-Judge for Scale**

Build LLM-based evaluation prompts that include: a clear task description, the agent output under test, ground truth when available, an evaluation scale with explicit level descriptions, and a request for structured judgment with reasoning. LLM judges provide consistent, scalable evaluation across large test sets. Use a different model family than the agent being evaluated to avoid self-enhancement bias.

**Supplement with Human Evaluation**

Route edge cases, unusual queries, and a random sample of production traffic to human reviewers because humans notice hallucinated answers, system failures, and subtle biases that automated evaluation misses. Track patterns across human reviews to identify systematic issues and feed findings back into automated evaluation criteria.

**Apply End-State Evaluation for Stateful Agents**

For agents that mutate persistent state (files, databases, configurations), evaluate whether the final state matches expectations rather than how the agent got there. Define expected end-state assertions and verify them programmatically after each test run.

### Test Set Design

**Select Representative Samples**

Start with small samples (20-30 cases) during early development, when changes have dramatic impacts and low-hanging fruit is abundant. Scale to 50+ cases for reliable signal as the system matures.
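As a rough sanity check on those sample sizes, a normal-approximation confidence interval shows how noisy small eval sets are. This is illustrative arithmetic, not part of the skill's API:

```python
import math

def pass_rate_interval(passes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% confidence interval for an eval pass rate (normal approximation)."""
    p = passes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return (max(0.0, p - half), min(1.0, p + half))

# With 20 cases, a 70% pass rate is known only to within roughly +/- 20 points;
# at 100 cases the same rate narrows to roughly +/- 9 points.
lo20, hi20 = pass_rate_interval(14, 20)
lo100, hi100 = pass_rate_interval(70, 100)
```

The wide interval at 20 cases is why small samples suit early exploration but not regression gating.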
Sample from real usage patterns, add known edge cases, and ensure coverage across complexity levels.

**Stratify by Complexity**

Structure test sets across complexity levels to prevent easy examples from inflating scores:
- Simple: single tool call, factual lookup
- Medium: multiple tool calls, comparison logic
- Complex: many tool calls, significant ambiguity
- Very complex: extended interaction, deep reasoning, synthesis

Report scores per stratum alongside overall scores to reveal where the agent actually struggles.

### Context Engineering Evaluation

**Validate Context Strategies Systematically**

Run agents with different context strategies on the same test set and compare quality scores, token usage, and efficiency metrics. This isolates the effect of context engineering from other variables and prevents anecdote-driven decisions.

**Run Degradation Tests**

Test how context degradation affects performance by running agents at different context sizes. Identify performance cliffs where context becomes problematic and establish safe operating limits. Feed these limits back into context management strategies.

### Continuous Evaluation

**Build Automated Evaluation Pipelines**

Integrate evaluation into the development workflow so evaluations run automatically on agent changes. Track results over time, compare versions, and block deployments that regress on key metrics.

**Monitor Production Quality**

Sample production interactions and evaluate them continuously. Set alerts for quality drops below warning (0.85 pass rate) and critical (0.70 pass rate) thresholds. Maintain dashboards showing trend analysis over time windows to detect gradual degradation.

## Practical Guidance

### Building Evaluation Frameworks

Follow this sequence to build an evaluation framework, because skipping early steps leads to measurements that do not reflect real quality:
1. Define quality dimensions relevant to the use case before writing any evaluation code, because dimensions chosen later tend to reflect what is easy to measure rather than what matters.
2. Create rubrics with clear, descriptive level definitions so evaluators (human or LLM) produce consistent scores.
3. Build test sets from real usage patterns and edge cases, stratified by complexity, with at least 50 cases for reliable signal.
4. Implement automated evaluation pipelines that run on every significant change.
5. Establish baseline metrics before making changes so improvements can be measured against a known reference.
6. Run evaluations on all significant changes and compare against the baseline.
7. Track metrics over time for trend analysis because gradual degradation is harder to notice than sudden drops.
8. Supplement automated evaluation with human review on a regular cadence.

### Avoiding Evaluation Pitfalls

Guard against these common failures that undermine evaluation reliability:

- **Overfitting to specific paths**: Evaluate outcomes, not specific steps, because agents find novel valid paths.
- **Ignoring edge cases**: Include diverse test scenarios covering the full complexity spectrum.
- **Single-metric obsession**: Use multi-dimensional rubrics because a single score hides dimension-specific failures.
- **Neglecting context effects**: Test with realistic context sizes and histories rather than clean-room conditions.
- **Skipping human evaluation**: Automated evaluation misses subtle issues that humans catch reliably.

## Examples

**Example 1: Simple Evaluation**

```python
def evaluate_agent_response(response, expected):
    rubric = load_rubric()
    scores = {}
    for dimension, config in rubric.items():
        scores[dimension] = assess_dimension(response, expected, dimension)
    # Pull each dimension's weight from the rubric rather than relying on the
    # leaked loop variable `config`, which only held the last dimension's entry.
    weights = {dim: cfg["weight"] for dim, cfg in rubric.items()}
    overall = weighted_average(scores, weights)
    return {"passed": overall >= 0.7, "scores": scores, "overall": overall}
```

**Example 2: Test
Set Structure**

Test sets should span multiple complexity levels to ensure comprehensive evaluation:

```python
test_set = [
    {
        "name": "simple_lookup",
        "input": "What is the capital of France?",
        "expected": {"type": "fact", "answer": "Paris"},
        "complexity": "simple",
        "description": "Single tool call, factual lookup"
    },
    {
        "name": "medium_query",
        "input": "Compare the revenue of Apple and Microsoft last quarter",
        "complexity": "medium",
        "description": "Multiple tool calls, comparison logic"
    },
    {
        "name": "multi_step_reasoning",
        "input": "Analyze sales data from Q1-Q4 and create a summary report with trends",
        "complexity": "complex",
        "description": "Many tool calls, aggregation, analysis"
    },
    {
        "name": "research_synthesis",
        "input": "Research emerging AI technologies, evaluate their potential impact, and recommend adoption strategy",
        "complexity": "very_complex",
        "description": "Extended interaction, deep reasoning, synthesis"
    }
]
```

## Guidelines

1. Use multi-dimensional rubrics, not single metrics
2. Evaluate outcomes, not specific execution paths
3. Cover complexity levels from simple to complex
4. Test with realistic context sizes and histories
5. Run evaluations continuously, not just before release
6. Supplement LLM evaluation with human review
7. Track metrics over time for trend detection
8. Set clear pass/fail thresholds based on use case

## Gotchas

1. **Overfitting evals to specific code paths**: Tests pass but the agent fails on slight input variations. Write eval criteria against outcomes and semantics, not surface patterns, and rotate test inputs periodically.
2. **LLM-judge self-enhancement bias**: Models rate their own outputs higher than independent judges do. Use a different model family as the evaluation judge than the model being evaluated.
3. **Test set contamination**: Eval examples leak into training data or prompt templates, inflating scores.
Keep eval sets versioned and separate from any data used in prompts or fine-tuning.
4. **Metric gaming**: Optimizing for the metric rather than actual quality produces agents that score well but disappoint users. Cross-validate automated metrics against human judgments regularly.
5. **Single-dimension scoring**: One aggregate number hides critical failures in specific dimensions. Always report per-dimension scores alongside the overall score, and fail the eval if any single dimension falls below its minimum threshold.
6. **Eval set too small**: Fewer than 50 examples produce unreliable signal with high variance between runs. Scale the eval set to at least 50 cases and report confidence intervals.
7. **Not stratifying by difficulty**: Easy examples inflate overall scores, masking failures on hard cases. Report scores per complexity stratum and weight the overall score to prevent easy-case dominance.
8. **Treating eval as one-time**: Evaluation must be continuous, not a launch gate. Agent quality drifts as models update, tools change, and usage patterns evolve.
Run evals on every change and on a regular production cadence.

## Integration

This skill connects to all other skills as a cross-cutting concern:

- context-fundamentals - Evaluating context usage
- context-degradation - Detecting degradation
- context-optimization - Measuring optimization effectiveness
- multi-agent-patterns - Evaluating coordination
- tool-design - Evaluating tool effectiveness
- memory-systems - Evaluating memory quality

## References

Internal reference:
- [Metrics Reference](./references/metrics.md) - Read when: designing specific evaluation metrics, choosing scoring scales, or implementing weighted rubric calculations

Internal skills:
- All other skills connect to evaluation for quality measurement

External resources:
- LLM evaluation benchmarks - Read when: selecting or building benchmark suites for agent comparison
- Agent evaluation research papers - Read when: adopting new evaluation methodologies or validating current approach
- Production monitoring practices - Read when: setting up alerting, dashboards, or sampling strategies for live systems

---

## Skill Metadata

**Created**: 2025-12-20
**Last Updated**: 2026-03-17
**Author**: Agent Skills for Context Engineering Contributors
**Version**: 1.1.0