A comprehensive collection of Agent Skills for context engineering, multi-agent architectures, and production agent systems.
examples/llm-as-judge-skills/prompts/index.md
# Prompts Index

Prompts are reusable templates that define how agents and tools interact with LLMs.

## Prompt Categories

### Evaluation Prompts
**Path**: `prompts/evaluation/`

Templates for quality assessment tasks.

| Prompt | Purpose | Used By |
|--------|---------|---------|
| `direct-scoring-prompt` | Evaluate single response | Evaluator Agent, directScore tool |
| `pairwise-comparison-prompt` | Compare two responses | Evaluator Agent, pairwiseCompare tool |

---

### Research Prompts
**Path**: `prompts/research/`

Templates for information gathering and synthesis.

| Prompt | Purpose | Used By |
|--------|---------|---------|
| `research-synthesis-prompt` | Synthesize findings | Research Agent |

---

### Agent System Prompts
**Path**: `prompts/agent-system/`

System prompts for agent definitions.

| Prompt | Purpose | Used By |
|--------|---------|---------|
| `orchestrator-prompt` | Multi-agent coordination | Orchestrator Agent |

## Prompt Template Format

### Standard Structure

```markdown
# Prompt Name

## Purpose
Brief description of what this prompt accomplishes.

## Prompt Template
```markdown
[The actual prompt with {{variables}}]
```

## Variables
| Variable | Description | Required |
|----------|-------------|----------|
| var_name | What it contains | Yes/No |

## Example Usage
Concrete example showing inputs and expected outputs.

## Best Practices
Guidelines for using this prompt effectively.
```

### Variable Syntax

Use Handlebars-style templating:

```markdown
{{variable}}                  # Simple substitution
{{#if condition}}...{{/if}}   # Conditional section
{{#each array}}...{{/each}}   # Iteration
```

## Prompt Design Principles

### 1. Clear Role Definition
Tell the model exactly what it is and what it's doing.

```markdown
You are an expert evaluator assessing the quality of AI-generated responses.
```

### 2. Explicit Instructions
Don't assume the model will infer requirements.

```markdown
For each criterion:
1. First, identify specific evidence from the response
2. Then, determine the appropriate score based on the rubric
3. Finally, provide actionable feedback
```

### 3. Structured Output
Specify the exact format you need.

```markdown
Format your response as structured JSON:
```json
{
  "scores": [...],
  "summary": {...}
}
```
```

### 4. Guard Rails
Include constraints and warnings.

```markdown
Important Guidelines:
- Do NOT prefer responses simply because they are longer
- Do NOT prefer responses based on their position (A vs B)
- Focus on the specified criteria
```

## Adding New Prompts

1. Determine category or create new: `prompts/<category>/`
2. Create prompt file: `prompts/<category>/<prompt-name>.md`
3. Include:
   - Purpose
   - Template with variables
   - Variable documentation
   - Example usage
   - Best practices
4. Update this index

## Prompt Testing Checklist

- [ ] Variables render correctly
- [ ] Output format is parseable
- [ ] Edge cases are handled
- [ ] Instructions are unambiguous
- [ ] Examples match expected output
- [ ] Constraints are clear
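## Appendix: Implementation Sketches

The Handlebars-style variable syntax described above can be exercised without a full templating library. The following is a minimal sketch (not part of the skill package) that handles only `{{variable}}` substitution and `{{#if}}` conditionals; the `render` function and the example template are illustrative, not actual files from this repo.

```python
import re

def render(template: str, context: dict) -> str:
    """Minimal Handlebars-style renderer: {{var}} and {{#if var}}...{{/if}} only."""
    # Resolve conditional sections first: keep the body when the variable is truthy.
    def resolve_if(m):
        return m.group(2) if context.get(m.group(1)) else ""
    out = re.sub(r"\{\{#if (\w+)\}\}(.*?)\{\{/if\}\}", resolve_if, template, flags=re.S)
    # Then substitute simple variables, defaulting missing ones to an empty string.
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(context.get(m.group(1), "")), out)

prompt = render(
    "Evaluate the {{response_type}} below.{{#if rubric}} Use the rubric: {{rubric}}{{/if}}",
    {"response_type": "summary", "rubric": "accuracy, clarity"},
)
# → "Evaluate the summary below. Use the rubric: accuracy, clarity"
```

A real deployment would use an actual Handlebars implementation (which also supports `{{#each}}` iteration); this sketch is only meant to make the "variables render correctly" checklist item mechanically testable.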
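The Structured Output principle asks the model to wrap its answer in a ```json fence, which the caller then has to extract and parse. A hedged sketch of that consumer side (the `parse_judge_output` helper and sample reply are hypothetical, not code from this repo):

```python
import json
import re

def parse_judge_output(raw: str) -> dict:
    """Extract the first ```json fenced block from a model reply and parse it.

    Falls back to parsing the whole string when no fence is present, since
    models sometimes emit bare JSON despite the formatting instruction.
    """
    m = re.search(r"```json\s*(.*?)```", raw, flags=re.S)
    return json.loads(m.group(1) if m else raw)

reply = 'Here are my scores:\n```json\n{"scores": [4, 5], "summary": {"overall": 4.5}}\n```'
result = parse_judge_output(reply)
# → {"scores": [4, 5], "summary": {"overall": 4.5}}
```

This is also why "Output format is parseable" appears on the testing checklist: a prompt whose example output cannot round-trip through `json.loads` will fail silently in the tools that consume it.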
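The Guard Rails principle warns the judge against position bias (preferring A over B by slot rather than by content). One way a caller can enforce that operationally, beyond instructing the model, is to run the pairwise comparison twice with positions swapped and only accept agreeing verdicts. This is a sketch under assumptions: the `judge` callable returning `"A"` or `"B"` and the toy judges are hypothetical, not part of the `pairwiseCompare` tool.

```python
def debiased_compare(judge, a: str, b: str) -> str:
    """Run a pairwise judge twice with positions swapped; accept the verdict
    only when both orderings agree, otherwise report a tie."""
    first = judge(a, b)                           # verdict with the original order
    swapped = {"A": "B", "B": "A"}[judge(b, a)]   # swapped-order verdict, mapped back
    return first if first == swapped else "tie"

# Toy judges for illustration: one position-biased, one content-based.
always_a = lambda a, b: "A"                        # always prefers the first slot
by_length = lambda a, b: "A" if len(a) > len(b) else "B"

biased_verdict = debiased_compare(always_a, "short", "a much longer response")   # → "tie"
length_verdict = debiased_compare(by_length, "short", "a much longer response")  # → "B"
```

The position-biased judge is neutralized to a tie, while a judge with a consistent (if crude) preference survives the swap, which is the behavior the guard rail is asking for.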