A comprehensive collection of Agent Skills for context engineering, multi-agent architectures, and production agent systems.
skills/tool-design/SKILL.md
---
name: tool-design
description: This skill should be used when the user asks to "design agent tools", "create tool descriptions", "reduce tool complexity", "implement MCP tools", or mentions tool consolidation, architectural reduction, tool naming conventions, or agent-tool interfaces.
---

# Tool Design for Agents

Design every tool as a contract between a deterministic system and a non-deterministic agent. Unlike human-facing APIs, agent-facing tools must make the contract unambiguous through the description alone -- agents infer intent from descriptions and generate calls that must match expected formats. Every ambiguity becomes a potential failure mode that no amount of prompt engineering can fix.

## When to Activate

Activate this skill when:

- Creating new tools for agent systems
- Debugging tool-related failures or misuse
- Optimizing existing tool sets for better agent performance
- Designing tool APIs from scratch
- Evaluating third-party tools for agent integration
- Standardizing tool conventions across a codebase

## Core Concepts

Design tools around the consolidation principle: if a human engineer cannot definitively say which tool should be used in a given situation, an agent cannot be expected to do better. Reduce the tool set until each tool has one unambiguous purpose, because agents select tools by comparing descriptions and any overlap introduces selection errors.

Treat every tool description as prompt engineering that shapes agent behavior. The description is not documentation for humans -- it is injected into the agent's context and directly steers reasoning. Write descriptions that answer what the tool does, when to use it, and what it returns, because these three questions are exactly what agents evaluate during tool selection.

## Detailed Topics

### The Tool-Agent Interface

**Tools as Contracts**
Design each tool as a self-contained contract.
When humans call APIs, they read docs, understand conventions, and make appropriate requests. Agents must infer the entire contract from a single description block. Make the contract unambiguous by including format examples, expected patterns, and explicit constraints. Omit nothing that a caller needs to know, because agents cannot ask clarifying questions before making a call.

**Tool Description as Prompt**
Write tool descriptions knowing they load directly into agent context and collectively steer behavior. A vague description like "Search the database" with cryptic parameter names forces the agent to guess -- and guessing produces incorrect calls. Instead, include usage context, parameter format examples, and sensible defaults. Every word in the description either helps or hurts tool selection accuracy.

**Namespacing and Organization**
Namespace tools under common prefixes as the collection grows, because agents benefit from hierarchical grouping. When an agent needs database operations, it routes to the `db_*` namespace; when it needs web interactions, it routes to `web_*`. Without namespacing, agents must evaluate every tool in a flat list, which degrades selection accuracy as the count grows.

### The Consolidation Principle

**Single Comprehensive Tools**
Build single comprehensive tools instead of multiple narrow tools that overlap. Rather than implementing `list_users`, `list_events`, and `create_event` separately, implement `schedule_event` that finds availability and schedules in one call. The comprehensive tool handles the full workflow internally, removing the agent's burden of chaining calls in the correct order.

**Why Consolidation Works**
Apply consolidation because agents have limited context and attention. Each tool in the collection competes for attention during tool selection, each description consumes context budget tokens, and overlapping functionality creates ambiguity.
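The consolidated tool can be sketched as follows; `schedule_event`, its in-memory calendar, and all names are hypothetical, chosen only to illustrate collapsing the `list_users`/`list_events`/`create_event` chain into one call:

```python
from datetime import datetime, timedelta

# Hypothetical in-memory calendars, used only to illustrate consolidation.
CALENDARS = {
    "alice": [(datetime(2025, 1, 6, 9), datetime(2025, 1, 6, 10))],
    "bob": [],
}

def schedule_event(attendees: list[str], duration_minutes: int,
                   earliest: datetime) -> dict:
    """Find the first slot free for all attendees and book it.

    Replaces the narrow list_users / list_events / create_event chain:
    availability lookup and booking happen inside one call, so the agent
    never has to sequence intermediate calls correctly.
    """
    slot = earliest
    step = timedelta(minutes=15)
    duration = timedelta(minutes=duration_minutes)
    while True:
        end = slot + duration
        busy = any(
            s < end and slot < e          # interval overlap check
            for a in attendees
            for s, e in CALENDARS.get(a, [])
        )
        if not busy:
            for a in attendees:
                CALENDARS.setdefault(a, []).append((slot, end))
            return {"attendees": attendees, "start": slot, "end": end}
        slot += step

# One call does what previously took three chained calls.
booking = schedule_event(["alice", "bob"], 30, datetime(2025, 1, 6, 9))
```

The agent supplies only the intent (attendees, duration, earliest start); the ordering of lookup and booking is an internal detail it can no longer get wrong.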
Consolidation eliminates redundant descriptions, removes selection ambiguity, and shrinks the effective tool set. Vercel demonstrated this principle by reducing their agent from 17 specialized tools to 2 general-purpose tools and achieving better performance -- fewer tools meant less confusion and more reliable tool selection.

**When Not to Consolidate**
Keep tools separate when they have fundamentally different behaviors, serve different contexts, or must be callable independently. Over-consolidation creates a different problem: a single tool with too many parameters and modes becomes hard for agents to parameterize correctly.

### Architectural Reduction

Push the consolidation principle to its logical extreme by removing most specialized tools in favor of primitive, general-purpose capabilities. Production evidence shows this approach can outperform sophisticated multi-tool architectures.

**The File System Agent Pattern**
Provide direct file system access through a single command execution tool instead of building custom tools for data exploration, schema lookup, and query validation. The agent uses standard Unix utilities (`grep`, `cat`, `find`, `ls`) to explore and operate on the system. This works because file systems are a proven abstraction that models understand deeply, standard tools have predictable behavior, agents can chain primitives flexibly rather than being constrained to predefined workflows, and good documentation in files replaces summarization tools.

**When Reduction Outperforms Complexity**
Choose reduction when the data layer is well-documented and consistently structured, the model has sufficient reasoning capability, specialized tools were constraining rather than enabling the model, or more time is spent maintaining scaffolding than improving outcomes.
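Under the file system agent pattern, the entire tool surface can collapse to one command-execution primitive. This is a minimal sketch, not a hardened sandbox; `run_command` and its allowlist are hypothetical names:

```python
import shlex
import subprocess

# Hypothetical allowlist: the agent gets Unix primitives, not bespoke tools.
ALLOWED = {"grep", "cat", "find", "ls", "head", "wc"}

def run_command(command: str, timeout: int = 10) -> dict:
    """Execute an allowlisted shell command and return its output.

    One primitive replaces custom exploration tools: the agent chains
    grep/cat/find itself instead of calling predefined workflows.
    """
    argv = shlex.split(command)
    name = argv[0] if argv else ""
    if name not in ALLOWED:
        # Actionable error: state what went wrong and what is allowed.
        return {"ok": False,
                "error": f"Command {name!r} not allowed; "
                         f"use one of: {sorted(ALLOWED)}"}
    proc = subprocess.run(argv, capture_output=True, text=True,
                          timeout=timeout)
    return {"ok": proc.returncode == 0,
            "stdout": proc.stdout, "stderr": proc.stderr}

result = run_command("ls .")
```

Note that the error path follows the error-design guidance later in this skill: it names the rejected command and lists valid alternatives, so the agent can self-correct on the next call.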
Avoid reduction when underlying data is messy or poorly documented, the domain requires specialized knowledge the model lacks, safety constraints must limit agent actions, or operations genuinely benefit from structured workflows.

**Build for Future Models**
Design minimal architectures that benefit from model improvements rather than sophisticated architectures that lock in current limitations. Ask whether each tool enables new capabilities or constrains reasoning the model could handle on its own -- tools built as "guardrails" often become liabilities as models improve.

See [Architectural Reduction Case Study](./references/architectural_reduction.md) for production evidence.

### Tool Description Engineering

**Description Structure**
Structure every tool description to answer four questions:

1. What does the tool do? State exactly what the tool accomplishes -- avoid vague language like "helps with" or "can be used for."
2. When should it be used? Specify direct triggers ("User asks about pricing") and indirect signals ("Need current market rates").
3. What inputs does it accept? Describe each parameter with types, constraints, defaults, and format examples.
4. What does it return? Document the output format, structure, successful response examples, and error conditions.

**Default Parameter Selection**
Set defaults to reflect common use cases. Defaults reduce agent burden by eliminating unnecessary parameter specification and prevent errors from omitted parameters. Choose defaults that produce useful results without requiring the agent to understand every option.

### Response Format Optimization

Offer response format options (concise vs. detailed) because tool response size significantly impacts context usage. Concise format returns essential fields only, suitable for confirmations. Detailed format returns complete objects, suitable when full context drives decisions.
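The concise/detailed split can be sketched like this; the record shape and field names are hypothetical:

```python
# Hypothetical customer record used to illustrate response formats.
RECORD = {
    "customer_id": "CUST-000001",
    "name": "Ada Lovelace",
    "email": "ada@example.com",
    "tier": "enterprise",
    "billing_history": ["..."],  # large, rarely needed in context
    "support_tickets": ["..."],  # large, rarely needed in context
}

CONCISE_FIELDS = ("customer_id", "name", "tier")

def format_response(record: dict, format: str = "concise") -> dict:
    """Return essential fields by default; the full record only on request.

    Concise responses keep tool output small so it does not crowd the
    agent's context; detailed responses are opt-in for decisions that
    genuinely need every field.
    """
    if format == "detailed":
        return record
    return {k: record[k] for k in CONCISE_FIELDS}

concise = format_response(RECORD)               # three key fields
detailed = format_response(RECORD, "detailed")  # complete record
```

The default is deliberately the cheap path: an agent that omits `format` gets the token-efficient response, matching the default-selection guidance above.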
Document when to use each format in the tool description so agents learn to select appropriately.

### Error Message Design

Design error messages for two audiences: developers debugging issues and agents recovering from failures. For agents, every error message must be actionable -- it must state what went wrong and how to correct it. Include retry guidance for retryable errors, corrected format examples for input errors, and specific missing fields for incomplete requests. An error that says only "failed" provides zero recovery signal.

### Tool Definition Schema

Establish a consistent schema across all tools. Use the verb-noun pattern for tool names (`get_customer`, `create_order`), consistent parameter names across tools (always `customer_id`, never sometimes `id` and sometimes `identifier`), and consistent return field names. Consistency reduces the cognitive load on agents and improves cross-tool generalization.

### Tool Collection Design

Limit tool collections to 10-20 tools for most applications, because research shows description overlap causes model confusion and more tools do not always lead to better outcomes. When more tools are genuinely needed, use namespacing to create logical groupings. Implement selection mechanisms: tool grouping by domain, example-based selection hints, and umbrella tools that route to specialized sub-tools.

### MCP Tool Naming Requirements

Always use fully qualified tool names with MCP (Model Context Protocol) to avoid "tool not found" errors.

Format: `ServerName:tool_name`

```python
# Correct: Fully qualified names
"Use the BigQuery:bigquery_schema tool to retrieve table schemas."
"Use the GitHub:create_issue tool to create issues."

# Incorrect: Unqualified names
"Use the bigquery_schema tool..."  # May fail with multiple servers
```

Without the server prefix, agents may fail to locate tools when multiple MCP servers are available.
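That failure mode can be audited mechanically. This sketch assumes a plain registry of unqualified tool names per server (all names hypothetical) and flags any name exposed by more than one server:

```python
from collections import Counter

# Hypothetical registry: unqualified tool names exposed by each MCP server.
SERVERS = {
    "BigQuery": ["bigquery_schema", "run_query", "search"],
    "GitHub": ["create_issue", "search"],
}

def find_collisions(servers: dict[str, list[str]]) -> dict[str, list[str]]:
    """Map each colliding unqualified name to the servers exposing it.

    Any name appearing under more than one server is ambiguous and must
    always be referenced in fully qualified ServerName:tool_name form.
    """
    counts = Counter(name for tools in servers.values() for name in tools)
    return {
        name: [s for s, tools in servers.items() if name in tools]
        for name, count in counts.items() if count > 1
    }

def qualify(server: str, tool: str) -> str:
    """Build the fully qualified reference an agent prompt should use."""
    return f"{server}:{tool}"

collisions = find_collisions(SERVERS)  # {'search': ['BigQuery', 'GitHub']}
```

Running a check like this whenever a new provider is added catches collisions before agents hit them at call time.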
Establish naming conventions that include server context in all tool references.

### Using Agents to Optimize Tools

Feed observed tool failures back to an agent to diagnose issues and improve descriptions. Production testing shows this approach achieves a 40% reduction in task completion time by helping future agents avoid mistakes.

**The Tool-Testing Agent Pattern**:

```python
def optimize_tool_description(tool_spec, failure_examples):
    """
    Use an agent to analyze tool failures and improve descriptions.

    Process:
    1. Agent attempts to use tool across diverse tasks
    2. Collect failure modes and friction points
    3. Agent analyzes failures and proposes improvements
    4. Test improved descriptions against same tasks
    """
    prompt = f"""
    Analyze this tool specification and the observed failures.

    Tool: {tool_spec}

    Failures observed:
    {failure_examples}

    Identify:
    1. Why agents are failing with this tool
    2. What information is missing from the description
    3. What ambiguities cause incorrect usage

    Propose an improved tool description that addresses these issues.
    """

    return get_agent_response(prompt)
```

This creates a feedback loop: agents using tools generate failure data, which agents then use to improve tool descriptions, which reduces future failures.

### Testing Tool Design

Evaluate tool designs against five criteria: unambiguity, completeness, recoverability, efficiency, and consistency. Test by presenting representative agent requests and evaluating the resulting tool calls against expected behavior.

## Practical Guidance

### Tool Selection Framework

When designing tool collections:

1. Identify distinct workflows agents must accomplish
2. Group related actions into comprehensive tools
3. Ensure each tool has a clear, unambiguous purpose
4. Document error cases and recovery paths
5. Test with actual agent interactions

## Examples

**Example 1: Well-Designed Tool**

```python
def get_customer(customer_id: str, format: str = "concise"):
    """
    Retrieve customer information by ID.

    Use when:
    - User asks about specific customer details
    - Need customer context for decision-making
    - Verifying customer identity

    Args:
        customer_id: Format "CUST-######" (e.g., "CUST-000001")
        format: "concise" for key fields, "detailed" for complete record

    Returns:
        Customer object with requested fields

    Errors:
        NOT_FOUND: Customer ID not found
        INVALID_FORMAT: ID must match CUST-###### pattern
    """
```

**Example 2: Poor Tool Design**

This example demonstrates several tool design anti-patterns:

```python
def search(query):
    """Search the database."""
    pass
```

**Problems with this design:**

1. **Vague name**: "search" is ambiguous -- search what, for what purpose?
2. **Missing parameters**: What database? What format should the query take?
3. **No return description**: What does this function return? A list? A string? How are errors reported?
4. **No usage context**: When should an agent use this versus other tools?
5. **No error handling**: What happens if the database is unavailable?

**Failure modes:**

- Agents may call this tool when they should use a more specific tool
- Agents cannot determine the correct query format
- Agents cannot interpret results
- Agents cannot recover from failures

## Guidelines

1. Write descriptions that answer what the tool does, when to use it, and what it returns
2. Use consolidation to reduce ambiguity
3. Implement response format options for token efficiency
4. Design error messages for agent recovery
5. Establish and follow consistent naming conventions
6. Limit tool count and use namespacing for organization
7. Test tool designs with actual agent interactions
8. Iterate based on observed failure modes
9. Question whether each tool enables or constrains the model
10. Prefer primitive, general-purpose tools over specialized wrappers
11. Invest in documentation quality over tooling sophistication
12. Build minimal architectures that benefit from model improvements

## Gotchas

1. **Vague descriptions**: Descriptions like "Search the database for customer information" leave too many questions unanswered. State the exact database, query format, and return shape.
2. **Cryptic parameter names**: Parameters named `x`, `val`, or `param1` force agents to guess meaning. Use descriptive names that convey purpose without reading further documentation.
3. **Missing error recovery guidance**: Tools that fail with generic messages like "Error occurred" provide no recovery signal. Every error response must tell the agent what went wrong and what to try next.
4. **Inconsistent naming across tools**: Using `id` in one tool, `identifier` in another, and `customer_id` in a third creates confusion. Standardize parameter names across the entire tool collection.
5. **MCP namespace collisions**: When multiple MCP tool providers register tools with similar names (e.g., two servers both exposing `search`), agents cannot disambiguate. Always use the fully qualified `ServerName:tool_name` format and audit for collisions when adding new providers.
6. **Tool description rot**: Descriptions become inaccurate as underlying APIs evolve -- parameters get added, return formats change, error codes shift. Treat descriptions as code: version them, review them during API changes, and test them against current behavior.
7. **Over-consolidation**: Making a single tool handle too many workflows produces parameter lists so large that agents struggle to select the right combination. If a tool requires more than 8-10 parameters or serves fundamentally different use cases, split it.
8. **Parameter explosion**: Too many optional parameters overwhelm agent decision-making. Each parameter the agent must evaluate adds cognitive load. Provide sensible defaults, group related options into format presets, and move rarely used parameters into an `options` object.
9. **Missing error context**: Error messages that say only "failed" or "invalid input" without specifying which input, why it failed, or what a valid input looks like leave agents unable to self-correct. Include the invalid value, the expected format, and a concrete example in every error response.

## Integration

This skill connects to:

- context-fundamentals -- How tools interact with context
- multi-agent-patterns -- Specialized tools per agent
- evaluation -- Evaluating tool effectiveness

## References

Internal references:

- [Best Practices Reference](./references/best_practices.md) -- Read when: designing a new tool from scratch or auditing an existing tool collection for quality gaps
- [Architectural Reduction Case Study](./references/architectural_reduction.md) -- Read when: considering removing specialized tools in favor of primitives, or evaluating whether a complex tool architecture is justified

Related skills in this collection:

- context-fundamentals -- Tool context interactions
- evaluation -- Tool testing patterns

External resources:

- MCP (Model Context Protocol) documentation -- Read when: implementing tools for multi-server agent environments or debugging tool routing failures
- Framework tool conventions -- Read when: adopting a new agent framework and need to map tool design principles to framework-specific APIs
- API design best practices for agents -- Read when: translating existing human-facing APIs into agent-facing tool interfaces
- Vercel d0 agent architecture case study -- Read when: evaluating whether to consolidate tools or seeking production evidence for architectural reduction

---

## Skill Metadata

**Created**: 2025-12-20
**Last Updated**: 2026-03-17
**Author**: Agent Skills for Context Engineering Contributors
**Version**: 2.0.0