skills/project-development/references/case-studies.md
# Case Studies: LLM Project Development

This reference contains detailed case studies of production LLM projects that demonstrate effective development methodology. Each case study analyzes the problem, approach, architecture, and lessons learned.

## Case Study 1: Karpathy's HN Time Capsule

**Source**: https://github.com/karpathy/hn-time-capsule

### Problem Statement

Analyze Hacker News discussions from 10 years ago and grade commenters on how prescient their predictions were with the benefit of hindsight.

### Task-Model Fit Analysis

This task is well-suited for LLM processing because:

| Factor | Assessment |
|--------|------------|
| Synthesis | Combining article content + multiple comment threads |
| Subjective judgment | Grading predictions against known outcomes |
| Domain knowledge | Model has knowledge of what actually happened |
| Error tolerance | Wrong grade on one comment does not break the system |
| Batch processing | Each article is independent |
| Natural language output | Human-readable analysis is the goal |

### Development Methodology

**Step 1: Manual Prototype**

Before building any automation, Karpathy copy-pasted one article + comment thread into ChatGPT to validate the approach. This took minutes and confirmed:
- The model could produce insightful hindsight analysis
- The output format worked for the intended use case
- The quality exceeded what he could do manually

**Step 2: Agent-Assisted Implementation**

Used Opus 4.5 to build the pipeline in approximately 3 hours. The agent handled:
- HTML parsing for HN frontpage
- Algolia API integration for comments
- Prompt template design
- Output parsing logic
- Static HTML rendering

**Step 3: Batch Execution**

- 930 LLM queries (31 days × 30 articles)
- 15 parallel workers
- ~$58 total cost
- ~1 hour execution time

### Pipeline Architecture

```
fetch → prompt → analyze → parse → render
```

**Stage 1: Fetch**
- Download HN frontpage for target date
- Fetch article content via HTTP
- Fetch comments via Algolia API
- Output: `data/{date}/{item_id}/meta.json`, `article.txt`, `comments.json`

**Stage 2: Prompt**
- Load article metadata and content
- Load comment tree
- Generate markdown prompt from template
- Output: `data/{date}/{item_id}/prompt.md`

**Stage 3: Analyze**
- Submit prompt to GPT 5.1 Thinking API
- Parallel execution with ThreadPoolExecutor
- Output: `data/{date}/{item_id}/response.md`

**Stage 4: Parse**
- Extract grades from "Final grades" section via regex
- Extract interestingness score via regex
- Aggregate grades across all articles
- Output: `data/{date}/{item_id}/grades.json`, `score.json`

**Stage 5: Render**
- Generate static HTML with embedded JavaScript
- Create day pages with article navigation
- Create Hall of Fame with aggregated rankings
- Output: `output/{date}/index.html`, `output/hall-of-fame.html`
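
The stages share one convention: an item's directory on disk is the only state, and a stage runs only when its output file is missing, which is what makes re-runs cheap and debugging easy. Below is a minimal sketch of that pattern; the function names and the `work` callback are illustrative, not the repo's actual code.

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path
from typing import Callable

def run_stage(item_dir: Path, output_name: str, work: Callable[[Path], str]) -> None:
    """Run one stage for one item; skip it if the output file already exists."""
    out_path = item_dir / output_name
    if out_path.exists():  # idempotent: re-runs only fill in missing outputs
        return
    out_path.write_text(work(item_dir))  # e.g. build prompt.md from meta.json + comments.json

def run_stage_for_all(data_root: Path, output_name: str,
                      work: Callable[[Path], str], workers: int = 15) -> None:
    item_dirs = [p for p in data_root.glob("*/*") if p.is_dir()]  # data/{date}/{item_id}
    with ThreadPoolExecutor(max_workers=workers) as pool:  # parallel, as in Stage 3
        list(pool.map(lambda d: run_stage(d, output_name, work), item_dirs))

# Example (build_prompt is a hypothetical stage function):
# run_stage_for_all(Path("data"), "prompt.md", build_prompt, workers=15)
```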

### Structured Output Design

The prompt template specifies the exact output format:

```
Let's use our benefit of hindsight now in 6 sections:

1. Give a brief summary of the article and the discussion thread.
2. What ended up happening to this topic?
3. Give out awards for "Most prescient" and "Most wrong" comments.
4. Mention any other fun or notable aspects.
5. Give out grades to specific people for their comments.
6. At the end, give a final score (from 0-10).

As for the format of Section 5, use the header "Final grades" and follow it
with simply an unordered list in the format of "name: grade (optional comment)".

Please follow the format exactly because I will be parsing it programmatically.
```

Key techniques:
- Numbered sections for structure
- Explicit format specification with examples
- Rationale disclosure ("because I will be parsing it")
- Constrained output (letter grades, 0-10 scores)

### Parsing Implementation

The parsing code handles variations gracefully:

```python
import re

def parse_grades(text: str) -> dict[str, dict]:
    # Match "Final grades" with optional section number or markdown
    pattern = r'(?:^|\n)(?:\d+[\.\)]\s*)?(?:#+ *)?Final grades\s*\n'
    match = re.search(pattern, text, re.IGNORECASE)
    if not match:
        return {}

    # Handle both ASCII and Unicode minus signs
    line_pattern = r'^[\-\*]\s*([^:]+):\s*([A-F][+\-−]?)(?:\s*\(([^)]+)\))?'
    grades = {}
    for name, grade, comment in re.findall(line_pattern, text[match.end():], re.MULTILINE):
        grades[name.strip()] = {"grade": grade, "comment": comment or None}
    return grades
```

### Lessons Learned

1. **Manual validation first**: The 5-minute copy-paste test prevented hours of wasted development.

2. **File system as state**: Each article directory contains all intermediate outputs, making debugging trivial.

3. **Idempotent stages**: Re-running only processes items that lack output files.

4. **Agent-assisted development**: 3 hours to working code by focusing on requirements, not implementation details.

5. **Parallel execution**: 15 workers reduced execution time without increasing token costs.

---

## Case Study 2: Vercel d0 Architectural Reduction

**Source**: https://vercel.com/blog/we-removed-80-percent-of-our-agents-tools

### Problem Statement

Build a text-to-SQL agent that enables anyone at Vercel to query analytics data through natural language questions in Slack.

### Initial Approach (Failed)

The team built a sophisticated system with:
- 17 specialized tools (schema lookup, query validation, error recovery, etc.)
- Heavy prompt engineering to constrain reasoning
- Careful context management
- Hand-coded retrieval for schema information

**Results**:
- 80% success rate
- 274.8 seconds average execution time
- ~102k tokens average usage
- ~12 steps average
- Constant maintenance burden

### The Problem

The team was solving problems the model could handle on its own:
- Pre-filtering context
- Constraining options
- Wrapping every interaction in validation logic
- Building tools to "protect" the model from complexity

Every edge case required another patch. Every model update required re-calibrating constraints. More time was spent maintaining scaffolding than improving outcomes.

### Architectural Reduction

The hypothesis: What if we just give Claude access to the raw files and let it figure things out?

**New architecture**:
- 2 tools total: ExecuteCommand (bash) + ExecuteSQL
- Direct file system access via sandbox
- Semantic layer as YAML/Markdown/JSON files
- Standard Unix utilities (grep, cat, find, ls)

```javascript
const agent = new ToolLoopAgent({
  model: "anthropic/claude-opus-4.5",
  tools: {
    ExecuteCommand: executeCommandTool(sandbox),
    ExecuteSQL,
  },
});
```

### Results

| Metric | Before (17 tools) | After (2 tools) | Change |
|--------|-------------------|-----------------|--------|
| Avg execution time | 274.8s | 77.4s | 3.5x faster |
| Success rate | 80% | 100% | +20% |
| Avg token usage | ~102k | ~61k | 37% fewer |
| Avg steps | ~12 | ~7 | 42% fewer |

The worst case before: 724 seconds, 100 steps, 145k tokens, and still failed.
Same query after: 141 seconds, 19 steps, 67k tokens, succeeded.

### Why It Worked

1. **Good documentation already existed**: The semantic layer files contained dimension definitions, measure calculations, and join relationships. The tools were summarizing what was already legible.

2. **File systems are proven abstractions**: The model understands file systems deeply from training. grep is 50 years old and works perfectly.

3. **Constraints became liabilities**: With better models, the guardrails were limiting performance more than helping.

### Key Lessons

1. **Addition by subtraction**: The best agents might be ones with the fewest tools. Every tool is a choice you are making for the model.

2. **Build for future models**: Models improve faster than tooling. Architectures optimized for today may be over-constrained for tomorrow.

3. **Good context over clever tools**: Invest in documentation, clear naming, and well-structured data. That foundation matters more than sophisticated tooling.

4. **Start simple**: Model + file system + goal. Add complexity only when proven necessary.

---

## Case Study 3: Manus Context Engineering

**Source**: Peak Ji's blog "Context Engineering for AI Agents: Lessons from Building Manus"

### Problem Statement

Build a general-purpose consumer agent that can accomplish complex tasks across 50+ tool calls while maintaining performance and managing costs.

### Core Insight

KV-cache hit rate is the single most important metric for production agents. It directly affects both latency and cost.

- Claude Sonnet cached: $0.30/MTok
- Claude Sonnet uncached: $3.00/MTok
- 10x cost difference

With an average input-to-output ratio of 100:1 in agentic workloads, optimizing for cache hits dominates the cost equation.

### Key Patterns

**1. Append-Only Context**

Never modify previous actions or observations. Ensure deterministic serialization (JSON key ordering must be stable). A single token difference invalidates the cache from that point forward.

Common mistake: Including a timestamp at the beginning of the system prompt kills cache hit rate entirely.
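
A small sketch of what "append-only with deterministic serialization" looks like in code, assuming a plain messages-list representation of context; the roles and helper names are illustrative, not Manus internals.

```python
import json

def serialize(payload: dict) -> str:
    # Stable key order and separators: the same payload always yields the same
    # bytes, so the cached prefix stays identical across steps and runs.
    return json.dumps(payload, sort_keys=True, ensure_ascii=False, separators=(",", ":"))

def append_step(context: list[dict], action: dict, observation: dict) -> None:
    # Append-only: earlier turns are never edited, only new ones are added,
    # so everything already cached remains a valid prefix.
    context.append({"role": "assistant", "content": serialize(action)})
    context.append({"role": "tool", "content": serialize(observation)})

SYSTEM_PROMPT = "You are a research agent."
# Anti-pattern from the case study: prepending a timestamp, e.g.
# f"[{datetime.now()}] You are a research agent.", changes the first tokens of
# every request and invalidates the cache for the entire context.
```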

**2. Mask, Do Not Remove**

Do not dynamically add or remove tools mid-iteration. Tool definitions live near the front of context - any change invalidates the KV-cache for all subsequent content.

Instead, use logit masking during decoding to constrain tool selection without modifying definitions. This maintains cache while still controlling behavior.
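
A hedged sketch of the masking idea follows; this is a generic Python sketch, not Manus code, and the tool names and state logic are invented. The tool definitions sent to the model never change, and the per-step restriction is expressed as a mask applied when the next tool call is decoded.

```python
# All tool definitions stay in the prompt, in the same order, on every step,
# so the cached prefix is never invalidated.
ALL_TOOLS = ["browser_open", "shell_exec", "file_write", "submit_results"]

def allowed_tools(state: str) -> set[str]:
    # Constrain behavior per state without touching the definitions block.
    if state == "must_report":
        return {"submit_results"}                    # force the agent to wrap up
    if state == "planning":
        return set(ALL_TOOLS) - {"submit_results"}   # no early exit while planning
    return set(ALL_TOOLS)

def logit_mask(allowed: set[str]) -> list[bool]:
    # In a real serving stack this maps to a mask over the tokens that can start
    # each tool-call name during constrained decoding.
    return [name in allowed for name in ALL_TOOLS]

print(logit_mask(allowed_tools("must_report")))
# -> [False, False, False, True]: only submit_results is selectable this step
```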

**3. File System as Context**

Treat the file system as unlimited, persistent, agent-operable memory. The model learns to write and read files on demand.

Compression strategies should be restorable:
- Web page content can be dropped if URL is preserved
- Document contents can be omitted if file path remains available

**4. Recitation for Attention**

Manus creates a todo.md file and updates it step-by-step. This is not just organization - it pushes the global plan into the model's recent attention span.

By constantly rewriting objectives at the end of context, the agent avoids "lost in the middle" issues and maintains goal alignment.

**5. Keep Errors In Context**

Do not hide failures. When the model sees a failed action and the resulting error, it implicitly updates beliefs and avoids repeating mistakes.

Erasing failures removes evidence the model needs to adapt.

### Multi-Agent for Context Isolation

The primary goal of sub-agents in Manus is context isolation, not role division. For tasks requiring discrete work:
- Planner assigns tasks to sub-agents with their own context windows
- Simple tasks: pass instructions via function call
- Complex tasks: share full context with sub-agent

Sub-agents have a submit_results tool with constrained output schema. Constrained decoding ensures adherence to the defined format.
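
To make "constrained output schema" concrete, here is a sketch of what such a submit_results tool definition could look like, written in the common name/description/input_schema convention; every field name below is illustrative rather than Manus's actual schema. Constrained decoding against this schema guarantees the sub-agent's report parses.

```python
SUBMIT_RESULTS_TOOL = {
    "name": "submit_results",
    "description": "Report findings back to the planner. Call exactly once, at the end.",
    "input_schema": {
        "type": "object",
        "properties": {
            "status": {"type": "string", "enum": ["success", "partial", "failed"]},
            "summary": {"type": "string", "description": "Short synthesis of what was found"},
            "artifacts": {
                "type": "array",
                "items": {"type": "string"},
                "description": "Paths of files the sub-agent wrote for the planner to read",
            },
        },
        "required": ["status", "summary"],
    },
}
```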

### Layered Action Space

Rather than binding every utility as a tool:
- Small set (<20) of atomic functions: Bash, filesystem access, code execution
- Most actions offload to sandbox layer
- MCP tools exposed through CLI, executed via Bash tool

This reduces tool definition tokens and prevents model confusion from overlapping descriptions.

### Iteration Expectation

Manus has refactored their agent framework five times since launch. The Bitter Lesson suggests structures added for current limitations become constraints as models improve.

Test across model strengths to verify your harness is not limiting performance. Simple, unopinionated designs adapt better to model improvements.

---

## Case Study 4: Anthropic Multi-Agent Research

**Source**: Anthropic blog "How we built our multi-agent research system"

### Problem Statement

Build a research feature that can explore complex topics using multiple parallel agents searching across web, Google Workspace, and integrations.

### Architecture

Orchestrator-worker pattern:
- Lead agent analyzes query and develops strategy
- Lead spawns subagents for parallel exploration
- Subagents return findings to lead for synthesis
- Citation agent processes final output

### Performance Insight

Three factors explained 95% of performance variance in BrowseComp evaluation:
- Token usage: 80% of variance
- Number of tool calls: additional factor
- Model choice: additional factor

Multi-agent architectures effectively scale token usage for tasks exceeding single-agent limits.

### Token Economics

- Chat interactions: baseline
- Single agent: ~4x more tokens than chat
- Multi-agent: ~15x more tokens than chat

Multi-agent requires high-value tasks to justify the cost.

### Prompting Principles

1. **Think like your agents**: Build simulations, watch step-by-step, identify failure modes.

2. **Teach delegation**: Subagents need objective, output format, tools/sources guidance, and clear boundaries.

3. **Scale effort to complexity**: Explicit guidelines for agent/tool call counts by task type.

4. **Tool design is critical**: Distinct purpose and clear description for each tool. Bad descriptions send agents down wrong paths entirely.

5. **Let agents improve themselves**: Claude 4 models can diagnose prompt failures and suggest improvements. Tool-testing agents can rewrite tool descriptions to avoid common mistakes.

6. **Start wide, then narrow**: Broad queries first, evaluate landscape, then drill into specifics.

7. **Guide thinking process**: Extended thinking mode as controllable scratchpad for planning.

8. **Parallel tool calling**: 3-5 subagents in parallel, 3+ tools per subagent in parallel. Cut research time by up to 90%.

### Evaluation Approach

- Start with ~20 representative queries immediately
- LLM-as-judge with rubric: factual accuracy, citation accuracy, completeness, source quality, tool efficiency
- Human evaluation catches edge cases automation misses
- Focus on end-state evaluation for multi-turn agents

---

## Cross-Case Patterns

### Common Success Factors

1. **Manual validation before automation**: All successful projects validated task-model fit with simple tests first.

2. **File system as foundation**: Whether for state management (Karpathy), tool interface (Vercel), or memory (Manus), the file system provides proven abstractions.

3. **Architectural simplicity**: Reduction outperformed complexity in multiple cases. Start minimal, add only what proves necessary.

4. **Structured outputs with robust parsing**: Explicit format specifications combined with flexible parsing that handles variations.

5. **Iteration expectation**: No project got architecture right on the first try. Build for change.

### Common Failure Patterns

1. **Over-constraining models**: Guardrails that helped with weaker models become liabilities as capabilities improve.

2. **Tool proliferation**: More tools often means more confusion and worse performance.

3. **Hiding errors**: Removing failures from context prevents models from learning.

4. **Premature optimization**: Adding complexity before basic functionality works.

5. **Ignoring economics**: Token costs compound quickly; estimation and tracking are essential.
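
On the last point, a back-of-the-envelope estimate before a batch run is usually enough to catch cost surprises. A minimal sketch follows; all prices and token counts are placeholders to be replaced with your provider's rate card and measurements from a small pilot run.

```python
def estimate_batch_cost(n_queries: int,
                        avg_input_tokens: int,
                        avg_output_tokens: int,
                        input_price_per_mtok: float,
                        output_price_per_mtok: float,
                        cache_hit_rate: float = 0.0,
                        cached_input_price_per_mtok: float = 0.0) -> float:
    """Rough pre-run cost estimate in dollars; refine after a pilot batch."""
    cached = avg_input_tokens * cache_hit_rate
    uncached = avg_input_tokens - cached
    per_query = (uncached * input_price_per_mtok
                 + cached * cached_input_price_per_mtok
                 + avg_output_tokens * output_price_per_mtok) / 1_000_000
    return n_queries * per_query

# Example shaped like Case Study 1's batch of 930 queries, with placeholder values:
print(f"${estimate_batch_cost(930, 50_000, 2_000, 3.00, 15.00, 0.5, 0.30):,.2f}")
```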