# Deep Research Skill for Claude Code

Enterprise-grade research engine for Claude Code. Produces citation-backed reports with source credibility scoring, multi-provider search, and automated validation.

## Installation

```bash
# Clone into Claude Code skills directory
git clone https://github.com/199-biotechnologies/claude-deep-research-skill.git ~/.claude/skills/deep-research
```

No additional dependencies required for basic usage.

### Optional: search-cli (multi-provider search)

For aggregated search across Brave, Serper, Exa, Jina, and Firecrawl:

```bash
brew tap 199-biotechnologies/tap && brew install search-cli
search config set keys.brave YOUR_KEY  # configure at least one provider
```

## Usage

```
deep research on the current state of quantum computing
```

```
deep research in ultradeep mode: compare PostgreSQL vs Supabase for our stack
```

## Research Modes

| Mode | Phases | Duration | Best For |
|------|--------|----------|----------|
| Quick | 3 | 2-5 min | Initial exploration |
| Standard | 6 | 5-10 min | Most research questions |
| Deep | 8 | 10-20 min | Complex topics, critical decisions |
| UltraDeep | 8+ | 20-45 min | Comprehensive reports, maximum rigor |

## Pipeline

Scope → Plan → **Retrieve** (parallel search + agents) → Triangulate → Outline Refinement → Synthesize → Critique (with loop-back) → Refine → Package

Key features:
- **Step 0**: Retrieves the current date before any searches (prevents stale training-data year assumptions)
- **Parallel retrieval**: 5-10 concurrent searches plus 2-3 focused sub-agents returning structured evidence objects
- **First Finish Search**: Adaptive quality thresholds by mode
- **Critique loop-back**: Phase 6 can return to Phase 3 with delta-queries if critical gaps are found
- **Multi-persona red teaming**: Skeptical Practitioner, Adversarial Reviewer, Implementation Engineer (Deep/UltraDeep)
- **Disk-persisted citations**: `sources.json` survives context compaction and continuation agents

## Output

Reports are saved to `~/Documents/[Topic]_Research_[Date]/`:
- Markdown (primary source of truth)
- HTML (McKinsey-style, auto-opened in browser)
- PDF (professional print via WeasyPrint)

Reports longer than 18K words auto-continue via recursive agent spawning with context preservation.

## Quality Standards

- 10+ sources, 3+ per major claim
- Executive summary of 200-400 words
- Findings of 600-2,000 words each, prose-first (>=80%)
- Full bibliography with URLs, no placeholders
- Automated validation: `validate_report.py` (9 checks) + `verify_citations.py` (DOI/URL/hallucination detection)
- Validation loop: validate → fix → retry (max 3 cycles)

## Search Tools

| Tool | Priority | Setup |
|------|----------|-------|
| search-cli | **Primary**: all searches go here first | `brew install search-cli` + API keys |
| WebSearch | Fallback: used if search-cli fails or is rate-limited | None (built-in) |
| Exa MCP | Optional: semantic/neural search alongside search-cli | MCP config |

## Architecture

```
deep-research/
├── SKILL.md                          # Skill entry point (lean, ~100 lines)
├── reference/
│   ├── methodology.md                # 8-phase pipeline details
│   ├── report-assembly.md            # Progressive generation strategy
│   ├── quality-gates.md              # Validation standards
│   ├── html-generation.md            # McKinsey HTML conversion
│   ├── continuation.md               # Auto-continuation protocol
│   └── weasyprint_guidelines.md      # PDF generation
├── templates/
│   ├── report_template.md            # Report structure template
│   └── mckinsey_report_template.html # HTML report template
├── scripts/
│   ├── validate_report.py            # 9-check structure validator
│   ├── verify_citations.py           # DOI/URL/hallucination checker
│   ├── source_evaluator.py           # Source credibility scoring
│   ├── citation_manager.py           # Citation tracking
│   ├── md_to_html.py                 # Markdown to HTML converter
│   ├── verify_html.py                # HTML verification
│   └── research_engine.py            # Core orchestration engine
└── tests/
    └── fixtures/                     # Test report fixtures
```

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 2.3.1 | 2026-03-19 | Template/validator harmonization, structured evidence, critique loop-back, multi-persona red teaming |
| 2.3 | 2026-03-19 | Contract harmonization, search-cli integration, dynamic year detection, disk-persisted citations, validation loops |
| 2.2 | 2025-11-05 | Auto-continuation system for unlimited length |
| 2.1 | 2025-11-05 | Progressive file assembly |
| 1.0 | 2025-11-04 | Initial release |

## License

MIT - modify as needed for your workflow.
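
---

The "validate → fix → retry (max 3 cycles)" loop from Quality Standards can be sketched as a generic pattern. This is an illustrative sketch only, not the skill's actual implementation; the function name and callable signatures here are assumptions, and `validate_report.py`'s real interface is not documented above.

```python
def validate_fix_retry(validate, fix, report, max_cycles=3):
    """Run a validate -> fix -> retry loop, giving up after max_cycles.

    `validate` returns a list of problems (empty list means the report passes);
    `fix` returns a revised report. Both are caller-supplied callables,
    standing in for scripts like validate_report.py.
    """
    for cycle in range(1, max_cycles + 1):
        problems = validate(report)
        if not problems:
            return report, cycle - 1  # passed; fix cycles actually used
        report = fix(report, problems)
    return report, max_cycles  # still failing after the cycle budget


# Toy example: a "report" that passes once it contains a bibliography section.
validate = lambda r: [] if "## Bibliography" in r else ["missing bibliography"]
fix = lambda r, problems: r + "\n## Bibliography\n"
final, cycles = validate_fix_retry(validate, fix, "# Report\n")
```

The cycle cap matters: an automated fixer that never converges would otherwise loop forever, so the report is returned as-is (still failing) once the budget is spent.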