---
name: firecrawl-crawl
description: |
  Bulk extract content from an entire website or site section. Use this skill when the user wants to crawl a site, extract all pages from a docs section, bulk-scrape multiple pages following links, or says "crawl", "get all the pages", "extract everything under /docs", "bulk extract", or needs content from many pages on the same site. Handles depth limits, path filtering, and concurrent extraction.
allowed-tools:
  - Bash(firecrawl *)
  - Bash(npx firecrawl *)
---

# firecrawl crawl

Bulk extract content from a website. Crawls pages by following links, up to a configurable depth and page limit.

## When to use

- You need content from many pages on a site (e.g., all `/docs/`)
- You want to extract an entire site section
- Step 4 in the [workflow escalation pattern](firecrawl-cli): search → scrape → map → **crawl** → interact

## Quick start

```bash
# Crawl a docs section
firecrawl crawl "<url>" --include-paths /docs --limit 50 --wait -o .firecrawl/crawl.json

# Full crawl with depth limit
firecrawl crawl "<url>" --max-depth 3 --wait --progress -o .firecrawl/crawl.json

# Check status of a running crawl
firecrawl crawl <job-id>
```

## Options

| Option                    | Description                                     |
| ------------------------- | ----------------------------------------------- |
| `--wait`                  | Wait for the crawl to complete before returning |
| `--progress`              | Show progress while waiting                     |
| `--limit <n>`             | Max pages to crawl                              |
| `--max-depth <n>`         | Max link depth to follow                        |
| `--include-paths <paths>` | Only crawl URLs matching these paths            |
| `--exclude-paths <paths>` | Skip URLs matching these paths                  |
| `--delay <ms>`            | Delay between requests                          |
| `--max-concurrency <n>`   | Max parallel crawl workers                      |
| `--pretty`                | Pretty-print JSON output                        |
| `-o, --output <path>`     | Output file path                                |

## Tips

- Always use `--wait` when you need the results immediately. Without it, crawl returns a job ID for async polling.
- Use `--include-paths` to scope the crawl — don't crawl an entire site when you only need one section.
- Crawl consumes credits per page. Check `firecrawl credit-usage` before large crawls.

## See also

- [firecrawl-scrape](../firecrawl-scrape/SKILL.md) — scrape individual pages
- [firecrawl-map](../firecrawl-map/SKILL.md) — discover URLs before deciding to crawl
- [firecrawl-download](../firecrawl-download/SKILL.md) — download a site to local files (uses map + scrape)
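The async flow in the tips (start a crawl without `--wait`, then re-run `firecrawl crawl <job-id>` to check status) can be wrapped in a small polling loop. The sketch below is a hypothetical helper, not part of the firecrawl CLI: the `poll_until` name, its parameters, and the assumption that the status output contains a word like `completed` are all illustrative; check your CLI version's actual output before relying on it.

```shell
# Hypothetical polling helper for the async crawl flow described in Tips.
# Runs a status command until its output contains a pattern, retrying up
# to $3 times with $4 seconds between attempts.
poll_until() {
  cmd=$1 pattern=$2 tries=${3:-30} delay=${4:-10}
  i=0
  while [ "$i" -lt "$tries" ]; do
    out=$($cmd)                        # e.g. cmd="firecrawl crawl $JOB_ID"
    case "$out" in
      *"$pattern"*) printf '%s\n' "$out"; return 0 ;;
    esac
    i=$((i + 1))
    sleep "$delay"
  done
  return 1                             # gave up before seeing the pattern
}

# Usage sketch (JOB_ID and the "completed" marker are assumptions):
#   JOB_ID=...   # printed by `firecrawl crawl <url>` when run without --wait
#   poll_until "firecrawl crawl $JOB_ID" completed 60 10
```

Passing the status command as a string keeps the helper independent of firecrawl itself, so the same loop works for any CLI whose status check is a re-runnable command.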