Firecrawl
Plugin (Active). Scrape, search, crawl, and map the web with a single command.
Purpose: provide a unified command-line interface for a wide range of web data extraction and interaction tasks, letting users automate web research and data collection.
Features
- Web scraping and content extraction
- Web search with optional content scraping
- Site mapping and URL discovery
- Website crawling for bulk content
- AI-powered structured data extraction
- Live browser interaction for dynamic pages
- Local file parsing (PDF, DOCX, etc.)
- Authentication management for Firecrawl API
Use cases
- Extracting structured data from complex websites
- Automating research and information gathering
- Saving website content for offline access
- Interacting with web pages that require login or dynamic elements
Non-goals
- Performing local file system operations outside of output saving
- Deploying web applications or managing infrastructure
- Directly manipulating code or project files
- Acting as a general-purpose shell or terminal replacement
Scope
- Warning (single responsibility principle): The plugin bundles a wide array of distinct web interaction tools (scrape, search, map, crawl, agent, interact, download, parse, config) which, while related, represent a broad scope that could lead to trigger conflicts or bloat.
- Warning (tool surface size): The plugin exposes a large number of distinct commands (scrape, search, map, crawl, agent, interact, download, parse, config, login, logout, credit-usage, view-config, plus experimental commands), exceeding the target of 3-10 tools.
Trust
- Warning (issues attention): In the last 90 days, 4 issues were opened and 1 was closed, indicating a low closure rate and potentially slow maintainer response.
Invocation
- Warning (overlapping near-synonym tools): There is potential overlap between `scrape` and `agent` for complex data extraction, and between `crawl` and `download` for bulk content retrieval, which may require careful disambiguation by the agent.
- Warning (name collisions): The plugin bundles many commands, such as `scrape`, `search`, and `crawl`, that are common terms and could collide with other CLI tools or built-in agent commands if not managed carefully.
Installation
Add the marketplace first: `/plugin marketplace add firecrawl/cli`, then run `/plugin install cli@firecrawl`. Includes 9 extensions.
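Spelled out as separate steps (commands taken verbatim from the listing; run them inside a Claude Code session):

```
/plugin marketplace add firecrawl/cli
/plugin install cli@firecrawl
```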
Skills (9)
AI-powered autonomous data extraction that navigates complex sites and returns structured JSON. Use this skill when the user wants structured data from websites, needs to extract pricing tiers, product listings, directory entries, or any data as JSON with a schema. Triggers on "extract structured data", "get all the products", "pull pricing info", "extract as JSON", or when the user provides a JSON schema for website data. More powerful than simple scraping for multi-page structured extraction.
Search, scrape, and interact with the web via the Firecrawl CLI. Use this skill whenever the user wants to search the web, find articles, research a topic, look something up online, scrape a webpage, grab content from a URL, get data from a website, crawl documentation, download a site, or interact with pages that need clicks or logins. Also use when they say "fetch this page", "pull the content from", "get the page at https://", or reference external websites. This provides real-time web search with full page content and interact capabilities — beyond what Claude can do natively with built-in tools. Do NOT trigger for local file operations, git commands, deployments, or code editing tasks.
Bulk extract content from an entire website or site section. Use this skill when the user wants to crawl a site, extract all pages from a docs section, bulk-scrape multiple pages following links, or says "crawl", "get all the pages", "extract everything under /docs", "bulk extract", or needs content from many pages on the same site. Handles depth limits, path filtering, and concurrent extraction.
Download an entire website as local files — markdown, screenshots, or multiple formats per page. Use this skill when the user wants to save a site locally, download documentation for offline use, bulk-save pages as files, or says "download the site", "save as local files", "offline copy", "download all the docs", or "save for reference". Combines site mapping and scraping into organized local directories.
Control and interact with a live browser session on any scraped page — click buttons, fill forms, navigate flows, and extract data using natural language prompts or code. Use when the user needs to interact with a webpage beyond simple scraping: logging into a site, submitting forms, clicking through pagination, handling infinite scroll, navigating multi-step checkout or wizard flows, or when a regular scrape failed because content is behind JavaScript interaction. Also useful for authenticated scraping via profiles. Triggers on "interact", "click", "fill out the form", "log in to", "sign in", "submit", "paginated", "next page", "infinite scroll", "interact with the page", "navigate to", "open a session", or "scrape failed".
Discover and list all URLs on a website, with optional search filtering. Use this skill when the user wants to find a specific page on a large site, list all URLs, see the site structure, find where something is on a domain, or says "map the site", "find the URL for", "what pages are on", or "list all pages". Essential when the user knows which site but not which exact page.
Efficiently extract and convert the contents of any local file—such as PDF, DOCX, DOC, ODT, RTF, XLSX, XLS, or HTML—into clean, well-formatted markdown saved to disk. Use this skill whenever the user requests to parse, read, or extract information from a file on their computer, including phrases like “parse this PDF”, “convert this document”, “read this file”, “extract text from”, or when a local file path (not a URL) is provided. This skill offers advanced options like generating AI-powered summaries and answering questions based on the file's content. Prefer this tool over `scrape` when handling local files to deliver precise, structured outputs for downstream tasks.
Extract clean markdown from any URL, including JavaScript-rendered SPAs. Use this skill whenever the user provides a URL and wants its content, says "scrape", "grab", "fetch", "pull", "get the page", "extract from this URL", or "read this webpage". Handles JS-rendered pages, multiple concurrent URLs, and returns LLM-optimized markdown. Use this instead of WebFetch for any webpage content extraction.
Web search with full page content extraction. Use this skill whenever the user asks to search the web, find articles, research a topic, look something up, find recent news, discover sources, or says "search for", "find me", "look up", "what are people saying about", or "find articles about". Returns real search results with optional full-page markdown — not just snippets. Provides capabilities beyond Claude's built-in WebSearch.
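The first skill in the list accepts a user-supplied JSON schema for structured extraction (the "extract pricing tiers" case). As a sketch only, such a schema and the shape of a matching result might look like this; every field name here is illustrative, not defined by the plugin:

```python
import json

# Hypothetical JSON schema a user might pass to the structured-extraction
# skill when asking for pricing tiers; field names are illustrative only.
pricing_schema = {
    "type": "object",
    "properties": {
        "tiers": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "price_per_month": {"type": "number"},
                    "features": {"type": "array", "items": {"type": "string"}},
                },
                "required": ["name", "price_per_month"],
            },
        }
    },
    "required": ["tiers"],
}

# The skill returns JSON shaped like the schema; a made-up sample result:
sample_result = {
    "tiers": [
        {"name": "Free", "price_per_month": 0, "features": ["500 credits"]},
        {"name": "Pro", "price_per_month": 49, "features": ["100k credits"]},
    ]
}

# Cheap sanity check that the sample carries the schema's required keys.
for tier in sample_result["tiers"]:
    assert all(key in tier for key in ("name", "price_per_month"))

print(json.dumps(pricing_schema["required"]))
```

The point of supplying a schema is that multi-page extraction then returns uniformly shaped records instead of free-form markdown.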
Similar extensions
- Uc Taskmanager (100): SDD WORK-PIPELINE Agent, a 6-agent full pipeline for requirements analysis and development, with DAG-based orchestration and sliding-window context management.
- Paper Search (100): Search academic papers via OpenAlex; find papers by keyword, view details by DOI, with pagination and sorting support.
- X Twitter Scraper (99): X (Twitter) real-time data platform skill offering a REST API (100+ endpoints), an MCP server (2 tools), and webhooks. Covers tweet search, user lookup, timelines, extraction, monitoring, giveaways, credits, and support, plus confirmed private reads, write operations, webhooks, monitoring, and pay-per-use flows. Reads are priced at $0.00015 per call.