Tavily MCP
Tavily AI search server — real-time web search optimized for AI agents with factual, accurate, and structured results.
// Add to your client
{
  "mcpServers": {
    "tavily": {
      "command": "npx",
      "args": ["-y", "tavily-mcp"],
      "env": {
        "TAVILY_API_KEY": "<your-tavily-api-key>"
      }
    }
  }
}
Paste into your client's MCP configuration file.
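Before wiring the server into your client, it can help to sanity-check the API key with a direct call. A minimal sketch, assuming Tavily's REST endpoint at `https://api.tavily.com/search` and a JSON payload carrying `api_key` and `query` (both assumptions; check Tavily's API docs for the current auth scheme):

```python
import json
import os
import urllib.request

TAVILY_ENDPOINT = "https://api.tavily.com/search"  # assumed REST endpoint

def build_search_request(query: str, api_key: str) -> urllib.request.Request:
    """Build a POST request for Tavily's search API (payload shape assumed)."""
    payload = json.dumps({"api_key": api_key, "query": query}).encode()
    return urllib.request.Request(
        TAVILY_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    key = os.environ.get("TAVILY_API_KEY", "")
    req = build_search_request("WebAssembly adoption", key)
    # Uncomment to actually hit the API with your key:
    # with urllib.request.urlopen(req) as resp:
    #     print(json.load(resp))
```

If the call returns a JSON body rather than an auth error, the same key will work in the MCP config above.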
// Try it now
"Research the current state of WebAssembly adoption and summarize what you find"
- ! Tavily charges per search and extraction — monitor your quota
- ! Results quality varies by query phrasing — vague queries return noise
- ! Full-page extraction costs more than snippet-only — choose the right mode
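The snippet-vs-extraction trade-off above maps to request parameters. A minimal sketch, assuming parameter names like `search_depth`, `include_raw_content`, and `max_results` from Tavily's search API (treat the exact names as assumptions to verify against current docs):

```python
def search_params(query: str, *, full_content: bool = False, max_results: int = 5) -> dict:
    """Assemble Tavily search parameters (parameter names assumed).

    Snippet-only mode is the cheap default; full_content switches on the
    pricier deep-search and raw-page-extraction options.
    """
    return {
        "query": query,
        "max_results": max_results,
        "search_depth": "advanced" if full_content else "basic",
        # Raw page extraction costs more credits than snippet-only results.
        "include_raw_content": full_content,
    }

if __name__ == "__main__":
    print(search_params("postgres replication setup"))
    print(search_params("postgres replication setup", full_content=True))
```

Defaulting to the cheap mode and opting in to extraction per call is one way to keep quota usage predictable.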
Setup: requires a Tavily API key; configuration is otherwise simple.
// When to use
- You're building an AI agent that needs web research as a tool
- You want search results that are cleaned and ready for LLM consumption
- You need research across multiple sources in a single call
- You want both snippets and full-page extraction options
// When NOT to use
- You only need one quick answer (Perplexity is more concise)
- You need real-time prices or live data — web search is usually stale by seconds to hours
- You're doing totally free research — Tavily is usage-based
// Usage Scenarios
Multi-Source Research
Pull and synthesize content from multiple sources in one call.
Example prompt
"Research the top 5 CRM tools for small businesses and summarize their pricing and main features"
Cleaned Content Extraction
Get clean text from search results without HTML cruft.
Example prompt
"Search for 'how to set up Postgres replication' and return the full cleaned content of the top 3 results"
Agent-Ready Search
Use as a tool inside an autonomous agent loop.
Example prompt
"Before writing this report, search for the latest stats on remote work adoption in 2026"
// About
Plain English
Regular search engines are built for humans skimming a results page. AI assistants need something different — a search that returns clean, summarized, source-grounded answers it can reason over. Pointing an AI at Google often produces noisy, ad-cluttered results that hurt more than help. Tavily is search built for AI agents from the ground up. Ask Claude something research-shaped — 'compare the top five project management tools and their pricing,' 'what's the current state of WebAssembly adoption,' 'find recent reporting on this regulation' — and Tavily pulls relevant sources, extracts the meaningful parts, and hands the AI a clean summary along with the original links so you can verify.
For anyone who uses AI for research, briefings, market analysis, or fact-checking, this is the difference between a vague answer and a sourced one. The cost is per query, so it's not the right tool for one-off curiosity questions, but it's worth it when accuracy and citations matter.
Pair it well with Context7 for code questions or Firecrawl when you already know which sites you want to read deeply.
The Tavily MCP server integrates Tavily's AI-optimized search engine into AI assistants. Designed specifically for LLM consumption, Tavily returns structured, relevant results with reduced noise compared to general search engines.
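Because results come back structured, downstream code can work with them directly instead of scraping prose. A minimal sketch assuming a response shape with a `results` list of dicts carrying `title` and `url` keys (an assumption based on the structured output described above, not a documented schema):

```python
def to_citations(response: dict) -> list[str]:
    """Render Tavily-style results as 'title - url' citation lines.

    The response shape (a "results" list with "title" and "url" keys)
    is assumed; verify against Tavily's actual API response.
    """
    return [f"{r['title']} - {r['url']}" for r in response.get("results", [])]

if __name__ == "__main__":
    sample = {
        "query": "WebAssembly adoption",
        "results": [
            {"title": "State of Wasm", "url": "https://example.com/wasm",
             "content": "Adoption snippet...", "score": 0.92},
        ],
    }
    print(to_citations(sample))  # ['State of Wasm - https://example.com/wasm']
```

This is the practical payoff of LLM-optimized search: the cleaned fields are usable as-is for citations or grounding.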
Popular in AI agent frameworks like LangChain and LlamaIndex. Free tier available for development.
// Use Cases
- Real-time web search optimized for AI responses
- Research topics with accurate, structured results
- Power AI agents with current web information
- Find factual information with citations
// Related Servers
Anthropic Claude API MCP
Use Claude API within Claude — chain AI calls, compare model outputs, and build sophisticated multi-agent workflows.
Brave Search MCP
Official Brave Search integration — give AI real-time web search access via the Brave Search API for up-to-date information.
Fetch MCP
Official web fetch server — let AI retrieve content from any URL, including HTML to Markdown conversion for clean LLM consumption.