Crawleo
Official. Provides real-time web search and website crawling capabilities with zero data retention.
Obtain your Crawleo API key
- Visit https://crawleo.dev and sign in (or sign up for a free account).
- Open your dashboard and locate your API keys.
- Create or copy an API key. The key will start with sk_. Copy it immediately (it may be shown only once) and store it securely.
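The sk_ prefix makes a quick sanity check easy before you save the key anywhere. A minimal sketch (the exact key length and character set are assumptions; only the prefix is documented):

```python
def looks_like_crawleo_key(key: str) -> bool:
    # Crawleo keys start with "sk_" (per the steps above); anything else
    # will be rejected by the API, so catch it before saving the config.
    return key.startswith("sk_") and len(key) > len("sk_")

print(looks_like_crawleo_key("sk_abc123"))  # True
print(looks_like_crawleo_key("abc123"))     # False
```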
Enter HTTP header (for remote/hosted MCP at https://api.crawleo.dev/mcp)
- In the FastMCP connection interface, click the “Install Now” button to open the add-connection dialog.
- Under HTTP Headers (or Headers), add a header:
  - Name: Authorization
  - Value: Bearer YOUR_API_KEY_HERE
  (Replace YOUR_API_KEY_HERE with the key you copied. Include the Bearer prefix and the space.)
- Save the connection in FastMCP.
- (If your client requires it) Restart the client or start a new session to ensure the header is applied.
- Test by invoking a simple web.search (e.g., “search for latest AI news”) to confirm authentication succeeds.
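Outside of FastMCP, the same header works for any direct call to the hosted endpoint. A minimal sketch that builds (but does not send) such a request; the endpoint URL comes from the steps above, and the helper name is illustrative:

```python
import urllib.request

def hosted_mcp_request(api_key: str,
                       url: str = "https://api.crawleo.dev/mcp") -> urllib.request.Request:
    # The hosted MCP expects "Bearer <key>" in the Authorization header,
    # exactly as entered in the FastMCP dialog above.
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {api_key}"})

req = hosted_mcp_request("sk_example_key")
print(req.get_header("Authorization"))  # Bearer sk_example_key
```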
Enter environment variable (for local MCP usage — npm/npx/docker)
- If you run a local Crawleo MCP (npm / npx / docker), FastMCP’s Install Now dialog can accept ENV entries. Click “Install Now.”
- Add an environment variable:
  - Name: CRAWLEO_API_KEY
  - Value: YOUR_API_KEY_HERE
  (Do NOT add the Bearer prefix for the env var; use just the raw API key.)
- Save the connection.
- If using Docker directly, the equivalent is:
- docker run -e CRAWLEO_API_KEY=your_api_key crawleo-mcp
- Restart your local MCP client if required and test a request.
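The split between the two setups (raw key in the environment variable, Bearer prefix only on the wire) can be sketched like this; the helper name is hypothetical:

```python
import os

def auth_header_from_env() -> dict:
    # The env var holds the raw key only (no "Bearer" prefix, per the note above);
    # the prefix is added when the HTTP header is constructed.
    key = os.environ["CRAWLEO_API_KEY"]
    if not key.startswith("sk_"):
        raise ValueError("CRAWLEO_API_KEY should start with sk_")
    return {"Authorization": f"Bearer {key}"}

os.environ["CRAWLEO_API_KEY"] = "sk_example_key"  # placeholder for demonstration
print(auth_header_from_env())  # {'Authorization': 'Bearer sk_example_key'}
```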
Notes & verification
- Ensure the API key begins with sk_.
- For hosted/remote setups used by Claude/Cursor/Windsurf, the README requires the Authorization header to include the Bearer prefix. Enter exactly Bearer <key>.
- For local env usage (CRAWLEO_API_KEY), provide only the key (no Bearer prefix).
- If you get authentication errors: re-check the key, ensure it is quoted if the UI requires quotes, confirm your account/credits at crawleo.dev, and retry.
More for Web Search
DuckDuckGo
Experience fast and reliable DuckDuckGo web search with this TypeScript MCP server. It offers a simple search interface supporting customizable queries, result counts, and safe search levels. Built-in rate limiting ensures fair usage with up to 1 request per second and 15,000 per month. The server returns well-formatted Markdown results, making it easy to integrate and display search data. Designed to demonstrate core Model Context Protocol concepts, it also includes helpful debugging tools to inspect communication. Perfect for developers wanting seamless DuckDuckGo integration via MCP with efficient error handling and robust controls.
Microsoft Docs
Access official Microsoft documentation instantly with the Microsoft Learn Docs MCP Server. This cloud service implements the Model Context Protocol (MCP) to enable AI tools like GitHub Copilot and others to perform high-quality semantic searches across Microsoft Learn, Azure, Microsoft 365 docs, and more. It delivers up to 10 concise, context-relevant content chunks in real time, ensuring up-to-date, accurate information. Designed for seamless integration with any MCP-compatible client, it helps AI assistants ground their responses in authoritative, current Microsoft resources for better developer support and productivity.
Brave Search
Integrate web search and local search capabilities with Brave. Brave Search allows you to seamlessly integrate Brave Search functionality into AI assistants like Claude. By implementing a Model Context Protocol (MCP) server, it enables the AI to leverage Brave Search's web search and local business search capabilities. It provides tools for both general web searches and specific local searches, enhancing the AI assistant's ability to provide relevant and up-to-date information.
Exa Search
Empower AI assistants like Claude with real-time web data using the Exa MCP Server. This Model Context Protocol server connects AI models to the Exa AI Search API, enabling safe, up-to-date web searches across diverse tools such as academic papers, company data, LinkedIn, Wikipedia, GitHub, and more. Its flexible toolset enhances research, competitor analysis, and content extraction, providing comprehensive information for smarter AI interactions. Designed for seamless integration with Claude Desktop, the Exa MCP Server boosts AI capabilities by delivering fast, reliable, and controlled access to the latest online information.
More for Web Scraping
DeepWiki
Instantly turn any Deepwiki article into clean, structured Markdown you can use anywhere. Deepwiki MCP Server safely crawls deepwiki.com pages, removes clutter like ads and navigation, rewrites links for Markdown, and offers fast performance with customizable output formats. Choose a single document or organize content by page, and easily extract documentation or guides for any supported library. It’s designed for secure, high-speed conversion and clear, easy-to-read results—making documentation and learning seamless.
Deep Research (Tavily)
Enables comprehensive web research by leveraging Tavily's Search and Crawl APIs to aggregate information from multiple sources, extract detailed content, and structure data specifically for generating technical documentation and research reports.
Selenium WebDriver
Enables browser automation through Selenium WebDriver with support for Chrome, Firefox, and Edge browsers, providing navigation, element interaction, form handling, screenshot capture, JavaScript execution, and advanced actions for automated testing and web scraping tasks.
Similar MCP Servers
Firecrawl
Unlock powerful web data extraction with Firecrawl, turning any website into clean markdown or structured data. Firecrawl lets you crawl all accessible pages, scrape content in multiple formats, and extract structured data using AI-driven prompts and schemas. Its advanced features handle dynamic content, proxies, anti-bot measures, and media parsing, ensuring reliable and customizable data output. Whether mapping site URLs or batch scraping thousands of pages asynchronously, Firecrawl streamlines data gathering for AI applications, research, or automation with simple API calls and SDK support across multiple languages. Empower your projects with high-quality, LLM-ready web data.