RAG Documentation Search
Provides semantic document search and retrieval through vector embeddings, enabling context-aware re...
Determine Your Embeddings Provider
- Choose whether you want to use OpenAI or Ollama for embeddings:
- For OpenAI, you will need an OpenAI API key.
- For Ollama, you need a running Ollama server (local or remote).
Collect Required Environment Variable Values
All configurations require:
QDRANT_URL: The full URL of your Qdrant instance (e.g., http://localhost:6333 for local, or your Qdrant Cloud cluster URL for cloud use).
QDRANT_API_KEY: (Only required for Qdrant Cloud) Get this from your Qdrant Cloud console.
EMBEDDINGS_PROVIDER: Set to "openai" or "ollama" depending on your setup.
If using OpenAI:
OPENAI_API_KEY: Obtain your key from your OpenAI dashboard.
If using Ollama:
OLLAMA_BASE_URL: (Optional, defaults to http://localhost:11434) The URL where Ollama is running.
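Once collected, the variables might look like this for a local Qdrant + Ollama setup (all values below are placeholders; substitute your own):

```shell
# Placeholder values for a local Qdrant + Ollama setup; swap in your own.
export QDRANT_URL="http://localhost:6333"
export EMBEDDINGS_PROVIDER="ollama"
export OLLAMA_BASE_URL="http://localhost:11434"

# For Qdrant Cloud + OpenAI instead, you would set something like:
# export QDRANT_URL="https://YOUR-CLUSTER.qdrant.io"
# export QDRANT_API_KEY="your-qdrant-cloud-key"
# export EMBEDDINGS_PROVIDER="openai"
# export OPENAI_API_KEY="your-openai-key"
```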
Obtain a Qdrant API Key (for Qdrant Cloud)
- Go to the Qdrant Cloud Console
- Log in or create an account.
- Create or select a cluster.
- Navigate to the API Keys section (often under "Settings" or "Access").
- Generate a new API key.
- Copy the API key (you’ll need it for QDRANT_API_KEY).
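Before wiring the values into FastMCP, you can sanity-check that your Qdrant instance answers. This is a sketch against Qdrant's REST API (the `/collections` endpoint and `api-key` header are standard Qdrant; the URL and key values are whatever you collected above):

```shell
# Check that Qdrant responds; prints a status line either way.
QDRANT_URL="${QDRANT_URL:-http://localhost:6333}"
if curl -fsS "$QDRANT_URL/collections" -H "api-key: ${QDRANT_API_KEY:-}" > /dev/null 2>&1; then
  echo "Qdrant reachable at $QDRANT_URL"
else
  echo "Could not reach Qdrant at $QDRANT_URL - check the URL and API key"
fi
```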
Obtain an OpenAI API Key (if using OpenAI)
- Visit OpenAI API Keys.
- Log in.
- Click Create new secret key.
- Copy the key to use as OPENAI_API_KEY.
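You can verify the key before using it by listing models via OpenAI's API (a sketch; a 401 response means the key is invalid or revoked):

```shell
# Verify the OpenAI key with a read-only request to the models endpoint.
if curl -fsS https://api.openai.com/v1/models \
     -H "Authorization: Bearer ${OPENAI_API_KEY:-}" > /dev/null 2>&1; then
  OPENAI_KEY_STATUS="ok"
else
  OPENAI_KEY_STATUS="failed"
fi
echo "OpenAI key check: $OPENAI_KEY_STATUS"
```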
Start Ollama Server (if using Ollama)
- Install Ollama, if not already installed:
  curl -fsSL https://ollama.com/install.sh | sh
- Start the Ollama server:
  ollama serve
- By default, it will run at http://localhost:11434.
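With the server started, you can confirm it is reachable by querying Ollama's `/api/tags` endpoint, which lists installed models (a sketch; `OLLAMA_BASE_URL` defaults to the standard local port):

```shell
# Health check: Ollama answers GET /api/tags when the server is up.
OLLAMA_BASE_URL="${OLLAMA_BASE_URL:-http://localhost:11434}"
if curl -fsS "$OLLAMA_BASE_URL/api/tags" > /dev/null 2>&1; then
  echo "Ollama reachable at $OLLAMA_BASE_URL"
else
  echo "Ollama NOT reachable at $OLLAMA_BASE_URL - is 'ollama serve' running?"
fi
```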
Fill In the FastMCP Connection Interface
- Go to your FastMCP connection interface.
- Press the “Install Now” button for MCP-server-ragdocs.
- In the environment/key input fields, enter the values you obtained for:
QDRANT_URL
QDRANT_API_KEY (if using Qdrant Cloud)
EMBEDDINGS_PROVIDER
OPENAI_API_KEY (if using OpenAI)
OLLAMA_BASE_URL (if using Ollama)
Save and Apply the Configuration
- Click “Save” or “Apply” in FastMCP to finalize the setup.
You’re now ready to use MCP-server-ragdocs!