MCP-server-ragdocs provides semantic document search and retrieval through vector embeddings, enabling context-aware responses backed by specific documentation sources.
Determine Your Embeddings Provider
- Choose whether to use OpenAI or Ollama for embeddings:
  - For OpenAI, you will need an OpenAI API key.
  - For Ollama, you need a running Ollama server (local or remote).
Collect Required Environment Variable Values
Obtain a Qdrant API Key (for Qdrant Cloud)
- Go to the Qdrant Cloud Console
- Log in or create an account.
- Create or select a cluster.
- Navigate to the API Keys section (often under "Settings" or "Access").
- Generate a new API key.
- Copy the API key (you'll need it for `QDRANT_API_KEY`).
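Once you have the key, you can record both values and, optionally, sanity-check them against your cluster. The URL below is a hypothetical placeholder; substitute your cluster's actual endpoint:

```shell
# Placeholder values — replace with your own cluster URL and key
export QDRANT_URL="https://YOUR-CLUSTER-ID.region.cloud.qdrant.io:6333"
export QDRANT_API_KEY="your-qdrant-api-key"

# Optional sanity check: list collections (requires a reachable cluster).
# Qdrant Cloud authenticates via the "api-key" request header.
curl -H "api-key: $QDRANT_API_KEY" "$QDRANT_URL/collections"
```

If the key and URL are correct, the request returns a JSON list of collections instead of an authentication error.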
Obtain an OpenAI API Key (if using OpenAI)
- Visit OpenAI API Keys.
- Log in.
- Click Create new secret key.
- Copy the key to use as `OPENAI_API_KEY`.
Start Ollama Server (if using Ollama)
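If you chose Ollama, a minimal local setup can look like the following sketch. The model name `nomic-embed-text` is an example; substitute whichever embedding model you intend to use:

```shell
# Start the Ollama server (listens on http://localhost:11434 by default)
ollama serve &

# Pull an embedding model — nomic-embed-text is one common choice
ollama pull nomic-embed-text

# Point MCP-server-ragdocs at the running server
export OLLAMA_BASE_URL="http://localhost:11434"
```

For a remote Ollama server, set `OLLAMA_BASE_URL` to that server's address instead.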
Fill In the FastMCP Connection Interface
- Go to your FastMCP connection interface.
- Press the “Install Now” button for MCP-server-ragdocs.
- In the environment/key input fields, enter the values you obtained for:
  - `QDRANT_URL`
  - `QDRANT_API_KEY` (if using Qdrant Cloud)
  - `EMBEDDINGS_PROVIDER`
  - `OPENAI_API_KEY` (if using OpenAI)
  - `EMBEDDINGS_PROVIDER`-dependent values such as `OLLAMA_BASE_URL` (if using Ollama)
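Before saving, it can help to double-check that the variables match your chosen provider. The sketch below is not part of MCP-server-ragdocs; it is a hypothetical helper showing which variables each provider requires:

```python
import os

# Hypothetical helper (not part of MCP-server-ragdocs): map each
# embeddings provider to the variables it requires. QDRANT_API_KEY is
# omitted because it is only needed for Qdrant Cloud, not local Qdrant.
REQUIRED = {
    "openai": ["QDRANT_URL", "OPENAI_API_KEY"],
    "ollama": ["QDRANT_URL", "OLLAMA_BASE_URL"],
}

def missing_vars(env):
    """Return the required variables absent from *env* for its provider."""
    provider = env.get("EMBEDDINGS_PROVIDER", "").lower()
    required = REQUIRED.get(provider)
    if required is None:
        # Unknown or unset provider: that variable itself is the problem.
        return ["EMBEDDINGS_PROVIDER"]
    return [name for name in required if not env.get(name)]

# Example: an Ollama setup that forgot OLLAMA_BASE_URL
print(missing_vars({"EMBEDDINGS_PROVIDER": "ollama",
                    "QDRANT_URL": "http://localhost:6333"}))
# → ['OLLAMA_BASE_URL']
```

Running `missing_vars(dict(os.environ))` in your shell session reports anything still unset before you fill in the FastMCP fields.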
Save and Apply the Configuration
- Click “Save” or “Apply” in FastMCP to finalize the setup.
You’re now ready to use MCP-server-ragdocs!