Qdrant (Official)

Store and retrieve vector-based memories for AI systems.
Create or Access a Qdrant Account
- If you do not have a Qdrant Cloud account, go to Qdrant Cloud and sign up.
- Log in to your Qdrant Cloud dashboard.
Create a New Qdrant Cluster or Access an Existing One
- On the Qdrant Cloud dashboard, either create a new cluster or select an existing one you wish to use.
- Copy the HTTPS endpoint URL for this cluster (it looks like `https://xyz-example.region.aws.cloud.qdrant.io:6333`).
Obtain an API Key for Your Qdrant Cluster
- In the sidebar, navigate to the “API Keys” section (sometimes called “API Access”).
- Click “Generate API Key” or “Create New Key”.
- Optionally, provide a name and select the desired permissions (Read/Write or as required).
- Save the generated API key securely. You will need it for configuration.
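Before wiring the URL and key into MCP, you can sanity-check them against the cluster's REST API. A minimal Python sketch (the helper name and placeholder values are illustrative, not part of mcp-server-qdrant; Qdrant's REST API expects the key in an `api-key` header, and `/collections` is a convenient smoke-test endpoint):

```python
import urllib.request

# Illustrative helper: build an authenticated request against the
# Qdrant REST API. Listing collections is a harmless connectivity check.
def qdrant_request(url: str, api_key: str) -> urllib.request.Request:
    return urllib.request.Request(
        f"{url.rstrip('/')}/collections",
        headers={"api-key": api_key},
    )

# With a live cluster you would then run:
# with urllib.request.urlopen(qdrant_request(cluster_url, key)) as resp:
#     print(resp.status)  # 200 means the URL and key are valid

req = qdrant_request("https://xyz-example.region.aws.cloud.qdrant.io:6333", "YOUR_API_KEY")
print(req.full_url)
```

A 401/403 response indicates the key is wrong or lacks permissions; a connection error usually means the endpoint URL is mistyped.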
Choose or Create a Collection Name
- Decide on a collection name for storing your vectors (e.g., `my-collection`). This can be any string and will be used to organize your data.
- You can use an existing collection name or specify a new one; mcp-server-qdrant will create it if it does not exist.
(Optional) Decide on an Embedding Model
- By default, `sentence-transformers/all-MiniLM-L6-v2` is used.
- You can change the model if needed, but for most users, the default is sufficient.
Fill In the FastMCP Connection Interface
- Click the Install Now button in your environment to open the FastMCP connection interface.
- In the fields provided, enter:
- `QDRANT_URL`: Your cluster URL from step 2, e.g., `https://xyz-example.region.aws.cloud.qdrant.io:6333`
- `QDRANT_API_KEY`: Your API key from step 3
- `COLLECTION_NAME`: The collection name you chose in step 4
- (Optional) `EMBEDDING_MODEL`: Change only if you wish to use a different model.
Save and Complete the Setup
- Confirm and save your configuration in the FastMCP connection interface.
- You are now ready to use the Qdrant MCP server with your specified settings.
Note:
- If you are using a local Qdrant server instead of Qdrant Cloud, set `QDRANT_URL` to your local Qdrant endpoint (e.g., `http://localhost:6333`).
- For local mode, you may omit `QDRANT_API_KEY` if authentication is not enabled, and set `QDRANT_LOCAL_PATH` instead if running Qdrant locally.
More for Memory Management
Context7
Discover Context7 MCP, a powerful tool that injects fresh, version-specific code docs and examples from official sources directly into your AI prompts. Say goodbye to outdated or incorrect API info—Context7 ensures your language model answers come with the latest coding references. By simply adding "use context7" to your prompt, you get precise, reliable library documentation and working code snippets without leaving your editor. Designed for smooth integration with many MCP-compatible clients and IDEs, it enhances AI coding assistants with accurate, real-time context that boosts developer productivity and confidence.
Memory Bank
Maintains persistent project context through a structured Memory Bank system of five markdown files that track goals, status, progress, decisions, and patterns with automatic timestamp tracking and workflow guidance for consistent documentation across development sessions.
In Memoria
Provides persistent intelligence infrastructure for codebase analysis through hybrid Rust-TypeScript architecture that combines Tree-sitter AST parsing with semantic concept extraction, developer pattern recognition, and SQLite-based persistence to build contextual understanding of codebases over time, learning from developer behavior and architectural decisions.
Sequential Story
Provides structured problem-solving tools through Sequential Story and Sequential Thinking approaches, enabling narrative-based or Python-implemented thought sequences with branching and visual formatting capabilities for enhanced memory retention.
More for Database
Supabase MCP Server
Connect Supabase projects directly with AI assistants using the Model Context Protocol (MCP). This server standardizes communication between Large Language Models and Supabase, enabling AI to manage tables, query data, and interact with project features like edge functions, storage, and branching. Customize access with read-only or project-scoped modes and select specific tool groups to fit your needs. Integrated tools cover account management, documentation search, database operations, debugging, and more, empowering AI to assist with development, monitoring, and deployment tasks in your Supabase environment efficiently and securely.
Svelte
Official Svelte documentation access and code analysis server that provides up-to-date reference material, playground link generation, and intelligent autofixer capabilities for detecting common patterns, anti-patterns, and migration opportunities in Svelte 5 and SvelteKit projects.
ClickHouse
Unlock powerful analytics with the ClickHouse MCP Server—seamlessly run, explore, and manage SQL queries across ClickHouse clusters or with chDB’s embedded OLAP engine. This server offers easy database and table listing, safe query execution, and flexible access to data from files, URLs, or databases. Built-in health checks ensure reliability, while support for both ClickHouse and chDB enables robust data workflows for any project.
Similar MCP Servers
AI Memory
Production-ready semantic memory management server that stores, retrieves, and manages contextual knowledge across sessions using PostgreSQL with pgvector for vector similarity search, featuring intelligent caching, multi-user support, memory relationships, automatic clustering, and background job processing for persistent AI memory and knowledge management systems.