AI Memory
Production-ready semantic memory management server that stores, retrieves, and manages contextual knowledge.
Prepare Your PostgreSQL Database
- Ensure you have a running PostgreSQL server with the pgvector extension available.
- Create a new database (e.g., mcp_ai_memory) and user if you haven't already.
- Enable the pgvector extension for your database:
  CREATE EXTENSION IF NOT EXISTS vector;
Gather Your PostgreSQL Database Credentials
- You will need:
  - Host (e.g., localhost)
  - Port (default: 5432)
  - Database name (e.g., mcp_ai_memory)
  - Username
  - Password
Create the Connection URL
- Construct your PostgreSQL URL in the following format:
  postgresql://<username>:<password>@<host>:<port>/<database>
- Example:
  postgresql://myuser:mypassword@localhost:5432/mcp_ai_memory
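If your password contains characters such as `@`, `:`, or `/`, they must be percent-encoded or the URL will not parse correctly. A minimal Python sketch of building the URL safely (the credential values are the example ones from above, not real defaults):

```python
from urllib.parse import quote_plus

# Example credentials from the step above -- replace with your own.
user = "myuser"
password = "mypassword"  # quote_plus() escapes characters like @ or : if present
host = "localhost"
port = 5432
database = "mcp_ai_memory"

memory_db_url = (
    f"postgresql://{quote_plus(user)}:{quote_plus(password)}@{host}:{port}/{database}"
)
print(memory_db_url)
# → postgresql://myuser:mypassword@localhost:5432/mcp_ai_memory
```

With these example values the encoding is a no-op; a password like `p@ss` would come out as `p%40ss`.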
[Optional] Prepare Redis (For Advanced Caching/Async Features)
- Set up a Redis server if you want caching or async job processing.
- Default Redis URL: redis://localhost:6379
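The default URL above follows the standard redis:// scheme, so it can be sanity-checked with Python's standard library before you paste it into the configuration:

```python
from urllib.parse import urlsplit

# Parse the default Redis URL from the step above.
parts = urlsplit("redis://localhost:6379")
print(parts.scheme, parts.hostname, parts.port)
# → redis localhost 6379
```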
[Optional] Choose Embedding Model
- You can specify an embedding model or use the default.
- Default: Xenova/all-mpnet-base-v2
- Alternative: Xenova/all-MiniLM-L6-v2 (smaller and faster, but lower quality)
Fill out the ENV values in FastMCP
- Click the "Install Now" button to begin configuring the MCP AI Memory server.
- In the FastMCP connection interface, set or fill out the following ENV fields:
  - MEMORY_DB_URL: your PostgreSQL URL from Step 3 (required)
  - REDIS_URL: your Redis URL (optional)
  - EMBEDDING_MODEL: model name (optional, defaults as above)
  - Any other advanced ENV values as needed (see table below):
| Variable | Description | Default |
|---|---|---|
| MEMORY_DB_URL | PostgreSQL connection string (Required) | None |
| REDIS_URL | Redis connection (optional) | None (defaults to in-memory) |
| EMBEDDING_MODEL | Transformers.js model (optional) | Xenova/all-mpnet-base-v2 |
| LOG_LEVEL | Logging level (optional) | info |
| CACHE_TTL | Cache TTL (optional) | 3600 |
| MAX_MEMORIES_PER_QUERY | Maximum results per query (optional) | 10 |
| MIN_SIMILARITY_SCORE | Minimum similarity score (optional) | 0.5 |
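As a sketch of how these variables behave, here is one way a process could read them, falling back to the defaults from the table when the optional ones are unset. The variable names and defaults are the ones documented above; the reading logic itself is illustrative, not the server's actual code:

```python
import os

# Example value from Step 3 -- in FastMCP these come from the ENV fields you filled in.
os.environ.setdefault(
    "MEMORY_DB_URL", "postgresql://myuser:mypassword@localhost:5432/mcp_ai_memory"
)

# MEMORY_DB_URL is required; everything else falls back to a documented default.
config = {
    "MEMORY_DB_URL": os.environ["MEMORY_DB_URL"],  # raises KeyError if missing
    "REDIS_URL": os.environ.get("REDIS_URL"),      # None -> in-memory caching
    "EMBEDDING_MODEL": os.environ.get("EMBEDDING_MODEL", "Xenova/all-mpnet-base-v2"),
    "LOG_LEVEL": os.environ.get("LOG_LEVEL", "info"),
    "CACHE_TTL": int(os.environ.get("CACHE_TTL", "3600")),
    "MAX_MEMORIES_PER_QUERY": int(os.environ.get("MAX_MEMORIES_PER_QUERY", "10")),
    "MIN_SIMILARITY_SCORE": float(os.environ.get("MIN_SIMILARITY_SCORE", "0.5")),
}
print(config["EMBEDDING_MODEL"], config["CACHE_TTL"])
```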
Save the Configuration
- Complete setup in FastMCP and start the server.
Summary:
You are only required to provide a valid PostgreSQL connection string (MEMORY_DB_URL) and optionally a Redis server URL (REDIS_URL) and embedding model name. All these values should be entered directly in the FastMCP connection interface after clicking "Install Now".