AI Memory
Production-ready semantic memory management server that stores, retrieves, and manages contextual knowledge.
Step 1: Prepare Your PostgreSQL Database
- Ensure you have a running PostgreSQL server with the `pgvector` extension installed.
- Create a new database (e.g., `mcp_ai_memory`) and a user if you haven't already.
- Enable the `pgvector` extension for your database: `CREATE EXTENSION IF NOT EXISTS vector;`
Step 2: Gather Your PostgreSQL Database Credentials
- You will need:
  - Host (e.g., `localhost`)
  - Port (default: `5432`)
  - Database name (e.g., `mcp_ai_memory`)
  - Username
  - Password
Step 3: Create the Connection URL
- Construct your PostgreSQL URL in the following format: `postgresql://<username>:<password>@<host>:<port>/<database>`
- Example: `postgresql://myuser:mypassword@localhost:5432/mcp_ai_memory`
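If your password contains special characters, they must be percent-encoded before being placed in the URL. A minimal Python sketch (the credentials below are placeholders, not values from this project):

```python
from urllib.parse import quote

# Hypothetical credentials -- substitute your own values.
user = "myuser"
password = "my p@ss/word"  # contains characters that must be percent-encoded
host = "localhost"
port = 5432
database = "mcp_ai_memory"

# safe="" ensures that '/' inside the password is also encoded
url = f"postgresql://{quote(user, safe='')}:{quote(password, safe='')}@{host}:{port}/{database}"
print(url)  # postgresql://myuser:my%20p%40ss%2Fword@localhost:5432/mcp_ai_memory
```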
Step 4 [Optional]: Prepare Redis (For Advanced Caching/Async Features)
- Set up a Redis server if you want caching or async job processing.
- Default Redis URL: `redis://localhost:6379`
Step 5 [Optional]: Choose an Embedding Model
- You can specify an embedding model or use the default.
- Default: `Xenova/all-mpnet-base-v2`
- Alternative: `Xenova/all-MiniLM-L6-v2` (smaller and faster, but lower quality)
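For intuition on how the embedding model interacts with the `MIN_SIMILARITY_SCORE` setting: stored memories and the query are embedded as vectors and ranked by similarity; results below the threshold are filtered out. The sketch below uses cosine similarity and made-up three-dimensional vectors purely for illustration (real embeddings come from the Transformers.js model):

```python
import math

# Toy sketch of a semantic lookup; the vectors are illustrative.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

MIN_SIMILARITY_SCORE = 0.5  # mirrors the ENV default
memories = {
    "deploy-notes": [0.9, 0.1, 0.0],
    "recipe-ideas": [0.0, 1.0, 0.0],
}
query = [1.0, 0.0, 0.0]

scores = {name: cosine(query, vec) for name, vec in memories.items()}
matches = [name for name, s in scores.items() if s >= MIN_SIMILARITY_SCORE]
print(matches)  # ['deploy-notes']
```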
Step 6: Fill Out the ENV Values in FastMCP
- Click the "Install Now" button to begin configuring the MCP AI Memory server.
- In the FastMCP connection interface, set or fill out the following ENV fields:
  - `MEMORY_DB_URL`: your PostgreSQL URL from Step 3 (required)
  - `REDIS_URL`: your Redis URL (optional)
  - `EMBEDDING_MODEL`: model name (optional; defaults as above)
  - Any other advanced ENV values as needed (see the table below):
| Variable | Description | Default |
|---|---|---|
| MEMORY_DB_URL | PostgreSQL connection string (required) | None |
| REDIS_URL | Redis connection URL (optional) | None (falls back to in-memory caching) |
| EMBEDDING_MODEL | Transformers.js model name (optional) | Xenova/all-mpnet-base-v2 |
| LOG_LEVEL | Log level (optional) | info |
| CACHE_TTL | General cache TTL, in seconds (optional) | 3600 |
| MAX_MEMORIES_PER_QUERY | Maximum results per query (optional) | 10 |
| MIN_SIMILARITY_SCORE | Minimum similarity score for matches (optional) | 0.5 |
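Putting the table together, a filled-out configuration might look like the following. The values are illustrative (only `MEMORY_DB_URL` is required; the rest show the defaults):

```
MEMORY_DB_URL=postgresql://myuser:mypassword@localhost:5432/mcp_ai_memory
REDIS_URL=redis://localhost:6379
EMBEDDING_MODEL=Xenova/all-mpnet-base-v2
LOG_LEVEL=info
CACHE_TTL=3600
MAX_MEMORIES_PER_QUERY=10
MIN_SIMILARITY_SCORE=0.5
```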
Step 7: Save the Configuration
- Complete the setup in FastMCP and start the server.
Summary:
You are only required to provide a valid PostgreSQL connection string (`MEMORY_DB_URL`); a Redis server URL (`REDIS_URL`) and an embedding model name are optional. Enter all of these values directly in the FastMCP connection interface after clicking "Install Now".
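Before pasting the connection string into FastMCP, you can sanity-check that it is well-formed with a few lines of Python (the URL below is the example value from this guide, not a real credential):

```python
from urllib.parse import urlsplit

# Illustrative check using the example URL from the steps above.
url = "postgresql://myuser:mypassword@localhost:5432/mcp_ai_memory"
parts = urlsplit(url)

assert parts.scheme == "postgresql"
assert parts.username == "myuser"
assert parts.hostname == "localhost"
assert parts.port == 5432
assert parts.path.lstrip("/") == "mcp_ai_memory"
print("URL looks well-formed")
```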