# Sequential Thinking Multi-Agent System

Orchestrates a team of specialized agents working in parallel to break down complex problems through a structured, sequential thinking process.
## Identify Your LLM Provider

Decide which large language model (LLM) provider you want to use. The supported providers are:

- DeepSeek (default)
- Groq
- OpenRouter (also used for Kimi K2)
- GitHub Models
- Ollama (does not require an API key)
- Kimi K2 (uses an OpenRouter API key)
## Obtain the Required API Key for Your Provider

- DeepSeek: Register at deepseek.com and obtain your API key from your account dashboard.
- Groq: Register at groq.com, log in to the dashboard, and create an API key (usually under “API” or “Keys”).
- OpenRouter/Kimi K2: Register at openrouter.ai and generate an API key.
- GitHub Models:
  - Go to GitHub > Settings > Developer settings > Personal access tokens > Tokens (classic).
  - Click “Generate new token” and select scopes such as `repo` and `read:user`.
  - Copy the generated token; it should start with `ghp_`.
- Ollama: No API key required, but you must have Ollama installed locally.
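Before pasting a GitHub token into FastMCP, it can help to confirm it has the expected shape. This is a hypothetical local check, not part of the server; the token value is a placeholder:

```shell
# Sanity check (hypothetical helper): a classic GitHub personal access
# token starts with "ghp_". The value below is a placeholder, not a
# real credential.
token="ghp_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

case "$token" in
  ghp_*) result="looks like a classic PAT" ;;
  *)     result="unexpected token format" ;;
esac
echo "$result"
```

If the output reports an unexpected format, double-check that you generated a classic token rather than a fine-grained one.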
## If Using Exa with the Researcher Agent (Optional)

Register at exa.ai and generate your API key from your account page.

## Decide on Any Optional Model Customizations

If you want, choose the exact model variants to use by setting model ID variables (e.g., `DEEPSEEK_TEAM_MODEL_ID`, `GROQ_AGENT_MODEL_ID`). Refer to the README sections “Note on Model Selection” and “GitHub Models Setup and Available Models” for guidance.

## Open the FastMCP Connection Interface
Go to the FastMCP system and find your Sequential Thinking Multi-Agent System connection.

## Press “Install Now” to Add ENVs

Click the “Install Now” button to access the environment variable fields.

## Fill in the Required ENV Values
Enter the required values in the FastMCP ENV interface:

- `LLM_PROVIDER`: one of `deepseek`, `groq`, `openrouter`, `github`, `ollama`, or `kimi`.
- `{PROVIDER}_API_KEY` or `GITHUB_TOKEN`: paste the API key you obtained for your provider.
- Optional: `LLM_BASE_URL`, `{PROVIDER}_TEAM_MODEL_ID`, and `{PROVIDER}_AGENT_MODEL_ID` if you wish to customize the models.
- If using Exa: include `EXA_API_KEY`.
Examples:

For DeepSeek (default):

```
LLM_PROVIDER=deepseek
DEEPSEEK_API_KEY=your_deepseek_api_key
```

For GitHub Models:

```
LLM_PROVIDER=github
GITHUB_TOKEN=ghp_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```

Optionally:

```
GITHUB_TEAM_MODEL_ID=openai/gpt-5
GITHUB_AGENT_MODEL_ID=openai/gpt-5-min
```

For Groq:

```
LLM_PROVIDER=groq
GROQ_API_KEY=your_groq_api_key
```

If using Exa:

```
EXA_API_KEY=your_exa_api_key
```
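The examples above do not show Ollama. Assuming it follows the same pattern as the other providers, a local setup needs only the provider selection, since no API key is required (Ollama's default local endpoint is `http://localhost:11434`, so an override is presumably only needed if yours differs):

```shell
LLM_PROVIDER=ollama
# No API key needed for Ollama. Hypothetical override if your Ollama
# host differs from the default http://localhost:11434:
# LLM_BASE_URL=http://localhost:11434
```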
## Save and Apply the Configuration

Confirm the values and save your environment variable settings in FastMCP. Your Sequential Thinking Multi-Agent System should now be configured with the appropriate API tokens.
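As a final local check, you can verify that the key matching your chosen `LLM_PROVIDER` is actually set. This is a hypothetical helper sketch, not part of the server; it only mirrors the `{PROVIDER}_API_KEY` naming convention described above, and the values are placeholders:

```shell
#!/bin/sh
# Hypothetical sanity check: confirm the API key matching LLM_PROVIDER
# is set, following the {PROVIDER}_API_KEY naming convention.
# Placeholder values stand in for your real configuration.
LLM_PROVIDER=deepseek
DEEPSEEK_API_KEY=your_deepseek_api_key

case "$LLM_PROVIDER" in
  github) key="$GITHUB_TOKEN" ;;        # GitHub Models uses GITHUB_TOKEN
  ollama) key="local" ;;                # Ollama needs no key
  *)
    # Map the provider name to its key variable, e.g. deepseek -> DEEPSEEK_API_KEY
    upper=$(printf '%s' "$LLM_PROVIDER" | tr '[:lower:]' '[:upper:]')
    key=$(eval "printf '%s' \"\$${upper}_API_KEY\"")
    ;;
esac

if [ -n "$key" ]; then
  echo "provider $LLM_PROVIDER: key present"
else
  echo "provider $LLM_PROVIDER: missing API key" >&2
fi
```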