
Sequential Thinking Multi-Agent System

Updated Sep 10, 2025
Not audited
Orchestrates a team of specialized agents working in parallel to break down complex problems through structured thinking steps, enabling multi-disciplinary analysis with greater depth than single-agent approaches.
  1. Identify Your LLM Provider
    Decide which large language model (LLM) provider you want to use. The supported providers are:

    • DeepSeek (default)
    • Groq
    • OpenRouter (also for Kimi K2)
    • GitHub Models
    • Ollama (does not require an API key)
    • Kimi K2 (uses OpenRouter API key)
  2. Obtain the Required API Key for Your Provider

    • DeepSeek: Register at deepseek.com and obtain your API key from your account dashboard.
    • Groq: Register at groq.com, log in to the dashboard, and create an API key (usually under “API” or “Keys”).
    • OpenRouter/Kimi K2: Register at openrouter.ai and generate an API key.
    • GitHub Models:
      1. Go to GitHub > Settings > Developer settings > Personal access tokens > Tokens (classic).
      2. Click “Generate new token” and select scopes like repo and read:user.
      3. Copy the generated token—it should start with ghp_.
    • Ollama: No API key required, but you must have Ollama installed locally.
  3. If Using Exa with the Researcher Agent (Optional):
    Register at exa.ai and generate your API key from your account page.

  4. Decide on Any Optional Model Customizations
    If you want, decide which exact model variants to use by setting model ID variables (e.g., DEEPSEEK_TEAM_MODEL_ID, GROQ_AGENT_MODEL_ID). Refer to the README under “Note on Model Selection” and “GitHub Models Setup and Available Models” for guidance.
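For a local run outside FastMCP, these overrides can be expressed as ordinary environment variables. A minimal sketch, assuming DeepSeek as the provider; the model ID shown (deepseek-chat) is illustrative, not a default confirmed by the README, so check the sections named above for the variants your provider actually serves:

```shell
# Illustrative overrides; the model IDs are assumptions, not confirmed defaults.
export LLM_PROVIDER=deepseek
export DEEPSEEK_API_KEY=your_deepseek_api_key
export DEEPSEEK_TEAM_MODEL_ID=deepseek-chat    # model used by the coordinating team
export DEEPSEEK_AGENT_MODEL_ID=deepseek-chat   # model used by individual agents
```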

  5. Open the FastMCP Connection Interface
    Go to the FastMCP system and find your Sequential Thinking Multi-Agent System connection.

  6. Press "Install Now" to Add ENVs
    Click the “Install Now” button to access environment variable fields.

  7. Fill in the Required ENV Values
    Enter the required values in the FastMCP ENV interface:

    • LLM_PROVIDER — (e.g., deepseek, groq, openrouter, github, ollama, or kimi)
    • {PROVIDER}_API_KEY or GITHUB_TOKEN — Paste the API key you obtained for your provider.
    • Optional: LLM_BASE_URL, {PROVIDER}_TEAM_MODEL_ID, {PROVIDER}_AGENT_MODEL_ID if you wish to customize models.
    • If using Exa: Include EXA_API_KEY.

    Examples:

    • For DeepSeek (default):
      • LLM_PROVIDER=deepseek
      • DEEPSEEK_API_KEY=your_deepseek_api_key
    • For GitHub Models:
      • LLM_PROVIDER=github
      • GITHUB_TOKEN=ghp_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
      • Optionally, GITHUB_TEAM_MODEL_ID=openai/gpt-5, GITHUB_AGENT_MODEL_ID=openai/gpt-5-mini
    • For Groq:
      • LLM_PROVIDER=groq
      • GROQ_API_KEY=your_groq_api_key
    • If using Exa:
      • EXA_API_KEY=your_exa_api_key
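The provider-to-key lookup in step 7 can be sanity-checked with a short script. This is a hedged sketch, not part of the server itself: the variable names follow the conventions listed above, and the mapping (e.g., kimi reusing the OpenRouter key, ollama needing no key) mirrors the provider notes in steps 1 and 2:

```python
import os
from typing import Optional

# Which credential each provider expects; "kimi" is served via OpenRouter
# and "ollama" runs locally with no key, per the setup notes above.
KEY_FOR_PROVIDER = {
    "deepseek": "DEEPSEEK_API_KEY",
    "groq": "GROQ_API_KEY",
    "openrouter": "OPENROUTER_API_KEY",
    "kimi": "OPENROUTER_API_KEY",
    "github": "GITHUB_TOKEN",
    "ollama": None,
}

def check_env(env: dict) -> Optional[str]:
    """Return the name of a missing/invalid variable, or None if the config looks OK."""
    provider = env.get("LLM_PROVIDER", "deepseek")  # deepseek is the default
    if provider not in KEY_FOR_PROVIDER:
        return "LLM_PROVIDER"
    key_name = KEY_FOR_PROVIDER[provider]
    if key_name and not env.get(key_name):
        return key_name
    return None

if __name__ == "__main__":
    missing = check_env(dict(os.environ))
    print("OK" if missing is None else f"missing: {missing}")
```

Running it after step 8 should print OK; any other output names the variable still left unset.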
  8. Save and Apply the Configuration
    Confirm the values and save your environment variable settings in FastMCP.

Your Sequential Thinking Multi-Agent System should now be configured with the appropriate API tokens.

Similar MCP Servers

Sequential Thinking (Official)

An MCP server implementation of a structured sequential thinking process for breaking down complex problems, iteratively refining solutions, and exploring multiple reasoning paths through dynamic, reflective problem-solving.


Sequential Thinking Tools

Provides structured problem-solving tools for step-by-step analysis, branching thoughts, and adaptive reasoning strategies in complex decision-making processes.

Productivity, Project Management

Structured Thinking

Structures reasoning processes through defined thought stages, managing a history of thoughts with metadata for transparent, step-by-step problem solving and decision making.

Memory Management, AI and Machine Learning
