An adaptation of the MCP Sequential Thinking Server designed to guide tool usage in problem-solving. This server helps break down complex problems into manageable steps and provides recommendations for which MCP tools would be most effective at each stage.
A Model Context Protocol (MCP) server that combines sequential thinking with intelligent tool suggestions. For each step in the problem-solving process, it provides confidence-scored recommendations for which tools to use, along with rationale for why each tool would be appropriate.
- Dynamic and reflective problem-solving through sequential thoughts
- Flexible thinking process that adapts and evolves
- Support for branching and revision of thoughts
- LLM-driven intelligent tool recommendations for each step
- Confidence scoring for tool suggestions
- Detailed rationale for tool recommendations
- Step tracking with expected outcomes
- Progress monitoring with previous and remaining steps
- Alternative tool suggestions for each step
- Memory management with configurable history limits
- Manual history cleanup capabilities
This server facilitates sequential thinking with MCP tool coordination. The LLM analyzes available tools and their descriptions to make intelligent recommendations, which are then tracked and organized by this server.
The workflow:
- LLM provides available MCP tools to the sequential thinking server (a minimal example call follows this list)
- LLM analyzes each thought step and recommends appropriate tools
- Server tracks recommendations, maintains context, and manages memory
- LLM executes recommended tools and continues the thinking process
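For concreteness, a first call in this loop might pass a payload along the following lines. This is only a sketch: the tool names and thought text are placeholders for whatever is available in your environment, and the field names match the parameters documented later in this README.

```typescript
// Hypothetical first call to the sequential thinking tool.
// Tool names and thought text are placeholders, not part of the server's API.
const firstCall = {
  available_mcp_tools: ['mcp-omnisearch', 'mcp-turso-cloud'],
  thought: 'Break the problem into research, implementation, and verification phases',
  thought_number: 1,
  total_thoughts: 5,
  next_thought_needed: true,
};
```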
Each recommendation includes:
- A confidence score (0-1) indicating how well the tool matches the need
- A clear rationale explaining why the tool would be helpful
- A priority level to suggest tool execution order
- Suggested input parameters for the tool
- Alternative tools that could also be used
The server works with any MCP tools available in your environment and automatically manages memory to prevent unbounded growth.
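To make that shape concrete, a single recommendation can be sketched as a TypeScript type. The first four fields appear in the example below; `suggested_inputs` and `alternatives` are assumed names for the suggested parameters and alternative tools mentioned above, not the server's actual type definitions.

```typescript
// Sketch of a single tool recommendation, inferred from the description above.
interface ToolRecommendation {
  tool_name: string;   // name of the recommended MCP tool
  confidence: number;  // 0-1, how well the tool matches the need
  rationale: string;   // why this tool would be helpful
  priority: number;    // suggested execution order
  suggested_inputs?: Record<string, unknown>; // assumed name: example parameters for the tool
  alternatives?: string[];                    // assumed name: other tools that could also be used
}
```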
Here's an example of how the server guides tool usage:
```json
{
  "thought": "Initial research step to understand what universal reactivity means in Svelte 5",
  "current_step": {
    "step_description": "Gather initial information about Svelte 5's universal reactivity",
    "expected_outcome": "Clear understanding of universal reactivity concept",
    "recommended_tools": [
      {
        "tool_name": "search_docs",
        "confidence": 0.9,
        "rationale": "Search Svelte documentation for official information",
        "priority": 1
      },
      {
        "tool_name": "tavily_search",
        "confidence": 0.8,
        "rationale": "Get additional context from reliable sources",
        "priority": 2
      }
    ],
    "next_step_conditions": [
      "Verify information accuracy",
      "Look for implementation details"
    ]
  },
  "thought_number": 1,
  "total_thoughts": 5,
  "next_thought_needed": true
}
```
The server tracks your progress and supports:
- Creating branches to explore different approaches (branching and revision are sketched after this list)
- Revising previous thoughts with new information
- Maintaining context across multiple steps
- Suggesting next steps based on current findings
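As an illustration, a revision of an earlier thought and a branch from one might look like the sketches below. The values are hypothetical; only the parameter names come from the API documented later in this README.

```typescript
// Hypothetical revision call: thought 4 revisits the conclusion reached in thought 2.
const revision = {
  thought: 'Revisiting step 2: the earlier assumption about the API no longer holds',
  thought_number: 4,
  total_thoughts: 6,
  next_thought_needed: true,
  is_revision: true,
  revises_thought: 2,
};

// Hypothetical branch call: explore an alternative approach starting from thought 3.
const branch = {
  thought: 'Exploring an alternative approach that avoids the external dependency',
  thought_number: 5,
  total_thoughts: 7,
  next_thought_needed: true,
  branch_from_thought: 3,
  branch_id: 'alternative-approach',
};
```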
This server requires configuration through your MCP client. Here are examples for different environments:
Add this to your Cline MCP settings:
```json
{
  "mcpServers": {
    "mcp-sequentialthinking-tools": {
      "command": "npx",
      "args": ["-y", "mcp-sequentialthinking-tools"],
      "env": {
        "MAX_HISTORY_SIZE": "1000"
      }
    }
  }
}
```
For WSL environments, add this to your Claude Desktop configuration:
```json
{
  "mcpServers": {
    "mcp-sequentialthinking-tools": {
      "command": "wsl.exe",
      "args": [
        "bash",
        "-c",
        "source ~/.nvm/nvm.sh && MAX_HISTORY_SIZE=1000 /home/username/.nvm/versions/node/v20.12.1/bin/npx mcp-sequentialthinking-tools"
      ]
    }
  }
}
```
The server exposes a single MCP tool for dynamic and reflective problem-solving through thoughts, with intelligent tool recommendations.
Parameters:
- `available_mcp_tools` (array, required): Array of MCP tool names available for use (e.g., `["mcp-omnisearch", "mcp-turso-cloud"]`)
- `thought` (string, required): Your current thinking step
- `next_thought_needed` (boolean, required): Whether another thought step is needed
- `thought_number` (integer, required): Current thought number
- `total_thoughts` (integer, required): Estimated total thoughts needed
- `is_revision` (boolean, optional): Whether this revises previous thinking
- `revises_thought` (integer, optional): Which thought is being reconsidered
- `branch_from_thought` (integer, optional): Branching point thought number
- `branch_id` (string, optional): Branch identifier
- `needs_more_thoughts` (boolean, optional): If more thoughts are needed
- `current_step` (object, optional): Current step recommendation with:
  - `step_description`: What needs to be done
  - `recommended_tools`: Array of tool recommendations with confidence scores
  - `expected_outcome`: What to expect from this step
  - `next_step_conditions`: Conditions for next step
- `previous_steps` (array, optional): Steps already recommended
- `remaining_steps` (array, optional): High-level descriptions of upcoming steps
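Read as a TypeScript shape, the parameters above look roughly like the sketch below. The interface names are illustrative, not the server's published type definitions, and `ToolRecommendation` refers to the recommendation sketch shown earlier in this README.

```typescript
// Illustrative shapes for the tool's input, derived from the parameter list above.
interface StepRecommendation {
  step_description: string;                // what needs to be done
  recommended_tools: ToolRecommendation[]; // see the earlier recommendation sketch
  expected_outcome: string;                // what to expect from this step
  next_step_conditions?: string[];         // conditions for the next step
}

interface SequentialThinkingToolsInput {
  available_mcp_tools: string[];
  thought: string;
  next_thought_needed: boolean;
  thought_number: number;
  total_thoughts: number;
  is_revision?: boolean;
  revises_thought?: number;
  branch_from_thought?: number;
  branch_id?: string;
  needs_more_thoughts?: boolean;
  current_step?: StepRecommendation;
  previous_steps?: StepRecommendation[];
  remaining_steps?: string[];
}
```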
The server includes built-in memory management to prevent unbounded growth:
- History Limit: Configurable maximum number of thoughts to retain (default: 1000)
- Automatic Trimming: History is automatically trimmed when the limit is exceeded (sketched after this list)
- Manual Cleanup: Server provides methods to clear history when needed
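Conceptually, the trimming behaviour amounts to something like the following sketch. It illustrates the described behaviour (keep only the most recent `MAX_HISTORY_SIZE` thoughts, defaulting to 1000), not the server's actual implementation.

```typescript
// Illustration of history trimming; default of 1000 matches the docs above.
const MAX_HISTORY_SIZE = Number(process.env.MAX_HISTORY_SIZE ?? 1000);

const thoughtHistory: unknown[] = [];

function recordThought(thought: unknown): void {
  thoughtHistory.push(thought);
  if (thoughtHistory.length > MAX_HISTORY_SIZE) {
    // Drop the oldest entries so the history never grows without bound.
    thoughtHistory.splice(0, thoughtHistory.length - MAX_HISTORY_SIZE);
  }
}
```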
You can configure the history size by setting the `MAX_HISTORY_SIZE` environment variable:
```json
{
  "mcpServers": {
    "mcp-sequentialthinking-tools": {
      "command": "npx",
      "args": ["-y", "mcp-sequentialthinking-tools"],
      "env": {
        "MAX_HISTORY_SIZE": "500"
      }
    }
  }
}
```
Or for local development:
```bash
MAX_HISTORY_SIZE=2000 npx mcp-sequentialthinking-tools
```
- Clone the repository
- Install dependencies: `pnpm install`
- Build the project: `pnpm build`
- Run in development mode: `pnpm dev`
The project uses changesets for version management. To publish:
- Create a changeset: `pnpm changeset`
- Version the package: `pnpm changeset version`
- Publish to npm: `pnpm release`
Contributions are welcome! Please feel free to submit a Pull Request.
MIT License - see the LICENSE file for details.
- Built on the Model Context Protocol
- Adapted from the MCP Sequential Thinking Server