The Selection Process
When you ask an AI assistant a question while multiple MCP servers are connected, the model must decide whether to use a tool at all and, if so, which one. This decision is based on the tool descriptions (which the host gathers from each server and includes in the model's context), the user's request, and the model's understanding of each tool's capabilities.
The model essentially pattern-matches your request against the available tool descriptions. If you ask "what files are in the project directory" and there's a filesystem tool described as "read and search files in specified directories," the match is clear. If you ask something more ambiguous, like "find me information about the project," the model must decide between the filesystem tool, a web search tool, and a database tool, any of which might be relevant.
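To make this concrete, here is a minimal sketch of the kind of tool list a host might present, with a deliberately crude keyword-overlap scorer standing in for the model's matching. The tool names and descriptions are hypothetical examples, and real models do far richer semantic matching, but the failure mode is similar: vague requests produce near-ties between tools.

```python
# Hypothetical tool definitions, as a host might collect from MCP servers.
tools = [
    {"name": "read_file",
     "description": "Read and search files in specified directories."},
    {"name": "web_search",
     "description": "Search the web for up-to-date information."},
    {"name": "query_db",
     "description": "Run a read-only SQL query against the analytics database."},
]

def naive_match(request: str, candidates: list[dict]) -> dict:
    """Crude stand-in for description-based selection: score each tool by
    how many words of the request appear in its description."""
    words = set(request.lower().split())
    def score(tool: dict) -> int:
        return len(words & set(tool["description"].lower().split()))
    return max(candidates, key=score)

print(naive_match("what files are in the project directory", tools)["name"])
# → read_file ("files" and "in" overlap; the other descriptions barely match)
```

A request like "find me information about the project" scores weakly against all three descriptions, which is exactly the ambiguity the model has to resolve.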
What Influences the Decision
Tool descriptions are the primary influence. A well-written description that clearly states what the tool does, when it should be used, and what it can't do helps the model make accurate selections. A vague description like "general-purpose data tool" gives the model little to work with.
Parameter schemas also influence selection. If a tool's parameters closely match what the model would need to specify for the current task, the tool is more likely to be selected. A tool that requires a "sql_query" parameter is a strong match when the user asks a data question. A tool that requires a "file_path" parameter is a strong match when the user asks about a specific file.
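The schemas below illustrate this. They are hypothetical JSON Schema fragments, not from any real server: when the user asks a data question, the required "sql_query" field lines up with what the model needs to supply, while "file_path" does not.

```python
# Hypothetical parameter schemas for two tools. The required field names
# themselves signal what kind of request each tool is a fit for.
query_db_schema = {
    "type": "object",
    "properties": {
        "sql_query": {
            "type": "string",
            "description": "Read-only SQL SELECT statement to execute.",
        },
    },
    "required": ["sql_query"],
}

read_file_schema = {
    "type": "object",
    "properties": {
        "file_path": {
            "type": "string",
            "description": "Absolute path of the file to read.",
        },
    },
    "required": ["file_path"],
}
```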
The model's training also plays a role. Models have learned associations between request types and tool types from their training data. Questions about databases tend to trigger database tool usage. Questions about files trigger filesystem tools. These learned associations provide a starting point that tool descriptions can reinforce or override.
When Selection Goes Wrong
Tool selection errors fall into two categories: the model uses the wrong tool, or the model fails to use any tool when it should. Wrong tool selection usually means the tool descriptions are ambiguous. If two tools could plausibly handle the request, the model might pick the wrong one.
The fix for ambiguous tool descriptions is to make them more specific. Instead of "search for information," write "search the local filesystem for files matching a pattern." Instead of "get data," write "execute a read-only SQL query against the connected PostgreSQL database." Specificity reduces ambiguity and improves selection accuracy.
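As a before-and-after sketch (the exact wording is illustrative, not prescriptive), the difference looks like this:

```python
# Vague descriptions give the model almost nothing to pattern-match on.
vague = {
    "search": "search for information",
    "get_data": "get data",
}

# Specific descriptions state scope, mechanism, and data source.
specific = {
    "search": ("Search the local filesystem for files matching a pattern. "
               "Does not search the web."),
    "get_data": ("Execute a read-only SQL query against the connected "
                 "PostgreSQL database."),
}
```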
Failure to use any tool usually means the model didn't recognize the request as requiring tool use. This can happen when the user's phrasing doesn't match the tool descriptions closely enough. Rephrasing the request to more closely match tool terminology often resolves the issue.
Configuring for Better Selection
If you control the MCP servers in your setup, optimizing tool descriptions is one of the highest-impact improvements you can make. Review each tool's description, name, and parameter schema. Ask yourself: if I read this description without any context, would I know exactly when to use this tool?
Including negative descriptions (what the tool shouldn't be used for) can also help. "Use this for SQL queries against the production database. Don't use for file operations or web searches" gives the model clear boundaries.
When multiple similar tools are connected (two different database servers, for example), differentiate them clearly. "PostgreSQL production database with customer data" and "SQLite local analytics database" are much easier for the model to distinguish than "database tool 1" and "database tool 2."
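Putting both techniques together, here is a hypothetical pair of database tool definitions that combine negative descriptions with clear differentiation. The names and wording are made up for illustration:

```python
# Two similar database tools, differentiated by data source and scope,
# each with an explicit statement of what it should NOT be used for.
db_tools = [
    {"name": "pg_prod_query",
     "description": ("Run read-only SQL queries against the PostgreSQL "
                     "production database containing customer data. Do not "
                     "use for file operations, web searches, or local "
                     "analytics.")},
    {"name": "sqlite_analytics_query",
     "description": ("Run SQL queries against the local SQLite analytics "
                     "database. Do not use for customer or production "
                     "data.")},
]
```

Given these descriptions, a request mentioning "customer records" points clearly at the first tool, and one mentioning "local analytics" at the second, where "database tool 1" and "database tool 2" would have left the model guessing.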
Related Reading
- Tool Use in AI Agents: Current Capabilities and Limitations
- MCP vs Function Calling: Understanding the Tradeoffs
- Getting Started with MCP Servers in 2026