Why Descriptions Matter So Much
When an AI model connects to your MCP server, the tool descriptions are literally all it knows about what your tools do. It can't read your source code. It can't try things and learn from experience. It reads the description, builds a mental model of the tool, and uses that model for every subsequent decision about when and how to call it.
A vague description means the model guesses. Sometimes it guesses right. Often it guesses wrong. Clear descriptions eliminate guessing and make your tools work reliably across different models, different prompts, and different user requests.
What Makes a Bad Description
Here are real patterns from MCP servers in the wild (paraphrased): "Process data," "Execute operation," "Get information." These tell the model almost nothing. It doesn't know what kind of data, which operation, or what information it will get back. The model has to infer everything from context, which it might not have.
Slightly better but still problematic: "Query the database." Which database? What kind of queries? Read-only or read-write? What format are the results in? A model working with this description might generate a DELETE query when the user only wants to read data.
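To make the problem concrete, here is roughly what a model sees for that tool when it lists tools on the server. This is a hypothetical tool definition (the name and schema are illustrative, not from a real server), shaped like an entry in an MCP tools/list response:

```typescript
// Hypothetical tool definition, shaped like one entry in an MCP
// tools/list response. This is everything the model has to go on.
const vagueTool = {
  name: "query_db",
  description: "Query the database.",
  inputSchema: {
    type: "object",
    properties: {
      query: { type: "string" },
    },
    required: ["query"],
  },
};

// Nothing here says which database, whether writes are allowed,
// or what the results look like. The model is left to guess.
console.log(vagueTool.description);
```

Note that the schema alone doesn't save you: `query: { type: "string" }` tells the model the shape of the input, not its meaning.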
What Makes a Good Description
Good descriptions answer: what does this tool do, when should it be used, what are its inputs, what does it return, and what should it NOT be used for. Here's an example:
"Execute a read-only SQL query against the connected PostgreSQL database. Use this when the user asks questions about data, wants to look up records, check statistics, or explore the database contents. Returns results as a JSON array of rows. Cannot modify data (INSERT, UPDATE, DELETE are blocked). Maximum 1000 rows returned."
This description tells the model exactly what it can do (read-only SQL), when to use it (data questions), what it gets back (JSON rows), what it can't do (modifications), and what the limits are (1000 rows). The model won't struggle to decide when this tool is appropriate.
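As a sketch, here is the same tool carrying the fuller description from above. The tool name and schema are again illustrative, but each clause of the description maps to one of the five questions a good description should answer:

```typescript
// Sketch of the same tool with the fuller description.
// Name and schema are illustrative, not from a real server.
const readOnlyQueryTool = {
  name: "run_query",
  description:
    "Execute a read-only SQL query against the connected PostgreSQL " +
    "database. Use this when the user asks questions about data, wants " +
    "to look up records, check statistics, or explore the database " +
    "contents. Returns results as a JSON array of rows. Cannot modify " +
    "data (INSERT, UPDATE, DELETE are blocked). Maximum 1000 rows returned.",
  inputSchema: {
    type: "object",
    properties: {
      query: { type: "string" },
    },
    required: ["query"],
  },
};
```

The description is longer than the vague version, but that cost is paid once; every call the model makes afterward benefits from it.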
Parameter Descriptions Matter Too
Each parameter in your tool's schema should also have a clear description. "query: string" is insufficient. "query: A SQL SELECT statement to execute against the database. Do not include semicolons or multiple statements. Example: SELECT name, email FROM users WHERE created_at > '2026-01-01'" gives the model everything it needs to generate correct inputs.
Including examples in parameter descriptions is particularly effective. The model learns better from examples than from abstract rules. One good example communicates more than a paragraph of constraints.
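In the MCP wire format, parameter descriptions live on the JSON Schema properties inside `inputSchema`. A sketch of the query parameter described as above (field names illustrative); if you build schemas with zod in the TypeScript SDK, its `.describe()` method carries the same text into the generated schema:

```typescript
// Sketch: a parameter description with a concrete example embedded,
// placed on the JSON Schema property inside inputSchema.
const inputSchema = {
  type: "object",
  properties: {
    query: {
      type: "string",
      description:
        "A SQL SELECT statement to execute against the database. " +
        "Do not include semicolons or multiple statements. " +
        "Example: SELECT name, email FROM users WHERE created_at > '2026-01-01'",
    },
  },
  required: ["query"],
};
```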
Testing Your Descriptions
The best way to test tool descriptions is to connect your server and try requests that should trigger each tool. If the model picks the wrong tool, your descriptions are ambiguous. If the model picks the right tool but sends wrong parameters, your parameter descriptions need work. If everything works smoothly, your descriptions are doing their job.
Try edge cases too. What happens when the user's request could plausibly match two different tools? Does the model pick the right one? If not, add differentiating language to the descriptions. "Use this for file content searches" versus "Use this for file name lookups" is clearer than having both described as "search files."
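One way to catch overlapping descriptions before a model trips over them is a quick self-check: flag tool pairs whose descriptions share most of their words. A rough sketch under an arbitrary threshold; this is a heuristic, not a substitute for actually testing with a model:

```typescript
// Rough heuristic: high word overlap between two descriptions often
// means the model can't tell the tools apart.
function wordSet(s: string): Set<string> {
  return new Set(s.toLowerCase().match(/[a-z]+/g) ?? []);
}

function overlap(a: string, b: string): number {
  const wa = wordSet(a);
  const wb = wordSet(b);
  let shared = 0;
  for (const w of wa) if (wb.has(w)) shared++;
  return shared / Math.min(wa.size, wb.size);
}

// Both tools described as "search files": total overlap, ambiguous.
const vague = overlap("Search files.", "Search files.");

// Differentiating language pushes the overlap down.
const differentiated = overlap(
  "Use this for file content searches.",
  "Use this for file name lookups.",
);

console.log({ vague, differentiated });
```

Tool pairs scoring near 1.0 are the ones to rewrite with differentiating language first.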
Related Reading
- How AI Assistants Choose Which Tool to Use
- Why Your AI Assistant Ignores Perfectly Good Tools
- Building an MCP Server from Scratch with the TypeScript SDK
Browse MCP servers on Skillful.sh. Search 137,000+ AI tools.