What Downloads Actually Count
npm download counts measure how many times a package was fetched from the registry. This includes every npm install, every CI/CD pipeline run, every Docker build, and every lockfile resolution. A single project that runs CI on every push might generate dozens of downloads per day for each dependency, even if no human ever looks at the package.
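The per-package counts that power these statistics are exposed through npm's public downloads API. Below is a minimal sketch of building a query URL for it; the endpoint shape is the documented one, but the fetch usage at the end is illustrative and requires network access.

```typescript
// npm's public downloads API reports point-in-time counts per package.
// Supported periods include "last-day", "last-week", and "last-month".
type Period = "last-day" | "last-week" | "last-month";

function downloadCountUrl(pkg: string, period: Period): string {
  // Scoped package names (e.g. "@scope/name") need URL encoding.
  return `https://api.npmjs.org/downloads/point/${period}/${encodeURIComponent(pkg)}`;
}

// Illustrative usage (requires network access):
// const res = await fetch(downloadCountUrl("react", "last-week"));
// const { downloads } = await res.json();
```

Note that the API reports raw fetch counts: it cannot distinguish a human running npm install from a CI runner resolving a lockfile.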
This means download counts are heavily influenced by automation. A package that's a dependency of a popular CI tool will have enormous download counts regardless of how useful it is on its own. A package that developers install manually for local use will have much lower counts even if it's widely appreciated.
Why This Matters for AI Tools
AI tools, particularly MCP servers, have a usage pattern that download counts don't capture well. Many MCP servers are installed once and run locally for months. They generate one download at installation time and zero thereafter, even if they're used daily. Compare this to a utility library that gets downloaded millions of times per week because it's in the dependency tree of thousands of other packages.
A highly useful MCP server with 500 weekly downloads might have more active daily users than a utility library with 50,000 weekly downloads. The download count comparison would suggest the library is 100x more popular, but the reality is quite different.
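The scenario above can be made concrete. All numbers here are hypothetical, taken from the example in the text rather than from any real measurement:

```typescript
// Downloads are a poor proxy for active users: an install-once tool
// generates few downloads, a CI-resolved dependency generates many.
interface Tool {
  weeklyDownloads: number;
  estimatedDailyUsers: number; // hypothetical, not observable from npm stats
}

const mcpServer: Tool = { weeklyDownloads: 500, estimatedDailyUsers: 400 };
const utilLib: Tool = { weeklyDownloads: 50_000, estimatedDailyUsers: 300 };

// The download ratio says the library is 100x "more popular"...
const downloadRatio = utilLib.weeklyDownloads / mcpServer.weeklyDownloads;

// ...but the MCP server can still have more people actually using it daily.
const serverHasMoreUsers =
  mcpServer.estimatedDailyUsers > utilLib.estimatedDailyUsers;
```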
CI/CD Inflation
The CI/CD inflation effect is substantial. When a package is listed as a dependency in a project that has automated testing, every test run downloads the package. A popular project with 100 contributors running tests multiple times per day generates thousands of downloads per week from CI alone. These are legitimate downloads, but they don't represent 100 users deciding to install the package.
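A back-of-envelope estimate shows how quickly this adds up. The inputs below are illustrative assumptions (push frequency, cache misses), not measured values:

```typescript
// Rough estimate of CI-driven downloads for a single dependency.
// Assumes every push triggers a CI run and every uncached run
// downloads the package exactly once.
function weeklyCiDownloads(
  contributors: number,
  pushesPerContributorPerDay: number,
  workdaysPerWeek: number,
): number {
  return contributors * pushesPerContributorPerDay * workdaysPerWeek;
}

// 100 contributors pushing 3 times a day, 5 days a week:
// 100 * 3 * 5 = 1,500 downloads per week from CI alone,
// without a single new user choosing to install the package.
const ciDownloads = weeklyCiDownloads(100, 3, 5);
```

In practice registry caches and CI-level dependency caching reduce this number, but the order of magnitude holds for projects without aggressive caching.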
For AI tools that are typically installed as standalone applications rather than as dependencies of other packages, CI inflation is less of a factor. But comparing their download counts against packages that benefit from CI inflation creates a misleading comparison.
Geographic and Platform Variations
Download counts also reflect geographic patterns in package registry usage. Downloads served by regional npm mirrors, alternative registries, and corporate proxies may or may not be reflected in the official counts. A package that's popular in China might have its downloads split across the main registry and local mirrors, making its true adoption difficult to gauge from npmjs.com stats alone.
Platform distribution matters too. MCP servers distributed as Python packages don't show up in npm stats, and vice versa. Comparing tools across package ecosystems using download counts is comparing different measurement systems.
What to Use Instead
Download counts are useful as one signal among several, but they shouldn't be the primary evaluation metric for AI tools. More informative signals include GitHub issues and discussions (which indicate active usage), fork count (which indicates derivative work), contributor count (which indicates community investment), and directory presence (which indicates curation).
Composite metrics that combine multiple signals provide a more reliable assessment. A tool with moderate download counts but active issues, regular updates, and presence in curated directories is likely a better choice than one with high download counts but no recent activity.
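One way to operationalize this is a weighted score over several signals. The signal names, weights, and scaling choices below are illustrative assumptions, not an established metric:

```typescript
// Sketch of a composite evaluation score. Weights are arbitrary
// assumptions chosen to illustrate the idea, not tuned values.
interface Signals {
  weeklyDownloads: number;
  openIssues: number;          // active issues suggest real usage
  contributors: number;        // community investment
  inCuratedDirectory: boolean; // curation signal
  daysSinceLastRelease: number;
}

function compositeScore(s: Signals): number {
  // Log-scale downloads so CI inflation can't dominate the score.
  const downloadSignal = Math.log10(1 + s.weeklyDownloads);
  const activitySignal = Math.log10(1 + s.openIssues + s.contributors);
  const curationSignal = s.inCuratedDirectory ? 1 : 0;
  // Recent releases score higher; decays to 0 after about a year.
  const freshnessSignal = Math.max(0, 1 - s.daysSinceLastRelease / 365);
  return (
    0.25 * downloadSignal +
    0.35 * activitySignal +
    0.2 * curationSignal +
    0.2 * freshnessSignal
  );
}

// Moderate downloads but active and curated...
const activeTool = compositeScore({
  weeklyDownloads: 500, openIssues: 40, contributors: 12,
  inCuratedDirectory: true, daysSinceLastRelease: 14,
});

// ...versus high downloads but stale and uncurated.
const staleTool = compositeScore({
  weeklyDownloads: 50_000, openIssues: 2, contributors: 1,
  inCuratedDirectory: false, daysSinceLastRelease: 600,
});
```

Under these (assumed) weights, the active tool outscores the stale one despite having 100x fewer downloads, which is exactly the judgment the paragraph above argues for.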
For MCP servers specifically, the most reliable usage indicator is often community testimony: developers writing about their experiences, recommending tools in forums, and contributing to the project. These qualitative signals are harder to aggregate than download counts but are often more informative about actual usage and satisfaction.