AgentEval
AgentEval is a comprehensive .NET toolkit for AI agent evaluation—tool usage validation, RAG quality metrics, stochastic evaluation, and model comparison—built first for Microsoft Agent Framework (MAF) and Microsoft.Extensions.AI. What RAGAS, PromptFoo, and DeepEval do for Python, AgentEval does for .NET.
Directory Presence
Cross-referenced across 55 tracked directories.

Adoption Metrics
- Popularity Rank: #308
- Adoption Rate: 2%
- Adoption Stage: Emerging
- Unlisted Directories: 54
Statistics
- GitHub Stars: 49
- Forks: 2