ML Testing
474 AI tools in the ML Testing category
poker-rangeman
dargeo
A comprehensive JavaScript library for parsing, managing, and filtering poker hand ranges with support for dead cards, board cards, and hand strength evaluation
iswasmfast
maga
Performance comparison of WebAssembly, C++ Addon, and native implementations of various algorithms in Node.js.
jse-eval
6utt3rfly
JavaScript expression parsing and evaluation.
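Libraries in the expression-eval family parse a JavaScript expression and evaluate it against a context object. A self-contained sketch of that idea (using the `Function` constructor purely for brevity; the real libraries parse to an AST via jsep instead, and `evaluateExpression` is a hypothetical stand-in, not their API):

```javascript
// Minimal sketch of context-based expression evaluation, the idea behind
// expression-eval-style libraries. Real libraries walk a parsed AST; the
// Function constructor is used here only to keep the example short.
function evaluateExpression(expression, context = {}) {
  const names = Object.keys(context);
  const values = Object.values(context);
  // Build a function whose parameters are the context keys, then call it.
  const fn = new Function(...names, `"use strict"; return (${expression});`);
  return fn(...values);
}

console.log(evaluateExpression('a + b * 2', { a: 1, b: 3 })); // 7
```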
eventemitter3000
developerdragon
```
BENCHMARK: add -> remove
EventEmitter2 x 15,905,750 ops/sec ±1.91% (93 runs sampled)
EventEmitter3 x 75,966,377 ops/sec ±0.75% (93 runs sampled)
EnhancedDrip x 2,233,248 ops/sec ±2.00% (94 runs sampled)
Drip x 206,609,678 ops/sec ±1.18% (90 runs sampled)
```
web-tooling-benchmark-generator
alopezsanchez
CLI tools to generate benchmark cases in the v8/web-tooling-benchmark repository.
lowcarb
helio-frota
A feature-basic benchmark.js wrapper to keep you with the pump without muscle loss.
@casbin/expression-eval
hsluoyz
JavaScript expression parsing and evaluation.
benchmark-array-ref
jameskmonger
Benchmarking array referencing.
@agentshield-ai/openclaw-plugin
markbriers
AgentShield real-time security evaluation plugin for OpenClaw. Intercepts tool calls before execution and evaluates them against Sigma detection rules.
...moreeslint-rule-benchmark
azat-io
Benchmark ESLint rules with detailed performance metrics for CI and plugin development
thunky
mafintosh
delay the evaluation of a paramless async function and cache the result
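The behavior described, deferring a parameterless async function until first call, running it once, and replaying the cached result to every caller, can be sketched as follows (`lazy` is a local stand-in for illustration, not the published thunky module, which additionally handles retry-on-error):

```javascript
// Minimal sketch of the thunky idea: defer a paramless async function,
// run it at most once, and hand its result to every caller.
function lazy(fn) {
  let state = 'pending'; // 'pending' | 'running' | 'done'
  let result;
  let waiting = [];
  return function run(callback) {
    if (state === 'done') return process.nextTick(() => callback(...result));
    waiting.push(callback);
    if (state === 'running') return;
    state = 'running';
    fn((...args) => {
      state = 'done';
      result = args;        // cache the callback arguments
      const callbacks = waiting;
      waiting = [];
      for (const cb of callbacks) cb(...args);
    });
  };
}

let runs = 0;
const load = lazy((done) => done(null, ++runs));
load((err, n) => console.log(n)); // 1
load((err, n) => console.log(n)); // 1 -- cached, fn ran once
```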
benchmark-cli
dylanpiercey
CLI application to benchmark JavaScript files.
time-span
sindresorhus
Simplified high resolution timing
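The shape of such a timing helper, start a clock, then call the returned function to read elapsed milliseconds, can be sketched with `process.hrtime.bigint()` (`timeSpan` here is a local stand-in, not the published module):

```javascript
// Sketch of the time-span idea: call once to start the clock, call the
// returned function to get elapsed milliseconds with sub-ms precision.
function timeSpan() {
  const start = process.hrtime.bigint();
  return () => Number(process.hrtime.bigint() - start) / 1e6;
}

const end = timeSpan();
for (let i = 0; i < 1e6; i++); // some work
console.log(`took ${end().toFixed(3)} ms`);
```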
rippletide
imbjdd
Rippletide Evaluation CLI
@easy-nodes/core
addisudamena49
A React-based node graph editor built on [React Flow](https://reactflow.dev). Define nodes declaratively with JSON, wire them together visually, and let the built-in evaluation engine run your graph in topological order. Supports sync and async evaluation
@2501-ai/cli
zhuk-aa
[npm](https://www.npmjs.com/package/@2501-ai/cli) · [HumanEval benchmark](https://www.2501.ai/research/full-humaneval-benchmark)
@kodus/agent-readiness
gamalinosqui
Evaluate how prepared your codebase is for autonomous AI coding agents
nairon-bench
_obaid_
AI workflow benchmarking CLI
@index9/mcp
johnwils
Search, inspect, and benchmark 300+ AI models from your editor
wraptile
coderaiser
translate the evaluation of a function that takes multiple arguments into evaluating a sequence of 2 functions, each with any count of arguments
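The described transformation, splitting one multi-argument call into a sequence of exactly two applications, each taking any number of arguments, can be captured in a few lines (`wrap` is a minimal stand-in for illustration; the published module's exact calling convention may differ):

```javascript
// Sketch of the wraptile idea: split a call into exactly two applications,
// each with any count of arguments.
// wrap(fn)(...first)(...rest) === fn(...first, ...rest)
const wrap = (fn) => (...first) => (...rest) => fn(...first, ...rest);

const add3 = (a, b, c) => a + b + c;
console.log(wrap(add3)(1, 2)(3)); // 6
console.log(wrap(add3)(1)(2, 3)); // 6
```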