Berkeley Function-Calling Leaderboard
a leaderboard that evaluates LLMs' ability to call external functions and tools.
CompassRank
a leaderboard dedicated to the most advanced language and visual models, offering a comprehensive, objective, and neutral evaluation reference for industry and research.
CompMix
a benchmark evaluating QA methods that operate over a mixture of heterogeneous input sources (KB, text, tables, infoboxes).
DreamBench++
a benchmark for evaluating large language models (LLMs) on tasks involving both textual and visual imagination.
FELM
a meta-benchmark that evaluates how well factuality evaluators assess the outputs of large language models (LLMs).
InfiBench
a benchmark designed to evaluate large language models (LLMs) specifically in their ability to answer real-world coding-related questions.
LawBench
a benchmark designed to evaluate large language models in the legal domain.
LLMEval
a benchmark that examines how large language models perform across various scenarios and analyzes the results from an interpretability perspective.
M3CoT
a benchmark that evaluates large language models on a variety of multimodal reasoning tasks, including language, natural and social sciences, physical and social commonsense, temporal reasoning, algebra, and geometry.
MathEval
a comprehensive benchmarking platform designed to evaluate large models' mathematical abilities across 20 fields and nearly 30,000 math problems.
MixEval
a ground-truth-based dynamic benchmark derived from off-the-shelf benchmark mixtures, which ranks LLMs with high fidelity (0.96 correlation with Chatbot Arena) while running locally and quickly (6% of the time and cost of running MMLU).
MMedBench
a benchmark that evaluates large language models' ability to answer medical questions across multiple languages.
MMToM-QA
a multimodal question-answering benchmark designed to evaluate AI models' cognitive ability to understand human beliefs and goals.
OlympicArena
a benchmark for evaluating AI models across multiple academic disciplines like math, physics, chemistry, biology, and more.
PubMedQA
a biomedical question-answering benchmark designed for answering research-related questions using PubMed abstracts.
SciBench
a benchmark designed to evaluate large language models (LLMs) on complex, college-level scientific problems from domains such as chemistry, physics, and mathematics.
SuperBench
a benchmark platform for evaluating large language models (LLMs) across a range of tasks, with particular focus on natural language understanding, reasoning, and generalization.
SuperLim
a Swedish language understanding benchmark that evaluates natural language processing (NLP) models on various tasks such as argumentation analysis, semantic similarity, and textual entailment.
TAT-DQA
a large-scale Document Visual Question Answering (VQA) dataset designed for complex document understanding, particularly in financial reports.
TAT-QA
a large-scale question-answering benchmark focused on real-world financial data, integrating both tabular and textual information.