List of papers on hallucination detection in LLMs.
Cross-referenced across 55 tracked directories.
Popularity Rank: #649
Listed In: 1 / 55
Adoption Stage: Emerging
Created: 9/15/2023
GitHub Stars: 1,060
Score: 100/100
Dependency Vulnerabilities: 0
A curation of awesome tools, documents and projects about LLM Security.
A trend starts from "Chain of Thought Prompting Elicits Reasoning in Large Language Models."
A curated list of practical guide resources for LLMs.
LLM hallucination paper list.
Forks: 81
Last Commit: 1/11/2026
Recently added to the ecosystem