How to ask LLMs to produce reliable reasoning and make reason-responsive decisions.
Cross-referenced across 55 tracked directories
Popularity Rank: #642
Listed In: 1 / 55
Adoption Stage: Emerging (recently added to the ecosystem)
Created: 8/9/2023
Last Commit: 2/3/2025
GitHub Stars: 125
Forks: 8
Score: 100/100
Dependency Vulnerabilities: 0
Related projects:
- List of papers on hallucination detection in LLMs.
- A Chinese collection of prompt examples for use with the ChatGPT model.
- LLM hallucination paper list.
- A curated list of practical guide resources for LLMs.