A collection of human preference datasets for LLM instruction tuning, RLHF, and evaluation.
Cross-referenced across 55 tracked directories
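Human preference datasets of this kind typically store a prompt together with a preferred ("chosen") and a dispreferred ("rejected") response, which reward-model training then consumes as labeled pairs. A minimal sketch of that shape — the field names and helper below are illustrative assumptions, not this collection's actual schema:

```python
# Hypothetical record shape for an RLHF preference dataset:
# one prompt with a preferred ("chosen") and a dispreferred ("rejected") response.
record = {
    "prompt": "Explain what RLHF is in one sentence.",
    "chosen": "RLHF fine-tunes a model using human preference feedback as a reward signal.",
    "rejected": "RLHF is a kind of database.",
}

def to_pairs(records):
    """Flatten records into (prompt, response, label) rows for reward-model training."""
    rows = []
    for r in records:
        rows.append((r["prompt"], r["chosen"], 1))    # preferred response -> label 1
        rows.append((r["prompt"], r["rejected"], 0))  # dispreferred response -> label 0
    return rows

pairs = to_pairs([record])
print(len(pairs))  # 2
```

Individual datasets in such a collection vary in schema (some store rankings over more than two responses), so any loader would need per-dataset adapters.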
Popularity Rank: #623
Listed In: 1 / 55
Adoption Stage: Emerging
Created: 5/3/2023
GitHub Stars: 387
Score: 100/100
0 dependency vulnerabilities found
List of papers on hallucination detection in LLMs.
A Chinese collection of prompt examples to be used with the ChatGPT model.
LLM hallucination paper list.
A curated list of practical guide resources for LLMs.
Forks: 18
Last Commit: 10/4/2023
Recently added to the ecosystem