Open-source security testing framework for AI agents. Discovers dangerous tool-chain compositions via graph analysis, detects execution-level side effects, and runs multi-phase trust-exploitation campaigns.
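To make the graph-analysis idea concrete, here is a minimal hypothetical sketch (not this framework's actual API; the tool names, capability tags, and flow edges are invented for illustration): tools become nodes, plausible output-to-input data flows become edges, and any path from a sensitive-read tool to an external-write tool is flagged as a candidate exfiltration chain.

```python
# Hypothetical sketch of graph-based tool-chain analysis. All names,
# capability tags, and edges are illustrative assumptions, not this
# framework's actual API.
import networkx as nx

# Each agent tool is a node tagged with coarse capabilities.
TOOLS = {
    "list_dir":  {},
    "read_file": {"reads_sensitive": True},
    "summarize": {},
    "http_post": {"writes_external": True},
}

# An edge (A, B) means "output of A can plausibly flow into input of B".
FLOWS = [
    ("list_dir", "read_file"),
    ("read_file", "summarize"),
    ("summarize", "http_post"),
    ("read_file", "http_post"),
]

graph = nx.DiGraph()
graph.add_nodes_from(TOOLS)
graph.add_edges_from(FLOWS)

# A composition is dangerous if a sensitive read can reach an external
# write: a crude exfiltration pattern.
sources = [t for t, caps in TOOLS.items() if caps.get("reads_sensitive")]
sinks = [t for t, caps in TOOLS.items() if caps.get("writes_external")]

for src in sources:
    for dst in sinks:
        for path in nx.all_simple_paths(graph, src, dst):
            print("dangerous chain:", " -> ".join(path))
```

In a real scanner, the capability tags would presumably come from tool schemas or runtime tracing rather than a hand-written table, but the path-enumeration step would look much the same.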
Cross-referenced across 55 tracked directories
Popularity Rank: #3524
Listed In: 1 / 55
Adoption Stage: Emerging
Created: 2/9/2026
GitHub Stars: 4
Score: 100/100
Dependency vulnerabilities: 0 found
Related listings:
Future AGI <no-reply@futureagi.com>: We help GenAI teams maintain high accuracy for their models in production.
Andreas Kirsch, Daedalus Lab Ltd: Directly connecting Python to LLMs (dataclasses & interfaces <-> LLMs).
Jerry Liu: A library of community-driven data loaders for LLMs. Use with LlamaIndex and/or LangChain.
ju-bezdek: Syntactic sugar for LangChain.
Open Issues: 50
Last Commit: 3/20/2026
Recently added to the ecosystem