ai-ml
2516 AI tools in the ai-ml category
cc-guardian
Kun Yuan
Install local middleware proxies into Claude Code
turboquant-vectors
back2matching
Compress embeddings 8x instantly with TurboQuant. No training needed. Up to +8pp recall vs FAISS PQ at matched storage.
fragpdf
converra
Converra SDK — transparent LLM client wrapping for conversation capture and A/B testing
llmstrike
Akeem McKenzie
Adversarial security testing framework for LLM-powered applications
thinkstrip
Think-block filter for LLM streams
bithub
The missing friendly interface for BitNet inference. Ollama for 1-bit LLMs.
speclogician
denis, hongyu
SpecLogician is an AI framework that turns code, tests, logs, and requirements into mathematical context for LLMs through formal specification synthesis, verification, and analysis.
clemcore
Computational Linguistics Group, University of Potsdam
The cLLM (chat-optimized Large Language Model, 'clem') framework tests these models' ability to engage in games, that is, rule-constituted activities played using language.
telegram-rag-bot
Production-ready Telegram FAQ bot with Russian LLMs, RAG, and multi-provider fallback
claude-portage
Eric Bowman
Portable Claude Code workspace archives
vllm-tuner
A Python package for tuning vLLM hyperparameters.
conexus
Self-hosted semantic search and knowledge management for LLM-driven development
amf-core
Jad
Atomic Model Fragmentation (AMF) - Universal LLM Decomposition Library
backend.ai-storage-proxy
Lablup Inc. and contributors
Backend.AI Storage Proxy
near-langchain-contract-deployer
LangChain BaseTool for estimating and deploying NEAR smart contracts
cf-datahive
Canonical result and measurement data storage APIs for Cogniflow
llmops-observability
LLMOps Observability SDK: decorators + SQS dispatch with compression
AtherisLiteLLM
AI-powered Python fuzzer using LiteLLM and Atheris to automatically generate and execute fuzzing harnesses.
tqai
pbertsch
TurboQuant KV cache compression for local LLM inference