ai-ml
2,466 AI tools in the ai-ml category
routra
Asman Mirza
Routra Python SDK - OpenAI-compatible client with intelligent multi-provider routing
lm-proxy-server
Vitalii Stepanenko
LM Proxy Server is an OpenAI-compatible HTTP proxy server for running inference on various LLMs; it works with the Google, Anthropic, and OpenAI APIs, local PyTorch inference, etc.
libkvikio-cu12
NVIDIA Corporation
KvikIO - GPUDirect Storage (C++)
languagemodelcommon
Imran
Provides the underlying framework to enhance langchain and add loading configurations
kvikio-cu12
NVIDIA Corporation
KvikIO - GPUDirect Storage
llm-proxy-server
Vitalii Stepanenko
LLM Proxy Server is an OpenAI-compatible HTTP proxy server for running inference on various LLMs; it works with the Google, Anthropic, and OpenAI APIs, local PyTorch inference, etc.
context-pipe
Sarvesh Kr. Dwivedi
Extensible conversation context management toolkit for LLM applications
pyfgs
PyO3 bindings and Python interface to FragGeneScanRs, a gene prediction model for short and error-prone reads.
omnicomp-router
LLM Router API server for OmniComp providers
telegram-rag-bot
Production-ready Telegram FAQ bot with Russian LLMs, RAG, and multi-provider fallback
claude-portage
Eric Bowman
Portable Claude Code workspace archives
vllm-tuner
A Python package for tuning vLLM hyperparameters.
conexus
Self-hosted semantic search and knowledge management for LLM-driven development
amf-core
Jad
Atomic Model Fragmentation (AMF) - Universal LLM Decomposition Library
backend.ai-storage-proxy
Lablup Inc. and contributors
Backend.AI Storage Proxy
near-langchain-contract-deployer
LangChain BaseTool for estimating and deploying NEAR smart contracts
cf-datahive
Canonical result and measurement data storage APIs for Cogniflow
llmops-observability
LLMOps Observability SDK: decorators + SQS dispatch with compression
AtherisLiteLLM
AI-powered Python fuzzer using LiteLLM and Atheris to automatically generate and execute fuzzing harnesses.
tqai
pbertsch
TurboQuant KV cache compression for local LLM inference