mmo
A powerful AI orchestrator and skill system for Antigravity, Claude Code, GitHub Copilot, and more. Featuring 60+ specialized domain expert skills, disciplined development workflows, and RTK token optimization.
fabricatio-rag
A Python library for Retrieval-Augmented Generation (RAG) capabilities in LLM applications.
fabricatio-memory
An extension of fabricatio aimed at expanding the context an LLM can handle.
genji
Caleb Evans
Jinja2-based templating for LLM-generated structured output
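genji's actual API isn't shown in this listing, so as a rough illustration of the underlying idea (templating a prompt that asks an LLM for structured output), here is a sketch using the stdlib's string.Template in place of Jinja2; the template text and field names are placeholders, not genji's.

```python
from string import Template

# Illustrative sketch only: genji builds on Jinja2, but this uses the
# stdlib's string.Template to show the general pattern of templating a
# prompt that requests structured (e.g. JSON) output from an LLM.
# The template wording and keys below are made up for this example.
PROMPT = Template(
    "Extract the following fields from the text as JSON with keys "
    "$keys.\n\nText:\n$text"
)

def build_prompt(keys: list[str], text: str) -> str:
    """Fill the template with a key list and the source text."""
    return PROMPT.substitute(keys=", ".join(keys), text=text)

prompt = build_prompt(["name", "date"], "Alice met Bob on 2024-01-05.")
```

Jinja2 adds loops, conditionals, and filters on top of this substitution model, which is what makes it attractive for richer structured-output prompts.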
antidrift
Company brain for Claude. One repo your whole team shares — context, skills, and connected services.
aef-loader
Jake Wilkins
Virtualizarr access for AEF embeddings.
parsimony-cli
Token usage and cost observability for Claude Code sessions
flowpad
A local-first AI development environment powered by Claude Code
fragment-api-python-sdk
GramStackDev
Official Python client for the GramStack.dev Fragment API
openai-http-proxy
Vitalii Stepanenko
OpenAI HTTP Proxy is an OpenAI-compatible HTTP proxy server for running inference across various LLMs; it works with Google, Anthropic, and OpenAI APIs, local PyTorch inference, and more.
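For the OpenAI-compatible proxies listed here, "OpenAI-compatible" generally means the server accepts the standard chat-completions request shape, so a client only needs to change the base URL. A minimal sketch of that request payload, assuming a local proxy address and a placeholder model name (neither is taken from these projects' docs):

```python
import json

# Assumed local proxy address; not documented by these packages.
PROXY_BASE = "http://localhost:8000/v1"

def chat_request(model: str, prompt: str) -> dict:
    """Build a standard OpenAI-style chat-completion payload.

    An OpenAI-compatible proxy routes the request to whichever backend
    (Google, Anthropic, OpenAI, local PyTorch, ...) serves `model`.
    """
    return {
        "model": model,  # placeholder name for illustration
        "messages": [{"role": "user", "content": prompt}],
    }

payload = chat_request("gemini-pro", "Hello")
body = json.dumps(payload)
# The request would be POSTed to f"{PROXY_BASE}/chat/completions".
```

Because the wire format is unchanged, existing OpenAI SDKs can typically target such a proxy by overriding only their base URL setting.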
oai-proxy
Vitalii Stepanenko
OAI Proxy is an OpenAI-compatible HTTP proxy server for running inference across various LLMs; it works with Google, Anthropic, and OpenAI APIs, local PyTorch inference, and more.
lm-proxy-server
Vitalii Stepanenko
LM Proxy Server is an OpenAI-compatible HTTP proxy server for running inference across various LLMs; it works with Google, Anthropic, and OpenAI APIs, local PyTorch inference, and more.
lm-proxy
Vitalii Stepanenko
LM-Proxy is an OpenAI-compatible HTTP proxy server for running inference across various LLMs; it works with Google, Anthropic, and OpenAI APIs, local PyTorch inference, and more.
llm-proxy-server
Vitalii Stepanenko
LLM Proxy Server is an OpenAI-compatible HTTP proxy server for running inference across various LLMs; it works with Google, Anthropic, and OpenAI APIs, local PyTorch inference, and more.
inference-proxy
Vitalii Stepanenko
Inference Proxy is an OpenAI-compatible HTTP proxy server for running inference across various LLMs; it works with Google, Anthropic, and OpenAI APIs, local PyTorch inference, and more.
omnicomp-router
LLM Router API server for OmniComp providers
prellm
Tom Sapletta
preLLM — One function for small LLM preprocessing before large LLM execution. Like litellm.completion() but with decomposition.
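prellm's real API isn't shown in this listing; the stubs below are purely hypothetical, sketching the pattern its description names: a cheap small model decomposes the task before the expensive large model runs each piece.

```python
# Hypothetical sketch of the small-model-first pattern prellm describes.
# None of these function names are prellm's actual API; the two "model"
# functions are string-based stand-ins for real LLM calls.

def small_model_decompose(prompt: str) -> list[str]:
    """Stand-in for a small LLM splitting a prompt into subtasks."""
    return [part.strip() for part in prompt.split(";") if part.strip()]

def large_model_execute(subtask: str) -> str:
    """Stand-in for the expensive large-model call."""
    return f"answer({subtask})"

def completion(prompt: str) -> list[str]:
    """litellm.completion()-style entry point, with decomposition first."""
    return [large_model_execute(s) for s in small_model_decompose(prompt)]

results = completion("summarize notes; draft email")
```

The appeal of the pattern is cost: tokens spent on decomposition go to the cheap model, and the large model only sees the already-narrowed subtasks.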
omnicomp-claude-oauth
Claude OAuth-based OmniComp provider
interactive-git-versioneer
InteractiveGitVersioneer - an interactive Git version manager with support for tags, GitHub releases, and AI-generated messages (Groq/OpenAI). Includes a CI/CD mode for automated pipelines.
llx
Intelligent LLM model router driven by real code metrics — successor to preLLM