PyPI (Real-time)
Status: Active
Real-time PyPI feed for AI/MCP packages
Items Listed: 7,222
Sync Frequency: N/A
Access Method: API
Items in this Directory
paperstack-mcp
arxiv MCP Team
Model Context Protocol server for arXiv PDF retrieval and LLM context generation.
simmer-reactor-mcp
MCP server for Simmer reactor — real-time whale trade events for any AI agent
steer-core
Modelling energy storage from cell to site - STEER OpenCell Design
craft-code
Shubham Agarwal
A Claude Code-like AI coding assistant built with LangChain Deep Agents
ark-market-data-mcp
MCP server for streaming crypto market data via WebSocket
smelt-ai
LLM-powered structured data transformation
coaiapy-mcp
MCP (Model Context Protocol) wrapper for coaiapy observability toolkit
GPTQModel
Production-ready LLM compression/quantization toolkit with hardware-accelerated CPU/GPU inference via HF, vLLM, and SGLang.
ticktick-agent-cli
Oleksandr Tsepukh
Agent-native CLI for TickTick task management — full API coverage via V1 + V2
gwmcp
Bryan Jacinto
Google Workspace MCP Server with guided setup and seamless auth. 114 tools for Gmail, Drive, Docs, Sheets, Calendar & more.
codeowners-coverage
Measure and enforce CODEOWNERS coverage
mergeguide
Chuck McWhirter, MergeGuide, Inc.
AI governance platform — policy enforcement for AI-assisted development. Four enforcement layers (IDE, MCP, Git hooks, PR Gate), 739 detection rules, 18+ compliance frameworks.
claude-clean
j-about
CLI for selectively purging Claude Code project data — history, settings, metadata, or everything at once.
handshake-mcp-server
MCP server for scraping Handshake (joinhandshake.com) — student profiles, employers, jobs, and events
livekit-plugins-openai
Agent framework plugin for OpenAI services
fast-obsidian-mcp
Thin MCP server wrapping the Obsidian CLI
ucgen
Generate structured use case documents from natural language input.
flet-secure-storage
Secure Storage control for Flet
livekit-plugins-langchain
LangChain/LangGraph plugin for LiveKit agents
pawbench
4-dimensional LLM inference benchmark — multi-turn, multi-agent, parallel dispatch with tool calling