PyPI (Real-time)
Status: Active
Real-time PyPI feed for AI/MCP packages
Items Listed: 7,374
Sync Frequency: N/A
Access Method: API
Items in this Directory
claude-agent-mcp
touwaeriol
MCP server bridging Claude Agent SDK for streaming conversations.
medcalc
blikblum
Collection of medical calculators
lavalink
appellation
A Lavalink client for JavaScript.
agent-bus-mcp
Alessandro Bologna <[email protected]>
Local SQLite-backed MCP bus for peer coding agents
mcp-openrouter
tsilva
MCP server providing access to 300+ AI models via OpenRouter
bioinformatics-mcp-server
small_pigpig
🧬 Bioinformatics MCP server - an intelligent biological data analysis tool designed for ModelScope
obris-mcp
Obris <[email protected]>
MCP server for Obris — bring your curated knowledge into any AI conversation
dais-sdk
BHznJNs
An LLM agent meta-framework
agent-framework-lib
Sebastian Pavel <[email protected]>, Elliott Girard <[email protected]>
A comprehensive Python framework for building and serving conversational AI agents with FastAPI
ims-mcp
Igor Solomatov
Model Context Protocol server for IMS (Instruction Management Systems)
hwpx-mcp-server
An MCP server for reading, editing, and creating Hangul Word Processor (.hwpx) files. It enables users to extract text, perform find-and-replace operations, and modify font styles through automated XML patching.
Exstruct
harumiWeb
Conversion from Excel to structured JSON (tables, shapes, charts) for LLM/RAG pipelines, and autonomous Excel reading and writing by AI agents through MCP integration.
bullmq
GitHub Actions
Queue for messages and jobs based on Redis
codebase-stats
anav5704
CLI tool for quickly getting simple statistics from your codebase
veRL
veRL is a flexible and efficient RL framework for LLMs.
mcp-clipboard
flowengine.cloud
A minimalist, production-ready Model Context Protocol server that gives Claude Desktop seamless access to your system clipboard with automatic history tracking
maxtext
A simple, performant and scalable Jax LLM!
Axolotl
Open-source framework for fine-tuning and evaluating LLMs. It simplifies the process of experimenting with different training configurations and makes it easy to reproduce and share results, supporting features like LoRA, QLoRA, DeepSpeed, PEFT, and multi-GPU setups.
TensorRT-LLM
NVIDIA framework for LLM inference
LMDeploy
A high-throughput, low-latency inference and serving framework for LLMs and VLMs