Run large Mixture-of-Experts LLMs that exceed system RAM on Apple Silicon by loading only router-selected experts from SSD with MLX. Includes OpenAI/Anthropic-compatible serving for local agentic coding.
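The description above relies on one property of Mixture-of-Experts models: per token, the router activates only a few experts, so only those experts' weights need to be resident in RAM while the rest stay on SSD. Below is a minimal, hypothetical sketch of that idea — an LRU cache over lazily loaded expert weights. The names (`ExpertCache`, `moe_layer`) and the scalar stand-in for the expert FFN are illustrative assumptions, not this project's actual MLX API.

```python
from collections import OrderedDict

class ExpertCache:
    """LRU cache holding at most `capacity` expert weight blobs in RAM."""
    def __init__(self, loader, capacity=4):
        self.loader = loader          # callable: expert_id -> weights (e.g. an SSD read)
        self.capacity = capacity
        self._cache = OrderedDict()   # expert_id -> weights, in LRU order
        self.loads = 0                # count of cold loads (disk reads)

    def get(self, expert_id):
        if expert_id in self._cache:
            self._cache.move_to_end(expert_id)   # mark as most recently used
            return self._cache[expert_id]
        weights = self.loader(expert_id)         # cold path: fetch from "SSD"
        self.loads += 1
        self._cache[expert_id] = weights
        if len(self._cache) > self.capacity:
            self._cache.popitem(last=False)      # evict least recently used
        return weights

def moe_layer(token, router_topk, cache):
    """Apply only the top-k router-selected experts to a token."""
    out = 0.0
    for expert_id, gate in router_topk:
        w = cache.get(expert_id)     # loaded from disk only on a cache miss
        out += gate * (w * token)    # scalar stand-in for the expert FFN
    return out

# Usage: pretend 64 experts live on disk; only the 2 routed experts get loaded.
cache = ExpertCache(loader=lambda eid: float(eid + 1), capacity=4)
y = moe_layer(token=2.0, router_topk=[(3, 0.7), (41, 0.3)], cache=cache)
```

Repeated tokens that route to the same experts hit the cache and trigger no further disk reads, which is why hot-expert reuse makes SSD streaming tolerable in practice.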
Cross-referenced across 55 tracked directories
- Popularity Rank: #7474
- Listed In: 1 / 55
- Adoption Stage: Emerging
- Listed For: 4d
- GitHub Stars: 1
- Last Commit: 2/28/2026
Recently added to the ecosystem
- Forks: 0
- Score: 100/100
- 0 dependency vulnerabilities found
- neo4j-contrib 🐍 🏠 - Model Context Protocol with Neo4j (Run queries, Knowledge Graph Memory, Manage Neo4j Aura Instances)
- pab1it0 🐍 ☁️ - Query and analyze Prometheus, the open-source monitoring system.
- berthelius [glama](https://glama.ai/mcp/servers/berthelius/frihet-mcp) 🐍 ☁️ - AI-native business management: invoices, expenses, clients, products, and quotes. 31 tools for Claude, Cursor, Windsurf, and Cline.