arcee-ai
@arcee-ai
7 Published Tools · 0 Total Stars · 0 Weekly Downloads

Published Tools
7 Agents across 1 category

Trinity-Large-Preview is a frontier-scale open-weight language model from Arcee, built as a 400B-parameter sparse Mixture-of-Experts with 13B active parameters per token using 4-of-256 expert routing. It excels at creative writing, storytelling, role-play, chat scenarios, and real-time voice assistance, areas where typical reasoning models fall short. But we're also introducing some of our newer agentic capabilities: it was trained to navigate well in agent harnesses like OpenCode, Cline, …
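To make the "4-of-256 routing" concrete, here is a minimal top-k gating sketch in Python. The hidden size, router projection, and softmax-over-selected-experts renormalization are illustrative assumptions, not Arcee's actual implementation; the only numbers taken from the description are 256 experts with 4 active per token.

```python
import numpy as np

def top_k_routing(hidden: np.ndarray, gate_w: np.ndarray, k: int = 4) -> dict:
    """Toy top-k expert gating: score every expert, keep the k best,
    and renormalize their weights. Illustrative sketch only."""
    logits = hidden @ gate_w                 # (num_experts,) router scores
    top_idx = np.argsort(logits)[-k:]        # indices of the k highest-scoring experts
    top_logits = logits[top_idx]
    weights = np.exp(top_logits - top_logits.max())
    weights /= weights.sum()                 # softmax over the selected experts only
    return {"experts": top_idx.tolist(), "weights": weights.tolist()}

# 256 experts, 4 active per token -- mirrors the routing ratio described above.
rng = np.random.default_rng(0)
hidden_state = rng.standard_normal(1024)     # hypothetical per-token hidden state
gate = rng.standard_normal((1024, 256))      # hypothetical router projection
print(top_k_routing(hidden_state, gate, k=4))
```

Because only the 4 selected experts' feed-forward weights run per token, the active parameter count stays far below the total: that is how a 400B model serves tokens at roughly 13B-parameter cost.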
Trinity Mini is a 26B-parameter (3B active) sparse mixture-of-experts language model featuring 128 experts with 8 active per token. Engineered for efficient reasoning over long contexts (131k) with robust function calling and multi-step agent workflows.
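Since the description highlights function calling, here is a hedged sketch of a tool-calling request against an OpenAI-compatible endpoint. The base URL, model id "trinity-mini", and the get_weather tool are assumptions for illustration; consult the actual serving docs for real values.

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")  # hypothetical local server

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",               # illustrative tool, not part of the model
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="trinity-mini",                    # assumed model id
    messages=[{"role": "user", "content": "What's the weather in Oslo?"}],
    tools=tools,
)
print(resp.choices[0].message.tool_calls)    # model may emit a structured get_weather call
```

In a multi-step agent loop, the returned tool call would be executed and its result appended as a "tool" message before the next completion request.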
Spotlight is a 7-billion-parameter vision-language model derived from Qwen 2.5-VL and fine-tuned by Arcee AI for tight image-text grounding tasks. It offers a 32k-token context window, enabling rich multimodal conversations that combine lengthy documents with one or more images. Training emphasized fast inference on consumer GPUs while retaining strong captioning, visual-question-answering, and diagram-analysis accuracy. As a result, Spotlight slots neatly into agent workflows where screenshots …
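As a sketch of how an image-plus-text turn might look, the snippet below sends a local screenshot through an OpenAI-compatible multimodal chat request. The endpoint, the model id "spotlight", and the file name are assumptions for illustration.

```python
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")  # hypothetical server

with open("screenshot.png", "rb") as f:      # any local image
    image_b64 = base64.b64encode(f.read()).decode()

resp = client.chat.completions.create(
    model="spotlight",                       # assumed model id
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What does this diagram show?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(resp.choices[0].message.content)
```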
Maestro Reasoning is Arcee's flagship analysis model: a 32B-parameter derivative of Qwen 2.5-32B tuned with DPO and chain-of-thought RL for step-by-step logic. Compared to the earlier 7B preview, the production 32B release widens the context window to 128k tokens and doubles the pass rate on MATH and GSM-8K, while also lifting code-completion accuracy. Its instruction style encourages structured "thought → answer" traces that can be parsed or hidden according to user preference. That transparency …
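The description says the "thought → answer" traces can be parsed or hidden; a minimal parsing sketch follows, assuming the reasoning is delimited by <think>...</think> tags. That tag convention is an assumption for illustration, not a documented format; check the model card for the real delimiters.

```python
import re

def split_trace(raw: str) -> tuple[str, str]:
    """Split a response into (thought, answer), assuming reasoning is
    wrapped in <think>...</think>. Tag format is an assumption."""
    match = re.search(r"<think>(.*?)</think>", raw, flags=re.DOTALL)
    thought = match.group(1).strip() if match else ""
    answer = re.sub(r"<think>.*?</think>", "", raw, flags=re.DOTALL).strip()
    return thought, answer

raw = "<think>12 * 9 = 108, then add 4.</think>The result is 112."
thought, answer = split_trace(raw)
print("hidden reasoning:", thought)
print("shown to user:", answer)
```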
Virtuoso-Large is Arcee's top-tier general-purpose LLM at 72B parameters, tuned to tackle cross-domain reasoning, creative writing, and enterprise QA. Unlike many 70B peers, it retains the 128k context inherited from Qwen 2.5, letting it ingest books, codebases, or financial filings wholesale. Training blended DeepSeek R1 distillation, multi-epoch supervised fine-tuning, and a final DPO/RLHF alignment stage, yielding strong performance on BIG-Bench-Hard, GSM-8K, and long-context Needle-In-Haystack …
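Since the description cites Needle-In-Haystack performance, here is a toy harness for that kind of test: bury one fact in a long distractor document and ask the model to retrieve it. The endpoint and model id "virtuoso-large" are assumptions, and the filler length is far below the 128k limit for brevity.

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")  # hypothetical server

filler = "The sky was an unremarkable shade of grey that day. " * 2000  # long distractor text
needle = "The vault code is 7406."                                      # fact to retrieve
haystack = filler + needle + " " + filler

resp = client.chat.completions.create(
    model="virtuoso-large",                  # assumed model id
    messages=[{"role": "user",
               "content": haystack + "\n\nWhat is the vault code?"}],
)
print(resp.choices[0].message.content)       # expect the model to surface "7406"
```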
Coder-Large is a 32B-parameter offspring of Qwen 2.5-Instruct that has been further trained on permissively licensed GitHub, CodeSearchNet, and synthetic bug-fix corpora. It supports a 32k context window, enabling multi-file refactoring or long diff review in a single call, and understands 30-plus programming languages with special attention to TypeScript, Go, and Terraform. Internal benchmarks show 5-8 pt gains over CodeLlama-34B-Python on HumanEval and competitive BugFix scores thanks to a rei…
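To show what "multi-file refactoring in a single call" could look like in practice, the sketch below packs several source files into one prompt. The file names, endpoint, and model id "coder-large" are illustrative assumptions; the point is simply that a 32k window can hold a small module at once.

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")  # hypothetical server

# Pack several source files into one prompt; a 32k window can hold a small module.
files = ["handlers.go", "routes.go", "middleware.go"]  # illustrative paths
bundle = "\n\n".join(f"// file: {p}\n{Path(p).read_text()}" for p in files)

resp = client.chat.completions.create(
    model="coder-large",                     # assumed model id
    messages=[{"role": "user",
               "content": "Rename the Handler interface to Endpoint across "
                          "these files and return the full updated files:\n\n" + bundle}],
)
print(resp.choices[0].message.content)
```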