>_Skillful

liquid (@liquid)

Published Tools: 5
Total Stars: 0
Weekly Downloads: 0

Published Tools

5 agents across 1 category

LiquidAI: LFM2-24B-A2B

liquid

LFM2-24B-A2B is the largest model in the LFM2 family of hybrid architectures designed for efficient on-device deployment. Built as a 24B parameter Mixture-of-Experts model with only 2B active parameters per token, it delivers high-quality generation while maintaining low inference costs. The model fits within 32 GB of RAM, making it practical to run on consumer laptops and desktops without sacrificing capability.

Agent · LLM Model · 1 dir
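The "fits within 32 GB of RAM" claim is easy to sanity-check with a back-of-envelope weight-memory estimate. The sketch below assumes 8-bit quantized weights (one byte per parameter) — an assumption for illustration, not a published spec — and ignores KV-cache and activation overhead, which real runtimes add on top:

```python
# Rough weight-memory estimate for a 24B-parameter model.
# Assumption: quantized weights only; KV-cache/activations not counted.
def weight_memory_gb(total_params: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GiB for a given quantization width."""
    return total_params * bytes_per_param / 1024**3

eight_bit = weight_memory_gb(24e9, 1.0)  # 8-bit: roughly 22 GiB
four_bit = weight_memory_gb(24e9, 0.5)   # 4-bit: roughly 11 GiB
```

At 8-bit the weights alone come in around 22 GiB, leaving headroom inside 32 GB for cache and runtime overhead, which is consistent with the blurb's claim; note that only ~2B parameters are *active* per token, so compute cost is far lower than the memory footprint suggests.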

LiquidAI: LFM2.5-1.2B-Thinking (free)

liquid

LFM2.5-1.2B-Thinking is a lightweight reasoning-focused model optimized for agentic tasks, data extraction, and RAG—while still running comfortably on edge devices. It supports long context (up to 32K tokens) and is designed to provide higher-quality “thinking” responses in a small 1.2B model.

Agent · LLM Model · 1 dir

LiquidAI: LFM2.5-1.2B-Instruct (free)

liquid

LFM2.5-1.2B-Instruct is a compact, high-performance instruction-tuned model built for fast on-device AI. It delivers strong chat quality in a 1.2B parameter footprint, with efficient edge inference and broad runtime support.

Agent · LLM Model · 1 dir
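Instruct-tuned models like this are typically served behind an OpenAI-compatible chat-completions endpoint (for example via llama.cpp's server or a hosted router). A minimal sketch of building such a request body — the model id string here is a hypothetical placeholder; substitute whatever id your provider actually lists:

```python
import json

# Hypothetical model id for illustration; check your provider's listing.
MODEL_ID = "liquid/lfm2.5-1.2b-instruct"

def build_chat_request(prompt: str, max_tokens: int = 256) -> str:
    """Build an OpenAI-style chat-completions request body as a JSON string."""
    payload = {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return json.dumps(payload)

body = build_chat_request("Summarize this document in one sentence.")
```

The resulting JSON would be POSTed to the provider's `/v1/chat/completions` route; only the payload construction is shown here since endpoint URLs and auth vary by runtime.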

LiquidAI: LFM2-8B-A1B

liquid

LFM2-8B-A1B is an efficient on-device Mixture-of-Experts (MoE) model from Liquid AI’s LFM2 family, built for fast, high-quality inference on edge hardware. It uses 8.3B total parameters with only ~1.5B active per token, delivering strong performance while keeping compute and memory usage low—making it ideal for phones, tablets, and laptops.

Agent · LLM Model · 1 dir

LiquidAI: LFM2-2.6B

liquid

LFM2 is a new generation of hybrid models from Liquid AI, designed specifically for edge AI and on-device deployment. It sets a new standard for quality, speed, and memory efficiency.

Agent · LLM Model · 1 dir