>_Skillful

nvidia (@nvidia)

11 Published Tools · 18 Total Stars · 0 Weekly Downloads · 100/100 Avg Security

Published Tools

2 MCP Servers · 3 Skills · 20 Agents across 8 categories

voice-agent-examples

nvidia

AI Space: nvidia/voice-agent-examples

Agent · ai-space
182 dirs

nemoguardrails

NVIDIA

NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.

Skill · uncategorised
1 dir
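
As a minimal sketch of what a guardrails setup might look like: NeMo Guardrails reads a configuration directory containing a `config.yml` (which selects the backing LLM) alongside Colang flow definitions. The engine and model values below are illustrative placeholders, not a recommended configuration.

```yaml
# config/config.yml — selects the LLM that NeMo Guardrails wraps.
# The engine/model pair shown here is a placeholder; any provider
# supported by the toolkit can be used instead.
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo
```

At runtime the directory is typically loaded with `RailsConfig.from_path(...)` and wrapped in an `LLMRails` instance, which then mediates generation calls through the defined rails.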

nvidia-profbench

NVIDIA Corporation

Professional domain benchmark for evaluating LLMs on Physics PhD, Chemistry PhD, Finance MBA, and Consulting MBA tasks

Skill · ai-ml
1 dir

nvidia-nat-ragas

NVIDIA Corporation

Subpackage for RAGAS evaluators in NVIDIA NeMo Agent Toolkit

Agent · ai-agents
1 dir

nvidia-nat-rag

NVIDIA Corporation

Subpackage for NVIDIA RAG in NeMo Agent Toolkit

Agent · ai-agents
1 dir

nvidia-nat-fastmcp

NVIDIA Corporation

Subpackage for FastMCP server integration in NeMo Agent Toolkit

MCP Server · mcp
1 dir

nvidia-nat-crewai

NVIDIA Corporation

Subpackage for CrewAI integration in NeMo Agent Toolkit

Agent · ai-agents
1 dir

nvidia-nat-langchain

NVIDIA Corporation

Subpackage for LangChain/LangGraph integration in NeMo Agent Toolkit

Agent · ai-agents
1 dir

nvidia-nat-mcp

NVIDIA Corporation

Subpackage for MCP client integration in NeMo Agent Toolkit

MCP Server · mcp
1 dir

nvidia-nat-ragaai

NVIDIA Corporation

Subpackage for RagaAI Catalyst integration in NeMo Agent Toolkit

Agent · ai-agents
1 dir

NVIDIA: Nemotron 3 Super (free)

nvidia

NVIDIA Nemotron 3 Super is a 120B-parameter open hybrid MoE model, activating just 12B parameters for maximum compute efficiency and accuracy in complex multi-agent applications. Built on a hybrid Mamba-Transformer Mixture-of-Experts architecture with multi-token prediction (MTP), it delivers over 50% higher token-generation throughput than leading open models. The model features a 1M-token context window for long-term agent coherence, cross-document reasoning, and multi-step task planning. Latent…

Agent · LLM Model
1 dir

NVIDIA: Nemotron 3 Nano 30B A3B (free)

nvidia

NVIDIA Nemotron 3 Nano 30B A3B is a small MoE language model offering high compute efficiency and accuracy for developers building specialized agentic AI systems. The model is fully open, with open weights, datasets, and recipes, so developers can easily customize, optimize, and deploy it on their own infrastructure for maximum privacy and security.

Agent · LLM Model
1 dir

NVIDIA: Nemotron 3 Nano 30B A3B

nvidia

NVIDIA Nemotron 3 Nano 30B A3B is a small MoE language model offering high compute efficiency and accuracy for developers building specialized agentic AI systems. The model is fully open, with open weights, datasets, and recipes, so developers can easily customize, optimize, and deploy it on their own infrastructure for maximum privacy and security.

Agent · LLM Model
1 dir

NVIDIA: Nemotron Nano 12B 2 VL (free)

nvidia

NVIDIA Nemotron Nano 2 VL is a 12-billion-parameter open multimodal reasoning model designed for video understanding and document intelligence. It introduces a hybrid Transformer-Mamba architecture, combining transformer-level accuracy with Mamba’s memory-efficient sequence modeling for significantly higher throughput and lower latency. The model supports inputs of text and multi-image documents, producing natural-language outputs. It is trained on high-quality NVIDIA-curated synthetic datasets…

Agent · LLM Model
1 dir

NVIDIA: Nemotron Nano 12B 2 VL

nvidia

NVIDIA Nemotron Nano 2 VL is a 12-billion-parameter open multimodal reasoning model designed for video understanding and document intelligence. It introduces a hybrid Transformer-Mamba architecture, combining transformer-level accuracy with Mamba’s memory-efficient sequence modeling for significantly higher throughput and lower latency. The model supports inputs of text and multi-image documents, producing natural-language outputs. It is trained on high-quality NVIDIA-curated synthetic datasets…

Agent · LLM Model
1 dir

NVIDIA: Llama 3.3 Nemotron Super 49B V1.5

nvidia

Llama-3.3-Nemotron-Super-49B-v1.5 is a 49B-parameter, English-centric reasoning/chat model derived from Meta’s Llama-3.3-70B-Instruct with a 128K context. It is post-trained for agentic workflows (RAG, tool calling) via SFT across math, code, science, and multi-turn chat, followed by multiple RL stages: Reward-aware Preference Optimization (RPO) for alignment, RL with Verifiable Rewards (RLVR) for step-wise reasoning, and iterative DPO to refine tool-use behavior. A distillation-driven Neural Arc…

Agent · LLM Model
1 dir

NVIDIA: Nemotron Nano 9B V2 (free)

nvidia

NVIDIA-Nemotron-Nano-9B-v2 is a large language model (LLM) trained from scratch by NVIDIA, and designed as a unified model for both reasoning and non-reasoning tasks. It responds to user queries and tasks by first generating a reasoning trace and then concluding with a final response. The model's reasoning capabilities can be controlled via a system prompt. If the user prefers the model to provide its final answer without intermediate reasoning traces, it can be configured to do so.

Agent · LLM Model
1 dir

NVIDIA: Nemotron Nano 9B V2

nvidia

NVIDIA-Nemotron-Nano-9B-v2 is a large language model (LLM) trained from scratch by NVIDIA, and designed as a unified model for both reasoning and non-reasoning tasks. It responds to user queries and tasks by first generating a reasoning trace and then concluding with a final response. The model's reasoning capabilities can be controlled via a system prompt. If the user prefers the model to provide its final answer without intermediate reasoning traces, it can be configured to do so.

Agent · LLM Model
1 dir
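
The system-prompt reasoning toggle described above can be sketched as an OpenAI-compatible chat request. This is a minimal illustration only: the control phrases (`/think`, `/no_think`), the endpoint shape, and the model identifier are assumptions for the sake of the sketch, not confirmed values from NVIDIA's model card.

```python
import json

def build_chat_request(question: str, show_reasoning: bool) -> dict:
    """Build a chat-completions payload whose system prompt switches the
    model between reasoning-trace and direct-answer modes.

    NOTE: the "/think" / "/no_think" control phrases and the model name
    are hypothetical placeholders used only to illustrate the pattern of
    steering reasoning behavior via the system role.
    """
    system_prompt = "/think" if show_reasoning else "/no_think"
    return {
        "model": "nvidia/nemotron-nano-9b-v2",
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    }

# Request a direct answer with no intermediate reasoning trace.
payload = build_chat_request("What is 17 * 24?", show_reasoning=False)
print(json.dumps(payload, indent=2))
```

The same payload would then be POSTed to whatever OpenAI-compatible endpoint serves the model; only the system message changes between the two modes.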

NVIDIA: Llama 3.1 Nemotron 70B Instruct

nvidia

NVIDIA's Llama 3.1 Nemotron 70B is a language model designed for generating precise and useful responses. Leveraging the [Llama 3.1 70B](/models/meta-llama/llama-3.1-70b-instruct) architecture and Reinforcement Learning from Human Feedback (RLHF), it excels in automatic alignment benchmarks. This model is tailored for applications requiring high accuracy in helpfulness and response generation, suitable for diverse user queries across multiple domains. Usage of this model is subject to [Meta's Accep…

Agent · LLM Model
1 dir

FasterTransformer

NVIDIA Framework for LLM Inference (transitioned to TensorRT-LLM)

Agent · LLM Inference
1 dir

Megatron-LM

Ongoing research training transformer models at scale.

Agent · LLM Training Frameworks
1 dir

NeMo Framework

Generative AI framework built for researchers and PyTorch developers working on Large Language Models (LLMs), Multimodal Models (MMs), Automatic Speech Recognition (ASR), Text to Speech (TTS), and Computer Vision (CV) domains.

Agent · LLM Training Frameworks
1 dir

TensorRT-LLM

NVIDIA Framework for LLM Inference

Agent · LLM Inference
1 dir

Transformer Engine

A library for accelerating Transformer model training on NVIDIA GPUs.

Agent · LLM Training Frameworks
1 dir

nvidia-eval-factory-garak

nv052193, Mads Kongsbak, Tianhao Li, Phyllis Poh, Razvan Dinu, Zander Mackie, Greg Stephens, Ahsan Ayub, Jonathan Liberman, Gustav Fredrikson, Oh Tien Cheng, Brain John, Naman Mishra, Soumili Nandi, Arjun Krishna, Mihailo Milenkovic, Kai Greshake, Martin Borup-Larsen, Emmanuel Ferdman, Eric Therond, Zoe Nolan, Harsh Raj, Shine-afk, Rafael Sandroni, Eric Hacker, Blessed Uyo, Ikko Eltociear Ashimine, iamnotcj, Dwight Temple, Shane Rosse, Masaya Ogushi, Viktor T. Zetterberg, Erwan Roussel, Matthew Rowe, Aishwarya Padmakumar, Marco Rosa, Ian Chu

garak (LLM vulnerability scanner) - packaged by NVIDIA Eval Factory

Skill · ai-ml
1 dir