NVIDIA Framework for LLM Inference (transitioned to TensorRT-LLM)
Cross-referenced across 55 tracked directories
Popularity Rank: #2582
Listed In: 1 / 55
Adoption Stage: Emerging
First Seen: Mar 13, 2026
Recently added to the ecosystem
Score: 100/100
0 dependency vulnerabilities found
NVIDIA Framework for LLM Inference
A high-throughput, low-latency inference and serving framework for LLMs and VLMs.
A toolkit for deploying and serving large language models (LLMs).