Skillful

instruct-eval

This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks.

AgentLLM Evaluation

Meta Lingua

A lean, efficient, and easy-to-hack codebase for LLM research.

AgentLLM Training Frameworks

LitGPT

20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.

AgentLLM Training Frameworks

nanotron

Minimalistic large language model 3D-parallelism training.

AgentLLM Training Frameworks

DeepSpeed

DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

AgentLLM Training Frameworks
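To give a sense of how DeepSpeed is driven, training behavior is controlled by a JSON config passed to `deepspeed.initialize` or to the `deepspeed` launcher. A minimal ZeRO stage-2 sketch (the batch size and offload settings here are illustrative values, not recommendations):

```json
{
  "train_batch_size": 32,
  "gradient_accumulation_steps": 1,
  "fp16": { "enabled": true },
  "zero_optimization": {
    "stage": 2,
    "offload_optimizer": { "device": "cpu" }
  }
}
```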

Megatron-LM

Ongoing research training transformer models at scale.

AgentLLM Training Frameworks

torchtitan

A native PyTorch library for large model training.

AgentLLM Training Frameworks

Megatron-DeepSpeed

DeepSpeed version of NVIDIA's Megatron-LM that adds additional support for several features such as MoE model training, Curriculum Learning, 3D Parallelism, and others.

AgentLLM Training Frameworks

torchtune

A native PyTorch library for LLM fine-tuning.

AgentLLM Training Frameworks

NeMo Framework

Generative AI framework built for researchers and PyTorch developers working on Large Language Models (LLMs), Multimodal Models (MMs), Automatic Speech Recognition (ASR), Text to Speech (TTS), and Computer Vision (CV) domains.

AgentLLM Training Frameworks

BMTrain

Efficient Training for Big Models.

AgentLLM Training Frameworks

Mesh TensorFlow

Mesh TensorFlow: Model Parallelism Made Easier.

AgentLLM Training Frameworks

GPT-NeoX

An implementation of model parallel autoregressive transformers on GPUs, based on the DeepSpeed library.

AgentLLM Training Frameworks

Transformer Engine

A library for accelerating Transformer model training on NVIDIA GPUs.

AgentLLM Training Frameworks

OpenRLHF

An easy-to-use, scalable, and high-performance RLHF framework (70B+ PPO full tuning, iterative DPO, LoRA, RingAttention, RFT).

AgentLLM Training Frameworks

unslothai

A framework specializing in efficient fine-tuning. Its GitHub page provides ready-to-use fine-tuning templates for various LLMs, letting you train on your own data for free on Google Colab.

AgentLLM Training Frameworks

SGLang

SGLang is a fast serving framework for large language models and vision language models.

AgentLLM Inference

TGI

A toolkit for deploying and serving Large Language Models (LLMs).

AgentLLM Inference

FasterTransformer

NVIDIA's framework for LLM inference (transitioned to TensorRT-LLM).

AgentLLM Inference

MInference

Speeds up long-context LLM inference by computing attention with approximate, dynamic sparsity, reducing pre-filling latency by up to 10x on an A100 while maintaining accuracy.

AgentLLM Inference