
What's in the RedPajama-Data-1T LLM training set


RedPajama is “a project to create leading open-source models” that starts by reproducing the LLaMA training dataset of over 1.2 trillion tokens. It’s a collaboration between Together, Ontocord.ai, ETH DS3Lab, Stanford CRFM, …
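The 1.2 trillion figure is the sum of the dataset’s seven slices. A minimal sketch tallying the per-slice token counts, using the approximate figures from the RedPajama announcement (treat the exact numbers as approximate):

```python
# Approximate token counts (in billions) for each slice of
# RedPajama-Data-1T, per the project's announcement.
SLICE_TOKENS_BILLIONS = {
    "CommonCrawl": 878,
    "C4": 175,
    "GitHub": 59,
    "arXiv": 28,
    "Books": 26,
    "Wikipedia": 24,
    "StackExchange": 20,
}

# Sum the slices to recover the headline dataset size.
total = sum(SLICE_TOKENS_BILLIONS.values())
print(f"Total: {total} billion tokens (~{total / 1000:.2f} trillion)")
```

Filtered CommonCrawl dominates the mix, with C4 a distant second; the curated sources (code, papers, books, reference text) make up the remainder.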
