Dataset: Dolma3 Pool
by allenai (hf-dataset--allenai--dolma3_pool)
Nexus Index: 40.1 (Top 100%)
S: Semantic 50
A: Authority 0
P: Popularity 60
R: Recency 73
Q: Quality 30
Dataset Information Summary
Entity Passport
Registry ID hf-dataset--allenai--dolma3_pool
License odc-by
Provider huggingface

Cite this dataset

Academic & Research Attribution

BibTeX
@misc{hf_dataset__allenai__dolma3_pool,
  author = {allenai},
  title = {Dolma3 Pool Dataset},
  year = {2026},
  howpublished = {\url{https://huggingface.co/datasets/allenai/dolma3_pool}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
allenai. (2026). Dolma3 Pool [Dataset]. Free2AITools. https://huggingface.co/datasets/allenai/dolma3_pool


Downloads: 145,179

πŸ‘οΈ Data Preview

πŸ“Š

Row-level preview not available for this dataset.

Schema structure is shown in the Field Logic panel when available.

πŸ”— Explore Full Dataset β†—

🧬 Field Logic

🧬

Schema not yet indexed for this dataset.

Dataset Specification

⚠️ IMPORTANT NOTICE ⚠️

This is the Dolma 3 pool, pre–quality upsampling and mixing. If you are interested in the data used to train Olmo 3 7B and Olmo 3 32B, visit allenai/dolma3_mix-6T-1025.



Dolma 3 Pool

The Dolma 3 pool is a dataset of over 9 trillion tokens drawn from a diverse mix of web content, academic publications, code, and more. For detailed documentation on Dolma 3 processing and data, please see our Dolma 3 GitHub repository. For more information on Dolma in general, please see our original release here.

The Dolma 3 pool contains documents from Common Crawl (web) and olmOCR Science PDFs only. To access documents from the remaining sources in this pool, follow the source links below:

Dataset Sources

This dataset contains the full pool of documents considered to train the first stage of Olmo 3 7B.

Source                 Type                9T Pool Tokens  9T Pool Docs
Common Crawl           Web pages           8.14T           9.67B
olmOCR Science PDFs    Academic documents  972B            101M
StackEdu (Rebalanced)  GitHub code         137B            167M
arXiv                  Papers with LaTeX   21.4B           3.95M
FineMath 3+            Math web pages      34.1B           21.4M
Wikipedia & Wikibooks  Encyclopedic        3.69B           6.67M
Total                                      9.31T           9.97B
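As a quick sanity check, the per-source figures add up to the stated totals once rounded; a short script verifying the arithmetic:

```python
# Per-source counts from the table above
# (tokens in trillions, documents in billions).
tokens_T = [8.14, 0.972, 0.137, 0.0214, 0.0341, 0.00369]
docs_B = [9.67, 0.101, 0.167, 0.00395, 0.0214, 0.00667]

total_tokens = round(sum(tokens_T), 2)
total_docs = round(sum(docs_B), 2)
print(total_tokens, total_docs)  # 9.31 9.97
```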

Downloading Dolma 3

You can download and load this data using HuggingFace's datasets library with the following code:

python
from datasets import load_dataset

dataset = load_dataset("allenai/dolma3_pool", split="train")

You can also load a specific subset of the data. In this repository, Common Crawl data folders are formatted as common_crawl-topic-vigintile; similarly, olmOCR PDF data folders are formatted as olmocr_science_pdfs-topic. For example:

python
from datasets import load_dataset
dataset = load_dataset("allenai/dolma3_pool", 
                        data_files="data/olmocr_science_pdfs-*/*.jsonl.zst",
                        split="train")
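The glob passed to data_files selects folders by the naming convention above; a small stdlib sketch of which paths such a pattern matches (the folder and shard names here are hypothetical examples of the convention, not real paths in the repository):

```python
from fnmatch import fnmatch

# Hypothetical shard paths following the documented naming conventions:
# common_crawl-<topic>-<vigintile> and olmocr_science_pdfs-<topic>.
paths = [
    "data/common_crawl-science-05/shard-0000.jsonl.zst",
    "data/olmocr_science_pdfs-physics/shard-0000.jsonl.zst",
    "data/olmocr_science_pdfs-biology/shard-0001.jsonl.zst",
]

pattern = "data/olmocr_science_pdfs-*/*.jsonl.zst"
selected = [p for p in paths if fnmatch(p, pattern)]
print(selected)  # the two olmocr_science_pdfs-* paths
```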

Note: You can iterate over the dataset directly without downloading it in full. Simply set streaming=True in the command above.
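Streaming works because JSONL shards can be consumed one record at a time rather than materialized whole; a minimal stdlib sketch of that lazy-iteration pattern over an in-memory shard (the record contents are hypothetical stand-ins for real documents):

```python
import io
import json

def iter_jsonl(fp):
    """Yield one parsed record per line without loading the whole file."""
    for line in fp:
        line = line.strip()
        if line:
            yield json.loads(line)

# Hypothetical two-document shard standing in for one *.jsonl.zst file
# (real shards are zstd-compressed; decompress before iterating).
shard = io.StringIO('{"id": "doc-1", "text": "hello"}\n'
                    '{"id": "doc-2", "text": "world"}\n')

ids = [record["id"] for record in iter_jsonl(shard)]
print(ids)  # ['doc-1', 'doc-2']
```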

Licensing Information

Dolma 3 is licensed under the Open Data Commons Attribution License v1.0 (ODC-By). It is intended for research and educational use. For more information, please see our Responsible Use Guidelines.

Citation

text
@misc{olmo2025olmo3,
title={Olmo 3},
author={Team Olmo and Allyson Ettinger and Amanda Bertsch and Bailey Kuehl and David Graham and David Heineman and Dirk Groeneveld and Faeze Brahman and Finbarr Timbers and Hamish Ivison and Jacob Morrison and Jake Poznanski and Kyle Lo and Luca Soldaini and Matt Jordan and Mayee Chen and Michael Noukhovitch and Nathan Lambert and Pete Walsh and Pradeep Dasigi and Robert Berry and Saumya Malik and Saurabh Shah and Scott Geng and Shane Arora and Shashank Gupta and Taira Anderson and Teng Xiao and Tyler Murray and Tyler Romero and Victoria Graf and Akari Asai and Akshita Bhagia and Alexander Wettig and Alisa Liu and Aman Rangapur and Chloe Anastasiades and Costa Huang and Dustin Schwenk and Harsh Trivedi and Ian Magnusson and Jaron Lochner and Jiacheng Liu and Lester James V. Miranda and Maarten Sap and Malia Morgan and Michael Schmitz and Michal Guerquin and Michael Wilson and Regan Huff and Ronan Le Bras and Rui Xin and Rulin Shao and Sam Skjonsberg and Shannon Zejiang Shen and Shuyue Stella Li and Tucker Wilde and Valentina Pyatkin and Will Merrill and Yapei Chang and Yuling Gu and Zhiyuan Zeng and Ashish Sabharwal and Luke Zettlemoyer and Pang Wei Koh and Ali Farhadi and Noah A. Smith and Hannaneh Hajishirzi},
year={2025},
eprint={2512.13961},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2512.13961},
}

Social Proof

HuggingFace Hub: 145.2K downloads
Daily sync (03:00 UTC)

AI Summary: Based on Hugging Face metadata. Not a recommendation.

πŸ›‘οΈ Dataset Transparency Report

Technical metadata sourced from upstream repositories.

Open Metadata

Identity & Source

id
hf-dataset--allenai--dolma3_pool
slug
allenai--dolma3_pool
source
huggingface
author
allenai
license
odc-by
tags
task_categories:text-generation, language:en, license:odc-by, size_categories:10m<n<100m, format:json, modality:text, library:datasets, library:dask, library:mlcroissant, arxiv:2512.13961, region:us

βš™οΈ Technical Specs

architecture
null
params billions
null
context length
null
pipeline tag

Engagement & Metrics

downloads
145,179
stars
34
forks
0

Data indexed from public sources. Updated daily.