📊 Dataset: AlgoTune
by oripress (hf-dataset--oripress--algotune)
Nexus Index: 34.8 (Top 100%)
- Semantic (S): 50
- Authority (A): 0
- Popularity (P): 53
- Recency (R): 66
- Quality (Q): 30
Dataset Information Summary

Entity Passport
- Registry ID: hf-dataset--oripress--algotune
- License: MIT
- Provider: huggingface
📜 Cite this dataset

Academic & Research Attribution

BibTeX
@misc{hf_dataset__oripress__algotune,
  author = {oripress},
  title = {AlgoTune Dataset},
  year = {2026},
  howpublished = {\url{https://huggingface.co/datasets/oripress/algotune}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
oripress. (2026). AlgoTune [Dataset]. Free2AITools. https://huggingface.co/datasets/oripress/algotune

đŸ”Ŧ Technical Deep Dive

Full Specifications

âš–ī¸ Nexus Index V2.0

34.8
TOP 100% SYSTEM IMPACT
Semantic (S) 50
Authority (A) 0
Popularity (P) 53
Recency (R) 66
Quality (Q) 30

đŸ’Ŧ Index Insight

FNI V2.0 for AlgoTune: Semantic (S:50), Authority (A:0), Popularity (P:53), Recency (R:66), Quality (Q:30).

Downloads: 40,874

đŸ‘ī¸ Data Preview

📊

Row-level preview not available for this dataset.

Schema structure is shown in the Field Logic panel when available.

🔗 Explore Full Dataset ↗


Dataset Specification

AlgoTune banner

Website  |   Paper   |   Code

How good are language models at coming up with new algorithms? To answer this question, we built AlgoTune, a benchmark comprising 154 widely used math, physics, and computer science functions. For each function, the goal is to write code that produces the same outputs as the original function while running faster. Alongside the benchmark, we also provide an agent, AlgoTuner, which lets language models easily optimize code.
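To make the task format concrete, here is a minimal, self-contained sketch of the kind of optimization AlgoTune asks for: a reference implementation, a faster drop-in replacement that must match its outputs exactly, and a timing comparison. The function names and task are illustrative only and are not drawn from the benchmark itself.

```python
import time

def reference_cumsum(xs):
    """Baseline solver: quadratic-time cumulative sum."""
    return [sum(xs[: i + 1]) for i in range(len(xs))]

def fast_cumsum(xs):
    """Candidate solver: linear-time cumulative sum, same outputs."""
    out, total = [], 0
    for x in xs:
        total += x
        out.append(total)
    return out

data = list(range(2000))
# Correctness gate: the candidate must reproduce the reference exactly.
assert fast_cumsum(data) == reference_cumsum(data)

# Time both implementations; the benchmark's scoring is based on speedup.
t0 = time.perf_counter(); reference_cumsum(data); t_ref = time.perf_counter() - t0
t0 = time.perf_counter(); fast_cumsum(data); t_fast = time.perf_counter() - t0
print(f"speedup: {t_ref / t_fast:.1f}x")
```

AlgoTune's actual tasks are scored against curated reference functions with their own timing harness; this sketch only mirrors the output-equivalence-plus-speed contract described above.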



AlgoTune can now be easily run on AWS with just an OpenRouter API key and AWS credentials.
Try it out here.

For more information on running AlgoTuner on SLURM or a single machine, please refer to
the code.

Citation

If you found this work helpful, please consider citing it using the following:

BibTeX
@article{press2025algotune,
  title={AlgoTune: Can Language Models Speed Up General-Purpose Numerical Programs?},
  author={Press, Ori and Amos, Brandon and Zhao, Haoyu and Wu, Yikai and Ainsworth, Samuel K. and Krupke, Dominik and Kidger, Patrick and Sajed, Touqir and Stellato, Bartolomeo and Park, Jisun and Bosch, Nathanael and Meril, Eli and Steppi, Albert and Zharmagambetov, Arman and Zhang, Fangzhao and Perez-Pineiro, David and Mercurio, Alberto and Zhan, Ni and Abramovich, Talor and Lieret, Kilian and Zhang, Hanlin and Huang, Shirley and Bethge, Matthias and Press, Ofir},
  journal={arXiv preprint arXiv:2507.15887},
  year={2025},
  doi={10.48550/arXiv.2507.15887},
  url={https://arxiv.org/abs/2507.15887}
}

📊 Structured Schema (Zero-Fabrication)

Feature Key: Data Type
- k: int64
- seed: int64
- problem: unknown
- median_oracle_time_ms: int64

Estimated Rows: 200
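A single row shaped like the schema above can be sketched as a plain Python dict. The values below are illustrative, not taken from the dataset, and the `problem` field's type is not indexed upstream, so text is assumed:

```python
# One hypothetical row matching the indexed schema (values are made up):
row = {
    "k": 5,                          # int64
    "seed": 42,                      # int64
    "problem": "example task spec",  # type unknown upstream; text assumed
    "median_oracle_time_ms": 118,    # int64
}

# With the `datasets` library installed, rows of this shape could be
# fetched from the Hub (requires network access, so left commented out;
# the split name is an assumption):
#   from datasets import load_dataset
#   ds = load_dataset("oripress/algotune", split="train")

print(sorted(row))
```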

Social Proof

HuggingFace Hub
40.9K Downloads
🔄 Daily sync (03:00 UTC)

AI Summary: Based on Hugging Face metadata. Not a recommendation.


đŸ›Ąī¸ Dataset Transparency Report

Technical metadata sourced from upstream repositories.

Open Metadata

🆔 Identity & Source

- id: hf-dataset--oripress--algotune
- slug: oripress--algotune
- source: huggingface
- author: oripress
- license: MIT
- tags: license:mit, size_categories:n<1k, format:json, modality:tabular, modality:text, library:datasets, library:dask, library:polars, library:mlcroissant, arxiv:2507.15887, region:us

âš™ī¸ Technical Specs

architecture
null
params billions
null
context length
null
pipeline tag

📊 Engagement & Metrics

- downloads: 40,874
- stars: 1
- forks: 0

Data indexed from public sources. Updated daily.