πŸ“„ Paper

Low Dimensional Landscape Hypothesis is True: DNNs can be Trained in Tiny Subspaces

by Tao Li et al. Β· ID: arxiv-paper--2103.11154

Deep neural networks (DNNs) usually contain massive numbers of parameters, but they are redundant enough that it is conjectured they could be trained in low-dimensional subspaces. In this paper, we propose Dynamic Linear Dimensionality Reduction (DLDR), based on the low-dimensional properties of the training ...

High Impact Β· Citations
Year: 2021
Venue: arXiv
FNI Rank: Top 19%
Paper Information Summary
Entity Passport
Registry ID: arxiv-paper--2103.11154
Provider: arxiv
πŸ“œ Cite this paper

Academic & Research Attribution

BibTeX
@misc{arxiv_paper__2103.11154,
  author = {Tao Li and Lei Tan and Qinghua Tao and Yipeng Liu and Xiaolin Huang},
  title = {Low Dimensional Landscape Hypothesis is True: DNNs can be Trained in Tiny Subspaces},
  year = {2021},
  howpublished = {\url{https://arxiv.org/abs/2103.11154v2}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
Li, T., Tan, L., Tao, Q., Liu, Y., & Huang, X. (2021). Low Dimensional Landscape Hypothesis is True: DNNs can be Trained in Tiny Subspaces [Paper]. Free2AITools. https://arxiv.org/abs/2103.11154v2

πŸ”¬ Technical Deep Dive

βš–οΈ Free2AI Nexus Index

Methodology β†’ πŸ“˜ What is FNI?
Overall score: 0.0 (Top 19% Overall Impact)
πŸ”₯ Popularity (P): 0
πŸš€ Velocity (V): 0
πŸ›‘οΈ Credibility (C): 0
πŸ”§ Utility (U): 0
Nexus Verified Data

πŸ’¬ Why this score?

The Nexus Index for "Low Dimensional Landscape Hypothesis is True: DNNs can be Trained in Tiny Subspaces" aggregates Popularity (P: 0), Velocity (V: 0), and Credibility (C: 0). The Utility score (U: 0) reflects deployment readiness, context efficiency, and structural reliability within the Nexus ecosystem.
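
As a rough illustration of how such a composite score could be combined, here is a minimal Python sketch. The nexus_index function and its equal weighting are purely hypothetical; the actual formula is described only in the linked FNI methodology.

def nexus_index(p, v, c, u, weights=(0.25, 0.25, 0.25, 0.25)):
    # Hypothetical weighted mean of Popularity, Velocity, Credibility,
    # and Utility; the real FNI weighting is documented only in the
    # linked methodology page.
    return sum(w * x for w, x in zip(weights, (p, v, c, u)))

print(nexus_index(0, 0, 0, 0))  # -> 0.0, matching the score shown above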

Data Verified πŸ• Last Updated: Not calculated
Free2AI Nexus Index | Fair Β· Transparent Β· Explainable | Full Methodology

πŸ“ Executive Summary

"Deep neural networks (DNNs) usually contain massive parameters, but there is redundancy such that it is guessed that the DNNs could be trained in low-dimensional subspaces. In this paper, we propose a Dynamic Linear Dimensionality Reduction (DLDR) based on low-dimensional properties of the training trajectory. The reduction is efficient, which is supported by comprehensive experiments: optimization in 40 dimensional spaces can achieve comparable performance as regular training over thousands ..."

❝ Cite Node

@article{Li2021Low,
  title={Low Dimensional Landscape Hypothesis is True: DNNs can be Trained in Tiny Subspaces},
  author={Tao Li and Lei Tan and Qinghua Tao and Yipeng Liu and Xiaolin Huang},
  journal={arXiv preprint arXiv:2103.11154},
  year={2021}
}

πŸ‘₯ Collaborating Minds

Tao Li Β· Lei Tan Β· Qinghua Tao Β· Yipeng Liu Β· Xiaolin Huang

Abstract & Analysis

Deep neural networks (DNNs) usually contain massive numbers of parameters, but they are redundant enough that it is conjectured they could be trained in low-dimensional subspaces. In this paper, we propose Dynamic Linear Dimensionality Reduction (DLDR), which exploits the low-dimensional structure of the training trajectory. The reduction is efficient, as comprehensive experiments confirm: optimization in a 40-dimensional space can achieve performance comparable to regular training over thousands or even millions of parameters. Because only a few optimization variables remain, we develop a quasi-Newton-based algorithm and also obtain robustness against label noise; these two follow-up experiments demonstrate the advantages of finding low-dimensional subspaces.
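
To make the idea concrete, here is a minimal sketch of trajectory-based subspace training in the spirit of DLDR, applied to a toy logistic-regression problem. Everything in it is assumed for illustration (the toy data, the loss_grad helper, the snapshot interval, the subspace size d); it is not the paper's exact procedure. Plain gradient descent runs first while weight snapshots are recorded, PCA over the trajectory yields an orthonormal basis P, and optimization then continues over only the d subspace coordinates.

import numpy as np

# Toy setup (purely illustrative): logistic regression stands in for a DNN.
rng = np.random.default_rng(0)
X = rng.normal(size=(512, 50))
y = (X @ rng.normal(size=50) > 0).astype(float)

def loss_grad(w):
    # Binary cross-entropy loss and its gradient (hypothetical helper).
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    return loss, X.T @ (p - y) / len(y)

# Phase 1: ordinary gradient descent, recording trajectory snapshots.
w = np.zeros(50)
snapshots = []
for step in range(200):
    _, g = loss_grad(w)
    w -= 0.5 * g
    if step % 10 == 0:
        snapshots.append(w.copy())

# PCA over the centered trajectory gives an orthonormal basis P
# spanning a d-dimensional subspace of the parameter space.
W = np.stack(snapshots)
_, _, Vt = np.linalg.svd(W - W.mean(axis=0), full_matrices=False)
d = 5
P = Vt[:d].T  # shape (n_params, d)

# Phase 2: optimize only the d subspace coordinates alpha, with the
# full weights reconstructed as w_ref + P @ alpha.
alpha = np.zeros(d)
for step in range(100):
    loss, g = loss_grad(w + P @ alpha)
    alpha -= 0.5 * (P.T @ g)  # chain rule: grad_alpha = P^T grad_w
print(f"loss after training in the {d}-dim subspace: {loss:.4f}")

Because only d variables remain, second-order information becomes affordable (the approximated Hessian is d Γ— d). As a hedged illustration of the quasi-Newton idea, not the paper's actual algorithm, one could run SciPy's BFGS over the subspace coordinates, reusing loss_grad, w, P, and d from the sketch above:

from scipy.optimize import minimize

def subspace_obj(alpha):
    # Loss and gradient in the d coordinates; for w = w_ref + P @ alpha
    # the chain rule gives grad_alpha = P^T grad_w.
    loss, g = loss_grad(w + P @ alpha)
    return loss, P.T @ g

res = minimize(subspace_obj, np.zeros(d), jac=True, method="BFGS")
print(f"BFGS loss in the {d}-dim subspace: {res.fun:.4f}")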

πŸ”„ Daily sync (03:00 UTC)

AI Summary: Based on arXiv metadata. Not a recommendation.

πŸ“Š FNI Methodology Β· πŸ“š Knowledge Base Β· ℹ️ Verify with original source

πŸ›‘οΈ Paper Transparency Report

Verified data manifest for traceability and transparency.

100% Data Disclosure Active

πŸ†” Identity & Source

id: arxiv-paper--2103.11154
source: arxiv
author: Tao Li
tags: arxiv:cs.LG, arxiv:cs.NE, arxiv:math.OC

βš™οΈ Technical Specs

architecture: null
params (billions): null
context length: null

πŸ“Š Engagement & Metrics

likes: 0
downloads: 0

Free2AITools Constitutional Data Pipeline: Curated disclosure mode active. (V15.x Standard)