Paper

TENSILE: A Tensor granularity dynamic GPU memory scheduling method toward multiple dynamic workloads system

by Kaixin Zhang, Hongzhi Wang, Han Hu, Songling Zou, Jiye Qiu, Tongxin Li, and Zhishun Wang (arXiv:2105.13336)


High Impact (Citations) Β· Year: 2021 Β· Venue: arXiv Β· FNI Rank: Top 19%
Paper Information Summary
Registry ID: arxiv-paper--2105.13336
Provider: arxiv

Cite this paper

Academic & Research Attribution

BibTeX
@misc{arxiv_paper__2105.13336,
  author = {Kaixin Zhang and Hongzhi Wang and Han Hu and Songling Zou and Jiye Qiu and Tongxin Li and Zhishun Wang},
  title = {TENSILE: A Tensor granularity dynamic GPU memory scheduling method toward multiple dynamic workloads system},
  year = {2021},
  howpublished = {\url{https://arxiv.org/abs/2105.13336v5}}
}
APA Style
Zhang, K., Wang, H., Hu, H., Zou, S., Qiu, J., Li, T., & Wang, Z. (2021). TENSILE: A Tensor granularity dynamic GPU memory scheduling method toward multiple dynamic workloads system [Paper]. arXiv. https://arxiv.org/abs/2105.13336v5


βš–οΈ Free2AI Nexus Index

Methodology β†’ πŸ“˜ What is FNI?
0.0
Top 19% Overall Impact
πŸ”₯ Popularity (P) 0
πŸš€ Velocity (V) 0
πŸ›‘οΈ Credibility (C) 0
πŸ”§ Utility (U) 0
Nexus Verified Data

πŸ’¬ Why this score?

The Nexus Index for TENSILE: A Tensor granularity dynamic GPU memory scheduling method toward multiple dynamic workloads system aggregates Popularity (P:0), Velocity (V:0), and Credibility (C:0). The Utility score (U:0) represents deployment readiness, context efficiency, and structural reliability within the Nexus ecosystem.

Data Verified πŸ• Last Updated: Not calculated
Free2AI Nexus Index | Fair Β· Transparent Β· Explainable | Full Methodology

πŸ“ Executive Summary

"Recently, deep learning has been an area of intense research. However, as a kind of computing-intensive task, deep learning highly relies on the scale of GPU memory, which is usually prohibitive and scarce. Although some extensive works have been proposed for dynamic GPU memory management, they are hard to apply to systems with multiple dynamic workloads, such as in-database machine learning systems. In this paper, we demonstrated TENSILE, a method of managing GPU memory in tensor granularity..."

❝ Full Citation (BibTeX)

@article{Zhang2021TENSILE,
  title={TENSILE: A Tensor granularity dynamic GPU memory scheduling method toward multiple dynamic workloads system},
  author={Kaixin Zhang and Hongzhi Wang and Han Hu and Songling Zou and Jiye Qiu and Tongxin Li and Zhishun Wang},
  journal={arXiv preprint arXiv:2105.13336},
  year={2021}
}

πŸ‘₯ Authors

Kaixin Zhang, Hongzhi Wang, Han Hu, Songling Zou, Jiye Qiu, Tongxin Li, Zhishun Wang

Abstract & Analysis

Deep learning has recently been an area of intense research. However, as a compute-intensive task, it relies heavily on GPU memory capacity, which is usually expensive and scarce. Although several works have been proposed for dynamic GPU memory management, they are hard to apply to systems running multiple dynamic workloads, such as in-database machine learning systems. In this paper, we present TENSILE, a method of managing GPU memory at tensor granularity to reduce the GPU memory peak while accounting for multiple dynamic workloads. TENSILE tackles the cold-start and cross-iteration scheduling problems present in previous works. We implemented TENSILE on a deep learning framework built by ourselves and evaluated its performance. The experimental results show that TENSILE saves more GPU memory with less extra overhead than prior works, in both single and multiple dynamic workload scenarios.
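The abstract's core idea — keeping each tensor resident on the GPU only around the operator steps that access it, and swapping long-idle tensors out to host memory to lower the peak footprint — can be illustrated with a small simulation. This is a hypothetical sketch, not the paper's implementation: the tensor sizes, access schedules, and the `swap_gap` threshold are invented for illustration only.

```python
def gpu_peak(tensors, num_steps, swap_gap=None):
    """Simulated peak GPU memory (MB) over `num_steps` operator steps.

    tensors: dict name -> (size_mb, sorted list of step indices at which
             the tensor is read or written).
    swap_gap: if set, a tensor is swapped out to host memory whenever the
              gap between two consecutive accesses exceeds this many steps
              (tensor-granularity swapping); otherwise it stays resident
              from its first access to its last.
    """
    usage = [0] * num_steps
    for size, accesses in tensors.values():
        for a, b in zip(accesses, accesses[1:]):
            if swap_gap is not None and b - a > swap_gap:
                # Idle gap is long enough to swap out: the tensor occupies
                # GPU memory only at the access step itself, then is
                # fetched back just before step b.
                usage[a] += size
            else:
                # Gap too short to amortize a transfer: stay resident.
                for t in range(a, b):
                    usage[t] += size
        usage[accesses[-1]] += size  # resident at its final access
    return max(usage)


# Toy schedule: an activation reused five steps later, a weight tensor
# touched every step, and a short-lived intermediate.
tensors = {
    "act1": (100, [0, 5]),
    "w":    (50,  [0, 1, 2, 3, 4, 5]),
    "act2": (80,  [1, 2]),
}

print(gpu_peak(tensors, 6))              # no swapping -> 230
print(gpu_peak(tensors, 6, swap_gap=2))  # with swapping -> 150
```

Swapping `act1` out during its five-step idle gap lowers the simulated peak from 230 MB to 150 MB. A real scheduler such as TENSILE's must additionally decide when to prefetch swapped tensors back, handle the cold-start case where access patterns are not yet known, and coordinate across multiple concurrent workloads.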


πŸ›‘οΈ Paper Transparency Report

Verified data manifest for traceability and transparency.


πŸ†” Identity & Source

id: arxiv-paper--2105.13336
source: arxiv
author: Kaixin Zhang
tags: arxiv:cs.DC, arxiv:cs.AI, arxiv:cs.DB, arxiv:cs.LG, arxiv:cs.NE

βš™οΈ Technical Specs

architecture
null
params billions
null
context length
null
