Paper

Distribution Matching Distillation Meets Reinforcement Learning

by Dengyang Jiang et al. (arXiv:2511.13649)

Distribution Matching Distillation (DMD) distills a pre-trained multi-step diffusion model into a few-step one to improve inference efficiency. However, the performance of the latter is often capped by the former. To circumvent this dilemma, we propose DMDR, a novel framework that incorporates Reinforcement Learning (RL) techniques into the distillation process.

Paper Information Summary
Year: 2025
Venue: arXiv
arXiv ID: 2511.13649

📜 Cite this paper

Academic & Research Attribution

BibTeX
@misc{arxiv_paper__2511.13649,
  author = {Jiang, Dengyang and Liu, Dongyang and Wang, Zanyi and Wu, Qilong and Li, Liuzhuozheng and Li, Hengzhuang and Jin, Xin and Liu, David and Li, Zhen and Zhang, Bo and Wang, Mengmeng and Hoi, Steven and Gao, Peng and Yang, Harry},
  title = {Distribution Matching Distillation Meets Reinforcement Learning},
  year = {2025},
  howpublished = {\url{https://arxiv.org/abs/2511.13649v3}}
}
APA Style
Jiang, D., Liu, D., Wang, Z., Wu, Q., Li, L., Li, H., Jin, X., Liu, D., Li, Z., Zhang, B., Wang, M., Hoi, S., Gao, P., & Yang, H. (2025). Distribution Matching Distillation Meets Reinforcement Learning [Preprint]. arXiv. https://arxiv.org/abs/2511.13649v3

🔬 Technical Deep Dive


📝 Executive Summary

"Distribution Matching Distillation (DMD) distills a pre-trained multi-step diffusion model to a few-step one to improve inference efficiency. However, the performance of the latter is often capped by the former. To circumvent this dilemma, we propose DMDR, a novel framework that combines Reinforcement Learning (RL) techniques into the distillation process. We show that for the RL of the few-step generator, the DMD loss itself is a more effective regularization compared to the traditional ones..."

❝ Cite Node

@article{Jiang2025Distribution,
  title={Distribution Matching Distillation Meets Reinforcement Learning},
  author={Dengyang Jiang and Dongyang Liu and Zanyi Wang and Qilong Wu and Liuzhuozheng Li and Hengzhuang Li and Xin Jin and David Liu and Zhen Li and Bo Zhang and Mengmeng Wang and Steven Hoi and Peng Gao and Harry Yang},
  journal={arXiv preprint arXiv:2511.13649},
  year={2025}
}

👥 Collaborating Minds

Dengyang Jiang, Dongyang Liu, Zanyi Wang, Qilong Wu, Liuzhuozheng Li, Hengzhuang Li, Xin Jin, David Liu, Zhen Li, Bo Zhang, Mengmeng Wang, Steven Hoi, Peng Gao, Harry Yang

Paper Details

Distribution Matching Distillation (DMD) distills a pre-trained multi-step diffusion model into a few-step one to improve inference efficiency. However, the performance of the latter is often capped by the former. To circumvent this dilemma, we propose DMDR, a novel framework that incorporates Reinforcement Learning (RL) techniques into the distillation process. We show that for the RL of the few-step generator, the DMD loss itself is a more effective regularization than the traditional ones. In turn, RL can guide the mode-coverage process in DMD more effectively. Together, these allow us to unlock the capacity of the few-step generator by conducting distillation and RL simultaneously. In addition, we design dynamic distribution guidance and dynamic renoise sampling training strategies to improve the initial distillation process. Experiments demonstrate that DMDR achieves leading visual quality and prompt coherence among few-step methods, and can even exceed the performance of the multi-step teacher.
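To make the abstract's core idea concrete, below is a minimal, self-contained PyTorch sketch of the joint objective it describes: the few-step generator is updated with an RL-style reward term while the DMD (distribution-matching) loss acts as the regularizer. All networks, shapes, and weights here are illustrative assumptions, not the paper's actual DMDR implementation, and the dynamic distribution guidance and dynamic renoise sampling strategies are omitted.

# Minimal sketch (assumptions, not the paper's code): jointly optimize an
# RL reward term and a DMD regularizer for a few-step generator.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Placeholder for a few-step generator distilled from a diffusion teacher."""
    def __init__(self, dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.SiLU(), nn.Linear(64, dim))

    def forward(self, noise):
        return self.net(noise)

def dmd_loss(samples, real_score, fake_score):
    # Distribution-matching surrogate: its gradient w.r.t. the samples is the
    # difference between the score of the generator's own ("fake") distribution
    # and the teacher's ("real") score, pushing samples toward the teacher.
    with torch.no_grad():
        grad = fake_score(samples) - real_score(samples)
    return (samples * grad).sum(dim=-1).mean()

def reward_loss(samples, reward_model):
    # RL term: maximize a scalar reward (e.g. an aesthetic or prompt-alignment
    # score). The reward is differentiable here for simplicity; policy-gradient
    # estimators would be needed for non-differentiable rewards.
    return -reward_model(samples).mean()

dim = 16
generator = TinyGenerator(dim)
real_score = nn.Linear(dim, dim)    # placeholder for the frozen teacher score
fake_score = nn.Linear(dim, dim)    # placeholder for the generator-distribution score
reward_model = nn.Linear(dim, 1)    # placeholder reward model

opt = torch.optim.AdamW(generator.parameters(), lr=1e-4)
lambda_dmd = 1.0                    # weight of the DMD regularizer

for step in range(3):
    noise = torch.randn(8, dim)
    samples = generator(noise)
    loss = reward_loss(samples, reward_model) + lambda_dmd * dmd_loss(samples, real_score, fake_score)
    opt.zero_grad()
    loss.backward()
    opt.step()

In the full method, distillation and RL run simultaneously, so the fake-score model would also be updated on the generator's samples; that detail, along with the paper's specific training strategies, is omitted from this sketch.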
