Distribution Matching Distillation Meets Reinforcement Learning

Abstract

Distribution Matching Distillation (DMD) distills a pre-trained multi-step diffusion model into a few-step one to improve inference efficiency. However, the performance of the student is often capped by that of the teacher. To circumvent this dilemma, we propose DMDR, a novel framework that incorporates Reinforcement Learning (RL) into the distillation process. We show that, for RL of the few-step generator, the DMD loss itself is a more effective regularizer than the traditional ones. In turn, RL can guide the mode-coverage process in DMD more effectively. Together, these allow us to unlock the capacity of the few-step generator by conducting distillation and RL simultaneously. We also design two training strategies, dynamic distribution guidance and dynamic renoise sampling, to improve the initial distillation process. Experiments demonstrate that DMDR achieves leading visual quality and prompt coherence among few-step methods, and can even exceed the multi-step teacher.
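The joint objective implied by the abstract, where the DMD loss replaces the usual KL-to-reference regularizer in RL fine-tuning, can be sketched as follows. This is a hedged reconstruction, not the paper's exact formulation: the trade-off weight $\lambda$ and the reward model $r$ are assumed placeholders.

```latex
% Few-step generator G_theta is trained with RL and DMD simultaneously.
% - RL term: maximize an (assumed) reward r over generated samples x;
% - DMD term: distribution-matching loss between the generator's output
%   distribution and the teacher's, used here as the regularizer in place
%   of the traditional KL-to-reference term;
% - lambda: assumed trade-off weight, not specified in the abstract.
\mathcal{L}(\theta)
  = \underbrace{-\,\mathbb{E}_{x \sim G_\theta}\!\left[\, r(x) \,\right]}_{\text{RL term}}
  \;+\; \lambda\,
    \underbrace{D_{\mathrm{KL}}\!\left( p_{G_\theta} \,\middle\|\, p_{\text{teacher}} \right)}_{\text{DMD regularizer}}
```

Under this reading, distillation and RL share one update: the DMD term keeps the few-step generator close to the teacher's distribution while the reward term pushes it beyond the teacher's quality ceiling.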
Cite this paper
@article{Jiang2025Distribution,
  title={Distribution Matching Distillation Meets Reinforcement Learning},
  author={Dengyang Jiang and Dongyang Liu and Zanyi Wang and Qilong Wu and Liuzhuozheng Li and Hengzhuang Li and Xin Jin and David Liu and Zhen Li and Bo Zhang and Mengmeng Wang and Steven Hoi and Peng Gao and Harry Yang},
  journal={arXiv preprint arXiv:2511.13649},
  year={2025}
}