Paper

Optimal ANN-SNN Conversion for Fast and Accurate Inference in Deep Spiking Neural Networks

by Jianhao Ding, Zhaofei Yu, Yonghong Tian, and Tiejun Huang · ID: arxiv-paper--2105.11654


Citations: High impact
Year: 2021
Venue: arXiv
FNI Rank: Top 19%

Paper Information Summary
Entity Passport
Registry ID: arxiv-paper--2105.11654
Provider: arXiv

Cite this paper

Academic & Research Attribution

BibTeX
@misc{arxiv_paper__2105.11654,
  author = {Jianhao Ding and Zhaofei Yu and Yonghong Tian and Tiejun Huang},
  title = {Optimal ANN-SNN Conversion for Fast and Accurate Inference in Deep Spiking Neural Networks},
  year = {2021},
  howpublished = {\url{https://arxiv.org/abs/2105.11654v1}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
Ding, J., Yu, Z., Tian, Y., & Huang, T. (2021). Optimal ANN-SNN Conversion for Fast and Accurate Inference in Deep Spiking Neural Networks [Paper]. Free2AITools. https://arxiv.org/abs/2105.11654v1


πŸ“ Executive Summary

The paper theoretically analyzes ANN-SNN conversion and derives sufficient conditions for optimal conversion. It proposes a Rate Norm Layer to replace ReLU in source ANN training, plus an optimal fit curve that quantifies the match between ANN activation values and SNN firing rates; optimizing the curve's upper bound shortens inference time, yielding near-lossless conversion for VGG-16, PreActResNet-18, and deeper networks.

❝ Cite Node

@article{Ding2021Optimal,
  title={Optimal ANN-SNN Conversion for Fast and Accurate Inference in Deep Spiking Neural Networks},
  author={Jianhao Ding and Zhaofei Yu and Yonghong Tian and Tiejun Huang},
  journal={arXiv preprint arXiv:2105.11654},
  year={2021}
}

πŸ‘₯ Collaborating Minds

Jianhao Ding, Zhaofei Yu, Yonghong Tian, Tiejun Huang

Abstract & Analysis

Spiking Neural Networks (SNNs), as bio-inspired energy-efficient neural networks, have attracted great attention from researchers and industry. The most efficient way to train deep SNNs is through ANN-SNN conversion. However, the conversion usually suffers from accuracy loss and long inference time, which impede the practical application of SNNs. In this paper, we theoretically analyze ANN-SNN conversion and derive sufficient conditions for the optimal conversion. To better correlate ANNs and SNNs and achieve greater accuracy, we propose a Rate Norm Layer to replace the ReLU activation function in source ANN training, enabling direct conversion from a trained ANN to an SNN. Moreover, we propose an optimal fit curve to quantify the fit between the activation values of the source ANN and the actual firing rates of the target SNN. We show that inference time can be reduced by optimizing the upper bound of the fit curve in the revised ANN to achieve fast inference. Our theory can explain existing work on fast inference and obtain better results. The experimental results show that the proposed method achieves near-lossless conversion with VGG-16, PreActResNet-18, and deeper structures. Moreover, it reaches 8.6Γ— faster inference under 0.265Γ— the energy consumption of the typical method. The code is available at https://github.com/DingJianhao/OptSNNConvertion-RNL-RIL.
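The correspondence the abstract relies on can be sketched in a few lines: in rate-coded ANN-SNN conversion, an integrate-and-fire (IF) neuron driven by a constant input current fires at a rate that approximates the ReLU activation of the source ANN, with an error that shrinks as simulation time T grows. The toy pre-activation values, the threshold choice (max activation, i.e. "threshold balancing"), and the soft reset below are common conventions from the conversion literature used here for illustration; this is not the paper's Rate Norm Layer or fit-curve method.

```python
import numpy as np

# Pre-activations of a toy ANN layer (illustrative values, not from the paper).
current = np.array([0.8, 0.3, -0.2, 0.05])
ann_out = np.maximum(current, 0.0)   # ReLU activations the SNN should match

theta = ann_out.max()                # firing threshold via threshold balancing
T = 1000                             # number of simulation time steps

v = np.zeros_like(current)           # membrane potentials
spike_count = np.zeros_like(current)
for _ in range(T):
    v += current                     # integrate constant input current
    fired = v >= theta
    spike_count += fired
    v[fired] -= theta                # soft reset: subtract threshold on spike

# Scale firing rates back to activation units and compare with ReLU.
snn_rate = spike_count / T * theta
max_err = np.abs(snn_rate - ann_out).max()
print(f"max |rate - ReLU| = {max_err:.6f}, theta/T = {theta / T:.6f}")
```

With the soft-reset scheme, each neuron's scaled rate deviates from its ReLU activation by at most about theta/T, which is why longer simulation (larger T) trades latency for accuracy; the paper's contribution is precisely to shrink the T needed for a given accuracy.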


πŸ›‘οΈ Paper Transparency Report

Verified data manifest for traceability and transparency.

100% Data Disclosure Active

πŸ†” Identity & Source

id: arxiv-paper--2105.11654
source: arxiv
author: Jianhao Ding
tags: arxiv:cs.NE, arxiv:cs.AI, arxiv:cs.CV, arxiv:cs.LG, neural

βš™οΈ Technical Specs

architecture
null
params billions
null
context length
null

πŸ“Š Engagement & Metrics

likes: 0
downloads: 0

Free2AITools Constitutional Data Pipeline: Curated disclosure mode active. (V15.x Standard)