
Weakly-Supervised Action Localization and Action Recognition using Global-Local Attention of 3D CNN

by Novanto Yudistira, Muthu Subash Kavitha, and Takio Kurita (ID: arxiv-paper--2012.09542)


High citation impact · Year: 2020 · Venue: arXiv · Top 19% FNI Rank
Paper Information Summary
Entity Passport
Registry ID: arxiv-paper--2012.09542
Provider: arxiv

Cite this paper

Academic & Research Attribution

BibTeX
@misc{arxiv_paper__2012.09542,
  author = {Yudistira, Novanto and Kavitha, Muthu Subash and Kurita, Takio},
  title = {Weakly-Supervised Action Localization and Action Recognition using Global-Local Attention of 3D CNN},
  year = {2020},
  howpublished = {\url{https://arxiv.org/abs/2012.09542v3}},
  note = {Accessed via Free2AITools Knowledge Fortress}
}
APA Style
Yudistira, N., Kavitha, M. S., & Kurita, T. (2020). Weakly-Supervised Action Localization and Action Recognition using Global-Local Attention of 3D CNN [Paper]. Free2AITools. https://arxiv.org/abs/2012.09542v3

πŸ”¬Technical Deep Dive

Full Specifications [+]

βš–οΈ Free2AI Nexus Index

Methodology β†’ πŸ“˜ What is FNI?
0.0
Top 19% Overall Impact
πŸ”₯ Popularity (P) 0
πŸš€ Velocity (V) 0
πŸ›‘οΈ Credibility (C) 0
πŸ”§ Utility (U) 0
Nexus Verified Data

πŸ’¬ Why this score?

The Nexus Index for Weakly-Supervised Action Localization and Action Recognition using Global-Local Attention of 3D CNN aggregates Popularity (P:0), Velocity (V:0), and Credibility (C:0). The Utility score (U:0) represents deployment readiness, context efficiency, and structural reliability within the Nexus ecosystem.

Data Verified πŸ• Last Updated: Not calculated
Free2AI Nexus Index | Fair Β· Transparent Β· Explainable | Full Methodology

πŸ“ Executive Summary

"3D Convolutional Neural Network (3D CNN) captures spatial and temporal information on 3D data such as video sequences. However, due to the convolution and pooling mechanism, the information loss seems unavoidable. To improve the visual explanations and classification in 3D CNN, we propose two approaches; i) aggregate layer-wise global to local (global-local) discrete gradients using trained 3DResNext network, and ii) implement attention gating network to improve the accuracy of the action rec..."

❝ Cite Node

@article{Yudistira2020WeaklySupervised,
  title={Weakly-Supervised Action Localization and Action Recognition using Global-Local Attention of 3D CNN},
  author={Novanto Yudistira and Muthu Subash Kavitha and Takio Kurita},
  journal={arXiv preprint arXiv:2012.09542},
  year={2020}
}

πŸ‘₯ Collaborating Minds

Novanto Yudistira, Muthu Subash Kavitha, Takio Kurita

Abstract & Analysis

A 3D Convolutional Neural Network (3D CNN) captures spatial and temporal information in 3D data such as video sequences. However, due to the convolution and pooling mechanisms, information loss is unavoidable. To improve visual explanations and classification in 3D CNNs, we propose two approaches: (i) aggregate layer-wise global-to-local (global-local) discrete gradients using a trained 3DResNext network, and (ii) implement an attention gating network to improve the accuracy of action recognition. The proposed approach aims to show the usefulness of every layer, termed global-local attention, in a 3D CNN via visual attribution, weakly-supervised action localization, and action recognition. First, 3DResNext is trained for action classification, and backpropagation is performed with respect to the maximum predicted class. The gradients and activations of every layer are then up-sampled and aggregated to produce a more nuanced attention map, which highlights the most critical part of the input video for the predicted class. Contour thresholding of the final attention map yields the final localization. We evaluate spatial and temporal action localization in trimmed videos using fine-grained visual explanation via 3DCam. Experimental results show that the proposed approach produces informative visual explanations and discriminative attention. Furthermore, action recognition via attention gating on each layer produces better classification results than the baseline model.
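The pipeline described in the abstract (per-layer gradients and activations, up-sampling to the input resolution, aggregation into a single global-local attention map, then thresholding for localization) can be sketched as follows. This is a minimal NumPy illustration of the idea, not the authors' implementation: the Grad-CAM-style channel weighting, the nearest-neighbor up-sampling, the averaging aggregation, and the plain threshold (in place of contour thresholding) are all simplifying assumptions.

```python
import numpy as np

def layer_attention(activations, gradients):
    """Grad-CAM-style map for one layer of a 3D CNN.

    activations, gradients: arrays of shape (C, T, H, W).
    Channel weights are the spatio-temporal mean of the gradients;
    the channel-weighted sum of activations is passed through a ReLU.
    """
    weights = gradients.mean(axis=(1, 2, 3), keepdims=True)  # (C, 1, 1, 1)
    cam = (weights * activations).sum(axis=0)                # (T, H, W)
    return np.maximum(cam, 0.0)

def upsample_nearest(att, target_shape):
    """Nearest-neighbor upsampling of a (T, H, W) map to target_shape.
    Assumes each target dimension is an integer multiple of the map's."""
    ft, fh, fw = (t // s for t, s in zip(target_shape, att.shape))
    return att.repeat(ft, axis=0).repeat(fh, axis=1).repeat(fw, axis=2)

def aggregate_global_local(layer_maps, input_shape):
    """Upsample every layer's map to the input resolution, average them,
    and normalize to [0, 1] (the global-local aggregation step)."""
    ups = [upsample_nearest(m, input_shape) for m in layer_maps]
    agg = np.mean(ups, axis=0)
    return (agg - agg.min()) / (agg.max() - agg.min() + 1e-8)

def localize(attention, thresh=0.5):
    """Binary spatio-temporal localization mask; a plain threshold here
    stands in for the contour thresholding described in the abstract."""
    return attention >= thresh
```

For example, given maps from two layers at resolutions (8, 4, 4) and (4, 2, 2), `aggregate_global_local(maps, (16, 8, 8))` returns a single (16, 8, 8) attention volume in [0, 1] that combines coarse (global) and fine (local) evidence.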


πŸ›‘οΈ Paper Transparency Report

Verified data manifest for traceability and transparency.

100% Data Disclosure Active

πŸ†” Identity & Source

id: arxiv-paper--2012.09542
source: arxiv
author: Novanto Yudistira
tags: arxiv:cs.CV, arxiv:cs.AI, arxiv:cs.NE, attention

βš™οΈ Technical Specs

architecture: null
params (billions): null
context length: null

πŸ“Š Engagement & Metrics

likes: 0
downloads: 0

Free2AITools Constitutional Data Pipeline: Curated disclosure mode active. (V15.x Standard)