Channel Attention-based Spatial-Temporal Graph Convolutional Networks for Action Recognition
Oral Presentation (Submission ID: 44)

Start Time: 2024-11-01 16:00 (Asia/Shanghai)

Duration: 20 min

Session: [P4] Parallel Session 4 » [P4-1] Parallel Session 4 (November 1 PM)


Abstract
In the domain of human action recognition, skeleton-based methods have attracted widespread attention for their superior robustness. Although the Spatial-Temporal Graph Convolutional Network (ST-GCN) was the first to apply GCNs to model skeleton data, it still struggles to effectively differentiate between essential and redundant features. To address this limitation, we propose a novel Channel Attention-based Spatial-Temporal Graph Convolutional Network (CA-STGCN). Our model integrates SENet with SoftPool, introducing the SoftPool-SENet (S-SE) module to enhance pooling operations and preserve critical feature information. We validate CA-STGCN on two public datasets, NTU-RGB+D and Kinetics. Experimental results demonstrate that our model outperforms the original ST-GCN and offers valuable insights for advancing skeleton-based action recognition.
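For readers unfamiliar with the combination, the following PyTorch sketch illustrates one plausible form of an S-SE module: a squeeze-and-excitation block whose global pooling step is replaced by SoftPool (softmax-weighted pooling). The class name SoftPoolSE, the reduction ratio, and the (N, C, T, V) skeleton feature layout are illustrative assumptions; the abstract does not give the authors' exact design.

import torch
import torch.nn as nn


def soft_pool(x: torch.Tensor) -> torch.Tensor:
    # Global SoftPool over the temporal (T) and joint (V) axes of a
    # skeleton feature map x of shape (N, C, T, V): each activation is
    # weighted by its softmax score, so salient values dominate the
    # pooled channel descriptor instead of being averaged away.
    n, c, t, v = x.shape
    flat = x.view(n, c, t * v)
    weights = torch.softmax(flat, dim=-1)
    return (weights * flat).sum(dim=-1)  # (N, C)


class SoftPoolSE(nn.Module):
    # SE-style channel attention whose squeeze step uses SoftPool
    # instead of global average pooling (assumed form of the S-SE module).
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s = soft_pool(x)                          # squeeze: (N, C)
        w = self.fc(s).view(x.size(0), -1, 1, 1)  # excitation: per-channel weights
        return x * w                              # recalibrate the feature map

In CA-STGCN such a block would presumably be applied to the output of each spatial-temporal graph convolution unit, but the exact insertion points, reduction ratio, and pooling scope are not specified in this abstract.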
Keywords
action recognition, graph convolutional network, channel attention mechanism, SoftPool
Speaker
Ms Chen Weijie, Xidian University

Submission Author
Chen Weijie, Xidian University
Cheng Xina, Xidian University
Jiao Jianbin, Xidian University

Important Dates

31st August 2024 - Manuscript Submission (extended from 15th August 2024)

15th September 2024 - Acceptance Notification

1st October 2024 - Camera Ready Submission

1st October 2024 - Early Bird Registration


Contact Us

Website: https://icsmd2024.aconf.org/

Email: icsmd2024@163.com