FPGA-Based Lightweight Dual-Stage Multi-Exposure Image Fusion
Poster Presentation
Abstract
Deep learning-based multi-exposure image fusion (MEF) methods have demonstrated robust performance. However, they require considerable computational resources and energy, which greatly limits their practical deployment. To address this issue, we propose a lightweight, dual-stage MEF method, termed LDMEF. By deploying efficiently on a field-programmable gate array (FPGA), the method greatly broadens its range of applications and flexibility. Specifically, in the first stage, LDMEF preprocesses the input sequence by exploiting the parallel processing capability of the FPGA, computing a preliminary image through pixel-wise addition and averaging to ensure simple and rapid execution. In the second stage, the proposed method applies depthwise separable convolutions to the preliminary image, forming a lightweight network that is straightforward to deploy and simple in design. This network fine-tunes the preliminary image at the pixel level, yielding high-quality fusion results. Extensive evaluations on publicly available datasets confirm that LDMEF not only achieves remarkable results but also outperforms many GPU-based deep learning MEF methods.
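
The following is a minimal sketch of the two-stage pipeline described above, written in PyTorch purely for illustration. The layer widths, network depth, and the residual-style refinement used here are assumptions, not the authors' exact LDMEF design; only the overall structure (stage 1: pixel-wise averaging of the exposure stack, stage 2: a small depthwise separable convolution network that refines the preliminary image per pixel) follows the abstract.

```python
# Hypothetical sketch of the LDMEF two-stage flow; widths/depths are assumed.
import torch
import torch.nn as nn


def stage1_preliminary(exposures: torch.Tensor) -> torch.Tensor:
    """Stage 1: pixel-wise addition and averaging of the exposure stack.

    exposures: (N, C, H, W) tensor holding N differently exposed inputs.
    On an FPGA this reduces to parallel adders followed by a normalization.
    """
    return exposures.mean(dim=0)


class DepthwiseSeparableConv(nn.Module):
    """A 3x3 depthwise convolution followed by a 1x1 pointwise convolution."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.pointwise(self.depthwise(x)))


class Stage2Refiner(nn.Module):
    """Stage 2: a lightweight network that fine-tunes the preliminary image
    at the pixel level. The two-block body here is only a placeholder."""

    def __init__(self, channels: int = 3, width: int = 16):
        super().__init__()
        self.body = nn.Sequential(
            DepthwiseSeparableConv(channels, width),
            DepthwiseSeparableConv(width, width),
            nn.Conv2d(width, channels, 1),
        )

    def forward(self, preliminary: torch.Tensor) -> torch.Tensor:
        # Predict a per-pixel correction and add it to the preliminary image.
        return preliminary + self.body(preliminary)


if __name__ == "__main__":
    stack = torch.rand(3, 3, 256, 256)            # three exposures, RGB
    preliminary = stage1_preliminary(stack)       # (3, 256, 256)
    fused = Stage2Refiner()(preliminary.unsqueeze(0))
    print(fused.shape)                            # torch.Size([1, 3, 256, 256])
```

Keeping stage 1 as simple averaging and stage 2 as a few depthwise separable blocks is what makes the network small enough to map onto FPGA resources; the sketch above mirrors that split but does not reproduce the paper's actual architecture or training setup.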
Keywords
Deep learning, multi-exposure image fusion, lightweight, field-programmable gate array
Submission Author
杨徐子谦
Anhui University
屠韬
University of Science and Technology of China
刘永斌
Anhui University
陈怀安
University of Science and Technology of China
金一
University of Science and Technology of China