Occlusion-Aware Segmentation Via RCF-Pix2Pix Generative Network
ID: 170
Submission ID: 164
Updated Time: 2024-10-23 10:02:36
Poster Presentation
Abstract
Segmenting image objects that are overlapped by other objects is challenging because the shapes of occluded regions are unknown and occlusion boundaries are typically indistinguishable from true object contours. Unlike traditional segmentation methods based on convolutional networks, we explore the capability of generative networks to segment occluded regions and propose a new edge-guided network architecture, RCF-Pix2Pix. The network mimics the human reasoning process for estimating the shape of occluded regions, using the edges and overall contour of the visible parts to infer the shape of the hidden portion. RCF-Pix2Pix strengthens edge features by integrating edge information and feeds edge contours to the discriminator as conditional input, guiding the network to predict contours in invisible regions. Moreover, by combining an MSE-SSIM-L1 loss with an edge loss, it improves segmentation accuracy and stabilizes the quality of the segmented images, making it more effective for complex imaging tasks. Experimental results show that our method achieves significant improvements on the chip dataset, with an mIoU of 98.9% and a Boundary IoU of 86.6% (+10.4%), and also achieves notable accuracy gains on the D2SA dataset. These results confirm the effectiveness and practical value of the RCF-Pix2Pix network in handling complex occlusion scenarios. Code is available at: https://github.com/CongyingAn/RCF-Pix2Pix-Generative-Network.
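The combined objective mentioned above (an MSE-SSIM-L1 reconstruction loss plus an edge loss) can be illustrated with a short PyTorch sketch. The loss weights (lambda_*), the box-window single-scale SSIM, and the Sobel-based edge extractor below are illustrative assumptions, not the exact formulation used in the paper; see the linked repository for the authors' implementation.

# Minimal sketch of a combined MSE-SSIM-L1 + edge loss for the generator.
# Weights, the box-window SSIM, and the Sobel edge proxy are assumptions.
import torch
import torch.nn.functional as F


def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2, window=11):
    """Single-scale SSIM with a uniform (box) window; inputs assumed in [0, 1]."""
    pad = window // 2
    mu_x = F.avg_pool2d(x, window, 1, pad)
    mu_y = F.avg_pool2d(y, window, 1, pad)
    sigma_x = F.avg_pool2d(x * x, window, 1, pad) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, window, 1, pad) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, window, 1, pad) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    return (num / den).mean()


def sobel_edges(img):
    """Approximate edge maps with Sobel filters (a stand-in for RCF edge maps)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    ky = kx.t()
    k = torch.stack([kx, ky]).unsqueeze(1).to(img)        # shape (2, 1, 3, 3)
    gray = img.mean(dim=1, keepdim=True)                  # collapse channels
    g = F.conv2d(gray, k, padding=1)                      # horizontal + vertical gradients
    return g.pow(2).sum(dim=1, keepdim=True).sqrt()       # gradient magnitude


def generator_loss(pred, target, lambda_mse=1.0, lambda_ssim=1.0,
                   lambda_l1=1.0, lambda_edge=1.0):
    """Combined MSE-SSIM-L1 reconstruction loss plus an edge-consistency term."""
    loss_mse = F.mse_loss(pred, target)
    loss_ssim = 1.0 - ssim(pred, target)
    loss_l1 = F.l1_loss(pred, target)
    loss_edge = F.l1_loss(sobel_edges(pred), sobel_edges(target))
    return (lambda_mse * loss_mse + lambda_ssim * loss_ssim +
            lambda_l1 * loss_l1 + lambda_edge * loss_edge)

In such a formulation the MSE and L1 terms penalize pixel-wise error, the SSIM term encourages structural similarity, and the edge term keeps predicted contours in the occluded region consistent with the target contours.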
Keywords
RCF-Pix2Pix, occluded areas, conditional information, edge
Submission Author
An Congying
Jiangnan University
Wu Jingjing
Jiangnan University
Zhang Huanlong
University of Electronic Science and Technology