
Article

English

ID: oai:doaj.org/article:3bc6bceeb88d4451b97b732496a234aa
DOI: 10.1016/j.jag.2022.102974

A self-attention based global feature enhancing network for semantic segmentation of large-scale urban street-level point clouds

Abstract

Point clouds of large-scale urban street scenes contain many object categories and rich semantic information. Semantic segmentation is the basis of subsequent essential applications such as digital twin engineering and city information models. Global features of point clouds in large-scale scenes provide long-range context, which is critical for high-quality semantic segmentation; however, some deep learning models ignore global spatial saliency under class-label constraints in their feature representations. To address this, we propose a Global Feature Self-Attention Encoding (GFSAE) module and a Weighted Semantic Mapping (WSM) module. GFSAE enhances salient global features channel by channel through self-attention, while WSM incorporates semantic-category constraints, so that the segmentation model learns a better representation of large-scale urban street scenes. Experiments are performed on the Semantic3D dataset and on our own vehicle-mounted Mobile Laser Scanning (MLS) point cloud dataset. The segmentation results show that GFSAE and WSM improve the semantic segmentation of point clouds in large-scale urban street scenes and demonstrate the effectiveness of our model compared with other state-of-the-art methods.
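To illustrate the kind of channel-wise self-attention enhancement the abstract describes, the sketch below shows a generic channel self-attention block over per-point features in PyTorch. This is only a minimal illustration of the general pattern; the class name, tensor shapes, and attention formulation are assumptions, not the authors' GFSAE implementation.

```python
# Hypothetical sketch of channel-wise self-attention over per-point features,
# in the spirit of the GFSAE module described above. Names, shapes, and the
# exact attention formulation are assumptions, not the authors' code.
import torch
import torch.nn as nn


class ChannelSelfAttention(nn.Module):
    """Re-weights feature channels using an attention map computed
    from channel-to-channel affinities (a common channel-attention pattern)."""

    def __init__(self):
        super().__init__()
        # Learnable scale that blends the attended features back into the input.
        self.gamma = nn.Parameter(torch.zeros(1))
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, N) -- batch, feature channels, number of points
        affinity = torch.bmm(x, x.transpose(1, 2))   # channel affinities: (B, C, C)
        attention = self.softmax(affinity)           # normalize per channel
        out = torch.bmm(attention, x)                # re-weighted channels: (B, C, N)
        # Residual blend controlled by gamma (gamma starts at 0, so the module
        # initially behaves like an identity mapping).
        return self.gamma * out + x


if __name__ == "__main__":
    feats = torch.randn(2, 64, 4096)   # 2 clouds, 64 channels, 4096 points each
    module = ChannelSelfAttention()
    print(module(feats).shape)         # torch.Size([2, 64, 4096])
```

In this pattern, the attention map captures which feature channels co-vary across the whole scene, so re-weighting with it emphasizes globally salient channels rather than purely local geometry.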
