DC Field | Value | Language |
dc.contributor.author | ZiRui Shen | - |
dc.contributor.author | Xin Li | - |
dc.contributor.author | Sheng Xu | - |
dc.coverage.spatial | Минск | en_US |
dc.date.accessioned | 2024-03-01T08:13:37Z | - |
dc.date.available | 2024-03-01T08:13:37Z | - |
dc.date.issued | 2023 | - |
dc.identifier.citation | ZiRui Shen. RMNET: A Residual and Multi-scale Feature Fusion Network For High-resolution Image Semantic Segmentation / ZiRui Shen, Xin Li, Sheng Xu // Pattern Recognition and Information Processing (PRIP'2023) = Распознавание образов и обработка информации (2023) : Proceedings of the 16th International Conference, October 17–19, 2023, Minsk, Belarus / United Institute of Informatics Problems of the National Academy of Sciences of Belarus. – Minsk, 2023. – P. 101–106. | en_US |
dc.identifier.uri | https://libeldoc.bsuir.by/handle/123456789/54463 | - |
dc.description.abstract | High-resolution remote sensing images have high clarity and provide significant support for urban planning,
resource management, environmental monitoring, and disaster warning. Semantic segmentation helps accurately extract object
boundaries, thereby increasing the application value of scene understanding. Traditional encoder-decoder architecture networks
lack multi-scale information fusion and fail to capture precise multi-scale semantic information when segmenting targets at
different scales. Additionally, these semantic segmentation networks handle class-imbalanced data inadequately, resulting in
unsatisfactory classification results and final segmentation quality. This paper proposes a semantic segmentation network based
on residual blocks and multi-scale feature fusion. Building upon the U-Net network, we design residual modules and multi-scale
feature fusion modules to extract information-rich feature maps. The multi-scale feature fusion module then interpolates and
upsamples the obtained feature maps, which are concatenated with feature maps at the same layer, producing a novel fused feature
map. In experiments, the proposed model surpasses U-Net, with an MIoU improvement of 6.06%. The introduced network effectively
identifies complex land features, including densely distributed objects, small objects, large differences in object
characteristics, and complex backgrounds; by incorporating the multi-scale feature fusion module, it preserves and restores
feature information, achieving higher-precision segmentation results and providing rich multi-scale and spatial information. | en_US |
dc.language.iso | en | en_US |
dc.publisher | BSU | en_US |
dc.subject | conference proceedings | en_US |
dc.subject | deep learning | en_US |
dc.subject | high-resolution | en_US |
dc.subject | semantic segmentation | en_US |
dc.title | RMNET: A Residual and Multi-scale Feature Fusion Network For High-resolution Image Semantic Segmentation | en_US |
dc.type | Article | en_US |
Appears in Collections: | Pattern Recognition and Information Processing (PRIP'2023) = Распознавание образов и обработка информации (2023) |
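The abstract above describes a U-Net-based decoder in which deeper feature maps are upsampled by interpolation and concatenated with same-layer feature maps through a multi-scale fusion module built on residual blocks. The sketch below is a minimal, hypothetical PyTorch illustration of such a module; the class names, channel sizes, and layer layout are assumptions for illustration only, not the authors' published implementation.

```python
# Illustrative sketch only: module names, channel sizes, and layer counts are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with a skip connection (assumed layout)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(out_ch)
        # 1x1 projection so the residual addition matches channel counts
        self.proj = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):
        y = F.relu(self.bn1(self.conv1(x)))
        y = self.bn2(self.conv2(y))
        return F.relu(y + self.proj(x))

class MultiScaleFusion(nn.Module):
    """Upsamples a deeper feature map and concatenates it with the
    same-layer feature map, as described in the abstract."""
    def __init__(self, deep_ch, skip_ch, out_ch):
        super().__init__()
        self.fuse = ResidualBlock(deep_ch + skip_ch, out_ch)

    def forward(self, deep, skip):
        # Bilinear interpolation to the spatial size of the same-layer feature map
        deep = F.interpolate(deep, size=skip.shape[-2:], mode="bilinear",
                             align_corners=False)
        return self.fuse(torch.cat([deep, skip], dim=1))

# Example usage with hypothetical tensor shapes
if __name__ == "__main__":
    skip = torch.randn(1, 64, 128, 128)   # encoder feature map at this layer
    deep = torch.randn(1, 128, 64, 64)    # deeper, lower-resolution feature map
    fused = MultiScaleFusion(128, 64, 64)(deep, skip)
    print(fused.shape)  # torch.Size([1, 64, 128, 128])
```

Concatenation (rather than addition) keeps both the upsampled deep features and the same-layer spatial detail available to the following residual block, which matches the fusion behaviour the abstract attributes to the module.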