Please use this identifier to cite or link to this item: https://libeldoc.bsuir.by/handle/123456789/54364
Full metadata record
DC Field | Value | Language
dc.contributor.author | Tongrui Li | -
dc.contributor.author | Ablameyko, S. | -
dc.coverage.spatial | Минск | en_US
dc.date.accessioned | 2024-02-26T07:04:59Z | -
dc.date.available | 2024-02-26T07:04:59Z | -
dc.date.issued | 2023 | -
dc.identifier.citation | Tongrui Li. Human Pose Estimation using SimCC and Swin Transformer / Tongrui Li, S. Ablameyko // Pattern Recognition and Information Processing (PRIP'2023) = Распознавание образов и обработка информации (2023) : Proceedings of the 16th International Conference, October 17–19, 2023, Minsk, Belarus / United Institute of Informatics Problems of the National Academy of Sciences of Belarus. – Minsk, 2023. – P. 197–201. | en_US
dc.identifier.uri | https://libeldoc.bsuir.by/handle/123456789/54364 | -
dc.description.abstract | 2D human pose estimation is an important task in computer vision. In recent years, deep learning methods for human pose estimation have been proposed in quick succession and have achieved good results. Among existing models, the built-in attention layers of the Transformer allow a model to effectively capture long-range relationships and to reveal the dependencies on which the predicted keypoints rely. SimCC formulates keypoint localization as a classification problem: it divides the horizontal and vertical axes into numbered bins of equal width and discretizes continuous coordinates into integer bin labels (see the sketch after this record). We propose a new model that combines SimCC with a Swin Transformer backbone to predict the bin in which each keypoint lies, and thereby to predict the keypoints themselves. This method achieves better results than other models, with sub-pixel localization accuracy and low quantization error. | en_US
dc.language.iso | en | en_US
dc.publisher | BSU | en_US
dc.subject | conference proceedings (материалы конференций) | en_US
dc.subject | human pose estimation | en_US
dc.subject | swin transformer | en_US
dc.subject | SimCC | en_US
dc.title | Human Pose Estimation using SimCC and Swin Transformer | en_US
dc.type | Article | en_US
Appears in Collections:Pattern Recognition and Information Processing (PRIP'2023) = Распознавание образов и обработка информации (2023)

Files in This Item:
File | Description | Size | Format
Tongrui_Li_Human.pdf | – | 6.39 MB | Adobe PDF

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.