Please use this identifier to cite or link to this item: https://libeldoc.bsuir.by/handle/123456789/46657
Full metadata record
DC Field | Value | Language
dc.contributor.author | Vyaznikov, P. A. | -
dc.contributor.author | Kotilevets, I. D. | -
dc.date.accessioned | 2022-02-02T08:56:49Z | -
dc.date.available | 2022-02-02T08:56:49Z | -
dc.date.issued | 2021 | -
dc.identifier.citation | Vyaznikov, P. A. Developing a seq2seq neural network using visual attention to transform mathematical expressions from images to LaTeX / Vyaznikov P. A., Kotilevets I. D. // Доклады БГУИР. – 2021. – № 19(8). – С. 40–44. – DOI: http://dx.doi.org/10.35596/1729-7648-2021-19-8-40-44 | ru_RU
dc.identifier.uri | https://libeldoc.bsuir.by/handle/123456789/46657 | -
dc.description.abstract | The paper presents the methods of development and the results of research on the effectiveness of a seq2seq neural network architecture that uses the Visual Attention mechanism to solve the im2latex problem. The essence of the task is to create a neural network capable of converting an image of a mathematical expression into the equivalent expression in the LaTeX markup language. This problem belongs to the Image Captioning type: the neural network scans the image and, based on the extracted features, generates a description in natural language. The proposed solution uses the seq2seq architecture, which contains Encoder and Decoder mechanisms as well as Bahdanau Attention. A series of experiments was conducted to train and measure the effectiveness of several neural network models. | ru_RU
dc.language.iso | en | ru_RU
dc.publisher | БГУИР | ru_RU
dc.subject | доклады БГУИР | ru_RU
dc.subject | im2latex | ru_RU
dc.subject | seq2seq | ru_RU
dc.subject | NLP | ru_RU
dc.subject | neural network | ru_RU
dc.title | Developing a seq2seq neural network using visual attention to transform mathematical expressions from images to LaTeX | ru_RU
dc.type | Статья (Article) | ru_RU
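
The abstract above describes an encoder-decoder (seq2seq) model with Bahdanau attention for translating images of formulas into LaTeX tokens. Below is a minimal, illustrative PyTorch sketch of that general idea; all class names, layer sizes, and the choice of a GRU-based decoder are assumptions made for illustration and do not reproduce the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BahdanauAttention(nn.Module):
    """Additive (Bahdanau) attention over a grid of encoder feature vectors."""
    def __init__(self, enc_dim, dec_dim, attn_dim):
        super().__init__()
        self.W_enc = nn.Linear(enc_dim, attn_dim)   # project encoder features
        self.W_dec = nn.Linear(dec_dim, attn_dim)   # project decoder state
        self.v = nn.Linear(attn_dim, 1)             # score each image position

    def forward(self, features, hidden):
        # features: (batch, num_positions, enc_dim) -- e.g. a CNN feature map
        # hidden:   (batch, dec_dim)                -- current decoder state
        scores = self.v(torch.tanh(
            self.W_enc(features) + self.W_dec(hidden).unsqueeze(1)
        ))                                          # (batch, num_positions, 1)
        weights = F.softmax(scores, dim=1)          # attention distribution
        context = (weights * features).sum(dim=1)   # (batch, enc_dim)
        return context, weights

class Decoder(nn.Module):
    """One-step GRU decoder that emits LaTeX tokens, re-attending each step."""
    def __init__(self, vocab_size, embed_dim, enc_dim, dec_dim, attn_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.attention = BahdanauAttention(enc_dim, dec_dim, attn_dim)
        self.gru = nn.GRUCell(embed_dim + enc_dim, dec_dim)
        self.out = nn.Linear(dec_dim, vocab_size)

    def forward(self, token, hidden, features):
        # token: (batch,) previous LaTeX token ids
        context, weights = self.attention(features, hidden)
        x = torch.cat([self.embed(token), context], dim=1)
        hidden = self.gru(x, hidden)
        return self.out(hidden), hidden, weights

At inference time, an encoder (for example, a CNN over the input image) would produce the (batch, num_positions, enc_dim) feature grid, and the decoder would be run one token at a time, feeding back its own predictions until an end-of-sequence token is emitted.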
Appears in Collections: № 19(8)

Files in This Item:
File | Description | Size | Format
Vyaznikov_Developing.pdf | – | 946.79 kB | Adobe PDF

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.