DC Field | Value | Language |
dc.contributor.author | Vyaznikov, P. A. | - |
dc.contributor.author | Kotilevets, I. D. | - |
dc.date.accessioned | 2022-02-02T08:56:49Z | - |
dc.date.available | 2022-02-02T08:56:49Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | Vyaznikov, P. A. Developing a seq2seq neural network using visual attention to transform mathematical expressions from images to LaTeX / Vyaznikov P. A., Kotilevets I. D. // Доклады БГУИР. – 2021. – № 19(8). – P. 40–44. – DOI: http://dx.doi.org/10.35596/1729-7648-2021-19-8-40-44. | ru_RU |
dc.identifier.uri | https://libeldoc.bsuir.by/handle/123456789/46657 | - |
dc.description.abstract | The paper presents the development methods and the results of research on the effectiveness of a seq2seq neural network architecture using the Visual Attention mechanism to solve the im2latex problem. The essence of the task is to create a neural network capable of converting an image of a mathematical expression into the corresponding expression in the LaTeX markup language. This problem belongs to the Image Captioning class: the neural network scans the image and, based on the extracted features, generates a description in natural language. The proposed solution uses the seq2seq architecture, which contains the Encoder and Decoder mechanisms, as well as Bahdanau Attention. A series of experiments was conducted to train several neural network models and measure their effectiveness. | ru_RU |
dc.language.iso | en | ru_RU |
dc.publisher | БГУИР | ru_RU |
dc.subject | доклады БГУИР | ru_RU |
dc.subject | im2latex | ru_RU |
dc.subject | seq2seq | ru_RU |
dc.subject | NLP | ru_RU |
dc.subject | neural network | ru_RU |
dc.title | Developing a seq2seq neural network using visual attention to transform mathematical expressions from images to LaTeX | ru_RU |
dc.type | Article | ru_RU |
Appears in Collections: | № 19(8) |
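
The abstract above names the building blocks of the proposed model (an Encoder, a Decoder and Bahdanau Attention over the image features). The record itself contains no code, so the following is only a minimal sketch of the additive (Bahdanau) attention step such a decoder could apply over encoded image regions; the framework (PyTorch), layer names and dimensions are assumptions, not the authors' implementation.

# Minimal sketch (assumption, not the authors' code): Bahdanau-style additive
# attention for a seq2seq decoder attending over flattened image features.
import torch
import torch.nn as nn

class BahdanauAttention(nn.Module):
    def __init__(self, enc_dim: int, dec_dim: int, attn_dim: int):
        super().__init__()
        self.W_enc = nn.Linear(enc_dim, attn_dim)   # projects encoder features
        self.W_dec = nn.Linear(dec_dim, attn_dim)   # projects decoder hidden state
        self.v = nn.Linear(attn_dim, 1)             # scores each image region

    def forward(self, enc_feats: torch.Tensor, dec_hidden: torch.Tensor):
        # enc_feats: (batch, num_regions, enc_dim) -- flattened encoder feature map
        # dec_hidden: (batch, dec_dim)             -- current decoder state
        scores = self.v(torch.tanh(
            self.W_enc(enc_feats) + self.W_dec(dec_hidden).unsqueeze(1)
        ))                                          # (batch, num_regions, 1)
        weights = torch.softmax(scores, dim=1)      # attention weights over regions
        context = (weights * enc_feats).sum(dim=1)  # (batch, enc_dim) context vector
        return context, weights.squeeze(-1)

# Usage with dummy tensors: 4 images, 196 regions (e.g. a 14x14 feature map).
if __name__ == "__main__":
    attn = BahdanauAttention(enc_dim=256, dec_dim=512, attn_dim=128)
    feats = torch.randn(4, 196, 256)
    hidden = torch.randn(4, 512)
    context, weights = attn(feats, hidden)
    print(context.shape, weights.shape)  # torch.Size([4, 256]) torch.Size([4, 196])

At each decoding step the context vector produced this way would be concatenated with the previous token embedding and fed to the decoder, which generates the next LaTeX token; the exact wiring in the paper may differ.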