DC Field | Value | Language |
dc.contributor.author | Kroshchanka, A. | - |
dc.contributor.author | Golovko, V. | - |
dc.coverage.spatial | Минск | en_US |
dc.date.accessioned | 2024-03-01T07:57:37Z | - |
dc.date.available | 2024-03-01T07:57:37Z | - |
dc.date.issued | 2023 | - |
dc.identifier.citation | Kroshchanka, A. Neural Networks Interpretation Improvement / A. Kroshchanka, V. Golovko // Pattern Recognition and Information Processing (PRIP'2023) = Распознавание образов и обработка информации (2023) : Proceedings of the 16th International Conference, October 17–19, 2023, Minsk, Belarus / United Institute of Informatics Problems of the National Academy of Sciences of Belarus. – Minsk, 2023. – P. 45–48. | en_US |
dc.identifier.uri | https://libeldoc.bsuir.by/handle/123456789/54457 | - |
dc.description.abstract | The paper is devoted to studying the interpretability of neural network models. Particular attention is paid to the training of heavy models with a large number of parameters. A generalized approach to pretraining deep models is proposed, which improves final accuracy, makes the model output easier to interpret, and can be used when training on small datasets. The effectiveness of the proposed approach is demonstrated on examples of training deep neural network models on the MNIST dataset. The obtained results can be used to train fully connected layers, as well as other types of layers after a flattening operation is applied. | en_US |
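The abstract does not spell out the pretraining procedure itself; the sketch below is only a generic illustration of greedy layer-wise (autoencoder-based) pretraining of fully connected layers on flattened MNIST images, written in PyTorch. The layer widths, learning rate, and single training epoch are illustrative assumptions, not the authors' settings.

```python
# Generic sketch: greedy layer-wise pretraining of dense layers on MNIST.
# Not the authors' exact method; autoencoder-based pretraining is assumed.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Flattened MNIST images (28*28 = 784 inputs), matching the abstract's note
# that the approach targets fully connected layers after flattening.
train_data = datasets.MNIST(root="data", train=True, download=True,
                            transform=transforms.ToTensor())
loader = DataLoader(train_data, batch_size=128, shuffle=True)

layer_sizes = [784, 256, 64]  # hypothetical layer widths
encoders = [nn.Linear(i, o) for i, o in zip(layer_sizes, layer_sizes[1:])]

# Each dense layer is trained as the encoder of a shallow autoencoder on the
# representation produced by the layers pretrained before it.
for depth, enc in enumerate(encoders):
    dec = nn.Linear(enc.out_features, enc.in_features)   # throwaway decoder
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()),
                           lr=1e-3)
    for epoch in range(1):                                # 1 epoch for brevity
        for x, _ in loader:
            h = x.view(x.size(0), -1)                     # flatten images
            with torch.no_grad():                         # freeze earlier layers
                for prev in encoders[:depth]:
                    h = torch.relu(prev(h))
            recon = dec(torch.relu(enc(h)))               # shallow autoencoder
            loss = nn.functional.mse_loss(recon, h)
            opt.zero_grad()
            loss.backward()
            opt.step()

# The pretrained encoders are then stacked under a softmax output layer and
# fine-tuned with supervised cross-entropy (fine-tuning loop omitted here).
classifier = nn.Sequential(
    nn.Flatten(),
    encoders[0], nn.ReLU(),
    encoders[1], nn.ReLU(),
    nn.Linear(layer_sizes[-1], 10),
)
```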
dc.language.iso | en | en_US |
dc.publisher | BSU | en_US |
dc.subject | conference proceedings | en_US |
dc.subject | deep neural network | en_US |
dc.subject | pretraining | en_US |
dc.subject | Explainable AI | en_US |
dc.title | Neural Networks Interpretation Improvement | en_US |
dc.type | Article | en_US |
Appears in Collections: | Pattern Recognition and Information Processing (PRIP'2023) = Распознавание образов и обработка информации (2023) |