DC Field | Value | Language |
dc.contributor.author | Assanovich, B. | - |
dc.contributor.author | Baniukevich, E. | - |
dc.coverage.spatial | Минск | en_US |
dc.date.accessioned | 2024-06-20T13:46:42Z | - |
dc.date.available | 2024-06-20T13:46:42Z | - |
dc.date.issued | 2024 | - |
dc.identifier.citation | Assanovich, B. Using wavelet scattering transform to create voiceprint of a password / B. Assanovich, E. Baniukevich // Технические средства защиты информации : тезисы докладов ХXII Белорусско-российской научно-технической конференции, Минск, 12 июня 2024 г. / Белорусский государственный университет информатики и радиоэлектроники ; редкол.: Т. В. Борботько [и др.]. – Минск, 2024. – С. 7. | en_US |
dc.identifier.uri | https://libeldoc.bsuir.by/handle/123456789/56127 | - |
dc.description.abstract | Today, new biometric technologies are increasingly used in protocols and interfaces that implement user identification and verification. Voice identification, covering both text-dependent and text-independent speech recognition, is widely exploited in human-machine interfaces. An example is the developer ID R&D, owned by the Mitek group of companies, which offers an AI-based speaker recognition product, IDVoice, combining three-modal biometric capture with liveness detection, digital ID issuance, and mobile authentication. The ID R&D SDK produces a so-called “voiceprint”, a template analogous to a fingerprint that can be used for user verification. Shallow and Deep Neural Networks (DNNs) are usually employed in these technologies. | en_US |
dc.language.iso | en | en_US |
dc.publisher | БГУИР | en_US |
dc.subject | conference proceedings | en_US |
dc.subject | biometric technologies | en_US |
dc.subject | voice identification | en_US |
dc.subject | Deep Neural Networks | en_US |
dc.title | Using wavelet scattering transform to create voiceprint of a password | en_US |
dc.type | Article | en_US |
Appears in Collections: | ТСЗИ 2024 |
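The wavelet scattering transform named in the title can be illustrated with the kymatio library. The sketch below is only an assumption about how a fixed-length voiceprint might be derived from a spoken password; the parameter choices (J, Q, fixed input length) and the cosine-similarity check are hypothetical and do not reproduce the authors' method.

```python
# Illustrative sketch: derive a fixed-length "voiceprint" vector from a short
# audio recording with a 1-D wavelet scattering transform (kymatio).
# Parameters and the similarity rule are assumptions for illustration only.
import numpy as np
from kymatio.numpy import Scattering1D


def voiceprint(audio: np.ndarray, n: int = 2 ** 16, J: int = 8, Q: int = 12) -> np.ndarray:
    """Return an L2-normalised scattering-coefficient vector for one utterance."""
    x = np.zeros(n, dtype=np.float64)
    x[: min(len(audio), n)] = audio[:n]           # pad or trim to a fixed length
    Sx = Scattering1D(J=J, shape=n, Q=Q)(x)       # (n_coeffs, n / 2**J) scattering coefficients
    v = Sx.mean(axis=-1)                          # time-average -> fixed-length template
    return v / (np.linalg.norm(v) + 1e-12)


def same_speaker(v1: np.ndarray, v2: np.ndarray, threshold: float = 0.9) -> bool:
    """Toy verification rule: cosine similarity between two voiceprint templates."""
    return float(np.dot(v1, v2)) >= threshold
```

In practice the scattering filter bank would be built once and reused, enrollment would average templates over several repetitions of the password, and the decision threshold would be tuned on held-out data rather than fixed as above.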