DC Field | Value | Language |
dc.contributor.author | Mohbey, K. K. | - |
dc.contributor.author | Kesswani, N. | - |
dc.contributor.author | Yunevich, N. | - |
dc.contributor.author | Agarwal, B. | - |
dc.contributor.author | Sterjanov, M. | - |
dc.contributor.author | Vishnyakova, M. | - |
dc.coverage.spatial | South Korea | en_US |
dc.date.accessioned | 2025-04-21T07:52:22Z | - |
dc.date.available | 2025-04-21T07:52:22Z | - |
dc.date.issued | 2025 | - |
dc.identifier.citation | Hate Speech Identification and Categorization on Social Media Using Bi-LSTM: An Information Science Perspective / K. K. Mohbey, N. Kesswani, N. Yunevich [et al.] // Journal of Information Science Theory and Practice. – 2025. – Vol. 13, No. 1. – P. 51–69. | en_US |
dc.identifier.uri | https://libeldoc.bsuir.by/handle/123456789/59586 | - |
dc.description.abstract | Online social networks allow individuals with otherwise limited influence to exert significant control over others' lives, exploiting the anonymity or social disconnect offered by the Internet to engage in harassment. Women are commonly attacked owing to the prevalence of sexism in our culture. Efforts to detect misogyny have improved, but its subtle and pervasive nature makes it challenging to identify, indicating that statistical methods alone may not be enough. This research article explores the use of deep learning techniques for the automatic detection of hate speech against women on Twitter. It offers further insights into the practical issues of automating hate speech detection on social media platforms by exploiting the model's capacity to grasp linguistic nuance and context. The results highlight the model's applicability to information science by addressing the growing need for better retrieval of harmful content, scalable content moderation, and metadata organization. This work emphasizes content control in the digital ecosystem. The deep learning-based methods discussed improve the retrieval of data connected to hate speech in the context of a digital archive or social media monitoring system, facilitating study in fields including online harassment, policy formation, and social justice campaigning. The findings not only advance the field of natural language processing but also have practical implications for social media platforms, policymakers, and advocacy groups seeking to combat online harassment and foster inclusive digital spaces for women. | en_US |
dc.language.iso | en | en_US |
dc.publisher | Korea Institute of Science and Technology Information | en_US |
dc.subject | publications by scientists | en_US |
dc.subject | hate speech detection | en_US |
dc.subject | social media | en_US |
dc.subject | deep learning | en_US |
dc.subject | machine learning | en_US |
dc.subject | metadata organization | en_US |
dc.subject | content labelling | en_US |
dc.title | Hate Speech Identification and Categorization on Social Media Using Bi-LSTM: An Information Science Perspective | en_US |
dc.type | Article | en_US |
dc.identifier.DOI | https://doi.org/10.1633/JISTaP.2025.13.1.4 | - |
Appears in Collections: | Publications in foreign editions |