Please use this identifier to cite or link to this item:
https://gnanaganga.inflibnet.ac.in:8443/jspui/handle/123456789/16846
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Gomathy, B | - |
dc.contributor.author | Jayachitra, T | - |
dc.contributor.author | Rajkumar, R | - |
dc.contributor.author | Lalithamani, V | - |
dc.contributor.author | Pradeep Ghantasala, G S | - |
dc.contributor.author | Anantraj, I | - |
dc.contributor.author | Shyamala, C | - |
dc.contributor.author | Vinoth Rajkumar, G | - |
dc.contributor.author | Saranya, S | - |
dc.date.accessioned | 2024-12-12T09:38:15Z | - |
dc.date.available | 2024-12-12T09:38:15Z | - |
dc.date.issued | 2024 | - |
dc.identifier.citation | Vol. 31, No. 6s; pp. 388-400 | en_US |
dc.identifier.issn | 1074-133X | - |
dc.identifier.uri | https://doi.org/10.52783/cana.v31.1231 | - |
dc.identifier.uri | https://gnanaganga.inflibnet.ac.in:8443/jspui/handle/123456789/16846 | - |
dc.description.abstract | Adversarial training has emerged as a powerful technique for improving the reliability of natural language processing (NLP) models, particularly for sentiment analysis and machine translation. By introducing adversarial examples during the training process, models are exposed to perturbations that challenge their understanding and interpretation of textual data. This process helps in developing models that are not only accurate but also resilient to manipulations and noise in real-world scenarios. In sentiment analysis, adversarial training ensures that models can maintain consistent performance despite variations in input text, such as paraphrasing or the inclusion of misleading sentiment indicators. This robustness is crucial for applications involving user-generated content, where linguistic diversity and intentional manipulations are common. In the context of machine translation, adversarial training contributes to the development of models that can handle diverse linguistic structures and idiomatic expressions, which are often sources of errors in traditional models. By simulating adversarial attacks that introduce such complexities, the training process makes models more adept at preserving the semantic integrity of translated texts across different languages. This improved robustness is particularly beneficial for applications requiring high translation accuracy and reliability, such as international communication, content localization, and multilingual information retrieval. Overall, adversarial training provides a significant advancement in creating more resilient and effective NLP models for sentiment analysis and machine translation. © 2024, International Publications. All rights reserved. | en_US |
dc.language.iso | en | en_US |
dc.publisher | Communications on Applied Nonlinear Analysis | en_US |
dc.publisher | International Publications | en_US |
dc.subject | Adversarial Examples | en_US |
dc.subject | Adversarial Training | en_US |
dc.subject | Linguistic Diversity | en_US |
dc.subject | Machine Translation | en_US |
dc.subject | Model Robustness | en_US |
dc.subject | Robust Natural Language Processing | en_US |
dc.subject | Sentiment Analysis | en_US |
dc.subject | Text Perturbations | en_US |
dc.subject | Translation Accuracy | en_US |
dc.title | Adversarial Training for Robust Natural Language Processing: a Focus on Sentiment Analysis and Machine Translation | en_US |
dc.type | Article | en_US |
Appears in Collections: | Journal Articles |
Files in This Item:
File | Size | Format | |
---|---|---|---|
14_01_Communications+on+Applied+Nonlinear+Analysis1.pdf | 412.15 kB | Adobe PDF | View/Open |