Please use this identifier to cite or link to this item: https://gnanaganga.inflibnet.ac.in:8443/jspui/handle/123456789/4731
Full metadata record
DC Field                  Value                                                                Language
dc.contributor.author     Debnath, Saswati                                                     -
dc.contributor.author     Roy, Pinki                                                           -
dc.contributor.author     Namasudra, Suyel                                                     -
dc.contributor.author     Crespo, Ruben Gonzalez                                               -
dc.date.accessioned       2024-01-10T09:19:11Z                                                 -
dc.date.available         2024-01-10T09:19:11Z                                                 -
dc.date.issued            2022-07-12                                                           -
dc.identifier.issn        1573-3432                                                            -
dc.identifier.issn        0162-3257                                                            -
dc.identifier.uri         https://doi.org/10.1007/s10803-022-05654-4                           -
dc.identifier.uri         http://gnanaganga.inflibnet.ac.in:8080/jspui/handle/123456789/4731   -
dc.description.abstract   Education is a fundamental right that enriches everyone's life. However, physically challenged people are often debarred from the general and advanced education system. An Audio-Visual Automatic Speech Recognition (AV-ASR) based system is useful for improving the education of physically challenged people by providing hands-free computing: they can communicate with the learning system through AV-ASR. However, tracing the lips correctly for the visual modality is challenging. This paper therefore combines an appearance-based visual feature with a co-occurrence statistical measure for visual speech recognition: Local Binary Pattern-Three Orthogonal Planes (LBP-TOP) and the Grey-Level Co-occurrence Matrix (GLCM) are proposed for extracting visual speech information. The experimental results show that the proposed system achieves 76.60% accuracy for visual speech recognition and 96.00% accuracy for audio speech recognition. (An illustrative LBP-TOP/GLCM feature-extraction sketch follows this metadata table.)   en_US
dc.language.iso           en                                                                   en_US
dc.publisher              Journal of Autism and Developmental Disorders                        en_US
dc.subject                AV-ASR                                                               en_US
dc.subject                LBP-TOP                                                              en_US
dc.subject                GLCM                                                                 en_US
dc.subject                MFCC                                                                 en_US
dc.subject                Clustering algorithm                                                 en_US
dc.subject                Supervised learning                                                  en_US
dc.title                  Audio-Visual Automatic Speech Recognition Towards Education for Disabilities   en_US
dc.type                   Article                                                              en_US
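
The sketch below illustrates the kind of visual feature extraction the abstract names (LBP-TOP texture histograms plus GLCM co-occurrence statistics). It is not the authors' implementation: it assumes Python with NumPy and scikit-image (0.19 or later, where the functions are spelled graycomatrix/graycoprops); all function names, parameter values, and the random stand-in clip are our own choices; and the LBP-TOP step is simplified to the three central orthogonal slices of the video volume rather than accumulating histograms over every pixel as the full descriptor does.

    import numpy as np
    from skimage.feature import local_binary_pattern, graycomatrix, graycoprops

    def lbp_top_histogram(volume, n_points=8, radius=1):
        """LBP histograms from the XY, XT, and YT planes, concatenated.

        volume: uint8 array of shape (T, H, W) holding a grayscale lip-region clip.
        Simplified LBP-TOP: uses the central slice of each orthogonal plane
        instead of accumulating histograms over the whole volume.
        """
        t, h, w = volume.shape
        planes = (volume[t // 2],          # XY plane (spatial texture)
                  volume[:, h // 2, :],    # XT plane (horizontal motion texture)
                  volume[:, :, w // 2])    # YT plane (vertical motion texture)
        n_bins = n_points + 2              # number of patterns for method="uniform"
        hists = []
        for plane in planes:
            lbp = local_binary_pattern(plane, n_points, radius, method="uniform")
            hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
            hists.append(hist)
        return np.concatenate(hists)       # 3 * (n_points + 2) values

    def glcm_features(frame, distances=(1,), angles=(0.0, np.pi / 2)):
        """Co-occurrence statistics (contrast, homogeneity, energy, correlation)."""
        glcm = graycomatrix(frame, distances=distances, angles=angles,
                            levels=256, symmetric=True, normed=True)
        props = ("contrast", "homogeneity", "energy", "correlation")
        return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

    # Random stand-in for a 25-frame, 64x64 lip-region clip; a real input
    # would be a lip-tracked region of interest cropped from video frames.
    clip = np.random.randint(0, 256, size=(25, 64, 64), dtype=np.uint8)
    feature = np.concatenate([lbp_top_histogram(clip), glcm_features(clip[0])])
    print(feature.shape)  # (38,) = 3*10 LBP bins + 4 props * 1 distance * 2 angles

Such a concatenated descriptor would then feed the classification stage (the record's keywords mention clustering and supervised learning), while the audio modality would use MFCC features.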
Appears in Collections:Journal Articles

Files in This Item:
There are no files associated with this item.

