Please use this identifier to cite or link to this item: https://gnanaganga.inflibnet.ac.in:8443/jspui/handle/123456789/2232
Full metadata record
DC Field | Value | Language
dc.contributor.author | Jebakumar Immanuel, D | -
dc.contributor.author | Poovizhi, P | -
dc.contributor.author | Margret Sharmila, F | -
dc.contributor.author | Selvapandian, D | -
dc.contributor.author | Thomas, Aby K | -
dc.contributor.author | Shankar, C K | -
dc.date.accessioned | 2023-12-09T08:56:01Z | -
dc.date.available | 2023-12-09T08:56:01Z | -
dc.date.issued | 2022 | -
dc.identifier.citation | Vol. 444; pp. 749-763 | en_US
dc.identifier.isbn | 9789811924996 | -
dc.identifier.isbn | 9789811925009 | -
dc.identifier.issn | 2367-3370 | -
dc.identifier.issn | 2367-3389 | -
dc.identifier.uri | https://doi.org/10.1007/978-981-19-2500-9_55 | -
dc.identifier.uri | http://gnanaganga.inflibnet.ac.in:8080/jspui/handle/123456789/2232 | -
dc.description.abstract | In India, nearly 1.6 million children are blind or visually impaired from birth. Visually impaired persons face many challenges in day-to-day life, such as requiring guidance to use public transport or to walk independently. One of the most common problems is dependency on others for the purchase of household items. Hence, a solution is required that enables them to identify and purchase products in the supermarket without assistance. This research presents a portable camera-based assistant that helps them identify grocery items, together with a framework to read the text on product labels. The experimental system consists of three modules: object detection and classification, reading the text from the label, and text-to-audio conversion. The camera input is fed to a Raspberry Pi kit, and the captured video is segmented into pictures of the grocery items. The deep learning algorithm proposed for object detection is You Only Look Once (YOLO), whose detection accuracy is considerably higher than that of existing algorithms. The captured image of the grocery item label is preprocessed to blur the unwanted text, and optical character recognition (OCR) is applied to the label to convert it into machine-readable text. Finally, the processed text is converted to audio to guide the user. Thus, a flexible, user-friendly supervisory mechanism is developed to help visually impaired people. © 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. | en_US
dc.language.iso | en | en_US
dc.publisher | Expert Clouds and Applications: Proceedings of ICOECA 2022 | en_US
dc.subject | Deep learning | en_US
dc.subject | Optical character recognition | en_US
dc.subject | Smart supermarket assistance | en_US
dc.subject | Visually impaired | en_US
dc.title | Design of Smart Super Market Assistance for the Visually Impaired People Using YOLO Algorithm | en_US
dc.type | Article | en_US
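The abstract describes a three-module pipeline: YOLO object detection, label OCR, and text-to-audio conversion. The following minimal Python sketch illustrates only the glue logic between such modules, filtering hypothetical detection results by confidence and composing the sentence that would be handed to a speech engine. The data layout (`label`/`confidence` dicts), threshold value, and function names are illustrative assumptions; they are not taken from the paper, and the camera, YOLO, OCR, and audio stages are represented only as comments.

```python
# Sketch of the decision logic between the pipeline stages described in the
# abstract. Real deployments would obtain `detections` from a YOLO model run
# on Raspberry Pi camera frames and pass the returned sentence to a TTS engine;
# both are stubbed out here.

def filter_detections(detections, min_conf=0.5):
    """Keep only detections at or above a confidence threshold.

    `detections` is assumed to be a list of {"label": str, "confidence": float}
    dicts, a simplified stand-in for a YOLO model's output.
    """
    return [d for d in detections if d["confidence"] >= min_conf]


def announcement(detections):
    """Build the sentence that the text-to-audio module would speak."""
    if not detections:
        return "No grocery item detected."
    labels = ", ".join(d["label"] for d in detections)
    return f"Detected: {labels}."


if __name__ == "__main__":
    # Hypothetical detections for one captured frame.
    raw = [
        {"label": "cereal box", "confidence": 0.91},
        {"label": "milk carton", "confidence": 0.34},  # below threshold, dropped
    ]
    kept = filter_detections(raw)
    print(announcement(kept))  # only the high-confidence item is announced
```

In the system the abstract describes, the OCR stage would contribute a second sentence (the label text) to the same audio queue; keeping the announcement builder separate from the detector makes that composition straightforward.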
Appears in Collections:Conference Papers

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.