Please use this identifier to cite or link to this item:
https://gnanaganga.inflibnet.ac.in:8443/jspui/handle/123456789/16513
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Prasad, B M G | - |
dc.contributor.author | Jopate, Rachappa | - |
dc.contributor.author | Savita, Pankaj | - |
dc.contributor.author | Basi Reddy, A | - |
dc.contributor.author | Shankar, B Prabu | - |
dc.contributor.author | Arunkumar, M S | - |
dc.date.accessioned | 2024-08-29T05:41:23Z | - |
dc.date.available | 2024-08-29T05:41:23Z | - |
dc.date.issued | 2023 | - |
dc.identifier.citation | pp. 417-423 | en_US |
dc.identifier.isbn | 9798350371406 | - |
dc.identifier.issn | 2640-074X | - |
dc.identifier.uri | https://doi.org/10.1109/ICIIP61524.2023.10537705 | - |
dc.identifier.uri | https://gnanaganga.inflibnet.ac.in:8443/jspui/handle/123456789/16513 | - |
dc.description.abstract | In the age of big data and the Internet of Things (IoT), edge computing has become essential for tackling latency, real-time processing, and data-scale challenges. This paper examines the need for edge computing in big data environments and explores distributed deep learning methodologies spanning a diverse array of deep learning models. In particular, a novel paradigm, the Data Forwarding based Decentralized Deep Neural Network (DF-DDNN), is introduced to achieve low-latency IoT-Edge computation. To reduce latency in IoT ecosystems, the DF-DDNN model exploits the processing capabilities of edge devices. By strategically distributing data processing and computation at the edge, the model seeks to overcome inherent limitations of conventional IoT architectures, with a focus on network, processing, and overall latencies. The paper provides a thorough performance evaluation comparing the DF-DDNN model against conventional IoT models across several metrics. The results show that the proposed DF-DDNN model outperforms traditional IoT systems, with reduced network latency, faster processing, and lower overall latency. This work underscores the importance of edge computing for big data and deep learning applications; the DF-DDNN model emerges as a promising avenue for advancing IoT-Edge computation, addressing latency concerns and improving the performance of IoT-enabled systems. © 2023 IEEE. | en_US |
dc.language.iso | en | en_US |
dc.publisher | Proceedings of the IEEE International Conference on Image Information Processing | en_US |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | en_US |
dc.subject | Big Data | en_US |
dc.subject | Deep Learning | en_US |
dc.subject | Edge Computing | en_US |
dc.subject | IoT (Internet of Things) | en_US |
dc.subject | Latency and Distributed Computation | en_US |
dc.title | Enhancing IoT-Edge Computation with Data Forwarding Based Decentralized Deep Neural Networks | en_US |
dc.type | Article | en_US |
Appears in Collections: Conference Papers
Files in This Item:
There are no files associated with this item.
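
The abstract above describes forwarding IoT data to edge devices so that network, processing, and overall latency can be traded off. Since no files or implementation accompany this item, the short Python sketch below only illustrates that trade-off; the node names, latency figures, and the minimum-latency forwarding policy are illustrative assumptions, not details taken from the DF-DDNN paper.

```python
# Illustrative sketch only: the latency values and forwarding policy below are
# assumptions, not the DF-DDNN method described in the record above.
from dataclasses import dataclass


@dataclass
class Node:
    """A compute target an IoT device can forward a data sample to."""
    name: str
    network_latency_ms: float     # assumed one-way transfer time per sample
    processing_latency_ms: float  # assumed inference time per sample on this node

    def total_latency(self) -> float:
        # End-to-end latency = transfer to the node + processing on the node.
        return self.network_latency_ms + self.processing_latency_ms


def forward_to_fastest(nodes: list[Node]) -> Node:
    # A simple data-forwarding policy: send the sample to whichever node
    # minimises the estimated end-to-end latency.
    return min(nodes, key=lambda n: n.total_latency())


if __name__ == "__main__":
    # Made-up numbers: a distant cloud with fast inference vs. nearby edge nodes
    # with slower inference but much shorter network paths.
    cloud = Node("cloud", network_latency_ms=120.0, processing_latency_ms=8.0)
    edge_a = Node("edge-a", network_latency_ms=5.0, processing_latency_ms=30.0)
    edge_b = Node("edge-b", network_latency_ms=12.0, processing_latency_ms=18.0)

    nodes = [cloud, edge_a, edge_b]
    for node in nodes:
        print(f"{node.name}: {node.total_latency():.1f} ms end-to-end")
    print(f"forward sample to: {forward_to_fastest(nodes).name}")
```

Running the script prints each node's estimated end-to-end latency and the node selected by the forwarding policy, showing why offloading to a nearby edge node can beat a faster but more distant cloud.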
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.