Please use this identifier to cite or link to this item:
https://gnanaganga.inflibnet.ac.in:8443/jspui/handle/123456789/4797
Title: | Medical Ultrasound Image Segmentation Using Multi-Residual U-Net Architecture |
Authors: | V B, Shereena; G, Raju |
Keywords: | Ultrasound image; Segmentation; Non-local means; Convolutional neural networks; Multi-residual U-Net |
Issue Date: | 24-Aug-2023 |
Publisher: | Multimedia Tools and Applications |
Abstract: | Advances in medical imaging modalities facilitate the early and accurate detection of tumors of various types. B-mode ultrasound imaging is a preferred modality for the diagnosis and identification of tumors, but owing to the noise and artifacts present, correct interpretation of lesion regions becomes a difficult task for an inexperienced radiologist. In this context, an efficient and reliable computer-aided segmentation system is preferred for extracting regions of interest. Recently, conventional segmentation methods have been replaced by deep learning methods. In this article, a novel Multi-Residual U-Net model is proposed for the segmentation of ultrasound medical images. The architecture adopts residual blocks to improve the performance of deep convolutional networks and a loss function that addresses the class imbalance issue. To improve image quality and reduce speckle noise, input images are pre-processed using an optimized Non-Local Means filter. Three benchmark B-mode ultrasound image datasets of 200 breast lesion images, 504 skeletal images, and 647 breast lesion images are used for experimentation. Experimental results demonstrate that the proposed model performs more accurate segmentation than the five deep models chosen for the study. |
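The abstract names three ingredients: residual convolutional blocks inside a U-Net, a loss function that counters class imbalance, and Non-Local Means pre-filtering of the input images. The following is only an illustrative sketch of those ideas, not the authors' implementation: the block layout, the Dice-style loss, and the NLM parameters shown here are assumptions made for demonstration (Python / PyTorch).

    # Illustrative sketch only: the paper's exact block design, loss, and
    # filter settings are not given in this abstract; everything below is an
    # assumption chosen to demonstrate the general technique.
    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        """Two 3x3 convolutions with a skip connection, the kind of unit
        commonly used to build residual U-Net encoders and decoders."""
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=1),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
                nn.Conv2d(out_ch, out_ch, 3, padding=1),
                nn.BatchNorm2d(out_ch),
            )
            # 1x1 projection so the skip path matches the output channels
            self.skip = (nn.Identity() if in_ch == out_ch
                         else nn.Conv2d(in_ch, out_ch, 1))
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            return self.act(self.body(x) + self.skip(x))

    def dice_loss(logits, target, eps=1.0):
        """Soft Dice loss, a common choice when the lesion region is a small
        (imbalanced) fraction of the ultrasound image."""
        prob = torch.sigmoid(logits)
        inter = (prob * target).sum(dim=(2, 3))
        union = prob.sum(dim=(2, 3)) + target.sum(dim=(2, 3))
        return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()

    # Pre-filtering could use OpenCV's non-local means; the paper describes an
    # *optimized* NLM filter, so the parameters below are placeholders only:
    # import cv2
    # denoised = cv2.fastNlMeansDenoising(img_u8, h=10,
    #                                     templateWindowSize=7,
    #                                     searchWindowSize=21)

In such a pipeline the denoised image would be fed through a stack of ResidualBlock units arranged in the usual U-Net encoder-decoder shape, with dice_loss (or a similar imbalance-aware loss) driving training; the specific depths and channel widths used by the authors are not stated in the abstract.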
URI: | https://doi.org/10.1007/s11042-023-16461-z ; http://gnanaganga.inflibnet.ac.in:8080/jspui/handle/123456789/4797 |
ISSN: | 1573-7721; 1380-7501 |
Appears in Collections: | Journal Articles |