Thota C, Jackson Samuel D, Musa Jaber M, Kamruzzaman MM, Ravi RV, Gnanasigamani LJ, Premalatha R. Image Smart Segmentation Analysis Against Diabetic Foot Ulcer Using Internet of Things with Virtual Sensing. Big Data 2024;12:155-172. [PMID: 37289808 DOI: 10.1089/big.2022.0283]
Abstract
Diabetic foot ulcer (DFU) is a worldwide problem, and prevention is crucial. Image segmentation analysis plays a significant role in DFU identification, but it can yield differing segmentations of the same object, as well as incomplete, imprecise, and otherwise problematic results. To address these issues, a method of DFU image segmentation analysis through the Internet of Things with virtual sensing of semantically similar objects is implemented, analyzing four levels of range segmentation (region-based, edge-based, image-based, and computer-aided design-based range segmentation) for deeper segmentation of images. In this study, the multimodal data are compressed with object co-segmentation for semantic segmentation, and the result provides a better validity and reliability assessment. The experimental results demonstrate that the proposed model can perform segmentation analysis efficiently, with a lower error rate than existing methodologies. The findings on the multiple-image dataset show that DFU obtains average segmentation scores of 90.85% and 89.03%, respectively, for the two labeled ratios before DFU with virtual sensing and after DFU without virtual sensing (i.e., 25% and 30%), an increase of 10.91% and 12.22% over the previous best results. In live DFU studies, the proposed system improves by 59.1% over existing deep segmentation-based techniques, and its average image smart segmentation improvements over its contemporaries are 15.06%, 23.94%, and 45.41%, respectively. The proposed range-based segmentation achieves 73.9% interobserver reliability on the positive likelihood ratio test set with only 0.25 million parameters at the given pace of labeled data.
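The abstract names four segmentation levels but does not give implementation details. The sketch below is only a minimal illustration of two of those levels (region-based and edge-based) on a synthetic grayscale image; the thresholds, the Sobel edge criterion, and the pipeline are assumptions for illustration and are not the authors' method, which additionally involves IoT virtual sensing and object co-segmentation.

    # Illustrative sketch only: region-based and edge-based segmentation levels,
    # under assumed thresholds; not the paper's full DFU pipeline.
    import numpy as np
    from scipy import ndimage

    def region_based_segmentation(img: np.ndarray, threshold: float = 0.5) -> np.ndarray:
        """Label connected regions whose intensity exceeds a (hypothetical) threshold."""
        mask = img > threshold * img.max()
        labels, _ = ndimage.label(mask)  # connected-component labelling
        return labels

    def edge_based_segmentation(img: np.ndarray, edge_thresh: float = 0.2) -> np.ndarray:
        """Binary edge map from Sobel gradient magnitude (assumed edge criterion)."""
        gx = ndimage.sobel(img, axis=0)
        gy = ndimage.sobel(img, axis=1)
        magnitude = np.hypot(gx, gy)
        return (magnitude > edge_thresh * magnitude.max()).astype(np.uint8)

    if __name__ == "__main__":
        # Synthetic grayscale image: a bright elliptical blob on a dark background.
        yy, xx = np.mgrid[0:128, 0:128]
        img = np.exp(-(((xx - 64) / 25.0) ** 2 + ((yy - 64) / 18.0) ** 2))
        regions = region_based_segmentation(img)
        edges = edge_based_segmentation(img)
        print("region labels:", regions.max(), "edge pixels:", int(edges.sum()))

In a full system along the lines the abstract describes, the outputs of such per-level segmenters would presumably be fused with the image-based and CAD-based levels and refined by co-segmentation, but that fusion step is not specified in the abstract.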