1. Reifs Jiménez D, Casanova-Lozano L, Grau-Carrión S, Reig-Bolaño R. Artificial Intelligence Methods for Diagnostic and Decision-Making Assistance in Chronic Wounds: A Systematic Review. J Med Syst 2025; 49:29. [PMID: 39969674] [PMCID: PMC11839728] [DOI: 10.1007/s10916-025-02153-8]
Abstract
Chronic wounds, which take over four weeks to heal, are a major global health issue linked to conditions such as diabetes, venous insufficiency, arterial diseases, and pressure ulcers. These wounds cause pain, reduce quality of life, and impose significant economic burdens. This systematic review explores the impact of technological advancements on the diagnosis of chronic wounds, focusing on how computational methods in wound image and data analysis improve diagnostic precision and patient outcomes. A literature search was conducted in databases including ACM, IEEE, PubMed, Scopus, and Web of Science, covering studies from 2013 to 2023. The focus was on articles applying complex computational techniques to analyze chronic wound images and clinical data. Exclusion criteria were non-image samples, review articles, and non-English or non-Spanish texts. From 2,791 articles identified, 93 full-text studies were selected for final analysis. The review identified significant advancements in tissue classification, wound measurement, segmentation, prediction of wound aetiology, risk indicators, and healing potential. The use of image-based and data-driven methods has proven to enhance diagnostic accuracy and treatment efficiency in chronic wound care. The integration of technology into chronic wound diagnosis has shown a transformative effect, improving diagnostic capabilities, patient care, and reducing healthcare costs. Continued research and innovation in computational techniques are essential to unlock their full potential in managing chronic wounds effectively.
Affiliation(s)
- David Reifs Jiménez, Digital Care Research Group, University of Vic, C/ Sagrada Familia, 7, 08500, Vic, Barcelona, Spain
- Lorena Casanova-Lozano, Digital Care Research Group, University of Vic, C/ Sagrada Familia, 7, 08500, Vic, Barcelona, Spain
- Sergi Grau-Carrión, Digital Care Research Group, University of Vic, C/ Sagrada Familia, 7, 08500, Vic, Barcelona, Spain
- Ramon Reig-Bolaño, Digital Care Research Group, University of Vic, C/ Sagrada Familia, 7, 08500, Vic, Barcelona, Spain
2. Pandey B, Joshi D, Arora AS. A deep learning based experimental framework for automatic staging of pressure ulcers from thermal images. Quantitative InfraRed Thermography Journal 2024:1-21. [DOI: 10.1080/17686733.2024.2390719]
Affiliation(s)
- Bhaskar Pandey, Department of EIE, Sant Longowal Institute of Engineering and Technology, Sangrur, India
- Deepak Joshi, Centre for Biomedical Engineering, Indian Institute of Technology Delhi, Hauz Khas, India
- Ajat Shatru Arora, Department of EIE, Sant Longowal Institute of Engineering and Technology, Sangrur, India
3. Liu H, Hu J, Zhou J, Yu R. Application of deep learning to pressure injury staging. J Wound Care 2024; 33:368-378. [PMID: 38683775] [DOI: 10.12968/jowc.2024.33.5.368]
Abstract
OBJECTIVE Accurate assessment of pressure injuries (PIs) is necessary for a good outcome. Junior and non-specialist nurses have less experience with PIs and lack clinical practice, and so have difficulty staging them accurately. In this work, a deep learning-based system for PI staging and tissue classification is proposed to improve staging accuracy and efficiency in clinical practice and reduce healthcare costs. METHOD A total of 1610 cases of PI and their corresponding photographs were collected from clinical practice; each sample was accurately staged and its tissues labelled by experts to train a Mask Region-based Convolutional Neural Network (Mask R-CNN; Facebook Artificial Intelligence Research, Meta, US) object detection and instance segmentation network. A recognition system was set up to automatically stage and classify the tissues of remotely uploaded PI photographs. RESULTS On a test set of 100 samples, the average precision of this model for stage recognition reached 0.603, which exceeded that of the medical personnel involved in the comparative evaluation, including an enterostomal therapist. CONCLUSION In this study, the deep learning-based PI staging system achieved the evaluation performance of a nurse with professional training in wound care. This low-cost system could help overcome the difficulty junior and non-specialist nurses have in identifying PIs, and provide valuable auxiliary clinical information.
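The 0.603 figure above is an average precision. As a rough, simplified illustration (ranked-retrieval AP rather than the full COCO-style detection protocol typically used with Mask R-CNN, and with made-up relevance flags), it can be computed like this:

```python
def average_precision(ranked_relevance):
    """Average precision over a ranked list of binary relevance labels.

    ranked_relevance: 0/1 flags ordered by descending model confidence.
    Returns the mean of precision@k taken at each relevant (1) position.
    """
    hits = 0
    precisions = []
    for k, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / max(hits, 1)

# Hypothetical detections ranked by confidence; 1 = correct stage, 0 = wrong.
print(average_precision([1, 0, 1, 1, 0]))  # (1/1 + 2/3 + 3/4) / 3 ≈ 0.806
```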
Affiliation(s)
- Han Liu, Jiulongpo District People's Hospital, Chongqing, China
- Juan Hu, The First Affiliated Hospital of Chongqing Medical and Pharmaceutical College, Chongqing, China
- Rong Yu, Shulan Hospital, Hangzhou, China
4. Rippon MG, Fleming L, Chen T, Rogers AA, Ousey K. Artificial intelligence in wound care: diagnosis, assessment and treatment of hard-to-heal wounds: a narrative review. J Wound Care 2024; 33:229-242. [PMID: 38573907] [DOI: 10.12968/jowc.2024.33.4.229]
Abstract
OBJECTIVE The effective assessment of wounds, both acute and hard-to-heal, is an important component in the delivery by wound care practitioners of efficacious wound care for patients. Improved wound diagnosis, optimising wound treatment regimens, and enhanced prevention of wounds aid in providing patients with a better quality of life (QoL). There is significant potential for the use of artificial intelligence (AI) in health-related areas such as wound care. However, AI-based systems remain to be developed to a point where they can be used clinically to deliver high-quality wound care. We have carried out a narrative review of the development and use of AI in the diagnosis, assessment and treatment of hard-to-heal wounds. We retrieved 145 articles from several online databases and other online resources, and 81 of them were included in this narrative review. Our review shows that AI application in wound care offers benefits in the assessment/diagnosis, monitoring and treatment of acute and hard-to-heal wounds. As well as offering patients the potential of improved QoL, AI may also enable better use of healthcare resources.
Affiliation(s)
- Mark G Rippon, University of Huddersfield, Huddersfield, UK; Daneriver Consultancy Ltd, Holmes Chapel, UK
- Leigh Fleming, School of Computing and Engineering, University of Huddersfield, Huddersfield, UK
- Tianhua Chen, School of Computing and Engineering, University of Huddersfield, Huddersfield, UK
- Karen Ousey, Department of Nursing and Midwifery, University of Huddersfield, Huddersfield, UK; Adjunct Professor, School of Nursing, Faculty of Health, Queensland University of Technology, Australia; Visiting Professor, Royal College of Surgeons in Ireland, Dublin, Ireland; Chair, International Wound Infection Institute; President Elect, International Skin Tear Advisory Panel
5. Zalluhoğlu C, Akdoğan D, Karakaya D, Güzel MS, Ülgü MM, Ardalı K, Boyalı AO, Sezer EA. Region-Based Semi-Two-Stream Convolutional Neural Networks for Pressure Ulcer Recognition. J Imaging Inform Med 2024; 37:801-813. [PMID: 38343251] [PMCID: PMC11031520] [DOI: 10.1007/s10278-023-00960-4]
Abstract
Pressure ulcers are a common, painful, costly, and often preventable complication associated with prolonged immobility in bedridden patients. They are a significant health problem worldwide because they occur frequently in inpatients and carry high treatment costs. For treatment to be effective, and to ensure an internationally standardized approach for all patients, pressure ulcers must be diagnosed early and correctly. Since invasive methods of obtaining information can be painful for patients, several alternative approaches are used to reach a correct diagnosis; image-based diagnosis is one of them. Using images obtained from patients makes it possible to achieve good results while sparing patients such painful procedures. In current clinical practice, disposable wound rulers are used to measure the length, width, and depth of patients' wounds, and the information obtained is then entered into tools such as the Braden Scale, the Norton Scale, and the Waterlow Scale to provide a formal assessment of pressure ulcer risk. This paper presents a novel benchmark dataset containing pressure ulcer images and a semi-two-stream approach that uses the original images and the cropped wound areas together for diagnosing the stage of pressure ulcers. Various state-of-the-art convolutional neural network (CNN) architectures are evaluated on this dataset. Our experimental results (test accuracy of 93%, precision of 93%, recall of 92%, and F1-score of 93%) show that the proposed semi-two-stream method improves recognition results compared to the base CNN architectures.
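The test accuracy, precision, recall, and F1-score reported above all derive from confusion-matrix counts. A minimal sketch, using hypothetical true/false positive and false negative counts:

```python
def precision_recall_f1(tp, fp, fn):
    """Standard classification metrics from true positive, false positive,
    and false negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical counts for one class on a test set.
p, r, f1 = precision_recall_f1(tp=92, fp=7, fn=8)
print(round(p, 3), round(r, 3), round(f1, 3))  # 0.929 0.92 0.925
```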
Affiliation(s)
- Cemil Zalluhoğlu, Department of Computer Engineering, Hacettepe University, Ankara, Turkey
- M Mahir Ülgü, Health Information Systems, Republic of Turkey Ministry of Health, Ankara, Turkey
6. Patel Y, Shah T, Dhar MK, Zhang T, Niezgoda J, Gopalakrishnan S, Yu Z. Integrated image and location analysis for wound classification: a deep learning approach. Sci Rep 2024; 14:7043. [PMID: 38528003] [DOI: 10.1038/s41598-024-56626-w]
Abstract
The global burden of acute and chronic wounds presents a compelling case for enhancing wound classification methods, a vital step in diagnosing and determining optimal treatments. Recognizing this need, we introduce an innovative multi-modal network based on a deep convolutional neural network for categorizing wounds into four categories: diabetic, pressure, surgical, and venous ulcers. Our multi-modal network uses wound images and their corresponding body locations for more precise classification. A unique aspect of our methodology is incorporating a body map system that facilitates accurate wound location tagging, improving upon traditional wound image classification techniques. A distinctive feature of our approach is the integration of models such as VGG16, ResNet152, and EfficientNet within a novel architecture. This architecture includes elements like spatial and channel-wise Squeeze-and-Excitation modules, Axial Attention, and an Adaptive Gated Multi-Layer Perceptron, providing a robust foundation for classification. Our multi-modal network was trained and evaluated on two distinct datasets comprising relevant images and corresponding location information. Notably, our proposed network outperformed traditional methods, reaching an accuracy range of 74.79-100% for Region of Interest (ROI) without location classifications, 73.98-100% for ROI with location classifications, and 78.10-100% for whole image classifications. This marks a significant enhancement over previously reported performance metrics in the literature. Our results indicate the potential of our multi-modal network as an effective decision-support tool for wound image classification, paving the way for its application in various clinical contexts.
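The location-aware classification idea above can be illustrated with a simple late-fusion sketch. The paper's actual architecture adds Squeeze-and-Excitation modules, Axial Attention, and a gated MLP; at its simplest, though, an image embedding is concatenated with a one-hot body-map location (the dimensions here are hypothetical):

```python
def fuse_image_and_location(image_features, location_id, num_locations):
    """Late fusion: concatenate an image embedding with a one-hot
    body-map location vector."""
    one_hot = [0.0] * num_locations
    one_hot[location_id] = 1.0
    return list(image_features) + one_hot

# Hypothetical 512-d CNN embedding and a body map with 45 tagged regions.
embedding = [0.0] * 512
fused = fuse_image_and_location(embedding, location_id=12, num_locations=45)
print(len(fused))  # 557
```

The fused vector would then feed a small classifier head over the four wound categories.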
Affiliation(s)
- Yash Patel, Tirth Shah, Mrinal Kanti Dhar, and Taiyu Zhang, Department of Computer Science, University of Wisconsin-Milwaukee, Milwaukee, WI, USA
- Jeffrey Niezgoda, Advancing the Zenith of Healthcare (AZH) Wound and Vascular Center, Milwaukee, WI, USA
- Zeyun Yu, Department of Computer Science and Department of Biomedical Engineering, University of Wisconsin-Milwaukee, Milwaukee, WI, USA
7. Guo X, Yi W, Dong L, Kong L, Liu M, Zhao Y, Hui M, Chu X. Multi-Class Wound Classification via High and Low-Frequency Guidance Network. Bioengineering (Basel) 2023; 10:1385. [PMID: 38135976] [PMCID: PMC10740846] [DOI: 10.3390/bioengineering10121385]
Abstract
Wound image classification is a crucial preprocessing step in many intelligent medical systems, e.g., online diagnosis and smart healthcare. Recently, Convolutional Neural Networks (CNNs) have been widely applied to the classification of wound images and have obtained promising performance to some extent. Unfortunately, classifying multiple wound types remains challenging due to the complexity and variety of wound images. Existing CNNs usually extract high- and low-frequency features at the same convolutional layer, which inevitably causes information loss and further affects classification accuracy. To this end, we propose a novel High and Low-frequency Guidance Network (HLG-Net) for multi-class wound classification. Specifically, HLG-Net contains two branches: a High-Frequency Network (HF-Net) and a Low-Frequency Network (LF-Net). We employ the pre-trained models ResNet and Res2Net as the feature backbone of the HF-Net, which lets the network capture the high-frequency details and texture information of wound images. To extract richer low-frequency information, we utilize a Multi-Stream Dilation Convolution Residual Block (MSDCRB) as the backbone of the LF-Net. Moreover, a fusion module is proposed to fully exploit the informative features at the end of these two separate feature extraction branches and obtain the final classification result. Extensive experiments demonstrate that HLG-Net achieves maximum accuracies of 98.00%, 92.11%, and 82.61% in two-class, three-class, and four-class wound image classification, respectively, outperforming previous state-of-the-art methods.
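The high/low-frequency split that motivates HLG-Net can be illustrated with a toy decomposition: a smoothing filter yields the low-frequency component and the residual carries the high-frequency detail. This 1-D moving-average sketch only illustrates the idea; it is not the paper's CNN branches:

```python
def split_frequencies(signal, window=5):
    """Split a 1-D signal into a low-frequency part (moving average) and a
    high-frequency part (residual). The same decomposition idea extends to
    2-D wound images."""
    half = window // 2
    low = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        low.append(sum(signal[lo:hi]) / (hi - lo))
    high = [s - l for s, l in zip(signal, low)]
    return low, high

sig = [0, 1, 0, 1, 8, 1, 0, 1, 0]  # toy signal with a sharp spike
low, high = split_frequencies(sig)
# The two components reconstruct the input exactly by construction.
print(all(abs(l + h - s) < 1e-12 for s, l, h in zip(sig, low, high)))  # True
```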
Affiliation(s)
- Xiuwen Guo, Weichao Yi, Liquan Dong, Lingqin Kong, Ming Liu, Yuejin Zhao, Mei Hui, and Xuhong Chu, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China; Beijing Key Laboratory for Precision Optoelectronic Measurement Instrument and Technology, Beijing 100081, China
- Liquan Dong, Lingqin Kong, Ming Liu, Yuejin Zhao, and Xuhong Chu are also with the Yangtze Delta Region Academy of Beijing Institute of Technology, Jiaxing 314019, China
8. Kim J, Lee C, Choi S, Sung DI, Seo J, Lee YN, Lee JH, Han EJ, Kim AY, Park HS, Jung HJ, Kim JH, Lee JH. Augmented decision-making in wound care: evaluating the clinical utility of a deep-learning model for pressure injury staging. Int J Med Inform 2023; 180:105266. [PMID: 37866277] [DOI: 10.1016/j.ijmedinf.2023.105266]
Abstract
BACKGROUND Precise categorization of pressure injury (PI) stages is critical in determining the appropriate treatment for wound care. However, the expertise necessary for PI staging is frequently unavailable in residential care settings. OBJECTIVE This study aimed to develop a convolutional neural network (CNN) model for classifying PIs and investigate whether its implementation can allow physicians to make better decisions for PI staging. METHODS Using 3,098 clinical images (2,614 and 484 from internal and external datasets, respectively), a CNN was trained and validated to classify PIs and other related dermatoses. A two-part survey was conducted with 24 dermatology residents, ward nurses, and medical students to determine whether the implementation of the CNN improved initial PI classification decisions. RESULTS The top-1 accuracy of the model was 0.793 (95% confidence interval [CI], 0.778-0.808) and 0.717 (95% CI, 0.676-0.758) over the internal and external testing sets, respectively. The accuracy of PI staging among participants was 0.501 (95% CI, 0.487-0.515) in Part I, improving by 17.1% to 0.672 (95% CI, 0.660-0.684) in Part II. Furthermore, the concordance between participants increased significantly with the use of the CNN model, with Fleiss' κ of 0.414 (95% CI, 0.410-0.417) and 0.641 (95% CI, 0.638-0.644) in Parts I and II, respectively. CONCLUSIONS The proposed CNN model can help classify PIs and relevant dermatoses. In addition, augmented decision-making can improve consultation accuracy while ensuring concordance between the clinical decisions made by a diverse group of health professionals.
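The Fleiss' kappa values reported above measure chance-corrected agreement among multiple raters. A sketch of the standard computation, with hypothetical rating counts:

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for m raters assigning N subjects to k categories.

    counts: N rows of per-category rating counts; each row sums to the
    (constant) number of raters m.
    """
    n_subjects = len(counts)
    m = sum(counts[0])                       # raters per subject
    total = n_subjects * m
    k = len(counts[0])
    # Chance agreement from overall category proportions.
    p_cat = [sum(row[j] for row in counts) / total for j in range(k)]
    p_e = sum(p * p for p in p_cat)
    # Observed agreement averaged over subjects.
    p_i = [(sum(c * c for c in row) - m) / (m * (m - 1)) for row in counts]
    p_bar = sum(p_i) / n_subjects
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical ratings: 4 wounds, 5 raters, 3 candidate PI stages.
counts = [[5, 0, 0], [0, 5, 0], [2, 2, 1], [3, 1, 1]]
print(round(fleiss_kappa(counts), 3))  # 0.353
```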
Affiliation(s)
- Jemin Kim, Department of Dermatology, Yongin Severance Hospital, Yonsei University College of Medicine, Gyeonggi-do, Republic of Korea
- Changyoon Lee, Sungchul Choi, Da-In Sung, and Jeonga Seo, Department of Medicine, Yonsei University College of Medicine, Seoul, Republic of Korea
- Yun Na Lee, Joo Hee Lee, and Ju Hee Lee, Department of Dermatology and Cutaneous Biology Research Institute, Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Eun Jin Han, Ah Young Kim, Hyun Suk Park, and Hye Jeong Jung, Department of Nursing, Severance Hospital, Seoul, Republic of Korea
- Jong Hoon Kim, Department of Dermatology and Cutaneous Biology Research Institute, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
9
|
P. J, B. K. SK, Jayaraman S. Automatic foot ulcer segmentation using conditional generative adversarial network (AFSegGAN): A wound management system. PLOS DIGITAL HEALTH 2023; 2:e0000344. [PMID: 37930982 PMCID: PMC10627472 DOI: 10.1371/journal.pdig.0000344] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/05/2023] [Accepted: 08/07/2023] [Indexed: 11/08/2023]
Abstract
Effective wound care is essential to prevent further complications, promote healing, and reduce the risk of infection and other health issues. Chronic wounds, particularly in older adults, patients with disabilities, and those with pressure, venous, or diabetic foot ulcers, cause significant morbidity and mortality. Given the rising number of individuals with chronic wounds, particularly among the growing elderly and diabetic populations, it is imperative to develop novel technologies and practices for best-practice clinical management of chronic wounds to minimize the potential health and economic burdens on society. As wound care is managed in hospitals and in community care, quantitative metrics such as wound boundary and morphological features are crucial. Traditional visual inspection is purely subjective and error-prone, and digitization provides an appealing alternative. Various deep-learning models have shown promise; however, their accuracy depends primarily on image quality, the size of the dataset available for learning features, and expert annotation. This work aims to develop a wound management system that automates wound segmentation using a conditional generative adversarial network (cGAN) and estimates wound morphological parameters. AFSegGAN was developed and validated on the MICCAI 2021 foot ulcer segmentation dataset. In addition, we use adversarial loss and patch-level comparison at the discriminator network to improve segmentation performance and balance GAN training. Our model outperformed state-of-the-art methods with a Dice score of 93.11% and an IoU of 99.07%. The proposed wound management system demonstrates its abilities in wound segmentation and parameter estimation, thereby reducing the effort required of healthcare workers to diagnose and manage wounds and facilitating remote healthcare.
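The Dice score and IoU reported above are overlap metrics between predicted and ground-truth masks. A minimal sketch with toy flat binary masks (note that, for the same pair of masks, IoU never exceeds the Dice score):

```python
def dice_and_iou(pred, target):
    """Dice coefficient and IoU (Jaccard index) for flat binary masks."""
    assert len(pred) == len(target)
    inter = sum(p and t for p, t in zip(pred, target))
    union = sum(p or t for p, t in zip(pred, target))
    if union == 0:        # both masks empty: perfect agreement by convention
        return 1.0, 1.0
    dice = 2 * inter / (sum(pred) + sum(target))
    iou = inter / union
    return dice, iou

# Toy masks: 2 overlapping pixels, 1 extra pixel in each mask.
pred = [1, 1, 0, 0, 1, 0]
gt   = [1, 1, 0, 0, 0, 1]
dice, iou = dice_and_iou(pred, gt)
print(round(dice, 3), round(iou, 3))  # 0.667 0.5
```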
Affiliation(s)
- Jishnu P. and Shreyamsha Kumar B. K., TCS Research, Digital Medicine and Medical Technology, B&T Group, TATA Consultancy Services, Bangalore, Karnataka, India
- Srinivasan Jayaraman, TCS Research, Digital Medicine and Medical Technology, B&T Group, TATA Consultancy Services, Cincinnati, Ohio, United States of America
10. Aldughayfiq B, Ashfaq F, Jhanjhi NZ, Humayun M. YOLO-Based Deep Learning Model for Pressure Ulcer Detection and Classification. Healthcare (Basel) 2023; 11:1222. [PMID: 37174764] [PMCID: PMC10178524] [DOI: 10.3390/healthcare11091222]
Abstract
Pressure ulcers are significant healthcare concerns affecting millions of people worldwide, particularly those with limited mobility. Early detection and classification of pressure ulcers are crucial in preventing their progression and reducing associated morbidity and mortality. In this work, we present a novel approach that uses YOLOv5, an advanced and robust object detection model, to detect and classify pressure ulcers into four stages and non-pressure ulcers. We also utilize data augmentation techniques to expand our dataset and strengthen the resilience of our model. Our approach shows promising results, achieving an overall mean average precision of 76.9% and class-specific mAP50 values ranging from 66% to 99.5%. Compared to previous studies that primarily utilize CNN-based algorithms, our approach provides a more efficient and accurate solution for the detection and classification of pressure ulcers. The successful implementation of our approach has the potential to improve the early detection and treatment of pressure ulcers, resulting in better patient outcomes and reduced healthcare costs.
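The mAP50 values above count a detection as correct when its predicted box overlaps the ground truth with IoU of at least 0.5. A minimal sketch of the box IoU underlying that threshold, with made-up coordinates:

```python
def box_iou(a, b):
    """IoU between two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# A hypothetical predicted wound box versus ground truth; at mAP50 this
# detection would not count as a match (IoU < 0.5).
print(box_iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 50/150 ≈ 0.333
```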
Affiliation(s)
- Bader Aldughayfiq and Mamoona Humayun, Department of Information Systems, College of Computer and Information Sciences, Jouf University, Sakaka 72388, Saudi Arabia
- Farzeen Ashfaq and N Z Jhanjhi, School of Computer Science, SCS, Taylor's University, Subang Jaya 47500, Malaysia
11. Sun Y, Lou W, Ma W, Zhao F, Su Z. Convolution Neural Network with Coordinate Attention for Real-Time Wound Segmentation and Automatic Wound Assessment. Healthcare (Basel) 2023; 11:1205. [PMID: 37174747] [PMCID: PMC10178407] [DOI: 10.3390/healthcare11091205]
Abstract
BACKGROUND Wound treatment in emergency care requires the rapid assessment of wound size by medical staff. Limited medical resources and the empirical assessment of wounds can delay the treatment of patients, and manual contact measurement methods are often inaccurate and susceptible to wound infection. This study aimed to develop an Automatic Wound Segmentation Assessment (AWSA) framework for real-time wound segmentation and automatic wound region estimation. METHODS The method used a short-term dense concatenate classification network (STDC-Net) as the backbone, realizing a segmentation accuracy-prediction speed trade-off. A coordinate attention mechanism was introduced to further improve the network segmentation performance. A functional relationship model between prior graphics pixels and shooting heights was constructed to achieve wound area measurement. Finally, extensive experiments on two types of wound datasets were conducted. RESULTS The experimental results showed that real-time AWSA outperformed state-of-the-art methods on metrics such as mAP, mIoU, recall, and Dice score. The AUC value, which reflects comprehensive segmentation ability, also reached the highest level, about 99.5%. The FPS values of the proposed segmentation method on the two datasets were 100.08 and 102.11, respectively, about 42% higher than those of the second-ranked method, reflecting better real-time performance. Moreover, real-time AWSA automatically estimated the wound area in square centimeters with a relative error of only about 3.1%. CONCLUSION The real-time AWSA method used the STDC-Net classification network as its backbone and improved processing speed while accurately segmenting the wound, realizing a segmentation accuracy-prediction speed trade-off.
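The wound-area estimation step maps segmented mask pixels to square centimeters using the shooting height. The paper fits its own functional relationship model from prior graphics; the sketch below instead assumes a simple linear pixel-size-versus-height calibration, with hypothetical numbers throughout:

```python
def wound_area_cm2(mask_pixels, height_cm, cm_per_pixel_at_ref, ref_height_cm):
    """Estimate wound area from a segmentation mask and camera height.

    Assumes a pinhole-camera model in which the ground-plane size of one
    pixel grows linearly with shooting height, calibrated at a reference
    height (an assumption, not the paper's fitted model).
    """
    cm_per_pixel = cm_per_pixel_at_ref * (height_cm / ref_height_cm)
    return mask_pixels * cm_per_pixel ** 2

# Hypothetical calibration: 0.02 cm/pixel at 20 cm reference height;
# a wound mask of 31,000 pixels shot from 30 cm.
print(round(wound_area_cm2(31_000, 30, 0.02, 20), 1))  # 27.9
```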
Affiliation(s)
- Yi Sun and Wenzhong Lou, National Key Laboratory of Electro-Mechanics Engineering and Control, School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100010, China; Beijing Institute of Technology Chongqing Innovation Center, Chongqing 401120, China
- Wenlong Ma, Fei Zhao, and Zilong Su, National Key Laboratory of Electro-Mechanics Engineering and Control, School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100010, China
12. B. K SK, K C A, Jayaraman S. Wound Care: Wound Segmentation and Parameter Estimation. 2023 IEEE 20th International Symposium on Biomedical Imaging (ISBI) 2023:1-4. [DOI: 10.1109/isbi53787.2023.10230677]
13. Dabas M, Schwartz D, Beeckman D, Gefen A. Application of Artificial Intelligence Methodologies to Chronic Wound Care and Management: A Scoping Review. Adv Wound Care (New Rochelle) 2023; 12:205-240. [PMID: 35438547] [DOI: 10.1089/wound.2021.0144]
Abstract
Significance: As the number of hard-to-heal wound cases rises with the aging of the population and the spread of chronic diseases, health care professionals struggle to provide safe and effective care to all their patients simultaneously. This study aimed at providing an in-depth overview of the relevant methodologies of artificial intelligence (AI) and their potential implementation to support these growing needs of wound care and management. Recent Advances: MEDLINE, Compendex, Scopus, Web of Science, and IEEE databases were all searched for new AI methods or novel uses of existing AI methods for the diagnosis or management of hard-to-heal wounds. We only included English peer-reviewed original articles, conference proceedings, published patent applications, or granted patents (not older than 2010) where the performance of the utilized AI algorithms was reported. Based on these criteria, a total of 75 studies were eligible for inclusion. These varied by the type of the utilized AI methodology, the wound type, the medical record/database configuration, and the research goal. Critical Issues: AI methodologies appear to have a strong positive impact and prospects in the wound care and management arena. Another important development that emerged from the findings is AI-based remote consultation systems utilizing smartphones and tablets for data collection and connectivity. Future Directions: The implementation of machine-learning algorithms in the diagnosis and managements of hard-to-heal wounds is a promising approach for improving the wound care delivered to hospitalized patients, while allowing health care professionals to manage their working time more efficiently.
14
Huang PH, Pan YH, Luo YS, Chen YF, Lo YC, Chen TPC, Perng CK. Development of a deep learning-based tool to assist wound classification. J Plast Reconstr Aesthet Surg 2023; 79:89-97. [PMID: 36893592 DOI: 10.1016/j.bjps.2023.01.030]
Abstract
This paper presents a deep learning-based wound classification tool that can assist medical personnel not specialized in wound care to classify five key wound conditions, namely deep wound, infected wound, arterial wound, venous wound, and pressure wound, from color images captured with readily available cameras. Accurate classification is vital for appropriate wound management. The proposed wound classification method adopts a multi-task deep learning framework that leverages the relationships among the five key wound conditions within a unified wound classification architecture. Using differences in Cohen's kappa coefficients as the metric for comparing the proposed model with humans, the model performed better than, or non-inferior to, all of the human medical personnel tested. Our convolutional neural network-based model is the first to classify the five tasks of deep, infected, arterial, venous, and pressure wounds simultaneously with good accuracy. The proposed model is compact and matches or exceeds the performance of human doctors and nurses. Medical personnel who do not specialize in wound care can potentially benefit from an app equipped with the proposed deep learning model.
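The model-versus-clinician comparison above rests on Cohen's kappa, which scores agreement beyond chance. As a hedged illustration of that metric only (not the authors' code; the labels are hypothetical), a minimal pure-Python computation:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters label identically
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: expected overlap from each rater's label frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical binary labels (infected vs clean) from a model and a nurse
model = ["inf", "inf", "clean", "clean", "inf", "clean", "inf", "clean"]
nurse = ["inf", "clean", "clean", "clean", "inf", "clean", "inf", "inf"]
print(cohens_kappa(model, nurse))  # 0.5
```

A kappa of 0 means chance-level agreement and 1 means perfect agreement, which is why the study compares kappa differences rather than raw accuracy.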
15
Kairys A, Pauliukiene R, Raudonis V, Ceponis J. Towards Home-Based Diabetic Foot Ulcer Monitoring: A Systematic Review. Sensors (Basel) 2023; 23:3618. [PMID: 37050678 PMCID: PMC10099334 DOI: 10.3390/s23073618]
Abstract
An estimated 1 in 10 adults worldwide has diabetes. Diabetic foot ulcers are among the most common complications of diabetes, and they are associated with a high risk of lower-limb amputation and, as a result, reduced life expectancy. Timely detection and periodic ulcer monitoring can considerably decrease amputation rates. Recent research has demonstrated that computer vision can be used to identify foot ulcers and perform non-contact telemetry by using ulcer and tissue area segmentation. However, the applications are limited to controlled lighting conditions, and expert knowledge is required for dataset annotation. This paper reviews the latest publications on the use of artificial intelligence for ulcer area detection and segmentation. The PRISMA methodology was used to search for and select articles, and the selected articles were reviewed to collect quantitative and qualitative data. Qualitative data were used to describe the methodologies used in individual studies, while quantitative data were used for generalization in terms of dataset preparation and feature extraction. Publicly available datasets were accounted for, and methods for preprocessing, augmentation, and feature extraction were evaluated. It was concluded that public datasets can be combined to form bigger, more diverse datasets, and that the prospects of wider image preprocessing and the adoption of augmentation require further research.
16
Construction and Validation of an Image Discrimination Algorithm to Discriminate Necrosis from Wounds in Pressure Ulcers. J Clin Med 2023; 12:jcm12062194. [PMID: 36983198 PMCID: PMC10057569 DOI: 10.3390/jcm12062194]
Abstract
Artificial intelligence (AI) in medical care can raise diagnosis accuracy and improve its uniformity. This study developed a diagnostic imaging system for chronic wounds that can be used in medically underpopulated areas. The image identification algorithm searches for patterns and makes decisions based on information obtained from pixels rather than images. Images of 50 patients with pressure sores treated at Kobe University Hospital were examined. The algorithm determined the presence of necrosis with a significant difference (p = 3.39 × 10⁻⁵). A threshold value was created with a luminance difference of 50 for the group with necrosis of 5% or more black pixels. In the no-necrosis group with less than 5% black pixels, the threshold value was created with a brightness difference of 100. The "shallow wounds" were distributed below 100, whereas the "deep wounds" were distributed above 100. When the algorithm was applied to 24 images of 23 new cases, there was 100% agreement between the specialist and the algorithm regarding the presence of necrotic tissue and wound depth evaluation. The algorithm identifies the necrotic tissue and wound depth without requiring a large amount of data, making it suitable for application to future AI diagnosis systems for chronic wounds.
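A toy sketch of the pixel-level heuristic the abstract describes: the 5% black-pixel cutoff and the brightness-difference boundary of 100 are taken from the text, but the greyscale representation, the black-pixel luminance level, and the function shape are assumptions, not the authors' implementation:

```python
def classify_wound(pixels, black_level=50, necrosis_frac=0.05, depth_cut=100):
    """Toy reconstruction: pixels are greyscale luminances in 0-255."""
    # Necrosis flag: at least 5% of pixels are near-black (assumed cutoff)
    black = sum(1 for p in pixels if p <= black_level)
    has_necrosis = black / len(pixels) >= necrosis_frac
    # Depth proxy: brightness spread between darkest and brightest pixels
    depth = "deep" if max(pixels) - min(pixels) >= depth_cut else "shallow"
    return has_necrosis, depth

# 5% near-black pixels and a large brightness spread -> necrotic, deep
print(classify_wound([10] * 5 + [200] * 95))  # (True, 'deep')
```

The point of such a rule-based scheme, as the abstract notes, is that it needs no large training set, unlike the CNN approaches elsewhere in this list.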
17
Swerdlow M, Guler O, Yaakov R, Armstrong DG. Simultaneous Segmentation and Classification of Pressure Injury Image Data Using Mask-R-CNN. Computational and Mathematical Methods in Medicine 2023; 2023:3858997. [PMID: 36778787 PMCID: PMC9911250 DOI: 10.1155/2023/3858997]
Abstract
Background Pressure injuries (PIs) impose a substantial burden on patients, caregivers, and healthcare systems, affecting an estimated 3 million Americans and costing nearly $18 billion annually. Accurate pressure injury staging remains clinically challenging. Over the last decade, object detection and semantic segmentation have evolved quickly with new methods invented and new application areas emerging. Simultaneous object detection and segmentation paved the way to segment and classify anatomical structures. In this study, we utilize the Mask-R-CNN algorithm for segmentation and classification of stage 1-4 pressure injuries. Methods Images from the eKare Inc. pressure injury wound data repository were segmented and classified manually by two study authors with medical training. The Mask-R-CNN model was implemented using the Keras deep learning and TensorFlow libraries with Python. We split 969 pressure injury images into training (87.5%) and validation (12.5%) subsets for Mask-R-CNN training. Results We included 121 random pressure injury images in our test set. The Mask-R-CNN model showed overall classification accuracy of 92.6%, and the segmentation demonstrated 93.0% accuracy. Our F1 scores for stages 1-4 were 0.842, 0.947, 0.907, and 0.944, respectively. Our Dice coefficients for stages 1-4 were 0.92, 0.85, 0.93, and 0.91, respectively. Conclusions Our Mask-R-CNN model provides levels of accuracy considerably greater than the average healthcare professional who works with pressure injury patients. This tool can be easily incorporated into the clinician's workflow to aid in the hospital setting.
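The per-stage Dice coefficients reported above measure the overlap between predicted and ground-truth masks. As a minimal illustration of that score (not the study's code; the masks are hypothetical), computed on flat binary masks:

```python
def dice(mask_a, mask_b):
    """Dice coefficient between two flat binary masks (sequences of 0/1)."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))  # shared wound pixels
    total = sum(mask_a) + sum(mask_b)
    # Convention: two empty masks agree perfectly
    return 2 * inter / total if total else 1.0

pred  = [1, 1, 1, 0, 0, 0, 1, 0]  # hypothetical predicted wound pixels
truth = [1, 1, 0, 0, 0, 0, 1, 1]  # hypothetical ground-truth pixels
print(dice(pred, truth))  # 0.75
```

In practice the masks are 2D images flattened per class; the study's values of 0.85 to 0.93 indicate high but imperfect boundary agreement at each stage.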
18
Liu TJ, Wang H, Christian M, Chang CW, Lai F, Tai HC. Automatic segmentation and measurement of pressure injuries using deep learning models and a LiDAR camera. Sci Rep 2023; 13:680. [PMID: 36639395 PMCID: PMC9839689 DOI: 10.1038/s41598-022-26812-9]
Abstract
Pressure injuries are a common problem resulting in poor prognosis, long-term hospitalization, and increased medical costs in an aging society. This study developed a method for automatic segmentation and area measurement of pressure injuries using deep learning models and a light detection and ranging (LiDAR) camera. We selected the highest-quality photos of patients with pressure injuries, 528 in total, at National Taiwan University Hospital from 2016 to 2020. The margins of the pressure injuries were labeled by three board-certified plastic surgeons, and Mask R-CNN and U-Net segmentation models were trained on the labeled photos. After the segmentation model was constructed, we performed automatic wound area measurement via a LiDAR camera and conducted a prospective clinical study to test the accuracy of this system. For automatic wound segmentation, the performance of U-Net (Dice coefficient (DC): 0.8448) was better than that of Mask R-CNN (DC: 0.5006) in the external validation. In the prospective clinical study, we incorporated U-Net into our automatic wound area measurement system and obtained a 26.2% mean relative error compared with the traditional manual method. Our segmentation model, U-Net, and area measurement system achieved acceptable accuracy, making them applicable in clinical circumstances.
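The 26.2% figure above is a mean relative error of the automatic areas against manual measurement. A small sketch of that metric (the areas below are hypothetical, not the study's data):

```python
def mean_relative_error(measured, reference):
    """Mean relative error of measured areas vs a manual reference, in %."""
    errs = [abs(m - r) / r for m, r in zip(measured, reference)]
    return 100 * sum(errs) / len(errs)

auto   = [12.0, 5.5, 30.0]  # hypothetical LiDAR-derived wound areas (cm^2)
manual = [10.0, 5.0, 25.0]  # hypothetical manual tracings (cm^2)
print(round(mean_relative_error(auto, manual), 1))  # 16.7
```

Relative (rather than absolute) error is the natural choice here because wound areas span a wide range, so a fixed cm² tolerance would penalize small wounds disproportionately.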
19
Eldem H, Ülker E, Yaşar Işıklı O. Encoder–decoder semantic segmentation models for pressure wound images. The Imaging Science Journal 2023. [DOI: 10.1080/13682199.2022.2163531]
20
Reifs D, Casanova-Lozano L, Reig-Bolaño R, Grau-Carrion S. Clinical validation of computer vision and artificial intelligence algorithms for wound measurement and tissue classification in wound care. Informatics in Medicine Unlocked 2023. [DOI: 10.1016/j.imu.2023.101185]
21
Kumar BKS, Anandakrishan KC, Sumant M, Jayaraman S. Wound Care: Wound Management System. IEEE Access 2023; 11:45301-45312. [DOI: 10.1109/access.2023.3271011]
22
Dweekat OY, Lam SS, McGrath L. An Integrated System of Multifaceted Machine Learning Models to Predict If and When Hospital-Acquired Pressure Injuries (Bedsores) Occur. International Journal of Environmental Research and Public Health 2023; 20:ijerph20010828. [PMID: 36613150 PMCID: PMC9820011 DOI: 10.3390/ijerph20010828]
Abstract
Hospital-Acquired Pressure Injury (HAPI), also known as a bedsore or decubitus ulcer, is one of the most common health conditions in the United States. Machine learning has been used to predict HAPI, but prediction alone provides insufficient information for the clinical team, because knowing who will develop HAPI does not help differentiate the severity of those predicted cases. This research develops an integrated system of multifaceted machine learning models to predict if and when HAPI occurs. Phase 1 integrates a Genetic Algorithm with a Cost-Sensitive Support Vector Machine (GA-CS-SVM) to handle the highly imbalanced HAPI dataset and predict whether patients will develop HAPI. Phase 2 adopts Grid Search with SVM (GS-SVM) to predict when HAPI will occur for at-risk patients. This helps to prioritize who is at the highest risk and when that risk will be highest. The performance of the developed models is compared with state-of-the-art models in the literature. GA-CS-SVM achieved the best Area Under the Curve (AUC) (75.79 ± 0.58) and G-mean (75.73 ± 0.59), while GS-SVM achieved the best AUC (75.06) and G-mean (75.06). The research outcomes will help prioritize at-risk patients, allocate targeted resources, and aid with better medical staff planning to provide intervention to those patients.
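The G-mean reported alongside AUC is the geometric mean of sensitivity and specificity, a standard score for imbalanced problems like HAPI prediction because it collapses only when either class is badly missed. A minimal illustration (the confusion-matrix counts are hypothetical, not the study's):

```python
import math

def g_mean(tp, fn, tn, fp):
    """Geometric mean of sensitivity and specificity from a 2x2 matrix."""
    sensitivity = tp / (tp + fn)  # recall on the positive (HAPI) class
    specificity = tn / (tn + fp)  # recall on the negative class
    return math.sqrt(sensitivity * specificity)

print(round(g_mean(tp=60, fn=40, tn=90, fp=10), 3))  # 0.735
```

Unlike plain accuracy, a classifier that labels everyone negative scores a G-mean of 0, which is why cost-sensitive methods such as GA-CS-SVM optimize for it.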
23
Dweekat OY, Lam SS, McGrath L. Machine Learning Techniques, Applications, and Potential Future Opportunities in Pressure Injuries (Bedsores) Management: A Systematic Review. International Journal of Environmental Research and Public Health 2023; 20:796. [PMID: 36613118 PMCID: PMC9819814 DOI: 10.3390/ijerph20010796]
Abstract
Pressure Injuries (PI) are one of the most common health conditions in the United States. Most acute or long-term care patients are at risk of developing PI. Machine Learning (ML) has been utilized to manage patients with PI; one earlier systematic review describes how ML was used in PI management in 32 studies. The present research, unlike that previous systematic review, summarizes the contributions of ML in PI from January 2007 to July 2022, categorizes the studies according to medical specialties, analyzes gaps, and identifies opportunities for future research directions. PRISMA guidelines were adopted using the four most common databases (PubMed, Web of Science, Scopus, and Science Direct) and other resources, resulting in 90 eligible studies. The reviewed articles are divided into three categories based on the time of PI occurrence: before occurrence (48%); at the time of occurrence (16%); and after occurrence (36%). Each category is further broken down into sub-fields based on medical specialties, yielding sixteen specialties. Each specialty is analyzed in terms of methods, inputs, and outputs. The most relevant and potentially useful applications and methods in PI management are outlined and discussed, including deep learning techniques and hybrid models, and the integration of existing risk assessment tools with ML, which leads to a partnership between provider assessment and patients' Electronic Health Records (EHR).
24
Brüngel R, Koitka S, Friedrich CM. Unconditionally Generated and Pseudo-Labeled Synthetic Images for Diabetic Foot Ulcer Segmentation Dataset Extension. Lecture Notes in Computer Science 2023:65-79. [DOI: 10.1007/978-3-031-26354-5_6]
25
Dweekat OY, Lam SS, McGrath L. A Hybrid System of Braden Scale and Machine Learning to Predict Hospital-Acquired Pressure Injuries (Bedsores): A Retrospective Observational Cohort Study. Diagnostics (Basel) 2022; 13:diagnostics13010031. [PMID: 36611323 PMCID: PMC9818183 DOI: 10.3390/diagnostics13010031]
Abstract
Background: The Braden Scale is commonly used to determine the risk of Hospital-Acquired Pressure Injuries (HAPI). However, the volume of patients identified as being at risk stretches already limited resources, and caregivers are limited by the number of factors they can reasonably assess during patient care. In the last decade, machine learning techniques have been used to predict HAPI from related risk factors. Nevertheless, none of these studies considers the change in patient status from admission until discharge. Objectives: To develop an integrated system of the Braden Scale and machine learning to predict HAPI and assist with resource allocation for early interventions. The proposed approach captures the change in patients' risk by assessing factors three times across hospitalization. Design: Retrospective observational cohort study. Setting(s): This research was conducted at ChristianaCare hospital in Delaware, United States. Participants: Patients discharged between May 2020 and February 2022. Patients with HAPI were identified from nursing documents (N = 15,889). Methods: A Support Vector Machine (SVM) was adopted to predict patients' risk of developing HAPI using multiple risk factors in addition to Braden. Multiple performance metrics were used to compare the results of the integrated system versus Braden alone. Results: The HAPI rate is 3%. The integrated system achieved better sensitivity (74.29 ± 1.23) and detection prevalence (24.27 ± 0.16) than the Braden Scale alone (sensitivity 66.90 ± 4.66; detection prevalence 41.96 ± 1.35). The most important risk factors for predicting HAPI were the Braden sub-factors, the overall Braden score, visiting the ICU during hospitalization, and the Glasgow coma score. Conclusions: The integrated system, which combines SVM with Braden, offers better performance than Braden alone and reduces the number of patients identified as at-risk, allowing better allocation of resources to high-risk patients and resulting in cost savings. Relevance to clinical practice: The developed model provides an automated system to predict HAPI patients in real time and allows for ongoing intervention for patients identified as at-risk. Moreover, the integrated system can be used to determine the number of nurses needed for early interventions. Reporting Method: EQUATOR guidelines (TRIPOD) were adopted in this research to develop the prediction model. Patient or Public Contribution: This research was based on a secondary analysis of patients' Electronic Health Records. The dataset was de-identified and patient identifiers were removed before processing and modeling.
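The two comparison metrics in the results above come straight from a 2x2 confusion matrix: sensitivity is the share of true HAPI cases caught, and detection prevalence is the share of all patients flagged as at-risk (lower is better for resource use at equal sensitivity). A hedged sketch with a hypothetical 1,000-patient cohort at the study's 3% HAPI rate (counts invented for illustration):

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity and detection prevalence from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)               # true HAPI cases caught
    flagged = (tp + fp) / (tp + fp + fn + tn)  # fraction of patients flagged
    return sensitivity, flagged

# Hypothetical cohort: 30 HAPI cases among 1,000 patients (3% rate)
sens, prev = screening_metrics(tp=22, fp=220, fn=8, tn=750)
print(round(sens, 3), round(prev, 3))  # 0.733 0.242
```

This is why the paper treats a drop in detection prevalence (41.96% for Braden alone vs 24.27% for the hybrid) as a win: fewer patients are flagged while more of the true cases are still caught.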
26
Anisuzzaman DM, Wang C, Rostami B, Gopalakrishnan S, Niezgoda J, Yu Z. Image-Based Artificial Intelligence in Wound Assessment: A Systematic Review. Adv Wound Care (New Rochelle) 2022; 11:687-709. [PMID: 34544270 DOI: 10.1089/wound.2021.0091]
Abstract
Significance: Accurately predicting wound healing trajectories is difficult for wound care clinicians due to the complex and dynamic processes involved in wound healing. Wound care teams capture images of wounds during clinical visits, generating big datasets over time. Developing novel artificial intelligence (AI) systems can help clinicians diagnose, assess the effectiveness of therapy, and predict healing outcomes. Recent Advances: Rapid developments in computer processing have enabled the development of AI-based systems that can improve the diagnosis and effectiveness of therapy in various clinical specializations. In the past decade, we have witnessed AI revolutionizing all types of medical imaging, such as X-ray, ultrasound, computed tomography, and magnetic resonance imaging, but AI-based systems for high-quality wound care that can result in better patient outcomes remain to be developed clinically and computationally. Critical Issues: In the current standard of care, collecting wound images on every clinical visit and interpreting and archiving the data are cumbersome and time consuming. Commercial platforms have been developed to capture images, perform wound measurements, and provide clinicians with a workflow for diagnosis, but AI-based systems are still in their infancy. This systematic review summarizes the breadth and depth of the most recent and relevant work in intelligent image-based data analysis and system development for wound assessment. Future Directions: With the increasing availability of massive data (wound images, wound-specific electronic health records, etc.) as well as powerful computing resources, AI-based digital platforms will play a significant role in delivering data-driven care to people suffering from debilitating chronic wounds.
27
Huang HN, Zhang T, Yang CT, Sheen YJ, Chen HM, Chen CJ, Tseng MW. Image segmentation using transfer learning and Fast R-CNN for diabetic foot wound treatments. Front Public Health 2022; 10:969846. [PMID: 36203688 PMCID: PMC9530356 DOI: 10.3389/fpubh.2022.969846]
Abstract
Diabetic foot ulcers (DFUs) are considered the most challenging form of chronic ulceration to manage, owing to their multifactorial nature. A comprehensive treatment plan and an accurate, systematic evaluation of the patient with a DFU are necessary. This paper proposes image recognition of diabetic foot wounds to support the effective execution of the treatment plan. To grade the severity of a diabetic foot ulcer, we refer to the qualitative evaluation method commonly used in clinical practice, the PEDIS index developed by the International Working Group on the Diabetic Foot, together with physicians' evaluations. Deep neural networks, convolutional neural networks, object recognition, and other technologies are applied to analyze the classification, location, and size of wounds through image analysis. The image features are labeled with the help of physicians. The Fast R-CNN object-detection method is applied to these wound images to build and train machine learning modules and evaluate their effectiveness. Wound-detection accuracy can be as high as 90%.
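Detection quality for a wound localizer such as Fast R-CNN is conventionally judged by the overlap between predicted and ground-truth boxes, the intersection-over-union (IoU). A minimal sketch of that score (not the study's code; the coordinates are hypothetical):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned (x1, y1, x2, y2) boxes."""
    # Corners of the overlapping rectangle, if any
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(round(iou((0, 0, 10, 10), (5, 5, 15, 15)), 3))  # 0.143
```

A detection is typically counted as correct when IoU with the labeled wound exceeds a threshold such as 0.5, which is the kind of criterion behind a reported detection accuracy.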
28
Reifs D, Reig-Bolaño R, Casals M, Grau-Carrion S. Interactive Medical Image Labeling Tool to Construct a Robust Convolutional Neural Network Training Data Set: Development and Validation Study. JMIR Med Inform 2022; 10:e37284. [PMID: 35994311 PMCID: PMC9446137 DOI: 10.2196/37284]
Abstract
Background: Skin ulcers are an important cause of morbidity and mortality everywhere in the world and occur due to several causes, including diabetes mellitus, peripheral neuropathy, immobility, pressure, arteriosclerosis, infections, and venous insufficiency. Ulcers are lesions that fail to undergo an orderly healing process and to restore functional and anatomical integrity in the expected time. In most cases, the analysis methods used nowadays are rudimentary, which leads to errors and to the use of invasive and uncomfortable techniques on patients. Many studies use a convolutional neural network to classify the different tissues in a wound. To obtain good results, the network must be trained with a data set correctly labeled by an expert in wound assessment. Typically, labeling pixel by pixel with professional photo-editing software is difficult, as it requires extensive time and effort from a health professional. Objective: The aim of this paper is to implement a new, fast, and accurate method of labeling wound samples for training a neural network to classify different tissues. Methods: We developed a support tool and evaluated its accuracy and reliability. We also compared the support tool's classification with a digital gold standard (labeling the data with image-editing software). Results: The agreement between the gold standard and the proposed method was 0.9789 for background, 0.9842 for intact skin, 0.8426 for granulation tissue, 0.9309 for slough, and 0.9871 for necrotic tissue. The tool was, on average, 2.6 times faster than an advanced image-editing user. Conclusions: This method increases tagging speed on average compared to an advanced image-editing user, and the increase is greater with untrained users. The samples obtained with the new system are indistinguishable from the samples made with the gold standard.
Affiliation(s)
- David Reifs
- Digital Care Research Group, Centre for Health and Social Care, University of Vic - Central University of Catalonia, Vic, Spain
- Ramon Reig-Bolaño
- Digital Care Research Group, Centre for Health and Social Care, University of Vic - Central University of Catalonia, Vic, Spain
- Sergi Grau-Carrion
- Digital Care Research Group, Centre for Health and Social Care, University of Vic - Central University of Catalonia, Vic, Spain
29
Deep transfer learning-based visual classification of pressure injuries stages. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07274-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
30
Liu TJ, Christian M, Chu YC, Chen YC, Chang CW, Lai F, Tai HC. A pressure ulcers assessment system for diagnosis and decision making using convolutional neural networks. J Formos Med Assoc 2022; 121:2227-2236. [PMID: 35525810 DOI: 10.1016/j.jfma.2022.04.010] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2021] [Revised: 03/20/2022] [Accepted: 04/10/2022] [Indexed: 10/31/2022] Open
Abstract
BACKGROUND/PURPOSE Pressure ulcers are a common problem in hospital care and long-term care. Pressure ulcers are caused by prolonged compression of soft tissues, which can cause local tissue damage and even lead to serious infections. This study uses a deep learning algorithm to construct a system that diagnoses pressure ulcers and assists in making treatment decisions, thus providing an additional reference for first-line caregivers. METHODS We performed a retrospective review of medical records to find photos of patients with pressure ulcers at National Taiwan University Hospital from 2016 to 2020. We used photos from 2016 to 2019 for training; after removing photos that were blurry, underexposed, or overexposed, 327 photos were obtained. The photos were then labeled as "erythema" or "non-erythema" for the first classification task and as "extensive necrosis", "moderate necrosis", or "limited necrosis" for the second, by consensus of three recruited physicians. An Inception-ResNet-v2 model, a kind of Convolutional Neural Network (CNN), was trained on these two classification tasks to construct an assessment system. Finally, we tested the model with photos of pressure ulcers taken from 2019 to 2020 to verify its accuracy. RESULTS For the task of classifying erythema and non-erythema wounds, our CNN model achieved an accuracy of about 98.5%. For the task of classifying necrotic tissue, our model achieved an accuracy of about 97%. CONCLUSION Our CNN model, based on Inception-ResNet-v2, achieved high accuracy when classifying different types of pressure ulcers, making it applicable in clinical circumstances.
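The two classification tasks can be chained into one assessment. How the stages are combined below is an assumption (the paper trains them as separate Inception-ResNet-v2 classifiers), and the stand-in model functions are hypothetical:

```python
# The paper's two classification tasks chained into a single assessment.
# Any callables returning a label can stand in for the trained CNNs.
def assess_pressure_ulcer(image, erythema_model, necrosis_model):
    """Stage 1: erythema vs non-erythema.
    Stage 2 (assumed to apply to non-erythema wounds): necrosis extent."""
    stage1 = erythema_model(image)
    if stage1 == "erythema":
        return {"stage1": "erythema", "necrosis": None}
    return {"stage1": "non-erythema", "necrosis": necrosis_model(image)}

# Hypothetical stand-in models for illustration:
fake_erythema = lambda img: "non-erythema"
fake_necrosis = lambda img: "moderate necrosis"
print(assess_pressure_ulcer(None, fake_erythema, fake_necrosis))
```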
Affiliation(s)
- Tom J Liu
- Graduate Institute of Biomedical Electronics & Bioinformatics, National Taiwan University, Taipei, Taiwan; Division of Plastic Surgery, Department of Surgery, Fu Jen Catholic University Hospital, Fu Jen Catholic University, New Taipei City, Taiwan
- Mesakh Christian
- Graduate Institute of Biomedical Electronics & Bioinformatics, National Taiwan University, Taipei, Taiwan
- Yuan-Chia Chu
- Department of Information Management, Taipei Veterans General Hospital, Taipei, Taiwan
- Yu-Chun Chen
- Graduate Institute of Biomedical Electronics & Bioinformatics, National Taiwan University, Taipei, Taiwan
- Che-Wei Chang
- Graduate Institute of Biomedical Electronics & Bioinformatics, National Taiwan University, Taipei, Taiwan; Division of Plastic Reconstructive and Aesthetic Surgery, Department of Surgery, Far Eastern Memorial Hospital, New Taipei City, Taiwan
- Feipei Lai
- Graduate Institute of Biomedical Electronics & Bioinformatics, National Taiwan University, Taipei, Taiwan
- Hao-Chih Tai
- Department of Surgery, National Taiwan University Hospital and College of Medicine, Taipei, Taiwan.
31
Chang CW, Christian M, Chang DH, Lai F, Liu TJ, Chen YS, Chen WJ. Deep learning approach based on superpixel segmentation assisted labeling for automatic pressure ulcer diagnosis. PLoS One 2022; 17:e0264139. [PMID: 35176101 PMCID: PMC8853507 DOI: 10.1371/journal.pone.0264139] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2021] [Accepted: 02/03/2022] [Indexed: 01/14/2023] Open
Abstract
A pressure ulcer is an injury of the skin and underlying tissues adjacent to a bony eminence. Patients who suffer from this disease may have difficulty accessing medical care. Recently, the COVID-19 pandemic has exacerbated this situation. Automatic diagnosis based on machine learning (ML) brings promising solutions. Traditional ML requires complicated preprocessing steps for feature extraction. Its clinical applications are thus limited to particular datasets. Deep learning (DL), which extracts features from convolution layers, can embrace larger datasets that might be deliberately excluded in traditional algorithms. However, DL requires large sets of domain specific labeled data for training. Labeling various tissues of pressure ulcers is a challenge even for experienced plastic surgeons. We propose a superpixel-assisted, region-based method of labeling images for tissue classification. The boundary-based method is applied to create a dataset for wound and re-epithelialization (re-ep) segmentation. Five popular DL models (U-Net, DeeplabV3, PsPNet, FPN, and Mask R-CNN) with encoder (ResNet-101) were trained on the two datasets. A total of 2836 images of pressure ulcers were labeled for tissue classification, while 2893 images were labeled for wound and re-ep segmentation. All five models had satisfactory results. DeeplabV3 had the best performance on both tasks with a precision of 0.9915, recall of 0.9915 and accuracy of 0.9957 on the tissue classification; and a precision of 0.9888, recall of 0.9887 and accuracy of 0.9925 on the wound and re-ep segmentation task. Combining segmentation results with clinical data, our algorithm can detect the signs of wound healing, monitor the progress of healing, estimate the wound size, and suggest the need for surgical debridement.
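The reported precision, recall, and accuracy follow the standard definitions over confusion-matrix counts, which can be sketched as (counts illustrative, not from the paper):

```python
def precision_recall_accuracy(tp, fp, fn, tn):
    """Standard metrics from true/false positive and negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, accuracy

# Illustrative counts only:
p, r, a = precision_recall_accuracy(tp=95, fp=5, fn=5, tn=95)
print(p, r, a)  # 0.95 0.95 0.95
```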
Affiliation(s)
- Che Wei Chang
- Graduate Institute of Biomedical Electronics & Bioinformatics, National Taiwan University, Taipei, Taiwan
- Division of Plastic and Reconstructive Surgery, Department of Surgery, Far Eastern Memorial Hospital, New Taipei, Taiwan
- Mesakh Christian
- Department of Computer Science & Information Engineering, National Taiwan University, Taipei, Taiwan
- Dun Hao Chang
- Division of Plastic and Reconstructive Surgery, Department of Surgery, Far Eastern Memorial Hospital, New Taipei, Taiwan
- Department of Information Management, Yuan Ze University, Taoyuan City, Taiwan
- Feipei Lai
- Graduate Institute of Biomedical Electronics & Bioinformatics, National Taiwan University, Taipei, Taiwan
- Department of Computer Science & Information Engineering, National Taiwan University, Taipei, Taiwan
- Tom J. Liu
- Graduate Institute of Biomedical Electronics & Bioinformatics, National Taiwan University, Taipei, Taiwan
- Division of Plastic Surgery, Department of Surgery, Fu Jen Catholic University Hospital, Fu Jen Catholic University, New Taipei City, Taiwan
- Yo Shen Chen
- Division of Plastic and Reconstructive Surgery, Department of Surgery, Far Eastern Memorial Hospital, New Taipei, Taiwan
- Wei Jen Chen
- Graduate Institute of Biomedical Electronics & Bioinformatics, National Taiwan University, Taipei, Taiwan
32
Lucas Y, Niri R, Treuillet S, Douzi H, Castaneda B. Wound Size Imaging: Ready for Smart Assessment and Monitoring. Adv Wound Care (New Rochelle) 2021; 10:641-661. [PMID: 32320356 PMCID: PMC8392100 DOI: 10.1089/wound.2018.0937] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/29/2020] [Accepted: 03/02/2020] [Indexed: 01/02/2023] Open
Abstract
Significance: We introduce and evaluate emerging devices and modalities for wound size imaging and also promising image processing tools for smart wound assessment and monitoring. Recent Advances: Some commercial devices are available for optical wound assessment but with limited possibilities compared to the power of multimodal imaging. With new low-cost devices and machine learning, wound assessment has become more robust and accurate. Wound size imaging not only provides area and volume but also the proportion of each tissue on the wound bed. Near-infrared and thermal spectral bands also enhance the classical visual assessment. Critical Issues: The ability to embed advanced imaging technology in portable devices such as smartphones and tablets with tissue analysis software tools will significantly improve wound care. As wound care and measurement are performed by nurses, the equipment needs to remain user-friendly, enable quick measurements, provide advanced monitoring, and be connected to the patient data management system. Future Directions: Combining several image modalities and machine learning, optical wound assessment will be smart enough to enable real wound monitoring, to provide clinicians with relevant indications to adapt the treatments and to improve healing rates and speed. Sharing the wound care histories of a number of patients on databases and through telemedicine practice could induce a better knowledge of the healing process and thus a better efficiency when the recorded clinical experience has been converted into knowledge through deep learning.
Affiliation(s)
- Yves Lucas
- PRISME Laboratory, Orléans University, Orléans, France
- Rania Niri
- PRISME Laboratory, Orléans University, Orléans, France
- IRF-SIC Laboratory, Ibn Zohr University, Agadir, Morocco
- Hassan Douzi
- IRF-SIC Laboratory, Ibn Zohr University, Agadir, Morocco
- Benjamin Castaneda
- Laboratorio de Imagenes Medicas, Pontificia Universidad Catolica del Peru, Lima, Peru
33
Molder C, Lowe B, Zhan J. Learning Medical Materials From Radiography Images. Front Artif Intell 2021; 4:638299. [PMID: 34337390 PMCID: PMC8320745 DOI: 10.3389/frai.2021.638299] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2020] [Accepted: 05/26/2021] [Indexed: 11/13/2022] Open
Abstract
Deep learning models have been shown to be effective for material analysis, a subfield of computer vision, on natural images. In medicine, deep learning systems have been shown to more accurately analyze radiography images than algorithmic approaches and even experts. However, one major roadblock to applying deep learning-based material analysis on radiography images is a lack of material annotations accompanying image sets. To solve this, we first introduce an automated procedure to augment annotated radiography images into a set of material samples. Next, using a novel Siamese neural network that compares material sample pairs, called D-CNN, we demonstrate how to learn a perceptual distance metric between material categories. This system replicates the actions of human annotators by discovering attributes that encode traits that distinguish materials in radiography images. Finally, we update and apply MAC-CNN, a material recognition neural network, to demonstrate this system on a dataset of knee X-rays and brain MRIs with tumors. Experiments show that this system has strong predictive power on these radiography images, achieving 92.8% accuracy at predicting the material present in a local region of an image. Our system also draws interesting parallels between human perception of natural materials and materials in radiography images.
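The Siamese comparison reduces to embedding both samples with a shared encoder and measuring the distance between the embeddings. A minimal sketch with the learned D-CNN encoder replaced by a hypothetical fixed random projection:

```python
import numpy as np

# Siamese setup: both samples pass through the SAME encoder, and the
# perceptual distance is the norm between their embeddings. The encoder
# here is a stand-in (fixed random projection); the real D-CNN learns it.
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 64))  # hypothetical 64-dim sample -> 16-dim embedding

def embed(x):
    return np.tanh(W @ x)  # shared weights for both branches

def perceptual_distance(a, b):
    return float(np.linalg.norm(embed(a) - embed(b)))

x = rng.normal(size=64)
print(perceptual_distance(x, x))  # identical samples -> 0.0
print(perceptual_distance(x, rng.normal(size=64)))  # distinct samples -> > 0
```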
Affiliation(s)
- Carson Molder
- Data Science and Artificial Intelligence Lab, Department of Computer Science and Computer Engineering, College of Engineering, University of Arkansas, Fayetteville, AR, United States
- Benjamin Lowe
- Data Science and Artificial Intelligence Lab, Department of Computer Science and Computer Engineering, College of Engineering, University of Arkansas, Fayetteville, AR, United States
- Justin Zhan
- Data Science and Artificial Intelligence Lab, Department of Computer Science and Computer Engineering, College of Engineering, University of Arkansas, Fayetteville, AR, United States
34
Rostami B, Anisuzzaman DM, Wang C, Gopalakrishnan S, Niezgoda J, Yu Z. Multiclass wound image classification using an ensemble deep CNN-based classifier. Comput Biol Med 2021; 134:104536. [PMID: 34126281 DOI: 10.1016/j.compbiomed.2021.104536] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2021] [Revised: 05/21/2021] [Accepted: 05/22/2021] [Indexed: 10/21/2022]
Abstract
Acute and chronic wounds are a challenge to healthcare systems around the world and affect many people's lives annually. Wound classification is a key step in wound diagnosis that would help clinicians to identify an optimal treatment procedure. Hence, having a high-performance classifier assists wound specialists to classify wound types with less financial and time costs. Different wound classification methods based on machine learning and deep learning have been proposed in the literature. In this study, we have developed an ensemble Deep Convolutional Neural Network-based classifier to categorize wound images into multiple classes including surgical, diabetic, and venous ulcers. The output classification scores of two classifiers (namely, patch-wise and image-wise) are fed into a Multilayer Perceptron to provide a superior classification performance. A 5-fold cross-validation approach is used to evaluate the proposed method. We obtained maximum and average classification accuracy values of 96.4% and 94.28% for binary and 91.9% and 87.7% for 3-class classification problems. The proposed classifier was compared with some common deep classifiers and showed significantly higher accuracy metrics. We also tested the proposed method on the Medetec wound image dataset, and the accuracy values of 91.2% and 82.9% were obtained for binary and 3-class classifications. The results show that our proposed method can be used effectively as a decision support system in classification of wound images or other related clinical applications.
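The fusion step can be sketched as a small MLP over the concatenated patch-wise and image-wise score vectors. The weights below are random stand-ins for the trained network; only the data flow is from the abstract:

```python
import numpy as np

# Fusing patch-wise and image-wise classification scores with a small
# MLP, as in the paper's ensemble. Weights are untrained stand-ins.
rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mlp_fuse(patch_scores, image_scores, W1, b1, W2, b2):
    x = np.concatenate([patch_scores, image_scores])  # joint score vector
    h = np.maximum(0.0, W1 @ x + b1)                  # ReLU hidden layer
    return softmax(W2 @ h + b2)                       # class probabilities

n_classes = 3  # e.g. surgical, diabetic, venous
W1, b1 = rng.normal(size=(8, 2 * n_classes)), np.zeros(8)
W2, b2 = rng.normal(size=(n_classes, 8)), np.zeros(n_classes)
probs = mlp_fuse(np.array([0.7, 0.2, 0.1]), np.array([0.6, 0.3, 0.1]),
                 W1, b1, W2, b2)
print(probs)  # one probability per wound class, summing to 1
```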
Affiliation(s)
- Behrouz Rostami
- Electrical Engineering Department, University of Wisconsin-Milwaukee, Milwaukee, WI, USA
- D M Anisuzzaman
- Computer Science Department, University of Wisconsin-Milwaukee, Milwaukee, WI, USA
- Chuanbo Wang
- Computer Science Department, University of Wisconsin-Milwaukee, Milwaukee, WI, USA
- Jeffrey Niezgoda
- Advancing the Zenith of Healthcare (AZH) Wound and Vascular Center, Milwaukee, WI, USA
- Zeyun Yu
- Electrical Engineering Department, University of Wisconsin-Milwaukee, Milwaukee, WI, USA; Computer Science Department, University of Wisconsin-Milwaukee, Milwaukee, WI, USA.
35
Jiang M, Ma Y, Guo S, Jin L, Lv L, Han L, An N. Using Machine Learning Technologies in Pressure Injury Management: Systematic Review. JMIR Med Inform 2021; 9:e25704. [PMID: 33688846 PMCID: PMC7991995 DOI: 10.2196/25704] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2020] [Revised: 01/21/2021] [Accepted: 02/05/2021] [Indexed: 12/24/2022] Open
Abstract
Background Pressure injury (PI) is a common and preventable problem, yet it is a challenge for at least two reasons. First, the nurse shortage is a worldwide phenomenon. Second, the majority of nurses have insufficient PI-related knowledge. Machine learning (ML) technologies can contribute to lessening the burden on medical staff by improving the prognosis and diagnostic accuracy of PI. To the best of our knowledge, there is no existing systematic review that evaluates how the current ML technologies are being used in PI management. Objective The objective of this review was to synthesize and evaluate the literature regarding the use of ML technologies in PI management, and identify their strengths and weaknesses, as well as to identify improvement opportunities for future research and practice. Methods We conducted an extensive search on PubMed, EMBASE, Web of Science, Cumulative Index to Nursing and Allied Health Literature (CINAHL), Cochrane Library, China National Knowledge Infrastructure (CNKI), the Wanfang database, the VIP database, and the China Biomedical Literature Database (CBM) to identify relevant articles. Searches were performed in June 2020. Two independent investigators conducted study selection, data extraction, and quality appraisal. Risk of bias was assessed using the Prediction model Risk Of Bias ASsessment Tool (PROBAST). Results A total of 32 articles met the inclusion criteria. Twelve of those articles (38%) reported using ML technologies to develop predictive models to identify risk factors, 11 (34%) reported using them in posture detection and recognition, and 9 (28%) reported using them in image analysis for tissue classification and measurement of PI wounds. These articles presented various algorithms and measured outcomes. The overall risk of bias was judged as high. Conclusions There is an array of emerging ML technologies being used in PI management, and their results in the laboratory show great promise. 
Future research should apply these technologies on a large scale with clinical data to further verify and improve their effectiveness, as well as to improve the methodological quality.
Affiliation(s)
- Mengyao Jiang
- Evidence-based Nursing Center, School of Nursing, Lanzhou University, Lanzhou, China
- Yuxia Ma
- Evidence-based Nursing Center, School of Nursing, Lanzhou University, Lanzhou, China
- Siyi Guo
- Key Laboratory of Knowledge Engineering with Big Data of the Ministry of Education, School of Computer Science and Information Engineering, Hefei University of Technology, Hefei, China
- Liuqi Jin
- Key Laboratory of Knowledge Engineering with Big Data of the Ministry of Education, School of Computer Science and Information Engineering, Hefei University of Technology, Hefei, China
- Lin Lv
- Wound and Ostomy Center, Outpatient Department, Gansu Provincial Hospital, Lanzhou, China
- Lin Han
- Evidence-based Nursing Center, School of Nursing, Lanzhou University, Lanzhou, China; Department of Nursing, Gansu Provincial Hospital, Lanzhou, China
- Ning An
- Key Laboratory of Knowledge Engineering with Big Data of the Ministry of Education, School of Computer Science and Information Engineering, Hefei University of Technology, Hefei, China
36
Silva RHLE, Machado AMC. Automatic measurement of pressure ulcers using Support Vector Machines and GrabCut. Comput Methods Programs Biomed 2021; 200:105867. [PMID: 33261945 DOI: 10.1016/j.cmpb.2020.105867] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/30/2020] [Accepted: 11/17/2020] [Indexed: 05/17/2023]
Abstract
BACKGROUND AND OBJECTIVE Pressure ulcers are regions of trauma caused by a continuous pressure applied to soft tissues between a bony prominence and a hard surface. The manual monitoring of their healing evolution can be achieved by area assessment techniques that include the use of rulers and adhesive labels in direct contact with the injury, being highly inaccurate and subjective. In this paper we present a Support Vector Machine classifier in combination with a modified version of the GrabCut method for the automatic measurement of the area affected by pressure ulcers in digital images. METHODS Three methods of region segmentation using the superpixel strategy were evaluated from which color and texture descriptors were extracted. After the superpixel classification, the GrabCut segmentation method was applied in order to delineate the region affected by the ulcer from the rest of the image. RESULTS Experiments on a set of 105 pressure ulcer images from a public data set resulted in an average accuracy of 96%, sensitivity of 94%, specificity of 97% and precision of 94%. CONCLUSIONS The association of support vector machines with superpixel segmentation outperformed current methods based on deep learning and may be extended to tissue classification.
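The pipeline's first stage extracts color and texture descriptors per superpixel. As a simplification, the sketch below uses square grid blocks in place of true superpixels (a hypothetical stand-in for the paper's superpixel strategies), with mean color and an intensity standard deviation as a texture proxy:

```python
import numpy as np

# Per-region descriptors for an SVM classifier: mean RGB color plus a
# simple texture proxy (intensity std) per block. Grid blocks stand in
# for true superpixels here.
def block_features(image, block=8):
    h, w, _ = image.shape
    feats = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = image[y:y + block, x:x + block]
            mean_rgb = patch.reshape(-1, 3).mean(axis=0)  # color descriptor
            texture = patch.mean(axis=2).std()            # texture proxy
            feats.append(np.concatenate([mean_rgb, [texture]]))
    return np.array(feats)  # one 4-dim descriptor per block

img = np.random.default_rng(2).random((32, 32, 3))
F = block_features(img)
print(F.shape)  # (16, 4): a 4x4 grid of 8-pixel blocks
```

These descriptors would then be fed to a classifier (an SVM in the paper) before GrabCut refines the wound boundary.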
Affiliation(s)
- Rodolfo Herman Lara E Silva
- Graduate Program on Electrical Engineering, Pontifical Catholic University of Minas Gerais, Belo Horizonte, Brazil
- Alexei Manso Correa Machado
- Department of Computer Science, Pontifical Catholic University of Minas Gerais and with the Department of Anatomy and Imaging, School of Medicine, Federal University of Minas Gerais, Belo Horizonte, Brazil.
37
Zahia S, Garcia-Zapirain B, Saralegui I, Fernandez-Ruanova B. Dyslexia detection using 3D convolutional neural networks and functional magnetic resonance imaging. Comput Methods Programs Biomed 2020; 197:105726. [PMID: 32916543 DOI: 10.1016/j.cmpb.2020.105726] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/03/2019] [Accepted: 08/22/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND AND OBJECTIVES Dyslexia is a disorder of neurological origin which affects the learning of those who suffer from it, mainly children, and causes difficulty in reading and writing. When undiagnosed, dyslexia leads to intimidation and frustration of the affected children and also of their family circles. In case no early intervention is given, children may reach high school with serious achievement gaps. Hence, early detection and intervention services for dyslexic students are highly important and recommended in order to support children in developing a positive self-esteem and reaching their maximum academic capacities. This paper presents a new approach for automatic recognition of children with dyslexia using functional magnetic resonance Imaging. METHODS Our proposed system is composed of a sequence of preprocessing steps to retrieve the brain activation areas during three different reading tasks. Conversion to Nifti volumes, adjustment of head motion, normalization and smoothing transformations were performed on the fMRI scans in order to bring all the subject brains into one single model which will enable voxels comparison between each subject. Subsequently, using Statistical Parametric Maps (SPMs), a total of 165 3D volumes containing brain activation of 55 children were created. The classification of these volumes was handled using three parallel 3D Convolutional Neural Network (3D CNN), each corresponding to a brain activation during one reading task, and concatenated in the last two dense layers, forming a single architecture devoted to performing optimized detection of dyslexic brain activation. Additionally, we used 4-fold cross validation method in order to assess the generalizability of our model and control overfitting. RESULTS Our approach has achieved an overall average classification accuracy of 72.73%, sensitivity of 75%, specificity of 71.43%, precision of 60% and an F1-score of 67% in dyslexia detection. 
CONCLUSIONS The proposed system has demonstrated that the recognition of dyslexic children is feasible using deep learning and functional magnetic resonance Imaging when performing phonological and orthographic reading tasks.
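The reported F1-score is consistent with the reported precision and sensitivity, since F1 is their harmonic mean:

```python
# F1 as the harmonic mean of precision and recall; the reported
# precision (60%) and sensitivity/recall (75%) yield the reported 67%.
def f1_score(precision, recall):
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.60, 0.75), 2))  # 0.67
```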
Affiliation(s)
- Sofia Zahia
- eVida research laboratory, University of Deusto, Bilbao 48007, Spain.
- Ibone Saralegui
- Department of Neuroradiology, Osatek, Biocruces-Bizkaia; Galdakao-Usansolo Hospital / Osakidetza, Galdakao 48960, Spain
38
A Systematic Overview of Recent Methods for Non-Contact Chronic Wound Analysis. Appl Sci (Basel) 2020. [DOI: 10.3390/app10217613] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/01/2023]
Abstract
Chronic wounds, or wounds that are not healing properly, are a worldwide health problem that affects the global economy and population. With the aging of the population and the increasing prevalence of obesity and diabetes, the costs of chronic wound healing can be expected to rise even higher. Wound assessment should be fast and accurate in order to reduce possible complications and thereby shorten the wound healing process. Contact methods often used by medical experts have drawbacks that are easily overcome by non-contact methods like image analysis, where wound analysis is fully or partially automated. The two major tasks in image-based wound analysis are segmentation of the wound from the healthy skin and background, and classification of the most important wound tissues such as granulation, fibrin, and necrosis. These tasks are necessary for further assessment such as wound measurement or healing evaluation based on tissue representation. Researchers use various methods and algorithms for image-based wound analysis with the aim of improving accuracy rates and demonstrating the robustness of the proposed methods. Recently, neural networks and deep learning algorithms have driven considerable performance improvements across various fields, which has led to a significant rise in research papers in the field of wound analysis as well. The aim of this paper is to provide an overview of recent methods for non-contact wound analysis that could be used to develop an end-to-end solution for a fully automated wound analysis system, incorporating all stages from data acquisition through segmentation and classification to measurement and healing evaluation.
39
Chino DYT, Scabora LC, Cazzolato MT, Jorge AES, Traina-Jr C, Traina AJM. Segmenting skin ulcers and measuring the wound area using deep convolutional networks. Comput Methods Programs Biomed 2020; 191:105376. [PMID: 32066047 DOI: 10.1016/j.cmpb.2020.105376] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/23/2019] [Revised: 01/17/2020] [Accepted: 01/29/2020] [Indexed: 05/17/2023]
Abstract
BACKGROUND AND OBJECTIVES Bedridden patients presenting chronic skin ulcers often need to be examined at home. Healthcare professionals follow the evolution of the patients' condition by regularly taking pictures of the wounds, as different aspects of the wound can indicate the healing stages of the ulcer, including depth, location, and size. The manual measurement of the wounds' size is often inaccurate, time-consuming, and can also cause discomfort to the patient. In this work, we propose the Automatic Skin Ulcer Region Assessment ASURA framework to accurately segment the wound and automatically measure its size. METHODS ASURA uses an encoder/decoder deep neural network to perform the segmentation, which detects the measurement ruler/tape present in the image and estimates its pixel density. RESULTS Experimental results show that ASURA outperforms the state-of-the-art methods by up to 16% regarding the Dice score, being able to correctly segment the wound with a Dice score higher than 90%. ASURA automatically estimates the pixel density of the images with a relative error of 5%. When using a semi-automatic approach, ASURA was able to estimate the area of the wound in square centimeters with a relative error of 14%. CONCLUSIONS The results show that ASURA is well-suited for the problem of segmenting and automatically measuring skin ulcers.
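ASURA's conversion from a segmented wound's pixel count to square centimeters rests on the pixel density estimated from the ruler detected in the image. The arithmetic can be sketched as (numbers illustrative):

```python
# Wound area in cm^2 from a pixel count and a ruler of known length.
def wound_area_cm2(wound_pixels, ruler_pixels, ruler_cm):
    """ruler_pixels: measured length of the ruler in the image, in pixels;
    ruler_cm: its known physical length."""
    pixels_per_cm = ruler_pixels / ruler_cm
    return wound_pixels / (pixels_per_cm ** 2)

# A 5 cm ruler spanning 500 pixels -> 100 px/cm -> 10,000 px per cm^2:
print(wound_area_cm2(wound_pixels=25_000, ruler_pixels=500, ruler_cm=5.0))  # 2.5
```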
Affiliation(s)
- Daniel Y T Chino
- Institute of Mathematical and Computer Sciences, University of Sao Paulo, Brazil.
- Lucas C Scabora
- Institute of Mathematical and Computer Sciences, University of Sao Paulo, Brazil.
- Mirela T Cazzolato
- Institute of Mathematical and Computer Sciences, University of Sao Paulo, Brazil.
- Ana E S Jorge
- Department of Physical Therapy, Federal University of Sao Carlos, Brazil.
- Caetano Traina-Jr
- Institute of Mathematical and Computer Sciences, University of Sao Paulo, Brazil.
- Agma J M Traina
- Institute of Mathematical and Computer Sciences, University of Sao Paulo, Brazil.
40
Zahia S, Garcia-Zapirain B, Elmaghraby A. Integrating 3D Model Representation for an Accurate Non-Invasive Assessment of Pressure Injuries with Deep Learning. Sensors (Basel) 2020; 20:2933. [PMID: 32455753 PMCID: PMC7294421 DOI: 10.3390/s20102933] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/22/2020] [Revised: 05/17/2020] [Accepted: 05/19/2020] [Indexed: 02/02/2023]
Abstract
Pressure injuries represent a major concern in many nations. These wounds result from prolonged pressure on the skin and mainly occur among elderly and disabled patients. Although invasive methods are the most common way of retrieving quantitative information, they cause significant pain and discomfort to patients and may also increase the risk of infection. Hence, developing non-intrusive methods for the assessment of pressure injuries would provide a highly useful tool for caregivers and a relief for patients. Traditional methods rely on findings retrieved solely from 2D images; bypassing the 3D information deriving from the deep and irregular shape of this type of wound leads to biased measurements. In this paper, we propose an end-to-end system which uses a single 2D image and a 3D mesh of the pressure injury, acquired using the Structure Sensor, and outputs all the necessary findings, such as external segmentation of the wound as well as its real-world measurements (depth, area, volume, major axis, and minor axis). More specifically, a first block composed of a Mask R-CNN model uses the 2D image to output the segmentation of the external boundaries of the wound. Then, a second block matches the 2D and 3D views to segment the wound in the 3D mesh using the segmentation output and generates the aforementioned real-world measurements. Experimental results showed that the proposed framework not only outputs refined segmentation with 87% precision, but also retrieves reliable measurements, which can be used for medical assessment and healing evaluation of pressure injuries.
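One of the real-world measurements, surface area, reduces to summing the areas of the mesh's triangles (half the cross-product norm per triangle). A minimal sketch; the unit-square mesh is illustrative, and the paper's system additionally derives depth, volume, and axes:

```python
import numpy as np

# Surface area of a triangle mesh: each triangle contributes half the
# norm of the cross product of two of its edge vectors.
def mesh_area(vertices, faces):
    v = np.asarray(vertices, dtype=float)
    total = 0.0
    for i, j, k in faces:
        total += 0.5 * np.linalg.norm(np.cross(v[j] - v[i], v[k] - v[i]))
    return total

# A unit square split into two triangles has area 1.0:
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
faces = [(0, 1, 2), (0, 2, 3)]
print(mesh_area(verts, faces))  # 1.0
```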
Affiliation(s)
- Sofia Zahia
- eVIDA Research Group, University of Deusto, 48007 Bilbao, Spain;
- Computer Science and Engineering Department, University of Louisville, Louisville, KY 40292, USA;
- Correspondence: ; Tel.: +34-632-817-043
- Adel Elmaghraby
- Computer Science and Engineering Department, University of Louisville, Louisville, KY 40292, USA;
41
Ohura N, Mitsuno R, Sakisaka M, Terabe Y, Morishige Y, Uchiyama A, Okoshi T, Shinji I, Takushima A. Convolutional neural networks for wound detection: the role of artificial intelligence in wound care. J Wound Care 2020; 28:S13-S24. [PMID: 31600101 DOI: 10.12968/jowc.2019.28.sup10.s13] [Citation(s) in RCA: 38] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/08/2023]
Abstract
OBJECTIVE Telemedicine is an essential support system for clinical settings outside the hospital. Recently, the importance of the model for assessment of telemedicine (MAST) has been emphasised, and the development of an eHealth-supported wound assessment system using artificial intelligence is awaited. This study explored whether wound segmentation of a diabetic foot ulcer (DFU) and a venous leg ulcer (VLU) by a convolutional neural network (CNN) was possible after training on sacral pressure ulcer (PU) data sets, and which CNN architecture was superior at segmentation. METHODS CNNs with different algorithms and architectures were prepared. The four architectures were SegNet, LinkNet, U-Net and U-Net with a VGG16 encoder pre-trained on ImageNet (Unet_VGG16). Each CNN learned from the supervised sacral PU data. RESULTS Among the four architectures, the best results were obtained with U-Net. U-Net demonstrated the second-highest area under the curve (0.997) together with high specificity (0.943) and sensitivity (0.993), the highest values being obtained by Unet_VGG16. U-Net was also considered the most practical architecture, superior to the others in that its segmentation was faster than that of Unet_VGG16. CONCLUSION A U-Net CNN constructed using appropriately supervised data was capable of segmentation with high accuracy. These findings suggest that eHealth wound assessment using CNNs will be of practical use in the future.
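The sensitivity and specificity reported above are pixel-wise measures computed by comparing a predicted binary mask against a ground-truth mask. A minimal sketch of that evaluation (not the authors' code; the function name is ours):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-wise sensitivity and specificity for binary wound masks,
    the kind of metrics used to compare SegNet, LinkNet, U-Net and Unet_VGG16."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)      # wound pixels correctly detected
    tn = np.sum(~pred & ~truth)    # background correctly rejected
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity
```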
Affiliation(s)
- Norihiko Ohura
- 1 Department of Plastic, Reconstructive Surgery, Kyorin University School of Medicine, Tokyo, Japan
- Ryota Mitsuno
- 2 Computer Biomedical Imaging, KYSMO.inc, Nagoya, Japan
- Masanobu Sakisaka
- 1 Department of Plastic, Reconstructive Surgery, Kyorin University School of Medicine, Tokyo, Japan
- Yuta Terabe
- 1 Department of Plastic, Reconstructive Surgery, Kyorin University School of Medicine, Tokyo, Japan
- Yuki Morishige
- 1 Department of Plastic, Reconstructive Surgery, Kyorin University School of Medicine, Tokyo, Japan
- Takumi Okoshi
- 2 Computer Biomedical Imaging, KYSMO.inc, Nagoya, Japan
- Iizaka Shinji
- 3 School of Nutrition, College of Nursing and Nutrition, Shukutoku University, Chiba, Japan
- Akihiko Takushima
- 1 Department of Plastic, Reconstructive Surgery, Kyorin University School of Medicine, Tokyo, Japan
42
Diagnosing Automotive Damper Defects Using Convolutional Neural Networks and Electronic Stability Control Sensor Signals. JOURNAL OF SENSOR AND ACTUATOR NETWORKS 2020. [DOI: 10.3390/jsan9010008] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Chassis system components such as dampers have a significant impact on vehicle stability, driving safety, and driving comfort, so monitoring and diagnosing defects in these components is necessary. Currently, this task relies on the driver's perception of component defects in series-production vehicles, even though model-based approaches exist in the literature. Given the increased availability of data in modern vehicles and advances in the field of deep learning, this paper analyses the performance of Convolutional Neural Networks (CNNs) for the diagnosis of automotive damper defects. To ensure broad applicability of the resulting diagnosis system, only signals from a standard Electronic Stability Control (ESC) system were used: wheel speeds, longitudinal and lateral vehicle acceleration, and yaw rate. Data pre-processing and CNN configuration parameters were systematically analysed with respect to the defect-detection result. The results show that simple Fast Fourier Transform (FFT) pre-processing and configuration parameters yielding small networks are sufficient for a high defect-detection rate.
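The FFT pre-processing step found sufficient here can be sketched as: window one ESC sensor channel and take the magnitude of its real FFT as the feature vector fed to the network. A minimal numpy sketch under those assumptions; the Hann windowing choice is ours, not stated in the abstract:

```python
import numpy as np

def fft_features(signal, fs):
    """Magnitude spectrum of one ESC sensor channel (e.g. a wheel-speed or
    acceleration signal sampled at fs Hz), as a simple FFT pre-processing step."""
    window = np.hanning(len(signal))             # taper to reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(signal * window))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs, spectrum
```

A defect that shifts the suspension's resonance would then show up as a displaced peak in this spectrum, which a small CNN can learn to detect.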
43
Blanco G, Traina AJM, Traina C, Azevedo-Marques PM, Jorge AES, de Oliveira D, Bedo MVN. A superpixel-driven deep learning approach for the analysis of dermatological wounds. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 183:105079. [PMID: 31542688 DOI: 10.1016/j.cmpb.2019.105079] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/06/2019] [Revised: 08/11/2019] [Accepted: 09/10/2019] [Indexed: 06/10/2023]
Abstract
BACKGROUND The image-based identification of distinct tissues within dermatological wounds enhances patient care, since it requires no intrusive evaluations. This manuscript presents an approach, named QTDU, that combines deep learning models with superpixel-driven segmentation methods for assessing the quality of tissues from dermatological ulcers. METHOD QTDU consists of a three-stage pipeline for ulcer segmentation, tissue labeling, and wounded-area quantification. We trained several deep learning models on a real, annotated set of dermatological ulcers to identify wounded superpixels. RESULTS Empirical evaluations on 179,572 superpixels divided into four classes showed QTDU accurately spots wounded tissues (AUC = 0.986, sensitivity = 0.97, and specificity = 0.974) and outperformed machine-learning approaches by up to 8.2% in F1-score through fine-tuning of a ResNet-based model. Last, but not least, experimental evaluations also showed QTDU correctly quantified wounded tissue areas with a 0.089 mean absolute error. CONCLUSIONS Results indicate QTDU is effective for both tissue segmentation and wounded-area quantification. Compared to existing machine-learning approaches, the combination of superpixels and deep learning models outperformed the competitors at strong significance levels.
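QTDU's final stage, wounded-area quantification, reduces to summing the pixels of every superpixel whose predicted class is a wound tissue. A minimal sketch of that idea; the class ids, the superpixel-to-class mapping, and the function name are hypothetical, not taken from the paper:

```python
import numpy as np

def wounded_area(superpixels, predictions, wound_classes=(1, 2, 3)):
    """Fraction of the image covered by wound tissue.

    superpixels  -- 2D int array assigning each pixel a superpixel id
    predictions  -- dict mapping superpixel id to its predicted tissue class
    wound_classes -- class ids counted as wound tissue (hypothetical ids)
    """
    wound_pixels = 0
    for sp_id, cls in predictions.items():
        if cls in wound_classes:
            wound_pixels += np.sum(superpixels == sp_id)
    return wound_pixels / superpixels.size
```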
Affiliation(s)
- Gustavo Blanco
- Institute of Mathematics and Computer Sciences, ICMC/USP, Brazil
- Agma J M Traina
- Institute of Mathematics and Computer Sciences, ICMC/USP, Brazil
- Caetano Traina
- Institute of Mathematics and Computer Sciences, ICMC/USP, Brazil
- Ana E S Jorge
- Department of Physical Therapy, DFisio/UFSCar, Brazil
44
Pressure injury image analysis with machine learning techniques: A systematic review on previous and possible future methods. Artif Intell Med 2020; 102:101742. [DOI: 10.1016/j.artmed.2019.101742] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2018] [Revised: 09/17/2019] [Accepted: 10/18/2019] [Indexed: 01/17/2023]
45
Zhao X, Liu Z, Agu E, Wagh A, Jain S, Lindsay C, Tulu B, Strong D, Kan J. Fine-grained diabetic wound depth and granulation tissue amount assessment using bilinear convolutional neural network. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2019; 7:179151-179162. [PMID: 33777590 PMCID: PMC7996404 DOI: 10.1109/access.2019.2959027] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Diabetes mellitus is a serious chronic disease that affects millions of people worldwide. In patients with diabetes, ulcers occur frequently and heal slowly. Grading and staging of diabetic ulcers is the first step of effective treatment, and wound depth and granulation tissue amount are two important indicators of healing progress. However, wound depths and granulation tissue amounts of different severities can appear visually quite similar, making accurate machine learning classification challenging. In this paper, we adopt a fine-grained classification approach to diabetic wound grading, using a Bilinear CNN (Bi-CNN) architecture to handle highly similar images across five grades. Wound-area extraction, sharpening, resizing and augmentation were used to pre-process images before input to the Bi-CNN, and modifications of the generic Bi-CNN network architecture were explored to improve its performance. Our research also generated a valuable wound dataset: in collaboration with wound experts from the University of Massachusetts Medical School, we collected 1639 diabetic wound images and annotated them with wound depth and granulation tissue grades as classification labels. Deep learning experiments were conducted on this dataset using holdout validation. Comparisons with widely used CNN classification architectures demonstrated that our Bi-CNN fine-grained classification approach outperformed prior work on the task of grading diabetic wounds.
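The core of a Bi-CNN is bilinear pooling: the outer product of two CNN feature maps summed over spatial locations, commonly followed by signed square-root and L2 normalisation. A minimal numpy sketch of that pooling step; the feature maps would come from the two CNN streams, and the normalisation here is the common bilinear-pooling recipe, not necessarily this paper's exact variant:

```python
import numpy as np

def bilinear_pool(fa, fb):
    """Bilinear pooling of two feature maps of shapes (C1, H, W) and (C2, H, W):
    sum of per-location outer products, then signed sqrt and L2 normalisation."""
    c1 = fa.shape[0]
    c2 = fb.shape[0]
    # (C1, H*W) @ (H*W, C2) sums the outer product over all spatial positions.
    x = (fa.reshape(c1, -1) @ fb.reshape(c2, -1).T).ravel()
    x = np.sign(x) * np.sqrt(np.abs(x))         # signed square-root
    return x / (np.linalg.norm(x) + 1e-12)      # L2 normalisation
```

The pooled vector (length C1*C2) captures pairwise feature interactions, which is what makes the representation sensitive to the subtle differences between adjacent wound grades.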
Affiliation(s)
- Xixuan Zhao
- School of Technology, Beijing Forestry University, Beijing, China, 100083
- Ziyang Liu
- Computer Science Department, Worcester Polytechnic Institute, Worcester, MA, USA, 01609
- Emmanuel Agu
- Computer Science Department, Worcester Polytechnic Institute, Worcester, MA, USA, 01609
- Ameya Wagh
- Computer Science Department, Worcester Polytechnic Institute, Worcester, MA, USA, 01609
- Shubham Jain
- Computer Science Department, Worcester Polytechnic Institute, Worcester, MA, USA, 01609
- Clifford Lindsay
- Radiology Department, University of Massachusetts Medical School, Worcester, MA, USA, 01655
- Bengisu Tulu
- Computer Science Department, Worcester Polytechnic Institute, Worcester, MA, USA, 01609
- Diane Strong
- Computer Science Department, Worcester Polytechnic Institute, Worcester, MA, USA, 01609
- Jiangming Kan
- School of Technology, Beijing Forestry University, Beijing, China, 100083