1. Freiche B, Bernardino G, Deleat-Besson R, Clarysse P, Duchateau N. Hierarchical Data Integration With Gaussian Processes: Application to the Characterization of Cardiac Ischemia-Reperfusion Patterns. IEEE Transactions on Medical Imaging 2025;44:1529-1540. [PMID: 40030503] [DOI: 10.1109/TMI.2024.3512175]
Abstract
Cardiac imaging protocols usually result in several types of acquisitions and descriptors extracted from the images. The statistical analysis of such data across a population may be challenging, and can be addressed by fusion techniques within a dimensionality reduction framework. However, directly combining different data types may lead to unfair comparisons (for heterogeneous descriptors) or over-exploitation of information (for strongly correlated modalities). In contrast, physicians progressively consider each type of data based on hierarchies derived from their experience or evidence-based recommendations, an inspiring approach for data fusion strategies. In this paper, we propose a novel methodology for hierarchical data fusion and unsupervised representation learning. It mimics the physicians' approach by progressively integrating different high-dimensional data descriptors according to a known hierarchy. We model this hierarchy with a Hierarchical Gaussian Process Latent Variable Model (GP-LVM), which links the estimated low-dimensional latent representation and high-dimensional observations at each level in the hierarchy, with additional links between consecutive levels of the hierarchy. We demonstrate the relevance of this approach on a dataset of 1726 magnetic resonance image slices from 123 patients revascularized after acute myocardial infarction (MI) (first level in the hierarchy), some of them undergoing reperfusion injury (microvascular obstruction (MVO), second level in the hierarchy). Our experiments demonstrate that our hierarchical model provides consistent data organization across levels of the hierarchy and according to physiological characteristics of the lesions. This allows more relevant statistical analysis of myocardial lesion patterns, and in particular subtle lesions such as MVO.
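As context for the modeling machinery, the GP-LVM objective linking a low-dimensional latent representation to high-dimensional observations can be sketched in a few lines (a generic, minimal sketch with an RBF kernel; the paper's hierarchical coupling between levels is not reproduced here):

```python
import numpy as np

def rbf_kernel(X, lengthscale=1.0, variance=1.0, noise=1e-4):
    # Squared-exponential kernel over latent coordinates X (n x q)
    sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2.0 * X @ X.T
    return variance * np.exp(-0.5 * np.maximum(sq, 0) / lengthscale**2) + noise * np.eye(len(X))

def gplvm_nll(X, Y):
    # Negative log marginal likelihood of observations Y (n x d) given latent
    # positions X -- the objective a GP-LVM minimizes over X (and kernel params).
    n, d = Y.shape
    K = rbf_kernel(X)
    _, logdet = np.linalg.slogdet(K)
    return 0.5 * d * (logdet + n * np.log(2 * np.pi)) + 0.5 * np.trace(np.linalg.solve(K, Y @ Y.T))

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))                                        # latent positions
Y = X @ rng.normal(size=(2, 5)) + 0.01 * rng.normal(size=(20, 5))   # observations
nll_true = gplvm_nll(X, Y)
nll_shuffled = gplvm_nll(rng.permutation(X), Y)  # mismatched latents fit worse
```

Fitting a GP-LVM means minimizing this quantity over the latent positions (and hyperparameters); the hierarchical variant described above additionally links the latent spaces of consecutive levels.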
2. Manoel PZ, Dike IC, Anis H, Yassin N, Wojtara M, Uwishema O. Cardiovascular Imaging in the Era of Precision Medicine: Insights from Advanced Technologies - A Narrative Review. Health Sci Rep 2024;7:e70173. [PMID: 39479287] [PMCID: PMC11522615] [DOI: 10.1002/hsr2.70173]
Abstract
Background and Aims Cardiovascular diseases are responsible for a high mortality rate globally. Precision medicine has emerged as an essential tool for improving cardiovascular disease outcomes. In this context, advanced imaging examinations are fundamental to cardiovascular precision medicine, enabling more accurate diagnoses and customized treatments. This review provides a concise overview of how advanced cardiovascular imaging supports precision medicine, highlighting its benefits, challenges, and future directions. Methods A literature review was carried out using the PubMed and Google Scholar databases, with search strategies combining terms such as precision medicine, cardiovascular diseases, and imaging tests. Results Examinations such as cardiac computed tomography, cardiac magnetic resonance imaging, and cardiac positron emission tomography enable more advanced analyses that diagnose and describe cardiovascular diseases in greater detail. In addition, aggregating imaging data with other omics data allows for more personalized treatment and better characterization of patient profiles. Conclusion Advanced imaging examinations are essential to cardiovascular precision medicine. Although technical and ethical obstacles remain, collaboration among health professionals, together with investments in technology and education, is essential to disseminate cardiovascular precision medicine and thereby improve patient outcomes.
Affiliation(s)
- Poliana Zanotto Manoel: Department of Research and Education, Oli Health Magazine Organization, Kigali, Rwanda; Department of Medicine, Faculty of Medicine, Federal University of Rio Grande, Rio Grande, Rio Grande do Sul, Brazil
- Innocent Chijioke Dike: Department of Research and Education, Oli Health Magazine Organization, Kigali, Rwanda; Department of Medicine, Federal Teaching Hospital Ido-Ekiti, Ido-Ekiti, Ekiti, Nigeria
- Heeba Anis: Department of Research and Education, Oli Health Magazine Organization, Kigali, Rwanda; Department of Medicine, Faculty of Medicine, Deccan College of Medical Sciences, Hyderabad, Telangana, India
- Nour Yassin: Department of Research and Education, Oli Health Magazine Organization, Kigali, Rwanda; Department of Medicine, Faculty of Medicine, Beirut Arab University, Beirut, Lebanon
- Magda Wojtara: Department of Research and Education, Oli Health Magazine Organization, Kigali, Rwanda; Department of Human Genetics, University of Michigan Medical School, Ann Arbor, Michigan, USA
- Olivier Uwishema: Department of Research and Education, Oli Health Magazine Organization, Kigali, Rwanda
3. Deng C, Aldali F, Luo H, Chen H. Regenerative rehabilitation: a novel multidisciplinary field to maximize patient outcomes. Medical Review 2024;4:413-434. [PMID: 39444794] [PMCID: PMC11495474] [DOI: 10.1515/mr-2023-0060]
Abstract
Regenerative rehabilitation is a novel and rapidly developing multidisciplinary field that converges regenerative medicine and rehabilitation science, aiming to maximize the function and independence of disabled patients. While regenerative medicine provides state-of-the-art technologies that shed light on difficult-to-treat diseases, regenerative rehabilitation offers rehabilitation interventions that enhance the positive effects of regenerative medicine. However, regenerative scientists and rehabilitation professionals tend to focus on their own fields without sufficient exposure to advances in the other, a disconnect that has impeded the development of the field. Therefore, this review first introduces cutting-edge technologies such as stem cell technology, tissue engineering, biomaterial science, gene editing, and computer science that accelerate the progress of regenerative medicine, followed by a summary of preclinical studies and examples of clinical investigations that integrate rehabilitative methodologies into regenerative medicine. Challenges in the field are then discussed, and possible solutions are provided for future directions. We aim to provide a platform for regenerative and rehabilitative professionals, as well as clinicians in other areas, to better understand the progress of regenerative rehabilitation, thus contributing to the clinical translation and management of innovative and reliable therapies.
Affiliation(s)
- Chunchu Deng: Department of Rehabilitation Medicine, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Fatima Aldali: Department of Rehabilitation Medicine, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Hongmei Luo: Department of Rehabilitation Medicine, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Hong Chen: Department of Rehabilitation Medicine, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
4. Yilmaz S, Tasyurek M, Amuk M, Celik M, Canger EM. Developing deep learning methods for classification of teeth in dental panoramic radiography. Oral Surg Oral Med Oral Pathol Oral Radiol 2024;138:118-127. [PMID: 37316425] [DOI: 10.1016/j.oooo.2023.02.021]
Abstract
OBJECTIVES We aimed to develop an artificial intelligence-based clinical dental decision-support system using deep-learning methods to reduce diagnostic interpretation error and time and to increase the effectiveness of dental treatment and classification. STUDY DESIGN We compared the performance of two deep-learning methods, You Only Look Once V4 (YOLO-V4) and Faster Region-based Convolutional Neural Network (Faster R-CNN), for tooth classification in dental panoramic radiography to determine which is more successful in terms of accuracy, time, and detection ability. Using a method based on deep-learning models trained on a semantic segmentation task, we analyzed 1200 panoramic radiographs selected retrospectively. In the classification process, our model identified 36 classes: 32 teeth and 4 impacted teeth. RESULTS The YOLO-V4 method achieved a mean 99.90% precision, 99.18% recall, and 99.54% F1 score. The Faster R-CNN method achieved a mean 93.67% precision, 90.79% recall, and 92.21% F1 score. Experimental evaluations showed that YOLO-V4 outperformed Faster R-CNN in the accuracy of predicted teeth, the speed of tooth classification, and the ability to detect impacted and erupted third molars. CONCLUSIONS The YOLO-V4 method outperforms the Faster R-CNN method in accuracy of tooth prediction, speed of detection, and ability to detect impacted and erupted third molars. The proposed deep-learning-based methods can assist dentists in clinical decision making, save time, and reduce the negative effects of stress and fatigue in daily practice.
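The F1 scores quoted above follow directly from the precision/recall pairs as their harmonic mean, which can be checked in a few lines:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reported per-model means from the study, as fractions
yolo_f1 = f1_score(0.9990, 0.9918)
frcnn_f1 = f1_score(0.9367, 0.9079)
print(f"YOLO-V4 F1: {yolo_f1:.2%}")        # ~99.54%
print(f"Faster R-CNN F1: {frcnn_f1:.2%}")  # ~92.21%
```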
Affiliation(s)
- Serkan Yilmaz: Faculty of Dentistry, Department of Oral and Maxillofacial Radiology, Erciyes University, Kayseri, Turkey
- Murat Tasyurek: Department of Computer Engineering, Kayseri University, Kayseri, Turkey
- Mehmet Amuk: Faculty of Dentistry, Department of Oral and Maxillofacial Radiology, Erciyes University, Kayseri, Turkey
- Mete Celik: Department of Computer Engineering, Erciyes University, Kayseri, Turkey
- Emin Murat Canger: Faculty of Dentistry, Department of Oral and Maxillofacial Radiology, Erciyes University, Kayseri, Turkey
5. Fasih-Ahmad S, Wang Z, Mishra Z, Vatanatham C, Clark ME, Swain TA, Curcio CA, Owsley C, Sadda SR, Hu ZJ. Potential Structural Biomarkers in 3D Images Validated by the First Functional Biomarker for Early Age-Related Macular Degeneration - ALSTAR2 Baseline. Invest Ophthalmol Vis Sci 2024;65:1. [PMID: 38300559] [PMCID: PMC10846345] [DOI: 10.1167/iovs.65.2.1]
Abstract
Purpose Lack of valid end points impedes developing therapeutic strategies for early age-related macular degeneration (AMD). Delayed rod-mediated dark adaptation (RMDA) is the first functional biomarker for incident early AMD. The relationship between RMDA and the status of outer retinal bands on optical coherence tomography (OCT) has not been well defined. This study aims to characterize these relationships in early and intermediate AMD. Methods Baseline data from 476 participants were assessed, including eyes with early AMD (n = 138), intermediate AMD (n = 101), and normal aging (n = 237). Participants underwent volume OCT imaging of the macula, and rod intercept time (RIT) was measured. The ellipsoid zone (EZ) and interdigitation zone (IZ) were segmented on all OCT B-scans of the volumes. The area of detectable EZ and IZ and the mean thickness of the IZ within the Early Treatment Diabetic Retinopathy Study (ETDRS) grid were computed, and age-adjusted associations with RIT were assessed using Spearman's correlation coefficient. Results Delayed RMDA (longer RIT) was most strongly associated with less preserved IZ area (r = -0.591; P < 0.001), followed by decreased IZ thickness (r = -0.434; P < 0.001) and EZ area (r = -0.334; P < 0.001). This correlation between RIT and IZ integrity was not apparent when considering normal eyes alone within 1.5 mm of the fovea. Conclusions RMDA is correlated with the status of outer retinal bands in early and intermediate AMD eyes, particularly the status of the IZ. This correlation is consistent with a previous analysis of only foveal B-scans and is biologically plausible given that retinoid availability, involving transfer at the interface attributed to the IZ, is rate-limiting for RMDA.
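Spearman's correlation coefficient, used throughout the results above (before age adjustment), is simply the Pearson correlation of ranks; a minimal sketch that ignores tie handling:

```python
import numpy as np

def spearman_rho(x, y):
    # Spearman's rank correlation: Pearson correlation of the ranks.
    # Minimal sketch -- does not average ranks over ties.
    rx = np.argsort(np.argsort(np.asarray(x)))
    ry = np.argsort(np.argsort(np.asarray(y)))
    return np.corrcoef(rx, ry)[0, 1]

# A strictly decreasing monotonic relationship gives rho = -1
print(spearman_rho([1, 2, 3, 4], [10, 8, 5, 1]))  # -1.0
```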
Affiliation(s)
- Ziyuan Wang: Doheny Eye Institute, Pasadena, California, United States
- Zubin Mishra: Doheny Eye Institute, Pasadena, California, United States
- Mark E Clark: Ophthalmology and Visual Sciences, Heersink School of Medicine, University of Alabama at Birmingham, Birmingham, Alabama, United States
- Thomas A Swain: Ophthalmology and Visual Sciences, Heersink School of Medicine, University of Alabama at Birmingham, Birmingham, Alabama, United States; Epidemiology, School of Public Health, University of Alabama at Birmingham, Birmingham, Alabama, United States
- Christine A Curcio: Ophthalmology and Visual Sciences, Heersink School of Medicine, University of Alabama at Birmingham, Birmingham, Alabama, United States
- Cynthia Owsley: Ophthalmology and Visual Sciences, Heersink School of Medicine, University of Alabama at Birmingham, Birmingham, Alabama, United States
6. Sang Y, McNitt-Gray M, Yang Y, Cao M, Low D, Ruan D. Target-oriented deep learning-based image registration with individualized test-time adaptation. Med Phys 2023;50:7016-7026. [PMID: 37222565] [DOI: 10.1002/mp.16477]
Abstract
BACKGROUND A classic approach in medical image registration is to formulate an optimization problem based on the image pair of interest and seek a deformation vector field (DVF) to minimize the corresponding objective, often iteratively. This has a clear focus on the targeted pair but is typically slow. In contrast, more recent deep-learning-based registration offers a much faster alternative and can benefit from data-driven regularization. However, learning is a process of "fitting" the training cohort, whose image or motion characteristics (or both) may differ from those of the image pair to be tested, which is the ultimate target of registration. Therefore, the generalization gap poses a high risk with direct inference alone. PURPOSE In this study, we propose an individualized adaptation to improve test sample targeting and achieve a synergy of efficiency and performance in registration. METHODS Using a previously developed network with an integrated motion representation prior module as the implementation backbone, we further adapt the trained registration network for image pairs at test time to optimize individualized performance. The adaptation method was tested against various characteristic shifts caused by cross-protocol, cross-platform, and cross-modality settings, with test evaluation performed on lung CBCT, cardiac MRI, and lung MRI, respectively. RESULTS Landmark-based registration errors and motion-compensated image enhancement results demonstrated significantly improved test registration performance from our method, compared to tuned classic B-spline registration and network solutions without adaptation. CONCLUSIONS We have developed a method that synergistically combines the effectiveness of a pre-trained deep network with the target-centric perspective of optimization-based registration to improve performance on individual test data.
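A toy illustration of the test-time ("instance") optimization idea: refine a transformation parameter on the test pair itself by gradient descent on the image-matching objective. This is a purely illustrative 1D translation example, not the paper's network or backbone:

```python
import numpy as np

# Fixed and moving "images": 1D Gaussian blobs; the true translation is 1.3.
x = np.linspace(-5, 5, 201)
gauss = lambda t: np.exp(-0.5 * (x - t) ** 2)
fixed = gauss(1.3)

t = 0.0   # initial estimate (e.g., a pretrained network's prediction)
lr = 0.5
for _ in range(200):
    moving = gauss(t)
    # analytic gradient of the MSE matching objective w.r.t. the shift t
    grad = np.mean(2 * (moving - fixed) * moving * (x - t))
    t -= lr * grad
print(round(t, 2))  # converges near the true shift 1.3
```

The same per-instance refinement logic applies when the "parameter" is a full DVF predicted by a network: the prediction is used as initialization and then optimized further against the test pair's own objective.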
Affiliation(s)
- Yudi Sang: Department of Bioengineering, University of California, Los Angeles, California, USA; Department of Radiation Oncology, University of California, Los Angeles, California, USA
- Michael McNitt-Gray: Department of Radiology, University of California, Los Angeles, California, USA
- Yingli Yang: Department of Radiation Oncology, University of California, Los Angeles, California, USA
- Minsong Cao: Department of Radiation Oncology, University of California, Los Angeles, California, USA
- Daniel Low: Department of Radiation Oncology, University of California, Los Angeles, California, USA
- Dan Ruan: Department of Bioengineering, University of California, Los Angeles, California, USA; Department of Radiation Oncology, University of California, Los Angeles, California, USA
7. Fasih-Ahmad S, Wang Z, Mishra Z, Vatanatham C, Clark ME, Swain TA, Curcio CA, Owsley C, Sadda SR, Hu ZJ. Potential Structural Biomarkers in 3D Images Validated by the First Functional Biomarker for Early Age-Related Macular Degeneration - ALSTAR2 Baseline. medRxiv 2023:2023.09.10.23295309. [PMID: 37745353] [PMCID: PMC10516097] [DOI: 10.1101/2023.09.10.23295309]
Abstract
Purpose While intermediate and late age-related macular degeneration (AMD) have been widely investigated, few studies have focused on the pathophysiologic mechanism of early AMD. Delayed rod-mediated dark adaptation (RMDA) is the first functional biomarker for incident early AMD. The status of outer retinal bands on optical coherence tomography (OCT) may provide potential imaging biomarkers, and our purpose is to investigate the hypothesis that the integrity of the interdigitation zone (IZ) provides insight into the health of photoreceptors and the retinal pigment epithelium (RPE) in early AMD. Methods We establish the structure-function relationships between ellipsoid zone (EZ) integrity and RMDA, and between IZ integrity and RMDA, in a large-scale OCT dataset from eyes with normal aging (n=237), early AMD (n=138), and intermediate AMD (n=101), using a novel deep-learning-derived algorithm (with manual correction when needed) to segment the EZ and IZ on 57,596 OCT B-scans, and using the AdaptDx device to measure RMDA. Results Our data demonstrate that slower RMDA is associated with less preserved EZ area (r = -0.334; p<0.001), less preserved IZ area (r = -0.591; p<0.001), and decreased IZ thickness (r = -0.434; p<0.001). These associations are not apparent when considering normal eyes alone. Conclusions The association between IZ area and RMDA in large-scale data is biologically plausible because retinoid availability and transfer at the interface attributed to the IZ are rate-limiting for RMDA. This study supports the hypothesis that IZ integrity provides insight into the health of photoreceptors and RPE in early AMD and is a potential new imaging biomarker.
8. Tian F, Tian Z, Chen Z, Zhang D, Du S. Surface-GCN: Learning interaction experience for organ segmentation in 3D medical images. Med Phys 2023;50:5030-5044. [PMID: 36738103] [DOI: 10.1002/mp.16280]
Abstract
BACKGROUND Accurate segmentation of organs is of great significance for clinical diagnosis, but it remains difficult due to obscure imaging boundaries caused by tissue adhesion in medical images. Given the continuity of medical image volumes, segmentation on such slices can be inferred from adjacent slices with a clear organ boundary, much as radiologists delineate a boundary by observing adjacent slices. PURPOSE Inspired by the radiologists' delineating procedure, we design an organ segmentation model based on boundary information from adjacent slices, with a human-machine interactive learning strategy to incorporate clinical experience. METHODS We propose an interactive organ segmentation method for medical image volumes based on a Graph Convolution Network (GCN), called Surface-GCN. First, we propose a Surface Feature Extraction Network (SFE-Net) to capture surface features of a target organ, supervised by a Mini-batch Adaptive Surface Matching (MBASM) module. Then, to predict organ boundaries precisely, we design an automatic segmentation module based on a Surface Convolution Unit (SCU), which propagates information on organ surfaces to refine the generated boundaries. In addition, an interactive segmentation module is proposed to learn radiologists' experience of interactive corrections on organ surfaces and thereby reduce interaction clicks. RESULTS We evaluate the proposed method on one prostate MR image dataset and two abdominal multi-organ CT datasets. The experimental results show that our method outperforms other state-of-the-art methods. For prostate segmentation, the proposed method achieves a DSC score of 94.49% on the PROMISE12 test dataset. For abdominal multi-organ segmentation, it achieves DSC scores of 95%, 91%, 95%, and 88% for the left kidney, gallbladder, spleen, and esophagus, respectively. For interactive segmentation, the proposed method requires 5-10 fewer interaction clicks to reach the same accuracy.
CONCLUSIONS To address the medical organ segmentation challenge, we propose a Graph Convolutional Network called Surface-GCN that imitates radiologist interactions and learns clinical experience. On single- and multi-organ segmentation tasks, the proposed method obtains more accurate segmentation boundaries than other state-of-the-art methods.
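The DSC figures quoted in the results are Dice similarity coefficients between binary masks; a minimal sketch of the metric:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred = np.asarray(pred, bool)
    target = np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

a = np.zeros((4, 4), int); a[1:3, 1:3] = 1   # 4-pixel square
b = np.zeros((4, 4), int); b[1:3, 1:4] = 1   # 6-pixel rectangle overlapping it
print(dice_coefficient(a, b))  # 2*4 / (4+6) ≈ 0.8
```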
Affiliation(s)
- Fengrui Tian: School of Software Engineering, Xi'an Jiaotong University, Xi'an, China; Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China
- Zhiqiang Tian: School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
- Zhang Chen: School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
- Dong Zhang: Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China; School of Automation Science and Engineering, Xi'an Jiaotong University, Xi'an, China
- Shaoyi Du: Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China
9. Padmapriya ST, Parthasarathy S. Ethical Data Collection for Medical Image Analysis: a Structured Approach. Asian Bioeth Rev 2023:1-14. [PMID: 37361687] [PMCID: PMC10088772] [DOI: 10.1007/s41649-023-00250-9]
Abstract
Due to advancements in technologies such as data science and artificial intelligence, healthcare research has gained momentum and is generating new findings and predictions on abnormalities, leading to the diagnosis of diseases or disorders in human beings. On one hand, the extensive application of data science to healthcare research is progressing rapidly; on the other, the ethical concerns, adjoining risks, and legal hurdles that data scientists may face slow its progression. Simply put, applying data science to ethically guided healthcare research would be a dream come true. Hence, in this paper, we discuss the current practices, challenges, and limitations of the data collection process during medical image analysis (MIA) conducted as part of healthcare research, and propose an ethical data collection framework to guide data scientists in addressing possible ethical concerns before commencing data analytics on a medical dataset.
Affiliation(s)
- S. T. Padmapriya: Department of Applied Mathematics and Computational Science, Thiagarajar College of Engineering, Madurai, India
- Sudhaman Parthasarathy: Department of Applied Mathematics and Computational Science, Thiagarajar College of Engineering, Madurai, India
10. Afriyie Y, Weyori BA, Opoku AA. A scaling up approach: a research agenda for medical imaging analysis with applications in deep learning. Journal of Experimental & Theoretical Artificial Intelligence 2023. [DOI: 10.1080/0952813x.2023.2165721]
Affiliation(s)
- Yaw Afriyie: Department of Computer Science and Informatics, School of Sciences, University of Energy and Natural Resources, Sunyani, Ghana; Department of Computer Science, Faculty of Information and Communication Technology, SD Dombo University of Business and Integrated Development Studies, Wa, Ghana
- Benjamin A. Weyori: Department of Computer Science and Informatics, School of Sciences, University of Energy and Natural Resources, Sunyani, Ghana
- Alex A. Opoku: Department of Mathematics & Statistics, School of Sciences, University of Energy and Natural Resources, Sunyani, Ghana
11. Bai R, Zhou M. SL-HarDNet: Skin lesion segmentation with HarDNet. Front Bioeng Biotechnol 2023;10:1028690. [PMID: 36686227] [PMCID: PMC9849244] [DOI: 10.3389/fbioe.2022.1028690]
Abstract
Automatic segmentation of skin lesions from dermoscopy is of great significance for the early diagnosis of skin cancer. However, due to the complexity and fuzzy boundaries of skin lesions, automatic segmentation is a challenging task. In this paper, we present a novel skin lesion segmentation network based on HarDNet (SL-HarDNet). We adopt HarDNet as the backbone, which can learn more robust feature representations. Furthermore, we introduce three powerful modules: a cascaded fusion module (CFM), a spatial channel attention module (SCAM), and a feature aggregation module (FAM). Among them, the CFM combines features from different levels and effectively aggregates the semantic and location information of skin lesions. The SCAM captures key spatial information. Cross-level features are effectively fused through the FAM, and the resulting high-level semantic position features are reintegrated with the features from the CFM to improve the segmentation performance of the model. We evaluate on the challenge datasets ISIC-2016 & PH2 and ISIC-2018, extensively comparing against state-of-the-art skin lesion segmentation methods. Experiments show that SL-HarDNet consistently outperforms the other segmentation methods, achieving state-of-the-art performance.
Affiliation(s)
- Ruifeng Bai: Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun, China; University of Chinese Academy of Sciences, Beijing, China
- Mingwei Zhou: Department of Dermatology, China-Japan Union Hospital of Jilin University, Changchun, China
12. Tian JR, Wang Y, Chen ZD, Luo X, Xu XS. Diagnose Like Doctors: Weakly-Supervised Fine-Grained Classification of Breast Cancer. ACM Transactions on Intelligent Systems and Technology 2022. [DOI: 10.1145/3572033]
Abstract
Breast cancer is the most common cancer in women, so accurate and timely diagnosis is very important. Some computer-aided diagnosis models based on pathological images have been proposed for this task, but several issues still need to be addressed. For example, most deep-learning-based models suffer from a lack of interpretability, and some cannot fully exploit the information in medical data, e.g., the hierarchical label structure and the scattered distribution of target objects. To address these issues, we propose a weakly-supervised fine-grained medical image classification method for breast cancer diagnosis, DLD-Net for short. It simulates the diagnostic procedure of pathologists through multiple attention-guided cropping and dropping operations, giving it good clinical interpretability. Moreover, it not only exploits the global information of a whole image but also mines critical local information by generating and selecting critical regions from the image, so that subtle discriminating information hidden in scattered regions can be exploited. In addition, we design a novel hierarchical cross-entropy loss to utilize the hierarchical label information in medical images, making the classification results more discriminative. Furthermore, DLD-Net is a weakly-supervised network that can be trained end-to-end without any additional region annotations. Extensive experimental results on three benchmark datasets demonstrate that DLD-Net achieves good results and outperforms some state-of-the-art methods.
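One plausible form of a hierarchical cross-entropy is a weighted sum of a fine-level cross-entropy and a coarse-level cross-entropy computed on aggregated fine-class probabilities. The sketch below is hypothetical (the weighting `alpha` and the aggregation scheme are assumptions; the paper's exact formulation may differ):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def hierarchical_ce(logits, fine_label, fine_to_coarse, alpha=0.5):
    # Two-level loss: cross-entropy on fine classes plus cross-entropy on
    # coarse classes, where coarse probabilities sum their fine children.
    p_fine = softmax(logits)
    loss_fine = -np.log(p_fine[fine_label])
    n_coarse = max(fine_to_coarse) + 1
    p_coarse = np.zeros(n_coarse)
    for f, c in enumerate(fine_to_coarse):
        p_coarse[c] += p_fine[f]
    loss_coarse = -np.log(p_coarse[fine_to_coarse[fine_label]])
    return alpha * loss_fine + (1 - alpha) * loss_coarse

# 4 fine classes grouped into 2 coarse classes (e.g., benign vs. malignant)
loss = hierarchical_ce(np.array([2.0, 0.1, -1.0, 0.3]), fine_label=0,
                       fine_to_coarse=[0, 0, 1, 1])
```

Because the coarse probability is always at least the fine probability of the true class, the coarse term is never larger than the fine term; it rewards predictions that are wrong at the fine level but right at the coarse level.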
13. Faber BG, Ebsim R, Saunders FR, Frysz M, Lindner C, Gregory JS, Aspden RM, Harvey NC, Davey Smith G, Cootes T, Tobias JH. A novel semi-automated classifier of hip osteoarthritis on DXA images shows expected relationships with clinical outcomes in UK Biobank. Rheumatology (Oxford) 2022;61:3586-3595. [PMID: 34919677] [PMCID: PMC9434243] [DOI: 10.1093/rheumatology/keab927]
Abstract
OBJECTIVE Conventional scoring methods for radiographic hip OA (rHOA) are subjective and show inconsistent relationships with clinical outcomes. To provide a more objective rHOA scoring method, we aimed to develop a semi-automated classifier based on DXA images and confirm its relationships with clinical outcomes. METHODS Hip DXAs in UK Biobank (UKB) were marked up for osteophyte area, from which acetabular, superior and inferior femoral head osteophyte grades were derived. Joint space narrowing (JSN) grade was obtained automatically from minimum joint space width (mJSW) measures. Clinical outcomes related to rHOA comprised hip pain, hospital-diagnosed OA (HES OA) and total hip replacement. Logistic regression and Cox proportional hazards modelling were used to examine associations between overall rHOA grade (0-4; derived by combining osteophyte and JSN grades) and the clinical outcomes. RESULTS A total of 40,340 individuals were included in the study (mean age 63.7), of whom 81.2% had no evidence of rHOA, while 18.8% had grade ≥1 rHOA. Grade ≥1 osteophytes at each location and JSN were associated with hip pain, HES OA and total hip replacement. Associations with all three clinical outcomes increased progressively with rHOA grade; grade 4 rHOA and total hip replacement showed the strongest association [57.70 (38.08-87.44)]. CONCLUSIONS Our novel semi-automated tool provides a useful means of classifying rHOA on hip DXAs, given its strong and progressive relationships with clinical outcomes. These findings suggest DXA scanning can be used to classify rHOA in large DXA-based cohort studies, supporting further research, with the future potential for population-based screening.
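In the simplest 2x2 case, the logistic-regression associations reported above reduce to an odds ratio with a Wald confidence interval. A sketch with made-up counts (not the study's data, purely to illustrate the arithmetic):

```python
import numpy as np

# Hypothetical 2x2 table: exposure (rHOA grade >= 1) vs. outcome (hip pain).
#                      outcome  no outcome
exposed   = np.array([300, 1200])   # rHOA grade >= 1
unexposed = np.array([600, 6000])   # rHOA grade 0

odds_ratio = (exposed[0] / exposed[1]) / (unexposed[0] / unexposed[1])
# Wald 95% CI on the log odds ratio
se = np.sqrt(1/exposed[0] + 1/exposed[1] + 1/unexposed[0] + 1/unexposed[1])
lo, hi = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se)
print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # OR = 2.50
```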
Affiliation(s)
- Benjamin G Faber: Musculoskeletal Research Unit; Medical Research Council Integrative Epidemiology Unit, University of Bristol, Bristol
- Raja Ebsim: Division of Informatics, Imaging and Data Science, The University of Manchester, Manchester
- Fiona R Saunders: Centre for Arthritis and Musculoskeletal Health, University of Aberdeen, Aberdeen
- Monika Frysz: Musculoskeletal Research Unit; Medical Research Council Integrative Epidemiology Unit, University of Bristol, Bristol
- Claudia Lindner: Division of Informatics, Imaging and Data Science, The University of Manchester, Manchester
- Jennifer S Gregory: Centre for Arthritis and Musculoskeletal Health, University of Aberdeen, Aberdeen
- Richard M Aspden: Centre for Arthritis and Musculoskeletal Health, University of Aberdeen, Aberdeen
- Nicholas C Harvey: Medical Research Council Lifecourse Epidemiology Unit, University of Southampton, Southampton, UK
- George Davey Smith: Medical Research Council Integrative Epidemiology Unit, University of Bristol, Bristol
- Timothy Cootes: Division of Informatics, Imaging and Data Science, The University of Manchester, Manchester
- Jonathan H Tobias: Musculoskeletal Research Unit; Medical Research Council Integrative Epidemiology Unit, University of Bristol, Bristol
|
14
|
Performance Analysis of Supervised Machine Learning Algorithms for Automatized Radiographical Classification of Maxillary Third Molar Impaction. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12136740] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/10/2022]
Abstract
Background: Oro-antral communication (OAC) is a common complication following the extraction of upper molar teeth. The Archer and the Root Sinus (RS) systems can be used to classify impacted teeth in panoramic radiographs. The Archer classes B-D and the Root Sinus classes III, IV have been associated with an increased risk of OAC following tooth extraction in the upper molar region. In our previous study, we found that panoramic radiographs are not reliable for predicting OAC. This study aimed to (1) determine the feasibility of automating the classification (Archer/RS classes) of impacted teeth from panoramic radiographs, (2) determine the distribution of OAC stratified by classification system classes for the purposes of decision tree construction, and (3) determine the feasibility of automating the prediction of OAC utilizing the mentioned classification systems. Methods: We utilized multiple supervised pre-trained machine learning models (VGG16, ResNet50, Inceptionv3, EfficientNet, MobileNetV2), one custom-made convolutional neural network (CNN) model, and a Bag of Visual Words (BoVW) technique to evaluate their performance in predicting the clinical classification systems RS and Archer from panoramic radiographs (Aim 1). We then used Chi-square Automatic Interaction Detectors (CHAID) to determine the distribution of OAC stratified by the Archer/RS classes to introduce a decision tree for simple use in clinics (Aim 2). Lastly, we tested the ability of a multilayer perceptron artificial neural network (MLP) and a radial basis function neural network (RBNN) to predict OAC based on the high-risk classes RS III, IV, and Archer B-D (Aim 3). Results: We achieved accuracies of up to 0.771 for EfficientNet and MobileNetV2 when examining the Archer classification. For the AUC, we obtained values of up to 0.902 for our custom-made CNN. In comparison, the detection of the RS classification achieved accuracies of up to 0.792 for the BoVW and an AUC of up to 0.716 for our custom-made CNN. Overall, the Archer classification was detected more reliably than the RS classification when considering all algorithms. CHAID predicted 77.4% correctness for the Archer classification and 81.4% for the RS classification. MLP (AUC: 0.590) and RBNN (AUC: 0.590) for the Archer classification, as well as MLP (AUC: 0.638) and RBNN (AUC: 0.630) for the RS classification, did not show sufficient predictive capability for OAC. Conclusions: The results reveal that impacted teeth can be classified using panoramic radiographs (best AUC: 0.902), and the classification systems can be stratified according to their relationship to OAC (81.4% correct for RS classification). However, the Archer and RS classes did not achieve satisfactory AUCs for predicting OAC (best AUC: 0.638). Additional research is needed to validate the results externally and to develop a reliable risk stratification tool based on the present findings.
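CHAID builds its decision tree from repeated chi-square tests of independence. As a hedged sketch (not the authors' implementation), the Pearson statistic underlying such splits can be computed from a contingency table such as OAC counts stratified by class; the table values below are invented for illustration.

```python
def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table
    given as a list of rows of observed counts."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = rows[i] * cols[j] / total  # expected count under independence
            stat += (obs - exp) ** 2 / exp
    return stat
```

A perfectly independent table yields a statistic of 0; larger values indicate the candidate split separates outcomes more strongly, which is what CHAID exploits when choosing splits.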
|
15
|
Inan MSK, Alam FI, Hasan R. Deep integrated pipeline of segmentation guided classification of breast cancer from ultrasound images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103553] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
|
16
|
S. Akinbo R, A. Daramola O. Ensemble Machine Learning Algorithms for Prediction and Classification of Medical Images. ARTIF INTELL 2021. [DOI: 10.5772/intechopen.100602] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The employment of machine learning algorithms in disease classification has evolved as a precision medicine for scientific innovation. The geometric growth in various machine learning systems has paved the way for more research in the medical imaging process. This research aims to promote the development of machine learning algorithms for the classification of medical images. Automated classification of medical images is a fascinating application of machine learning and offers the possibility of higher predictability and accuracy. The technological advancement in the processing of medical imaging will help to reduce the complexities of diseases and some existing constraints will be greatly minimized. This research exposes the main ensemble learning techniques as it covers the theoretical background of machine learning, applications, comparison of machine learning and deep learning, ensemble learning with reviews of state-of-the-art literature, framework, and analysis. The work extends to medical image types, applications, benefits, and operations. We proposed the application of the ensemble machine learning approach in the classification of medical images for better performance and accuracy. The integration of advanced technology in clinical imaging will help in the prompt classification, prediction, early detection, and a better interpretation of medical images; this will, in turn, improve the quality of life and expand the clinical bearing for machine learning applications.
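As a minimal sketch of the ensemble idea this chapter promotes (the function name and toy classifiers are invented, not from the chapter), a hard-voting ensemble combines the predictions of several base classifiers:

```python
from collections import Counter

def majority_vote(classifiers, x):
    """Hard-voting ensemble: each base classifier predicts a label
    for input x, and the most common label wins."""
    votes = [clf(x) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]
```

Real ensembles (bagging, boosting, stacking) additionally control how the base models are trained, but the final combination step is often just such a vote or a weighted average of probabilities.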
|
17
|
Liang G, Xing X, Liu L, Zhang Y, Ying Q, Lin AL, Jacobs N. Alzheimer's Disease Classification Using 2D Convolutional Neural Networks. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:3008-3012. [PMID: 34891877 DOI: 10.1109/embc46164.2021.9629587] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Alzheimer's disease (AD) is a non-treatable and non-reversible disease that affects about 6% of people who are 65 and older. Brain magnetic resonance imaging (MRI) is a pseudo-3D imaging technology that is widely used for AD diagnosis. Convolutional neural networks with 3D kernels (3D CNNs) are often the default choice for deep learning based MRI analysis. However, 3D CNNs are usually computationally costly and data-hungry. Such disadvantages pose a barrier to using modern deep learning techniques in the medical imaging domain, in which the amount of data available for training is usually limited. In this work, we propose three approaches that leverage 2D CNNs on 3D MRI data. We test the proposed methods on the Alzheimer's Disease Neuroimaging Initiative dataset across two popular 2D CNN architectures. The evaluation results show that the proposed method improves the model performance on AD diagnosis by 8.33% accuracy or 10.11% auROC compared with the ResNet-based 3D CNN model, while significantly reducing the training time by over 89%. We also discuss the potential causes for performance improvement and the limitations. We believe this work can serve as a strong baseline for future researchers.
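As a sketch of the general idea of feeding 3D MRI to 2D CNNs (an illustration of the preprocessing step, not the authors' specific three approaches), a volume can be split into intensity-normalized 2D axial slices:

```python
import numpy as np

def axial_slices(volume):
    """Split a 3D MRI volume of shape (D, H, W) into a list of 2D axial
    slices, each min-max normalized to [0, 1] for a 2D CNN to consume."""
    slices = []
    for k in range(volume.shape[0]):
        s = volume[k].astype(np.float32)
        rng = s.max() - s.min()
        # Guard against constant (e.g. all-background) slices.
        slices.append((s - s.min()) / rng if rng > 0 else np.zeros_like(s))
    return slices
```

Per-slice predictions then have to be aggregated back to a subject-level decision, which is where approaches like the paper's differ.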
|
18
|
Wang L, Guo D, Wang G, Zhang S. Annotation-Efficient Learning for Medical Image Segmentation Based on Noisy Pseudo Labels and Adversarial Learning. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:2795-2807. [PMID: 33370237 DOI: 10.1109/tmi.2020.3047807] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Although deep learning has achieved state-of-the-art performance for medical image segmentation, its success relies on a large set of manually annotated images for training that are expensive to acquire. In this paper, we propose an annotation-efficient learning framework for segmentation tasks that avoids annotations of training images, where we use an improved Cycle-Consistent Generative Adversarial Network (GAN) to learn from a set of unpaired medical images and auxiliary masks obtained either from a shape model or public datasets. We first use the GAN to generate pseudo labels for our training images under the implicit high-level shape constraint represented by a Variational Auto-encoder (VAE)-based discriminator with the help of the auxiliary masks, and build a Discriminator-guided Generator Channel Calibration (DGCC) module which employs our discriminator's feedback to calibrate the generator for better pseudo labels. To learn from the pseudo labels that are noisy, we further introduce a noise-robust iterative learning method using noise-weighted Dice loss. We validated our framework in two situations: objects with a simple shape model like optic disc in fundus images and fetal head in ultrasound images, and complex structures like lung in X-Ray images and liver in CT images. Experimental results demonstrated that 1) Our VAE-based discriminator and DGCC module help to obtain high-quality pseudo labels. 2) Our proposed noise-robust learning method can effectively overcome the effect of noisy pseudo labels. 3) The segmentation performance of our method without using annotations of training images is close or even comparable to that of learning from human annotations.
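The exact form of the paper's noise-weighted Dice loss is not given in the abstract; one plausible sketch, under the assumption that each pixel carries a confidence weight in [0, 1] so that likely-noisy pseudo-label pixels contribute less, is:

```python
import numpy as np

def noise_weighted_dice_loss(pred, label, weights, eps=1e-6):
    """Soft Dice loss with per-pixel noise weights.

    pred: predicted foreground probabilities, label: (pseudo) labels,
    weights: per-pixel confidence in [0, 1]. All arrays share one shape."""
    w = weights.ravel()
    p = pred.ravel()
    g = label.ravel()
    inter = np.sum(w * p * g)
    denom = np.sum(w * p) + np.sum(w * g)
    return 1.0 - 2.0 * inter / (denom + eps)
```

With uniform weights this reduces to the ordinary soft Dice loss: 0 for a perfect prediction, 1 when prediction and label are disjoint.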
|
19
|
Rodrigues JA, Krois J, Schwendicke F. Demystifying artificial intelligence and deep learning in dentistry. Braz Oral Res 2021; 35:e094. [PMID: 34406309 DOI: 10.1590/1807-3107bor-2021.vol35.0094] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2021] [Accepted: 07/05/2021] [Indexed: 11/21/2022] Open
Abstract
Artificial intelligence (AI) is a general term used to describe the development of computer systems which can perform tasks that normally require human cognition. Machine learning (ML) is one subfield of AI, where computers learn rules from data, capturing its intrinsic statistical patterns and structures. Neural networks (NNs) have been increasingly employed for ML on complex data. The application of multilayered NNs is referred to as "deep learning", which has been recently investigated in dentistry. Convolutional neural networks (CNNs) are mainly used for processing large and complex imagery data, as they are able to extract image features like edges, corners, shapes, and macroscopic patterns using layers of filters. CNN algorithms make it possible to perform tasks such as image classification, object detection and segmentation. The literature involving AI in dentistry has increased rapidly, so methodological guidance for designing, conducting and reporting studies must be rigorously followed, including the improvement of datasets. The limited interaction between the dental field and the technical disciplines, however, remains a hurdle for applicable dental AI. Similarly, dental users must understand why and how AI applications work in order to appraise their decisions critically. Generalizable and robust AI applications will eventually prove helpful for clinicians and patients alike.
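The filtering operation by which CNNs extract edges and corners can be sketched as a valid-mode cross-correlation (the operation CNN libraries actually implement under the name "convolution"); the horizontal-difference kernel below is chosen purely for illustration and responds to vertical edges.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D cross-correlation, as used in CNN layers.
    Slides the kernel over the image and sums elementwise products."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out
```

Applied to an image with a vertical intensity step, the output is nonzero exactly at the edge location; a trained CNN learns many such kernels, stacked in layers, instead of hand-designing them.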
Affiliation(s)
- Jonas Almeida Rodrigues: Universidade Federal do Rio Grande do Sul - UFRGS, School of Dentistry, Department of Surgery and Orthopedics, Porto Alegre, RS, Brazil
- Joachim Krois: Charité - Universitätsmedizin Berlin, Digital Health and Health Services Research, Department of Oral Diagnostics, Berlin, Germany
- Falk Schwendicke: Charité - Universitätsmedizin Berlin, Digital Health and Health Services Research, Department of Oral Diagnostics, Berlin, Germany
|
20
|
Sermesant M, Delingette H, Cochet H, Jaïs P, Ayache N. Applications of artificial intelligence in cardiovascular imaging. Nat Rev Cardiol 2021; 18:600-609. [PMID: 33712806 DOI: 10.1038/s41569-021-00527-2] [Citation(s) in RCA: 73] [Impact Index Per Article: 18.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 02/08/2021] [Indexed: 01/31/2023]
Abstract
Research into artificial intelligence (AI) has made tremendous progress over the past decade. In particular, the AI-powered analysis of images and signals has reached human-level performance in many applications owing to the efficiency of modern machine learning methods, in particular deep learning using convolutional neural networks. Research into the application of AI to medical imaging is now very active, especially in the field of cardiovascular imaging because of the challenges associated with acquiring and analysing images of this dynamic organ. In this Review, we discuss the clinical questions in cardiovascular imaging that AI can be used to address and the principal methodological AI approaches that have been developed to solve the related image analysis problems. Some approaches are purely data-driven and rely mainly on statistical associations, whereas others integrate anatomical and physiological information through additional statistical, geometric and biophysical models of the human heart. In a structured manner, we provide representative examples of each of these approaches, with particular attention to the underlying computational imaging challenges. Finally, we discuss the remaining limitations of AI approaches in cardiovascular imaging (such as generalizability and explainability) and how they can be overcome.
Affiliation(s)
- Hubert Cochet: IHU Liryc, CHU Bordeaux, Université Bordeaux, Inserm 1045, Pessac, France
- Pierre Jaïs: IHU Liryc, CHU Bordeaux, Université Bordeaux, Inserm 1045, Pessac, France
|
21
|
Zerka F, Urovi V, Bottari F, Leijenaar RTH, Walsh S, Gabrani-Juma H, Gueuning M, Vaidyanathan A, Vos W, Occhipinti M, Woodruff HC, Dumontier M, Lambin P. Privacy preserving distributed learning classifiers - Sequential learning with small sets of data. Comput Biol Med 2021; 136:104716. [PMID: 34364262 DOI: 10.1016/j.compbiomed.2021.104716] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2021] [Revised: 07/16/2021] [Accepted: 07/28/2021] [Indexed: 01/09/2023]
Abstract
BACKGROUND Artificial intelligence (AI) typically requires a significant amount of high-quality data to build reliable models, where gathering enough data within a single institution can be particularly challenging. In this study we investigated the impact of using sequential learning to exploit very small, siloed sets of clinical and imaging data to train AI models. Furthermore, we evaluated the capacity of such models to achieve equivalent performance when compared to models trained with the same data over a single centralized database. METHODS We propose a privacy-preserving distributed learning framework, learning sequentially from each dataset. The framework is applied to three machine learning algorithms: Logistic Regression, Support Vector Machines (SVM), and Perceptron. The models were evaluated using four open-source datasets (Breast cancer, Indian liver, NSCLC-Radiomics dataset, and Stage III NSCLC). FINDINGS The proposed framework ensured a comparable predictive performance against a centralized learning approach. Pairwise DeLong tests showed no significant difference between the compared pairs for each dataset. INTERPRETATION Distributed learning helps preserve medical data privacy. We foresee this technology will increase the number of collaborative opportunities to develop robust AI, becoming the default solution in scenarios where collecting enough data from a single reliable source is logistically impossible. Distributed sequential learning provides privacy-preserving means for institutions with small but clinically valuable datasets to collaboratively train predictive AI while preserving the privacy of their patients. Such models perform similarly to models that are built on a larger central dataset.
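The sequential-learning idea can be sketched as a model passed from silo to silo, so that only the weights, never the patient data, leave an institution. The toy below uses a plain perceptron (one of the three algorithms the study evaluated) on invented 1-D data; the function names and training schedule are assumptions, not the authors' framework.

```python
def train_sequential(silos, lr=0.1, epochs=20):
    """Train one perceptron by visiting each institution's private
    dataset in sequence. Each silo is (X, y) with labels in {0, 1};
    only (w, b) travel between silos."""
    n = len(silos[0][0][0])
    w, b = [0.0] * n, 0.0
    for X, y in silos:
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                act = sum(wj * xj for wj, xj in zip(w, xi)) + b
                err = yi - (1 if act >= 0 else 0)
                if err:  # classic perceptron update on mistakes only
                    w = [wj + lr * err * xj for wj, xj in zip(w, xi)]
                    b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else 0
```

A known caveat of purely sequential schemes, which the paper's evaluation addresses empirically, is that later silos can overwrite what was learned from earlier ones ("catastrophic forgetting") when the silo distributions differ.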
Affiliation(s)
- Fadila Zerka: The D-Lab, Department of Precision Medicine, GROW - School for Oncology, Maastricht University, Maastricht, the Netherlands; Radiomics (Oncoradiomics SA), Liège, Belgium
- Visara Urovi: Institute of Data Science (IDS), Maastricht University, the Netherlands
- Sean Walsh: Radiomics (Oncoradiomics SA), Liège, Belgium
- Akshayaa Vaidyanathan: The D-Lab, Department of Precision Medicine, GROW - School for Oncology, Maastricht University, Maastricht, the Netherlands; Radiomics (Oncoradiomics SA), Liège, Belgium
- Wim Vos: Radiomics (Oncoradiomics SA), Liège, Belgium
- Henry C Woodruff: The D-Lab, Department of Precision Medicine, GROW - School for Oncology, Maastricht University, Maastricht, the Netherlands; Department of Radiology and Nuclear Medicine, Maastricht University Medical Centre+, Maastricht, the Netherlands
- Michel Dumontier: Institute of Data Science (IDS), Maastricht University, the Netherlands
- Philippe Lambin: The D-Lab, Department of Precision Medicine, GROW - School for Oncology, Maastricht University, Maastricht, the Netherlands; Department of Radiology and Nuclear Medicine, Maastricht University Medical Centre+, Maastricht, the Netherlands
|
22
|
Zinchuk V, Grossenbacher-Zinchuk O. Machine Learning for Analysis of Microscopy Images: A Practical Guide. ACTA ACUST UNITED AC 2021; 86:e101. [PMID: 31904918 DOI: 10.1002/cpcb.101] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
Abstract
The explosive growth of machine learning has provided scientists with insights into data in ways unattainable using prior research techniques. It has allowed the detection of biological features that were previously unrecognized and overlooked. However, because machine-learning methodology originates from informatics, many cell biology labs have experienced difficulties in implementing this approach. In this article, we target the rapidly expanding audience of cell and molecular biologists interested in exploiting machine learning for analysis of their research. We discuss the advantages of employing machine learning with microscopy approaches and describe the machine-learning pipeline. We also give practical guidelines for building models of cell behavior using machine learning. We conclude with an overview of the tools required for model creation, and share advice on their use. © 2020 by John Wiley & Sons, Inc.
Affiliation(s)
- Vadim Zinchuk: Department of Neurobiology and Anatomy, Kochi University Faculty of Medicine, Kochi, Japan
|
23
|
Srivastava A, Jain S, Miranda R, Patil S, Pandya S, Kotecha K. Deep learning based respiratory sound analysis for detection of chronic obstructive pulmonary disease. PeerJ Comput Sci 2021; 7:e369. [PMID: 33817019 PMCID: PMC7959628 DOI: 10.7717/peerj-cs.369] [Citation(s) in RCA: 42] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2020] [Accepted: 01/03/2021] [Indexed: 05/27/2023]
Abstract
In recent times, technologies such as machine learning and deep learning have played a vital role in providing assistive solutions to a medical domain's challenges. They also improve predictive accuracy for early and timely disease detection using medical imaging and audio analysis. Due to the scarcity of trained human resources, medical practitioners are welcoming such technology assistance as it provides a helping hand to them in coping with more patients. Apart from critical health diseases such as cancer and diabetes, the impact of respiratory diseases is also gradually on the rise and is becoming life-threatening for society. The early diagnosis and immediate treatment are crucial in respiratory diseases, and hence the audio of the respiratory sounds is proving very beneficial along with chest X-rays. The presented research work aims to apply Convolutional Neural Network based deep learning methodologies to assist medical experts by providing a detailed and rigorous analysis of the medical respiratory audio data for Chronic Obstructive Pulmonary Disease detection. In the conducted experiments, we have used features from the Librosa audio analysis library, such as MFCC, Mel-Spectrogram, Chroma, Chroma (Constant-Q) and Chroma CENS. The presented system could also interpret the severity of the disease identified, such as mild, moderate, or acute. The investigation results validate the success of the proposed deep learning approach. The system classification accuracy has been enhanced to an ICBHI score of 93%. Furthermore, in the conducted experiments, we have applied K-fold Cross-Validation with ten splits to optimize the performance of the presented deep learning approach.
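The ten-split cross-validation mentioned above can be sketched as plain index partitioning; this is a generic illustration (a real pipeline would also shuffle and stratify, and the function name is an assumption):

```python
def k_fold_indices(n, k):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation
    over n samples, distributing any remainder across the first folds."""
    idx = list(range(n))
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in sizes:
        test = idx[start:start + size]
        train = idx[:start] + idx[start + size:]
        yield train, test
        start += size
```

Every sample appears in exactly one test fold, so the k per-fold scores average into an estimate of generalization performance that uses all the data.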
Affiliation(s)
- Arpan Srivastava: CS&IT Dept, Symbiosis Institute of Technology, Symbiosis International (Deemed University), Pune, Maharashtra, India
- Sonakshi Jain: CS&IT Dept, Symbiosis Institute of Technology, Symbiosis International (Deemed University), Pune, Maharashtra, India
- Ryan Miranda: CS&IT Dept, Symbiosis Institute of Technology, Symbiosis International (Deemed University), Pune, Maharashtra, India
- Shruti Patil: Symbiosis Centre for Applied Artificial Intelligence, Symbiosis International (Deemed University), Pune, Maharashtra, India
- Sharnil Pandya: Symbiosis Centre for Applied Artificial Intelligence, Symbiosis International (Deemed University), Pune, Maharashtra, India
- Ketan Kotecha: Symbiosis Centre for Applied Artificial Intelligence, Symbiosis International (Deemed University), Pune, Maharashtra, India
|
24
|
Ahmad R. Reviewing the relationship between machines and radiology: the application of artificial intelligence. Acta Radiol Open 2021; 10:2058460121990296. [PMID: 33623711 PMCID: PMC7876935 DOI: 10.1177/2058460121990296] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2020] [Accepted: 01/07/2021] [Indexed: 12/13/2022] Open
Abstract
Background The scope and productivity of artificial intelligence applications in health science and medicine, particularly in medical imaging, are rapidly progressing, with relatively recent developments in big data and deep learning and increasingly powerful computer algorithms. Accordingly, there are a number of opportunities and challenges for the radiological community. Purpose To review the challenges and barriers experienced in diagnostic radiology on the basis of the key clinical applications of machine learning techniques. Material and Methods Studies published in 2010-2019 were selected that report on the efficacy of machine learning models. A single contingency table was selected for each study to report the highest accuracy of radiology professionals and machine learning algorithms, and a meta-analysis of studies was conducted based on contingency tables. Results The specificity for all the deep learning models ranged from 39% to 100%, whereas sensitivity ranged from 85% to 100%. The pooled sensitivity and specificity were 89% and 85% for the deep learning algorithms for detecting abnormalities, compared to 75% and 91% for radiology experts, respectively. The pooled specificity and sensitivity for comparison between radiology professionals and deep learning algorithms were 91% and 81% for deep learning models and 85% and 73% for radiology professionals (p < 0.000), respectively. The pooled sensitivity detection was 82% for health-care professionals and 83% for deep learning algorithms (p < 0.005). Conclusion Machine learning programs can extract radiomic information from images that may not be discernible through visual examination, and may thus improve the prognostic and diagnostic value of data sets.
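A single 2x2 contingency table per study yields sensitivity and specificity, which can then be pooled across studies. The sketch below uses a crude cell-summing pool with invented counts; real meta-analyses, including presumably this one, use weighted models (e.g. random-effects) rather than simple summation.

```python
def sens_spec(tp, fn, fp, tn):
    """Sensitivity and specificity from one 2x2 contingency table."""
    return tp / (tp + fn), tn / (tn + fp)

def pooled(tables):
    """Naive pooled estimate: sum (tp, fn, fp, tn) cells across studies,
    then compute sensitivity and specificity on the totals."""
    tp = sum(t[0] for t in tables)
    fn = sum(t[1] for t in tables)
    fp = sum(t[2] for t in tables)
    tn = sum(t[3] for t in tables)
    return sens_spec(tp, fn, fp, tn)
```

For example, two studies with tables (80, 20, 10, 90) and (40, 10, 5, 45) pool to a sensitivity of 0.80 and specificity of 0.90.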
Affiliation(s)
- Rani Ahmad: King Abdulaziz University, King Abdulaziz University Hospital, Jeddah, Saudi Arabia
|
25
|
Test-time adaptable neural networks for robust medical image segmentation. Med Image Anal 2021; 68:101907. [DOI: 10.1016/j.media.2020.101907] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2020] [Revised: 11/11/2020] [Accepted: 11/12/2020] [Indexed: 11/20/2022]
|
26
|
Zheng Q, Delingette H, Fung K, Petersen SE, Ayache N. Pathological Cluster Identification by Unsupervised Analysis in 3,822 UK Biobank Cardiac MRIs. Front Cardiovasc Med 2020; 7:539788. [PMID: 33313075 PMCID: PMC7701336 DOI: 10.3389/fcvm.2020.539788] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2020] [Accepted: 08/12/2020] [Indexed: 12/12/2022] Open
Abstract
We perform unsupervised analysis of image-derived shape and motion features extracted from 3,822 cardiac magnetic resonance images (MRIs) of the UK Biobank. First, using a previously published feature extraction method based on deep learning models, we extract from each case 9 feature values characterizing both the cardiac shape and motion. Second, a feature selection is performed to remove highly correlated feature pairs. Third, clustering is carried out using a Gaussian mixture model on the selected features. After analysis, we identify 2 small clusters that probably correspond to 2 pathological categories. Further confirmation using a trained classification model and dimensionality reduction tools is carried out to support this finding. Moreover, we examine the differences between the other large clusters and compare our measures with the ground truth.
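The second step, removing highly correlated feature pairs, can be sketched as a greedy absolute-correlation filter; the threshold, function name, and example data below are assumptions for illustration, not values from the paper.

```python
import numpy as np

def drop_correlated(X, threshold=0.9):
    """Greedy feature selection over columns of X (samples x features):
    keep a feature only if its absolute Pearson correlation with every
    already-kept feature stays at or below the threshold.
    Returns the kept column indices."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, i] <= threshold for i in keep):
            keep.append(j)
    return keep
```

Dropping near-duplicate features this way prevents a Gaussian mixture model, as used in the third step, from double-counting the same information (and from ill-conditioned covariance estimates).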
Affiliation(s)
- Qiao Zheng: Université Côte d'Azur, Inria, Sophia Antipolis, Valbonne, France
- Hervé Delingette: Université Côte d'Azur, Inria, Sophia Antipolis, Valbonne, France
- Kenneth Fung: National Institute for Health Research Barts Biomedical Research Centre, William Harvey Research Institute, Queen Mary University of London, London, United Kingdom; Barts Heart Centre, St Bartholomew's Hospital, Barts Health National Health Service Trust, London, United Kingdom
- Steffen E. Petersen: National Institute for Health Research Barts Biomedical Research Centre, William Harvey Research Institute, Queen Mary University of London, London, United Kingdom; Barts Heart Centre, St Bartholomew's Hospital, Barts Health National Health Service Trust, London, United Kingdom
- Nicholas Ayache: Université Côte d'Azur, Inria, Sophia Antipolis, Valbonne, France
|
27
|
A fast and fully-automated deep-learning approach for accurate hemorrhage segmentation and volume quantification in non-contrast whole-head CT. Sci Rep 2020; 10:19389. [PMID: 33168895 PMCID: PMC7652921 DOI: 10.1038/s41598-020-76459-7] [Citation(s) in RCA: 38] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2020] [Accepted: 10/26/2020] [Indexed: 01/17/2023] Open
Abstract
This project aimed to develop and evaluate a fast and fully-automated deep-learning method applying convolutional neural networks with deep supervision (CNN-DS) for accurate hematoma segmentation and volume quantification in computed tomography (CT) scans. Non-contrast whole-head CT scans of 55 patients with hemorrhagic stroke were used. Individual scans were standardized to 64 axial slices of 128 × 128 voxels. Each voxel was annotated independently by experienced raters, generating a binary label of hematoma versus normal brain tissue based on majority voting. The dataset was split randomly into training (n = 45) and testing (n = 10) subsets. A CNN-DS model was built applying the training data and examined using the testing data. Performance of the CNN-DS solution was compared with three previously established methods. The CNN-DS achieved a Dice coefficient score of 0.84 ± 0.06 and recall of 0.83 ± 0.07, higher than patch-wise U-Net (< 0.76). CNN-DS average running time of 0.74 ± 0.07 s was faster than PItcHPERFeCT (> 1412 s) and slice-based U-Net (> 12 s). Comparable interrater agreement rates were observed between “method-human” vs. “human–human” (Cohen’s kappa coefficients > 0.82). The fully automated CNN-DS approach demonstrated expert-level accuracy in fast segmentation and quantification of hematoma, substantially improving over previous methods. Further research is warranted to test the CNN-DS solution as a software tool in clinical settings for effective stroke management.
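For context, the Dice coefficient and volume estimates reported above can be computed from binary masks as follows; this is a generic sketch, not the authors' code, and the voxel-volume argument is an assumption about how spacing would be supplied.

```python
def dice(seg, gt):
    """Dice coefficient between two binary masks given as flat 0/1 sequences:
    twice the overlap divided by the total foreground in both masks."""
    inter = sum(s * g for s, g in zip(seg, gt))
    return 2 * inter / (sum(seg) + sum(gt))

def hematoma_volume_ml(mask, voxel_mm3):
    """Hematoma volume in millilitres: foreground voxel count times the
    per-voxel volume in cubic millimetres (1 mL = 1000 mm^3)."""
    return sum(mask) * voxel_mm3 / 1000.0
```

Identical masks give a Dice of 1.0 and disjoint masks 0.0, which is why the reported 0.84 ± 0.06 indicates strong but imperfect overlap with the expert consensus.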
|
28
|
Chaki J, Dey N. Data Tagging in Medical Images: A Survey of the State-of-Art. Curr Med Imaging 2020; 16:1214-1228. [PMID: 32108002 DOI: 10.2174/1573405616666200218130043] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2019] [Revised: 12/02/2019] [Accepted: 12/16/2019] [Indexed: 11/22/2022]
Abstract
A huge amount of medical data is generated every second, and a significant percentage of the data are images that need to be analyzed and processed. One of the key challenges in this regard is the retrieval of medical image data. Medical image retrieval should be done automatically by computers, by identifying object concepts and assigning corresponding tags to them. To discover the hidden concepts in medical images, low-level characteristics must be mapped to high-level concepts, which is a challenging task. In any specific case, it requires human involvement to determine the significance of the image. To allow machine-based reasoning on the medical evidence collected, the data must be accompanied by additional interpretive semantics; a change from a purely data-intensive methodology to a model of evidence rich in semantics. In this state-of-the-art survey, data tagging methods for medical images are reviewed, an important aspect of recognizing large numbers of medical images. Different types of tags related to the medical image, prerequisites of medical data tagging, different techniques to develop medical image tags, different medical image tagging algorithms and different tools that are used to create the tags are discussed in this paper. The aim of this state-of-the-art paper is to produce a summary and a set of guidelines for using the tags for the identification of medical images and to identify the challenges and future research directions of tagging medical images.
Affiliation(s)
- Jyotismita Chaki
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, India
- Nilanjan Dey
- Department of Information Technology, Techno India College of Technology, West Bengal, India

29
Liu B, Chi W, Li X, Li P, Liang W, Liu H, Wang W, He J. Evolving the pulmonary nodules diagnosis from classical approaches to deep learning-aided decision support: three decades' development course and future prospect. J Cancer Res Clin Oncol 2020; 146:153-185. [PMID: 31786740 DOI: 10.1007/s00432-019-03098-5]
Abstract
PURPOSE Lung cancer is the commonest cause of cancer deaths worldwide, and its mortality can be reduced significantly by early diagnosis and screening. Since the 1960s, driven by the pressing need to accurately and effectively interpret the massive volume of chest images generated daily, computer-assisted diagnosis of pulmonary nodules has opened up new opportunities to relax the limitations imposed by physicians' subjectivity, experience, and fatigue. Fair access to reliable and affordable computer-assisted diagnosis will also help fight the inequalities in incidence and mortality between populations. Significant and remarkable advances have been achieved since the 1980s, and consistent endeavors have been exerted to address the grand challenges of accurately detecting pulmonary nodules with high sensitivity at a low false-positive rate, and of precisely differentiating between benign and malignant nodules. However, there has been no comprehensive examination of the technical development that is evolving pulmonary nodule diagnosis from classical approaches to machine-learning-assisted decision support. The main goal of this investigation is to provide a comprehensive state-of-the-art review of the computer-assisted nodule detection and benign-malignant classification techniques developed over three decades, which have evolved from the complicated ad hoc analysis pipelines of conventional approaches to the simplified, seamlessly integrated deep learning techniques. This review also identifies challenges and highlights opportunities for future work in learning models, learning algorithms, and enhancement schemes for bridging the current state to the future prospect and satisfying future demand. CONCLUSION This is the first literature review of the past 30 years' development in computer-assisted diagnosis of lung nodules.
The challenges identified and the research opportunities highlighted in this survey are significant for bridging the current state to the future prospect and satisfying future demand. The value of multifaceted driving forces and multidisciplinary research is acknowledged; these will bring the computer-assisted diagnosis of pulmonary nodules into the mainstream of clinical medicine, raise the state of the art of clinical applications, and benefit both physicians and patients. We firmly hold the vision that fair access to reliable, faithful, and affordable computer-assisted diagnosis for early cancer diagnosis would fight the inequalities in incidence and mortality between populations, and save more lives.
Affiliation(s)
- Bo Liu
- Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, China
- Wenhao Chi
- Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Xinran Li
- Department of Mathematics, University of Wisconsin-Madison, Madison, WI, 53706, USA
- Peng Li
- Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, China
- Wenhua Liang
- Department of Thoracic Surgery and Oncology, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- China State Key Laboratory of Respiratory Disease, Guangzhou, China
- Haiping Liu
- PET/CT Center, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- China State Key Laboratory of Respiratory Disease, Guangzhou, China
- Wei Wang
- Department of Thoracic Surgery and Oncology, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- China State Key Laboratory of Respiratory Disease, Guangzhou, China
- Jianxing He
- Department of Thoracic Surgery and Oncology, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- China State Key Laboratory of Respiratory Disease, Guangzhou, China

30
Dodo BI, Li Y, Eltayef K, Liu X. Automatic Annotation of Retinal Layers in Optical Coherence Tomography Images. J Med Syst 2019; 43:336. [PMID: 31724076 PMCID: PMC6853852 DOI: 10.1007/s10916-019-1452-9]
Abstract
Early diagnosis from retinal OCT images has been shown to curtail blindness and visual impairment. However, advancing ophthalmic imaging technologies produce an ever-growing volume and variety of retinal images, which overwhelms ophthalmologists' ability to segment these images. While many automated methods exist, speckle noise and intensity inhomogeneity negatively impact their performance. We present a comprehensive, fully automatic method for annotating retinal layers in OCT images that combines fuzzy histogram hyperbolisation (FHH) and graph cut methods to segment 7 retinal layers across 8 boundaries. FHH handles speckle noise and inhomogeneity in the preprocessing step. The normalised vertical image gradient and its inverse are then used to represent image intensity when building two adjacency matrices; FHH reassigns the edge weights so that edges along retinal boundaries have a low cost, and the graph cut method identifies the shortest paths (layer boundaries). The method was evaluated on 150 B-scan images, 50 each from the temporal, foveal and nasal regions. Promising experimental results were achieved, with high tolerance and adaptability to contour variance and pathological inconsistency of the retinal layers in all three regions. The method also achieves high accuracy, sensitivity, and Dice score (0.9836, 0.9692 and 0.9712, respectively) in segmenting the retinal nerve fibre layer. The annotation can facilitate eye examination by providing accurate results. The integration of vertical gradients into the graph cut framework, which captures the unique characteristics of retinal structures, is particularly useful for finding the actual minimum paths across multiple retinal layer boundaries. Prior knowledge plays an integral role in image segmentation.
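The shortest-path idea in this abstract (boundaries as minimum-cost left-to-right paths through a gradient-derived cost image) can be illustrated with a small sketch. This is not the authors' implementation: the toy cost image, the column-wise dynamic program standing in for the general shortest-path search, and the function name are all illustrative assumptions.

```python
import numpy as np

def boundary_shortest_path(cost, max_jump=1):
    """Find the minimum-cost left-to-right path through a cost image.

    Each column contributes exactly one pixel, and the row index may move
    by at most `max_jump` between adjacent columns. Low cost would
    correspond to strong vertical gradients along a layer boundary.
    """
    rows, cols = cost.shape
    # acc[r, c] = minimal cumulative cost of a path ending at (r, c)
    acc = np.full((rows, cols), np.inf)
    acc[:, 0] = cost[:, 0]
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
            prev = int(np.argmin(acc[lo:hi, c - 1])) + lo
            acc[r, c] = acc[prev, c - 1] + cost[r, c]
            back[r, c] = prev
    # Backtrack from the cheapest endpoint in the last column.
    path = [int(np.argmin(acc[:, -1]))]
    for c in range(cols - 1, 0, -1):
        path.append(int(back[path[-1], c]))
    return path[::-1]  # row index of the boundary in each column

# Toy cost image: the true boundary sits on row 2, where cost is lowest.
cost = np.ones((5, 6))
cost[2, :] = 0.1
print(boundary_shortest_path(cost))  # → [2, 2, 2, 2, 2, 2]
```

In the paper's setting the cost image would come from the reassigned edge weights after FHH preprocessing; here a hand-made array stands in for it.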
Affiliation(s)
- Bashir Isa Dodo
- Department of Computer Science, Brunel University London, Kingston Lane, Uxbridge, UB83PH, UK
- Yongmin Li
- Department of Computer Science, Brunel University London, Kingston Lane, Uxbridge, UB83PH, UK
- Khalid Eltayef
- Department of Computer Science, Brunel University London, Kingston Lane, Uxbridge, UB83PH, UK
- Xiaohui Liu
- Department of Computer Science, Brunel University London, Kingston Lane, Uxbridge, UB83PH, UK

31
Zhang J, Xie Y, Xia Y, Shen C. Attention Residual Learning for Skin Lesion Classification. IEEE Transactions on Medical Imaging 2019; 38:2092-2103. [PMID: 30668469 DOI: 10.1109/tmi.2019.2893944]
Abstract
Automated skin lesion classification in dermoscopy images is an essential means of improving diagnostic performance and reducing melanoma deaths. Although deep convolutional neural networks (DCNNs) have made dramatic breakthroughs in many image classification tasks, accurate classification of skin lesions remains challenging due to insufficient training data, inter-class similarity, intra-class variation, and the lack of an ability to focus on semantically meaningful lesion parts. To address these issues, we propose an attention residual learning convolutional neural network (ARL-CNN) model for skin lesion classification in dermoscopy images, composed of multiple ARL blocks, a global average pooling layer, and a classification layer. Each ARL block jointly uses residual learning and a novel attention learning mechanism to improve its ability to form discriminative representations. Instead of using extra learnable layers, the proposed attention learning mechanism exploits the intrinsic self-attention ability of DCNNs, i.e., it uses the feature maps learned by a high layer to generate the attention map for a low layer. We evaluated our ARL-CNN model on the ISIC-skin 2017 dataset. Our results indicate that the proposed ARL-CNN model can adaptively focus on the discriminative parts of skin lesions and thus achieve state-of-the-art performance in skin lesion classification.
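The self-attention idea the abstract describes (feature maps of a high layer generating an attention map for a low layer, combined with a residual connection) can be sketched in a few lines. This is a schematic NumPy sketch, not the paper's ARL block: the channel aggregation, the softmax normalization over spatial positions, and the function name are illustrative assumptions.

```python
import numpy as np

def attention_residual(low_feat, high_feat):
    """Modulate low-layer features with an attention map derived from
    high-layer features, then keep a residual connection.

    low_feat:  (C, H, W) feature maps of a lower layer
    high_feat: (C2, H, W) feature maps of a higher layer, assumed already
               resized to the same spatial size
    """
    # Spatial attention: aggregate high-layer channels, then softmax over
    # all spatial positions so the map sums to 1.
    energy = high_feat.sum(axis=0)                 # (H, W)
    att = np.exp(energy - energy.max())
    att = att / att.sum()                          # (H, W), sums to 1
    # Residual formulation: output = input + attention-weighted input,
    # so no extra learnable layers are introduced.
    return low_feat * (1.0 + att[None, :, :])
```

Positions where the high layer responds strongly amplify the low-layer features the most, while the residual term guarantees the original features pass through unchanged elsewhere.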
32
Zheng Q, Delingette H, Ayache N. Explainable cardiac pathology classification on cine MRI with motion characterization by semi-supervised learning of apparent flow. Med Image Anal 2019; 56:80-95. [PMID: 31200290 DOI: 10.1016/j.media.2019.06.001]
Abstract
We propose a method to classify cardiac pathology based on a novel approach to extracting image-derived features that characterize the shape and motion of the heart. An original semi-supervised learning procedure, which makes efficient use of a large amount of non-segmented images and a small amount of images segmented manually by experts, is developed to generate pixel-wise apparent flow between two time points of a 2D+t cine MRI image sequence. Combining the apparent flow maps and cardiac segmentation masks, we obtain a local apparent flow corresponding to the 2D motion of the myocardium and ventricular cavities. This leads to the generation of time series of the radius and thickness of myocardial segments to represent cardiac motion. These time series of motion features are reliable and explainable characteristics of pathological cardiac motion. Furthermore, they are combined with shape-related features to classify cardiac pathologies. Using only nine feature values as input, we propose an explainable, simple and flexible model for pathology classification. The model achieves classification accuracies of 95% and 94% on the ACDC training and testing sets, respectively; its performance is hence comparable to the state of the art. Comparison with various other models is performed to outline some advantages of our model.
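The per-segment radius and thickness features mentioned above can be illustrated for a single frame. The paper derives these features from apparent-flow-propagated segmentations over time; the sketch below is a simplified geometric stand-in that bins the myocardial pixels of one binary short-axis mask into angular segments, with the centroid-as-center assumption, the segment count, and the function name all illustrative.

```python
import numpy as np

def segment_radius_thickness(mask, n_segments=6):
    """Per-segment mean radius and wall thickness from a binary
    myocardium mask (one 2D short-axis frame).

    Pixels are binned into angular segments around the mask centroid;
    radius is the mean distance of myocardial pixels to the center, and
    thickness is the radial extent (outer minus inner distance).
    """
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()            # centroid stands in for the cavity center
    r = np.hypot(ys - cy, xs - cx)
    theta = np.arctan2(ys - cy, xs - cx)     # angle in [-pi, pi]
    bins = ((theta + np.pi) / (2 * np.pi) * n_segments).astype(int) % n_segments
    radius = np.zeros(n_segments)
    thickness = np.zeros(n_segments)
    for s in range(n_segments):
        rs = r[bins == s]
        if rs.size:
            radius[s] = rs.mean()
            thickness[s] = rs.max() - rs.min()
    return radius, thickness
```

Applying this frame by frame across the cine sequence would yield the kind of radius/thickness time series the abstract uses as motion features.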
Affiliation(s)
- Qiao Zheng
- Université Côte d'Azur, Inria, 2004 Route des Lucioles, 06902 Sophia Antipolis, France
- Hervé Delingette
- Université Côte d'Azur, Inria, 2004 Route des Lucioles, 06902 Sophia Antipolis, France
- Nicholas Ayache
- Université Côte d'Azur, Inria, 2004 Route des Lucioles, 06902 Sophia Antipolis, France

33
Medical image classification using synergic deep learning. Med Image Anal 2019; 54:10-19. [DOI: 10.1016/j.media.2019.02.010]
34
Zhang Z, Sejdić E. Radiological images and machine learning: Trends, perspectives, and prospects. Comput Biol Med 2019; 108:354-370. [PMID: 31054502 PMCID: PMC6531364 DOI: 10.1016/j.compbiomed.2019.02.017]
Abstract
The application of machine learning to radiological images is an increasingly active research area that is expected to grow over the next five to ten years. Recent advances in machine learning have the potential to recognize and classify complex patterns from different radiological imaging modalities such as X-ray, computed tomography, magnetic resonance imaging, and positron emission tomography imaging. In many applications, machine-learning-based systems have shown performance comparable to human decision-making. Such applications are key ingredients of future clinical decision-making and monitoring systems. This review covers the fundamental concepts behind various machine learning techniques and their applications in several radiological imaging areas, such as medical image segmentation, brain function studies and neurological disease diagnosis, as well as computer-aided systems, image registration, and content-based image retrieval systems. We also briefly discuss current challenges and future directions for applying machine learning in radiological imaging. By giving insight into how to take advantage of machine-learning-powered applications, we expect that clinicians can prevent and diagnose diseases more accurately and efficiently.
Affiliation(s)
- Zhenwei Zhang
- Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA, 15261, USA
- Ervin Sejdić
- Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA, 15261, USA

35
Deep Learning in the Biomedical Applications: Recent and Future Status. Applied Sciences-Basel 2019. [DOI: 10.3390/app9081526]
Abstract
Deep neural networks currently represent the most effective machine learning technology in the biomedical domain. The areas of interest in this domain include omics (the study of the genome (genomics) and of proteins (transcriptomics, proteomics, and metabolomics)), bioimaging (the study of biological cells and tissue), medical imaging (the study of human organs through visual representations), the brain and body machine interface (BBMI), and public and medical health management (PmHM). This paper reviews the major deep learning concepts pertinent to such biomedical applications. Concise overviews are provided for omics and the BBMI. We end our analysis with a critical discussion, interpretation, and relevant open challenges.
36
Cheplygina V, de Bruijne M, Pluim JPW. Not-so-supervised: A survey of semi-supervised, multi-instance, and transfer learning in medical image analysis. Med Image Anal 2019; 54:280-296. [PMID: 30959445 DOI: 10.1016/j.media.2019.03.009]
Abstract
Machine learning (ML) algorithms have made a tremendous impact in the field of medical imaging. While medical imaging datasets have been growing in size, a frequently mentioned challenge for supervised ML algorithms is the lack of annotated data. As a result, various methods that can learn with less or other types of supervision have been proposed. We give an overview of semi-supervised, multiple-instance, and transfer learning in medical imaging, for both diagnosis and segmentation tasks. We also discuss connections between these learning scenarios and opportunities for future research. A dataset with the details of the surveyed papers is available via https://figshare.com/articles/Database_of_surveyed_literature_in_Not-so-supervised_a_survey_of_semi-supervised_multi-instance_and_transfer_learning_in_medical_image_analysis_/7479416.
Affiliation(s)
- Veronika Cheplygina
- Medical Image Analysis, Department Biomedical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands
- Marleen de Bruijne
- Biomedical Imaging Group Rotterdam, Departments Radiology and Medical Informatics, Erasmus Medical Center, Rotterdam, the Netherlands; The Image Section, Department Computer Science, University of Copenhagen, Copenhagen, Denmark
- Josien P W Pluim
- Medical Image Analysis, Department Biomedical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands; Image Sciences Institute, University Medical Center Utrecht, Utrecht, the Netherlands

37
Hwang JJ, Jung YH, Cho BH, Heo MS. An overview of deep learning in the field of dentistry. Imaging Sci Dent 2019; 49:1-7. [PMID: 30941282 PMCID: PMC6444007 DOI: 10.5624/isd.2019.49.1.1]
Abstract
Purpose Artificial intelligence (AI), represented by deep learning, can be used for real-life problems and is applied across all sectors of society, including the medical and dental fields. The purpose of this study is to review articles in which deep learning was applied to the field of oral and maxillofacial radiology. Materials and Methods A systematic review was performed using the PubMed, Scopus, and IEEE Xplore databases to identify English-language articles using deep learning. The variables extracted from the 25 included articles were network architecture, amount of training data, evaluation results, pros and cons, study object, and imaging modality. Results Convolutional neural networks (CNNs) were the main network component. The numbers of published papers and training datasets tended to increase, covering various fields of dentistry. Conclusion Dental public datasets need to be constructed, and data standardization is necessary for the clinical application of deep learning in the dental field.
Affiliation(s)
- Jae-Joon Hwang
- Department of Oral and Maxillofacial Radiology, School of Dentistry, Pusan National University, Dental Research Institute, Yangsan, Korea
- Yun-Hoa Jung
- Department of Oral and Maxillofacial Radiology, School of Dentistry, Pusan National University, Dental Research Institute, Yangsan, Korea
- Bong-Hae Cho
- Department of Oral and Maxillofacial Radiology, School of Dentistry, Pusan National University, Dental Research Institute, Yangsan, Korea
- Min-Suk Heo
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, Korea

38
Aeffner F, Zarella MD, Buchbinder N, Bui MM, Goodman MR, Hartman DJ, Lujan GM, Molani MA, Parwani AV, Lillard K, Turner OC, Vemuri VNP, Yuil-Valdes AG, Bowman D. Introduction to Digital Image Analysis in Whole-slide Imaging: A White Paper from the Digital Pathology Association. J Pathol Inform 2019; 10:9. [PMID: 30984469 PMCID: PMC6437786 DOI: 10.4103/jpi.jpi_82_18]
Abstract
The advent of whole-slide imaging in digital pathology has brought about the advancement of computer-aided examination of tissue via digital image analysis. Digitized slides can now be easily annotated and analyzed via a variety of algorithms. This study reviews the fundamentals of tissue image analysis and aims to provide pathologists with basic information regarding the features, applications, and general workflow of these new tools. The review gives an overview of the basic categories of software solutions available, potential analysis strategies, technical considerations, and general algorithm readouts. Advantages and limitations of tissue image analysis are discussed, and emerging concepts, such as artificial intelligence and machine learning, are introduced. Finally, examples of how digital image analysis tools are currently being used in diagnostic laboratories, translational research, and drug development are discussed.
Affiliation(s)
- Famke Aeffner
- Amgen Inc., Amgen Research, Comparative Biology and Safety Sciences, South San Francisco, CA, USA
- Mark D Zarella
- Department of Pathology and Laboratory Medicine, Drexel University, College of Medicine, Philadelphia, PA, USA
- Marilyn M Bui
- Department of Pathology, Moffitt Cancer Center, Tampa, FL, USA
- Mariam A Molani
- Department of Pathology and Microbiology, University of Nebraska Medical Center, Omaha, NE, USA
- Anil V Parwani
- The Ohio State University Medical Center, Columbus, OH, USA
- Oliver C Turner
- Novartis, Novartis Institutes for BioMedical Research, Preclinical Safety, East Hannover, NJ, USA
- Ana G Yuil-Valdes
- Department of Pathology and Microbiology, University of Nebraska Medical Center, Omaha, NE, USA

39
Xing F, Xie Y, Su H, Liu F, Yang L. Deep Learning in Microscopy Image Analysis: A Survey. IEEE Transactions on Neural Networks and Learning Systems 2018; 29:4550-4568. [PMID: 29989994 DOI: 10.1109/tnnls.2017.2766168]
Abstract
Computerized microscopy image analysis plays an important role in computer aided diagnosis and prognosis. Machine learning techniques have powered many aspects of medical investigation and clinical practice. Recently, deep learning is emerging as a leading machine learning tool in computer vision and has attracted considerable attention in biomedical image analysis. In this paper, we provide a snapshot of this fast-growing field, specifically for microscopy image analysis. We briefly introduce the popular deep neural networks and summarize current deep learning achievements in various tasks, such as detection, segmentation, and classification in microscopy image analysis. In particular, we explain the architectures and the principles of convolutional neural networks, fully convolutional networks, recurrent neural networks, stacked autoencoders, and deep belief networks, and interpret their formulations or modelings for specific tasks on various microscopy images. In addition, we discuss the open challenges and the potential trends of future research in microscopy image analysis using deep learning.
40
Zhang J, Xie Y, Wu Q, Xia Y. Skin Lesion Classification in Dermoscopy Images Using Synergic Deep Learning. Medical Image Computing and Computer Assisted Intervention – MICCAI 2018. [DOI: 10.1007/978-3-030-00934-2_2]
41
Pinkert MA, Salkowski LR, Keely PJ, Hall TJ, Block WF, Eliceiri KW. Review of quantitative multiscale imaging of breast cancer. J Med Imaging (Bellingham) 2018; 5:010901. [PMID: 29392158 PMCID: PMC5777512 DOI: 10.1117/1.jmi.5.1.010901]
Abstract
Breast cancer is the most common cancer among women worldwide and ranks second in terms of overall cancer deaths. One of the difficulties associated with treating breast cancer is that it is a heterogeneous disease with variations in benign and pathologic tissue composition, which contributes to disease development, progression, and treatment response. Many of these phenotypes are uncharacterized and their presence is difficult to detect, in part due to the sparsity of methods to correlate information between the cellular microscale and the whole-breast macroscale. Quantitative multiscale imaging of the breast is an emerging field concerned with the development of imaging technology that can characterize anatomic, functional, and molecular information across different resolutions and fields of view. It involves a diverse collection of imaging modalities, which touch large sections of the breast imaging research community. Prospective studies have shown promising results, but there are several challenges, ranging from basic physics and engineering to data processing and quantification, that must be met to bring the field to maturity. This paper presents some of the challenges that investigators face, reviews currently used multiscale imaging methods for preclinical imaging, and discusses the potential of these methods for clinical breast imaging.
Affiliation(s)
- Michael A. Pinkert
- Morgridge Institute for Research, Madison, Wisconsin, United States
- University of Wisconsin–Madison, Laboratory for Optical and Computational Instrumentation, Madison, Wisconsin, United States
- University of Wisconsin–Madison, Department of Medical Physics, Madison, Wisconsin, United States
- Lonie R. Salkowski
- University of Wisconsin–Madison, Department of Medical Physics, Madison, Wisconsin, United States
- University of Wisconsin–Madison, Department of Radiology, Madison, Wisconsin, United States
- Patricia J. Keely
- University of Wisconsin–Madison, Department of Cell and Regenerative Biology, Madison, Wisconsin, United States
- University of Wisconsin–Madison, Department of Biomedical Engineering, Madison, Wisconsin, United States
- Timothy J. Hall
- University of Wisconsin–Madison, Department of Medical Physics, Madison, Wisconsin, United States
- University of Wisconsin–Madison, Department of Biomedical Engineering, Madison, Wisconsin, United States
- Walter F. Block
- University of Wisconsin–Madison, Department of Medical Physics, Madison, Wisconsin, United States
- University of Wisconsin–Madison, Department of Radiology, Madison, Wisconsin, United States
- University of Wisconsin–Madison, Department of Biomedical Engineering, Madison, Wisconsin, United States
- Kevin W. Eliceiri
- Morgridge Institute for Research, Madison, Wisconsin, United States
- University of Wisconsin–Madison, Laboratory for Optical and Computational Instrumentation, Madison, Wisconsin, United States
- University of Wisconsin–Madison, Department of Medical Physics, Madison, Wisconsin, United States
- University of Wisconsin–Madison, Department of Biomedical Engineering, Madison, Wisconsin, United States