51
Zhang D, Li L, Sripada C, Kang J. Image response regression via deep neural networks. J R Stat Soc Series B Stat Methodol 2023; 85:1589-1614. PMID: 38584801; PMCID: PMC10994199; DOI: 10.1093/jrsssb/qkad073. Received 01/17/2022; revised 06/22/2023; accepted 06/28/2023.
Abstract
Delineating associations between images and covariates is a central aim of imaging studies. To tackle this problem, we propose a novel non-parametric approach in the framework of spatially varying coefficient models, where the spatially varying functions are estimated through deep neural networks. Our method incorporates spatial smoothness, handles subject heterogeneity, and provides straightforward interpretations. It is also highly flexible and accurate, making it ideal for capturing complex association patterns. We establish estimation and selection consistency and derive asymptotic error bounds. We demonstrate the method's advantages through intensive simulations and analyses of two functional magnetic resonance imaging data sets.
Affiliation(s)
- Daiwei Zhang
- Department of Biostatistics, Epidemiology and Informatics, University of Pennsylvania, Philadelphia, PA, USA
- Lexin Li
- Department of Biostatistics and Epidemiology, University of California, Berkeley, CA, USA
- Chandra Sripada
- Department of Psychiatry, University of Michigan, Ann Arbor, MI, USA
- Department of Philosophy, University of Michigan, Ann Arbor, MI, USA
- Jian Kang
- Department of Biostatistics, University of Michigan, Ann Arbor, MI, USA
52
Black SM, Maclean C, Barrientos PH, Ritos K, Kazakidi A. Reconstruction and Validation of Arterial Geometries for Computational Fluid Dynamics Using Multiple Temporal Frames of 4D Flow-MRI Magnitude Images. Cardiovasc Eng Technol 2023; 14:655-676. PMID: 37653353; PMCID: PMC10602980; DOI: 10.1007/s13239-023-00679-x. Received 05/16/2022; accepted 08/08/2023.
Abstract
PURPOSE: Segmentation and reconstruction of arterial blood vessels is a fundamental step in translating computational fluid dynamics (CFD) to clinical practice. Four-dimensional flow magnetic resonance imaging (4D Flow-MRI) can provide detailed information on blood flow, but processing this information to elucidate the underlying anatomical structures is challenging. In this study, we present a novel approach to create high-contrast anatomical images from retrospective 4D Flow-MRI data.
METHODS: For healthy and clinical cases, the 3D instantaneous velocities at multiple cardiac time steps were superimposed directly onto the 4D Flow-MRI magnitude images and combined into a single composite frame. This new Composite Phase-Contrast Magnetic Resonance Angiogram (CPC-MRA) yielded enhanced and uniform contrast within the lumen. These images were subsequently segmented and reconstructed to generate 3D arterial models for CFD. Using the time-dependent, 3D incompressible Reynolds-averaged Navier-Stokes equations, the transient aortic haemodynamics was computed within a rigid-wall model of patient geometries.
RESULTS: Validation of these models against the gold-standard CT-based approach showed no statistically significant inter-modality difference in vessel radius or curvature (p > 0.05), and a similar Dice Similarity Coefficient and Hausdorff Distance. CFD-derived near-wall haemodynamics showed a statistically significant inter-modality difference (p < 0.05), though the absolute errors were small. When compared to the in vivo data, CFD-derived velocities were qualitatively similar.
CONCLUSION: This proof-of-concept study demonstrated that functional 4D Flow-MRI information can be used to retrospectively generate anatomical information for CFD models in the absence of standard imaging datasets and intravenous contrast.
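The Dice Similarity Coefficient and Hausdorff Distance used in this validation are standard segmentation-comparison metrics; a minimal generic illustration (not code from the paper) on two toy binary masks, using NumPy and SciPy:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(a, b):
    """Dice = 2|A∩B| / (|A| + |B|) for boolean masks a, b."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between the two foreground point sets."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

# Two toy 2D masks that overlap partially
m1 = np.zeros((10, 10), dtype=bool); m1[2:6, 2:6] = True
m2 = np.zeros((10, 10), dtype=bool); m2[3:7, 3:7] = True
print(round(dice_coefficient(m1, m2), 3))  # 0.562
print(hausdorff_distance(m1, m2))          # ≈ 1.414
```

The same two metrics behave differently: Dice measures volumetric overlap, while Hausdorff captures the worst-case boundary disagreement, which is why segmentation validations typically report both.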
Affiliation(s)
- Craig Maclean
- Research and Development, Terumo Aortic, Glasgow, UK
- Pauline Hall Barrientos
- Clinical Physics, Queen Elizabeth University Hospital, NHS Greater Glasgow & Clyde, Glasgow, UK
- Konstantinos Ritos
- Department of Mechanical and Aerospace Engineering, Glasgow, UK
- Department of Mechanical Engineering, University of Thessaly, Volos, Greece
- Asimina Kazakidi
- Department of Biomedical Engineering, University of Strathclyde, Glasgow, UK
53
Duan X, Ding XF, Li N, Wu FX, Chen X, Zhu N. Sparse2Noise: Low-dose synchrotron X-ray tomography without high-quality reference data. Comput Biol Med 2023; 165:107473. PMID: 37690288; DOI: 10.1016/j.compbiomed.2023.107473. Received 04/07/2023; revised 08/30/2023; accepted 09/04/2023.
Abstract
BACKGROUND: Synchrotron radiation computed tomography (SR-CT) holds promise for high-resolution in vivo imaging. Notably, the reconstruction of SR-CT images requires a large set of projections captured with sufficient photons from multiple angles, resulting in a high radiation dose received by the object. Reducing the number of projections and/or the photon flux is a straightforward way to lessen the radiation dose; however, it compromises data completeness, thus introducing noise and artifacts. Deep learning (DL)-based supervised methods denoise and remove artifacts effectively, but they depend heavily on high-quality paired data acquired at high doses. Although algorithms exist for training without high-quality references, they struggle to eliminate the persistent artifacts present in real-world data.
METHODS: This work presents a novel low-dose imaging strategy, named Sparse2Noise, which combines the reconstructed data from a paired sparse-view (normal-flux) CT scan and full-view (low-flux) CT scan using a convolutional neural network (CNN). Sparse2Noise does not require high-quality reconstructed data as references and allows training from scratch on very small datasets. Sparse2Noise was evaluated on both simulated and experimental data.
RESULTS: Sparse2Noise effectively reduces noise and ring artifacts while maintaining high image quality, outperforming state-of-the-art image-denoising methods at the same dose levels. Furthermore, Sparse2Noise produces high image quality for ex vivo rat hindlimb imaging at an acceptably low radiation dose (0.5 Gy at an isotropic voxel size of 26 μm).
CONCLUSIONS: This work represents a significant advance towards in vivo SR-CT imaging. Notably, Sparse2Noise can also be used for denoising in conventional CT and/or phase-contrast CT.
Affiliation(s)
- Xiaoman Duan
- Division of Biomedical Engineering, College of Engineering, University of Saskatchewan, Saskatoon, SK S7N 5A9, Canada
- Xiao Fan Ding
- Division of Biomedical Engineering, College of Engineering, University of Saskatchewan, Saskatoon, SK S7N 5A9, Canada
- Naitao Li
- Division of Biomedical Engineering, College of Engineering, University of Saskatchewan, Saskatoon, SK S7N 5A9, Canada
- Fang-Xiang Wu
- Division of Biomedical Engineering, College of Engineering, University of Saskatchewan, Saskatoon, SK S7N 5A9, Canada; Department of Computer Science, University of Saskatchewan, Saskatoon, SK S7N 5A9, Canada; Department of Mechanical Engineering, College of Engineering, University of Saskatchewan, Saskatoon, SK S7N 5A9, Canada
- Xiongbiao Chen
- Division of Biomedical Engineering, College of Engineering, University of Saskatchewan, Saskatoon, SK S7N 5A9, Canada; Department of Mechanical Engineering, College of Engineering, University of Saskatchewan, Saskatoon, SK S7N 5A9, Canada
- Ning Zhu
- Division of Biomedical Engineering, College of Engineering, University of Saskatchewan, Saskatoon, SK S7N 5A9, Canada; Canadian Light Source, Saskatoon, SK S7N 2V3, Canada; Department of Chemical and Biological Engineering, College of Engineering, University of Saskatchewan, Saskatoon, SK S7N 5A9, Canada
54
Huang H, Chen Z, Chen C, Lu M, Zou Y. Complementary consistency semi-supervised learning for 3D left atrial image segmentation. Comput Biol Med 2023; 165:107368. PMID: 37611420; DOI: 10.1016/j.compbiomed.2023.107368. Received 04/11/2023; revised 07/24/2023; accepted 08/12/2023.
Abstract
CC-Net, a network based on complementary consistency training, is proposed for semi-supervised left atrium image segmentation. CC-Net efficiently utilizes unlabeled data from the perspective of complementary information, addressing the limited ability of existing semi-supervised segmentation algorithms to extract information from unlabeled data. The complementary symmetrical structure of CC-Net comprises a main model and two auxiliary models. Model-level perturbation between the main and auxiliary models forms the complementary consistency, which enforces their agreement. The complementary information obtained by the two auxiliary models helps the main model focus on ambiguous areas, while the enforced consistency between models facilitates the acquisition of low-uncertainty decision boundaries. CC-Net was validated on two public datasets. Compared with current state-of-the-art algorithms at given proportions of annotated data, CC-Net demonstrates the best semi-supervised segmentation performance. Our code is publicly available at https://github.com/Cuthbert-Huang/CC-Net.
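As a generic sketch of a model-level consistency term (not CC-Net's exact loss; see the linked repository for the real implementation), agreement between a main branch and an auxiliary branch is often enforced with a mean-squared error on their probability maps:

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over the class axis."""
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def consistency_loss(main_logits, aux_logits):
    """MSE between the probability maps of two model branches."""
    p_main = softmax(main_logits)
    p_aux = softmax(aux_logits)
    return float(np.mean((p_main - p_aux) ** 2))

# Toy per-voxel logits for a 2-class segmentation (4 voxels)
main = np.array([[2.0, 0.0], [1.5, 0.5], [0.0, 2.0], [0.3, 0.1]])
aux = main + 0.1 * np.random.default_rng(0).standard_normal(main.shape)
print(consistency_loss(main, aux) >= 0.0)  # True; identical branches give 0.0
```

Minimizing such a term on unlabeled voxels is what lets the unlabeled data contribute gradient signal without any ground-truth masks.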
Affiliation(s)
- Hejun Huang
- School of Information and Electrical Engineering, Hunan University of Science and Technology, Xiangtan, 411201, China
- Zuguo Chen
- School of Information and Electrical Engineering, Hunan University of Science and Technology, Xiangtan, 411201, China
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Chaoyang Chen
- School of Information and Electrical Engineering, Hunan University of Science and Technology, Xiangtan, 411201, China
- Ming Lu
- School of Information and Electrical Engineering, Hunan University of Science and Technology, Xiangtan, 411201, China
- Ying Zou
- School of Information and Electrical Engineering, Hunan University of Science and Technology, Xiangtan, 411201, China
55
Shukla S, Birla L, Gupta AK, Gupta P. Trustworthy Medical Image Segmentation with improved performance for in-distribution samples. Neural Netw 2023; 166:127-136. PMID: 37487410; DOI: 10.1016/j.neunet.2023.06.047. Received 04/06/2023; revised 06/13/2023; accepted 06/30/2023.
Abstract
Despite the enormous achievements of Deep Learning (DL)-based models, their non-transparent nature has led to restricted applicability and distrusted predictions. Such predictions emerge from erroneous In-Distribution (ID) and Out-Of-Distribution (OOD) samples, with disastrous consequences in the medical domain, specifically in Medical Image Segmentation (MIS). To mitigate such effects, several existing works address OOD sample detection; however, trustworthiness issues arising from ID samples still require thorough investigation. To this end, this paper proposes TrustMIS (Trustworthy Medical Image Segmentation), a novel method that provides trustworthiness measures and improved ID-sample performance for DL-based MIS models. TrustMIS comprises three components: IT (Investigating Trustworthiness), INT (Improving Non-Trustworthy prediction) and CSO (Classifier Switching Operation). First, the IT method investigates the trustworthiness of MIS by leveraging similar characteristics and consistency analysis of an input and its variants. Subsequently, the INT method employs the IT method to improve the performance of the MIS model, exploiting the observation that an input yielding an erroneous segmentation can yield a correct segmentation when rotated. Finally, the CSO method employs the INT method to scrutinise several MIS models and selects the model that delivers the most trustworthy prediction. Experiments conducted on publicly available datasets with well-known MIS models reveal that TrustMIS provides a successful trustworthiness measure, outperforms existing methods, and improves the performance of state-of-the-art MIS models. Our implementation is available at https://github.com/SnehaShukla937/TrustMIS.
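The rotation-consistency idea can be sketched generically: segment the input and a rotated copy, undo the rotation, and compare the two masks; low overlap flags a non-trustworthy prediction. This is an illustrative simplification, not the TrustMIS implementation, and the thresholding "segmenter" below is invented for the sketch:

```python
import numpy as np

def rotation_consistency(segment, image, k=1):
    """Dice overlap between segment(image) and the de-rotated
    segmentation of a rotated copy; low overlap signals distrust."""
    direct = segment(image)
    rotated = np.rot90(segment(np.rot90(image, k)), -k)
    inter = np.logical_and(direct, rotated).sum()
    total = direct.sum() + rotated.sum()
    return 2.0 * inter / total if total else 1.0

# Toy "segmenter": thresholding is rotation-equivariant,
# so consistency is perfect on this toy input
img = np.arange(16.0).reshape(4, 4)
seg = lambda x: x > 7.0
print(rotation_consistency(seg, img))  # 1.0
```

A real CNN segmenter is not exactly rotation-equivariant, which is precisely what makes a score like this informative about trustworthiness.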
Affiliation(s)
- Sneha Shukla
- Indian Institute of Technology Indore, Indore, India
- Puneet Gupta
- Indian Institute of Technology Indore, Indore, India
56
Shiffman S, Rios Piedra EA, Adedeji AO, Ruff CF, Andrews RN, Katavolos P, Liu E, Forster A, Brumm J, Fuji RN, Sullivan R. Analysis of cellularity in H&E-stained rat bone marrow tissue via deep learning. J Pathol Inform 2023; 14:100333. PMID: 37743975; PMCID: PMC10514468; DOI: 10.1016/j.jpi.2023.100333. Open access. Received 06/22/2023; revised 08/18/2023; accepted 08/19/2023.
Abstract
Our objective was to develop an automated deep-learning-based method to evaluate cellularity in rat bone marrow hematoxylin and eosin whole-slide images for preclinical safety assessment. We trained a shallow CNN for segmenting marrow, two Mask R-CNN models for segmenting megakaryocytes (MKCs) and small hematopoietic cells (SHCs), and a SegNet model for segmenting red blood cells. We incorporated the models into a pipeline that identifies and counts MKCs and SHCs in rat bone marrow. We compared the cell segmentations and counts that our method generated to those that pathologists generated on 10 slides, spanning a range of cell-depletion levels, from 10 studies. For SHCs, we also compared the counts our method generated to those generated by Cellpose and StarDist. The median Dice and object Dice scores for MKCs using our method vs pathologist consensus were comparable to the inter- and intra-pathologist variation, with overlapping first-third quartile ranges. For SHCs, the median scores were close, with first-third quartile ranges partially overlapping the intra-pathologist variation. For SHCs, counts from our method were closer to pathologist counts than those from Cellpose and StarDist, with a narrower 95% limits-of-agreement range. The performance of the bone marrow analysis pipeline supports its incorporation into routine use as an aid for hematotoxicity assessment by pathologists. The pipeline could help expedite hematotoxicity assessment in preclinical studies and consequently drug development. The method may also enable meta-analysis of rat bone marrow characteristics from future and historical whole-slide images and may generate new biological insights from cross-study comparisons.
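The 95% limits-of-agreement comparison mentioned above is the standard Bland-Altman construction on paired counts; a minimal sketch with invented counts (not the study's data):

```python
import numpy as np

def limits_of_agreement(x, y):
    """Bland-Altman: mean difference ± 1.96 × SD of the paired differences."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    mean_d, sd_d = d.mean(), d.std(ddof=1)
    return mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d

# Hypothetical automated vs. pathologist cell counts per slide
auto_counts = [102, 95, 110, 88, 97, 105]
path_counts = [100, 97, 108, 90, 95, 103]
lo, hi = limits_of_agreement(auto_counts, path_counts)
print(f"95% limits of agreement: [{lo:.2f}, {hi:.2f}]")
```

A narrower interval means the two counting methods disagree less, which is the sense in which the pipeline's counts were "closer" to the pathologists' than the alternatives'.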
Affiliation(s)
- Smadar Shiffman
- Genentech Research and Early Development (gRED), Department of Safety Assessment, Genentech Inc., South San Francisco, USA
- Edgar A. Rios Piedra
- Genentech Research and Early Development (gRED), Department of Safety Assessment, Genentech Inc., South San Francisco, USA
- Adeyemi O. Adedeji
- Genentech Research and Early Development (gRED), Department of Safety Assessment, Genentech Inc., South San Francisco, USA
- Catherine F. Ruff
- Genentech Research and Early Development (gRED), Department of Safety Assessment, Genentech Inc., South San Francisco, USA
- Rachel N. Andrews
- Genentech Research and Early Development (gRED), Department of Safety Assessment, Genentech Inc., South San Francisco, USA
- Paula Katavolos
- Genentech Research and Early Development (gRED), Department of Safety Assessment, Genentech Inc., South San Francisco, USA
- Bristol Myers Squibb, New Brunswick, NJ 08901, USA
- Evan Liu
- Genentech Research and Early Development (gRED), Department of Development Sciences Informatics, Genentech Inc., South San Francisco, USA
- Ashley Forster
- Genentech Research and Early Development (gRED), Department of Safety Assessment, Genentech Inc., South San Francisco, USA
- University of Pennsylvania School of Veterinary Medicine, Philadelphia, PA 19104, USA
- Jochen Brumm
- Genentech Research and Early Development (gRED), Department of Nonclinical Biostatistics, Genentech Inc., South San Francisco, USA
- Reina N. Fuji
- Genentech Research and Early Development (gRED), Department of Safety Assessment, Genentech Inc., South San Francisco, USA
- Ruth Sullivan
- Genentech Research and Early Development (gRED), Department of Safety Assessment, Genentech Inc., South San Francisco, USA
57
Wang J, Peng Y. MHL-Net: A Multistage Hierarchical Learning Network for Head and Neck Multiorgan Segmentation. IEEE J Biomed Health Inform 2023; 27:4074-4085. PMID: 37171918; DOI: 10.1109/jbhi.2023.3275746.
Abstract
Accurate segmentation of head and neck organs at risk is crucial in radiotherapy. However, existing methods suffer from incomplete feature mining, insufficient information utilization, and difficulty in simultaneously improving the segmentation of both small and large organs. In this paper, a multistage hierarchical learning network is designed to fully extract multidimensional features, combining anatomical prior information with imaging features and using multistage subnetworks to improve segmentation performance. First, multilevel subnetworks are constructed for primary segmentation, localization, and fine segmentation by dividing organs into two levels, large and small. The networks each have their own learning focus while reusing features and sharing information with one another, which comprehensively improves the segmentation performance for all organs. Second, an anatomical prior probability map and a boundary contour attention mechanism are developed to address the problem of complex anatomical shapes; prior information and boundary contour features effectively assist in detecting and segmenting these shapes. Finally, a multidimensional combination attention mechanism is proposed to analyze axial, coronal, and sagittal information, capture spatial and channel features, and maximize the use of the structural information and semantic features of 3D medical images. Experimental results on several datasets show that our method is competitive with state-of-the-art methods and improves segmentation results for multiscale organs.
58
Zhang Q, Liu X, Chang J, Lu M, Jing Y, Yang R, Sun W, Deng J, Qi T, Wan M. Ultrasound image segmentation using Gamma combined with Bayesian model for focused-ultrasound-surgery lesion recognition. Ultrasonics 2023; 134:107103. PMID: 37437399; DOI: 10.1016/j.ultras.2023.107103. Received 01/29/2023; revised 06/30/2023; accepted 07/04/2023.
Abstract
This study investigates the feasibility of a combined segmentation approach for separating lesions from non-ablated regions, which allows surgeons to easily distinguish, measure, and evaluate the lesion area, thereby improving the quality of high-intensity focused ultrasound (HIFU) surgery used for non-invasive tumor treatment. Because the flexible shape of the Gamma mixture model (GΓMM) fits the complex statistical distribution of samples, a method combining the GΓMM with a Bayes framework is constructed to classify samples and obtain the segmentation result. With an appropriate normalization range and parameters, good GΓMM segmentation performance can be obtained rapidly. The performance of the proposed method on four metrics (Dice score: 85%, Jaccard coefficient: 75%, recall: 86%, and accuracy: 96%) is better than that of conventional approaches, including Otsu thresholding and region growing. Furthermore, the statistics of sample intensities indicate that the GΓMM result is similar to that obtained manually. These results indicate the stability and reliability of the GΓMM combined with the Bayes framework for segmenting HIFU lesions in ultrasound images, and show the potential of this combination for segmenting lesion areas and evaluating the effect of therapeutic ultrasound.
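As a generic illustration of the Bayes-rule classification step (the actual GΓMM fitting in the paper is more involved), each pixel can be assigned to the class with the larger posterior probability under a two-component mixture; the component parameters and mixing weights below are invented for the sketch:

```python
from scipy.stats import gamma

# Invented two-component Gamma mixture: lesion vs. non-ablated background
w_lesion, w_bg = 0.3, 0.7                 # prior mixing weights
lesion = gamma(a=9.0, scale=20.0)         # brighter lesion intensities
background = gamma(a=2.0, scale=15.0)     # darker background intensities

def classify(intensity):
    """Bayes rule: pick the class with the larger weighted likelihood
    (equivalent to comparing posterior probabilities)."""
    p_lesion = w_lesion * lesion.pdf(intensity)
    p_bg = w_bg * background.pdf(intensity)
    return "lesion" if p_lesion > p_bg else "background"

print(classify(200.0))  # high intensity -> lesion under these parameters
print(classify(20.0))   # low intensity -> background
```

In practice the mixture parameters would be estimated from the image histogram (e.g. by expectation-maximization) rather than fixed by hand as here.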
Affiliation(s)
- Quan Zhang
- The Key Laboratory of Biomedical Information Engineering of Ministry of Education, Department of Biomedical Engineering, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China
- Xuan Liu
- The Key Laboratory of Biomedical Information Engineering of Ministry of Education, Department of Biomedical Engineering, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China
- Juntao Chang
- The Key Laboratory of Biomedical Information Engineering of Ministry of Education, Department of Biomedical Engineering, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China
- Mingzhu Lu
- The Key Laboratory of Biomedical Information Engineering of Ministry of Education, Department of Biomedical Engineering, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China
- Yanshu Jing
- The Key Laboratory of Biomedical Information Engineering of Ministry of Education, Department of Biomedical Engineering, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China
- Rongzhen Yang
- The Key Laboratory of Biomedical Information Engineering of Ministry of Education, Department of Biomedical Engineering, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China
- Weihao Sun
- The Key Laboratory of Biomedical Information Engineering of Ministry of Education, Department of Biomedical Engineering, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China
- Jie Deng
- The Key Laboratory of Biomedical Information Engineering of Ministry of Education, Department of Biomedical Engineering, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China
- Tingting Qi
- The Key Laboratory of Biomedical Information Engineering of Ministry of Education, Department of Biomedical Engineering, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China
- Mingxi Wan
- The Key Laboratory of Biomedical Information Engineering of Ministry of Education, Department of Biomedical Engineering, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an 710049, China
59
Li L, Liu H, Li Q, Tian Z, Li Y, Geng W, Wang S. Near-Infrared Blood Vessel Image Segmentation Using Background Subtraction and Improved Mathematical Morphology. Bioengineering (Basel) 2023; 10:726. PMID: 37370657; DOI: 10.3390/bioengineering10060726. Open access. Received 05/03/2023; revised 05/31/2023; accepted 06/06/2023.
Abstract
Precise display of blood vessel information is crucial for doctors, not only to facilitate intravenous injection but also for the diagnosis and analysis of diseases. Infrared cameras can capture images of superficial blood vessels; however, their imaging quality commonly suffers from noise, breaks, and uneven vascular information. To overcome these problems, this paper proposes an image segmentation algorithm based on background subtraction and improved mathematical morphology. The algorithm regards the image as a superposition of blood vessels on a background, removes noise by evaluating the size of connected domains, achieves uniform blood vessel width, and smooths edges so that they reflect the actual state of the vessels. The algorithm is evaluated both subjectively and objectively to provide a basis for vascular image quality assessment. Extensive experimental results demonstrate that the proposed method can effectively extract accurate and clear vascular information.
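The connected-domain noise-removal step described above can be sketched generically with SciPy's component labeling (a simplification, not the authors' implementation): label the foreground, measure each component's size, and keep only components above a size threshold.

```python
import numpy as np
from scipy import ndimage

def remove_small_components(mask, min_size):
    """Drop connected foreground components smaller than min_size pixels."""
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep = np.zeros_like(mask, dtype=bool)
    for lab, size in enumerate(sizes, start=1):
        if size >= min_size:
            keep |= labels == lab
    return keep

# A vessel-like strip plus two 1-pixel noise specks
img = np.zeros((8, 8), dtype=bool)
img[3, 1:7] = True       # vessel: 6 connected pixels
img[0, 0] = True         # speck
img[6, 6] = True         # speck
clean = remove_small_components(img, min_size=4)
print(int(clean.sum()))  # 6 — only the vessel survives
```

The threshold trades noise suppression against losing genuinely thin vessel fragments, which is why the paper pairs this step with morphological operations to restore vessel continuity.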
Affiliation(s)
- Ling Li
- Beijing Engineering Research Center of Industrial Spectrum Imaging, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Haoting Liu
- Beijing Engineering Research Center of Industrial Spectrum Imaging, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Qing Li
- Beijing Engineering Research Center of Industrial Spectrum Imaging, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Zhen Tian
- Beijing Engineering Research Center of Industrial Spectrum Imaging, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Yajie Li
- Beijing Engineering Research Center of Industrial Spectrum Imaging, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Wenjia Geng
- Department of Traditional Chinese Medicine, Peking University People's Hospital, Beijing 100044, China
- Song Wang
- Department of Nephrology, Peking University Third Hospital, Beijing 100191, China
60
He M, Cao Y, Chi C, Yang X, Ramin R, Wang S, Yang G, Mukhtorov O, Zhang L, Kazantsev A, Enikeev M, Hu K. Research progress on deep learning in magnetic resonance imaging-based diagnosis and treatment of prostate cancer: a review on the current status and perspectives. Front Oncol 2023; 13:1189370. PMID: 37546423; PMCID: PMC10400334; DOI: 10.3389/fonc.2023.1189370. Open access. Received 03/19/2023; accepted 05/30/2023.
Abstract
Multiparametric magnetic resonance imaging (mpMRI) has emerged as a first-line screening and diagnostic tool for prostate cancer, aiding treatment selection and noninvasive radiotherapy guidance. However, manual interpretation of MRI data is challenging and time-consuming, which may impact sensitivity and specificity. With recent technological advances, artificial intelligence (AI) in the form of computer-aided diagnosis (CAD) based on MRI data has been applied to prostate cancer diagnosis and treatment. Among AI techniques, deep learning with convolutional neural networks contributes to the detection, segmentation, scoring, grading, and prognostic evaluation of prostate cancer. CAD systems offer automatic operation, rapid processing, and high accuracy, incorporating multiple sequences of multiparametric MRI data of the prostate gland into the deep learning model; they have therefore become a research direction of great interest, especially in smart healthcare. This review highlights the current progress of deep learning technology in MRI-based diagnosis and treatment of prostate cancer. The key elements of deep learning-based MRI image processing in CAD systems and radiotherapy of prostate cancer are described briefly, making them understandable not only to radiologists but also to general physicians without specialized training in imaging interpretation. Deep learning technology enables lesion identification, detection, and segmentation, grading and scoring of prostate cancer, and prediction of postoperative recurrence and prognostic outcomes. The diagnostic accuracy of deep learning can be improved by optimizing models and algorithms, expanding medical database resources, and combining multi-omics data with comprehensive analysis of various morphological data. Deep learning has the potential to become the key diagnostic method in prostate cancer diagnosis and treatment in the future.
Affiliation(s)
- Mingze He
- Institute for Urology and Reproductive Health, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Yu Cao
- I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Changliang Chi
- Department of Urology, The First Hospital of Jilin University (Lequn Branch), Changchun, Jilin, China
- Xinyi Yang
- I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Rzayev Ramin
- Department of Radiology, The Second University Clinic, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Shuowen Wang
- I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Guodong Yang
- I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Otabek Mukhtorov
- Regional State Budgetary Health Care Institution, Kostroma Regional Clinical Hospital named after Korolev E.I., Avenue Mira, Kostroma, Russia
- Liqun Zhang
- School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian, Liaoning, China
- Anton Kazantsev
- Regional State Budgetary Health Care Institution, Kostroma Regional Clinical Hospital named after Korolev E.I., Avenue Mira, Kostroma, Russia
- Mikhail Enikeev
- Institute for Urology and Reproductive Health, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Kebang Hu
- Department of Urology, The First Hospital of Jilin University (Lequn Branch), Changchun, Jilin, China
61
Bokhorst JM, Nagtegaal ID, Fraggetta F, Vatrano S, Mesker W, Vieth M, van der Laak J, Ciompi F. Deep learning for multi-class semantic segmentation enables colorectal cancer detection and classification in digital pathology images. Sci Rep 2023; 13:8398. PMID: 37225743; PMCID: PMC10209185; DOI: 10.1038/s41598-023-35491-z. Open access. Received 09/19/2022; accepted 05/18/2023.
Abstract
In colorectal cancer (CRC), artificial intelligence (AI) can alleviate the laborious task of characterizing and reporting on resected biopsies, including polyps, the numbers of which are increasing as a result of the CRC population screening programs ongoing in many countries around the globe. Here, we present an approach that addresses two major challenges in the automated assessment of CRC histopathology whole-slide images. We present an AI-based method to segment multiple ([Formula: see text]) tissue compartments in the H&E-stained whole-slide image, which provides a different, more perceptible picture of tissue morphology and composition. We test and compare a panel of state-of-the-art loss functions available for segmentation models and provide indications for their use in histopathology image segmentation, based on the analysis of (a) a multi-centric cohort of CRC cases from five medical centers in the Netherlands and Germany and (b) two publicly available datasets on segmentation in CRC. We used the best-performing AI model as the basis for a computer-aided diagnosis system that classifies colon biopsies into four main pathologically relevant categories, and we report the performance of this system on an independent cohort of more than 1000 patients. The results show that, with a good segmentation network as a base, a tool can be developed to support pathologists in the risk stratification of colorectal cancer patients, among other possible uses. We have made the segmentation model available for research use at https://grand-challenge.org/algorithms/colon-tissue-segmentation/.
Collapse
Affiliation(s)
- John-Melle Bokhorst
- Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands.
| | - Iris D Nagtegaal
- Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands
| | - Filippo Fraggetta
- Pathology Unit, Gravina Hospital, Caltagirone, Italy
| | - Simona Vatrano
- Pathology Unit, Gravina Hospital, Caltagirone, Italy
| | - Wilma Mesker
- Leids Universitair Medisch Centrum, Leiden, The Netherlands
| | - Michael Vieth
- Klinikum Bayreuth, Friedrich-Alexander-University Erlangen-Nuremberg, Bayreuth, Germany
| | - Jeroen van der Laak
- Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands
- Center for Medical Image Science and Visualization, Linköping University, Linköping, Sweden
| | - Francesco Ciompi
- Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands
| |
Collapse
|
62
|
Gardiyanoğlu E, Ünsal G, Akkaya N, Aksoy S, Orhan K. Automatic Segmentation of Teeth, Crown-Bridge Restorations, Dental Implants, Restorative Fillings, Dental Caries, Residual Roots, and Root Canal Fillings on Orthopantomographs: Convenience and Pitfalls. Diagnostics (Basel) 2023; 13:diagnostics13081487. [PMID: 37189586 DOI: 10.3390/diagnostics13081487] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2022] [Revised: 02/26/2023] [Accepted: 03/01/2023] [Indexed: 05/17/2023] Open
Abstract
BACKGROUND The aim of our study is to provide successful automatic segmentation of various objects on orthopantomographs (OPGs). METHODS A total of 8138 OPGs obtained from the archives of the Department of Dentomaxillofacial Radiology were included. OPGs were converted into PNGs and transferred to the segmentation tool's database. All teeth, crown-bridge restorations, dental implants, composite-amalgam fillings, dental caries, residual roots, and root canal fillings were manually segmented by two experts with the manual drawing semantic segmentation technique. RESULTS The intra-class correlation coefficient (ICC) for both inter- and intra-observer manual segmentation was excellent (ICC > 0.75). The intra-observer ICC was 0.994, while the inter-observer reliability was 0.989. No significant difference was detected amongst observers (p = 0.947). The calculated DSC and accuracy values across all OPGs were 0.85 and 0.95 for tooth segmentation, 0.88 and 0.99 for dental caries, 0.87 and 0.99 for dental restorations, 0.93 and 0.99 for crown-bridge restorations, 0.94 and 0.99 for dental implants, 0.78 and 0.99 for root canal fillings, and 0.78 and 0.99 for residual roots, respectively. CONCLUSIONS Faster, automated diagnosis on both 2D and 3D dental images will allow dentists to achieve higher diagnostic rates in less time, even without excluding cases.
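The DSC and accuracy values reported above for each object class can be computed from a predicted and a ground-truth binary mask; a minimal NumPy sketch (toy 4×4 masks, illustrative only, not the authors' pipeline):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

def pixel_accuracy(pred: np.ndarray, truth: np.ndarray) -> float:
    """Fraction of pixels labelled correctly (TP + TN over all pixels)."""
    return float(np.mean(pred.astype(bool) == truth.astype(bool)))

# Toy 4x4 masks: the prediction misses one pixel and adds one false positive.
truth = np.array([[0, 0, 0, 0],
                  [0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 0, 0, 0]])
pred = np.array([[0, 0, 0, 0],
                 [0, 1, 1, 1],
                 [0, 1, 0, 0],
                 [0, 0, 0, 0]])
print(round(dice_coefficient(pred, truth), 3))  # 0.75
print(round(pixel_accuracy(pred, truth), 3))    # 0.875
```

In practice both metrics are averaged over all images per class, which is how the per-object figures in the abstract are reported.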
Collapse
Affiliation(s)
- Emel Gardiyanoğlu
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, 99138 Nicosia, Cyprus
| | - Gürkan Ünsal
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, 99138 Nicosia, Cyprus
- DESAM Institute, Near East University, 99138 Nicosia, Cyprus
| | - Nurullah Akkaya
- Department of Computer Engineering, Applied Artificial Intelligence Research Centre, Near East University, 99138 Nicosia, Cyprus
| | - Seçil Aksoy
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, 99138 Nicosia, Cyprus
| | - Kaan Orhan
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, 06560 Ankara, Turkey
| |
Collapse
|
63
|
Yeung M, Rundo L, Nan Y, Sala E, Schönlieb CB, Yang G. Calibrating the Dice Loss to Handle Neural Network Overconfidence for Biomedical Image Segmentation. J Digit Imaging 2023; 36:739-752. [PMID: 36474089 PMCID: PMC10039156 DOI: 10.1007/s10278-022-00735-3] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2022] [Revised: 10/30/2022] [Accepted: 10/31/2022] [Indexed: 12/12/2022] Open
Abstract
The Dice similarity coefficient (DSC) is both a widely used metric and loss function for biomedical image segmentation due to its robustness to class imbalance. However, it is well known that the DSC loss is poorly calibrated, resulting in overconfident predictions that cannot be usefully interpreted in biomedical and clinical practice. Performance is often the only metric used to evaluate segmentations produced by deep neural networks, and calibration is often neglected. However, calibration is important for translation into biomedical and clinical practice, providing crucial contextual information to model predictions for interpretation by scientists and clinicians. In this study, we provide a simple yet effective extension of the DSC loss, named the DSC++ loss, that selectively modulates the penalty associated with overconfident, incorrect predictions. As a standalone loss function, the DSC++ loss achieves significantly improved calibration over the conventional DSC loss across six well-validated open-source biomedical imaging datasets, including both 2D binary and 3D multi-class segmentation tasks. Similarly, we observe significantly improved calibration when integrating the DSC++ loss into four DSC-based loss functions. Finally, we use softmax thresholding to illustrate that well calibrated outputs enable tailoring of recall-precision bias, which is an important post-processing technique to adapt the model predictions to suit the biomedical or clinical task. The DSC++ loss overcomes the major limitation of the DSC loss, providing a suitable loss function for training deep learning segmentation models for use in biomedical and clinical practice. Source code is available at https://github.com/mlyg/DicePlusPlus .
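The conventional soft Dice loss that DSC++ extends, and the overconfidence it rewards, can be illustrated in a few lines of NumPy. This is a sketch of the baseline loss only; the DSC++ modulation of overconfident, incorrect predictions is available in the authors' repository linked above and is not reproduced here:

```python
import numpy as np

def soft_dice_loss(probs: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """1 - soft DSC over predicted foreground probabilities.

    probs: predicted probabilities in [0, 1]; truth: binary ground truth.
    """
    intersection = np.sum(probs * truth)
    denom = np.sum(probs) + np.sum(truth)
    return 1.0 - (2.0 * intersection + eps) / (denom + eps)

truth = np.array([0.0, 0.0, 1.0, 1.0])
confident = np.array([0.01, 0.01, 0.99, 0.99])  # well-placed but overconfident
uncertain = np.array([0.30, 0.30, 0.70, 0.70])  # same ranking, better calibrated
# The Dice loss prefers the overconfident prediction, illustrating the
# calibration problem the DSC++ loss is designed to address.
print(soft_dice_loss(confident, truth) < soft_dice_loss(uncertain, truth))  # True
```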
Collapse
Affiliation(s)
- Michael Yeung
- Department of Radiology, University of Cambridge, Hills Rd, Cambridge, CB2 0QQ UK
- National Heart & Lung Institute, Imperial College London, Dovehouse St, London, SW3 6LY UK
- Department of Computing, Imperial College London, London, UK
| | - Leonardo Rundo
- Department of Radiology, University of Cambridge, Hills Rd, Cambridge, CB2 0QQ UK
- Cancer Research UK Cambridge Centre, University of Cambridge, Robinson Way, Cambridge, CB2 0RE UK
- Department of Information and Electrical Engineering and Applied Mathematics (DIEM), University of Salerno, Fisciano, Salerno 84084 Italy
| | - Yang Nan
- National Heart & Lung Institute, Imperial College London, Dovehouse St, London, SW3 6LY UK
| | - Evis Sala
- Department of Radiology, University of Cambridge, Hills Rd, Cambridge, CB2 0QQ UK
- Cancer Research UK Cambridge Centre, University of Cambridge, Robinson Way, Cambridge, CB2 0RE UK
| | - Carola-Bibiane Schönlieb
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Rd, Cambridge, CB3 0WA UK
| | - Guang Yang
- National Heart & Lung Institute, Imperial College London, Dovehouse St, London, SW3 6LY UK
| |
Collapse
|
64
|
Zhong Y, Guo Y, Fang Y, Wu Z, Wang J, Hu W. Geometric and dosimetric evaluation of deep learning based auto-segmentation for clinical target volume on breast cancer. J Appl Clin Med Phys 2023:e13951. [PMID: 36920901 DOI: 10.1002/acm2.13951] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2022] [Revised: 02/09/2023] [Accepted: 02/12/2023] [Indexed: 03/16/2023] Open
Abstract
BACKGROUND Recently, target auto-segmentation techniques based on deep learning (DL) have shown promising results. However, inaccurate target delineation will directly affect the treatment planning dose distribution and the effect of subsequent radiotherapy. Evaluation based on geometric metrics alone may not be sufficient to assess target delineation accuracy. The purpose of this paper is to validate the performance of automatic segmentation with dosimetric metrics and to construct new geometric evaluation metrics, in order to comprehensively understand the dose-response relationship from the perspective of clinical application. MATERIALS AND METHODS A DL-based target segmentation model was developed using 186 manually delineated modified radical mastectomy breast cancer cases. The resulting DL model was used to generate alternative target contours in a new set of 48 patients. The Auto-plan was reoptimized to ensure the same optimization parameters as the reference Manual-plan. To assess the dosimetric impact of target auto-segmentation, not only common geometric metrics but also new spatial parameters with distance and relative volume (R_V) to the target were used. Correlations between segmentation evaluation metrics and dosimetric changes were assessed with Spearman's correlation. RESULTS Only a strong (|R²| > 0.6, p < 0.01) or moderate (|R²| > 0.4, p < 0.01) correlation was established between the traditional geometric metrics and three dosimetric evaluation indices for the target (conformity index, homogeneity index, and mean dose). For organs at risk (OARs), an inferior or no significant relationship was found between geometric parameters and dosimetric differences. Furthermore, we found that the OARs dose distribution was affected by the boundary error of the target segmentation rather than by distance and R_V to the target. CONCLUSIONS Current geometric metrics can reflect, to a certain degree, the dose effect of target variation. To find the target contour variations that do lead to OARs dosimetry changes, clinically oriented metrics that more accurately reflect how segmentation quality affects dosimetry should be constructed.
Collapse
Affiliation(s)
- Yang Zhong
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China.,Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China.,Shanghai Clinical Research Center for Radiation Oncology, Shanghai, China.,Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
| | - Ying Guo
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China.,Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China.,Shanghai Clinical Research Center for Radiation Oncology, Shanghai, China.,Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
| | - Yingtao Fang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China.,Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China.,Shanghai Clinical Research Center for Radiation Oncology, Shanghai, China.,Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
| | - Zhiqiang Wu
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China.,Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China.,Shanghai Clinical Research Center for Radiation Oncology, Shanghai, China.,Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
| | - Jiazhou Wang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China.,Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China.,Shanghai Clinical Research Center for Radiation Oncology, Shanghai, China.,Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
| | - Weigang Hu
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China.,Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China.,Shanghai Clinical Research Center for Radiation Oncology, Shanghai, China.,Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
| |
Collapse
|
65
|
Setia SA, Stoebner ZA, Floyd C, Lu D, Oguz I, Kavoussi NL. Computer Vision Enabled Segmentation of Kidney Stones During Ureteroscopy and Laser Lithotripsy. J Endourol 2023; 37:495-501. [PMID: 36401503 DOI: 10.1089/end.2022.0511] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022] Open
Abstract
Objective: To evaluate the performance of computer vision models for automated kidney stone segmentation during flexible ureteroscopy and laser lithotripsy. Materials and Methods: We collected 20 ureteroscopy videos of intrarenal kidney stone treatment and extracted frames (N = 578) from these videos. We manually annotated kidney stones on each frame. Eighty percent of the data were used to train three standard computer vision models (U-Net, U-Net++, and DenseNet) for automatic stone segmentation during flexible ureteroscopy. The remaining data (20%) were used to compare performance of the three models after optimization through Dice coefficients and binary cross entropy. We identified the highest performing model and evaluated automatic segmentation performance during ureteroscopy for both stone localization and treatment using a separate set of endoscopic videos. We evaluated performance of the pixel-based analysis using area under the receiver operating characteristic curve (AUC-ROC), accuracy, sensitivity, and positive predictive value, both in previously recorded videos and in real time. Results: A computer vision model (U-Net++) was trained, optimized, and evaluated for kidney stone segmentation during ureteroscopy using 20 surgical videos (mean video duration of 22 seconds, standard deviation ±13 seconds). The model showed good performance for stone localization with both digital ureteroscopes (AUC-ROC: 0.98) and fiberoptic ureteroscopes (AUC-ROC: 0.93). Furthermore, the model was able to accurately segment stones and stone fragments <270 μm in diameter during laser fragmentation (AUC-ROC: 0.87) and dusting (AUC-ROC: 0.77). The model automatically annotated videos intraoperatively in three cases and could do so in real time at 30 frames per second (FPS). Conclusion: Computer vision models demonstrate strong performance for automatic stone segmentation during ureteroscopy. The ability to automatically annotate new videos at 30 FPS demonstrates the feasibility of real-time application during surgery, which could facilitate tracking tools for stone treatment.
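The pixel-based AUC-ROC values reported above can be computed by scoring the model's per-pixel stone probabilities against the flattened ground-truth mask; a minimal NumPy sketch via the Mann-Whitney formulation (the toy values are illustrative only):

```python
import numpy as np

def pixel_auc_roc(truth: np.ndarray, probs: np.ndarray) -> float:
    """AUC-ROC via the Mann-Whitney U statistic: the probability that a
    randomly chosen foreground pixel scores higher than a background pixel."""
    pos = probs[truth == 1]  # predicted probabilities at stone pixels
    neg = probs[truth == 0]  # predicted probabilities at background pixels
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return float((greater + 0.5 * ties) / (len(pos) * len(neg)))

# Toy flattened mask (1 = stone pixel) and model probabilities.
truth = np.array([0, 0, 0, 0, 1, 1, 1, 1])
probs = np.array([0.1, 0.2, 0.4, 0.6, 0.5, 0.7, 0.8, 0.9])
print(round(pixel_auc_roc(truth, probs), 4))  # 0.9375
```

The brute-force pairwise comparison is quadratic, so for full video frames a rank-based implementation (or a library routine) would be used instead; the statistic is the same.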
Collapse
Affiliation(s)
- Shaan A Setia
- Department of Urologic Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
| | - Zachary A Stoebner
- Department of Computer Science, Vanderbilt University, Nashville, Tennessee, USA
| | - Chase Floyd
- University of South Carolina School of Medicine, Columbia, South Carolina, USA
| | - Daiwei Lu
- Department of Computer Science, Vanderbilt University, Nashville, Tennessee, USA
| | - Ipek Oguz
- Department of Computer Science, Vanderbilt University, Nashville, Tennessee, USA
| | - Nicholas L Kavoussi
- Department of Urologic Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
| |
Collapse
|
66
|
Yang SCH, Folke T, Shafto P. The Inner Loop of Collective Human-Machine Intelligence. Top Cogn Sci 2023. [PMID: 36807872 DOI: 10.1111/tops.12642] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2022] [Revised: 02/02/2023] [Accepted: 02/02/2023] [Indexed: 02/22/2023]
Abstract
With the rise of artificial intelligence (AI) and the desire to ensure that such machines work well with humans, it is essential for AI systems to actively model their human teammates, a capability referred to as Machine Theory of Mind (MToM). In this paper, we introduce the inner loop of human-machine teaming expressed as communication with MToM capability. We present three different approaches to MToM: (1) constructing models of human inference with well-validated psychological theories and empirical measurements; (2) modeling the human as a copy of the AI; and (3) incorporating well-documented domain knowledge about human behavior into the above two approaches. We offer a formal language for machine communication and MToM, where each term has a clear mechanistic interpretation. We exemplify the overarching formalism and the specific approaches in two concrete example scenarios. Related work that demonstrates these approaches is highlighted along the way. The formalism, examples, and empirical support provide a holistic picture of the inner loop of human-machine teaming as a foundational building block of collective human-machine intelligence.
Collapse
Affiliation(s)
| | - Tomas Folke
- Department of Mathematics and Computer Science, Rutgers University
| | - Patrick Shafto
- Department of Mathematics and Computer Science, Rutgers University
- School of Mathematics, Institute for Advanced Studies
| |
Collapse
|
67
|
Tolpadi AA, Bharadwaj U, Gao KT, Bhattacharjee R, Gassert FG, Luitjens J, Giesler P, Morshuis JN, Fischer P, Hein M, Baumgartner CF, Razumov A, Dylov D, van Lohuizen Q, Fransen SJ, Zhang X, Tibrewala R, de Moura HL, Liu K, Zibetti MVW, Regatte R, Majumdar S, Pedoia V. K2S Challenge: From Undersampled K-Space to Automatic Segmentation. Bioengineering (Basel) 2023; 10:bioengineering10020267. [PMID: 36829761 PMCID: PMC9952400 DOI: 10.3390/bioengineering10020267] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2022] [Revised: 02/01/2023] [Accepted: 02/15/2023] [Indexed: 02/22/2023] Open
Abstract
Magnetic Resonance Imaging (MRI) offers strong soft tissue contrast but suffers from long acquisition times and requires tedious annotation from radiologists. Traditionally, these challenges have been addressed separately with reconstruction and image analysis algorithms. To see if performance could be improved by treating both tasks end-to-end, we hosted the K2S challenge, in which challenge participants segmented knee bones and cartilage from 8× undersampled k-space. We curated the 300-patient K2S dataset of multicoil raw k-space and radiologist quality-checked segmentations. Eighty-seven teams registered for the challenge, and 12 submissions were received, with methodologies ranging from serial reconstruction and segmentation, to end-to-end networks, to one that eschewed a reconstruction algorithm altogether. Four teams produced strong submissions, with the winner having a weighted Dice Similarity Coefficient of 0.910 ± 0.021 across knee bones and cartilage. Interestingly, there was no correlation between reconstruction and segmentation metrics. Further analysis showed the top four submissions were suitable for downstream biomarker analysis, largely preserving cartilage thicknesses and key bone shape features with respect to ground truth. K2S thus showed the value in considering reconstruction and image analysis as end-to-end tasks, as this leaves room for optimization while more realistically reflecting the long-term use case of tools being developed by the MR community.
Collapse
Affiliation(s)
- Aniket A. Tolpadi
- Department of Bioengineering, University of California, Berkeley, CA 94720, USA
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA 94158, USA
- Correspondence:
| | - Upasana Bharadwaj
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA 94158, USA
| | - Kenneth T. Gao
- Department of Bioengineering, University of California, Berkeley, CA 94720, USA
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA 94158, USA
| | - Rupsa Bhattacharjee
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA 94158, USA
| | - Felix G. Gassert
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA 94158, USA
- Department of Radiology, Klinikum Rechts der Isar, School of Medicine, Technical University of Munich, 81675 Munich, Germany
| | - Johanna Luitjens
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA 94158, USA
- Department of Radiology, Klinikum Großhadern, Ludwig-Maximilians-Universität, 81377 Munich, Germany
| | - Paula Giesler
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA 94158, USA
| | - Jan Nikolas Morshuis
- Cluster of Excellence Machine Learning, University of Tübingen, 72076 Tübingen, Germany
| | - Paul Fischer
- Cluster of Excellence Machine Learning, University of Tübingen, 72076 Tübingen, Germany
| | - Matthias Hein
- Cluster of Excellence Machine Learning, University of Tübingen, 72076 Tübingen, Germany
| | | | - Artem Razumov
- Center for Computational and Data-Intensive Science and Engineering, Skolkovo Institute of Science and Technology, 121205 Moscow, Russia
| | - Dmitry Dylov
- Center for Computational and Data-Intensive Science and Engineering, Skolkovo Institute of Science and Technology, 121205 Moscow, Russia
| | - Quintin van Lohuizen
- Department of Radiology, University Medical Center Groningen, 9713 GZ Groningen, The Netherlands
| | - Stefan J. Fransen
- Department of Radiology, University Medical Center Groningen, 9713 GZ Groningen, The Netherlands
| | - Xiaoxia Zhang
- Center for Advanced Imaging Innovation and Research, New York University Grossman School of Medicine, New York, NY 10016, USA
| | - Radhika Tibrewala
- Center for Advanced Imaging Innovation and Research, New York University Grossman School of Medicine, New York, NY 10016, USA
| | - Hector Lise de Moura
- Center for Advanced Imaging Innovation and Research, New York University Grossman School of Medicine, New York, NY 10016, USA
| | - Kangning Liu
- Center for Advanced Imaging Innovation and Research, New York University Grossman School of Medicine, New York, NY 10016, USA
| | - Marcelo V. W. Zibetti
- Center for Advanced Imaging Innovation and Research, New York University Grossman School of Medicine, New York, NY 10016, USA
| | - Ravinder Regatte
- Center for Advanced Imaging Innovation and Research, New York University Grossman School of Medicine, New York, NY 10016, USA
| | - Sharmila Majumdar
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA 94158, USA
| | - Valentina Pedoia
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA 94158, USA
| |
Collapse
|
68
|
Adjogatse D, Petkar I, Reis Ferreira M, Kong A, Lei M, Thomas C, Barrington SF, Dudau C, Touska P, Guerrero Urbano T, Connor SEJ. The Impact of Interactive MRI-Based Radiologist Review on Radiotherapy Target Volume Delineation in Head and Neck Cancer. AJNR Am J Neuroradiol 2023; 44:192-198. [PMID: 36702503 PMCID: PMC9891322 DOI: 10.3174/ajnr.a7773] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2022] [Accepted: 12/31/2022] [Indexed: 01/27/2023]
Abstract
BACKGROUND AND PURPOSE Peer review of head and neck cancer radiation therapy target volumes by radiologists was introduced in our center to optimize target volume delineation. Our aim was to assess the impact of MR imaging-based radiologist peer review of head and neck radiation therapy gross tumor and nodal volumes, through qualitative and quantitative analysis. MATERIALS AND METHODS Cases undergoing radical radiation therapy with coregistered MR imaging, between April 2019 and March 2020, were reviewed. The frequency and nature of volume changes were documented, with major changes classified as per the guidance of The Royal College of Radiologists. Volumetric alignment was assessed using the Dice similarity coefficient, Jaccard index, and Hausdorff distance. RESULTS Fifty cases were reviewed between April 2019 and March 2020. The median age was 59 years (range, 29-83 years), and 72% were men. Seventy-six percent of gross tumor volumes and 41.5% of gross nodal volumes were altered, with 54.8% of gross tumor volume and 66.6% of gross nodal volume alterations classified as "major." Undercontouring of soft-tissue involvement and unidentified lymph nodes were predominant reasons for change. Radiologist review significantly altered the size of both the gross tumor volume (P = .034) and clinical target tumor volume (P = .003), but not the gross nodal volume or clinical target nodal volume. The median conformity and surface distance metrics were the following: gross tumor volume Dice similarity coefficient = 0.93 (range, 0.82-0.96), Jaccard index = 0.87 (range, 0.7-0.94), Hausdorff distance = 7.45 mm (range, 5.6-11.7 mm); and gross nodal volume Dice similarity coefficient = 0.95 (0.91-0.97), Jaccard index = 0.91 (0.83-0.95), and Hausdorff distance = 20.7 mm (range, 12.6-41.6 mm). Conformity improved on gross tumor volume-to-clinical target tumor volume expansion (Dice similarity coefficient = 0.93 versus 0.95, P = .003). CONCLUSIONS MR imaging-based radiologist review resulted in major changes to most radiotherapy target volumes and significant changes in volume size of both gross tumor volume and clinical target tumor volume, suggesting that this is a fundamental step in the radiotherapy workflow of patients with head and neck cancer.
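The three volumetric-alignment metrics used in this study (Dice similarity coefficient, Jaccard index, Hausdorff distance) can all be computed from a pair of binary masks; a minimal brute-force NumPy sketch (toy 5×5 masks standing in for clinical contours):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def jaccard(a: np.ndarray, b: np.ndarray) -> float:
    """Jaccard index: |A ∩ B| / |A ∪ B|."""
    a, b = a.astype(bool), b.astype(bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance (in pixels) between two binary masks,
    computed brute-force over the foreground coordinates."""
    pa = np.argwhere(a).astype(float)
    pb = np.argwhere(b).astype(float)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))

# Toy masks: a 3x3 "target volume" and the same volume shifted one pixel.
truth = np.zeros((5, 5), dtype=bool); truth[1:4, 1:4] = True
pred = np.zeros((5, 5), dtype=bool); pred[1:4, 2:5] = True
print(round(dice(pred, truth), 3), round(jaccard(pred, truth), 3),
      hausdorff(pred, truth))  # 0.667 0.5 1.0
```

For real contours, the pixel distances would be scaled by the voxel spacing to report the Hausdorff distance in millimeters, as in the abstract.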
Collapse
Affiliation(s)
- D Adjogatse
- From the Departments of Oncology (D.A., I.P., M.R.F., A.K., M.L., T.G.U.)
- School of Biomedical Engineering and Imaging Sciences (D.A., C.T., S.E.J.C.)
| | - I Petkar
- From the Departments of Oncology (D.A., I.P., M.R.F., A.K., M.L., T.G.U.)
| | - M Reis Ferreira
- From the Departments of Oncology (D.A., I.P., M.R.F., A.K., M.L., T.G.U.)
| | - A Kong
- From the Departments of Oncology (D.A., I.P., M.R.F., A.K., M.L., T.G.U.)
| | - M Lei
- From the Departments of Oncology (D.A., I.P., M.R.F., A.K., M.L., T.G.U.)
| | - C Thomas
- Medical Physics (C.T.)
- School of Biomedical Engineering and Imaging Sciences (D.A., C.T., S.E.J.C.)
| | - S F Barrington
- King's College London and Guy's and St Thomas' PET Centre (S.F.B.), School of Biomedical Engineering and Imaging Sciences, King's College London, King's Health Partners, London, UK
| | - C Dudau
- Radiology (C.D., P.T., S.E.J.C.), Guy's and St Thomas' National Health Service Foundation Trust, London, UK
- Department of Neuroradiology (C.D., S.E.J.C.), King's College Hospital, London, UK
| | - P Touska
- Radiology (C.D., P.T., S.E.J.C.), Guy's and St Thomas' National Health Service Foundation Trust, London, UK
| | - T Guerrero Urbano
- From the Departments of Oncology (D.A., I.P., M.R.F., A.K., M.L., T.G.U.)
- Faculty of Dentistry, Oral and Craniofacial Sciences (T.G.U.), King's College London, London, UK
| | - S E J Connor
- Radiology (C.D., P.T., S.E.J.C.), Guy's and St Thomas' National Health Service Foundation Trust, London, UK
- School of Biomedical Engineering and Imaging Sciences (D.A., C.T., S.E.J.C.)
- Department of Neuroradiology (C.D., S.E.J.C.), King's College Hospital, London, UK
| |
Collapse
|
69
|
Hertel M, Liu C, Song H, Golatta M, Kappler S, Nanke R, Radicke M, Maier A, Rose G. Clinical prototype implementation enabling an improved day-to-day mammography compression. Phys Med 2023; 106:102524. [PMID: 36641900 DOI: 10.1016/j.ejmp.2023.102524] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/20/2022] [Revised: 12/22/2022] [Accepted: 01/02/2023] [Indexed: 01/15/2023] Open
Abstract
PURPOSE In mammography, breast compression is achieved by lowering a compression paddle onto the breast. Despite the directive that compression is needed, there is no concrete guideline on its execution. To estimate the degree of compression, current mammography units only provide compression force and breast thickness as parameters. Therefore, radiographers may be led to determine the level of compression mainly from the compression force and to apply the same value to all breast sizes. In this case, smaller breasts are exposed to higher pressure, resulting in a highly varying perception of discomfort or even pain during the procedure, depending on breast size. METHODS To overcome this imbalance, current research suggests that pressure might be a more suitable parameter for achieving uniform compression across all breast sizes. To utilize pressure, the contact area between the breast and the compression paddle must be determined. In this paper, we present an easy-to-implement prototype enabling a real-time pressure-based measure without the need for direct patient contact. Using an optical camera, the contact area between the breast and the compression paddle is automatically segmented by a deep learning model. RESULTS The model provides a mean pixel accuracy of 96.7% (SD: 2.3%), a mean frequency-weighted intersection over union of 88.5% (SD: 6.3%), and a Dice score of 93.6% (SD: 2.2%). The pressure display is updated more than five times per second, which enables its use in clinical routine to set the compression level. CONCLUSION This prototype could help guide mammography procedures toward an improved day-to-day breast compression routine.
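The real-time pressure readout described here reduces to dividing the compression force by the contact area recovered from the segmented camera image; a minimal sketch (the function name, pixel spacing, and force value are illustrative assumptions, not from the paper):

```python
import numpy as np

def compression_pressure_kpa(force_n: float, contact_mask: np.ndarray,
                             mm_per_pixel: float) -> float:
    """Pressure (kPa) = force (N) / contact area (m^2), scaled to kPa.

    contact_mask: binary segmentation of the breast-paddle contact region
    as seen by the camera, at a known physical pixel spacing.
    """
    area_mm2 = contact_mask.sum() * mm_per_pixel ** 2  # pixel count -> mm^2
    area_m2 = area_mm2 * 1e-6                          # mm^2 -> m^2
    return force_n / area_m2 / 1000.0                  # Pa -> kPa

# 100 N over a 10,000-pixel contact at 1 mm/pixel -> 0.01 m^2 -> 10 kPa.
mask = np.ones((100, 100), dtype=np.uint8)
print(round(compression_pressure_kpa(100.0, mask, mm_per_pixel=1.0), 6))  # 10.0
```

With the same force, a mask half the size doubles the pressure, which is exactly the small-breast imbalance the abstract describes.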
Collapse
Affiliation(s)
- Madeleine Hertel
- Siemens Healthcare GmbH, 91301 Forchheim, Germany
- Institute for Medical Engineering and Research Campus STIMULATE, Otto-von-Guericke-University, 39106 Magdeburg, Germany.
| | - Chang Liu
- Pattern Recognition Lab, Friedrich-Alexander University Erlangen-Nuremberg, 91058 Erlangen, Germany.
| | - Haobo Song
- Siemens Healthcare GmbH, 91301 Forchheim, Germany.
| | - Michael Golatta
- University Breast Unit, Department of Gynecology and Obstetrics, 69120 Heidelberg, Germany.
| | | | - Ralf Nanke
- Siemens Healthcare GmbH, 91301 Forchheim, Germany.
| | | | - Andreas Maier
- Pattern Recognition Lab, Friedrich-Alexander University Erlangen-Nuremberg, 91058 Erlangen, Germany.
| | - Georg Rose
- Institute for Medical Engineering and Research Campus STIMULATE, Otto-von-Guericke-University, 39106 Magdeburg, Germany.
| |
Collapse
|
70
|
Shoaib MA, Chuah JH, Ali R, Hasikin K, Khalil A, Hum YC, Tee YK, Dhanalakshmi S, Lai KW. An Overview of Deep Learning Methods for Left Ventricle Segmentation. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2023; 2023:4208231. [PMID: 36756163 PMCID: PMC9902166 DOI: 10.1155/2023/4208231] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/22/2022] [Revised: 10/25/2022] [Accepted: 11/24/2022] [Indexed: 01/31/2023]
Abstract
Cardiac diseases are among the key causes of death around the globe. The number of heart patients increased considerably during the pandemic. Therefore, it is crucial to assess and analyze medical and cardiac images. Deep learning architectures, specifically convolutional neural networks, have become the primary choice for the assessment of cardiac medical images. The left ventricle is a vital part of the cardiovascular system, whose boundary and size play a significant role in the evaluation of cardiac function. Owing to its automation and promising results, left ventricle segmentation using deep learning has attracted considerable attention. This article presents a critical review of deep learning methods used for left ventricle segmentation from frequently used imaging modalities, including magnetic resonance imaging, ultrasound, and computed tomography. This study also details the network architectures, software, and hardware used for training, along with the publicly available cardiac image datasets and self-prepared datasets incorporated. A summary of the evaluation metrics and results reported by different researchers is also presented. Finally, all this information is summarized to help readers understand the motivation and methodology of the various deep learning models, as well as to explore potential solutions to future challenges in LV segmentation.
Affiliation(s)
- Muhammad Ali Shoaib
- Department of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia
- Faculty of Information and Communication Technology, BUITEMS, Quetta, Pakistan
| | - Joon Huang Chuah
- Department of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia
| | - Raza Ali
- Department of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia
- Faculty of Information and Communication Technology, BUITEMS, Quetta, Pakistan
| | - Khairunnisa Hasikin
- Department of Biomedical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia
| | - Azira Khalil
- Faculty of Science & Technology, Universiti Sains Islam Malaysia, Nilai 71800, Malaysia
| | - Yan Chai Hum
- Department of Mechatronics and Biomedical Engineering, Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Malaysia
| | - Yee Kai Tee
- Department of Mechatronics and Biomedical Engineering, Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Malaysia
| | - Samiappan Dhanalakshmi
- Department of Electronics and Communication Engineering, SRM Institute of Science and Technology, Kattankulathur, India
| | - Khin Wee Lai
- Department of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia
| |
|
71
|
Ryalat MH, Dorgham O, Tedmori S, Al-Rahamneh Z, Al-Najdawi N, Mirjalili S. Harris hawks optimization for COVID-19 diagnosis based on multi-threshold image segmentation. Neural Comput Appl 2023; 35:6855-6873. [PMID: 36471798 PMCID: PMC9714421 DOI: 10.1007/s00521-022-08078-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2022] [Accepted: 11/22/2022] [Indexed: 12/04/2022]
Abstract
Digital image processing techniques and algorithms have become a great tool to support medical experts in identifying, studying, and diagnosing diseases. Image segmentation methods are among the most widely used techniques in this area, simplifying image representation and analysis. Over the last few decades, many approaches have been proposed for image segmentation, among which multilevel thresholding methods have shown better results than most others. Traditional statistical approaches such as the Otsu and Kapur methods are the standard benchmark algorithms for automatic image thresholding. Such algorithms provide optimal results, yet they suffer from high computational costs when multilevel thresholding is required, which can be treated as an optimization problem. In this work, the Harris hawks optimization technique is combined with Otsu's method to effectively reduce the required computational cost while maintaining optimal outcomes. The proposed approach is tested on publicly available imaging datasets, including chest images with clinical and genomic correlates from a rural COVID-19-positive (COVID-19-AR) population. According to various performance measures, the proposed approach achieves a substantial decrease in computational cost and time to converge while maintaining a level of quality highly competitive with the Otsu method for the same threshold values.
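Otsu's criterion, which the Harris hawks optimizer is used to maximize in the multilevel setting, selects the threshold that maximizes the between-class variance of the image histogram. A minimal single-threshold sketch using an exhaustive search (the part a metaheuristic replaces once several thresholds are needed) might look like:

```python
import numpy as np

def otsu_threshold(image: np.ndarray, bins: int = 256) -> int:
    """Single-threshold Otsu: pick t maximizing between-class variance."""
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))
    p = hist / hist.sum()                      # normalized histogram
    best_t, best_var = 0, -1.0
    for t in range(1, bins):
        w0, w1 = p[:t].sum(), p[t:].sum()      # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0        # class means
        mu1 = (np.arange(t, bins) * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2       # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

For k thresholds the search space grows combinatorially, which is why population-based optimizers such as Harris hawks are attractive there.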
Affiliation(s)
- Mohammad Hashem Ryalat
- Prince Abdullah Bin Ghazi Faculty of Information and Communication Technology, Al-Balqa Applied University, Al-Salt, 19117, Jordan
| | - Osama Dorgham
- Prince Abdullah Bin Ghazi Faculty of Information and Communication Technology, Al-Balqa Applied University, Al-Salt, 19117, Jordan
- School of Information Technology, Skyline University College, Sharjah, United Arab Emirates
| | - Sara Tedmori
- King Hussein School of Computing Sciences, Princess Sumaya University for Technology, Amman, 11941, Jordan
| | - Zainab Al-Rahamneh
- Prince Abdullah Bin Ghazi Faculty of Information and Communication Technology, Al-Balqa Applied University, Al-Salt, 19117, Jordan
| | - Nijad Al-Najdawi
- Prince Abdullah Bin Ghazi Faculty of Information and Communication Technology, Al-Balqa Applied University, Al-Salt, 19117, Jordan
| | - Seyedali Mirjalili
- Centre for Artificial Intelligence Research and Optimisation, Torrens University, Adelaide, SA 5000, Australia
- Yonsei Frontier Lab, Yonsei University, Seoul, South Korea
| |
|
72
|
Xiao G, Zhu B, Zhang Y, Gao H. FCSNet: A quantitative explanation method for surface scratch defects during belt grinding based on deep learning. COMPUT IND 2023. [DOI: 10.1016/j.compind.2022.103793] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
73
|
Intracerebral Hemorrhage Segmentation on Noncontrast Computed Tomography Using a Masked Loss Function U-Net Approach. J Comput Assist Tomogr 2023; 47:93-101. [PMID: 36219722 DOI: 10.1097/rct.0000000000001380] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/25/2023]
Abstract
OBJECTIVE Intracerebral hemorrhage (ICH) volume is a strong predictor of outcome in patients presenting with acute hemorrhagic stroke. Segmenting the hematoma is necessary for ICH volume estimation and for computerized extraction of features, such as spot sign, texture parameters, or extravasated iodine content at dual-energy computed tomography. Manual and semiautomatic segmentation methods to delineate the hematoma are tedious, user-dependent, and require trained personnel. This article presents a convolutional neural network to automatically delineate ICH from noncontrast computed tomography scans of the head. METHODS A model combining a U-Net architecture with a masked loss function was trained on standard noncontrast computed tomography images that were downsampled to 256 × 256. Data augmentation was applied to prevent overfitting, and the loss score was calculated using the soft Dice loss function. The Dice coefficient and the Hausdorff distance were computed to quantitatively evaluate the segmentation performance of the model, together with the sensitivity and specificity to determine the ICH detection accuracy. RESULTS The results demonstrate a median Dice coefficient of 75.9% and a Hausdorff distance of 2.65 pixels in segmentation performance, with a detection sensitivity of 77.0% and specificity of 96.2%. CONCLUSIONS The proposed masked loss U-Net is accurate in the automatic segmentation of ICH. Future research should focus on increasing the detection sensitivity of the model and comparing its performance with other model architectures.
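The loss described above pairs a soft Dice term with a mask that excludes irrelevant voxels from the objective. A minimal numpy sketch of that idea (illustrative only; the paper's exact masking scheme may differ):

```python
import numpy as np

def soft_dice_loss(probs: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """1 - soft Dice, computed on predicted probabilities rather than hard labels."""
    intersection = (probs * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (probs.sum() + target.sum() + eps)

def masked_soft_dice_loss(probs: np.ndarray, target: np.ndarray,
                          mask: np.ndarray, eps: float = 1e-7) -> float:
    """Soft Dice loss restricted to voxels where mask == 1."""
    return soft_dice_loss(probs * mask, target * mask, eps)
```

Masking keeps voxels outside the region of interest (e.g., outside the skull) from contributing to the loss, so the network is not rewarded or penalized there.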
|
74
|
Fang L, Wang X. Multi-input Unet model based on the integrated block and the aggregation connection for MRI brain tumor segmentation. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104027] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
|
75
|
Krithika alias AnbuDevi M, Suganthi K. Review of Semantic Segmentation of Medical Images Using Modified Architectures of UNET. Diagnostics (Basel) 2022; 12:diagnostics12123064. [PMID: 36553071 PMCID: PMC9777361 DOI: 10.3390/diagnostics12123064] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2022] [Revised: 11/17/2022] [Accepted: 11/22/2022] [Indexed: 12/12/2022] Open
Abstract
In biomedical image analysis, information about the location and appearance of tumors and lesions is indispensable for helping doctors treat patients and identify the severity of diseases. Therefore, it is essential to segment the tumors and lesions. MRI, CT, PET, ultrasound, and X-ray are the different imaging systems used to obtain this information. The well-known semantic segmentation technique is used in medical image analysis to identify and label regions of images. Semantic segmentation aims to divide the images into regions with comparable characteristics, including intensity, homogeneity, and texture. UNET is a deep learning network that segments the critical features. However, UNET's basic architecture cannot accurately segment complex MRI images. This review introduces the modified and improved models of UNET suitable for increasing segmentation accuracy.
|
76
|
Gökkan O, Kuntalp M. A new imbalance-aware loss function to be used in a deep neural network for colorectal polyp segmentation. Comput Biol Med 2022; 151:106205. [PMID: 36370582 DOI: 10.1016/j.compbiomed.2022.106205] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2022] [Revised: 09/14/2022] [Accepted: 10/09/2022] [Indexed: 12/27/2022]
Abstract
Colorectal cancers may develop in the colon region of the human body because of the late detection of polyps. Therefore, colonoscopists routinely use a colonoscopy device to view the entire colon and remove polyps by excisional biopsy. The aim of this study is to develop a new imbalance-aware loss function, i.e., the omni-comprehensive loss, to be used in deep neural networks to overcome both the imbalanced-dataset and vanishing-gradient problems in identifying the related regions of a polyp. A further motivation for developing a new loss function is to produce a more comprehensive one that combines the evaluation capabilities of region-based, shape-aware, and pixel-wise distribution loss approaches at once. To measure the performance of the new loss function, two scenarios were conducted. First, an 18-layer residual network as the backbone with a UNet decoder is implemented. Second, a 34-layer residual network as the encoder and a UNet as the decoder is designed. For both scenarios, the results of using popular imbalance-aware losses are compared with those of the proposed loss function. During training and 5-fold cross-validation, multiple publicly available datasets are used. In addition to the original data in these datasets, augmented versions are created by flipping, scaling, rotating, and contrast-limited adaptive histogram equalization operations. As a result, the proposed custom loss function produced the best performance metrics compared with the popular loss functions.
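The paper's omni-comprehensive loss is not specified in this abstract, but as an illustration of the imbalance-aware family it is compared against, the Tversky loss weights false negatives and false positives asymmetrically:

```python
import numpy as np

def tversky_loss(probs: np.ndarray, target: np.ndarray,
                 alpha: float = 0.7, beta: float = 0.3, eps: float = 1e-7) -> float:
    """Tversky loss: alpha weights false negatives, beta weights false positives.

    With alpha > beta, missing foreground pixels costs more than
    over-segmenting, which helps on imbalanced data such as small polyps.
    """
    tp = (probs * target).sum()
    fn = ((1.0 - probs) * target).sum()
    fp = (probs * (1.0 - target)).sum()
    return 1.0 - (tp + eps) / (tp + alpha * fn + beta * fp + eps)
```

Setting alpha = beta = 0.5 recovers the ordinary Dice loss; the alpha/beta split is the knob that makes the loss "imbalance-aware".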
Affiliation(s)
- Ozan Gökkan
- Ege University, Graduate School of Natural and Applied Sciences, Dept. of Biomedical Technologies, 35040, Turkey.
| | - Mehmet Kuntalp
- Dokuz Eylül University, Graduate School of Natural and Applied Sciences, Dept. of Biomedical Technologies, 35390, Turkey
| |
|
77
|
Cass ND, Lindquist NR, Zhu Q, Li H, Oguz I, Tawfik KO. Machine Learning for Automated Calculation of Vestibular Schwannoma Volumes. Otol Neurotol 2022; 43:1252-1256. [PMID: 36109146 DOI: 10.1097/mao.0000000000003687] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
HYPOTHESIS Machine learning-derived algorithms are capable of automated calculation of vestibular schwannoma tumor volumes without operator input. BACKGROUND Volumetric measurements are the most sensitive means of detecting vestibular schwannoma growth and are important for patient counseling and management decisions. Yet manually measuring volume is logistically challenging and time-consuming. METHODS We developed a deep learning framework fusing transformers and convolutional neural networks to calculate vestibular schwannoma volumes without operator input. The algorithm was trained, validated, and tested on an external, publicly available data set consisting of magnetic resonance imaging images of medium and large tumors (178-9,598 mm³) with uniform acquisition protocols. The algorithm was then trained, validated, and tested on an internal data set of variable-size tumors (5-6,126 mm³) with variable acquisition protocols. RESULTS The externally trained algorithm yielded 87% voxel overlap (Dice score) with manually segmented tumors on the external data set. The same algorithm failed to translate to accurate tumor detection when tested on the internal data set, with a Dice score of 36%. Retraining on the internal data set yielded a Dice score of 82% compared with manually segmented images, and 85% when only considering tumors of similar size to the external data set (>178 mm³). Manual segmentation by two experts demonstrated a high intraclass correlation coefficient (0.999). CONCLUSION Sophisticated machine learning algorithms delineate vestibular schwannomas with an accuracy exceeding established norms for repeated manual volumetric measurements (up to 20% error): 87% accuracy on a homogeneous data set, and 82% to 85% accuracy on a more varied data set mirroring real-world neurotology practice. This technology has promise for clinical applicability and time savings.
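Once a voxel-wise segmentation is available, tumor volume follows directly from the foreground voxel count and the scan's voxel spacing. A minimal sketch of that conversion (illustrative, not the authors' pipeline):

```python
import numpy as np

def mask_volume_mm3(mask: np.ndarray, spacing_mm: tuple) -> float:
    """Volume of a binary segmentation mask, given per-axis voxel spacing in mm."""
    voxel_volume = float(np.prod(spacing_mm))      # mm^3 per voxel
    return float(mask.astype(bool).sum()) * voxel_volume
```

Note that spacing varies between acquisition protocols, which is one reason a model trained on uniform protocols can mis-handle a heterogeneous clinical data set.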
Affiliation(s)
- Nathan D Cass
- The Otology Group of Vanderbilt, Department of Otolaryngology, Vanderbilt University Medical Center
| | - Nathan R Lindquist
- The Otology Group of Vanderbilt, Department of Otolaryngology, Vanderbilt University Medical Center
| | - Qibang Zhu
- Department of Computer Science, Vanderbilt University
| | | | | | - Kareem O Tawfik
- The Otology Group of Vanderbilt, Department of Otolaryngology, Vanderbilt University Medical Center
| |
|
78
|
Zhao L, Zhou D, Jin X, Zhu W. nn-TransUNet: An Automatic Deep Learning Pipeline for Heart MRI Segmentation. Life (Basel) 2022; 12:1570. [PMID: 36295005 PMCID: PMC9604839 DOI: 10.3390/life12101570] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2022] [Revised: 09/21/2022] [Accepted: 10/07/2022] [Indexed: 11/18/2022] Open
Abstract
Cardiovascular disease (CVD) is a disease with high mortality in modern times. The MRI segmentation task of extracting the related organs is essential for CVD diagnosis. Currently, a large number of deep learning methods are designed for medical image segmentation tasks. However, the design of segmentation algorithms tends to focus on deepening the network architectures and tuning the parameters and hyperparameters manually, which not only consumes considerable time and effort, but also means that architectures and settings designed for a single task perform well only on a single dataset and poorly in other cases. In this paper, nn-TransUNet, an automatic deep learning pipeline for MRI segmentation of the heart, is proposed to combine the experiment planning of nnU-Net with the network architecture of TransUNet. nn-TransUNet uses vision transformers and convolution layers in the design of the encoder and uses convolution layers as the decoder. With the adaptive preprocessing and network training plan generated by the proposed automatic experiment planning pipeline, nn-TransUNet is able to fulfill the target of medical image segmentation in heart MRI tasks. nn-TransUNet achieved state-of-the-art performance on the heart MRI segmentation task of the Automatic Cardiac Diagnosis Challenge (ACDC) dataset. It also saves the effort and time of manually tuning parameters and hyperparameters, which reduces the burden on researchers.
Affiliation(s)
- Li Zhao
- School of Information Science and Engineering, Yunnan University, Kunming 650504, China
| | - Dongming Zhou
- School of Information Science and Engineering, Yunnan University, Kunming 650504, China
| | - Xin Jin
- School of Software, Yunnan University, Kunming 650504, China
| | - Weina Zhu
- School of Information Science and Engineering, Yunnan University, Kunming 650504, China
| |
|
79
|
Bueno A, Bosch I, Rodríguez A, Jiménez A, Carreres J, Fernández M, Marti-Bonmati L, Alberich-Bayarri A. Automated Cervical Spinal Cord Segmentation in Real-World MRI of Multiple Sclerosis Patients by Optimized Hybrid Residual Attention-Aware Convolutional Neural Networks. J Digit Imaging 2022; 35:1131-1142. [PMID: 35789447 PMCID: PMC9582086 DOI: 10.1007/s10278-022-00637-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2021] [Revised: 03/22/2022] [Accepted: 04/09/2022] [Indexed: 10/17/2022] Open
Abstract
Magnetic resonance (MR) imaging is the most sensitive clinical tool in the diagnosis and monitoring of multiple sclerosis (MS) alterations. Spinal cord evaluation has gained interest in this clinical scenario in recent years, but, unlike the brain, there is a more limited choice of algorithms to assist spinal cord segmentation. Our goal was to investigate and develop an automatic MR cervical cord segmentation method, enabling automated and seamless spinal cord atrophy assessment and setting the stage for the development of an aggregated algorithm for the extraction of lesion-related imaging biomarkers. The algorithm was developed using a real-world MR imaging dataset of 121 MS patients (96 cases used as a training dataset and 25 cases as a validation dataset). Transversal, 3D T1-weighted gradient echo MR images (TE/TR/FA = 1.7-2.7 ms/5.6-8.2 ms/12°) were acquired in a 3 T system (Signa HD, GEHC) as standard of care in our clinical practice. Experienced radiologists supervised the manual labelling, which was considered the ground-truth. The 2D convolutional neural network consisted of a hybrid residual attention-aware segmentation method trained to delineate the cervical spinal cord. The training was conducted using a focal loss function, based on the Tversky index to address label imbalance, and an automatic optimal learning rate finder. Our automated model provided an accurate segmentation, achieving a validation DICE coefficient of 0.904 ± 0.101 compared with the manual delineation. An automatic method for cervical spinal cord segmentation on T1-weighted MR images was successfully implemented. It will have direct implications serving as the first step for accelerating the process for MS staging and follow-up through imaging biomarkers.
Affiliation(s)
- América Bueno
- Instituto de Tecnologías y Aplicaciones Multimedia, Universitat Politècnica de Valencia, Valencia, Spain.
| | - Ignacio Bosch
- Instituto de Tecnologías y Aplicaciones Multimedia, Universitat Politècnica de Valencia, Valencia, Spain
| | - Alejandro Rodríguez
- Biomedical Imaging Research Group (GIBI230), Hospital Universitario y Politécnico e Instituto de Investigación Sanitaria La Fe, Valencia, Spain
| | - Ana Jiménez
- Quantitative Imaging Biomarkers in Medicine, QUIBIM S.L, Valencia, Spain
| | - Joan Carreres
- Radiology Department, Hospital Universitario y Politécnico La Fe, Valencia, Spain
| | - Matías Fernández
- Biomedical Imaging Research Group (GIBI230), Hospital Universitario y Politécnico e Instituto de Investigación Sanitaria La Fe, Valencia, Spain
| | - Luis Marti-Bonmati
- Biomedical Imaging Research Group (GIBI230), Hospital Universitario y Politécnico e Instituto de Investigación Sanitaria La Fe, Valencia, Spain
- Imaging La Fe node at Distributed Network for Biomedical Imaging (ReDIB) Unique Scientific and Technical Infrastructures (ICTS), Valencia, Spain
| | - Angel Alberich-Bayarri
- Biomedical Imaging Research Group (GIBI230), Hospital Universitario y Politécnico e Instituto de Investigación Sanitaria La Fe, Valencia, Spain
- Quantitative Imaging Biomarkers in Medicine, QUIBIM S.L, Valencia, Spain
| |
|
80
|
Aloupogianni E, Ichimura T, Hamada M, Ishikawa M, Murakami T, Sasaki A, Nakamura K, Kobayashi N, Obi T. Hyperspectral imaging for tumor segmentation on pigmented skin lesions. JOURNAL OF BIOMEDICAL OPTICS 2022; 27:106007. [PMID: 36316301 PMCID: PMC9619132 DOI: 10.1117/1.jbo.27.10.106007] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/29/2022] [Accepted: 10/10/2022] [Indexed: 06/16/2023]
Abstract
SIGNIFICANCE Malignant skin tumors, which include melanoma and nonmelanoma skin cancers, are the most prevalent type of malignant tumor. Gross pathology of pigmented skin lesions (PSL) remains manual, time-consuming, and heavily dependent on the expertise of the medical personnel. Hyperspectral imaging (HSI) can assist in the detection of tumors and evaluate the status of tumor margins by their spectral signatures. AIM Tumor segmentation of medical HSI data is an active research field. The goal of this study is to propose a framework for HSI-based tumor segmentation of PSL. APPROACH An HSI dataset of 28 PSL was prepared. Two frameworks for data preprocessing and tumor segmentation were proposed, with models based on machine learning and deep learning at the core of each framework. RESULTS Cross-validation performance showed that pixel-wise processing achieves higher segmentation performance in terms of the Jaccard coefficient. Simultaneous use of spatio-spectral features produced more comprehensive tumor masks. A three-dimensional Xception-based network achieved performance similar to state-of-the-art networks while allowing for more detailed detection of the tumor border. CONCLUSIONS Good performance was achieved for melanocytic lesions, but margins were difficult to detect in some cases of basal cell carcinoma. The frameworks proposed in this study could be further improved for robustness against different pathologies and detailed delineation of tissue margins to facilitate computer-assisted diagnosis during gross pathology.
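The Jaccard coefficient used for the cross-validation comparison above is the intersection-over-union of the predicted and reference tumor masks; a minimal version:

```python
import numpy as np

def jaccard_index(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Jaccard (intersection-over-union) coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)
```

Jaccard is stricter than Dice on partial overlaps (J = D / (2 - D)), which is why some segmentation papers prefer it for reporting.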
Affiliation(s)
- Eleni Aloupogianni
- Tokyo Institute of Technology, Department of Information and Communications Engineering, Meguro, Japan
| | - Takaya Ichimura
- Saitama Medical University Moroyama Campus, Department of Pathology, Faculty of Medicine, Iruma, Japan
| | - Mei Hamada
- Saitama Medical University Moroyama Campus, Department of Pathology, Faculty of Medicine, Iruma, Japan
| | - Masahiro Ishikawa
- Saitama Medical University Hidaka Campus, Faculty of Health and Medical Care, Hidaka, Japan
| | - Takuo Murakami
- Saitama Medical University Moroyama Campus, Department of Dermatology, Faculty of Medicine, Iruma, Japan
| | - Atsushi Sasaki
- Saitama Medical University Moroyama Campus, Department of Pathology, Faculty of Medicine, Iruma, Japan
| | - Koichiro Nakamura
- Saitama Medical University Moroyama Campus, Department of Dermatology, Faculty of Medicine, Iruma, Japan
| | - Naoki Kobayashi
- Saitama Medical University Hidaka Campus, Faculty of Health and Medical Care, Hidaka, Japan
| | - Takashi Obi
- Tokyo Institute of Technology, Department of Information and Communications Engineering, Meguro, Japan
- Tokyo Institute of Technology, Institute of Innovative Research, Yokohama, Japan
| |
|
81
|
Meng Y, Xu M, Yoon S, Jeong Y, Park DS. Flexible and high quality plant growth prediction with limited data. FRONTIERS IN PLANT SCIENCE 2022; 13:989304. [PMID: 36172552 PMCID: PMC9511019 DOI: 10.3389/fpls.2022.989304] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/08/2022] [Accepted: 08/19/2022] [Indexed: 06/16/2023]
Abstract
Predicting plant growth is a fundamental challenge that can be employed to analyze plants and to make decisions that yield healthy, high-yielding plants. Deep learning has recently shown its potential to address this challenge; however, two issues remain. First, image-based plant growth prediction is currently approached either from a time-series viewpoint or from an image-generation viewpoint, yielding a flexible learning framework and clear predictions, respectively. Second, deep learning-based algorithms are notorious for requiring large-scale datasets to obtain competitive performance, but collecting enough data is time-consuming and expensive. To address these issues, we consider plant growth prediction from both viewpoints with two new time-series data augmentation algorithms. More specifically, we propose a new framework with a length-changeable time-series processing unit to generate images flexibly. A generative adversarial loss is utilized to optimize our model to obtain high-quality images. Furthermore, we first identify three key points for performing time-series data augmentation and then put forward T-Mixup and T-Copy-Paste. T-Mixup fuses images from different times pixel-wise, while T-Copy-Paste creates new time-series images with a different background by reusing individual leaves extracted from the existing dataset. We evaluate our method on a public dataset and achieve superior results, with the generated RGB images and instance masks securing an average PSNR of 27.53 and 27.62, respectively, compared with the previous best of 26.55 and 26.92.
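PSNR compares a generated frame against its ground-truth frame via the mean squared error, and the T-Mixup idea of fusing two time points pixel-wise reduces to a convex combination of frames. Both can be sketched in a few lines (illustrative; how the paper samples the mixing weight is not specified in this abstract):

```python
import numpy as np

def psnr(img: np.ndarray, ref: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between an image and its reference."""
    mse = np.mean((img.astype(np.float64) - ref.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def t_mixup(frame_a: np.ndarray, frame_b: np.ndarray, lam: float) -> np.ndarray:
    """Pixel-wise fusion of two time-series frames with mixing weight lam."""
    return lam * frame_a.astype(np.float64) + (1.0 - lam) * frame_b.astype(np.float64)
```

Higher PSNR means the generated frame is closer to the reference, so the jump from 26.55 to 27.53 dB corresponds to a measurably lower pixel-wise error.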
Affiliation(s)
- Yao Meng
- Department of Electronics Engineering, Jeonbuk National University, Jeonbuk, South Korea
- Core Research Institute of Intelligent Robots, Jeonbuk National University, Jeonbuk, South Korea
| | - Mingle Xu
- Department of Electronics Engineering, Jeonbuk National University, Jeonbuk, South Korea
- Core Research Institute of Intelligent Robots, Jeonbuk National University, Jeonbuk, South Korea
| | - Sook Yoon
- Department of Computer Engineering, Mokpo National University, Jeonnam, South Korea
| | - Yongchae Jeong
- Division of Electronics and Information Engineering, Jeonbuk National University, Jeonbuk, South Korea
| | - Dong Sun Park
- Department of Electronics Engineering, Jeonbuk National University, Jeonbuk, South Korea
- Core Research Institute of Intelligent Robots, Jeonbuk National University, Jeonbuk, South Korea
| |
|
82
|
Faghani S, Khosravi B, Zhang K, Moassefi M, Jagtap JM, Nugen F, Vahdati S, Kuanar SP, Rassoulinejad-Mousavi SM, Singh Y, Vera Garcia DV, Rouzrokh P, Erickson BJ. Mitigating Bias in Radiology Machine Learning: 3. Performance Metrics. Radiol Artif Intell 2022; 4:e220061. [PMID: 36204539 PMCID: PMC9530766 DOI: 10.1148/ryai.220061] [Citation(s) in RCA: 43] [Impact Index Per Article: 14.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2022] [Revised: 08/16/2022] [Accepted: 08/17/2022] [Indexed: 05/31/2023]
Abstract
The increasing use of machine learning (ML) algorithms in clinical settings raises concerns about bias in ML models. Bias can arise at any step of ML creation, including data handling, model development, and performance evaluation. Potential biases in the ML model can be minimized by implementing these steps correctly. This report focuses on performance evaluation and discusses model fitness, as well as a set of performance evaluation toolboxes: namely, performance metrics, performance interpretation maps, and uncertainty quantification. By discussing the strengths and limitations of each toolbox, our report highlights strategies and considerations to mitigate and detect biases during performance evaluations of radiology artificial intelligence models. Keywords: Segmentation, Diagnosis, Convolutional Neural Network (CNN) © RSNA, 2022.
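The pixel-level performance metrics this report discusses (sensitivity, specificity, Dice, and relatives) all derive from the same confusion-matrix counts, which is worth seeing concretely when reasoning about metric-choice bias:

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, target: np.ndarray) -> dict:
    """Sensitivity, specificity, and Dice from binary-mask confusion counts."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.sum(pred & target)      # true positives
    tn = np.sum(~pred & ~target)    # true negatives
    fp = np.sum(pred & ~target)     # false positives
    fn = np.sum(~pred & target)     # false negatives
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "dice": 2 * tp / (2 * tp + fp + fn),
    }
```

Because specificity counts the (usually enormous) background as true negatives while Dice ignores them entirely, the two can tell very different stories on imbalanced images, which is exactly the kind of evaluation bias the report warns about.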
Affiliation(s)
- Shahriar Faghani
- From the Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, 200 1st St SW, Rochester, MN 55905
| | - Bardia Khosravi
- From the Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, 200 1st St SW, Rochester, MN 55905
| | - Kuan Zhang
- From the Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, 200 1st St SW, Rochester, MN 55905
| | - Mana Moassefi
- From the Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, 200 1st St SW, Rochester, MN 55905
| | - Jaidip Manikrao Jagtap
- From the Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, 200 1st St SW, Rochester, MN 55905
| | - Fred Nugen
- From the Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, 200 1st St SW, Rochester, MN 55905
| | - Sanaz Vahdati
- From the Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, 200 1st St SW, Rochester, MN 55905
| | - Shiba P. Kuanar
- From the Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, 200 1st St SW, Rochester, MN 55905
| | | | - Yashbir Singh
- From the Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, 200 1st St SW, Rochester, MN 55905
| | - Diana V. Vera Garcia
- From the Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, 200 1st St SW, Rochester, MN 55905
| | - Pouria Rouzrokh
- From the Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, 200 1st St SW, Rochester, MN 55905
| | - Bradley J. Erickson
- From the Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, 200 1st St SW, Rochester, MN 55905
| |
|
83
|
Valmaggia P, Friedli P, Hörmann B, Kaiser P, Scholl HPN, Cattin PC, Sandkühler R, Maloca PM. Feasibility of Automated Segmentation of Pigmented Choroidal Lesions in OCT Data With Deep Learning. Transl Vis Sci Technol 2022; 11:25. [PMID: 36156729 PMCID: PMC9526362 DOI: 10.1167/tvst.11.9.25] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Purpose To evaluate the feasibility of automated segmentation of pigmented choroidal lesions (PCLs) in optical coherence tomography (OCT) data and compare the performance of different deep neural networks. Methods Swept-source OCT image volumes were annotated pixel-wise for PCLs and background. Three deep neural network architectures were applied to the data: the multi-dimensional gated recurrent units (MD-GRU), the V-Net, and the nnU-Net. The nnU-Net was used to compare the performance of two-dimensional (2D) versus three-dimensional (3D) predictions. Results A total of 121 OCT volumes were analyzed (100 normal and 21 PCLs). Automated PCL segmentations were successful with all neural networks. The 3D nnU-Net predictions showed the highest recall with a mean of 0.77 ± 0.22 (MD-GRU, 0.60 ± 0.31; V-Net, 0.61 ± 0.25). The 3D nnU-Net predicted PCLs with a Dice coefficient of 0.78 ± 0.13, outperforming MD-GRU (0.62 ± 0.23) and V-Net (0.59 ± 0.24). The smallest distance to the manual annotation was found using 3D nnU-Net with a mean maximum Hausdorff distance of 315 ± 172 µm (MD-GRU, 1542 ± 1169 µm; V-Net, 2408 ± 1060 µm). The 3D nnU-Net showed a superior performance compared with stacked 2D predictions. Conclusions The feasibility of automated deep learning segmentation of PCLs was demonstrated in OCT data. The neural network architecture had a relevant impact on PCL predictions. Translational Relevance This work serves as proof of concept for segmentations of choroidal pathologies in volumetric OCT data; improvements are conceivable to meet clinical demands for the diagnosis, monitoring, and treatment evaluation of PCLs.
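The maximum Hausdorff distance reported above is the largest distance from any point of one mask to the nearest point of the other, taken symmetrically. A brute-force sketch for small masks (in voxel units; the paper reports it scaled to µm):

```python
import numpy as np

def hausdorff_distance(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two binary masks, in voxel units."""
    pts_a = np.argwhere(mask_a)    # (Na, ndim) foreground coordinates
    pts_b = np.argwhere(mask_b)    # (Nb, ndim)
    # Pairwise Euclidean distances between all foreground voxels.
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    # Worst-case nearest-neighbour distance, in both directions.
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))
```

The all-pairs distance matrix makes this O(Na·Nb) in memory, so production pipelines typically use distance transforms or KD-trees instead; the definition, however, is exactly this max-of-min construction.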
Collapse
Affiliation(s)
- Philippe Valmaggia
- Department of Biomedical Engineering, University of Basel, Allschwil, Switzerland.,Institute of Molecular and Clinical Ophthalmology Basel (IOB), Basel, Switzerland.,Department of Ophthalmology, University Hospital Basel, Basel, Switzerland
| | | | | | | | - Hendrik P N Scholl
- Institute of Molecular and Clinical Ophthalmology Basel (IOB), Basel, Switzerland.,Department of Ophthalmology, University Hospital Basel, Basel, Switzerland
| | - Philippe C Cattin
- Department of Biomedical Engineering, University of Basel, Allschwil, Switzerland
| | - Robin Sandkühler
- Department of Biomedical Engineering, University of Basel, Allschwil, Switzerland
| | - Peter M Maloca
- Institute of Molecular and Clinical Ophthalmology Basel (IOB), Basel, Switzerland.,Department of Ophthalmology, University Hospital Basel, Basel, Switzerland.,Moorfields Eye Hospital NHS Foundation Trust, London, EC1V 2PD, UK
| |
Collapse
|
84
|
Mehrvar S, Kambara T. Morphologic Features and Deep Learning-Based Analysis of Canine Spermatogenic Stages. Toxicol Pathol 2022; 50:736-753. [PMID: 36000561 DOI: 10.1177/01926233221117747] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
In nonclinical toxicity studies, stage-aware evaluation is often expected to assess drug-induced testicular toxicity. Although stage-aware evaluation does not require identification of specific stages, it is important to understand microscopic features of spermatogenic staging. Staging of the spermatogenic cycle in dogs is a challenging and time-consuming process. In this study, we first defined morphologic features for the eight spermatogenic stages in standard histology sections (H&E slides) of dog testes. For image analysis, we defined the key morphologic features of five stages/pooled stage groups (I-II, III-IV, V, VI-VII, and VIII). These criteria were used to develop a deep learning (DL) algorithm for staging of the spermatogenic cycle of control dog testes using whole slide images. In addition, a DL-based nucleus segmentation model was trained to detect and quantify the number of different germ cells, including spermatogonia, spermatocytes, and spermatids. Identification of spermatogenic stages and quantification of germ cell populations were successfully automated by the DL models. Combining these two algorithms provided color-coding visual spermatogenic staging and quantitative information on germ cell populations at specific stages that would facilitate the stage-aware evaluation and detection of changes in germ cell populations in nonclinical toxicity studies.
Collapse
|
85
|
Suri JS, Agarwal S, Saba L, Chabert GL, Carriero A, Paschè A, Danna P, Mehmedović A, Faa G, Jujaray T, Singh IM, Khanna NN, Laird JR, Sfikakis PP, Agarwal V, Teji JS, R Yadav R, Nagy F, Kincses ZT, Ruzsa Z, Viskovic K, Kalra MK. Multicenter Study on COVID-19 Lung Computed Tomography Segmentation with varying Glass Ground Opacities using Unseen Deep Learning Artificial Intelligence Paradigms: COVLIAS 1.0 Validation. J Med Syst 2022; 46:62. [PMID: 35988110 PMCID: PMC9392994 DOI: 10.1007/s10916-022-01850-y] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2022] [Accepted: 08/02/2022] [Indexed: 11/09/2022]
Abstract
Variations in COVID-19 lesions such as ground-glass opacities (GGO), consolidations, and crazy paving can compromise the ability of solo-deep learning (SDL) or hybrid-deep learning (HDL) artificial intelligence (AI) models to predict automated COVID-19 lung segmentation in computed tomography (CT) from unseen data, leading to poor clinical performance. As the first study of its kind, “COVLIAS 1.0-Unseen” proves two hypotheses: (i) contrast adjustment is vital for AI, and (ii) HDL is superior to SDL. In a multicenter study, 10,000 CT slices were collected from 72 Italian (ITA) patients with low GGO and 80 Croatian (CRO) patients with high GGO. Hounsfield units (HU) were automatically adjusted to train the AI models and predict from test data, leading to four combinations: two Unseen sets, (i) train-CRO:test-ITA and (ii) train-ITA:test-CRO, and two Seen sets, (iii) train-CRO:test-CRO and (iv) train-ITA:test-ITA. COVLIAS used three SDL models (PSPNet, SegNet, UNet) and six HDL models (VGG-PSPNet, VGG-SegNet, VGG-UNet, ResNet-PSPNet, ResNet-SegNet, and ResNet-UNet). Two trained, blinded senior radiologists conducted ground-truth annotations. Five types of performance metrics were used to validate COVLIAS 1.0-Unseen, which was further benchmarked against MedSeg, an open-source web-based system. After HU adjustment, HDL (Unseen AI) exceeded SDL (Unseen AI) by 4% and 5% for DS and JI, respectively, and by 6% for CC. The COVLIAS-MedSeg difference was < 5%, meeting regulatory guidelines. Unseen AI was successfully demonstrated using automated HU adjustment, and HDL was found to be superior to SDL.
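The abstract's automated Hounsfield-unit adjustment is not specified in detail here; as a hedged illustration of the general idea, the sketch below applies a fixed CT intensity window and rescales to [0, 1] before slices reach a segmentation model. The window bounds (-1000 to 400 HU, a generic lung window) are assumptions for illustration, not the paper's procedure:

```python
import numpy as np

def window_hu(ct_slice, lo=-1000.0, hi=400.0):
    """Clip a CT slice to a Hounsfield-unit window and rescale to [0, 1].

    The bounds here are a generic lung window; COVLIAS adjusts HU
    automatically rather than with fixed bounds."""
    clipped = np.clip(ct_slice.astype(np.float32), lo, hi)
    return (clipped - lo) / (hi - lo)

# air padding, air, water, and bone-like voxels
raw = np.array([[-2000.0, -1000.0], [0.0, 1000.0]])
print(window_hu(raw))
```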
Collapse
|
86
|
Boildieu D, Guerenne-Del Ben T, Duponchel L, Sol V, Petit JM, Champion É, Kano H, Helbert D, Magnaudeix A, Leproux P, Carré P. Coherent anti-Stokes Raman scattering cell imaging and segmentation with unsupervised data analysis. Front Cell Dev Biol 2022; 10:933897. [PMID: 36051442 PMCID: PMC9424763 DOI: 10.3389/fcell.2022.933897] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2022] [Accepted: 07/11/2022] [Indexed: 11/17/2022] Open
Abstract
Coherent Raman imaging has been extensively applied to live-cell imaging over the last two decades, making it possible to probe the intracellular lipid, protein, nucleic acid, and water content with a high acquisition rate and sensitivity. In this context, multiplex coherent anti-Stokes Raman scattering (MCARS) microspectroscopy using sub-nanosecond laser pulses is now recognized as a mature and straightforward technology for label-free bioimaging, offering the high spectral resolution of conventional Raman spectroscopy with reduced acquisition time. Here, we introduce the combination of the MCARS imaging technique with unsupervised data analysis based on multivariate curve resolution (MCR). The MCR process is implemented under the classical signal non-negativity constraint and, even more originally, under a new spatial constraint based on cell segmentation. We thus introduce a new methodology for hyperspectral cell imaging and segmentation, based on a simple, unsupervised workflow without any spectrum-to-spectrum phase retrieval computation. We first assess the robustness of our approach by considering cells of different types, namely, from the human HEK293 and murine C2C12 lines. To evaluate its applicability over a broader range, we then study HEK293 cells in different physiological states and experimental situations. Specifically, we compare an interphasic cell with a mitotic (prophase) one. We also present a comparison between a fixed cell and a living cell, in order to visualize the potential changes induced by the fixation protocol in cellular architecture. Next, with the aim of assessing more precisely the sensitivity of our approach, we study HEK293 living cells overexpressing tropomyosin-related kinase B (TrkB), a cancer-related membrane receptor, depending on the presence of its ligand, brain-derived neurotrophic factor (BDNF). Finally, the segmentation capability of the approach is evaluated in the case of a single cell and also by considering cell clusters of various sizes.
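MCR under a non-negativity constraint is closely related to non-negative matrix factorization: the hyperspectral cube, unfolded to a pixels-by-wavenumbers matrix, is factored into non-negative concentration maps and component spectra. The sketch below uses plain Lee-Seung multiplicative updates as a stand-in; the authors' MCR additionally imposes a segmentation-based spatial constraint, which is omitted here, and the data are synthetic:

```python
import numpy as np

def nmf(D, k, n_iter=500, seed=0):
    """Factor a non-negative matrix D (pixels x wavenumbers) into
    concentration maps C and spectra S with D ~ C @ S, both non-negative,
    via Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    n, m = D.shape
    C = rng.random((n, k)) + 1e-3
    S = rng.random((k, m)) + 1e-3
    for _ in range(n_iter):
        S *= (C.T @ D) / (C.T @ C @ S + 1e-12)
        C *= (D @ S.T) / (C @ S @ S.T + 1e-12)
    return C, S

# two synthetic component "spectra" mixed across 50 pixels
true_S = np.array([[1.0, 0.2, 0.0, 0.0], [0.0, 0.0, 0.3, 1.0]])
true_C = np.random.default_rng(1).random((50, 2))
D = true_C @ true_S
C, S = nmf(D, k=2)
err = np.linalg.norm(D - C @ S) / np.linalg.norm(D)
print(err)  # relative reconstruction error, near zero for exactly rank-2 data
```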
Collapse
Affiliation(s)
- Damien Boildieu
- University of Limoges, CNRS, XLIM, UMR 7252, Limoges, France
- University of Poitiers, CNRS, XLIM, UMR 7252, Poitiers, France
| | | | - Ludovic Duponchel
- University of Lille, CNRS, UMR 8516, LASIRE - Laboratoire de Spectroscopie Pour Les Interactions, La Réactivité et L’Environnement, Lille, France
| | - Vincent Sol
- University of Limoges, PEIRENE, UR 22722, Limoges, France
| | | | - Éric Champion
- University of Limoges, CNRS, Institut de Recherche sur Les Céramiques, UMR 7315, Limoges, France
| | - Hideaki Kano
- Department of Chemistry, Faculty of Science, Kyushu University, Fukuoka, Japan
| | - David Helbert
- University of Poitiers, CNRS, XLIM, UMR 7252, Poitiers, France
| | - Amandine Magnaudeix
- University of Limoges, CNRS, Institut de Recherche sur Les Céramiques, UMR 7315, Limoges, France
| | - Philippe Leproux
- University of Limoges, CNRS, XLIM, UMR 7252, Limoges, France
- *Correspondence: Philippe Leproux,
| | - Philippe Carré
- University of Poitiers, CNRS, XLIM, UMR 7252, Poitiers, France
| |
Collapse
|
87
|
Wüstner D. Image segmentation and separation of spectrally similar dyes in fluorescence microscopy by dynamic mode decomposition of photobleaching kinetics. BMC Bioinformatics 2022; 23:334. [PMID: 35962314 PMCID: PMC9373304 DOI: 10.1186/s12859-022-04881-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2022] [Accepted: 08/03/2022] [Indexed: 12/03/2022] Open
Abstract
BACKGROUND Image segmentation in fluorescence microscopy is often based on spectral separation of fluorescent probes (color-based segmentation) or on significant intensity differences in individual image regions (intensity-based segmentation). These approaches fail, if dye fluorescence shows large spectral overlap with other employed probes or with strong cellular autofluorescence. RESULTS Here, a novel model-free approach is presented which determines bleaching characteristics based on dynamic mode decomposition (DMD) and uses the inferred photobleaching kinetics to distinguish different probes or dye molecules from autofluorescence. DMD is a data-driven computational method for detecting and quantifying dynamic events in complex spatiotemporal data. Here, DMD is first used on synthetic image data and thereafter used to determine photobleaching characteristics of a fluorescent sterol probe, dehydroergosterol (DHE), compared to that of cellular autofluorescence in the nematode Caenorhabditis elegans. It is shown that decomposition of those dynamic modes allows for separating probe from autofluorescence without invoking a particular model for the bleaching process. In a second application, DMD of dye-specific photobleaching is used to separate two green-fluorescent dyes, an NBD-tagged sphingolipid and Alexa488-transferrin, thereby assigning them to different cellular compartments. CONCLUSIONS Data-based decomposition of dynamic modes can be employed to analyze spatially varying photobleaching of fluorescent probes in cells and tissues for spatial and temporal image segmentation, discrimination of probe from autofluorescence and image denoising. The new method should find wide application in analysis of dynamic fluorescence imaging data.
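Exact DMD reduces to an SVD of the snapshot matrix followed by an eigendecomposition of the projected one-step propagator; the eigenvalue moduli are the per-frame decay factors that distinguish a fast-bleaching probe from slowly bleaching autofluorescence. A minimal sketch on synthetic data with two known decay rates (illustrative only; the paper's pipeline operates on real image series):

```python
import numpy as np

def dmd(X, r):
    """Exact dynamic mode decomposition of a snapshot matrix X
    (pixels x frames): eigenvalues and modes of the best-fit linear
    operator A with X[:, t+1] ~ A @ X[:, t], truncated to rank r."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atilde = U.conj().T @ X2 @ Vh.conj().T / s   # projected propagator
    eigvals, W = np.linalg.eig(Atilde)
    modes = X2 @ Vh.conj().T / s @ W
    return eigvals, modes

# two spatial patterns bleaching at different per-frame rates
t = np.arange(30)
p1 = np.zeros(100); p1[:50] = 1.0   # "probe" region, fast bleaching
p2 = np.zeros(100); p2[50:] = 1.0   # "autofluorescence" region, slow bleaching
X = np.outer(p1, 0.80**t) + np.outer(p2, 0.95**t)
eigvals, modes = dmd(X, r=2)
print(sorted(np.abs(eigvals)))  # recovered per-frame decay factors: 0.80 and 0.95
```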
Collapse
Affiliation(s)
- Daniel Wüstner
- Department of Biochemistry and Molecular Biology and Physics of Life Sciences (PhyLife) Center, University of Southern Denmark, Campusvej 55, DK-5230, Odense, Denmark.
| |
Collapse
|
88
|
Amorosino G, Peruzzo D, Redaelli D, Olivetti E, Arrigoni F, Avesani P. DBB - A Distorted Brain Benchmark for Automatic Tissue Segmentation in Paediatric Patients. Neuroimage 2022; 260:119486. [PMID: 35843515 DOI: 10.1016/j.neuroimage.2022.119486] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2021] [Revised: 06/30/2022] [Accepted: 07/13/2022] [Indexed: 10/17/2022] Open
Abstract
T1-weighted magnetic resonance images provide a comprehensive view of the morphology of the human brain at the macro scale. These images are usually the input of a segmentation process that aims to detect anatomical structures and label them according to a predefined set of target tissues. Automated methods for brain tissue segmentation rely on anatomical priors of human brain structure, which is why their performance is quite accurate on healthy individuals. Nevertheless, model-based tools become less accurate in clinical practice, specifically in cases of severe lesions or highly distorted cerebral anatomy. More recently, there is empirical evidence that a data-driven approach can be more robust in the presence of alterations of brain structures, even when the learning model is trained on healthy brains. Our contribution is a benchmark to support an open investigation of how the tissue segmentation of distorted brains can be improved by adopting a supervised learning approach. We formulate a precise definition of the task and propose an evaluation metric for a fair and quantitative comparison. The training sample is composed of almost one thousand healthy individuals. Data include both T1-weighted MR images and their labeling of brain tissues. The test sample is a collection of several dozen individuals with severe brain distortions. Data and code are openly published on BrainLife, an open science platform for reproducible neuroscience data analysis.
Collapse
Affiliation(s)
- Gabriele Amorosino
- NeuroInformatics Laboratory (NILab), Bruno Kessler Foundation (FBK), Trento, Italy; Center for Mind and Brain Sciences (CIMeC), University of Trento, Italy.
| | - Denis Peruzzo
- Neuroimaging Lab, Scientific Institute IRCCS Eugenio Medea, Bosisio Parini, Italy
| | | | - Emanuele Olivetti
- NeuroInformatics Laboratory (NILab), Bruno Kessler Foundation (FBK), Trento, Italy; Center for Mind and Brain Sciences (CIMeC), University of Trento, Italy
| | - Filippo Arrigoni
- Paediatric Radiology and Neuroradiology Department, V. Buzzi Children's Hospital, Milan, Italy
| | - Paolo Avesani
- NeuroInformatics Laboratory (NILab), Bruno Kessler Foundation (FBK), Trento, Italy; Center for Mind and Brain Sciences (CIMeC), University of Trento, Italy
| |
Collapse
|
89
|
Agarwal M, Agarwal S, Saba L, Chabert GL, Gupta S, Carriero A, Pasche A, Danna P, Mehmedovic A, Faa G, Shrivastava S, Jain K, Jain H, Jujaray T, Singh IM, Turk M, Chadha PS, Johri AM, Khanna NN, Mavrogeni S, Laird JR, Sobel DW, Miner M, Balestrieri A, Sfikakis PP, Tsoulfas G, Misra DP, Agarwal V, Kitas GD, Teji JS, Al-Maini M, Dhanjil SK, Nicolaides A, Sharma A, Rathore V, Fatemi M, Alizad A, Krishnan PR, Yadav RR, Nagy F, Kincses ZT, Ruzsa Z, Naidu S, Viskovic K, Kalra MK, Suri JS. Eight pruning deep learning models for low storage and high-speed COVID-19 computed tomography lung segmentation and heatmap-based lesion localization: A multicenter study using COVLIAS 2.0. Comput Biol Med 2022; 146:105571. [PMID: 35751196 PMCID: PMC9123805 DOI: 10.1016/j.compbiomed.2022.105571] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2022] [Revised: 04/05/2022] [Accepted: 04/26/2022] [Indexed: 12/12/2022]
Abstract
BACKGROUND COVLIAS 1.0, an automated lung segmentation system, was designed for COVID-19 diagnosis but has issues related to storage space and speed. This study shows that COVLIAS 2.0 uses pruned AI (PAI) networks to improve both storage and speed while maintaining high performance on lung segmentation and lesion localization. METHODOLOGY The proposed study uses multicenter ∼9,000 CT slices from two different nations, namely, CroMed from Croatia (80 patients, experimental data) and NovMed from Italy (72 patients, validation data). We hypothesize that by using pruning and evolutionary optimization algorithms, the size of the AI models can be reduced significantly while ensuring optimal performance. Eight pruning techniques were designed by combining four optimization algorithms, (i) differential evolution (DE), (ii) genetic algorithm (GA), (iii) particle swarm optimization (PSO), and (iv) whale optimization (WO), with two deep learning frameworks, (i) fully connected network (FCN) and (ii) SegNet. COVLIAS 2.0 was validated using "Unseen NovMed" and benchmarked against MedSeg. Statistical tests for stability and reliability were also conducted. RESULTS Pruning algorithms (i) FCN-DE, (ii) FCN-GA, (iii) FCN-PSO, and (iv) FCN-WO showed improvement in storage by 92.4%, 95.3%, 98.7%, and 99.8%, respectively, when compared against solo FCN, and (v) SegNet-DE, (vi) SegNet-GA, (vii) SegNet-PSO, and (viii) SegNet-WO showed improvement by 97.1%, 97.9%, 98.8%, and 99.2%, respectively, when compared against solo SegNet. AUC was > 0.94 (p < 0.0001) on CroMed and > 0.86 (p < 0.0001) on NovMed for all eight EA models, and PAI inference took < 0.25 s per image. DenseNet-121-based Grad-CAM heatmaps showed validation on ground-glass opacity lesions. CONCLUSIONS The eight successfully validated PAI networks are five times faster, storage efficient, and could be used in clinical settings.
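The paper's pruning uses evolutionary search (DE, GA, PSO, WO) to decide which parameters to drop; as a much-simplified stand-in, the sketch below prunes a weight matrix by magnitude and reports the storage saving when zeros need not be stored. The keep fraction and the use of magnitude rather than an evolved fitness are assumptions for illustration:

```python
import numpy as np

def prune_by_magnitude(weights, keep_frac):
    """Zero out all but the largest-magnitude fraction of weights and
    report the storage saving if zeros are stored sparsely. A stand-in
    for the paper's evolutionary pruning: here 'fitness' is simply
    weight magnitude rather than validation performance."""
    flat = np.abs(weights).ravel()
    k = max(1, int(keep_frac * flat.size))
    threshold = np.partition(flat, -k)[-k]       # k-th largest magnitude
    mask = np.abs(weights) >= threshold
    saved = 1.0 - mask.sum() / weights.size
    return weights * mask, saved

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256))                  # a toy dense layer
Wp, saved = prune_by_magnitude(W, keep_frac=0.05)
print(f"storage reduced by {saved:.1%}")         # ~95%, cf. the 92-99% reductions reported
```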
Collapse
Affiliation(s)
- Mohit Agarwal
- Department of Computer Science Engineering, Bennett University, India
| | - Sushant Agarwal
- Department of Computer Science Engineering, PSIT, Kanpur, India; Advanced Knowledge Engineering Centre, Global Biomedical Technologies, Inc., Roseville, CA 95661, USA
| | - Luca Saba
- Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), Cagliari, Italy
| | - Gian Luca Chabert
- Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), Cagliari, Italy
| | - Suneet Gupta
- Department of Computer Science Engineering, Bennett University, India
| | - Alessandro Carriero
- Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), Cagliari, Italy
| | - Alessio Pasche
- Department of Radiology, "Maggiore della Carità" Hospital, University of Piemonte Orientale, Via Solaroli 17, 28100, Novara, Italy
| | - Pietro Danna
- Department of Radiology, "Maggiore della Carità" Hospital, University of Piemonte Orientale, Via Solaroli 17, 28100, Novara, Italy
| | | | - Gavino Faa
- Department of Pathology - AOU of Cagliari, Italy
| | - Saurabh Shrivastava
- College of Computing Sciences and IT, Teerthanker Mahaveer University, Moradabad, 244001, India
| | - Kanishka Jain
- College of Computing Sciences and IT, Teerthanker Mahaveer University, Moradabad, 244001, India
| | - Harsh Jain
- College of Computing Sciences and IT, Teerthanker Mahaveer University, Moradabad, 244001, India
| | - Tanay Jujaray
- Dept of Molecular, Cell and Developmental Biology, University of California, Santa Cruz, CA, USA
| | | | - Monika Turk
- The Hanse-Wissenschaftskolleg Institute for Advanced Study, Delmenhorst, Germany
| | | | - Amer M Johri
- Division of Cardiology, Queen's University, Kingston, Ontario, Canada
| | - Narendra N Khanna
- Department of Cardiology, Indraprastha APOLLO Hospitals, New Delhi, India
| | - Sophie Mavrogeni
- Cardiology Clinic, Onassis Cardiac Surgery Center, Athens, Greece
| | - John R Laird
- Heart and Vascular Institute, Adventist Health St. Helena, St Helena, CA, USA
| | - David W Sobel
- Minimally Invasive Urology Institute, Brown University, Providence, RI, USA
| | - Martin Miner
- Men's Health Center, Miriam Hospital Providence, Rhode Island, USA
| | - Antonella Balestrieri
- Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), Cagliari, Italy
| | - Petros P Sfikakis
- Rheumatology Unit, National Kapodistrian University of Athens, Greece
| | - George Tsoulfas
- Aristoteleion University of Thessaloniki, Thessaloniki, Greece
| | | | | | - George D Kitas
- Academic Affairs, Dudley Group NHS Foundation Trust, Dudley, UK; Arthritis Research UK Epidemiology Unit, Manchester University, Manchester, UK
| | - Jagjit S Teji
- Ann and Robert H. Lurie Children's Hospital of Chicago, Chicago, USA
| | - Mustafa Al-Maini
- Allergy, Clinical Immunology and Rheumatology Institute, Toronto, Canada
| | | | - Andrew Nicolaides
- Vascular Screening and Diagnostic Centre and Univ. of Nicosia Medical School, Cyprus
| | - Aditya Sharma
- Division of Cardiovascular Medicine, University of Virginia, Charlottesville, VA, USA
| | | | - Mostafa Fatemi
- Dept. of Physiology & Biomedical Engg., Mayo Clinic College of Medicine and Science, MN, USA
| | - Azra Alizad
- Dept. of Radiology, Mayo Clinic College of Medicine and Science, MN, USA
- Ferenc Nagy
- Department of Radiology, University of Szeged, 6725, Hungary
| | | | - Zoltan Ruzsa
- Invasive Cardiology Division, University of Szeged, Budapest, Hungary
| | - Subbaram Naidu
- Electrical Engineering Department, University of Minnesota, Duluth, MN, USA
| | | | - Manudeep K Kalra
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
| | - Jasjit S Suri
- College of Computing Sciences and IT, Teerthanker Mahaveer University, Moradabad, 244001, India; Stroke Diagnostic and Monitoring Division, AtheroPoint™, Roseville, CA, USA.
| |
Collapse
|
90
|
Wen X, Zhao B, Yuan M, Li J, Sun M, Ma L, Sun C, Yang Y. Application of Multi-Scale Fusion Attention U-Net to Segment the Thyroid Gland on Localized Computed Tomography Images for Radiotherapy. Front Oncol 2022; 12:844052. [PMID: 35720003 PMCID: PMC9204279 DOI: 10.3389/fonc.2022.844052] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2021] [Accepted: 04/26/2022] [Indexed: 12/24/2022] Open
Abstract
Objective To explore the performance of Multi-scale Fusion Attention U-Net (MSFA-U-Net) in thyroid gland segmentation on localized computed tomography (CT) images for radiotherapy. Methods We selected localized radiotherapeutic CT images from 80 patients with breast cancer or head and neck tumors; label images were manually delineated by experienced radiologists. The data set was randomly divided into the training set (n = 60), the validation set (n = 10), and the test set (n = 10). We expanded the data in the training set and evaluated the performance of the MSFA-U-Net model using the evaluation indices Dice similarity coefficient (DSC), Jaccard similarity coefficient (JSC), positive predictive value (PPV), sensitivity (SE), and Hausdorff distance (HD). Results For the MSFA-U-Net model, the DSC, JSC, PPV, SE, and HD values of the segmented thyroid gland in the test set were 0.90 ± 0.09, 0.82 ± 0.11, 0.91 ± 0.09, 0.90 ± 0.11, and 2.39 ± 0.54, respectively. Compared with U-Net, HRNet, and Attention U-Net, MSFA-U-Net increased DSC by 0.04, 0.06, and 0.04, respectively; increased JSC by 0.05, 0.08, and 0.04, respectively; increased SE by 0.04, 0.11, and 0.09, respectively; and reduced HD by 0.21, 0.20, and 0.06, respectively. The test set image results showed that the thyroid edges segmented by the MSFA-U-Net model were closer to the standard thyroid edges delineated by the experts than were those segmented by the other three models. Moreover, the edges were smoother, resistance to noise interference was stronger, and oversegmentation and undersegmentation were reduced. Conclusion The MSFA-U-Net model could meet basic clinical requirements and improve the efficiency of physicians' clinical work.
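The DSC, JSC, PPV, and SE reported above are all derived from the true-positive, false-positive, and false-negative pixel counts of a binary mask pair. A compact sketch using the standard definitions (not code from the paper):

```python
import numpy as np

def seg_metrics(pred, gt):
    """DSC, JSC, PPV, and SE for binary masks, from TP/FP/FN counts."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # true positives
    fp = np.logical_and(pred, ~gt).sum()   # false positives
    fn = np.logical_and(~pred, gt).sum()   # false negatives
    return {
        "DSC": 2 * tp / (2 * tp + fp + fn),
        "JSC": tp / (tp + fp + fn),
        "PPV": tp / (tp + fp),
        "SE":  tp / (tp + fn),
    }

pred = np.zeros((10, 10), int); pred[2:8, 2:8] = 1   # 36 predicted pixels
gt   = np.zeros((10, 10), int); gt[3:9, 3:9] = 1     # 36 ground-truth pixels
m = seg_metrics(pred, gt)
print({k: round(float(v), 3) for k, v in m.items()})
```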
Collapse
Affiliation(s)
- Xiaobo Wen
- Department of Radiotherapy, Yunnan Cancer Hospital, Kunming, China
| | - Biao Zhao
- Department of Radiotherapy, Yunnan Cancer Hospital, Kunming, China
| | - Meifang Yuan
- Department of Radiotherapy, Yunnan Cancer Hospital, Kunming, China
| | - Jinzhi Li
- Department of Radiotherapy, Yunnan Cancer Hospital, Kunming, China
| | - Mengzhen Sun
- Department of Radiotherapy, Yunnan Cancer Hospital, Kunming, China
| | - Lishuang Ma
- Department of Radiotherapy, Yunnan Cancer Hospital, Kunming, China
| | - Chaoxi Sun
- Department of Neurosurgery, Yunnan Cancer Hospital, Kunming, China
| | - Yi Yang
- Department of Radiotherapy, Yunnan Cancer Hospital, Kunming, China
| |
Collapse
|
91
|
COVLIAS 2.0-cXAI: Cloud-Based Explainable Deep Learning System for COVID-19 Lesion Localization in Computed Tomography Scans. Diagnostics (Basel) 2022; 12:diagnostics12061482. [PMID: 35741292 PMCID: PMC9221733 DOI: 10.3390/diagnostics12061482] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2022] [Revised: 06/07/2022] [Accepted: 06/13/2022] [Indexed: 02/07/2023] Open
Abstract
Background: The previous COVID-19 lung diagnosis system lacks both scientific validation and the role of explainable artificial intelligence (AI) for understanding lesion localization. This study presents a cloud-based explainable AI, the “COVLIAS 2.0-cXAI” system, using four kinds of class activation map (CAM) models. Methodology: Our cohort consisted of ~6000 CT slices from two sources (Croatia, 80 COVID-19 patients; Italy, 15 control patients). The COVLIAS 2.0-cXAI design consisted of three stages: (i) automated lung segmentation using a hybrid deep learning ResNet-UNet model with automatic adjustment of Hounsfield units, hyperparameter optimization, and parallel and distributed training, (ii) classification using three kinds of DenseNet (DN) models (DN-121, DN-169, DN-201), and (iii) validation using four kinds of CAM visualization techniques: gradient-weighted class activation mapping (Grad-CAM), Grad-CAM++, score-weighted CAM (Score-CAM), and FasterScore-CAM. The COVLIAS 2.0-cXAI was validated by three trained senior radiologists for its stability and reliability. The Friedman test was also performed on the scores of the three radiologists. Results: The ResNet-UNet segmentation model resulted in dice similarity of 0.96, Jaccard index of 0.93, a correlation coefficient of 0.99, with a figure-of-merit of 95.99%, while the classifier accuracies for the three DN nets (DN-121, DN-169, and DN-201) were 98%, 98%, and 99% with losses of ~0.003, ~0.0025, and ~0.002 using 50 epochs, respectively. The mean AUC for all three DN models was 0.99 (p < 0.0001). COVLIAS 2.0-cXAI achieved a mean alignment index (MAI) score between heatmaps and the gold standard of four out of five on 80% of scans, establishing the system for clinical settings. Conclusions: COVLIAS 2.0-cXAI successfully demonstrated a cloud-based explainable AI system for lesion localization in lung CT scans.
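Grad-CAM, the first of the four CAM variants listed, weights each feature channel by its spatially averaged gradient and applies a ReLU to the weighted sum. A framework-agnostic sketch on toy activations and gradients (obtaining the gradients themselves requires a deep learning framework and is omitted; the toy values are assumptions for illustration):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heatmap from a conv layer's activations (C, H, W) and the
    gradients of the class score w.r.t. those activations: channel weights
    are the spatially averaged gradients, and the map is the ReLU-ed
    weighted sum, normalized to [0, 1]."""
    weights = gradients.mean(axis=(1, 2))    # alpha_k, one weight per channel
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0.0)
    return cam / cam.max() if cam.max() > 0 else cam

# toy activations: channel 0 fires on a lesion-like blob, channel 1 is flat noise
fmap = np.zeros((2, 6, 6)); fmap[0, 2:4, 2:4] = 1.0
fmap[1] = 0.1
grads = np.stack([np.full((6, 6), 0.5), np.full((6, 6), -0.2)])  # score favors ch. 0
cam = grad_cam(fmap, grads)
print(cam[3, 3], cam[0, 0])  # 1.0 inside the blob, 0.0 outside
```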
Collapse
|
92
|
Semi-supervised segmentation of echocardiography videos via noise-resilient spatiotemporal semantic calibration and fusion. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.03.022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
|
93
|
Awasthi N, Vermeer L, Fixsen LS, Lopata RGP, Pluim JPW. LVNet: Lightweight Model for Left Ventricle Segmentation for Short Axis Views in Echocardiographic Imaging. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2022; 69:2115-2128. [PMID: 35452387 DOI: 10.1109/tuffc.2022.3169684] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Lightweight segmentation models are becoming more popular for fast diagnosis on small and low-cost medical imaging devices. This study focuses on the segmentation of the left ventricle (LV) in cardiac ultrasound (US) images. A new lightweight model [LV network (LVNet)] is proposed for segmentation, which requires fewer parameters yet improves segmentation performance in terms of Dice score (DS). The proposed model is compared with state-of-the-art methods, such as UNet, MiniNetV2, and fully convolutional dense dilated network (FCdDN). The proposed model comes with a post-processing pipeline that further enhances the segmentation results. In general, training is done directly using the segmentation mask as the output and the US image as the input of the model. A new strategy for segmentation is also introduced in addition to the direct training method used. Compared with the UNet model, an improvement in DS performance as high as 5% was found for segmentation with papillary (WP) muscles, and of 18.5% when the papillary muscles are excluded. The proposed model requires only 5% of the memory required by a UNet model. LVNet thus achieves a better trade-off between the number of parameters and segmentation performance than other conventional models. The developed code is available at https://github.com/navchetanawasthi/Left_Ventricle_Segmentation.
Collapse
|
94
|
Xu W, Yang X, Li Y, Jiang G, Jia S, Gong Z, Mao Y, Zhang S, Teng Y, Zhu J, He Q, Wan L, Liang D, Li Y, Hu Z, Zheng H, Liu X, Zhang N. Deep Learning-Based Automated Detection of Arterial Vessel Wall and Plaque on Magnetic Resonance Vessel Wall Images. Front Neurosci 2022; 16:888814. [PMID: 35720719 PMCID: PMC9198483 DOI: 10.3389/fnins.2022.888814] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2022] [Accepted: 04/21/2022] [Indexed: 12/04/2022] Open
Abstract
Purpose To develop and evaluate an automatic segmentation method for arterial vessel walls and plaques to facilitate arterial morphological quantification in magnetic resonance vessel wall imaging (MRVWI). Methods MRVWI images acquired from 124 patients with atherosclerotic plaques were included. A convolutional neural network-based deep learning model, namely VWISegNet, was used to extract features from MRVWI images and classify each pixel to facilitate segmentation of the vessel wall. Two-dimensional (2D) cross-sectional slices reconstructed from all plaques and 7 main arterial segments of 115 patients were used to build and optimize the deep learning model. The model performance was evaluated on the remaining nine-patient test set using the Dice similarity coefficient (DSC) and average surface distance (ASD). Results The proposed automatic segmentation method demonstrated satisfactory agreement with the manual method, with DSCs of 93.8% for lumen contours and 86.0% for outer wall contours, which were higher than those obtained from the traditional U-Net, Attention U-Net, and Inception U-Net on the same nine-subject test set. All ASD values were less than 0.198 mm. The Bland-Altman plots and scatter plots also showed good agreement between the methods. All intraclass correlation coefficient values between the automatic method and the manual method were greater than 0.780, and greater than those between two manual reads. Conclusion The proposed deep learning-based automatic segmentation method achieved good consistency with the manual method in the segmentation of the arterial vessel wall and plaque, and was even more accurate than the manual results, hence improving the convenience of arterial morphological quantification.
Collapse
Affiliation(s)
- Wenjing Xu
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
| | - Xiong Yang
- United Imaging Healthcare Co., Ltd., Shanghai, China
| | - Yikang Li
- Department of Computing, Imperial College London, London, United Kingdom
| | - Guihua Jiang
- Department of Radiology, Guangdong Second Provincial General Hospital, Guangzhou, China
| | - Sen Jia
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Zhenhuan Gong
- United Imaging Healthcare Co., Ltd., Shanghai, China
| | - Yufei Mao
- United Imaging Healthcare Co., Ltd., Shanghai, China
| | - Shuheng Zhang
- United Imaging Healthcare Co., Ltd., Shanghai, China
| | - Yanqun Teng
- United Imaging Healthcare Co., Ltd., Shanghai, China
| | - Jiayu Zhu
- United Imaging Healthcare Co., Ltd., Shanghai, China
| | - Qiang He
- United Imaging Healthcare Co., Ltd., Shanghai, China
| | - Liwen Wan
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Dong Liang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Ye Li
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Zhanli Hu
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Hairong Zheng
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Xin Liu
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Na Zhang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| |
Collapse
|
95
|
Wu C, Wang Z. Robust fuzzy dual-local information clustering with kernel metric and quadratic surface prototype for image segmentation. APPL INTELL 2022. [DOI: 10.1007/s10489-022-03690-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
96
|
Suri JS, Agarwal S, Chabert GL, Carriero A, Paschè A, Danna PSC, Saba L, Mehmedović A, Faa G, Singh IM, Turk M, Chadha PS, Johri AM, Khanna NN, Mavrogeni S, Laird JR, Pareek G, Miner M, Sobel DW, Balestrieri A, Sfikakis PP, Tsoulfas G, Protogerou AD, Misra DP, Agarwal V, Kitas GD, Teji JS, Al-Maini M, Dhanjil SK, Nicolaides A, Sharma A, Rathore V, Fatemi M, Alizad A, Krishnan PR, Nagy F, Ruzsa Z, Fouda MM, Naidu S, Viskovic K, Kalra MK. COVLIAS 1.0 Lesion vs. MedSeg: An Artificial Intelligence Framework for Automated Lesion Segmentation in COVID-19 Lung Computed Tomography Scans. Diagnostics (Basel) 2022; 12:1283. [PMID: 35626438 PMCID: PMC9141749 DOI: 10.3390/diagnostics12051283] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2022] [Revised: 05/18/2022] [Accepted: 05/19/2022] [Indexed: 02/01/2023] Open
Abstract
Background: COVID-19 is a disease with multiple variants and is quickly spreading throughout the world. It is crucial to identify patients suspected of having COVID-19 early, because the vaccine is not readily available in certain parts of the world. Methodology: Lung computed tomography (CT) imaging can be used to diagnose COVID-19 as an alternative to the RT-PCR test in some cases. The occurrence of ground-glass opacities in the lung region is a characteristic of COVID-19 in chest CT scans, and these are difficult to locate and segment manually. The proposed study combines solo deep learning (DL) and hybrid DL (HDL) models to tackle lesion localization and segmentation more quickly. One DL and four HDL models—namely, PSPNet, VGG-SegNet, ResNet-SegNet, VGG-UNet, and ResNet-UNet—were trained using annotations from an expert radiologist. The training scheme adopted a fivefold cross-validation strategy on a cohort of 3000 images selected from a set of 40 COVID-19-positive individuals. Results: The proposed variability study used tracings from two trained radiologists as part of the validation. Five artificial intelligence (AI) models were benchmarked against MedSeg. The best AI model, ResNet-UNet, was superior to MedSeg by 9% and 15% for Dice and Jaccard, respectively, when compared against MD 1, and by 4% and 8%, respectively, when compared against MD 2. Statistical tests (the Mann-Whitney test, paired t-test, and Wilcoxon test) demonstrated its stability and reliability, with p < 0.0001. The online system processed each slice in <1 s. Conclusions: The AI models reliably located and segmented COVID-19 lesions in CT scans, and the COVLIAS 1.0Lesion locator passed the intervariability test.
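The abstract reports performance gaps in both Dice and Jaccard. For the same pair of binary masks the two metrics are algebraically related, which is why Jaccard gaps tend to look larger than Dice gaps; a small sketch of the conversion (the numeric values below are illustrative only, not the paper's results):

```python
def jaccard_from_dice(d):
    """For the same pair of binary masks, Jaccard J = D / (2 - D)."""
    return d / (2.0 - d)

def dice_from_jaccard(j):
    """Inverse relation: Dice D = 2J / (1 + J)."""
    return 2.0 * j / (1.0 + j)

# Illustrative scores only: a 0.09 Dice gap maps to a larger Jaccard gap
print(jaccard_from_dice(0.90))  # ≈ 0.818
print(jaccard_from_dice(0.81))  # ≈ 0.681
```

The relation holds exactly because both metrics are monotone functions of the ratio of intersection to union of the two masks.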
Collapse
Affiliation(s)
- Jasjit S. Suri
- Stroke Diagnostic and Monitoring Division, AtheroPoint™, Roseville, CA 95661, USA; (I.M.S.); (P.S.C.)
- Advanced Knowledge Engineering Centre, GBTI, Roseville, CA 95661, USA;
| | - Sushant Agarwal
- Advanced Knowledge Engineering Centre, GBTI, Roseville, CA 95661, USA;
- Department of Computer Science Engineering, PSIT, Kanpur 209305, India
| | - Gian Luca Chabert
- Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 09124 Cagliari, Italy; (G.L.C.); (A.P.); (P.S.C.D.); (L.S.); (A.B.)
| | - Alessandro Carriero
- Department of Radiology, “Maggiore della Carità” Hospital, University of Piemonte Orientale (UPO), Via Solaroli 17, 28100 Novara, Italy;
| | - Alessio Paschè
- Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 09124 Cagliari, Italy; (G.L.C.); (A.P.); (P.S.C.D.); (L.S.); (A.B.)
| | - Pietro S. C. Danna
- Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 09124 Cagliari, Italy; (G.L.C.); (A.P.); (P.S.C.D.); (L.S.); (A.B.)
| | - Luca Saba
- Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 09124 Cagliari, Italy; (G.L.C.); (A.P.); (P.S.C.D.); (L.S.); (A.B.)
| | - Armin Mehmedović
- University Hospital for Infectious Diseases, 10000 Zagreb, Croatia; (A.M.); (K.V.)
| | - Gavino Faa
- Department of Pathology, Azienda Ospedaliero Universitaria (A.O.U.), 09124 Cagliari, Italy;
| | - Inder M. Singh
- Stroke Diagnostic and Monitoring Division, AtheroPoint™, Roseville, CA 95661, USA; (I.M.S.); (P.S.C.)
| | - Monika Turk
- The Hanse-Wissenschaftskolleg Institute for Advanced Study, 27753 Delmenhorst, Germany;
| | - Paramjit S. Chadha
- Stroke Diagnostic and Monitoring Division, AtheroPoint™, Roseville, CA 95661, USA; (I.M.S.); (P.S.C.)
| | - Amer M. Johri
- Department of Medicine, Division of Cardiology, Queen’s University, Kingston, ON K7L 3N6, Canada;
| | - Narendra N. Khanna
- Department of Cardiology, Indraprastha APOLLO Hospitals, New Delhi 110076, India;
| | - Sophie Mavrogeni
- Cardiology Clinic, Onassis Cardiac Surgery Center, 17674 Athens, Greece;
| | - John R. Laird
- Heart and Vascular Institute, Adventist Health St. Helena, St Helena, CA 94574, USA;
| | - Gyan Pareek
- Minimally Invasive Urology Institute, Brown University, Providence, RI 02912, USA; (G.P.); (D.W.S.)
| | - Martin Miner
- Men’s Health Center, Miriam Hospital, Providence, RI 02906, USA;
| | - David W. Sobel
- Minimally Invasive Urology Institute, Brown University, Providence, RI 02912, USA; (G.P.); (D.W.S.)
| | - Antonella Balestrieri
- Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 09124 Cagliari, Italy; (G.L.C.); (A.P.); (P.S.C.D.); (L.S.); (A.B.)
| | - Petros P. Sfikakis
- Rheumatology Unit, National Kapodistrian University of Athens, 15772 Athens, Greece;
| | - George Tsoulfas
- Department of Surgery, Aristoteleion University of Thessaloniki, 54124 Thessaloniki, Greece;
| | - Athanasios D. Protogerou
- Cardiovascular Prevention and Research Unit, Department of Pathophysiology, National & Kapodistrian University of Athens, 15772 Athens, Greece;
| | - Durga Prasanna Misra
- Department of Immunology, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow 226014, India; (D.P.M.); (V.A.)
| | - Vikas Agarwal
- Department of Immunology, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow 226014, India; (D.P.M.); (V.A.)
| | - George D. Kitas
- Academic Affairs, Dudley Group NHS Foundation Trust, Dudley DY1 2HQ, UK;
- Arthritis Research UK Epidemiology Unit, Manchester University, Manchester M13 9PL, UK
| | - Jagjit S. Teji
- Ann and Robert H. Lurie Children’s Hospital of Chicago, Chicago, IL 60611, USA;
| | - Mustafa Al-Maini
- Allergy, Clinical Immunology and Rheumatology Institute, Toronto, ON L4Z 4C4, Canada;
| | | | - Andrew Nicolaides
- Vascular Screening and Diagnostic Centre, University of Nicosia Medical School, Nicosia 2408, Cyprus;
| | - Aditya Sharma
- Division of Cardiovascular Medicine, University of Virginia, Charlottesville, VA 22908, USA;
| | - Vijay Rathore
- AtheroPoint LLC, Roseville, CA 95661, USA; (S.K.D.); (V.R.)
| | - Mostafa Fatemi
- Department of Physiology and Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA;
| | - Azra Alizad
- Department of Radiology, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA;
| | | | - Ferenc Nagy
- Internal Medicine Department, University of Szeged, 6725 Szeged, Hungary;
| | - Zoltan Ruzsa
- Invasive Cardiology Division, University of Szeged, 6725 Szeged, Hungary;
| | - Mostafa M. Fouda
- Department of Electrical and Computer Engineering, Idaho State University, Pocatello, ID 83209, USA;
| | - Subbaram Naidu
- Electrical Engineering Department, University of Minnesota, Duluth, MN 55812, USA;
| | - Klaudija Viskovic
- University Hospital for Infectious Diseases, 10000 Zagreb, Croatia; (A.M.); (K.V.)
| | - Manudeep K. Kalra
- Department of Radiology, Massachusetts General Hospital, 55 Fruit Street, Boston, MA 02114, USA;
| |
Collapse
|
97
|
Xiao C, Jin J, Yi J, Han C, Zhou Y, Ai Y, Xie C, Jin X. RefineNet-based 2D and 3D automatic segmentations for clinical target volume and organs at risks for patients with cervical cancer in postoperative radiotherapy. J Appl Clin Med Phys 2022; 23:e13631. [PMID: 35533205 PMCID: PMC9278674 DOI: 10.1002/acm2.13631] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2021] [Revised: 04/09/2021] [Accepted: 04/18/2022] [Indexed: 11/07/2022] Open
Abstract
PURPOSE An accurate and reliable target volume delineation is critical for safe and successful radiotherapy. The purpose of this study was to develop new 2D and 3D automatic segmentation models based on RefineNet for the clinical target volume (CTV) and organs at risk (OARs) in postoperative cervical cancer, based on computed tomography (CT) images. METHODS A 2D RefineNet and a 3D RefineNetPlus3D were adapted and built to automatically segment CTVs and OARs on a total of 44 222 CT slices from 313 patients with stage I-III cervical cancer. Fully convolutional networks (FCNs), U-Net, context encoder network (CE-Net), UNet3D, and ResUNet3D were also trained and tested on randomly divided training and validation sets. The performance of these automatic segmentation models was evaluated by the Dice similarity coefficient (DSC), Jaccard similarity coefficient, and average symmetric surface distance against manual segmentations on the test data. RESULTS The DSCs for RefineNet, FCN, U-Net, CE-Net, UNet3D, ResUNet3D, and RefineNetPlus3D were 0.82, 0.80, 0.82, 0.81, 0.80, 0.81, and 0.82, with mean contouring times of 3.2, 3.4, 8.2, 3.9, 9.8, 11.4, and 6.4 s, respectively. RefineNetPlus3D demonstrated good performance in the automatic segmentation of the bladder, small intestine, rectum, and right and left femoral heads, with DSCs of 0.97, 0.95, 0.91, 0.98, and 0.98, respectively, and a mean computation time of 6.6 s. CONCLUSIONS The newly adapted RefineNet and the developed RefineNetPlus3D are promising automatic segmentation models, providing accurate and clinically acceptable CTVs and OARs for cervical cancer patients in postoperative radiotherapy.
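The results above report a separate DSC for each structure (bladder, small intestine, rectum, femoral heads). A hedged sketch of how per-structure Dice can be computed from integer label maps (the label codes and toy arrays are our own, not the study's data):

```python
import numpy as np

def per_structure_dice(pred, target, labels):
    """Dice score per labelled structure in integer label maps
    (e.g. 1 = bladder, 2 = rectum, ... — codes are illustrative)."""
    scores = {}
    for lab in labels:
        p = (pred == lab)
        t = (target == lab)
        denom = p.sum() + t.sum()
        scores[lab] = 1.0 if denom == 0 else 2.0 * (p & t).sum() / denom
    return scores

# Tiny toy label maps: automatic vs manual contours
pred = np.array([[1, 1, 0],
                 [2, 2, 0]])
target = np.array([[1, 0, 0],
                   [2, 2, 2]])
print(per_structure_dice(pred, target, [1, 2]))  # {1: 0.666..., 2: 0.8}
```

Reporting the metric per label, rather than over the whole map, keeps a large, easily segmented organ from masking poor performance on a small one.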
Collapse
Affiliation(s)
- Chengjian Xiao
- Department of Radiotherapy Center, Wenzhou Medical University First Affiliated Hospital, Wenzhou, People's Republic of China
| | - Juebin Jin
- Department of Medical Engineering, Wenzhou Medical University First Affiliated Hospital, Wenzhou, People's Republic of China
| | - Jinling Yi
- Department of Radiotherapy Center, Wenzhou Medical University First Affiliated Hospital, Wenzhou, People's Republic of China
| | - Ce Han
- Department of Radiotherapy Center, Wenzhou Medical University First Affiliated Hospital, Wenzhou, People's Republic of China
| | - Yongqiang Zhou
- Department of Radiotherapy Center, Wenzhou Medical University First Affiliated Hospital, Wenzhou, People's Republic of China
| | - Yao Ai
- Department of Radiotherapy Center, Wenzhou Medical University First Affiliated Hospital, Wenzhou, People's Republic of China
| | - Congying Xie
- Department of Radiotherapy Center, Wenzhou Medical University First Affiliated Hospital, Wenzhou, People's Republic of China.,Department of Radiation and Medical Oncology, Wenzhou Medical University Second Affiliated Hospital, Wenzhou, People's Republic of China
| | - Xiance Jin
- Department of Radiotherapy Center, Wenzhou Medical University First Affiliated Hospital, Wenzhou, People's Republic of China.,School of Basic Medical Science, Wenzhou Medical University, Wenzhou, People's Republic of China
| |
Collapse
|
98
|
Gao W, Li X, Wang Y, Cai Y. Medical Image Segmentation Algorithm for Three-Dimensional Multimodal Using Deep Reinforcement Learning and Big Data Analytics. Front Public Health 2022; 10:879639. [PMID: 35462800 PMCID: PMC9024167 DOI: 10.3389/fpubh.2022.879639] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2022] [Accepted: 03/09/2022] [Indexed: 11/13/2022] Open
Abstract
To avoid the problems of relative overlap and low signal-to-noise ratio (SNR) in segmented three-dimensional (3D) multimodal medical images, which limit the usefulness of medical image diagnosis, a 3D multimodal medical image segmentation algorithm using deep reinforcement learning and big data analytics is proposed. A Bayesian maximum a posteriori estimation method and an improved wavelet threshold function are used to design a wavelet shrinkage algorithm that removes noise from the high-frequency signal components in the wavelet domain. The low-frequency signal component is processed by bilateral filtering, and the inverse wavelet transform yields the denoised 3D multimodal medical image. An end-to-end DRD U-Net model based on deep reinforcement learning is constructed. The feature extraction capacity for segmenting the denoised images is increased by replacing the convolution layers of the traditional reinforcement learning model with residual modules and introducing a multiscale context feature extraction module. The 3D multimodal medical images are segmented using the reward and punishment mechanism of the deep reinforcement learning algorithm. To verify the effectiveness of the segmentation algorithm, the LIDC-IDRI, SCR, and DeepLesion data sets were selected as the experimental data for this article. The results demonstrate that the algorithm segments effectively: when the number of iterations is increased to 250, the structural similarity reaches 98%, the SNR remains between 55 and 60 dB, the training loss is modest, relative overlap and accuracy both exceed 95%, and the overall segmentation performance is superior. The study shows how deep reinforcement learning and big data analytics can be combined for effective 3D multimodal medical image segmentation.
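The denoising step above shrinks high-frequency wavelet coefficients with an improved threshold function whose exact form the abstract does not give. As a baseline illustration only, the classical soft-threshold rule that such improved functions refine (numpy sketch; the function name and sample coefficients are our own):

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Classical soft wavelet shrinkage: shrink detail coefficients
    toward zero by threshold t, zeroing anything smaller than t."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

# Illustrative detail coefficients: small entries (likely noise) vanish,
# large entries (likely signal edges) survive, reduced by t
c = np.array([-3.0, -0.5, 0.2, 2.0])
print(soft_threshold(c, 1.0))  # [-2.  0.  0.  1.]
```

Improved threshold functions typically keep this zeroing behavior for small coefficients but reduce the constant bias that soft thresholding introduces on large ones.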
Collapse
Affiliation(s)
- Weiwei Gao
- College of Information and Technology, Wenzhou Business College, Wenzhou, China
| | - Xiaofeng Li
- Department of Information Engineering, Heilongjiang International University, Harbin, China
- *Correspondence: Xiaofeng Li
| | - Yanwei Wang
- School of Mechanical Engineering, Harbin Institute of Petroleum, Harbin, China
| | - Yingjie Cai
- The First Psychiatric Hospital of Harbin, Harbin, China
| |
Collapse
|
99
|
Sharobeem S, Le Breton H, Lalys F, Lederlin M, Lagorce C, Bedossa M, Boulmier D, Leurent G, Haigron P, Auffret V. Validation of a Whole Heart Segmentation from Computed Tomography Imaging Using a Deep-Learning Approach. J Cardiovasc Transl Res 2022; 15:427-437. [PMID: 34448116 DOI: 10.1007/s12265-021-10166-0] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/19/2021] [Accepted: 08/09/2021] [Indexed: 11/28/2022]
Abstract
The aim of this study was to develop an automated deep-learning-based whole heart segmentation (WHS) of ECG-gated computed tomography (CT) data. After 21 exclusions, CT scans acquired before transcatheter aortic valve implantation in 71 patients were reviewed and randomly split into a training set (n = 55 patients), a validation set (n = 8 patients), and a test set (n = 8 patients). A fully automatic deep-learning method combining two convolutional neural networks performed segmentation of 10 cardiovascular structures, which was compared with the manually segmented reference using the Dice index. Correlations and agreement between myocardial volumes and mass were assessed. The algorithm demonstrated high accuracy (Dice score = 0.920; interquartile range: 0.906-0.925) and a low computing time (13.4 s, range 11.9-14.9). Correlations and agreement of volumes and mass were satisfactory for most structures, and six of ten structures were well segmented. The deep-learning-based method allowed automated WHS from ECG-gated CT data with high accuracy. Challenges remain in improving the segmentation of right-sided structures and achieving daily clinical application.
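Agreement between automatic and manual volume or mass measurements, as assessed above, is commonly summarised by the bias and 95% limits of agreement of a Bland-Altman analysis; a minimal sketch (the volume values are hypothetical, not the study's data):

```python
import numpy as np

def bland_altman_limits(auto, manual):
    """Bias and 95% limits of agreement between paired measurements."""
    diff = np.asarray(auto, float) - np.asarray(manual, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)          # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical myocardial volumes (mL): automatic vs manual
bias, lo, hi = bland_altman_limits([110, 132, 121], [109, 132, 122])
print(bias, lo, hi)  # 0.0 -1.96 1.96
```

Narrow limits centred near zero indicate the automatic measurement can substitute for the manual one; a non-zero bias reveals systematic over- or under-segmentation.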
Collapse
Affiliation(s)
- Sam Sharobeem
- LTSI - UMR 1099, Inserm, CHU Rennes, Univ Rennes, 35000, Rennes, France
- Service de Cardiologie, CHU Rennes, 35000, Rennes, France
| | - Hervé Le Breton
- LTSI - UMR 1099, Inserm, CHU Rennes, Univ Rennes, 35000, Rennes, France
- Service de Cardiologie, CHU Rennes, 35000, Rennes, France
| | | | - Mathieu Lederlin
- LTSI - UMR 1099, Inserm, CHU Rennes, Univ Rennes, 35000, Rennes, France
- Service de Radiologie, CHU Rennes, 35000, Rennes, France
| | | | - Marc Bedossa
- Service de Cardiologie, CHU Rennes, 35000, Rennes, France
| | - Dominique Boulmier
- LTSI - UMR 1099, Inserm, CHU Rennes, Univ Rennes, 35000, Rennes, France
- Service de Cardiologie, CHU Rennes, 35000, Rennes, France
| | | | - Pascal Haigron
- LTSI - UMR 1099, Inserm, CHU Rennes, Univ Rennes, 35000, Rennes, France
| | - Vincent Auffret
- LTSI - UMR 1099, Inserm, CHU Rennes, Univ Rennes, 35000, Rennes, France.
- Service de Cardiologie, CHU Rennes, 35000, Rennes, France.
- Service de Cardiologie, CHU Pontchaillou, 2 rue Henri Le Guilloux, 35000, Rennes, France.
| |
Collapse
|
100
|
Brisk R, Bond RR, Finlay D, McLaughlin JAD, Piadlo AJ, McEneaney DJ. WaSP-ECG: A Wave Segmentation Pretraining Toolkit for Electrocardiogram Analysis. Front Physiol 2022; 13:760000. [PMID: 35399264 PMCID: PMC8993503 DOI: 10.3389/fphys.2022.760000] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2021] [Accepted: 01/03/2022] [Indexed: 11/23/2022] Open
Abstract
Introduction Representation learning allows artificial intelligence (AI) models to learn useful features from large, unlabelled datasets. This can reduce the need for labelled data across a range of downstream tasks. It was hypothesised that wave segmentation would be a useful form of electrocardiogram (ECG) representation learning. In addition to reducing labelled data requirements, segmentation masks may provide a mechanism for explainable AI. This study details the development and evaluation of a Wave Segmentation Pretraining (WaSP) application. Materials and Methods Pretraining: a non-AI-based ECG signal and image simulator was developed to generate ECGs and wave segmentation masks, and U-Net models were trained to segment waves from the synthetic ECGs. Dataset: the raw sample files from the PTB-XL dataset were downloaded, and each ECG was also plotted as an image. Fine-tuning and evaluation: a hold-out approach was used with a 60:20:20 training/validation/test set split. The encoder portions of the U-Net models were fine-tuned to classify PTB-XL ECGs for two tasks: sinus rhythm (SR) vs atrial fibrillation (AF), and myocardial infarction (MI) vs normal ECGs. The fine-tuning was repeated without pretraining and the results were compared. Explainable AI: an example pipeline combining AI-derived segmentation masks and a rule-based AF detector was developed and evaluated. Results WaSP consistently improved model performance on downstream tasks for both ECG signals and images. The difference between non-pretrained models and models pretrained for wave segmentation was particularly marked for ECG image analysis. A selection of segmentation masks is shown. An AF detection algorithm comprising both AI and rule-based components performed less well than end-to-end AI models, but its outputs are proposed to be highly explainable. An example output is shown.
Conclusion WaSP using synthetic data and labels allows AI models to learn useful features for downstream ECG analysis with real-world data. Segmentation masks provide an intermediate output that may facilitate confidence calibration in the context of end-to-end AI. It is possible to combine AI-derived segmentation masks and rule-based diagnostic classifiers for explainable ECG analysis.
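The fine-tuning above uses a 60:20:20 hold-out split of the PTB-XL records. A minimal sketch of such a split (the function name, seed, and record list are illustrative assumptions, not the study's code):

```python
import random

def holdout_split(items, fracs=(0.6, 0.2, 0.2), seed=0):
    """Shuffle records once, then partition into train/validation/test
    in the given fractions. A fixed seed keeps the split reproducible."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(fracs[0] * n)
    n_val = int(fracs[1] * n)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

# Toy stand-in for PTB-XL record identifiers
train, val, test = holdout_split(range(100))
print(len(train), len(val), len(test))  # 60 20 20
```

Splitting at the record (patient) level, as sketched here, avoids leaking beats from the same recording across partitions.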
Collapse
Affiliation(s)
- Rob Brisk
- Faculty of Computing, Engineering and the Built Environment, Ulster University, Belfast, United Kingdom
- Cardiology Department, Craigavon Area Hospital, Craigavon, United Kingdom
- *Correspondence: Rob Brisk,
| | - Raymond R. Bond
- Faculty of Computing, Engineering and the Built Environment, Ulster University, Belfast, United Kingdom
| | - Dewar Finlay
- Faculty of Computing, Engineering and the Built Environment, Ulster University, Belfast, United Kingdom
| | - James A. D. McLaughlin
- Faculty of Computing, Engineering and the Built Environment, Ulster University, Belfast, United Kingdom
| | - Alicja J. Piadlo
- Faculty of Computing, Engineering and the Built Environment, Ulster University, Belfast, United Kingdom
- Cardiology Department, Craigavon Area Hospital, Craigavon, United Kingdom
| | - David J. McEneaney
- Faculty of Computing, Engineering and the Built Environment, Ulster University, Belfast, United Kingdom
- Cardiology Department, Craigavon Area Hospital, Craigavon, United Kingdom
| |
Collapse
|