101. Wu H, Liu J, Xiao F, Wen Z, Cheng L, Qin J. Semi-supervised Segmentation of Echocardiography Videos via Noise-resilient Spatiotemporal Semantic Calibration and Fusion. Med Image Anal 2022; 78:102397. [DOI: 10.1016/j.media.2022.102397]
102. Scebba G, Zhang J, Catanzaro S, Mihai C, Distler O, Berli M, Karlen W. Detect-and-segment: A deep learning approach to automate wound image segmentation. Inform Med Unlocked 2022. [DOI: 10.1016/j.imu.2022.100884]
103. Veras MB, Sarker B, Aridhi S, Gomes JP, Macêdo JA, Nguifo EM, Devignes MD, Smaïl-Tabbone M. On the design of a similarity function for sparse binary data with application on protein function annotation. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2021.107863]
104. Nagaya M, Ukita N. Embryo Grading With Unreliable Labels Due to Chromosome Abnormalities by Regularized PU Learning With Ranking. IEEE Trans Med Imaging 2022; 41:320-331. [PMID: 34748484] [DOI: 10.1109/tmi.2021.3126169]
Abstract
We propose a method for grading human embryos from images. Grading has previously been cast as positive-negative classification (live birth vs. non-live birth). However, negative (non-live birth) labels collected in clinical practice are unreliable, because the visual features of negative images are indistinguishable from those of positive (live birth) images when the non-live birth embryos carry chromosome abnormalities. To alleviate the adverse effect of these unreliable labels, our method employs Positive-Unlabeled (PU) learning: live birth samples are labeled positive, while non-live birth samples are treated as unlabeled, since the unlabeled set contains both positive and negative samples. We improve this PU learning on a deep CNN with a learning-to-rank scheme; while the original learning-to-rank scheme was designed for positive-negative learning, we extend it to PU learning. Furthermore, overfitting in this PU learning is alleviated by regularization with mutual information. Experimental results with 643 time-lapse image sequences demonstrate the effectiveness of our framework in terms of recognition accuracy and interpretability. In quantitative comparison, the full version of our proposed method outperforms positive-negative classification in recall and F-measure by a wide margin (0.22 vs. 0.69 in recall and 0.27 vs. 0.42 in F-measure). In qualitative evaluation, the visual attentions estimated by our method are interpretable in comparison with morphological assessments used in clinical practice.
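For orientation, the core idea of PU learning is to estimate the classification risk from positive and unlabeled samples only. A minimal sketch of the non-negative PU risk (the widely used nnPU estimator; the paper's full objective additionally includes a learning-to-rank term and mutual-information regularization, which are not reproduced here):

```python
import torch

def nnpu_risk(scores_pos, scores_unl, prior, loss=lambda z: torch.sigmoid(-z)):
    """Non-negative PU risk from positive and unlabeled model outputs.

    prior is the assumed class prior P(y = +1); the sigmoid loss is one
    common surrogate, not necessarily the paper's choice.
    """
    risk_pos = prior * loss(scores_pos).mean()  # pi * positive risk
    # Estimated negative risk on unlabeled data, corrected for the positives it contains.
    risk_neg = loss(-scores_unl).mean() - prior * loss(-scores_pos).mean()
    return risk_pos + torch.clamp(risk_neg, min=0.0)  # clamp keeps the risk non-negative
```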
105. TATL: Task Agnostic Transfer Learning for Skin Attributes Detection. Med Image Anal 2022; 78:102359. [DOI: 10.1016/j.media.2022.102359]
106. Chiu K, Hoskin P, Gupta A, Butt R, Terparia S, Codd L, Tsang Y, Bhudia J, Killen H, Kane C, Ghoshray S, Lemon C, Megias D. The quantitative impact of joint peer review with a specialist radiologist in head and neck cancer radiotherapy planning. Br J Radiol 2022; 95:20211219. [PMID: 34918547] [PMCID: PMC8822559] [DOI: 10.1259/bjr.20211219]
Abstract
OBJECTIVES: Radiologist input in peer review of head and neck radiotherapy has been introduced as a routine departmental approach. The aim was to evaluate this practice and to quantitatively analyse the changes made.
METHODS: Patients treated with radical-dose radiotherapy between August and November 2020 were reviewed. The incidence of major and minor changes, as defined by The Royal College of Radiologists guidance, was prospectively recorded. The amended radiotherapy volumes were compared with the original volumes using the Jaccard Index (JI) to assess conformity, the Geographical Miss Index (GMI) to assess undercontouring, and the Hausdorff Distance (HD) between the volumes.
RESULTS: In total, 73 of 87 (84%) patients were discussed. Changes were recommended in 38 (52%) patients: 30 had at least one major change and eight had minor changes only. There were 99 amended volumes. The overall median JI, GMI and HD were 0.91 (interquartile range [IQR] = 0.80-0.97), 0.06 (IQR = 0.02-0.18) and 0.42 cm (IQR = 0.20-1.17 cm), respectively. The nodal gross tumour volume (GTVn) and the therapeutic high-dose nodal clinical target volume (CTVn) showed the largest changes: the median JI, GMI and HD for GTVn were 0.89 (IQR = 0.44-0.95), 0.11 (IQR = 0.05-0.51) and 3.71 cm (IQR = 0.31-6.93 cm); for high-dose CTVn they were 0.78 (IQR = 0.59-0.90), 0.20 (IQR = 0.07-0.31) and 3.28 cm (IQR = 1.22-6.18 cm). There was no observed difference in the quantitative indices between the 85 'major' and 14 'minor' volumes (p = 0.5).
CONCLUSIONS: Routine radiologist input in head and neck radiotherapy peer review is feasible and can help avoid gross contouring errors.
ADVANCES IN KNOWLEDGE: The major and minor classifications may benefit from differentiation with quantitative indices, but this requires correlation with clinical outcomes.
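The conformity metrics used here are standard and straightforward to reproduce. A minimal sketch of the Jaccard index and the symmetric Hausdorff distance between two binary contour masks (the exact GMI definition follows the study's methodology and is not reproduced here):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def jaccard_index(a, b):
    """Jaccard index (intersection over union) of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

def hausdorff_distance(a, b, spacing=1.0):
    """Symmetric Hausdorff distance between the voxel point sets of two masks.

    spacing converts voxel indices to physical units (e.g., cm); clinical tools
    often measure surface-to-surface distances instead, so values may differ.
    """
    pa = np.argwhere(a) * spacing
    pb = np.argwhere(b) * spacing
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])
```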
Affiliation(s)
- Kevin Chiu, Department of Head & Neck Oncology, Mount Vernon Cancer Centre, Northwood, UK
- Peter Hoskin, Department of Clinical Oncology, Mount Vernon Cancer Centre, Northwood, UK
- Amit Gupta, Department of Head & Neck Oncology, Mount Vernon Cancer Centre, Northwood, UK
- Roeum Butt, Department of Clinical Oncology, Mount Vernon Cancer Centre, Northwood, UK
- Samsara Terparia, Department of Clinical Oncology, Mount Vernon Cancer Centre, Northwood, UK
- Louise Codd, Department of Clinical Oncology, Mount Vernon Cancer Centre, Northwood, UK
- Yatman Tsang, Department of Clinical Oncology, Mount Vernon Cancer Centre, Northwood, UK
- Jyotsna Bhudia, Department of Head & Neck Oncology, Mount Vernon Cancer Centre, Northwood, UK
- Helen Killen, Department of Head & Neck Oncology, Mount Vernon Cancer Centre, Northwood, UK
- Clare Kane, Department of Head & Neck Oncology, Mount Vernon Cancer Centre, Northwood, UK
- Catherine Lemon, Department of Head & Neck Oncology, Mount Vernon Cancer Centre, Northwood, UK
- Daniel Megias, Department of Clinical Oncology, Mount Vernon Cancer Centre, Northwood, UK
107. Suinesiaputra A, Mauger CA, Ambale-Venkatesh B, Bluemke DA, Dam Gade J, Gilbert K, Janse MHA, Hald LS, Werkhoven C, Wu CO, Lima JAC, Young AA. Deep Learning Analysis of Cardiac MRI in Legacy Datasets: Multi-Ethnic Study of Atherosclerosis. Front Cardiovasc Med 2022; 8:807728. [PMID: 35127868] [PMCID: PMC8813768] [DOI: 10.3389/fcvm.2021.807728]
Abstract
The Multi-Ethnic Study of Atherosclerosis (MESA), begun in 2000, was the first large cohort study to incorporate cardiovascular magnetic resonance (CMR) to study the mechanisms of cardiovascular disease in over 5,000 initially asymptomatic participants, and there is now a wealth of follow-up data over 20 years. However, the imaging technology used to generate the CMR images is no longer in routine use, and methods trained on modern data fail when applied to such legacy datasets. This study aimed to develop a fully automated CMR analysis pipeline that leverages the ability of machine learning algorithms to enable extraction of additional information from such a large-scale legacy dataset, expanding on the original manual analyses. We combined the original study analyses with new annotations to develop a set of automated methods for customizing 3D left ventricular (LV) shape models to each CMR exam and build a statistical shape atlas. We trained VGGNet convolutional neural networks using a transfer learning sequence between two-chamber, four-chamber, and short-axis MRI views to detect landmarks. A U-Net architecture was used to detect the endocardial and epicardial boundaries in short-axis images. The landmark detection network accurately predicted mitral valve and right ventricular insertion points with average error distance <2.5 mm. The agreement of the network with two observers was excellent (intraclass correlation coefficient >0.9). The segmentation network produced average Dice score of 0.9 for both myocardium and LV cavity. Differences between the manual and automated analyses were small, i.e., <1.0 ± 2.6 mL/m2 for indexed LV volume, 3.0 ± 6.4 g/m2 for indexed LV mass, and 0.6 ± 3.3% for ejection fraction. In an independent atlas validation dataset, the LV atlas built from the fully automated pipeline showed similar statistical relationships to an atlas built from the manual analysis. Hence, the proposed pipeline is not only a promising framework to automatically assess additional measures of ventricular function, but also to study relationships between cardiac morphologies and future cardiac events, in a large-scale population study.
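The reported Dice score of 0.9 is the usual overlap measure for segmentation; a minimal sketch of how such agreement between an automated mask and a manual mask is computed:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks (1.0 = perfect overlap)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0
```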
Affiliation(s)
- Avan Suinesiaputra, Department of Anatomy and Medical Imaging, University of Auckland, Auckland, New Zealand; Department of Biomedical Engineering, School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Charlène A. Mauger, Department of Anatomy and Medical Imaging, University of Auckland, Auckland, New Zealand
- David A. Bluemke, Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, United States
- Josefine Dam Gade, Department of Biomedical Engineering and Informatics, School of Medicine and Health, Aalborg University, Aalborg, Denmark
- Kathleen Gilbert, Auckland Bioengineering Institute, University of Auckland, Auckland, New Zealand
- Markus H. A. Janse, Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
- Line Sofie Hald, Department of Biomedical Engineering and Informatics, School of Medicine and Health, Aalborg University, Aalborg, Denmark
- Conrad Werkhoven, Auckland Bioengineering Institute, University of Auckland, Auckland, New Zealand
- Colin O. Wu, Division of Intramural Research, National Heart, Lung and Blood Institute, National Institutes of Health, Baltimore, MD, United States
- Alistair A. Young (correspondence), Faculty of Life Sciences & Medicine, School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
108. Kim MS, Cha JH, Lee S, Han L, Park W, Ahn JS, Park SC. Deep-Learning-Based Cerebral Artery Semantic Segmentation in Neurosurgical Operating Microscope Vision Using Indocyanine Green Fluorescence Videoangiography. Front Neurorobot 2022; 15:735177. [PMID: 35095454] [PMCID: PMC8790180] [DOI: 10.3389/fnbot.2021.735177]
Abstract
Few studies have addressed anatomical structure segmentation with deep learning, and those that exist used small numbers of training and ground-truth images and reported low or inconsistent accuracies. Surgical video anatomy analysis faces various obstacles, including a fast-changing view, large deformations, occlusions, low illumination, and inadequate focus. In addition, it is difficult and costly to obtain a large and accurate dataset of anatomical structures, including arteries, from operative video. In this study, we investigated cerebral artery segmentation using an automatic ground-truth generation method. Indocyanine green (ICG) fluorescence intraoperative cerebral videoangiography was used to create a ground-truth dataset mainly for cerebral arteries and partly for cerebral blood vessels, including veins. Four different neural network models were trained on the dataset and compared. Before augmentation, 35,975 training images and 11,266 validation images were used; after augmentation, 260,499 training and 90,129 validation images were used. A Dice score of 79% for cerebral artery segmentation was achieved using the DeepLabv3+ model trained on the automatically generated dataset. Strict validation was conducted in different patient groups. Arteries were also discerned from veins using the ICG videoangiography phase. We achieved fair accuracy, which demonstrated the appropriateness of the methodology. This study proved the feasibility of cerebral artery segmentation in the operating-microscope field of view using deep learning, and the effectiveness of automatic blood vessel ground-truth generation using ICG fluorescence videoangiography. With this method, computer vision can discern blood vessels, and distinguish arteries from veins, in a neurosurgical microscope field of view. This capability is essential for vessel-anatomy-based navigation in the neurosurgical field. Surgical assistance, safety systems, and autonomous surgical neurorobotics that detect or manipulate cerebral vessels would likewise require computer vision that identifies blood vessels and arteries.
Affiliation(s)
- Min-seok Kim, Clinical Research Team, Deepnoid, Seoul, South Korea
- Joon Hyuk Cha, Department of Internal Medicine, Inha University Hospital, Incheon, South Korea
- Seonhwa Lee, Department of Bio-convergence Engineering, Korea University, Seoul, South Korea
- Lihong Han, Clinical Research Team, Deepnoid, Seoul, South Korea; Department of Computer Science and Engineering, Soongsil University, Seoul, South Korea
- Wonhyoung Park, Department of Neurosurgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea
- Jae Sung Ahn, Department of Neurosurgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea
- Seong-Cheol Park (correspondence), Clinical Research Team, Deepnoid, Seoul, South Korea; Department of Neurosurgery, Gangneung Asan Hospital, University of Ulsan College of Medicine, Gangneung, South Korea; Department of Neurosurgery, Seoul Metropolitan Government—Seoul National University Boramae Medical Center, Seoul, South Korea; Department of Neurosurgery, Hallym Hospital, Incheon, South Korea
109. Helaly HA, Badawy M, Haikal AY. Toward deep MRI segmentation for Alzheimer's disease detection. Neural Comput Appl 2022. [DOI: 10.1007/s00521-021-06430-8]
110. Vinurajkumar S, Anandhavelu S. An Enhanced Fuzzy Segmentation Framework for extracting white matter from T1-weighted MR images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103093]
111. Chen CI, Lu NH, Huang YH, Liu KY, Hsu SY, Matsushima A, Wang YM, Chen TB. Segmentation of liver tumors with abdominal computed tomography using fully convolutional networks. J Xray Sci Technol 2022; 30:953-966. [PMID: 35754254] [DOI: 10.3233/xst-221194]
Abstract
BACKGROUND: Segmenting liver organs or lesions depicted on computed tomography (CT) images can help tumor staging and treatment. However, most existing image segmentation approaches rely on manual or semi-automatic analysis, making the process costly and time-consuming.
OBJECTIVE: This research aims to develop and apply a deep learning network architecture to segment liver tumors automatically after fine-tuning its parameters.
METHODS AND MATERIALS: The imaging data were obtained from the International Symposium on Biomedical Imaging (ISBI) and comprise 3D abdominal CT scans of 131 patients diagnosed with liver tumors, yielding 7,190 2D CT images with labeled binary images. The labeled binary images served as the gold standard for evaluating the results segmented by the FCN (Fully Convolutional Network). The FCN backbones examined in this study were Xception, InceptionResNetv2, MobileNetv2, ResNet18, and ResNet50. Parameters including the optimizer (SGDM or ADAM), epoch size, and batch size were also investigated. CT images were randomly divided into training and testing sets at a 9:1 ratio. Several evaluation indices, namely Global Accuracy, Mean Accuracy, Mean IoU (Intersection over Union), Weighted IoU, and Mean BF Score, were applied to evaluate tumor segmentation in the testing images.
RESULTS: The Global Accuracy, Mean Accuracy, Mean IoU, Weighted IoU, and Mean BF Score were 0.999, 0.969, 0.954, 0.998, and 0.962 using ResNet50 in the FCN with the SGDM optimizer, batch size 12, and epoch 9, underscoring the importance of fine-tuning the FCN parameters. The top 20 FCN models achieved Mean IoU over 0.900; among them, InceptionResNetv2, MobileNetv2, ResNet18, ResNet50, and Xception occurred 9, 6, 3, 5, and 2 times, respectively, so InceptionResNetv2 performed better than the others.
CONCLUSIONS: This study developed and tested an automated liver tumor segmentation model based on the FCN. The results demonstrate that deep learning models including InceptionResNetv2, MobileNetv2, ResNet18, ResNet50, and Xception have high potential to segment liver tumors from CT images with accuracy exceeding 90%. However, it remains difficult for FCN models to accurately segment tiny and small tumors.
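Mean IoU, the headline metric above, averages per-class intersection-over-union across classes. A minimal sketch over integer label maps:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union across classes for integer label maps."""
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union:  # skip classes absent from both prediction and ground truth
            ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))
```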
Affiliation(s)
- Chih-I Chen, Division of Colon and Rectal Surgery, Department of Surgery, E-DA Hospital, Kaohsiung City, Taiwan; Division of General Medicine Surgery, Department of Surgery, E-DA Hospital, Kaohsiung City, Taiwan; School of Medicine, College of Medicine, I-Shou University, Kaohsiung City, Taiwan; Department of Information Engineering, I-Shou University, Kaohsiung City, Taiwan; The School of Chinese Medicine for Post Baccalaureate, I-Shou University, Kaohsiung City, Taiwan
- Nan-Han Lu, Department of Pharmacy, Tajen University, Pingtung City, Taiwan; Department of Radiology, E-DA Hospital, I-Shou University, Kaohsiung City, Taiwan; Department of Medical Imaging and Radiological Science, I-Shou University, Kaohsiung City, Taiwan
- Yung-Hui Huang, Department of Medical Imaging and Radiological Science, I-Shou University, Kaohsiung City, Taiwan
- Kuo-Ying Liu, Department of Radiology, E-DA Hospital, I-Shou University, Kaohsiung City, Taiwan
- Shih-Yen Hsu, Department of Information Engineering, I-Shou University, Kaohsiung City, Taiwan
- Akari Matsushima, Department of Radiological Technology, Faculty of Medical Technology, Teikyo University, Tokyo, Japan
- Yi-Ming Wang, Department of Information Engineering, I-Shou University, Kaohsiung City, Taiwan; Department of Critical Care Medicine, E-DA Hospital, I-Shou University, Kaohsiung City, Taiwan
- Tai-Been Chen, Department of Medical Imaging and Radiological Science, I-Shou University, Kaohsiung City, Taiwan; Institute of Statistics, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
112. Rahimpour M, Bertels J, Radwan A, Vandermeulen H, Sunaert S, Vandermeulen D, Maes F, Goffin K, Koole M. Cross-modal distillation to improve MRI-based brain tumor segmentation with missing MRI sequences. IEEE Trans Biomed Eng 2021; 69:2153-2164. [PMID: 34941496] [DOI: 10.1109/tbme.2021.3137561]
Abstract
Convolutional neural networks (CNNs) for brain tumor segmentation are generally developed using complete sets of magnetic resonance imaging (MRI) sequences for both training and inference. As such, these algorithms are not trained for realistic clinical scenarios in which some of the MRI sequences used for training are missing during inference. To increase clinical applicability, we proposed a cross-modal distillation approach that leverages the availability of multi-sequence MRI data during training to generate an enriched CNN model which uses only single-sequence MRI data for inference yet outperforms a single-sequence CNN model. We assessed the performance of the proposed method for whole tumor and tumor core segmentation with multi-sequence MRI data available for training but only T1-weighted (T1w) sequence data available for inference, using both the BraTS 2018 and in-house datasets. Results showed that cross-modal distillation significantly improved the Dice score for both whole tumor and tumor core segmentation when only T1w sequence data were available for inference. In the evaluation on the in-house dataset, cross-modal distillation achieved average Dice scores of 79.04% and 69.39% for whole tumor and tumor core segmentation, respectively, while a single-sequence U-Net model using T1w sequence data for both training and inference achieved average Dice scores of 73.60% and 62.62%. These findings confirm cross-modal distillation as an effective method to increase the potential of single-sequence CNN models such that segmentation performance is less compromised by missing MRI sequences or by having only one MRI sequence available for segmentation.
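Cross-modal distillation of this kind typically trains the single-sequence student against both the ground truth and the multi-sequence teacher's soft predictions. A minimal sketch under that assumption (alpha and T are illustrative hyperparameters, not the paper's values):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, target, alpha=0.5, T=2.0):
    """Supervised loss plus a KL term pulling the T1w-only student toward the
    multi-sequence teacher's temperature-softened predictions."""
    hard = F.cross_entropy(student_logits, target)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits.detach() / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # standard temperature-squared scaling
    return (1.0 - alpha) * hard + alpha * soft
```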
113. Mehrvar S, Himmel LE, Babburi P, Goldberg AL, Guffroy M, Janardhan K, Krempley AL, Bawa B. Deep Learning Approaches and Applications in Toxicologic Histopathology: Current Status and Future Perspectives. J Pathol Inform 2021; 12:42. [PMID: 34881097] [PMCID: PMC8609289] [DOI: 10.4103/jpi.jpi_36_21]
Abstract
Whole slide imaging enables the use of a wide array of digital image analysis tools that are revolutionizing pathology. Recent advances in digital pathology and deep convolutional neural networks have created an enormous opportunity to improve workflow efficiency, provide more quantitative, objective, and consistent assessments of pathology datasets, and develop decision support systems. Such innovations are already making their way into clinical practice. However, the adoption of machine learning, in particular deep learning (DL), has been slower in nonclinical toxicology studies. Histopathology data from toxicology studies are critical during the drug development process that is required by regulatory bodies to assess drug-related toxicity in laboratory animals and its impact on human safety in clinical trials. Due to the high volume of slides routinely evaluated, low-throughput or narrowly performing DL methods that may work well in small-scale diagnostic studies or for the identification of a single abnormality are tedious and impractical for toxicologic pathology. Furthermore, regulatory requirements around good laboratory practice are a major hurdle for the adoption of DL in toxicologic pathology. This paper reviews the major DL concepts, emerging applications, and examples of DL in toxicologic pathology image analysis. We end with a discussion of specific challenges and directions for future research.
Affiliation(s)
- Shima Mehrvar, Preclinical Safety, AbbVie Inc., North Chicago, IL, USA
- Pradeep Babburi, Business Technology Solutions, AbbVie Inc., North Chicago, IL, USA
114. Gidde PS, Prasad SS, Singh AP, Bhatheja N, Prakash S, Singh P, Saboo A, Takhar R, Gupta S, Saurav S, M V R, Singh A, Sardana V, Mahajan H, Kalyanpur A, Mandal AS, Mahajan V, Agrawal A, Agrawal A, Venugopal VK, Singh S, Dash D. Validation of expert system enhanced deep learning algorithm for automated screening for COVID-Pneumonia on chest X-rays. Sci Rep 2021; 11:23210. [PMID: 34853342] [PMCID: PMC8636645] [DOI: 10.1038/s41598-021-02003-w]
Abstract
The SARS-CoV-2 pandemic exposed the limitations of artificial intelligence-based medical imaging systems. Early in the pandemic, the absence of sufficient training data prevented effective deep learning (DL) solutions for the diagnosis of COVID-19 from X-ray data. Here, addressing the lacunae in the existing literature and the paucity of initial training data, we describe CovBaseAI, an explainable tool that uses an ensemble of three DL models and an expert decision system (EDS) for COVID-Pneumonia diagnosis, trained entirely on pre-COVID-19 datasets. The performance and explainability of CovBaseAI were validated primarily on two independent datasets: first, 1401 chest X-rays (CxR) randomly selected from an Indian quarantine center, to assess effectiveness in excluding radiological COVID-Pneumonia requiring higher care; second, a curated dataset of 434 RT-PCR-positive cases and 471 non-COVID/normal historical scans, to assess performance in advanced medical settings. CovBaseAI had an accuracy of 87% with a negative predictive value of 98% on the quarantine-center data. However, sensitivity was 0.66-0.90 depending on whether RT-PCR or radiologist opinion was taken as ground truth. This work provides new insights on the use of EDS with DL methods and on the ability of algorithms to confidently predict COVID-Pneumonia, while reinforcing the established lesson that benchmarking against RT-PCR may not serve as reliable ground truth in radiological diagnosis. Such tools can pave the path for multi-modal high-throughput detection of COVID-Pneumonia in screening and referral.
Affiliation(s)
- Shyam Sunder Prasad, CSIR-Central Electronics Engineering Research Institute, Pilani, Rajasthan, 333031, India; Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, 201002, India
- Ajay Pratap Singh, CSIR-Institute of Genomics and Integrative Biology, Mathura Road, New Delhi, 110025, India; Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, 201002, India
- Nitin Bhatheja, CSIR-Institute of Genomics and Integrative Biology, Mathura Road, New Delhi, 110025, India
- Satyartha Prakash, CSIR-Institute of Genomics and Integrative Biology, Mathura Road, New Delhi, 110025, India
- Prateek Singh, CSIR-Institute of Genomics and Integrative Biology, Mathura Road, New Delhi, 110025, India; Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, 201002, India
- Aakash Saboo, Centre for Advanced Research in Imaging, Neurosciences Genomics (CARING), New Delhi, India
- Rohit Takhar, Centre for Advanced Research in Imaging, Neurosciences Genomics (CARING), New Delhi, India
- Salil Gupta, Centre for Advanced Research in Imaging, Neurosciences Genomics (CARING), New Delhi, India
- Sumeet Saurav, CSIR-Central Electronics Engineering Research Institute, Pilani, Rajasthan, 333031, India; Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, 201002, India
- Raghunandanan M V, CSIR-Institute of Genomics and Integrative Biology, Mathura Road, New Delhi, 110025, India; Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, 201002, India
- Viren Sardana, CSIR-Institute of Genomics and Integrative Biology, Mathura Road, New Delhi, 110025, India; Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, 201002, India
- Harsh Mahajan, Centre for Advanced Research in Imaging, Neurosciences Genomics (CARING), New Delhi, India
- Arjun Kalyanpur, Teleradiology Solutions, 7G, Opposite Graphite India, Whitefield, Bangalore, Karnataka, 560048, India
- Atanendu Shekhar Mandal, CSIR-Central Electronics Engineering Research Institute, Pilani, Rajasthan, 333031, India; Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, 201002, India
- Vidur Mahajan, Centre for Advanced Research in Imaging, Neurosciences Genomics (CARING), New Delhi, India
- Anurag Agrawal, CSIR-Institute of Genomics and Integrative Biology, Mathura Road, New Delhi, 110025, India; Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, 201002, India
- Anjali Agrawal, Teleradiology Solutions, 12B Sriram Road, Civil Lines, Delhi, 110054, India
- Sanjay Singh, CSIR-Central Electronics Engineering Research Institute, Pilani, Rajasthan, 333031, India; Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, 201002, India
- Debasis Dash, CSIR-Institute of Genomics and Integrative Biology, Mathura Road, New Delhi, 110025, India; Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, 201002, India
115. Kovács G, Fazekas A. A new baseline for retinal vessel segmentation: Numerical identification and correction of methodological inconsistencies affecting 100+ papers. Med Image Anal 2021; 75:102300. [PMID: 34814057] [DOI: 10.1016/j.media.2021.102300]
Abstract
In the last 15 years, the segmentation of vessels in retinal images has become an intensively researched problem in medical imaging, with hundreds of algorithms published. One of the de facto benchmarking data sets of vessel segmentation techniques is the DRIVE data set. Since DRIVE contains a predefined split of training and test images, the published performance results of the various segmentation techniques should provide a reliable ranking of the algorithms. Including more than 100 papers in the study, we performed a detailed numerical analysis of the coherence of the published performance scores. We found inconsistencies in the reported scores related to the use of the field of view (FoV), which has a significant impact on the performance scores. We attempted to eliminate the biases using numerical techniques to provide a more realistic picture of the state of the art. Based on the results, we have formulated several findings, most notably: despite the well-defined test set of DRIVE, most rankings in published papers are based on non-comparable figures; in contrast to the near-perfect accuracy scores reported in the literature, the highest accuracy score achieved to date is 0.9582 in the FoV region, which is 1% higher than that of human annotators. The methods we have developed for identifying and eliminating the evaluation biases can be easily applied to other domains where similar problems may arise.
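The FoV issue the authors quantify is mechanical: pixels outside the camera's field of view are trivially classified as background and inflate accuracy if included. A minimal sketch of FoV-restricted scoring:

```python
import numpy as np

def pixel_accuracy(pred, label, fov=None):
    """Pixel accuracy of a binary vessel map; if an FoV mask is given,
    only pixels inside the FoV are scored, in line with the paper's analysis."""
    pred, label = pred.astype(bool), label.astype(bool)
    if fov is not None:
        keep = fov.astype(bool)
        pred, label = pred[keep], label[keep]
    return float((pred == label).mean())
```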
Affiliation(s)
- György Kovács, Analytical Minds Ltd., Árpád street 5, Beregsurány 4933, Hungary
- Attila Fazekas, University of Debrecen, Faculty of Informatics, P.O. Box 400, Debrecen 4002, Hungary
116. Distribution-Aware Margin Calibration for Semantic Segmentation in Images. Int J Comput Vis 2021. [DOI: 10.1007/s11263-021-01533-0]
117. Wong J, Sigurdson S, Reformat M, Lou E. Centroid-based Distance Loss Function for Lamina Segmentation in 3D Ultrasound Spine Volumes. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:1723-1726. [PMID: 34891619] [DOI: 10.1109/embc46164.2021.9631034]
Abstract
Ultrasound imaging of the spine to assess the severity of scoliosis is a recent development in the field, offering 3D information without the complicated reconstruction procedure required with radiography. Determining the severity of scoliosis from ultrasound volumes requires labelling vertebral features called laminae. To increase accuracy and reduce the time spent on this task, this paper reports a novel custom centroid-based distance loss function for lamina segmentation in 3D ultrasound volumes using convolutional neural networks (CNNs). The custom loss function was compared with two standard loss functions by fitting a CNN with each. The results showed that the custom-loss network performed best at minimizing the distances between the centroids in the ground truth and the centroids in the predicted segmentation. On average, the custom network improved the total distance between predicted and true centroids by 33 voxels (22%) compared with the second-best performing network, which used the Dice loss. The novel custom loss function also allowed the network to detect, on average, two more laminae in the lumbar region of the spine that the other networks tended to miss.
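The abstract does not give the loss in closed form; one plausible way to make a centroid-distance penalty differentiable is to take the "soft" centroid of the predicted probability map. A sketch under that assumed formulation, not the authors' exact loss:

```python
import torch

def soft_centroid(prob):
    """Differentiable centroid (in voxel coordinates) of a probability map."""
    grids = torch.meshgrid(
        *[torch.arange(s, dtype=prob.dtype, device=prob.device) for s in prob.shape],
        indexing="ij",
    )
    mass = prob.sum().clamp_min(1e-8)
    return torch.stack([(g * prob).sum() / mass for g in grids])

def centroid_distance_loss(prob, target):
    """Euclidean distance between predicted and ground-truth centroids."""
    return torch.norm(soft_centroid(prob) - soft_centroid(target.to(prob.dtype)))
```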
118. Yang Z, Benhabiles H, Hammoudi K, Windal F, He R, Collard D. A generalized deep learning-based framework for assistance to the human malaria diagnosis from microscopic images. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-06604-4]
119. Kumazu Y, Kobayashi N, Kitamura N, Rayan E, Neculoiu P, Misumi T, Hojo Y, Nakamura T, Kumamoto T, Kurahashi Y, Ishida Y, Masuda M, Shinohara H. Automated segmentation by deep learning of loose connective tissue fibers to define safe dissection planes in robot-assisted gastrectomy. Sci Rep 2021; 11:21198. [PMID: 34707141] [PMCID: PMC8551298] [DOI: 10.1038/s41598-021-00557-3]
Abstract
The prediction of anatomical structures within the surgical field by artificial intelligence (AI) is expected to support surgeons’ experience and cognitive skills. We aimed to develop a deep-learning model to automatically segment loose connective tissue fibers (LCTFs) that define a safe dissection plane. The annotation was performed on video frames capturing a robot-assisted gastrectomy performed by trained surgeons. A deep-learning model based on U-net was developed to output segmentation results. Twenty randomly sampled frames were provided to evaluate model performance by comparing Recall and F1/Dice scores with a ground truth and with a two-item questionnaire on sensitivity and misrecognition that was completed by 20 surgeons. The model produced high Recall scores (mean 0.606, maximum 0.861). Mean F1/Dice scores reached 0.549 (range 0.335–0.691), showing acceptable spatial overlap of the objects. Surgeon evaluators gave a mean sensitivity score of 3.52 (with 88.0% assigning the highest score of 4; range 2.45–3.95). The mean misrecognition score was a low 0.14 (range 0–0.7), indicating very few acknowledged over-detection failures. Thus, AI can be trained to predict fine, difficult-to-discern anatomical structures at a level convincing to expert surgeons. This technology may help reduce adverse events by determining safe dissection planes.
Affiliation(s)
- Yuta Kumazu, Department of Surgery, Yokohama City University, Kanagawa, Japan; Anaut Inc., Tokyo, Japan
- Toshihiro Misumi, Department of Biostatistics, Yokohama City University School of Medicine, Kanagawa, Japan
- Yudai Hojo, Department of Gastroenterological Surgery, Hyogo College of Medicine, 1-1 Mukogawa-cho, Nishinomiya, Hyogo, 663-8501, Japan
- Tatsuro Nakamura, Department of Gastroenterological Surgery, Hyogo College of Medicine, 1-1 Mukogawa-cho, Nishinomiya, Hyogo, 663-8501, Japan
- Tsutomu Kumamoto, Department of Gastroenterological Surgery, Hyogo College of Medicine, 1-1 Mukogawa-cho, Nishinomiya, Hyogo, 663-8501, Japan
- Yasunori Kurahashi, Department of Gastroenterological Surgery, Hyogo College of Medicine, 1-1 Mukogawa-cho, Nishinomiya, Hyogo, 663-8501, Japan
- Yoshinori Ishida, Department of Gastroenterological Surgery, Hyogo College of Medicine, 1-1 Mukogawa-cho, Nishinomiya, Hyogo, 663-8501, Japan
- Munetaka Masuda, Department of Surgery, Yokohama City University, Kanagawa, Japan
- Hisashi Shinohara, Department of Gastroenterological Surgery, Hyogo College of Medicine, 1-1 Mukogawa-cho, Nishinomiya, Hyogo, 663-8501, Japan
120. Zhang Z, Rosa B, Nageotte F. Surgical Tool Segmentation Using Generative Adversarial Networks With Unpaired Training Data. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3092302]
121. Wagner MW, Namdar K, Biswas A, Monah S, Khalvati F, Ertl-Wagner BB. Radiomics, machine learning, and artificial intelligence-what the neuroradiologist needs to know. Neuroradiology 2021; 63:1957-1967. [PMID: 34537858] [PMCID: PMC8449698] [DOI: 10.1007/s00234-021-02813-9]
Abstract
PURPOSE: Artificial intelligence (AI) is playing an ever-increasing role in neuroradiology.
METHODS: When designing AI-based research in neuroradiology and appraising the literature, it is important to understand the fundamental principles of AI. Training, validation, and test datasets must be defined and kept separate as a priority. External validation and testing datasets are preferable, when feasible. The specific type of learning process (supervised vs. unsupervised) and the machine learning model also require definition. Deep learning (DL) is an AI-based approach modelled on the structure of neurons in the brain; convolutional neural networks (CNNs) are a commonly used example in neuroradiology.
RESULTS: Radiomics is a frequently used approach in which a multitude of imaging features is extracted from a region of interest and subsequently reduced and selected to convey diagnostic or prognostic information. Deep radiomics uses CNNs to extract features directly, obviating the need for predefined features.
CONCLUSION: Common limitations and pitfalls of AI-based research in neuroradiology are limited sample sizes (the "small-n-large-p problem"), selection bias, and overfitting and underfitting.
Affiliation(s)
- Matthias W Wagner, Division of Neuroradiology, The Hospital for Sick Children, Toronto, Canada; Department of Medical Imaging, University of Toronto, 555 University Ave, Toronto, ON, M5G 1X8, Canada
- Khashayar Namdar, Neurosciences and Mental Health Program, SickKids Research Institute, Toronto, Canada
- Asthik Biswas, Division of Neuroradiology, The Hospital for Sick Children, Toronto, Canada; Department of Medical Imaging, University of Toronto, 555 University Ave, Toronto, ON, M5G 1X8, Canada
- Suranna Monah, Division of Neuroradiology, The Hospital for Sick Children, Toronto, Canada
- Farzad Khalvati, Neurosciences and Mental Health Program, SickKids Research Institute, Toronto, Canada; Department of Medical Imaging, University of Toronto, 555 University Ave, Toronto, ON, M5G 1X8, Canada
- Birgit B Ertl-Wagner, Division of Neuroradiology, The Hospital for Sick Children, Toronto, Canada; Department of Medical Imaging, University of Toronto, 555 University Ave, Toronto, ON, M5G 1X8, Canada
122. Leonardi MC, Pepa M, Gugliandolo SG, Luraschi R, Vigorito S, Rojas DP, La Porta MR, Cante D, Petrucci E, Marino L, Borzì G, Ippolito E, Marrocco M, Huscher A, Chieregato M, Argenone A, Iadanza L, De Rose F, Lobefalo F, Cucciarelli F, Valenti M, De Santis MC, Cavallo A, Rossi F, Russo S, Prisco A, Guernieri M, Guarnaccia R, Malatesta T, Meaglia I, Liotta M, Tabarelli de Fatis P, Palumbo I, Marcantonini M, Colangione SP, Mezzenga E, Falivene S, Mormile M, Ravo V, Arrichiello C, Fozza A, Barbero MP, Ivaldi GB, Catalano G, Vidali C, Aristei C, Giannitto C, Miglietta E, Ciabattoni A, Meattini I, Orecchia R, Cattani F, Jereczek-Fossa BA. Geometric contour variation in clinical target volume of axillary lymph nodes in breast cancer radiotherapy: an AIRO multi-institutional study. Br J Radiol 2021; 94:20201177. [PMID: 33882239] [PMCID: PMC8248216] [DOI: 10.1259/bjr.20201177]
Abstract
OBJECTIVES: To determine interobserver variability in axillary nodal contouring in breast cancer (BC) radiotherapy (RT) by comparing the clinical target volumes of participating single centres (SC-CTV) with a gold-standard CTV (GS-CTV).
METHODS: The GS-CTV for three patients (P1, P2, P3) of increasing complexity was created in DICOM format from the median contour of axillary CTVs drawn by BC experts, validated using simultaneous truth and performance-level estimation, and peer-reviewed. GS-CTVs were compared with the corresponding SC-CTVs drawn by radiation oncologists, using validated metrics and a total score (TS) integrating all of them.
RESULTS: Eighteen RT centres participated in the study. Comparative analyses revealed that, on average, the SC-CTVs were smaller than the GS-CTV for P1 and P2 (by -29.25% and -27.83%, respectively) and larger for P3 (by +12.53%). The mean Jaccard index was greater for P1 and P2 than for P3, but the overlap extent was around 0.50 or less. Among nodal levels, L4 showed the highest concordance with the GS. In the intra-patient comparison, L2 and L3 achieved lower TS than L4. The discrepancy of nodal levels with the GS was not statistically significant for P1 and negligible for P2, while P3 had the worst agreement. The Dice similarity coefficient did not exceed the minimum agreement threshold of 0.70 in any of the measurements.
CONCLUSIONS: Substantial differences were observed between SC- and GS-CTVs, especially for P3 with its altered arm setup. L2 and L3 were the most critical levels. The study highlighted these key points to address.
ADVANCES IN KNOWLEDGE: The present study compares, by means of validated geometric indices, manual segmentations of axillary lymph nodes in breast cancer made by different observers at different institutions on radiotherapy planning CT images. Assessing such variability is of paramount importance, as geometric uncertainties might lead to incorrect dosimetry and compromise oncological outcome.
Affiliation(s)
- Matteo Pepa, Division of Radiation Oncology, IEO Istituto Europeo di Oncologia IRCCS, Milano, Italy
- Rosa Luraschi, Unit of Medical Physics, IEO Istituto Europeo di Oncologia IRCCS, Milano, Italy
- Sabrina Vigorito, Unit of Medical Physics, IEO Istituto Europeo di Oncologia IRCCS, Milano, Italy
- Domenico Cante, Radiotherapy Department, ASL TO4 Ivrea Community Hospital, Ivrea, Italy
- Edoardo Petrucci, Unit of Medical Physics, ASL TO4 Ivrea Community Hospital, Ivrea, Italy
- Lorenza Marino, Radiotherapy Unit, REM Radioterapia, Viagrande (CT), Italy
- Giuseppina Borzì, Unit of Medical Physics, REM Radioterapia, Viagrande (CT), Italy
- Edy Ippolito, Department of Radiotherapy, Campus Bio-Medico University, Roma, Italy
- Angela Argenone, Division of Radiation Oncology, Azienda Ospedaliera di Rilievo Nazionale San Pio, Benevento, Italy
- Luciano Iadanza, Unit of Medical Physics, Azienda Ospedaliera di Rilievo Nazionale San Pio, Benevento, Italy
- Fiorenza De Rose, Radiotherapy and Radiosurgery Department, Humanitas Clinical and Research Centre IRCCS, Milano, Italy
- Francesca Lobefalo, Radiotherapy and Radiosurgery Department, Humanitas Clinical and Research Centre IRCCS, Milano, Italy
- Francesca Cucciarelli, Department of Internal Medicine, Radiotherapy Institute, Ospedali Riuniti Umberto I, G.M. Lancisi, G. Salesi, Ancona, Italy
- Marco Valenti, Unit of Medical Physics, Ospedali Riuniti Umberto I, G.M. Lancisi, G. Salesi, Ancona, Italy
- Anna Cavallo, Unit of Medical Physics, Fondazione IRCCS Istituto Nazionale dei Tumori, Milano, Italy
- Francesca Rossi, Radiotherapy Unit, Usl Toscana Centro, Ospedale Santa Maria Annunziata, Firenze, Italy
- Serenella Russo, Unit of Medical Physics, Usl Toscana Centro, Ospedale Santa Maria Annunziata, Firenze, Italy
- Agnese Prisco, Department of Radiotherapy, ASUFC - P.O. “Santa Maria della Misericordia” di Udine, Udine, Italy
- Marika Guernieri, Unit of Medical Physics, ASUFC - P.O. “Santa Maria della Misericordia” di Udine, Udine, Italy
- Roberta Guarnaccia, Radiotherapy Unit, Ospedale Fatebenefratelli Isola Tiberina, Roma, Italy
- Tiziana Malatesta, Unit of Medical Physics, Ospedale Fatebenefratelli Isola Tiberina, Roma, Italy
- Ilaria Meaglia, Radiation Oncology Unit, Istituti Clinici Scientifici Maugeri IRCCS, Pavia, Italy
- Marco Liotta, Medical Physics Unit, Istituti Clinici Scientifici Maugeri IRCCS, Pavia, Italy
- Isabella Palumbo, Radiation Oncology Section, University of Perugia and Perugia General Hospital, Perugia, Italy
- Sarah Pia Colangione, Radiotherapy Unit, Istituto Scientifico Romagnolo per lo Studio e la Cura dei Tumori (IRST) IRCCS, Meldola, Italy
- Emilio Mezzenga, Medical Physics Unit, IRCCS Istituto Scientifico Romagnolo per lo Studio e la Cura dei Tumori (IRST) "Dino Amadori", Meldola (FC), Italy
- Sara Falivene, Department of Radiotherapy, ASL Napoli 1 Centro - Ospedale del Mare, Napoli, Italy
- Maria Mormile, Unit of Medical Physics, ASL Napoli 1 Centro - Ospedale del Mare, Napoli, Italy
- Vincenzo Ravo, Unit of Radiotherapy, Istituto Nazionale Tumori – IRCCS - Fondazione G. Pascale, Napoli, Italy
- Cecilia Arrichiello, Unit of Radiotherapy, Istituto Nazionale Tumori – IRCCS - Fondazione G. Pascale, Napoli, Italy
- Alessandra Fozza, Division of Radiation Oncology, Azienda Ospedaliera Nazionale SS. Antonio e Biagio e Cesare Arrigo, Alessandria, Italy
- Maria Paola Barbero, Unit of Medical Physics, Azienda Ospedaliera Nazionale SS. Antonio e Biagio e Cesare Arrigo, Alessandria, Italy
- Gianpiero Catalano, Department of Radiotherapy, IRCCS MultiMedica, Sesto San Giovanni (MI), Italy
- Cristiana Vidali, Department of Radiation Oncology, Azienda Sanitaria Universitaria Integrata di Trieste (ASUI-TS), Trieste, Italy
- Cynthia Aristei, Radiation Oncology Section, University of Perugia and Perugia General Hospital, Perugia, Italy
- Caterina Giannitto, Division of Radiology, IEO Istituto Europeo di Oncologia IRCCS, Milano, Italy
- Eleonora Miglietta, Division of Radiation Oncology, IEO Istituto Europeo di Oncologia IRCCS, Milano, Italy
- Roberto Orecchia, Scientific Direction, IEO Istituto Europeo di Oncologia IRCCS, Milano, Italy
- Federica Cattani, Unit of Medical Physics, IEO Istituto Europeo di Oncologia IRCCS, Milano, Italy
123. Chen Y, Moiseev D, Kong WY, Bezanovski A, Li J. Automation of Quantifying Axonal Loss in Patients with Peripheral Neuropathies through Deep Learning Derived Muscle Fat Fraction. J Magn Reson Imaging 2021; 53:1539-1549. [PMID: 33448058] [DOI: 10.1002/jmri.27508]
Abstract
BACKGROUND: Axonal loss denervates muscle, leading to increased fat accumulation in the muscle. Fat fraction (FF) in whole-limb muscle measured with MRI has therefore emerged as a monitoring biomarker for axonal loss in patients with peripheral neuropathies. This study tests whether a deep learning-based model can automate quantification of FF in individual muscles. Although individual muscles are small with irregular shapes, the manually segmented muscle MRI images accumulated in this laboratory make deep learning feasible.
PURPOSE: To automate segmentation of muscle MRI images through deep learning for quantifying individual-muscle FF in patients with peripheral neuropathies.
STUDY TYPE: Retrospective.
SUBJECTS: 24 patients and 19 healthy controls.
FIELD STRENGTH/SEQUENCES: 3T; interleaved 3D GRE.
ASSESSMENT: A 3D U-Net model was implemented to segment muscle MRI images, enabled by leveraging a large set of manually segmented images. B1+ and B1- maps were used to correct image inhomogeneity. Accuracy of the automation was evaluated using Pixel Accuracy (PA) and Dice Coefficient (DC) on binary masks, and by Bland-Altman analysis and Pearson correlation of FF values between the manual and automated methods.
STATISTICAL TESTS: PA and DC are reported as median and standard deviation. The two methods were compared using the ±95% confidence intervals (CI) of Bland-Altman analysis and the Pearson coefficient (r²).
RESULTS: DC values ranged from 0.83 ± 0.17 to 0.98 ± 0.02 in thigh muscles and from 0.63 ± 0.18 to 0.96 ± 0.02 in calf muscles. For FF values, the overall ±95% CI and r² were [0.49, -0.56] and 0.989 in the thigh and [0.84, -0.71] and 0.971 in the calf.
DATA CONCLUSION: Automated results agreed well with manual results in quantifying FF for individual muscles. This method mitigates the formidable time consumption and intense labor of manual segmentation and enables the use of individual-muscle FF as an outcome measure in upcoming longitudinal studies.
LEVEL OF EVIDENCE: 3.
TECHNICAL EFFICACY STAGE: 1.
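Agreement between the manual and automated FF readings is summarized with Bland-Altman statistics; a minimal sketch of the bias and 95% limits of agreement:

```python
import numpy as np

def bland_altman_limits(manual, automated):
    """Return (bias, (lower, upper)) 95% limits of agreement between two raters."""
    diff = np.asarray(automated, float) - np.asarray(manual, float)
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```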
Affiliation(s)
- Yongsheng Chen, Department of Neurology, Wayne State University School of Medicine, Detroit, Michigan, USA
- Daniel Moiseev, Department of Neurology, Wayne State University School of Medicine, Detroit, Michigan, USA
- Wan Yee Kong, Department of Neurology, Wayne State University School of Medicine, Detroit, Michigan, USA
- Alexandar Bezanovski, Department of Neurology, Wayne State University School of Medicine, Detroit, Michigan, USA
- Jun Li, Department of Neurology, Wayne State University School of Medicine, Detroit, Michigan, USA; Center for Molecular Medicine and Genetics, Wayne State University School of Medicine, Detroit, Michigan, USA; Department of Biochemistry, Microbiology and Immunology, Wayne State University School of Medicine, Detroit, Michigan, USA; John D. Dingell VA Medical Center, Detroit, Michigan, USA
124. Ma J, Chen J, Ng M, Huang R, Li Y, Li C, Yang X, Martel AL. Loss odyssey in medical image segmentation. Med Image Anal 2021; 71:102035. [PMID: 33813286] [DOI: 10.1016/j.media.2021.102035]
Abstract
The loss function is an important component in deep learning-based segmentation methods. Over the past five years, many loss functions have been proposed for various segmentation tasks. However, a systematic study of the utility of these loss functions is missing. In this paper, we present a comprehensive review of segmentation loss functions in an organized manner. We also conduct the first large-scale analysis of 20 general loss functions on four typical 3D segmentation tasks involving six public datasets from 10+ medical centers. The results show that none of the losses can consistently achieve the best performance on the four segmentation tasks, but compound loss functions (e.g. Dice with TopK loss, focal loss, Hausdorff distance loss, and boundary loss) are the most robust losses. Our code and segmentation results are publicly available and can serve as a loss function benchmark. We hope this work will also provide insights on new loss function development for the community.
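Of the compound losses the study finds most robust, Dice combined with TopK cross-entropy is representative. A minimal sketch (the paper's exact weighting and implementation details may differ):

```python
import torch
import torch.nn.functional as F

def dice_loss(probs, onehot, eps=1e-6):
    """Soft Dice loss averaged over classes; probs and onehot are (N, C, ...)."""
    dims = tuple(range(2, probs.ndim))  # spatial dimensions
    inter = (probs * onehot).sum(dims)
    denom = probs.sum(dims) + onehot.sum(dims)
    return 1.0 - ((2.0 * inter + eps) / (denom + eps)).mean()

def topk_ce_loss(logits, target, k_percent=10):
    """Cross-entropy averaged over only the hardest k percent of voxels."""
    ce = F.cross_entropy(logits, target, reduction="none").flatten()
    n = max(1, int(ce.numel() * k_percent / 100))
    return ce.topk(n).values.mean()

def dice_topk_loss(logits, target, num_classes):
    """Compound loss: soft Dice plus TopK cross-entropy."""
    probs = F.softmax(logits, dim=1)
    onehot = F.one_hot(target, num_classes).movedim(-1, 1).float()
    return dice_loss(probs, onehot) + topk_ce_loss(logits, target)
```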
Affiliation(s)
- Jun Ma, Department of Mathematics, Nanjing University of Science and Technology, Nanjing, China
- Jianan Chen, Department of Medical Biophysics, University of Toronto, Toronto, Canada
- Matthew Ng, Department of Medical Biophysics, University of Toronto, Toronto, Canada
- Rui Huang, Department of Medical Biophysics, University of Toronto, Toronto, Canada
- Yu Li, Department of Mathematics, Nanjing University of Science and Technology, Nanjing, China
- Chen Li, Department of Mathematics, Nanjing University, Nanjing, China
- Xiaoping Yang, Department of Mathematics, Nanjing University, Nanjing, China
- Anne L Martel, Department of Medical Biophysics, University of Toronto, Toronto, Canada; Physical Sciences, Sunnybrook Research Institute, Toronto, Canada
125. Computational Complexity Reduction of Neural Networks of Brain Tumor Image Segmentation by Introducing Fermi-Dirac Correction Functions. Entropy 2021; 23:223. [PMID: 33670368] [PMCID: PMC7918890] [DOI: 10.3390/e23020223]
Abstract
Nowadays, deep learning methods with high structural complexity and flexibility inevitably lean on the computational capability of the hardware. A platform with high-performance GPUs and large amounts of memory can support neural networks with large numbers of layers and kernels, but naively pursuing high-cost hardware would likely hinder the technical development of deep learning methods. In this article, we therefore establish a new preprocessing method to reduce the computational complexity of neural networks. Inspired by the band theory of solids in physics, we map the image space isomorphically onto a non-interacting physical system and treat image voxels as particle-like clusters. We then reconstruct the Fermi-Dirac distribution as a correction function for normalizing voxel intensity and as a filter of insignificant cluster components. The filtered clusters can then delineate the morphological heterogeneity of the image voxels. We used the BraTS 2019 datasets and the dimensional fusion U-Net for algorithmic validation, and the proposed Fermi-Dirac correction function exhibited performance comparable to the other preprocessing methods employed. Compared with the conventional z-score normalization function and the Gamma correction function, the proposed algorithm saves at least 38% of computational time under a low-cost hardware architecture. Although global histogram equalization has the lowest computational time among the employed correction functions, the proposed Fermi-Dirac correction function exhibits better image augmentation and segmentation capabilities.
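The Fermi-Dirac distribution the authors repurpose is f(x) = 1 / (exp((x - mu) / kT) + 1). A minimal sketch of an intensity-correction pass built on it; the choices of mu and kT below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def fermi_dirac_correction(volume, mu=None, kT=None):
    """Map voxel intensities through 1 / (exp((x - mu) / kT) + 1), squashing
    them into (0, 1); voxels far above mu are suppressed toward 0."""
    x = volume.astype(np.float64)
    mu = np.median(x) if mu is None else mu   # assumed "chemical potential"
    kT = x.std() if kT is None else kT        # assumed "temperature" scale
    z = np.clip((x - mu) / (kT if kT > 0 else 1.0), -60.0, 60.0)  # avoid overflow in exp
    return 1.0 / (np.exp(z) + 1.0)
```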
|
126
|
Yu C, Helwig EJ. Artificial intelligence in gastric cancer: a translational narrative review. ANNALS OF TRANSLATIONAL MEDICINE 2021; 9:269. [PMID: 33708896 PMCID: PMC7940908 DOI: 10.21037/atm-20-6337] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 12/27/2022]
Abstract
Artificial intelligence (AI) has made increasing clinical contributions and introduced novel techniques over the last decade, and its role is increasingly recognized in cancer research and clinical application. Cancers like gastric cancer, or stomach cancer, are ideal testing grounds for whether early efforts to apply AI to medicine can yield valuable results. There are numerous concepts derived from AI, including machine learning (ML) and deep learning (DL). ML is defined as the ability to learn data features without being explicitly programmed; it arises at the intersection of data science and computer science and aims at the efficiency of computing algorithms. In cancer research, ML has been increasingly used in predictive prognostic models. DL is defined as a subset of ML targeting multilayer computation processes. DL is less dependent than ML on an understanding of data features; consequently, DL algorithms are much harder to interpret than ML algorithms, sometimes impossibly so. This review discusses the role of AI in diagnostic, therapeutic, and prognostic advances in gastric cancer. Models such as convolutional neural networks (CNNs) and artificial neural networks (ANNs) have been widely praised in application, yet much of the clinical management of gastric cancer remains to be covered. Adapting AI to improve gastric cancer diagnosis is a worthwhile venture whose information yield could reshape how we approach gastric cancer problems. Though integration may be slow and labored, AI can enhance diagnosis through visual modalities, augment treatment strategies, and grow into an invaluable tool for physicians. AI not only benefits diagnostic and therapeutic outcomes but also reshapes perspectives on the future trajectory of medicine.
Affiliation(s)
- Chaoran Yu
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Fudan University Shanghai Cancer Center, Shanghai, China
- Ernest Johann Helwig
- Tongji Medical College of Huazhong University of Science and Technology, Wuhan, China
|
127
|
Adorno W, Catalano A, Ehsan L, Vitzhum von Eckstaedt H, Barnes B, McGowan E, Syed S, Brown DE. Advancing Eosinophilic Esophagitis Diagnosis and Phenotype Assessment with Deep Learning Computer Vision. BIOMEDICAL ENGINEERING SYSTEMS AND TECHNOLOGIES, INTERNATIONAL JOINT CONFERENCE, BIOSTEC ... REVISED SELECTED PAPERS. BIOSTEC (CONFERENCE) 2021; 2021:44-55. [PMID: 34046649 PMCID: PMC8144887 DOI: 10.5220/0010241900440055] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Eosinophilic Esophagitis (EoE) is an inflammatory esophageal disease that is increasing in prevalence. The diagnostic gold standard involves manual review of a patient's biopsy tissue sample by a clinical pathologist for the presence of 15 or more eosinophils within a single high-power field (400× magnification). Diagnosing EoE can be a cumbersome process, with added difficulty in assessing the severity and progression of disease. We propose an automated approach for quantifying eosinophils using deep image segmentation. A U-Net model and post-processing system are applied to generate eosinophil-based statistics that can diagnose EoE as well as describe disease severity and progression. These statistics are captured in biopsies at the initial EoE diagnosis and are then compared with patient metadata: clinical and treatment phenotypes. The goal is to find linkages that could potentially guide treatment plans for new patients at their initial disease diagnosis. A deep image classification model is further applied to discover features other than eosinophils that can be used to diagnose EoE. This is the first study to utilize a deep learning computer vision approach for EoE diagnosis and to provide an automated process for tracking disease severity and progression.
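As a rough illustration of the post-processing step, the sketch below turns a binary eosinophil segmentation mask into the per-field count behind the diagnostic rule quoted above (15 or more eosinophils in one 400× high-power field), assuming scikit-image. The connected-component approach and the min_area fragment filter are illustrative assumptions, not the paper's exact pipeline.

```python
# Hedged sketch: counting eosinophils in a high-power field from a
# binary U-Net output mask; min_area is an illustrative filter for
# small fragments.
import numpy as np
from skimage import measure

def eosinophils_per_hpf(mask, min_area=50):
    """Count connected components at least min_area pixels large."""
    labels = measure.label(mask.astype(bool))
    return sum(1 for r in measure.regionprops(labels) if r.area >= min_area)

def meets_eoe_threshold(mask, threshold=15):
    """Apply the >= 15 eosinophils per HPF diagnostic criterion."""
    return eosinophils_per_hpf(mask) >= threshold
```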
Affiliation(s)
- William Adorno
- Department of Engineering Systems and Environment, University of Virginia, Charlottesville, VA, U.S.A
- Alexis Catalano
- College of Dental Medicine, Columbia University, New York City, NY, U.S.A
- School of Medicine, University of Virginia, Charlottesville, VA, U.S.A
- Lubaina Ehsan
- School of Medicine, University of Virginia, Charlottesville, VA, U.S.A
- H Vitzhum von Eckstaedt
- Barrett Barnes
- Department of Pediatrics, School of Medicine, University of Virginia, Charlottesville, VA, U.S.A
- Emily McGowan
- Department of Medicine, University of Virginia, Charlottesville, VA, U.S.A
- Sana Syed
- Department of Pediatrics, School of Medicine, University of Virginia, Charlottesville, VA, U.S.A
- Donald E Brown
- School of Data Science, University of Virginia, Charlottesville, VA, U.S.A
|
128
|
Ma J, He J, Yang X. Learning Geodesic Active Contours for Embedding Object Global Information in Segmentation CNNs. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:93-104. [PMID: 32897860 DOI: 10.1109/tmi.2020.3022693] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Most existing CNN-based segmentation methods rely on local appearances learned on the regular image grid, without considering global object information. This article aims to embed an object's global geometric information into a learning framework via the classical geodesic active contours (GAC). We propose a level set function (LSF) regression network, supervised by the segmentation ground truth, the LSF ground truth, and geodesic active contours, that not only generates the segmentation probability map but also directly minimizes the GAC energy functional in an end-to-end manner. With the help of geodesic active contours, the segmentation contour, embedded in the level set function, can be globally driven towards the image boundary to obtain lower energy, and the geodesic constraint leads the segmentation result to have fewer outliers. Extensive experiments on four public datasets show that (1) compared with state-of-the-art (SOTA) learning active contour methods, our method achieves significantly better performance; (2) compared with recent SOTA methods designed to reduce boundary errors, our method also outperforms them with more accurate boundaries; (3) compared with SOTA methods on two popular multi-class segmentation challenge datasets, our method still obtains superior or competitive results in both organ and tumor segmentation tasks. Our study demonstrates that introducing global information via GAC can significantly improve segmentation performance, especially in reducing boundary errors and outliers, which is very useful in applications such as organ transplantation surgical planning and multi-modality image registration, where boundary errors can be very harmful.
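To give a feel for the energy being minimized, here is a minimal sketch of a geodesic length term of the kind GAC contributes, the integral of g(|∇I|)·δ(u)·|∇u| over the image, where u is the predicted level set function, assuming PyTorch. The forward differences, the smoothing width eps, and the omission of the paper's other supervision terms are all simplifying assumptions.

```python
# Hedged sketch of an edge-weighted (geodesic) length penalty on a
# predicted level set function; a fragment of a GAC-style energy,
# not the authors' full training objective.
import math
import torch

def _grads(x):
    """Forward differences of (N, 1, H, W), cropped to a common (H-1, W-1) grid."""
    gy = x[:, :, 1:, :] - x[:, :, :-1, :]
    gx = x[:, :, :, 1:] - x[:, :, :, :-1]
    return gy[:, :, :, :-1], gx[:, :, :-1, :]

def gac_length_loss(lsf, image, eps=1.0):
    iy, ix = _grads(image)
    g = 1.0 / (1.0 + iy ** 2 + ix ** 2)           # edge indicator, small on strong edges
    uy, ux = _grads(lsf)
    grad_u = torch.sqrt(uy ** 2 + ux ** 2 + 1e-8)  # |grad u|
    delta = (eps / math.pi) / (eps ** 2 + lsf[:, :, :-1, :-1] ** 2)  # smoothed Dirac delta
    # Integrating g * delta(u) * |grad u| approximates the geodesic length
    # of the zero level set, so minimizing it pulls the contour towards
    # image edges and shrinks outlier components.
    return (g * delta * grad_u).mean()
```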
|
129
|
Lin A, Fang D, Li C, Cheung CY, Chen H. Improved Automated Foveal Avascular Zone Measurement in Cirrus Optical Coherence Tomography Angiography Using the Level Sets Macro. Transl Vis Sci Technol 2020; 9:20. [PMID: 33240573 PMCID: PMC7671870 DOI: 10.1167/tvst.9.12.20] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2020] [Accepted: 10/07/2020] [Indexed: 12/22/2022] Open
Abstract
Purpose: To evaluate automated measurements of the foveal avascular zone (FAZ) using the Level Sets macro (LSM) in ImageJ, as compared with the Cirrus optical coherence tomography angiography (OCTA) inbuilt algorithm and the Kanno–Saitama macro (KSM). Methods: The eyes of healthy volunteers were scanned four times consecutively on the Zeiss Cirrus HD-OCT 5000 system. The FAZ metrics (area, perimeter, and circularity) were measured manually and automatically by the Cirrus inbuilt algorithm, the KSM, and the LSM. The accuracy and repeatability of all methods and the agreement between automated and manual methods were evaluated. Results: The LSM segmented the FAZ with an average Dice coefficient of 0.9243, outperforming the KSM and the Cirrus inbuilt algorithm by 0.02 and 0.19, respectively. Both the LSM (intraclass correlation coefficient [ICC] = 0.908; coefficient of variation [CoV] = 9.664%) and the manual methods (ICC ≥ 0.921, CoV ≤ 8.727%) showed excellent repeatability for the FAZ area, whereas the other methods showed only moderate to good repeatability (ICC ≤ 0.789, CoV ≥ 15.788%). Agreement with manual FAZ area measurement was excellent for both the LSM and the KSM but not for the Cirrus inbuilt algorithm (LSM, ICC = 0.930; KSM, ICC = 0.928; Cirrus, ICC = 0.254). Conclusions: The LSM exhibited greater accuracy and reliability than the KSM and the inbuilt automated method and may be an improved and accessible option for automated FAZ segmentation. Translational Relevance: The LSM may be a suitable automated and customizable tool for FAZ quantification of Cirrus HD-OCT 5000 images, providing results comparable to manual measurement.
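For reference, the three FAZ metrics compared in the study, together with the Dice overlap used to score segmentation accuracy, follow standard definitions (circularity = 4πA/P²). The sketch below computes them from a binary FAZ mask, assuming scikit-image; the mm_per_pixel scaling and the largest-region selection are illustrative assumptions rather than details of the LSM itself.

```python
# Hedged sketch of FAZ area, perimeter, circularity, and Dice overlap
# from binary masks; not the ImageJ Level Sets macro itself.
import math
import numpy as np
from skimage import measure

def faz_metrics(mask, mm_per_pixel=1.0):
    """Metrics of the largest connected region in a non-empty binary mask."""
    labels = measure.label(mask.astype(bool))
    region = max(measure.regionprops(labels), key=lambda r: r.area)
    area = region.area * mm_per_pixel ** 2
    perimeter = region.perimeter * mm_per_pixel
    return area, perimeter, 4.0 * math.pi * area / perimeter ** 2

def dice(a, b):
    """Dice coefficient between two binary masks (e.g. automated vs. manual)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```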
Affiliation(s)
- Aidi Lin
- Joint Shantou International Eye Center, Shantou University and The Chinese University of Hong Kong, Shantou, China
- Danqi Fang
- Joint Shantou International Eye Center, Shantou University and The Chinese University of Hong Kong, Shantou, China
- Cuilian Li
- Joint Shantou International Eye Center, Shantou University and The Chinese University of Hong Kong, Shantou, China
- Carol Y Cheung
- Department of Ophthalmology & Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Haoyu Chen
- Joint Shantou International Eye Center, Shantou University and The Chinese University of Hong Kong, Shantou, China
|