101. Wang H, Huang H, Wang J, Wei M, Yi Z, Wang Z, Zhang H. An intelligent system of pelvic lymph node detection. Int J Intell Syst 2021. [DOI: 10.1002/int.22452]
Affiliations:
- Han Wang, Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Hao Huang, Gastrointestinal Surgery Center, West China Hospital, Sichuan University, Chengdu, China
- Jingling Wang, Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Mingtian Wei, Gastrointestinal Surgery Center, West China Hospital, Sichuan University, Chengdu, China
- Zhang Yi, Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Ziqiang Wang, Gastrointestinal Surgery Center, West China Hospital, Sichuan University, Chengdu, China
- Haixian Zhang, Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
102. Yang B, Chen W, Luo H, Tan Y, Liu M, Wang Y. Neuron Image Segmentation via Learning Deep Features and Enhancing Weak Neuronal Structures. IEEE J Biomed Health Inform 2021; 25:1634-1645. [PMID: 32809948] [DOI: 10.1109/jbhi.2020.3017540]
Abstract
Neuron morphology reconstruction (tracing) in 3D volumetric images is critical for neuronal research. However, most existing neuron tracing methods are not applicable to challenging datasets in which the images are contaminated by noise or contain weak filament signals. In this paper, we present a two-stage 3D neuron segmentation approach that learns deep features and enhances weak neuronal structures, reducing the impact of image noise and strengthening weak-signal structures. In the first stage, we train a voxel-wise multi-level fully convolutional network (FCN), which specializes in learning deep features, to obtain 3D neuron segmentation maps in an end-to-end manner. In the second stage, a ray-shooting model detects discontinued segments in the first-stage segmentation results; the local neuron diameter at each break point is estimated, and the direction of the filamentary fragment is detected by a rayburst sampling algorithm. A Hessian-repair model then repairs the broken structures by enhancing weak neuronal signal within a fibrous region determined by the estimated local diameter and the fragment direction. Experimental results demonstrate that the proposed approach achieves better segmentation performance than other state-of-the-art methods for 3D neuron segmentation. Compared with neuron reconstructions on images segmented by other methods, the proposed approach gains 47.83% and 34.83% improvements in average distance scores, and its average Precision and Recall rates for branch-point detection are 38.74% and 22.53% higher than detection without segmentation.
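The Hessian-based enhancement idea in the second stage can be illustrated with an off-the-shelf vesselness filter. The sketch below uses skimage's Frangi filter to boost fibrous (tubular) structures such as weak neurites; it illustrates the general technique only, not the authors' repair model, and the sigma range is an assumed stand-in for the estimated local neurite radius.

```python
# Illustrative only: Hessian-based enhancement of fibrous structures with the
# Frangi vesselness filter (not the paper's ray-shooting/Hessian-repair code).
import numpy as np
from skimage.filters import frangi

volume = np.random.rand(32, 64, 64)           # stand-in for a 3D neuron image
# sigmas (assumed here) should bracket the estimated neurite radius in voxels
enhanced = frangi(volume, sigmas=range(1, 4), black_ridges=False)
```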
103. Zhou SK, Greenspan H, Davatzikos C, Duncan JS, van Ginneken B, Madabhushi A, Prince JL, Rueckert D, Summers RM. A review of deep learning in medical imaging: Imaging traits, technology trends, case studies with progress highlights, and future promises. Proc IEEE 2021; 109:820-838. [PMID: 37786449] [PMCID: PMC10544772] [DOI: 10.1109/jproc.2021.3054390]
Abstract
Since its renaissance, deep learning has been widely used in various medical imaging tasks and has achieved remarkable success in many applications, propelling us into the so-called artificial intelligence (AI) era. It is known that the success of AI is mostly attributed to the availability of big data with annotations for a single task and advances in high-performance computing. However, medical imaging presents unique challenges that confront deep learning approaches. In this survey paper, we first present traits of medical imaging, highlight both clinical needs and technical challenges, and describe how emerging trends in deep learning are addressing these issues, covering network architecture, sparse and noisy labels, federated learning, interpretability, uncertainty quantification, and related topics. We then present several case studies commonly found in clinical practice, including digital pathology and chest, brain, cardiovascular, and abdominal imaging. Rather than an exhaustive literature survey, we describe some prominent research highlights related to these case-study applications, and conclude with a discussion of promising future directions.
Affiliations:
- S Kevin Zhou, School of Biomedical Engineering, University of Science and Technology of China, and Institute of Computing Technology, Chinese Academy of Sciences
- Hayit Greenspan, Biomedical Engineering Department, Tel-Aviv University, Israel
- Christos Davatzikos, Radiology Department and Electrical and Systems Engineering Department, University of Pennsylvania, USA
- James S Duncan, Departments of Biomedical Engineering and Radiology & Biomedical Imaging, Yale University
- Anant Madabhushi, Department of Biomedical Engineering, Case Western Reserve University, and Louis Stokes Cleveland Veterans Administration Medical Center, USA
- Jerry L Prince, Electrical and Computer Engineering Department, Johns Hopkins University, USA
- Daniel Rueckert, Klinikum rechts der Isar, TU Munich, Germany, and Department of Computing, Imperial College, UK
104. Sadad T, Khan AR, Hussain A, Tariq U, Fati SM, Bahaj SA, Munir A. Internet of medical things embedding deep learning with data augmentation for mammogram density classification. Microsc Res Tech 2021; 84:2186-2194. [PMID: 33908111] [DOI: 10.1002/jemt.23773]
Abstract
Women make up roughly half of the world's population, and breast cancer (BC) is among the most common cancers affecting them. Computer-aided diagnosis (CAD) frameworks can help radiologists determine breast density (BD), which in turn supports precise BC detection. This research detects BD automatically from mammogram images using Internet of Medical Things (IoMT)-supported devices. Two pretrained deep convolutional neural network models, DenseNet201 and ResNet50, were applied through a transfer-learning approach. A total of 322 mammogram images containing 106 fatty, 112 dense, and 104 glandular cases were obtained from the Mammographic Image Analysis Society dataset. Preprocessing prunes irrelevant regions and enhances the target regions. The DenseNet201 model accomplished an overall BD classification accuracy of 90.47%. Such a framework helps identify BD more rapidly, assisting radiologists and patients without delay.
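The transfer-learning recipe described above can be sketched in a few lines. This is a hypothetical minimal setup assuming PyTorch/torchvision; the frozen backbone, input size, and three density classes mirror the abstract, but the authors' exact configuration is not shown here.

```python
# Minimal transfer-learning sketch (assumptions, not the authors' code):
# DenseNet201 pre-trained on ImageNet, re-headed for 3 breast-density classes.
import torch
import torch.nn as nn
from torchvision import models

def build_density_classifier(num_classes: int = 3) -> nn.Module:
    model = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1)
    for p in model.parameters():
        p.requires_grad = False                 # freeze the ImageNet backbone
    model.classifier = nn.Linear(model.classifier.in_features, num_classes)
    return model

model = build_density_classifier()
x = torch.randn(1, 3, 224, 224)                 # a mammogram resized to 224x224
logits = model(x)                               # (1, 3): fatty/dense/glandular
```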
Affiliations:
- Tariq Sadad, Department of Computer Science & Software Engineering, International Islamic University, Islamabad, Pakistan
- Amjad Rehman Khan, Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Ayyaz Hussain, Department of Computer Science, Quaid-i-Azam University, Islamabad, Pakistan
- Usman Tariq, College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Alkharj, Saudi Arabia
- Suliman Mohamed Fati, Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Saeed Ali Bahaj, MIS Department, College of Business Administration, Prince Sattam bin Abdulaziz University, Alkharj, Saudi Arabia
- Asim Munir, Department of Computer Science & Software Engineering, International Islamic University, Islamabad, Pakistan
105. Yao Z, Hu X, Liu X, Xie W, Dong Y, Qiu H, Chen Z, Shi Y, Xu X, Huang M, Zhuang J. A machine learning-based pulmonary venous obstruction prediction model using clinical data and CT image. Int J Comput Assist Radiol Surg 2021; 16:609-617. [PMID: 33791921] [DOI: 10.1007/s11548-021-02335-y]
Abstract
PURPOSE In this study, we consider the most common type of total anomalous pulmonary venous connection (TAPVC) and establish a machine learning-based prediction model for postoperative pulmonary venous obstruction (PVO) using clinical data and CT images jointly. METHOD Patients diagnosed with supracardiac TAPVC from January 1, 2009, to December 31, 2018, in Guangdong Provincial People's Hospital were enrolled. Logistic regression was applied for clinical feature selection, while a convolutional neural network was used to extract CT image features. The prediction model integrates these two kinds of features for PVO prediction, and the proposed methods were evaluated using fourfold cross-validation. RESULT In total, 131 patients were enrolled in our study. Compared with traditional approaches, the machine learning-based joint method using clinical data and CT images achieved the highest average AUC score of 0.943, along with a higher sensitivity of 0.828 and a higher positive predictive value of 0.864. CONCLUSION Using clinical data and CT images jointly improves performance significantly compared with methods that use only clinical data or only CT images. The proposed machine learning-based joint method demonstrates the practicability of fully using multi-modality clinical data.
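The joint use of clinical data and CT features can be pictured as a two-branch network whose feature vectors are concatenated before the final classifier. The sketch below is a toy illustration under assumed layer sizes and input shapes; it is not the authors' architecture.

```python
# Toy two-branch fusion model for PVO prediction (sizes are assumptions).
import torch
import torch.nn as nn

class JointPVOPredictor(nn.Module):
    def __init__(self, n_clinical: int = 10):
        super().__init__()
        self.cnn = nn.Sequential(               # image branch: 3D CNN features
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(              # fused image + clinical features
            nn.Linear(32 + n_clinical, 16), nn.ReLU(), nn.Linear(16, 1),
        )

    def forward(self, volume, clinical):
        feats = torch.cat([self.cnn(volume), clinical], dim=1)
        return self.head(feats)                 # logit; sigmoid -> probability

model = JointPVOPredictor()
logit = model(torch.randn(2, 1, 32, 64, 64), torch.randn(2, 10))
```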
Affiliations:
- Zeyang Yao and Wen Xie, School of Medicine, South China University of Technology; Guangdong Cardiovascular Institute, Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Dongchuan Rd 96, Guangzhou 510080, China
- Xinrong Hu, Yiyu Shi and Xiaowei Xu, Guangdong Cardiovascular Institute, Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Dongchuan Rd 96, Guangzhou 510080, China
- Xiaobing Liu, Hailong Qiu, Zewen Chen and Jian Zhuang, Department of Cardiac Surgery, Guangdong Cardiovascular Institute, Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Dongchuan Rd 96, Guangzhou 510080, China
- Yuhao Dong and Meiping Huang, Department of Catheterization Lab, Guangdong Cardiovascular Institute, Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Dongchuan Rd 96, Guangzhou 510080, China
106. Shivakumar N, Chandrashekar A, Handa AI, Lee R. Use of deep learning for detection, characterisation and prediction of metastatic disease from computerised tomography: a systematic review. Postgrad Med J 2021; 98:e20. [PMID: 33688072] [DOI: 10.1136/postgradmedj-2020-139620]
Abstract
CT is widely used for the diagnosis, staging and management of cancer. The presence of metastasis has significant implications for treatment and prognosis. Deep learning (DL), a form of machine learning in which layers of programmed algorithms interpret and recognise patterns, may have a potential role in CT image analysis. This review provides an overview of the use of DL in CT image analysis for the diagnostic evaluation of metastatic disease. A total of 29 studies were included, grouped into three areas of research: detection of metastatic disease from CT imaging, characterisation of lesions on CT as metastases, and prediction of the presence or development of metastasis based on the primary tumour. In conclusion, DL in CT image analysis could have a potential role in evaluating metastatic disease; however, prospective clinical trials investigating its clinical value are required.
Affiliations:
- Natesh Shivakumar, Nuffield Department of Surgical Sciences, University of Oxford, Oxford, Oxfordshire, UK
- Anirudh Chandrashekar, Nuffield Department of Surgical Sciences, University of Oxford, Oxford, Oxfordshire, UK
- Ashok Inderraj Handa, Nuffield Department of Surgical Sciences, University of Oxford, Oxford, Oxfordshire, UK
- Regent Lee, Nuffield Department of Surgical Sciences, University of Oxford, Oxford, Oxfordshire, UK
107. Rakocz N, Chiang JN, Nittala MG, Corradetti G, Tiosano L, Velaga S, Thompson M, Hill BL, Sankararaman S, Haines JL, Pericak-Vance MA, Stambolian D, Sadda SR, Halperin E. Automated identification of clinical features from sparsely annotated 3-dimensional medical imaging. NPJ Digit Med 2021; 4:44. [PMID: 33686212] [PMCID: PMC7940637] [DOI: 10.1038/s41746-021-00411-w]
Abstract
One of the core challenges in applying machine learning and artificial intelligence to medicine is the limited availability of annotated medical data. Unlike other machine learning applications, where labeled data are abundant, labeling and annotating medical data and images requires major manual effort from expert clinicians whose time is scarce. In this work, we propose a new deep learning technique (SLIVER-net) to predict clinical features from 3-dimensional volumes using a limited number of manually annotated examples. SLIVER-net is based on transfer learning, borrowing the structure and parameters of networks trained on publicly available large datasets. Since public volume data are scarce, we use 2D images and account for the 3-dimensional structure with a novel method that tiles the volume scans and then adds layers that leverage the 3D structure. To illustrate its utility, we apply SLIVER-net to predict risk factors for progression of age-related macular degeneration (AMD), a leading cause of blindness, from optical coherence tomography (OCT) volumes acquired at multiple sites. SLIVER-net successfully predicts these factors despite being trained with a relatively small number of annotated volumes (hundreds) and only dozens of positive training examples. Our empirical evaluation demonstrates that SLIVER-net significantly outperforms standard state-of-the-art deep learning techniques for medical volumes, and its performance generalizes, as it was validated on an external testing set. In a direct comparison with a clinician panel, SLIVER-net also outperforms junior specialists and identifies AMD progression risk factors similarly to expert retina specialists.
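The tiling trick, laying the slices of a volume out as a 2D montage so a 2D ImageNet-pre-trained network can consume it, is easy to sketch. The helper below is a generic illustration; SLIVER-net's exact tiling layout and added 3D-aware layers are not reproduced here.

```python
# Generic volume-to-montage tiling (illustrative; not SLIVER-net itself).
import numpy as np

def tile_volume(volume: np.ndarray, cols: int = 4) -> np.ndarray:
    """Arrange the D slices of a (D, H, W) volume into a (rows*H, cols*W) grid."""
    d, h, w = volume.shape
    rows = int(np.ceil(d / cols))
    grid = np.zeros((rows * h, cols * w), dtype=volume.dtype)
    for i in range(d):
        r, c = divmod(i, cols)
        grid[r * h:(r + 1) * h, c * w:(c + 1) * w] = volume[i]
    return grid

montage = tile_volume(np.random.rand(16, 64, 64))   # -> a (256, 256) 2D image
```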
Affiliations:
- Nadav Rakocz, Department of Computer Science, University of California, Los Angeles, CA, USA
- Jeffrey N Chiang, Department of Computational Medicine, University of California, Los Angeles, CA, USA
- Giulia Corradetti, Doheny Eye Institute, Los Angeles, CA, USA; Department of Ophthalmology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Liran Tiosano, Doheny Eye Institute, Los Angeles, CA, USA; Department of Ophthalmology, Hadassah-Hebrew University Medical Center, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel
- Michael Thompson, Department of Computer Science, University of California, Los Angeles, CA, USA
- Brian L Hill, Department of Computer Science, University of California, Los Angeles, CA, USA
- Sriram Sankararaman, Departments of Computer Science, Computational Medicine, and Human Genetics, University of California, Los Angeles, CA, USA
- Jonathan L Haines, Department of Population & Quantitative Health Sciences, Case Western Reserve University, Cleveland, OH, USA
- Margaret A Pericak-Vance, John P. Hussman Institute for Human Genomics, University of Miami Miller School of Medicine, Miami, FL, USA
- Dwight Stambolian, Department of Ophthalmology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Srinivas R Sadda, Doheny Eye Institute, Los Angeles, CA, USA; Department of Ophthalmology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Eran Halperin, Departments of Computer Science, Computational Medicine, and Anesthesiology, and Institute of Precision Health, University of California, Los Angeles, CA, USA; Department of Ophthalmology, Hadassah-Hebrew University Medical Center, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel
108. Hegde N, Shishir M, Shashank S, Dayananda P, Latte MV. A Survey on Machine Learning and Deep Learning-based Computer-Aided Methods for Detection of Polyps in CT Colonography. Curr Med Imaging 2021; 17:3-15. [PMID: 32294045] [DOI: 10.2174/2213335607999200415141427]
Abstract
BACKGROUND Colon cancer generally begins as a neoplastic growth of tissue, called a polyp, originating from the inner lining of the colon wall. Most colon polyps are considered harmless, but over time they can evolve into colon cancer, which, when diagnosed in later stages, is often fatal. Hence, time is of the essence in the early detection of polyps and the prevention of colon cancer. METHODS To aid this endeavor, many computer-aided methods have been developed that use a wide array of techniques to detect, localize and segment polyps from CT colonography images. In this paper, we provide a comprehensive survey of state-of-the-art methods and broadly categorize this work by the classification techniques used, namely machine learning and deep learning. CONCLUSION The performance of each approach is analyzed against existing methods, along with how these approaches can be used to tackle the timely and accurate detection of colon polyps.
Affiliations:
- Niharika Hegde, JSS Academy of Technical Education, Bangalore 560060, Karnataka, India
- M Shishir, JSS Academy of Technical Education, Bangalore 560060, Karnataka, India
- S Shashank, JSS Academy of Technical Education, Bangalore 560060, Karnataka, India
- P Dayananda, JSS Academy of Technical Education, Bangalore 560060, Karnataka, India
109. Zhou W, Jian W, Cen X, Zhang L, Guo H, Liu Z, Liang C, Wang G. Prediction of Microvascular Invasion of Hepatocellular Carcinoma Based on Contrast-Enhanced MR and 3D Convolutional Neural Networks. Front Oncol 2021; 11:588010. [PMID: 33854959] [PMCID: PMC8040801] [DOI: 10.3389/fonc.2021.588010]
Abstract
Background and Purpose It is extremely important to predict microvascular invasion (MVI) of hepatocellular carcinoma (HCC) before surgery, as MVI is a key predictor of recurrence and helps determine the treatment strategy before liver resection or liver transplantation. In this study, we demonstrate that a deep learning approach based on contrast-enhanced MR and 3D convolutional neural networks (CNN) can better predict MVI in HCC patients. Materials and Methods This retrospective study included 114 consecutive patients who underwent surgical resection from October 2012 to October 2018, with 117 histologically confirmed HCCs. MR sequences including 3.0T/LAVA (liver acquisition with volume acceleration) and 3.0T/e-THRIVE (enhanced T1 high-resolution isotropic volume excitation) were used for image acquisition. First, numerous 3D patches were extracted from the region of each lesion for data augmentation. A 3D CNN was then utilized to extract discriminant deep features of HCC from each contrast-enhanced MR phase separately, and a loss function for deep supervision was designed to integrate the deep features from the multiple phases. The dataset was divided into a training set of 77 HCCs and an independent test set of the remaining 40 HCCs. Receiver operating characteristic (ROC) analysis was adopted to assess MVI prediction performance, and the output probability of the model was assessed by the independent Student's t-test or Mann-Whitney U test. Results The mean AUC values for MVI prediction were 0.793 (p=0.001) in the pre-contrast phase, 0.855 (p=0.000) in the arterial phase, and 0.817 (p=0.000) in the portal vein phase. Simple concatenation of deep features from all three phases improved performance to an AUC of 0.906 (p=0.000). By comparison, the proposed deep learning model with the deep supervision loss function produced the best result, with an AUC of 0.926 (p=0.000). Conclusion A deep learning framework based on 3D CNN and a deeply supervised net with contrast-enhanced MR could be effective for MVI prediction.
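The deep-supervision idea, auxiliary losses on each contrast phase plus a main loss on the fused prediction, can be written compactly. The weighting and per-phase heads below are assumptions for illustration, not the paper's exact loss.

```python
# Sketch of a deeply supervised multi-phase loss (weights are assumptions).
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def deep_supervision_loss(phase_logits, fused_logit, target, aux_weight=0.3):
    """Main loss on the fused prediction plus weighted per-phase losses."""
    loss = bce(fused_logit, target)
    for logit in phase_logits:       # e.g. pre-contrast, arterial, portal vein
        loss = loss + aux_weight * bce(logit, target)
    return loss

target = torch.ones(4, 1)            # toy MVI labels for a batch of 4 lesions
loss = deep_supervision_loss([torch.randn(4, 1) for _ in range(3)],
                             torch.randn(4, 1), target)
```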
Affiliations:
- Wu Zhou, School of Medical Information Engineering, Guangzhou University of Chinese Medicine, Guangzhou, China
- Wanwei Jian, School of Medical Information Engineering, Guangzhou University of Chinese Medicine, Guangzhou, China
- Xiaoping Cen, School of Medical Information Engineering, Guangzhou University of Chinese Medicine, Guangzhou, China
- Lijuan Zhang, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Hui Guo, Department of Optometry, Guangzhou Aier Eye Hospital, Jinan University, Guangzhou, China
- Zaiyi Liu, Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Changhong Liang, Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Guangyi Wang, Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
110. Workflow towards automated segmentation of agglomerated, non-spherical particles from electron microscopy images using artificial neural networks. Sci Rep 2021; 11:4942. [PMID: 33654161] [PMCID: PMC7925552] [DOI: 10.1038/s41598-021-84287-6]
Abstract
We present a workflow for obtaining fully trained artificial neural networks that can perform automatic particle segmentations of agglomerated, non-spherical nanoparticles from scanning electron microscopy images "from scratch", without the need for large training data sets of manually annotated images. The whole process only requires about 15 min of hands-on time by a user and can typically be finished within less than 12 h when training on a single graphics card (GPU). After training, SEM image analysis can be carried out by the artificial neural network within seconds. This is achieved by using unsupervised learning for most of the training dataset generation, making heavy use of generative adversarial networks and especially unpaired image-to-image translation via cycle-consistent adversarial networks. We compare the segmentation masks obtained with our suggested workflow qualitatively and quantitatively to state-of-the-art methods using various metrics. Finally, we used the segmentation masks for automatically extracting particle size distributions from the SEM images of TiO2 particles, which were in excellent agreement with particle size distributions obtained manually but could be obtained in a fraction of the time.
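The final measurement step, turning segmentation masks into a particle size distribution, can be sketched with connected-component labeling. This is a generic illustration (equivalent-circle diameters under an assumed pixel scale), not the paper's exact measurement protocol, and it assumes touching particles have already been separated by the instance masks.

```python
# From binary particle masks to equivalent diameters (illustrative only).
import numpy as np
from scipy import ndimage

def equivalent_diameters(binary_mask: np.ndarray, px_per_nm: float = 1.0):
    """Label separated particles and return equivalent-circle diameters."""
    labels, n = ndimage.label(binary_mask)
    areas = ndimage.sum(binary_mask, labels, index=range(1, n + 1))
    return 2.0 * np.sqrt(np.asarray(areas) / np.pi) / px_per_nm  # d = 2*sqrt(A/pi)

mask = np.random.rand(128, 128) > 0.7
diameters = equivalent_diameters(mask)
```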
111. Zheng W, Liu S, Chai QW, Pan JS, Chu SC. Automatic Measurement of Pennation Angle from Ultrasound Images using Resnets. Ultrason Imaging 2021; 43:74-87. [PMID: 33563138] [DOI: 10.1177/0161734621989598]
Abstract
In this study, an automatic pennation angle measuring approach based on deep learning is proposed. First, the Local Radon Transform (LRT) is used to detect the superficial and deep aponeuroses in the ultrasound image. Second, a reference line is introduced between the deep and superficial aponeuroses to assist in detecting the orientation of the muscle fibers: Deep Residual Networks (ResNets) judge the relative orientation of the reference line and the muscle fibers, and the line is revised until it is parallel to the fiber orientation. Finally, the pennation angle is obtained from the directions of the detected aponeuroses and muscle fibers. The angle detected by the proposed method differs by about 1° from the manually labeled angle. On a CPU, the average inference time for a single image of muscle fibers is around 1.6 s, compared to 0.47 s per image when processing a sequential image sequence. Experimental results show that the proposed method achieves accurate and robust measurements of the pennation angle.
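Once the aponeurosis direction and the muscle-fiber direction are known, the pennation angle is simply the angle between the two lines. The helper below shows that last step only; the Radon-transform detection and ResNet orientation check are not reproduced.

```python
# Angle between two detected 2D directions (the final measurement step only).
import numpy as np

def pennation_angle(aponeurosis_dir, fiber_dir):
    """Angle in degrees between two 2D direction vectors."""
    a = np.asarray(aponeurosis_dir, dtype=float)
    f = np.asarray(fiber_dir, dtype=float)
    cos = abs(np.dot(a, f)) / (np.linalg.norm(a) * np.linalg.norm(f))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

print(pennation_angle([1.0, 0.0], [1.0, 0.35]))   # ~19.3 degrees
```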
Affiliations:
- Weimin Zheng, College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, China
- Shangkun Liu, College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, China
- Qing-Wei Chai, College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, China
- Jeng-Shyang Pan, College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, China
- Shu-Chuan Chu, College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, China
112. Kleppe A, Skrede OJ, De Raedt S, Liestøl K, Kerr DJ, Danielsen HE. Designing deep learning studies in cancer diagnostics. Nat Rev Cancer 2021; 21:199-211. [PMID: 33514930] [DOI: 10.1038/s41568-020-00327-9]
Abstract
The number of publications on deep learning for cancer diagnostics is rapidly increasing, and systems are frequently claimed to perform comparably with or better than clinicians. However, few systems have yet demonstrated real-world medical utility. In this Perspective, we discuss reasons for the moderate progress and describe remedies designed to facilitate the transition to the clinic. Recent, presumably influential, deep learning studies in cancer diagnostics, the vast majority of which used images as input, are evaluated to reveal the status of the field. By manipulating real data, we then exemplify that large and varied training data facilitate the generalizability of neural networks and thus the ability to use them clinically. To reduce the risk of biased performance estimation of deep learning systems, we advocate evaluation in external cohorts and strongly advise that the planned analyses, including a predefined primary analysis, be described in a protocol, preferably stored in an online repository. Recommended protocol items should be established for the field, and we present our suggestions.
Affiliations:
- Andreas Kleppe, Institute for Cancer Genetics and Informatics, Oslo University Hospital, Oslo, Norway; Department of Informatics, University of Oslo, Oslo, Norway
- Ole-Johan Skrede, Institute for Cancer Genetics and Informatics, Oslo University Hospital, Oslo, Norway; Department of Informatics, University of Oslo, Oslo, Norway
- Sepp De Raedt, Institute for Cancer Genetics and Informatics, Oslo University Hospital, Oslo, Norway; Department of Informatics, University of Oslo, Oslo, Norway
- Knut Liestøl, Institute for Cancer Genetics and Informatics, Oslo University Hospital, Oslo, Norway; Department of Informatics, University of Oslo, Oslo, Norway
- David J Kerr, Nuffield Division of Clinical Laboratory Sciences, University of Oxford, Oxford, UK
- Håvard E Danielsen, Institute for Cancer Genetics and Informatics, Oslo University Hospital, Oslo, Norway; Department of Informatics, University of Oslo, Oslo, Norway; Nuffield Division of Clinical Laboratory Sciences, University of Oxford, Oxford, UK
113. Heo MS, Kim JE, Hwang JJ, Han SS, Kim JS, Yi WJ, Park IW. Artificial intelligence in oral and maxillofacial radiology: what is currently possible? Dentomaxillofac Radiol 2021; 50:20200375. [PMID: 33197209] [PMCID: PMC7923066] [DOI: 10.1259/dmfr.20200375]
Abstract
Artificial intelligence, which has been actively applied in a broad range of industries in recent years, is an active area of interest for many researchers. Dentistry is no exception to this trend, and applications of artificial intelligence are particularly promising in oral and maxillofacial (OMF) radiology. Recent research on artificial intelligence in OMF radiology has mainly used convolutional neural networks, which can perform image classification, detection, segmentation, registration, generation, and refinement. Artificial intelligence systems in this field have been developed for radiographic diagnosis, image analysis, forensic dentistry, and image quality improvement. Tremendous amounts of data are needed to achieve good results, and the involvement of OMF radiologists is essential for making accurate and consistent datasets, which is a time-consuming task. For artificial intelligence to be widely used in actual clinical practice, many problems remain to be solved, such as building large, finely labeled open datasets, understanding the judgment criteria of artificial intelligence, and countering DICOM hacking threats that use artificial intelligence. If solutions to these problems emerge as artificial intelligence develops, it is expected to play an important role in the development of automatic diagnosis systems, the establishment of treatment plans, and the fabrication of treatment tools. OMF radiologists, as professionals who thoroughly understand the characteristics of radiographic images, will play a very important role in developing artificial intelligence applications in this field.
Affiliations:
- Min-Suk Heo, Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, Republic of Korea
- Jo-Eun Kim, Department of Oral and Maxillofacial Radiology, Seoul National University Dental Hospital, Seoul, Republic of Korea
- Jae-Joon Hwang, Department of Oral and Maxillofacial Radiology, School of Dentistry, Pusan National University, Yangsan, Republic of Korea
- Sang-Sun Han, Department of Oral and Maxillofacial Radiology, College of Dentistry, Yonsei University, Seoul, Republic of Korea
- Jin-Soo Kim, Department of Oral and Maxillofacial Radiology, College of Dentistry, Chosun University, Gwangju, Republic of Korea
- Won-Jin Yi, Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, Republic of Korea
- In-Woo Park, Department of Oral and Maxillofacial Radiology, College of Dentistry, Gangneung-Wonju National University, Gangneung, Republic of Korea
114. Balagurunathan Y, Mitchell R, El Naqa I. Requirements and reliability of AI in the medical context. Phys Med 2021; 83:72-78. [PMID: 33721700] [PMCID: PMC8915137] [DOI: 10.1016/j.ejmp.2021.02.024]
Abstract
The digital information age has been a catalyst for renewed interest in Artificial Intelligence (AI) approaches, especially the subclass of computer algorithms popularly grouped as Machine Learning (ML). These methods allow us to go beyond limited human cognitive ability in understanding the complexity of high-dimensional data. The medical sciences have seen steady use of these methods but have been slow to adopt them to improve patient care. Some significant impediments have diluted this effort, including the availability of curated, diverse datasets for model building, reliable human-level interpretation of these models, and reliable reproducibility for routine clinical use. Each of these aspects involves limiting conditions that must be balanced against the data and model-building efforts, clinical implementation, and the integration cost of the translational effort with minimal patient-level harm, all of which may directly impact future clinical adoption. In this review paper, we assess each aspect of the problem in the context of reliable use of ML methods in oncology, as a representative study case, with the goal of safeguarding utility and improving patient care in medicine in general.
Affiliations:
- Ross Mitchell, Department of Machine Learning, H. Lee Moffitt Cancer Center, Tampa, FL, USA; Health Data Services, H. Lee Moffitt Cancer Center, Tampa, FL, USA
- Issam El Naqa, Department of Machine Learning, H. Lee Moffitt Cancer Center, Tampa, FL, USA
115. Huff DT, Weisman AJ, Jeraj R. Interpretation and visualization techniques for deep learning models in medical imaging. Phys Med Biol 2021.
Abstract
Deep learning (DL) approaches to medical image analysis tasks have recently become popular; however, they suffer from a lack of human interpretability, which is critical both for increasing understanding of the methods' operation and for enabling clinical translation. This review summarizes currently available methods for model interpretation and critically evaluates published uses of these methods for medical imaging applications. We divide model interpretation into two categories: (1) understanding model structure and function and (2) understanding model output. Understanding model structure and function covers ways to inspect the learned features of a model and how those features act on an image; we discuss techniques for reducing the dimensionality of high-dimensional data and cover autoencoders, both of which can also be leveraged for interpretation. Understanding model output covers attribution-based methods, such as saliency maps and class activation maps, which produce heatmaps describing the importance of different parts of an image to the model's prediction. We describe the mathematics behind these methods, give examples of their use in medical imaging, and compare them against one another. We also summarize several published toolkits for model interpretation specific to medical imaging, cover limitations of current interpretation methods, provide recommendations for DL practitioners looking to incorporate model interpretation into their task, and offer a general discussion on the importance of model interpretation in medical imaging contexts.
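The simplest attribution method in the family surveyed here is a vanilla gradient saliency map: the absolute gradient of the class score with respect to each input pixel. The sketch below uses an untrained placeholder network purely for illustration; real use would load a trained model and a medical image.

```python
# Vanilla gradient saliency (one attribution method of the family surveyed).
import torch
from torchvision import models

model = models.resnet18(weights=None).eval()     # placeholder, untrained
image = torch.randn(1, 3, 224, 224, requires_grad=True)

score = model(image)[0].max()                    # top predicted class score
score.backward()                                 # gradients w.r.t. input pixels
saliency = image.grad.abs().max(dim=1)[0]        # (1, 224, 224) heatmap
```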
Affiliations:
- Daniel T. Huff, Department of Medical Physics, University of Wisconsin-Madison, Madison, WI
- Amy J. Weisman, Department of Medical Physics, University of Wisconsin-Madison, Madison, WI
- Robert Jeraj, Department of Medical Physics, University of Wisconsin-Madison, Madison, WI; Faculty of Mathematics and Physics, University of Ljubljana, Ljubljana, Slovenia
116. Chen W, Liu M, Zhan Q, Tan Y, Meijering E, Radojevic M, Wang Y. Spherical-Patches Extraction for Deep-Learning-Based Critical Points Detection in 3D Neuron Microscopy Images. IEEE Trans Med Imaging 2021; 40:527-538. [PMID: 33055023] [DOI: 10.1109/tmi.2020.3031289]
Abstract
Digital reconstruction of neuronal structures is very important to neuroscience research. Many existing reconstruction algorithms require a set of good seed points. 3D neuron critical points, including terminations, branch points and cross-over points, are good candidates for such seed points, but a method that can simultaneously detect all types of critical points has barely been explored. In this work, we present a method to simultaneously detect all three types of 3D critical points in neuron microscopy images, based on a spherical-patches extraction (SPE) method and a 2D multi-stream convolutional neural network (CNN). SPE uses a set of concentric spherical surfaces centered at a given critical-point candidate to extract intensity distribution features around the point. A group of 2D spherical patches is then generated by projecting the surfaces into 2D rectangular image patches according to the order of the azimuth and polar angles. Finally, a 2D multi-stream CNN, in which each stream receives one spherical patch as input, learns the intensity distribution features from those patches and classifies the candidate into one of four classes: termination, branch point, cross-over point or non-critical point. Experimental results confirm that the proposed method outperforms other state-of-the-art critical-point detection methods, and the critical-points-based neuron reconstruction results demonstrate the potential of the detected points to serve as good seeds for reconstruction. Additionally, we have established a public dataset dedicated to neuron critical-point detection, released along with this article.
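The SPE step can be pictured as sampling image intensities on a sphere around a candidate point and unrolling them onto a (polar x azimuth) grid. The sketch below shows one such surface under assumed grid sizes; the multi-stream CNN and the paper's exact sampling scheme are not reproduced.

```python
# One concentric spherical surface projected to a 2D patch (illustrative).
import numpy as np
from scipy.ndimage import map_coordinates

def spherical_patch(volume, center, radius, n_theta=32, n_phi=64):
    theta = np.linspace(0, np.pi, n_theta)        # polar angle
    phi = np.linspace(0, 2 * np.pi, n_phi)        # azimuth angle
    T, P = np.meshgrid(theta, phi, indexing="ij")
    z = center[0] + radius * np.cos(T)
    y = center[1] + radius * np.sin(T) * np.sin(P)
    x = center[2] + radius * np.sin(T) * np.cos(P)
    return map_coordinates(volume, [z, y, x], order=1)   # (n_theta, n_phi)

patch = spherical_patch(np.random.rand(64, 64, 64), (32, 32, 32), radius=5)
```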
117. Gonzalez Y, Shen C, Jung H, Nguyen D, Jiang SB, Albuquerque K, Jia X. Semi-automatic sigmoid colon segmentation in CT for radiation therapy treatment planning via an iterative 2.5-D deep learning approach. Med Image Anal 2021; 68:101896. [PMID: 33383333] [PMCID: PMC7847132] [DOI: 10.1016/j.media.2020.101896]
Abstract
Automatic sigmoid colon segmentation in CT for radiotherapy treatment planning is challenging due to complex organ shape, close proximity to other organs, and large variations in size, shape, and filling status. The patient bowel is often not evacuated, and CT contrast enhancement is not used, which further increase the difficulty. Deep learning (DL) has demonstrated its power in many segmentation problems. However, standard 2-D approaches cannot handle the sigmoid segmentation problem due to incomplete geometry information, and 3-D approaches often encounter the challenge of limited training data. Motivated by the human approach of segmenting the sigmoid slice by slice while considering connectivity between adjacent slices, we propose an iterative 2.5-D DL approach. We constructed a network that takes an axial CT slice, the sigmoid mask in this slice, and an adjacent CT slice as input, and outputs the predicted mask on the adjacent slice; other organ masks were also considered as prior information. We trained the iterative network with 50 patient cases using five-fold cross-validation, and the trained network was applied repeatedly to generate masks slice by slice. The method achieved average Dice similarity coefficients of 0.82 ± 0.06 and 0.88 ± 0.02 in 10 test cases without and with prior information, respectively.
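The iterative 2.5-D idea, a network mapping (current slice, current mask, adjacent slice) to the adjacent mask, applied slice by slice, can be sketched as a propagation loop. The single convolution below is a placeholder for the trained CNN, and the 0.5 threshold is an assumption.

```python
# Slice-by-slice mask propagation loop (placeholder net; illustrative only).
import torch
import torch.nn as nn

net = nn.Conv2d(3, 1, kernel_size=3, padding=1)  # stand-in for the trained CNN

def propagate(volume, first_mask):
    """volume: (D, H, W); first_mask: (H, W) mask on slice 0."""
    masks = [first_mask]
    for k in range(volume.shape[0] - 1):
        inp = torch.stack([volume[k], masks[-1], volume[k + 1]])   # (3, H, W)
        pred = torch.sigmoid(net(inp.unsqueeze(0)))[0, 0]
        masks.append((pred > 0.5).float())       # predicted mask on slice k+1
    return torch.stack(masks)

seg = propagate(torch.randn(8, 64, 64), torch.zeros(64, 64))
```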
Affiliations:
- Yesenia Gonzalez, innovative Technology of Radiotherapy Computation and Hardware (iTORCH) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Chenyang Shen, innovative Technology of Radiotherapy Computation and Hardware (iTORCH) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Hyunuk Jung, innovative Technology of Radiotherapy Computation and Hardware (iTORCH) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Dan Nguyen, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Steve B Jiang, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Kevin Albuquerque, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Xun Jia, innovative Technology of Radiotherapy Computation and Hardware (iTORCH) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
118. Medical Image Retrieval Using Empirical Mode Decomposition with Deep Convolutional Neural Network. Biomed Res Int 2021; 2020:6687733. [PMID: 33426062] [PMCID: PMC7781707] [DOI: 10.1155/2020/6687733]
Abstract
Content-based medical image retrieval (CBMIR) systems search medical image databases to narrow the semantic gap in medical image analysis. Representing high-level medical information effectively with features is a major challenge in CBMIR systems, as features play a vital role in the accuracy and speed of the search process. In this paper, we propose a deep convolutional neural network (CNN)-based framework to learn a concise feature vector for medical image retrieval. The medical images are decomposed into five components using empirical mode decomposition (EMD), the deep CNN is trained in a supervised way with multi-component input, and the learned features are used to retrieve medical images. The IRMA dataset, containing 11,000 X-ray images across 116 classes, is used to validate the proposed method. We achieve a total IRMA error of 43.21 and a mean average precision of 0.86 on the retrieval task, and an IRMA error of 68.48 and an F1 measure of 0.66 on the classification task, the best results compared with the existing literature for this dataset.
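Once each image is mapped to a feature vector, retrieval reduces to a nearest-neighbour search. The sketch below shows cosine-similarity retrieval over assumed 128-D features; the EMD decomposition and CNN training stages are not reproduced.

```python
# Cosine-similarity retrieval over learned feature vectors (illustrative).
import numpy as np

def retrieve(query_vec, db_vecs, top_k=5):
    """Indices of the top_k database vectors most similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    db = db_vecs / np.linalg.norm(db_vecs, axis=1, keepdims=True)
    return np.argsort(-(db @ q))[:top_k]

db = np.random.rand(1000, 128)       # stand-in features for database images
hits = retrieve(np.random.rand(128), db)
```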
119. Angulakshmi M, Deepa M. A Review on Deep Learning Architecture and Methods for MRI Brain Tumour Segmentation. Curr Med Imaging 2021; 17:695-706. [PMID: 33423651] [DOI: 10.2174/1573405616666210108122048]
Abstract
BACKGROUND This review mainly covers the automatic segmentation of brain tumours from MRI medical images. Recently, deep learning-based approaches have provided state-of-the-art performance in image classification, segmentation, object detection, and tracking tasks. INTRODUCTION The core feature of deep learning approaches is the hierarchical representation of features learned from images, which avoids domain-specific handcrafted features. METHODS We first discuss the basic architectures and approaches of deep learning methods. We then survey the literature on MRI brain tumour segmentation using deep learning and its multimodality fusion, analyze the advantages and disadvantages of each method, and conclude with a discussion of the merits and challenges of deep learning techniques. RESULTS A structured review of brain tumour identification using deep learning is presented. CONCLUSION This review may help researchers better focus their future work in the area.
Affiliations:
- M Angulakshmi, School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, India
- M Deepa, School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, India
120. KC K, Yin Z, Wu M, Wu Z. Evaluation of deep learning-based approaches for COVID-19 classification based on chest X-ray images. Signal Image Video Process 2021; 15:959-966. [PMID: 33432267] [PMCID: PMC7788389] [DOI: 10.1007/s11760-020-01820-2]
Abstract
COVID-19, caused by the novel coronavirus SARS-CoV-2, has claimed hundreds of thousands of lives and affected millions of people worldwide, with the numbers of deaths and infections growing exponentially. Deep convolutional neural networks (DCNN) have been a huge milestone for image classification tasks, including on medical images, and transfer learning from state-of-the-art models has proven an efficient way to overcome the problem of limited data. In this paper, a thorough evaluation of eight pre-trained models is presented. Training, validation, and testing of these models were performed on chest X-ray (CXR) images belonging to five distinct classes, containing a total of 760 images. Fine-tuned models, pre-trained on the ImageNet dataset, were computationally efficient and accurate. Fine-tuned DenseNet121 achieved a test accuracy of 98.69% and a macro F1-score of 0.99 for four-class classification (healthy, bacterial pneumonia, COVID-19, and viral pneumonia), and the fine-tuned models achieved higher test accuracies for three-class classification (healthy, COVID-19, and SARS). The experimental results show that only 62% of the total parameters were retrained to achieve such accuracy.
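The note that only 62% of parameters were retrained corresponds to freezing part of the backbone during fine-tuning. The sketch below freezes early DenseNet121 blocks and reports the trainable fraction; which layers to freeze is an assumption, not the paper's exact recipe.

```python
# Partial freezing during fine-tuning (frozen blocks chosen for illustration).
import torch.nn as nn
from torchvision import models

model = models.densenet121(weights=None)
model.classifier = nn.Linear(model.classifier.in_features, 4)  # 4 CXR classes

for name, p in model.named_parameters():
    if name.startswith(("features.conv0", "features.denseblock1",
                        "features.denseblock2")):
        p.requires_grad = False       # keep early, generic filters fixed

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"retrained fraction: {trainable / total:.0%}")
```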
Affiliations:
- Kamal KC, School of Electronics and Information Engineering, Harbin Institute of Technology, Harbin 150001, China
- Zhendong Yin, School of Electronics and Information Engineering, Harbin Institute of Technology, Harbin 150001, China
- Mingyang Wu, School of Electronics and Information Engineering, Harbin Institute of Technology, Harbin 150001, China
- Zhilu Wu, School of Electronics and Information Engineering, Harbin Institute of Technology, Harbin 150001, China
121. Soun JE, Chow DS, Nagamine M, Takhtawala RS, Filippi CG, Yu W, Chang PD. Artificial Intelligence and Acute Stroke Imaging. AJNR Am J Neuroradiol 2021; 42:2-11. [PMID: 33243898] [PMCID: PMC7814792] [DOI: 10.3174/ajnr.a6883]
Abstract
Artificial intelligence is a rapidly expanding field with many applications in acute stroke imaging, including both ischemic and hemorrhagic subtypes. Early identification of acute stroke is critical for initiating prompt intervention to reduce morbidity and mortality. Artificial intelligence can help with various aspects of the stroke treatment paradigm, including infarct or hemorrhage detection, segmentation, classification, large vessel occlusion detection, Alberta Stroke Program Early CT Score grading, and prognostication. In particular, emerging techniques such as convolutional neural networks show promise in performing these imaging-based tasks efficiently and accurately. The purpose of this review is twofold: first, to describe AI methods and available public and commercial platforms in stroke imaging, and second, to summarize the literature on current artificial intelligence-driven applications for acute stroke triage, surveillance, and prediction.
Affiliations:
- J E Soun, Department of Radiological Sciences, University of California, Irvine, Orange, California
- D S Chow, Department of Radiological Sciences and Center for Artificial Intelligence in Diagnostic Medicine, University of California, Irvine, Orange, California
- R S Takhtawala, Center for Artificial Intelligence in Diagnostic Medicine, University of California, Irvine, Orange, California
- C G Filippi, Department of Radiology, Northwell Health, Lenox Hill Hospital, New York, New York
- P D Chang, Department of Radiological Sciences and Center for Artificial Intelligence in Diagnostic Medicine, University of California, Irvine, Orange, California
122. Xing F, Zhang X, Cornish TC. Artificial intelligence for pathology. Artif Intell Med 2021. [DOI: 10.1016/b978-0-12-821259-2.00011-9]
123. Guo Z, Nemoto D, Zhu X, Li Q, Aizawa M, Utano K, Isohata N, Endo S, Kawarai Lefor A, Togashi K. Polyp detection algorithm can detect small polyps: Ex vivo reading test compared with endoscopists. Dig Endosc 2021; 33:162-169. [PMID: 32173917] [DOI: 10.1111/den.13670]
Abstract
BACKGROUND AND STUDY AIMS Small polyps are occasionally missed during colonoscopy. This study was conducted to validate the diagnostic performance of a polyp-detection algorithm designed to alert endoscopists to unrecognized lesions. METHODS A computer-aided detection (CADe) algorithm was developed based on convolutional neural networks, using 1991 still colonoscopy images from 283 subjects with adenomatous polyps as training data. The algorithm was evaluated on a validation dataset of 50 short videos with 1-2 polyps (3.5 ± 1.5 mm, range 2-8 mm) and 50 videos without polyps. Two expert colonoscopists and two physicians in training separately read the same videos, blinded to the presence of polyps. The CADe algorithm was also evaluated on eight full videos with polyps and seven full videos without a polyp. RESULTS The per-video sensitivity of CADe for polyp detection was 88%, and the per-frame false-positive rate was 2.8%, at a confidence level of ≥30%. The per-video sensitivity of both experts was 88%, and the sensitivities of the two physicians in training were 84% and 76%. For each reader, the missed polyps appeared in significantly fewer frames of the short videos than the detected polyps, but no trends were observed regarding polyp size, morphology or color. For full-video readings, per-polyp sensitivity was 100%, with a per-frame false-positive rate of 1.7% and a per-frame specificity of 98.3%. CONCLUSIONS The sensitivity of CADe for detecting small polyps was almost equivalent to experts and superior to physicians in training. A clinical trial using CADe is warranted.
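The two headline metrics are straightforward to compute. The toy numbers below simply reproduce the reported 88% per-video sensitivity and 2.8% per-frame false-positive rate; the frame counts are invented for illustration.

```python
# Per-video sensitivity and per-frame false-positive rate (toy counts).
def per_video_sensitivity(videos_with_detection, videos_with_polyps):
    return videos_with_detection / videos_with_polyps

def per_frame_fp_rate(false_positive_frames, polyp_free_frames):
    return false_positive_frames / polyp_free_frames

print(per_video_sensitivity(44, 50))   # 0.88, matching the reported 88%
print(per_frame_fp_rate(28, 1000))     # 0.028 -> 2.8% (frame counts assumed)
```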
Affiliations:
- Zhe Guo, Biomedical Information Engineering Lab, The University of Aizu, Fukushima, Japan
- Daiki Nemoto, Department of Coloproctology, Aizu Medical Center, Fukushima Medical University, Fukushima, Japan
- Xin Zhu, Biomedical Information Engineering Lab, The University of Aizu, Fukushima, Japan
- Qin Li, Biomedical Information Engineering Lab, The University of Aizu, Fukushima, Japan
- Masato Aizawa, Department of Coloproctology, Aizu Medical Center, Fukushima Medical University, Fukushima, Japan
- Kenichi Utano, Department of Coloproctology, Aizu Medical Center, Fukushima Medical University, Fukushima, Japan
- Noriyuki Isohata, Department of Coloproctology, Aizu Medical Center, Fukushima Medical University, Fukushima, Japan
- Shungo Endo, Department of Coloproctology, Aizu Medical Center, Fukushima Medical University, Fukushima, Japan
- Kazutomo Togashi, Department of Coloproctology, Aizu Medical Center, Fukushima Medical University, Fukushima, Japan
|
124
|
Zhou Z, Sodha V, Pang J, Gotway MB, Liang J. Models Genesis. Med Image Anal 2021; 67:101840. [PMID: 33188996 PMCID: PMC7726094 DOI: 10.1016/j.media.2020.101840] [Citation(s) in RCA: 115] [Impact Index Per Article: 28.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2020] [Revised: 08/12/2020] [Accepted: 09/14/2020] [Indexed: 12/27/2022]
Abstract
Transfer learning from natural images to medical images has been established as one of the most practical paradigms in deep learning for medical image analysis. To fit this paradigm, however, 3D imaging tasks in the most prominent imaging modalities (e.g., CT and MRI) have to be reformulated and solved in 2D, losing rich 3D anatomical information and thereby inevitably compromising performance. To overcome this limitation, we have built a set of models, called Generic Autodidactic Models, nicknamed Models Genesis, because they are created ex nihilo (with no manual labeling), self-taught (learned by self-supervision), and generic (serving as source models for generating application-specific target models). Our extensive experiments demonstrate that our Models Genesis significantly outperform learning from scratch and existing pre-trained 3D models in all five target 3D applications covering both segmentation and classification. More importantly, while learning a model from scratch simply in 3D may not necessarily yield performance better than transfer learning from ImageNet in 2D, our Models Genesis consistently outperform all 2D/2.5D approaches, including fine-tuning models pre-trained on ImageNet as well as fine-tuning the 2D versions of our Models Genesis, confirming the importance of 3D anatomical information and the significance of Models Genesis for 3D medical imaging. This performance is attributed to our unified self-supervised learning framework, built on a simple yet powerful observation: the sophisticated and recurrent anatomy in medical images can serve as strong yet free supervision signals for deep models to learn common anatomical representations automatically via self-supervision. As open science, all codes and pre-trained Models Genesis are available at https://github.com/MrGiovanni/ModelsGenesis.
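The core idea, learning to restore images after synthetic distortion, is compact enough to sketch. The toy PyTorch example below uses a single distortion (local voxel shuffling) and a stand-in network; the released Models Genesis code uses several hand-designed transformations and a 3D U-Net, so treat this purely as an illustration of the objective.

```python
# Minimal sketch of self-supervised restoration pretraining on 3D patches,
# in the spirit of Models Genesis: distort a sub-volume, then train an
# encoder-decoder to restore the original. Local voxel shuffling is only
# one of the transformations used in the real framework.
import torch
import torch.nn as nn

def local_shuffle(x, block=8):
    """Permute voxels inside one randomly chosen cube of each volume."""
    x = x.clone()
    d = torch.randint(0, x.shape[-1] - block, (3,))
    for i in range(x.shape[0]):
        cube = x[i, :, d[0]:d[0]+block, d[1]:d[1]+block, d[2]:d[2]+block]
        idx = torch.randperm(cube.numel())
        x[i, :, d[0]:d[0]+block, d[1]:d[1]+block, d[2]:d[2]+block] = \
            cube.flatten()[idx].view_as(cube)
    return x

encoder_decoder = nn.Sequential(  # stand-in for the 3D U-Net used in practice
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, 3, padding=1),
)
volumes = torch.rand(2, 1, 64, 64, 64)            # unlabeled CT sub-volumes
restored = encoder_decoder(local_shuffle(volumes))
loss = nn.functional.mse_loss(restored, volumes)  # restoration target = original
loss.backward()
```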
Collapse
Affiliation(s)
- Zongwei Zhou
- Department of Biomedical Informatics, Arizona State University, Scottsdale, AZ 85259, USA
| | - Vatsal Sodha
- School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, AZ 85281 USA
| | - Jiaxuan Pang
- School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, AZ 85281 USA
| | | | - Jianming Liang
- Department of Biomedical Informatics, Arizona State University, Scottsdale, AZ 85259, USA.
| |
Collapse
|
125
|
CARNet: Automatic Cerebral Aneurysm Classification in Time-of-Flight MR Angiography by Leveraging Recurrent Neural Networks. ARTIF INTELL 2021. [DOI: 10.1007/978-3-030-93046-2_12] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
126
|
Meng X, Peng Y, Guo Y. An adaptive multi-scale network with nonorthogonal multi-union input for reducing false positive of lymph nodes. Biocybern Biomed Eng 2021. [DOI: 10.1016/j.bbe.2021.01.005] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2023]
|
127
|
Jiang Y, Chen W, Liu M, Wang Y, Meijering E. 3D Neuron Microscopy Image Segmentation via the Ray-Shooting Model and a DC-BLSTM Network. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:26-37. [PMID: 32881683 DOI: 10.1109/tmi.2020.3021493] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
The morphology reconstruction (tracing) of neurons in 3D microscopy images is important to neuroscience research. However, this task remains very challenging because of the low signal-to-noise ratio (SNR) and the discontinuous segments of neurite patterns in the images. In this paper, we present a neuronal structure segmentation method based on the ray-shooting model and a Long Short-Term Memory (LSTM)-based network to enhance the weak-signal neuronal structures and remove background noise in 3D neuron microscopy images. Specifically, the ray-shooting model is used to extract the intensity distribution features within a local region of the image. We then design a neural network based on the dual-channel bidirectional LSTM (DC-BLSTM) to detect the foreground voxels according to the voxel-intensity features and boundary-response features extracted by multiple ray-shooting models generated over the whole image. In this way, we transform the 3D image segmentation task into multiple 1D ray/sequence segmentation tasks, which makes it much easier to label the training samples than in many existing convolutional neural network (CNN) based 3D neuron image segmentation methods. In the experiments, we evaluate the performance of our method on challenging 3D neuron images from two datasets, the BigNeuron dataset and the Whole Mouse Brain Sub-image (WMBS) dataset. Compared with the neuron tracing results on the segmented images produced by other state-of-the-art neuron segmentation methods, our method improves the distance scores by about 32% and 27% in the BigNeuron dataset, and about 38% and 27% in the WMBS dataset.
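A minimal sketch of the ray-as-sequence idea follows, assuming PyTorch: intensities sampled along one ray become a 1D sequence, and a bidirectional LSTM emits a foreground logit per position. The paper's DC-BLSTM consumes dual channels (voxel-intensity plus boundary-response features); this single-channel version is a simplification for illustration.

```python
# Hedged sketch: classify each sample along a ray as neuron foreground or
# background with a bidirectional LSTM, turning 3D segmentation into many
# 1D sequence-labeling problems.
import torch
import torch.nn as nn

class RaySegmenter(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.blstm = nn.LSTM(input_size=1, hidden_size=hidden,
                             batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)  # per-position foreground logit

    def forward(self, rays):                  # rays: (batch, length, 1)
        features, _ = self.blstm(rays)
        return self.head(features).squeeze(-1)

rays = torch.rand(16, 64, 1)                  # 16 rays, 64 intensity samples each
logits = RaySegmenter()(rays)                 # (16, 64) per-position scores
```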
Collapse
|
128
|
Drukker K, Yan P, Sibley A, Wang G. Biomedical imaging and analysis through deep learning. Artif Intell Med 2021. [DOI: 10.1016/b978-0-12-821259-2.00004-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]
|
129
|
Sadad T, Rehman A, Hussain A, Abbasi AA, Khan MQ. A Review on Multi-organ Cancer Detection Using Advanced Machine Learning Techniques. Curr Med Imaging 2020; 17:686-694. [PMID: 33334293 DOI: 10.2174/1573405616666201217112521] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2020] [Revised: 07/07/2020] [Accepted: 07/23/2020] [Indexed: 12/24/2022]
Abstract
The abnormal behavior of tumors poses a risk to human survival. Thus, detecting cancers at an early stage benefits patients and lowers the mortality rate. However, this can be difficult due to various factors related to imaging modalities, such as complex background, low contrast, brightness issues, poorly defined borders and the shape of the affected area. Recently, computer-aided diagnosis (CAD) models have been used to accurately diagnose tumors in different parts of the human body, especially breast, brain, lung, liver, skin and colon cancers. These cancers are diagnosed using various modalities, including computed tomography (CT), magnetic resonance imaging (MRI), colonoscopy, mammography, dermoscopy and histopathology. The aim of this review was to investigate existing approaches for the diagnosis of breast, brain, lung, liver, skin and colon tumors. The review focuses on decision-making systems, including handcrafted features and deep learning architectures for tumor detection.
Collapse
Affiliation(s)
- Tariq Sadad
- Department of Computer Science and Software Engineering, International Islamic University, Islamabad, Pakistan
| | - Amjad Rehman
- Artificial Intelligence & Data Analytics Lab CCIS Prince Sultan University, Riyadh 11586, Saudi Arabia
| | - Ayyaz Hussain
- Department of Computer Science, Quaid-i-Azam University, Islamabad, Pakistan
| | - Aaqif Afzaal Abbasi
- Department of Software Engineering, Foundation University, Islamabad, Pakistan
| | - Muhammad Qasim Khan
- Department of Computer Science, COMSATS University (Attock Campus) Islamabad, Pakistan
| |
Collapse
|
130
|
Elazab A, Wang C, Gardezi SJS, Bai H, Hu Q, Wang T, Chang C, Lei B. GP-GAN: Brain tumor growth prediction using stacked 3D generative adversarial networks from longitudinal MR Images. Neural Netw 2020; 132:321-332. [DOI: 10.1016/j.neunet.2020.09.004] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2019] [Revised: 08/27/2020] [Accepted: 09/06/2020] [Indexed: 01/28/2023]
|
131
|
Zhang J, Li X, Li Y, Wang M, Huang B, Yao S, Shen L. Three dimensional convolutional neural network-based classification of conduct disorder with structural MRI. Brain Imaging Behav 2020; 14:2333-2340. [PMID: 31538277 DOI: 10.1007/s11682-019-00186-5] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
Abstract
Conduct disorder (CD) is a common child and adolescent psychiatric disorder with various representative symptoms, and may cause long-term burden to patients and society. Recently, an increasing number of studies have used deep learning-based approaches, such as convolutional neural networks (CNNs), to analyze neuroimaging data and to identify biomarkers. In this study, we applied an optimized 3D AlexNet CNN model to automatically extract multi-layer high-dimensional features of structural magnetic resonance imaging (sMRI) and to distinguish CD patients from healthy controls (HCs). We acquired high-resolution sMRI from 60 CD patients and 60 age- and gender-matched HCs. All subjects were male, and the age (mean ± std. dev.) of participants in the CD and HC groups was 15.3 ± 1.0 and 15.5 ± 0.7 years, respectively. Five-fold cross validation (CV) was used to train and test this model. The receiver operating characteristic (ROC) curve for this model and that for a support vector machine (SVM) model were compared. Feature visualization was performed to obtain intuition about the sMRI features learned by our AlexNet model. Our proposed AlexNet model achieved high classification performance with an accuracy of 0.85, specificity of 0.82 and sensitivity of 0.87. The area under the ROC curve (AUC) of AlexNet was 0.86, significantly higher than that of the SVM (AUC = 0.78; p = 0.046). The saliency maps for each convolutional layer highlighted the different brain regions in sMRI of CD, mainly including the frontal lobe, superior temporal gyrus, parietal lobe and occipital lobe. The classification results indicate that a deep learning-based method is able to explore hidden features from the sMRI of CD and might assist clinicians in the diagnosis of CD.
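For illustration only, the sketch below pairs a toy 3D CNN (not the optimized 3D AlexNet used in the study) with the same evaluation metric, computing an ROC AUC with scikit-learn; all volumes and labels are placeholders.

```python
# Toy 3D CNN classifier for sMRI volumes plus an AUC computation, mirroring
# the evaluation above. Shapes, data, and labels are illustrative only.
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score

cnn = nn.Sequential(
    nn.Conv3d(1, 8, 3, stride=2), nn.ReLU(),
    nn.Conv3d(8, 16, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(16, 1),
)
volumes = torch.rand(4, 1, 32, 32, 32)        # placeholder sMRI batch
scores = torch.sigmoid(cnn(volumes)).squeeze(1)

labels = [1, 0, 1, 0]                         # placeholder CD vs. HC labels
print(roc_auc_score(labels, scores.detach().numpy()))
```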
Collapse
Affiliation(s)
- Jianing Zhang
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, People's Republic of China
| | - Xuechen Li
- Computer Vision Institute, School of Computer Science and Software Engineering, Shenzhen University, Shenzhen, People's Republic of China
| | - Yuexiang Li
- Computer Vision Institute, School of Computer Science and Software Engineering, Shenzhen University, Shenzhen, People's Republic of China
| | - Mingyu Wang
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, People's Republic of China
| | - Bingsheng Huang
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, People's Republic of China
- Medical Psychological Center, Second Xiangya Hospital, Central South University, Changsha, People's Republic of China
| | - Shuqiao Yao
- Medical Psychological Center, Second Xiangya Hospital, Central South University, Changsha, People's Republic of China.
| | - Linlin Shen
- Computer Vision Institute, School of Computer Science and Software Engineering, Shenzhen University, Shenzhen, People's Republic of China.
| |
Collapse
|
132
|
Karimi-Bidhendi S, Arafati A, Cheng AL, Wu Y, Kheradvar A, Jafarkhani H. Fully‑automated deep‑learning segmentation of pediatric cardiovascular magnetic resonance of patients with complex congenital heart diseases. J Cardiovasc Magn Reson 2020; 22:80. [PMID: 33256762 PMCID: PMC7706241 DOI: 10.1186/s12968-020-00678-0] [Citation(s) in RCA: 29] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2020] [Accepted: 09/09/2020] [Indexed: 01/25/2023] Open
Abstract
BACKGROUND For the growing patient population with congenital heart disease (CHD), improving clinical workflow, accuracy of diagnosis, and efficiency of analyses are considered unmet clinical needs. Cardiovascular magnetic resonance (CMR) imaging offers non-invasive and non-ionizing assessment of CHD patients. However, although CMR data facilitate reliable analysis of cardiac function and anatomy, clinical workflow mostly relies on manual analysis of CMR images, which is time consuming. Thus, an automated and accurate segmentation platform exclusively dedicated to pediatric CMR images can significantly improve the clinical workflow, as the present work aims to establish. METHODS Training artificial intelligence (AI) algorithms for CMR analysis requires large annotated datasets, which are not readily available for pediatric subjects, particularly CHD patients. To mitigate this issue, we devised a novel method that uses a generative adversarial network (GAN) to synthetically augment the training dataset by generating synthetic CMR images and their corresponding chamber segmentations. In addition, we trained and validated a deep fully convolutional network (FCN) on a dataset consisting of [Formula: see text] pediatric subjects with complex CHD, which we made publicly available. The Dice metric, Jaccard index and Hausdorff distance, as well as clinically relevant volumetric indices, are reported to assess and compare our platform with other algorithms, including U-Net and cvi42, which is used in clinics. RESULTS For the congenital CMR dataset, our FCN model yields an average Dice metric of [Formula: see text] and [Formula: see text] for the LV at end-diastole and end-systole, respectively, and [Formula: see text] and [Formula: see text] for the RV at end-diastole and end-systole, respectively. Using the same dataset, cvi42 resulted in [Formula: see text], [Formula: see text], [Formula: see text] and [Formula: see text] for LV and RV at end-diastole and end-systole, and the U-Net architecture resulted in [Formula: see text], [Formula: see text], [Formula: see text] and [Formula: see text] for LV and RV at end-diastole and end-systole, respectively. CONCLUSIONS The chamber segmentation results from our fully-automated method showed strong agreement with manual segmentation, and no significant statistical difference was found by two independent statistical analyses, whereas the cvi42 and U-Net segmentation results failed the t-test. Relying on these outcomes, it can be inferred that by taking advantage of GANs, our method is clinically relevant and can be used for pediatric and congenital CMR segmentation and analysis.
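The overlap metrics reported above are easy to make concrete. A minimal numpy sketch of the Dice metric and Jaccard index for binary segmentation masks (identical shapes assumed; the masks here are placeholders):

```python
# Dice metric and Jaccard index for binary masks, the two overlap measures
# cited in the evaluation above.
import numpy as np

def dice(a, b, eps=1e-8):
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + eps)

def jaccard(a, b, eps=1e-8):
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / (union + eps)

pred = np.zeros((128, 128), dtype=bool); pred[30:70, 30:70] = True
gt   = np.zeros((128, 128), dtype=bool); gt[35:75, 35:75] = True
print(dice(pred, gt), jaccard(pred, gt))
```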
Collapse
Affiliation(s)
- Saeed Karimi-Bidhendi
- Center for Pervasive Communications and Computing, University of California, Irvine, Irvine, USA
| | - Arghavan Arafati
- Edwards Lifesciences Center for Advanced Cardiovascular Technology, University of California, Irvine, Irvine, USA
| | - Andrew L Cheng
- The Keck School of Medicine, University of Southern California and Children's Hospital Los Angeles, Los Angeles, USA
| | - Yilei Wu
- Center for Pervasive Communications and Computing, University of California, Irvine, Irvine, USA
| | - Arash Kheradvar
- Edwards Lifesciences Center for Advanced Cardiovascular Technology, University of California, Irvine, Irvine, USA.
| | - Hamid Jafarkhani
- Center for Pervasive Communications and Computing, University of California, Irvine, Irvine, USA.
| |
Collapse
|
133
|
Jin D, Guo D, Ho TY, Harrison AP, Xiao J, Tseng CK, Lu L. DeepTarget: Gross tumor and clinical target volume segmentation in esophageal cancer radiotherapy. Med Image Anal 2020; 68:101909. [PMID: 33341494 DOI: 10.1016/j.media.2020.101909] [Citation(s) in RCA: 40] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2020] [Revised: 09/10/2020] [Accepted: 11/13/2020] [Indexed: 12/19/2022]
Abstract
Gross tumor volume (GTV) and clinical target volume (CTV) delineation are two critical steps in cancer radiotherapy planning. The GTV defines the primary treatment area of the gross tumor, while the CTV outlines the sub-clinical malignant disease. Automatic GTV and CTV segmentation are both challenging for distinct reasons: GTV segmentation relies on the radiotherapy computed tomography (RTCT) image appearance, which suffers from poor contrast with the surrounding tissues, while CTV delineation relies on a mixture of predefined and judgement-based margins. High intra- and inter-user variability makes this a particularly difficult task. We develop tailored methods solving each task in esophageal cancer radiotherapy, together leading to a comprehensive solution for the target contouring task. Specifically, we integrate the RTCT and positron emission tomography (PET) modalities into a two-stream chained deep fusion framework, taking advantage of both modalities to facilitate more accurate GTV segmentation. For CTV segmentation, since it is highly context-dependent (it must encompass the GTV and involved lymph nodes while also avoiding excessive exposure to the organs at risk), we formulate it as a deep contextual appearance-based problem using encoded spatial distances of these anatomical structures. This better emulates the margin- and appearance-based CTV delineation performed by oncologists. Adding to our contributions, for GTV segmentation we propose a simple yet effective progressive semantically-nested network (PSNN) backbone that outperforms more complicated models. Our work is the first to provide a comprehensive solution for esophageal GTV and CTV segmentation in radiotherapy planning. Extensive 4-fold cross-validation on 148 esophageal cancer patients, the largest analysis to date, was carried out for both tasks. The results demonstrate that our GTV and CTV segmentation approaches significantly improve on previous state-of-the-art work, e.g., an 8.7% increase in Dice score (DSC) and a 32.9 mm reduction in Hausdorff distance (HD) for GTV segmentation, and a 3.4% increase in DSC and a 29.4 mm reduction in HD for CTV segmentation.
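As a rough sketch of the two-modality input described above, the snippet below fuses registered RTCT and PET volumes as input channels to a small 3D network. The paper's actual design is a two-stream chained deep fusion framework with a PSNN backbone; this early-fusion toy is only the simplest variant, and all shapes are illustrative.

```python
# Simplest variant of CT+PET fusion for GTV segmentation: concatenate the
# two registered volumes as input channels to one 3D network.
import torch
import torch.nn as nn

fusion_net = nn.Sequential(              # stand-in for the PSNN backbone
    nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, 3, padding=1),      # per-voxel GTV logit
)
ct = torch.rand(1, 1, 32, 96, 96)        # registered RTCT volume (placeholder)
pet = torch.rand(1, 1, 32, 96, 96)       # registered PET volume (placeholder)
gtv_logits = fusion_net(torch.cat([ct, pet], dim=1))
```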
Collapse
Affiliation(s)
| | | | | | | | - Jing Xiao
- Ping An Technology, Shenzhen, Guangdong, China
| | | | - Le Lu
- PAII Inc., Bethesda, MD, USA
| |
Collapse
|
134
|
Tucker A, Wang Z, Rotalinti Y, Myles P. Generating high-fidelity synthetic patient data for assessing machine learning healthcare software. NPJ Digit Med 2020; 3:147. [PMID: 33299100 PMCID: PMC7653933 DOI: 10.1038/s41746-020-00353-9] [Citation(s) in RCA: 70] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2019] [Accepted: 10/09/2020] [Indexed: 11/09/2022] Open
Abstract
There is a growing demand for the uptake of modern artificial intelligence technologies within healthcare systems. Many of these technologies exploit historical patient health data to build powerful predictive models that can be used to improve diagnosis and understanding of disease. However, there are many issues concerning patient privacy that need to be accounted for in order to enable this data to be better harnessed by all sectors. One approach that could circumvent privacy issues is the creation of realistic synthetic data sets that capture as many of the complexities of the original data set as possible (distributions, non-linear relationships, and noise) but do not actually include any real patient data. While previous research has explored models for generating synthetic data sets, here we explore the integration of resampling, probabilistic graphical modelling, latent variable identification, and outlier analysis for producing realistic synthetic data based on UK primary care patient data. In particular, we focus on handling missingness, complex interactions between variables, and the resulting sensitivity analysis statistics from machine learning classifiers, while quantifying the risks of patient re-identification from synthetic datapoints. We show that, through our approach of integrating outlier analysis with graphical modelling and resampling, we can achieve synthetic data sets that are not significantly different from original ground truth data in terms of feature distributions, feature dependencies, and sensitivity analysis statistics when inferring machine learning classifiers. What is more, the risk of generating synthetic data that is identical or very similar to real patients is shown to be low.
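Two of the checks described above can be sketched directly: a distributional comparison between real and synthetic features (here with a Kolmogorov-Smirnov test, one plausible choice) and a nearest-neighbour proxy for re-identification risk. The cohort arrays are placeholders, and the decision thresholds one would apply are left open.

```python
# Illustrative fidelity and privacy checks for synthetic patient data.
import numpy as np
from scipy.stats import ks_2samp

real = np.random.normal(50, 10, size=(1000, 5))       # placeholder cohorts
synthetic = np.random.normal(50, 10, size=(1000, 5))

stat, p = ks_2samp(real[:, 0], synthetic[:, 0])
print(f"KS p-value for feature 0: {p:.3f}")           # high p => similar

# Re-identification proxy: minimum distance from each synthetic row to any
# real row; exact or near-duplicates would show up as tiny distances.
dists = np.linalg.norm(real[None, :, :] - synthetic[:, None, :], axis=2)
print("closest real-synthetic distance:", dists.min())
```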
Collapse
Affiliation(s)
- Allan Tucker
- Department of Computer Science, Brunel University London, London, UK.
| | - Zhenchen Wang
- CPRD, Medicines & Healthcare Products Regulatory Agency, London, UK
| | - Ylenia Rotalinti
- Biomedical Informatics Laboratory, University of Pavia, Pavia, Italy
| | - Puja Myles
- CPRD, Medicines & Healthcare Products Regulatory Agency, London, UK
| |
Collapse
|
135
|
A generative flow-based model for volumetric data augmentation in 3D deep learning for computed tomographic colonography. Int J Comput Assist Radiol Surg 2020; 16:81-89. [PMID: 33150471 PMCID: PMC7822776 DOI: 10.1007/s11548-020-02275-z] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2020] [Accepted: 09/30/2020] [Indexed: 01/08/2023]
Abstract
Purpose Deep learning can be used for improving the performance of computer-aided detection (CADe) in various medical imaging tasks. However, in computed tomographic (CT) colonography, the performance is limited by the relatively small size and the variety of the available training datasets. Our purpose in this study was to develop and evaluate a flow-based generative model for performing 3D data augmentation of colorectal polyps for effective training of deep learning in CADe for CT colonography. Methods We developed a 3D convolutional neural network (3D CNN) based on a flow-based generative model (3D Glow) for generating synthetic volumes of interest (VOIs) that have characteristics similar to those of the VOIs in its training dataset. The 3D Glow was trained to generate synthetic VOIs of polyps by use of our clinical CT colonography case collection. The evaluation was performed by use of a human observer study with three observers and by use of a CADe-based polyp classification study with a 3D DenseNet. Results The area-under-the-curve values of the receiver operating characteristic analysis of the three observers were not statistically significantly different in distinguishing between real polyps and synthetic polyps. When trained with data augmentation by 3D Glow, the 3D DenseNet yielded a statistically significantly higher polyp classification performance than when it was trained with alternative augmentation methods. Conclusion The 3D Glow-generated synthetic polyps are visually indistinguishable from real colorectal polyps. Their application to data augmentation can substantially improve the performance of 3D CNNs in CADe for CT colonography. Thus, 3D Glow is a promising method for improving the performance of deep learning in CADe for CT colonography.
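Flow-based models such as Glow are built from invertible coupling layers with tractable Jacobians, which is what makes exact likelihood training and sampling possible. A minimal 1D affine coupling layer, purely illustrative and far from the full 3D Glow architecture, looks like this in PyTorch:

```python
# Minimal affine coupling layer: transform half the dimensions conditioned
# on the other half, keeping the transform invertible with a cheap log-det.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim // 2, 64), nn.ReLU(),
                                 nn.Linear(64, dim))  # predicts scale & shift

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        log_s, t = self.net(x1).chunk(2, dim=1)
        log_s = torch.tanh(log_s)            # keep scales well-conditioned
        y2 = x2 * torch.exp(log_s) + t       # transform one half...
        logdet = log_s.sum(dim=1)            # ...with a tractable Jacobian
        return torch.cat([x1, y2], dim=1), logdet

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=1)
        log_s, t = self.net(y1).chunk(2, dim=1)
        log_s = torch.tanh(log_s)
        return torch.cat([y1, (y2 - t) * torch.exp(-log_s)], dim=1)

layer = AffineCoupling(dim=8)
x = torch.randn(4, 8)
y, logdet = layer(x)
assert torch.allclose(layer.inverse(y), x, atol=1e-5)  # invertibility check
```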
Collapse
|
136
|
Dabbagh SR, Rabbi F, Doğan Z, Yetisen AK, Tasoglu S. Machine learning-enabled multiplexed microfluidic sensors. BIOMICROFLUIDICS 2020; 14:061506. [PMID: 33343782 PMCID: PMC7733540 DOI: 10.1063/5.0025462] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/15/2020] [Accepted: 12/01/2020] [Indexed: 05/02/2023]
Abstract
High-throughput, cost-effective, and portable devices can enhance the performance of point-of-care tests. Such devices are able to acquire images from samples at a high rate in combination with microfluidic chips in point-of-care applications. However, interpreting and analyzing the large amount of acquired data is not only labor-intensive and time-consuming, but also prone to user bias and low accuracy. Integrating machine learning (ML) with the image acquisition capability of smartphones, together with increasing computing power, could address the need for high-throughput, accurate, and automated detection, data processing, and quantification of results. Here, ML-supported diagnostic technologies are presented. These technologies include quantification of colorimetric tests, classification of biological samples (cells and sperm), soft sensors, assay type detection, and recognition of fluid properties. Challenges regarding the implementation of ML methods, including the required number of data points, image acquisition prerequisites, and execution of data-limited experiments, are also discussed.
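As a hedged example of the simplest task in this list, quantifying a colorimetric test, the sketch below averages RGB values in a fixed region of interest and fits a linear regressor on calibration images; the ROI location, data, and model choice are all illustrative assumptions rather than a method from the review.

```python
# Toy colorimetric quantification: mean ROI color -> concentration regressor.
import numpy as np
from sklearn.linear_model import LinearRegression

def roi_mean_rgb(image, y0=40, y1=60, x0=40, x1=60):
    """Mean RGB inside a fixed ROI of an (H, W, 3) uint8 image."""
    return image[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)

calib_images = [np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
                for _ in range(10)]                   # placeholder calibration set
calib_conc = np.linspace(0.0, 1.0, 10)                # known concentrations

features = np.array([roi_mean_rgb(im) for im in calib_images])
model = LinearRegression().fit(features, calib_conc)

test_image = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
print("estimated concentration:",
      model.predict([roi_mean_rgb(test_image)])[0])
```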
Collapse
Affiliation(s)
| | - Fazle Rabbi
- Department of Mechanical Engineering, Koç University, Sariyer, Istanbul 34450, Turkey
| | | | - Ali Kemal Yetisen
- Department of Chemical Engineering, Imperial College London, London SW7 2AZ, United Kingdom
| | | |
Collapse
|
137
|
Shi G, Wang J, Qiang Y, Yang X, Zhao J, Hao R, Yang W, Du Q, Kazihise NGF. Knowledge-guided synthetic medical image adversarial augmentation for ultrasonography thyroid nodule classification. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 196:105611. [PMID: 32650266 DOI: 10.1016/j.cmpb.2020.105611] [Citation(s) in RCA: 30] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/15/2019] [Accepted: 06/14/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND AND OBJECTIVE Image classification is an important task in many medical applications. Methods based on deep learning have made great achievements in the computer vision domain. However, they typically rely on large-scale annotated datasets, and obtaining such datasets remains a serious problem in the medical domain. METHODS In this paper, we propose a knowledge-guided adversarial augmentation method for synthesizing medical images. First, we design Term and Image Encoders to extract domain knowledge from radiologists; then we use this domain knowledge as a novel condition to constrain the Auxiliary Classifier Generative Adversarial Network (ACGAN) framework for the synthesis of high-quality thyroid nodule images. Finally, we demonstrate our method on the task of classifying thyroid nodules in ultrasonography. Our method makes effective use of the high-quality diagnostic experience of senior radiologists. In addition, we extract domain knowledge from standardized terms rather than from ultrasound images. RESULTS Our method is demonstrated on a limited dataset of 1937 clinical thyroid ultrasound images and corresponding standardized terms. The accuracy of the proposed model for thyroid nodules is 91.46%, the sensitivity is 90.63%, the specificity is 92.65%, and the AUC is 95.32%, outperforming current classification methods for thyroid nodules. The experimental results show the model has better generalization and robustness. CONCLUSIONS We believe that the proposed method can alleviate the problem of insufficient data in the medical domain, and that other medical problems can benefit from synthetic augmentation.
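A minimal sketch of the conditioning mechanism, assuming PyTorch: an embedding stands in for the paper's Term Encoder, and its output is concatenated with the noise vector before generation. Dimensions and the generator itself are illustrative; the real model follows the ACGAN framework with an auxiliary classifier, which is omitted here.

```python
# Toy term-conditioned generator: encoded diagnostic terms condition the
# synthesis of nodule image patches.
import torch
import torch.nn as nn

class TermConditionedGenerator(nn.Module):
    def __init__(self, n_terms=64, term_dim=16, z_dim=100):
        super().__init__()
        self.term_embed = nn.Embedding(n_terms, term_dim)  # Term Encoder stand-in
        self.net = nn.Sequential(
            nn.Linear(z_dim + term_dim, 256), nn.ReLU(),
            nn.Linear(256, 64 * 64), nn.Tanh(),            # 64x64 grayscale patch
        )

    def forward(self, z, term_ids):
        cond = self.term_embed(term_ids)                   # knowledge condition
        return self.net(torch.cat([z, cond], dim=1)).view(-1, 1, 64, 64)

g = TermConditionedGenerator()
fake = g(torch.randn(8, 100), torch.randint(0, 64, (8,)))  # 8 synthetic patches
```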
Collapse
Affiliation(s)
- Guohua Shi
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
| | - Jiawen Wang
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
| | - Yan Qiang
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, China.
| | - Xiaotang Yang
- Department of Radiology, Shanxi Province Cancer Hospital, Shanxi Medical University, Taiyuan, China.
| | - Juanjuan Zhao
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
| | - Rui Hao
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
| | - Wenkai Yang
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
| | - Qianqian Du
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
| | | |
Collapse
|
138
|
Zhang Y, Wu J, Liu Y, Chen Y, Chen W, Wu EX, Li C, Tang X. A deep learning framework for pancreas segmentation with multi-atlas registration and 3D level-set. Med Image Anal 2020; 68:101884. [PMID: 33246228 DOI: 10.1016/j.media.2020.101884] [Citation(s) in RCA: 40] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2020] [Revised: 06/14/2020] [Accepted: 10/16/2020] [Indexed: 12/21/2022]
Abstract
In this paper, we propose and validate a deep learning framework that incorporates both multi-atlas registration and level-set for segmenting pancreas from CT volume images. The proposed segmentation pipeline consists of three stages, namely coarse, fine, and refine stages. Firstly, a coarse segmentation is obtained through multi-atlas based 3D diffeomorphic registration and fusion. After that, to learn the connection feature, a 3D patch-based convolutional neural network (CNN) and three 2D slice-based CNNs are jointly used to predict a fine segmentation based on a bounding box determined from the coarse segmentation. Finally, a 3D level-set method is used, with the fine segmentation being one of its constraints, to integrate information of the original image and the CNN-derived probability map to achieve a refine segmentation. In other words, we jointly utilize global 3D location information (registration), contextual information (patch-based 3D CNN), shape information (slice-based 2.5D CNN) and edge information (3D level-set) in the proposed framework. These components form our cascaded coarse-fine-refine segmentation framework. We test the proposed framework on three different datasets with varying intensity ranges obtained from different resources, respectively containing 36, 82 and 281 CT volume images. In each dataset, we achieve an average Dice score over 82%, being superior or comparable to other existing state-of-the-art pancreas segmentation algorithms.
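The handoff from the coarse to the fine stage, cropping the volume to a padded bounding box around the coarse mask, can be sketched in a few lines of numpy (networks omitted; sizes and the pad value are placeholders):

```python
# Coarse-to-fine handoff: derive a padded bounding box from the coarse
# registration-based mask, then crop the CT volume for the fine-stage CNNs.
import numpy as np

def bounding_box(mask, pad=8):
    """Padded bounding box of the nonzero region of a 3D binary mask."""
    coords = np.argwhere(mask)
    lo = np.maximum(coords.min(axis=0) - pad, 0)
    hi = np.minimum(coords.max(axis=0) + 1 + pad, mask.shape)
    return tuple(slice(a, b) for a, b in zip(lo, hi))

volume = np.random.rand(128, 256, 256)           # CT volume (placeholder)
coarse = np.zeros(volume.shape, dtype=bool)
coarse[60:80, 100:160, 90:150] = True            # coarse pancreas estimate
crop = volume[bounding_box(coarse)]              # input to the fine stage
print(crop.shape)
```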
Collapse
Affiliation(s)
- Yue Zhang
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
| | - Jiong Wu
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China; School of Computer and Electrical Engineering, Hunan University of Arts and Science, Hunan, China
| | - Yilong Liu
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
| | - Yifan Chen
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
| | - Wei Chen
- Department of Radiology, Third Military Medical University Southwest Hospital, Chongqing, China
| | - Ed X Wu
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
| | - Chunming Li
- Department of Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
| | - Xiaoying Tang
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China.
| |
Collapse
|
139
|
Sathya B, Neelaveni R. Transfer Learning Based Automatic Human Identification using Dental Traits - An Aid to Forensic Odontology. J Forensic Leg Med 2020; 76:102066. [PMID: 33032205 DOI: 10.1016/j.jflm.2020.102066] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2020] [Revised: 08/17/2020] [Accepted: 09/23/2020] [Indexed: 12/13/2022]
Abstract
Forensic odontology deals with identifying humans based on their dental traits because of their robust nature. Classical methods of human identification require substantial manual effort and are difficult to apply to large numbers of images. A novel way of automating the process of human identification using deep learning approaches is proposed in this paper. Transfer learning using AlexNet is applied in three stages. In the first stage, the features of the query tooth image are extracted and its location is identified as either the upper or the lower jaw. In the second stage, the tooth is classified into one of four classes, namely molar, premolar, canine or incisor. In the last stage, the classified tooth is numbered according to the universal numbering system, and finally candidate identification is performed using distance as the metric. This three-stage transfer learning approach helps reduce the search space in the process of candidate matching. Also, instead of making the network classify all 32 teeth into 32 different classes, this approach reduces the number of classes assigned to the classification layer in each stage, thereby increasing the performance of the network. This work outperforms classical approaches in terms of both accuracy and precision. The hit rate in human identification is also higher compared with other state-of-the-art methods.
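Each stage of this pipeline is a standard transfer-learning step. A hedged sketch of one stage, the four-way tooth-type classifier, using torchvision's pretrained AlexNet with a replaced output layer (the class count follows the description above; the data, learning rate, and training loop are illustrative):

```python
# One transfer-learning stage: freeze pretrained AlexNet features, retrain
# only a new 4-class head (molar / premolar / canine / incisor).
import torch
import torch.nn as nn
from torchvision import models

net = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
for p in net.parameters():
    p.requires_grad = False                 # freeze the pretrained features
net.classifier[6] = nn.Linear(4096, 4)      # new 4-class output layer

optimizer = torch.optim.Adam(net.classifier[6].parameters(), lr=1e-4)
images = torch.rand(8, 3, 224, 224)         # placeholder tooth images
loss = nn.functional.cross_entropy(net(images), torch.randint(0, 4, (8,)))
loss.backward()
optimizer.step()
```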
Collapse
Affiliation(s)
- Sathya B
- Department of Electrical and Electronics Engineering, PSG College of Technology, Coimbatore, Tamilnadu, 641 004, India.
| | - Neelaveni R
- Department of Electrical and Electronics Engineering, PSG College of Technology, Coimbatore, Tamilnadu, 641 004, India.
| |
Collapse
|
140
|
ROI-based feature learning for efficient true positive prediction using convolutional neural network for lung cancer diagnosis. Neural Comput Appl 2020. [DOI: 10.1007/s00521-020-04787-w] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
|
141
|
Kwon JH, Cho GH. An examination of the intersection environment associated with perceived crash risk among school-aged children: using street-level imagery and computer vision. ACCIDENT; ANALYSIS AND PREVENTION 2020; 146:105716. [PMID: 32827845 DOI: 10.1016/j.aap.2020.105716] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/31/2019] [Revised: 07/31/2020] [Accepted: 08/03/2020] [Indexed: 06/11/2023]
Abstract
While computer vision techniques and big data from street-level imagery are attracting increasing attention, the "black-box" nature of deep learning models hinders the active application of these techniques in traffic safety research. To address this issue, we present a semantic scene labeling approach that leverages wide-coverage street-level imagery to explore the association between built environment characteristics and perceived crash risk at 533 intersections. The environmental attributes were measured at eye level using scene segmentation and object detection algorithms, and the intersections were classified into one of four typologies using the k-means clustering method. Data on perceived crash risk were collected from a questionnaire administered to 799 children 10 to 12 years old. Our results showed that environmental features derived from deep learning algorithms, including the proportional area of sky and roadway, were significantly associated with perceived crash risk among school-aged children. In particular, road width had a dominant influence on risk perception. The findings provide information useful for designing appropriate and proactive interventions that may reduce the risk of crashes at intersections.
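The clustering step can be reproduced in miniature: per-intersection proportions of scene classes (e.g., sky, roadway, building) are grouped into four typologies with k-means. The feature set below is a random placeholder, not the study's data.

```python
# Group 533 intersections into four typologies by their eye-level scene
# composition, mirroring the k-means step described above.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.random((533, 3))            # 533 intersections x 3 proportions
typology = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
print(np.bincount(typology))               # intersections per typology
```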
Collapse
Affiliation(s)
- Jae-Hong Kwon
- School of Urban and Environmental Engineering, Ulsan National Institute of Science and Technology, South Korea.
| | - Gi-Hyoug Cho
- School of Urban and Environmental Engineering, Ulsan National Institute of Science and Technology, 50 UNIST-gil, Uljugun, Ulsan, 44949, South Korea.
| |
Collapse
|
142
|
Abstract
Artificial intelligence (AI) is a powerful tool for image analysis that is increasingly being evaluated by radiology professionals. However, because these methods were developed for the analysis of nonmedical image data, and because the data structures in radiology departments are not "AI ready", implementing AI in radiology is not straightforward. The purpose of this review is to guide the reader through the pipeline of an AI project for automated image analysis in radiology and thereby encourage its implementation in radiology departments. At the same time, this review aims to enable readers to critically appraise articles on AI-based software in radiology.
Collapse
|
143
|
Saba T. Recent advancement in cancer detection using machine learning: Systematic survey of decades, comparisons and challenges. J Infect Public Health 2020; 13:1274-1289. [DOI: 10.1016/j.jiph.2020.06.033] [Citation(s) in RCA: 73] [Impact Index Per Article: 14.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2020] [Revised: 06/21/2020] [Accepted: 06/28/2020] [Indexed: 12/24/2022] Open
|
144
|
Liang S, Thung KH, Nie D, Zhang Y, Shen D. Multi-View Spatial Aggregation Framework for Joint Localization and Segmentation of Organs at Risk in Head and Neck CT Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:2794-2805. [PMID: 32091997 DOI: 10.1109/tmi.2020.2975853] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Accurate segmentation of organs at risk (OARs) from head and neck (H&N) CT images is crucial for effective H&N cancer radiotherapy. However, the existing deep learning methods are often not trained in an end-to-end fashion, i.e., they independently predetermine the regions of target organs before organ segmentation, causing limited information sharing between related tasks and thus leading to suboptimal segmentation results. Furthermore, when conventional segmentation network is used to segment all the OARs simultaneously, the results often favor big OARs over small OARs. Thus, the existing methods often train a specific model for each OAR, ignoring the correlation between different segmentation tasks. To address these issues, we propose a new multi-view spatial aggregation framework for joint localization and segmentation of multiple OARs using H&N CT images. The core of our framework is a proposed region-of-interest (ROI)-based fine-grained representation convolutional neural network (CNN), which is used to generate multi-OAR probability maps from each 2D view (i.e., axial, coronal, and sagittal view) of CT images. Specifically, our ROI-based fine-grained representation CNN (1) unifies the OARs localization and segmentation tasks and trains them in an end-to-end fashion, and (2) improves the segmentation results of various-sized OARs via a novel ROI-based fine-grained representation. Our multi-view spatial aggregation framework then spatially aggregates and assembles the generated multi-view multi-OAR probability maps to segment all the OARs simultaneously. We evaluate our framework using two sets of H&N CT images and achieve competitive and highly robust segmentation performance for OARs of various sizes.
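The aggregation step can be sketched independently of the networks: per-view 2D predictions are restacked into a common (z, y, x) orientation and fused, here by simple averaging as one plausible reading of "spatially aggregates and assembles"; the paper's aggregation module is learned rather than a fixed mean, and all shapes below are placeholders.

```python
# Fuse axial, coronal, and sagittal probability maps into one volume.
import numpy as np

vol_shape = (64, 128, 128)                       # (z, y, x)
axial = np.random.rand(*vol_shape)               # stacked axial predictions
coronal = np.random.rand(vol_shape[1], vol_shape[0], vol_shape[2])   # (y, z, x)
sagittal = np.random.rand(vol_shape[2], vol_shape[0], vol_shape[1])  # (x, z, y)

# Bring every view back to (z, y, x) before fusing.
coronal_zyx = coronal.transpose(1, 0, 2)
sagittal_zyx = sagittal.transpose(1, 2, 0)
fused = (axial + coronal_zyx + sagittal_zyx) / 3.0  # aggregated probability
```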
Collapse
|
145
|
Ma J, Yu J, Liu S, Chen L, Li X, Feng J, Chen Z, Zeng S, Liu X, Cheng S. PathSRGAN: Multi-Supervised Super-Resolution for Cytopathological Images Using Generative Adversarial Network. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:2920-2930. [PMID: 32175859 DOI: 10.1109/tmi.2020.2980839] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
In the cytopathology screening of cervical cancer, high-resolution digital cytopathological slides are critical for the interpretation of lesion cells. However, the acquisition of high-resolution digital slides requires high-end imaging equipment and long scanning times. In this study, we propose a GAN-based progressive multi-supervised super-resolution model called PathSRGAN (pathology super-resolution GAN) to learn the mapping between real low-resolution and high-resolution cytopathological images. With respect to the characteristics of cytopathological images, we design a new two-stage generator architecture with two supervision terms. The generator of the first stage corresponds to a densely-connected U-Net and achieves 4× to 10× super-resolution. The generator of the second stage corresponds to a residual-in-residual DenseBlock and achieves 10× to 20× super-resolution. The designed generator alleviates the difficulty in learning the mapping from 4× images to 20× images caused by the large numerical aperture difference, and generates high-quality high-resolution images. We conduct a series of comparison experiments and demonstrate the superiority of PathSRGAN over mainstream CNN-based and GAN-based super-resolution methods on cytopathological images. Simultaneously, the reconstructed high-resolution images produced by PathSRGAN effectively improve the accuracy of computer-aided diagnosis tasks. It is anticipated that the study will help increase the penetration rate of cytopathology screening in remote and impoverished areas that lack high-end imaging equipment.
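GAN-based SR generators of this kind typically upsample with sub-pixel convolution. The block below (a convolution followed by PixelShuffle) is a generic example of that mechanism, not PathSRGAN's actual two-stage generator; channel sizes are assumptions.

```python
# Generic super-resolution upsampling block: conv -> sub-pixel rearrangement.
import torch
import torch.nn as nn

class UpsampleBlock(nn.Module):
    def __init__(self, channels=64, scale=2):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels * scale ** 2, 3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)   # (C*s^2, H, W) -> (C, sH, sW)

    def forward(self, x):
        return torch.relu(self.shuffle(self.conv(x)))

x = torch.rand(1, 64, 32, 32)                   # low-resolution feature map
print(UpsampleBlock()(x).shape)                 # torch.Size([1, 64, 64, 64])
```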
Collapse
|
146
|
Fu Y, Xue P, Ji H, Cui W, Dong E. Deep model with Siamese network for viable and necrotic tumor regions assessment in osteosarcoma. Med Phys 2020; 47:4895-4905. [DOI: 10.1002/mp.14397] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/26/2019] [Revised: 07/01/2020] [Accepted: 07/10/2020] [Indexed: 01/06/2023] Open
Affiliation(s)
- Yu Fu
- Department of Mechanical, Electrical and Information Engineering, Shandong University, Weihai 264209, China
| | - Peng Xue
- Department of Mechanical, Electrical and Information Engineering, Shandong University, Weihai 264209, China
| | - Huizhong Ji
- Department of Mechanical, Electrical and Information Engineering, Shandong University, Weihai 264209, China
| | - Wentao Cui
- Department of Mechanical, Electrical and Information Engineering, Shandong University, Weihai 264209, China
| | - Enqing Dong
- Department of Mechanical, Electrical and Information Engineering, Shandong University, Weihai 264209, China
| |
Collapse
|
147
|
Abstract
In recent years, deep learning techniques, and in particular convolutional neural network (CNN) methods, have demonstrated superior performance in image classification and visual object recognition. In this work, we propose the classification of four types of liver lesions, namely hepatocellular carcinoma, metastases, hemangiomas, and healthy tissue, using a convolutional neural network with a succinct model called FireNet. We improved classification speed and decreased the model size and number of parameters by using Fire modules from SqueezeNet. We added bypass connections around the Fire modules to learn a residual function between input and output and to address the vanishing gradient problem. We also propose a new particle swarm optimization (NPSO) variant to optimize the network parameters and further boost the performance of the proposed FireNet. The experimental results show that FireNet has 9.5 times fewer parameters than GoogLeNet, 51.6 times fewer than AlexNet, and 75.8 times fewer than ResNet. FireNet's model size is 16.6 times smaller than GoogLeNet's, 75 times smaller than AlexNet's, and 76.6 times smaller than ResNet's. The final accuracy of our proposed FireNet model was 89.2%.
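A sketch of the central building block, a SqueezeNet-style Fire module with the bypass (residual) connection described above, assuming PyTorch and illustrative channel sizes:

```python
# Fire module (squeeze 1x1 conv, then parallel 1x1/3x3 expand convs) with a
# bypass connection around it; the residual sum requires matching channels.
import torch
import torch.nn as nn

class FireWithBypass(nn.Module):
    def __init__(self, channels=128, squeeze=16):
        super().__init__()
        self.squeeze = nn.Conv2d(channels, squeeze, kernel_size=1)
        self.expand1 = nn.Conv2d(squeeze, channels // 2, kernel_size=1)
        self.expand3 = nn.Conv2d(squeeze, channels // 2, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        s = self.relu(self.squeeze(x))
        out = torch.cat([self.relu(self.expand1(s)),
                         self.relu(self.expand3(s))], dim=1)
        return out + x                      # bypass: residual connection

x = torch.rand(2, 128, 56, 56)
print(FireWithBypass()(x).shape)            # channels preserved: (2, 128, 56, 56)
```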
Collapse
|
148
|
Chen S, Han Y, Lin J, Zhao X, Kong P. Pulmonary nodule detection on chest radiographs using balanced convolutional neural network and classic candidate detection. Artif Intell Med 2020; 107:101881. [DOI: 10.1016/j.artmed.2020.101881] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2019] [Revised: 04/05/2020] [Accepted: 05/12/2020] [Indexed: 12/21/2022]
|
149
|
Moon WK, Lee YW, Ke HH, Lee SH, Huang CS, Chang RF. Computer-aided diagnosis of breast ultrasound images using ensemble learning from convolutional neural networks. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 190:105361. [PMID: 32007839 DOI: 10.1016/j.cmpb.2020.105361] [Citation(s) in RCA: 79] [Impact Index Per Article: 15.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/12/2019] [Revised: 01/14/2020] [Accepted: 01/24/2020] [Indexed: 05/11/2023]
Abstract
Breast ultrasound with computer-aided diagnosis (CAD) has been used to classify tumors as benign or malignant. However, conventional CAD software has several problems: handcrafted features are hard to design, and overfitting is difficult to detect in conventional CAD systems. In our study, we propose a CAD system for tumor diagnosis using an image fusion method combined with different image content representations and an ensemble of different CNN architectures on US images. The CNN-based methods used in this study include VGGNet, ResNet, and DenseNet. Our private dataset contained a total of 1687 tumors, including 953 benign and 734 malignant tumors. The accuracy, sensitivity, specificity, precision, F1 score, and AUC of the proposed method were 91.10%, 85.14%, 95.77%, 94.03%, 89.36%, and 0.9697, respectively. The open dataset (BUSI) contained a total of 697 tumors, including 437 benign lesions and 210 malignant tumors, plus 133 normal images. On this dataset, the accuracy, sensitivity, specificity, precision, F1 score, and AUC of the proposed method were 94.62%, 92.31%, 95.60%, 90%, 91.14%, and 0.9711. In conclusion, the results indicate that different image content representations affect the prediction performance of the CAD system, that more image information improves prediction performance, and that the tumor shape feature can improve the diagnostic effect.
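The ensembling step itself is simple to sketch: average the class probabilities of the three backbone families named above. The snippet uses untrained torchvision models and random input purely to show the mechanics; the paper additionally fuses different image content representations before this step.

```python
# Average the softmax outputs of VGG, ResNet, and DenseNet backbones for a
# two-class (benign vs. malignant) decision.
import torch
from torchvision import models

nets = [models.vgg16(num_classes=2), models.resnet18(num_classes=2),
        models.densenet121(num_classes=2)]
x = torch.rand(4, 3, 224, 224)                  # placeholder ultrasound batch
with torch.no_grad():
    probs = torch.stack([net(x).softmax(dim=1) for net in nets]).mean(dim=0)
prediction = probs.argmax(dim=1)                # benign (0) vs. malignant (1)
```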
Collapse
Affiliation(s)
- Woo Kyung Moon
- Department of Radiology, Seoul National University Hospital and Seoul National University College of Medicine, Seoul 110-744, South Korea
| | - Yan-Wei Lee
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan, ROC
| | - Hao-Hsiang Ke
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan, ROC
| | - Su Hyun Lee
- Department of Radiology, Seoul National University Hospital and Seoul National University College of Medicine, Seoul 110-744, South Korea
| | - Chiun-Sheng Huang
- Department of Surgery, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan, ROC
| | - Ruey-Feng Chang
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan, ROC; Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan, ROC; Graduate Institute of Network and Multimedia, National Taiwan University, Taipei, Taiwan, ROC; MOST Joint Research Center for AI Technology and All Vista Healthcare, Taipei, Taiwan, ROC.
| |
Collapse
|
150
|
Dhengre N, Sinha S, Chinni B, Dogra V, Rao N. Computer aided detection of prostate cancer using multiwavelength photoacoustic data with convolutional neural network. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2020.101952] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
|