51. Zhu Z, Mittendorf A, Shropshire E, Allen B, Miller C, Bashir MR, Mazurowski MA. 3D Pyramid Pooling Network for Abdominal MRI Series Classification. IEEE Trans Pattern Anal Mach Intell 2022; 44:1688-1698. [PMID: 33112740] [DOI: 10.1109/tpami.2020.3033990]
Abstract
Recognizing and organizing the different series in an MRI examination is important for both clinical review and research, but it is poorly addressed by the current generation of picture archiving and communication systems (PACSs) and post-processing workstations. In this paper, we study the use of deep convolutional neural networks for automatic classification of abdominal MRI series into one of many series types. Our contributions are three-fold. First, we created a large abdominal MRI dataset containing 3717 MRI series (188,665 individual images) derived from liver examinations and covering 30 different series types. The dataset was annotated by consensus readings from two radiologists, and both the MRIs and the annotations were made publicly available. Second, we proposed a 3D pyramid pooling network, which elegantly handles abdominal MRI series whose dimensions all vary, and achieved state-of-the-art classification performance. Third, we performed the first comparison between the algorithm and radiologists on an additional dataset, yielding several meaningful findings.
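The key mechanism, producing a fixed-length representation from series whose depth, height, and width all vary, can be illustrated with a 3D spatial pyramid pooling layer. A minimal PyTorch sketch; the pooling levels and tensor sizes are illustrative assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class PyramidPool3d(nn.Module):
    """3D spatial pyramid pooling: pool the feature map to several fixed
    grid sizes and concatenate, yielding a fixed-length vector for any
    input depth/height/width."""
    def __init__(self, levels=(1, 2, 4)):   # levels are an assumption
        super().__init__()
        self.pools = nn.ModuleList(nn.AdaptiveAvgPool3d(l) for l in levels)

    def forward(self, x):                   # x: (B, C, D, H, W), any D/H/W
        return torch.cat([p(x).flatten(1) for p in self.pools], dim=1)

feats = torch.randn(2, 64, 9, 28, 20)       # features from a variably sized series
vec = PyramidPool3d()(feats)                # -> (2, 64 * (1 + 8 + 64)) = (2, 4672)
print(vec.shape)
```

Because the output length depends only on the channel count and pooling levels, the same classifier head can serve series with any number of slices.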
52. Shreve JT, Khanani SA, Haddad TC. Artificial Intelligence in Oncology: Current Capabilities, Future Opportunities, and Ethical Considerations. Am Soc Clin Oncol Educ Book 2022; 42:1-10. [PMID: 35687826] [DOI: 10.1200/edbk_350652]
Abstract
The promise of highly personalized oncology care using artificial intelligence (AI) technologies has been forecast since the emergence of the field. Cumulative advances across the science are bringing this promise to realization, including the refinement of machine learning and deep learning algorithms; expansion in the depth and variety of databases, including multiomics; and the decreased cost of massively parallelized computational power. Examples of successful clinical applications of AI can be found throughout the cancer continuum and in multidisciplinary practice, with computer vision-assisted image analysis in particular having several U.S. Food and Drug Administration-approved uses. Techniques with emerging clinical utility include whole-blood multicancer detection from deep sequencing, virtual biopsies, natural language processing to infer health trajectories from medical notes, and advanced clinical decision support systems that combine genomics and clinomics. Substantial issues have delayed broad adoption: data transparency and interpretability suffer from AI's "black box" mechanism, and intrinsic bias against underrepresented persons limits the reproducibility of AI models and perpetuates health care disparities. Midfuture projections of AI maturation involve increasing a model's complexity with multimodal data elements to better approximate an organic system. Far-future projections include living databases that accumulate all aspects of a person's health as discrete data elements; these will fuel highly sophisticated models that can tailor treatment selection, dose determination, surveillance modality and schedule, and more. The field of AI has seen a historical dichotomy between its proponents and detractors. The successful development of recent applications, and continued investment in prospective validation that defines their impact on multilevel outcomes, has established a momentum of accelerated progress.
Affiliation(s)
- Tufia C Haddad
- Department of Oncology, Mayo Clinic, Rochester, MN; Center for Digital Health, Mayo Clinic, Rochester, MN

53. Reliable detection of lymph nodes in whole pelvic for radiotherapy. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103501]
54. Zhou J, Xin H. Emerging artificial intelligence methods for fighting lung cancer: a survey. Clinical eHealth 2022. [DOI: 10.1016/j.ceh.2022.04.001]
55. Chen X, Li Y, Yao L, Adeli E, Zhang Y, Wang X. Generative Adversarial U-Net for Domain-free Few-shot Medical Diagnosis. Pattern Recognit Lett 2022. [DOI: 10.1016/j.patrec.2022.03.022]
56. Li S, Xie Y, Wang G, Zhang L, Zhou W. Adaptive multimodal fusion with attention guided deep supervision net for grading hepatocellular carcinoma. IEEE J Biomed Health Inform 2022; 26:4123-4131. [PMID: 35344499] [DOI: 10.1109/jbhi.2022.3161466]
Abstract
Multimodal medical imaging plays a crucial role in the diagnosis and characterization of lesions. However, challenges remain in lesion characterization based on multimodal feature fusion. First, current fusion methods have not thoroughly studied the relative importance of the different modalities. In addition, existing multimodal fusion approaches cannot quantify the contribution of each modality's information to critical decision-making. In this study, we propose an adaptive multimodal fusion method with an attention-guided deep supervision net for grading hepatocellular carcinoma (HCC). Specifically, our proposed framework comprises two modules: attention-based adaptive feature fusion and an attention-guided deep supervision net. The former uses the attention mechanism at the feature-fusion level to generate weights for adaptive feature concatenation, balancing the importance of features across modalities. The latter uses the weights generated by the attention mechanism as the coefficients of the per-modality losses, balancing the contribution of each modality to the total loss function. Experimental results on grading clinical HCC with contrast-enhanced MR demonstrated the effectiveness of the proposed method, with a significant performance improvement over existing fusion methods. In addition, the attention weights used in multimodal fusion proved valuable for clinical interpretation.
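The two modules can be pictured as one attention vector doing double duty: weighting the concatenated features and weighting each modality's loss. A minimal PyTorch sketch under assumed feature dimensions and head design (not the authors' released code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFusion(nn.Module):
    def __init__(self, dim=128, n_modals=3, n_classes=2):
        super().__init__()
        self.score = nn.Linear(dim, 1)   # scores each modality's feature vector
        self.modal_heads = nn.ModuleList(
            nn.Linear(dim, n_classes) for _ in range(n_modals))
        self.fused_head = nn.Linear(dim * n_modals, n_classes)

    def forward(self, feats):            # feats: list of per-modality (B, dim)
        w = F.softmax(torch.cat([self.score(f) for f in feats], 1), dim=1)
        fused = torch.cat([w[:, i:i + 1] * f for i, f in enumerate(feats)], 1)
        modal_logits = [h(f) for h, f in zip(self.modal_heads, feats)]
        return self.fused_head(fused), modal_logits, w

def supervised_loss(fused_logits, modal_logits, w, y):
    # the attention weights are reused as per-modality loss coefficients
    loss = F.cross_entropy(fused_logits, y)
    for i, logits in enumerate(modal_logits):
        loss = loss + (w[:, i] * F.cross_entropy(logits, y, reduction="none")).mean()
    return loss

feats = [torch.randn(4, 128) for _ in range(3)]   # e.g. three MR phases
y = torch.randint(0, 2, (4,))
fused, modal, w = AttentionFusion()(feats)
print(supervised_loss(fused, modal, w, y))
```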
57. Instance Importance-Aware Graph Convolutional Network for 3D Medical Diagnosis. Med Image Anal 2022; 78:102421. [DOI: 10.1016/j.media.2022.102421]
58. Mubarak AS, Serte S, Al‐Turjman F, Ameen ZS, Ozsoz M. Local binary pattern and deep learning feature extraction fusion for COVID-19 detection on computed tomography images. Expert Systems 2022; 39:e12842. [PMID: 34898796] [PMCID: PMC8646483] [DOI: 10.1111/exsy.12842]
Abstract
Coronavirus disease 2019 (COVID-19), first reported in December 2019, was declared a pandemic by the World Health Organization (WHO) in March 2020. It is important to identify suspected patients as early as possible in order to control the spread of the virus, improve the efficacy of medical treatment, and, as a result, lower the mortality rate. The standard method of detecting COVID-19 is the reverse-transcription polymerase chain reaction (RT-PCR); however, the process is hampered by a scarcity of RT-PCR kits and by its complexity. Medical imaging combined with machine learning and deep learning has proved to be among the most efficient approaches to detecting respiratory diseases, but machine learning requires features to be extracted manually, while deep learning performance depends on the network architecture and degrades with limited data. In this study, handcrafted local binary pattern (LBP) features and features automatically extracted by seven deep learning models were used to train support vector machine (SVM) and K-nearest neighbour (KNN) classifiers. To improve classifier performance, a concatenation of the LBP and deep learning features was proposed for training the KNN and SVM. On the performance criteria, the VGG-19 + LBP combination achieved the highest accuracy of 99.4%, and the SVM and KNN classifiers trained on the hybrid features outperformed state-of-the-art models. This shows that the proposed features can improve classifier performance in detecting COVID-19.
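As a rough illustration of the hybrid-feature idea, the sketch below concatenates a uniform-LBP histogram with VGG-19 global-average-pooled features and trains an SVM. The preprocessing choices (P, R, input size, linear kernel) are assumptions rather than the paper's settings, and the placeholder arrays stand in for real CT slices and labels:

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC
from tensorflow.keras.applications import VGG19
from tensorflow.keras.applications.vgg19 import preprocess_input

cnn = VGG19(weights="imagenet", include_top=False, pooling="avg")  # 512-d output

def lbp_histogram(gray, P=8, R=1):
    codes = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist                                   # (P + 2,) texture descriptor

def hybrid_features(gray_batch):                  # (N, 224, 224) uint8 slices
    rgb = np.repeat(gray_batch[..., None], 3, axis=-1).astype("float32")
    deep = cnn.predict(preprocess_input(rgb), verbose=0)    # (N, 512)
    lbp = np.stack([lbp_histogram(g) for g in gray_batch])  # (N, 10)
    return np.hstack([deep, lbp])                 # concatenated hybrid feature

# Placeholder data; substitute real CT slices and COVID/non-COVID labels.
X = (np.random.rand(8, 224, 224) * 255).astype(np.uint8)
y = np.array([0, 1] * 4)
clf = SVC(kernel="linear").fit(hybrid_features(X), y)
```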
Affiliation(s)
- Auwalu Saleh Mubarak
- Department of Electrical and Electronics Engineering, Near East University, Mersin, Turkey
- Sertan Serte
- Department of Electrical and Electronics Engineering, Near East University, Mersin, Turkey
- Fadi Al‐Turjman
- Department of Artificial Intelligence, Research Center for AI and IoT, Near East University, Mersin, Turkey
- Mehmet Ozsoz
- Department of Biomedical Engineering, Near East University, Mersin, Turkey
59. Burati M, Tagliabue F, Lomonaco A, Chiarelli M, Zago M, Cioffi G, Cioffi U. Artificial intelligence as a future in cancer surgery. Artif Intell Cancer 2022; 3:11-16. [DOI: 10.35713/aic.v3.i1.11]
Affiliation(s)
- Morena Burati
- Department of Robotic and Emergency Surgery, Ospedale A Manzoni, ASST Lecco, Lecco 23900, Italy
- Fulvio Tagliabue
- Department of Robotic and Emergency Surgery, Ospedale A Manzoni, ASST Lecco, Lecco 23900, Italy
- Adriana Lomonaco
- Department of Robotic and Emergency Surgery, Ospedale A Manzoni, ASST Lecco, Lecco 23900, Italy
- Marco Chiarelli
- Department of Robotic and Emergency Surgery, Ospedale A Manzoni, ASST Lecco, Lecco 23900, Italy
- Mauro Zago
- Department of Robotic and Emergency Surgery, Ospedale A Manzoni, ASST Lecco, Lecco 23900, Italy
- Gerardo Cioffi
- Department of Sciences and Technologies, Unisannio, Benevento 82100, Italy
- Ugo Cioffi
- Department of Surgery, University of Milan, Milano 20122, Italy
60. Küstner T, Hepp T, Seith F. Multiparametric Oncologic Hybrid Imaging: Machine Learning Challenges and Opportunities. Rofo 2022; 194:605-612. [PMID: 35211929] [DOI: 10.1055/a-1718-4128]
Abstract
BACKGROUND Machine learning (ML) is considered an important technology for future data analysis in health care.
METHODS The inherently technology-driven fields of diagnostic radiology and nuclear medicine will both benefit from ML in terms of image acquisition and reconstruction. Within the next few years, this will lead to accelerated image acquisition, improved image quality, a reduction of motion artifacts and, for PET imaging, reduced radiation exposure and new approaches for attenuation correction. Furthermore, ML has the potential to support decision making through the combined analysis of data derived from different modalities, especially in oncology. In this context, we see great potential for ML in multiparametric hybrid imaging and the development of imaging biomarkers.
RESULTS AND CONCLUSION In this review, we describe the basics of ML, present approaches to hybrid imaging with MRI, CT, and PET, and discuss the specific challenges involved and the steps ahead to make ML a diagnostic and clinical tool in the future.
KEY POINTS ML provides a viable clinical solution for the reconstruction, processing, and analysis of hybrid imaging obtained from MRI, CT, and PET.
Affiliation(s)
- Thomas Küstner
- Medical Image and Data Analysis (MIDAS.lab), Department of Diagnostic and Interventional Radiology, University Hospitals Tübingen, Germany
- Tobias Hepp
- Medical Image and Data Analysis (MIDAS.lab), Department of Diagnostic and Interventional Radiology, University Hospitals Tübingen, Germany
- Ferdinand Seith
- Department of Diagnostic and Interventional Radiology, University Hospitals Tübingen, Germany
61. COVID-19 Pneumonia Classification Based on NeuroWavelet Capsule Network. Healthcare (Basel) 2022; 10:422. [PMID: 35326900] [PMCID: PMC8949056] [DOI: 10.3390/healthcare10030422]
Abstract
Since it was first reported, coronavirus disease 2019 (COVID-19) has spread rapidly around the globe. COVID-19 must be diagnosed as soon as possible in order to control the disease and provide proper care to patients. The chest X-ray (CXR) has been identified as a useful diagnostic tool, but the disease outbreak has put great pressure on the radiologists reading the scans, which could give rise to fatigue-related misdiagnosis. Reliable automatic classification algorithms can be extremely beneficial; however, they typically depend on a large amount of COVID-19 data for training, which is difficult to obtain on short notice. Therefore, we propose a novel neurowavelet capsule network for COVID-19 classification. First, we introduce multi-resolution analysis with a discrete wavelet transform to filter noisy and inconsistent information from the CXR data, improving the robustness of the network's feature extraction. Second, the discrete wavelet transform's multi-resolution analysis also performs a sub-sampling operation that minimizes the loss of spatial details, thereby enhancing overall classification performance. We examined the proposed model on a publicly sourced dataset of pneumonia-related illnesses, including confirmed COVID-19 cases and healthy CXR images. The proposed method achieves an accuracy of 99.6%, sensitivity of 99.2%, specificity of 99.1%, and precision of 99.7%. According to the experimental results, our approach achieves state-of-the-art performance useful for COVID-19 screening. This paradigm should contribute significantly to the battle against COVID-19 and other diseases.
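The wavelet front end is essentially one level of a 2D discrete wavelet transform whose approximation band serves as a denoised, sub-sampled input. A minimal sketch with PyWavelets; the choice of the db2 wavelet and the random stand-in image are assumptions:

```python
import numpy as np
import pywt

def dwt_approximation(img, wavelet="db2"):
    """One level of 2D DWT. The approximation band (cA) is a roughly
    half-resolution, noise-suppressed version of the chest X-ray; the
    detail bands (cH, cV, cD) carry edge and texture information."""
    cA, (cH, cV, cD) = pywt.dwt2(img.astype("float32"), wavelet)
    return cA

cxr = np.random.rand(256, 256)   # stand-in for a normalized CXR
low = dwt_approximation(cxr)
print(low.shape)                 # roughly half the spatial size of the input
```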
62. Jones MA, Faiz R, Qiu Y, Zheng B. Improving mammography lesion classification by optimal fusion of handcrafted and deep transfer learning features. Phys Med Biol 2022; 67. [PMID: 35130517] [PMCID: PMC8935657] [DOI: 10.1088/1361-6560/ac5297]
Abstract
Objective. Handcrafted radiomics features and features generated automatically by deep learning models are commonly used to develop computer-aided diagnosis (CAD) schemes for medical images. The objective of this study is to test the hypothesis that handcrafted and automated features contain complementary classification information and that fusing the two types of features can improve CAD performance.
Approach. We retrospectively assembled a dataset involving 1535 lesions (740 malignant and 795 benign). Regions of interest (ROIs) surrounding the suspicious lesions were extracted, and two types of features were computed from each ROI. The first set comprises 40 radiomics features; the second comprises automated features computed from a VGG16 network using a transfer learning method. A single-channel ROI image is converted into a three-channel pseudo-ROI image by stacking the original image, a bilateral-filtered image, and a histogram-equalized image. Two VGG16 models, one using pseudo-ROIs and one using three stacked copies of the original ROI without pre-processing, are used to extract automated features. Five linear support vector machines (SVMs) are built using optimally selected feature vectors from the handcrafted features, the two sets of VGG16-generated automated features, and the fusion of the handcrafted features with each set of automated features, respectively.
Main Results. Under 10-fold cross-validation, the fusion SVM using pseudo-ROIs yields the highest lesion classification performance, with an area under the ROC curve (AUC = 0.756 ± 0.042) significantly higher than those of the other SVMs trained on handcrafted or automated features alone (p < 0.05).
Significance. This study demonstrates that both handcrafted and automated features contain useful information for classifying breast lesions, and that fusing the two types of features can further increase CAD performance.
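The three-channel pseudo-ROI construction is straightforward to reproduce. A sketch with OpenCV; the bilateral-filter parameters are illustrative assumptions, not the paper's settings:

```python
import cv2
import numpy as np

def pseudo_roi(roi_u8):
    """Stack the original ROI, a bilateral-filtered copy, and a
    histogram-equalized copy into 3 channels so a pretrained RGB
    network such as VGG16 can ingest a single-channel mammogram ROI."""
    smoothed = cv2.bilateralFilter(roi_u8, d=9, sigmaColor=75, sigmaSpace=75)
    equalized = cv2.equalizeHist(roi_u8)
    return np.dstack([roi_u8, smoothed, equalized])   # (H, W, 3)

roi = (np.random.rand(128, 128) * 255).astype(np.uint8)  # stand-in lesion ROI
print(pseudo_roi(roi).shape)
```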
Affiliation(s)
- Meredith A. Jones
- School of Biomedical Engineering, University of Oklahoma, Norman, OK 73019, USA
- Rowzat Faiz
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Yuchen Qiu
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Bin Zheng
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
63. Yin HL, Jiang Y, Huang WJ, Li SH, Lin GW. A Magnetic Resonance Angiography-Based Study Comparing Machine Learning and Clinical Evaluation: Screening Intracranial Regions Associated with the Hemorrhagic Stroke of Adult Moyamoya Disease. J Stroke Cerebrovasc Dis 2022; 31:106382. [PMID: 35183983] [DOI: 10.1016/j.jstrokecerebrovasdis.2022.106382]
Abstract
OBJECTIVES Patients with moyamoya disease who suffer hemorrhagic stroke usually have a poor prognosis. This study aimed to determine whether hemorrhagic moyamoya disease can be distinguished from MRA images using transfer deep learning, and to screen for intracranial regions in MRA images that carry rich distinguishing information.
MATERIALS AND METHODS A total of 116 adult patients with bilateral moyamoya disease suffering hemorrhagic or ischemic complications were retrospectively screened. Based on original MRA images at the levels of the basal cistern, basal ganglia, and centrum semiovale, we adopted a pretrained ResNet18 to build three models for differentiating hemorrhagic moyamoya disease. Grad-CAM was applied to visualize the regions of interest.
RESULTS For the test set, the differentiation accuracies of the models for the basal cistern, basal ganglia, and centrum semiovale were 93.3%, 91.5%, and 86.4%, respectively. Visualization of the regions of interest demonstrated that the models focused on the deep and periventricular white matter and on abnormal collateral vessels in hemorrhagic moyamoya disease.
CONCLUSION A transfer learning model based on MRA images of the basal cistern and basal ganglia showed good ability to differentiate between patients with hemorrhagic and ischemic moyamoya disease. The deep and periventricular white matter and the collateral vessels at the levels of the basal cistern and basal ganglia may contain rich distinguishing information.
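The two ingredients, a fine-tuned ResNet18 and Grad-CAM visualization, can be wired together in a few lines of PyTorch. This is a generic sketch, not the authors' code; the binary head, target layer, and input tensor are assumptions:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)   # hemorrhagic vs. ischemic MMD

# Hook the last convolutional stage to capture activations and gradients.
acts, grads = {}, {}
model.layer4.register_forward_hook(lambda m, i, o: acts.update(v=o.detach()))
model.layer4.register_full_backward_hook(
    lambda m, gi, go: grads.update(v=go[0].detach()))

x = torch.randn(1, 3, 224, 224)    # MRA slice replicated across 3 channels
score = model(x)[0, 1]             # logit of the "hemorrhagic" class
score.backward()

w = grads["v"].mean(dim=(2, 3), keepdim=True)   # global-average-pool gradients
cam = torch.relu((w * acts["v"]).sum(dim=1))    # coarse class-activation map
print(cam.shape)                                # (1, 7, 7); upsample to overlay
```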
Affiliation(s)
- Hao-Lin Yin
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, No. 221 Yan'anxi Road, Jing'an District, Shanghai 200040, China
- Yu Jiang
- Department of Radiology, West China Hospital, Sichuan University, 37# Guo Xue Xiang, Chengdu, Sichuan 610041, China
- Wen-Jun Huang
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, No. 221 Yan'anxi Road, Jing'an District, Shanghai 200040, China
- Shi-Hong Li
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, No. 221 Yan'anxi Road, Jing'an District, Shanghai 200040, China
- Guang-Wu Lin
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, No. 221 Yan'anxi Road, Jing'an District, Shanghai 200040, China
65. Abdou MA. Literature review: efficient deep neural networks techniques for medical image analysis. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-06960-9]
66. Ma J, Liu S, Cheng S, Chen R, Liu X, Chen L, Zeng S. STSRNet: Self-Texture Transfer Super-Resolution and Refocusing Network. IEEE Trans Med Imaging 2022; 41:383-393. [PMID: 34520352] [DOI: 10.1109/tmi.2021.3112923]
Abstract
Biomedical microscopy images with high resolution (HR) and axial information can aid analysis and diagnosis. However, obtaining such images usually incurs additional time and economic cost, which makes it impractical in most scenarios. In this paper, we propose a novel Self-texture Transfer Super-resolution and Refocusing Network (STSRNet) to reconstruct HR multi-focal-plane (MFP) images from a single 2D low-resolution (LR) wide-field image, without relying on scanning or any special devices. The proposed STSRNet consists of three parts: a backbone module for extracting features, a self-texture transfer module for transferring and fusing features, and a flexible reconstruction module for SR and refocusing. Specifically, the self-texture transfer module is designed for images with self-similarity, such as cytological images; it searches for similar textures within the image and transfers them to aid MFP reconstruction. The reconstruction module is composed of multiple pluggable components, each responsible for a specific focal plane, so that SR and refocusing are performed for all focal planes at once to reduce computation. We conduct extensive experiments on cytological images, which show that MFP images reconstructed by STSRNet have richer details in both the axial and horizontal directions than the input images. The reconstructed MFP images also perform better than single 2D wide-field images on high-level tasks. The proposed method provides relatively high-quality MFP images when real MFP images cannot be obtained, greatly expanding the application potential of LR wide-field images. To further promote the development of this field, we have released our cytology dataset, named RSDC, for other researchers to use.
67. Chang CY, Buckless C, Yeh KJ, Torriani M. Automated detection and segmentation of sclerotic spinal lesions on body CTs using a deep convolutional neural network. Skeletal Radiol 2022; 51:391-399. [PMID: 34291325] [DOI: 10.1007/s00256-021-03873-x]
Abstract
PURPOSE To develop a deep convolutional neural network capable of detecting spinal sclerotic metastases on body CTs.
MATERIALS AND METHODS Our study was IRB-approved and HIPAA-compliant. Cases of confirmed sclerotic bone metastases on chest, abdomen, and pelvis CTs were identified. Images were manually segmented into 3 classes: background, normal bone, and sclerotic lesion(s). If multiple lesions were present on a slice, all were segmented. A total of 600 images were obtained, with a 90/10 training/testing split. Images were stored as 128 × 128 pixel grayscale, and the training dataset underwent a processing pipeline of histogram equalization and data augmentation. We trained our model from scratch in Keras/TensorFlow using an 80/20 training/validation split and a U-Net architecture (batch size 64, 100 epochs, dropout 0.25, initial learning rate 0.0001, sigmoid activation). We also tested the model's true-negative and false-positive rates on 1104 non-pathologic images. Global sensitivity measured detection of any lesion on a single image, local sensitivity and positive predictive value (PPV) measured detection of each lesion on a given image, and local specificity measured the false-positive rate in non-pathologic bone.
RESULTS Dice scores were 0.83 for lesion, 0.96 for non-pathologic bone, and 0.99 for background. Global sensitivity was 95% (57/60), local sensitivity was 92% (89/97), local PPV was 97% (89/92), and local specificity was 87% (958/1104).
CONCLUSION A deep convolutional neural network has the potential to assist in detecting sclerotic spinal metastases.
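The training recipe is specified precisely enough to sketch in Keras. The sketch below follows the stated hyperparameters (128 × 128 inputs, U-Net, batch size 64, 100 epochs, dropout 0.25, initial learning rate 1e-4, sigmoid activation), but the encoder widths, depth, and loss are assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def conv_block(x, n):
    x = layers.Conv2D(n, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(n, 3, padding="same", activation="relu")(x)
    return layers.Dropout(0.25)(x)                # dropout 0.25 per the paper

def build_unet(shape=(128, 128, 1), n_classes=3):  # background/bone/lesion
    inp = tf.keras.Input(shape)
    skips, x = [], inp
    for n in (32, 64, 128):                        # widths are assumptions
        x = conv_block(x, n); skips.append(x)
        x = layers.MaxPooling2D()(x)
    x = conv_block(x, 256)
    for n, s in zip((128, 64, 32), reversed(skips)):
        x = layers.Conv2DTranspose(n, 2, strides=2, padding="same")(x)
        x = conv_block(layers.Concatenate()([x, s]), n)
    out = layers.Conv2D(n_classes, 1, activation="sigmoid")(x)
    return models.Model(inp, out)

model = build_unet()
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy")
# model.fit(x_train, y_train, batch_size=64, epochs=100, validation_split=0.2)
```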
Affiliation(s)
- Connie Y Chang
- Division of Musculoskeletal Imaging and Intervention, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, 55 Fruit Street, YAW 6, Boston, MA, 02114, USA
- Colleen Buckless
- Division of Musculoskeletal Imaging and Intervention, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, 55 Fruit Street, YAW 6, Boston, MA, 02114, USA
- Kaitlyn J Yeh
- Division of Musculoskeletal Imaging and Intervention, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, 55 Fruit Street, YAW 6, Boston, MA, 02114, USA
- Martin Torriani
- Division of Musculoskeletal Imaging and Intervention, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, 55 Fruit Street, YAW 6, Boston, MA, 02114, USA
68. Li S, Xie Y, Wang G, Zhang L, Zhou W. Attention guided discriminative feature learning and adaptive fusion for grading hepatocellular carcinoma with Contrast-enhanced MR. Comput Med Imaging Graph 2022; 97:102050. [PMID: 35255322] [DOI: 10.1016/j.compmedimag.2022.102050]
69. AFA: adversarial frequency alignment for domain generalized lung nodule detection. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-06928-9]
70. Yang Z, Chen M, Kazemimoghadam M, Ma L, Stojadinovic S, Timmerman R, Dan T, Wardak Z, Lu W, Gu X. Deep-learning and radiomics ensemble classifier for false positive reduction in brain metastases segmentation. Phys Med Biol 2022; 67. [PMID: 34952535] [PMCID: PMC8858586] [DOI: 10.1088/1361-6560/ac4667]
Abstract
Stereotactic radiosurgery (SRS) is now the standard of care for patients with brain metastases (BMs). The SRS treatment planning process requires precise target delineation, which in the clinical workflow for patients with multiple (>4) BMs (mBMs) can become a pronounced time bottleneck. Our group has developed an automated BM segmentation platform to assist in this process. The accuracy of the auto-segmentation, however, is affected by false-positive segmentations, mainly caused by the injected contrast during MRI acquisition. To address this problem and further improve segmentation performance, a deep-learning and radiomics ensemble classifier was developed to reduce the false-positive rate of the segmentations. The proposed model consists of a Siamese network and a radiomics-based support vector machine (SVM) classifier. The 2D Siamese network contains a pair of parallel feature extractors with shared weights followed by a single classifier; this architecture is designed to identify inter-class differences. The SVM model, in turn, takes radiomic features extracted from the 3D segmentation volumes as input for binary classification: either a false-positive segmentation or a true BM. Lastly, the outputs of both models form an ensemble that generates the final label. On the segmented mBMs testing dataset, the proposed model reached an accuracy, sensitivity, specificity, and area under the curve of 0.91, 0.96, 0.90 and 0.93, respectively. After integrating the proposed model into the original segmentation platform, the average segmentation false-negative rate (FNR) and false positive over the union (FPoU) were 0.13 and 0.09, respectively, which preserved the initial FNR (0.07) and significantly improved the FPoU (0.55). The proposed method effectively reduced the false-positive rate in the raw BM segmentations, indicating that integrating the proposed ensemble classifier into the BM segmentation platform provides a beneficial tool for mBM SRS management.
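The ensemble step can be pictured as a soft vote between the Siamese network's probability and the radiomics SVM's probability. The combination rule below (simple averaging) and the synthetic features are assumptions, since the abstract does not state how the two outputs are merged:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_rad = rng.normal(size=(200, 20))     # stand-in radiomic features per candidate
y = rng.integers(0, 2, 200)            # 1 = true BM, 0 = false-positive segment
svm = SVC(probability=True).fit(X_rad, y)

def ensemble_label(p_cnn, X, thr=0.5):
    """Average the Siamese-network probability with the radiomics-SVM
    probability and threshold (simple soft vote; rule is an assumption)."""
    p_svm = svm.predict_proba(X)[:, 1]
    return ((p_cnn + p_svm) / 2 >= thr).astype(int)

p_cnn = rng.random(200)                # stand-in Siamese-network outputs
print(ensemble_label(p_cnn, X_rad)[:10])
```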
Affiliation(s)
- Zi Yang, Mingli Chen, Mahdieh Kazemimoghadam, Lin Ma, Strahinja Stojadinovic, Robert Timmerman, Tu Dan, Zabi Wardak, Weiguo Lu
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Xuejun Gu
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
71. Yin H, Jiang Y, Xu Z, Huang W, Chen T, Lin G. Apparent Diffusion Coefficient-Based Convolutional Neural Network Model Can Be Better Than Sole Diffusion-Weighted Magnetic Resonance Imaging to Improve the Differentiation of Invasive Breast Cancer From Breast Ductal Carcinoma In Situ. Front Oncol 2022; 11:805911. [PMID: 35096609] [PMCID: PMC8795910] [DOI: 10.3389/fonc.2021.805911]
Abstract
BACKGROUND AND PURPOSE Breast ductal carcinoma in situ (DCIS) has no metastatic potential and has better clinical outcomes than invasive breast cancer (IBC). Convolutional neural networks (CNNs) can adaptively extract features and may achieve higher efficiency in apparent diffusion coefficient (ADC)-based tumor invasion assessment. This study aimed to determine the feasibility of constructing an ADC-based CNN model to discriminate DCIS from IBC.
METHODS The study retrospectively enrolled 700 patients with primary breast cancer treated at our hospital between March 2006 and June 2019, randomly assigning 560 patients to the training and validation sets (ratio of 3 to 1) and 140 patients to the internal test set. An independent external test set of 102 patients imaged between July 2019 and May 2021 on a different scanner at our hospital was selected using the same criteria. In each set, the status of tumor invasion was confirmed by pathologic examination. The CNN model was constructed to discriminate DCIS from IBC using the training and validation sets, evaluated on the internal and external test sets, and compared with discrimination based on the mean ADC. The area under the curve (AUC), sensitivity, specificity, and accuracy were calculated to evaluate the performance of each model.
RESULTS The AUCs of the ADC-based CNN model on the internal and external test sets were larger than those of the mean ADC (AUC: 0.977 vs. 0.866, P = 0.001; and 0.926 vs. 0.845, P = 0.096, respectively). On the internal and external test sets, the ADC-based CNN model yielded sensitivities of 0.893 and 0.873, specificities of 0.929 and 0.894, and accuracies of 0.907 and 0.902, respectively. On the same two test sets, the mean ADC showed sensitivities of 0.845 and 0.818, specificities of 0.821 and 0.829, and accuracies of 0.836 and 0.824, respectively. With the ADC-based CNN model, prediction takes only approximately one second per lesion.
CONCLUSION The ADC-based CNN model can improve the differentiation of IBC from DCIS with higher accuracy and less time.
Affiliation(s)
- Haolin Yin
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, Shanghai, China
- Yu Jiang
- Department of Radiology, West China Hospital, Sichuan University, Chengdu, China
- Zihan Xu
- Lung Cancer Center, Cancer Center and State Key Laboratory of Biotherapy, West China Hospital of Sichuan University, Chengdu, China
- Wenjun Huang
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, Shanghai, China
- Tianwu Chen
- Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Guangwu Lin
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, Shanghai, China
72. Magnuska ZA, Theek B, Darguzyte M, Palmowski M, Stickeler E, Schulz V, Kießling F. Influence of the Computer-Aided Decision Support System Design on Ultrasound-Based Breast Cancer Classification. Cancers (Basel) 2022; 14:277. [PMID: 35053441] [PMCID: PMC8773857] [DOI: 10.3390/cancers14020277]
Abstract
Automation of medical data analysis is an important topic in modern cancer diagnostics, aiming at robust and reproducible workflows. We therefore used a dataset of breast ultrasound (US) images (252 malignant and 253 benign cases) to realize and compare different strategies for CAD support in lesion detection and classification. Eight different datasets (including pre-processed and spatially augmented images) were prepared, and machine learning algorithms (Viola-Jones and YOLOv3) were trained for lesion detection. The radiomics signature (RS) was derived from the detection boxes and compared with an RS derived from manually obtained segments. Finally, a classification model was established and evaluated with respect to accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve. After training on a dataset including logarithmic derivatives of the US images, we found that YOLOv3 obtains better results in breast lesion detection (IoU: 0.544 ± 0.081; LE: 0.171 ± 0.009) than the Viola-Jones framework (IoU: 0.399 ± 0.054; LE: 0.096 ± 0.016). Interestingly, our findings show that the classification model trained with the RS derived from detection boxes is comparable to the model based on the RS derived from gold-standard manual segmentation (p-value = 0.071). Thus, deriving radiomics signatures from detection boxes is a promising technique for building breast lesion classification models, and it may reduce the need for a lesion segmentation step in the future design of CAD systems.
Affiliation(s)
- Zuzanna Anna Magnuska
- Institute for Experimental Molecular Imaging, Uniklinik RWTH Aachen and Helmholtz Institute for Biomedical Engineering, Faculty of Medicine, RWTH Aachen University, 52074 Aachen, Germany
- Benjamin Theek
- Institute for Experimental Molecular Imaging, Uniklinik RWTH Aachen and Helmholtz Institute for Biomedical Engineering, Faculty of Medicine, RWTH Aachen University, 52074 Aachen, Germany
- Milita Darguzyte
- Institute for Experimental Molecular Imaging, Uniklinik RWTH Aachen and Helmholtz Institute for Biomedical Engineering, Faculty of Medicine, RWTH Aachen University, 52074 Aachen, Germany
- Moritz Palmowski
- Radiologie Baden-Baden, Beethovenstraße 2, 76530 Baden-Baden, Germany
- Elmar Stickeler
- Department of Obstetrics and Gynecology, University Clinic Aachen, RWTH Aachen University, Pauwelsstr. 30, 52074 Aachen, Germany
- Comprehensive Diagnostic Center Aachen, Uniklinik RWTH Aachen, Pauwelsstr. 30, 52074 Aachen, Germany
- Volkmar Schulz
- Institute for Experimental Molecular Imaging, Uniklinik RWTH Aachen and Helmholtz Institute for Biomedical Engineering, Faculty of Medicine, RWTH Aachen University, 52074 Aachen, Germany
- Comprehensive Diagnostic Center Aachen, Uniklinik RWTH Aachen, Pauwelsstr. 30, 52074 Aachen, Germany
- Physics Institute III B, RWTH Aachen University, 52074 Aachen, Germany
- Hyperion Hybrid Imaging Systems GmbH, 52074 Aachen, Germany
- Fraunhofer Institute for Digital Medicine MEVIS, Am Fallturm 1, 28359 Bremen, Germany
- Fabian Kießling
- Institute for Experimental Molecular Imaging, Uniklinik RWTH Aachen and Helmholtz Institute for Biomedical Engineering, Faculty of Medicine, RWTH Aachen University, 52074 Aachen, Germany
- Comprehensive Diagnostic Center Aachen, Uniklinik RWTH Aachen, Pauwelsstr. 30, 52074 Aachen, Germany
- Fraunhofer Institute for Digital Medicine MEVIS, Am Fallturm 1, 28359 Bremen, Germany
73. A Machine Learning Method for Detection of Surface Defects on Ceramic Tiles Using Convolutional Neural Networks. Electronics 2021. [DOI: 10.3390/electronics11010055]
Abstract
We propose a simple but effective convolutional neural network that learns the similarities between closely related raw pixel images for feature-representation extraction and classification, initializing its convolutional kernels from the learned filter kernels of the network. The binary (sigmoid) classification and the discriminative feature vectors are learned jointly, in contrast to traditional handcrafted methods, which split feature extraction and classification into two separate processes during training. Relying on the high-quality feature representation learned by the network, the classification task can be conducted efficiently. We evaluated the classification performance of the proposed method on a collection of tile-surface images consisting of cracked and non-cracked surfaces. We aimed to distinguish tiny cracked surfaces from non-crack normal tile demarcations, which could be useful for automating visual inspections that are labor-intensive, risky at high altitudes, and time-consuming when performed manually. We performed a series of comparisons of the results obtained by varying the optimization, the activation functions, and the deployment of different data augmentation methods in our network architecture. In doing so, we explored and determined the effectiveness of the presented model for smooth-surface defect classification. Through extensive experimentation, we obtained promising validation accuracy and minimal loss.
74. Saba T, Abunadi I, Sadad T, Khan AR, Bahaj SA. Optimizing the transfer-learning with pretrained deep convolutional neural networks for first stage breast tumor diagnosis using breast ultrasound visual images. Microsc Res Tech 2021; 85:1444-1453. [PMID: 34908213] [DOI: 10.1002/jemt.24008]
Abstract
Females account for approximately 50% of the total population worldwide, and many of them develop breast cancer. Computer-aided diagnosis frameworks could reduce the number of needless biopsies and the workload of radiologists. This research aims to detect benign and malignant tumors automatically using breast ultrasound (BUS) images. Accordingly, two pretrained deep convolutional neural network (CNN) models, AlexNet and DenseNet201, were employed for transfer learning with BUS images. A total of 697 BUS images containing benign and malignant tumors were preprocessed and classified using the transfer-learning-based CNN models. The DenseNet201 model achieved a classification accuracy of 92.8% on the benign/malignant task. The results were compared with the state of the art on a benchmark dataset, and the proposed model outperformed prior work in accuracy for first-stage breast tumor diagnosis. The proposed model could thus help radiologists diagnose benign and malignant tumors swiftly when screening suspected patients.
Affiliation(s)
- Tanzila Saba
- Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh, 11586, Saudi Arabia
- Ibrahim Abunadi
- Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh, 11586, Saudi Arabia
- Tariq Sadad
- Department of Computer Science and Software Engineering, International Islamic University, Islamabad, 44000, Pakistan
- Amjad Rehman Khan
- Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh, 11586, Saudi Arabia
- Saeed Ali Bahaj
- MIS Department, College of Business Administration, Prince Sattam bin Abdulaziz University, Alkharj, 11942, Saudi Arabia
75. Gonzales RA, Seemann F, Lamy J, Mojibian H, Atar D, Erlinge D, Steding-Ehrenborg K, Arheden H, Hu C, Onofrey JA, Peters DC, Heiberg E. MVnet: automated time-resolved tracking of the mitral valve plane in CMR long-axis cine images with residual neural networks: a multi-center, multi-vendor study. J Cardiovasc Magn Reson 2021; 23:137. [PMID: 34857009] [PMCID: PMC8638514] [DOI: 10.1186/s12968-021-00824-2]
Abstract
BACKGROUND Mitral annular plane systolic excursion (MAPSE) and left ventricular (LV) early diastolic velocity (e') are key metrics of systolic and diastolic function, but they are not often measured by cardiovascular magnetic resonance (CMR). Their derivation is possible with manual, precise annotation of the mitral valve (MV) insertion points along the cardiac cycle in both two- and four-chamber long-axis cines, but this process is highly time-consuming, laborious, and prone to errors. A fully automated, consistent, fast, and accurate method for MV plane tracking is lacking. In this study, we propose MVnet, a deep learning approach for MV point localization and tracking capable of deriving such clinical metrics with performance comparable to that of human experts, and we validate it in a multi-vendor, multi-center clinical population.
METHODS The proposed pipeline first performs a coarse MV point annotation in a given cine accurately enough to apply an automated linear transformation task that standardizes the size, cropping, resolution, and heart orientation, and second, tracks the MV points with high accuracy. The model was trained and evaluated on 38,854 cine images from 703 patients with diverse cardiovascular conditions, scanned on equipment from 3 main vendors, 16 centers, and 7 countries, and manually annotated by 10 observers. Agreement was assessed by the intra-class correlation coefficient (ICC) for both clinical metrics and by the distance error in MV plane displacement. For the inter-observer variability analysis, an additional pair of observers performed manual annotations in a randomly chosen set of 50 patients.
RESULTS MVnet achieved fast segmentation (<1 s/cine) with excellent ICCs of 0.94 (MAPSE) and 0.93 (LV e') and an MV plane tracking error of -0.10 ± 0.97 mm. The inter-observer variability analysis yielded ICCs of 0.95 and 0.89 and a tracking error of -0.15 ± 1.18 mm, respectively.
CONCLUSION A dual-stage deep learning approach for automated annotation of MV points for systolic and diastolic evaluation in CMR long-axis cine images was developed. The method tracks these points with high accuracy and in a timely manner. This will improve the feasibility of CMR methods that rely on valve tracking and increase their utility in a clinical setting.
Affiliation(s)
- Ricardo A. Gonzales
- Clinical Physiology, Department of Clinical Sciences, Lund University, Skåne University Hospital, Lund, Sweden
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, Yale University, New Haven, Connecticut, United States of America
- Department of Electrical Engineering, Universidad de Ingeniería y Tecnología, Lima, Peru
- Felicia Seemann
- Clinical Physiology, Department of Clinical Sciences, Lund University, Skåne University Hospital, Lund, Sweden
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, Yale University, New Haven, Connecticut, United States of America
- Department of Biomedical Engineering, Lund University, Lund, Sweden
- Jérôme Lamy
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, Yale University, New Haven, Connecticut, United States of America
- Hamid Mojibian
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, Yale University, New Haven, Connecticut, United States of America
- Dan Atar
- Department of Cardiology B, Oslo University Hospital Ullevål and Faculty of Medicine, University of Oslo, Oslo, Norway
- David Erlinge
- Department of Cardiology, Clinical Sciences, Lund University, Skåne University Hospital, Lund, Sweden
- Katarina Steding-Ehrenborg
- Clinical Physiology, Department of Clinical Sciences, Lund University, Skåne University Hospital, Lund, Sweden
- Håkan Arheden
- Clinical Physiology, Department of Clinical Sciences, Lund University, Skåne University Hospital, Lund, Sweden
- Chenxi Hu
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- John A. Onofrey
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, Yale University, New Haven, Connecticut, United States of America
- Department of Urology, Yale School of Medicine, Yale University, New Haven, Connecticut, United States of America
- Department of Biomedical Engineering, Yale University, New Haven, Connecticut, United States of America
- Dana C. Peters
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, Yale University, New Haven, Connecticut, United States of America
- Einar Heiberg
- Clinical Physiology, Department of Clinical Sciences, Lund University, Skåne University Hospital, Lund, Sweden
- Department of Biomedical Engineering, Lund University, Lund, Sweden
- Wallenberg Center for Molecular Medicine, Lund University, Lund, Sweden
76. Peng X, Yang X. Liver tumor detection based on objects as points. Phys Med Biol 2021; 66. [PMID: 34727529] [DOI: 10.1088/1361-6560/ac35c7]
Abstract
The automatic detection of liver tumors on computed tomography is challenging owing to their wide variations in size and location, as well as their irregular shapes. Existing detection methods largely rely on two-stage detectors and use CT images marked with bounding boxes for training and detection. In this study, we propose a single-stage detector designed to accurately detect multiple tumors simultaneously, and we provide results demonstrating its increased speed and efficiency compared with prior methods. The proposed model divides CT images into multiple channels to capture continuity information and implements a bounding-box attention mechanism to overcome inaccurate prediction of tumor center points and to suppress redundant bounding boxes. The model integrates information from the various channels using an effective Squeeze-and-Excitation attention module. The proposed model obtained a mean average precision of 0.476 on the Decathlon dataset, superior to the prior methods examined for comparison. This research is expected to help physicians diagnose tumors more efficiently; in particular, the prediction of tumor center points should enable physicians to rapidly verify their diagnostic judgments. Because its superior performance comes without increased computational cost, the proposed method is suitable for future adoption in clinical practice in hospitals and resource-poor areas, where the required equipment is relatively inexpensive.
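The Squeeze-and-Excitation module used here for cross-channel integration is compact enough to show directly. A standard PyTorch sketch of the published SE design; the reduction ratio and channel count are illustrative assumptions:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: globally average-pool each channel, pass
    the result through a bottleneck MLP, and rescale the channels."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                  # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))    # squeeze -> (B, C)
        return x * w[:, :, None, None]     # excite -> per-channel rescale

x = torch.randn(1, 64, 56, 56)   # e.g. adjacent CT slices stacked as channels
print(SEBlock(64)(x).shape)
```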
Affiliation(s)
- Xuefeng Peng
- The Faculty of Information, Beijing University of Technology, Beijing, People's Republic of China
- Xinwu Yang
- The Faculty of Information, Beijing University of Technology, Beijing, People's Republic of China
77. Yousefirizi F, Decazes P, Amyar A, Ruan S, Saboury B, Rahmim A. AI-Based Detection, Classification and Prediction/Prognosis in Medical Imaging: Towards Radiophenomics. PET Clin 2021; 17:183-212. [PMID: 34809866] [DOI: 10.1016/j.cpet.2021.09.010]
Abstract
Artificial intelligence (AI) techniques have significant potential to enable effective, robust, and automated image phenotyping, including the identification of subtle patterns. AI-based detection searches the image space to find regions of interest based on patterns and features. There is a spectrum of tumor histologies, from benign to malignant, that can be identified by AI-based classification approaches using image features. The extraction of minable information from images gives rise to the field of "radiomics," which can be explored via explicit (handcrafted/engineered) and deep radiomics frameworks. Radiomics analysis has the potential to serve as a noninvasive technique for the accurate characterization of tumors to improve diagnosis and treatment monitoring. This work reviews AI-based techniques, with a special focus on oncological PET and PET/CT imaging, for various detection, classification, and prediction/prognosis tasks. We also discuss the efforts needed to translate AI techniques into routine clinical workflows, as well as potential improvements and complementary techniques such as the use of natural language processing on electronic health records and neuro-symbolic AI techniques.
Affiliation(s)
- Fereshteh Yousefirizi
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada
- Pierre Decazes
- Department of Nuclear Medicine, Henri Becquerel Centre, Rue d'Amiens - CS 11516 - 76038 Rouen Cedex 1, France; QuantIF-LITIS, Faculty of Medicine and Pharmacy, Research Building - 1st floor, 22 boulevard Gambetta, 76183 Rouen Cedex, France
- Amine Amyar
- QuantIF-LITIS, Faculty of Medicine and Pharmacy, Research Building - 1st floor, 22 boulevard Gambetta, 76183 Rouen Cedex, France; General Electric Healthcare, Buc, France
- Su Ruan
- QuantIF-LITIS, Faculty of Medicine and Pharmacy, Research Building - 1st floor, 22 boulevard Gambetta, 76183 Rouen Cedex, France
- Babak Saboury
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, MD, USA; Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore County, Baltimore, MD, USA; Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, PA, USA
- Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada; Department of Radiology, University of British Columbia, Vancouver, British Columbia, Canada; Department of Physics, University of British Columbia, Vancouver, British Columbia, Canada
78. Liu X, Sun Z, Han C, Cui Y, Huang J, Wang X, Zhang X, Wang X. Development and validation of the 3D U-Net algorithm for segmentation of pelvic lymph nodes on diffusion-weighted images. BMC Med Imaging 2021; 21:170. [PMID: 34774001] [PMCID: PMC8590773] [DOI: 10.1186/s12880-021-00703-3]
Abstract
Background The 3D U-Net model has been proven to perform well in automatic organ segmentation. The aim of this study was to evaluate the feasibility of the 3D U-Net algorithm for the automated detection and segmentation of lymph nodes (LNs) on pelvic diffusion-weighted imaging (DWI) images.
Methods A total of 393 DWI image sets from patients suspected of having prostate cancer (PCa) between January 2019 and December 2020 were collected for model development. Seventy-seven DWI image sets from another group of PCa patients imaged between January 2021 and April 2021 were collected for temporal validation. Segmentation performance was assessed using the Dice score, positive predictive value (PPV), true positive rate (TPR), volumetric similarity (VS), Hausdorff distance (HD), average distance (AVD), and Mahalanobis distance (MHD), with manual annotation of pelvic LNs as the reference. The accuracy of detecting suspicious metastatic LNs (short diameter > 0.8 cm) was evaluated using the area under the curve (AUC) at the patient level, and the precision, recall, and F1-score were determined at the lesion level. The consistency of LN staging between the model and a radiologist on a hold-out test dataset was assessed using Cohen's kappa coefficient.
Results In the testing set used for model development, the Dice score, TPR, PPV, VS, HD, AVD, and MHD values for the segmentation of suspicious LNs were 0.85, 0.82, 0.80, 0.86, 2.02 mm, 2.01 mm, and 1.54 mm, respectively. The precision, recall, and F1-score for the detection of suspicious LNs were 0.97, 0.98, and 0.97, respectively. In the temporal validation dataset, the AUC of the model for identifying PCa patients with suspicious LNs was 0.963 (95% CI: 0.892-0.993). High consistency of LN staging (kappa = 0.922) was achieved between the model and an expert radiologist.
Conclusion The 3D U-Net algorithm can accurately detect and segment pelvic LNs based on DWI images.
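The headline Dice score is easy to compute from binary masks. A small NumPy sketch on synthetic volumes (the masks below are stand-ins, not study data):

```python
import numpy as np

def dice(pred, ref, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2 * |pred AND ref| / (|pred| + |ref|)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    return (2.0 * (pred & ref).sum() + eps) / (pred.sum() + ref.sum() + eps)

pred = np.zeros((64, 64, 64), bool); pred[20:40, 20:40, 20:40] = True
ref = np.zeros((64, 64, 64), bool); ref[22:42, 20:40, 20:40] = True
print(round(dice(pred, ref), 3))   # overlap of two synthetic lymph-node masks
```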
Collapse
Affiliation(s)
- Xiang Liu
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
| | - Zhaonan Sun
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
| | - Chao Han
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
| | - Yingpu Cui
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
| | - Jiahao Huang
- Beijing Smart Tree Medical Technology Co. Ltd., No.24, Huangsi Street, Xicheng District, Beijing, 100011, China
| | - Xiangpeng Wang
- Beijing Smart Tree Medical Technology Co. Ltd., No.24, Huangsi Street, Xicheng District, Beijing, 100011, China
| | - Xiaodong Zhang
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
| | - Xiaoying Wang
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China.
| |
Collapse
|
79
|
Jena SR, George ST, Ponraj DN. Lung cancer detection and classification with DGMM-RBCNN technique. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-06182-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
|
80
|
Yan K, Cai J, Zheng Y, Harrison AP, Jin D, Tang Y, Tang Y, Huang L, Xiao J, Lu L. Learning From Multiple Datasets With Heterogeneous and Partial Labels for Universal Lesion Detection in CT. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:2759-2770. [PMID: 33370236 DOI: 10.1109/tmi.2020.3047598] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Large-scale datasets with high-quality labels are desired for training accurate deep learning models. However, due to the annotation cost, datasets in medical imaging are often either partially labeled or small. For example, DeepLesion is a large-scale CT image dataset with lesions of various types, but it also has many unlabeled lesions (missing annotations). When training a lesion detector on a partially labeled dataset, the missing annotations generate incorrect negative signals and degrade performance. Besides DeepLesion, there are several small single-type datasets, such as LUNA for lung nodules and LiTS for liver tumors. These datasets have heterogeneous label scopes, i.e., different lesion types are labeled in different datasets, with other types ignored. In this work, we aim to develop a universal lesion detection algorithm to detect a variety of lesions and to tackle the problem of heterogeneous and partial labels. First, we build a simple yet effective lesion detection framework named Lesion ENSemble (LENS). LENS can efficiently learn from multiple heterogeneous lesion datasets in a multi-task fashion and leverage their synergy by proposal fusion. Next, we propose strategies to mine missing annotations from partially labeled datasets by exploiting clinical prior knowledge and cross-dataset knowledge transfer. Finally, we train our framework on four public lesion datasets and evaluate it on 800 manually labeled sub-volumes in DeepLesion. Our method brings a relative improvement of 49% over the current state-of-the-art approach in the metric of average sensitivity. We have publicly released our manual 3D annotations of DeepLesion at https://github.com/viggin/DeepLesion_manual_test_set.
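The "proposal fusion" step can be pictured as pooling candidate boxes from several dataset-specific detector heads and suppressing duplicates. Below is a generic numpy sketch of that pool-and-suppress idea; LENS's actual fusion is more elaborate, and all function names here are our own.

```python
import numpy as np

def nms(boxes, scores, iou_thr=0.5):
    """Greedy non-maximum suppression over [x1, y1, x2, y2] boxes."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        if order.size == 1:
            break
        rest = order[1:]
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou < iou_thr]
    return keep

def fuse_proposals(per_head_boxes, per_head_scores, iou_thr=0.5):
    """Pool proposals from several dataset-specific detector heads and
    suppress duplicates, keeping the highest-scoring box per lesion."""
    boxes = np.vstack(per_head_boxes)
    scores = np.concatenate(per_head_scores)
    keep = nms(boxes, scores, iou_thr)
    return boxes[keep], scores[keep]

# Two 'heads' proposing overlapping boxes around the same lesion
b1 = np.array([[10, 10, 50, 50]]); s1 = np.array([0.9])
b2 = np.array([[12, 12, 52, 52]]); s2 = np.array([0.6])
print(fuse_proposals([b1, b2], [s1, s2]))  # keeps only the 0.9 box
```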
Collapse
|
81
|
Verzat C, Harley J, Patani R, Luisier R. Image-based deep learning reveals the responses of human motor neurons to stress and VCP-related ALS. Neuropathol Appl Neurobiol 2021; 48:e12770. [PMID: 34595747 PMCID: PMC9298273 DOI: 10.1111/nan.12770] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2021] [Revised: 09/16/2021] [Accepted: 09/22/2021] [Indexed: 11/28/2022]
Abstract
AIMS Although morphological attributes of cells and their substructures are recognised readouts of physiological or pathophysiological states, these have been relatively understudied in amyotrophic lateral sclerosis (ALS) research. METHODS In this study, we integrate multichannel fluorescence high-content microscopy data with deep learning imaging methods to reveal, directly from unsegmented images, novel neurite-associated morphological perturbations in (ALS-causing) VCP-mutant human motor neurons (MNs). RESULTS Surprisingly, we reveal that previously unrecognised disease-relevant information is withheld in broadly used and often considered 'generic' biological markers of nuclei (DAPI) and neurons (βIII-tubulin). Additionally, we identify changes within the information content of ALS-related RNA binding protein (RBP) immunofluorescence imaging that is captured in VCP-mutant MN cultures. Furthermore, by analysing MN cultures exposed to different extrinsic stressors, we show that heat stress recapitulates key aspects of ALS. CONCLUSIONS Our study therefore reveals disease-relevant information contained in a range of both generic and more specific fluorescent markers and establishes the use of image-based deep learning methods for rapid, automated and unbiased identification of biological hypotheses.
Collapse
Affiliation(s)
- Colombine Verzat
- Genomics and Health Informatics Group, Idiap Research Institute, Martigny, Switzerland
| | - Jasmine Harley
- Human Stem Cells and Neurodegeneration Laboratory, The Francis Crick Institute, London, UK; Department of Neuromuscular Diseases, UCL Queen Square Institute of Neurology, London, UK
| | - Rickie Patani
- Human Stem Cells and Neurodegeneration Laboratory, The Francis Crick Institute, London, UK; Department of Neuromuscular Diseases, UCL Queen Square Institute of Neurology, London, UK
| | - Raphaëlle Luisier
- Genomics and Health Informatics Group, Idiap Research Institute, Martigny, Switzerland
| |
Collapse
|
82
|
|
83
|
Zhang YN, Xia KR, Li CY, Wei BL, Zhang B. Review of Breast Cancer Pathological Image Processing. BIOMED RESEARCH INTERNATIONAL 2021; 2021:1994764. [PMID: 34595234 PMCID: PMC8478535 DOI: 10.1155/2021/1994764] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/12/2021] [Accepted: 08/24/2021] [Indexed: 11/17/2022]
Abstract
Breast cancer is one of the most common malignancies. Pathological image processing of the breast has become an important means for the early diagnosis of breast cancer. Using medical image processing to assist doctors in detecting potential breast cancer as early as possible has always been a hot topic in the field of medical image diagnosis. In this paper, a breast cancer recognition method based on image processing is systematically expounded from four aspects: breast cancer detection, image segmentation, image registration, and image fusion. The achievements and scope of application of supervised learning, unsupervised learning, deep learning, CNNs, and so on in breast cancer examination are described. The prospects of unsupervised learning and transfer learning for breast cancer diagnosis are discussed. Finally, privacy protection for breast cancer patients is addressed.
Collapse
Affiliation(s)
- Ya-nan Zhang
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin 150080, China
- HRG International Institute (Hefei) of Research and Innovation, Hefei 230000, China
| | - Ke-rui Xia
- HRG International Institute (Hefei) of Research and Innovation, Hefei 230000, China
| | - Chang-yi Li
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin 150080, China
| | - Ben-li Wei
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin 150080, China
| | - Bing Zhang
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin 150080, China
| |
Collapse
|
84
|
Oza P, Sharma P, Patel S, Bruno A. A Bottom-Up Review of Image Analysis Methods for Suspicious Region Detection in Mammograms. J Imaging 2021; 7:190. [PMID: 34564116 PMCID: PMC8466003 DOI: 10.3390/jimaging7090190] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2021] [Revised: 09/09/2021] [Accepted: 09/14/2021] [Indexed: 11/17/2022] Open
Abstract
Breast cancer is one of the most common causes of death among women all over the world. Early detection of breast cancer plays a critical role in increasing the survival rate. Various imaging modalities, such as mammography, breast MRI, ultrasound and thermography, are used to detect breast cancer. Though there has been considerable success with mammography in biomedical imaging, detecting suspicious areas remains a challenge: because of the manual examination and variations in shape, size, and other mass morphological features, mammography accuracy changes with the density of the breast. Furthermore, going through the analysis of many mammograms per day can be a tedious task for radiologists and practitioners. One of the main objectives of biomedical imaging is to provide radiologists and practitioners with tools to help them identify all suspicious regions in a given image. Computer-aided mass detection in mammograms can serve as a second-opinion tool to help radiologists avoid oversight errors. The scientific community has made much progress on this topic, and several approaches have been proposed along the way. Following a bottom-up narrative, this paper surveys different scientific methodologies and techniques to detect suspicious regions in mammograms, spanning from methods based on low-level image features to the most recent novelties in AI-based approaches. Both theoretical and practical grounds are provided across the paper's sections to highlight the pros and cons of different methodologies. The paper's main scope is to let readers embark on a journey through a fully comprehensive description of techniques, strategies and datasets on the topic.
Collapse
Affiliation(s)
- Parita Oza
- Computer Science and Engineering Department, School of Technology, Pandit Deendayal Energy University, Gandhinagar 382007, India; (P.S.); (S.P.)
| | - Paawan Sharma
- Computer Science and Engineering Department, School of Technology, Pandit Deendayal Energy University, Gandhinagar 382007, India; (P.S.); (S.P.)
| | - Samir Patel
- Computer Science and Engineering Department, School of Technology, Pandit Deendayal Energy University, Gandhinagar 382007, India; (P.S.); (S.P.)
| | - Alessandro Bruno
- Department of Computing and Informatics, Bournemouth University, Poole, Dorset BH12 5BB, UK
| |
Collapse
|
85
|
Deep Reinforcement Learning with Explicit Spatio-Sequential Encoding Network for Coronary Ostia Identification in CT Images. SENSORS 2021; 21:s21186187. [PMID: 34577391 PMCID: PMC8469841 DOI: 10.3390/s21186187] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/21/2021] [Revised: 08/31/2021] [Accepted: 09/13/2021] [Indexed: 11/16/2022]
Abstract
Accurate identification of the coronary ostia from 3D coronary computed tomography angiography (CCTA) is an essential prerequisite for automatically tracking and segmenting the three main coronary arteries. In this paper, we propose a novel deep reinforcement learning (DRL) framework to localize the two coronary ostia from 3D CCTA. An optimal action policy is determined using a fully explicit spatial-sequential encoding policy network applying 2.5D Markovian states with three past histories. The proposed network is trained using a dueling DRL framework on the CAT08 dataset. The experimental results show that our method is more efficient and accurate than other methods. Floating-point operations (FLOPs) are calculated to measure computational efficiency; the proposed method requires 2.5M FLOPs, about 10 times fewer than 3D box-based methods. In terms of accuracy, the proposed method yields errors of 2.22 ± 1.12 mm and 1.94 ± 0.83 mm on the left and right coronary ostia, respectively. The proposed method can be applied to tasks identifying other target objects by changing the target locations in the ground-truth data. Further, the proposed method can be utilized as a pre-processing step for coronary artery tracking methods.
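For readers unfamiliar with the dueling DRL formulation used for training, the sketch below shows the standard dueling Q-head, Q(s, a) = V(s) + A(s, a) - mean_a A(s, a), in PyTorch. The feature dimension and the six-move action space (steps along x, y, z) are our simplifying assumptions, not the paper's exact network.

```python
import torch
import torch.nn as nn

class DuelingQHead(nn.Module):
    """Dueling architecture: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
    Six actions could be +/- one step along x, y, z for an agent walking
    toward an ostium (an assumption for illustration)."""
    def __init__(self, feat_dim=256, n_actions=6):
        super().__init__()
        self.value = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                                   nn.Linear(128, 1))
        self.advantage = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                                       nn.Linear(128, n_actions))

    def forward(self, feats):
        v = self.value(feats)                       # (B, 1) state value
        a = self.advantage(feats)                   # (B, n_actions)
        return v + a - a.mean(dim=1, keepdim=True)  # (B, n_actions) Q-values

q = DuelingQHead()(torch.randn(4, 256))
print(q.shape)  # torch.Size([4, 6])
```

Separating state value from per-action advantage is what lets the agent learn which states are good even when individual action values are noisy.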
Collapse
|
86
|
Hou Y, Zhang W, Liu Q, Ge H, Meng J, Zhang Q, Wei X. Adaptive kernel selection network with attention constraint for surgical instrument classification. Neural Comput Appl 2021; 34:1577-1591. [PMID: 34539089 PMCID: PMC8435567 DOI: 10.1007/s00521-021-06368-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2021] [Accepted: 07/26/2021] [Indexed: 11/15/2022]
Abstract
Computer vision (CV) technologies are assisting the health care industry in many respects, e.g., disease diagnosis. However, as a pivotal procedure before and after surgery, the inventory of surgical instruments has not been researched with CV-powered technologies. To reduce the risk and hazard of losing surgical tools, we propose a study of systematic surgical instrument classification and introduce a novel attention-based deep neural network called SKA-ResNet, which is mainly composed of: (a) a feature extractor with a selective kernel attention module that automatically adjusts the receptive fields of neurons and enhances the learnt representations and (b) a multi-scale regularizer with KL-divergence as the constraint to exploit the relationships between feature maps. Our method is easily trained end-to-end in only one stage with little additional computational burden. Moreover, to facilitate our study, we create a new surgical instrument dataset called SID19 (with 19 kinds of surgical tools in 3800 images) for the first time. Experimental results show the superiority of SKA-ResNet for the classification of surgical tools on SID19 when compared with state-of-the-art models. The classification accuracy of our method reaches 97.703%, which supports the inventory and recognition study of surgical tools well. Also, our method achieves state-of-the-art performance on four challenging fine-grained visual classification datasets.
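The selective kernel attention referenced in (a) can be pictured as two parallel convolution branches with different kernel sizes whose outputs are mixed by a learned, channel-wise softmax, so each channel effectively selects its receptive field. A minimal PyTorch sketch of that mechanism follows; this is the generic SK idea, not SKA-ResNet itself, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

class SelectiveKernel(nn.Module):
    """Mix a 3x3 and a 5x5 branch with channel-wise softmax attention."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.conv3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv5 = nn.Conv2d(channels, channels, 5, padding=2)
        hidden = max(channels // reduction, 8)
        self.fc = nn.Sequential(nn.Linear(channels, hidden), nn.ReLU(),
                                nn.Linear(hidden, 2 * channels))

    def forward(self, x):
        b, c, _, _ = x.shape
        u3, u5 = self.conv3(x), self.conv5(x)
        s = (u3 + u5).mean(dim=(2, 3))      # global descriptor, (B, C)
        attn = self.fc(s).view(b, 2, c)     # one logit per branch per channel
        attn = torch.softmax(attn, dim=1)   # softly select between branches
        a3 = attn[:, 0].view(b, c, 1, 1)
        a5 = attn[:, 1].view(b, c, 1, 1)
        return a3 * u3 + a5 * u5

y = SelectiveKernel(32)(torch.randn(2, 32, 64, 64))
print(y.shape)  # torch.Size([2, 32, 64, 64])
```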
Collapse
Affiliation(s)
- Yaqing Hou
- School of Computer Science and Technology, Dalian University of Technology, Dalian, China
| | - Wenkai Zhang
- School of Computer Science and Technology, Dalian University of Technology, Dalian, China
| | - Qian Liu
- School of Computer Science and Technology, Dalian University of Technology, Dalian, China
| | - Hongwei Ge
- School of Computer Science and Technology, Dalian University of Technology, Dalian, China
| | - Jun Meng
- School of Computer Science and Technology, Dalian University of Technology, Dalian, China
| | - Qiang Zhang
- School of Computer Science and Technology, Dalian University of Technology, Dalian, China
| | - Xiaopeng Wei
- School of Computer Science and Technology, Dalian University of Technology, Dalian, China
| |
Collapse
|
87
|
Albaradei S, Thafar M, Alsaedi A, Van Neste C, Gojobori T, Essack M, Gao X. Machine learning and deep learning methods that use omics data for metastasis prediction. Comput Struct Biotechnol J 2021; 19:5008-5018. [PMID: 34589181 PMCID: PMC8450182 DOI: 10.1016/j.csbj.2021.09.001] [Citation(s) in RCA: 83] [Impact Index Per Article: 20.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2021] [Revised: 08/16/2021] [Accepted: 09/02/2021] [Indexed: 12/14/2022] Open
Abstract
Knowing that metastasis is the primary cause of cancer-related deaths has incentivized research directed towards unraveling the complex cellular processes that drive it. Advancements in technology, and specifically the advent of high-throughput sequencing, provide knowledge of such processes. This knowledge has led to the development of therapeutic and clinical applications and is now being used to predict the onset of metastasis to improve diagnostics and disease therapies. In this regard, predicting metastasis onset has also been explored using artificial intelligence approaches that are machine learning- and, more recently, deep learning-based. This review summarizes the different machine learning- and deep learning-based metastasis prediction methods developed to date. We also detail the different types of molecular data used to build the models and the critical signatures derived from the different methods. We further highlight the challenges associated with using machine learning and deep learning methods and provide suggestions to improve the predictive performance of such methods.
Collapse
Key Words
- AE, autoencoder
- ANN, Artificial Neural Network
- AUC, area under the curve
- Acc, Accuracy
- Artificial intelligence
- BC, Betweenness centrality
- BH, Benjamini-Hochberg
- BioGRID, Biological General Repository for Interaction Datasets
- CCP, compound covariate predictor
- CEA, Carcinoembryonic antigen
- CNN, convolution neural networks
- CV, cross-validation
- Cancer
- DBN, deep belief network
- DDBN, discriminative deep belief network
- DEGs, differentially expressed genes
- DIP, Database of Interacting Proteins
- DNN, Deep neural network
- DT, Decision Tree
- Deep learning
- EMT, epithelial-mesenchymal transition
- FC, fully connected
- GA, Genetic Algorithm
- GANs, generative adversarial networks
- GEO, Gene Expression Omnibus
- HCC, hepatocellular carcinoma
- HPRD, Human Protein Reference Database
- KNN, K-nearest neighbor
- L-SVM, linear SVM
- LIMMA, linear models for microarray data
- LOOCV, Leave-one-out cross-validation
- LR, Logistic Regression
- MCCV, Monte Carlo cross-validation
- MLP, multilayer perceptron
- Machine learning
- Metastasis
- NPV, negative predictive value
- PCA, Principal component analysis
- PPI, protein-protein interaction
- PPV, positive predictive value
- RC, ridge classifier
- RF, Random Forest
- RFE, recursive feature elimination
- RMA, robust multi‐array average
- RNN, recurrent neural networks
- SGD, stochastic gradient descent
- SMOTE, synthetic minority over-sampling technique
- SVM, Support Vector Machine
- Se, sensitivity
- Sp, specificity
- TCGA, The Cancer Genome Atlas
- k-CV, k-fold cross validation
- mRMR, minimum redundancy maximum relevance
Collapse
Affiliation(s)
- Somayah Albaradei
- Computational Bioscience Research Center (CBRC), Computer, Electrical and Mathematical Sciences and Engineering Division (CEMSE), King Abdullah University of Science and Technology (KAUST), Thuwal 23955-6900, Saudi Arabia
- King Abdulaziz University, Faculty of Computing and Information Technology, Jeddah, Saudi Arabia
| | - Maha Thafar
- Computational Bioscience Research Center (CBRC), Computer, Electrical and Mathematical Sciences and Engineering Division (CEMSE), King Abdullah University of Science and Technology (KAUST), Thuwal 23955-6900, Saudi Arabia
- Taif University, Collage of Computers and Information Technology, Taif, Saudi Arabia
| | - Asim Alsaedi
- King Saud bin Abdulaziz University for Health Sciences, Jeddah, Saudi Arabia
- King Abdulaziz Medical City, Jeddah, Saudi Arabia
| | - Christophe Van Neste
- Computational Bioscience Research Center (CBRC), Computer, Electrical and Mathematical Sciences and Engineering Division (CEMSE), King Abdullah University of Science and Technology (KAUST), Thuwal 23955-6900, Saudi Arabia
| | - Takashi Gojobori
- Computational Bioscience Research Center (CBRC), Computer, Electrical and Mathematical Sciences and Engineering Division (CEMSE), King Abdullah University of Science and Technology (KAUST), Thuwal 23955-6900, Saudi Arabia
- Biological and Environmental Sciences and Engineering Division (BESE), King Abdullah University of Science and Technology (KAUST), Thuwal 23955-6900, Saudi Arabia
| | - Magbubah Essack
- Computational Bioscience Research Center (CBRC), Computer, Electrical and Mathematical Sciences and Engineering Division (CEMSE), King Abdullah University of Science and Technology (KAUST), Thuwal 23955-6900, Saudi Arabia
| | - Xin Gao
- Computational Bioscience Research Center (CBRC), Computer, Electrical and Mathematical Sciences and Engineering Division (CEMSE), King Abdullah University of Science and Technology (KAUST), Thuwal 23955-6900, Saudi Arabia
| |
Collapse
|
88
|
Yuan H, Fan Z, Wu Y, Cheng J. An efficient multi-path 3D convolutional neural network for false-positive reduction of pulmonary nodule detection. Int J Comput Assist Radiol Surg 2021; 16:2269-2277. [PMID: 34449037 DOI: 10.1007/s11548-021-02478-y] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2021] [Accepted: 08/10/2021] [Indexed: 12/19/2022]
Abstract
PURPOSE Considering that false-positive and true pulmonary nodules are highly similar in shape and size in lung computed tomography scans, we develop and evaluate a false-positive nodule reduction method for computer-aided diagnosis systems. METHODS To improve the quality of pulmonary nodule diagnosis, a 3D convolutional neural network (CNN) model is constructed to effectively extract spatial information from candidate nodule features through a hierarchical architecture. Furthermore, three paths corresponding to three receptive-field sizes are adopted and concatenated in the network model, so that feature information is fully extracted and fused, actively adapting to changes in shape, size, and contextual information between pulmonary nodules. In this way, false-positive reduction is well implemented in pulmonary nodule detection. RESULTS The multi-path 3D CNN is evaluated on the LUNA16 dataset, achieving an average competition performance metric (CPM) score of 0.881 and excellent sensitivities of 0.952 and 0.962 at 4 and 8 FP/scan, respectively. CONCLUSION By constructing a multi-path 3D CNN that fully extracts candidate target features, the method accurately identifies pulmonary nodules with different sizes, shapes, and background information. In addition, the proposed general framework is also suitable for similar 3D medical image classification tasks.
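A minimal PyTorch sketch of the multi-path idea, three 3D convolution paths with different kernel sizes concatenated before the classifier, is given below; layer widths, depths, and patch size are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MultiPath3D(nn.Module):
    """Three 3D conv paths with different kernels (hence different
    receptive fields), concatenated before a classification head."""
    def __init__(self, n_classes=2):
        super().__init__()
        def path(k):
            return nn.Sequential(
                nn.Conv3d(1, 16, k, padding=k // 2), nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(16, 32, k, padding=k // 2), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1))
        self.paths = nn.ModuleList([path(k) for k in (3, 5, 7)])
        self.head = nn.Linear(3 * 32, n_classes)

    def forward(self, x):                 # x: (B, 1, D, H, W) nodule patch
        feats = [p(x).flatten(1) for p in self.paths]
        return self.head(torch.cat(feats, dim=1))

logits = MultiPath3D()(torch.randn(2, 1, 32, 32, 32))
print(logits.shape)  # torch.Size([2, 2])
```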
Collapse
Affiliation(s)
- Haiying Yuan
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, People's Republic of China.
| | - Zhongwei Fan
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, People's Republic of China
| | - Yanrui Wu
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, People's Republic of China
| | - Junpeng Cheng
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, People's Republic of China
| |
Collapse
|
89
|
Non-invasive multi-channel deep learning convolutional neural networks for localization and classification of common hepatic lesions. Pol J Radiol 2021; 86:e440-e448. [PMID: 34429791 PMCID: PMC8369821 DOI: 10.5114/pjr.2021.108257] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2020] [Accepted: 09/06/2021] [Indexed: 01/22/2023] Open
Abstract
Purpose Machine learning techniques, especially convolutional neural networks (CNNs), have revolutionized the spectrum of computer vision tasks with a primary focus on supervised and labelled image datasets. We aimed to assess a novel method to segment the liver from abdominal computed tomography (CT) images using a CNN, and to train a unique method to locate and classify liver lesions prior to histological findings using a multi-channel deep learning CNN (MDL-CNN). Material and methods Post-contrast CT images of the liver with a resolution of 0.625 mm were chosen for the study. Fifty examples each of hepatocellular carcinomas, metastatic tumours, haemangiomas, and hepatic cysts were randomly chosen and evaluated. Results The Dice score quantitatively measures the similarity of segmentation results against the training dataset. In the first CNN model for segmenting the liver, the Dice score was 96.18%. The MDL-CNN model yielded 98.78% accuracy in classification, and the Dice score for locating liver lesions was 95.70%. Additionally, the performance of this model was compared with various other existing models. Conclusions According to our study, the machine learning approach can be successfully implemented to segment the liver and classify lesions, which will help radiologists deliver better diagnoses.
Collapse
|
90
|
Wada K, Watanabe M, Shinchi M, Noguchi K, Mukoyoshi T, Matsuyama M, Arimura T, Ogino T. [A Study on Radiation Dermatitis Grading Support System Based on Deep Learning by Hybrid Generation Method]. Nihon Hoshasen Gijutsu Gakkai Zasshi 2021; 77:787-794. [PMID: 34421066 DOI: 10.6009/jjrt.2021_jsrt_77.8.787] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
PURPOSE Radiation dermatitis is one of the most common adverse events in patients undergoing radiotherapy. However, an objective evaluation of this condition is difficult to provide because the clinical evaluation of radiation dermatitis is made by visual assessment based on the Common Terminology Criteria for Adverse Events (CTCAE). Therefore, we created a radiation dermatitis grading support system (RDGS) using a deep convolutional neural network (DCNN) and then evaluated the effectiveness of the RDGS. METHODS The DCNN was trained with a dataset comprising 647 clinical skin images graded for radiation dermatitis (Grades 1-4) at our center from April 2011 to May 2019. We created the datasets by mixing data-augmentation images generated by image conversion with images generated by Poisson image editing, using this hybrid generation method (Hyb) to compensate for the low volume of severe dermatitis (Grade 4) images. We then evaluated the classification accuracy of the RDGS based on the hybrid generation method (Hyb-RDGS). RESULTS The overall accuracy of the Hyb-RDGS was 85.1%, which was higher than that of the data augmentation method generally used for image generation. CONCLUSION The effectiveness of the Hyb-RDGS using Poisson image editing was suggested. This result shows a possible support system for objective evaluation in grading radiation dermatitis.
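Poisson image editing of the kind this hybrid generation method relies on is available off the shelf in OpenCV. The sketch below shows how a severe-dermatitis patch could be blended into a target skin image; the file names and the whole-patch mask are hypothetical, and this is a generic illustration rather than the authors' pipeline.

```python
import cv2
import numpy as np

def poisson_paste(patch, target, center):
    """Blend a (e.g. Grade-4 dermatitis) patch into a target skin photo with
    Poisson image editing via OpenCV's seamlessClone. `patch` and `target`
    are 8-bit BGR images; `center` is the (x, y) paste position in `target`.
    """
    mask = 255 * np.ones(patch.shape[:2], dtype=np.uint8)  # blend whole patch
    return cv2.seamlessClone(patch, target, mask, center, cv2.NORMAL_CLONE)

# Hypothetical usage (file names are placeholders):
# augmented = poisson_paste(cv2.imread("grade4_patch.png"),
#                           cv2.imread("grade1_skin.png"), (128, 128))
```

Unlike naive copy-paste augmentation, Poisson blending matches the patch's gradients to the target, which avoids the hard seams that a classifier might otherwise learn as a shortcut.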
Collapse
Affiliation(s)
- Kiyotaka Wada
- Medipolis Proton Therapy and Research Center; Graduate School of Science and Engineering, Kagoshima University
| | - Mutsumi Watanabe
- Graduate School of Science and Engineering, Kagoshima University
| | - Masahiro Shinchi
- Graduate School of Science and Engineering, Kagoshima University
| | - Kousuke Noguchi
- Graduate School of Science and Engineering, Kagoshima University
| |
Collapse
|
91
|
Zhang L, Wang L, Gao J, Risacher SL, Yan J, Li G, Liu T, Zhu D. Deep Fusion of Brain Structure-Function in Mild Cognitive Impairment. Med Image Anal 2021; 72:102082. [PMID: 34004495 DOI: 10.1016/j.media.2021.102082] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2020] [Revised: 03/20/2021] [Accepted: 04/13/2021] [Indexed: 01/22/2023]
Abstract
Multimodal fusion of different types of neural image data provides an irreplaceable opportunity to take advantage of complementary cross-modal information that may only partially be contained in a single modality. To jointly analyze multimodal data, deep neural networks can be especially useful because many studies have suggested that deep learning strategies are very efficient at revealing complex and non-linear relations buried in the data. However, most deep models, e.g., the convolutional neural network and its numerous extensions, can only operate on regular Euclidean data like voxels in 3D MRI. Interrelated and hidden structures beyond grid neighbors, such as brain connectivity, may be overlooked. Moreover, how to effectively incorporate neuroscience knowledge into multimodal data fusion within a single deep framework is understudied. In this work, we developed a graph-based deep neural network to simultaneously model brain structure and function in Mild Cognitive Impairment (MCI): the topology of the graph is initialized using the structural network (from diffusion MRI) and iteratively updated by incorporating functional information (from functional MRI) to maximize the capability of differentiating MCI patients from elderly normal controls. This results in a new connectome obtained by exploring "deep relations" between brain structure and function in MCI patients, which we name the Deep Brain Connectome. Though the deep brain connectome is learned individually, it shows consistent patterns of alteration compared to the structural network at the group level. With the deep brain connectome, our deep model achieves 92.7% classification accuracy on the ADNI dataset.
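The graph-based modeling described here can be pictured with a standard graph-convolution layer whose adjacency comes from the structural connectome and whose node features are functional. Below is a generic PyTorch sketch of such a layer; the paper's actual update rule, which iteratively modifies the topology itself, is more involved, and the region count and feature width are our assumptions.

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """One graph-convolution layer: H' = relu(D^-1/2 (A + I) D^-1/2 H W).
    A can be initialized from a structural (diffusion-MRI) connectome and
    H can hold functional (fMRI) node features."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, h, adj):
        a = adj + torch.eye(adj.size(0), device=adj.device)  # add self-loops
        d = a.sum(dim=1)
        a_norm = a / torch.sqrt(d[:, None] * d[None, :])     # sym. normalize
        return torch.relu(a_norm @ self.lin(h))

# Hypothetical example: 148 brain regions, 64-d functional node features,
# and a symmetric structural adjacency matrix
h = torch.randn(148, 64)
adj = torch.rand(148, 148); adj = (adj + adj.T) / 2
print(GraphConv(64, 32)(h, adj).shape)  # torch.Size([148, 32])
```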
Collapse
Affiliation(s)
- Lu Zhang
- Department of Computer Science and Engineering, The University of Texas at Arlington, Arlington, TX 76019 USA
| | - Li Wang
- Department of Computer Science and Engineering, The University of Texas at Arlington, Arlington, TX 76019 USA; Department of Mathematics, The University of Texas at Arlington, Arlington, TX 76019 USA
| | - Jean Gao
- Department of Computer Science and Engineering, The University of Texas at Arlington, Arlington, TX 76019 USA
| | - Shannon L Risacher
- Department of Radiology and Imaging Sciences, Indiana University School of Medicine, Indianapolis, IN 46202 USA
| | - Jingwen Yan
- School of Informatics and Computing, Indiana University School of Medicine, Indianapolis, IN 46202 USA
| | - Gang Li
- Biomedical Research Imaging Center and Department of Radiology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599-7160, USA
| | - Tianming Liu
- Cortical Architecture Imaging and Discovery Lab, Department of Computer Science and Bioimaging Research Center, The University of Georgia, Athens, GA, USA
| | - Dajiang Zhu
- Department of Computer Science and Engineering, The University of Texas at Arlington, Arlington, TX 76019 USA.
| |
Collapse
|
92
|
Kougia V, Pavlopoulos J, Papapetrou P, Gordon M. RTEX: A novel framework for ranking, tagging, and explanatory diagnostic captioning of radiography exams. J Am Med Inform Assoc 2021; 28:1651-1659. [PMID: 33880528 PMCID: PMC8324241 DOI: 10.1093/jamia/ocab046] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2020] [Revised: 02/27/2021] [Accepted: 03/02/2021] [Indexed: 11/14/2022] Open
Abstract
OBJECTIVE The study sought to assist practitioners in identifying and prioritizing radiography exams that are more likely to contain abnormalities, and to provide them with a diagnosis in order to manage heavy workloads more efficiently (e.g., during a pandemic) or avoid mistakes due to tiredness. MATERIALS AND METHODS This article introduces RTEx, a novel framework for (1) ranking radiography exams based on their probability of being abnormal, (2) generating abnormality tags for abnormal exams, and (3) providing a diagnostic explanation in natural language for each abnormal exam. Our framework consists of deep learning and retrieval methods and is assessed on 2 publicly available datasets. RESULTS For ranking, RTEx outperforms its competitors in terms of nDCG@k. The tagging component outperforms 2 strong competitor methods in terms of F1. Moreover, the diagnostic captioning component, which exploits the predicted tags to constrain the captioning process, outperforms 4 captioning competitors with respect to clinical precision and recall. DISCUSSION RTEx prioritizes abnormal exams, improving the healthcare workflow through its ranking method. Also, for each abnormal radiography exam, RTEx generates a set of abnormality tags alongside a diagnostic text that explains the tags and guides the medical expert. Human evaluation of the produced text shows that employing the generated tags offers consistency in clinical correctness and that the sentences of each text have high clinical accuracy. CONCLUSIONS This is the first framework that successfully combines 3 tasks: ranking, tagging, and diagnostic captioning, with a focus on radiography exams that contain abnormalities.
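Since ranking quality is reported in nDCG@k, here is a small numpy reference implementation of that metric; the toy relevance list is ours, not from the paper.

```python
import numpy as np

def ndcg_at_k(relevance, k):
    """nDCG@k for a ranked list: relevance[i] is the graded relevance of the
    exam placed at rank i (e.g. 1 if truly abnormal, 0 otherwise)."""
    rel = np.asarray(relevance, dtype=float)[:k]
    discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))  # 1/log2(rank+1)
    dcg = float((rel * discounts).sum())
    ideal = np.sort(np.asarray(relevance, dtype=float))[::-1][:k]
    idcg = float((ideal * discounts[:ideal.size]).sum())   # best possible order
    return dcg / idcg if idcg > 0 else 0.0

# A ranking that places two of three abnormal exams at the top
print(round(ndcg_at_k([1, 1, 0, 0, 1, 0], k=5), 3))  # ~0.947
```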
Collapse
Affiliation(s)
- Vasiliki Kougia
- Department of Computer and Systems Sciences, Stockholm University, Stockholm, Sweden
| | - John Pavlopoulos
- Department of Computer and Systems Sciences, Stockholm University, Stockholm, Sweden
| | - Panagiotis Papapetrou
- Department of Computer and Systems Sciences, Stockholm University, Stockholm, Sweden
| | - Max Gordon
- Division of Orthopaedics, Department of Clinical Sciences, Danderyd Hospital, Karolinska Institutet, Stockholm, Sweden
| |
Collapse
|
93
|
Thapa A, Alsadoon A, Prasad PWC, Bajaj S, Alsadoon OH, Rashid TA, Ali RS, Jerew OD. Deep learning for breast cancer classification: Enhanced tangent function. Comput Intell 2021. [DOI: 10.1111/coin.12476] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
Affiliation(s)
- Ashu Thapa
- School of Computing and Mathematics, Charles Sturt University (CSU), Wagga Wagga, Australia
| | - Abeer Alsadoon
- School of Computing and Mathematics, Charles Sturt University (CSU), Wagga Wagga, Australia
- School of Computer Data and Mathematical Sciences, University of Western Sydney (UWS), Sydney, Australia
- Kent Institute Australia, Sydney, Australia
- Asia Pacific International College (APIC), Sydney, Australia
| | - P. W. C. Prasad
- School of Computing and Mathematics, Charles Sturt University (CSU), Wagga Wagga, Australia
| | - Simi Bajaj
- School of Computer Data and Mathematical Sciences, University of Western Sydney (UWS), Sydney, Australia
| | - Tarik A. Rashid
- Computer Science and Engineering, University of Kurdistan Hewler, Erbil, KRG, Iraq
| | - Rasha S. Ali
- Department of Computer Techniques Engineering, AL Nisour University College, Baghdad, Iraq
| | - Oday D. Jerew
- Asia Pacific International College (APIC), Sydney, Australia
| |
Collapse
|
94
|
Petibon Y, Fahey F, Cao X, Levin Z, Sexton-Stallone B, Falone A, Zukotynski K, Kwatra N, Lim R, Bar-Sever Z, Chemli Y, Treves ST, Fakhri GE, Ouyang J. Detecting lumbar lesions in 99m Tc-MDP SPECT by deep learning: Comparison with physicians. Med Phys 2021; 48:4249-4261. [PMID: 34101855 DOI: 10.1002/mp.15033] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2020] [Revised: 04/16/2021] [Accepted: 05/25/2021] [Indexed: 11/10/2022] Open
Abstract
PURPOSE 99mTc-MDP single-photon emission computed tomography (SPECT) is an established tool for diagnosing lumbar stress, a common cause of low back pain (LBP) in pediatric patients. However, detection of small stress lesions is complicated by the low quality of SPECT, leading to significant interreader variability. The study objectives were to develop an approach based on a deep convolutional neural network (CNN) for detecting lumbar lesions in 99mTc-MDP scans and to compare its performance to that of physicians in a localization receiver operating characteristic (LROC) study. METHODS Sixty-five lesion-absent (LA) 99mTc-MDP studies performed in pediatric patients for evaluating LBP were retrospectively identified. Projections for an artificial focal lesion were acquired separately by imaging a 99mTc capillary tube at multiple distances from the collimator. An approach was developed to automatically insert lesions into LA scans to obtain realistic lesion-present (LP) 99mTc-MDP images while ensuring knowledge of the ground truth. A deep CNN was trained using 2.5D views extracted from LP and LA 99mTc-MDP image sets. During testing, the CNN was applied in a sliding-window fashion to compute a 3D "heatmap" reporting the probability of a lesion being present at each lumbar location. The algorithm was evaluated using cross-validation on a 99mTc-MDP test dataset that was also studied by five physicians in an LROC study. LP images in the test set were obtained by incorporating lesions at sites selected by a physician based on the clinical likelihood of injury in this population. RESULTS The deep learning (DL) system slightly outperformed human observers, achieving an area under the LROC curve (AUC-LROC) of 0.830 (95% confidence interval [CI]: [0.758, 0.924]) compared with 0.785 (95% CI: [0.738, 0.830]) for physicians. The AUC-LROC for the DL system was higher than that of two readers (difference in AUC-LROC [ΔAUC-LROC] = 0.049 and 0.053) who participated in the study and slightly lower than that of two other readers (ΔAUC-LROC = -0.006 and -0.012). Another reader outperformed DL by a more substantial margin (ΔAUC-LROC = -0.053). CONCLUSION The DL system provides performance comparable or superior to that of physicians in localizing small 99mTc-MDP-positive lumbar lesions.
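The sliding-window inference described in METHODS can be sketched as moving a trained patch classifier across the volume and writing its lesion probability at each window center. Window size, stride, and the model interface below are assumptions for illustration, not the paper's settings.

```python
import numpy as np
import torch

@torch.no_grad()
def sliding_window_heatmap(volume, model, win=32, stride=8):
    """Slide a cubic window over a volume and store the model's lesion
    probability at each window center, yielding a sparse 3D 'heatmap'.
    `model` is assumed to map a (1, 1, win, win, win) float tensor to a
    single logit."""
    model.eval()
    heat = np.zeros(volume.shape, dtype=np.float32)
    d, h, w = volume.shape
    for z in range(0, d - win + 1, stride):
        for y in range(0, h - win + 1, stride):
            for x in range(0, w - win + 1, stride):
                patch = torch.from_numpy(
                    volume[z:z + win, y:y + win, x:x + win]).float()[None, None]
                prob = torch.sigmoid(model(patch)).item()
                heat[z + win // 2, y + win // 2, x + win // 2] = prob
    return heat
```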
Collapse
Affiliation(s)
- Yoann Petibon
- Gordon Center of Medical Imaging, Massachusetts General Hospital, Boston, Massachusetts, USA; Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
| | - Frederic Fahey
- Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA; Division of Nuclear Medicine and Molecular Imaging, Boston Children's Hospital, Boston, Massachusetts, USA
| | - Xinhua Cao
- Division of Nuclear Medicine and Molecular Imaging, Boston Children's Hospital, Boston, Massachusetts, USA
| | - Zakhar Levin
- Gordon Center of Medical Imaging, Massachusetts General Hospital, Boston, Massachusetts, USA
| | - Briana Sexton-Stallone
- Division of Nuclear Medicine and Molecular Imaging, Boston Children's Hospital, Boston, Massachusetts, USA
| | - Anthony Falone
- Division of Nuclear Medicine and Molecular Imaging, Boston Children's Hospital, Boston, Massachusetts, USA
| | - Katherine Zukotynski
- Departments of Medicine and Radiology, McMaster University, Hamilton, Ontario, Canada
| | - Neha Kwatra
- Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA; Division of Nuclear Medicine and Molecular Imaging, Boston Children's Hospital, Boston, Massachusetts, USA
| | - Ruth Lim
- Gordon Center of Medical Imaging, Massachusetts General Hospital, Boston, Massachusetts, USA; Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
| | - Zvi Bar-Sever
- Institute of Nuclear Medicine, Schneider Children's Medical Center of Israel, Petah Tikva, Israel
| | - Yanis Chemli
- Gordon Center of Medical Imaging, Massachusetts General Hospital, Boston, Massachusetts, USA
| | - S Ted Treves
- Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA; Division of Nuclear Medicine and Molecular Imaging, Brigham and Women's Hospital, Boston, Massachusetts, USA
| | - Georges El Fakhri
- Gordon Center of Medical Imaging, Massachusetts General Hospital, Boston, Massachusetts, USA; Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
| | - Jinsong Ouyang
- Gordon Center of Medical Imaging, Massachusetts General Hospital, Boston, Massachusetts, USA; Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
| |
Collapse
|
95
|
Lou M, Qi Y, Meng J, Xu C, Wang Y, Pi J, Ma Y. DCANet: Dual contextual affinity network for mass segmentation in whole mammograms. Med Phys 2021; 48:4291-4303. [PMID: 34061371 DOI: 10.1002/mp.15010] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2021] [Revised: 04/27/2021] [Accepted: 05/25/2021] [Indexed: 12/26/2022] Open
Abstract
PURPOSE Breast mass segmentation in mammograms remains a crucial yet challenging topic in computer-aided diagnosis systems. Existing algorithms mainly use mass-centered patches to achieve mass segmentation, which is time-consuming and unstable in clinical diagnosis. Therefore, we aim to perform fully automated mass segmentation directly in whole mammograms with deep learning solutions. METHODS In this work, we propose a novel dual contextual affinity network (DCANet) for mass segmentation in whole mammograms. Based on the encoder-decoder structure, two lightweight yet effective contextual affinity modules are proposed: the global-guided affinity module (GAM) and the local-guided affinity module (LAM). The former aggregates the features integrated over all positions and captures long-range contextual dependencies, aiming to enhance the feature representations of homogeneous regions. The latter emphasizes semantic information around each position and exploits contextual affinity based on the local field-of-view, aiming to improve the distinction among heterogeneous regions. RESULTS The proposed DCANet is extensively evaluated on two public mammographic databases, the DDSM and the INbreast, achieving Dice similarity coefficients (DSC) of 85.95% and 84.65%, respectively. Both segmentation performance and computational efficiency surpass those of current state-of-the-art methods. CONCLUSION According to extensive qualitative and quantitative analyses, we believe that the proposed fully automated approach has sufficient robustness to provide fast and accurate diagnoses for possible clinical breast mass segmentation.
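The affinity modules echo the non-local attention pattern, in which every spatial position attends to all others so that features of homogeneous regions reinforce each other. Below is a generic PyTorch sketch of such a global affinity block; DCANet's actual GAM and LAM differ in detail, and the sizes are illustrative.

```python
import torch
import torch.nn as nn

class GlobalAffinity(nn.Module):
    """Non-local style block: each position attends to all positions."""
    def __init__(self, channels):
        super().__init__()
        inner = channels // 2
        self.q = nn.Conv2d(channels, inner, 1)
        self.k = nn.Conv2d(channels, inner, 1)
        self.v = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)   # (B, HW, C/2)
        k = self.k(x).flatten(2)                   # (B, C/2, HW)
        v = self.v(x).flatten(2).transpose(1, 2)   # (B, HW, C)
        affinity = torch.softmax(q @ k / (q.size(-1) ** 0.5), dim=-1)
        out = (affinity @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + out                             # residual connection

y = GlobalAffinity(32)(torch.randn(1, 32, 24, 24))
print(y.shape)  # torch.Size([1, 32, 24, 24])
```

A local variant would restrict the affinity computation to a neighborhood around each position instead of the full feature map.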
Collapse
Affiliation(s)
- Meng Lou
- School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu, China
| | - Yunliang Qi
- School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu, China
| | - Jie Meng
- School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu, China
| | - Chunbo Xu
- School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu, China
| | - Yiming Wang
- School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu, China
| | - Jiande Pi
- School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu, China
| | - Yide Ma
- School of Information Science and Engineering, Lanzhou University, Lanzhou, Gansu, China
| |
Collapse
|
96
|
Cao B, Zhang KC, Wei B, Chen L. Status quo and future prospects of artificial neural network from the perspective of gastroenterologists. World J Gastroenterol 2021; 27:2681-2709. [PMID: 34135549 PMCID: PMC8173384 DOI: 10.3748/wjg.v27.i21.2681] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/13/2021] [Revised: 03/29/2021] [Accepted: 04/22/2021] [Indexed: 02/06/2023] Open
Abstract
Artificial neural networks (ANNs) are one of the primary types of artificial intelligence and have been rapidly developed and applied in many fields. In recent years, there has been a sharp increase in research concerning ANNs in gastrointestinal (GI) diseases. This state-of-the-art technique exhibits excellent performance in diagnosis, prognostic prediction, and treatment. Competitions between ANNs and GI experts suggest that efficiency and accuracy might be made compatible by virtue of technical advancements. However, the shortcomings of ANNs are not negligible and may induce alterations in many aspects of medical practice. In this review, we introduce basic knowledge about ANNs and summarize the current achievements of ANNs in GI diseases from the perspective of gastroenterologists. Existing limitations and future directions are also proposed to optimize the clinical potential of ANNs. In consideration of barriers to interdisciplinary knowledge, sophisticated concepts are discussed using plain words and metaphors to make this review more easily understood by medical practitioners and the general public.
Collapse
Affiliation(s)
- Bo Cao
- Department of General Surgery & Institute of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China
| | - Ke-Cheng Zhang
- Department of General Surgery & Institute of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China
| | - Bo Wei
- Department of General Surgery & Institute of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China
| | - Lin Chen
- Department of General Surgery & Institute of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing 100853, China
| |
Collapse
|
97
|
Lei W, Mei H, Sun Z, Ye S, Gu R, Wang H, Huang R, Zhang S, Zhang S, Wang G. Automatic segmentation of organs-at-risk from head-and-neck CT using separable convolutional neural network with hard-region-weighted loss. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.01.135] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
98
|
Villarini B, Asaturyan H, Kurugol S, Afacan O, Bell JD, Thomas EL. 3D Deep Learning for Anatomical Structure Segmentation in Multiple Imaging Modalities. PROCEEDINGS. IEEE INTERNATIONAL SYMPOSIUM ON COMPUTER-BASED MEDICAL SYSTEMS 2021; 2021:166-171. [PMID: 35224185 PMCID: PMC8867534 DOI: 10.1109/cbms52027.2021.00066] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
Accurate, quantitative segmentation of anatomical structures in radiological scans, such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT), can produce significant biomarkers and can be integrated into computer-aided diagnosis (CADx) systems to support the interpretation of medical images from multi-protocol scanners. However, there are serious challenges in developing robust automated segmentation techniques, including high variations in anatomical structure and size, the presence of edge-based artefacts, and heavy, uncontrolled breathing that can produce blurred motion artefacts. This paper presents a novel computing approach for automatic organ and muscle segmentation in medical images from multiple modalities by harnessing the advantages of deep learning techniques in a two-part process: (1) a 3D encoder-decoder, Rb-UNet, builds a localisation model and a 3D Tiramisu network generates a boundary-preserving segmentation model for each target structure; (2) the fully trained Rb-UNet predicts a 3D bounding box encapsulating the target structure of interest, after which the fully trained Tiramisu model performs segmentation to reveal detailed organ or muscle boundaries. The proposed approach is evaluated on six different datasets, including MRI, Dynamic Contrast Enhanced (DCE) MRI and CT scans targeting the pancreas, liver, kidneys and psoas muscle, and achieves quantitative measures of mean Dice similarity coefficient (DSC) that surpass or are comparable with the state-of-the-art. A qualitative evaluation performed by two independent radiologists verified the preservation of detailed organ and muscle boundaries.
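The two-part process reads as a localize-then-segment pipeline: predict a 3D bounding box, crop with a safety margin, and segment only the crop. The sketch below illustrates that flow under assumed model interfaces; neither function signature is from the paper.

```python
import torch

@torch.no_grad()
def localize_then_segment(volume, loc_model, seg_model, margin=8):
    """Two-stage pipeline sketch: a localisation net predicts a 3D bounding
    box, the volume is cropped (plus a margin), and a segmentation net runs
    on the crop only. Assumptions: volume is (1, 1, D, H, W); loc_model
    returns (z1, y1, x1, z2, y2, x2); seg_model preserves spatial size."""
    box = loc_model(volume).round().long().squeeze()
    z1, y1, x1, z2, y2, x2 = box.tolist()
    d, h, w = volume.shape[2:]
    z1, y1, x1 = max(z1 - margin, 0), max(y1 - margin, 0), max(x1 - margin, 0)
    z2, y2, x2 = min(z2 + margin, d), min(y2 + margin, h), min(x2 + margin, w)
    crop = volume[:, :, z1:z2, y1:y2, x1:x2]
    mask_crop = torch.sigmoid(seg_model(crop)) > 0.5   # binary mask on crop
    mask = torch.zeros_like(volume, dtype=torch.bool)  # paste back into volume
    mask[:, :, z1:z2, y1:y2, x1:x2] = mask_crop
    return mask
```

Running the segmenter on the crop rather than the full scan both saves memory and spares the network from learning to ignore distant, irrelevant anatomy.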
Collapse
Affiliation(s)
| | - Hykoush Asaturyan
- School of Computer Science, University of Westminster, London, United Kingdom
| | - Sila Kurugol
- Department of Radiology, Boston Children’s Hospital & Harvard Medical School, Boston, Massachusetts, USA
| | - Onur Afacan
- Department of Radiology, Boston Children’s Hospital & Harvard Medical School, Boston, Massachusetts, USA
| | - Jimmy D. Bell
- School of Life Sciences, University of Westminster, London, United Kingdom
| | - E. Louise Thomas
- School of Life Sciences, University of Westminster, London, United Kingdom
| |
Collapse
|
99
|
Farhangi MM, Sahiner B, Petrick N, Pezeshk A. Automatic lung nodule detection in thoracic CT scans using dilated slice-wise convolutions. Med Phys 2021; 48:3741-3751. [PMID: 33932241 DOI: 10.1002/mp.14915] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2020] [Revised: 04/08/2021] [Accepted: 04/15/2021] [Indexed: 12/24/2022] Open
Abstract
PURPOSE Most state-of-the-art automated medical image analysis methods for volumetric data rely on adaptations of two-dimensional (2D) and three-dimensional (3D) convolutional neural networks (CNNs). In this paper, we develop a novel unified CNN-based model that combines the benefits of 2D and 3D networks for analyzing volumetric medical images. METHODS In our proposed framework, multiscale contextual information is first extracted from 2D slices inside a volume of interest (VOI). This is followed by dilated 1D convolutions across slices to aggregate in-plane features in a slice-wise manner and encode the information in the entire volume. Moreover, we formalize a curriculum learning strategy for a two-stage system (i.e., a system that consists of screening and false positive reduction), where the training samples are presented to the network in a meaningful order to further improve the performance. RESULTS We evaluated the proposed approach by developing a computer-aided detection (CADe) system for lung nodules. Our results on 888 CT exams demonstrate that the proposed approach can effectively analyze volumetric data by achieving a sensitivity of > 0.99 in the screening stage and a sensitivity of > 0.96 at eight false positives per case in the false positive reduction stage. CONCLUSION Our experimental results show that the proposed method provides competitive results compared to state-of-the-art 3D frameworks. In addition, we illustrate the benefits of curriculum learning strategies in two-stage systems that are of common use in medical imaging applications.
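A minimal PyTorch sketch of the 2D-then-1D design, per-slice 2D features aggregated by dilated 1D convolutions across the slice axis, follows; channel counts, dilation rates, and the VOI size are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SliceWiseNet(nn.Module):
    """Extract 2D features per slice, then run dilated 1D convolutions
    along the slice axis to aggregate context over the whole VOI."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.slice_enc = nn.Sequential(            # shared 2D encoder
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.across = nn.Sequential(               # dilated 1D convs over slices
            nn.Conv1d(32, 32, 3, padding=1, dilation=1), nn.ReLU(),
            nn.Conv1d(32, 32, 3, padding=2, dilation=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):                          # x: (B, D, H, W) VOI
        b, d = x.shape[:2]
        feats = self.slice_enc(x.reshape(b * d, 1, *x.shape[2:]))
        feats = feats.view(b, d, 32).transpose(1, 2)   # (B, 32, D)
        return self.head(self.across(feats).squeeze(-1))

print(SliceWiseNet()(torch.randn(2, 16, 48, 48)).shape)  # torch.Size([2, 2])
```

Increasing dilation widens the cross-slice receptive field without the parameter cost of full 3D convolutions, which is the trade-off this hybrid design exploits.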
Collapse
Affiliation(s)
- M Mehdi Farhangi
- Division of Imaging, Diagnostics, and Software Reliability, CDRH, U.S Food and Drug Administration, Silver Spring, MD, 20993, USA
| | - Berkman Sahiner
- Division of Imaging, Diagnostics, and Software Reliability, CDRH, U.S Food and Drug Administration, Silver Spring, MD, 20993, USA
| | - Nicholas Petrick
- Division of Imaging, Diagnostics, and Software Reliability, CDRH, U.S Food and Drug Administration, Silver Spring, MD, 20993, USA
| | - Aria Pezeshk
- Division of Imaging, Diagnostics, and Software Reliability, CDRH, U.S Food and Drug Administration, Silver Spring, MD, 20993, USA
| |
Collapse
|
100
|
Alkenani AH, Li Y, Xu Y, Zhang Q. Predicting Alzheimer's Disease from Spoken and Written Language Using Fusion-Based Stacked Generalization. J Biomed Inform 2021; 118:103803. [PMID: 33965639 DOI: 10.1016/j.jbi.2021.103803] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2020] [Revised: 04/06/2021] [Accepted: 05/03/2021] [Indexed: 11/29/2022]
Abstract
The importance of automating the diagnosis of Alzheimer's disease (AD) to facilitate its early prediction has long been emphasized, hampered in part by a lack of empirical support. Given the evident association of AD with age and the growing aging population owing to the general well-being of individuals, the estimated economic burden is unprecedented. Consequently, many recent studies have attempted to exploit the language deficiency caused by cognitive decline to automate the diagnostic task by training machine learning (ML) algorithms with linguistic patterns and deficits. In this study, we aim to develop multiple heterogeneous stacked fusion models that harness the advantages of several base learning algorithms to improve the overall generalizability and robustness of AD diagnostic ML models, using two different written- and spoken-language datasets in parallel to train our stacked fusion models. Further, we examined the effect of linking these two datasets to develop a hybrid stacked fusion model that can predict AD from both written and spoken language. Our feature spaces involved two widely used linguistic patterns: lexicosyntactic and character n-gram spaces. We first investigated the lexicosyntactics of AD alongside healthy controls (HC), exploring a few new lexicosyntactic features, and then optimized the lexicosyntactic feature space by proposing a correlation-based feature selection technique that eliminates features according to thresholds on their feature-feature inter-correlations and feature-target correlations. Our stacked fusion models establish benchmarks on both datasets, with AUCs of 98.1% and 99.47% for the spoken- and written-based datasets, respectively, and corresponding accuracy and F1 score values around 95% on the spoken-based dataset and around 97% on the written-based dataset. Likewise, the hybrid stacked fusion model on the linked data achieves an AUC of 99.2%, with accuracy and F1 score around 97%. In view of the achieved performance and enhanced generalizability of such fusion models over single classifiers, this study suggests replacing the initial traditional screening test with such models, which can be embedded into an online format for a fully automated remote diagnosis.
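The two ingredients named here, correlation-based feature selection and heterogeneous stacked generalization, can be sketched with scikit-learn. Thresholds, base learners, and the synthetic data below are our choices for illustration, not the authors' configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

def correlation_select(X, y, target_thr=0.05, inter_thr=0.9):
    """Keep features whose correlation with the label exceeds target_thr,
    then drop one of any feature pair whose mutual correlation exceeds
    inter_thr (a simplified reading of the abstract's selection idea)."""
    corr_y = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    keep = [j for j in range(X.shape[1]) if corr_y[j] >= target_thr]
    selected = []
    for j in sorted(keep, key=lambda j: -corr_y[j]):   # strongest first
        if all(abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) < inter_thr
               for s in selected):
            selected.append(j)
    return np.array(sorted(selected))

# Synthetic stand-in for a linguistic feature matrix
X, y = make_classification(n_samples=300, n_features=40, random_state=0)
cols = correlation_select(X, y)

# Heterogeneous base learners stacked under a logistic meta-learner
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression())
print(stack.fit(X[:, cols], y).score(X[:, cols], y))
```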
Collapse
Affiliation(s)
- Ahmed H Alkenani
- School of Computer Science, Queensland University of Technology, Brisbane 4001, Australia; The Australian e-Health Research Centre, CSIRO, Brisbane 4029, Australia
| | - Yuefeng Li
- School of Computer Science, Queensland University of Technology, Brisbane 4001, Australia.
| | - Yue Xu
- School of Computer Science, Queensland University of Technology, Brisbane 4001, Australia
| | - Qing Zhang
- The Australian e-Health Research Centre, CSIRO, Brisbane 4029, Australia
| |
Collapse
|