101
Zhang S, Li L, Yu P, Wu C, Wang X, Liu M, Deng S, Guo C, Tan R. A deep learning model for drug screening and evaluation in bladder cancer organoids. Front Oncol 2023;13:1064548. PMID: 37168370; PMCID: PMC10164950; DOI: 10.3389/fonc.2023.1064548.
Abstract
Three-dimensional cell and tissue culture, which produces biological structures termed organoids, has rapidly advanced biological research, including basic research, drug discovery, and regenerative medicine. However, due to a lack of algorithms and software, analysis of organoid growth is labor-intensive and time-consuming: it currently requires individual measurements with software such as ImageJ, resulting in low efficiency in high-throughput screens. To solve this problem, we developed a bladder cancer organoid culture system, generated microscopic images, and developed a novel automatic image segmentation model, ACU2Net (Attention and Cross U2Net). Using a dataset of two hundred images of growing organoids (day 1 to day 7) and of organoids with or without drug treatment, our model applies deep learning to image segmentation. To further improve prediction accuracy, a variety of methods are integrated to improve the model's specificity, including Grouping Cross Merge (GCM) modules at the model's skip connections to strengthen feature information. After feature acquisition, a residual attention gate (RAG) suppresses unnecessary feature propagation and improves the precision of organoid segmentation by establishing rich context-dependent models of local features. Experimental results show that each optimization scheme significantly improves model performance. The sensitivity, specificity, and F1-score of the ACU2Net model reached 94.81%, 88.50%, and 91.54%, respectively, exceeding those of U-Net, Attention U-Net, and other available network models. Together, this novel ACU2Net model provides more accurate segmentation of organoid images and can improve the efficiency of organoid-based drug screening.
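The headline numbers in this abstract (sensitivity, specificity, F1-score) all derive from a segmentation confusion matrix; a minimal sketch of that relationship follows (the function name and toy counts are illustrative, not taken from the paper):

```python
def segmentation_metrics(tp, fp, tn, fn):
    """Derive the metrics reported for a segmentation model from pixel counts:
    tp/fp = foreground pixels correctly/incorrectly predicted,
    tn/fn = background pixels correctly rejected / foreground pixels missed."""
    sensitivity = tp / (tp + fn)       # recall: organoid pixels correctly found
    specificity = tn / (tn + fp)       # background pixels correctly rejected
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, f1

# toy counts for illustration, not the paper's data
sens, spec, f1 = segmentation_metrics(tp=90, fp=20, tn=80, fn=10)
```

Note that F1 balances precision against sensitivity, which is why it sits between the two headline rates in the abstract.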
Affiliation(s)
- Shudi Zhang
- School of Information Science and Engineering, Yunnan University, Kunming, China
- Lu Li
- College of Life Sciences, Yunnan University, Kunming, China
- Pengfei Yu
- School of Information Science and Engineering, Yunnan University, Kunming, China
- *Correspondence: Ruirong Tan, ; Pengfei Yu,
- Chunyue Wu
- College of Life Sciences, Yunnan University, Kunming, China
- Xiaowen Wang
- School of Information Science and Engineering, Yunnan University, Kunming, China
- Meng Liu
- College of Life Sciences, Yunnan University, Kunming, China
- Chunming Guo
- College of Life Sciences, Yunnan University, Kunming, China
- Ruirong Tan
- Center for Organoids and Translational Pharmacology, Translational Chinese Medicine Key Laboratory of Sichuan Province, Sichuan Institute for Translational Chinese Medicine, Sichuan Academy of Chinese Medicine Sciences, Chengdu, China
- *Correspondence: Ruirong Tan, ; Pengfei Yu,
102
Liu M, Wu J, Wang N, Zhang X, Bai Y, Guo J, Zhang L, Liu S, Tao K. The value of artificial intelligence in the diagnosis of lung cancer: A systematic review and meta-analysis. PLoS One 2023;18:e0273445. PMID: 36952523; PMCID: PMC10035910; DOI: 10.1371/journal.pone.0273445.
Abstract
Lung cancer is a common malignant tumor with high clinical disability and death rates. Currently, lung cancer diagnosis relies mainly on manual analysis of pathology sections, but the low efficiency and subjectivity of manual reading can lead to misdiagnoses and missed diagnoses. With the continuous development of science and technology, artificial intelligence (AI) has gradually been applied to imaging diagnosis. Although there are reports on AI-assisted lung cancer diagnosis, problems such as small sample sizes and outdated data remain. Therefore, this study included a large amount of recent data and used meta-analysis to evaluate the value of AI for lung cancer diagnosis. With the help of Stata 16.0, the value of AI-assisted lung cancer diagnosis was assessed by specificity, sensitivity, negative likelihood ratio, positive likelihood ratio, diagnostic odds ratio, and summary receiver operating characteristic (SROC) curves. Meta-regression and subgroup analysis were used to investigate the value of AI-assisted lung cancer diagnosis. The meta-analysis showed that the pooled sensitivity of the AI-aided diagnosis system for lung cancer was 0.87 [95% confidence interval (CI): 0.82, 0.90] and the pooled specificity was 0.87 [95% CI: 0.82, 0.91], corresponding to a missed-diagnosis rate of 13% and a misdiagnosis rate of 13%. The positive likelihood ratio was 6.5 [95% CI: 4.6, 9.3], the negative likelihood ratio was 0.15 [95% CI: 0.11, 0.21], the diagnostic odds ratio was 43 [95% CI: 24, 76], and the area under the SROC curve was 0.93 [95% CI: 0.91, 0.95]. Based on these results, the AI-assisted diagnostic system for computed tomography (CT) imaging has considerable accuracy for lung cancer diagnosis, is of significant value, and appears feasible for broader application in clinical diagnosis.
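For orientation, the likelihood ratios and diagnostic odds ratio reported above can be derived directly from pooled sensitivity and specificity. A minimal sketch (the meta-analytic pooled values of 6.5 and 43 differ slightly from this naive point calculation, since they are pooled across studies rather than computed from the summary estimates):

```python
def diagnostic_summary(sensitivity, specificity):
    """Likelihood ratios and diagnostic odds ratio from sensitivity/specificity."""
    lr_pos = sensitivity / (1 - specificity)   # how much a positive test raises disease odds
    lr_neg = (1 - sensitivity) / specificity   # how much a negative test lowers disease odds
    dor = lr_pos / lr_neg                      # diagnostic odds ratio
    return lr_pos, lr_neg, dor

# pooled point estimates from the meta-analysis: sensitivity = specificity = 0.87
lr_pos, lr_neg, dor = diagnostic_summary(0.87, 0.87)
# lr_pos ≈ 6.7 and lr_neg ≈ 0.15, close to the pooled 6.5 and 0.15 reported above
```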
Affiliation(s)
- Mingsi Liu
- Department of Computer and Artificial Intelligence, Zhengzhou University, Zhengzhou, Henan, China
- Jinghui Wu
- College of Life Science, Sichuan University, Chengdu, Sichuan, China
- Nian Wang
- School of Basic Medical Sciences, Chengdu Medical College, Chengdu, Sichuan, China
- Xianqin Zhang
- School of Basic Medical Sciences, Chengdu Medical College, Chengdu, Sichuan, China
- Yujiao Bai
- School of Basic Medical Sciences, Chengdu Medical College, Chengdu, Sichuan, China
- Non-Coding RNA and Drug Discovery Key Laboratory of Sichuan Province, Chengdu Medical College, Chengdu, Sichuan, China
- Jinlin Guo
- Chongqing Key Laboratory of Sichuan-Chongqing Co-construction for Diagnosis and Treatment of Infectious Diseases Integrated Traditional Chinese and Western Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, China
- Lin Zhang
- Department of Pharmacy, Shaoxing People's Hospital, Shaoxing, Zhejiang, China
- Shulin Liu
- Department of the First Affiliated Hospital of Chengdu Medical College, Sichuan, China
- Ke Tao
- College of Life Science, Sichuan University, Chengdu, Sichuan, China
103
Hocke J, Krauth J, Krause C, Gerlach S, Warnemünde N, Affeldt K, van Beek N, Schmidt E, Voigt J. Computer-aided classification of indirect immunofluorescence patterns on esophagus and split skin for the detection of autoimmune dermatoses. Front Immunol 2023;14:1111172. PMID: 36926325; PMCID: PMC10013071; DOI: 10.3389/fimmu.2023.1111172.
Abstract
Autoimmune bullous dermatoses (AIBD) are rare diseases that affect human skin and mucous membranes. Clinically, they are characterized by blister formation and/or erosions. Depending on the structures involved and the depth of blister formation, they are grouped into pemphigus diseases, pemphigoid diseases, and dermatitis herpetiformis. Classification of AIBD into their sub-entities is crucial to guide treatment decisions. One of the most sensitive screening methods for initial differentiation of AIBD is indirect immunofluorescence (IIF) microscopy on tissue sections of monkey esophagus and primate salt-split skin, which are used to detect disease-specific autoantibodies. Interpretation of IIF patterns requires a detailed examination of the image by trained professionals. Automating this process is a challenging task with these highly complex tissue substrates, but it offers the great advantage of an objective result. Here, we present computer-aided classification of esophagus and salt-split skin IIF images. We show how deep networks can be adapted to the specifics and challenges of IIF image analysis by incorporating segmentation of relevant regions into the prediction process, and we demonstrate their high accuracy. This semi-automatic extension can reduce the workload of professionals when reading tissue sections in IIF testing. Furthermore, these results on highly complex tissue sections suggest that further integration of semi-automated workflows into the daily routine of diagnostic laboratories is promising.
Affiliation(s)
- Jens Hocke
- Institute for Experimental Immunology, affiliated to EUROIMMUN Medizinische Labordiagnostika AG, Lübeck, Germany
- Jens Krauth
- Institute for Experimental Immunology, affiliated to EUROIMMUN Medizinische Labordiagnostika AG, Lübeck, Germany
- Christopher Krause
- Institute for Experimental Immunology, affiliated to EUROIMMUN Medizinische Labordiagnostika AG, Lübeck, Germany
- Stefan Gerlach
- Institute for Experimental Immunology, affiliated to EUROIMMUN Medizinische Labordiagnostika AG, Lübeck, Germany
- Nicole Warnemünde
- Institute for Experimental Immunology, affiliated to EUROIMMUN Medizinische Labordiagnostika AG, Lübeck, Germany
- Kai Affeldt
- Institute for Experimental Immunology, affiliated to EUROIMMUN Medizinische Labordiagnostika AG, Lübeck, Germany
- Nina van Beek
- Department of Dermatology, Allergology and Venerology, University Hospital Schleswig-Holstein/University of Lübeck, Lübeck, Germany
- Enno Schmidt
- Department of Dermatology, Allergology and Venerology, University Hospital Schleswig-Holstein/University of Lübeck, Lübeck, Germany
- Lübeck Institute of Experimental Dermatology (LIED), University of Lübeck, Lübeck, Germany
- Jörn Voigt
- Institute for Experimental Immunology, affiliated to EUROIMMUN Medizinische Labordiagnostika AG, Lübeck, Germany
104
Mridha MF, Prodeep AR, Hoque ASMM, Islam MR, Lima AA, Kabir MM, Hamid MA, Watanobe Y. A Comprehensive Survey on the Progress, Process, and Challenges of Lung Cancer Detection and Classification. J Healthc Eng 2022;2022:5905230. PMID: 36569180; PMCID: PMC9788902; DOI: 10.1155/2022/5905230.
Abstract
Lung cancer is the leading cause of cancer deaths worldwide, and its death rate continues to rise. Detecting it early improves the chances of recovery. However, because radiologists are few in number and overworked, the growing volume of image data makes accurate evaluation difficult. As a result, many researchers have proposed automated methods that use medical imaging to predict the growth of cancer cells quickly and accurately. Much prior work addressed computer-aided detection (CADe) and computer-aided diagnosis (CADx) in computed tomography (CT), magnetic resonance imaging (MRI), and X-ray, with the goal of effective detection and segmentation of pulmonary nodules, as well as classifying nodules as malignant or benign. Still, no complete, comprehensive review covering all aspects of lung cancer detection has been done. In this paper, every aspect of lung cancer detection is discussed in detail, including datasets, image preprocessing, segmentation methods, optimal feature extraction and selection methods, evaluation metrics, and classifiers. Finally, the study examines several lung cancer-related issues and possible solutions.
Affiliation(s)
- M. F. Mridha
- Department of Computer Science and Engineering, American International University Bangladesh, Dhaka 1229, Bangladesh
- Akibur Rahman Prodeep
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- A. S. M. Morshedul Hoque
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Md. Rashedul Islam
- Department of Computer Science and Engineering, University of Asia Pacific, Dhaka 1216, Bangladesh
- Aklima Akter Lima
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Muhammad Mohsin Kabir
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Md. Abdul Hamid
- Department of Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Yutaka Watanobe
- Department of Computer Science and Engineering, University of Aizu, Aizuwakamatsu 965-8580, Japan
105
Liu J, Cao L, Akin O, Tian Y. Robust and accurate pulmonary nodule detection with self-supervised feature learning on domain adaptation. Front Radiol 2022;2:1041518. PMID: 37492669; PMCID: PMC10365286; DOI: 10.3389/fradi.2022.1041518.
Abstract
Medical imaging data annotation is expensive and time-consuming. Supervised deep learning approaches may overfit when trained with limited medical data, which in turn affects the robustness of computer-aided diagnosis (CAD) on CT scans collected from different scanner vendors. Additionally, the high false-positive rate of automatic lung nodule detection methods prevents their use in routine clinical diagnosis. To tackle these issues, we first introduce a novel self-supervised learning schema that trains a pre-trained model by learning rich feature representations from large-scale unlabeled data without extra annotation, which guarantees consistent detection performance on novel datasets. Then, a 3D feature pyramid network (3DFPN) is proposed for high-sensitivity nodule detection by extracting multi-scale features, where the weights of the backbone network are initialized from the pre-trained model and then fine-tuned in a supervised manner. Further, a High Sensitivity and Specificity (HS2) network is proposed to reduce false positives by tracking appearance changes across consecutive CT slices on Location History Images (LHI) for the detected nodule candidates. The proposed method's performance and robustness are evaluated on several publicly available datasets, including LUNA16, SPIE-AAPM, LungTIME, and HMS. Our detector achieves a state-of-the-art sensitivity of 90.6% at 1/8 false positives per scan on the LUNA16 dataset. The framework's generalizability has been evaluated on three additional datasets (i.e., SPIE-AAPM, LungTIME, and HMS) captured by different types of CT scanners.
Affiliation(s)
- Jingya Liu
- The City College of New York, New York, NY, USA
- Oguz Akin
- Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Yingli Tian
- The City College of New York, New York, NY, USA
106
Katase S, Ichinose A, Hayashi M, Watanabe M, Chin K, Takeshita Y, Shiga H, Tateishi H, Onozawa S, Shirakawa Y, Yamashita K, Shudo J, Nakamura K, Nakanishi A, Kuroki K, Yokoyama K. Development and performance evaluation of a deep learning lung nodule detection system. BMC Med Imaging 2022;22:203. PMID: 36419044; PMCID: PMC9682774; DOI: 10.1186/s12880-022-00938-8.
Abstract
BACKGROUND Lung cancer is the leading cause of cancer-related deaths throughout the world. Chest computed tomography (CT) is now widely used in the screening and diagnosis of lung cancer due to its effectiveness. Radiologists must identify each small nodule shadow in 3D volume images, which is very burdensome and often results in missed nodules. To address these challenges, we developed a computer-aided detection (CAD) system that automatically detects lung nodules in CT images. METHODS A total of 1997 chest CT scans were collected for algorithm development. The algorithm was designed using deep learning technology. In addition to evaluating detection performance on various public datasets, its robustness to changes in radiation dose was assessed in a phantom study. To investigate the clinical usefulness of the CAD system, a reader study was conducted with 10 doctors, including inexperienced and expert readers. This study investigated whether using the CAD as a second reader could prevent nodular lung lesions that require follow-up examinations from being overlooked. Analysis was performed using the Jackknife Free-Response Receiver-Operating Characteristic (JAFROC) method. RESULTS The CAD system achieved sensitivities of 0.98 and 0.96 at 3.1 and 7.25 false positives per case on two public datasets. In the phantom study, sensitivity did not change within the range of practical doses. The reader study showed that using the system as a second reader significantly improved the detection of clinically actionable nodules (p = 0.026). CONCLUSIONS We developed a deep learning-based CAD system that is robust to imaging conditions. Using this system as a second reader increased detection performance.
Affiliation(s)
- Shichiro Katase
- Department of Radiology, Faculty of Medicine, Kyorin University, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Akimichi Ichinose
- Imaging Technology Center, ICT Strategy Division, Fujifilm Corporation, 2-26-30, Nishi-Azabu, Minato-ku, Tokyo, Japan
- Mahiro Hayashi
- Department of Radiology, Faculty of Medicine, Kyorin University, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Masanaka Watanabe
- Department of Radiology, Faculty of Medicine, Kyorin University, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Kinka Chin
- Department of Radiology, Faculty of Medicine, Kyorin University, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Yuhei Takeshita
- Department of Radiology, Faculty of Medicine, Kyorin University, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Hisae Shiga
- Department of Radiology, Faculty of Medicine, Kyorin University, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Hidekatsu Tateishi
- Department of Radiology, Faculty of Medicine, Kyorin University, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Shiro Onozawa
- Department of Radiology, Faculty of Medicine, Kyorin University, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Yuya Shirakawa
- Department of Radiology, Kyorin University Hospital, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Koji Yamashita
- Department of Radiology, Kyorin University Hospital, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Jun Shudo
- Department of Radiology, Kyorin University Hospital, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Keigo Nakamura
- Imaging Technology Center, ICT Strategy Division, Fujifilm Corporation, 2-26-30, Nishi-Azabu, Minato-ku, Tokyo, Japan
- Akihito Nakanishi
- Department of Radiology, Kyorin University Hospital, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Kazunori Kuroki
- Department of Radiology, Faculty of Medicine, Kyorin University, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
- Kenichi Yokoyama
- Department of Radiology, Faculty of Medicine, Kyorin University, 6-20-2, Shinkawa, Mitaka-shi, Tokyo, Japan
107
Sekeroglu K, Soysal ÖM. Multi-Perspective Hierarchical Deep-Fusion Learning Framework for Lung Nodule Classification. Sensors (Basel) 2022;22:8949. PMID: 36433541; PMCID: PMC9697252; DOI: 10.3390/s22228949.
Abstract
Lung cancer is the cancer type that causes the most deaths in both men and women. Computer-aided detection (CAD) and diagnosis systems can play a very important role in helping physicians with cancer treatment. This study proposes a hierarchical deep-fusion learning scheme in a CAD framework for the detection of nodules from computed tomography (CT) scans. In the proposed hierarchical approach, a decision is made at each level individually, employing the decisions from the previous level; further, individual decisions are computed for several perspectives of a volume of interest. This study explores three different approaches to obtaining decisions in a hierarchical fashion: the first model uses raw images; the second uses a single type of feature image with salient content; the last employs multiple types of feature images. All models learn their parameters by means of supervised learning. The proposed CAD frameworks are tested using lung CT scans from the LIDC/IDRI database. The experimental results show that the proposed multi-perspective hierarchical fusion approach significantly improves classification performance: the hierarchical deep-fusion learning model achieved a sensitivity of 95% with only 0.4 false positives per scan.
Affiliation(s)
- Kazim Sekeroglu
- Department of Computer Science, Southeastern Louisiana University, Hammond, LA 70402, USA
- Ömer Muhammet Soysal
- Department of Computer Science, Southeastern Louisiana University, Hammond, LA 70402, USA
- School of Electrical Engineering and Computer Science, Louisiana State University, Baton Rouge, LA 70803, USA
108
Chang R, Qi S, Wu Y, Song Q, Yue Y, Zhang X, Guan Y, Qian W. Deep multiple instance learning for predicting chemotherapy response in non-small cell lung cancer using pretreatment CT images. Sci Rep 2022;12:19829. PMID: 36400881; PMCID: PMC9672640; DOI: 10.1038/s41598-022-24278-3.
Abstract
Individual prognosis under chemotherapy differs widely in non-small cell lung cancer (NSCLC), so there is an urgent need to precisely predict and assess treatment response. We developed a deep multiple-instance learning (DMIL) based model for predicting chemotherapy response in NSCLC from pretreatment CT images. Two datasets of NSCLC patients treated with chemotherapy as first-line treatment were collected from two hospitals. Dataset 1 (163 response and 138 nonresponse) was used to train, validate, and test the DMIL model, and dataset 2 (22 response and 20 nonresponse) served as the external validation cohort. Five backbone networks in the feature extraction module and three pooling methods were compared. The DMIL with a pre-trained VGG16 backbone and attention-mechanism pooling performed best, with an accuracy of 0.883 and area under the curve (AUC) of 0.982 on dataset 1; with max pooling and convolutional pooling, the AUC was 0.958 and 0.931, respectively. On dataset 2, the best DMIL model produced an accuracy of 0.833 and AUC of 0.940. Deep learning models based on MIL can predict chemotherapy response in NSCLC using pretreatment CT images, and the pre-trained VGG16 with attention-mechanism pooling yielded the best predictions.
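The attention-mechanism pooling that performed best here is commonly implemented in the style of attention-based MIL (Ilse et al.): each instance (a CT patch embedding) is scored by a small attention network, and the bag-level representation is the attention-weighted average of instances. A minimal numpy sketch, with shapes and parameter names that are illustrative rather than the paper's actual architecture:

```python
import numpy as np

def attention_mil_pool(instance_feats, V, w):
    """Attention-based MIL pooling: score each instance, softmax the scores,
    and return the weighted average as the bag-level representation."""
    scores = np.tanh(instance_feats @ V) @ w      # one scalar score per instance
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # softmax over instances
    return weights @ instance_feats               # convex combination of instances

rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 8))   # toy bag: 5 patches, 8-dim embeddings
V = rng.normal(size=(8, 4))       # attention projection (toy size)
w = rng.normal(size=4)            # attention output vector
bag_embedding = attention_mil_pool(feats, V, w)   # shape (8,)
```

Because the weights are a softmax, the pooled vector is always a convex combination of the instance embeddings; with zero attention parameters it reduces to plain mean pooling.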
Affiliation(s)
- Runsheng Chang
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Shouliang Qi
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
- Yanan Wu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Qiyuan Song
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Yong Yue
- Department of Radiology, Shengjing Hospital of China Medical University, Shenyang, China
- Xiaoye Zhang
- Department of Oncology, Shengjing Hospital of China Medical University, Shenyang, China
- Yubao Guan
- Department of Radiology, The Fifth Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Wei Qian
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
109
Wang L. Deep Learning Techniques to Diagnose Lung Cancer. Cancers (Basel) 2022;14:5569. PMID: 36428662; PMCID: PMC9688236; DOI: 10.3390/cancers14225569.
Abstract
Medical imaging tools are essential in early-stage lung cancer diagnostics and in monitoring lung cancer during treatment. Various medical imaging modalities, such as chest X-ray, magnetic resonance imaging, positron emission tomography, computed tomography, and molecular imaging techniques, have been extensively studied for lung cancer detection. These techniques have limitations, however, including the inability to classify cancer images automatically, which makes them less suitable for patients with other pathologies. A sensitive and accurate approach to the early diagnosis of lung cancer is urgently needed. Deep learning is one of the fastest-growing topics in medical imaging, with rapidly emerging applications spanning image-based and textual data modalities. With the help of deep learning-based medical imaging tools, clinicians can detect and classify lung nodules more accurately and quickly. This paper presents recent developments in deep learning-based imaging techniques for early lung cancer detection.
Affiliation(s)
- Lulu Wang
- Biomedical Device Innovation Center, Shenzhen Technology University, Shenzhen 518118, China
110
Chao HS, Wu YH, Siana L, Chen YM. Generating High-Resolution CT Slices from Two Image Series Using Deep-Learning-Based Resolution Enhancement Methods. Diagnostics (Basel) 2022;12:2725. PMID: 36359568; PMCID: PMC9689374; DOI: 10.3390/diagnostics12112725.
Abstract
Medical image super-resolution (SR) has mainly been developed for a single image in the literature. However, there is a growing demand for high-resolution, thin-slice medical images. We hypothesized that fusing the two planes of a computed tomography (CT) study and applying the SR model to the third plane could yield high-quality thin-slice SR images. From the same CT study, we collected axial planes of 1 mm and 5 mm in thickness and coronal planes of 5 mm in thickness. Four SR algorithms were then used for SR reconstruction. Quantitative measurements were performed for image quality testing. We also tested the effects of different regions of interest (ROIs). Based on quantitative comparisons, the image quality obtained when the SR models were applied to the sagittal plane was better than that when applying the models to the other planes. The results were statistically significant according to the Wilcoxon signed-rank test. The overall effect of the enhanced deep residual network (EDSR) model was superior to those of the other three resolution-enhancement methods. A maximal ROI containing minimal blank areas was the most appropriate for quantitative measurements. Fusing two series of thick-slice CT images and applying SR models to the third plane can yield high-resolution thin-slice CT images. EDSR provides superior SR performance across all ROI conditions.
Affiliation(s)
- Heng-Sheng Chao
- Department of Chest Medicine, Taipei Veterans General Hospital, Taipei City 112, Taiwan
- Faculty of Medicine, School of Medicine, National Yang Ming Chiao Tung University, Taipei City 112, Taiwan
- Yu-Hong Wu
- Research and Development III, V5 Technologies Co., Ltd., Hsinchu 300, Taiwan
- Linda Siana
- Research and Development III, V5 Technologies Co., Ltd., Hsinchu 300, Taiwan
- Yuh-Min Chen
- Department of Chest Medicine, Taipei Veterans General Hospital, Taipei City 112, Taiwan
- Faculty of Medicine, School of Medicine, National Yang Ming Chiao Tung University, Taipei City 112, Taiwan
111
Zheng S, Kong S, Huang Z, Pan L, Zeng T, Zheng B, Yang M, Liu Z. A Lower False Positive Pulmonary Nodule Detection Approach for Early Lung Cancer Screening. Diagnostics (Basel) 2022;12:2660. PMID: 36359503; PMCID: PMC9689063; DOI: 10.3390/diagnostics12112660.
Abstract
Pulmonary nodule detection with low-dose computed tomography (LDCT) is indispensable in early lung cancer screening. Although existing methods have achieved excellent detection sensitivity, nodule detection still faces challenges such as variation in nodule size, uneven nodule distribution, and excessive nodule-like false-positive candidates in the detection results. We propose a novel two-stage nodule detection (TSND) method. In the first stage, a multi-scale feature detection network (MSFD-Net) generates nodule candidates, using a proposed feature extraction network to learn multi-scale feature representations of the candidates. In the second stage, a candidate scoring network (CS-Net) estimates the score of each candidate patch to realize false positive reduction (FPR). Finally, we develop an end-to-end nodule computer-aided detection (CAD) system based on the proposed TSND for LDCT scans. Experimental results on the LUNA16 dataset show that TSND obtained an excellent average sensitivity of 90.59% at the seven predefined false-positive (FP) rates on the FROC curve introduced in LUNA16: 0.125, 0.25, 0.5, 1, 2, 4, and 8 FPs per scan. Moreover, comparative experiments indicate that CS-Net can effectively suppress false positives and improve the detection performance of TSND.
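The "average sensitivity at seven predefined FP rates" used above is the LUNA16 Competition Performance Metric (CPM). A sketch of how it is computed from a FROC curve, with illustrative curve values (not the paper's results):

```python
import numpy as np

def luna16_average_sensitivity(fp_per_scan, sensitivity):
    """LUNA16-style average sensitivity (CPM): interpolate the FROC curve at
    the seven predefined false-positive rates per scan, then average."""
    targets = [0.125, 0.25, 0.5, 1, 2, 4, 8]
    # fp_per_scan must be sorted ascending for np.interp
    return float(np.interp(targets, fp_per_scan, sensitivity).mean())

# illustrative FROC operating points, not the paper's curve
fps = [0.125, 0.25, 0.5, 1, 2, 4, 8]
sens = [0.80, 0.85, 0.89, 0.91, 0.93, 0.94, 0.95]
avg_sens = luna16_average_sensitivity(fps, sens)
```

Averaging over operating points from 0.125 to 8 FPs per scan rewards detectors that stay sensitive even under a very strict false-positive budget, which is exactly the regime the CS-Net stage targets.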
Affiliation(s)
- Shaohua Zheng
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China
- Shaohua Kong
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China
- Zihan Huang
- School of Future Technology, Harbin Institute of Technology, Harbin 150000, China
- Lin Pan
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China
- Taidui Zeng
- Key Laboratory of Cardio-Thoracic Surgery (Fujian Medical University), Fujian Province University, Fuzhou 350108, China
- Bin Zheng
- Key Laboratory of Cardio-Thoracic Surgery (Fujian Medical University), Fujian Province University, Fuzhou 350108, China
- Mingjing Yang
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China
- Zheng Liu
- School of Engineering, Faculty of Applied Science, University of British Columbia, Kelowna, BC V1V 1V7, Canada
112
Gao R, Li T, Tang Y, Xu K, Khan M, Kammer M, Antic SL, Deppen S, Huo Y, Lasko TA, Sandler KL, Maldonado F, Landman BA. Reducing uncertainty in cancer risk estimation for patients with indeterminate pulmonary nodules using an integrated deep learning model. Comput Biol Med 2022; 150:106113. [PMID: 36198225 PMCID: PMC10050219 DOI: 10.1016/j.compbiomed.2022.106113] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2022] [Revised: 08/21/2022] [Accepted: 09/17/2022] [Indexed: 11/03/2022]
Abstract
OBJECTIVE Patients with indeterminate pulmonary nodules (IPN) with an intermediate to high probability of lung cancer generally undergo invasive diagnostic procedures. Chest computed tomography (CT) images and clinical data have been used to estimate the pretest probability of lung cancer. In this study, we apply a deep learning network to integrate multi-modal data from CT images and clinical data (including blood-based biomarkers) to improve lung cancer diagnosis. Our goal is to reduce uncertainty and to avoid morbidity, mortality, and over- and undertreatment of patients with IPNs. METHOD We use a retrospective study design with cross-validation and external validation from four different sites. We introduce a deep learning framework with a two-path structure to learn from CT images and clinical data. The proposed model can learn and predict with a single modality if the multi-modal data are not complete. We use 1284 patients in the learning cohort for model development. Three external sites (with 155, 136 and 96 patients, respectively) provided patient data for external validation. We compare our model to widely applied clinical prediction models (Mayo and Brock models) and image-only methods (e.g., the Liao et al. model). RESULTS Our co-learning model improves upon the performance of clinical-factor-only (Mayo and Brock) and image-only (Liao et al.) models in both cross-validation on the learning cohort (e.g., AUC 0.787 (ours) vs. 0.707-0.719 (baselines), reported on the validation folds) and external validation using three datasets from the University of Pittsburgh Medical Center (e.g., 0.918 (ours) vs. 0.828-0.886 (baselines)), Detection of Early Cancer Among Military Personnel (e.g., 0.712 (ours) vs. 0.576-0.709 (baselines)), and the University of Colorado Denver (e.g., 0.847 (ours) vs. 0.679-0.746 (baselines)). In addition, our model achieves better re-classification performance (cNRI 0.04 to 0.20) in all cross- and external-validation sets compared to the Mayo model.
CONCLUSIONS Lung cancer risk estimation in patients with IPNs can benefit from the co-learning of CT image and clinical data. Learning from more subjects, even when some have only a single modality, can improve prediction accuracy. An integrated deep learning model can achieve reasonable discrimination and re-classification performance.
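The cNRI values cited above follow the standard categorical net reclassification improvement definition: the net fraction of events reclassified upward plus the net fraction of nonevents reclassified downward. A textbook sketch with hypothetical risk categories (not the study's data or code):

```python
# Categorical net reclassification improvement (cNRI) between an old and a
# new risk model. Textbook definition; toy data, not the study's patients.
def cnri(old_cat, new_cat, outcome):
    """old_cat/new_cat: risk-category index per patient (higher = riskier);
    outcome: 1 = cancer (event), 0 = benign (nonevent)."""
    def net_up(pairs):
        up = sum(n > o for o, n in pairs)
        down = sum(n < o for o, n in pairs)
        return (up - down) / len(pairs)
    events    = [(o, n) for o, n, y in zip(old_cat, new_cat, outcome) if y == 1]
    nonevents = [(o, n) for o, n, y in zip(old_cat, new_cat, outcome) if y == 0]
    # correct movement is up for events and down for nonevents
    return net_up(events) - net_up(nonevents)

old = [0, 1, 1, 2, 0, 2]   # hypothetical category assignments
new = [1, 2, 1, 2, 0, 1]
y   = [1, 1, 0, 1, 0, 0]
print(round(cnri(old, new, y), 3))
```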
Affiliation(s)
- Riqiang Gao
- Vanderbilt University, Nashville, TN, 37235, USA
- Thomas Li
- Vanderbilt University, Nashville, TN, 37235, USA
- Yucheng Tang
- Vanderbilt University, Nashville, TN, 37235, USA
- Kaiwen Xu
- Vanderbilt University, Nashville, TN, 37235, USA
- Mirza Khan
- Vanderbilt University Medical Center, Nashville, TN, 37235, USA
- Michael Kammer
- Vanderbilt University Medical Center, Nashville, TN, 37235, USA
- Sanja L Antic
- Vanderbilt University Medical Center, Nashville, TN, 37235, USA
- Stephen Deppen
- Vanderbilt University Medical Center, Nashville, TN, 37235, USA
- Yuankai Huo
- Vanderbilt University, Nashville, TN, 37235, USA
- Thomas A Lasko
- Vanderbilt University Medical Center, Nashville, TN, 37235, USA
- Kim L Sandler
- Vanderbilt University Medical Center, Nashville, TN, 37235, USA
- Bennett A Landman
- Vanderbilt University, Nashville, TN, 37235, USA; Vanderbilt University Medical Center, Nashville, TN, 37235, USA
113
Zhao G, Feng Q, Chen C, Zhou Z, Yu Y. Diagnose Like a Radiologist: Hybrid Neuro-Probabilistic Reasoning for Attribute-Based Medical Image Diagnosis. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2022; 44:7400-7416. [PMID: 34822325 DOI: 10.1109/tpami.2021.3130759] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
During clinical practice, radiologists often use attributes, e.g., morphological and appearance characteristics of a lesion, to aid disease diagnosis. Effectively modeling attributes as well as all relationships involving attributes could boost the generalization ability and verifiability of medical image diagnosis algorithms. In this paper, we introduce a hybrid neuro-probabilistic reasoning algorithm for verifiable attribute-based medical image diagnosis. There are two parallel branches in our hybrid algorithm, a Bayesian network branch performing probabilistic causal relationship reasoning and a graph convolutional network branch performing more generic relational modeling and reasoning using a feature representation. Tight coupling between these two branches is achieved via a cross-network attention mechanism and the fusion of their classification results. We have successfully applied our hybrid reasoning algorithm to two challenging medical image diagnosis tasks. On the LIDC-IDRI benchmark dataset for benign-malignant classification of pulmonary nodules in CT images, our method achieves a new state-of-the-art accuracy of 95.36% and an AUC of 96.54%. Our method also achieves a 3.24% accuracy improvement on an in-house chest X-ray image dataset for tuberculosis diagnosis. Our ablation study indicates that our hybrid algorithm achieves a much better generalization performance than a pure neural network architecture under very limited training data.
114
Wang J, Ji X, Zhao M, Wen Y, She Y, Deng J, Chen C, Qian D, Lu H, Zhao D. Size-adaptive mediastinal multilesion detection in chest CT images via deep learning and a benchmark dataset. Med Phys 2022; 49:7222-7236. [PMID: 35689486 DOI: 10.1002/mp.15804] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2021] [Revised: 05/12/2022] [Accepted: 06/03/2022] [Indexed: 12/13/2022] Open
Abstract
PURPOSE Many deep learning methods have been developed for pulmonary lesion detection in chest computed tomography (CT) images. However, these methods generally target one particular lesion type, that is, pulmonary nodules. In this work, we intend to develop and evaluate a novel deep learning method for a more challenging task, detecting various benign and malignant mediastinal lesions with wide variations in sizes, shapes, intensities, and locations in chest CT images. METHODS Our method for mediastinal lesion detection contains two main stages: (a) size-adaptive lesion candidate detection followed by (b) false-positive (FP) reduction and benign-malignant classification. For candidate detection, an anchor-free and one-stage detector, namely 3D-CenterNet, is designed to locate suspicious regions (i.e., candidates with various sizes) within the mediastinum. Then, a 3D-SEResNet-based classifier is used to differentiate FPs, benign lesions, and malignant lesions from the candidates. RESULTS We evaluate the proposed method by conducting five-fold cross-validation on a relatively large-scale dataset, which consists of data collected on 1136 patients from a grade A tertiary hospital. The method can achieve sensitivity scores of 84.3% ± 1.9%, 90.2% ± 1.4%, 93.2% ± 0.8%, and 93.9% ± 1.1%, respectively, in finding all benign and malignant lesions at 1/8, 1/4, 1/2, and 1 FPs per scan, and the accuracy of benign-malignant classification can reach up to 78.7% ± 2.5%. CONCLUSIONS The proposed method can effectively detect mediastinal lesions with various sizes, shapes, and locations in chest CT images. It can be integrated into most existing pulmonary lesion detection systems to promote their clinical applications. The method can also be readily extended to other similar 3D lesion detection tasks.
Affiliation(s)
- Jun Wang
- School of Computer and Computing Science, Zhejiang University City College, Hangzhou, China
- Xiawei Ji
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- Mengmeng Zhao
- Department of Thoracic Surgery, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, China
- Yaofeng Wen
- Lanhui Medical Technology Co., Ltd, Shanghai, China
- Yunlang She
- Department of Thoracic Surgery, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, China
- Jiajun Deng
- Department of Thoracic Surgery, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, China
- Chang Chen
- Department of Thoracic Surgery, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, China
- Dahong Qian
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Hongbing Lu
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- Deping Zhao
- Department of Thoracic Surgery, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, China
115
Sheng B, Chen X, Li T, Ma T, Yang Y, Bi L, Zhang X. An overview of artificial intelligence in diabetic retinopathy and other ocular diseases. Front Public Health 2022; 10:971943. [PMID: 36388304 PMCID: PMC9650481 DOI: 10.3389/fpubh.2022.971943] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2022] [Accepted: 10/04/2022] [Indexed: 01/25/2023] Open
Abstract
Artificial intelligence (AI), also known as machine intelligence, is a branch of science that empowers machines using human intelligence. AI refers to the technology of rendering human intelligence through computer programs. From healthcare to the precise prevention, diagnosis, and management of diseases, AI is progressing rapidly in various interdisciplinary fields, including ophthalmology. Ophthalmology is at the forefront of AI in medicine because the diagnosis of ocular diseases relies heavily on imaging. Recently, deep learning-based AI screening and prediction models have been applied to the most common visual impairment and blindness diseases, including glaucoma, cataract, age-related macular degeneration (ARMD), and diabetic retinopathy (DR). The success of AI in medicine is primarily attributed to the development of deep learning algorithms, which are computational models composed of multiple layers of simulated neurons. These models can learn the representations of data at multiple levels of abstraction. The Inception-v3 algorithm and transfer learning concept have been applied in DR and ARMD to reuse fundus image features learned from natural images (non-medical images) to train an AI system with a fraction of the commonly used training data (<1%). The trained AI system achieved performance comparable to that of human experts in classifying ARMD and diabetic macular edema on optical coherence tomography images. In this study, we highlight the fundamental concepts of AI and its application in these four major ocular diseases and further discuss the current challenges, as well as the prospects in ophthalmology.
Affiliation(s)
- Bin Sheng
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Xiaosi Chen
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Tingyao Li
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Tianxing Ma
- Chongqing University-University of Cincinnati Joint Co-op Institute, Chongqing University, Chongqing, China
- Yang Yang
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Lei Bi
- School of Computer Science, University of Sydney, Sydney, NSW, Australia
- Xinyuan Zhang
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
116
Gomes R, Kamrowski C, Mohan PD, Senor C, Langlois J, Wildenberg J. Application of Deep Learning to IVC Filter Detection from CT Scans. Diagnostics (Basel) 2022; 12:2475. [PMID: 36292164 PMCID: PMC9600884 DOI: 10.3390/diagnostics12102475] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2022] [Revised: 10/05/2022] [Accepted: 10/10/2022] [Indexed: 11/16/2022] Open
Abstract
IVC filters (IVCF) perform an important function in select patients that have venous blood clots. However, they are usually intended to be temporary, and significant delay in removal can have negative health consequences for the patient. Currently, all Interventional Radiology (IR) practices are tasked with tracking patients in whom IVCF are placed. Due to their small size and location deep within the abdomen, it is common for patients to forget that they have an IVCF. Therefore, there is a significant delay for a new healthcare provider to become aware of the presence of a filter. Patients may have an abdominopelvic CT scan for many reasons and, fortunately, IVCF are clearly visible on these scans. In this research, a deep learning model capable of segmenting IVCF from CT scan slices along the axial plane is developed. The model achieved a Dice score of 0.82 after training on 372 CT scan slices. The segmentation model is then integrated with a prediction algorithm capable of flagging an entire CT scan as having IVCF. The prediction algorithm utilizing the segmentation model achieved a 92.22% accuracy at detecting IVCF in the scans.
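The Dice score used to report the 0.82 result is the standard overlap measure between a predicted and a ground-truth binary mask. A generic sketch of the metric (not the authors' pipeline):

```python
# Dice coefficient between two binary segmentation masks.
# Generic metric sketch; toy masks, not the study's data.
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|); eps guards the empty-mask case."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred = np.array([[0, 1, 1], [0, 1, 0]])
gt   = np.array([[0, 1, 0], [0, 1, 1]])
print(round(float(dice_score(pred, gt)), 3))  # 2*2/(3+3) ≈ 0.667
```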
Affiliation(s)
- Rahul Gomes
- Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Correspondence: (R.G.); (J.W.)
- Connor Kamrowski
- Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Pavithra Devy Mohan
- Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Cameron Senor
- Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Jordan Langlois
- Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Joseph Wildenberg
- Interventional Radiology, Mayo Clinic Health System, Eau Claire, WI 54703, USA
- Correspondence: (R.G.); (J.W.)
117
Jiang L, Li M, Jiang H, Tao L, Yang W, Yuan H, He B. Development of an Artificial Intelligence Model for Analyzing the Relationship between Imaging Features and Glucocorticoid Sensitivity in Idiopathic Interstitial Pneumonia. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2022; 19:13099. [PMID: 36293674 PMCID: PMC9602820 DOI: 10.3390/ijerph192013099] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/29/2022] [Revised: 09/29/2022] [Accepted: 10/10/2022] [Indexed: 06/16/2023]
Abstract
High-resolution CT (HRCT) imaging features of idiopathic interstitial pneumonia (IIP) patients are related to glucocorticoid sensitivity. This study aimed to develop an artificial intelligence model to assess glucocorticoid efficacy according to the HRCT imaging features of IIP. The medical records and chest HRCT images of 150 patients with IIP were analyzed retrospectively. The U-net framework was used to create a model for recognizing different imaging features, including ground glass opacities, reticulations, honeycombing, and consolidations. Then, the area ratio of those imaging features was calculated automatically. Forty-five patients were treated with glucocorticoids, and according to the drug efficacy, they were divided into a glucocorticoid-sensitive group and a glucocorticoid-insensitive group. Models assessing the correlation between imaging features and glucocorticoid sensitivity were established using the k-nearest neighbor (KNN) algorithm. The total accuracy (ACC) and mean intersection over union (mIoU) of the U-net model were 0.9755 and 0.4296, respectively. Out of the 45 patients treated with glucocorticoids, 34 and 11 were placed in the glucocorticoid-sensitive and glucocorticoid-insensitive groups, respectively. The KNN-based model had an accuracy of 0.82. An artificial intelligence model was successfully developed for recognizing different imaging features of IIP and a preliminary model for assessing the correlation between imaging features and glucocorticoid sensitivity in IIP patients was established.
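The study's second-stage classifier is a k-nearest-neighbor model over per-patient imaging-feature area ratios. A minimal KNN sketch over such feature vectors (illustrative toy data and feature names, not the authors' trained model):

```python
# Minimal k-nearest-neighbor classification over per-patient feature
# vectors (e.g., area ratios of ground glass opacity, reticulation,
# honeycombing). Toy data; not the study's cohort or model.
from collections import Counter
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Predict the majority label among the k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
    nearest = np.argsort(dists)[:k]
    return Counter(y_train[i] for i in nearest).most_common(1)[0][0]

# hypothetical [ggo_ratio, reticulation_ratio, honeycombing_ratio] vectors
X = np.array([[0.30, 0.05, 0.01],
              [0.25, 0.08, 0.02],
              [0.05, 0.30, 0.25],
              [0.02, 0.28, 0.30]])
y = np.array(["sensitive", "sensitive", "insensitive", "insensitive"])
print(knn_predict(X, y, np.array([0.28, 0.06, 0.02]), k=3))
```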
Affiliation(s)
- Ling Jiang
- Department of Respiratory and Critical Care Medicine, Peking University Third Hospital, Beijing 100191, China
- Meijiao Li
- Department of Radiology, Peking University Third Hospital, Beijing 100191, China
- Han Jiang
- OpenBayes (Tianjin) IT Co., Ltd., Beijing 100027, China
- Liyuan Tao
- Research Center of Clinical Epidemiology, Peking University Third Hospital, Beijing 100191, China
- Wei Yang
- Department of Respiratory and Critical Care Medicine, Peking University Third Hospital, Beijing 100191, China
- Huishu Yuan
- Department of Radiology, Peking University Third Hospital, Beijing 100191, China
- Bei He
- Department of Respiratory and Critical Care Medicine, Peking University Third Hospital, Beijing 100191, China
118
Liu Y, Zhang F, Chen C, Wang S, Wang Y, Yu Y. Act Like a Radiologist: Towards Reliable Multi-View Correspondence Reasoning for Mammogram Mass Detection. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2022; 44:5947-5961. [PMID: 34061740 DOI: 10.1109/tpami.2021.3085783] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Mammogram mass detection is crucial for diagnosing and preventing breast cancer in clinical practice. The complementary effect of multi-view mammogram images provides valuable information about the breast anatomical prior structure and is of great significance in digital mammography interpretation. However, unlike radiologists, who can utilize their natural reasoning ability to identify masses based on multiple mammographic views, how to endow existing object detection models with the capability of multi-view reasoning is vital for decision-making in clinical diagnosis but remains largely unexplored. In this paper, we propose an anatomy-aware graph convolutional network (AGN), which is tailored for mammogram mass detection and endows existing detection methods with multi-view reasoning ability. The proposed AGN consists of three steps. First, we introduce a bipartite graph convolutional network (BGN) to model the intrinsic geometric and semantic relations of ipsilateral views. Second, considering that the visual asymmetry of bilateral views is widely adopted in clinical practice to assist the diagnosis of breast lesions, we propose an inception graph convolutional network (IGN) to model the structural similarities of bilateral views. Finally, based on the constructed graphs, the multi-view information is propagated through nodes methodically, which equips the features learned from the examined view with multi-view reasoning ability. Experiments on two standard benchmarks reveal that AGN significantly exceeds the state-of-the-art performance. Visualization results show that AGN provides interpretable visual cues for clinical diagnosis.
119
Chen Q, Xie W, Zhou P, Zheng C, Wu D. Multi-Crop Convolutional Neural Networks for Fast Lung Nodule Segmentation. IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE 2022. [DOI: 10.1109/tetci.2021.3051910] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/20/2023]
Affiliation(s)
- Quan Chen
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Wei Xie
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, China
- Pan Zhou
- Hubei Engineering Research Center on Big Data Security, School of Cyber Science and Engineering, Huazhong University of Science and Technology, China
- Chuansheng Zheng
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Dapeng Wu
- Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, USA
120
Lou J, Xu J, Zhang Y, Sun Y, Fang A, Liu J, Mur LAJ, Ji B. PPsNet: An improved deep learning model for microsatellite instability high prediction in colorectal cancer from whole slide images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 225:107095. [PMID: 36057226 DOI: 10.1016/j.cmpb.2022.107095] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/26/2022] [Revised: 08/18/2022] [Accepted: 08/26/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVE Recent studies have shown that colorectal cancer (CRC) patients with microsatellite instability high (MSI-H) are more likely to benefit from immunotherapy. However, current MSI testing methods are not available for all patients due to the lack of available equipment and trained personnel, as well as the high cost of the assay. Here, we developed an improved deep learning model to predict MSI-H in CRC from whole slide images (WSIs). METHODS We established the MSI-H prediction model based on two stages: tumor detection and MSI classification. Previous works applied the fine-tuning strategy directly for tumor detection, ignoring the challenge of vanishing gradients due to the large number of convolutional layers. We added auxiliary classifiers to intermediate layers of pre-trained models to help propagate gradients backward in an effective manner. To predict MSI status, we constructed a pair-wise learning model with a synergic network, named parameter partial sharing network (PPsNet), in which partial parameters are shared between two deep convolutional neural networks (DCNNs). The proposed PPsNet contains fewer parameters and reduces the problem of intra-class variation and inter-class similarity. We validated the proposed model on a holdout test set and two external test sets. RESULTS 144 H&E-stained WSIs from 144 CRC patients (81 cases with MSI-H and 63 cases with MSI-L/MSS) were collected retrospectively from three hospitals. The experimental results indicate that deep-supervision-based fine-tuning almost always outperforms training from scratch and direct fine-tuning. The proposed PPsNet consistently achieves better accuracy and area under the receiver operating characteristic curve (AUC) than other solutions with four different neural network architectures on validation. The proposed method finally achieves clear improvements over other state-of-the-art methods on the validation dataset, with an accuracy of 87.28% and an AUC of 94.29%.
CONCLUSIONS The proposed method clearly improves model performance and outperforms competing methods. Additionally, this work demonstrates the feasibility of MSI-H prediction from digital pathology images using deep learning in an Asian population. It is hoped that this model could serve as an auxiliary tool to identify CRC patients with MSI-H in a more time-saving and efficient manner.
Affiliation(s)
- Jingjiao Lou
- School of Control Science and Engineering, Shandong University, 17923 Jingshi Road, Jinan, Shandong 250061, PR China
- Jiawen Xu
- Department of Pathology, Shandong Provincial Hospital affiliated to Shandong First Medical University, Jinan, Shandong 250021, PR China
- Yuyan Zhang
- School of Control Science and Engineering, Shandong University, 17923 Jingshi Road, Jinan, Shandong 250061, PR China
- Yuhong Sun
- Department of Pathology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, Shandong 250117, PR China
- Aiju Fang
- Department of Pathology, Shandong Provincial Third Hospital, Shandong University, Jinan, Shandong 250132, PR China
- Jixuan Liu
- Department of Pathology, Shandong Provincial Hospital affiliated to Shandong First Medical University, Jinan, Shandong 250021, PR China
- Luis A J Mur
- Institute of Biological, Environmental and Rural Sciences (IBERS), Aberystwyth University, Aberystwyth, Wales SY23 3DZ, UK
- Bing Ji
- School of Control Science and Engineering, Shandong University, 17923 Jingshi Road, Jinan, Shandong 250061, PR China
121
Design of a flexible robot toward transbronchial lung biopsy. ROBOTICA 2022. [DOI: 10.1017/s0263574722001345] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Transbronchial lung biopsy is an effective and less-invasive procedure for the early diagnosis of lung cancer. However, the limited dexterity of existing endoscopic instruments and the complexity of bronchial access restrict the application of such procedures mainly to biopsy and diagnosis. This paper proposes a flexible robot for transbronchial lung biopsy with a cable-driven flexible manipulator. The robotic system for transbronchial lung biopsy is presented in detail, including the snake-bone end effector, the flexible catheters, and the actuation unit. The kinematic analysis of the snake-bone end effector is conducted for master-slave control. The experimental results show that the end effector reaches the target nodule through a narrow and tortuous pathway in a bronchial model. In conclusion, the proposed robotic system contributes to the field of advanced endoscopic surgery with high flexibility and controllability.
122
Chen J, Li Y, Guo L, Zhou X, Zhu Y, He Q, Han H, Feng Q. Machine learning techniques for CT imaging diagnosis of novel coronavirus pneumonia: a review. Neural Comput Appl 2022; 36:1-19. [PMID: 36159188 PMCID: PMC9483435 DOI: 10.1007/s00521-022-07709-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2022] [Accepted: 08/04/2022] [Indexed: 11/20/2022]
Abstract
Since 2020, novel coronavirus pneumonia has been spreading rapidly around the world, bringing tremendous pressure on the medical diagnosis and treatment capacity of hospitals. Medical imaging methods, such as computed tomography (CT), play a crucial role in diagnosing and treating COVID-19. A large number of CT images are produced during CT-based medical diagnosis. In such a situation, diagnostic judgement by human eyes over thousands of CT images is inefficient and time-consuming. Recently, to improve diagnostic efficiency, machine learning technology has been widely used in computer-aided diagnosis and treatment systems to help doctors perform accurate analysis and provide them with effective diagnostic decision support. In this paper, we comprehensively review the machine learning methods frequently applied in CT imaging diagnosis of COVID-19, discussing machine learning-based applications across various aspects, including image acquisition and pre-processing, image segmentation, quantitative analysis and diagnosis, and disease follow-up and prognosis. Moreover, we discuss the limitations of current machine learning technology in the context of CT imaging computer-aided diagnosis.
Affiliation(s)
- Jingjing Chen
- Zhejiang University City College, Hangzhou, China
- Zhijiang College of Zhejiang University of Technology, Shaoxing, China
- Yixiao Li
- Faculty of Science, Zhejiang University of Technology, Hangzhou, China
- Lingling Guo
- College of Chemical Engineering, Zhejiang University of Technology, Hangzhou, China
- Xiaokang Zhou
- Faculty of Data Science, Shiga University, Hikone, Japan
- RIKEN Center for Advanced Intelligence Project, Tokyo, Japan
- Yihan Zhu
- College of Chemical Engineering, Zhejiang University of Technology, Hangzhou, China
- Qingfeng He
- School of Pharmacy, Fudan University, Shanghai, China
- Haijun Han
- School of Medicine, Zhejiang University City College, Hangzhou, China
- Qilong Feng
- College of Chemical Engineering, Zhejiang University of Technology, Hangzhou, China
123
Characterization of different reconstruction techniques on computer-aided system for detection of pulmonary nodules in lung from low-dose CT protocol. JOURNAL OF RADIATION RESEARCH AND APPLIED SCIENCES 2022. [DOI: 10.1016/j.jrras.2022.06.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
124
Küstner T, Vogel J, Hepp T, Forschner A, Pfannenberg C, Schmidt H, Schwenzer NF, Nikolaou K, la Fougère C, Seith F. Development of a Hybrid-Imaging-Based Prognostic Index for Metastasized-Melanoma Patients in Whole-Body 18F-FDG PET/CT and PET/MRI Data. Diagnostics (Basel) 2022; 12:2102. [PMID: 36140504 PMCID: PMC9498091 DOI: 10.3390/diagnostics12092102] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2022] [Revised: 08/19/2022] [Accepted: 08/25/2022] [Indexed: 11/17/2022] Open
Abstract
Besides tremendous treatment success in advanced melanoma patients, the rapid development of oncologic treatment options comes with increasingly high costs and can cause severe life-threatening side effects. For this purpose, predictive baseline biomarkers are becoming increasingly important for risk stratification and personalized treatment planning. Thus, the aim of this pilot study was the development of a prognostic tool for the risk stratification of the treatment response and mortality based on PET/MRI and PET/CT, including a convolutional neural network (CNN) for metastasized-melanoma patients before systemic-treatment initiation. The evaluation was based on 37 patients (19 f, 62 ± 13 y/o) with unresectable metastasized melanomas who underwent whole-body 18F-FDG PET/MRI and PET/CT scans on the same day before the initiation of therapy with checkpoint inhibitors and/or BRAF/MEK inhibitors. The overall survival (OS), therapy response, metastatically involved organs, number of lesions, total lesion glycolysis, total metabolic tumor volume (TMTV), peak standardized uptake value (SULpeak), diameter (Dmlesion) and mean apparent diffusion coefficient (ADCmean) were assessed. For each marker, a Kaplan−Meier analysis and the statistical significance (Wilcoxon test, paired t-test and Bonferroni correction) were assessed. Patients were divided into high- and low-risk groups depending on the OS and treatment response. The CNN segmentation and prediction utilized multimodality imaging data for a complementary in-depth risk analysis per patient. The following parameters correlated with longer OS: a TMTV < 50 mL; no metastases in the brain, bone, liver, spleen or pleura; ≤4 affected organ regions; no metastases; a Dmlesion > 37 mm or SULpeak < 1.3; a range of the ADCmean < 600 mm2/s. However, none of the parameters correlated significantly with the stratification of the patients into the high- or low-risk groups. 
For the CNN, the sensitivity, specificity, PPV and accuracy were 92%, 96%, 92% and 95%, respectively. Imaging biomarkers such as the metastatic involvement of specific organs, a high tumor burden, the presence of at least one large lesion or a high range of intermetastatic diffusivity were negative predictors for the OS, but the identification of high-risk patients was not feasible with the handcrafted parameters. In contrast, the proposed CNN supplied risk stratification with high specificity and sensitivity.
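The CNN's reported sensitivity, specificity, PPV, and accuracy all derive from a 2×2 confusion matrix. As a reference for readers, here is a minimal Python sketch; the counts in the usage example are hypothetical and not taken from the study:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from raw confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                    # true-positive rate
    specificity = tn / (tn + fp)                    # true-negative rate
    ppv = tp / (tp + fp)                            # positive predictive value
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, ppv, accuracy

# Hypothetical counts for illustration only:
sens, spec, ppv, acc = diagnostic_metrics(tp=9, fp=1, tn=19, fn=1)
```

With these toy counts the call yields sensitivity 0.90, specificity 0.95, PPV 0.90, and accuracy ≈0.93, mirroring how the paper's four figures relate to one another.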
Affiliation(s)
- Thomas Küstner: MIDAS.Lab, Department of Radiology, University Hospital of Tübingen, 72076 Tubingen, Germany
- Jonas Vogel: Nuclear Medicine and Clinical Molecular Imaging, Department of Radiology, University Hospital Tübingen, 72076 Tubingen, Germany
- Tobias Hepp: MIDAS.Lab, Department of Radiology, University Hospital of Tübingen, 72076 Tubingen, Germany
- Andrea Forschner: Department of Dermatology, University Hospital of Tübingen, 72070 Tubingen, Germany
- Christina Pfannenberg: Department of Radiology, Diagnostic and Interventional Radiology, University Hospital of Tübingen, 72076 Tubingen, Germany
- Holger Schmidt: Faculty of Medicine, Eberhard-Karls-University Tübingen, 72076 Tubingen, Germany; Siemens Healthineers, 91052 Erlangen, Germany
- Nina F. Schwenzer: Faculty of Medicine, Eberhard-Karls-University Tübingen, 72076 Tubingen, Germany
- Konstantin Nikolaou: Department of Radiology, Diagnostic and Interventional Radiology, University Hospital of Tübingen, 72076 Tubingen, Germany; Cluster of Excellence iFIT (EXC 2180) Image-Guided and Functionally Instructed Tumor Therapies, Eberhard Karls University, 72076 Tubingen, Germany; German Cancer Consortium (DKTK), German Cancer Research Center (DKFZ), Partner Site Tübingen, 72076 Tubingen, Germany
- Christian la Fougère: Nuclear Medicine and Clinical Molecular Imaging, Department of Radiology, University Hospital Tübingen, 72076 Tubingen, Germany; Cluster of Excellence iFIT (EXC 2180) Image-Guided and Functionally Instructed Tumor Therapies, Eberhard Karls University, 72076 Tubingen, Germany; German Cancer Consortium (DKTK), German Cancer Research Center (DKFZ), Partner Site Tübingen, 72076 Tubingen, Germany
- Ferdinand Seith: Department of Radiology, Diagnostic and Interventional Radiology, University Hospital of Tübingen, 72076 Tubingen, Germany
125
A novel multi-branch hybrid neural network for motor imagery EEG signal classification. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103718] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
126
Mei J, Cheng MM, Xu G, Wan LR, Zhang H. SANet: A Slice-Aware Network for Pulmonary Nodule Detection. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2022; 44:4374-4387. [PMID: 33687839 DOI: 10.1109/tpami.2021.3065086] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Lung cancer is the most common cause of cancer death worldwide. Timely diagnosis of pulmonary nodules makes it possible to detect lung cancer at an early stage, and thoracic computed tomography (CT) provides a convenient way to diagnose nodules. However, it is hard even for experienced doctors to distinguish nodules among massive numbers of CT slices. Existing nodule datasets are limited in both scale and category, which greatly restricts their applications. In this paper, we collect PN9, to date the largest and most diverse dataset for pulmonary nodule detection. Specifically, it contains 8,798 CT scans and 40,439 annotated nodules from 9 common classes. We further propose a slice-aware network (SANet) for pulmonary nodule detection. A slice grouped non-local (SGNL) module is developed to capture long-range dependencies among any positions and any channels of one slice group in the feature map. We also introduce a 3D region proposal network to generate pulmonary nodule candidates with high sensitivity; since this detection stage usually comes with many false positives, a false positive reduction (FPR) module using multi-scale feature maps is subsequently proposed. To verify the performance of SANet and the significance of PN9, we perform extensive experiments compared with several state-of-the-art 2D CNN-based and 3D CNN-based detection methods. Promising evaluation results on PN9 prove the effectiveness of the proposed SANet. The dataset and source code are available at https://mmcheng.net/SANet/.
127
Ramachandran A, Bhalla D, Rangarajan K, Pramanik R, Banerjee S, Arora C. Building and evaluating an artificial intelligence algorithm: A practical guide for practicing oncologists. Artif Intell Cancer 2022; 3:42-53. [DOI: 10.35713/aic.v3.i3.42] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/14/2022] [Revised: 04/09/2022] [Accepted: 06/17/2022] [Indexed: 02/06/2023] Open
Abstract
The use of machine learning and deep learning has enabled many applications previously thought impossible. Among all medical fields, cancer care is arguably the most significantly impacted, with precision medicine now truly a possibility. The effect of these technologies, loosely known as artificial intelligence, is particularly striking in fields involving images (such as radiology and pathology) and fields involving large amounts of data (such as genomics). Practicing oncologists are often confronted with new technologies claiming to predict response to therapy or the genomic make-up of patients. Understanding these new claims and technologies requires a deep understanding of the field. In this review, we provide an overview of the basis of deep learning. We describe various common tasks and their data requirements so that oncologists are equipped both to start such projects and to evaluate algorithms presented to them.
Affiliation(s)
- Anupama Ramachandran: Department of Radiology, All India Institute of Medical Sciences, New Delhi 110029, India
- Deeksha Bhalla: Department of Radiology, All India Institute of Medical Sciences, New Delhi 110029, India
- Krithika Rangarajan: Department of Radiology, All India Institute of Medical Sciences New Delhi, New Delhi 110029, India; School of Information Technology, Indian Institute of Technology, Delhi 110016, India
- Raja Pramanik: Department of Medical Oncology, Dr. B.R.A. Institute Rotary Cancer Hospital, All India Institute of Medical Sciences, New Delhi 110029, India
- Subhashis Banerjee: Department of Computer Science, Indian Institute of Technology, Delhi 110016, India
- Chetan Arora: Department of Computer Science, Indian Institute of Technology, Delhi 110016, India
128
Li H, Song Q, Gui D, Wang M, Min X, Li A. Reconstruction-assisted Feature Encoding Network for Histologic Subtype Classification of Non-small Cell Lung Cancer. IEEE J Biomed Health Inform 2022; 26:4563-4574. [PMID: 35849680 DOI: 10.1109/jbhi.2022.3192010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
Accurate histological subtype classification between adenocarcinoma (ADC) and squamous cell carcinoma (SCC) using computed tomography (CT) images is of great importance to assist clinicians in determining treatment and therapy plans for non-small cell lung cancer (NSCLC) patients. Although current deep learning approaches have achieved promising progress in this field, they are often difficult to capture efficient tumor representations due to inadequate training data, and in consequence show limited performance. In this study, we propose a novel and effective reconstruction-assisted feature encoding network (RAFENet) for histological subtype classification by leveraging an auxiliary image reconstruction task to enable extra guidance and regularization for enhanced tumor feature representations. Different from existing reconstruction-assisted methods that directly use generalizable features obtained from shared encoder for primary task, a dedicated task-aware encoding module is utilized in RAFENet to perform refinement of generalizable features. Specifically, a cascade of cross-level non-local blocks are introduced to progressively refine generalizable features at different levels with the aid of lower-level task-specific information, which can successfully learn multi-level task-specific features tailored to histological subtype classification. Moreover, in addition to widely adopted pixel-wise reconstruction loss, we introduce a powerful semantic consistency loss function to explicitly supervise the training of RAFENet, which combines both feature consistency loss and prediction consistency loss to ensure semantic invariance during image reconstruction. Extensive experimental results show that RAFENet effectively addresses the difficult issues that cannot be resolved by existing reconstruction-based methods and consistently outperforms other state-of-the-art methods on both public and in-house NSCLC datasets.
129
Zhu Y, Hu P, Li X, Tian Y, Bai X, Liang T, Li J. Multiscale unsupervised domain adaptation for automatic pancreas segmentation in CT volumes using adversarial learning. Med Phys 2022; 49:5799-5818. [PMID: 35833617 DOI: 10.1002/mp.15827] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2021] [Revised: 04/28/2022] [Accepted: 05/27/2022] [Indexed: 11/09/2022] Open
Abstract
PURPOSE Computer-aided automatic pancreas segmentation is essential for the early diagnosis and treatment of pancreatic diseases. However, annotating pancreas images requires professional doctors and considerable expenditure. Because of imaging differences among institutions' patient populations, scanning devices, and imaging protocols, model performance is prone to degrade significantly when models trained on domain-specific (usually institution-specific) datasets are applied directly to data from a new domain (other centers/institutions). In this paper, we propose a novel unsupervised domain adaptation method based on adversarial learning to address the lack of annotations and domain shift in pancreas segmentation. METHODS A 3D semantic segmentation model with attention and residual modules is designed as the backbone pancreas segmentation model. In both the segmentation model and the domain-adaptation discriminator network, a multiscale progressively weighted structure is introduced to acquire different fields of view. Features of labeled and unlabeled data are fed in pairs into the proposed multiscale discriminator to learn domain-specific characteristics. The unlabeled data features with pseudo-domain labels are then fed to the discriminator to acquire domain-ambiguous information. With this adversarial learning strategy, the segmentation network is enhanced to segment unseen unlabeled data. RESULTS Experiments were conducted with two public annotated datasets as source datasets and one private dataset as the target dataset, whose annotations were used only for evaluation, not for training. The 3D segmentation model achieves performance comparable to state-of-the-art pancreas segmentation methods on the source domain.
After implementing our domain adaptation architecture, the average Dice similarity coefficient (DSC) of the segmentation model trained on the NIH-TCIA source dataset increases from 58.79% to 72.73% on the local hospital dataset, while the performance of the target-domain segmentation model transferred from the MSD source dataset rises from 62.34% to 71.17%. CONCLUSIONS Correlations of features across data domains are utilized to train the pancreas segmentation model on an unlabeled data domain, improving the generalization of the model. Our results demonstrate that the proposed method enables the segmentation model to produce meaningful segmentations for data unseen during training. In the future, the proposed method has the potential to apply a segmentation model trained on a public dataset to unannotated clinical CT images from a local hospital, effectively assisting radiologists in clinical practice.
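The Dice similarity coefficient quoted above is the standard overlap measure between a predicted and a ground-truth segmentation mask. A minimal sketch of its computation on flat binary masks (the toy masks are illustrative only, not study data):

```python
def dice_coefficient(pred, target):
    """Dice similarity coefficient between two flat binary masks (0/1 values)."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    # By convention, two empty masks are treated as perfect agreement.
    return 2.0 * intersection / total if total else 1.0

# Toy 4-voxel masks: one overlapping voxel out of two foreground voxels each.
score = dice_coefficient([1, 1, 0, 0], [1, 0, 1, 0])  # 2*1 / (2+2) = 0.5
```

In practice the masks are 3D volumes flattened per case, and the paper's percentages are DSC values averaged over the evaluation set.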
Affiliation(s)
- Yan Zhu: Engineering Research Center of EMR and Intelligent Expert System, Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, 310027, China
- Peijun Hu: Engineering Research Center of EMR and Intelligent Expert System, Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, 310027, China; Research Center for Healthcare Data Science, Zhejiang Lab, Hangzhou, 311100, China
- Xiang Li: Department of Hepatobiliary and Pancreatic Surgery, the First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, 310006, China; Zhejiang Provincial Key Laboratory of Pancreatic Disease, Hangzhou, 310006, China
- Yu Tian: Engineering Research Center of EMR and Intelligent Expert System, Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, 310027, China
- Xueli Bai: Department of Hepatobiliary and Pancreatic Surgery, the First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, 310006, China; Zhejiang Provincial Key Laboratory of Pancreatic Disease, Hangzhou, 310006, China
- Tingbo Liang: Department of Hepatobiliary and Pancreatic Surgery, the First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, 310006, China; Zhejiang Provincial Key Laboratory of Pancreatic Disease, Hangzhou, 310006, China
- Jingsong Li: Engineering Research Center of EMR and Intelligent Expert System, Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, 310027, China; Research Center for Healthcare Data Science, Zhejiang Lab, Hangzhou, 311100, China
130
Xia X, Zhang R, Yao X, Huang G, Tang T. A novel lung nodule accurate detection of computerized tomography images based on convolutional neural network and probability graph model. Comput Intell 2022. [DOI: 10.1111/coin.12531] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/09/2022]
Affiliation(s)
- Xunpeng Xia: School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Rongfu Zhang: School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Xufeng Yao: College of Medical Imaging, Shanghai University of Medicine and Health Sciences, Shanghai, China
- Gang Huang: College of Medical Imaging, Shanghai University of Medicine and Health Sciences, Shanghai, China; Shanghai Key Laboratory of Molecular Imaging, Zhoupu Hospital, Shanghai University of Medicine and Health Sciences, Shanghai, China; Shanghai Key Laboratory of Molecular Imaging, Jiading District Central Hospital Affiliated Shanghai University of Medicine and Health Sciences, Shanghai, China
- Tiequn Tang: School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, China
131
Zhang J, Tao X, Jiang Y, Wu X, Yan D, Xue W, Zhuang S, Chen L, Luo L, Ni D. Application of Convolution Neural Network Algorithm Based on Multicenter ABUS Images in Breast Lesion Detection. Front Oncol 2022; 12:938413. [PMID: 35898876 PMCID: PMC9310547 DOI: 10.3389/fonc.2022.938413] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2022] [Accepted: 05/30/2022] [Indexed: 11/24/2022] Open
Abstract
Objective This study aimed to evaluate a convolution neural network algorithm for breast lesion detection with multi-center ABUS image data developed based on ABUS image and Yolo v5. Methods A total of 741 cases with 2,538 volume data of ABUS examinations were analyzed, which were recruited from 7 hospitals between October 2016 and December 2020. A total of 452 volume data of 413 cases were used as internal validation data, and 2,086 volume data from 328 cases were used as external validation data. There were 1,178 breast lesions in 413 patients (161 malignant and 1,017 benign) and 1,936 lesions in 328 patients (57 malignant and 1,879 benign). The efficiency and accuracy of the algorithm were analyzed in detecting lesions with different allowable false positive values and lesion sizes, and the differences were compared and analyzed, which included the various indicators in internal validation and external validation data. Results The study found that the algorithm had high sensitivity for all categories of lesions, even when using internal or external validation data. The overall detection rate of the algorithm was as high as 78.1 and 71.2% in the internal and external validation sets, respectively. The algorithm could detect more lesions with increasing nodule size (87.4% in ≥10 mm lesions but less than 50% in <10 mm). The detection rate of BI-RADS 4/5 lesions was higher than that of BI-RADS 3 or 2 (96.5% vs 79.7% vs 74.7% internal, 95.8% vs 74.7% vs 88.4% external). Furthermore, the detection performance was better for malignant nodules than benign (98.1% vs 74.9% internal, 98.2% vs 70.4% external). Conclusions This algorithm showed good detection efficiency in the internal and external validation sets, especially for category 4/5 lesions and malignant lesions. However, there are still some deficiencies in detecting category 2 and 3 lesions and lesions smaller than 10 mm.
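The subgroup detection rates quoted above (by lesion size, BI-RADS category, and malignancy) are all fractions of ground-truth lesions matched by a detection. A minimal sketch of such a subgroup tally, using hypothetical lesion records rather than the study's data:

```python
def detection_rate(lesions, predicate=lambda lesion: True):
    """Fraction of ground-truth lesions in a subgroup that the detector found."""
    subset = [l for l in lesions if predicate(l)]
    if not subset:
        return 0.0
    return sum(1 for l in subset if l["detected"]) / len(subset)

# Hypothetical ground-truth lesion records for illustration:
lesions = [
    {"size_mm": 12, "detected": True},
    {"size_mm": 8, "detected": False},
    {"size_mm": 15, "detected": True},
]
overall = detection_rate(lesions)                                  # 2/3
large = detection_rate(lesions, lambda l: l["size_mm"] >= 10)      # 1.0
```

Swapping the predicate for a BI-RADS or malignancy filter yields the other subgroup rates reported in the abstract.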
Affiliation(s)
- Jianxing Zhang: Department of Medical Imaging Center, The First Affiliated Hospital, Jinan University, Guangzhou, China; Department of Ultrasound, Remote Consultation Center of ABUS, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Xing Tao: Medical Ultrasound Image Computing Lab, Shenzhen University, Shenzhen, China
- Yanhui Jiang: Medical Ultrasound Image Computing Lab, Shenzhen University, Shenzhen, China
- Xiaoxi Wu: Department of Ultrasound, Remote Consultation Center of ABUS, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Dan Yan: Department of Ultrasound, Remote Consultation Center of ABUS, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Wen Xue: Department of Ultrasound, Remote Consultation Center of ABUS, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Shulian Zhuang: Department of Ultrasound, Remote Consultation Center of ABUS, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Ling Chen: Department of Ultrasound, Remote Consultation Center of ABUS, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Liangping Luo: Department of Medical Imaging Center, The First Affiliated Hospital, Jinan University, Guangzhou, China
- Dong Ni: Medical Ultrasound Image Computing Lab, Shenzhen University, Shenzhen, China

Correspondence: Jianxing Zhang; Liangping Luo; Dong Ni
132
Liao Z, Xie Y, Hu S, Xia Y. Learning From Ambiguous Labels for Lung Nodule Malignancy Prediction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1874-1884. [PMID: 35130152 DOI: 10.1109/tmi.2022.3149344] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Lung nodule malignancy prediction is an essential step in the early diagnosis of lung cancer. Besides the difficulties commonly discussed, the challenges of this task also come from the ambiguous labels provided by annotators, since deep learning models have in some cases been found to reproduce or amplify human biases. In this paper, we propose a multi-view 'divide-and-rule' (MV-DAR) model to learn from both reliable and ambiguous annotations for lung nodule malignancy prediction on chest CT scans. According to the consistency and reliability of their annotations, we divide nodules into three sets: a consistent and reliable set (CR-Set), an inconsistent set (IC-Set), and a low reliable set (LR-Set). The nodule in IC-Set is annotated by multiple radiologists inconsistently, and the nodule in LR-Set is annotated by only one radiologist. Although ambiguous, inconsistent labels tell which label(s) is consistently excluded by all annotators, and the unreliable labels of a cohort of nodules are largely correct from the statistical point of view. Hence, both IC-Set and LR-Set can be used to facilitate the training of MV-DAR. Our MV-DAR contains three DAR models to characterize a lung nodule from three orthographic views and is trained following a two-stage procedure. Each DAR consists of three networks with the same architecture, including a prediction network (Prd-Net), a counterfactual network (CF-Net), and a low reliable network (LR-Net), which are trained on CR-Set, IC-Set, and LR-Set respectively in the pretraining phase. In the fine-tuning phase, the image representation ability learned by CF-Net and LR-Net is transferred to Prd-Net by negative-attention module (NA-Module) and consistent-attention module (CA-Module), aiming to boost the prediction ability of Prd-Net. The MV-DAR model has been evaluated on the LIDC-IDRI dataset and LUNGx dataset. 
Our results indicate not only the effectiveness of the MV-DAR in learning from ambiguous labels but also its superiority over present noisy label-learning models in lung nodule malignancy prediction.
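The CR-/IC-/LR-Set split described above can be derived directly from per-annotator labels: single-annotator nodules are low-reliability, multi-annotator nodules are consistent or inconsistent depending on agreement. A minimal illustration (the nodule IDs and label values are hypothetical):

```python
def partition_nodules(annotations):
    """Split nodules into CR-, IC-, and LR-Set from their per-annotator labels.

    `annotations` maps a nodule ID to the list of labels it received.
    """
    cr_set, ic_set, lr_set = [], [], []
    for nodule_id, labels in annotations.items():
        if len(labels) == 1:
            lr_set.append(nodule_id)    # single annotator: low reliability
        elif len(set(labels)) == 1:
            cr_set.append(nodule_id)    # several annotators, unanimous
        else:
            ic_set.append(nodule_id)    # several annotators, inconsistent
    return cr_set, ic_set, lr_set

cr, ic, lr = partition_nodules({
    "n1": ["malignant", "malignant"],   # → CR-Set
    "n2": ["benign", "malignant"],      # → IC-Set
    "n3": ["benign"],                   # → LR-Set
})
```

In the paper's two-stage training, Prd-Net, CF-Net, and LR-Net are pretrained on these three sets respectively before the attention-based fine-tuning.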
133
Benign-malignant classification of pulmonary nodule with deep feature optimization framework. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103701] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
134
A layer-level multi-scale architecture for lung cancer classification with fluorescence lifetime imaging endomicroscopy. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07481-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
In this paper, we introduce our unique dataset of fluorescence lifetime imaging endo/microscopy (FLIM), containing over 100,000 FLIM images collected from 18 pairs of cancerous/non-cancerous human lung tissues of 18 patients with our custom fibre-based FLIM system. We provide this dataset so that more researchers in relevant fields can push this particular area of research forward. We then describe best practice for image post-processing suited to the dataset. In addition, we propose a novel hierarchically aggregated multi-scale architecture to improve the binary classification performance of classic CNNs. The proposed model integrates the advantages of multi-scale feature extraction at different levels, where layer-wise global information is aggregated with branch-wise local information. We integrate the proposal, named ResNetZ, into ResNet and appraise it on the FLIM dataset. Since ResNetZ can be configured with a shortcut connection and with aggregation by addition or concatenation, we first evaluate the impact of the different configurations on performance, then thoroughly examine various ResNetZ variants to demonstrate their superiority. We also compare our model with a feature-level multi-scale model to illustrate the advantages and disadvantages of multi-scale architectures at different levels.
135
Peters AA, Huber AT, Obmann VC, Heverhagen JT, Christe A, Ebner L. Diagnostic validation of a deep learning nodule detection algorithm in low-dose chest CT: determination of optimized dose thresholds in a virtual screening scenario. Eur Radiol 2022; 32:4324-4332. [PMID: 35059804 DOI: 10.1007/s00330-021-08511-7] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2021] [Revised: 12/06/2021] [Accepted: 12/09/2021] [Indexed: 12/17/2022]
Abstract
OBJECTIVES This study was conducted to evaluate the effect of dose reduction on the performance of a deep learning (DL)-based computer-aided diagnosis (CAD) system regarding pulmonary nodule detection in a virtual screening scenario. METHODS Sixty-eight anthropomorphic chest phantoms were equipped with 329 nodules (150 ground glass, 179 solid) with four sizes (5 mm, 8 mm, 10 mm, 12 mm) and scanned with nine tube voltage/current combinations. The examinations were analyzed by a commercially available DL-based CAD system. The results were compared by a comparison of proportions. Logistic regression was performed to evaluate the impact of tube voltage, tube current, nodule size, nodule density, and nodule location. RESULTS The combination with the lowest effective dose (E) and unimpaired detection rate was 80 kV/50 mAs (sensitivity: 97.9%, mean false-positive rate (FPR): 1.9, mean CTDIvol: 1.2 ± 0.4 mGy, mean E: 0.66 mSv). Logistic regression revealed that tube voltage and current had the greatest impact on the detection rate, while nodule size and density had no significant influence. CONCLUSIONS The optimal tube voltage/current combination proposed in this study (80 kV/50 mAs) is comparable to the proposed combinations in similar studies, which mostly dealt with conventional CAD software. Modification of tube voltage and tube current has a significant impact on the performance of DL-based CAD software in pulmonary nodule detection regardless of their size and composition. KEY POINTS • Modification of tube voltage and tube current has a significant impact on the performance of deep learning-based CAD software. • Nodule size and composition have no significant impact on the software's performance. • The optimal tube voltage/current combination for the examined software is 80 kV/50 mAs.
Affiliation(s)
- Alan A Peters: Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Bern University Hospital, University of Bern, Inselspital Bern, 3010, Switzerland
- Adrian T Huber: Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Bern University Hospital, University of Bern, Inselspital Bern, 3010, Switzerland
- Verena C Obmann: Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Bern University Hospital, University of Bern, Inselspital Bern, 3010, Switzerland
- Johannes T Heverhagen: Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Bern University Hospital, University of Bern, Inselspital Bern, 3010, Switzerland; Department of BioMedical Research, Experimental Radiology, University of Bern, 3008, Bern, Switzerland; Department of Radiology, The Ohio State University, Columbus, OH, USA
- Andreas Christe: Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Bern University Hospital, University of Bern, Inselspital Bern, 3010, Switzerland
- Lukas Ebner: Department of Diagnostic, Interventional and Pediatric Radiology (DIPR), Bern University Hospital, University of Bern, Inselspital Bern, 3010, Switzerland
136
Channel-Wise Attention Mechanism in the 3D Convolutional Network for Lung Nodule Detection. ELECTRONICS 2022. [DOI: 10.3390/electronics11101600] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Pulmonary nodule detection is essential to reduce the mortality of lung cancer. One-stage detection methods have recently emerged as high-performance and lower-power alternatives to two-stage lung nodule detection methods. However, it is difficult for existing one-stage detection networks to balance sensitivity and specificity. In this paper, we propose an end-to-end detection mechanism combined with a channel-wise attention mechanism based on a 3D U-shaped residual network. First, an improved attention gate (AG) is introduced to reduce the false positive rate by employing critical feature dimensions at skip connections for feature propagation. Second, a channel interaction unit (CIU) is designed before the detection head to further improve detection sensitivity. Furthermore, the gradient harmonizing mechanism (GHM) loss function is adopted to solve the problem caused by the imbalance of positive and negative samples. We conducted experiments on the LUNA16 dataset and achieved a performance with a competition performance metric (CPM) score of 89.5% and sensitivity of 95%. The proposed method outperforms existing models in terms of sensitivity and specificity while maintaining the attractiveness of being lightweight, making it suitable for automatic lung nodule detection.
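The competition performance metric (CPM) reported above is the LUNA16 benchmark's summary score: the sensitivity averaged over seven predefined false-positive rates per scan. A minimal sketch (the sensitivity values passed in are illustrative, not the paper's FROC curve):

```python
def competition_performance_metric(sensitivity_at):
    """LUNA16 CPM: mean sensitivity at seven fixed false-positive rates per scan.

    `sensitivity_at` maps each FP-per-scan operating point to the sensitivity
    measured there; the seven abscissae below are fixed by the benchmark.
    """
    fp_per_scan = (0.125, 0.25, 0.5, 1, 2, 4, 8)
    return sum(sensitivity_at[r] for r in fp_per_scan) / len(fp_per_scan)

# Illustrative flat FROC curve at 0.9 sensitivity everywhere → CPM 0.9.
cpm = competition_performance_metric({r: 0.9 for r in (0.125, 0.25, 0.5, 1, 2, 4, 8)})
```

A real FROC curve rises with the allowed FP rate, so the CPM sits between the sensitivities at the strictest and loosest operating points.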
|
137
|
Painuli D, Bhardwaj S, Köse U. Recent advancement in cancer diagnosis using machine learning and deep learning techniques: A comprehensive review. Comput Biol Med 2022; 146:105580. [PMID: 35551012 DOI: 10.1016/j.compbiomed.2022.105580] [Citation(s) in RCA: 49] [Impact Index Per Article: 16.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2022] [Revised: 04/14/2022] [Accepted: 04/30/2022] [Indexed: 02/07/2023]
Abstract
Ranked as the second leading cause of mortality worldwide, cancer is a perilous disease, and a diagnosis at an advanced stage may do little to safeguard patients from death. Efforts to provide a sustainable architecture with proven cancer-prevention estimates and provision for early diagnosis are therefore the need of the hour. The advent of machine learning has enriched the cancer-diagnosis field with efficiency and error rates that improve on those of human readers. Over the past decade, a significant revolution has been witnessed in the development of machine learning- and deep learning-assisted systems for the segmentation and classification of various cancers. This paper reviews the detection of various cancer types across different data modalities using machine learning- and deep learning-based methods, along with the feature extraction techniques and benchmark datasets used in studies from the past six years. The focus of this study is to review, analyse, classify, and address recent developments in the detection and diagnosis of six types of cancer, i.e., breast, lung, liver, skin, brain, and pancreatic cancer, using machine learning and deep learning techniques. State-of-the-art techniques are clustered into groups, and their results are examined through key performance indicators such as accuracy, area under the curve, precision, sensitivity, and Dice score on benchmark datasets; the review concludes with challenges for future research.
Affiliation(s)
- Deepak Painuli
- Department of Computer Science and Engineering, Gurukula Kangri Vishwavidyalaya, Haridwar, India.
| | - Suyash Bhardwaj
- Department of Computer Science and Engineering, Gurukula Kangri Vishwavidyalaya, Haridwar, India
| | - Utku Köse
- Department of Computer Engineering, Suleyman Demirel University, Isparta, Turkey
| |
|
138
|
Kör H, Erbay H, Yurttakal AH. Diagnosing and differentiating viral pneumonia and COVID-19 using X-ray images. MULTIMEDIA TOOLS AND APPLICATIONS 2022; 81:39041-39057. [PMID: 35493416 PMCID: PMC9042669 DOI: 10.1007/s11042-022-13071-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/08/2021] [Revised: 01/29/2022] [Accepted: 04/04/2022] [Indexed: 05/31/2023]
Abstract
Coronavirus-caused diseases are common worldwide and can harm both human health and the world economy. Most people will encounter a coronavirus at some point in their lives, and infection may result in pneumonia. Nowadays, the world is fighting against a new coronavirus: COVID-19. Its rate of spread is high, and the world was caught unprepared. In most regions of the world, COVID-19 testing is not possible due to the absence of diagnostic kits, and even where a kit exists, its false-negative rate (giving a negative result for a person infected with COVID-19) is high. Early detection of COVID-19 is also crucial to keeping its morbidity and mortality rates low. The symptoms of different pneumonias are alike, and COVID-19 is no exception. The chest X-ray is the main reference in diagnosing pneumonia, so the need for radiologists has increased considerably, not only to detect COVID-19 but also to identify the other abnormalities it causes. Herein, a transfer learning-based multi-class convolutional neural network model is proposed for the automatic detection of pneumonia and for differentiating non-COVID-19 pneumonia from COVID-19. The model, which takes chest X-ray images as input, extracts radiographic patterns from the images, turns them into valuable information, and monitors structural differences in the lungs caused by the diseases. The model was developed using two public datasets: the Cohen dataset and the Kermany dataset. It achieves an average training accuracy of 0.9886, an average training recall of 0.9829, and an average training precision of 0.9837, with average training false-positive and false-negative rates of 0.0085 and 0.0171, respectively. On the test set, the average accuracy, recall, and precision are 97.78%, 96.67%, and 96.67%, respectively. According to the simulation results, the proposed model is promising: it can quickly and accurately classify chest images and help doctors as a second reader in their final decision.
Affiliation(s)
- Hakan Kör
- Department of Computer Engineering, Engineering Faculty, Hitit University, Çorum, Turkey
| | - Hasan Erbay
- Computer Engineering Department, Engineering Faculty, University of Turkish Aeronautical Association, 06790 Etimesgut Ankara, Turkey
| | - Ahmet Haşim Yurttakal
- Computer Engineering Department, Engineering Faculty, Afyon Kocatepe University, 03204 Erenler Afyon, Turkey
| |
|
139
|
Majanga V, Viriri S. A Survey of Dental Caries Segmentation and Detection Techniques. ScientificWorldJournal 2022; 2022:8415705. [PMID: 35450417 PMCID: PMC9017544 DOI: 10.1155/2022/8415705] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2022] [Revised: 02/21/2022] [Accepted: 03/10/2022] [Indexed: 01/15/2023] Open
Abstract
Dental caries detection has historically been a challenging task, given the amount of information obtained from various radiographic images. Several methods have been introduced to improve the quality of images for faster caries detection. Deep learning has become the methodology of choice for the analysis of medical images. This survey gives an in-depth look into the use of deep learning for object detection, segmentation, and classification, and then examines the literature on segmentation and detection methods for dental images through deep learning. From the literature studied, we found that methods can be grouped according to the type of dental caries (proximal, enamel), the type of X-ray images used (extraoral, intraoral), and the segmentation method (threshold-based, cluster-based, boundary-based, and region-based). Across the works reviewed, the main focus has been on threshold-based segmentation methods, and most papers have preferred intraoral over extraoral X-ray images, performing segmentation on dental images of already isolated parts of the teeth. This paper presents an in-depth analysis of recent research in deep learning for dental caries segmentation and detection. It discusses the methods and algorithms used in segmenting and detecting dental caries, the various existing models and how they compare with each other in terms of system performance and evaluation, and the limitations of these methods, as well as future perspectives on how to improve their performance.
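Threshold-based segmentation, the family this survey finds most common, reduces to a per-pixel comparison against a global grey-level cutoff. An illustrative sketch (the nested-list image representation and the threshold value are arbitrary assumptions, not details from any surveyed paper):

```python
def threshold_segment(image, thresh):
    """Threshold-based segmentation sketch: every pixel brighter than
    `thresh` becomes foreground (1), the rest background (0).
    `image` is a 2D list of grey levels."""
    return [[1 if px > thresh else 0 for px in row] for row in image]
```

Cluster-, boundary-, and region-based methods differ in how they pick or refine this decision boundary, but the foreground/background output mask has the same form.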
Affiliation(s)
- Vincent Majanga
- Statistics and Computer Science, University of KwaZulu-Natal, Durban 4000, South Africa
| | - Serestina Viriri
- Statistics and Computer Science, University of KwaZulu-Natal, Durban 4000, South Africa
| |
|
140
|
Shi F, Chen B, Cao Q, Wei Y, Zhou Q, Zhang R, Zhou Y, Yang W, Wang X, Fan R, Yang F, Chen Y, Li W, Gao Y, Shen D. Semi-Supervised Deep Transfer Learning for Benign-Malignant Diagnosis of Pulmonary Nodules in Chest CT Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:771-781. [PMID: 34705640 DOI: 10.1109/tmi.2021.3123572] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Lung cancer is the leading cause of cancer deaths worldwide. Accurately diagnosing the malignancy of suspected lung nodules is of paramount clinical importance. However, to date, pathologically-proven lung nodule datasets are largely limited and highly imbalanced between benign and malignant cases. In this study, we proposed a Semi-supervised Deep Transfer Learning (SDTL) framework for benign-malignant pulmonary nodule diagnosis. First, we utilize a transfer learning strategy by adopting a pre-trained classification network that is used to differentiate pulmonary nodules from nodule-like tissues. Second, since the number of pathologically-proven samples is small, an iterated feature-matching-based semi-supervised method is proposed to take advantage of a large available dataset with no pathological results. Specifically, a similarity metric function is adopted in the network's semantic representation space to gradually include a small subset of samples without pathological results and iteratively optimize the classification network. In this study, a total of 3,038 pulmonary nodules (from 2,853 subjects) with pathologically-proven benign or malignant labels and 14,735 unlabeled nodules (from 4,391 subjects) were retrospectively collected. Experimental results demonstrate that our proposed SDTL framework achieves superior diagnostic performance, with accuracy = 88.3% and AUC = 91.0% on the main dataset, and accuracy = 74.5% and AUC = 79.5% on the independent testing dataset. Furthermore, an ablation study shows that the use of transfer learning provides a 2% accuracy improvement, and the use of semi-supervised learning contributes a further 2.9% accuracy improvement. These results indicate that our proposed classification network could provide an effective diagnostic tool for suspected lung nodules and might have a promising application in clinical practice.
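The iterated feature-matching step described in this abstract can be illustrated schematically: class centroids are formed in the network's representation space, and an unlabeled sample receives a pseudo-label only when its similarity to a centroid clears a confidence threshold, leaving the rest for later iterations. The following is a simplified sketch, not the authors' code; the cosine metric and the 0.9 threshold are assumptions:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def pseudo_label(labeled, unlabeled, threshold=0.9):
    """One round of feature-matching pseudo-labeling (sketch).

    `labeled` is a list of (features, label) pairs; `unlabeled` is a
    list of feature vectors.  Each unlabeled sample adopts the label of
    the most similar class centroid, but only if the similarity clears
    `threshold`; the rest stay unlabeled for a later iteration.
    """
    # class centroids in feature space
    centroids = {}
    for feats, label in labeled:
        centroids.setdefault(label, []).append(feats)
    for label, rows in centroids.items():
        centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]

    accepted, remaining = [], []
    for feats in unlabeled:
        label, sim = max(((l, cosine(feats, c)) for l, c in centroids.items()),
                         key=lambda t: t[1])
        if sim >= threshold:
            accepted.append((feats, label))
        else:
            remaining.append(feats)
    return accepted, remaining
```

In the full framework the classifier would be retrained on `labeled + accepted` and the round repeated until `remaining` stops shrinking.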
|
141
|
Reliable detection of lymph nodes in whole pelvic for radiotherapy. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103501] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
|
142
|
Zhou J, Xin H. Emerging artificial intelligence methods for fighting lung cancer: a survey. CLINICAL EHEALTH 2022. [DOI: 10.1016/j.ceh.2022.04.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
|
143
|
SC-Dynamic R-CNN: A Self-Calibrated Dynamic R-CNN Model for Lung Cancer Lesion Detection. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2022; 2022:9452157. [PMID: 35387227 PMCID: PMC8979747 DOI: 10.1155/2022/9452157] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/06/2021] [Revised: 02/24/2022] [Accepted: 03/06/2022] [Indexed: 12/21/2022]
Abstract
Lung cancer has complex biological characteristics and a high degree of malignancy. It has long been the number one “killer” among cancers, threatening human life and health, and its diagnosis and early treatment still require improvement and further development. Given its high morbidity and mortality, there is an urgent need for an accurate diagnostic method; however, existing computer-aided detection systems involve complicated pipelines and low detection accuracy. To solve this problem, this paper proposed a two-stage detection method based on the dynamic region-based convolutional neural network (Dynamic R-CNN). We divide lung cancer into squamous cell carcinoma, adenocarcinoma, and small cell carcinoma. By adding a self-calibrated convolution module into the feature network, we extracted richer lung cancer features, and we proposed a new regression loss function to further improve detection performance. Experimental verification shows that the model reaches an mAP (mean average precision) of 88.1% on the lung cancer dataset and performs particularly well at high IoU (intersection over union) thresholds. The method performs well in the detection of lung cancer, can improve the efficiency of doctors' diagnoses, and can, to a certain extent, avoid false and missed detections.
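The IoU threshold behind the mAP figures quoted in this abstract is the standard overlap criterion between a predicted and a ground-truth box. A self-contained sketch of that computation (the (x1, y1, x2, y2) box convention is a common assumption, not specified in the abstract):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as
    (x1, y1, x2, y2).  A detection counts as correct when its IoU
    with a ground-truth box exceeds the chosen threshold."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # zero if disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)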
|
144
|
Chen Y, Tian X, Fan K, Zheng Y, Tian N, Fan K. The Value of Artificial Intelligence Film Reading System Based on Deep Learning in the Diagnosis of Non-Small-Cell Lung Cancer and the Significance of Efficacy Monitoring: A Retrospective, Clinical, Nonrandomized, Controlled Study. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2022; 2022:2864170. [PMID: 35360550 PMCID: PMC8964156 DOI: 10.1155/2022/2864170] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/28/2021] [Revised: 01/26/2022] [Accepted: 01/29/2022] [Indexed: 12/24/2022]
Abstract
Objective To explore the value of an artificial intelligence (AI) film reading system based on deep learning in the diagnosis of non-small-cell lung cancer (NSCLC) and its significance for efficacy monitoring. Methods We retrospectively selected 104 suspected NSCLC cases from the self-built chest CT pulmonary nodule database of our hospital, all of which were confirmed by pathological examination. The lung CT images of the selected patients were imported into the AI pulmonary nodule reading system, which automatically identified the nodules, and the results were compared with those of the original image report. Nodules detected by the AI software and by the film readers were evaluated by two chest experts, who recorded their size and characteristics. The sensitivity and false-positive rate of the software and the physicians' nodule-detection efficiency were compared to determine whether there was a significant difference between the two groups. Results The sensitivity, specificity, accuracy, positive predictive rate, and false-positive rate of NSCLC diagnosed by radiologists were 72.94% (62/85), 92.06% (58/63), 81.08% ((62+58)/148), 92.53% (62/67), and 7.93% (5/63), respectively. The corresponding values for the AI film reading system were 94.12% (80/85), 77.77% (49/63), 87.16% ((80+49)/148), 85.11% (80/94), and 22.22% (14/63), respectively. Compared with radiologists, the AI film reading system had higher sensitivity and a higher false-positive rate in the diagnosis of NSCLC (P < 0.05). The sensitivity, specificity, accuracy, positive prediction rate, and negative prediction rate of the AI film reading system in evaluating the efficacy of treatment in patients with NSCLC were 87.50% (63/72), 69.23% (9/13), 84.70% ((63+9)/85), 94.02% (63/67), and 50% (9/18), respectively.
Conclusion The AI film reading system based on deep learning has higher sensitivity for the diagnosis of NSCLC than radiologists and can be used as an auxiliary detection tool for doctors screening for NSCLC; however, its false-positive rate is relatively high, so attention should be paid to verifying its findings. The system also has a certain guiding significance for the diagnosis and treatment monitoring of NSCLC.
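The rates in the Results section above follow directly from the raw counts the abstract reports; a small helper makes the arithmetic explicit (the function name and argument ordering are illustrative, not from the paper):

```python
def diagnostic_rates(tp, fn, tn, fp):
    """Diagnostic performance rates from confusion-matrix counts.
    For the radiologists in the abstract: tp=62, fn=23 (of 85 malignant),
    tn=58, fp=5 (of 63 benign)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fn + tn + fp),
        "ppv": tp / (tp + fp),                     # positive predictive rate
        "false_positive_rate": fp / (fp + tn),
    }
```

Plugging in the radiologists' counts reproduces the reported 72.94% sensitivity, 92.06% specificity, and 81.08% accuracy.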
Affiliation(s)
- Yunbing Chen
- Department of Computerized Tomography, Jincheng People's Hospital (Jincheng Hospital Affiliated to Changzhi Medical College), No. 456 Wenchang East Street, Jincheng, 048026 Shanxi, China
| | - Xin Tian
- Department of Computerized Tomography, Jincheng People's Hospital (Jincheng Hospital Affiliated to Changzhi Medical College), No. 456 Wenchang East Street, Jincheng, 048026 Shanxi, China
| | - Kai Fan
- Department of Computerized Tomography, Jincheng People's Hospital (Jincheng Hospital Affiliated to Changzhi Medical College), No. 456 Wenchang East Street, Jincheng, 048026 Shanxi, China
| | - Yanni Zheng
- Department of Computerized Tomography, Jincheng People's Hospital (Jincheng Hospital Affiliated to Changzhi Medical College), No. 456 Wenchang East Street, Jincheng, 048026 Shanxi, China
| | - Nannan Tian
- Department of Computerized Tomography, Jincheng People's Hospital (Jincheng Hospital Affiliated to Changzhi Medical College), No. 456 Wenchang East Street, Jincheng, 048026 Shanxi, China
| | - Ka Fan
- Department of Computerized Tomography, Jincheng People's Hospital (Jincheng Hospital Affiliated to Changzhi Medical College), No. 456 Wenchang East Street, Jincheng, 048026 Shanxi, China
| |
|
145
|
Bouget D, Pedersen A, Vanel J, Leira HO, Langø T. Mediastinal lymph nodes segmentation using 3D convolutional neural network ensembles and anatomical priors guiding. COMPUTER METHODS IN BIOMECHANICS AND BIOMEDICAL ENGINEERING: IMAGING & VISUALIZATION 2022. [DOI: 10.1080/21681163.2022.2043778] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Affiliation(s)
- David Bouget
- Department of Medical Technology, SINTEF, Trondheim, Norway
- Department of Circulation and Medical Imaging, NTNU, Center for Innovative Ultrasound Solutions, Trondheim, Norway
| | - André Pedersen
- Department of Medical Technology, SINTEF, Trondheim, Norway
| | - Johanna Vanel
- Department of Medical Technology, SINTEF, Trondheim, Norway
| | - Haakon O. Leira
- Department of Thoracic Medicine, St. Olavs Hospital, Trondheim, Norway
| | - Thomas Langø
- Department of Medical Technology, SINTEF, Trondheim, Norway
| |
|
146
|
Arabahmadi M, Farahbakhsh R, Rezazadeh J. Deep Learning for Smart Healthcare-A Survey on Brain Tumor Detection from Medical Imaging. SENSORS (BASEL, SWITZERLAND) 2022; 22:1960. [PMID: 35271115 PMCID: PMC8915095 DOI: 10.3390/s22051960] [Citation(s) in RCA: 36] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/17/2022] [Revised: 02/18/2022] [Accepted: 02/28/2022] [Indexed: 12/13/2022]
Abstract
Advances in technology have affected all aspects of human life; for example, the use of technology in medicine has made significant contributions to human society. In this article, we focus on technology assistance for one of the most common and deadly diseases, brain tumors. Every year, many people die due to brain tumors; according to estimates from the "braintumor" website, about 700,000 people in the U.S. have primary brain tumors, and about 85,000 are added to this estimate every year. To address this problem, artificial intelligence has come to the aid of medicine. Magnetic resonance imaging (MRI) is the most common method of diagnosing brain tumors, and it is widely used in medical imaging and image processing to detect abnormalities in different parts of the body. In this study, we conducted a comprehensive review of existing efforts to apply different types of deep learning methods to MRI data and identified the existing challenges in the domain, followed by potential future directions. One of the branches of deep learning that has been very successful in processing medical images is the convolutional neural network (CNN); therefore, this survey reviews various CNN architectures with a focus on the processing of medical images, especially brain MRI.
Affiliation(s)
| | - Reza Farahbakhsh
- Institut Polytechnique de Paris, Telecom SudParis, 91000 Evry, France;
| | - Javad Rezazadeh
- North Tehran Branch, Azad University, Tehran 1667914161, Iran;
- Kent Institute Australia, Sydney, NSW 2000, Australia
| |
|
147
|
Assari Z, Mahloojifar A, Ahmadinejad N. Discrimination of benign and malignant solid breast masses using deep residual learning-based bimodal computer-aided diagnosis system. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103453] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
|
148
|
Küstner T, Hepp T, Seith F. Multiparametric Oncologic Hybrid Imaging: Machine Learning Challenges and Opportunities. ROFO-FORTSCHR RONTG 2022; 194:605-612. [PMID: 35211929 DOI: 10.1055/a-1718-4128] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
BACKGROUND Machine learning (ML) is considered an important technology for future data analysis in health care. METHODS The inherently technology-driven fields of diagnostic radiology and nuclear medicine will both benefit from ML in terms of image acquisition and reconstruction. Within the next few years, this will lead to accelerated image acquisition, improved image quality, a reduction of motion artifacts and - for PET imaging - reduced radiation exposure and new approaches for attenuation correction. Furthermore, ML has the potential to support decision making by a combined analysis of data derived from different modalities, especially in oncology. In this context, we see great potential for ML in multiparametric hybrid imaging and the development of imaging biomarkers. RESULTS AND CONCLUSION In this review, we describe the basics of ML, present approaches in hybrid imaging of MRI, CT, and PET, and discuss the specific challenges associated with them and the steps ahead to make ML a diagnostic and clinical tool in the future. KEY POINTS · ML provides a viable clinical solution for the reconstruction, processing, and analysis of hybrid imaging obtained from MRI, CT, and PET. CITATION FORMAT · Küstner T, Hepp T, Seith F. Multiparametric Oncologic Hybrid Imaging: Machine Learning Challenges and Opportunities. Fortschr Röntgenstr 2022; DOI: 10.1055/a-1718-4128.
Affiliation(s)
- Thomas Küstner
- Medical Image and Data Analysis (MIDAS.lab), Department of Diagnostic and Interventional Radiology, University Hospitals Tubingen, Germany
| | - Tobias Hepp
- Medical Image and Data Analysis (MIDAS.lab), Department of Diagnostic and Interventional Radiology, University Hospitals Tubingen, Germany
| | - Ferdinand Seith
- Department of Diagnostic and Interventional Radiology, University Hospitals Tubingen, Germany
| |
|
149
|
Efficient tumor volume measurement and segmentation approach for CT image based on twin support vector machines. Neural Comput Appl 2022. [DOI: 10.1007/s00521-021-06769-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
150
|
Atkins KM, Weiss J, Zeleznik R, Bitterman DS, Chaunzwa TL, Huynh E, Guthier C, Kozono DE, Lewis JH, Tamarappoo BK, Nohria A, Hoffmann U, Aerts HJWL, Mak RH. Elevated Coronary Artery Calcium Quantified by a Validated Deep Learning Model From Lung Cancer Radiotherapy Planning Scans Predicts Mortality. JCO Clin Cancer Inform 2022; 6:e2100095. [PMID: 35084935 DOI: 10.1200/cci.21.00095] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022] Open
Abstract
PURPOSE Coronary artery calcium (CAC) quantified on computed tomography (CT) scans is a robust predictor of atherosclerotic coronary disease; however, the feasibility and relevance of quantitating CAC from lung cancer radiotherapy planning CT scans is unknown. We used a previously validated deep learning (DL) model to assess whether CAC is a predictor of all-cause mortality and major adverse cardiac events (MACEs). METHODS A retrospective analysis of non-contrast-enhanced radiotherapy planning CT scans from 428 patients with locally advanced lung cancer was performed. The DL-CAC algorithm was previously trained on 1,636 cardiac-gated CT scans and tested on four clinical trial cohorts. Plaques ≥ 1 cubic millimeter were measured to generate an Agatston-like DL-CAC score and grouped as DL-CAC = 0 (very low risk) and DL-CAC ≥ 1 (elevated risk). Cox and Fine-Gray regressions were adjusted for lung cancer and cardiovascular factors. RESULTS The median follow-up was 18.1 months. The majority (61.4%) had a DL-CAC ≥ 1. There was an increased risk of all-cause mortality with DL-CAC ≥ 1 versus DL-CAC = 0 (adjusted hazard ratio, 1.51; 95% CI, 1.01 to 2.26; P = .04), with 2-year estimates of 56.2% versus 45.4%, respectively. There was a trend toward increased risk of major adverse cardiac events with DL-CAC ≥ 1 versus DL-CAC = 0 (hazard ratio, 1.80; 95% CI, 0.87 to 3.74; P = .11), with 2-year estimates of 7.3% versus 1.2%, respectively. CONCLUSION In this proof-of-concept study, CAC was effectively measured from routinely acquired radiotherapy planning CT scans using an automated model. Elevated CAC, as predicted by the DL model, was associated with an increased risk of mortality, suggesting a potential benefit for automated cardiac risk screening before cancer therapy begins.
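The "Agatston-like" DL-CAC score mentioned in this abstract mimics the classic Agatston calculation, in which each calcified lesion contributes its area weighted by a density factor from its peak attenuation. A sketch of that scoring rule and the study's dichotomisation (the lesion representation is an assumption; the HU bands are the standard Agatston ones, and the paper's DL model produces its score from automatic segmentation rather than this manual rule):

```python
def agatston_like_score(lesions):
    """Agatston-style coronary calcium score (sketch).

    `lesions` is a list of (area_mm2, peak_hu) pairs for calcified
    plaques.  Each lesion contributes area * density weight, where the
    weight follows the classic Agatston attenuation bands."""
    def weight(peak_hu):
        if peak_hu < 130:
            return 0  # below the calcium detection threshold
        if peak_hu < 200:
            return 1
        if peak_hu < 300:
            return 2
        if peak_hu < 400:
            return 3
        return 4

    return sum(area * weight(hu) for area, hu in lesions)

def risk_group(score):
    """The study's dichotomisation: DL-CAC = 0 (very low risk)
    versus DL-CAC >= 1 (elevated risk)."""
    return "very low risk" if score == 0 else "elevated risk"
```

The binary grouping is what feeds the Cox and Fine-Gray models in the abstract: patients are compared across the two risk groups rather than on the continuous score.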
Affiliation(s)
- Katelyn M Atkins
- Department of Radiation Oncology, Cedars-Sinai Medical Center, Los Angeles, CA; Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women's Hospital, Boston, MA
| | - Jakob Weiss
- Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women's Hospital, Boston, MA; Artificial Intelligence in Medicine (AIM) Program, Brigham and Women's Hospital, Harvard Medical School, Boston, MA; Department of Diagnostic and Interventional Radiology, University Hospital, Freiburg, Germany
| | - Roman Zeleznik
- Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women's Hospital, Boston, MA; Artificial Intelligence in Medicine (AIM) Program, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
| | - Danielle S Bitterman
- Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women's Hospital, Boston, MA; Artificial Intelligence in Medicine (AIM) Program, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
| | - Tafadzwa L Chaunzwa
- Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women's Hospital, Boston, MA; Artificial Intelligence in Medicine (AIM) Program, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
| | - Elizabeth Huynh
- Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women's Hospital, Boston, MA
| | - Christian Guthier
- Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women's Hospital, Boston, MA
| | - David E Kozono
- Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women's Hospital, Boston, MA
| | - John H Lewis
- Department of Radiation Oncology, Cedars-Sinai Medical Center, Los Angeles, CA
| | | | - Anju Nohria
- Department of Cardiovascular Medicine, Dana-Farber Cancer Institute and Brigham and Women's Hospital, Boston, MA
| | - Udo Hoffmann
- Artificial Intelligence in Medicine (AIM) Program, Brigham and Women's Hospital, Harvard Medical School, Boston, MA; Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA
| | - Hugo J W L Aerts
- Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women's Hospital, Boston, MA; Artificial Intelligence in Medicine (AIM) Program, Brigham and Women's Hospital, Harvard Medical School, Boston, MA; Radiology and Nuclear Medicine, CARIM & GROW, Maastricht University, Maastricht, the Netherlands
| | - Raymond H Mak
- Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women's Hospital, Boston, MA; Artificial Intelligence in Medicine (AIM) Program, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
| |
|