1. Felefly T, Francis Z, Roukoz C, Fares G, Achkar S, Yazbeck S, Nasr A, Kordahi M, Azoury F, Nasr DN, Nasr E, Noël G. A 3D Convolutional Neural Network Based on Non-enhanced Brain CT to Identify Patients with Brain Metastases. J Imaging Inform Med 2025; 38:858-864. [PMID: 39187703 PMCID: PMC11950574 DOI: 10.1007/s10278-024-01240-5]
Abstract
Dedicated brain imaging for cancer patients is seldom recommended in the absence of symptoms. There is increasing availability of non-enhanced CT (NE-CT) of the brain, mainly owing to a wider utilization of Positron Emission Tomography-CT (PET-CT) in cancer staging. Brain metastases (BM) are often hard to diagnose on NE-CT. This work aims to develop a 3D Convolutional Neural Network (3D-CNN) based on brain NE-CT to distinguish patients with and without BM. We retrospectively included NE-CT scans for 100 patients with single or multiple BM and 100 patients without brain imaging abnormalities. Patients whose largest lesion was < 5 mm were excluded. The largest tumor was manually segmented on a matched contrast-enhanced T1-weighted Magnetic Resonance Imaging (MRI) scan, and shape radiomics were extracted to determine the size and volume of the lesion. The brain was automatically segmented, and masked images were normalized and resampled. The dataset was split into training (70%) and validation (30%) sets. Multiple versions of a 3D-CNN were developed, and the best model was selected based on accuracy (ACC) on the validation set. The median largest tumor Maximum-3D-Diameter was 2.29 cm, and its median volume was 2.81 cc. Solitary BM were found in 27% of the patients, while 49% had > 5 BMs. The best model consisted of 4 convolutional layers with 3D average pooling layers, dropout layers of 50%, and a sigmoid activation function. Mean validation ACC was 0.983 (SD: 0.020) and mean area under the receiver-operating characteristic curve was 0.983 (SD: 0.023). Sensitivity was 0.983 (SD: 0.020). We developed an accurate 3D-CNN based on brain NE-CT to differentiate between patients with and without BM. The model merits further external validation.
Affiliation(s)
- Tony Felefly
- Radiation Oncology Department, Hôtel-Dieu de France Hospital, Saint Joseph University, Beirut, Lebanon.
- ICube Laboratory, University of Strasbourg, Strasbourg, France.
- Radiation Oncology Department, Hôtel-Dieu de Lévis, Lévis, QC, Canada.
- Ziad Francis
- Physics Department, Saint Joseph University, Beirut, Lebanon
- Camille Roukoz
- Radiation Oncology Department, Hôtel-Dieu de France Hospital, Saint Joseph University, Beirut, Lebanon
- Georges Fares
- Radiation Oncology Department, Hôtel-Dieu de France Hospital, Saint Joseph University, Beirut, Lebanon
- Physics Department, Saint Joseph University, Beirut, Lebanon
- Samir Achkar
- Radiation Oncology Department, Gustave Roussy Cancer Campus, 94805, Villejuif, France
- Sandrine Yazbeck
- Department of Radiology, University of Maryland School of Medicine, 655 W Baltimore St S, Baltimore, MD, 21201, USA
- Manal Kordahi
- Pathology Department, Centre Hospitalier Affilié Universitaire Régional, Trois-Rivières, QC, Canada
- Fares Azoury
- Radiation Oncology Department, Hôtel-Dieu de France Hospital, Saint Joseph University, Beirut, Lebanon
- Dolly Nehme Nasr
- Radiation Oncology Department, Hôtel-Dieu de France Hospital, Saint Joseph University, Beirut, Lebanon
- Elie Nasr
- Radiation Oncology Department, Hôtel-Dieu de France Hospital, Saint Joseph University, Beirut, Lebanon
- Georges Noël
- Radiotherapy Department, Institut de Cancérologie de Strasbourg (ICANS), 67200, Strasbourg, France
- Radiobiology Department, IMIS Unit, IRIS Platform, ICube, University of Strasbourg, 67085, Strasbourg Cedex, France
- Faculty of Medicine, University of Strasbourg, 67000, Strasbourg, France
2. Yasaka K, Hatano S, Mizuki M, Okimoto N, Kubo T, Shibata E, Watadani T, Abe O. Effects of deep learning on radiologists' and radiology residents' performance in identifying esophageal cancer on CT. Br J Radiol 2023; 96:20220685. [PMID: 37000686 PMCID: PMC10546446 DOI: 10.1259/bjr.20220685]
Abstract
OBJECTIVE To investigate the effectiveness of a deep learning model in helping radiologists or radiology residents detect esophageal cancer on contrast-enhanced CT images. METHODS This retrospective study included 250 and 25 patients with and without esophageal cancer, respectively, who underwent contrast-enhanced CT between December 2014 and May 2021 (mean age, 67.9 ± 10.3 years; 233 men). A deep learning model was developed using data from 200 and 25 patients with esophageal cancer as training and validation data sets, respectively. The model was then applied to the test data set, consisting of an additional 25 and 25 patients with and without esophageal cancer, respectively. Four readers (one radiologist and three radiology residents) independently registered the likelihood of malignant lesions using a 3-point scale in the test data set. After the scorings were completed, the readers were allowed to reference the deep learning model results and modify their scores, when necessary. RESULTS The area under the curve (AUC) of the deep learning model was 0.95 and 0.98 in the image- and patient-based analyses, respectively. By referencing the deep learning model results, the AUCs for the readers improved from 0.96/0.93/0.96/0.93 to 0.97/0.95/0.99/0.96 (p = 0.100/0.006/<0.001/<0.001, DeLong's test) in the image-based analysis, with statistically significant differences noted for the three less-experienced readers. Furthermore, the AUCs for the readers tended to improve from 0.98/0.96/0.98/0.94 to 1.00/1.00/1.00/1.00 (p = 0.317/0.149/0.317/0.073, DeLong's test) in the patient-based analysis. CONCLUSION The deep learning model mainly helped less-experienced readers improve their performance in detecting esophageal cancer on contrast-enhanced CT. ADVANCES IN KNOWLEDGE A deep learning model could mainly help less-experienced readers detect esophageal cancer by improving their diagnostic confidence and diagnostic performance.
Affiliation(s)
- Koichiro Yasaka
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Sosuke Hatano
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Masumi Mizuki
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Naomasa Okimoto
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Takatoshi Kubo
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Eisuke Shibata
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Takeyuki Watadani
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Osamu Abe
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
3. Wu Y, Chen G, Feng Z, Cui H, Rao F, Ni Y, Huang Z, Zhu W. Phase Difference Network for Efficient Differentiation of Hepatic Tumors with Multi-Phase CT. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-5. [PMID: 38083466 DOI: 10.1109/embc40787.2023.10340090]
Abstract
Liver cancer has been one of the top causes of cancer-related death. Differentiating liver cancers is essential for developing an accurate treatment strategy and raising the survival rate. Multi-phase CT has recently become the primary examination method for clinical diagnosis, and deep learning techniques based on multi-phase CT have been proposed to distinguish hepatic cancers. However, owing to their recurrent mechanism, RNN-based approaches require expensive computation, whereas CNN-based models fail to explicitly establish temporal correlations among phases. In this paper, we propose a Phase Difference Network (PDN) to identify two liver cancers, hepatocellular carcinoma and intrahepatic cholangiocarcinoma, from four-phase CT. Specifically, the phase difference is used as inter-phase temporal information in a differential attention module, which enhances the feature representation. Additionally, a transformer-based classification module with multi-head self-attention is employed to explore the long-term context and capture the temporal relation between phases. Experiments on clinical datasets compare the performance of the proposed strategy against conventional approaches, and the results indicate that the proposed method outperforms traditional deep learning-based methods.
4. Li R, Guo Y, Zhao Z, Chen M, Liu X, Gong G, Wang L. MRI-based two-stage deep learning model for automatic detection and segmentation of brain metastases. Eur Radiol 2023; 33:3521-3531. [PMID: 36695903 DOI: 10.1007/s00330-023-09420-7]
Abstract
OBJECTIVES To develop and validate a two-stage deep learning model for automatic detection and segmentation of brain metastases (BMs) in MRI images. METHODS In this retrospective study, T1-weighted (T1) and T1-weighted contrast-enhanced (T1ce) MRI images of 649 patients who underwent radiotherapy from August 2019 to January 2022 were included. A total of 5163 metastases were manually annotated by neuroradiologists. A two-stage deep learning model was developed for automatic detection and segmentation of BMs, which consisted of a lightweight segmentation network for generating metastases proposals and a multi-scale classification network for false-positive suppression. Its performance was evaluated by sensitivity, precision, F1-score, dice, and relative volume difference (RVD). RESULTS Six hundred forty-nine patients were randomly divided into training (n = 295), validation (n = 99), and testing (n = 255) sets. The proposed two-stage model achieved a sensitivity of 90% (1463/1632) and a precision of 56% (1463/2629) on the testing set, outperforming one-stage methods based on a single-shot detector, 3D U-Net, and nnU-Net, whose sensitivities were 78% (1276/1632), 79% (1290/1632), and 87% (1426/1632), and the precisions were 40% (1276/3222), 51% (1290/2507), and 53% (1426/2688), respectively. Particularly for BMs smaller than 5 mm, the proposed model achieved a sensitivity of 66% (116/177), far superior to one-stage models (21% (37/177), 36% (64/177), and 53% (93/177)). Furthermore, it also achieved high segmentation performance with an average dice of 81% and an average RVD of 20%. CONCLUSION A two-stage deep learning model can detect and segment BMs with high sensitivity and low volume error. KEY POINTS • A two-stage deep learning model based on triple-channel MRI images identified brain metastases with 90% sensitivity and 56% precision. • For brain metastases smaller than 5 mm, the proposed two-stage model achieved 66% sensitivity and 22% precision. 
• For segmentation of brain metastases, the proposed two-stage model achieved a dice of 81% and a relative volume difference (RVD) of 20%.
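The lesion-level sensitivity and precision quoted in the key points follow directly from the reported raw counts; a minimal sketch in Python (the helper function is illustrative, not taken from the paper):

```python
def detection_metrics(tp, fp, fn):
    """Lesion-level detection metrics from true/false positive and false negative counts."""
    sensitivity = tp / (tp + fn)   # fraction of annotated lesions that were detected
    precision = tp / (tp + fp)     # fraction of model proposals that are true lesions
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, precision, f1

# Testing-set counts reported above: 1463 true positives out of 1632 lesions,
# with 2629 proposals in total.
sens, prec, _ = detection_metrics(tp=1463, fp=2629 - 1463, fn=1632 - 1463)
print(f"sensitivity={sens:.0%}, precision={prec:.0%}")  # sensitivity=90%, precision=56%
```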
Affiliation(s)
- Ruikun Li
- Department of Automation, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Yujie Guo
- Shandong Cancer Hospital Affiliated to Shandong University, Jinan, 250117, China
- Zhongchen Zhao
- Department of Automation, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Mingming Chen
- Shandong Cancer Hospital Affiliated to Shandong University, Jinan, 250117, China
- Guanzhong Gong
- Shandong Cancer Hospital Affiliated to Shandong University, Jinan, 250117, China
- Department of Engineering Physics, Tsinghua University, Beijing, 100084, China
- Lisheng Wang
- Department of Automation, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
5. Spiking Neural P System with Synaptic Vesicles and Applications in Multiple Brain Metastasis Segmentation. Inf Sci (N Y) 2023. [DOI: 10.1016/j.ins.2023.01.016]
6. Deep Learning-Assisted Droplet Digital PCR for Quantitative Detection of Human Coronavirus. Biochip J 2023; 17:112-119. [PMID: 36687365 PMCID: PMC9843095 DOI: 10.1007/s13206-023-00095-2]
Abstract
Since the coronavirus disease 2019 (COVID-19) pandemic spread rapidly worldwide, there has been an urgent demand for accurate and suitable nucleic acid detection technology. Although conventional threshold-based algorithms have been used for processing images of droplet digital polymerase chain reaction (ddPCR), challenges remain from noise and the irregular size of droplets. Here, we present a method combining a mask region convolutional neural network (Mask R-CNN)-based image detection algorithm with a Gaussian mixture model (GMM)-based thresholding algorithm. This approach significantly reduces the false detection rate and yields a highly accurate prediction model for ddPCR image processing. We demonstrated how deep learning improves overall performance in ddPCR image processing; our method is therefore promising for nucleic acid detection technology.
7. Kato S, Amemiya S, Takao H, Yamashita H, Sakamoto N, Miki S, Watanabe Y, Suzuki F, Fujimoto K, Mizuki M, Abe O. Computer-aided detection improves brain metastasis identification on non-enhanced CT in less experienced radiologists. Acta Radiol 2022; 64:1958-1965. [PMID: 36426577 DOI: 10.1177/02841851221139124]
Abstract
Background Brain metastases (BMs) are the most common intracranial tumors causing neurological complications associated with significant morbidity and mortality. Purpose To evaluate the effect of computer-aided detection (CAD) on the performance of observers in detecting BMs on non-enhanced computed tomography (NECT). Material and Methods Three less experienced and three experienced radiologists interpreted 30 NECT scans with 89 BMs in 25 cases to detect BMs with and without the assistance of CAD. The observers' sensitivity, number of false positives (FPs), positive predictive value (PPV), and reading time with and without CAD were compared using paired t-tests. The sensitivity of CAD and the observers were compared using a one-sample t-test. Results With CAD, less experienced radiologists' sensitivity significantly increased from 27.7% ± 4.6% to 32.6% ± 4.8% (P = 0.007), while the experienced radiologists' sensitivity did not show a significant difference (from 33.3% ± 3.5% to 31.9% ± 3.7%; P = 0.54). There was no significant difference between conditions with and without CAD for FPs (less experienced radiologists: 23.0 ± 10.4 and 25.0 ± 9.3; P = 0.32; experienced radiologists: 18.3 ± 7.4 and 17.3 ± 6.7; P = 0.76) or PPVs (less experienced radiologists: 57.9% ± 8.3% and 50.9% ± 7.0%; P = 0.14; experienced radiologists: 61.8% ± 12.7% and 64.0% ± 12.1%; P = 0.69). There were no significant differences in reading time with and without CAD (85.0 ± 45.6 s and 73.7 ± 36.7 s; P = 0.09). The sensitivity of CAD was 47.2% (with a PPV of 8.9%), which was significantly higher than that of any radiologist (P < 0.001). Conclusion CAD improved BM detection sensitivity on NECT without increasing FPs or reading time among less experienced radiologists, but this was not the case among experienced radiologists.
Affiliation(s)
- Shimpei Kato
- Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Shiori Amemiya
- Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Hidemasa Takao
- Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Hiroshi Yamashita
- Department of Radiology, Teikyo University Hospital, Kawasaki, Kanagawa, Japan
- Naoya Sakamoto
- Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Soichiro Miki
- Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Yusuke Watanabe
- Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Fumio Suzuki
- Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Kotaro Fujimoto
- Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Masumi Mizuki
- Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Osamu Abe
- Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
8. The Usefulness of Computer-Aided Detection of Brain Metastases on Contrast-Enhanced Computed Tomography Using Single-Shot Multibox Detector: Observer Performance Study. J Comput Assist Tomogr 2022; 46:786-791. [PMID: 35819922 DOI: 10.1097/rct.0000000000001339]
Abstract
OBJECTIVE This study aimed to test the usefulness of computer-aided detection (CAD) for the detection of brain metastasis (BM) on contrast-enhanced computed tomography. METHODS The test data set included whole-brain axial contrast-enhanced computed tomography images of 25 cases with 62 BMs and 5 cases without BM. Six radiologists from 3 institutions with 2 to 4 years of experience independently reviewed the cases, both with and without CAD assistance. Sensitivity, positive predictive value, number of false positives, and reading time were compared between the conditions using paired t tests. Subanalysis was also performed for groups of lesions divided according to size. A P value <0.05 was considered statistically significant. RESULTS With CAD, sensitivity significantly increased from 80.4% to 83.9% (P = 0.04), whereas positive predictive value significantly decreased from 88.7% to 84.8% (P = 0.03). Reading time with and without CAD was 112 and 107 seconds, respectively (P = 0.38), and the number of false positives was 10.5 with CAD and 7.0 without CAD (P = 0.053). Sensitivity significantly improved for 6- to 12-mm lesions, from 71.2% without CAD to 80.3% with CAD (P = 0.02). The sensitivity of the CAD (95.2%) was significantly higher than that of any reader (with CAD: P = 0.01; without CAD: P = 0.005). CONCLUSIONS Computer-aided detection significantly improved BM detection sensitivity without prolonging reading time, while marginally increasing false positives.
9. Deep-learning 2.5-dimensional single-shot detector improves the performance of automated detection of brain metastases on contrast-enhanced CT. Neuroradiology 2022; 64:1511-1518. [PMID: 35064786 DOI: 10.1007/s00234-022-02902-3]
Abstract
PURPOSE This study aims to develop a 2.5-dimensional (2.5D) deep-learning, object detection model for the automated detection of brain metastases, into which three consecutive slices were fed as the input for the prediction in the central slice, and to compare its performance with that of an ordinary 2-dimensional (2D) model. METHODS We analyzed 696 brain metastases on 127 contrast-enhanced computed tomography (CT) scans from 127 patients with brain metastases. The scans were randomly divided into training (n = 79), validation (n = 18), and test (n = 30) datasets. Single-shot detector (SSD) models with a feature fusion module were constructed, trained, and compared using the lesion-based sensitivity, positive predictive value (PPV), and the number of false positives per patient at a confidence threshold of 50%. RESULTS The 2.5D SSD model had a significantly higher PPV (t test, p < 0.001) and a significantly smaller number of false positives (t test, p < 0.001). The sensitivities of the 2D and 2.5D models were 88.1% (95% confidence interval [CI], 86.6-89.6%) and 88.7% (95% CI, 87.3-90.1%), respectively. The corresponding PPVs were 39.0% (95% CI, 36.5-41.4%) and 58.9% (95% CI, 55.2-62.7%), respectively. The numbers of false positives per patient were 11.9 (95% CI, 10.7-13.2) and 4.9 (95% CI, 4.2-5.7), respectively. CONCLUSION Our results indicate that 2.5D deep-learning, object detection models, which use information about the continuity between adjacent slices, may reduce false positives and improve the performance of automated detection of brain metastases compared with ordinary 2D models.
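The 2.5D input scheme described above (three consecutive slices fed as channels, with the prediction made on the central slice) can be illustrated with a short NumPy sketch; the function name and the edge-padding choice at the volume boundaries are assumptions for illustration, not details from the paper:

```python
import numpy as np

def make_2p5d_inputs(volume):
    """Stack each axial slice with its two neighbors into a 3-channel sample,
    so a detector can predict on the central slice with adjacent-slice context."""
    # Pad the slice axis so the first and last slices also get three channels.
    padded = np.pad(volume, ((1, 1), (0, 0), (0, 0)), mode="edge")
    # For slice i, the channels are slices (i-1, i, i+1).
    return np.stack(
        [padded[i - 1:i + 2] for i in range(1, volume.shape[0] + 1)], axis=0
    )

ct = np.zeros((30, 64, 64), dtype=np.float32)  # toy CT volume: 30 axial slices
inputs = make_2p5d_inputs(ct)
print(inputs.shape)  # (30, 3, 64, 64): one 3-channel sample per slice
```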
10. Kato S, Amemiya S, Takao H, Yamashita H, Sakamoto N, Abe O. Automated detection of brain metastases on non-enhanced CT using single-shot detectors. Neuroradiology 2021; 63:1995-2004. [PMID: 34114064 DOI: 10.1007/s00234-021-02743-6]
Abstract
PURPOSE To develop and investigate deep learning-based detectors for brain metastases detection on non-enhanced (NE) CT. METHODS The study included 116 NECTs from 116 patients (81 men, age 66.5 ± 10.6 years) to train and test single-shot detector (SSD) models using 89 and 27 cases, respectively. The annotation was performed by three radiologists using bounding boxes defined on contrast-enhanced CT (CECT) images. NECTs were coregistered and resliced to CECTs. The detection performance was evaluated at the SSD's 50% confidence threshold using sensitivity, positive predictive value (PPV), and the false-positive rate per scan (FPR). For false negatives and true positives, binary logistic regression was used to examine the possible contributing factors. RESULTS For lesions 6 mm or larger, the SSD achieved a sensitivity of 35.4% (95% confidence interval (CI): [32.3%, 33.5%]; 51/144) with an FPR of 14.9 (95% CI: [12.4, 13.9]). The overall sensitivity was 23.8% (95% CI: [21.3%, 22.8%]; 55/231) and the PPV was 19.1% (95% CI: [18.5%, 20.4%]; 98/513), with an FPR of 15.4 (95% CI: [12.9, 14.5]). Ninety-five percent of the lesions that the SSD failed to detect were also undetectable to radiologists (168/176). Twenty-four percent of the lesions (13/50) detected by the SSD were undetectable to radiologists. Logistic regression analysis indicated that density, necrosis, and size contributed to the lesions' visibility for radiologists, while for the SSD, the surrounding edema also enhanced the detection performance. CONCLUSION The SSD model we developed could detect brain metastases larger than 6 mm to some extent, a quarter of which were even retrospectively unrecognizable to radiologists.
Affiliation(s)
- Shimpei Kato
- Department of Radiology, The Graduate School of Medicine, University of Tokyo, 7‑3‑1 Hongo, Bunkyo‑ku, Tokyo, 113‑8655, Japan
- Shiori Amemiya
- Department of Radiology, The Graduate School of Medicine, University of Tokyo, 7‑3‑1 Hongo, Bunkyo‑ku, Tokyo, 113‑8655, Japan
- Hidemasa Takao
- Department of Radiology, The Graduate School of Medicine, University of Tokyo, 7‑3‑1 Hongo, Bunkyo‑ku, Tokyo, 113‑8655, Japan
- Hiroshi Yamashita
- Department of Radiology, Teikyo University Hospital, Mizonokuchi, 5-1-1 Futago, Takatsu-ku, Kawasaki, Kanagawa, 213-8507, Japan
- Naoya Sakamoto
- Department of Radiology, The Graduate School of Medicine, University of Tokyo, 7‑3‑1 Hongo, Bunkyo‑ku, Tokyo, 113‑8655, Japan
- Osamu Abe
- Department of Radiology, The Graduate School of Medicine, University of Tokyo, 7‑3‑1 Hongo, Bunkyo‑ku, Tokyo, 113‑8655, Japan
11. Takao H, Amemiya S, Kato S, Yamashita H, Sakamoto N, Abe O. Deep-learning single-shot detector for automatic detection of brain metastases with the combined use of contrast-enhanced and non-enhanced computed tomography images. Eur J Radiol 2021; 144:110015. [PMID: 34742108 DOI: 10.1016/j.ejrad.2021.110015]
Abstract
PURPOSE To develop a deep-learning object detection model for automatic detection of brain metastases that simultaneously uses contrast-enhanced and non-enhanced images as inputs, and to compare its performance with that of a model that uses only contrast-enhanced images. METHOD A total of 116 computed tomography (CT) scans of 116 patients with brain metastases were included in this study. They showed a total of 659 metastases, 428 of which were used for training and validation (mean size, 11.3 ± 9.9 mm) and 231 were used for testing (mean size, 9.0 ± 7.0 mm). Single-shot detector (SSD) models were constructed with a feature fusion module, and their results were compared per lesion at a confidence threshold of 50%. RESULTS The sensitivity was 88.7% for the model that used both contrast-enhanced and non-enhanced CT images (the CE + NECT model) and 87.6% for the model that used only contrast-enhanced CT images (the CECT model). The positive predictive value (PPV) was 44.0% for the CE + NECT model and 37.2% for the CECT model. The number of false positives per patient was 9.9 for the CE + NECT model and 13.6 for the CECT model. The CE + NECT model had a significantly higher PPV (t test, p < 0.001), significantly fewer false positives (t test, p < 0.001), and a tendency to be more sensitive (t test, p = 0.14). CONCLUSIONS The results indicate that the information on true contrast enhancement obtained by comparing the contrast-enhanced and non-enhanced images may prevent the detection of pseudolesions, suppress false positives, and improve the performance of deep-learning object detection models.
Affiliation(s)
- Hidemasa Takao
- Department of Radiology, Graduate School of Medicine, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan.
- Shiori Amemiya
- Department of Radiology, Graduate School of Medicine, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Shimpei Kato
- Department of Radiology, Graduate School of Medicine, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Hiroshi Yamashita
- Department of Radiology, Teikyo University Hospital, Mizonokuchi, 5-1-1 Futago, Takatsu-ku, Kawasaki, Kanagawa 213-8507, Japan
- Naoya Sakamoto
- Department of Radiology, Graduate School of Medicine, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Osamu Abe
- Department of Radiology, Graduate School of Medicine, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
12. Hsu DG, Ballangrud Å, Shamseddine A, Deasy JO, Veeraraghavan H, Cervino L, Beal K, Aristophanous M. Automatic segmentation of brain metastases using T1 magnetic resonance and computed tomography images. Phys Med Biol 2021; 66. [PMID: 34315148 DOI: 10.1088/1361-6560/ac1835]
Abstract
An increasing number of patients with multiple brain metastases are being treated with stereotactic radiosurgery (SRS). Manually identifying and contouring all metastatic lesions is difficult and time-consuming, and a potential source of variability. Hence, we developed a 3D deep learning approach for segmenting brain metastases on MR and CT images. Five hundred eleven patients treated with SRS were retrospectively identified for this study. Prior to radiotherapy, the patients were imaged with 3D T1 spoiled-gradient MR post-Gd (T1 + C) and contrast-enhanced CT (CECT), which were co-registered by a treatment planner. The gross tumor volume contours, authored by the attending radiation oncologist, were taken as the ground truth. There were 3 ± 4 metastases per patient, with volumes up to 57 ml. We produced a multi-stage model that automatically performs brain extraction, followed by detection and segmentation of brain metastases using co-registered T1 + C and CECT. Augmented data from 80% of these patients were used to train modified 3D V-Net convolutional neural networks for this task. We combined a normalized boundary loss function with soft Dice loss to improve the model optimization, and employed gradient accumulation to stabilize the training. The average Dice similarity coefficient (DSC) for brain extraction was 0.975 ± 0.002 (95% CI). The detection sensitivity per metastasis was 90% (329/367), with moderate dependence on metastasis size. Averaged across 102 test patients, our approach had a metastasis detection sensitivity of 95 ± 3%, 2.4 ± 0.5 false positives, a DSC of 0.76 ± 0.03, and a 95th-percentile Hausdorff distance of 2.5 ± 0.3 mm (95% CIs). The volumes of automatic and manual segmentations were strongly correlated for metastases of volume up to 20 ml (r = 0.97, p < 0.001). This work presents a fully 3D deep learning approach capable of automatically detecting and segmenting brain metastases using co-registered T1 + C and CECT.
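The Dice similarity coefficient (DSC) used throughout this abstract has a simple closed form, 2|A∩B| / (|A| + |B|); a minimal pure-Python sketch over binary masks (the toy masks below are illustrative, not data from the paper):

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks given as flat 0/1 sequences."""
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks are treated as a perfect match.
    return 2 * intersection / total if total else 1.0

# Toy flattened masks: 3 overlapping voxels, 4 predicted, 4 true -> 2*3/(4+4) = 0.75
pred = [1, 1, 1, 1, 0, 0]
truth = [0, 1, 1, 1, 1, 0]
print(dice_coefficient(pred, truth))  # 0.75
```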
Affiliation(s)
- Dylan G Hsu
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, United States of America
- Åse Ballangrud
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, United States of America
- Achraf Shamseddine
- Department of Radiation Oncology, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, United States of America
- Joseph O Deasy
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, United States of America
- Harini Veeraraghavan
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, United States of America
- Laura Cervino
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, United States of America
- Kathryn Beal
- Department of Radiation Oncology, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, United States of America
- Michalis Aristophanous
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, United States of America