1
Dai H, Xiao Y, Fu C, Grimm R, von Busch H, Stieltjes B, Choi MH, Xu Z, Chabin G, Yang C, Zeng M. Deep Learning-Based Approach for Identifying and Measuring Focal Liver Lesions on Contrast-Enhanced MRI. J Magn Reson Imaging 2024. [PMID: 38826142 DOI: 10.1002/jmri.29404]
Abstract
BACKGROUND The number of focal liver lesions (FLLs) detected by imaging has increased worldwide, highlighting the need to develop a robust, objective system for automatically detecting FLLs. PURPOSE To assess the performance of deep learning-based artificial intelligence (AI) software in identifying and measuring lesions on contrast-enhanced magnetic resonance imaging (MRI) images in patients with FLLs. STUDY TYPE Retrospective. SUBJECTS 395 patients with 1149 FLLs. FIELD STRENGTH/SEQUENCE 1.5 T and 3 T scanners; T1-weighted, T2-weighted, diffusion-weighted, in/out-of-phase, and dynamic contrast-enhanced imaging. ASSESSMENT The diagnostic performance of the AI, the radiologist, and their combination was compared. Using 20 mm as the cut-off value, the lesions were divided into two groups and further into four subgroups (<10, 10-20, 20-40, and ≥40 mm) to evaluate the sensitivity of radiologists and AI in detecting lesions of different sizes. We compared the pathologic sizes of 122 surgically resected lesions with measurements obtained using AI and those made by radiologists. STATISTICAL TESTS McNemar test, Bland-Altman analyses, Friedman test, Pearson's chi-squared test, Fisher's exact test, Dice coefficient, and intraclass correlation coefficients. A P-value <0.05 was considered statistically significant. RESULTS The average Dice coefficient of the AI in segmentation of liver lesions was 0.62. The combination of AI and radiologist outperformed the radiologist alone, with a significantly higher detection rate (0.894 vs. 0.825) and sensitivity (0.883 vs. 0.806). The AI showed significantly higher sensitivity than radiologists in detecting lesions <20 mm (0.848 vs. 0.788). Both AI and radiologists achieved excellent detection performance for lesions ≥20 mm (0.867 vs. 0.881, P = 0.671). A remarkable agreement existed in the average tumor sizes among the three measurements (P = 0.174).
DATA CONCLUSION AI software based on deep learning exhibited practical value in automatically identifying and measuring liver lesions. LEVEL OF EVIDENCE: 4 TECHNICAL EFFICACY Stage 2.
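The segmentation agreement reported above is the Dice coefficient, a standard overlap metric between binary masks. A minimal sketch of its computation (NumPy only; the toy arrays below are illustrative, not study data):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy flattened "masks": 4 voxels overlap, 5 predicted and 5 true voxels
pred = np.array([1, 1, 1, 1, 1, 0, 0, 0])
truth = np.array([0, 1, 1, 1, 1, 1, 0, 0])
print(round(dice_coefficient(pred, truth), 2))  # 0.8
```

In practice the masks would be 3D lesion segmentations; the formula is unchanged.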
Affiliation(s)
- Haoran Dai: Department of Radiology, Zhongshan Hospital, Fudan University, Shanghai, China
- Yuyao Xiao: Department of Radiology, Zhongshan Hospital, Fudan University, Shanghai, China
- Caixia Fu: MR Application Development, Siemens Shenzhen Magnetic Resonance Ltd., Shenzhen, China
- Robert Grimm: MR Predevelopment, Siemens Healthineers AG, Erlangen, Germany
- Heinrich von Busch: Innovation Owner Artificial Intelligence for Oncology, Siemens Healthineers AG, Erlangen, Germany
- Moon Hyung Choi: Eunpyeong St. Mary's Hospital, Catholic University of Korea, Seoul, Republic of Korea
- Zhoubing Xu: Technology Excellence, Digital Technology and Innovation, Siemens Healthineers, Princeton, New Jersey, USA
- Guillaume Chabin: Technology Excellence, Digital Technology and Innovation, Siemens Healthcare SAS, Paris, France
- Chun Yang: Department of Radiology, Zhongshan Hospital, Fudan University, Shanghai, China
- Mengsu Zeng: Department of Radiology, Zhongshan Hospital, Fudan University, Shanghai, China; Shanghai Institute of Medical Imaging, Shanghai, China; Department of Cancer Center, Zhongshan Hospital, Fudan University, Shanghai, China
2
Boninsegna E, Piffer S, Simonini E, Romano M, Lettieri C, Colopi S, Barai G. CT angiography prior to endovascular procedures: can artificial intelligence improve reporting? Phys Eng Sci Med 2024; 47:643-649. [PMID: 38294678 DOI: 10.1007/s13246-024-01393-1]
Abstract
CT angiography prior to endovascular aortic surgery is the standard non-invasive imaging method for evaluation of aortic dimensions and access sites. A detailed report is crucial for proper planning. We assessed the accuracy of an Artificial Intelligence (AI) algorithm in measuring vessel diameters on CT prior to transcatheter aortic valve implantation (TAVI). CT scans of 50 patients were included. Two radiologists with experience in vascular imaging together manually assessed diameters at nine landmark positions according to the American Heart Association guidelines: 450 values were obtained. We implemented TOST (Two One-Sided Tests) to determine whether the measurements were equivalent to the values obtained from the AI algorithm. When the equivalence bound was a range of ±2 mm, the test showed equivalence for every point; when the range was ±1 mm, the two measurements were not equivalent in 6 points out of 9 (p-value > 0.05), those close to the aortic valve. The time for automatic evaluation (average 1 min 47 s) was significantly lower compared with manual measurement (5 min 41 s) (p < 0.01). In conclusion, our results indicate that AI algorithms can measure aortic diameters on CT prior to endovascular surgery with high accuracy. AI-assisted reporting promises high efficiency, reduced inter-reader variability and time savings. To support optimal TAVI procedure planning, the aortic root analysis could be extended to include annulus dimensions.
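The TOST procedure used above can be sketched as two one-sided paired t-tests against a symmetric equivalence margin. A minimal implementation with NumPy and SciPy; the ±2 mm margin follows the abstract, but the sample measurements are invented for illustration:

```python
import numpy as np
from scipy import stats

def tost_paired(x, y, margin):
    """Two One-Sided Tests (TOST) for equivalence of paired measurements.

    H0: |mean(x - y)| >= margin. Equivalence is claimed when both
    one-sided p-values (hence their maximum) fall below alpha.
    """
    d = np.asarray(x, float) - np.asarray(y, float)
    n = d.size
    se = d.std(ddof=1) / np.sqrt(n)
    t_lower = (d.mean() + margin) / se   # tests mean(d) > -margin
    t_upper = (d.mean() - margin) / se   # tests mean(d) < +margin
    p_lower = 1.0 - stats.t.cdf(t_lower, n - 1)
    p_upper = stats.t.cdf(t_upper, n - 1)
    return max(p_lower, p_upper)  # overall TOST p-value

# Hypothetical paired diameter readings (mm) at one landmark
manual = np.array([24.1, 26.5, 23.8, 25.2, 27.0, 24.6])
ai     = np.array([24.4, 26.1, 24.0, 25.5, 26.7, 24.9])
p = tost_paired(manual, ai, margin=2.0)
print(p < 0.05)  # True: equivalent within ±2 mm at alpha = 0.05
```

Shrinking the margin (e.g. to ±1 mm, as in the study) makes the test stricter, which is why equivalence can hold at ±2 mm but fail at ±1 mm for the same data.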
Affiliation(s)
- Enrico Boninsegna: Department of Radiology, Azienda Socio Sanitaria Territoriale di Mantova, St. Lago Paiolo 10, 46100, Mantova, Italy
- Stefano Piffer: Department of Medical Physics, Azienda Socio Sanitaria Territoriale di Mantova, Mantova, Italy
- Emilio Simonini: Department of Radiology, Azienda Socio Sanitaria Territoriale di Mantova, St. Lago Paiolo 10, 46100, Mantova, Italy
- Michele Romano: Department of Cardiology, Azienda Socio Sanitaria Territoriale di Mantova, Mantova, Italy
- Corrado Lettieri: Department of Cardiology, Azienda Socio Sanitaria Territoriale di Mantova, Mantova, Italy
- Stefano Colopi: Department of Radiology, Azienda Socio Sanitaria Territoriale di Mantova, St. Lago Paiolo 10, 46100, Mantova, Italy
- Giampietro Barai: Department of Medical Physics, Azienda Socio Sanitaria Territoriale di Mantova, Mantova, Italy
3
Berenato S, Williams M, Woodley O, Möhler C, Evans E, Millin AE, Wheeler PA. Novel dosimetric validation of a commercial CT scanner based deep learning automated contour solution for prostate radiotherapy. Phys Med 2024; 122:103339. [PMID: 38718703 DOI: 10.1016/j.ejmp.2024.103339]
Abstract
PURPOSE OAR delineation accuracy influences: (i) a patient's optimised dose distribution (PD), and (ii) the reported doses (RD) presented at approval, which represent plan quality. This study utilised a novel dosimetric validation methodology, comprehensively evaluating a new CT-scanner-based AI contouring solution in terms of PD and RD within an automated planning workflow. METHODS Twenty prostate patients were selected to evaluate AI contouring for rectum, bladder, and proximal femurs. Five planning 'pipelines' were considered; three used AI contours with differing levels of manual editing (nominally none (AIStd), minor editing in specific regions (AIMinEd), and fully corrected (AIFullEd)). The remaining two pipelines were manual delineations from two observers (MDOb1, MDOb2). Automated radiotherapy plans were generated for each pipeline. Geometric and dosimetric agreement of contour sets AIStd, AIMinEd, AIFullEd and MDOb2 were evaluated against the reference set MDOb1. Non-inferiority of the AI pipelines was assessed, hypothesising that, compared to MDOb1, absolute deviations in metrics for AI contouring were no greater than those from MDOb2. RESULTS Compared to MDOb1, organ delineation time was reduced by 24.9 min (96%), 21.4 min (79%) and 12.2 min (45%) for AIStd, AIMinEd and AIFullEd respectively. All pipelines exhibited generally good dosimetric agreement with MDOb1. For RD, median deviations were within ±1.8 cm3, ±1.7% and ±0.6 Gy for absolute volume, relative volume and mean dose metrics respectively. For PD, the respective values were within ±0.4 cm3, ±0.5% and ±0.2 Gy. Statistically (p < 0.05), AIMinEd and AIFullEd were dosimetrically non-inferior to MDOb2. CONCLUSIONS This novel dosimetric validation demonstrated that, following targeted minor editing (AIMinEd), AI contours were dosimetrically non-inferior to manual delineations, reducing delineation time by 79%.
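The non-inferiority hypothesis above (AI deviations from the reference no larger than the second observer's) can be sketched as a one-sided paired test on the difference of absolute deviations. This is a simplified illustration with invented mean-dose values, not the study's actual statistical procedure:

```python
import numpy as np
from scipy import stats

def noninferiority_p(ref, ai, ob2):
    """One-sided paired t-test of H0: |ai - ref| >= |ob2 - ref|.

    A small p-value supports non-inferiority: the AI pipeline's
    absolute deviation from the reference observer is no greater
    than the inter-observer deviation.
    """
    d = np.abs(np.asarray(ai) - np.asarray(ref)) - np.abs(np.asarray(ob2) - np.asarray(ref))
    t, p_two_sided = stats.ttest_1samp(d, 0.0)
    # convert to a one-sided p-value for the alternative mean(d) < 0
    return p_two_sided / 2 if t < 0 else 1 - p_two_sided / 2

# Hypothetical OAR mean-dose metrics (Gy): reference observer,
# AI contours, and a second observer
ref = np.array([20.1, 35.4, 18.9, 27.3, 22.8, 30.2])
ai  = np.array([20.2, 35.5, 18.8, 27.4, 22.9, 30.1])
ob2 = np.array([20.9, 34.6, 19.6, 26.6, 23.5, 31.0])
print(noninferiority_p(ref, ai, ob2) < 0.05)  # True for these toy numbers
```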
Affiliation(s)
- Salvatore Berenato: Velindre Cancer Centre, Radiotherapy Physics Department, Cardiff, Wales, United Kingdom
- Matthew Williams: Velindre Cancer Centre, Radiotherapy Physics Department, Cardiff, Wales, United Kingdom
- Owain Woodley: Velindre Cancer Centre, Radiotherapy Physics Department, Cardiff, Wales, United Kingdom
- Elin Evans: Velindre Cancer Centre, Medical Directorate, Cardiff, Wales, United Kingdom
- Anthony E Millin: Velindre Cancer Centre, Radiotherapy Physics Department, Cardiff, Wales, United Kingdom
- Philip A Wheeler: Velindre Cancer Centre, Radiotherapy Physics Department, Cardiff, Wales, United Kingdom
4
Tan Z, Feng J, Lu W, Yin Y, Yang G, Zhou J. Multi-task global optimization-based method for vascular landmark detection. Comput Med Imaging Graph 2024; 114:102364. [PMID: 38432060 DOI: 10.1016/j.compmedimag.2024.102364]
Abstract
Vascular landmark detection plays an important role in medical analysis and clinical treatment. However, due to the complex topology and similar local appearance around landmarks, the popular heatmap regression based methods always suffer from the landmark confusion problem. Vascular landmarks are connected by vascular segments and have special spatial correlations, which can be utilized for performance improvement. In this paper, we propose a multi-task global optimization-based framework for accurate and automatic vascular landmark detection. A multi-task deep learning network is exploited to accomplish landmark heatmap regression, vascular semantic segmentation, and orientation field regression simultaneously. The two auxiliary objectives are highly correlated with the heatmap regression task and help the network incorporate the structural prior knowledge. During inference, instead of performing a max-voting strategy, we propose a global optimization-based post-processing method for final landmark decision. The spatial relationships between neighboring landmarks are utilized explicitly to tackle the landmark confusion problem. We evaluated our method on a cerebral MRA dataset with 564 volumes, a cerebral CTA dataset with 510 volumes, and an aorta CTA dataset with 50 volumes. The experiments demonstrate that the proposed method is effective for vascular landmark localization and achieves state-of-the-art performance.
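The post-processing idea above, choosing landmark candidates jointly so that both heatmap scores and expected inter-landmark spatial relationships are respected rather than taking each heatmap's maximum independently, can be sketched as a small brute-force optimization. This is a hypothetical two-landmark toy problem; the paper's actual energy function and solver are not reproduced here:

```python
import itertools
import numpy as np

def pick_landmarks(candidates, scores, expected_dist, weight=1.0):
    """Jointly choose one candidate per landmark.

    Maximizes the summed heatmap scores minus a penalty on deviation
    from the expected distance between the two landmarks, resolving
    confusions a per-heatmap argmax would miss.
    """
    best, best_energy = None, -np.inf
    for i, j in itertools.product(range(len(candidates[0])), range(len(candidates[1]))):
        a, b = np.array(candidates[0][i]), np.array(candidates[1][j])
        dist = np.linalg.norm(a - b)
        energy = scores[0][i] + scores[1][j] - weight * abs(dist - expected_dist)
        if energy > best_energy:
            best = (tuple(map(int, a)), tuple(map(int, b)))
            best_energy = energy
    return best

# Two candidates per landmark; the top-scoring pair violates the
# expected 5-unit spacing, so the joint optimum differs from argmax.
cands  = [[(0, 0), (0, 1)], [(20, 0), (4, 0)]]
scores = [[0.9, 0.5], [0.95, 0.7]]
print(pick_landmarks(cands, scores, expected_dist=5.0))  # ((0, 0), (4, 0))
```

A naive per-heatmap argmax would pick (0, 0) and (20, 0); the distance prior steers the decision to the anatomically plausible pair.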
Affiliation(s)
- Zimeng Tan: Department of Automation, Tsinghua University, Beijing, China
- Jianjiang Feng: Department of Automation, Tsinghua University, Beijing, China
- Wangsheng Lu: UnionStrong (Beijing) Technology Co., Ltd, Beijing, China
- Yin Yin: UnionStrong (Beijing) Technology Co., Ltd, Beijing, China
- Jie Zhou: Department of Automation, Tsinghua University, Beijing, China
5
Brüggemann D, Kuzo N, Anwer S, Kebernik J, Eberhard M, Alkadhi H, Tanner FC, Konukoglu E. Predicting mortality after transcatheter aortic valve replacement using preprocedural CT. Sci Rep 2024; 14:12526. [PMID: 38822074 PMCID: PMC11143216 DOI: 10.1038/s41598-024-63022-x]
Abstract
Transcatheter aortic valve replacement (TAVR) is a widely used intervention for patients with severe aortic stenosis. Identifying high-risk patients is crucial due to potential postprocedural complications. Currently, this involves manual clinical assessment and time-consuming radiological assessment of preprocedural computed tomography (CT) images by an expert radiologist. In this study, we introduce a probabilistic model that predicts post-TAVR mortality automatically using unprocessed, preprocedural CT and 25 baseline patient characteristics. The model utilizes CT volumes by automatically localizing and extracting a region of interest around the aortic root and ascending aorta. It then extracts task-specific features with a 3D deep neural network and integrates them with patient characteristics to perform outcome prediction. As missing measurements or even missing CT images are common in TAVR planning, the proposed model is designed with a probabilistic structure to allow for marginalization over such missing information. Our model demonstrates an AUROC of 0.725 for predicting all-cause mortality during postprocedure follow-up on a cohort of 1449 TAVR patients. This performance is on par with what can be achieved with lengthy radiological assessments performed by experts. Thus, these findings underscore the potential of the proposed model in automatically analyzing CT volumes and integrating them with patient characteristics for predicting mortality after TAVR.
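The marginalization idea above, predicting an outcome when some inputs are missing by summing over the possible values of the missing variable weighted by a prior, can be illustrated for a single discrete latent feature. The probabilities below are purely invented; the paper's network architecture and variables are not reproduced:

```python
import numpy as np

def predict_with_missing(p_y_given_z, p_z):
    """Marginalize a discrete latent feature z out of the predictor:
    p(y=1 | x_obs) = sum_z p(y=1 | x_obs, z) * p(z | x_obs)."""
    return float(np.dot(p_y_given_z, p_z))

# Hypothetical mortality probability if a CT-derived risk feature
# were low / medium / high ...
p_y_given_z = np.array([0.05, 0.15, 0.40])
# ... and a prior over that feature when the CT is unavailable
p_z = np.array([0.5, 0.3, 0.2])
print(round(predict_with_missing(p_y_given_z, p_z), 3))  # 0.15
```

The same principle extends to missing continuous measurements (sums become integrals, typically approximated by sampling).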
Affiliation(s)
- David Brüggemann: Computer Vision Laboratory, ETH Zurich, 8092, Zurich, Switzerland
- Nazar Kuzo: Department of Cardiology, University Heart Center, University Hospital Zurich, 8091, Zurich, Switzerland
- Shehab Anwer: Department of Cardiology, University Heart Center, University Hospital Zurich, 8091, Zurich, Switzerland
- Julia Kebernik: Institute for Diagnostic and Interventional Radiology, University Hospital Zurich, 8091, Zurich, Switzerland
- Matthias Eberhard: Institute for Diagnostic and Interventional Radiology, University Hospital Zurich, 8091, Zurich, Switzerland
- Hatem Alkadhi: Institute for Diagnostic and Interventional Radiology, University Hospital Zurich, 8091, Zurich, Switzerland
- Felix C Tanner: Department of Cardiology, University Heart Center, University Hospital Zurich, 8091, Zurich, Switzerland
- Ender Konukoglu: Computer Vision Laboratory, ETH Zurich, 8092, Zurich, Switzerland
6
Condrea F, Rapaka S, Itu L, Sharma P, Sperl J, Ali AM, Leordeanu M. Anatomically aware dual-hop learning for pulmonary embolism detection in CT pulmonary angiograms. Comput Biol Med 2024; 174:108464. [PMID: 38613894 DOI: 10.1016/j.compbiomed.2024.108464]
Abstract
Pulmonary Embolisms (PE) represent a leading cause of cardiovascular death. While medical imaging, through computed tomographic pulmonary angiography (CTPA), represents the gold standard for PE diagnosis, it is still susceptible to misdiagnosis or significant diagnostic delays, which may be fatal for critical cases. Despite the recently demonstrated power of deep learning to bring a significant boost in performance across a wide range of medical imaging tasks, there is still little published research on automatic pulmonary embolism detection. Herein we introduce a deep learning-based approach, which efficiently combines computer vision and deep neural networks for pulmonary embolism detection in CTPA. Our method brings novel contributions along three orthogonal axes: (1) automatic detection of anatomical structures; (2) anatomically aware pretraining; and (3) a dual-hop deep neural net for PE detection. We obtain state-of-the-art results on the publicly available large-scale multicenter RSNA dataset.
Affiliation(s)
- Florin Condrea: Institute of Mathematics of the Romanian Academy "Simion Stoilow", Bucharest, Romania; Advanta, Siemens, 15 Noiembrie Bvd, Brasov, 500097, Romania
- Lucian Itu: Advanta, Siemens, 15 Noiembrie Bvd, Brasov, 500097, Romania
- A Mohamed Ali: Siemens Healthcare Private Limited, Mumbai, 400079, India
- Marius Leordeanu: Institute of Mathematics of the Romanian Academy "Simion Stoilow", Bucharest, Romania; Advanta, Siemens, 15 Noiembrie Bvd, Brasov, 500097, Romania; Polytechnic University of Bucharest, Bucharest, Romania
7
Fink N, Yacoub B, Schoepf UJ, Zsarnoczay E, Pinos D, Vecsey-Nagy M, Rapaka S, Sharma P, O'Doherty J, Ricke J, Varga-Szemes A, Emrich T. Artificial Intelligence Provides Accurate Quantification of Thoracic Aortic Enlargement and Dissection in Chest CT. Diagnostics (Basel) 2024; 14:866. [PMID: 38732280 PMCID: PMC11083497 DOI: 10.3390/diagnostics14090866]
Abstract
This study evaluated a deep neural network (DNN) algorithm for automated aortic diameter quantification and aortic dissection detection in chest computed tomography (CT). A total of 100 patients (median age: 67.0 [interquartile range 55.3/73.0] years; 60.0% male) with aortic aneurysm who underwent non-enhanced and contrast-enhanced electrocardiogram-gated chest CT were evaluated. All the DNN measurements were compared to manual assessment, overall and between the following subgroups: (1) ascending (AA) vs. descending aorta (DA); (2) non-obese vs. obese; (3) without vs. with aortic repair; (4) without vs. with aortic dissection. Furthermore, the presence of aortic dissection was determined (yes/no decision). The automated and manual diameters differed significantly (p < 0.05) but showed excellent correlation and agreement (r = 0.89; ICC = 0.94). The automated and manual values were similar in the AA group but significantly different in the DA group (p < 0.05), similar in obese but significantly different in non-obese patients (p < 0.05) and similar in patients without aortic repair or dissection but significantly different in cases with such pathological conditions (p < 0.05). However, in all the subgroups, the automated diameters showed strong correlation and agreement with the manual values (r > 0.84; ICC > 0.9). The accuracy, sensitivity and specificity of DNN-based aortic dissection detection were 92.1%, 88.1% and 95.7%, respectively. This DNN-based algorithm enabled accurate quantification of the largest aortic diameter and detection of aortic dissection in a heterogeneous patient population with various aortic pathologies. This has the potential to enhance radiologists' efficiency in clinical practice.
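The detection figures quoted above (accuracy, sensitivity, specificity for the yes/no dissection decision) come from a standard confusion matrix. A minimal sketch; the counts below are invented, not those of the study:

```python
def detection_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Accuracy, sensitivity and specificity from confusion counts."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # recall on dissection-positive cases
        "specificity": tn / (tn + fp),   # recall on dissection-negative cases
    }

# Hypothetical reader study: 40 true positives, 2 false positives,
# 45 true negatives, 5 false negatives
m = detection_metrics(tp=40, fp=2, tn=45, fn=5)
print({k: round(v, 3) for k, v in m.items()})
# accuracy ≈ 0.924, sensitivity ≈ 0.889, specificity ≈ 0.957
```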
Affiliation(s)
- Nicola Fink: Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC 29425, USA; Department of Radiology, University Hospital, LMU Munich, 81377 Munich, Germany
- Basel Yacoub: Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC 29425, USA
- U. Joseph Schoepf: Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC 29425, USA
- Emese Zsarnoczay: Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC 29425, USA; Medical Imaging Center, Semmelweis University, Korányi Sándor utca 2, 1083 Budapest, Hungary
- Daniel Pinos: Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC 29425, USA
- Milan Vecsey-Nagy: Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC 29425, USA; Heart and Vascular Center, Semmelweis University, Varosmajor utca 68, 1122 Budapest, Hungary
- Saikiran Rapaka: Siemens Healthineers, Princeton, NJ 08540, USA
- Puneet Sharma: Siemens Healthineers, Princeton, NJ 08540, USA
- Jens Ricke: Department of Radiology, University Hospital, LMU Munich, 81377 Munich, Germany
- Akos Varga-Szemes: Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC 29425, USA
- Tilman Emrich: Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC 29425, USA; Department of Diagnostic and Interventional Radiology, University Medical Center of the Johannes Gutenberg-University, Langenbeckstr. 1, 55131 Mainz, Germany; German Centre for Cardiovascular Research, 55131 Mainz, Germany
8
Sotoudeh-Paima S, Segars WP, Ghosh D, Luo S, Samei E, Abadi E. A systematic assessment and optimization of photon-counting CT for lung density quantifications. Med Phys 2024; 51:2893-2904. [PMID: 38368605 PMCID: PMC11055522 DOI: 10.1002/mp.16987]
Abstract
BACKGROUND Photon-counting computed tomography (PCCT) has recently emerged into clinical use; however, its optimum imaging protocols and added benefits remain unknown in terms of providing more accurate lung density quantification compared to energy-integrating computed tomography (EICT) scanners. PURPOSE To systematically assess the performance of a clinical PCCT scanner for lung density quantifications and compare it against EICT. METHODS This cross-sectional study involved a retrospective analysis of subjects scanned (August-December 2021) using a clinical PCCT system. The influence of altering reconstruction parameters was studied (reconstruction kernel, pixel size, slice thickness). A virtual CT dataset of anthropomorphic virtual subjects was generated to demonstrate the correspondence of findings to the clinical dataset, and to perform systematic imaging experiments not possible using human subjects. The virtual subjects were imaged using a validated, scanner-specific CT simulator of a PCCT and two EICT (defined as EICT A and B) scanners. The images were evaluated using the mean absolute error (MAE) of lung and emphysema density against their corresponding ground truth. RESULTS Clinical and virtual PCCT datasets showed similar trends, with sharper kernels and smaller voxel sizes increasing the percentage of low-attenuation areas below -950 HU (LAA-950) by up to 15.7 ± 6.9% and 11.8 ± 5.5%, respectively. Under the conditions studied, higher doses, thinner slices, smaller pixel sizes, iterative reconstructions, and quantitative kernels with medium sharpness resulted in lower lung MAE values. While using these settings for PCCT, changes in the dose level (13 to 1.3 mGy), slice thickness (0.4 to 1.5 mm), pixel size (0.49 to 0.98 mm), reconstruction technique (70 keV-VMI to wFBP), and kernel (Qr48 to Qr60) increased lung MAE by 15.3 ± 2.0, 1.4 ± 0.6, 2.2 ± 0.3, 4.2 ± 0.8, and 9.1 ± 1.6 HU, respectively.
At the optimum settings identified per scanner, PCCT images exhibited lower lung and emphysema MAE than those of EICT scanners (by 2.6 ± 1.0 and 9.6 ± 3.4 HU, compared to EICT A, and by 4.8 ± 0.8 and 7.4 ± 2.3 HU, compared to EICT B). The accuracy of lung density measurements was correlated with subjects' mean lung density (p < 0.05), measured by PCCT at optimum setting under the conditions studied. CONCLUSION Photon-counting CT demonstrated superior performance in density quantifications, with its influences of imaging parameters in line with energy-integrating CT scanners. The technology offers improvement in lung quantifications, thus demonstrating potential toward more objective assessment of respiratory conditions.
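The two quantities compared above, the percentage of low-attenuation areas below -950 HU (LAA-950) and the mean absolute error of density against ground truth, can be computed directly on HU arrays. A minimal sketch with toy voxel values, not study data:

```python
import numpy as np

def laa_950(lung_hu: np.ndarray) -> float:
    """Percentage of lung voxels below -950 HU (emphysema index)."""
    return 100.0 * np.mean(lung_hu < -950)

def density_mae(measured_hu: np.ndarray, truth_hu: np.ndarray) -> float:
    """Mean absolute error of voxelwise density versus ground truth."""
    return float(np.mean(np.abs(measured_hu - truth_hu)))

# Six toy lung voxels (HU) and their ground-truth values
lung  = np.array([-980.0, -960.0, -940.0, -900.0, -870.0, -820.0])
truth = np.array([-975.0, -955.0, -945.0, -905.0, -860.0, -835.0])
print(round(laa_950(lung), 1))   # 33.3 (2 of 6 voxels below -950 HU)
print(density_mae(lung, truth))  # 7.5 (HU)
```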
Affiliation(s)
- Saman Sotoudeh-Paima: Center for Virtual Imaging Trials, Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University School of Medicine, Durham, USA; Department of Electrical & Computer Engineering, Duke University, Durham, USA
- W. Paul Segars: Center for Virtual Imaging Trials, Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University School of Medicine, Durham, USA; Medical Physics Graduate Program, Duke University, Durham, USA; Department of Biomedical Engineering, Duke University, Durham, USA
- Dhrubajyoti Ghosh: Department of Biostatistics and Bioinformatics, Duke University, Durham, USA
- Sheng Luo: Department of Biostatistics and Bioinformatics, Duke University, Durham, USA
- Ehsan Samei: Center for Virtual Imaging Trials, Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University School of Medicine, Durham, USA; Department of Electrical & Computer Engineering, Duke University, Durham, USA; Medical Physics Graduate Program, Duke University, Durham, USA; Department of Biomedical Engineering, Duke University, Durham, USA; Department of Physics, Duke University, Durham, USA
- Ehsan Abadi: Center for Virtual Imaging Trials, Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University School of Medicine, Durham, USA; Department of Electrical & Computer Engineering, Duke University, Durham, USA; Medical Physics Graduate Program, Duke University, Durham, USA
9
López Diez P, Sundgaard JV, Margeta J, Diab K, Patou F, Paulsen RR. Deep reinforcement learning and convolutional autoencoders for anomaly detection of congenital inner ear malformations in clinical CT images. Comput Med Imaging Graph 2024; 113:102343. [PMID: 38325245 DOI: 10.1016/j.compmedimag.2024.102343]
Abstract
Detection of abnormalities within the inner ear is a challenging task even for experienced clinicians. In this study, we propose an automated abnormality detection method to support the diagnosis and clinical management of various otological disorders. We propose a framework for inner ear abnormality detection based on deep reinforcement learning for landmark detection, trained solely on normative data. In our approach, we derive two abnormality measurements: Dimage and Uimage. The first measurement, Dimage, is based on the variability of the predicted configuration of a well-defined set of landmarks in a subspace formed by the point distribution model of the location of those landmarks in normative data. We create this subspace using Procrustes shape alignment and Principal Component Analysis projection. The second measurement, Uimage, represents the degree of hesitation of the agents when approaching the final location of the landmarks and is based on the distribution of the predicted Q-values of the model for the last ten states. Finally, we unify these measurements in a combined anomaly measurement called Cimage. We compare our method's performance with a 3D convolutional autoencoder technique for abnormality detection, using the patch-based mean squared error between the original and the generated image as a basis for classifying abnormal versus normal anatomies. We compare both approaches and show that our method, based on deep reinforcement learning, shows better detection performance for abnormal anatomies on both an artificial and a real clinical CT dataset of various inner ear malformations, with an 11.2% increase in the area under the ROC curve. Our method also shows more robustness against the heterogeneous quality of the images in our dataset.
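The Dimage idea, scoring a predicted landmark configuration by how poorly it fits a point distribution model built from normative data, can be sketched in a simplified 2D analogue. This sketch assumes pre-aligned shapes and synthetic data (the study additionally applies Procrustes alignment, and its landmark sets and model are not reproduced here):

```python
import numpy as np

def fit_shape_model(shapes, n_components=2):
    """Point distribution model: mean shape + principal modes
    from flattened landmark configurations of normative subjects."""
    X = shapes.reshape(len(shapes), -1)
    mean = X.mean(axis=0)
    # PCA via SVD of the centered data matrix
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def shape_anomaly(shape, mean, components):
    """Reconstruction error after projecting into the PCA subspace:
    large residuals flag configurations unlike the normative data."""
    x = shape.ravel() - mean
    coeffs = components @ x
    residual = x - components.T @ coeffs
    return float(np.linalg.norm(residual))

rng = np.random.default_rng(0)
base = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # 4 landmarks
# 50 normative shapes: the base configuration plus small jitter
normals = np.stack([base + 0.02 * rng.standard_normal(base.shape) for _ in range(50)])
mean, comps = fit_shape_model(normals)

normal_score = shape_anomaly(base, mean, comps)
abnormal = base + np.array([[0.5, 0.5], [0, 0], [0, 0], [0, 0]])  # one displaced landmark
abnormal_score = shape_anomaly(abnormal, mean, comps)
print(abnormal_score > normal_score)  # True: displaced landmark scores higher
```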
Affiliation(s)
- Paula López Diez: Department of Applied Mathematics and Computer Science, Technical University of Denmark, Lyngby, Denmark
- Josefine Vilsbøll Sundgaard: Department of Applied Mathematics and Computer Science, Technical University of Denmark, Lyngby, Denmark; Novo Nordisk A/S, Denmark
- Jan Margeta: KardioMe, Research & Development, Nova Dubnica, Slovakia; Oticon Medical, Research & Technology, Vallauris, France
- Khassan Diab: Tashkent International Clinic, Tashkent, Uzbekistan
- François Patou: Oticon Medical, Research & Technology group, Smørum, Denmark
- Rasmus R Paulsen: Department of Applied Mathematics and Computer Science, Technical University of Denmark, Lyngby, Denmark
10
Khajuria R, Sarwar A. Review of reinforcement learning applications in segmentation, chemotherapy, and radiotherapy of cancer. Micron 2024; 178:103583. [PMID: 38185018 DOI: 10.1016/j.micron.2023.103583]
Abstract
With early diagnosis and treatment of cancer now a prerequisite, the role of machine learning has grown substantially. Mathematically powerful and optimized solutions for the detection and cure of cancer are constantly being explored, and novel models based upon standard algorithms are also being developed. One such solution is Reinforcement Learning (RL), a semi-supervised type of learning. The paper presents a detailed discussion of the various RL techniques, algorithms, and open issues, in addition to a review of the literature on the diagnosis and treatment of cancer. Few publications on the diagnosis and treatment of cancer were reported before 2011, but following the success of Deep Learning (DL) and the advent of Deep Reinforcement Learning (DRL), publications have grown in number from 2017 onwards. The scope of RL for cancer diagnosis and treatment is also demystified, providing the research community with insights into how to formulate cancer diagnosis as an RL problem. RL has been found successful for landmark detection in medical images and for optimal control of drugs and radiation.
11
Hanneman K, Playford D, Dey D, van Assen M, Mastrodicasa D, Cook TS, Gichoya JW, Williamson EE, Rubin GD. Value Creation Through Artificial Intelligence and Cardiovascular Imaging: A Scientific Statement From the American Heart Association. Circulation 2024; 149:e296-e311. [PMID: 38193315 DOI: 10.1161/cir.0000000000001202]
Abstract
Multiple applications for machine learning and artificial intelligence (AI) in cardiovascular imaging are being proposed and developed. However, the processes involved in implementing AI in cardiovascular imaging are highly diverse, varying by imaging modality, patient subtype, features to be extracted and analyzed, and clinical application. This article establishes a framework that defines value from an organizational perspective, followed by value chain analysis to identify the activities in which AI might produce the greatest incremental value creation. The various perspectives that should be considered are highlighted, including clinicians, imagers, hospitals, patients, and payers. Integrating the perspectives of all health care stakeholders is critical for creating value and ensuring the successful deployment of AI tools in a real-world setting. Different AI tools are summarized, along with the unique aspects of AI applications to various cardiac imaging modalities, including cardiac computed tomography, magnetic resonance imaging, and positron emission tomography. AI is applicable and has the potential to add value to cardiovascular imaging at every step along the patient journey, from selecting the most appropriate test to optimizing image acquisition and analysis, interpreting the results for classification and diagnosis, and predicting the risk for major adverse cardiac events.
12
Rayn K, Gokhroo G, Jeffers B, Gupta V, Chaudhari S, Clark R, Magliari A, Beriwal S. Multicenter Study of Pelvic Nodal Autosegmentation Algorithm of Siemens Healthineers: Comparison of Male Versus Female Pelvis. Adv Radiat Oncol 2024; 9:101326. [PMID: 38405314 PMCID: PMC10885554 DOI: 10.1016/j.adro.2023.101326] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Received: 02/02/2023] [Accepted: 07/18/2023] [Indexed: 02/27/2024] Open
Abstract
Purpose The autosegmentation algorithm of Siemens Healthineers version VA 30 (AASH) (Siemens Healthineers, Erlangen, Germany) was trained and developed in the male pelvis, with no published data on its usability in the female pelvis. This is the first multi-institutional study to describe and evaluate an artificial intelligence algorithm for autosegmentation of the pelvic nodal region by gender. Methods and Materials We retrospectively evaluated AASH pelvic nodal autosegmentation in both male and female patients treated at our network of institutions. The automated pelvic nodal contours generated by AASH were evaluated by 1 board-certified radiation oncologist. A 4-point scale was used for each nodal region contour: a score of 4 is clinically usable with minimal edits; a score of 3 requires minor edits (missing nodal contour region, cutting through vessels, or including bowel loops) in 3 or fewer computed tomography slices; a score of 2 requires major edits, as previously defined but in 4 or more computed tomography slices; and a score of 1 requires complete recontouring of the region. Pelvic nodal regions included the right and left side of the common iliac, external iliac, internal iliac, obturator, and midline presacral nodes. In addition, patients were graded based on their lowest nodal contour score. Statistical analysis was performed using Fisher exact tests and Yates-corrected χ2 tests. Results Fifty-two female and 51 male patients were included in the study, representing a total of 468 and 447 pelvic nodal regions, respectively. Ninety-six percent and 99% of contours required minor edits at most (score of 3 or 4) for female and male patients, respectively (P = .004 using Fisher exact test; P = .007 using Yates correction). No nodal regions had a statistically significant difference in scores between female and male patients. 
The percentage of patients requiring no more than minor edits was 87% (45 patients) and 92% (47 patients) for female and male patients, respectively (P = .53 using Fisher exact test; P = .55 using Yates correction). Conclusions AASH pelvic nodal autosegmentation performed very well in both male and female pelvic nodal regions, although with better male pelvic nodal autosegmentation. As autosegmentation becomes more widespread, it may be important to have equal representation from all sexes in training and validation of autosegmentation algorithms.
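The patient-level comparison above (45 of 52 female vs. 47 of 51 male patients needing at most minor edits) can be recomputed with stdlib-only implementations of the two tests named in the abstract. The formulas are the standard two-sided Fisher exact test and the Yates-corrected chi-square for a 2x2 table; treat this as an illustrative re-check, not the study's own analysis code.

```python
# Re-checking the abstract's patient-level statistics from its reported counts.
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact p: sum the hypergeometric probabilities of all
    tables with the same margins that are no more likely than the observed one."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    def p(x):  # P(cell (1,1) = x) under fixed margins
        return comb(row1, x) * comb(n - row1, col1 - x) / comb(n, col1)
    p_obs = p(a)
    lo, hi = max(0, col1 - (n - row1)), min(row1, col1)
    return sum(p(x) for x in range(lo, hi + 1) if p(x) <= p_obs + 1e-12)

def yates_chi2(a, b, c, d):
    """Yates-corrected chi-square statistic for a 2x2 table."""
    n = a + b + c + d
    num = n * (abs(a * d - b * c) - n / 2) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Female: 45 minor-edit patients, 7 needing more; male: 47 and 4.
p_fisher = fisher_exact_two_sided(45, 7, 47, 4)
chi2 = yates_chi2(45, 7, 47, 4)
```

Both values land near the abstract's reported patient-level results (P = .53 by Fisher exact; the Yates-corrected statistic corresponds to P ≈ .55).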
Affiliation(s)
- Kareem Rayn
- Department of Radiation Oncology, Columbia University Irving Medical Center, New York, New York
- Varian Medical Systems Inc, Palo Alto, California
- Brian Jeffers
- Columbia University Vagelos College of Physicians and Surgeons, New York, New York
- Vibhor Gupta
- American Oncology Institute, Hyderabad, India
- Ryan Clark
- Varian Medical Systems Inc, Palo Alto, California
- Sushil Beriwal
- Varian Medical Systems Inc, Palo Alto, California
- Division of Radiation Oncology, Allegheny Health Network Cancer Institute, Pittsburgh, Pennsylvania
13
Li J, Jiang P, An Q, Wang GG, Kong HF. Medical image identification methods: A review. Comput Biol Med 2024; 169:107777. [PMID: 38104516 DOI: 10.1016/j.compbiomed.2023.107777] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/24/2023] [Revised: 10/30/2023] [Accepted: 11/28/2023] [Indexed: 12/19/2023]
Abstract
The identification of medical images is an essential task in computer-aided diagnosis and in medical image retrieval and mining. Medical image data mainly include electronic health record data, gene information data, etc. Although intelligent imaging provides a better scheme for medical image analysis than traditional methods that rely on handcrafted features, it remains challenging due to the diversity of imaging modalities and clinical pathologies. Many medical image identification methods provide a good scheme for medical image analysis. The concepts pertinent to these methods, such as machine learning, deep learning, convolutional neural networks, transfer learning, and other image processing technologies for medical images, are analyzed and summarized in this paper. We reviewed recent studies to provide a comprehensive overview of the application of these methods to various medical image analysis tasks, such as object detection, image classification, image registration, and segmentation. In particular, we emphasize the latest progress and contributions of different methods in medical image analysis, summarized by application scenario, including classification, segmentation, detection, and image registration. In addition, the applications of different methods are summarized by application area, such as pulmonary, brain, digital pathology, skin, renal, breast, neuromyelitis, vertebral, and musculoskeletal imaging. A critical discussion of open challenges and directions for future research is finally provided. In particular, excellent algorithms from computer vision, natural language processing, and autonomous driving are expected to be applied to medical image recognition in the future.
Affiliation(s)
- Juan Li
- School of Information Engineering, Wuhan Business University, Wuhan, 430056, China; School of Artificial Intelligence, Wuchang University of Technology, Wuhan, 430223, China; Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, 130012, China
- Pan Jiang
- School of Information Engineering, Wuhan Business University, Wuhan, 430056, China
- Qing An
- School of Artificial Intelligence, Wuchang University of Technology, Wuhan, 430223, China
- Gai-Ge Wang
- School of Computer Science and Technology, Ocean University of China, Qingdao, 266100, China
- Hua-Feng Kong
- School of Information Engineering, Wuhan Business University, Wuhan, 430056, China
14
Montgomery ME, Andersen FL, d’Este SH, Overbeck N, Cramon PK, Law I, Fischer BM, Ladefoged CN. Attenuation Correction of Long Axial Field-of-View Positron Emission Tomography Using Synthetic Computed Tomography Derived from the Emission Data: Application to Low-Count Studies and Multiple Tracers. Diagnostics (Basel) 2023; 13:3661. [PMID: 38132245 PMCID: PMC10742516 DOI: 10.3390/diagnostics13243661] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 10/28/2023] [Revised: 12/10/2023] [Accepted: 12/11/2023] [Indexed: 12/23/2023] Open
Abstract
Recent advancements in PET/CT, including the emergence of long axial field-of-view (LAFOV) PET/CT scanners, have increased PET sensitivity substantially. Consequently, there has been a significant reduction in the required tracer activity, shifting the primary source of patient radiation dose exposure to the attenuation correction (AC) CT scan during PET imaging. This study proposes a parameter-transferred conditional generative adversarial network (PT-cGAN) architecture to generate synthetic CT (sCT) images from non-attenuation-corrected (NAC) PET images, with separate networks for the [18F]FDG and [15O]H2O tracers. The study includes a total of 1018 subjects (n = 972 [18F]FDG, n = 46 [15O]H2O). Testing was performed on the LAFOV scanner for both datasets. Qualitative analysis found no differences in image quality in 30 out of 36 FDG cases, with minor, insignificant differences in the remaining 6 cases. Reduced artifacts due to motion between NAC PET and CT were found. For the selected organs, a mean average error of 0.45% was found for the FDG cohort and of 3.12% for the H2O cohort. Simulated low-count images were included in testing, which demonstrated good performance down to 45-s scans. These findings show that AC of total-body PET is feasible across tracers and in low-count studies and might reduce artifacts due to motion and metal implants.
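The organ-level error metric quoted above (a mean percentage error between uptake measured with the synthetic-CT attenuation map and with the reference CT) can be sketched in a few lines. The organ names are omitted and the uptake values below are invented for illustration; this is only one plausible reading of "mean average error", not the paper's evaluation code.

```python
# Illustrative organ-level error computation: mean absolute percentage
# difference between uptake reconstructed with the synthetic-CT attenuation
# map and with the reference CT map. Values are made up.
import numpy as np

ref_uptake = np.array([2.10, 1.45, 3.80, 0.95])   # hypothetical organ means, reference AC
sct_uptake = np.array([2.11, 1.44, 3.82, 0.95])   # same organs, synthetic-CT AC

mape = np.mean(np.abs(sct_uptake - ref_uptake) / ref_uptake) * 100  # in percent
```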
Affiliation(s)
- Maria Elkjær Montgomery
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, Copenhagen University Hospital, 2100 København, Denmark
- Flemming Littrup Andersen
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, Copenhagen University Hospital, 2100 København, Denmark
- Department of Clinical Medicine, Copenhagen University, 2200 København, Denmark
- Sabrina Honoré d’Este
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, Copenhagen University Hospital, 2100 København, Denmark
- Nanna Overbeck
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, Copenhagen University Hospital, 2100 København, Denmark
- Per Karkov Cramon
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, Copenhagen University Hospital, 2100 København, Denmark
- Ian Law
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, Copenhagen University Hospital, 2100 København, Denmark
- Department of Clinical Medicine, Copenhagen University, 2200 København, Denmark
- Barbara Malene Fischer
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, Copenhagen University Hospital, 2100 København, Denmark
- Department of Clinical Medicine, Copenhagen University, 2200 København, Denmark
- Claes Nøhr Ladefoged
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, Copenhagen University Hospital, 2100 København, Denmark
- Department of Applied Mathematics and Computer Science, Technical University of Denmark, 2800 Lyngby, Denmark
15
Kiefer J, Kopp M, Ruettinger T, Heiss R, Wuest W, Amarteifio P, Stroebel A, Uder M, May MS. Diagnostic Accuracy and Performance Analysis of a Scanner-Integrated Artificial Intelligence Model for the Detection of Intracranial Hemorrhages in a Traumatology Emergency Department. Bioengineering (Basel) 2023; 10:1362. [PMID: 38135956 PMCID: PMC10740704 DOI: 10.3390/bioengineering10121362] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 09/25/2023] [Revised: 11/03/2023] [Accepted: 11/19/2023] [Indexed: 12/24/2023] Open
Abstract
Intracranial hemorrhages require an immediate diagnosis to optimize patient management and outcomes, and CT is the modality of choice in the emergency setting. We aimed to evaluate the performance of the first scanner-integrated artificial intelligence algorithm for detecting brain hemorrhages in a routine clinical setting. This retrospective study includes 435 consecutive non-contrast head CT scans. Automatic brain hemorrhage detection was calculated as a separate reconstruction job in all cases. The radiological report was always written by a radiology resident and finalized by a senior radiologist. Additionally, a team of two radiologists reviewed the datasets retrospectively, taking additional information such as the clinical record, course, and final diagnosis into account. This consensus reading served as the reference. Statistics were calculated for diagnostic accuracy. Brain hemorrhage detection was executed successfully in 432/435 (99%) of patient cases. The AI algorithm and reference standard were consistent in 392 (90.7%) cases. One false-negative case was identified among the 52 positive cases. However, 39 positive detections turned out to be false positives. The diagnostic performance was calculated as a sensitivity of 98.1%, specificity of 89.7%, positive predictive value of 56.7%, and negative predictive value of 99.7%. Scanner-integrated AI detection of brain hemorrhages is feasible and robust. Diagnostic accuracy shows high specificity, very high sensitivity, and a very high negative predictive value. However, the many false-positive findings resulted in a relatively moderate positive predictive value.
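The four accuracy figures in this abstract follow directly from the counts it reports (52 reference-positive cases with 1 false negative, 39 false positives, 432 successfully processed scans), as the short recomputation below shows. This is a re-check of the published numbers, not the study's analysis code.

```python
# Recomputing the reported diagnostic accuracy from the abstract's counts.
TP, FN, FP = 51, 1, 39          # 52 positives with 1 miss; 39 false alarms
TN = 432 - (TP + FN + FP)       # 341 true negatives among 432 processed scans

sensitivity = TP / (TP + FN)    # reported as 98.1%
specificity = TN / (TN + FP)    # reported as 89.7%
ppv = TP / (TP + FP)            # reported as 56.7%
npv = TN / (TN + FN)            # reported as 99.7%

# Consistency check: AI and reference agree on TP + TN cases,
# which the abstract reports as 392 (90.7%).
agreement = TP + TN
```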
Affiliation(s)
- Jonas Kiefer
- Department of Radiology, University Hospital Erlangen, Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg, Maximiliansplatz 3, 91054 Erlangen, Germany
- Markus Kopp
- Department of Radiology, University Hospital Erlangen, Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg, Maximiliansplatz 3, 91054 Erlangen, Germany
- Imaging Science Institute, Ulmenweg 18, 91054 Erlangen, Germany
- Theresa Ruettinger
- Department of Radiology, University Hospital Erlangen, Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg, Maximiliansplatz 3, 91054 Erlangen, Germany
- Rafael Heiss
- Department of Radiology, University Hospital Erlangen, Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg, Maximiliansplatz 3, 91054 Erlangen, Germany
- Imaging Science Institute, Ulmenweg 18, 91054 Erlangen, Germany
- Wolfgang Wuest
- Martha-Maria Hospital Nuernberg, Stadenstraße 58, 90491 Nuernberg, Germany
- Patrick Amarteifio
- Imaging Science Institute, Ulmenweg 18, 91054 Erlangen, Germany
- Siemens Healthcare GmbH, Allee am Röthelheimpark 3, 91052 Erlangen, Germany
- Armin Stroebel
- Center for Clinical Studies CCS, University Hospital Erlangen, Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg, Krankenhausstraße 12, 91054 Erlangen, Germany
- Michael Uder
- Department of Radiology, University Hospital Erlangen, Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg, Maximiliansplatz 3, 91054 Erlangen, Germany
- Imaging Science Institute, Ulmenweg 18, 91054 Erlangen, Germany
- Matthias Stefan May
- Department of Radiology, University Hospital Erlangen, Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg, Maximiliansplatz 3, 91054 Erlangen, Germany
- Imaging Science Institute, Ulmenweg 18, 91054 Erlangen, Germany
16
Gillot M, Miranda F, Baquero B, Ruellas A, Gurgel M, Al Turkestani N, Anchling L, Hutin N, Biggs E, Yatabe M, Paniagua B, Fillion-Robin JC, Allemang D, Bianchi J, Cevidanes L, Prieto JC. Automatic landmark identification in cone-beam computed tomography. Orthod Craniofac Res 2023; 26:560-567. [PMID: 36811276 PMCID: PMC10440369 DOI: 10.1111/ocr.12642] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Received: 09/19/2022] [Revised: 02/07/2023] [Accepted: 02/09/2023] [Indexed: 02/24/2023]
Abstract
OBJECTIVE To present and validate an open-source, fully automated landmark placement (ALICBCT) tool for cone-beam computed tomography scans. MATERIALS AND METHODS One hundred and forty-three large and medium field-of-view cone-beam computed tomography (CBCT) scans were used to train and test a novel approach, called ALICBCT, that reformulates landmark detection as a classification problem through a virtual agent placed inside volumetric images. The landmark agents were trained to navigate a multi-scale volumetric space to reach the estimated landmark position. The agent's movement decisions rely on a combination of a DenseNet feature network and fully connected layers. For each CBCT, 32 ground-truth landmark positions were identified by 2 clinician experts. After validation of the 32 landmarks, new models were trained to identify a total of 119 landmarks that are commonly used in clinical studies for the quantification of changes in bone morphology and tooth position. RESULTS Our method achieved high accuracy, with an average error of 1.54 ± 0.87 mm for the 32 landmark positions and rare failures, taking an average of 4.2 seconds of computation time to identify each landmark in one large 3D-CBCT scan using a conventional GPU. CONCLUSION The ALICBCT algorithm is a robust automatic identification tool that has been deployed for clinical and research use as an extension of the 3D Slicer platform, allowing continuous updates for increased precision.
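The multi-scale navigation idea above (an agent stepping toward the landmark and refining its step size as it converges) can be sketched with a toy coarse-to-fine search. This is an assumption-laden illustration, not the ALICBCT network: a perfect "oracle" move selector stands in for the trained DenseNet classifier, and the landmark coordinates are invented.

```python
# Toy coarse-to-fine landmark search: the "agent" repeatedly takes the step
# that best approaches a target in a volume, halving its step size whenever
# no move improves -- mimicking multi-scale agent navigation.
import numpy as np

target = np.array([37.0, 12.0, 54.0])             # hypothetical landmark (voxel coords)
pos = np.array([0.0, 0.0, 0.0])                   # agent start position
step = 32.0                                       # begin at the coarsest scale

while step >= 1.0:
    moves = np.array([[dx, dy, dz] for dx in (-1, 0, 1)
                      for dy in (-1, 0, 1) for dz in (-1, 0, 1)])
    # Oracle "classifier": pick the move that reduces distance to the landmark.
    best = min(moves, key=lambda m: np.linalg.norm(pos + m * step - target))
    if np.all(best == 0):
        step /= 2.0                               # refine: drop to a finer scale
    else:
        pos = pos + best * step

error = np.linalg.norm(pos - target)              # residual localization error
```

With integer-valued targets this toy converges exactly; the real system replaces the oracle with a learned classifier, so its residual error (1.54 mm on average in the study) comes from imperfect move predictions.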
Affiliation(s)
- Maxime Gillot
- Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA
- CPE Lyon, Lyon, France
- Felicia Miranda
- Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA
- Department of Orthodontics, Bauru Dental School, University of São Paulo, Bauru, Brazil
- Baptiste Baquero
- Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA
- CPE Lyon, Lyon, France
- Antonio Ruellas
- Department of Orthodontics and Pediatric Dentistry, School of Dentistry, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil
- Marcela Gurgel
- Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA
- Najla Al Turkestani
- Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA
- Department of Restorative and Aesthetic Dentistry, Faculty of Dentistry, King Abdulaziz University, Jeddah, Saudi Arabia
- Luc Anchling
- Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA
- CPE Lyon, Lyon, France
- Nathan Hutin
- Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA
- CPE Lyon, Lyon, France
- Elizabeth Biggs
- Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA
- Marilia Yatabe
- Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA
- Jonas Bianchi
- Department of Orthodontics, University of the Pacific, San Francisco, CA, USA
- Lucia Cevidanes
- Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA
- Juan Carlos Prieto
- Department of Psychiatry, University of North Carolina, Chapel Hill, NC, USA
17
Lo Piccolo F, Hinck D, Segeroth M, Sperl J, Cyriac J, Yang S, Rapaka S, Bremerich J, Sauter AW, Pradella M. Impact of retraining a deep learning algorithm for improving guideline-compliant aortic diameter measurements on non-gated chest CT. Eur J Radiol 2023; 168:111093. [PMID: 37716024 DOI: 10.1016/j.ejrad.2023.111093] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 06/03/2023] [Revised: 08/21/2023] [Accepted: 09/08/2023] [Indexed: 09/18/2023]
Abstract
PURPOSE/OBJECTIVE Reliable detection of thoracic aortic dilatation (TAD) is mandatory in clinical routine. For ECG-gated CT angiography, automated deep learning (DL) algorithms are established for diameter measurements according to current guidelines. For non-ECG-gated CT (contrast-enhanced (CE) and non-CE), however, only a few reports are available. In these reports, classification as TAD is frequently unreliable, with result quality varying by anatomic location and the aortic root presenting the worst results. Therefore, this study aimed to explore the impact of re-training on a previously evaluated DL tool for aortic measurements in a cohort of non-ECG-gated exams. METHODS & MATERIALS A cohort of 995 patients (68 ± 12 years) with CE (n = 392) and non-CE (n = 603) chest CT exams, all of which had been classified as TAD by the initial DL tool, was selected. The re-trained version featured improved robustness of centerline fitting and cross-sectional plane placement. All cases were processed by the re-trained DL tool version. DL results were evaluated by a radiologist regarding plane placement and diameter measurements. Measurements were classified as correctly measured diameters at each location, whereas false measurements consisted of over- or under-estimation of diameters. RESULTS We evaluated 8948 measurements in 995 exams. The re-trained version performed 8539/8948 (95.5%) of diameter measurements correctly. 3765/8948 (42.1%) of measurements were correct in both versions, the initial and the re-trained DL tool (best: distal arch, 655/995 (66%); worst: aortic sinus (AS), 221/995 (22%)). In contrast, 4456/8948 (49.8%) of measurements were correctly measured only by the re-trained version, in particular at the aortic root (AS: 564/995 (57%); sinotubular junction: 697/995 (70%)). In addition, the re-trained version performed 318 (3.6%) measurements which were not available previously.
A total of 228 (2.5%) cases showed false measurements because of tilted planes, and 181 (2.0%) showed over- or under-segmentations, concentrated at the AS (n = 137 (14%) and n = 73 (7%), respectively). CONCLUSION Re-training of the DL tool improved diameter assessment, resulting in a total of 95.5% correct measurements. Our data suggest that the re-trained DL tool can be applied even to non-ECG-gated chest CT, including both CE and non-CE exams.
Affiliation(s)
- Francesca Lo Piccolo
- Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031 Basel, Switzerland.
- Daniel Hinck
- Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031 Basel, Switzerland.
- Martin Segeroth
- Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031 Basel, Switzerland.
- Jonathan Sperl
- Siemens Healthineers, Siemensstraße 1, 91301 Forchheim, Germany.
- Joshy Cyriac
- Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031 Basel, Switzerland.
- Shan Yang
- Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031 Basel, Switzerland.
- Saikiran Rapaka
- Siemens Healthineers, 755 College Rd E, Princeton, NJ 08540, United States.
- Jens Bremerich
- Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031 Basel, Switzerland.
- Alexander W Sauter
- Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031 Basel, Switzerland; Department of Radiology, Kantonsspital Baden, Im Ergel 1, 5404 Baden, Switzerland; Department of Radiology, University Hospital Tuebingen, Hoppe-Seyler-Straße 3, 7207 Tuebingen, Germany.
- Maurice Pradella
- Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031 Basel, Switzerland.
18
Maduro Bustos LA, Sarkar A, Doyle LA, Andreou K, Noonan J, Nurbagandova D, Shah SA, Irabor OC, Mourtada F. Feasibility evaluation of novel AI-based deep-learning contouring algorithm for radiotherapy. J Appl Clin Med Phys 2023; 24:e14090. [PMID: 37464581 PMCID: PMC10647981 DOI: 10.1002/acm2.14090] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/11/2023] [Revised: 06/09/2023] [Accepted: 06/13/2023] [Indexed: 07/20/2023] Open
Abstract
PURPOSE To evaluate the clinical feasibility of the Siemens Healthineers AI-Rad Companion Organs RT VA30A (Organs-RT) auto-contouring algorithm for organs at risk (OARs) of the pelvis, thorax, and head and neck (H&N). METHODS Computed tomography (CT) datasets from 30 patients (10 pelvis, 10 thorax, and 10 H&N) were collected. Four sets of OARs were generated on each scan, one set by Organs-RT and the others by three experienced users independently. A physician (expert) then evaluated each contour by assigning a score from the following scale: 1-Must Redo, 2-Major Edits, 3-Minor Edits, 4-Clinically usable. Using the highest-scored OAR from the human users as a reference, the contours generated by Organs-RT were evaluated via Dice Similarity Coefficient (DSC), Hausdorff Distance (HDD), Mean Distance to Agreement (mDTA), volume comparison, and visual inspection. Additionally, each human user recorded the time to delineate each structure set, and time-saving efficiency was measured. RESULTS The average DSC ranged between (0.81 ± 0.06)Rectum and (0.94 ± 0.03)Bladder for the pelvic OARs, (0.75 ± 0.09)Esophagus and (0.96 ± 0.02)Rt. Lung for the thoracic OARs, and (0.66 ± 0.07)Lips and (0.83 ± 0.04)Brainstem for the H&N. The average HDD in cm ranged between (0.95 ± 0.35)Bladder and (3.62 ± 2.50)Rectum for the pelvis cohort, (0.42 ± 0.06)SpinalCord and (2.09 ± 2.00)Esophagus for the thoracic set, and (0.53 ± 0.22)Cerv_SpinalCord and (1.50 ± 0.50)Mandible for the H&N region. The time-saving efficiency was 67% for H&N, 83% for pelvis, and 84% for thorax. 72.5%, 82%, and 50% of the pelvis, thorax, and H&N OARs, respectively, were scored as clinically usable by the expert.
CONCLUSIONS The highest agreement between OARs generated by Organs-RT and their respective references was for the bladder, heart, lungs, and femoral heads, with an overall DSC ≥ 0.92. The poorest agreement was for the rectum, esophagus, and lips, with an overall DSC ≤ 0.81. Nonetheless, Organs-RT serves as a reliable auto-contouring tool, minimizing overall contouring time and increasing time-saving efficiency in radiotherapy treatment planning.
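The two agreement metrics used in this study are standard and compact enough to define directly: the Dice similarity coefficient measures volume overlap, and the symmetric Hausdorff distance measures worst-case boundary disagreement. The tiny binary masks below are invented for illustration.

```python
# Standard definitions of the DSC and Hausdorff distance on toy binary masks.
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two point sets (rows = points)."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

auto = np.zeros((8, 8), dtype=bool); auto[2:6, 2:6] = True      # "AI" contour
manual = np.zeros((8, 8), dtype=bool); manual[3:7, 2:6] = True  # reference contour

dsc = dice(auto, manual)                             # overlap score in [0, 1]
hdd = hausdorff(np.argwhere(auto), np.argwhere(manual))  # in voxel units
```

In the study these are computed per organ in 3-D and the HDD is reported in cm rather than voxels.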
Affiliation(s)
- Luis A. Maduro Bustos
- Department of Radiation Oncology, Christiana Care Helen F. Graham Cancer Center, Newark, Delaware, USA
- Department of Radiation Oncology, Thomas Jefferson University Hospital, Philadelphia, Pennsylvania, USA
- Abhirup Sarkar
- Department of Radiation Oncology, Christiana Care Helen F. Graham Cancer Center, Newark, Delaware, USA
- Laura A. Doyle
- Department of Radiation Oncology, Christiana Care Helen F. Graham Cancer Center, Newark, Delaware, USA
- Department of Radiation Oncology, Thomas Jefferson University Hospital, Philadelphia, Pennsylvania, USA
- Kelly Andreou
- Department of Radiation Oncology, Christiana Care Helen F. Graham Cancer Center, Newark, Delaware, USA
- Jodie Noonan
- Department of Radiation Oncology, Christiana Care Helen F. Graham Cancer Center, Newark, Delaware, USA
- Diana Nurbagandova
- Department of Radiation Oncology, Christiana Care Helen F. Graham Cancer Center, Newark, Delaware, USA
- SunJay A. Shah
- Department of Radiation Oncology, Christiana Care Helen F. Graham Cancer Center, Newark, Delaware, USA
- Omoruyi Credit Irabor
- Department of Radiation Oncology, Thomas Jefferson University Hospital, Philadelphia, Pennsylvania, USA
- Firas Mourtada
- Department of Radiation Oncology, Thomas Jefferson University Hospital, Philadelphia, Pennsylvania, USA
19
Hong W, Kim SM, Choi J, Ahn J, Paeng JY, Kim H. Automated Cephalometric Landmark Detection Using Deep Reinforcement Learning. J Craniofac Surg 2023; 34:2336-2342. [PMID: 37622568 DOI: 10.1097/scs.0000000000009685] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 09/27/2022] [Accepted: 06/25/2023] [Indexed: 08/26/2023] Open
Abstract
Accurate cephalometric landmark detection leads to accurate analysis, diagnosis, and surgical planning. Many studies on automated landmark detection have been conducted; however, reinforcement learning-based networks had not yet been applied. To the best of our knowledge, this is the first study to apply a deep Q-network (DQN) and a double deep Q-network (DDQN) to automated cephalometric landmark detection. The performance of the DQN-based network for cephalometric landmark detection was evaluated using the IEEE International Symposium on Biomedical Imaging (ISBI) 2015 Challenge data set and compared with previously proposed methods. Furthermore, the clinical applicability of DQN-based automated cephalometric landmark detection was confirmed by testing the DQN-based and DDQN-based networks on data from 500 patients collected in a clinic. The DQN-based network demonstrated that the average mean radial error of the 19 landmarks was smaller than 2 mm, that is, the clinically accepted level, without data augmentation or additional preprocessing. Our DQN-based and DDQN-based approaches, tested with the 500-patient data set, showed average success detection rates within 2 mm of 67.33% and 66.04%, respectively, indicating the feasibility and potential of clinical application.
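The two evaluation metrics quoted in this abstract are easy to state precisely: mean radial error (MRE, the mean Euclidean distance between predicted and ground-truth landmarks) and success detection rate (SDR, the fraction of landmarks within a tolerance such as 2 mm). The coordinates below are invented for illustration; this is a definition sketch, not the study's evaluation code.

```python
# MRE and SDR@2mm on hypothetical 2-D landmark coordinates (in mm).
import numpy as np

gt = np.array([[10.0, 20.0], [35.0, 42.0], [60.0, 15.0], [80.0, 70.0]])
pred = np.array([[10.5, 20.0], [36.0, 43.0], [60.0, 18.5], [80.2, 70.1]])

radial_err = np.linalg.norm(pred - gt, axis=1)  # per-landmark radial error
mre = radial_err.mean()                          # mean radial error
sdr_2mm = (radial_err <= 2.0).mean()             # success detection rate @ 2 mm
```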
Affiliation(s)
- Woojae Hong
- Department of Biomechatronic Engineering, Sungkyunkwan University, Suwon, Gyeonggi
- Seong-Min Kim
- Department of Biomechatronic Engineering, Sungkyunkwan University, Suwon, Gyeonggi
- Joongyeon Choi
- Department of Biomechatronic Engineering, Sungkyunkwan University, Suwon, Gyeonggi
- Jaemyung Ahn
- Department of Oral and Maxillofacial Surgery, Samsung Medical Center, Seoul, Republic of Korea
- Jun-Young Paeng
- Department of Oral and Maxillofacial Surgery, Samsung Medical Center, Seoul, Republic of Korea
- Hyunggun Kim
- Department of Biomechatronic Engineering, Sungkyunkwan University, Suwon, Gyeonggi
20
Wright R, Gomez A, Zimmer VA, Toussaint N, Khanal B, Matthew J, Skelton E, Kainz B, Rueckert D, Hajnal JV, Schnabel JA. Fast fetal head compounding from multi-view 3D ultrasound. Med Image Anal 2023; 89:102793. [PMID: 37482034 DOI: 10.1016/j.media.2023.102793] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/18/2022] [Revised: 02/26/2023] [Accepted: 03/06/2023] [Indexed: 07/25/2023]
Abstract
The diagnostic value of ultrasound images may be limited by the presence of artefacts, notably acoustic shadows, lack of contrast and localised signal dropout. Some of these artefacts depend on probe orientation and scan technique, with each image giving a distinct, partial view of the imaged anatomy. In this work, we propose a novel method to fuse the partially imaged fetal head anatomy, acquired from numerous views, into a single coherent 3D volume of the full anatomy. Firstly, a stream of freehand 3D US images is acquired using a single probe, capturing as many different views of the head as possible. The imaged anatomy at each time-point is then independently aligned to a canonical pose using a recurrent spatial transformer network, making our approach robust to fast fetal and probe motion. Secondly, images are fused by averaging only the most consistent and salient features from all images, producing a more detailed compounding while minimising artefacts. We evaluated our method quantitatively and qualitatively using image quality metrics and expert ratings, yielding state-of-the-art performance in terms of image quality and robustness to misalignments. Being online, fast and fully automated, our method shows promise for clinical use and deployment as a real-time tool in the fetal screening clinic, where it may enable unparalleled insight into the shape and structure of the face, skull and brain.
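The fusion step described above (averaging only the most salient features across aligned views) can be sketched with a simple per-pixel saliency weighting. This is a hedged stand-in: the gradient-magnitude saliency and the toy "views" are our choices, not the paper's learned selection:

```python
import numpy as np

# Sketch of saliency-weighted compounding: each aligned view votes per pixel
# with a weight based on its local gradient magnitude (illustrative stand-in
# for the paper's consistency/saliency selection).
def compound(views):
    stack = np.stack(views).astype(float)      # (n_views, H, W)
    gy, gx = np.gradient(stack, axis=(1, 2))   # per-view spatial gradients
    saliency = np.hypot(gx, gy) + 1e-6         # avoid all-zero weights
    weights = saliency / saliency.sum(axis=0, keepdims=True)
    return (weights * stack).sum(axis=0)       # per-pixel weighted average

# toy example: two partially informative "views" of the same anatomy
v1 = np.zeros((6, 6)); v1[:, :4] = 1.0         # left part imaged
v2 = np.zeros((6, 6)); v2[:, 2:] = 1.0         # right part imaged
fused = compound([v1, v2])
```

Because the output is a convex per-pixel combination of the inputs, the fused image never exceeds the intensity range of the source views, which is one reason weighted averaging suppresses view-specific dropout rather than amplifying it.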
Affiliation(s)
- Robert Wright
- School of Biomedical Engineering & Imaging Sciences, King's College London, UK
- Alberto Gomez
- School of Biomedical Engineering & Imaging Sciences, King's College London, UK
- Veronika A Zimmer
- School of Biomedical Engineering & Imaging Sciences, King's College London, UK; Department of Informatics, Technische Universität München, Germany
- Bishesh Khanal
- School of Biomedical Engineering & Imaging Sciences, King's College London, UK; Nepal Applied Mathematics and Informatics Institute for Research (NAAMII), Nepal
- Jacqueline Matthew
- School of Biomedical Engineering & Imaging Sciences, King's College London, UK
- Emily Skelton
- School of Biomedical Engineering & Imaging Sciences, King's College London, UK; School of Health Sciences, City, University of London, London, UK
- Daniel Rueckert
- Department of Computing, Imperial College London, UK; School of Medicine and Department of Informatics, Technische Universität München, Germany
- Joseph V Hajnal
- School of Biomedical Engineering & Imaging Sciences, King's College London, UK
- Julia A Schnabel
- School of Biomedical Engineering & Imaging Sciences, King's College London, UK; Department of Informatics, Technische Universität München, Germany; Helmholtz Zentrum München - German Research Center for Environmental Health, Germany

21
Li X, Lei H, Zhang L, Wang M. Differentiable Logic Policy for Interpretable Deep Reinforcement Learning: A Study From an Optimization Perspective. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2023; 45:11654-11667. [PMID: 37310843 DOI: 10.1109/tpami.2023.3285634] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
The interpretability of policies remains an important challenge in Deep Reinforcement Learning (DRL). This paper explores interpretable DRL by representing the policy with Differentiable Inductive Logic Programming (DILP) and provides a theoretical and empirical study of DILP-based policy learning from an optimization perspective. We first identify the fundamental fact that DILP-based policy learning should be solved as a constrained policy optimization problem. We then propose Mirror Descent for policy optimization (MDPO) to handle the constraints of DILP-based policies. We derive a closed-form regret bound for MDPO with function approximation, which informs the design of DRL frameworks. Moreover, we study the convexity of DILP-based policies to further verify the benefits gained from MDPO. Empirically, we experimented with MDPO, its on-policy variant, and three mainstream policy learning methods, and the results verified our theoretical analysis.
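The constraint-handling benefit of mirror descent mentioned above can be seen in a minimal sketch: with the negative-entropy mirror map, the update on the probability simplex becomes multiplicative, so the policy stays a valid distribution by construction. This is a generic illustration of mirror descent, not the paper's full DILP pipeline:

```python
import math

# One mirror-descent policy update on the simplex (negative-entropy mirror
# map): pi_{t+1}(a) proportional to pi_t(a) * exp(lr * advantage(a)).
def md_policy_step(policy, advantages, lr=0.5):
    logits = [math.log(max(p, 1e-12)) + lr * adv
              for p, adv in zip(policy, advantages)]
    m = max(logits)                         # stabilise the softmax
    w = [math.exp(l - m) for l in logits]
    z = sum(w)
    return [x / z for x in w]

pi = [0.25, 0.25, 0.25, 0.25]
for _ in range(50):                         # repeated updates, fixed advantages
    pi = md_policy_step(pi, [1.0, 0.0, 0.0, -1.0])
```

After repeated updates the mass concentrates on the highest-advantage action while every iterate remains a normalized, non-negative distribution, which is exactly the constraint a plain gradient step would violate.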
22
Hamelink II, de Heide EEJ, Pelgrim GJGJ, Kwee TCT, van Ooijen PMAP, de Bock GHT, Vliegenthart RR. Validation of an AI-based algorithm for measurement of the thoracic aortic diameter in low-dose chest CT. Eur J Radiol 2023; 167:111067. [PMID: 37659209 DOI: 10.1016/j.ejrad.2023.111067] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2023] [Revised: 08/24/2023] [Accepted: 08/25/2023] [Indexed: 09/04/2023]
Abstract
OBJECTIVES To evaluate the performance of artificial intelligence (AI) software for automatic thoracic aortic diameter assessment in a heterogeneous cohort with low-dose, non-contrast chest computed tomography (CT). MATERIALS AND METHODS Participants of the Imaging in Lifelines (ImaLife) study who underwent low-dose, non-contrast chest CT (August 2017-May 2022) were included using random samples of 80 participants <50y, ≥80y, and with thoracic aortic diameter ≥40 mm. AI-based aortic diameters at eight guideline-compliant positions were compared with manual measurements. In 90 examinations (30 per group), diameters were reassessed for intra- and inter-reader variability, which was compared with the discrepancy of the AI system using Bland-Altman analysis, paired-samples t-tests and linear mixed models. RESULTS We analyzed 240 participants (63 ± 16 years; 50% men). AI evaluation failed in 11 cases due to incorrect segmentation (4.6%), leaving 229 cases for analysis. No difference was found in aortic diameter between manual and automatic measurements (32.7 ± 6.4 mm vs 32.7 ± 6.0 mm, p = 0.70). Bland-Altman analysis yielded no systematic bias and a repeatability coefficient of 4.0 mm for AI. The mean discrepancy of AI (1.3 ± 1.6 mm) was comparable to inter-reader variability (1.4 ± 1.4 mm); only at the proximal aortic arch did AI show a higher discrepancy (2.0 ± 1.8 mm vs 0.9 ± 0.9 mm, p < 0.001). No difference between AI discrepancy and inter-reader variability was found for any subgroup (all: p > 0.05). CONCLUSION The AI software can accurately measure thoracic aortic diameters, with discrepancy to a human reader similar to inter-reader variability in a range from normal to dilated aortas.
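The Bland-Altman quantities used in this comparison (bias, 95% limits of agreement, and a repeatability coefficient) are simple to compute. A minimal sketch, with toy paired diameters and taking the repeatability coefficient as 1.96 times the SD of the paired differences, which is one common convention:

```python
import statistics

# Bland-Altman summary for paired measurements (e.g. AI vs manual diameters).
def bland_altman(a, b):
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)          # systematic offset
    sd = statistics.stdev(diffs)           # spread of disagreement
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
    return bias, loa, 1.96 * sd

# toy paired diameters in mm: AI vs manual reader
bias, loa, rc = bland_altman([30.0, 32.0, 34.0, 36.0],
                             [30.5, 31.5, 34.5, 35.5])
```

A bias near zero with narrow limits of agreement is the pattern the study reports for most aortic positions.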
Affiliation(s)
- Iris Hamelink
- Department of Radiology, University of Groningen, University Medical Center of Groningen, 9713GZ Groningen, The Netherlands
- Erik Jan de Heide
- Department of Radiology, University of Groningen, University Medical Center of Groningen, 9713GZ Groningen, The Netherlands
- Gert Jan Pelgrim
- Department of Radiology, University of Groningen, University Medical Center of Groningen, 9713GZ Groningen, The Netherlands
- Thomas Kwee
- Department of Radiology, University of Groningen, University Medical Center of Groningen, 9713GZ Groningen, The Netherlands
- Peter van Ooijen
- Department of Radiation Oncology, University of Groningen, University Medical Center of Groningen, 9713GZ Groningen, The Netherlands; Data Science in Health (DASH), University of Groningen, University Medical Center of Groningen, 9713GZ Groningen, The Netherlands
- Truuske de Bock
- Department of Epidemiology, University of Groningen, University Medical Center of Groningen, 9713GZ Groningen, The Netherlands
- Rozemarijn Vliegenthart
- Department of Radiology, University of Groningen, University Medical Center of Groningen, 9713GZ Groningen, The Netherlands

23
Wan K, Li L, Jia D, Gao S, Qian W, Wu Y, Lin H, Mu X, Gao X, Wang S, Wu F, Zhuang X. Multi-target landmark detection with incomplete images via reinforcement learning and shape prior embedding. Med Image Anal 2023; 89:102875. [PMID: 37441881 DOI: 10.1016/j.media.2023.102875] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2023] [Revised: 05/05/2023] [Accepted: 06/13/2023] [Indexed: 07/15/2023]
Abstract
Medical images are generally acquired with a limited field-of-view (FOV), which can lead to incomplete regions of interest (ROI) and thus poses a great challenge for medical image analysis. This is particularly evident for learning-based multi-target landmark detection, where algorithms can be misled into learning primarily the background variation caused by the varying FOV, failing to detect the targets. By learning a navigation policy instead of predicting targets directly, reinforcement learning (RL)-based methods have the potential to tackle this challenge efficiently. Inspired by this, in this work we propose a multi-agent RL framework for simultaneous multi-target landmark detection. This framework is designed to learn from incomplete and/or complete images to form an implicit knowledge of global structure, which is consolidated during the training stage for the detection of targets from either complete or incomplete test images. To further explicitly exploit the global structural information from incomplete images, we propose to embed a shape model into the RL process. With this prior knowledge, the proposed RL model can not only localize dozens of targets simultaneously, but also work effectively and robustly in the presence of incomplete images. We validated the applicability and efficacy of the proposed method on various multi-target detection tasks with incomplete images from clinical practice, using body dual-energy X-ray absorptiometry (DXA), cardiac MRI and head CT datasets. Results showed that our method could predict the whole set of landmarks with incomplete training images of up to 80% missing proportion (average distance error 2.29 cm on body DXA), and could detect unseen landmarks in regions with missing image information outside the FOV of target images (average distance error 6.84 mm on 3D half-head CT). Our code will be released via https://zmiclab.github.io/projects.html.
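The role of the embedded shape prior can be conveyed with a much simpler stand-in: given a mean shape and the landmarks actually visible inside the FOV, infer the missing ones by fitting the prior to the observations. Here the fit is translation-only and the coordinates are toys; the paper embeds a richer statistical shape model:

```python
# Completing landmarks outside the field-of-view from a mean-shape prior
# (translation-only fit; illustrative stand-in for the paper's shape model).
def complete_landmarks(mean_shape, observed):
    """mean_shape: list of (x, y); observed: {landmark index: (x, y)}."""
    n = len(observed)
    dx = sum(observed[i][0] - mean_shape[i][0] for i in observed) / n
    dy = sum(observed[i][1] - mean_shape[i][1] for i in observed) / n
    # keep detected landmarks, hallucinate the rest from the shifted prior
    return [observed.get(i, (mx + dx, my + dy))
            for i, (mx, my) in enumerate(mean_shape)]

mean_shape = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
full = complete_landmarks(mean_shape, {0: (10.0, 10.0), 1: (11.0, 10.0)})
```

Even this crude fit shows why a prior helps: the two visible landmarks pin down the pose, and the occluded corners of the shape follow.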
Affiliation(s)
- Kaiwen Wan
- School of Data Science, Fudan University, Shanghai, 200433, China
- Lei Li
- Institute of Biomedical Engineering, University of Oxford, Oxford, UK
- Dengqiang Jia
- School of Naval Architecture, Ocean and Civil Engineering, Shanghai Jiao Tong University, Shanghai, China; Hong Kong Centre for Cerebro-Cardiovascular Health Engineering (COCHE), Hong Kong, China
- Shangqi Gao
- School of Data Science, Fudan University, Shanghai, 200433, China
- Wei Qian
- Shanghai Institute of Nutrition and Health, University of Chinese Academy of Sciences, Chinese Academy of Sciences, Shanghai 200031, China
- Yingzhi Wu
- Department of Plastic Surgery, Huashan Hospital, Fudan University, Shanghai 200040, China
- Huandong Lin
- Department of Endocrinology and Metabolism, Zhong Shan Hospital, Fudan University, 200032 Shanghai, China
- Xiongzheng Mu
- Department of Plastic Surgery, Huashan Hospital, Fudan University, Shanghai 200040, China
- Xin Gao
- Department of Endocrinology and Metabolism, Zhong Shan Hospital, Fudan University, 200032 Shanghai, China
- Sijia Wang
- Shanghai Institute of Nutrition and Health, University of Chinese Academy of Sciences, Chinese Academy of Sciences, Shanghai 200031, China
- Fuping Wu
- Nuffield Department of Population Health, University of Oxford, Oxford, UK
- Xiahai Zhuang
- School of Data Science, Fudan University, Shanghai, 200433, China

24
Nie W, Zhang C, Song D, Zhao L, Bai Y, Xie K, Liu A. Deep reinforcement learning framework for thoracic diseases classification via prior knowledge guidance. Comput Med Imaging Graph 2023; 108:102277. [PMID: 37567045 DOI: 10.1016/j.compmedimag.2023.102277] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2023] [Revised: 07/25/2023] [Accepted: 07/26/2023] [Indexed: 08/13/2023]
Abstract
The chest X-ray is commonly employed in the diagnosis of thoracic diseases. Over the years, numerous approaches have been proposed to address the issue of automatic diagnosis based on chest X-rays. However, the limited availability of labeled data for the relevant diseases remains a significant challenge to achieving accurate diagnoses. This paper focuses on the diagnostic problem of thoracic diseases and presents a novel deep reinforcement learning framework. This framework incorporates prior knowledge to guide the learning process of diagnostic agents, and the model parameters can be continually updated as more data becomes available, mimicking a person's learning process. Specifically, our approach offers two key contributions: (1) prior knowledge can be acquired from models pre-trained on old data or similar data from other domains, effectively reducing the dependence on target-domain data; and (2) the reinforcement learning framework enables the diagnostic agent to be as exploratory as a human, leading to improved diagnostic accuracy through continuous exploration. Moreover, this method effectively addresses the challenge of learning models with limited data, enhancing the model's generalization capability. We evaluate the performance of our approach using the well-known NIH ChestX-ray14 and CheXpert datasets and achieve competitive results. More importantly, the method shows considerable progress toward clinical application. The source code for our approach can be accessed at the following URL: https://github.com/NeaseZ/MARL.
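The "prior knowledge plus continual updates" idea can be reduced to a very small sketch: a classifier whose weights are warm-started from a model trained on old or related data, then updated online as new labelled studies arrive. This is our simplification; the paper uses a deep RL diagnostic agent rather than logistic regression:

```python
import math

# Online logistic regression; `w` can be warm-started from weights learned on
# old/related data (the "prior"), then refined example by example.
def predict(w, x):
    return 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))

def online_update(w, x, y, lr=0.1):
    p = predict(w, x)
    return [wi + lr * (y - p) * xi for wi, xi in zip(w, x)]  # one SGD step

w = [0.0, 0.0]                # stand-in for pre-trained prior weights
for _ in range(100):          # stream of new positive training examples
    w = online_update(w, [1.0, 1.0], 1.0)
prob = predict(w, [1.0, 1.0])
```

The point is architectural, not the model class: because each update is incremental, new target-domain data refines rather than replaces what the prior already encodes.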
Affiliation(s)
- Weizhi Nie
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
- Chen Zhang
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
- Dan Song
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
- Lina Zhao
- Department of Critical Care Medicine, Tianjin Medical University General Hospital, Tianjin 300052, China
- Yunpeng Bai
- Department of Cardiac Surgery, Chest Hospital, Tianjin University, Tianjin 300222, China; Clinical School of Thoracic, Tianjin Medical University, Tianjin 300052, China
- Keliang Xie
- Department of Critical Care Medicine, Tianjin Medical University General Hospital, Tianjin 300052, China; Department of Anesthesiology, Tianjin Medical University General Hospital, Tianjin 300052, China; Tianjin Institute of Anesthesiology, Tianjin 300052, China
- Anan Liu
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China

25
Xu H, Fang Y, Chou CA, Fard N, Luo L. A reinforcement learning-based optimal control approach for managing an elective surgery backlog after pandemic disruption. Health Care Manag Sci 2023; 26:430-446. [PMID: 37084163 PMCID: PMC10119544 DOI: 10.1007/s10729-023-09636-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2021] [Accepted: 03/14/2023] [Indexed: 04/22/2023]
Abstract
Contagious disease pandemics, such as COVID-19, can cause hospitals around the world to delay nonemergent elective surgeries, which results in a large surgery backlog. To develop an operational solution for providing patients timely surgical care with limited health care resources, this study proposes a stochastic control process-based method that helps hospitals make operational recovery plans to clear their surgery backlog and restore surgical activity safely. The elective surgery backlog recovery process is modeled as a general discrete-time queueing network system, which is formulated as a Markov decision process. A scheduling optimization algorithm based on the piecewise-decaying ε-greedy reinforcement learning algorithm is proposed to make dynamic daily surgery scheduling plans considering newly arrived patients, waiting time and clinical urgency. The proposed method is tested on a set of simulated datasets and implemented on an elective surgery backlog that built up in one large general hospital in China after the outbreak of COVID-19. The results show that, compared with the current policy, the proposed method can effectively and rapidly clear the surgery backlog caused by a pandemic while ensuring that all patients receive timely surgical care. These results encourage the wider adoption of the proposed method to manage surgery scheduling during all phases of a public health crisis.
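The piecewise-decaying ε-greedy selection named above is easy to sketch: exploration probability drops in steps as training progresses, and otherwise the highest-valued queue is served. The breakpoints and queue values below are illustrative, not the paper's:

```python
import random

# Piecewise-decaying exploration rate: (start_step, epsilon) breakpoints.
def decaying_eps(t, schedule=((0, 1.0), (100, 0.5), (500, 0.1))):
    eps = schedule[0][1]
    for start, value in schedule:
        if t >= start:
            eps = value
    return eps

# Epsilon-greedy choice over surgery queues given their learned values.
def pick_queue(q_values, t, rng):
    if rng.random() < decaying_eps(t):
        return rng.randrange(len(q_values))                       # explore
    return max(range(len(q_values)), key=lambda i: q_values[i])   # exploit

choice = pick_queue([0.1, 0.9, 0.2], 1000, random.Random(1))
```

Early on the scheduler samples queues almost uniformly to estimate their value; late in training it almost always serves the queue with the best learned value (here, index 1).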
Affiliation(s)
- Huyang Xu
- College of Management Science, Chengdu University of Technology, Chengdu, Sichuan, China
- Yuanchen Fang
- Department of Industrial Engineering and Management, Business School, Sichuan University, Chengdu, Sichuan, China
- Chun-An Chou
- Department of Mechanical & Industrial Engineering, Northeastern University, Boston, MA, USA
- Nasser Fard
- Department of Mechanical & Industrial Engineering, Northeastern University, Boston, MA, USA
- Li Luo
- Department of Industrial Engineering and Management, Business School, Sichuan University, Chengdu, Sichuan, China

26
Geng H, Xiao D, Yang S, Fan J, Fu T, Lin Y, Bai Y, Ai D, Song H, Wang Y, Duan F, Yang J. CT2X-IRA: CT to x-ray image registration agent using domain-cross multi-scale-stride deep reinforcement learning. Phys Med Biol 2023; 68:175024. [PMID: 37549676 DOI: 10.1088/1361-6560/acede5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2023] [Accepted: 08/07/2023] [Indexed: 08/09/2023]
Abstract
Objective. In computer-assisted minimally invasive surgery, the intraoperative x-ray image is enhanced by overlapping it with a preoperative CT volume to improve visualization of vital anatomical structures. Accurate and robust 3D/2D registration of the CT volume and x-ray image is therefore highly desired in clinical practice. However, previous registration methods were prone to initial misalignments and struggled with local minima, leading to low accuracy and vulnerability. Approach. To improve registration performance, we propose a novel CT/x-ray image registration agent (CT2X-IRA) within a task-driven deep reinforcement learning framework, which contains three key strategies: (1) a multi-scale-stride learning mechanism provides multi-scale feature representation and flexible action step size, establishing fast and globally optimal convergence of the registration task; (2) a domain adaptation module reduces the domain gap between the x-ray image and the digitally reconstructed radiograph projected from the CT volume, decreasing the sensitivity and uncertainty of the similarity measurement; (3) a weighted reward function facilitates CT2X-IRA in searching for the optimal transformation parameters, improving the estimation accuracy of out-of-plane transformation parameters under large initial misalignments. Main results. We evaluated the proposed CT2X-IRA on both public and private clinical datasets, achieving target registration errors of 2.13 mm and 2.33 mm with computation times of 1.5 s and 1.1 s, respectively, showing an accurate and fast workflow for CT/x-ray image rigid registration. Significance. The proposed CT2X-IRA achieves accurate and robust 3D/2D registration of CT and x-ray images, suggesting its potential significance in clinical applications.
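The multi-scale-stride idea (large action steps first for fast global movement, then smaller steps for refinement) can be conveyed with a hand-rolled greedy stand-in for the learned agent, searching over a toy 2D translation with a synthetic dissimilarity surface:

```python
# Greedy coarse-to-fine translation search: try moves at a large stride until
# none improves the error, then halve the stride. A non-learned stand-in for
# the paper's multi-scale-stride RL agent; the error surface is a toy.
def register_multiscale(error_fn, start=(0.0, 0.0), strides=(8, 4, 2, 1)):
    x, y = start
    for s in strides:
        improved = True
        while improved:
            improved = False
            for dx, dy in ((s, 0), (-s, 0), (0, s), (0, -s)):
                if error_fn(x + dx, y + dy) < error_fn(x, y):
                    x, y, improved = x + dx, y + dy, True
    return x, y

# toy dissimilarity with its minimum at translation (13, -6)
err = lambda tx, ty: (tx - 13) ** 2 + (ty + 6) ** 2
est = register_multiscale(err)
```

The coarse strides cover large initial misalignments in a few moves, while the stride-1 pass recovers the exact optimum; the learned agent plays the same role with image-derived rewards and out-of-plane parameters as well.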
Affiliation(s)
- Haixiao Geng
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Deqiang Xiao
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Shuo Yang
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Jingfan Fan
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Tianyu Fu
- School of Medical Engineering, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Yucong Lin
- School of Medical Engineering, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Yanhua Bai
- Department of Interventional Radiology, The First Medical Center of Chinese PLA General Hospital, Beijing 100853, People's Republic of China
- Danni Ai
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Hong Song
- School of Computer Science, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Yongtian Wang
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Feng Duan
- Department of Interventional Radiology, The First Medical Center of Chinese PLA General Hospital, Beijing 100853, People's Republic of China
- Jian Yang
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China

27
Zaeri N. Artificial intelligence and machine learning responses to COVID-19 related inquiries. J Med Eng Technol 2023; 47:301-320. [PMID: 38625639 DOI: 10.1080/03091902.2024.2321846] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2021] [Accepted: 02/18/2024] [Indexed: 04/17/2024]
Abstract
Researchers and scientists can use computational models to turn linked data into useful information, aiding in disease diagnosis, examination, and viral containment thanks to recent artificial intelligence and machine learning breakthroughs. In this paper, we extensively study the role of artificial intelligence and machine learning in delivering efficient responses to the COVID-19 pandemic almost four years after its start. In this regard, we examine a large number of critical studies conducted by various academic and research communities from multiple disciplines, as well as practical implementations of artificial intelligence algorithms that suggest potential solutions for different COVID-19 decision-making scenarios. We identify numerous areas where artificial intelligence and machine learning can have an impact in this context, including diagnosis (using chest X-ray imaging and CT imaging), severity assessment, tracking, treatment, and the drug industry. Furthermore, we analyse the limitations, restrictions, and hazards of these approaches.
Affiliation(s)
- Naser Zaeri
- Faculty of Computer Studies, Arab Open University, Kuwait

28
Ye H, Cheng Z, Ungvijanpunya N, Chen W, Cao L, Gou Y. Is automatic cephalometric software using artificial intelligence better than orthodontist experts in landmark identification? BMC Oral Health 2023; 23:467. [PMID: 37422630 PMCID: PMC10329795 DOI: 10.1186/s12903-023-03188-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2023] [Accepted: 06/29/2023] [Indexed: 07/10/2023] Open
Abstract
BACKGROUND To evaluate the techniques used for the automatic digitization of cephalograms using artificial intelligence algorithms, highlighting the strengths and weaknesses of each one and reviewing the percentage of success in localizing each cephalometric point. METHODS Lateral cephalograms were digitized and traced by three calibrated senior orthodontic residents with or without artificial intelligence (AI) assistance. The same radiographs of 43 patients were uploaded to the AI-based machine learning programs MyOrthoX, Angelalign, and Digident. ImageJ was used to extract x- and y-coordinates for 32 cephalometric points: 11 soft tissue landmarks and 21 hard tissue landmarks. The mean radial errors (MRE) were assessed relative to thresholds of 1.0 mm, 1.5 mm, and 2.0 mm to compare the successful detection rate (SDR). One-way ANOVA at a significance level of P < .05 was used to compare MRE and SDR. The SPSS (IBM, vs. 27.0) and PRISM (GraphPad, vs. 8.0.2) software were used for the data analysis. RESULTS Experimental results showed that all three methods were able to achieve detection rates greater than 85% using the 2 mm precision threshold, which is the acceptable range in clinical practice. The Angelalign group even achieved a detection rate greater than 78.08% using the 1.0 mm threshold. A marked difference in time was found between the AI-assisted group and the manual group due to heterogeneity in the performance of techniques to detect the same landmark. CONCLUSIONS AI assistance may increase efficiency without compromising accuracy in cephalometric tracings in routine clinical practice and research settings.
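The two evaluation metrics used above, mean radial error (MRE) and successful detection rate (SDR) at clinical thresholds, take only a few lines to compute. Coordinates below are toy values in millimetres:

```python
# MRE: mean Euclidean distance between predicted and ground-truth landmarks.
# SDR: fraction of landmarks detected within each distance threshold.
def mre_sdr(pred, truth, thresholds=(1.0, 1.5, 2.0)):
    errors = [((px - tx) ** 2 + (py - ty) ** 2) ** 0.5
              for (px, py), (tx, ty) in zip(pred, truth)]
    mre = sum(errors) / len(errors)
    sdr = {th: sum(e <= th for e in errors) / len(errors) for th in thresholds}
    return mre, sdr

mre, sdr = mre_sdr([(0.0, 0.0), (3.0, 4.0), (1.0, 0.0)],
                   [(0.0, 0.5), (3.0, 4.0), (1.0, 1.8)])
```

Reporting SDR at several thresholds, as the study does, shows how quickly accuracy degrades below the 2 mm clinical cut-off.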
Affiliation(s)
- Huayu Ye
- Department of Orthodontics, Stomatological Hospital of Chongqing Medical University, 426#, Songshi North Road, Yubei District, Chongqing, 401147 PR China
- Chongqing Key Laboratory of Oral Diseases and Biomedical Sciences, 426#, Songshi North Road, Yubei District, Chongqing, 401147 PR China
- Chongqing Municipal Key Laboratory of Oral Biomedical Engineering of Higher Education, 426#, Songshi North Road, Yubei District, Chongqing, 401147 PR China
- Zixuan Cheng
- Department of Orthodontics, Stomatological Hospital of Chongqing Medical University, 426#, Songshi North Road, Yubei District, Chongqing, 401147 PR China
- Chongqing Haochi Private Dental Clinic, No. 711, Konggang Avenue, Yubei District, Chongqing, 401147 PR China
- Nicha Ungvijanpunya
- Faculty of Dentistry, Chulalongkorn University, 34 Henri Dunant Road, Pathumwan, Bangkok, 10330 Thailand
- Wenjing Chen
- Department of Orthodontics, Stomatological Hospital of Chongqing Medical University, 426#, Songshi North Road, Yubei District, Chongqing, 401147 PR China
- Li Cao
- Department of Orthodontics, Stomatological Hospital of Chongqing Medical University, 426#, Songshi North Road, Yubei District, Chongqing, 401147 PR China
- Chongqing Key Laboratory of Oral Diseases and Biomedical Sciences, 426#, Songshi North Road, Yubei District, Chongqing, 401147 PR China
- Chongqing Municipal Key Laboratory of Oral Biomedical Engineering of Higher Education, 426#, Songshi North Road, Yubei District, Chongqing, 401147 PR China
- Yongchao Gou
- Department of Orthodontics, Stomatological Hospital of Chongqing Medical University, 426#, Songshi North Road, Yubei District, Chongqing, 401147 PR China
- Chongqing Key Laboratory of Oral Diseases and Biomedical Sciences, 426#, Songshi North Road, Yubei District, Chongqing, 401147 PR China
- Chongqing Municipal Key Laboratory of Oral Biomedical Engineering of Higher Education, 426#, Songshi North Road, Yubei District, Chongqing, 401147 PR China

29
Gao Q, Yang L, Lu M, Jin R, Ye H, Ma T. The artificial intelligence and machine learning in lung cancer immunotherapy. J Hematol Oncol 2023; 16:55. [PMID: 37226190 DOI: 10.1186/s13045-023-01456-y] [Citation(s) in RCA: 11] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2023] [Accepted: 05/17/2023] [Indexed: 05/26/2023] Open
Abstract
Over the past decades, more lung cancer patients have experienced lasting benefits from immunotherapy. It is imperative to accurately and intelligently select appropriate patients for immunotherapy and to predict immunotherapy efficacy. In recent years, machine learning (ML)-based artificial intelligence (AI) has been developed in the area of medical-industrial convergence. AI can help model and predict medical information. A growing number of studies have combined radiology, pathology, genomics, and proteomics data in order to predict the expression levels of programmed death-ligand 1 (PD-L1), tumor mutation burden (TMB) and the tumor microenvironment (TME) in cancer patients, or to predict the likelihood of immunotherapy benefits and side effects. Finally, with the advancement of AI and ML, it is believed that "digital biopsy" can replace the traditional single assessment method to benefit more cancer patients and help clinical decision-making in the future. In this review, the applications of AI in PD-L1/TMB prediction, TME prediction and lung cancer immunotherapy are discussed.
Affiliation(s)
- Qing Gao
- Cancer Research Center, Beijing Chest Hospital, Capital Medical University, Beijing Tuberculosis and Thoracic Tumor Research Institute, Beijing, 101149, China
- Luyu Yang
- Department of Respiratory and Critical Care Medicine, Beijing Chest Hospital, Capital Medical University, Beijing Tuberculosis and Thoracic Tumor Institute, Beijing, 101149, China
- Mingjun Lu
- Cancer Research Center, Beijing Chest Hospital, Capital Medical University, Beijing Tuberculosis and Thoracic Tumor Research Institute, Beijing, 101149, China
- Renjing Jin
- Cancer Research Center, Beijing Chest Hospital, Capital Medical University, Beijing Tuberculosis and Thoracic Tumor Research Institute, Beijing, 101149, China
- Huan Ye
- Department of Respiratory and Critical Care Medicine, Beijing Chest Hospital, Capital Medical University, Beijing Tuberculosis and Thoracic Tumor Institute, Beijing, 101149, China
- Teng Ma
- Cancer Research Center, Beijing Chest Hospital, Capital Medical University, Beijing Tuberculosis and Thoracic Tumor Research Institute, Beijing, 101149, China

30
Ledziński Ł, Grześk G. Artificial Intelligence Technologies in Cardiology. J Cardiovasc Dev Dis 2023; 10:jcdd10050202. [PMID: 37233169 DOI: 10.3390/jcdd10050202] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2023] [Revised: 05/03/2023] [Accepted: 05/04/2023] [Indexed: 05/27/2023] Open
Abstract
As the world produces exabytes of data, there is a growing need for new methods that are more suitable for dealing with complex datasets. Artificial intelligence (AI) has significant potential to impact the healthcare industry, which is already on the road to change with the digital transformation of vast quantities of information. The implementation of AI has already achieved success in the domains of molecular chemistry and drug discovery. The reduction in costs and in the time needed for experiments to predict the pharmacological activities of new molecules is a milestone in science. These successful applications of AI algorithms provide hope for a revolution in healthcare systems. A significant part of artificial intelligence is machine learning (ML), of which there are three main types: supervised learning, unsupervised learning, and reinforcement learning. In this review, the full scope of the AI workflow is presented, with explanations of the most often used ML algorithms and descriptions of performance metrics for both regression and classification. A brief introduction to explainable artificial intelligence (XAI) is provided, with examples of technologies that have been developed for XAI. We review important AI implementations in cardiology for supervised, unsupervised, and reinforcement learning and natural language processing, emphasizing the algorithms used. Finally, we discuss the need to establish legal, ethical, and methodical requirements for the deployment of AI models in medicine.
Affiliation(s)
- Łukasz Ledziński
- Department of Cardiology and Clinical Pharmacology, Faculty of Health Sciences, Collegium Medicum in Bydgoszcz, Nicolaus Copernicus University in Toruń, Ujejskiego 75, 85-168 Bydgoszcz, Poland
- Grzegorz Grześk
- Department of Cardiology and Clinical Pharmacology, Faculty of Health Sciences, Collegium Medicum in Bydgoszcz, Nicolaus Copernicus University in Toruń, Ujejskiego 75, 85-168 Bydgoszcz, Poland
31
Ran QY, Miao J, Zhou SP, Hua SH, He SY, Zhou P, Wang HX, Zheng YP, Zhou GQ. Automatic 3-D spine curve measurement in freehand ultrasound via structure-aware reinforcement learning spinous process localization. Ultrasonics 2023; 132:107012. [PMID: 37071944 DOI: 10.1016/j.ultras.2023.107012]
Abstract
Freehand 3-D ultrasound systems have been advanced in scoliosis assessment to avoid radiation hazards, especially for teenagers. This novel 3-D imaging method also makes it possible to evaluate the spine curvature automatically from the corresponding 3-D projection images. However, most approaches neglect the three-dimensional spine deformity by using only the rendered images, thus limiting their usage in clinical applications. In this study, we proposed a structure-aware localization model to directly identify the spinous processes for automatic 3-D spine curve measurement using the images acquired with freehand 3-D ultrasound imaging. The core idea is to leverage a novel reinforcement learning (RL) framework to localize the landmarks, which adopts a multi-scale agent to boost structure representation with positional information. We also introduced a structure similarity prediction mechanism to perceive targets with apparent spinous process structures. Finally, a two-fold filtering strategy was proposed to iteratively screen the detected spinous process landmarks, followed by three-dimensional spine curve fitting for the spine curvature assessment. We evaluated the proposed model on 3-D ultrasound images from subjects with different scoliotic angles. The results showed that the mean localization accuracy of the proposed landmark localization algorithm was 5.95 pixels. Also, the curvature angles on the coronal plane obtained by the new method had a high linear correlation with those obtained by manual measurement (R = 0.86, p < 0.001). These results demonstrate the potential of our proposed method for facilitating the 3-D assessment of scoliosis, especially for 3-D spine deformity assessment.
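The final measurement step of this abstract, turning ordered spinous-process landmarks into a coronal-plane curvature angle, can be sketched in a few lines. This is a simplified proxy (spread between the extreme segment inclinations), not the authors' exact curve-fitting procedure, and the landmark coordinates are hypothetical:

```python
import math

def coronal_curve_angle(landmarks):
    """Approximate coronal-plane curvature angle (degrees) from ordered
    spinous-process landmarks, where x is the lateral offset and y the
    vertical position. Uses the spread between the most- and
    least-inclined segment tangents as a Cobb-like proxy (illustrative
    only, not the paper's fitting method)."""
    slopes = []
    for (x0, y0), (x1, y1) in zip(landmarks, landmarks[1:]):
        # Inclination of each inter-landmark segment from the vertical axis
        slopes.append(math.degrees(math.atan2(x1 - x0, y1 - y0)))
    return max(slopes) - min(slopes)

# A mildly scoliotic curve: lateral offset bulges out and comes back
spine = [(0, 0), (4, 20), (6, 40), (4, 60), (0, 80)]
angle = coronal_curve_angle(spine)
```

In practice the detected landmarks would first be smoothed by the curve fit described in the abstract; the angle extraction afterwards is the same idea.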
Affiliation(s)
- Qi-Yong Ran
- The School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; Jiangsu Key Laboratory of Biomaterials and Devices, Southeast University, Nanjing, China
- Juzheng Miao
- The School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Si-Ping Zhou
- The School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; Jiangsu Key Laboratory of Biomaterials and Devices, Southeast University, Nanjing, China
- Shi-Hao Hua
- The School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; Jiangsu Key Laboratory of Biomaterials and Devices, Southeast University, Nanjing, China
- Si-Yuan He
- The School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; Jiangsu Key Laboratory of Biomaterials and Devices, Southeast University, Nanjing, China
- Ping Zhou
- The School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
- Hong-Xing Wang
- The Department of Rehabilitation Medicine, Zhongda Hospital, Southeast University, Nanjing, China
- Yong-Ping Zheng
- The Department of Biomedical Engineering, The Hong Kong Polytechnic University, Hong Kong, China
- Guang-Quan Zhou
- The School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; Jiangsu Key Laboratory of Biomaterials and Devices, Southeast University, Nanjing, China.
32
An Automatic Image Processing Method Based on Artificial Intelligence for Locating the Key Boundary Points in the Central Serous Chorioretinopathy Lesion Area. Comput Intell Neurosci 2023; 2023:1839387. [PMID: 36818580 PMCID: PMC9937763 DOI: 10.1155/2023/1839387]
Abstract
Accurately and rapidly measuring the diameter of the central serous chorioretinopathy (CSCR) lesion area is key to judging the severity of CSCR and evaluating the efficacy of the corresponding treatments. Currently, manual measurement based on a single or a small number of optical coherence tomography (OCT) B-scan images is unreliable. Although manually measuring the diameters on all OCT B-scan images of a single patient can alleviate this issue, doing so is highly inefficient. Additionally, manual operation is subject to the subjective factors of ophthalmologists, resulting in unrepeatable measurement results. Therefore, an automatic image processing method (a joint framework) based on artificial intelligence (AI) is proposed for locating the key boundary points of the CSCR lesion area to assist the diameter measurement. First, an initial location module (ILM) benefiting from multitask learning is adapted to achieve a preliminary location of the key boundary points. Second, the location task is formulated as a Markov decision process, aiming to further improve the location accuracy with a single-agent reinforcement learning module (SARLM). Finally, a joint framework based on the ILM and SARLM is established, in which the ILM provides a starting point that narrows the agent's active region, and the SARLM compensates for the low generalization of the ILM through the agent's independent exploration ability. Experiments reveal that the AI-based method, by combining the multitask learning and single-agent reinforcement learning paradigms, enables the agent to work in a local region (alleviating the time-consuming search of the SARLM) while performing the location task in a global scope (improving the location accuracy of the ILM), demonstrating its effectiveness and clinical value for rapidly and accurately measuring the diameter of CSCR lesions.
33
Huang S, Yang J, Shen N, Xu Q, Zhao Q. Artificial intelligence in lung cancer diagnosis and prognosis: Current application and future perspective. Semin Cancer Biol 2023; 89:30-37. [PMID: 36682439 DOI: 10.1016/j.semcancer.2023.01.006]
Abstract
Lung cancer is one of the malignant tumors with the highest incidence and mortality in the world. The overall five-year survival rate of lung cancer is lower than that of many other leading cancers. Early diagnosis and prognosis of lung cancer are essential to improve the patient's survival rate. With artificial intelligence (AI) approaches widely applied in lung cancer, early diagnosis and prediction have achieved excellent performance in recent years. This review summarizes various types of AI algorithm applications in lung cancer, including natural language processing (NLP), machine learning and deep learning, and reinforcement learning. In addition, we provide evidence regarding the application of AI in lung cancer diagnosis and clinical prognosis. This review aims to elucidate the value of AI in lung cancer diagnosis and prognosis as a novel screening and decision-making aid for the precise treatment of lung cancer patients.
Affiliation(s)
- Shigao Huang
- Department of Radiation Oncology, The First Affiliated Hospital, Air Force Medical University, Xi'an, Shanxi, China
- Jie Yang
- Chongqing Industry&Trade Polytechnic, Chongqing, China
- Na Shen
- Hong Kong Shue Yan University, Hong Kong, China
- Qingsong Xu
- Faculty of Science and Technology, University of Macau, Taipa, Macau SAR, China
- Qi Zhao
- Cancer Center, Institute of Translational Medicine, Faculty of Health Sciences, University of Macau, Taipa, Macau SAR, China; MoE Frontiers Science Center for Precision Oncology, University of Macau, Taipa, Macau SAR, China.
34
Pera Ó, Martínez Á, Möhler C, Hamans B, Vega F, Barral F, Becerra N, Jimenez R, Fernandez-Velilla E, Quera J, Algara M. Clinical Validation of Siemens' Syngo.via Automatic Contouring System. Adv Radiat Oncol 2023; 8:101177. [PMID: 36865668 PMCID: PMC9972393 DOI: 10.1016/j.adro.2023.101177]
Abstract
Purpose The manual delineation of organs at risk is a process that requires a great deal of time for both the technician and the physician. Availability of validated software tools assisted by artificial intelligence would be of great benefit, as it would significantly improve the radiation therapy workflow by reducing the time required for segmentation. The purpose of this article is to validate the deep learning-based autocontouring solution integrated in syngo.via RT Image Suite VB40 (Siemens Healthineers, Forchheim, Germany). Methods and Materials For this purpose, we used our own qualitative classification system, RANK, to evaluate more than 600 contours corresponding to 18 different automatically delineated organs at risk. Computed tomography data sets of 95 different patients were included: 30 patients with lung cancer, 30 with breast cancer, and 35 male patients with pelvic cancer. The automatically generated structures were reviewed in the Eclipse Contouring module independently by 3 observers: an expert physician, an expert technician, and a junior physician. Results There is a statistically significant difference between the Dice coefficient associated with RANK 4 and the coefficients associated with RANKs 2 and 3 (P < .001). In total, 64% of the evaluated structures received the maximum score, 4. Only 1% of the structures were classified with the lowest score, 1. The time savings for breast, thorax, and pelvis were 87.6%, 93.5%, and 82.2%, respectively. Conclusions Siemens' syngo.via RT Image Suite offers good autocontouring results and significant time savings.
Affiliation(s)
- Óscar Pera
- Radiation Oncology Department, Hospital del Mar, Barcelona, Spain; Institut Hospital del Mar d'Investigacions Mèdiques, Barcelona, Spain
- Álvaro Martínez
- Radiation Oncology Department, Hospital del Mar, Barcelona, Spain
- Nuria Becerra
- Radiation Oncology Department, Hospital del Mar, Barcelona, Spain
- Rafael Jimenez
- Radiation Oncology Department, Hospital del Mar, Barcelona, Spain
- Enric Fernandez-Velilla
- Radiation Oncology Department, Hospital del Mar, Barcelona, Spain; Institut Hospital del Mar d'Investigacions Mèdiques, Barcelona, Spain
- Jaume Quera
- Radiation Oncology Department, Hospital del Mar, Barcelona, Spain; Institut Hospital del Mar d'Investigacions Mèdiques, Barcelona, Spain
- Manuel Algara
- Radiation Oncology Department, Hospital del Mar, Barcelona, Spain; Institut Hospital del Mar d'Investigacions Mèdiques, Barcelona, Spain; Autonomous University of Barcelona, Barcelona, Spain
35
Hu M, Zhang J, Matkovic L, Liu T, Yang X. Reinforcement learning in medical image analysis: Concepts, applications, challenges, and future directions. J Appl Clin Med Phys 2023; 24:e13898. [PMID: 36626026 PMCID: PMC9924115 DOI: 10.1002/acm2.13898]
Abstract
MOTIVATION Medical image analysis involves a series of tasks used to assist physicians in qualitative and quantitative analyses of lesions or anatomical structures, which can significantly improve the accuracy and reliability of medical diagnoses and prognoses. Traditionally, these tedious tasks were performed by experienced physicians or medical physicists and were marred by two major problems: low efficiency and bias. In the past decade, many machine learning methods have been applied to accelerate and automate the image analysis process. Compared to the enormous deployments of supervised and unsupervised learning models, attempts to use reinforcement learning in medical image analysis are still scarce. We hope that this review article can serve as a stepping stone for related research in the future. SIGNIFICANCE We found that although reinforcement learning has gradually gained momentum in recent years, many researchers in the medical analysis field still find it hard to understand and deploy in clinical settings. One possible cause is a lack of well-organized review articles intended for readers without professional computer science backgrounds. Rather than providing a comprehensive list of all reinforcement learning models applied in medical image analysis, the aim of this review is to help readers formulate and solve their medical image analysis research through the lens of reinforcement learning. APPROACH & RESULTS We selected published articles from Google Scholar and PubMed. Considering the scarcity of related articles, we also included some outstanding recent preprints. The papers were carefully reviewed and categorized according to the type of image analysis task. In this article, we first review the basic concepts and popular models of reinforcement learning. Then, we explore the applications of reinforcement learning models in medical image analysis. Finally, we conclude the article by discussing the limitations of the reviewed reinforcement learning approaches and possible future improvements.
Affiliation(s)
- Mingzhe Hu
- Department of Radiation Oncology, School of Medicine, Emory University, Atlanta, Georgia, USA; Department of Computer Science and Informatics, Emory University, Atlanta, Georgia, USA
- Jiahan Zhang
- Department of Radiation Oncology, School of Medicine, Emory University, Atlanta, Georgia, USA
- Luke Matkovic
- Department of Radiation Oncology, School of Medicine, Emory University, Atlanta, Georgia, USA
- Tian Liu
- Department of Radiation Oncology, School of Medicine, Emory University, Atlanta, Georgia, USA
- Xiaofeng Yang
- Department of Radiation Oncology, School of Medicine, Emory University, Atlanta, Georgia, USA; Department of Computer Science and Informatics, Emory University, Atlanta, Georgia, USA
36
Iyer S, Blair A, White C, Dawes L, Moses D, Sowmya A. Vertebral compression fracture detection using imitation learning, patch based convolutional neural networks and majority voting. Inform Med Unlocked 2023. [DOI: 10.1016/j.imu.2023.101238]
37
Petersson L, Vincent K, Svedberg P, Nygren JM, Larsson I. Ethical considerations in implementing AI for mortality prediction in the emergency department: Linking theory and practice. Digit Health 2023; 9:20552076231206588. [PMID: 37829612 PMCID: PMC10566278 DOI: 10.1177/20552076231206588]
Abstract
Background Artificial intelligence (AI) is predicted to be a solution for improving healthcare, increasing efficiency, and saving time and resources. With the recent attention given to AI, several stakeholders have highlighted the lack of ethical principles for its use in practice. Research has shown an urgent need for more knowledge regarding the ethical implications of AI applications in healthcare. However, fundamental ethical principles may not be sufficient to describe the ethical concerns associated with implementing AI applications. Objective The aim of this study is twofold: (1) to use the implementation of AI applications to predict patient mortality in emergency departments as a setting to explore healthcare professionals' perspectives on ethical issues in relation to ethical principles, and (2) to develop a model to guide ethical considerations in AI implementation in healthcare based on ethical theory. Methods Semi-structured interviews were conducted with 18 participants. The abductive approach used to analyze the empirical data consisted of four steps alternating between inductive and deductive analyses. Results Our findings provide an ethical model demonstrating the need to address six ethical principles (autonomy, beneficence, non-maleficence, justice, explicability, and professional governance) in relation to ethical theories defined as virtue, deontology, and consequentialism when AI applications are to be implemented in clinical practice. Conclusions Ethical aspects of AI applications are broader than the prima facie principles of medical ethics and the principle of explicability. Ethical aspects thus need to be viewed from a broader perspective to cover the different situations that healthcare professionals in general, and physicians in particular, may face when using AI applications in clinical practice.
Affiliation(s)
- Lena Petersson
- School of Health and Welfare, Halmstad University, Halmstad, Sweden
- Kalista Vincent
- School of Health and Welfare, Halmstad University, Halmstad, Sweden
- Petra Svedberg
- School of Health and Welfare, Halmstad University, Halmstad, Sweden
- Jens M Nygren
- School of Health and Welfare, Halmstad University, Halmstad, Sweden
- Ingrid Larsson
- School of Health and Welfare, Halmstad University, Halmstad, Sweden
38
Stember JN, Shalu H. Reinforcement learning using Deep Q networks and Q learning accurately localizes brain tumors on MRI with very small training sets. BMC Med Imaging 2022; 22:224. [PMID: 36564724 PMCID: PMC9784281 DOI: 10.1186/s12880-022-00919-x]
Abstract
BACKGROUND Supervised deep learning in radiology suffers from notorious inherent limitations: (1) it requires large, hand-annotated data sets; (2) it is non-generalizable; and (3) it lacks explainability and intuition. It has recently been proposed that reinforcement learning addresses all three of these limitations. Notable prior work applied deep reinforcement learning to localize brain tumors with radiologist eye-tracking points, which limits the state-action space. Here, we generalize Deep Q Learning to a gridworld-based environment so that only the images and image masks are required. METHODS We trained a Deep Q network on 30 two-dimensional image slices from the BraTS brain tumor database. Each image contained one lesion. We then tested the trained Deep Q network on a separate set of 30 testing set images. For comparison, we also trained and tested a keypoint-detection supervised deep learning network on the same set of training/testing images. RESULTS Whereas the supervised approach quickly overfit the training data and predictably performed poorly on the testing set (11% accuracy), the Deep Q learning approach showed progressively improved generalizability to the testing set over training time, reaching 70% accuracy. CONCLUSION We have successfully applied reinforcement learning to localize brain tumors on 2D contrast-enhanced MRI brain images. This represents a generalization of recent work to a gridworld setting naturally suited to analyzing medical images. We have shown that reinforcement learning does not overfit small training sets and can generalize to a separate testing set.
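The gridworld formulation in this abstract can be illustrated with plain tabular Q-learning on a toy grid. The study itself trains a deep Q network on image inputs; the grid size, lesion position, rewards, and hyperparameters below are illustrative assumptions, showing only the mechanic of an agent stepping over an image grid and being rewarded for reaching the lesion cell:

```python
import random

random.seed(0)

SIZE = 5                      # 5x5 gridworld over an image slice
GOAL = (3, 4)                 # lesion cell (hypothetical position)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

# Tabular Q-values for every (state, action) pair
Q = {((r, c), a): 0.0 for r in range(SIZE) for c in range(SIZE)
     for a in range(len(ACTIONS))}

def step(state, a):
    dr, dc = ACTIONS[a]
    r = min(max(state[0] + dr, 0), SIZE - 1)   # clamp to grid
    c = min(max(state[1] + dc, 0), SIZE - 1)
    nxt = (r, c)
    # +1 for landing on the lesion, small step penalty otherwise
    return nxt, (1.0 if nxt == GOAL else -0.05)

alpha, gamma, eps = 0.5, 0.9, 0.2
for _ in range(2000):                          # training episodes
    s = (0, 0)
    for _ in range(50):
        a = (random.randrange(4) if random.random() < eps
             else max(range(4), key=lambda x: Q[(s, x)]))
        nxt, reward = step(s, a)
        best_next = max(Q[(nxt, x)] for x in range(4))
        # One-step Q-learning (Bellman) update
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = nxt
        if s == GOAL:
            break

# Greedy rollout: the trained policy should walk to the lesion cell
s, path = (0, 0), [(0, 0)]
while s != GOAL and len(path) < 20:
    s, _ = step(s, max(range(4), key=lambda x: Q[(s, x)]))
    path.append(s)
```

In the paper, the tabular Q above is replaced by a network that maps the image patch around the agent to action values, which is what lets the approach generalize to unseen slices.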
Affiliation(s)
- J. N. Stember
- Department of Radiology, Memorial Sloan Kettering Cancer Center, 1275 York Avenue, Box 29, New York, NY 10065 USA
- H. Shalu
- Department of Aerospace Engineering, Indian Institute of Technology Madras, Chennai, 600 036 India
39
Ginn JS, Gay HA, Hilliard J, Shah J, Mistry N, Möhler C, Hugo GD, Hao Y. A clinical and time savings evaluation of a deep learning automatic contouring algorithm. Med Dosim 2022; 48:55-60. [PMID: 36550000 DOI: 10.1016/j.meddos.2022.11.001]
Abstract
Automatic contouring algorithms may streamline clinical workflows by reducing normal organ-at-risk (OAR) contouring time. Here we report the first comprehensive quantitative and qualitative evaluation, along with a time savings assessment, of a prototype deep learning segmentation algorithm from Siemens Healthineers. The accuracy of contours generated by the prototype was evaluated quantitatively using the Sorensen-Dice coefficient (Dice), Jaccard index (JC), and Hausdorff distance (Haus). Normal pelvic and head and neck OAR contours were evaluated retrospectively, comparing the automatic and manual clinical contours in 100 patient cases. Contouring performance outliers were investigated. To quantify the time savings, a certified medical dosimetrist manually contoured de novo and, separately, edited the generated OARs for 10 head and neck and 10 pelvic patients. The automatic, edited, and manually generated contours were visually evaluated and scored by a practicing radiation oncologist on a scale of 1-4, where a higher score indicated better performance. The quantitative comparison revealed high (>0.8) Dice and JC performance for relatively large organs such as the lungs, brain, femurs, and kidneys. Smaller elongated structures that had relatively low Dice and JC values tended to have low Hausdorff distances. Poorly performing outlier cases revealed common anatomical inconsistencies, including overestimation of the bladder and incorrect superior-inferior truncation of the spinal cord and femur contours. In all cases, editing contours was faster than manual contouring, with an average time saving of 43.4%, or 11.8 minutes per patient. The physician scored 240 structures, with >95% of structures receiving a score of 3 or 4. Of the structures reviewed, only 11 needed major revision or to be redone entirely. Our results indicate the evaluated auto-contouring solution has the potential to reduce clinical contouring time. The algorithm's performance is promising, but human review and some editing are required prior to clinical use.
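The overlap metrics used in evaluations like this one reduce to simple set operations on binary masks. A minimal sketch, where sets of pixel coordinates stand in for real contour masks and no specific imaging library is assumed:

```python
def dice_jaccard(mask_a, mask_b):
    """Sorensen-Dice coefficient and Jaccard index for two binary masks,
    each given as a set of (row, col) foreground pixel coordinates."""
    inter = len(mask_a & mask_b)
    union = len(mask_a | mask_b)
    dice = 2.0 * inter / (len(mask_a) + len(mask_b))
    jaccard = inter / union
    return dice, jaccard

# Toy "automatic" vs "manual" contours with partial overlap (hypothetical)
auto = {(0, 0), (0, 1), (1, 0), (1, 1)}
manual = {(1, 0), (1, 1), (2, 0), (2, 1)}
dice, jc = dice_jaccard(auto, manual)
```

Dice and Jaccard are monotonically related (Dice = 2J / (1 + J)), so they rank structures identically; the Hausdorff distance, by contrast, measures worst-case boundary disagreement, which is why thin elongated organs can score low on Dice yet still have small Hausdorff distances.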
Affiliation(s)
- John S Ginn
- Department of Radiation Oncology, Washington University School of Medicine, St. Louis, MO 63110, USA.
- Hiram A Gay
- Department of Radiation Oncology, Washington University School of Medicine, St. Louis, MO 63110, USA
- Jessica Hilliard
- Department of Radiation Oncology, Washington University School of Medicine, St. Louis, MO 63110, USA
- Geoffrey D Hugo
- Department of Radiation Oncology, Washington University School of Medicine, St. Louis, MO 63110, USA
- Yao Hao
- Department of Radiation Oncology, Washington University School of Medicine, St. Louis, MO 63110, USA
40
Marschner S, Datarb M, Gaasch A, Xu Z, Grbic S, Chabin G, Geiger B, Rosenman J, Corradini S, Niyazi M, Heimann T, Möhler C, Vega F, Belka C, Thieke C. A deep image-to-image network organ segmentation algorithm for radiation treatment planning: principles and evaluation. Radiat Oncol 2022; 17:129. [PMID: 35869525 PMCID: PMC9308364 DOI: 10.1186/s13014-022-02102-6]
Abstract
Background We describe and evaluate a deep network algorithm which automatically contours organs at risk in the thorax and pelvis on computed tomography (CT) images for radiation treatment planning. Methods The algorithm identifies the region of interest (ROI) automatically by detecting anatomical landmarks around the specific organs using a deep reinforcement learning technique. The segmentation is restricted to this ROI and performed by a deep image-to-image network (DI2IN) based on a convolutional encoder-decoder architecture combined with multi-level feature concatenation. The algorithm is commercially available in the medical products “syngo.via RT Image Suite VB50” and “AI-Rad Companion Organs RT VA20” (Siemens Healthineers). For evaluation, thoracic CT images of 237 patients and pelvic CT images of 102 patients were manually contoured following the Radiation Therapy Oncology Group (RTOG) guidelines and compared to the DI2IN results using metrics for volume, overlap and distance, e.g., Dice Similarity Coefficient (DSC) and Hausdorff Distance (HD95). The contours were also compared visually slice by slice. Results We observed high correlations between automatic and manual contours. The best results were obtained for the lungs (DSC 0.97, HD95 2.7 mm/2.9 mm for left/right lung), followed by heart (DSC 0.92, HD95 4.4 mm), bladder (DSC 0.88, HD95 6.7 mm) and rectum (DSC 0.79, HD95 10.8 mm). Visual inspection showed excellent agreements with some exceptions for heart and rectum. Conclusions The DI2IN algorithm automatically generated contours for organs at risk close to those by a human expert, making the contouring step in radiation treatment planning simpler and faster. Few cases still required manual corrections, mainly for heart and rectum.
41
Xu L, Zhu S, Wen N. Deep reinforcement learning and its applications in medical imaging and radiation therapy: a survey. Phys Med Biol 2022; 67. [PMID: 36270582 DOI: 10.1088/1361-6560/ac9cb3]
Abstract
Reinforcement learning takes sequential decision-making approaches by learning a policy through trial and error based on interaction with the environment. Combining deep learning and reinforcement learning can empower the agent to learn the interactions and the distribution of rewards from state-action pairs to achieve effective and efficient solutions in more complex and dynamic environments. Deep reinforcement learning (DRL) has demonstrated astonishing performance in surpassing human-level performance in the game domain and many other simulated environments. This paper introduces the basics of reinforcement learning and reviews various categories of DRL algorithms and DRL models developed for medical image analysis and radiation treatment planning optimization. We also discuss the current challenges of DRL and the approaches proposed to make DRL more generalizable and robust in a real-world environment. DRL algorithms, by fostering the design of reward functions, agent interactions, and environment models, can resolve the challenges posed by scarce and heterogeneous annotated medical image data, which has been a major obstacle to implementing deep learning models in the clinic. DRL is an active research area with enormous potential to improve deep learning applications in medical imaging and radiation therapy planning.
Affiliation(s)
- Lanyu Xu
- Department of Computer Science and Engineering, Oakland University, Rochester, MI, United States of America
- Simeng Zhu
- Department of Radiation Oncology, Henry Ford Health Systems, Detroit, MI, United States of America
- Ning Wen
- Department of Radiology/The Institute for Medical Imaging Technology, Ruijin Hospital, Shanghai Jiaotong University School of Medicine, Shanghai, People's Republic of China; The Global Institute of Future Technology, Shanghai Jiaotong University, Shanghai, People's Republic of China
42
Negrillo-Cárdenas J, Jiménez-Pérez JR, Cañada-Oya H, Feito FR, Delgado-Martínez AD. Hybrid curvature-geometrical detection of landmarks for the automatic analysis of the reduction of supracondylar fractures of the femur. Comput Methods Programs Biomed 2022; 226:107177. [PMID: 36242867 DOI: 10.1016/j.cmpb.2022.107177]
Abstract
BACKGROUND AND OBJECTIVE The analysis of the features of certain tissues is required by many procedures of modern medicine, allowing the development of more efficient treatments. The recognition of landmarks allows the planning of orthopedic and trauma surgical procedures, such as the design of prostheses or the treatment of fractures. Traditionally, detection has been carried out by hand, making the workflow inaccurate and tedious. In this paper we propose an automatic algorithm for detecting landmarks of the human femur and an analysis of the quality of the reduction of supracondylar fractures. METHODS The detection of anatomical landmarks follows a knowledge-based approach, consisting of a hybrid strategy: curvature and spatial decomposition. No prior training is required. The analysis of the reduction quality is performed by a side-to-side comparison between the healthy and fractured sides. The pre-clinical validation of the technique consists of a two-stage study: initially, we tested our algorithm on 14 healthy femurs, comparing the output with ground-truth values. Then, a total of 140 virtual fractures were processed to assess the validity of our analysis of the quality of reduction. A two-sample t test and correlation coefficients between metrics and the degree of reduction were employed to determine the reliability of the algorithm. RESULTS The average detection error of landmarks remained below 1.7 mm and 2° (p < 0.01) for points and axes, respectively. Regarding the contralateral analysis, the resulting P-values show that whether a supracondylar fracture is properly reduced can be determined with 95% confidence. Furthermore, the metrics correlate strongly with the quality of the reduction. CONCLUSIONS This research concludes that our technique allows supracondylar fracture reductions of the femur to be classified by analyzing only the detected anatomical landmarks.
An initial training set is not required as input to our algorithm.
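The detection error the abstract reports (below 1.7 mm for points and 2° for axes) boils down to point-distance and axis-angle metrics. A minimal sketch of how such errors can be computed (function names are illustrative, not the authors' code):

```python
import numpy as np

def landmark_error_mm(pred, truth):
    """Euclidean distance (mm) between a detected and a ground-truth landmark."""
    return float(np.linalg.norm(np.asarray(pred, float) - np.asarray(truth, float)))

def axis_error_deg(pred_axis, truth_axis):
    """Unsigned angle (degrees) between a detected and a ground-truth axis.

    The absolute value of the dot product makes the metric insensitive to
    the (arbitrary) orientation of the axis vectors.
    """
    a = np.asarray(pred_axis, float)
    b = np.asarray(truth_axis, float)
    cos = abs(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```

Averaging these two quantities over all detected landmarks and axes gives error summaries directly comparable to the thresholds stated in the results.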
Affiliation(s)
- Francisco R Feito
- Graphics and Geomatics Group of Jaén, University of Jaén, Jaén, Spain
- Alberto D Delgado-Martínez
- Department of Orthopedic Surgery, Complejo Hospitalario de Jaén, Jaén, Spain; Department of Health Sciences, University of Jaén, Jaén, Spain
43
Wang X, Li Y, Wang H, Huang L, Ding S. A Video Summarization Model Based on Deep Reinforcement Learning with Long-Term Dependency. SENSORS (BASEL, SWITZERLAND) 2022; 22:7689. [PMID: 36236789 PMCID: PMC9571073 DOI: 10.3390/s22197689] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/14/2022] [Revised: 09/23/2022] [Accepted: 10/03/2022] [Indexed: 06/16/2023]
Abstract
Deep summarization models have succeeded in the video summarization field thanks to the development of gated recurrent unit (GRU) and long short-term memory (LSTM) technology. However, for long videos, GRUs and LSTMs cannot effectively capture long-term dependencies. This paper proposes a deep summarization network with auxiliary summarization losses to address this problem. We introduce an unsupervised auxiliary summarization loss module with LSTM and a swish activation function to capture the long-term dependencies for video summarization, which can be easily integrated with various networks. The proposed model is an unsupervised deep reinforcement learning framework that does not depend on any labels or user interactions. Additionally, we implement a reward function (R(S)) that jointly considers the consistency, diversity, and representativeness of generated summaries. Furthermore, the proposed model is lightweight, can be successfully deployed on mobile devices, and can enhance the experience of mobile users while reducing pressure on server operations. We conducted experiments on two benchmark datasets, and the results demonstrate that our proposed unsupervised approach obtains better summaries than existing video summarization methods, generating F scores nearly 6.3% higher on the SumMe dataset and 2.2% higher on the TVSum dataset compared to the DR-DSN model.
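The reward R(S) described above is in the family of DR-DSN-style rewards; a rough NumPy sketch of the diversity and representativeness terms (the consistency term is omitted here, and the equal weighting is an assumption, not the paper's exact formulation):

```python
import numpy as np

def diversity_reward(feats, selected):
    """Mean pairwise cosine dissimilarity among the selected frame features."""
    sel = feats[selected]
    if len(selected) < 2:
        return 0.0
    sel = sel / np.linalg.norm(sel, axis=1, keepdims=True)
    sim = sel @ sel.T
    off_diag = sim[~np.eye(len(selected), dtype=bool)]
    return float(np.mean(1.0 - off_diag))

def representativeness_reward(feats, selected):
    """exp(-mean distance of every frame to its nearest selected frame)."""
    d = np.linalg.norm(feats[:, None, :] - feats[None, selected, :], axis=2)
    return float(np.exp(-d.min(axis=1).mean()))

def summary_reward(feats, selected):
    """Combined reward; equal weighting of the two terms is an assumption."""
    return diversity_reward(feats, selected) + representativeness_reward(feats, selected)
```

In an RL training loop, `selected` would be the frames chosen by the sampled policy, and `summary_reward` would drive the policy-gradient update.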
44
Yan K, Cai J, Jin D, Miao S, Guo D, Harrison AP, Tang Y, Xiao J, Lu J, Lu L. SAM: Self-Supervised Learning of Pixel-Wise Anatomical Embeddings in Radiological Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:2658-2669. [PMID: 35442886 DOI: 10.1109/tmi.2022.3169003] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Radiological images such as computed tomography (CT) and X-rays render anatomy with intrinsic structure. Being able to reliably locate the same anatomical structure across varying images is a fundamental task in medical image analysis. In principle, landmark detection or semantic segmentation could be used for this task, but to work well these require large numbers of labeled examples for each anatomical structure and sub-structure of interest. A more universal approach would learn the intrinsic structure from unlabeled images. We introduce such an approach, called Self-supervised Anatomical eMbedding (SAM). SAM generates a semantic embedding for each image pixel that describes its anatomical location or body part. To produce such embeddings, we propose a pixel-level contrastive learning framework. A coarse-to-fine strategy ensures that both global and local anatomical information is encoded. Negative-sample selection strategies are designed to enhance the embeddings' discriminability. Using SAM, one can label any point of interest on a template image and then locate the same body part in other images by simple nearest-neighbor search. We demonstrate the effectiveness of SAM in multiple tasks with 2D and 3D image modalities. On a chest CT dataset with 19 landmarks, SAM outperforms widely used registration algorithms while taking only 0.23 seconds for inference. On two X-ray datasets, SAM, with only one labeled template image, surpasses supervised methods trained on 50 labeled images. We also apply SAM to whole-body follow-up lesion matching in CT and obtain an accuracy of 91%. SAM can also be applied to improve image registration and to initialize CNN weights.
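The nearest-neighbor lookup that SAM enables (label a point once on a template, then find it in other images) can be sketched as a cosine-similarity search over per-pixel embeddings (a simplified illustration, not the released SAM code):

```python
import numpy as np

def locate(template_emb, query_emb, point):
    """Find the position in `query_emb` whose embedding best matches
    the embedding at `point` in `template_emb` (cosine similarity).

    Both arrays have shape (..., C): spatial dimensions followed by an
    embedding channel of size C.
    """
    v = template_emb[point]                       # embedding at the labeled point
    v = v / np.linalg.norm(v)
    flat = query_emb.reshape(-1, query_emb.shape[-1])
    flat = flat / np.linalg.norm(flat, axis=1, keepdims=True)
    idx = int(np.argmax(flat @ v))                # best cosine match
    return np.unravel_index(idx, query_emb.shape[:-1])
```

With embeddings precomputed, this reduces landmark transfer to a single matrix-vector product, consistent with the sub-second inference time reported above.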
45
Stember JN, Shalu H. Deep Reinforcement Learning with Automated Label Extraction from Clinical Reports Accurately Classifies 3D MRI Brain Volumes. J Digit Imaging 2022; 35:1143-1152. [PMID: 35562633 PMCID: PMC9582186 DOI: 10.1007/s10278-022-00644-5] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2021] [Revised: 03/02/2022] [Accepted: 04/20/2022] [Indexed: 01/12/2023] Open
Abstract
Image classification is arguably the most fundamental task in radiology artificial intelligence. To reduce the burden of acquiring and labeling data sets, we employed a two-pronged strategy. In Part 1, we automatically extracted labels from radiology reports. In Part 2, we used the labels to train a data-efficient reinforcement learning (RL) classifier. We applied the approach to a small set of patient images and radiology reports from our institution. For Part 1, we trained sentence-BERT (SBERT) on 90 radiology reports. In Part 2, we used the labels from the trained SBERT to train an RL-based classifier. We trained the classifier on a training set of [Formula: see text] images and tested on a separate collection of [Formula: see text] images. For comparison, we also trained and tested a supervised deep learning (SDL) classification network on the same training and testing images using the same labels. Part 1: the trained SBERT model improved from 82 to [Formula: see text] accuracy. Part 2: using Part 1's computed labels, SDL quickly overfitted the small training set. Whereas SDL showed the worst possible testing-set accuracy of 50%, RL achieved [Formula: see text] testing-set accuracy, with a [Formula: see text]-value of [Formula: see text]. We have shown a proof-of-principle application of automated label extraction from radiological reports. Additionally, we have built on prior work applying RL to classification using these labels, extending from 2D slices to entire 3D image volumes. RL has again demonstrated a remarkable ability to train effectively, in a generalized manner, from small training sets.
Affiliation(s)
- Hrithwik Shalu
- Indian Institute of Technology, Madras, Chennai, India, 600036
46
Yang Y, Hu Y, Zhang X, Wang S. Two-Stage Selective Ensemble of CNN via Deep Tree Training for Medical Image Classification. IEEE TRANSACTIONS ON CYBERNETICS 2022; 52:9194-9207. [PMID: 33705343 DOI: 10.1109/tcyb.2021.3061147] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Medical image classification is an important task in computer-aided diagnosis systems. Its performance is critically determined by the descriptiveness and discriminative power of the features extracted from images. With the rapid development of deep learning, deep convolutional neural networks (CNNs) have been widely used to learn optimal high-level features from the raw pixels of images for a given classification task. However, due to the limited number of labeled medical images, often affected by quality distortions, such techniques suffer from training difficulties, including overfitting, poor local optima, and vanishing gradients. To solve these problems, in this article we propose a two-stage selective ensemble of CNN branches trained via a novel strategy called deep tree training (DTT). In our approach, DTT jointly trains a series of networks constructed from the hidden layers of a CNN in a hierarchical manner. This has the advantage that vanishing gradients can be mitigated by supplementing gradients to the hidden layers of the CNN, and base classifiers on the middle-level features are obtained intrinsically with minimal computational burden for an ensemble solution. Moreover, the CNN branches serving as base learners are combined into the optimal classifier via the proposed two-stage selective ensemble approach based on both accuracy and diversity criteria. Extensive experiments on the CIFAR-10 benchmark and two specific medical image datasets show that our approach achieves better performance in terms of accuracy, sensitivity, specificity, and F1 score.
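A greedy accuracy-plus-diversity selection of base classifiers, in the spirit of the two-stage selective ensemble described above, might look like the following sketch (the scoring rule and equal weighting are assumptions, not the paper's exact criteria):

```python
import numpy as np

def select_ensemble(preds, labels, k):
    """Greedily pick k base classifiers, trading off individual accuracy
    against diversity (mean disagreement with already-chosen members).

    `preds` is a list of per-classifier prediction arrays; `labels` is the
    ground-truth label array.
    """
    accs = [float((p == labels).mean()) for p in preds]
    chosen = [int(np.argmax(accs))]          # seed with the most accurate branch
    while len(chosen) < k:
        best_i, best_score = None, -1.0
        for i in range(len(preds)):
            if i in chosen:
                continue
            div = float(np.mean([(preds[i] != preds[j]).mean() for j in chosen]))
            score = accs[i] + div            # equal weighting: an assumption
            if score > best_score:
                best_i, best_score = i, score
        chosen.append(best_i)
    return chosen
```

The selected members would then be combined (e.g. by majority vote) into the final ensemble classifier.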
47
Pradella M, Achermann R, Sperl JI, Kärgel R, Rapaka S, Cyriac J, Yang S, Sommer G, Stieltjes B, Bremerich J, Brantner P, Sauter AW. Performance of a deep learning tool to detect missed aortic dilatation in a large chest CT cohort. Front Cardiovasc Med 2022; 9:972512. [PMID: 36072871 PMCID: PMC9441594 DOI: 10.3389/fcvm.2022.972512] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2022] [Accepted: 08/08/2022] [Indexed: 11/13/2022] Open
Abstract
PURPOSE Thoracic aortic (TA) dilatation (TAD) is a risk factor for acute aortic syndrome and must therefore be reported in every CT report. However, the complex anatomy of the thoracic aorta impedes TAD detection. We investigated the performance of a deep learning (DL) prototype, built to measure TA diameters, as a secondary reading tool in a large-scale cohort. MATERIAL AND METHODS Consecutive contrast-enhanced (CE) and non-CE chest CT exams with "normal" TA diameters according to their radiology reports were included. The DL prototype (AIRad, Siemens Healthineers, Germany) measured the TA at nine locations according to AHA guidelines. Dilatation was defined as >45 mm at the aortic sinus, sinotubular junction (STJ), ascending aorta (AA), and proximal arch, and >40 mm from the mid arch to the abdominal aorta. A cardiovascular radiologist reviewed all cases with TAD according to AIRad. Multivariable logistic regression (MLR) was used to identify factors (demographics and scan parameters) associated with TAD classification by AIRad. RESULTS 18,243 CT scans (45.7% female) were successfully analyzed by AIRad. Mean age was 62.3 ± 15.9 years and 12,092 (66.3%) were CE scans. AIRad confirmed normal diameters in 17,239 exams (94.5%) and reported TAD in 1,004/18,243 exams (5.5%). Review confirmed the TAD classification in 452/1,004 exams (45.0%; 2.5% of the total); 552 cases were false positives, but these were easily identified using AIRad's visual outputs. MLR revealed that the following factors were significantly associated with correct TAD classification by AIRad: TAD reported at the AA [odds ratio (OR): 1.12, p < 0.001] and STJ (OR: 1.09, p = 0.002), TAD found at >1 location (OR: 1.42, p = 0.008), CE exams (OR: 2.1-3.1, p < 0.05), men (OR: 2.4, p = 0.003), and patients with higher BMI (OR: 1.05, p = 0.01). Overall, 17,691/18,243 (97.0%) exams were correctly classified. CONCLUSIONS AIRad correctly assessed the presence or absence of TAD in 17,691 exams (97%), including 452 cases with previously missed TAD, independent of contrast protocol. These findings suggest its usefulness as a secondary reading tool for improving report quality and efficiency.
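The dilatation definition used in the study (>45 mm at the sinus through the proximal arch, >40 mm from the mid arch onward) reduces to a per-location threshold check; a sketch using those cut-offs (the location names are illustrative, not AIRad's API):

```python
# Cut-offs from the study definition; location keys are illustrative labels
# for the nine AHA measurement locations.
THRESHOLDS_MM = {
    "aortic_sinus": 45.0,
    "sinotubular_junction": 45.0,
    "ascending_aorta": 45.0,
    "proximal_arch": 45.0,
    "mid_arch": 40.0,
    "distal_arch": 40.0,
    "descending_aorta": 40.0,
    "diaphragm": 40.0,
    "abdominal_aorta": 40.0,
}

def dilated_locations(measurements_mm):
    """Return the measurement locations whose diameter exceeds its cut-off."""
    return [loc for loc, d in measurements_mm.items() if d > THRESHOLDS_MM[loc]]
```

A scan would be flagged for secondary review whenever this list is non-empty, which is the role the prototype plays in the workflow described above.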
Affiliation(s)
- Maurice Pradella
- Department of Radiology, Clinic of Radiology & Nuclear Medicine, University Hospital Basel, University of Basel, Basel, Switzerland
- Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, IL, United States
- Correspondence: Maurice Pradella
- Rita Achermann
- Department of Radiology, Clinic of Radiology & Nuclear Medicine, University Hospital Basel, University of Basel, Basel, Switzerland
- Joshy Cyriac
- Department of Radiology, Clinic of Radiology & Nuclear Medicine, University Hospital Basel, University of Basel, Basel, Switzerland
- Shan Yang
- Department of Radiology, Clinic of Radiology & Nuclear Medicine, University Hospital Basel, University of Basel, Basel, Switzerland
- Gregor Sommer
- Department of Radiology, Clinic of Radiology & Nuclear Medicine, University Hospital Basel, University of Basel, Basel, Switzerland
- Hirslanden Klinik St. Anna, Luzern, Switzerland
- Bram Stieltjes
- Department of Radiology, Clinic of Radiology & Nuclear Medicine, University Hospital Basel, University of Basel, Basel, Switzerland
- Jens Bremerich
- Department of Radiology, Clinic of Radiology & Nuclear Medicine, University Hospital Basel, University of Basel, Basel, Switzerland
- Philipp Brantner
- Department of Radiology, Clinic of Radiology & Nuclear Medicine, University Hospital Basel, University of Basel, Basel, Switzerland
- Regional Hospitals Rheinfelden and Laufenburg, Rheinfelden, Switzerland
- Alexander W. Sauter
- Department of Radiology, Clinic of Radiology & Nuclear Medicine, University Hospital Basel, University of Basel, Basel, Switzerland
- Department of Radiology, University Hospital Tuebingen, University of Tuebingen, Tuebingen, Germany
48
Garrido-Oliver J, Aviles J, Córdova MM, Dux-Santoy L, Ruiz-Muñoz A, Teixido-Tura G, Maso Talou GD, Morales Ferez X, Jiménez G, Evangelista A, Ferreira-González I, Rodriguez-Palomares J, Camara O, Guala A. Machine learning for the automatic assessment of aortic rotational flow and wall shear stress from 4D flow cardiac magnetic resonance imaging. Eur Radiol 2022; 32:7117-7127. [PMID: 35976395 DOI: 10.1007/s00330-022-09068-9] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2022] [Revised: 06/09/2022] [Accepted: 07/26/2022] [Indexed: 11/26/2022]
Abstract
OBJECTIVE Three-dimensional (3D) time-resolved phase-contrast cardiac magnetic resonance (4D flow CMR) allows for unparalleled quantification of blood velocity. Despite established potential in aortic diseases, the analysis is time-consuming and requires expert knowledge, hindering clinical application. The present research aimed to develop and test a fully automatic machine learning-based pipeline for aortic 4D flow CMR analysis. METHODS Four hundred and four subjects were prospectively included. Ground truth to train the algorithms was generated by experts. The cohort was divided into training (323 patients) and testing (81 patients) sets, used to train and test a 3D nnU-Net for segmentation and a Deep Q-Network algorithm for landmark detection. In-plane (IRF) and through-plane (SFRR) rotational flow descriptors and axial and circumferential wall shear stress (WSS) were computed at ten planes covering the ascending aorta and arch. RESULTS Automatic aortic segmentation achieved a median Dice score of 0.949 and an average symmetric surface distance of 0.839 (0.632-1.071) mm, comparable with the state of the art. Aortic landmarks were located with a precision comparable to that of experts at the sinotubular junction and the first and third supra-aortic vessels (p = 0.513, 0.592, and 0.905, respectively) but with lower precision at the pulmonary bifurcation (p = 0.028), resulting in precise localisation of analysis planes. Automatic flow assessment showed excellent (ICC > 0.9) agreement with manual quantification of SFRR and good-to-excellent agreement (ICC > 0.75) for IRF and axial and circumferential WSS. CONCLUSION Fully automatic analysis of complex aortic flow dynamics from 4D flow CMR is feasible. Its implementation could foster the clinical use of 4D flow CMR.
KEY POINTS • 4D flow CMR allows for unparalleled aortic blood flow analysis but requires aortic segmentation and anatomical landmark identification, which are time-consuming, limiting 4D flow CMR widespread use. • A fully automatic machine learning pipeline for aortic 4D flow CMR analysis was trained with data of 323 patients and tested in 81 patients, ensuring a balanced distribution of aneurysm aetiologies. • Automatic assessment of complex flow characteristics such as rotational flow and wall shear stress showed good-to-excellent agreement with manual quantification.
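The Dice score used above to evaluate segmentation quality is the standard overlap measure between a predicted and a reference mask; a minimal implementation:

```python
import numpy as np

def dice_score(a, b):
    """Dice similarity coefficient between two binary masks of equal shape."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:                  # both masks empty: define as perfect overlap
        return 1.0
    return float(2.0 * np.logical_and(a, b).sum() / denom)
```

A score of 1.0 means perfect overlap and 0.0 no overlap, so the reported median of 0.949 indicates near-complete agreement with the expert segmentations.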
Affiliation(s)
- Jordina Aviles
- Physense, BCN Medtech, Department of Information and Communications Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Marcos Mejía Córdova
- Physense, BCN Medtech, Department of Information and Communications Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Gisela Teixido-Tura
- Vall d'Hebron Institute of Research, Barcelona, Spain
- Department of Cardiology, Hospital Vall d'Hebron Universitat Autonoma de Barcelona, Barcelona, Spain
- CIBER-CV, Instituto de Salud Carlos III, Madrid, Spain
- Xabier Morales Ferez
- Physense, BCN Medtech, Department of Information and Communications Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Guillermo Jiménez
- Physense, BCN Medtech, Department of Information and Communications Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Arturo Evangelista
- Vall d'Hebron Institute of Research, Barcelona, Spain
- Department of Cardiology, Hospital Vall d'Hebron Universitat Autonoma de Barcelona, Barcelona, Spain
- CIBER-CV, Instituto de Salud Carlos III, Madrid, Spain
- Ignacio Ferreira-González
- Vall d'Hebron Institute of Research, Barcelona, Spain
- Department of Cardiology, Hospital Vall d'Hebron Universitat Autonoma de Barcelona, Barcelona, Spain
- CIBER-ESP, Instituto de Salud Carlos III, Madrid, Spain
- Universitat Autonoma de Barcelona, Bellaterra, Spain
- Jose Rodriguez-Palomares
- Vall d'Hebron Institute of Research, Barcelona, Spain
- Department of Cardiology, Hospital Vall d'Hebron Universitat Autonoma de Barcelona, Barcelona, Spain
- CIBER-CV, Instituto de Salud Carlos III, Madrid, Spain
- Universitat Autonoma de Barcelona, Bellaterra, Spain
- Oscar Camara
- Physense, BCN Medtech, Department of Information and Communications Technologies, Universitat Pompeu Fabra, Barcelona, Spain.
- Andrea Guala
- Vall d'Hebron Institute of Research, Barcelona, Spain.
- CIBER-CV, Instituto de Salud Carlos III, Madrid, Spain.
49
Body landmark detection with an extremely small dataset using transfer learning. Pattern Anal Appl 2022. [DOI: 10.1007/s10044-022-01098-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/15/2022]
50
Jang S, Kim HI. Entropy-Aware Model Initialization for Effective Exploration in Deep Reinforcement Learning. SENSORS (BASEL, SWITZERLAND) 2022; 22:5845. [PMID: 35957399 PMCID: PMC9371101 DOI: 10.3390/s22155845] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/27/2022] [Revised: 07/25/2022] [Accepted: 08/02/2022] [Indexed: 06/15/2023]
Abstract
Effective exploration is one of the critical factors affecting performance in deep reinforcement learning. Agents acquire the data needed to learn the optimal policy through exploration; if exploration is not guaranteed, data quality deteriorates, which leads to performance degradation. This study investigates the effect of initial entropy, which significantly influences exploration, especially in the early learning stage. Our results on tasks with discrete action spaces show that (1) low initial entropy increases the probability of learning failure, (2) the distributions of initial entropy across tasks are biased towards low values that inhibit exploration, and (3) initial entropy varies with both the initial weights and the task, making it hard to control. We then devise a simple yet powerful learning strategy to deal with these limitations: entropy-aware model initialization. The proposed algorithm provides a model with high initial entropy to a deep reinforcement learning algorithm for effective exploration. Our experiments show that this strategy significantly reduces learning failures and enhances performance, stability, and learning speed.
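The core idea, re-drawing initial weights until the starting policy's entropy is high enough, can be sketched as follows (function names and the rejection-sampling loop are illustrative, not the authors' exact procedure):

```python
import numpy as np

def policy_entropy(logits):
    """Shannon entropy (nats) of the softmax policy induced by `logits`."""
    z = np.asarray(logits, float)
    z = z - z.max()                      # numerically stable softmax
    p = np.exp(z)
    p /= p.sum()
    return float(-(p * np.log(p + 1e-12)).sum())

def entropy_aware_init(init_fn, min_entropy, max_tries=100):
    """Re-draw initial parameters until the starting policy has at least
    `min_entropy`; falls back to the last draw after `max_tries` attempts."""
    logits = init_fn()
    for _ in range(max_tries):
        if policy_entropy(logits) >= min_entropy:
            break
        logits = init_fn()
    return logits
```

A uniform policy over n actions has the maximum entropy log(n), so `min_entropy` would typically be set as a fraction of that bound.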
Affiliation(s)
- Sooyoung Jang
- Intelligence Convergence Research Laboratory, Electronics and Telecommunications Research Institute (ETRI), Daejeon 34129, Korea
- Hyung-Il Kim
- Artificial Intelligence Research Laboratory, Electronics and Telecommunications Research Institute (ETRI), Daejeon 34129, Korea