1
Glielmo P, Fusco S, Gitto S, Zantonelli G, Albano D, Messina C, Sconfienza LM, Mauri G. Artificial intelligence in interventional radiology: state of the art. Eur Radiol Exp 2024;8:62. PMID: 38693468; PMCID: PMC11063019; DOI: 10.1186/s41747-024-00452-2.
Abstract
Artificial intelligence (AI) has demonstrated great potential in a wide variety of applications in interventional radiology (IR). Support for decision-making and outcome prediction, new functions and improvements in fluoroscopy, ultrasound, computed tomography, and magnetic resonance imaging, specifically in the field of IR, have all been investigated. Furthermore, AI represents a significant boost for fusion imaging and simulated reality, robotics, touchless software interactions, and virtual biopsy. The procedural nature, heterogeneity, and lack of standardisation slow down the process of adoption of AI in IR. Research in AI is in its early stages, as current literature is based on pilot or proof-of-concept studies. The full range of possibilities is yet to be explored. RELEVANCE STATEMENT Exploring AI's transformative potential, this article assesses its current applications and challenges in IR, offering insights into decision support and outcome prediction, imaging enhancements, robotics, and touchless interactions, shaping the future of patient care. KEY POINTS • AI adoption in IR is more complex compared to diagnostic radiology. • Current literature about AI in IR is in its early stages. • AI has the potential to revolutionise every aspect of IR.
Affiliation(s)
- Pierluigi Glielmo
- Dipartimento di Scienze Biomediche per la Salute, Università degli Studi di Milano, Via Mangiagalli, 31, 20133, Milan, Italy.
- Stefano Fusco
- Dipartimento di Scienze Biomediche per la Salute, Università degli Studi di Milano, Via Mangiagalli, 31, 20133, Milan, Italy
- Salvatore Gitto
- Dipartimento di Scienze Biomediche per la Salute, Università degli Studi di Milano, Via Mangiagalli, 31, 20133, Milan, Italy
- IRCCS Istituto Ortopedico Galeazzi, Via Cristina Belgioioso, 173, 20157, Milan, Italy
- Giulia Zantonelli
- Dipartimento di Scienze Biomediche per la Salute, Università degli Studi di Milano, Via Mangiagalli, 31, 20133, Milan, Italy
- Domenico Albano
- IRCCS Istituto Ortopedico Galeazzi, Via Cristina Belgioioso, 173, 20157, Milan, Italy
- Dipartimento di Scienze Biomediche, Chirurgiche ed Odontoiatriche, Università degli Studi di Milano, Via della Commenda, 10, 20122, Milan, Italy
- Carmelo Messina
- Dipartimento di Scienze Biomediche per la Salute, Università degli Studi di Milano, Via Mangiagalli, 31, 20133, Milan, Italy
- IRCCS Istituto Ortopedico Galeazzi, Via Cristina Belgioioso, 173, 20157, Milan, Italy
- Luca Maria Sconfienza
- Dipartimento di Scienze Biomediche per la Salute, Università degli Studi di Milano, Via Mangiagalli, 31, 20133, Milan, Italy
- IRCCS Istituto Ortopedico Galeazzi, Via Cristina Belgioioso, 173, 20157, Milan, Italy
- Giovanni Mauri
- Divisione di Radiologia Interventistica, IEO, IRCCS Istituto Europeo di Oncologia, Milan, Italy
2
Schweingruber N, Bremer J, Wiehe A, Mader MMD, Mayer C, Woo MS, Kluge S, Grensemann J, Quandt F, Gempt J, Fischer M, Thomalla G, Gerloff C, Sauvigny J, Czorlich P. Early prediction of ventricular peritoneal shunt dependency in aneurysmal subarachnoid haemorrhage patients by recurrent neural network-based machine learning using routine intensive care unit data. J Clin Monit Comput 2024. PMID: 38512361; DOI: 10.1007/s10877-024-01151-4.
Abstract
Aneurysmal subarachnoid haemorrhage (aSAH) can lead to complications such as acute hydrocephalic congestion. Treatment of this acute condition often includes establishing an external ventricular drainage (EVD). However, chronic hydrocephalus develops in some patients, who then require placement of a permanent ventriculoperitoneal (VP) shunt. The aim of this study was to employ recurrent neural network (RNN)-based machine learning techniques to identify patients who require VP shunt placement at an early stage. This retrospective single-centre study included all patients who were diagnosed with aSAH and treated in the intensive care unit (ICU) between November 2010 and May 2020 (n = 602). More than 120 parameters were analysed, including routine neurocritical care data, vital signs and blood gas analyses. Various machine learning techniques, including RNNs and gradient boosting machines, were evaluated for their ability to predict VP shunt dependency. VP-shunt dependency could be predicted using an RNN after just one day of ICU stay, with an AUC-ROC of 0.77 (CI: 0.75-0.79). The accuracy of the prediction improved after four days of observation (Day 4: AUC-ROC 0.81, CI: 0.79-0.84). At that point, the accuracy of the prediction was 76% (CI: 75.98-83.09%), with a sensitivity of 85% (CI: 83-88%) and a specificity of 74% (CI: 71-78%). RNN-based machine learning has the potential to predict VP shunt dependency on Day 4 after ictus in aSAH patients using routine data collected in the ICU. The use of machine learning may allow early identification of patients with specific therapeutic needs and accelerate the execution of required procedures.
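As a rough illustration of how the discrimination metrics reported above can be computed from per-patient model outputs, the snippet below derives AUC-ROC, sensitivity, specificity, and accuracy with scikit-learn. It is a minimal sketch on made-up predictions, not the authors' RNN pipeline; the variable names and the 0.5 decision threshold are assumptions for the example.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Toy stand-ins for one observation day: y_true = 1 if the patient later
# needed a VP shunt, y_prob = model-predicted probability of shunt dependency.
rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, size=200)
y_prob = np.clip(0.3 * y_true + rng.normal(0.4, 0.25, size=200), 0, 1)

auc = roc_auc_score(y_true, y_prob)

y_pred = (y_prob >= 0.5).astype(int)                 # assumed decision threshold
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"AUC-ROC {auc:.2f}, sensitivity {sensitivity:.2f}, "
      f"specificity {specificity:.2f}, accuracy {accuracy:.2f}")
```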
Affiliation(s)
- Nils Schweingruber
- Department of Neurology, University Medical Center Hamburg-Eppendorf, 20246, Hamburg, Germany
- Jan Bremer
- Department of Neurology, University Medical Center Hamburg-Eppendorf, 20246, Hamburg, Germany
- Anton Wiehe
- Department of Neurology, University Medical Center Hamburg-Eppendorf, 20246, Hamburg, Germany
- Department of Informatics, University of Hamburg, 22527, Hamburg, Germany
- Marius Marc-Daniel Mader
- Department of Neurosurgery, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246, Hamburg, Germany
- Institute for Stem Cell Biology and Regenerative Medicine, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Christina Mayer
- Department of Neurology, University Medical Center Hamburg-Eppendorf, 20246, Hamburg, Germany
- Institute of Neuroimmunology and Multiple Sclerosis (INIMS), Center for Molecular Neurobiology Hamburg (ZMNH), University Medical Center Hamburg-Eppendorf, 20246, Hamburg, Germany
- Marcel Seungsu Woo
- Department of Neurology, University Medical Center Hamburg-Eppendorf, 20246, Hamburg, Germany
- Institute of Neuroimmunology and Multiple Sclerosis (INIMS), Center for Molecular Neurobiology Hamburg (ZMNH), University Medical Center Hamburg-Eppendorf, 20246, Hamburg, Germany
- Stefan Kluge
- Department of Intensive Care Medicine, University Medical Center Hamburg-Eppendorf, 20246, Hamburg, Germany
- Jörn Grensemann
- Department of Intensive Care Medicine, University Medical Center Hamburg-Eppendorf, 20246, Hamburg, Germany
- Fanny Quandt
- Department of Neurology, University Medical Center Hamburg-Eppendorf, 20246, Hamburg, Germany
- Jens Gempt
- Department of Neurosurgery, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246, Hamburg, Germany
- Marlene Fischer
- Department of Intensive Care Medicine, University Medical Center Hamburg-Eppendorf, 20246, Hamburg, Germany
- Götz Thomalla
- Department of Neurology, University Medical Center Hamburg-Eppendorf, 20246, Hamburg, Germany
- Christian Gerloff
- Department of Neurology, University Medical Center Hamburg-Eppendorf, 20246, Hamburg, Germany
- Jennifer Sauvigny
- Department of Neurosurgery, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246, Hamburg, Germany
- Patrick Czorlich
- Department of Neurosurgery, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246, Hamburg, Germany.
3
Liebert A, Das BK, Kapsner LA, Eberle J, Skwierawska D, Folle L, Schreiter H, Laun FB, Ohlmeyer S, Uder M, Wenkel E, Bickelhaupt S. Smart forecasting of artifacts in contrast-enhanced breast MRI before contrast agent administration. Eur Radiol 2023. PMID: 38099964; DOI: 10.1007/s00330-023-10469-7.
Abstract
OBJECTIVES To evaluate whether artifacts on contrast-enhanced (CE) breast MRI maximum intensity projections (MIPs) might already be forecast before gadolinium-based contrast agent (GBCA) administration during an ongoing examination by analyzing the unenhanced T1-weighted images acquired before the GBCA injection. MATERIALS AND METHODS This IRB-approved retrospective analysis consisted of n = 2884 breast CE MRI examinations after intravenous administration of GBCA, acquired with n = 4 different MRI devices at different field strengths (1.5 T/3 T) during clinical routine. CE-derived subtraction MIPs were used to conduct a multi-class multi-reader evaluation of the presence and severity of artifacts with three independent readers. An ensemble classifier (EC) of five DenseNet models was used to predict artifacts for the post-contrast subtraction MIPs, using as the only input source the pre-contrast T1-weighted sequence, i.e., the acquisition directly preceding the GBCA injection. The area under the ROC curve (AuROC) and diagnostic accuracy scores were used to assess the performance of the neural network in an independent holdout test set (n = 285). RESULTS After majority voting, potentially significant artifacts were detected in 53.6% (n = 1521) of all breast MRI examinations (age 49.6 ± 12.6 years). In the holdout test set (mean age 49.7 ± 11.8 years), at a specificity level of 89%, the EC could forecast around one-third of artifacts (sensitivity 31%) before GBCA administration, with an AuROC of 0.66. CONCLUSION This study demonstrates the capability of a neural network to forecast the occurrence of artifacts on CE subtraction data before GBCA administration. If confirmed in larger studies, this might enable a workflow-blended approach to prevent breast MRI artifacts by implementing in-scan personalized predictive algorithms. CLINICAL RELEVANCE STATEMENT Some artifacts in contrast-enhanced breast MRI maximum intensity projections might be predictable before gadolinium-based contrast agent injection using a neural network. KEY POINTS • Potentially significant artifacts can be observed in a relevant proportion of breast MRI subtraction sequences after gadolinium-based contrast agent (GBCA) administration. • Forecasting the occurrence of such artifacts in subtraction maximum intensity projections before GBCA administration for individual patients was feasible at 89% specificity, which allowed correctly predicting one in three future artifacts. • Further research is necessary to investigate the clinical value of such smart personalized imaging approaches.
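The sensitivity quoted above is read off at a fixed 89% specificity operating point. The sketch below shows one way such an operating point can be selected from a ROC curve; it uses synthetic scores, not the study's ensemble outputs, and the selection rule is an assumption for illustration.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=285)                 # 1 = artifact present on the subtraction MIP
scores = np.clip(0.45 + 0.15 * y_true + rng.normal(0, 0.2, size=285), 0, 1)

fpr, tpr, thresholds = roc_curve(y_true, scores)
target_specificity = 0.89
admissible = fpr <= (1.0 - target_specificity)        # specificity = 1 - FPR
best = np.argmax(tpr[admissible])                     # highest sensitivity among admissible points
print("AuROC:", round(roc_auc_score(y_true, scores), 2))
print("threshold:", round(float(thresholds[admissible][best]), 3),
      "sensitivity:", round(float(tpr[admissible][best]), 2),
      "specificity:", round(float(1 - fpr[admissible][best]), 2))
```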
Affiliation(s)
- Andrzej Liebert
- Institute of Radiology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany.
- Badhan K Das
- Institute of Radiology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
- Lorenz A Kapsner
- Institute of Radiology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
- Medical Center for Information and Communication Technology, Universitätsklinikum Erlangen, Erlangen, Germany
- Jessica Eberle
- Institute of Radiology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
- Dominika Skwierawska
- Institute of Radiology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
- Lukas Folle
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
- Hannes Schreiter
- Institute of Radiology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
- Frederik B Laun
- Institute of Radiology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
- Sabine Ohlmeyer
- Institute of Radiology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
- Michael Uder
- Institute of Radiology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
- Evelyn Wenkel
- Medizinische Fakultät, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
- Radiologie München, München, Germany
- Sebastian Bickelhaupt
- Institute of Radiology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
- German Cancer Research Center (DKFZ), Heidelberg, Germany
4
Krüger J, Opfer R, Spies L, Hedderich D, Buchert R. Voxel-based morphometry in single subjects without a scanner-specific normal database using a convolutional neural network. Eur Radiol 2023. PMID: 37943313; DOI: 10.1007/s00330-023-10356-1.
Abstract
OBJECTIVES Reliable detection of disease-specific atrophy in individual T1w-MRI by voxel-based morphometry (VBM) requires scanner-specific normal databases (NDB), which often are not available. The aim of this retrospective study was to design, train, and test a deep convolutional neural network (CNN) for single-subject VBM without the need for a NDB (CNN-VBM). MATERIALS AND METHODS The training dataset comprised 8945 T1w scans from 65 different scanners. The gold standard VBM maps were obtained by conventional VBM with a scanner-specific NDB for each of the 65 scanners. CNN-VBM was tested in an independent dataset comprising healthy controls (n = 37) and subjects with Alzheimer's disease (AD, n = 51) or frontotemporal lobar degeneration (FTLD, n = 30). A scanner-specific NDB for the generation of the gold standard VBM maps was also available for the test set. The technical performance of CNN-VBM was characterized by the Dice coefficient of CNN-VBM maps relative to VBM maps from scanner-specific VBM. For clinical testing, VBM maps were categorized visually according to the clinical diagnoses in the test set by two independent readers, separately for both VBM methods. RESULTS The VBM maps from CNN-VBM were similar to the scanner-specific VBM maps (median Dice coefficient 0.85, interquartile range [0.81, 0.90]). Overall accuracy of the visual categorization of the VBM maps for the detection of AD or FTLD was 89.8% for CNN-VBM and 89.0% for scanner-specific VBM. CONCLUSION CNN-VBM without NDB provides a similar performance in the detection of AD- and FTLD-specific atrophy as conventional VBM. CLINICAL RELEVANCE STATEMENT A deep convolutional neural network for voxel-based morphometry eliminates the need for scanner-specific normal databases without relevant performance loss and, therefore, could pave the way for the widespread clinical use of voxel-based morphometry to support the diagnosis of neurodegenerative diseases. KEY POINTS • The need for normal databases is a barrier to widespread use of voxel-based brain morphometry. • A convolutional neural network achieved a performance for the detection of atrophy similar to that of conventional voxel-based morphometry. • Convolutional neural networks can pave the way for widespread clinical use of voxel-based morphometry.
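The technical comparison above rests on the Dice coefficient between the CNN-generated and the scanner-specific VBM maps. Below is a minimal sketch of the metric on binarised 3D maps, assuming the maps have already been thresholded to binary clusters (toy arrays, not the study data).

```python
import numpy as np

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary volumes of equal shape."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0                               # both maps empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# toy "atrophy maps"
rng = np.random.default_rng(1)
gold = rng.random((64, 64, 64)) > 0.7
pred = gold.copy()
pred[rng.random(pred.shape) > 0.9] ^= True       # flip ~10% of voxels to simulate disagreement
print(f"Dice: {dice_coefficient(pred, gold):.2f}")
```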
Affiliation(s)
- Dennis Hedderich
- Department of Neuroradiology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Ralph Buchert
- Department of Diagnostic and Interventional Radiology and Nuclear Medicine, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246, Hamburg, Germany.
5
Pesapane F, Trentin C, Ferrari F, Signorelli G, Tantrige P, Montesano M, Cicala C, Virgoli R, D'Acquisto S, Nicosia L, Origgi D, Cassano E. Deep learning performance for detection and classification of microcalcifications on mammography. Eur Radiol Exp 2023;7:69. PMID: 37934382; PMCID: PMC10630180; DOI: 10.1186/s41747-023-00384-3.
Abstract
BACKGROUND Breast cancer screening through mammography is crucial for early detection, yet the demand for mammography services surpasses the capacity of radiologists. Artificial intelligence (AI) can assist in evaluating microcalcifications on mammography. We developed and tested an AI model for localizing and characterizing microcalcifications. METHODS Three expert radiologists annotated a dataset of mammograms using histology-based ground truth. The dataset was partitioned for training, validation, and testing. Three neural networks (AlexNet, ResNet18, and ResNet34) were trained and evaluated using specific metrics including receiver operating characteristics area under the curve (AUC), sensitivity, and specificity. The reported metrics were computed on the test set (10% of the whole dataset). RESULTS The dataset included 1,000 patients aged 21-73 years and 1,986 mammograms (180 density A, 220 density B, 380 density C, and 220 density D), with 389 malignant and 611 benign groups of microcalcifications. AlexNet achieved the best performance, with 0.98 sensitivity, 0.89 specificity, and 0.98 AUC for microcalcification detection and 0.85 sensitivity, 0.89 specificity, and 0.94 AUC for microcalcification classification. For microcalcification detection, ResNet18 and ResNet34 achieved 0.96 and 0.97 sensitivity, 0.91 and 0.90 specificity, and 0.98 and 0.98 AUC, respectively. For microcalcification classification, ResNet18 and ResNet34 exhibited 0.75 and 0.84 sensitivity, 0.85 and 0.84 specificity, and 0.88 and 0.92 AUC, respectively. CONCLUSIONS The developed AI models accurately detect and characterize microcalcifications on mammography. RELEVANCE STATEMENT AI-based systems have the potential to assist radiologists in interpreting microcalcifications on mammograms. The study highlights the importance of developing reliable deep learning models possibly applied to breast cancer screening. KEY POINTS • A novel AI tool was developed and tested to aid radiologists in the interpretation of mammography by accurately detecting and characterizing microcalcifications. • Three neural networks (AlexNet, ResNet18, and ResNet34) were trained, validated, and tested using an annotated dataset of 1,000 patients and 1,986 mammograms. • The AI tool demonstrated high accuracy in detecting/localizing and characterizing microcalcifications on mammography, highlighting the potential of AI-based systems to assist radiologists in the interpretation of mammograms.
Affiliation(s)
- Filippo Pesapane
- Breast Imaging Division, IEO European Institute of Oncology IRCCS, Milan, Italy.
- Chiara Trentin
- Breast Imaging Division, IEO European Institute of Oncology IRCCS, Milan, Italy
- Federica Ferrari
- Breast Imaging Division, IEO European Institute of Oncology IRCCS, Milan, Italy
- Giulia Signorelli
- Breast Imaging Division, IEO European Institute of Oncology IRCCS, Milan, Italy
- Priyan Tantrige
- Department of Radiology, King's College Hospital NHS Foundation Trust, London, UK
- Marta Montesano
- Breast Imaging Division, IEO European Institute of Oncology IRCCS, Milan, Italy
- Luca Nicosia
- Breast Imaging Division, IEO European Institute of Oncology IRCCS, Milan, Italy
- Daniela Origgi
- Medical Physics Unit, IEO European Institute of Oncology IRCCS, Milan, Italy
- Enrico Cassano
- Breast Imaging Division, IEO European Institute of Oncology IRCCS, Milan, Italy
6
Guerra X, Rennotte S, Fetita C, Boubaya M, Debray MP, Israël-Biet D, Bernaudin JF, Valeyre D, Cadranel J, Naccache JM, Nunes H, Brillet PY. U-net convolutional neural network applied to progressive fibrotic interstitial lung disease: Is progression at CT scan associated with a clinical outcome? Respir Med Res 2023;85:101058. PMID: 38141579; DOI: 10.1016/j.resmer.2023.101058.
Abstract
BACKGROUND Computational advances in artificial intelligence have led to the recent emergence of U-Net convolutional neural networks (CNNs) applied to medical imaging. Our objectives were to assess the progression of fibrotic interstitial lung disease (ILD) using routine CT scans processed by a U-Net CNN developed by our research team, and to identify a progression threshold indicative of poor prognosis. METHODS CT scans and clinical history of 32 patients with idiopathic fibrotic ILDs were retrospectively reviewed. Successive CT scans were processed by the U-Net CNN and ILD quantification was obtained. The correlation between ILD and FVC changes was assessed. A ROC curve was used to define a threshold of ILD progression rate (PR) predictive of poor prognosis (mortality or lung transplantation). The PR threshold was used to compare survival in the cohort with Kaplan-Meier curves and the log-rank test. RESULTS The follow-up was 3.8 ± 1.5 years, encompassing 105 CT scans, with 3.3 ± 1.1 CT scans per patient. A significant correlation between ILD and FVC changes was obtained (p = 0.004, ρ = -0.30 [95% CI: -0.16 to -0.45]). Sixteen patients (50%) experienced an unfavorable outcome, including 13 deaths and 3 lung transplantations. ROC curve analysis showed an area under the curve of 0.83 (p < 0.001), with an optimal cut-off PR value of 4%/year. Patients exhibiting a PR ≥ 4%/year during the first two years had a poorer prognosis (p = 0.001). CONCLUSIONS Applying a U-Net CNN to routine CT scans allowed the identification of patients with rapid progression and an unfavorable outcome.
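The 4%/year cut-off is the optimal operating point of a ROC analysis against the unfavourable-outcome endpoint. A hypothetical sketch of deriving such a cut-off with the Youden index is shown below; the progression rates are synthetic and the Youden criterion is an assumption, since the abstract does not state which optimality rule was used.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(3)
outcome = rng.integers(0, 2, size=32)                  # 1 = death or lung transplantation
# toy annual ILD progression rates (%/year) from the U-Net quantification
pr = np.where(outcome == 1, rng.normal(6, 3, size=32), rng.normal(2, 2, size=32))

fpr, tpr, thr = roc_curve(outcome, pr)
youden = tpr - fpr                                     # J = sensitivity + specificity - 1
best = int(np.argmax(youden))
print("AUC:", round(roc_auc_score(outcome, pr), 2))
print(f"optimal cut-off {thr[best]:.1f} %/year "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```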
Affiliation(s)
- Xavier Guerra
- Department of Radiology, Avicenne Hospital, Assistance Publique - Hôpitaux de Paris, Bobigny, France.
- Simon Rennotte
- Samovar Laboratory, Télécom SudParis, Institut Polytechnique de Paris, Evry, France
- Catalin Fetita
- Samovar Laboratory, Télécom SudParis, Institut Polytechnique de Paris, Evry, France
- Marouane Boubaya
- Clinical Research Unit, Avicenne Hospital, Assistance Publique - Hôpitaux de Paris, Sorbonne Paris-Nord, Bobigny, France
- Marie-Pierre Debray
- Department of Radiology, Bichat-Claude Bernard Hospital, Assistance Publique - Hôpitaux de Paris, Paris, France
- Dominique Israël-Biet
- Department of Pulmonology, Georges Pompidou European Hospital, Assistance Publique - Hôpitaux de Paris, Paris, France; Université Paris - Cité, Paris, France
- Jean-François Bernaudin
- INSERM UMR 1272 Hypoxie & Poumon SMBH, Université Sorbonne Paris - Nord, Bobigny, France; Medicine Sorbonne Université, Paris, France
- Dominique Valeyre
- INSERM UMR 1272 Hypoxie & Poumon SMBH, Université Sorbonne Paris - Nord, Bobigny, France; Department of Pulmonology, Avicenne Hospital, Assistance Publique - Hôpitaux de Paris, Bobigny, France
- Jacques Cadranel
- Medicine Sorbonne Université, Paris, France; Department of Pulmonology, Tenon Hospital, Assistance Publique - Hôpitaux de Paris, Paris, France
- Jean-Marc Naccache
- Department of Pulmonology, Groupe Hospitalier Paris Saint Joseph, Paris, France
- Hilario Nunes
- INSERM UMR 1272 Hypoxie & Poumon SMBH, Université Sorbonne Paris - Nord, Bobigny, France; Department of Pulmonology, Avicenne Hospital, Assistance Publique - Hôpitaux de Paris, Bobigny, France
- Pierre-Yves Brillet
- Department of Radiology, Avicenne Hospital, Assistance Publique - Hôpitaux de Paris, Bobigny, France; INSERM UMR 1272 Hypoxie & Poumon SMBH, Université Sorbonne Paris - Nord, Bobigny, France
7
Kesävuori R, Kaseva T, Salli E, Raivio P, Savolainen S, Kangasniemi M. Deep learning-aided extraction of outer aortic surface from CT angiography scans of patients with Stanford type B aortic dissection. Eur Radiol Exp 2023;7:35. PMID: 37380806; DOI: 10.1186/s41747-023-00342-z.
Abstract
BACKGROUND Guidelines recommend that aortic dimension measurements in aortic dissection should include the aortic wall. This study aimed to evaluate two-dimensional (2D)- and three-dimensional (3D)-based deep learning approaches for extraction of the outer aortic surface in computed tomography angiography (CTA) scans of Stanford type B aortic dissection (TBAD) patients and to assess the speed of different whole aorta (WA) segmentation approaches. METHODS A total of 240 patients diagnosed with TBAD between January 2007 and December 2019 were retrospectively reviewed for this study; 206 CTA scans from 206 patients with acute, subacute, or chronic TBAD acquired with various scanners in multiple different hospital units were included. Ground truth (GT) WAs for 80 scans were segmented by a radiologist using open-source software. The remaining 126 GT WAs were generated via a semi-automatic segmentation process in which an ensemble of 3D convolutional neural networks (CNNs) aided the radiologist. Using 136 scans for training, 30 for validation, and 40 for testing, 2D and 3D CNNs were trained to automatically segment the WA. The main evaluation metrics for outer surface extraction and segmentation accuracy were normalized surface Dice (NSD) and Dice coefficient score (DCS), respectively. RESULTS The 2D CNN outperformed the 3D CNN in NSD score (0.92 versus 0.90, p = 0.009), and both CNNs had equal DCS (0.96 versus 0.96, p = 0.110). Manual and semi-automatic segmentation times for one CTA scan were approximately 1 and 0.5 h, respectively. CONCLUSIONS Both CNNs segmented the WA with high DCS, but based on NSD, better accuracy may be required before clinical application. CNN-based semi-automatic segmentation methods can expedite the generation of GTs. RELEVANCE STATEMENT Deep learning can speed up the creation of ground truth segmentations. CNNs can extract the outer aortic surface in patients with type B aortic dissection. KEY POINTS • 2D and 3D convolutional neural networks (CNNs) can extract the outer aortic surface accurately. • An equal Dice coefficient score (0.96) was reached with 2D and 3D CNNs. • Deep learning can expedite the creation of ground truth segmentations.
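Normalised surface Dice (NSD), the primary metric above, scores the fraction of the two segmentation surfaces that lie within a distance tolerance of each other, rather than the volumetric overlap measured by the Dice coefficient. The following is a simplified voxel-based sketch of the idea using scipy distance transforms on toy masks; the authors' exact surface implementation and tolerance are not given in the abstract, so the function names and the 2-mm tolerance here are assumptions.

```python
import numpy as np
from scipy import ndimage

def surface(mask: np.ndarray) -> np.ndarray:
    """Boundary voxels: foreground voxels with at least one background neighbour."""
    return mask & ~ndimage.binary_erosion(mask)

def normalized_surface_dice(pred, gt, tol_mm, spacing=(1.0, 1.0, 1.0)):
    sp, sg = surface(pred.astype(bool)), surface(gt.astype(bool))
    # distance from every voxel to the nearest surface voxel of the *other* mask
    d_to_sg = ndimage.distance_transform_edt(~sg, sampling=spacing)
    d_to_sp = ndimage.distance_transform_edt(~sp, sampling=spacing)
    close_p = (d_to_sg[sp] <= tol_mm).sum()     # prediction-surface voxels near the GT surface
    close_g = (d_to_sp[sg] <= tol_mm).sum()     # GT-surface voxels near the prediction surface
    return (close_p + close_g) / (sp.sum() + sg.sum())

# toy "aortas": two slightly shifted spheres
z, y, x = np.ogrid[:48, :48, :48]
gt = (z - 24) ** 2 + (y - 24) ** 2 + (x - 24) ** 2 < 15 ** 2
pred = (z - 24) ** 2 + (y - 25) ** 2 + (x - 25) ** 2 < 15 ** 2
print("NSD @ 2 mm tolerance:", round(normalized_surface_dice(pred, gt, tol_mm=2.0), 2))
```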
Affiliation(s)
- Risto Kesävuori
- Department of Radiology, HUS Medical Imaging Center, Helsinki University Hospital and University of Helsinki, FI-00290, Helsinki, Finland.
- Tuomas Kaseva
- Department of Radiology, HUS Medical Imaging Center, Helsinki University Hospital and University of Helsinki, FI-00290, Helsinki, Finland
- Eero Salli
- Department of Radiology, HUS Medical Imaging Center, Helsinki University Hospital and University of Helsinki, FI-00290, Helsinki, Finland
- Peter Raivio
- Department of Cardiac Surgery, Heart and Lung Center, Helsinki University Hospital and University of Helsinki, Helsinki, Finland
- Sauli Savolainen
- Department of Radiology, HUS Medical Imaging Center, Helsinki University Hospital and University of Helsinki, FI-00290, Helsinki, Finland
- Department of Physics, University of Helsinki, Helsinki, Finland
- Marko Kangasniemi
- Department of Radiology, HUS Medical Imaging Center, Helsinki University Hospital and University of Helsinki, FI-00290, Helsinki, Finland
8
Vainio T, Mäkelä T, Arkko A, Savolainen S, Kangasniemi M. Leveraging open dataset and transfer learning for accurate recognition of chronic pulmonary embolism from CT angiogram maximum intensity projection images. Eur Radiol Exp 2023;7:33. PMID: 37340248; DOI: 10.1186/s41747-023-00346-9.
Abstract
BACKGROUND Early diagnosis of the potentially fatal but curable chronic pulmonary embolism (CPE) is challenging. We developed and investigated a novel convolutional neural network (CNN) model to recognise CPE from CT pulmonary angiograms (CTPA) based on the general vascular morphology in two-dimensional (2D) maximum intensity projection images. METHODS A CNN model was trained on a curated subset of a public pulmonary embolism CT dataset (RSPECT) with 755 CTPA studies, including patient-level labels of CPE, acute pulmonary embolism (APE), or no pulmonary embolism. CPE patients with a right-to-left ventricular ratio (RV/LV) < 1 and APE patients with RV/LV ≥ 1 were excluded from the training. Additional CNN model selection and testing were done on local data with 78 patients without the RV/LV-based exclusion. We calculated the area under the receiver operating characteristic curve (AUC) and balanced accuracies to evaluate the CNN performance. RESULTS We achieved a very high CPE versus no-CPE classification performance on the local dataset (AUC 0.94, balanced accuracy 0.89) using an ensemble model and considering CPE to be present in either one or both lungs. CONCLUSIONS We propose a novel CNN model with excellent predictive accuracy to differentiate chronic pulmonary embolism with RV/LV ≥ 1 from acute pulmonary embolism and non-embolic cases using 2D maximum intensity projection reconstructions of CTPA. RELEVANCE STATEMENT A deep learning CNN model identifies chronic pulmonary embolism from CT angiography with excellent predictive accuracy. KEY POINTS • Automatic recognition of CPE from computed tomography pulmonary angiography was developed. • Deep learning was applied on two-dimensional maximum intensity projection images. • A large public dataset was used for training the deep learning model. • The proposed model showed an excellent predictive accuracy.
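The classifier above operates on two-dimensional maximum intensity projection (MIP) images rather than the full CTPA volume. A MIP is simply the voxel-wise maximum along one axis of the volume; the snippet below is a toy illustration (synthetic volume, arbitrary projection axes), not the study's preprocessing code.

```python
import numpy as np

# toy CTPA volume in Hounsfield units, ordered (slices, rows, columns)
rng = np.random.default_rng(7)
volume = rng.normal(-700, 150, size=(120, 256, 256)).astype(np.float32)
volume[50:70, 100:140, 100:140] = 300.0     # a bright, contrast-filled vessel segment

mip_axial = volume.max(axis=0)              # collapse the slice axis -> 256 x 256 image
mip_coronal = volume.max(axis=1)            # collapse the row axis   -> 120 x 256 image

print(mip_axial.shape, mip_coronal.shape, float(mip_axial.max()))
```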
Affiliation(s)
- Tuomas Vainio
- Radiology, HUS Medical Imaging Center, University of Helsinki and Helsinki University Hospital, 00290, Helsinki, Finland.
- Teemu Mäkelä
- Radiology, HUS Medical Imaging Center, University of Helsinki and Helsinki University Hospital, 00290, Helsinki, Finland
- Department of Physics, University of Helsinki, Helsinki, Finland
- Anssi Arkko
- Radiology, HUS Medical Imaging Center, University of Helsinki and Helsinki University Hospital, 00290, Helsinki, Finland
- Sauli Savolainen
- Radiology, HUS Medical Imaging Center, University of Helsinki and Helsinki University Hospital, 00290, Helsinki, Finland
- Department of Physics, University of Helsinki, Helsinki, Finland
- Marko Kangasniemi
- Radiology, HUS Medical Imaging Center, University of Helsinki and Helsinki University Hospital, 00290, Helsinki, Finland
9
Bollen H, Willems S, Wegge M, Maes F, Nuyts S. Benefits of automated gross tumor volume segmentation in head and neck cancer using multi-modality information. Radiother Oncol 2023;182:109574. PMID: 36822358; DOI: 10.1016/j.radonc.2023.109574.
Abstract
PURPOSE Gross tumor volume (GTV) delineation for head and neck cancer (HNC) radiation therapy planning is time consuming and prone to interobserver variability (IOV). The aim of this study was (1) to develop an automated GTV delineation approach for the primary tumor (GTVp) and pathologic lymph nodes (GTVn) based on a 3D convolutional neural network (CNN) exploiting multi-modality imaging input as required in clinical practice, and (2) to validate its accuracy, efficiency and IOV compared to manual delineation in a clinical setting. METHODS Two datasets were retrospectively collected from 150 clinical cases. CNNs were trained for GTV delineation with consensus delineation as ground truth, with either single (CT) or co-registered multi-modal (CT + PET or CT + MRI) imaging data as input. For validation, GTVs were delineated on 20 new cases by two observers, once manually, once by correcting the delineations generated by the CNN. RESULTS Both multi-modality CNNs performed better than the single-modality CNN and were selected for clinical validation. Mean Dice Similarity Coefficient (DSC) for (GTVp, GTVn) respectively between automated and manual delineations was (69%, 79%) for CT + PET and (59%, 71%) for CT + MRI. Mean DSC between automated and corrected delineations was (81%, 89%) for CT + PET and (69%, 77%) for CT + MRI. Mean DSC between observers was (76%, 86%) for manual delineations and (95%, 96%) for corrected delineations, indicating a significant decrease in IOV (p < 10⁻⁵), while efficiency increased significantly (48%, p < 10⁻⁵). CONCLUSION Multi-modality automated delineation of the GTV of HNC was shown to be more efficient and consistent compared to manual delineation in a clinical setting and beneficial over a single-modality approach.
Affiliation(s)
- Heleen Bollen
- KU Leuven, Dept. Oncology, Laboratory of Experimental Radiotherapy, & UZ Leuven, Radiation Oncology, B-3000 Leuven, Belgium.
- Siri Willems
- KU Leuven, Dept. ESAT, Processing Speech and Images (PSI), & UZ Leuven, Medical Imaging Research Center, B-3000 Leuven, Belgium
- Marilyn Wegge
- KU Leuven, Dept. Oncology, Laboratory of Experimental Radiotherapy, & UZ Leuven, Radiation Oncology, B-3000 Leuven, Belgium
- Frederik Maes
- KU Leuven, Dept. ESAT, Processing Speech and Images (PSI), & UZ Leuven, Medical Imaging Research Center, B-3000 Leuven, Belgium
- Sandra Nuyts
- KU Leuven, Dept. Oncology, Laboratory of Experimental Radiotherapy, & UZ Leuven, Radiation Oncology, B-3000 Leuven, Belgium
10
Germann C, Meyer AN, Staib M, Sutter R, Fritz B. Performance of a deep convolutional neural network for MRI-based vertebral body measurements and insufficiency fracture detection. Eur Radiol 2022. PMID: 36576545; DOI: 10.1007/s00330-022-09354-6.
Abstract
OBJECTIVES The aim is to validate the performance of a deep convolutional neural network (DCNN) for vertebral body measurements and insufficiency fracture detection on lumbar spine MRI. METHODS This retrospective analysis included 1000 vertebral bodies in 200 patients (age 75.2 ± 9.8 years) who underwent lumbar spine MRI at multiple institutions. 160/200 patients had ≥ one vertebral body insufficiency fracture, 40/200 had no fracture. The performance of the DCNN and that of two fellowship-trained musculoskeletal radiologists in vertebral body measurements (anterior/posterior height, extent of endplate concavity, vertebral angle) and evaluation for insufficiency fractures were compared. Statistics included (a) interobserver reliability metrics using intraclass correlation coefficient (ICC), kappa statistics, and Bland-Altman analysis, and (b) diagnostic performance metrics (sensitivity, specificity, accuracy). A statistically significant difference was accepted if the 95% confidence intervals did not overlap. RESULTS The inter-reader agreement between radiologists and the DCNN was excellent for vertebral body measurements, with ICC values of > 0.94 for anterior and posterior vertebral height and vertebral angle, and good to excellent for superior and inferior endplate concavity with ICC values of 0.79-0.85. The performance of the DCNN in fracture detection yielded a sensitivity of 0.941 (0.903-0.968), specificity of 0.969 (0.954-0.980), and accuracy of 0.962 (0.948-0.973). The diagnostic performance of the DCNN was independent of the radiological institution (accuracy 0.964 vs. 0.960), type of MRI scanner (accuracy 0.957 vs. 0.964), and magnetic field strength (accuracy 0.966 vs. 0.957). CONCLUSIONS A DCNN can achieve high diagnostic performance in vertebral body measurements and insufficiency fracture detection on heterogeneous lumbar spine MRI. KEY POINTS • A DCNN has the potential for high diagnostic performance in measuring vertebral bodies and detecting insufficiency fractures of the lumbar spine.
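Agreement between the network and the readers is summarised above with intraclass correlation, kappa, and Bland-Altman statistics. The snippet below sketches the Bland-Altman part, i.e., bias and 95% limits of agreement for one continuous measurement; the vertebral-height values are simulated, not the study data.

```python
import numpy as np

rng = np.random.default_rng(5)
height_reader = rng.normal(25.0, 3.0, size=100)                   # anterior vertebral height, mm
height_dcnn = height_reader + rng.normal(0.2, 0.8, size=100)      # simulated small bias plus noise

diff = height_dcnn - height_reader
mean_pair = (height_dcnn + height_reader) / 2.0                   # x-axis of a Bland-Altman plot
bias = diff.mean()
half_width = 1.96 * diff.std(ddof=1)                              # 95% limits of agreement
print(f"bias {bias:.2f} mm, limits of agreement "
      f"[{bias - half_width:.2f}, {bias + half_width:.2f}] mm")
```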
11
Wang Z, Zhang Z, Feng Y, Hendriks LEL, Miclea RL, Gietema H, Schoenmaekers J, Dekker A, Wee L, Traverso A. Generation of synthetic ground glass nodules using generative adversarial networks (GANs). Eur Radiol Exp 2022;6:59. PMID: 36447082; PMCID: PMC9708993; DOI: 10.1186/s41747-022-00311-y.
Abstract
BACKGROUND Data shortage is a common challenge in developing computer-aided diagnosis systems. We developed a generative adversarial network (GAN) model to generate synthetic lung lesions mimicking ground glass nodules (GGNs). METHODS We used 216 computed tomography images with 340 GGNs from the Lung Image Database Consortium and Image Database Resource Initiative database. A GAN model retrieving information from the whole image and the GGN region was built. The generated samples were evaluated with a visual Turing test performed by four experienced radiologists or pulmonologists. Radiomic features were compared between real and synthetic nodules. Performances were evaluated by the area under the curve (AUC) at receiver operating characteristic analysis. In addition, we trained a classification model (ResNet) to investigate whether the synthetic GGNs can improve the performance of the algorithm and how performance changes as a function of the labelled data used in training. RESULTS Of 51 synthetic GGNs, 19 (37%) were classified as real by clinicians. Of 93 radiomic features, 58 (62.4%) showed no significant difference between synthetic and real GGNs (p ≥ 0.052). The discrimination performances of physicians (AUC 0.68) and radiomics (AUC 0.66) were similar and not significantly different (p = 0.23), but clinicians achieved a better accuracy (0.74) than radiomics (0.62) (p < 0.001). The classification model trained on datasets with synthetic data performed better than models without the addition of synthetic data. CONCLUSIONS GANs have promising potential for generating GGNs. Despite similar AUCs, clinicians were better than radiomics at recognising whether a nodule was synthetic.
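One of the evaluations above compares radiomic feature distributions between real and synthetic nodules and counts how many features show no significant difference. A hedged sketch of that per-feature comparison with a Mann-Whitney U test follows; the feature matrices are random stand-ins, and the 0.05 threshold is used here for simplicity (the study reports p ≥ 0.052).

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(11)
n_features = 93
real = rng.normal(0.0, 1.0, size=(340, n_features))            # radiomic features of real GGNs
synthetic = real[rng.integers(0, 340, size=51)] + rng.normal(0, 0.3, size=(51, n_features))

p_values = np.array([mannwhitneyu(real[:, j], synthetic[:, j]).pvalue
                     for j in range(n_features)])
not_different = int((p_values >= 0.05).sum())
print(f"{not_different}/{n_features} features show no significant difference (p >= 0.05)")
```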
Affiliation(s)
- Zhixiang Wang
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Zhen Zhang
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, The Netherlands; Department of Radiation Oncology, Key Laboratory of Cancer Prevention and Therapy, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin's Clinical Research Center for Cancer, Tianjin, China
- Ying Feng
- Department of Ultrasound, Beijing Friendship Hospital, Capital Medical University, Beijing, China; Department of Obstetrics and Gynecology, GROW-School for Oncology and Developmental Biology, Maastricht University Medical Centre, Maastricht, The Netherlands
- Lizza E. L. Hendriks
- Department of Pulmonary Diseases, GROW School for Oncology and Reproduction, Maastricht University Medical Center+, Maastricht, The Netherlands
- Razvan L. Miclea
- Department of Radiology and Nuclear Medicine, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Hester Gietema
- Department of Radiology and Nuclear Medicine, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Janna Schoenmaekers
- Department of Pulmonary Diseases, GROW School for Oncology and Reproduction, Maastricht University Medical Center+, Maastricht, The Netherlands
- Andre Dekker
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Leonard Wee
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Alberto Traverso
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, The Netherlands
12
Hu Z, Cao D, Hu Y, Wang B, Zhang Y, Tang R, Zhuang J, Gao A, Chen Y, Lin Z. Diagnosis of in vivo vertical root fracture using deep learning on cone-beam CT images. BMC Oral Health 2022;22:382. PMID: 36064682; PMCID: PMC9446797; DOI: 10.1186/s12903-022-02422-9.
Abstract
Objectives To evaluate the diagnostic efficiency of deep learning models in diagnosing vertical root fracture (VRF) in vivo on cone-beam CT (CBCT) images.
Materials and methods The CBCT images of 276 teeth (138 VRF teeth and 138 non-VRF teeth) were enrolled and analyzed retrospectively. The diagnostic results of these teeth were confirmed by two chief radiologists. There were two experimental groups: an auto-selection group and a manual selection group. A total of 552 regions of interest were cropped in the manual selection group and 1118 regions of interest were cropped in the auto-selection group. Three deep learning networks (ResNet50, VGG19 and DenseNet169) were used for diagnosis (3:1 for training and testing). The diagnostic efficiencies (accuracy, sensitivity, specificity, and area under the curve (AUC)) of the three networks were calculated in the two experimental groups. Meanwhile, the 552 tooth images in the manual selection group were diagnosed by a radiologist, and the diagnostic efficiencies of the three deep learning network models in the two experimental groups and of the radiologist were calculated. Results In the manual selection group, ResNet50 presented the highest accuracy and sensitivity for diagnosing VRF teeth; the accuracy, sensitivity, specificity and AUC were 97.8%, 97.0%, 98.5%, and 0.99, while the radiologist presented an accuracy, sensitivity, and specificity of 95.3%, 96.4%, and 94.2%. In the auto-selection group, ResNet50 presented the highest accuracy and sensitivity for diagnosing VRF teeth; the accuracy, sensitivity, specificity and AUC were 91.4%, 92.1%, 90.7% and 0.96. Conclusion In the manual selection group, ResNet50 presented higher diagnostic efficiency in the diagnosis of in vivo VRF teeth than VGG19, DenseNet169 and a radiologist with 2 years of experience. In the auto-selection group, ResNet50 also presented higher diagnostic efficiency in the diagnosis of in vivo VRF teeth than VGG19 and DenseNet169. This makes it a promising auxiliary diagnostic technique to screen for VRF teeth.
Affiliation(s)
- Ziyang Hu
- Department of Dentomaxillofacial Radiology, Nanjing Stomatological Hospital, Medical School of Nanjing University, Zhong Yang Road 30, Nanjing City, 210008, Jiangsu, People's Republic of China; Department of Stomatology, Guangdong Medical University Affiliated Longhua Central Hospital, Shenzhen, China
- Dantong Cao
- Department of Dentomaxillofacial Radiology, Nanjing Stomatological Hospital, Medical School of Nanjing University, Zhong Yang Road 30, Nanjing City, 210008, Jiangsu, People's Republic of China
- Yanni Hu
- Department of Dentomaxillofacial Radiology, Nanjing Stomatological Hospital, Medical School of Nanjing University, Zhong Yang Road 30, Nanjing City, 210008, Jiangsu, People's Republic of China
- Baixin Wang
- School of Electronic Science and Engineering, Nanjing University, Nanjing, China
- Yifan Zhang
- School of Electronic Science and Engineering, Nanjing University, Nanjing, China
- Rong Tang
- Department of Dentomaxillofacial Radiology, Nanjing Stomatological Hospital, Medical School of Nanjing University, Zhong Yang Road 30, Nanjing City, 210008, Jiangsu, People's Republic of China
- Jia Zhuang
- Department of Dentomaxillofacial Radiology, Nanjing Stomatological Hospital, Medical School of Nanjing University, Zhong Yang Road 30, Nanjing City, 210008, Jiangsu, People's Republic of China
- Antian Gao
- Department of Dentomaxillofacial Radiology, Nanjing Stomatological Hospital, Medical School of Nanjing University, Zhong Yang Road 30, Nanjing City, 210008, Jiangsu, People's Republic of China
- Ying Chen
- School of Electronic Science and Engineering, Nanjing University, Nanjing, China.
- Zitong Lin
- Department of Dentomaxillofacial Radiology, Nanjing Stomatological Hospital, Medical School of Nanjing University, Zhong Yang Road 30, Nanjing City, 210008, Jiangsu, People's Republic of China.
13
Vainio T, Mäkelä T, Savolainen S, Kangasniemi M. Performance of a 3D convolutional neural network in the detection of hypoperfusion at CT pulmonary angiography in patients with chronic pulmonary embolism: a feasibility study. Eur Radiol Exp 2021;5:45. PMID: 34557979; PMCID: PMC8460693; DOI: 10.1186/s41747-021-00235-z.
Abstract
Background Chronic pulmonary embolism (CPE) is a life-threatening disease easily misdiagnosed on computed tomography. We investigated a three-dimensional convolutional neural network (CNN) algorithm for detecting hypoperfusion in CPE from computed tomography pulmonary angiography (CTPA). Methods Preoperative CTPA of 25 patients with CPE and 25 without pulmonary embolism were selected. We applied a 48%–12%–40% training-validation-testing split (12 positive and 12 negative CTPA volumes for training, 3 positives and 3 negatives for validation, 10 positives and 10 negatives for testing). The median number of axial images per CTPA was 335 (min–max, 111–570). Expert manual segmentations were used as training and testing targets. The CNN output was compared to a method in which a Hounsfield unit (HU) threshold was used to detect hypoperfusion. Receiver operating characteristic area under the curve (AUC) and Matthews correlation coefficient (MCC) were calculated with their 95% confidence intervals (CI). Results The predicted segmentations of the CNN showed an AUC of 0.87 (95% CI 0.82–0.91), those of the HU-threshold method 0.79 (95% CI 0.74–0.84). The optimal global threshold values were a CNN output probability ≥ 0.37 and ≤ -850 HU. Using these values, the MCC was 0.46 (95% CI 0.29–0.59) for the CNN and 0.35 (95% CI 0.18–0.48) for the HU-threshold method (average difference in MCC in the bootstrap samples 0.11, 95% CI 0.05–0.16). A high CNN prediction probability was a strong predictor of CPE. Conclusions We proposed a deep learning method for detecting hypoperfusion in CPE from CTPA. This model may help evaluate disease extent and support treatment planning.
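The CNN and HU-threshold methods above are compared with the Matthews correlation coefficient and bootstrap confidence intervals. A compact sketch of an MCC with a percentile bootstrap CI is given below; the labels are synthetic, and the 1000-resample percentile bootstrap is an assumption for illustration.

```python
import numpy as np
from sklearn.metrics import matthews_corrcoef

rng = np.random.default_rng(13)
y_true = rng.integers(0, 2, size=5000)                            # 1 = hypoperfused region
y_pred = np.where(rng.random(5000) < 0.8, y_true, 1 - y_true)     # ~80% agreement with truth

mcc = matthews_corrcoef(y_true, y_pred)

boot = []
for _ in range(1000):                                             # percentile bootstrap
    idx = rng.integers(0, len(y_true), size=len(y_true))
    boot.append(matthews_corrcoef(y_true[idx], y_pred[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"MCC {mcc:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```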
Affiliation(s)
- Tuomas Vainio
- HUS Medical Imaging Center, Radiology, University of Helsinki and Helsinki University Hospital, P.O. Box 340 (Haartmaninkatu 4), FI-00290, Helsinki, Finland.
- Teemu Mäkelä
- HUS Medical Imaging Center, Radiology, University of Helsinki and Helsinki University Hospital, P.O. Box 340 (Haartmaninkatu 4), FI-00290, Helsinki, Finland; Department of Physics, University of Helsinki, P.O. Box 64, FI-00014, Helsinki, Finland
- Sauli Savolainen
- HUS Medical Imaging Center, Radiology, University of Helsinki and Helsinki University Hospital, P.O. Box 340 (Haartmaninkatu 4), FI-00290, Helsinki, Finland; Department of Physics, University of Helsinki, P.O. Box 64, FI-00014, Helsinki, Finland
- Marko Kangasniemi
- HUS Medical Imaging Center, Radiology, University of Helsinki and Helsinki University Hospital, P.O. Box 340 (Haartmaninkatu 4), FI-00290, Helsinki, Finland
14
Villena F, Pérez J, Lagos R, Dunstan J. Supporting the classification of patients in public hospitals in Chile by designing, deploying and validating a system based on natural language processing. BMC Med Inform Decis Mak 2021;21:208. PMID: 34210317; PMCID: PMC8252255; DOI: 10.1186/s12911-021-01565-z.
Abstract
Background In Chile, a patient needing a specialty consultation or surgery has to first be referred by a general practitioner, then placed on a waiting list. The Explicit Health Guarantees (GES in Spanish) ensures, by law, the maximum time to solve 85 health problems. Usually, a health professional manually verifies if each referral, written in natural language, corresponds or not to a GES-covered disease. An error in this classification is catastrophic for patients, as it puts them on a non-prioritized waiting list, characterized by prolonged waiting times. Methods To support the manual process, we developed and deployed a system that automatically classifies referrals as GES-covered or not using historical data. Our system is based on word embeddings specially trained for clinical text produced in Chile. We used a vector representation of the reason for referral and patient's age as features for training machine learning models using human-labeled historical data. We constructed a ground truth dataset combining classifications made by three healthcare experts, which was used to validate our results. Results The best performing model over ground truth reached an AUC score of 0.94, with a weighted F1-score of 0.85 (0.87 in precision and 0.86 in recall). During seven months of continuous and voluntary use, the system has amended 87 patient misclassifications. Conclusion This system is a result of a collaboration between technical and clinical experts, and the design of the classifier was custom-tailored for a hospital's clinical workflow, which encouraged the voluntary use of the platform. Our solution can be easily expanded across other hospitals since the registry is uniform in Chile.
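The classifier described above combines a vector representation of the referral text with the patient's age. The sketch below mirrors that idea with averaged word vectors plus a normalised age feature and a logistic regression scored by weighted F1; the embedding table is random (the study trained its own Spanish clinical word embeddings), the toy referrals are invented, and evaluation is on the training data only for brevity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Toy referrals (the real system used free-text referrals in Spanish); label 1 = GES-covered.
referrals = ["sospecha de catarata ojo derecho", "control de lunar benigno",
             "hipoacusia bilateral severa", "dolor lumbar cronico"]
ages = np.array([72.0, 34.0, 68.0, 50.0])
labels = np.array([1, 0, 1, 0])

# Random stand-in embedding table: word -> 50-d vector.
vocab = {w for text in referrals for w in text.split()}
emb = {w: rng.normal(size=50) for w in vocab}

def featurize(text: str, age: float) -> np.ndarray:
    vecs = [emb[w] for w in text.split() if w in emb]
    doc = np.mean(vecs, axis=0) if vecs else np.zeros(50)
    return np.concatenate([doc, [age / 100.0]])        # append normalised age

X = np.stack([featurize(t, a) for t, a in zip(referrals, ages)])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("weighted F1:", f1_score(labels, clf.predict(X), average="weighted"))
```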
Affiliation(s)
- Fabián Villena
- Center for Mathematical Modeling - CNRS UMI2807, Faculty of Physical and Mathematical Sciences, University of Chile, Santiago, Chile; Center for Medical Informatics and Telemedicine, ICBM, Faculty of Medicine, University of Chile, Santiago, Chile
- Jorge Pérez
- Computer Science Department, Faculty of Physical and Mathematical Sciences, University of Chile, Santiago, Chile; Millennium Institute for Foundational Research on Data, Santiago, Chile
- René Lagos
- Digital Health Unit, South East Metropolitan Health Service, Santiago, Chile
- Jocelyn Dunstan
- Center for Mathematical Modeling - CNRS UMI2807, Faculty of Physical and Mathematical Sciences, University of Chile, Santiago, Chile; Center for Medical Informatics and Telemedicine, ICBM, Faculty of Medicine, University of Chile, Santiago, Chile.
15
Borrelli P, Kaboteh R, Enqvist O, Ulén J, Trägårdh E, Kjölhede H, Edenbrandt L. Artificial intelligence-aided CT segmentation for body composition analysis: a validation study. Eur Radiol Exp 2021;5:11. PMID: 33694046; DOI: 10.1186/s41747-021-00210-8.
Abstract
Background Body composition is associated with survival outcome in oncological patients, but it is not routinely calculated. Manual segmentation of subcutaneous adipose tissue (SAT) and muscle is time-consuming and therefore limited to a single CT slice. Our goal was to develop an artificial-intelligence (AI)-based method for automated quantification of three-dimensional SAT and muscle volumes from CT images. Methods Ethical approvals from Gothenburg and Lund Universities were obtained. Convolutional neural networks were trained to segment SAT and muscle using manual segmentations on CT images from a training group of 50 patients. The method was applied to a separate test group of 74 cancer patients, who had two CT studies each with a median interval between the studies of 3 days. Manual segmentations in a single CT slice were used for comparison. The accuracy was measured as overlap between the automated and manual segmentations. Results The accuracy of the AI method was 0.96 for SAT and 0.94 for muscle. The average differences in volumes were significantly lower than the corresponding differences in areas in a single CT slice: 1.8% versus 5.0% (p < 0.001) for SAT and 1.9% versus 3.9% (p < 0.001) for muscle. The 95% confidence intervals for predicted volumes in an individual subject from the corresponding single CT slice areas were in the order of ± 20%. Conclusions The AI-based tool for quantification of SAT and muscle volumes showed high accuracy and reproducibility and provided a body composition analysis that is more relevant than manual analysis of a single CT slice.
16
Castiglioni I, Ippolito D, Interlenghi M, Monti CB, Salvatore C, Schiaffino S, Polidori A, Gandola D, Messa C, Sardanelli F. Machine learning applied on chest x-ray can aid in the diagnosis of COVID-19: a first experience from Lombardy, Italy. Eur Radiol Exp 2021;5:7. PMID: 33527198; PMCID: PMC7850902; DOI: 10.1186/s41747-020-00203-z.
Abstract
BACKGROUND We aimed to train and test a deep learning classifier to support the diagnosis of coronavirus disease 2019 (COVID-19) using chest x-ray (CXR) on a cohort of subjects from two hospitals in Lombardy, Italy. METHODS For training and validation, we used an ensemble of ten convolutional neural networks (CNNs) with mainly bedside CXRs of 250 COVID-19 and 250 non-COVID-19 subjects from two hospitals (Centres 1 and 2). We then tested this system on bedside CXRs of an independent group of 110 patients (74 COVID-19, 36 non-COVID-19) from one of the two hospitals. A retrospective reading was performed by two radiologists in the absence of any clinical information, with the aim of differentiating COVID-19 from non-COVID-19 patients. Real-time polymerase chain reaction served as the reference standard. RESULTS At 10-fold cross-validation, our deep learning model classified COVID-19 and non-COVID-19 patients with 0.78 sensitivity (95% confidence interval [CI] 0.74-0.81), 0.82 specificity (95% CI 0.78-0.85), and 0.89 area under the curve (AUC) (95% CI 0.86-0.91). For the independent dataset, deep learning showed 0.80 sensitivity (95% CI 0.72-0.86) (59/74), 0.81 specificity (29/36) (95% CI 0.73-0.87), and 0.81 AUC (95% CI 0.73-0.87). Radiologists' reading obtained 0.63 sensitivity (95% CI 0.52-0.74) and 0.78 specificity (95% CI 0.61-0.90) in Centre 1 and 0.64 sensitivity (95% CI 0.52-0.74) and 0.86 specificity (95% CI 0.71-0.95) in Centre 2. CONCLUSIONS This preliminary experience based on ten CNNs trained on a limited training dataset shows promising potential of deep learning for COVID-19 diagnosis. The tool is being further trained with new CXRs to increase its performance.
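The test-set sensitivity and specificity above follow directly from the reported counts (59/74 and 29/36). A minimal sketch of that arithmetic is given below; the Wilson score interval is added only as one common choice of confidence interval, since the abstract does not state which CI method the authors used.

```python
import math

def wilson_ci(successes, total, z=1.96):
    """Wilson score interval for a binomial proportion (one common choice;
    the cited abstract does not state which CI method was used)."""
    p = successes / total
    denom = 1 + z ** 2 / total
    centre = (p + z ** 2 / (2 * total)) / denom
    half = z * math.sqrt(p * (1 - p) / total + z ** 2 / (4 * total ** 2)) / denom
    return centre - half, centre + half

# Counts reported for the independent test set: 74 COVID-19, 36 non-COVID-19.
tp, fn = 59, 74 - 59  # true positives / false negatives
tn, fp = 29, 36 - 29  # true negatives / false positives

sensitivity = tp / (tp + fn)   # 59/74 ~ 0.80
specificity = tn / (tn + fp)   # 29/36 ~ 0.81
print(f"sensitivity {sensitivity:.2f}, CI {wilson_ci(tp, tp + fn)}")
print(f"specificity {specificity:.2f}, CI {wilson_ci(tn, tn + fp)}")
```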
Collapse
Affiliation(s)
- Isabella Castiglioni
- Department of Physics, Università degli Studi di Milano-Bicocca, Piazza della Scienza 3, 20126, Milan, Italy
- Institute of Biomedical Imaging and Physiology, National Research Council, 20090, Segrate, Milan, Italy
| | - Davide Ippolito
- Department of Radiology, San Gerardo Hospital, Via Pergolesi 33, 20900, Monza, Italy
| | - Matteo Interlenghi
- Institute of Biomedical Imaging and Physiology, National Research Council, 20090, Segrate, Milan, Italy
| | - Caterina Beatrice Monti
- Department of Biomedical Sciences for Health, Università degli Studi di Milano, Via Mangiagalli 31, 20133, Milan, Italy
| | - Christian Salvatore
- Scuola Universitaria Superiore IUSS Pavia, Piazza della Vittoria 15, 27100, Pavia, Italy.
- DeepTrace Technologies S.R.L., Via Conservatorio 17, 20122, Milan, Italy.
| | - Simone Schiaffino
- Department of Radiology, IRCCS Policlinico San Donato, Via Morandi 30, 20097, San Donato Milanese, Milan, Italy
| | - Annalisa Polidori
- DeepTrace Technologies S.R.L., Via Conservatorio 17, 20122, Milan, Italy
| | - Davide Gandola
- Department of Radiology, San Gerardo Hospital, Via Pergolesi 33, 20900, Monza, Italy
| | - Cristina Messa
- School of Medicine and Surgery, University of Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, 20126, Milan, Italy
- Fondazione Tecnomed, Università degli Studi di Milano-Bicocca, Palazzina Ciclotrone, Via Pergolesi 33, 20900, Monza, Italy
| | - Francesco Sardanelli
- Department of Biomedical Sciences for Health, Università degli Studi di Milano, Via Mangiagalli 31, 20133, Milan, Italy
- Department of Radiology, IRCCS Policlinico San Donato, Via Morandi 30, 20097, San Donato Milanese, Milan, Italy
| |
Collapse
|
18
|
Qi LL, Wang JW, Yang L, Huang Y, Zhao SJ, Tang W, Jin YJ, Zhang ZW, Zhou Z, Yu YZ, Wang YZ, Wu N. Natural history of pathologically confirmed pulmonary subsolid nodules with deep learning-assisted nodule segmentation. Eur Radiol 2020; 31:3884-3897. [PMID: 33219848 DOI: 10.1007/s00330-020-07450-z] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2020] [Revised: 09/29/2020] [Accepted: 10/30/2020] [Indexed: 12/17/2022]
Abstract
OBJECTIVE To explore the natural history of pulmonary subsolid nodules (SSNs) with different pathological types by deep learning-assisted nodule segmentation. METHODS Between June 2012 and June 2019, 95 resected SSNs with preoperative long-term follow-up were enrolled in this retrospective study. SSN detection and segmentation were performed on preoperative follow-up CTs using the deep learning-based Dr. Wise system. SSNs were categorized into invasive adenocarcinoma (IAC, n = 47) and non-IAC (n = 48) groups; according to the interval change during the preoperative follow-up, SSNs were divided into growth (n = 68), nongrowth (n = 22), and new emergence (n = 5) groups. We analyzed the cumulative percentages and pattern of SSN growth and identified significant factors for IAC diagnosis and SSN growth. RESULTS The mean preoperative follow-up was 42.1 ± 17.0 months. More SSNs showed growth or new emergence in the IAC than in the non-IAC group (89.4% vs. 64.6%, p = 0.009). Volume doubling time was non-significantly shorter for IACs than for non-IACs (1436.0 ± 1188.2 vs. 2087.5 ± 1799.7 days, p = 0.077). Median mass doubling time was significantly shorter for IACs than for non-IACs (821.7 vs. 1944.1 days, p = 0.001). Lobulated sign (p = 0.002) and SSN mass (p = 0.004) were significant factors for differentiating IACs. IACs showed significantly higher cumulative growth percentages than non-IACs in the first 70 months of follow-up. The growth pattern of SSNs may conform to the exponential model. The initial volume (p = 0.042) was a predictor for SSN growth. CONCLUSIONS IACs appearing as SSNs showed an indolent course. The mean growth rate was larger for IACs than for non-IACs. SSNs with larger initial volume are more likely to grow. KEY POINTS • Invasive adenocarcinomas (IACs) appearing as subsolid nodules (SSNs), with a mean volume doubling time (VDT) of 1436.0 ± 1188.2 days and median mass doubling time (MDT) of 821.7 days, showed an indolent course. • The VDT was shorter for IACs than for non-IACs (1436.0 ± 1188.2 vs. 2087.5 ± 1799.7 days), but the difference was not significant (p = 0.077). The median MDT was significantly shorter for IACs than for non-IACs (821.7 vs. 1944.1 days, p = 0.001). • SSNs with lobulated sign and larger mass (> 390.5 mg) may very likely be IACs. SSNs with larger initial volume are more likely to grow.
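Volume and mass doubling times such as those quoted above are conventionally computed with the exponential-growth formula DT = Δt · ln 2 / ln(V₂/V₁). A minimal sketch with hypothetical nodule measurements (not data from the study) is:

```python
import math

def doubling_time(initial, final, interval_days):
    """Doubling time under exponential growth:
    DT = interval * ln(2) / ln(final / initial)."""
    return interval_days * math.log(2) / math.log(final / initial)

# Hypothetical nodule growing from 300 to 450 mm^3 over 500 days.
print(f"VDT ~ {doubling_time(300.0, 450.0, 500.0):.0f} days")
# The same formula gives the mass doubling time when masses (mg) are used.
print(f"MDT ~ {doubling_time(250.0, 400.0, 500.0):.0f} days")
```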
Collapse
Affiliation(s)
- Lin-Lin Qi
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17 Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Jian-Wei Wang
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17 Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Lin Yang
- Department of Diagnostic Pathology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17 Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Yao Huang
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17 Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Shi-Jun Zhao
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17 Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Wei Tang
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17 Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Yu-Jing Jin
- PET-CT Center, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17 Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Ze-Wei Zhang
- PET-CT Center, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17 Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Zhen Zhou
- School of Electronic Engineering and Computer Science, Peking University, No. 5 Yiheyuan Rd., Haidian District, Beijing, 100871, China
| | - Yi-Zhou Yu
- Deepwise AI Lab, Deepwise Inc., No. 8 Haidian avenue, Sinosteel International Plaza, Beijing, 100080, China
| | - Yi-Zhou Wang
- Center on Frontiers of Computing Studies, Department of Computer Science, Peking University, No. 5 Yiheyuan Rd., Haidian District, Beijing, 100871, China
| | - Ning Wu
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17 Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China. .,PET-CT Center, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17 Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China.
| |
Collapse
|
19
|
van der Veen J, Willems S, Bollen H, Maes F, Nuyts S. Deep learning for elective neck delineation: More consistent and time efficient. Radiother Oncol 2020; 153:180-8. [PMID: 33065182 DOI: 10.1016/j.radonc.2020.10.007] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2020] [Revised: 09/30/2020] [Accepted: 10/05/2020] [Indexed: 11/20/2022]
Abstract
BACKGROUND/PURPOSE Delineation of the lymph node levels of the neck for irradiation of the elective clinical target volume in head and neck cancer (HNC) patients is time consuming and prone to interobserver variability (IOV), although international consensus guidelines exist. The aim of this study was to develop and validate a 3D convolutional neural network (CNN) for semi-automated delineation of all nodal neck levels, focussing on delineation accuracy, efficiency and consistency compared to manual delineation. MATERIAL/METHODS The CNN was trained on a clinical dataset of 69 HNC patients. For validation, 17 lymph node levels were manually delineated in 16 new patients by two observers, independently, using international consensus guidelines. Automated delineations were generated by applying the CNN and were subsequently corrected by both observers separately as needed for clinical acceptance. Both delineations were performed two weeks apart and blinded to each other. IOV was quantified using Dice similarity coefficient (DSC), mean surface distance (MSD) and Hausdorff distance (HD). To assess automated delineation accuracy, agreement between automated and corrected delineations was evaluated using the same measures. To assess efficiency, the time taken for manual and corrected delineations was compared. In a second step, only the clinically relevant neck levels were selected and delineated, once again manually and by applying and correcting the network. RESULTS When all lymph node levels were delineated, the time taken for correcting automated delineations compared to manual delineations was significantly shorter for both observers (mean: 35 vs 52 min, p < 10⁻⁵). Based on DSC, automated delineation agreed best with corrected delineation for lymph node levels Ib, II-IVa, VIa, VIb, VIIa, VIIb (DSC > 85%). Manual corrections necessary for clinical acceptance were 1.4 mm MSD on average and were especially low (< 1 mm) for levels II-IVa, VIa, VIIa and VIIb. IOV was significantly smaller with automated compared to manual delineations (MSD: 1.4 mm vs 2.5 mm, p < 10⁻¹¹). When delineating only the clinically relevant neck levels, the correction time was also significantly shorter (mean: 8 vs 15 min, p < 10⁻⁵). Based on DSC, automated delineation agreed very well with corrected delineation (DSC > 87%). Manual corrections necessary for clinical acceptance were 1.3 mm MSD on average. IOV was significantly smaller with automated compared to manual delineations (MSD: 0.8 mm vs 2.3 mm, p < 10⁻³). CONCLUSION The CNN developed for automated delineation of the elective lymph node levels in the neck in HNC was shown to be more efficient and consistent compared to manual delineation, which justifies its implementation in clinical practice.
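For readers unfamiliar with the surface-distance metrics quoted above, a minimal sketch of symmetric mean surface distance (MSD) and Hausdorff distance (HD) between two contours represented as point sets follows; the exact definitions vary slightly between tools, and the toy contours below are synthetic rather than taken from the study.

```python
import numpy as np

def nearest_distances(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """For each point in a (N, 3), distance to the nearest point in b (M, 3)."""
    diffs = a[:, None, :] - b[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1)).min(axis=1)

def msd_and_hd(a: np.ndarray, b: np.ndarray):
    """Symmetric mean surface distance and Hausdorff distance in mm."""
    d_ab, d_ba = nearest_distances(a, b), nearest_distances(b, a)
    msd = (d_ab.mean() + d_ba.mean()) / 2.0
    hd = max(d_ab.max(), d_ba.max())
    return msd, hd

# Toy contours: two rings of surface points, the second shifted by 1.5 mm.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
ring_a = np.stack([10 * np.cos(theta), 10 * np.sin(theta), np.zeros_like(theta)], axis=1)
ring_b = ring_a + np.array([1.5, 0.0, 0.0])
msd, hd = msd_and_hd(ring_a, ring_b)
print(f"MSD = {msd:.2f} mm, HD = {hd:.2f} mm")
```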
Collapse
|
20
|
Kleesiek J, Murray JM, Strack C, Prinz S, Kaissis G, Braren R. [Artificial intelligence and machine learning in oncologic imaging]. Pathologe 2020; 41:649-658. [PMID: 33052431 DOI: 10.1007/s00292-020-00827-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
Machine learning (ML) is entering many areas of society, including medicine. This transformation has the potential to drastically change medicine and medical practice. These aspects become particularly clear when considering the different stages of oncologic patient care and the involved interdisciplinary and intermodality interactions. In recent publications, computers, in collaboration with humans or alone, have been outperforming humans regarding tumor identification, tumor classification, estimating prognoses, and evaluation of treatments. In addition, ML algorithms, e.g., artificial neural networks (ANNs), which constitute the drivers behind many of the latest achievements in ML, can deliver this level of performance in a reproducible, fast, and inexpensive manner. In the future, artificial intelligence applications will become an integral part of the medical profession and offer advantages for oncologic diagnostics and treatment.
Collapse
Affiliation(s)
- Jens Kleesiek
- AG Computational Radiology, Department of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Deutschland. .,German Cancer Consortium (DKTK), Heidelberg, Deutschland. .,Institut für Künstliche Intelligenz in der Medizin (IKIM), Universitätsklinikum Essen, Girardetstr. 6, 45131, Essen, Deutschland.
| | - Jacob M Murray
- AG Computational Radiology, Department of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Deutschland.,Heidelberg University, Heidelberg, Deutschland
| | - Christian Strack
- AG Computational Radiology, Department of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Deutschland.,Heidelberg University, Heidelberg, Deutschland
| | - Sebastian Prinz
- AG Computational Radiology, Department of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Deutschland.,Heidelberg University, Heidelberg, Deutschland
| | - Georgios Kaissis
- Department of Diagnostic and Interventional Radiology, School of Medicine, Technical University of Munich, München, Deutschland
| | - Rickmer Braren
- German Cancer Consortium (DKTK), Heidelberg, Deutschland.,Department of Diagnostic and Interventional Radiology, School of Medicine, Technical University of Munich, München, Deutschland
| |
Collapse
|
21
|
Narita K, Nakamura Y, Higaki T, Akagi M, Honda Y, Awai K. Deep learning reconstruction of drip-infusion cholangiography acquired with ultra-high-resolution computed tomography. Abdom Radiol (NY) 2020; 45:2698-2704. [PMID: 32248261 DOI: 10.1007/s00261-020-02508-4] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
PURPOSE Deep learning reconstruction (DLR) introduces deep convolutional neural networks into the reconstruction flow. We examined the clinical applicability of drip-infusion cholangiography (DIC) acquired on an ultra-high-resolution CT (U-HRCT) scanner reconstructed with DLR in comparison to hybrid and model-based iterative reconstruction (hybrid-IR, MBIR). METHODS This retrospective, single-institution study included 30 patients seen between January 2018 and November 2019. A radiologist recorded the standard deviation of attenuation in the paraspinal muscle as the image noise and calculated the contrast-to-noise ratio (CNR) in the common bile duct. The overall visual image quality of the bile duct on thick-slab maximum intensity projections was assessed by two other radiologists and graded on a 5-point confidence scale ranging from 1 (not delineated) to 5 (clearly delineated). The difference among hybrid-IR, MBIR, and DLR images was compared. RESULTS The image noise was significantly lower on DLR than hybrid-IR and MBIR images and the CNR and the overall visual image quality of the bile duct were significantly higher on DLR than on hybrid-IR and MBIR images (all: p < 0.001). CONCLUSION DLR resulted in significant quantitative and qualitative improvement of DIC acquired with U-HRCT.
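As an illustration of the quantitative measures above, a minimal sketch of a contrast-to-noise ratio computed from region-of-interest attenuation values is shown below; the exact ROIs and CNR definition used by the authors are not given in the abstract, and the HU values here are synthetic.

```python
import numpy as np

def contrast_to_noise_ratio(duct_hu, background_hu, muscle_hu):
    """CNR = (mean duct attenuation - mean background attenuation) / image noise,
    taking image noise as the SD of attenuation in a paraspinal muscle ROI."""
    return (duct_hu.mean() - background_hu.mean()) / muscle_hu.std()

rng = np.random.default_rng(0)
duct = rng.normal(250, 15, size=200)        # contrast-filled bile duct ROI (HU)
background = rng.normal(60, 15, size=200)   # adjacent parenchyma ROI (HU)
muscle = rng.normal(50, 12, size=200)       # paraspinal muscle ROI used for noise
print(f"CNR ~ {contrast_to_noise_ratio(duct, background, muscle):.1f}")
```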
Collapse
Affiliation(s)
- Keigo Narita
- Diagnostic Radiology, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima, 734-8551, Japan
| | - Yuko Nakamura
- Diagnostic Radiology, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima, 734-8551, Japan.
| | - Toru Higaki
- Diagnostic Radiology, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima, 734-8551, Japan
| | - Motonori Akagi
- Diagnostic Radiology, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima, 734-8551, Japan
| | - Yukiko Honda
- Diagnostic Radiology, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima, 734-8551, Japan
| | - Kazuo Awai
- Diagnostic Radiology, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima, 734-8551, Japan
| |
Collapse
|
22
|
Arif M, Schoots IG, Castillo Tovar J, Bangma CH, Krestin GP, Roobol MJ, Niessen W, Veenland JF. Clinically significant prostate cancer detection and segmentation in low-risk patients using a convolutional neural network on multi-parametric MRI. Eur Radiol 2020; 30:6582-92. [PMID: 32594208 DOI: 10.1007/s00330-020-07008-z] [Citation(s) in RCA: 39] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2020] [Revised: 04/20/2020] [Accepted: 06/04/2020] [Indexed: 11/08/2022]
Abstract
Objectives To develop an automatic method for identification and segmentation of clinically significant prostate cancer in low-risk patients and to evaluate the performance in a routine clinical setting. Methods A consecutive cohort (n = 292) from a prospective database of low-risk patients eligible for active surveillance was selected. A 3-T multi-parametric MRI at 3 months after inclusion was performed. Histopathology from biopsies was used as reference standard. MRI positivity was defined as PI-RADS score ≥ 3; histopathology positivity was defined as ISUP grade ≥ 2. The selected cohort contained four patient groups: (1) MRI-positive targeted biopsy-positive (n = 116), (2) MRI-negative systematic biopsy-negative (n = 55), (3) MRI-positive targeted biopsy-negative (n = 113), (4) MRI-negative systematic biopsy-positive (n = 8). Group 1 was further divided into three sets and a 3D convolutional neural network was trained using different combinations of these sets. Two MRI sequences (T2w, b = 800 DWI) and the ADC map were used as separate input channels for the model. After training, the model was evaluated on the remaining group 1 patients together with the patients of groups 2 and 3 to identify and segment clinically significant prostate cancer. Results The average sensitivity achieved was 82–92% at an average specificity of 43–76% with an area under the curve (AUC) of 0.65 to 0.89 for different lesion volumes ranging from > 0.03 to > 0.5 cc. Conclusions The proposed deep learning computer-aided method yields promising results in identification and segmentation of clinically significant prostate cancer and in confirming low-risk cancer (ISUP grade ≤ 1) in patients on active surveillance. Key Points • Clinically significant prostate cancer identification and segmentation on multi-parametric MRI is feasible in low-risk patients using a deep neural network. • The deep neural network for significant prostate cancer localization performs better for lesions with larger volumes (> 0.5 cc) as compared to small lesions (> 0.03 cc). • For the evaluation of automatic prostate cancer segmentation methods in the active surveillance cohort, the large discordance group (MRI positive, targeted biopsy negative) should be included. Electronic supplementary material The online version of this article (10.1007/s00330-020-07008-z) contains supplementary material, which is available to authorized users.
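The abstract states that the T2w series, the b = 800 DWI series, and the ADC map were fed to the 3D network as separate input channels. A minimal sketch of that data arrangement is shown below; the z-score normalisation step and the array shapes are assumptions for illustration, not details from the paper.

```python
import numpy as np

def zscore(volume: np.ndarray) -> np.ndarray:
    """Per-volume z-score normalisation before feeding the network."""
    return (volume - volume.mean()) / (volume.std() + 1e-8)

# Hypothetical co-registered volumes for one patient: (depth, height, width).
t2w = np.random.rand(24, 128, 128).astype(np.float32)
dwi_b800 = np.random.rand(24, 128, 128).astype(np.float32)
adc = np.random.rand(24, 128, 128).astype(np.float32)

# Stack the three sequences as input channels for a 3D CNN.
x = np.stack([zscore(t2w), zscore(dwi_b800), zscore(adc)], axis=0)
print(x.shape)  # (3, 24, 128, 128): channels, depth, height, width
```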
Collapse
|
23
|
Thüring J, Rippel O, Haarburger C, Merhof D, Schad P, Bruners P, Kuhl CK, Truhn D. Multiphase CT-based prediction of Child-Pugh classification: a machine learning approach. Eur Radiol Exp 2020; 4:20. [PMID: 32249336 PMCID: PMC7131973 DOI: 10.1186/s41747-020-00148-3] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2019] [Accepted: 02/18/2020] [Indexed: 12/22/2022] Open
Abstract
Background To evaluate whether machine learning algorithms allow the prediction of Child-Pugh classification on clinical multiphase computed tomography (CT). Methods A total of 259 patients who underwent diagnostic abdominal CT (unenhanced, contrast-enhanced arterial, and venous phases) were included in this retrospective study. Child-Pugh scores were determined based on laboratory and clinical parameters. Linear regression (LR), Random Forest (RF), and convolutional neural network (CNN) algorithms were used to predict the Child-Pugh class. Their performances were compared to the prediction of experienced radiologists (ERs). Spearman correlation coefficients and accuracy were assessed for all predictive models. Additionally, a binary classification into low disease severity (Child-Pugh class A) and advanced disease severity (Child-Pugh class ≥ B) was performed. Results Eleven imaging features exhibited a significant correlation with Child-Pugh class when adjusted for multiple comparisons. Significant correlations between predicted and measured Child-Pugh classes were observed (ρLR = 0.35, ρRF = 0.32, ρCNN = 0.51, ρERs = 0.60; p < 0.001). Significantly better accuracies for the prediction of Child-Pugh classes versus the no-information rate were found for CNN and ERs (p ≤ 0.034), but not for LR and RF (p ≥ 0.384). For binary severity classification, the area under the curve at receiver operating characteristic analysis was significantly lower (p ≤ 0.042) for LR (0.71) and RF (0.69) than for CNN (0.80) and ERs (0.76), without significant differences between CNN and ERs (p = 0.144). Conclusions The performance of a CNN in assessing Child-Pugh class based on multiphase abdominal CT images is comparable to that of ERs.
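The correlation figures above are Spearman coefficients between predicted and measured Child-Pugh classes. A minimal sketch of that evaluation, including the binary split into class A versus class B or worse, is given below with made-up labels rather than study data.

```python
import numpy as np
from scipy.stats import spearmanr

# Made-up measured Child-Pugh classes (A=0, B=1, C=2) and model predictions.
measured = np.array([0, 0, 1, 2, 1, 0, 2, 1, 0, 1])
predicted = np.array([0, 1, 1, 2, 0, 0, 2, 1, 1, 1])

rho, p_value = spearmanr(measured, predicted)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")

# Binary severity: class A versus class >= B.
accuracy = ((measured >= 1) == (predicted >= 1)).mean()
print(f"binary severity accuracy = {accuracy:.2f}")
```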
Collapse
Affiliation(s)
- Johannes Thüring
- Department of Diagnostic and Interventional Radiology, RWTH Aachen University Hospital, Pauwelsstraße 30, 52072, Aachen, Germany.
| | - Oliver Rippel
- Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany
| | - Christoph Haarburger
- Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany
| | - Dorit Merhof
- Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany
| | - Philipp Schad
- Department of Diagnostic and Interventional Radiology, RWTH Aachen University Hospital, Pauwelsstraße 30, 52072, Aachen, Germany
| | - Philipp Bruners
- Department of Diagnostic and Interventional Radiology, RWTH Aachen University Hospital, Pauwelsstraße 30, 52072, Aachen, Germany
| | - Christiane K Kuhl
- Department of Diagnostic and Interventional Radiology, RWTH Aachen University Hospital, Pauwelsstraße 30, 52072, Aachen, Germany
| | - Daniel Truhn
- Department of Diagnostic and Interventional Radiology, RWTH Aachen University Hospital, Pauwelsstraße 30, 52072, Aachen, Germany.,Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany
| |
Collapse
|
24
|
Liu W, Liu M, Guo X, Zhang P, Zhang L, Zhang R, Kang H, Zhai Z, Tao X, Wan J, Xie S. Evaluation of acute pulmonary embolism and clot burden on CTPA with deep learning. Eur Radiol 2020; 30:3567-3575. [PMID: 32064559 DOI: 10.1007/s00330-020-06699-8] [Citation(s) in RCA: 33] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2019] [Revised: 01/03/2020] [Accepted: 01/31/2020] [Indexed: 11/29/2022]
Abstract
OBJECTIVES To take advantage of the deep learning algorithms to detect and calculate clot burden of acute pulmonary embolism (APE) on computed tomographic pulmonary angiography (CTPA). MATERIALS AND METHODS The training set in this retrospective study consisted of 590 patients (460 with APE and 130 without APE) who underwent CTPA. A fully deep learning convolutional neural network (DL-CNN), called U-Net, was trained for the segmentation of clot. Additionally, an in-house validation set consisted of 288 patients (186 with APE and 102 without APE). In this study, we set different probability thresholds to test the performance of U-Net for the clot detection and selected sensitivity, specificity, and area under the curve (AUC) as the metrics of performance evaluation. Furthermore, we investigated the relationship between the clot burden assessed by the Qanadli score, Mastora score, and other imaging parameters on CTPA and the clot burden calculated by the DL-CNN model. RESULTS There was no statistically significant difference in AUCs with the different probability thresholds. When the probability threshold for segmentation was 0.1, the sensitivity and specificity of U-Net in detecting clot respectively were 94.6% and 76.5% while the AUC was 0.926 (95% CI 0.884-0.968). Moreover, this study displayed that the clot burden measured with U-Net was significantly correlated with the Qanadli score (r = 0.819, p < 0.001), Mastora score (r = 0.874, p < 0.001), and right ventricular functional parameters on CTPA. CONCLUSIONS DL-CNN achieved a high AUC for the detection of pulmonary emboli and can be applied to quantitatively calculate the clot burden of APE patients, which may contribute to reducing the workloads of clinicians. KEY POINTS • Deep learning can detect APE with a good performance and efficiently calculate the clot burden to reduce the physicians' workload. • Clot burden measured with deep learning highly correlates with Qanadli and Mastora scores of CTPA. • Clot burden measured with deep learning correlates with parameters of right ventricular function on CTPA.
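The clot-detection step above binarises the U-Net probability map at a chosen threshold (0.1 in the reported experiment). A minimal sketch of thresholding a probability map and converting the segmented voxels to a clot volume follows; the synthetic map, the voxel size, and the millilitre conversion are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def clot_volume_ml(prob_map, voxel_volume_mm3, threshold=0.1):
    """Binarise a per-voxel clot probability map at the given threshold and
    return the segmented clot volume in millilitres."""
    return (prob_map >= threshold).sum() * voxel_volume_mm3 / 1000.0

# Synthetic probability map standing in for a U-Net output on CTPA.
rng = np.random.default_rng(42)
prob = rng.random((64, 64, 64)) * 0.05   # mostly low-probability background
prob[20:28, 20:28, 20:28] = 0.9          # one high-probability "clot"
print(f"clot volume ~ {clot_volume_ml(prob, voxel_volume_mm3=0.5):.2f} mL")
```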
Collapse
Affiliation(s)
- Weifang Liu
- Peking University Health Science Center, Beijing, 100871, China.,Department of Radiology, China-Japan Friendship Hospital, 2 Yinghua Dong Street, Hepingli, Chao Yang District, Beijing, 100029, China
| | - Min Liu
- Department of Radiology, China-Japan Friendship Hospital, 2 Yinghua Dong Street, Hepingli, Chao Yang District, Beijing, 100029, China.
| | - Xiaojuan Guo
- Department of Radiology, Beijing Chaoyang Hospital of Capital Medical University, Beijing, 100019, China
| | - Peiyao Zhang
- Department of Radiology, China-Japan Friendship Hospital, 2 Yinghua Dong Street, Hepingli, Chao Yang District, Beijing, 100029, China
| | - Ling Zhang
- Department of Radiology, China-Japan Friendship Hospital, 2 Yinghua Dong Street, Hepingli, Chao Yang District, Beijing, 100029, China
| | - Rongguo Zhang
- Artificial Intelligence Scholar Center, Infervision, Beijing, 100025, China
| | - Han Kang
- Artificial Intelligence Scholar Center, Infervision, Beijing, 100025, China
| | - Zhenguo Zhai
- Department of Pulmonary and Critical Care Medicine, China-Japan Friendship Hospital, Beijing, 100029, China
| | - Xincao Tao
- Department of Pulmonary and Critical Care Medicine, China-Japan Friendship Hospital, Beijing, 100029, China
| | - Jun Wan
- Department of Pulmonary and Critical Care Medicine, China-Japan Friendship Hospital, Beijing, 100029, China
| | - Sheng Xie
- Department of Radiology, China-Japan Friendship Hospital, 2 Yinghua Dong Street, Hepingli, Chao Yang District, Beijing, 100029, China.
| |
Collapse
|
25
|
Brugnara G, Isensee F, Neuberger U, Bonekamp D, Petersen J, Diem R, Wildemann B, Heiland S, Wick W, Bendszus M, Maier-Hein K, Kickingereder P. Automated volumetric assessment with artificial neural networks might enable a more accurate assessment of disease burden in patients with multiple sclerosis. Eur Radiol 2020; 30:2356-2364. [PMID: 31900702 DOI: 10.1007/s00330-019-06593-y] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2019] [Revised: 11/09/2019] [Accepted: 11/13/2019] [Indexed: 11/25/2022]
Abstract
OBJECTIVES Patients with multiple sclerosis (MS) regularly undergo MRI for assessment of disease burden. However, interpretation may be time consuming and prone to intra- and interobserver variability. Here, we evaluate the potential of artificial neural networks (ANN) for automated volumetric assessment of MS disease burden and activity on MRI. METHODS A single-institutional dataset with 334 MS patients (334 MRI exams) was used to develop and train an ANN for automated identification and volumetric segmentation of T2/FLAIR-hyperintense and contrast-enhancing (CE) lesions. Independent testing was performed in a single-institutional longitudinal dataset with 82 patients (266 MRI exams). We evaluated lesion detection performance (F1 scores), lesion segmentation agreement (DICE coefficients), and lesion volume agreement (concordance correlation coefficients [CCC]). Independent evaluation was performed on the public ISBI-2015 challenge dataset. RESULTS The F1 score was maximized in the training set at a detection threshold of 7 mm3 for T2/FLAIR lesions and 14 mm3 for CE lesions. In the training set, mean F1 scores were 0.867 for T2/FLAIR lesions and 0.636 for CE lesions, as compared to 0.878 for T2/FLAIR lesions and 0.715 for CE lesions in the test set. Using these thresholds, the ANN yielded mean DICE coefficients of 0.834 and 0.878 for segmentation of T2/FLAIR and CE lesions in the training set (fivefold cross-validation). Corresponding DICE coefficients in the test set were 0.846 for T2/FLAIR lesions and 0.908 for CE lesions, and the CCC was ≥ 0.960 in each dataset. CONCLUSIONS Our results highlight the capability of ANN for quantitative state-of-the-art assessment of volumetric lesion load on MRI and potentially enable a more accurate assessment of disease burden in patients with MS. KEY POINTS • Artificial neural networks (ANN) can accurately detect and segment both T2/FLAIR and contrast-enhancing MS lesions in MRI data. • Performance of the ANN was consistent in a clinically derived dataset, with patients presenting all possible disease stages in MRI scans acquired from standard clinical routine rather than with high-quality research sequences. • Computer-aided evaluation of MS with ANN could streamline both clinical and research procedures in the volumetric assessment of MS disease burden as well as in lesion detection.
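Lesion-wise F1 with a minimum detection volume, as used above (7 mm³ for T2/FLAIR and 14 mm³ for CE lesions), can be sketched with connected-component labelling. The overlap-based matching rule below is an assumption, since the abstract does not spell out how predicted and reference lesions were paired; the toy volumes are synthetic.

```python
import numpy as np
from scipy import ndimage

def lesion_f1(pred_mask, ref_mask, voxel_mm3, min_volume_mm3):
    """Lesion-wise F1: predicted components below the minimum volume are
    discarded; a kept prediction counts as a true positive if it overlaps
    any reference lesion (the matching rule is an assumption)."""
    pred_lbl, n_pred = ndimage.label(pred_mask)
    ref_lbl, n_ref = ndimage.label(ref_mask)
    kept = [i for i in range(1, n_pred + 1)
            if (pred_lbl == i).sum() * voxel_mm3 >= min_volume_mm3]
    tp = sum(1 for i in kept if (ref_lbl[pred_lbl == i] > 0).any())
    fp = len(kept) - tp
    hit = {int(r) for i in kept for r in np.unique(ref_lbl[pred_lbl == i]) if r > 0}
    fn = n_ref - len(hit)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Toy volumes: one matched lesion, one sub-threshold detection, one missed lesion.
pred = np.zeros((20, 20, 20), dtype=bool)
ref = np.zeros_like(pred)
pred[2:5, 2:5, 2:5] = True       # overlaps a reference lesion
pred[10, 10, 10] = True          # 1 mm^3 detection, filtered out at 7 mm^3
ref[2:6, 2:6, 2:6] = True
ref[15:18, 15:18, 15:18] = True  # reference lesion the model missed
print(f"lesion-wise F1 = {lesion_f1(pred, ref, voxel_mm3=1.0, min_volume_mm3=7.0):.2f}")
```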
Collapse
Affiliation(s)
- Gianluca Brugnara
- Department of Neuroradiology, University of Heidelberg Medical Center, Heidelberg, Germany
| | - Fabian Isensee
- Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Ulf Neuberger
- Department of Neuroradiology, University of Heidelberg Medical Center, Heidelberg, Germany
| | - David Bonekamp
- Department of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Jens Petersen
- Department of Neuroradiology, University of Heidelberg Medical Center, Heidelberg, Germany
- Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Ricarda Diem
- Department of Neurology, University of Heidelberg Medical Center, Heidelberg, Germany
| | - Brigitte Wildemann
- Department of Neurology, University of Heidelberg Medical Center, Heidelberg, Germany
| | - Sabine Heiland
- Department of Neuroradiology, University of Heidelberg Medical Center, Heidelberg, Germany
| | - Wolfgang Wick
- Department of Neurology, University of Heidelberg Medical Center, Heidelberg, Germany
- Clinical Cooperation Unit Neurooncology, German Cancer Consortium (DKTK), DKFZ, Heidelberg, Germany
| | - Martin Bendszus
- Department of Neuroradiology, University of Heidelberg Medical Center, Heidelberg, Germany
| | - Klaus Maier-Hein
- Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Philipp Kickingereder
- Department of Neuroradiology, University of Heidelberg Medical Center, Heidelberg, Germany.
| |
Collapse
|
26
|
Fritz B, Marbach G, Civardi F, Fucentese SF, Pfirrmann CWA. Deep convolutional neural network-based detection of meniscus tears: comparison with radiologists and surgery as standard of reference. Skeletal Radiol 2020; 49:1207-17. [PMID: 32170334 DOI: 10.1007/s00256-020-03410-2] [Citation(s) in RCA: 34] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/28/2019] [Revised: 02/11/2020] [Accepted: 03/01/2020] [Indexed: 02/02/2023]
Abstract
OBJECTIVE To clinically validate a fully automated deep convolutional neural network (DCNN) for detection of surgically proven meniscus tears. MATERIALS AND METHODS One hundred consecutive patients were retrospectively included, who underwent knee MRI and knee arthroscopy in our institution. All MRI were evaluated for medial and lateral meniscus tears by two musculoskeletal radiologists independently and by DCNN. Included patients were not part of the training set of the DCNN. Surgical reports served as the standard of reference. Statistics included sensitivity, specificity, accuracy, ROC curve analysis, and kappa statistics. RESULTS Fifty-seven percent (57/100) of patients had a tear of the medial and 24% (24/100) of the lateral meniscus, including 12% (12/100) with a tear of both menisci. For medial meniscus tear detection, sensitivity, specificity, and accuracy were for reader 1: 93%, 91%, and 92%, for reader 2: 96%, 86%, and 92%, and for the DCNN: 84%, 88%, and 86%. For lateral meniscus tear detection, sensitivity, specificity, and accuracy were for reader 1: 71%, 95%, and 89%, for reader 2: 67%, 99%, and 91%, and for the DCNN: 58%, 92%, and 84%. Sensitivity for medial meniscus tears was significantly different between reader 2 and the DCNN (p = 0.039), and no significant differences existed for all other comparisons (all p ≥ 0.092). The AUC-ROC of the DCNN was 0.882, 0.781, and 0.961 for detection of medial, lateral, and overall meniscus tear. Inter-reader agreement was very good for the medial (kappa = 0.876) and good for the lateral meniscus (kappa = 0.741). CONCLUSION DCNN-based meniscus tear detection can be performed in a fully automated manner with a similar specificity but a lower sensitivity in comparison with musculoskeletal radiologists.
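Inter-reader agreement above is reported as Cohen's kappa. A minimal sketch of the statistic on made-up tear/no-tear ratings (not the study data) is:

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two readers' categorical ratings."""
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    expected = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n)
                   for c in categories)
    return (observed - expected) / (1 - expected)

# Made-up ratings for 10 knees: 1 = medial meniscus tear, 0 = no tear.
reader1 = [1, 1, 1, 0, 0, 1, 0, 1, 1, 0]
reader2 = [1, 1, 0, 0, 0, 1, 0, 1, 1, 1]
print(f"kappa = {cohens_kappa(reader1, reader2):.2f}")
```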
Collapse
|
27
|
Hasenstab KA, Cunha GM, Higaki A, Ichikawa S, Wang K, Delgado T, Brunsing RL, Schlein A, Bittencourt LK, Schwartzman A, Fowler KJ, Hsiao A, Sirlin CB. Fully automated convolutional neural network-based affine algorithm improves liver registration and lesion co-localization on hepatobiliary phase T1-weighted MR images. Eur Radiol Exp 2019; 3:43. [PMID: 31655943 PMCID: PMC6815316 DOI: 10.1186/s41747-019-0120-7] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2019] [Accepted: 08/28/2019] [Indexed: 12/22/2022] Open
Abstract
BACKGROUND Liver alignment between series/exams is challenged by dynamic morphology or variability in patient positioning or motion. Image registration can improve image interpretation and lesion co-localization. We assessed the performance of a convolutional neural network algorithm to register cross-sectional liver imaging series and compared its performance to manual image registration. METHODS Three hundred fourteen patients, including internal and external datasets, who underwent gadoxetate disodium-enhanced magnetic resonance imaging for clinical care from 2011 to 2018, were retrospectively selected. Automated registration was applied to all 2,663 within-patient series pairs derived from these datasets. Additionally, 100 within-patient series pairs from the internal dataset were independently manually registered by expert readers. Liver overlap, image correlation, and intra-observation distances for manual versus automated registrations were compared using paired t tests. Influence of patient demographics, imaging characteristics, and liver uptake function was evaluated using univariate and multivariate mixed models. RESULTS Compared to the manual, automated registration produced significantly lower intra-observation distance (p < 0.001) and higher liver overlap and image correlation (p < 0.001). Intra-exam automated registration achieved 0.88 mean liver overlap and 0.44 mean image correlation for the internal dataset and 0.91 and 0.41, respectively, for the external dataset. For inter-exam registration, mean overlap was 0.81 and image correlation 0.41. Older age, female sex, greater inter-series time interval, differing uptake, and greater voxel size differences independently reduced automated registration performance (p ≤ 0.020). CONCLUSION A fully automated algorithm accurately registered the liver within and between examinations, yielding better liver and focal observation co-localization compared to manual registration.
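The algorithm above predicts an affine transform that aligns one liver series to another. Without reproducing the network itself, a minimal sketch of applying an affine matrix and offset to a volume and checking image correlation before and after alignment is shown below; the volumes, the known shift, and the use of scipy's affine_transform are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy import ndimage

def apply_affine(volume, matrix, offset):
    """Resample a volume with a 3x3 matrix and translation offset, standing in
    for the transform predicted by a registration network."""
    return ndimage.affine_transform(volume, matrix, offset=offset, order=1)

def correlation(a, b):
    """Pearson correlation of voxel intensities between two series."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

# Toy 'fixed' volume and a 'moving' volume shifted by a known translation.
rng = np.random.default_rng(0)
fixed = ndimage.gaussian_filter(rng.random((48, 48, 48)), sigma=3)
moving = np.roll(fixed, shift=(4, -3, 2), axis=(0, 1, 2))

# An affine "prediction" that undoes the known shift (up to border effects).
registered = apply_affine(moving, np.eye(3), offset=np.array([4.0, -3.0, 2.0]))
print(f"correlation before: {correlation(fixed, moving):.2f}, "
      f"after: {correlation(fixed, registered):.2f}")
```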
Collapse
Affiliation(s)
- Kyle A Hasenstab
- Liver Imaging Group, Department of Radiology, University of California San Diego, La Jolla, CA, USA
- AiDA Laboratory, Department of Radiology, University of California San Diego, La Jolla, CA, USA
| | - Guilherme Moura Cunha
- Liver Imaging Group, Department of Radiology, University of California San Diego, La Jolla, CA, USA.
- Altman Clinical Translational Research Institute, 9452 Medical Center Drive, Lower Level 501, La Jolla, CA, 92037, USA.
| | - Atsushi Higaki
- Liver Imaging Group, Department of Radiology, University of California San Diego, La Jolla, CA, USA
| | - Shintaro Ichikawa
- Liver Imaging Group, Department of Radiology, University of California San Diego, La Jolla, CA, USA
| | - Kang Wang
- Liver Imaging Group, Department of Radiology, University of California San Diego, La Jolla, CA, USA
- AiDA Laboratory, Department of Radiology, University of California San Diego, La Jolla, CA, USA
| | - Timo Delgado
- Liver Imaging Group, Department of Radiology, University of California San Diego, La Jolla, CA, USA
| | - Ryan L Brunsing
- Department of Radiology, Stanford University, Palo Alto, CA, USA
| | - Alexandra Schlein
- Liver Imaging Group, Department of Radiology, University of California San Diego, La Jolla, CA, USA
| | - Leonardo Kayat Bittencourt
- Abdominal and Pelvic MRI, Radiology, CDPI Clinics, DASA Company, Fluminense Federal University (UFF), Rio de Janeiro, Brazil
| | - Armin Schwartzman
- Department of Family Medicine and Public Health, University of California San Diego, La Jolla, CA, USA
| | - Katie J Fowler
- Liver Imaging Group, Department of Radiology, University of California San Diego, La Jolla, CA, USA
| | - Albert Hsiao
- AiDA Laboratory, Department of Radiology, University of California San Diego, La Jolla, CA, USA
| | - Claude B Sirlin
- Liver Imaging Group, Department of Radiology, University of California San Diego, La Jolla, CA, USA
| |
Collapse
|
28
|
Yao Y, Cifuentes J, Zheng B, Yan M. Computer algorithm can match physicians' decisions about blood transfusions. J Transl Med 2019; 17:340. [PMID: 31601245 PMCID: PMC6785926 DOI: 10.1186/s12967-019-2085-y] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2019] [Accepted: 09/23/2019] [Indexed: 01/11/2023] Open
Abstract
BACKGROUND Checking the appropriateness of blood transfusion for quality assurance requires a considerable amount of time and human resources from the healthcare system. We report a new machine learning algorithm for checking blood transfusion quality. MATERIALS AND METHODS A multilayer perceptron neural network (MLPNN) was designed to learn an expert's judgement from 4946 clinical cases. Its accuracy in predicting the blood transfusion decision was then assessed. RESULTS We achieved a 96.8% overall accuracy rate, with a 99% match rate to the experts' judgement on appropriate cases and 90.9% on inappropriate cases. CONCLUSIONS A machine learning algorithm can accurately match human judgement when fed pre-surgical information and key laboratory variables.
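A multilayer perceptron of the kind described above can be sketched with a generic classifier; the feature set, network size, and data below are synthetic stand-ins, since the abstract does not specify the architecture or inputs.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-ins for pre-surgical information and key laboratory variables;
# the real feature set and labels are not specified in the abstract.
rng = np.random.default_rng(0)
X = rng.normal(size=(4946, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=4946) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)
print(f"accuracy on held-out cases: {clf.score(X_test, y_test):.3f}")
```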
Collapse
Affiliation(s)
- Yuanyuan Yao
- Department of Anesthesiology, The Second Affiliated Hospital of Zhejiang University, Hangzhou, China
| | - Jenny Cifuentes
- Program of Electrical Engineering, Universidad De La Salle, Bogotá, Colombia
| | - Bin Zheng
- Department of Surgery, University of Alberta, Edmonton, Canada
| | - Min Yan
- Department of Anesthesiology, the Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, 310009, China.
| |
Collapse
|
29
|
Yu JB, Zhang TZ, Xu DY, Li KY. [Study on the method of microelectrodes implantation of artificial facial nerve prosthesis in closed mouth of orbicularis oris muscle in monkeys with facial nerve paralysis]. Zhonghua Kou Qiang Yi Xue Za Zhi 2019; 54:670-675. [PMID: 31607002 DOI: 10.3760/cma.j.issn.1002-0098.2019.10.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
Objective: To explore the optimal method of microelectrode implantation that can produce effective mouth closure of the orbicularis oris muscle (OOM) in rhesus monkeys with unilateral peripheral facial paralysis (UPFP), in order to provide a basis for the research and development of an artificial facial nerve prosthesis (AFNP). Methods: A right-sided peripheral facial paralysis model was prepared in four healthy rhesus monkeys (two males and two females, aged 5-6 years, weighing 2.0-3.0 kg). AFNP electrical stimulation with a one-way rectangular pulse (50 Hz frequency, 0.2 ms pulse width) was used in vitro to induce a closed-mouth reaction of the affected OOM. Around the affected side of the OOM, four stimulation electrode implantation positions were selected: the upper lip (position A), the lower lip (position B), the line connecting the corner of the mouth to the ipsilateral tragus (position C), and the horizontal line of the mouth angle (position D). According to the different implantation positions of the three stimulation electrodes on the stimulation side of the AFNP and the results of our previous study, six groups of microelectrode implantation methods were designed. In Group A, two microelectrodes were implanted at position A and one microelectrode at position B; in Group B, one microelectrode was implanted at each of positions A, B, and C; in Group C, one microelectrode was implanted at position A and two microelectrodes at position B; in Group D, one microelectrode was implanted at each of positions A, B, and D; in Group E, one microelectrode was implanted at each of positions A, C, and D; in Group F, one microelectrode was implanted at each of positions B, C, and D. The minimum stimulating current (threshold current) required for effective mouth closure was recorded. The threshold and peak current values were compared using one-way ANOVA and LSD-t multiple comparisons. Results: The microelectrodes of the AFNP stimulating side in Groups E and F failed to induce a smooth mouth closure. The microelectrodes in Groups A, B, C, and D induced smooth mouth closure. The threshold current values of OOM contraction on the affected side in Groups A, B, C, and D were (1.35±0.05), (1.02±0.04), (1.40±0.04), and (1.10±0.02) mA, respectively (F=295.302, P<0.001), with the lowest value in Group B; the value in Group B differed significantly from those in the other groups (all P<0.05). The peak current values of OOM contraction on the affected side in the four groups were (3.95±0.02), (2.95±0.03), (3.99±0.05), and (3.51±0.01) mA, respectively (F=1014.985, P<0.001). Group B showed the best lip-closure morphology on visual inspection. Conclusions: When the three output microelectrodes of the AFNP-stimulated side are separately embedded into the upper lip, the lower lip, and the line connecting the corner of the mouth to the ipsilateral tragus, the AFNP can sufficiently induce a closed-mouth reaction. These positions are suitable as first-choice implantation positions for the microelectrodes of the AFNP-stimulated side.
Collapse
Affiliation(s)
- J B Yu
- Department of Otorhinolaryngology Head and Neck Surgery, Shanghai General Hospital of Nanjing Medical University, Shanghai 200080, China (now working in the Department of Otorhinolaryngology, Affiliated Hospital of Yangzhou University, Yangzhou 225001, China)
| | | | - D Y Xu
- Department of Otorhinolaryngology Head and Neck Surgery, Shanghai General Hospital of Nanjing Medical University, Shanghai 200080, China (now working in the Department of Otorhinolaryngology, Affiliated Hospital of Chifeng University, Chifeng 024050, China)
| | | |
Collapse
|
30
|
Qi LL, Wu BT, Tang W, Zhou LN, Huang Y, Zhao SJ, Liu L, Li M, Zhang L, Feng SC, Hou DH, Zhou Z, Li XL, Wang YZ, Wu N, Wang JW. Long-term follow-up of persistent pulmonary pure ground-glass nodules with deep learning-assisted nodule segmentation. Eur Radiol 2019; 30:744-755. [PMID: 31485837 DOI: 10.1007/s00330-019-06344-z] [Citation(s) in RCA: 46] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2019] [Revised: 05/16/2019] [Accepted: 06/27/2019] [Indexed: 12/19/2022]
Abstract
OBJECTIVE To investigate the natural history of persistent pulmonary pure ground-glass nodules (pGGNs) with deep learning-assisted nodule segmentation. METHODS Between January 2007 and October 2018, 110 pGGNs from 110 patients with 573 follow-up CT scans were included in this retrospective study. pGGN automatic segmentation was performed on initial and all follow-up CT scans using the Dr. Wise system based on convolutional neural networks. Subsequently, pGGN diameter, density, volume, mass, volume doubling time (VDT), and mass doubling time (MDT) were calculated automatically. Enrolled pGGNs were categorized into growth (n = 52, 47.3%) and non-growth (n = 58, 52.7%) groups according to volume growth. Kaplan-Meier analyses with the log-rank test and Cox proportional hazards regression analysis were conducted to analyze the cumulative percentages of pGGN growth and identify risk factors for growth. RESULTS The mean follow-up period of the enrolled pGGNs was 48.7 ± 23.8 months. The median VDT of the 52 pGGNs having grown was 1448 (range, 339-8640) days, and their median MDT was 1332 (range, 290-38,912) days. The 12-month, 24.7-month, and 60.8-month cumulative percentages of pGGN growth were 10%, 25.5%, and 51.1%, respectively, and they significantly differed among the initial diameter, volume, and mass subgroups (all p < 0.001). The growth pattern of pGGNs may conform to the exponential model. Lobulated sign (p = 0.044), initial mean diameter (p < 0.001), volume (p = 0.003), and mass (p = 0.023) predicted pGGN growth. CONCLUSIONS Persistent pGGNs showed an indolent course. Deep learning can assist in accurately elucidating the natural history of pGGNs. pGGNs with lobulated sign and larger initial diameter, volume, and mass are more likely to grow. KEY POINTS • The pure ground-glass nodule (pGGN) segmentation accuracy of the Dr. Wise system based on convolutional neural networks (CNNs) was 96.5% (573/594). • The median volume doubling time (VDT) of 52 pure ground-glass nodules (pGGNs) having grown was 1448 days (range, 339-8640 days), and their median mass doubling time (MDT) was 1332 days (range, 290-38,912 days). The mean time to growth in volume was 854 ± 675 days (range, 116-2856 days). • The 12-month, 24.7-month, and 60.8-month cumulative percentages of pGGN growth were 10%, 25.5%, and 51.1%, respectively, and they significantly differed among the initial diameter, volume, and mass subgroups (all p values < 0.001). The growth pattern of pure ground-glass nodules may conform to an exponential model.
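The cumulative percentages of nodule growth quoted above are Kaplan-Meier estimates. A minimal sketch of the estimator on hypothetical follow-up times (growth as the event, last CT as censoring; not data from the study) is:

```python
import numpy as np

def kaplan_meier(months, event):
    """Kaplan-Meier estimate of the growth-free probability S(t);
    the cumulative percentage of nodule growth at t is 1 - S(t)."""
    order = np.argsort(months)
    times, events = months[order], event[order]
    survival, out = 1.0, []
    for t in np.unique(times[events == 1]):
        at_risk = (times >= t).sum()
        d = ((times == t) & (events == 1)).sum()
        survival *= 1.0 - d / at_risk
        out.append((t, survival))
    return out

# Hypothetical follow-up: months to growth (event=1) or to last CT (event=0).
t = np.array([6, 12, 12, 18, 24, 30, 36, 48, 60, 60], dtype=float)
e = np.array([1, 1, 0, 1, 0, 1, 0, 1, 0, 0])
for ti, si in kaplan_meier(t, e):
    print(f"{ti:5.0f} months: cumulative growth = {1 - si:.1%}")
```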
Collapse
Affiliation(s)
- Lin-Lin Qi
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17 Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Bo-Tong Wu
- School of Electronic Engineering and Computer Science, Peking University, No. 5 Yiheyuan Rd., Haidian District, Beijing, 100871, China.,Peng Cheng Laboratory, Vanke Cloud City Phase I Building 8, Xili Street, Nanshan District, Shenzhen, 518055, Guangdong, China.,Deepwise AI Lab, Deepwise Inc., No. 8 Haidian avenue, Sinosteel International Plaza, Beijing, 100080, China
| | - Wei Tang
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17 Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Li-Na Zhou
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17 Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Yao Huang
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17 Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Shi-Jun Zhao
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17 Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Li Liu
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17 Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Meng Li
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17 Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Li Zhang
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17 Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Shi-Chao Feng
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17 Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Dong-Hui Hou
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17 Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
| | - Zhen Zhou
- School of Electronic Engineering and Computer Science, Peking University, No. 5 Yiheyuan Rd., Haidian District, Beijing, 100871, China; Peng Cheng Laboratory, Vanke Cloud City Phase I Building 8, Xili Street, Nanshan District, Shenzhen, 518055, Guangdong, China; Deepwise AI Lab, Deepwise Inc., No. 8 Haidian Avenue, Sinosteel International Plaza, Beijing, 100080, China
| | - Xiu-Li Li
- Peng Cheng Laboratory, Vanke Cloud City Phase I Building 8, Xili Street, Nanshan District, Shenzhen, 518055, Guangdong, China; Deepwise AI Lab, Deepwise Inc., No. 8 Haidian Avenue, Sinosteel International Plaza, Beijing, 100080, China
| | - Yi-Zhou Wang
- School of Electronic Engineering and Computer Science, Peking University, No. 5 Yiheyuan Rd., Haidian District, Beijing, 100871, China; Peng Cheng Laboratory, Vanke Cloud City Phase I Building 8, Xili Street, Nanshan District, Shenzhen, 518055, Guangdong, China; Deepwise AI Lab, Deepwise Inc., No. 8 Haidian Avenue, Sinosteel International Plaza, Beijing, 100080, China
| | - Ning Wu
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17 Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China; PET-CT Center, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17 Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China.
| | - Jian-Wei Wang
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17 Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China.
| |
Collapse
|
31
|
Su G, Wen J, Zhu Z, Liu Z, Zhao W, Sun X, Hu G, Xie G. An Approach of Integrating Domain Knowledge into Data-Driven Diagnostic Model. Stud Health Technol Inform 2019; 264:1594-1595. [PMID: 31438248 DOI: 10.3233/shti190551] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
A diagnostic model for general diseases could help general practitioners decrease misdiagnoses and reduce workload. In this paper, we developed a neural network model that classifies potential diagnoses among 100 selected common diseases based on ambulatory health care data. We propose a novel approach to integrating domain knowledge into neural network training. The evaluation results show that our model outperforms the baseline model in terms of knowledge consistency and model generalization.
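The abstract does not describe how the domain knowledge is encoded. A minimal sketch of one common pattern, shown purely as an assumption: add a rule-consistency penalty to the usual classification loss. All names here (DiagnosisNet, knowledge_penalty, rule_mask) are hypothetical, not from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DiagnosisNet(nn.Module):
    # Hypothetical feed-forward classifier mapping encounter features to 100 diseases
    def __init__(self, n_features: int, n_diseases: int = 100):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, 256), nn.ReLU(),
                                 nn.Linear(256, n_diseases))

    def forward(self, x):
        return self.net(x)

def knowledge_penalty(logits, rule_mask):
    # rule_mask[i, j] = 1 when diagnosis j contradicts the knowledge base for case i;
    # the penalty is the probability mass placed on contradicted diagnoses
    return (torch.softmax(logits, dim=1) * rule_mask).sum(dim=1).mean()

def total_loss(logits, targets, rule_mask, lam: float = 0.1):
    # Cross-entropy plus a weighted knowledge-consistency term
    return F.cross_entropy(logits, targets) + lam * knowledge_penalty(logits, rule_mask)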
Collapse
Affiliation(s)
- Guanxu Su
- Ping An Health Technology, Beijing, China
| | | | | | - Zhuo Liu
- Ping An Health Technology, Beijing, China
| | - Wei Zhao
- Ping An Health Technology, Beijing, China
| | | | - Gang Hu
- Ping An Health Technology, Beijing, China
| | | |
Collapse
|
32
|
Le Berre A, Kamagata K, Otsuka Y, Andica C, Hatano T, Saccenti L, Ogawa T, Takeshige-Amano H, Wada A, Suzuki M, Hagiwara A, Irie R, Hori M, Oyama G, Shimo Y, Umemura A, Hattori N, Aoki S. Convolutional neural network-based segmentation can help in assessing the substantia nigra in neuromelanin MRI. Neuroradiology 2019; 61:1387-1395. [PMID: 31401723 PMCID: PMC6848644 DOI: 10.1007/s00234-019-02279-w] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2019] [Accepted: 08/01/2019] [Indexed: 12/24/2022]
Abstract
Purpose This study aimed to evaluate the accuracy and diagnostic test performance of the U-net-based segmentation method in neuromelanin magnetic resonance imaging (NM-MRI) compared to the established manual segmentation method for Parkinson's disease (PD) diagnosis. Methods NM-MRI datasets from two different 3-T scanners were used: a "principal dataset" with 122 participants and an "external validation dataset" with 24 participants, including 62 and 12 PD patients, respectively. Two radiologists performed manual segmentation of the substantia nigra pars compacta (SNpc). Inter-reader precision was determined using Dice coefficients. The U-net was trained with manual segmentation as ground truth, and Dice coefficients were used to measure its accuracy. Training and validation were performed on the principal dataset using a 4-fold cross-validation method. We then tested the U-net on the external validation dataset. SNpc hyperintense areas were estimated from the U-net and manual segmentation masks, replicating a previously validated thresholding method, and their diagnostic test performances for PD were determined. Results For SNpc segmentation, U-net accuracy was comparable to inter-reader precision in the principal dataset (Dice coefficient: U-net, 0.83 ± 0.04; inter-reader, 0.83 ± 0.04), but lower in the external validation dataset (Dice coefficient: U-net, 0.79 ± 0.04; inter-reader, 0.85 ± 0.03). Diagnostic test performances for PD were comparable between the U-net and manual segmentation methods in both the principal (area under the receiver operating characteristic curve: U-net, 0.950; manual, 0.948) and external (U-net, 0.944; manual, 0.931) datasets. Conclusion U-net segmentation provided relatively high accuracy in the evaluation of the SNpc in NM-MRI and yielded diagnostic performance comparable to that of the established manual method. Electronic supplementary material The online version of this article (10.1007/s00234-019-02279-w) contains supplementary material, which is available to authorized users.
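The Dice coefficient used throughout the abstract above has a compact definition; a minimal NumPy sketch (variable names are illustrative):

import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    # Dice similarity between two binary segmentation masks of the same shape
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    total = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / total if total > 0 else 1.0

# Example with toy 2 x 2 masks; in practice these would be U-net and manual SNpc masks
print(dice_coefficient(np.array([[1, 1], [0, 0]]), np.array([[1, 0], [0, 0]])))  # ~0.667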
Collapse
Affiliation(s)
- Alice Le Berre
- Department of Radiology, Juntendo University Graduate School of Medicine, 2-1-1, Hongo, Bunkyo-ku, Tokyo, 113-8421, Japan; Department of Radiology, Université Paris Descartes, 12 rue de l'Ecole de Medecine, 75006, Paris, France
| | - Koji Kamagata
- Department of Radiology, Juntendo University Graduate School of Medicine, 2-1-1, Hongo, Bunkyo-ku, Tokyo, 113-8421, Japan.
| | - Yujiro Otsuka
- Department of Radiology, Juntendo University Graduate School of Medicine, 2-1-1, Hongo, Bunkyo-ku, Tokyo, 113-8421, Japan; Milliman Inc., Tokyo, Japan
| | - Christina Andica
- Department of Radiology, Juntendo University Graduate School of Medicine, 2-1-1, Hongo, Bunkyo-ku, Tokyo, 113-8421, Japan
| | - Taku Hatano
- Department of Neurology, Juntendo University School of Medicine, Tokyo, Japan
| | - Laetitia Saccenti
- Department of Radiology, Juntendo University Graduate School of Medicine, 2-1-1, Hongo, Bunkyo-ku, Tokyo, 113-8421, Japan; Department of Radiology, Université Paris Descartes, 12 rue de l'Ecole de Medecine, 75006, Paris, France
| | - Takashi Ogawa
- Department of Neurology, Juntendo University School of Medicine, Tokyo, Japan
| | | | - Akihiko Wada
- Department of Radiology, Juntendo University Graduate School of Medicine, 2-1-1, Hongo, Bunkyo-ku, Tokyo, 113-8421, Japan
| | - Michimasa Suzuki
- Department of Radiology, Juntendo University Graduate School of Medicine, 2-1-1, Hongo, Bunkyo-ku, Tokyo, 113-8421, Japan
| | - Akifumi Hagiwara
- Department of Radiology, Juntendo University Graduate School of Medicine, 2-1-1, Hongo, Bunkyo-ku, Tokyo, 113-8421, Japan
| | - Ryusuke Irie
- Department of Radiology, Juntendo University Graduate School of Medicine, 2-1-1, Hongo, Bunkyo-ku, Tokyo, 113-8421, Japan
| | - Masaaki Hori
- Department of Radiology, Juntendo University Graduate School of Medicine, 2-1-1, Hongo, Bunkyo-ku, Tokyo, 113-8421, Japan
| | - Genko Oyama
- Department of Neurology, Juntendo University School of Medicine, Tokyo, Japan
| | - Yashushi Shimo
- Department of Neurology, Juntendo University School of Medicine, Tokyo, Japan
| | - Atsushi Umemura
- Department of Neurosurgery, Juntendo University School of Medicine, Tokyo, Japan
| | - Nobutaka Hattori
- Department of Neurology, Juntendo University School of Medicine, Tokyo, Japan
| | - Shigeki Aoki
- Department of Radiology, Juntendo University Graduate School of Medicine, 2-1-1, Hongo, Bunkyo-ku, Tokyo, 113-8421, Japan
| |
Collapse
|
33
|
Fleury E, Marcomini K. Performance of machine learning software to classify breast lesions using BI-RADS radiomic features on ultrasound images. Eur Radiol Exp 2019; 3:34. [PMID: 31385114 PMCID: PMC6682836 DOI: 10.1186/s41747-019-0112-7] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2019] [Accepted: 07/02/2019] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND The purpose of this work was to evaluate computable Breast Imaging Reporting and Data System (BI-RADS) radiomic features to classify breast masses on ultrasound B-mode images. METHODS The database consisted of 206 consecutive lesions (144 benign and 62 malignant) proven by percutaneous biopsy in a prospective study approved by the local ethics committee. A radiologist manually delineated the contour of the lesions on greyscale images. We extracted the ten main radiomic features based on the BI-RADS lexicon and classified the lesions as benign or malignant using a bottom-up approach for five machine learning (ML) methods: multilayer perceptron (MLP), decision tree (DT), linear discriminant analysis (LDA), random forest (RF), and support vector machine (SVM). We performed 10-fold cross-validation for training and testing of all classifiers. Receiver operating characteristic (ROC) analysis was used to provide the area under the curve (AUC) with 95% confidence intervals (CI). RESULTS The classifier with the highest AUC at ROC analysis was SVM (AUC = 0.840, 95% CI 0.6667-0.9762), with 71.4% sensitivity (95% CI 0.6479-0.8616) and 76.9% specificity (95% CI 0.6148-0.8228). The best AUC for each method was 0.744 (95% CI 0.677-0.774) for DT, 0.818 (95% CI 0.6667-0.9444) for LDA, 0.811 (95% CI 0.710-0.892) for RF, and 0.806 (95% CI 0.677-0.839) for MLP. Lesion margin and orientation were the optimal features for all the machine learning methods. CONCLUSIONS ML can aid the distinction between benign and malignant breast lesions on ultrasound images using quantified BI-RADS descriptors. SVM provided the highest ROC-AUC (0.840).
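A minimal scikit-learn sketch of the workflow described above (ten features, SVM, 10-fold cross-validation, ROC-AUC). The feature matrix and labels below are random placeholders, and the paper's actual software and hyperparameters are not specified in the abstract:

import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(206, 10))      # placeholder: one row per lesion, ten BI-RADS-derived features
y = rng.integers(0, 2, size=206)    # placeholder labels: 0 = benign, 1 = malignant

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
probs = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]
print("cross-validated ROC-AUC:", roc_auc_score(y, probs))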
Collapse
Affiliation(s)
- Eduardo Fleury
- Instituto Brasileiro de Controle do Câncer (IBCC), São Paulo, Brazil; Centro Universitário São Camilo, Curso de Medicina, São Paulo, Brazil.
| | | |
Collapse
|
34
|
Mu CC, Li G. [Research progress in medical imaging based on deep learning of neural network]. Zhonghua Kou Qiang Yi Xue Za Zhi 2019; 54:492-7. [PMID: 31288331 DOI: 10.3760/cma.j.issn.1002-0098.2019.07.011] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
The development of computer hardware has allowed the rapid accumulation of medical imaging data. Deep learning has shown great potential in medical imaging data analysis and has established itself as a new area of machine learning. This paper first introduces the commonly used deep learning models and then summarizes the applications of deep learning in the detection, classification, diagnosis, segmentation, and identification tasks of medical imaging. Potential applications of deep learning in oral and maxillofacial radiology and other disciplines of stomatology are then proposed. Finally, the paper discusses open problems of deep learning in medical imaging research.
Collapse
|
35
|
Akagi M, Nakamura Y, Higaki T, Narita K, Honda Y, Zhou J, Yu Z, Akino N, Awai K. Deep learning reconstruction improves image quality of abdominal ultra-high-resolution CT. Eur Radiol 2019; 29:6163-6171. [PMID: 30976831 DOI: 10.1007/s00330-019-06170-3] [Citation(s) in RCA: 211] [Impact Index Per Article: 42.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2019] [Revised: 02/22/2019] [Accepted: 03/14/2019] [Indexed: 12/12/2022]
Abstract
OBJECTIVES Deep learning reconstruction (DLR) is a new reconstruction method; it introduces deep convolutional neural networks into the reconstruction flow. This study was conducted to examine the clinical applicability of abdominal ultra-high-resolution CT (U-HRCT) exams reconstructed with a new DLR in comparison to hybrid and model-based iterative reconstruction (hybrid-IR, MBIR). METHODS Our retrospective study included 46 patients seen between December 2017 and April 2018. A radiologist recorded the standard deviation of attenuation in the paraspinal muscle as the image noise and calculated the contrast-to-noise ratio (CNR) for the aorta, portal vein, and liver. The overall image quality was assessed by two other radiologists and graded on a 5-point confidence scale ranging from 1 (unacceptable) to 5 (excellent). Differences among CT images reconstructed with hybrid-IR, MBIR, and DLR were compared. RESULTS The image noise was significantly lower and the CNR significantly higher on DLR images than on hybrid-IR and MBIR images (p < 0.01). DLR images received the highest and MBIR images the lowest scores for overall image quality. CONCLUSIONS DLR improved the quality of abdominal U-HRCT images. KEY POINTS • The potential image degradation due to increased noise may prevent implementation of ultra-high-resolution CT in the abdomen. • Image noise and overall image quality for hepatic ultra-high-resolution CT images improved with deep learning reconstruction as compared to hybrid- and model-based iterative reconstruction.
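The noise and CNR measurements described in the methods reduce to simple region-of-interest arithmetic. A minimal sketch assuming the common definition in which the paraspinal-muscle standard deviation serves as the noise term (the study's exact formula is not given in the abstract; numbers are illustrative):

def cnr(roi_mean_hu: float, muscle_mean_hu: float, muscle_sd_hu: float) -> float:
    # Contrast-to-noise ratio: (ROI attenuation - muscle attenuation) / image noise
    return (roi_mean_hu - muscle_mean_hu) / muscle_sd_hu

aorta_hu, muscle_hu, noise_sd = 180.0, 55.0, 12.0   # illustrative HU measurements
print(cnr(aorta_hu, muscle_hu, noise_sd))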
Collapse
Affiliation(s)
- Motonori Akagi
- Diagnostic Radiology, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima, Japan
| | - Yuko Nakamura
- Diagnostic Radiology, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima, Japan.
| | - Toru Higaki
- Diagnostic Radiology, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima, Japan
| | - Keigo Narita
- Diagnostic Radiology, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima, Japan
| | - Yukiko Honda
- Diagnostic Radiology, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima, Japan
| | - Jian Zhou
- Canon Medical Research USA, Inc., Vernon Hills, IL, USA
| | - Zhou Yu
- Canon Medical Research USA, Inc., Vernon Hills, IL, USA
| | | | - Kazuo Awai
- Diagnostic Radiology, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima, Japan
| |
Collapse
|
36
|
Öman O, Mäkelä T, Salli E, Savolainen S, Kangasniemi M. 3D convolutional neural networks applied to CT angiography in the detection of acute ischemic stroke. Eur Radiol Exp 2019; 3:8. [PMID: 30758694 PMCID: PMC6374492 DOI: 10.1186/s41747-019-0085-6] [Citation(s) in RCA: 30] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2018] [Accepted: 01/04/2019] [Indexed: 12/23/2022] Open
Abstract
Background The aim of this study was to investigate the feasibility of ischemic stroke detection from computed tomography angiography source images (CTA-SI) using three-dimensional convolutional neural networks. Methods CTA-SI of 60 patients with a suspected acute ischemic stroke of the middle cerebral artery were randomly selected for this study; 30 patients were used for neural network training, and the subsequent testing was performed on the remaining 30 patients. Training and testing were based on manually segmented lesions. Cerebral hemispheric comparison CTA and non-contrast computed tomography (NCCT) were studied as additional input features. Results All ischemic lesions in the testing data were correctly lateralized, and a high correspondence to manual segmentations was achieved. Patients with a diagnosed stroke had clinically relevant regions labeled as infarcted with a sensitivity of 0.93 and a specificity of 0.82. The highest voxel-wise area under the receiver operating characteristic curve achieved was 0.93, and the highest Dice similarity coefficient was 0.61. When cerebral hemispheric comparison was used as an input feature, the algorithm's performance improved. Only a slight effect was seen when NCCT was included. Conclusion The results support the hypothesis that an acute ischemic stroke lesion can be detected with 3D convolutional neural network-based software from CTA-SI. Utilizing information from the contralateral hemisphere appears to be beneficial for reducing false-positive findings.
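As a rough illustration of how multi-channel inputs (CTA-SI plus, e.g., a mirrored-hemisphere comparison channel and NCCT) feed a voxel-wise 3D CNN, here is a minimal PyTorch sketch. It is not the architecture used in the study, only the general pattern:

import torch
import torch.nn as nn

class Tiny3DSegNet(nn.Module):
    # Minimal 3D CNN producing a per-voxel lesion probability map
    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 1, kernel_size=1),
        )

    def forward(self, x):                    # x: (batch, channels, depth, height, width)
        return torch.sigmoid(self.body(x))

vol = torch.randn(1, 3, 32, 64, 64)          # toy multi-channel volume
print(Tiny3DSegNet()(vol).shape)             # torch.Size([1, 1, 32, 64, 64])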
Collapse
Affiliation(s)
- Olli Öman
- HUS Medical Imaging Center, Radiology, University of Helsinki and Helsinki University Hospital, P.O. Box 340 (Haartmaninkatu 4), FI-00290, Helsinki, Finland.
| | - Teemu Mäkelä
- HUS Medical Imaging Center, Radiology, University of Helsinki and Helsinki University Hospital, P.O. Box 340 (Haartmaninkatu 4), FI-00290, Helsinki, Finland; Department of Physics, University of Helsinki, P.O. Box 64, FI-00014, Helsinki, Finland
| | - Eero Salli
- HUS Medical Imaging Center, Radiology, University of Helsinki and Helsinki University Hospital, P.O. Box 340 (Haartmaninkatu 4), FI-00290, Helsinki, Finland
| | - Sauli Savolainen
- HUS Medical Imaging Center, Radiology, University of Helsinki and Helsinki University Hospital, P.O. Box 340 (Haartmaninkatu 4), FI-00290, Helsinki, Finland; Department of Physics, University of Helsinki, P.O. Box 64, FI-00014, Helsinki, Finland
| | - Marko Kangasniemi
- HUS Medical Imaging Center, Radiology, University of Helsinki and Helsinki University Hospital, P.O. Box 340 (Haartmaninkatu 4), FI-00290, Helsinki, Finland
| |
Collapse
|
37
|
Lee Y, Ragguett RM, Mansur RB, Boutilier JJ, Rosenblat JD, Trevizol A, Brietzke E, Lin K, Pan Z, Subramaniapillai M, Chan TCY, Fus D, Park C, Musial N, Zuckerman H, Chen VCH, Ho R, Rong C, McIntyre RS. Applications of machine learning algorithms to predict therapeutic outcomes in depression: A meta-analysis and systematic review. J Affect Disord 2018; 241:519-532. [PMID: 30153635 DOI: 10.1016/j.jad.2018.08.073] [Citation(s) in RCA: 136] [Impact Index Per Article: 22.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/06/2018] [Revised: 07/12/2018] [Accepted: 08/12/2018] [Indexed: 02/07/2023]
Abstract
BACKGROUND No previous study has comprehensively reviewed the application of machine learning algorithms in mood disorder populations. Herein, we qualitatively and quantitatively evaluate previous studies of machine learning-devised models that predict therapeutic outcomes in mood disorder populations. METHODS We searched Ovid MEDLINE/PubMed from inception to February 8, 2018 for relevant studies that included adults with bipolar or unipolar depression; assessed therapeutic outcomes with a pharmacological, neuromodulatory, or manual-based psychotherapeutic intervention for depression; applied a machine learning algorithm; and reported predictors of therapeutic response. A random-effects meta-analysis of proportions and meta-regression analyses were conducted. RESULTS We identified 639 records: 75 full-text publications were assessed for eligibility; 26 studies (n = 17,499) and 20 studies (n = 6,325) were included in the qualitative and quantitative reviews, respectively. Classification algorithms were able to predict therapeutic outcomes with an overall accuracy of 0.82 (95% confidence interval [CI] [0.77, 0.87]). Pooled estimates of classification accuracy were significantly greater (p < 0.01) in models informed by multiple data types (e.g., a composite of phenomenological patient features and neuroimaging or peripheral gene expression data; pooled proportion [95% CI] = 0.93 [0.86, 0.97]) when compared to models with lower-dimension data types (pooled proportions = 0.68 [0.62, 0.74] to 0.85 [0.81, 0.88]). LIMITATIONS Most studies were retrospective; there were differences in the machine learning algorithms and their implementation (e.g., cross-validation, hyperparameter tuning); and the importance of individual variables fed into the learning algorithms could not be inferred. CONCLUSIONS Machine learning algorithms provide a powerful conceptual and analytic framework capable of integrating multiple data types and sources. An integrative approach may more effectively model neurobiological components as functional modules of pathophysiology embedded within the complex, social dynamics that influence the phenomenology of mental disorders.
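A minimal sketch of a random-effects meta-analysis of proportions. The paper's exact model is not specified in the abstract beyond "random-effects"; this assumes the common DerSimonian-Laird estimator on the logit scale, with toy correct/total counts per study:

import numpy as np

def pooled_proportion_random_effects(events, totals):
    # DerSimonian-Laird random-effects pooling of proportions on the logit scale
    events = np.asarray(events, dtype=float)
    totals = np.asarray(totals, dtype=float)
    p = events / totals
    y = np.log(p / (1 - p))                      # logit-transformed proportions
    v = 1.0 / events + 1.0 / (totals - events)   # approximate within-study variances
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)      # between-study variance
    w_star = 1.0 / (v + tau2)
    y_pooled = np.sum(w_star * y) / np.sum(w_star)
    return 1.0 / (1.0 + np.exp(-y_pooled))       # back-transform to a proportion

print(pooled_proportion_random_effects([40, 75, 120], [50, 90, 150]))  # toy studies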
Collapse
Affiliation(s)
- Yena Lee
- Institute of Medical Science, University of Toronto, Toronto, Canada; Mood Disorders Psychopharmacology Unit, University Health Network, Toronto, Canada; Brain and Cognition Discovery Foundation, Toronto, Canada
| | - Renee-Marie Ragguett
- Mood Disorders Psychopharmacology Unit, University Health Network, Toronto, Canada; Brain and Cognition Discovery Foundation, Toronto, Canada
| | - Rodrigo B Mansur
- Mood Disorders Psychopharmacology Unit, University Health Network, Toronto, Canada; Department of Psychiatry, University of Toronto, Toronto, Canada; Brain and Cognition Discovery Foundation, Toronto, Canada
| | - Justin J Boutilier
- Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, Canada
| | - Joshua D Rosenblat
- Mood Disorders Psychopharmacology Unit, University Health Network, Toronto, Canada; Department of Psychiatry, University of Toronto, Toronto, Canada
| | - Alisson Trevizol
- Mood Disorders Psychopharmacology Unit, University Health Network, Toronto, Canada
| | - Elisa Brietzke
- Mood Disorders Psychopharmacology Unit, University Health Network, Toronto, Canada; Department of Psychiatry, Federal University of Sao Paulo, Sao Paulo, Brazil
| | - Kangguang Lin
- Laboratory of Emotion and Cognition, Department of Affective Disorders, Affiliated Brain Hospital of Guangzhou Medical University, Guangzhou, China; Department of Neuropsychology, University of Hong Kong, Hong Kong, China
| | - Zihang Pan
- Institute of Medical Science, University of Toronto, Toronto, Canada; Mood Disorders Psychopharmacology Unit, University Health Network, Toronto, Canada; Brain and Cognition Discovery Foundation, Toronto, Canada
| | - Mehala Subramaniapillai
- Mood Disorders Psychopharmacology Unit, University Health Network, Toronto, Canada; Brain and Cognition Discovery Foundation, Toronto, Canada
| | - Timothy C Y Chan
- Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, Canada
| | - Dominika Fus
- Mood Disorders Psychopharmacology Unit, University Health Network, Toronto, Canada; Brain and Cognition Discovery Foundation, Toronto, Canada
| | - Caroline Park
- Institute of Medical Science, University of Toronto, Toronto, Canada; Mood Disorders Psychopharmacology Unit, University Health Network, Toronto, Canada; Brain and Cognition Discovery Foundation, Toronto, Canada
| | - Natalie Musial
- Mood Disorders Psychopharmacology Unit, University Health Network, Toronto, Canada; Brain and Cognition Discovery Foundation, Toronto, Canada
| | - Hannah Zuckerman
- Mood Disorders Psychopharmacology Unit, University Health Network, Toronto, Canada; Brain and Cognition Discovery Foundation, Toronto, Canada
| | - Vincent Chin-Hung Chen
- School of Medicine, Chang Gung University, Taoyuan, Taiwan; Department of Psychiatry, Chang Gung Memorial Hospital, Chiayi, Taiwan
| | - Roger Ho
- Department of Psychological Medicine, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
| | - Carola Rong
- Mood Disorders Psychopharmacology Unit, University Health Network, Toronto, Canada; Brain and Cognition Discovery Foundation, Toronto, Canada
| | - Roger S McIntyre
- Institute of Medical Science, University of Toronto, Toronto, Canada; Mood Disorders Psychopharmacology Unit, University Health Network, Toronto, Canada; Brain and Cognition Discovery Foundation, Toronto, Canada; Department of Psychiatry, University of Toronto, Toronto, Canada; Department of Pharmacology, University of Toronto, Toronto, Canada.
| |
Collapse
|
38
|
Pesapane F, Codari M, Sardanelli F. Artificial intelligence in medical imaging: threat or opportunity? Radiologists again at the forefront of innovation in medicine. Eur Radiol Exp 2018; 2:35. [PMID: 30353365 PMCID: PMC6199205 DOI: 10.1186/s41747-018-0061-6] [Citation(s) in RCA: 298] [Impact Index Per Article: 49.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2018] [Accepted: 07/31/2018] [Indexed: 02/08/2023] Open
Abstract
One of the most promising areas of health innovation is the application of artificial intelligence (AI), primarily in medical imaging. This article provides basic definitions of terms such as "machine/deep learning" and analyses the integration of AI into radiology. Publications on AI have increased drastically, from about 100–150 per year in 2007–2008 to 700–800 per year in 2016–2017. Magnetic resonance imaging and computed tomography collectively account for more than 50% of current articles. Neuroradiology appears in about one-third of the papers, followed by musculoskeletal, cardiovascular, breast, urogenital, lung/thorax, and abdomen, each representing 6–9% of articles. With an irreversible increase in the amount of data and the possibility of using AI to identify findings that may or may not be detectable by the human eye, radiology is now moving from a subjective perceptual skill to a more objective science. Radiologists, who were at the forefront of the digital era in medicine, can guide the introduction of AI into healthcare. Yet, they will not be replaced, because radiology includes communication of diagnosis, consideration of the patient's values and preferences, medical judgment, quality assurance, education, policy-making, and interventional procedures. The higher efficiency provided by AI will allow radiologists to perform more value-added tasks, becoming more visible to patients and playing a vital role in multidisciplinary clinical teams.
Collapse
Affiliation(s)
- Filippo Pesapane
- Postgraduate School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122, Milan, Italy
| | - Marina Codari
- Unit of Radiology, IRCCS Policlinico San Donato, Via Morandi 30, 20097 San Donato Milanese, Milan, Italy.
| | - Francesco Sardanelli
- Unit of Radiology, IRCCS Policlinico San Donato, Via Morandi 30, 20097 San Donato Milanese, Milan, Italy; Department of Biomedical Sciences for Health, Università degli Studi di Milano, Via Morandi 30, 20097 San Donato Milanese, Milan, Italy
| |
Collapse
|
39
|
Hirasawa T, Aoyama K, Tanimoto T, Ishihara S, Shichijo S, Ozawa T, Ohnishi T, Fujishiro M, Matsuo K, Fujisaki J, Tada T. Application of artificial intelligence using a convolutional neural network for detecting gastric cancer in endoscopic images. Gastric Cancer 2018; 21:653-60. [PMID: 29335825 DOI: 10.1007/s10120-018-0793-2] [Citation(s) in RCA: 373] [Impact Index Per Article: 62.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/03/2017] [Accepted: 01/08/2018] [Indexed: 02/06/2023]
Abstract
BACKGROUND Image recognition using artificial intelligence with deep learning through convolutional neural networks (CNNs) has dramatically improved and been increasingly applied to medical fields for diagnostic imaging. We developed a CNN that can automatically detect gastric cancer in endoscopic images. METHODS A CNN-based diagnostic system was constructed based on Single Shot MultiBox Detector architecture and trained using 13,584 endoscopic images of gastric cancer. To evaluate the diagnostic accuracy, an independent test set of 2296 stomach images collected from 69 consecutive patients with 77 gastric cancer lesions was applied to the constructed CNN. RESULTS The CNN required 47 s to analyze 2296 test images. The CNN correctly diagnosed 71 of 77 gastric cancer lesions with an overall sensitivity of 92.2%, and 161 non-cancerous lesions were detected as gastric cancer, resulting in a positive predictive value of 30.6%. Seventy of the 71 lesions (98.6%) with a diameter of 6 mm or more as well as all invasive cancers were correctly detected. All missed lesions were superficially depressed and differentiated-type intramucosal cancers that were difficult to distinguish from gastritis even for experienced endoscopists. Nearly half of the false-positive lesions were gastritis with changes in color tone or an irregular mucosal surface. CONCLUSION The constructed CNN system for detecting gastric cancer could process numerous stored endoscopic images in a very short time with a clinically relevant diagnostic ability. It may be well applicable to daily clinical practice to reduce the burden of endoscopists.
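The reported sensitivity and positive predictive value follow directly from the detection counts given in the abstract; a short check in Python:

def sensitivity(true_positives: int, false_negatives: int) -> float:
    return true_positives / (true_positives + false_negatives)

def positive_predictive_value(true_positives: int, false_positives: int) -> float:
    return true_positives / (true_positives + false_positives)

# 71 of 77 cancer lesions detected; 161 non-cancerous lesions flagged as cancer
print(f"sensitivity = {sensitivity(71, 6):.1%}")             # 92.2%
print(f"PPV = {positive_predictive_value(71, 161):.1%}")     # 30.6%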
Collapse
|
40
|
Gálvez JA, Jalali A, Ahumada L, Simpao AF, Rehman MA. Neural Network Classifier for Automatic Detection of Invasive Versus Noninvasive Airway Management Technique Based on Respiratory Monitoring Parameters in a Pediatric Anesthesia. J Med Syst 2017; 41:153. [PMID: 28836107 DOI: 10.1007/s10916-017-0787-3] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2017] [Accepted: 07/20/2017] [Indexed: 01/09/2023]
Abstract
Children undergoing general anesthesia require airway monitoring by an anesthesia provider. The airway may be supported with noninvasive devices such as a face mask or with invasive devices such as a laryngeal mask airway or an endotracheal tube. The physiologic data stored provide an opportunity to apply machine learning algorithms to distinguish between these modes based on pattern recognition. We retrieved three data sets from patients receiving general anesthesia in 2015 with either a mask, a laryngeal mask airway, or an endotracheal tube. Patients underwent myringotomy, tonsillectomy, adenoidectomy, or inguinal hernia repair procedures. We retrieved measurements for end-tidal carbon dioxide, tidal volume, and peak inspiratory pressure and calculated statistical features for each data element per patient. We applied machine learning algorithms (decision tree, support vector machine, and neural network) to classify patients into noninvasive or invasive airway device support. We identified 300 patients per group (mask, laryngeal mask airway, and endotracheal tube) for a total of 900 patients. The neural network classifier performed better than the boosted-tree and support vector machine classifiers on the test data sets. The sensitivity, specificity, and accuracy of the neural network classification were 97.5%, 96.3%, and 95.8%, respectively. In contrast, the sensitivity, specificity, and accuracy of the support vector machine were 89.1%, 92.3%, and 88.3%, and with the boosted-tree classifier they were 93.8%, 92.1%, and 91.4%. We describe a method to automatically distinguish between noninvasive and invasive airway device support in a pediatric surgical setting based on respiratory monitoring parameters. The results show that the neural network classifier algorithm can accurately classify noninvasive and invasive airway device support.
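A minimal sketch of the pipeline described above: per-case statistical features from the three respiratory parameters, then a small neural network classifier. This uses scikit-learn with random placeholder data; the study's actual features, architecture, and splits are not given in the abstract:

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def summary_features(signal: np.ndarray) -> list:
    # Simple per-case statistics for one monitored parameter (e.g. end-tidal CO2)
    return [signal.mean(), signal.std(), signal.min(), signal.max(), float(np.median(signal))]

rng = np.random.default_rng(0)
# Placeholder: 900 cases x 3 parameters, each summarised into 5 statistics
X = np.array([np.concatenate([summary_features(rng.normal(size=200)) for _ in range(3)])
              for _ in range(900)])
y = rng.integers(0, 2, size=900)          # placeholder labels: 0 = noninvasive, 1 = invasive

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))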
Collapse
Affiliation(s)
- Jorge A Gálvez
- Section of Biomedical Informatics, Department of Anesthesiology & Critical Care Medicine, The Children's Hospital of Philadelphia, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, 19104, USA.
| | - Ali Jalali
- Section of Biomedical Informatics, Department of Anesthesiology & Critical Care Medicine, The Children's Hospital of Philadelphia, Philadelphia, PA, 19104, USA
| | - Luis Ahumada
- Enterprise Analytics and Reporting, The Children's Hospital of Philadelphia, Philadelphia, PA, 19104, USA
| | - Allan F Simpao
- Section of Biomedical Informatics, Department of Anesthesiology & Critical Care Medicine, The Children's Hospital of Philadelphia, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, 19104, USA
| | - Mohamed A Rehman
- Section of Biomedical Informatics, Department of Anesthesiology & Critical Care Medicine, The Children's Hospital of Philadelphia, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, 19104, USA
| |
Collapse
|
41
|
Mao YT, Chen ZM, Xu L. [The application of artificial neural network on the assessment of lexical tone production of pediatric cochlear implant users]. Zhonghua Er Bi Yan Hou Tou Jing Wai Ke Za Zhi 2017; 52:573-579. [PMID: 28822408 DOI: 10.3760/cma.j.issn.1673-0860.2017.08.003] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
Objective: The present study was carried out to explore the tone production ability of Mandarin-speaking children with cochlear implants (CI) by using an artificial neural network model and to examine the potential contributing factors underlying their tone production performance. The results of this study might provide useful guidelines for the post-operative rehabilitation of pediatric CI users. Methods: Two hundred and seventy-eight prelingually deafened children who had received a unilateral CI participated in this study. As controls, 170 similarly aged children with normal hearing (NH) were recruited. A total of 36 Chinese monosyllabic words were selected as the tone production targets. Vocal production samples were recorded, and the fundamental frequency (F0) contour of each syllable was extracted using an auto-correlation algorithm followed by manual correction. An artificial neural network was created in MATLAB to classify the tone productions. The relationships between tone production and several demographic factors were evaluated. Results: Pediatric CI users produced Mandarin tones much less accurately than did the NH children (58.8% vs. 91.5% correct). Tremendous variability in tone production performance existed among the CI children. Tones 2 and 3 were produced less accurately than tones 1 and 4 in both groups. For the CI group, tones produced in error tended to be judged as tone 1. Tone production accuracy was negatively correlated with age at implantation and positively correlated with duration of CI use, with correlation coefficients (r) of -0.215 (P=0.003) and 0.203 (P=0.005), respectively. Age was one of the determinants of tonal ability for NH children. Conclusions: For children with severe to profound hearing loss, early implantation and persistent use of the CI are beneficial to their tone production development. An artificial neural network is a convenient and reliable tool for assessing the tonal development of hearing-impaired children undergoing rehabilitation focused on speech and language expression.
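The abstract mentions F0 extraction with an auto-correlation algorithm. A generic Python sketch of that step (not the authors' MATLAB implementation; parameter values are illustrative):

import numpy as np

def estimate_f0_autocorrelation(frame: np.ndarray, sample_rate: int,
                                f0_min: float = 75.0, f0_max: float = 500.0) -> float:
    # Estimate the fundamental frequency of a voiced frame from the autocorrelation peak
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min = int(sample_rate / f0_max)
    lag_max = int(sample_rate / f0_min)
    best_lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
    return sample_rate / best_lag

sr = 16000
t = np.arange(0, 0.05, 1 / sr)
print(estimate_f0_autocorrelation(np.sin(2 * np.pi * 220 * t), sr))   # ~220 Hz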
Collapse
Affiliation(s)
- Y T Mao
- Department of Radiology, Xiangya Hospital, Central South University, Changsha 410008, China; School of Rehabilitation and Communication Sciences, Ohio University, Athens, OH 45701, USA
| | - Z M Chen
- Department of Rehabilitation Medicine, Language Disorder Center, the First Affiliated Hospital of Jinan University, Guangzhou 510630, China
| | - L Xu
- School of Rehabilitation and Communication Sciences, Ohio University, Athens, OH 45701, USA
| |
Collapse
|
42
|
Demirci F, Akan P, Kume T, Sisman AR, Erbayraktar Z, Sevinc S. Artificial Neural Network Approach in Laboratory Test Reporting: Learning Algorithms. Am J Clin Pathol 2016; 146:227-37. [PMID: 27473741 DOI: 10.1093/ajcp/aqw104] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
OBJECTIVES In the field of laboratory medicine, minimizing errors and establishing standardization is only possible through predefined processes. The aim of this study was to build an experimental decision algorithm model, open to improvement, that would efficiently and rapidly evaluate the results of biochemical tests with critical values by evaluating multiple factors concurrently. METHODS The experimental model was built with Weka software (Weka, Waikato, New Zealand) based on the artificial neural network method. Data were received from the Dokuz Eylül University Central Laboratory. "Training sets" were developed to teach our experimental model the evaluation criteria. After training the system, "test sets" developed for different conditions were used to statistically assess the validity of the model. RESULTS After developing the decision algorithm with three iterations of training, no result refused by the laboratory specialist was verified by the algorithm. The sensitivity of the model was 91% and its specificity was 100%. The estimated κ score was 0.950. CONCLUSIONS This is the first study based on an artificial neural network to build an experimental assessment and decision algorithm model. By integrating our trained algorithm model into a laboratory information system, it may be possible to reduce employees' workload without compromising patient safety.
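The reported sensitivity, specificity, and κ can be reproduced from a 2x2 confusion matrix of algorithm decisions versus the laboratory specialist's decisions. A short sketch with illustrative counts (the study's confusion matrix is not given in the abstract):

def binary_metrics(tp: int, fp: int, fn: int, tn: int):
    # Sensitivity, specificity, and Cohen's kappa from a 2x2 confusion matrix
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    observed = (tp + tn) / n
    expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
    kappa = (observed - expected) / (1 - expected)
    return sensitivity, specificity, kappa

print(binary_metrics(tp=91, fp=0, fn=9, tn=100))   # illustrative counts only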
Collapse
Affiliation(s)
- Ferhat Demirci
- Clinical Biochemistry Laboratory, Dr Suat Seren Chest Disease and Thoracic Surgery Training and Research Hospital, Izmir, Turkey; Department of Neurosciences, The Institute of Health Sciences
| | - Pinar Akan
- Department of Neurosciences, The Institute of Health Sciences; Department of Biochemistry, Faculty of Medicine
| | - Tuncay Kume
- Department of Biochemistry, Faculty of Medicine
| | | | | | - Suleyman Sevinc
- Department of Computer Engineering, Faculty of Engineering, Dokuz Eylül University, Izmir, Turkey
| |
Collapse
|