1
Hillis JM, Visser JJ, Cliff ERS, van der Geest-Aspers K, Bizzo BC, Dreyer KJ, Adams-Prassl J, Andriole KP. The lucent yet opaque challenge of regulating artificial intelligence in radiology. NPJ Digit Med 2024; 7:69. [PMID: 38491126] [PMCID: PMC10942968] [DOI: 10.1038/s41746-024-01071-2]
Affiliation(s)
- James M Hillis
- Data Science Office, Mass General Brigham, Boston, MA, USA.
- Department of Neurology, Massachusetts General Hospital, Boston, MA, USA.
- Harvard Medical School, Boston, MA, USA.
- Jacob J Visser
- Department of Radiology & Nuclear Medicine, Erasmus Medical Center, Rotterdam, The Netherlands.
- Edward R Scheffer Cliff
- Harvard Medical School, Boston, MA, USA.
- Program on Regulation, Therapeutics and Law, Brigham and Women's Hospital, Boston, MA, USA.
- Bernardo C Bizzo
- Data Science Office, Mass General Brigham, Boston, MA, USA.
- Harvard Medical School, Boston, MA, USA.
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA.
- Keith J Dreyer
- Data Science Office, Mass General Brigham, Boston, MA, USA.
- Harvard Medical School, Boston, MA, USA.
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA.
- Katherine P Andriole
- Data Science Office, Mass General Brigham, Boston, MA, USA.
- Harvard Medical School, Boston, MA, USA.
- Department of Radiology, Brigham and Women's Hospital, Boston, MA, USA.
2
Dasegowda G, Bizzo BC, Gupta RV, Kaviani P, Ebrahimian S, Ricciardelli D, Abedi-Tari F, Neumark N, Digumarthy SR, Kalra MK, Dreyer KJ. Radiologist-Trained AI Model for Identifying Suboptimal Chest-Radiographs. Acad Radiol 2023; 30:2921-2930. [PMID: 37019698] [DOI: 10.1016/j.acra.2023.03.006]
Abstract
RATIONALE AND OBJECTIVES: Suboptimal chest radiographs (CXR) can limit interpretation of critical findings. Radiologist-trained AI models were evaluated for differentiating suboptimal (sCXR) and optimal (oCXR) chest radiographs. MATERIALS AND METHODS: This IRB-approved study included 3278 CXRs from adult patients (mean age 55 ± 20 years) identified from a retrospective search of radiology reports from 5 sites. A chest radiologist reviewed all CXRs for the cause of suboptimality. The de-identified CXRs were uploaded into an AI server application for training and testing 5 AI models. The training set consisted of 2202 CXRs (807 oCXR; 1395 sCXR), and 1076 CXRs (729 sCXR; 347 oCXR) were used for testing. Performance was analyzed with the area under the receiver operating characteristic curve (AUC) for correctly classifying oCXR and sCXR. RESULTS: For the two-class classification into sCXR or oCXR from all sites, AI detected missing anatomy with 78% sensitivity, 95% specificity, 91% accuracy, and an AUC of 0.87 (95% CI 0.82-0.92). AI identified obscured thoracic anatomy with 91% sensitivity, 97% specificity, 95% accuracy, and an AUC of 0.94 (95% CI 0.90-0.97); inadequate exposure with 90% sensitivity, 93% specificity, 92% accuracy, and an AUC of 0.91 (95% CI 0.88-0.95); and low lung volume with 96% sensitivity, 92% specificity, 93% accuracy, and an AUC of 0.94 (95% CI 0.92-0.96). Patient rotation was identified with 92% sensitivity, 96% specificity, 95% accuracy, and an AUC of 0.94 (95% CI 0.91-0.98). CONCLUSION: Radiologist-trained AI models can accurately classify optimal and suboptimal CXRs. Deployed at the front end of radiographic equipment, such models can enable radiographers to repeat sCXRs when necessary.
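The per-class figures reported above follow from standard confusion-matrix definitions. A minimal sketch, using hypothetical counts (not the study's data) sized to the 1076-exam test split:

```python
def binary_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)          # true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts for a two-class (sCXR vs oCXR) test set of
# 729 suboptimal and 347 optimal radiographs.
sens, spec, acc = binary_metrics(tp=660, fp=20, tn=327, fn=69)
```

The AUC reported in the abstract additionally sweeps the classifier's decision threshold, which a single confusion matrix cannot reproduce.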
Affiliation(s)
- Giridhar Dasegowda
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, 75 Blossom Court, Suite 248, Boston, MA 02114; Mass General Brigham Data Science Office (DSO), 100 Cambridge St, Boston, MA 02114, USA
- Bernardo C Bizzo
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, 75 Blossom Court, Suite 248, Boston, MA 02114; Mass General Brigham Data Science Office (DSO), 100 Cambridge St, Boston, MA 02114, USA
- Reya V Gupta
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, 75 Blossom Court, Suite 248, Boston, MA 02114
- Parisa Kaviani
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, 75 Blossom Court, Suite 248, Boston, MA 02114; Mass General Brigham Data Science Office (DSO), 100 Cambridge St, Boston, MA 02114, USA
- Shadi Ebrahimian
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, 75 Blossom Court, Suite 248, Boston, MA 02114; Mass General Brigham Data Science Office (DSO), 100 Cambridge St, Boston, MA 02114, USA
- Debra Ricciardelli
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, 75 Blossom Court, Suite 248, Boston, MA 02114
- Faezeh Abedi-Tari
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, 75 Blossom Court, Suite 248, Boston, MA 02114
- Nir Neumark
- Mass General Brigham Data Science Office (DSO), 100 Cambridge St, Boston, MA 02114, USA
- Subba R Digumarthy
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, 75 Blossom Court, Suite 248, Boston, MA 02114; Mass General Brigham Data Science Office (DSO), 100 Cambridge St, Boston, MA 02114, USA
- Mannudeep K Kalra
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, 75 Blossom Court, Suite 248, Boston, MA 02114; Mass General Brigham Data Science Office (DSO), 100 Cambridge St, Boston, MA 02114, USA
- Keith J Dreyer
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, 75 Blossom Court, Suite 248, Boston, MA 02114; Mass General Brigham Data Science Office (DSO), 100 Cambridge St, Boston, MA 02114, USA
3
Bizzo BC, Dasegowda G, Bridge C, Miller B, Hillis JM, Kalra MK, Durniak K, Stout M, Schultz T, Alkasab T, Dreyer KJ. Addressing the Challenges of Implementing Artificial Intelligence Tools in Clinical Practice: Principles From Experience. J Am Coll Radiol 2023; 20:352-360. [PMID: 36922109] [DOI: 10.1016/j.jacr.2023.01.002]
Abstract
The multitude of artificial intelligence (AI)-based solutions, vendors, and platforms poses a challenging proposition for an already complex clinical radiology practice. Beyond assessing and ensuring acceptable local performance and workflow fit to improve imaging services, moving potentially deployable AI applications to full clinical deployment in a structured and efficient manner requires collaboration among multiple stakeholders, including clinical, technical, and financial ones. Postdeployment monitoring and surveillance of such tools require an infrastructure that ensures proper and safe use. Herein, the authors describe their experience and framework for implementing and supporting the use of AI applications in the radiology workflow.
Affiliation(s)
- Bernardo C Bizzo
- Senior Director, Data Science Office, Mass General Brigham, Boston, Massachusetts; Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts.
- Giridhar Dasegowda
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts; Data Science Office, Mass General Brigham, Boston, Massachusetts.
- Christopher Bridge
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts; Data Science Office, Mass General Brigham, Boston, Massachusetts.
- Benjamin Miller
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts; Data Science Office, Mass General Brigham, Boston, Massachusetts.
- James M Hillis
- Director of Clinical Operations, Data Science Office, Mass General Brigham, Boston, Massachusetts.
- Mannudeep K Kalra
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts; Data Science Office, Mass General Brigham, Boston, Massachusetts; Director, Webster Center for Quality and Safety, Massachusetts General Hospital, Boston, Massachusetts.
- Kimberly Durniak
- Senior Director, Data Science Office, Mass General Brigham, Boston, Massachusetts.
- Markus Stout
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts; Data Science Office, Mass General Brigham, Boston, Massachusetts; Senior Director, Medical Imaging Informatics, Mass General Brigham, Boston, Massachusetts.
- Thomas Schultz
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts; Data Science Office, Mass General Brigham, Boston, Massachusetts; Senior Director, Enterprise Medical Imaging, Mass General Brigham, Boston, Massachusetts.
- Tarik Alkasab
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts; Data Science Office, Mass General Brigham, Boston, Massachusetts; Associate Chair for Enterprise IT/Informatics, Massachusetts General Hospital, Boston, Massachusetts; Co-Medical Director, Medical Imaging Informatics, Mass General Brigham, Boston, Massachusetts.
- Keith J Dreyer
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts; Data Science Office, Mass General Brigham, Boston, Massachusetts; Chief Data Science Officer and Chief Imaging Information Officer, Mass General Brigham, Boston, Massachusetts; Vice Chair of Radiology, Massachusetts General Hospital, Boston, Massachusetts; Chief Science Officer, Data Science Institute, American College of Radiology, Reston, Virginia.
4
Ebrahimian S, Digumarthy SR, Bizzo BC, Dreyer KJ, Kalra MK. Automatic segmentation and measurement of tracheal collapsibility in tracheomalacia. Clin Imaging 2023; 95:47-51. [PMID: 36610270] [DOI: 10.1016/j.clinimag.2022.11.020]
Abstract
PURPOSE: To assess the feasibility of automated segmentation and measurement of tracheal collapsibility for detecting tracheomalacia on inspiratory and expiratory chest CT images. METHODS: The study included 123 patients (age 67 ± 11 years; female:male 69:54) who underwent clinically indicated chest CT examinations in both inspiration and expiration phases. A thoracic radiologist measured the anteroposterior tracheal dimension on inspiration- and expiration-phase images at the level of maximum collapsibility, or at the aortic arch in the absence of luminal change. Another investigator separately processed the inspiratory and expiratory DICOM CT images with the Airway Segmentation component of a commercial COPD software (IntelliSpace Portal, Philips Healthcare). Upon segmentation, the software automatically estimated average lumen diameter (mm) and lumen area (mm²), both along the entire length of the trachea and at the level of the aortic arch. Data were analyzed with independent t-tests and the area under the receiver operating characteristic curve (AUC). RESULTS: Of the 123 patients, 48 had tracheomalacia and 75 did not. The ratios of inspiration- to expiration-phase average lumen area and lumen diameter over the length of the trachea had the highest AUC, 0.93 (95% CI 0.88-0.97), for differentiating the presence and absence of tracheomalacia. A decrease of ≥25% in average lumen diameter had 82% sensitivity and 87% specificity for detecting tracheomalacia; a decrease of ≥40% in average lumen area had sensitivity and specificity of 86%. CONCLUSION: Automatic segmentation and measurement of tracheal dimensions over the entire tracheal length is more accurate than a single-level measurement for detecting tracheomalacia.
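The collapsibility thresholds above reduce to a percent decrease between phases. A minimal sketch of that rule, with hypothetical lumen-area values (the function names and numbers are illustrative, not the study's software):

```python
def percent_decrease(insp_value, exp_value):
    """Percent decrease in a tracheal lumen measurement from inspiration to expiration."""
    return 100.0 * (insp_value - exp_value) / insp_value

def flags_tracheomalacia(insp_area, exp_area, threshold=40.0):
    """Apply the >= threshold %-decrease rule (40% for average lumen area per the abstract)."""
    return percent_decrease(insp_area, exp_area) >= threshold

# Example: mean lumen area 250 mm² at inspiration, 140 mm² at expiration
# -> 44% decrease, above the 40% threshold.
flagged = flags_tracheomalacia(250.0, 140.0)
```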
Affiliation(s)
- Shadi Ebrahimian
- Department of Radiology, Massachusetts General Hospital, 75 Blossom Court, Suite 248, Boston, MA 02114, USA; Harvard Medical School, Boston, MA, USA.
- Subba R Digumarthy
- Department of Radiology, Massachusetts General Hospital, 75 Blossom Court, Suite 248, Boston, MA 02114, USA; Harvard Medical School, Boston, MA, USA.
- Bernardo C Bizzo
- Department of Radiology, Massachusetts General Hospital, 75 Blossom Court, Suite 248, Boston, MA 02114, USA; Harvard Medical School, Boston, MA, USA; MGH & BWH Center for Clinical Data Science, Boston, USA.
- Keith J Dreyer
- Department of Radiology, Massachusetts General Hospital, 75 Blossom Court, Suite 248, Boston, MA 02114, USA; Harvard Medical School, Boston, MA, USA; MGH & BWH Center for Clinical Data Science, Boston, USA.
- Mannudeep K Kalra
- Department of Radiology, Massachusetts General Hospital, 75 Blossom Court, Suite 248, Boston, MA 02114, USA; Harvard Medical School, Boston, MA, USA.
5
Dasegowda G, Bizzo BC, Kaviani P, Karout L, Ebrahimian S, Digumarthy SR, Neumark N, Hillis JM, Kalra MK, Dreyer KJ. Auto-Detection of Motion Artifacts on CT Pulmonary Angiograms with a Physician-Trained AI Algorithm. Diagnostics (Basel) 2023; 13:778. [PMID: 36832266] [PMCID: PMC9955317] [DOI: 10.3390/diagnostics13040778]
Abstract
Purpose: Motion-impaired CT images can result in limited or suboptimal diagnostic interpretation (with missed or miscalled lesions) and patient recall. We trained and tested an artificial intelligence (AI) model to identify substantial motion artifacts on CT pulmonary angiography (CTPA) that negatively affect diagnostic interpretation. Methods: With IRB approval and HIPAA compliance, we queried our multicenter radiology report database (mPower, Nuance) for CTPA reports between July 2015 and March 2022 containing the terms "motion artifacts", "respiratory motion", "technically inadequate", and "suboptimal" or "limited exam". The CTPA reports came from two quaternary sites (Site A, n = 335; Site B, n = 259) and one community healthcare site (Site C, n = 199). A thoracic radiologist reviewed the CT images of all positive hits for motion artifacts (present or absent) and their severity (no diagnostic effect or major diagnostic impairment). Coronal multiplanar images from 793 CTPA exams were de-identified and exported offline into an AI model-building prototype (Cognex Vision Pro, Cognex Corporation) to train an AI model for two-class classification ("motion" or "no motion") with data from the three sites (70% training dataset, n = 554; 30% validation dataset, n = 239). Separately, data from Sites A and C were used for training and validation, with testing performed on the Site B CTPA exams. Five-fold repeated cross-validation was performed to evaluate model performance with accuracy and receiver operating characteristic (ROC) analysis. Results: Among the CTPA images from 793 patients (mean age 63 ± 17 years; 391 males, 402 females), 372 had no motion artifacts and 421 had substantial motion artifacts. The average performance after five-fold repeated cross-validation for the two-class classification was 94% sensitivity, 91% specificity, 93% accuracy, and 0.93 area under the ROC curve (AUC; 95% CI 0.89-0.97).
Conclusion: The AI model successfully identified CTPA exams with motion artifacts that limit diagnostic interpretation, in multicenter training and test datasets. Clinical relevance: The model can alert technologists to substantial motion artifacts on CTPA, where a repeat image acquisition can salvage diagnostic information.
Affiliation(s)
- Giridhar Dasegowda
- Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, USA
- Mass General Brigham Data Science Office, Boston, MA 02114, USA
- Bernardo C. Bizzo
- Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, USA
- Mass General Brigham Data Science Office, Boston, MA 02114, USA
- Parisa Kaviani
- Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, USA
- Mass General Brigham Data Science Office, Boston, MA 02114, USA
- Lina Karout
- Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, USA
- Mass General Brigham Data Science Office, Boston, MA 02114, USA
- Shadi Ebrahimian
- Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, USA
- Subba R. Digumarthy
- Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, USA
- Nir Neumark
- Mass General Brigham Data Science Office, Boston, MA 02114, USA
- James M. Hillis
- Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, USA
- Mass General Brigham Data Science Office, Boston, MA 02114, USA
- Mannudeep K. Kalra
- Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, USA
- Mass General Brigham Data Science Office, Boston, MA 02114, USA
- Keith J. Dreyer
- Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, USA
- Mass General Brigham Data Science Office, Boston, MA 02114, USA
6
Robinson-Weiss C, Patel J, Bizzo BC, Glazer DI, Bridge CP, Andriole KP, Dabiri B, Chin JK, Dreyer K, Kalpathy-Cramer J, Mayo-Smith WW. Machine Learning for Adrenal Gland Segmentation and Classification of Normal and Adrenal Masses at CT. Radiology 2023; 306:e220101. [PMID: 36125375] [DOI: 10.1148/radiol.220101]
Abstract
Background Adrenal masses are common, but radiology reporting and recommendations for management can be variable. Purpose To create a machine learning algorithm to segment adrenal glands on contrast-enhanced CT images and classify glands as normal or mass-containing and to assess algorithm performance. Materials and Methods This retrospective study included two groups of contrast-enhanced abdominal CT examinations (development data set and secondary test set). Adrenal glands in the development data set were manually segmented by radiologists. Images in both the development data set and the secondary test set were manually classified as normal or mass-containing. Deep learning segmentation and classification models were trained on the development data set and evaluated on both data sets. Segmentation performance was evaluated with use of the Dice similarity coefficient (DSC), and classification performance with use of sensitivity and specificity. Results The development data set contained 274 CT examinations (251 patients; median age, 61 years; 133 women), and the secondary test set contained 991 CT examinations (991 patients; median age, 62 years; 578 women). The median model DSC on the development test set was 0.80 (IQR, 0.78-0.89) for normal glands and 0.84 (IQR, 0.79-0.90) for adrenal masses. On the development reader set, the median interreader DSC was 0.89 (IQR, 0.78-0.93) for normal glands and 0.89 (IQR, 0.85-0.97) for adrenal masses. Interreader DSC for radiologist manual segmentation did not differ from automated machine segmentation (P = .35). On the development test set, the model had a classification sensitivity of 83% (95% CI: 55, 95) and specificity of 89% (95% CI: 75, 96). On the secondary test set, the model had a classification sensitivity of 69% (95% CI: 58, 79) and specificity of 91% (95% CI: 90, 92). 
Conclusion: A two-stage machine learning pipeline was able to segment the adrenal glands and differentiate normal adrenal glands from those containing masses. © RSNA, 2022.
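The Dice similarity coefficient (DSC) used above to score segmentation overlap has a compact standard definition: twice the intersection of two masks divided by the sum of their sizes. A minimal sketch over flat binary masks (toy inputs, not the study's data):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks (flat sequences of 0/1)."""
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    size_a, size_b = sum(mask_a), sum(mask_b)
    if size_a + size_b == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / (size_a + size_b)

a = [1, 1, 1, 0, 0]
b = [0, 1, 1, 1, 0]
# dice(a, b) -> 2*2 / (3+3) ≈ 0.67; identical masks give 1.0
```

A DSC of 0.80-0.89, as reported for the adrenal models, therefore indicates substantial but imperfect voxel overlap with the reference segmentation.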
Affiliation(s)
- Cory Robinson-Weiss
- From the Department of Radiology, Brigham and Women's Hospital (BWH), Harvard Medical School, 75 Francis St, Boston, MA 02115 (C.R.W., D.I.G., K.P.A., B.D., W.W.M-S.); Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, Mass (J.P., C.P.B., J. Kalpathy-Cramer); Health Sciences and Technology Department, Massachusetts Institute of Technology, Cambridge, Mass (J.P.); Department of Radiology, Massachusetts General Hospital (MGH), Harvard Medical School, Boston, Mass (B.C.B., K.D.); and MGH & BWH Center for Clinical Data Science, Boston, Mass (B.C.B., C.P.B., K.P.A., J. K. Chin, K.D., J. Kalpathy-Cramer)
- Jay Patel
- Bernardo C Bizzo
- Daniel I Glazer
- Christopher P Bridge
- Katherine P Andriole
- Borna Dabiri
- John K Chin
- Keith Dreyer
- Jayashree Kalpathy-Cramer
- William W Mayo-Smith
7
Haber MA, Biondetti GP, Gauriau R, Comeau DS, Chin JK, Bizzo BC, Strout J, Golby AJ, Andriole KP. Detection of idiopathic normal pressure hydrocephalus on head CT using a deep convolutional neural network. Neural Comput Appl 2023. [DOI: 10.1007/s00521-023-08225-5]
8
Daye D, Wiggins WF, Lungren MP, Alkasab T, Kottler N, Allen B, Roth CJ, Bizzo BC, Durniak K, Brink JA, Larson DB, Dreyer KJ, Langlotz CP. Implementation of Clinical Artificial Intelligence in Radiology: Who Decides and How? Radiology 2022; 305:555-563. [PMID: 35916673] [PMCID: PMC9713445] [DOI: 10.1148/radiol.212151]
Abstract
As the role of artificial intelligence (AI) in clinical practice evolves, governance structures oversee the implementation, maintenance, and monitoring of clinical AI algorithms to enhance quality, manage resources, and ensure patient safety. This article establishes a framework for the infrastructure required for clinical AI implementation and presents a road map for governance. The road map answers four key questions: Who decides which tools to implement? What factors should be considered when assessing an application for implementation? How should applications be implemented in clinical practice? Finally, how should tools be monitored and maintained after clinical implementation? Among the many challenges for the implementation of AI in clinical practice, devising flexible governance structures that can quickly adapt to a changing environment will be essential to ensure quality patient care and practice improvement objectives.
Affiliation(s)
- Dania Daye
- From the Department of Radiology, Massachusetts General Hospital, Harvard Medical School, 55 Fruit St, GRB 297, Boston, MA 02155 (D.D., T.A., B.C.B., K.D., J.A.B., K.J.D.); Department of Radiology, Duke University, Durham, NC (W.F.W., C.J.R.); Department of Radiology, Stanford University, Stanford, Calif (M.P.L., D.B.L., C.P.L.); Radiology Partners, El Segundo, Calif (N.K.); and Department of Radiology, Grandview Medical Center, Birmingham, Ala (B.A.)
- Walter F. Wiggins
- Matthew P. Lungren
- Tarik Alkasab
- Nina Kottler
- Bibb Allen
- Christopher J. Roth
- Bernardo C. Bizzo
- From the Department of Radiology, Massachusetts General Hospital,
Harvard Medical School, 55 Fruit St, GRB 297, Boston, MA 02155 (D.D., T.A.,
B.C.B., K.D., J.A.B., K.J.D.); Department of Radiology, Duke University, Durham,
NC (W.F.W., C.J.R.); Department of Radiology, Stanford University, Stanford,
Calif (M.P.L., D.B.L., C.P.L.); Radiology Partners, El Segundo, Calif (N.K.);
and Department of Radiology, Grandview Medical Center, Birmingham, Ala
(B.A.)
| | - Kimberly Durniak
- From the Department of Radiology, Massachusetts General Hospital,
Harvard Medical School, 55 Fruit St, GRB 297, Boston, MA 02155 (D.D., T.A.,
B.C.B., K.D., J.A.B., K.J.D.); Department of Radiology, Duke University, Durham,
NC (W.F.W., C.J.R.); Department of Radiology, Stanford University, Stanford,
Calif (M.P.L., D.B.L., C.P.L.); Radiology Partners, El Segundo, Calif (N.K.);
and Department of Radiology, Grandview Medical Center, Birmingham, Ala
(B.A.)
| | - James A. Brink
- From the Department of Radiology, Massachusetts General Hospital,
Harvard Medical School, 55 Fruit St, GRB 297, Boston, MA 02155 (D.D., T.A.,
B.C.B., K.D., J.A.B., K.J.D.); Department of Radiology, Duke University, Durham,
NC (W.F.W., C.J.R.); Department of Radiology, Stanford University, Stanford,
Calif (M.P.L., D.B.L., C.P.L.); Radiology Partners, El Segundo, Calif (N.K.);
and Department of Radiology, Grandview Medical Center, Birmingham, Ala
(B.A.)
| | - David B. Larson
- From the Department of Radiology, Massachusetts General Hospital,
Harvard Medical School, 55 Fruit St, GRB 297, Boston, MA 02155 (D.D., T.A.,
B.C.B., K.D., J.A.B., K.J.D.); Department of Radiology, Duke University, Durham,
NC (W.F.W., C.J.R.); Department of Radiology, Stanford University, Stanford,
Calif (M.P.L., D.B.L., C.P.L.); Radiology Partners, El Segundo, Calif (N.K.);
and Department of Radiology, Grandview Medical Center, Birmingham, Ala
(B.A.)
| | | | | |
Collapse
|
9
|
Hillis JM, Bizzo BC, Mercaldo S, Chin JK, Newbury-Chaet I, Digumarthy SR, Gilman MD, Muse VV, Bottrell G, Seah JC, Jones CM, Kalra MK, Dreyer KJ. Evaluation of an Artificial Intelligence Model for Detection of Pneumothorax and Tension Pneumothorax in Chest Radiographs. JAMA Netw Open 2022; 5:e2247172. [PMID: 36520432 PMCID: PMC9856508 DOI: 10.1001/jamanetworkopen.2022.47172] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/23/2022] Open
Abstract
IMPORTANCE Early detection of pneumothorax, most often via chest radiography, can help determine need for emergent clinical intervention. The ability to accurately detect and rapidly triage pneumothorax with an artificial intelligence (AI) model could assist with earlier identification and improve care. OBJECTIVE To compare the accuracy of an AI model vs consensus thoracic radiologist interpretations in detecting any pneumothorax (incorporating both nontension and tension pneumothorax) and tension pneumothorax. DESIGN, SETTING, AND PARTICIPANTS This diagnostic study was a retrospective standalone performance assessment using a data set of 1000 chest radiographs captured between June 1, 2015, and May 31, 2021. The radiographs were obtained from patients aged at least 18 years at 4 hospitals in the Mass General Brigham hospital network in the United States. Included radiographs were selected using 2 strategies from all chest radiography performed at the hospitals, including inpatient and outpatient. The first strategy identified consecutive radiographs with pneumothorax through a manual review of radiology reports, and the second strategy identified consecutive radiographs with tension pneumothorax using natural language processing. For both strategies, negative radiographs were selected by taking the next negative radiograph acquired from the same radiography machine as each positive radiograph. The final data set was an amalgamation of these processes. Each radiograph was interpreted independently by up to 3 radiologists to establish consensus ground-truth interpretations. Each radiograph was then interpreted by the AI model for the presence of pneumothorax and tension pneumothorax. This study was conducted between July and October 2021, with the primary analysis performed between October and November 2021. 
MAIN OUTCOMES AND MEASURES The primary end points were the areas under the receiver operating characteristic curves (AUCs) for the detection of pneumothorax and tension pneumothorax. The secondary end points were the sensitivities and specificities for the detection of pneumothorax and tension pneumothorax. RESULTS The final analysis included radiographs from 985 patients (mean [SD] age, 60.8 [19.0] years; 436 [44.3%] female patients), including 307 patients with nontension pneumothorax, 128 patients with tension pneumothorax, and 550 patients without pneumothorax. The AI model detected any pneumothorax with an AUC of 0.979 (95% CI, 0.970-0.987), sensitivity of 94.3% (95% CI, 92.0%-96.3%), and specificity of 92.0% (95% CI, 89.6%-94.2%) and tension pneumothorax with an AUC of 0.987 (95% CI, 0.980-0.992), sensitivity of 94.5% (95% CI, 90.6%-97.7%), and specificity of 95.3% (95% CI, 93.9%-96.6%). CONCLUSIONS AND RELEVANCE These findings suggest that the assessed AI model accurately detected pneumothorax and tension pneumothorax in this chest radiograph data set. The model's use in the clinical workflow could lead to earlier identification and improved care for patients with pneumothorax.
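The end points reported above (AUC plus sensitivity and specificity at an operating point) can be reproduced from per-radiograph ground-truth labels and model probabilities. A minimal sketch with synthetic stand-in data, since the study's dataset is not public (scikit-learn used purely for illustration):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Synthetic stand-in labels (1 = pneumothorax present) and model
# probabilities; the actual study data are not public.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)
y_prob = np.clip(0.7 * y_true + rng.normal(0.15, 0.2, size=200), 0, 1)

# Area under the ROC curve, as in the primary end point.
auc = roc_auc_score(y_true, y_prob)

# Sensitivity and specificity at a fixed operating point, as in the
# secondary end points.
y_pred = (y_prob >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
```

The threshold of 0.5 is arbitrary here; a deployed model's operating point would be chosen to balance sensitivity and specificity for the triage use case.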
Collapse
Affiliation(s)
- James M. Hillis
- Data Science Office, Mass General Brigham, Boston, Massachusetts
- Department of Neurology, Massachusetts General Hospital, Boston
- Harvard Medical School, Boston, Massachusetts
| | - Bernardo C. Bizzo
- Data Science Office, Mass General Brigham, Boston, Massachusetts
- Harvard Medical School, Boston, Massachusetts
- Department of Radiology, Massachusetts General Hospital, Boston
| | - Sarah Mercaldo
- Data Science Office, Mass General Brigham, Boston, Massachusetts
- Harvard Medical School, Boston, Massachusetts
- Department of Radiology, Massachusetts General Hospital, Boston
| | - John K. Chin
- Data Science Office, Mass General Brigham, Boston, Massachusetts
| | | | - Subba R. Digumarthy
- Harvard Medical School, Boston, Massachusetts
- Department of Radiology, Massachusetts General Hospital, Boston
| | - Matthew D. Gilman
- Harvard Medical School, Boston, Massachusetts
- Department of Radiology, Massachusetts General Hospital, Boston
| | - Victorine V. Muse
- Harvard Medical School, Boston, Massachusetts
- Department of Radiology, Massachusetts General Hospital, Boston
| | | | | | - Catherine M. Jones
- Annalise-AI, Sydney, Australia
- I-MED Radiology Network, Brisbane, Australia
- Faculty of Medicine and Health, University of Sydney, Sydney, Australia
| | - Mannudeep K. Kalra
- Data Science Office, Mass General Brigham, Boston, Massachusetts
- Harvard Medical School, Boston, Massachusetts
- Department of Radiology, Massachusetts General Hospital, Boston
| | - Keith J. Dreyer
- Data Science Office, Mass General Brigham, Boston, Massachusetts
- Harvard Medical School, Boston, Massachusetts
- Department of Radiology, Massachusetts General Hospital, Boston
| |
Collapse
|
10
|
Daye D, Wiggins WF, Lungren MP, Alkasab T, Kottler N, Allen B, Roth CJ, Bizzo BC, Durniak K, Brink JA, Larson DB, Dreyer KJ, Langlotz CP. Implementation of Clinical Artificial Intelligence in Radiology: Who Decides and How? Radiology 2022; 305:E62. [PMID: 36154286 DOI: 10.1148/radiol.229021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
|
11
|
Ahn JS, Ebrahimian S, McDermott S, Lee S, Naccarato L, Di Capua JF, Wu MY, Zhang EW, Muse V, Miller B, Sabzalipour F, Bizzo BC, Dreyer KJ, Kaviani P, Digumarthy SR, Kalra MK. Association of Artificial Intelligence-Aided Chest Radiograph Interpretation With Reader Performance and Efficiency. JAMA Netw Open 2022; 5:e2229289. [PMID: 36044215 PMCID: PMC9434361 DOI: 10.1001/jamanetworkopen.2022.29289] [Citation(s) in RCA: 28] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 02/01/2023] Open
Abstract
IMPORTANCE The efficient and accurate interpretation of radiologic images is paramount. OBJECTIVE To evaluate whether a deep learning-based artificial intelligence (AI) engine used concurrently can improve reader performance and efficiency in interpreting chest radiograph abnormalities. DESIGN, SETTING, AND PARTICIPANTS This multicenter cohort study was conducted from April to November 2021 and involved radiologists, including attending radiologists, thoracic radiology fellows, and residents, who independently participated in 2 observer performance test sessions. The sessions included a reading session with AI and a session without AI, in a randomized crossover manner with a 4-week washout period in between. The AI produced a heat map and the image-level probability of the presence of the referable lesion. The data used were collected at 2 quaternary academic hospitals in Boston, Massachusetts: Beth Israel Deaconess Medical Center (The Medical Information Mart for Intensive Care Chest X-Ray [MIMIC-CXR]) and Massachusetts General Hospital (MGH). MAIN OUTCOMES AND MEASURES The ground truths for the labels were created via consensual reading by 2 thoracic radiologists. Each reader documented their findings in a customized report template, in which the 4 target chest radiograph findings and the reader confidence of the presence of each finding were recorded. The time taken for reporting each chest radiograph was also recorded. Sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC) were calculated for each target finding. RESULTS A total of 6 radiologists (2 attending radiologists, 2 thoracic radiology fellows, and 2 residents) participated in the study.
The study involved a total of 497 frontal chest radiographs-247 from the MIMIC-CXR data set (demographic data for patients were not available) and 250 chest radiographs from MGH (mean [SD] age, 63 [16] years; 133 men [53.2%])-from adult patients with and without 4 target findings (pneumonia, nodule, pneumothorax, and pleural effusion). The target findings were found in 351 of 497 chest radiographs. The AI was associated with higher sensitivity for all findings compared with the readers (nodule, 0.816 [95% CI, 0.732-0.882] vs 0.567 [95% CI, 0.524-0.611]; pneumonia, 0.887 [95% CI, 0.834-0.928] vs 0.673 [95% CI, 0.632-0.714]; pleural effusion, 0.872 [95% CI, 0.808-0.921] vs 0.889 [95% CI, 0.862-0.917]; pneumothorax, 0.988 [95% CI, 0.932-1.000] vs 0.792 [95% CI, 0.756-0.827]). AI-aided interpretation was associated with significantly improved reader sensitivities for all target findings, without negative impacts on the specificity. Overall, the AUROCs of readers improved for all 4 target findings, with significant improvements in detection of pneumothorax and nodule. The reporting time was 10% lower with AI than without AI (without vs with AI: 40.8 vs 36.9 seconds; difference, 3.9 seconds; 95% CI, 2.9-5.2 seconds; P < .001). CONCLUSIONS AND RELEVANCE These findings suggest that AI-aided interpretation was associated with improved reader performance and efficiency for identifying major thoracic findings on a chest radiograph.
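Confidence intervals like those reported for the sensitivities above can be obtained with a standard score interval for a proportion; the abstract does not state which interval method the authors used, so the Wilson interval below and the counts are purely illustrative:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a proportion such as sensitivity."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical counts: a reader flags 85 of 100 pneumothorax cases.
lo, hi = wilson_ci(85, 100)
```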
Collapse
Affiliation(s)
| | - Shadi Ebrahimian, Shaunagh McDermott, Laura Naccarato, John F. Di Capua, Markus Y. Wu, Eric W. Zhang, Victorine Muse, Benjamin Miller, Farid Sabzalipour, Bernardo C. Bizzo, Keith J. Dreyer, Parisa Kaviani, Subba R. Digumarthy, Mannudeep K. Kalra
- Division of Thoracic Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts (all listed authors)
- Data Science Office, Mass General Brigham, Boston, Massachusetts (B.M., F.S., B.C.B., K.J.D., M.K.K.)
- Internal Medicine, Icahn School of Medicine at Mount Sinai, Elmhurst Hospital Center, Elmhurst, New York (S.E.)
| |
Collapse
|
12
|
Li MD, Arun NT, Aggarwal M, Gupta S, Singh P, Little BP, Mendoza DP, Corradi GC, Takahashi MS, Ferraciolli SF, Succi MD, Lang M, Bizzo BC, Dayan I, Kitamura FC, Kalpathy-Cramer J. Multi-population generalizability of a deep learning-based chest radiograph severity score for COVID-19. Medicine (Baltimore) 2022; 101:e29587. [PMID: 35866818 PMCID: PMC9302282 DOI: 10.1097/md.0000000000029587] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/19/2020] [Revised: 04/21/2022] [Accepted: 04/28/2022] [Indexed: 01/04/2023] Open
Abstract
To tune and test the generalizability of a deep learning-based model for assessment of COVID-19 lung disease severity on chest radiographs (CXRs) from different patient populations. A published convolutional Siamese neural network-based model previously trained on hospitalized patients with COVID-19 was tuned using 250 outpatient CXRs. This model produces a quantitative measure of COVID-19 lung disease severity (pulmonary x-ray severity (PXS) score). The model was evaluated on CXRs from 4 test sets, including 3 from the United States (patients hospitalized at an academic medical center (N = 154), patients hospitalized at a community hospital (N = 113), and outpatients (N = 108)) and 1 from Brazil (patients at an academic medical center emergency department (N = 303)). Radiologists from both countries independently assigned reference standard CXR severity scores, which were correlated with the PXS scores as a measure of model performance (Pearson R). The Uniform Manifold Approximation and Projection (UMAP) technique was used to visualize the neural network results. Tuning the deep learning model with outpatient data showed high model performance in 2 United States hospitalized patient datasets (R = 0.88 and R = 0.90, compared to baseline R = 0.86). Model performance was similar, though slightly lower, when tested on the United States outpatient and Brazil emergency department datasets (R = 0.86 and R = 0.85, respectively). UMAP showed that the model learned disease severity information that generalized across test sets. A deep learning model that extracts a COVID-19 severity score on CXRs showed generalizable performance across multiple populations from 2 continents, including outpatients and hospitalized patients.
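Model performance in this study is the Pearson correlation between the model's PXS scores and radiologist reference severity scores. A sketch with illustrative stand-in values (the study's CXR datasets are not public):

```python
import numpy as np
from scipy.stats import pearsonr

# Illustrative stand-in values only: reference severity scores assigned
# by radiologists and model PXS scores for the same radiographs.
reference = np.array([0, 2, 3, 4, 5, 6, 8, 9, 11, 12], dtype=float)
pxs = np.array([0.5, 1.8, 3.2, 4.2, 4.9, 6.5, 7.6, 9.4, 10.8, 11.9])

# Pearson correlation as the measure of model performance.
r, p_value = pearsonr(reference, pxs)
```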
Collapse
Affiliation(s)
- Matthew D. Li, Nishanth T. Arun, Mehak Aggarwal, Sharut Gupta, Praveer Singh, Brent P. Little, Dexter P. Mendoza, Marc D. Succi, Min Lang, Bernardo C. Bizzo, Ittai Dayan, Felipe C. Kitamura, Jayashree Kalpathy-Cramer
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA (M.D.L., N.T.A., M.A., S.G., P.S., B.C.B., J.K.)
- Division of Thoracic Imaging and Intervention, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA (B.P.L., D.P.M., M.L.)
- Division of Emergency Radiology, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA (M.D.S.)
- MGH and BWH Center for Clinical Data Science, Mass General Brigham, Boston, MA, USA (B.C.B., I.D., J.K.)
- Diagnósticos da América SA (DASA), São Paulo, Brazil (F.C.K.)
- Department of Diagnostic Imaging, Universidade Federal de São Paulo, São Paulo, Brazil (F.C.K.)
| |
Collapse
|
13
|
Abstract
Artificial intelligence is already innovating in the provision of neurologic care. This review explores key artificial intelligence concepts; their application to neurologic diagnosis, prognosis, and treatment; and challenges that await their broader adoption. The development of new diagnostic biomarkers, individualization of prognostic information, and improved access to treatment are among the plethora of possibilities. These advances, however, reflect only the tip of the iceberg for the ways in which artificial intelligence may transform neurologic care in the future.
Collapse
Affiliation(s)
- James M Hillis
- Digital Clinical Research Organization, Data Science Office, Mass General Brigham, Boston, Massachusetts
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts
| | - Bernardo C Bizzo
- Digital Clinical Research Organization, Data Science Office, Mass General Brigham, Boston, Massachusetts
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts
| |
Collapse
|
14
|
Ebrahimian S, Kalra MK, Agarwal S, Bizzo BC, Elkholy M, Wald C, Allen B, Dreyer KJ. FDA-regulated AI Algorithms: Trends, Strengths, and Gaps of Validation Studies. Acad Radiol 2022; 29:559-566. [PMID: 34969610 DOI: 10.1016/j.acra.2021.09.002] [Citation(s) in RCA: 37] [Impact Index Per Article: 18.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2021] [Revised: 08/24/2021] [Accepted: 09/04/2021] [Indexed: 12/31/2022]
Abstract
RATIONALE AND OBJECTIVES To assess key trends, strengths, and gaps in validation studies of the Food and Drug Administration (FDA)-regulated imaging-based artificial intelligence/machine learning (AI/ML) algorithms. MATERIALS AND METHODS We audited publicly available details of regulated AI/ML algorithms in imaging from 2008 until April 2021. We reviewed 127 regulated software (118 AI/ML) to classify information related to their parent company, subspecialty, body area and specific anatomy type, imaging modality, date of FDA clearance, indications for use, target pathology (such as trauma) and findings (such as fracture), technique (CAD triage, CAD detection and/or characterization, CAD acquisition or improvement, and image processing/quantification), product performance, presence, type, strength and availability of clinical validation data. Pertaining to validation data, where available, we recorded the number of patients or studies included, sensitivity, specificity, accuracy, and/or receiver operating characteristic area under the curve, along with information on ground-truthing of use-cases. Data were analyzed with pivot tables and charts for descriptive statistics and trends. RESULTS We noted an increasing number of FDA-regulated AI/ML from 2008 to 2021. Seventeen (17/118) regulated AI/ML algorithms posted no validation claims or data. Just 9/118 reviewed AI/ML algorithms had validation dataset sizes of over 1000 patients. The most common type of AI/ML included image processing/quantification (IPQ; n = 59/118), and triage (CADt; n = 27/118). Brain, breast, and lungs dominated the targeted body regions of interest. CONCLUSION Insufficient public information on validation datasets in several FDA-regulated AI/ML algorithms makes it difficult to justify clinical applications since their generalizability and presence of bias cannot be inferred.
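The pivot-table descriptive analysis described above can be sketched as follows; the rows are synthetic stand-ins, not the actual audit data:

```python
import pandas as pd

# Synthetic rows mimicking fields recorded in the audit (technique type,
# validation cohort size); not the actual FDA-cleared product data.
df = pd.DataFrame({
    "technique": ["IPQ", "CADt", "IPQ", "CADe", "IPQ", "CADt"],
    "n_patients": [1200, 300, 0, 450, 80, 2000],
})

# Pivot-style descriptive statistics: algorithm counts per technique and
# how many reported a validation cohort of more than 1000 patients.
counts = df.groupby("technique").size()
large_validation = int((df["n_patients"] > 1000).sum())
```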
Collapse
|
15
|
Bridge CP, Bizzo BC, Hillis JM, Chin JK, Comeau DS, Gauriau R, Macruz F, Pawar J, Noro FTC, Sharaf E, Straus Takahashi M, Wright B, Kalafut JF, Andriole KP, Pomerantz SR, Pedemonte S, González RG. Development and clinical application of a deep learning model to identify acute infarct on magnetic resonance imaging. Sci Rep 2022; 12:2154. [PMID: 35140277 PMCID: PMC8828773 DOI: 10.1038/s41598-022-06021-0] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2021] [Accepted: 01/18/2022] [Indexed: 11/09/2022] Open
Abstract
Stroke is a leading cause of death and disability. The ability to quickly identify the presence of acute infarct and quantify the volume on magnetic resonance imaging (MRI) has important treatment implications. We developed a machine learning model that used the apparent diffusion coefficient and diffusion weighted imaging series. It was trained on 6,657 MRI studies from Massachusetts General Hospital (MGH; Boston, USA). All studies were labelled positive or negative for infarct (classification annotation) with 377 having the region of interest outlined (segmentation annotation). The different annotation types facilitated training on more studies while not requiring the extensive time to manually segment every study. We initially validated the model on studies sequestered from the training set. We then tested the model on studies from three clinical scenarios: consecutive stroke team activations for 6 months at MGH, consecutive stroke team activations for 6 months at a hospital that did not provide training data (Brigham and Women’s Hospital [BWH]; Boston, USA), and an international site (Diagnósticos da América SA [DASA]; Brazil). The model results were compared to radiologist ground truth interpretations. The model performed better when trained on classification and segmentation annotations (area under the receiver operating characteristic curve [AUROC] 0.995 [95% CI 0.992–0.998] and median Dice coefficient for segmentation overlap of 0.797 [IQR 0.642–0.861]) compared to segmentation annotations alone (AUROC 0.982 [95% CI 0.972–0.990] and Dice coefficient 0.776 [IQR 0.584–0.857]). The model accurately identified infarcts for MGH stroke team activations (AUROC 0.964 [95% CI 0.943–0.982], 381 studies), BWH stroke team activations (AUROC 0.981 [95% CI 0.966–0.993], 247 studies), and at DASA (AUROC 0.998 [95% CI 0.993–1.000], 171 studies).
The model accurately segmented infarcts with Pearson correlation comparing model output and ground truth volumes between 0.968 and 0.986 for the three scenarios. Acute infarct can be accurately detected and segmented on MRI in real-world clinical scenarios using a machine learning model.
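The Dice coefficient used above to score segmentation overlap is straightforward to compute from binary masks; a minimal sketch on toy 2D masks (real infarct masks would be 3D volumes):

```python
import numpy as np

def dice(pred, truth):
    """Dice coefficient between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * inter / total if total else 1.0

# Toy 2D masks standing in for infarct segmentations.
truth = np.zeros((8, 8), dtype=int)
truth[2:6, 2:6] = 1          # 16 positive voxels
pred = np.zeros((8, 8), dtype=int)
pred[3:7, 2:6] = 1           # 16 positive voxels, 12 overlapping
score = dice(pred, truth)    # 2 * 12 / (16 + 16) = 0.75
```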
Collapse
Affiliation(s)
- Christopher P Bridge, Bernardo C Bizzo, James M Hillis, John K Chin, Donnella S Comeau, Romane Gauriau, Fabiola Macruz, Jayashri Pawar, Flavia T C Noro, Elshaimaa Sharaf, Bradley Wright, Katherine P Andriole, Stuart R Pomerantz, Stefano Pedemonte, R Gilberto González
- MGH & BWH Center for Clinical Data Science, Mass General Brigham, Boston, USA (all listed authors)
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, USA (C.P.B., B.C.B., R.G.G.)
- Harvard Medical School, Boston, USA (C.P.B., B.C.B., J.M.H., K.P.A., S.R.P., R.G.G.)
- Department of Radiology, Massachusetts General Hospital, Boston, USA (C.P.B., B.C.B., S.R.P., R.G.G.)
- Department of Neurology, Massachusetts General Hospital, Boston, USA (J.M.H.)
- Department of Radiology, Brigham and Women's Hospital, Boston, USA (K.P.A.)
- Diagnósticos da América SA, São Paulo, Brazil (B.C.B.)
| |
Collapse
|
16
|
Almeida RR, Bizzo BC, Singh R, Andriole KP, Alkasab TK. Computer-assisted Reporting and Decision Support Increases Compliance with Follow-up Imaging and Hormonal Screening of Adrenal Incidentalomas. Acad Radiol 2022; 29:236-244. [PMID: 33583714 DOI: 10.1016/j.acra.2021.01.019] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2020] [Revised: 01/07/2021] [Accepted: 01/13/2021] [Indexed: 11/01/2022]
Abstract
OBJECTIVE To assess the impact of using a computer-assisted reporting and decision support (CAR/DS) tool at the radiologist point-of-care on ordering provider compliance with recommendations for adrenal incidentaloma workup. METHODS Abdominal CT reports describing adrenal incidentalomas (2014-2016) were retrospectively extracted from the radiology database. Exclusion criteria were history of cancer, suspected functioning adrenal tumor, dominant nodule size < 1 cm or ≥ 4 cm, myelolipomas, cysts, and hematomas. Multivariable logistic regression models were employed to predict follow-up imaging (FUI) and hormonal screening orders as a function of patient age and sex, nodule size, and CAR/DS use. CAR/DS reports were compared to conventional reports regarding ordering provider compliance with, frequency of, and completeness of guideline-warranted recommendations for FUI and hormonal screening of adrenal incidentalomas using the Chi-square test. RESULTS Of 174 patients (mean age 62.4; 51.1% women) with adrenal incidentalomas, 62% (108/174) received CAR/DS-based recommendations versus 38% (66/174) unassisted recommendations. CAR/DS use was an independent predictor of provider compliance both with FUI (Odds Ratio [OR]=2.47, p = 0.02) and hormonal screening (OR=2.38, p = 0.04). CAR/DS reports recommended FUI (97.2%, 105/108) and hormonal screening (87.0%, 94/108) more often than conventional reports (respectively, 69.7% [46/66], 3.0% [2/66], both p < 0.0001). CAR/DS recommendations more frequently included instructions for FUI time, protocol, and modality than conventional reports (all p < 0.001). CONCLUSION Ordering providers were at least twice as likely to comply with report recommendations for FUI and hormonal evaluation of adrenal incidentalomas generated using CAR/DS versus unassisted reporting. CAR/DS-directed recommendations were more adherent to guidelines than those generated without.
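The odds ratios above come from multivariable logistic regression, which adjusts for age, sex, and nodule size; the simpler unadjusted odds ratio from a 2x2 compliance table can be sketched as follows, with purely hypothetical counts:

```python
# Hypothetical 2x2 compliance table (not the study's data): providers
# who did / did not comply with follow-up imaging recommendations,
# split by CAR/DS-assisted versus conventional reports.
car_ds = (70, 38)          # (complied, did not comply)
conventional = (30, 36)

def odds_ratio(exposed, unexposed):
    """Unadjusted odds ratio from a 2x2 table."""
    (a, b), (c, d) = exposed, unexposed
    return (a / b) / (c / d)

or_fui = odds_ratio(car_ds, conventional)
```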
17
Pourvaziri A, Narayan AK, Tso D, Baliyan V, Glover M, Bizzo BC, Kako B, Succi MD, Lev MH, Flores EJ. Imaging Information Overload: Quantifying the burden of interpretive and non-interpretive tasks for CT angiography for aortic pathologies in emergency radiology. Curr Probl Diagn Radiol 2022; 51:546-551. [DOI: 10.1067/j.cpradiol.2022.01.008]
18
Bizzo BC, Almeida RR, Alkasab TK. Artificial Intelligence Enabling Radiology Reporting. Radiol Clin North Am 2021; 59:1045-1052. [PMID: 34689872] [DOI: 10.1016/j.rcl.2021.07.004]
Abstract
The radiology reporting process is beginning to incorporate structured, semantically labeled data. Tools based on artificial intelligence technologies operating in a structured reporting context can assist with internal report consistency and longitudinal tracking. To-do lists of relevant issues could be assembled by artificial intelligence tools, incorporating components of the patient's history. Radiologists will review and select artificial intelligence-generated and other data to be transmitted to the electronic health record and generate feedback for ongoing improvement of artificial intelligence tools. These technologies should make reports more valuable by making them more accessible and better able to integrate into care pathways.
Affiliation(s)
- Bernardo C Bizzo
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, 55 Fruit Street, Founders 210, Boston, MA 02114, USA
- Renata R Almeida
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115, USA
- Tarik K Alkasab
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, 55 Fruit Street, Founders 210, Boston, MA 02114, USA
19
Bizzo BC, Almeida RR, Alkasab TK. Data Management in Artificial Intelligence-Assisted Radiology Reporting. J Am Coll Radiol 2021; 18:1485-1488. [PMID: 34624236] [DOI: 10.1016/j.jacr.2021.09.017]
Affiliation(s)
- Bernardo C Bizzo
- Harvard Medical School, Boston, Massachusetts; Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts
- Renata R Almeida
- Harvard Medical School, Boston, Massachusetts; Department of Radiology, Brigham and Women's Hospital, Boston, Massachusetts
- Tarik K Alkasab
- Harvard Medical School, Boston, Massachusetts; Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts; Enterprise Informatics/IT, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts
20
Bizzo BC, Almeida RR, Alkasab TK. Computer-Assisted Reporting and Decision Support in Standardized Radiology Reporting for Cancer Imaging. JCO Clin Cancer Inform 2021; 5:426-434. [PMID: 33852324] [DOI: 10.1200/cci.20.00129]
Abstract
PURPOSE Recent advances in structured reporting are providing an opportunity to enhance cancer imaging assessment to drive value-based care and improve patient safety. METHODS The computer-assisted reporting and decision support (CAR/DS) framework has been developed to enable systematic ingestion of guidelines as structured reporting clinical decision support tools embedded within the radiologist's workflow. RESULTS CAR/DS tools can reduce radiology reporting variability and increase compliance with clinical guidelines. The lung cancer use case is used to describe various scenarios of a cancer imaging structured reporting pathway, including incidental findings, screening, staging, and restaging or continued care. Various aspects of these tools are also described using cancer-related examples for different imaging modalities and applications such as calculators. Such systems can leverage artificial intelligence (AI) algorithms to assist with the generation of structured reports, and there are opportunities for new AI applications to be created using the structured data associated with CAR/DS tools. CONCLUSION These AI-enabled systems are starting to allow information from multiple sources to be integrated and inserted into structured reports to drive improvements in clinical decision support and patient care.
Affiliation(s)
- Bernardo C Bizzo
- Harvard Medical School, Boston, MA; Department of Radiology, Massachusetts General Hospital, Boston, MA; Department of Radiology, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil
- Renata R Almeida
- Harvard Medical School, Boston, MA; Department of Radiology, Brigham and Women's Hospital, Boston, MA
- Tarik K Alkasab
- Harvard Medical School, Boston, MA; Department of Radiology, Massachusetts General Hospital, Boston, MA
21
Ebrahimian S, Oliveira Bernardo M, Alberto Moscatelli A, Tapajos J, Leitão Tapajós L, Jamil Khoury H, Babaei R, Karimi Mobin H, Mohseni I, Arru C, Carriero A, Falaschi Z, Pasche A, Saba L, Homayounieh F, Bizzo BC, Vassileva J, Kalra MK. Investigating centering, scan length, and arm position impact on radiation dose across 4 countries from 4 continents during pandemic: Mitigating key radioprotection issues. Phys Med 2021; 84:125-131. [PMID: 33894582] [PMCID: PMC8058535] [DOI: 10.1016/j.ejmp.2021.04.001]
Abstract
Purpose Optimization of CT scan practices can help achieve and maintain optimal radiation protection. The aim was to assess centering, scan length, and positioning of patients undergoing chest CT for suspected or known COVID-19 pneumonia and to investigate their effect on associated radiation doses. Methods With respective approvals from institutional review boards, we compiled CT imaging and radiation dose data from four hospitals in four countries (Brazil, Iran, Italy, and USA) on 400 adult patients who underwent chest CT for suspected or known COVID-19 pneumonia between April 2020 and August 2020. We recorded patient demographics, volume CT dose index (CTDIvol), and dose length product (DLP). From thin-section CT images of each patient, we estimated the scan length and recorded the first and last vertebral bodies at the scan start and end locations. Patient mis-centering and arm position were recorded. Data were analyzed with analysis of variance (ANOVA). Results The extent and frequency of patient mis-centering did not differ across the four CT facilities (p > 0.09). Patients scanned with arms by their side (11-40% of patients, relative to those with arms up) had greater mis-centering and higher CTDIvol and DLP at 2/4 facilities (p = 0.027-0.05). Despite a lack of variation in effective diameters (p = 0.14), there were significant variations in scan lengths, CTDIvol, and DLP across the four facilities (p < 0.001). Conclusions Mis-centering, over-scanning, and arms by the side are frequent issues with the use of chest CT in COVID-19 pneumonia and are associated with higher radiation doses.
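The across-facility dose comparisons rely on one-way ANOVA. A self-contained sketch of the F statistic on hypothetical per-facility DLP values (all numbers below are illustrative, not from the study):

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over lists of per-group values."""
    k = len(groups)                       # number of groups (facilities)
    n = sum(len(g) for g in groups)       # total observations
    grand = sum(sum(g) for g in groups) / n
    # Between-group and within-group sums of squares.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

dlp = [  # hypothetical DLP values (mGy*cm) at four facilities
    [310, 295, 330, 305],
    [450, 470, 440, 465],
    [280, 300, 290, 285],
    [520, 505, 540, 515],
]
f_stat = one_way_anova_f(dlp)
print(f"F = {f_stat:.1f}")
```

A large F (here well above the critical value for 3 and 12 degrees of freedom) corresponds to the significant across-facility variation the study reports; comparing F against the F distribution yields the p value.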
Affiliation(s)
- Shadi Ebrahimian
- Department of Radiology, Massachusetts General Hospital and the Harvard Medical School, Boston, MA, USA
- Monica Oliveira Bernardo
- Hospital Miguel Soeiro - UNIMED, Pontifical Catholic University of São Paulo (PUC-SP), Sorocaba, São Paulo, Brazil
- Antônio Alberto Moscatelli
- Hospital Miguel Soeiro - UNIMED, Pontifical Catholic University of São Paulo (PUC-SP), Sorocaba, São Paulo, Brazil
- Juliana Tapajos
- Hospital Delphina Rinaldi Abdel Aziz, Manaus, Amazonas, Brazil
- Helen Jamil Khoury
- Nuclear Energy Department, Federal University of Pernambuco, Recife, Pernambuco, Brazil
- Rosa Babaei
- Department of Radiology, Firoozgar Hospital, Iran University of Medical Sciences, Tehran, Iran
- Hadi Karimi Mobin
- Department of Radiology, Firoozgar Hospital, Iran University of Medical Sciences, Tehran, Iran
- Iman Mohseni
- Department of Radiology, Firoozgar Hospital, Iran University of Medical Sciences, Tehran, Iran
- Chiara Arru
- Azienda Ospedaliera Universitaria di Cagliari, Cagliari, Italy
- Luca Saba
- Azienda Ospedaliera Universitaria di Cagliari, Cagliari, Italy
- Fatemeh Homayounieh
- Department of Radiology, Massachusetts General Hospital and the Harvard Medical School, Boston, MA, USA
- Bernardo C Bizzo
- Department of Radiology, Massachusetts General Hospital and the Harvard Medical School, Boston, MA, USA
- Jenia Vassileva
- Radiation Protection of Patients Unit, International Atomic Energy Agency, Vienna, Austria
- Mannudeep K Kalra
- Department of Radiology, Massachusetts General Hospital and the Harvard Medical School, Boston, MA, USA
22
Gauriau R, Bizzo BC, Kitamura FC, Landi Junior O, Ferraciolli SF, Macruz FBC, Sanchez TA, Garcia MRT, Vedolin LM, Domingues RC, Gasparetto EL, Andriole KP. A Deep Learning-based Model for Detecting Abnormalities on Brain MR Images for Triaging: Preliminary Results from a Multisite Experience. Radiol Artif Intell 2021; 3:e200184. [PMID: 34350408] [DOI: 10.1148/ryai.2021200184]
Abstract
Purpose To develop a deep learning model for detecting brain abnormalities on MR images. Materials and Methods In this retrospective study, a deep learning approach using T2-weighted fluid-attenuated inversion recovery images was developed to classify brain MRI findings as "likely normal" or "likely abnormal." A convolutional neural network model was trained on a large, heterogeneous dataset collected from two different continents and covering a broad panel of pathologic conditions, including neoplasms, hemorrhages, infarcts, and others. Three datasets were used. Dataset A consisted of 2839 patients, dataset B consisted of 6442 patients, and dataset C consisted of 1489 patients and was only used for testing. Datasets A and B were split into training, validation, and test sets. A total of three models were trained: model A (using only dataset A), model B (using only dataset B), and model A + B (using training datasets from A and B). All three models were tested on subsets from dataset A, dataset B, and dataset C separately. The evaluation was performed by using annotations based on the images, as well as labels based on the radiology reports. Results Model A trained on dataset A from one institution and tested on dataset C from another institution reached an F1 score of 0.72 (95% CI: 0.70, 0.74) and an area under the receiver operating characteristic curve of 0.78 (95% CI: 0.75, 0.80) when compared with findings from the radiology reports. Conclusion The model shows relatively good performance for differentiating between likely normal and likely abnormal brain examination findings by using data from different institutions. Keywords: MR Imaging, Head/Neck, Computer Applications-General (Informatics), Convolutional Neural Network (CNN), Deep Learning Algorithms, Machine Learning Algorithms. © RSNA, 2021. Supplemental material is available for this article.
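The reported F1 score and area under the ROC curve are standard binary-classification metrics. A small pure-Python sketch on toy labels and scores (not the study's data) shows how each is computed, with AUC via the rank-based Mann-Whitney formulation:

```python
def f1_score(y_true, y_pred):
    """F1 = harmonic mean of precision and recall for binary labels."""
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(p == 1 and t == 0 for t, p in zip(y_true, y_pred))
    fn = sum(p == 0 and t == 1 for t, p in zip(y_true, y_pred))
    return 2 * tp / (2 * tp + fp + fn)

def auc(y_true, scores):
    """AUC as the probability a positive outranks a negative (Mann-Whitney)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 1 = "likely abnormal", scores are model probabilities.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.7, 0.1]
y_pred = [int(s >= 0.5) for s in scores]  # threshold at 0.5
print(f"F1 = {f1_score(y_true, y_pred):.2f}, AUC = {auc(y_true, scores):.2f}")
```

Note that F1 depends on the chosen operating threshold while AUC summarizes ranking quality across all thresholds, which is why studies such as this one report both.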
Affiliation(s)
- Romane Gauriau, Bernardo C Bizzo, Felipe C Kitamura, Osvaldo Landi Junior, Suely F Ferraciolli, Fabiola B C Macruz, Tiago A Sanchez, Marcio R T Garcia, Leonardo M Vedolin, Romeu C Domingues, Emerson L Gasparetto, Katherine P Andriole
- MGH & BWH Center for Clinical Data Science, Ste 1303, Floor 13, 100 Cambridge St, Boston, MA 02114 (R.G., B.C.B., F.B.C.M., K.P.A.); Department of Artificial Intelligence, Diagnósticos da América, São Paulo, Brazil (B.C.B., F.C.K., O.L.J., S.F.F., M.R.T.G., L.M.V., R.C.D., E.L.G.); Head of AI, Diagnósticos da América SA, São Paulo, Brazil (F.C.K.); Department of Radiology, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil (B.C.B., T.A.S., E.L.G.); Department of Radiology, Massachusetts General Hospital, Boston, Mass (B.C.B.); and Department of Radiology, Brigham and Women's Hospital and Harvard Medical School, Harvard University, Boston, Mass (K.P.A.)
23
Homayounieh F, Bezerra Cavalcanti Rockenbach MA, Ebrahimian S, Doda Khera R, Bizzo BC, Buch V, Babaei R, Karimi Mobin H, Mohseni I, Mitschke M, Zimmermann M, Durlak F, Rauch F, Digumarthy SR, Kalra MK. Multicenter Assessment of CT Pneumonia Analysis Prototype for Predicting Disease Severity and Patient Outcome. J Digit Imaging 2021; 34:320-329. [PMID: 33634416] [PMCID: PMC7906242] [DOI: 10.1007/s10278-021-00430-9]
Abstract
To perform a multicenter assessment of the CT Pneumonia Analysis prototype for predicting disease severity and patient outcome in COVID-19 pneumonia, both without and with integration of clinical information. Our IRB-approved observational study included 241 consecutive adult patients (> 18 years; 105 females; 136 males) with RT-PCR-positive COVID-19 pneumonia who underwent non-contrast chest CT at one of two tertiary care hospitals (site A: Massachusetts General Hospital, USA; site B: Firoozgar Hospital, Iran). We recorded patient age, gender, comorbid conditions, laboratory values, intensive care unit (ICU) admission, mechanical ventilation, and final outcome (recovery or death). Two thoracic radiologists reviewed all chest CTs to record the type and extent of pulmonary opacities, based on the percentage of each lobe involved, and the severity of respiratory motion artifacts. Thin-section CT images were processed with the prototype (Siemens Healthineers) to obtain quantitative features including lung volumes, volume and percentage of all-type and high-attenuation opacities (≥ -200 HU), and mean HU and standard deviation of opacities within a given lung region. These values were estimated for the total combined lung volume and separately for each lung and each lung lobe. Multivariable analysis of variance (MANOVA) and multiple logistic regression were performed for data analyses. About 26% of chest CTs (62/241) had moderate to severe motion artifacts. There were no significant differences in the AUCs of quantitative features for predicting disease severity with and without motion artifacts (AUC 0.94-0.97), or for predicting patient outcome (AUC 0.7-0.77) (p > 0.5). The combination of the volume of all-attenuation opacities and the percentage of high-attenuation opacities (AUC 0.76-0.82, 95% confidence interval (CI) 0.73-0.82) had a higher AUC for predicting ICU admission than the subjective severity scores (AUC 0.69-0.77, 95% CI 0.69-0.81). Despite a high frequency of motion artifacts, quantitative features of pulmonary opacities from chest CT can help differentiate patients with favorable and adverse outcomes.
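The multivariable approach above combines quantitative CT features in a logistic model whose probabilities are then scored by AUC. A minimal gradient-descent sketch on synthetic data (the feature values, labels, learning rate, and step count are all assumptions for illustration, not the study's):

```python
import math

def sigmoid(z):
    z = max(min(z, 30.0), -30.0)  # clamp for numerical safety
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.05, steps=500):
    """Plain stochastic-gradient logistic regression; w[0] is the intercept."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(steps):
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = yi - p
            w[0] += lr * err
            for j, xj in enumerate(xi):
                w[j + 1] += lr * err * xj
    return w

def predict(w, xi):
    return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))

# Synthetic, illustrative patients: (opacity volume in liters,
# % high-attenuation opacity) and whether the patient was admitted to the ICU.
X = [(0.1, 2), (0.2, 3), (0.3, 5), (0.4, 6),
     (1.1, 15), (1.3, 20), (1.6, 25), (1.8, 30)]
y = [0, 0, 0, 0, 1, 1, 1, 1]
w = fit_logistic(X, y)
```

On this cleanly separable toy data the fitted model assigns high ICU probability to the high-opacity patients and low probability to the rest; with real, noisier data one would report the AUC of these probabilities, as the study does.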
Affiliation(s)
- Fatemeh Homayounieh
- Department of Radiology, Massachusetts General Hospital and the Harvard Medical School, Boston, MA, USA
- Shadi Ebrahimian
- Department of Radiology, Massachusetts General Hospital and the Harvard Medical School, Boston, MA, USA
- Ruhani Doda Khera
- Department of Radiology, Massachusetts General Hospital and the Harvard Medical School, Boston, MA, USA
- Bernardo C. Bizzo
- Department of Radiology, Massachusetts General Hospital and the Harvard Medical School, Boston, MA, USA
- MGH & BWH Center for Clinical Data Science, Boston, MA, USA
- Varun Buch
- MGH & BWH Center for Clinical Data Science, Boston, MA, USA
- Rosa Babaei
- Department of Radiology, Firoozgar Hospital and Iran University of Medical Sciences, Tehran, Iran
- Hadi Karimi Mobin
- Department of Radiology, Firoozgar Hospital and Iran University of Medical Sciences, Tehran, Iran
- Iman Mohseni
- Department of Radiology, Firoozgar Hospital and Iran University of Medical Sciences, Tehran, Iran
- Felix Durlak
- Diagnostic Imaging, Siemens Healthcare GmbH, Erlangen, Germany
- Franziska Rauch
- Diagnostic Imaging, Siemens Healthcare GmbH, Erlangen, Germany
- Subba R Digumarthy
- Department of Radiology, Massachusetts General Hospital and the Harvard Medical School, Boston, MA, USA
- Mannudeep K. Kalra
- Department of Radiology, Massachusetts General Hospital and the Harvard Medical School, Boston, MA, USA
24
Ebrahimian S, Homayounieh F, Rockenbach MABC, Putha P, Raj T, Dayan I, Bizzo BC, Buch V, Wu D, Kim K, Li Q, Digumarthy SR, Kalra MK. Artificial intelligence matches subjective severity assessment of pneumonia for prediction of patient outcome and need for mechanical ventilation: a cohort study. Sci Rep 2021; 11:858. [PMID: 33441578] [PMCID: PMC7807029] [DOI: 10.1038/s41598-020-79470-0]
Abstract
To compare the performance of artificial intelligence (AI) and Radiographic Assessment of Lung Edema (RALE) scores from frontal chest radiographs (CXRs) for predicting patient outcomes and the need for mechanical ventilation in COVID-19 pneumonia. Our IRB-approved study included 1367 serial CXRs from 405 adult patients (mean age 65 ± 16 years) from two sites in the US (site A) and South Korea (site B). We recorded information pertaining to patient demographics (age, gender), smoking history, comorbid conditions (such as cancer, cardiovascular and other diseases), vital signs (temperature, oxygen saturation), and available laboratory data (such as WBC count and CRP). Two thoracic radiologists performed the qualitative assessment of all CXRs based on the RALE score for assessing the severity of lung involvement. All CXRs were processed with a commercial AI algorithm to obtain the percentage of the lung affected with findings related to COVID-19 (AI score). Independent t and chi-square tests were used, in addition to multiple logistic regression with area under the curve (AUC) as output, for predicting disease outcome and the need for mechanical ventilation. The RALE and AI scores had a strong positive correlation in CXRs from each site (r2 = 0.79-0.86; p < 0.0001). Patients who died or received mechanical ventilation had significantly higher RALE and AI scores than those who recovered or did not need mechanical ventilation (p < 0.001). Patients with a larger difference between baseline and maximum RALE or AI scores had a higher prevalence of death and mechanical ventilation (p < 0.001). The addition of patients' age, gender, WBC count, and peripheral oxygen saturation increased the outcome prediction AUC from 0.87 to 0.94 (95% CI 0.90-0.97) for RALE scores and from 0.82 to 0.91 (95% CI 0.87-0.95) for AI scores. The AI algorithm is as robust a predictor of adverse patient outcome (death or need for mechanical ventilation) as subjective RALE scores in patients with COVID-19 pneumonia.
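The RALE-versus-AI comparison above hinges on the Pearson correlation coefficient. A minimal stdlib-only sketch on hypothetical paired scores (the arrays are illustrative, not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired severity scores for the same CXRs:
rale = [2, 5, 9, 14, 21, 27, 33, 40]   # radiologist RALE scores (0-48 scale)
ai = [4, 9, 18, 30, 41, 55, 70, 85]    # AI percent-of-lung-affected scores
r = pearson_r(rale, ai)
print(f"r = {r:.3f}, r^2 = {r * r:.3f}")
```

Squaring r gives the r2 (shared variance) values the abstract reports; a strong linear relationship like the toy one above yields r2 close to 1.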
Affiliation(s)
- Shadi Ebrahimian
- Department of Radiology, Massachusetts General Hospital and the Harvard Medical School, 75 Blossom Court, Suite 248, Boston, MA, 02114, USA
- Fatemeh Homayounieh
- Department of Radiology, Massachusetts General Hospital and the Harvard Medical School, 75 Blossom Court, Suite 248, Boston, MA, 02114, USA
- Preetham Putha
- Employee of qure.ai, Level 6, Oberoi Commerz II, Goregaon East, Mumbai, 400063, India
- Tarun Raj
- Employee of qure.ai, Level 6, Oberoi Commerz II, Goregaon East, Mumbai, 400063, India
- Ittai Dayan
- Department of Radiology, Massachusetts General Hospital and the Harvard Medical School, 75 Blossom Court, Suite 248, Boston, MA, 02114, USA
- MGH & BWH Center for Clinical Data Science, Boston, MA, USA
- Bernardo C Bizzo
- Department of Radiology, Massachusetts General Hospital and the Harvard Medical School, 75 Blossom Court, Suite 248, Boston, MA, 02114, USA
- MGH & BWH Center for Clinical Data Science, Boston, MA, USA
- Varun Buch
- MGH & BWH Center for Clinical Data Science, Boston, MA, USA
- Dufan Wu
- Department of Radiology, Massachusetts General Hospital and the Harvard Medical School, 75 Blossom Court, Suite 248, Boston, MA, 02114, USA
- Gordon Center for Medical Imaging, Bartlett 501, 55 Fruit Street, Boston, MA, 02114, USA
- Kyungsang Kim
- Department of Radiology, Massachusetts General Hospital and the Harvard Medical School, 75 Blossom Court, Suite 248, Boston, MA, 02114, USA
- Gordon Center for Medical Imaging, Bartlett 501, 55 Fruit Street, Boston, MA, 02114, USA
- Quanzheng Li
- Department of Radiology, Massachusetts General Hospital and the Harvard Medical School, 75 Blossom Court, Suite 248, Boston, MA, 02114, USA
- Gordon Center for Medical Imaging, Bartlett 501, 55 Fruit Street, Boston, MA, 02114, USA
- Subba R Digumarthy
- Department of Radiology, Massachusetts General Hospital and the Harvard Medical School, 75 Blossom Court, Suite 248, Boston, MA, 02114, USA
- Mannudeep K Kalra
- Department of Radiology, Massachusetts General Hospital and the Harvard Medical School, 75 Blossom Court, Suite 248, Boston, MA, 02114, USA
25
Li MD, Arun NT, Aggarwal M, Gupta S, Singh P, Little BP, Mendoza DP, Corradi GCA, Takahashi MS, Ferraciolli SF, Succi MD, Lang M, Bizzo BC, Dayan I, Kitamura FC, Kalpathy-Cramer J. Improvement and Multi-Population Generalizability of a Deep Learning-Based Chest Radiograph Severity Score for COVID-19. medRxiv 2020. [PMID: 32995811] [DOI: 10.1101/2020.09.15.20195453]
Abstract
PURPOSE To improve and test the generalizability of a deep learning-based model for assessment of COVID-19 lung disease severity on chest radiographs (CXRs) from different patient populations. MATERIALS AND METHODS A published convolutional Siamese neural network-based model, previously trained on hospitalized patients with COVID-19, was tuned using 250 outpatient CXRs. This model produces a quantitative measure of COVID-19 lung disease severity (pulmonary x-ray severity (PXS) score). The model was evaluated on CXRs from four test sets: three from the United States (patients hospitalized at an academic medical center (N=154), patients hospitalized at a community hospital (N=113), and outpatients (N=108)) and one from Brazil (patients at an academic medical center emergency department (N=303)). Radiologists from both countries independently assigned reference standard CXR severity scores, which were correlated with the PXS scores as a measure of model performance (Pearson r). The Uniform Manifold Approximation and Projection (UMAP) technique was used to visualize the neural network results. RESULTS Tuning the deep learning model with outpatient data improved model performance in the two United States hospitalized patient datasets (r=0.88 and r=0.90, compared to baseline r=0.86). Model performance was similar, though slightly lower, when tested on the United States outpatient and Brazil emergency department datasets (r=0.86 and r=0.85, respectively). UMAP showed that the model learned disease severity information that generalized across test sets. CONCLUSIONS Performance of a deep learning-based model that extracts a COVID-19 severity score from CXRs improved with training data from a different patient cohort (outpatient versus hospitalized) and generalized across multiple populations.
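One way to picture a Siamese-style severity score such as PXS is as a distance between an exam's learned embedding and the embeddings of normal reference exams: the further from "normal" in embedding space, the more severe. The toy sketch below illustrates only that distance-to-reference-pool idea; the encoder, data, and averaging choice here are stand-ins, not the published model:

```python
import math

# Toy stand-in for a trained Siamese encoder: a fixed linear map from a small
# "image feature vector" to an embedding. The real model uses a CNN.
def embed(x):
    return [0.5 * xi for xi in x]

def euclidean(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def severity_score(image, normal_pool):
    """Severity as the mean embedding distance to a pool of normal references."""
    e = embed(image)
    return sum(euclidean(e, embed(n)) for n in normal_pool) / len(normal_pool)

normal_pool = [[0.1, 0.2], [0.0, 0.1], [0.2, 0.0]]  # hypothetical normal exams
mild, severe = [0.5, 0.4], [3.0, 2.8]
print(severity_score(mild, normal_pool), severity_score(severe, normal_pool))
```

Because the score is continuous, it can be correlated against radiologist severity ratings with Pearson r, which is exactly how the abstract reports model performance.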
|
26
|
Bizzo BC, Almeida RR, Michalski MH, Alkasab TK. Artificial Intelligence and Clinical Decision Support for Radiologists and Referring Providers. J Am Coll Radiol 2020; 16:1351-1356. [PMID: 31492414 DOI: 10.1016/j.jacr.2019.06.010] [Citation(s) in RCA: 39] [Impact Index Per Article: 9.8] [Received: 05/21/2019] [Revised: 06/03/2019] [Accepted: 06/04/2019] [Indexed: 01/05/2023]
Abstract
Recent advances in artificial intelligence (AI) provide an opportunity to enhance existing clinical decision support (CDS) tools to improve patient safety and drive value-based imaging. We discuss the advantages and potential applications that may be realized through the synergy between AI and CDS systems. From the perspective of both the radiologist and the ordering provider, CDS could be significantly empowered by AI. CDS enhanced by AI could reduce friction in radiology workflows and could help AI developers identify the relevant imaging features their tools should extract from images. Furthermore, these systems can generate structured data to be used as input for developing machine learning algorithms, which can drive downstream care pathways. For referring providers, an AI-enabled CDS solution could enable an evolution from existing imaging-centric CDS toward decision support that takes a holistic patient perspective into account. More intelligent CDS could suggest imaging examinations in highly complex clinical scenarios, assist in identifying appropriate imaging opportunities at the health system level, suggest appropriate individualized screening, or help health care providers ensure continuity of care. AI has the potential to enable the next generation of CDS, improving patient care and enhancing the experience of providers and radiologists.
Affiliation(s)
- Bernardo C Bizzo
- Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts; MGH & BWH Center for Clinical Data Science, Boston, Massachusetts; Harvard Medical School, Boston, Massachusetts
| | - Renata R Almeida
- Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts; MGH & BWH Center for Clinical Data Science, Boston, Massachusetts; Harvard Medical School, Boston, Massachusetts
| | - Mark H Michalski
- Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts; MGH & BWH Center for Clinical Data Science, Boston, Massachusetts; Harvard Medical School, Boston, Massachusetts
| | - Tarik K Alkasab
- Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts; MGH & BWH Center for Clinical Data Science, Boston, Massachusetts; Harvard Medical School, Boston, Massachusetts
| |
|
27
|
Alkasab TK, Bizzo BC, Berland LL, Nair S, Pandharipande PV, Harvey HB. Creation of an Open Framework for Point-of-Care Computer-Assisted Reporting and Decision Support Tools for Radiologists. J Am Coll Radiol 2017. [DOI: 10.1016/j.jacr.2017.04.031] [Citation(s) in RCA: 34] [Impact Index Per Article: 4.9] [Indexed: 12/21/2022]
|