1. Revolutionizing Shoulder MRI: Accelerated Imaging with Deep Learning Reconstruction. Radiology 2024;310:e233301. PMID: 38193840. DOI: 10.1148/radiol.233301.
2. Artificial Intelligence for Improved Hepatosplenomegaly Diagnosis. Curr Probl Diagn Radiol 2023;52:501-504. PMID: 37277270. DOI: 10.1067/j.cpradiol.2023.05.005.
Abstract
Hepatosplenomegaly is commonly diagnosed by radiologists based on single-dimension measurements and heuristic cut-offs. Volumetric measurements may be more accurate for diagnosing organ enlargement. Artificial intelligence techniques may be able to automatically calculate liver and spleen volumes and facilitate more accurate diagnosis. After IRB approval, two convolutional neural networks (CNNs) were developed to automatically segment the liver and spleen on a training dataset comprising 500 single-phase, contrast-enhanced CT abdomen and pelvis examinations. A separate dataset of 10,000 sequential examinations at a single institution was segmented with these CNNs. Performance was evaluated on a 1% subset and compared with manual segmentations using Sorensen-Dice coefficients and Pearson correlation coefficients. Radiologist reports were reviewed for diagnoses of hepatomegaly and splenomegaly and compared with the calculated volumes. Abnormal enlargement was defined as greater than 2 standard deviations above the mean. Median Dice coefficients for liver and spleen segmentation were 0.988 and 0.981, respectively. Pearson correlation coefficients of CNN-derived estimates of organ volume against the gold-standard manual annotation were 0.999 for both the liver and spleen (P < 0.001). Average liver volume was 1556.8 ± 498.7 cc and average spleen volume was 194.6 ± 123.0 cc. There were significant differences in average liver and spleen volumes between male and female patients, so the volume thresholds for ground-truth determination of hepatomegaly and splenomegaly were determined separately for each sex. Radiologist classification of hepatomegaly was 65% sensitive and 91% specific, with a positive predictive value (PPV) of 23% and a negative predictive value (NPV) of 98%. Radiologist classification of splenomegaly was 68% sensitive and 97% specific, with a PPV of 50% and an NPV of 99%.
Convolutional neural networks can accurately segment the liver and spleen and may be helpful to improve radiologist accuracy in the diagnosis of hepatomegaly and splenomegaly.
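The study's enlargement criterion (organ volume more than 2 standard deviations above the sex-specific mean) is straightforward to compute from a segmentation mask; a minimal sketch, assuming hypothetical voxel spacing and cohort volumes (none of these numbers are from the study):

```python
import numpy as np

def organ_volume_cc(mask: np.ndarray, spacing_mm: tuple[float, float, float]) -> float:
    """Volume of a binary segmentation mask in cc (mL), given voxel spacing in mm."""
    voxel_volume_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return mask.sum() * voxel_volume_mm3 / 1000.0  # 1 cc = 1000 mm^3

def enlargement_threshold(volumes_cc: np.ndarray) -> float:
    """Enlargement cutoff: greater than 2 standard deviations above the cohort mean."""
    return volumes_cc.mean() + 2 * volumes_cc.std()

# Hypothetical male cohort, to illustrate the sex-specific threshold design
male_liver_volumes = np.array([1500.0, 1600.0, 1700.0, 1550.0])
threshold = enlargement_threshold(male_liver_volumes)
liver_volume = organ_volume_cc(np.ones((100, 100, 100)), (1.0, 1.0, 1.0))
is_hepatomegaly = liver_volume > threshold
```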
3. Ethical Considerations and Fairness in the Use of Artificial Intelligence for Neuroradiology. AJNR Am J Neuroradiol 2023;44:1242-1248. PMID: 37652578. PMCID: PMC10631523. DOI: 10.3174/ajnr.a7963.
Abstract
In this review, the concepts of algorithmic bias and fairness are defined qualitatively and mathematically. Illustrative examples are given of what can go wrong when unintended bias or unfairness in algorithmic development occurs. The importance of explainability, accountability, and transparency with respect to artificial intelligence algorithm development and clinical deployment is discussed, grounded in the principle of "primum non nocere" (first, do no harm). Steps to mitigate unfairness and bias in task definition, data collection, model definition, training, testing, deployment, and feedback are provided. The implementation of fairness criteria that maximize benefit and minimize unfairness and harm to neuroradiology patients is also discussed, including suggestions for neuroradiologists to consider as artificial intelligence algorithms gain acceptance in neuroradiology practice and become incorporated into routine clinical workflow.
4. Deep Learning Approach for Differentiating Etiologies of Pediatric Retinal Hemorrhages: A Multicenter Study. Int J Mol Sci 2023;24:15105. PMID: 37894785. PMCID: PMC10606803. DOI: 10.3390/ijms242015105.
Abstract
Retinal hemorrhages in pediatric patients can be a diagnostic challenge for ophthalmologists. These hemorrhages can occur due to various underlying etiologies, including abusive head trauma, accidental trauma, and medical conditions. Accurate identification of the etiology is crucial for appropriate management and legal considerations. In recent years, deep learning techniques have shown promise in assisting healthcare professionals in making more accurate and timely diagnoses of a variety of disorders. We explore the potential of deep learning approaches for differentiating etiologies of pediatric retinal hemorrhages. Our study, which spanned multiple centers, analyzed 898 images, resulting in a final dataset of 597 retinal hemorrhage fundus photos categorized into medical (49.9%) and trauma (50.1%) etiologies. Deep learning models, specifically those based on ResNet and transformer architectures, were applied; FastViT-SA12, a hybrid transformer model, achieved the highest accuracy (90.55%) and an area under the receiver operating characteristic curve (AUC) of 90.55%, while ResNet18 secured the highest sensitivity (96.77%) on an independent test dataset. The study highlighted areas for optimization in artificial intelligence (AI) models specifically for pediatric retinal hemorrhages. While AI proves valuable in diagnosing these hemorrhages, the expertise of medical professionals remains irreplaceable. Collaborative efforts between AI specialists and pediatric ophthalmologists are crucial to fully harness AI's potential in diagnosing etiologies of pediatric retinal hemorrhages.
5. Impact of an automated large vessel occlusion detection tool on clinical workflow and patient outcomes. Front Neurol 2023;14:1179250. PMID: 37305764. PMCID: PMC10248058. DOI: 10.3389/fneur.2023.1179250.
Abstract
Purpose Automated large vessel occlusion (LVO) detection tools allow for prompt identification of positive LVO cases, but little is known about their role in acute stroke triage when implemented in a real-world setting. The purpose of this study was to evaluate an automated LVO detection tool's impact on acute stroke workflow and clinical outcomes. Materials and methods Consecutive patients presenting with suspected acute ischemic stroke who underwent computed tomography angiography (CTA) were compared before and after the implementation of an AI tool, RAPID LVO (RAPID 4.9, iSchemaView, Menlo Park, CA). Radiology CTA report turnaround times (TAT), door-to-treatment times, and the NIH stroke scale (NIHSS) after treatment were evaluated. Results A total of 439 cases in the pre-AI group and 321 cases in the post-AI group were included, with 62 (14.12%) and 43 (13.40%) cases, respectively, receiving acute therapies. The AI tool demonstrated a sensitivity of 0.96, a specificity of 0.85, a negative predictive value of 0.99, and a positive predictive value of 0.53. Radiology CTA report TAT improved significantly post-AI (mean 30.58 min pre-AI vs. 22 min post-AI, p < 0.0005), notably at the resident level (p < 0.0003) but not at higher levels of expertise. There were no differences in door-to-treatment times, but the NIHSS at discharge was improved for the pre-AI group after adjusting for confounders (parameter estimate = 3.97, p < 0.01). Conclusion Implementation of an automated LVO detection tool improved radiology TAT but did not translate to improved stroke metrics and outcomes in a real-world setting.
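The reported detection statistics (sensitivity 0.96, specificity 0.85, PPV 0.53, NPV 0.99) follow from standard confusion-matrix definitions; a minimal sketch, using illustrative counts chosen to reproduce similar rates (the abstract does not report the raw counts):

```python
def screening_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard screening metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),            # true positive rate
        "specificity": tn / (tn + fp),            # true negative rate
        "ppv": tp / (tp + fp),                    # positive predictive value
        "npv": tn / (tn + fn),                    # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Illustrative counts only, not taken from the study
m = screening_metrics(tp=48, fp=42, tn=240, fn=2)
```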
6. Deep Learning-Based Algorithm for Automatic Detection of Pulmonary Embolism in Chest CT Angiograms. Diagnostics (Basel) 2023;13:1324. PMID: 37046542. PMCID: PMC10093638. DOI: 10.3390/diagnostics13071324.
Abstract
Purpose: Since the prompt recognition of acute pulmonary embolism (PE) and the immediate initiation of treatment can significantly reduce the risk of death, we developed a deep learning (DL)-based application designed to automatically detect PEs on chest computed tomography angiograms (CTAs) and alert radiologists that an urgent interpretation is needed. Convolutional neural networks (CNNs) were used to design the application. The associated algorithm used a hybrid 3D/2D UNet topology. The training phase was performed on datasets adequately distributed in terms of vendors, patient age, slice thickness, and kVp. The objective of this study was to validate the performance of the algorithm in detecting suspected PEs on CTAs. Methods: The validation dataset included 387 anonymized real-world chest CTAs from multiple clinical sites (228 U.S. cities). The data were acquired on 41 different scanner models from five different scanner makers. The ground truth (presence or absence of PE on CTA images) was established by three independent U.S. board-certified radiologists. Results: The algorithm correctly identified 170 of 186 exams positive for PE (sensitivity 91.4% [95% CI: 86.4–95.0%]) and 184 of 201 exams negative for PE (specificity 91.5% [95% CI: 86.8–95.0%]), leading to an accuracy of 91.5%. False-negative cases were either chronic PEs or PEs at the limit of subsegmental arteries and close to partial volume effect artifacts. Most of the false-positive findings were due to contrast agent-related fluid artifacts, pulmonary veins, and lymph nodes. Conclusions: The DL-based algorithm has a high degree of diagnostic accuracy, with balanced sensitivity and specificity, for the detection of PE on CTAs.
7. Automated detection of IVC filters on radiographs with deep convolutional neural networks. Abdom Radiol (NY) 2023;48:758-764. PMID: 36371471. PMCID: PMC9902407. DOI: 10.1007/s00261-022-03734-8.
Abstract
PURPOSE To create an algorithm able to accurately detect IVC filters on radiographs without human assistance, capable of being used to screen radiographs to identify patients needing IVC filter retrieval. METHODS A primary dataset of 5225 images, 30% of which included IVC filters, was assembled and annotated. 85% of the data was used to train a Cascade R-CNN (Region-Based Convolutional Neural Network) object detection network incorporating a pre-trained ResNet-50 backbone. The remaining 15% of the data, independently annotated by three radiologists, was used as a test set to assess performance. The algorithm was also assessed on an independently constructed 1424-image dataset drawn from a different institution than the primary dataset. RESULTS On the primary test set, the algorithm achieved a sensitivity of 96.2% (95% CI 92.7-98.1%) and a specificity of 98.9% (95% CI 97.4-99.5%). Results were similar on the external test set: sensitivity 97.9% (95% CI 96.2-98.9%), specificity 99.6% (95% CI 98.9-99.9%). CONCLUSION Fully automated detection of IVC filters on radiographs, with the high sensitivity and excellent specificity required for an automated screening system, can be achieved using object detection neural networks. Further work will develop a system for identifying patients for IVC filter retrieval based on this algorithm.
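Confidence intervals of the kind quoted above are commonly computed with the Wilson score method for a binomial proportion; a minimal sketch with hypothetical counts (the abstract does not state which interval method the authors used):

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical: 227 of 236 filters detected on a test set
lo, hi = wilson_ci(227, 236)
```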
8. Point-of-Care Brain MRI: Preliminary Results from a Single-Center Retrospective Study. Radiology 2022;305:666-671. PMID: 35916678. PMCID: PMC9713449. DOI: 10.1148/radiol.211721.
Abstract
Background Point-of-care (POC) MRI is a bedside imaging technology with fewer than five units in clinical use in the United States and a paucity of scientific studies on clinical applications. Purpose To evaluate the clinical and operational impacts of deploying POC MRI in emergency department (ED) and intensive care unit (ICU) patient settings for bedside neuroimaging, including the turnaround time. Materials and Methods In this preliminary retrospective study, all patients in the ED and ICU at a single academic medical center who underwent noncontrast brain MRI from January 2021 to June 2021 were investigated to determine the number of patients who underwent bedside POC MRI. Turnaround time, examination limitations, relevant findings, and potential CT and fixed MRI findings were recorded for patients who underwent POC MRI. Descriptive statistics were used to describe clinical variables. The Mann-Whitney U test was used to compare the turnaround time between POC MRI and fixed MRI examinations. Results Of 638 noncontrast brain MRI examinations, 36 POC MRI examinations were performed in 35 patients (median age, 66 years [IQR, 57-77 years]; 21 women), with one patient undergoing two POC MRI examinations. Of the 36 POC MRI examinations, 13 (36%) occurred in the ED and 23 (64%) in the ICU. There were 12 of 36 (33%) POC MRI examinations interpreted as negative, 14 of 36 (39%) with clinically significant imaging findings, and 10 of 36 (28%) deemed nondiagnostic for reasons such as patient motion. Of 23 diagnostic POC MRI examinations with comparison CT available, three (13%) demonstrated acute infarctions not apparent on CT scans. Of seven diagnostic POC MRI examinations with subsequent fixed MRI examinations, two (29%) demonstrated missed versus interval subcentimeter infarctions, while the remaining demonstrated no change. The median turnaround time of POC MRI was 3.4 hours in the ED and 5.3 hours in the ICU. 
Conclusion Point-of-care (POC) MRI was performed rapidly in the emergency department and intensive care unit. A few POC MRI examinations demonstrated acute infarctions not apparent at standard-of-care CT examinations. © RSNA, 2022 Online supplemental material is available for this article. See also the editorial by Anzai and Moy in this issue.
9. Head-to-head comparison of commercial artificial intelligence solutions for detection of large vessel occlusion at a comprehensive stroke center. Front Neurol 2022;13:1026609. PMID: 36299266. PMCID: PMC9588973. DOI: 10.3389/fneur.2022.1026609.
Abstract
Purpose Despite the availability of commercial artificial intelligence (AI) tools for large vessel occlusion (LVO) detection, there is a paucity of data comparing traditional machine learning and deep learning solutions in a real-world setting. The purpose of this study is to compare and validate the performance of two AI-based tools (RAPID LVO and CINA LVO) for LVO detection. Materials and methods This was a retrospective, single-center study performed at a comprehensive stroke center from December 2020 to June 2021. CT angiography studies (n = 263) obtained for suspected stroke were evaluated for LVO. RAPID LVO is a traditional machine learning model that relies primarily on vessel density threshold assessment, while CINA LVO is an end-to-end deep learning tool implemented with multiple neural networks for detection and localization tasks. Reasons for errors were also recorded. Results There were 29 positive and 224 negative LVO cases by ground truth assessment. RAPID LVO demonstrated an accuracy of 0.86, sensitivity of 0.90, specificity of 0.86, positive predictive value of 0.45, and negative predictive value of 0.98, while CINA demonstrated an accuracy of 0.96, sensitivity of 0.76, specificity of 0.98, positive predictive value of 0.85, and negative predictive value of 0.97. Conclusion Both tools successfully detected most anterior circulation occlusions. RAPID LVO had higher sensitivity, while CINA LVO had higher accuracy and specificity. Interestingly, both tools were able to detect some, but not all, M2 MCA occlusions. This is the first study to compare traditional and deep learning LVO tools in the clinical setting.
10. Diagnostic Roots Radiofrequency Sensory Stimulation Looking for Symptomatic Injured Roots in Multiple Lumbar Stenosis. Korean J Neurotrauma 2022;18:296-305. PMID: 36381438. PMCID: PMC9634327. DOI: 10.13004/kjnt.2022.18.e26.
Abstract
Objective We present how to perform radiofrequency sensory stimulation (RFSS) and whether RFSS could be helpful in identifying symptomatic injured roots in multilevel lumbar stenosis. Methods Consecutive patients who underwent RFSS from 2010 to 2012 were enrolled. To identify pathologic lesions, RFSS was performed for suspicious roots, as determined using lumbar magnetic resonance imaging (MRI). The RFSS procedure resembled a transforaminal root block. During RFSS of a suspicious root, patients could indicate whether stimulation induced their usual pain and/or sensory changes and whether the same leg area was affected. The number of possible symptomatic roots on MRI was evaluated before and after RFSS. Based on the RFSS results, we confirmed the presence of symptomatic nerve root(s) and performed surgical decompression. Surgical results, such as numeric rating scale (NRS) scores for low back pain (LBP) and leg pain (LP) and the Oswestry disability index (ODI), were evaluated. Results Ten patients were enrolled in the study. Their mean age was 70.1±9.7 years. Clinically, NRS-LBP, NRS-LP, and ODI before surgery were 5.1, 7.5, and 53.2%, respectively. The mean number of suspicious roots was 2.6±0.8. After RFSS, the mean number of symptomatic roots was 1.6±1.0. On average, 1.4 lumbar segments were decompressed. The follow-up period was 35.3±12.8 months. At the last follow-up, NRS-LBP, NRS-LP, and ODI were 3.1, 1.5, and 35.3%, respectively. There was no recurrence or need for further surgical treatment for lumbar stenosis. Conclusion RFSS is a potentially helpful diagnostic tool for verifying and localizing symptomatic injured root lesions, particularly in patients with multilevel spinal stenosis.
11. Artificial Intelligence Assessment of Renal Scarring (AIRS Study). KIDNEY360 2021;3:83-90. PMID: 35368566. PMCID: PMC8967621. DOI: 10.34067/kid.0003662021.
Abstract
Background The goal of the Artificial Intelligence in Renal Scarring (AIRS) study is to develop machine learning tools for noninvasive quantification of kidney fibrosis from imaging scans. Methods We conducted a retrospective analysis of patients who had one or more abdominal computed tomography (CT) scans within 6 months of a kidney biopsy. The final cohort encompassed 152 CT scans from 92 patients, which included images of 300 native kidneys and 76 transplant kidneys. Two different convolutional neural networks (slice-level and voxel-level classifiers) were tested to differentiate severe versus mild/moderate kidney fibrosis (≥50% versus <50%). Interstitial fibrosis and tubular atrophy scores from kidney biopsy reports were used as ground truth. Results The two machine learning models demonstrated similar positive predictive value (0.886 versus 0.935) and accuracy (0.831 versus 0.879). Conclusions In summary, machine learning algorithms are a promising noninvasive diagnostic tool to quantify kidney fibrosis from CT scans. The clinical utility of these prediction tools, in terms of avoiding renal biopsy and associated bleeding risks in patients with severe fibrosis, remains to be validated in prospective clinical trials.
12. Identification and Localization of Endotracheal Tube on Chest Radiographs Using a Cascaded Convolutional Neural Network Approach. J Digit Imaging 2021;34:898-904. PMID: 34027589. PMCID: PMC8455772. DOI: 10.1007/s10278-021-00463-0.
Abstract
Rapid and accurate assessment of endotracheal tube (ETT) location is essential in the intensive care unit (ICU) setting, where timely identification of a mispositioned support device may prevent significant patient morbidity and mortality. This study proposes a series of deep learning-based algorithms which together iteratively identify and localize the position of an ETT relative to the carina on chest radiographs. Using the open-source MIMIC Chest X-Ray (MIMIC-CXR) dataset, a total of 16,000 patients were identified (8000 patients with an ETT and 8000 patients without an ETT). Three different convolutional neural network (CNN) algorithms were created. First, a regression loss function CNN was trained to estimate the coordinate location of the carina, which was then used to crop the original radiograph to the distal trachea and proximal bronchi. Second, a classifier CNN was trained using the cropped inputs to determine the presence or absence of an ETT. Finally, for radiographs containing an ETT, a third regression CNN was trained to both refine the coordinate location of the carina and identify the location of the distal ETT tip. Model accuracy was assessed by comparing the absolute distance between predicted and ground-truth coordinates, as well as CNN predictions relative to measurements documented in the original radiology reports. Upon five-fold cross-validation, binary classification for the presence or absence of an ETT demonstrated an accuracy, sensitivity, specificity, PPV, NPV, and AUC of 97.14%, 97.37%, 96.89%, 97.12%, 97.15%, and 99.58%, respectively. CNN-predicted coordinate locations of the carina and distal ETT tip were estimated within a median error of 0.46 cm and 0.60 cm from ground-truth annotations, respectively.
Overall, the final CNN assessment of the distance between the carina and distal ETT tip was predicted within a median error of 0.60 cm from manual ground-truth annotations, and a median error of 0.66 cm from measurements documented in the original radiology reports. A serial cascaded CNN approach demonstrates high accuracy for both identification and localization of the ETT tip and carina on chest radiographs. The high performance of the proposed multi-step strategy is in part related to iterative refinement of coordinate localization, as well as explicit image cropping, which focuses algorithm attention on key anatomic regions of interest.
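The cascaded strategy depends on cropping the radiograph around a stage-1 keypoint prediction before the later networks run; a minimal numpy sketch of that cropping step, with illustrative coordinates and window size (not the paper's actual values):

```python
import numpy as np

def crop_around_point(image: np.ndarray, center_yx: tuple[int, int], size: int) -> np.ndarray:
    """Extract a size x size patch centered on a predicted keypoint, clamped to the image."""
    h, w = image.shape
    y, x = center_yx
    half = size // 2
    # Clamp the window so it never runs off the image borders
    y0 = min(max(y - half, 0), h - size)
    x0 = min(max(x - half, 0), w - size)
    return image[y0:y0 + size, x0:x0 + size]

# Stage 1 predicts the carina location; later networks see only this patch
radiograph = np.zeros((512, 512))
patch = crop_around_point(radiograph, center_yx=(300, 250), size=256)
```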
13. Validation of a Deep Learning Tool in the Detection of Intracranial Hemorrhage and Large Vessel Occlusion. Front Neurol 2021;12:656112. PMID: 33995252. PMCID: PMC8116960. DOI: 10.3389/fneur.2021.656112.
Abstract
Purpose: Recently developed machine-learning algorithms have demonstrated strong performance in the detection of intracranial hemorrhage (ICH) and large vessel occlusion (LVO). However, their generalizability is often limited by the geographic bias of studies. The aim of this study was to validate a commercially available deep learning-based tool in the detection of both ICH and LVO across multiple hospital sites and vendors throughout the U.S. Materials and Methods: This was a retrospective and multicenter study using anonymized data from two institutions. Eight hundred fourteen non-contrast CT cases and 378 CT angiography cases were analyzed to evaluate ICH and LVO, respectively. The tool's ability to detect and quantify ICH, LVO, and their various subtypes was assessed among multiple CT vendors and hospitals across the United States. Ground truth was based on imaging interpretations from two board-certified neuroradiologists. Results: There were 255 positive and 559 negative ICH cases. Accuracy was 95.6%, sensitivity was 91.4%, and specificity was 97.5% for the ICH tool. ICH was further stratified into the following subtypes: intraparenchymal, intraventricular, epidural/subdural, and subarachnoid, with true positive rates of 92.9, 100, 94.3, and 89.9%, respectively. ICH true positive rates by volume [small (<5 mL), medium (5–25 mL), and large (>25 mL)] were 71.8, 100, and 100%, respectively. There were 156 positive and 222 negative LVO cases. The LVO tool demonstrated an accuracy of 98.1%, sensitivity of 98.1%, and specificity of 98.2%. A subset of 55 randomly selected cases was also assessed for LVO detection at various sites, including the distal internal carotid artery, middle cerebral artery M1 segment, proximal middle cerebral artery M2 segment, and distal middle cerebral artery M2 segment, with an accuracy of 97.0%, sensitivity of 94.3%, and specificity of 97.4%.
Conclusion: Deep learning tools can be effective in the detection of both ICH and LVO across a wide variety of hospital systems. While some limitations were identified, specifically in the detection of small ICH and distal M2 occlusion, this study highlights a deep learning tool that can assist radiologists in the detection of emergent findings in a variety of practice settings.
14. Outcomes of Artificial Intelligence Volumetric Assessment of Kidneys and Renal Tumors for Preoperative Assessment of Nephron Sparing Interventions. J Endourol 2021;35:1411-1418. PMID: 33847156. DOI: 10.1089/end.2020.1125.
Abstract
Background Renal cell carcinoma is the most common kidney cancer and the 13th most common cause of cancer death worldwide. Partial nephrectomy and percutaneous ablation, increasingly utilized to treat small renal masses and preserve renal parenchyma, require precise preoperative imaging interpretation. We sought to develop and evaluate a convolutional neural network (CNN), a type of deep learning artificial intelligence, to act as a surgical planning aid by determining renal tumor and kidney volumes via segmentation on single-phase computed tomography (CT). Materials and Methods After institutional review board approval, the CT images of 319 patients were retrospectively analyzed. Two distinct CNNs were developed for (1) bounding cube localization of the right and left hemi-abdomen and (2) segmentation of the renal parenchyma and tumor within each bounding cube. Training was performed on a randomly selected cohort of 269 patients. CNN performance was evaluated on a separate cohort of 50 patients using Sorensen-Dice coefficients (which measures the spatial overlap between the manually segmented and neural network derived segmentations) and Pearson correlation coefficients. Experiments were run on a GPU-optimized workstation with a single NVIDIA GeForce GTX Titan X (12GB, Maxwell architecture). Results Median Dice coefficients for kidney and tumor segmentation were 0.970 and 0.816, respectively; Pearson correlation coefficients between CNN-generated and human-annotated estimates for kidney and tumor volume were 0.998 and 0.993 (p < 0.001), respectively. End-to-end trained CNNs were able to perform renal parenchyma and tumor segmentation on a new test case in an average of 5.6 seconds. Conclusions Initial experience with automated deep learning artificial intelligence demonstrates that it is capable of rapidly and accurately segmenting kidneys and renal tumors on single-phase contrast-enhanced CT scans and calculating tumor and renal volumes.
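The two evaluation metrics above, the Sorensen-Dice coefficient and the Pearson correlation between volume estimates, can be sketched as follows with toy arrays (not the study's data):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Sorensen-Dice coefficient: spatial overlap between two binary masks."""
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

# Toy masks: two of three predicted-positive voxels match the ground truth
pred = np.array([1, 1, 1, 0], dtype=bool)
truth = np.array([1, 1, 0, 1], dtype=bool)
d = dice(pred, truth)

# Pearson correlation between CNN-derived and manually annotated volumes (toy values)
cnn_volumes = np.array([150.0, 210.0, 95.0, 180.0])
manual_volumes = np.array([152.0, 205.0, 99.0, 178.0])
r = np.corrcoef(cnn_volumes, manual_volumes)[0, 1]
```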
15. Artificial Intelligence and Acute Stroke Imaging. AJNR Am J Neuroradiol 2021;42:2-11. PMID: 33243898. PMCID: PMC7814792. DOI: 10.3174/ajnr.a6883.
Abstract
Artificial intelligence technology is a rapidly expanding field with many applications in acute stroke imaging, including ischemic and hemorrhagic subtypes. Early identification of acute stroke is critical for initiating prompt intervention to reduce morbidity and mortality. Artificial intelligence can help with various aspects of the stroke treatment paradigm, including infarct or hemorrhage detection, segmentation, classification, large vessel occlusion detection, Alberta Stroke Program Early CT Score grading, and prognostication. In particular, emerging artificial intelligence techniques such as convolutional neural networks show promise in performing these imaging-based tasks efficiently and accurately. The purpose of this review is twofold: first, to describe artificial intelligence methods and the available public and commercial platforms in stroke imaging, and second, to summarize the literature on current artificial intelligence-driven applications for acute stroke triage, surveillance, and prediction.
16. Integrating Eye Tracking and Speech Recognition Accurately Annotates MR Brain Images for Deep Learning: Proof of Principle. Radiol Artif Intell 2021;3:e200047. PMID: 33842890. PMCID: PMC7845782. DOI: 10.1148/ryai.2020200047.
Abstract
PURPOSE To generate and assess an algorithm combining eye tracking and speech recognition to extract brain lesion location labels automatically for deep learning (DL). MATERIALS AND METHODS In this retrospective study, 700 two-dimensional brain tumor MRI scans from the Brain Tumor Segmentation database were clinically interpreted. For each image, a single radiologist dictated a standard phrase describing the lesion into a microphone, simulating clinical interpretation. Eye-tracking data were recorded simultaneously. Using speech recognition, gaze points corresponding to each lesion were obtained. The lesion locations were used to train a keypoint detection convolutional neural network, which was then used to localize lesions in an independent test set of 85 images. The statistical measure used to evaluate the method was percent accuracy. RESULTS Eye tracking with speech recognition was 92% accurate in labeling lesion locations in the training dataset, demonstrating that fully simulated interpretation can yield reliable tumor location labels. These labels were then used to train the DL network. The detection network trained on these labels predicted lesion location in a separate test set with 85% accuracy. CONCLUSION The DL network was able to locate brain tumors on the basis of training data that were labeled automatically from simulated clinical image interpretation. © RSNA, 2020.
|
17
|
Development and external validation of a prognostic tool for COVID-19 critical disease. PLoS One 2020; 15:e0242953. [PMID: 33296357 PMCID: PMC7725393 DOI: 10.1371/journal.pone.0242953] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2020] [Accepted: 10/10/2020] [Indexed: 01/06/2023] Open
Abstract
Background The rapid spread of coronavirus disease 2019 (COVID-19) revealed significant constraints in critical care capacity. In anticipation of subsequent waves, reliable prediction of disease severity is essential for critical care capacity management and may enable earlier targeted interventions to improve patient outcomes. The purpose of this study is to develop and externally validate a prognostic model/clinical tool for predicting COVID-19 critical disease at presentation to medical care. Methods This is a retrospective study of a prognostic model for the prediction of COVID-19 critical disease where critical disease was defined as ICU admission, ventilation, and/or death. The derivation cohort was used to develop a multivariable logistic regression model. Covariates included patient comorbidities, presenting vital signs, and laboratory values. Model performance was assessed on the validation cohort by concordance statistics. The model was developed with consecutive patients with COVID-19 who presented to University of California Irvine Medical Center in Orange County, California. External validation was performed with a random sample of patients with COVID-19 at Emory Healthcare in Atlanta, Georgia. Results Of a total of 3208 patients tested in the derivation cohort, 9% (299/3208) were positive for COVID-19. Clinical data including past medical history and presenting laboratory values were available for 29% (87/299) of patients (median age, 48 years [range, 21–88 years]; 64% [36/55] male). The most common comorbidities included obesity (37%, 31/87), hypertension (37%, 32/87), and diabetes (24%, 24/87). Critical disease was present in 24% (21/87).
After backward stepwise selection, the following factors were associated with greatest increased risk of critical disease: number of comorbidities, body mass index, respiratory rate, white blood cell count, % lymphocytes, serum creatinine, lactate dehydrogenase, high sensitivity troponin I, ferritin, procalcitonin, and C-reactive protein. Of a total of 40 patients in the validation cohort (median age, 60 years [range, 27–88 years]; 55% [22/40] male), critical disease was present in 65% (26/40). Model discrimination in the validation cohort was high (concordance statistic: 0.94, 95% confidence interval 0.87–1.01). A web-based tool was developed to enable clinicians to input patient data and view likelihood of critical disease. Conclusions and relevance We present a model which accurately predicted COVID-19 critical disease risk using comorbidities and presenting vital signs and laboratory values, on derivation and validation cohorts from two different institutions. If further validated on additional cohorts of patients, this model/clinical tool may provide useful prognostication of critical care needs.
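The concordance (c-) statistic this study reports measures discrimination: the probability that a randomly chosen patient who developed critical disease was assigned a higher predicted risk than one who did not (for a binary outcome it equals the ROC AUC). A minimal pairwise implementation, with invented risk scores rather than the study's model outputs:

```python
import numpy as np

def c_statistic(risk: np.ndarray, outcome: np.ndarray) -> float:
    """Probability that a case outranks a control; ties count half."""
    pos = risk[outcome == 1]  # predicted risks for patients with critical disease
    neg = risk[outcome == 0]  # predicted risks for patients without
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Toy data: 8 patients with hypothetical predicted risks and outcomes
risk = np.array([0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1])
outcome = np.array([1, 1, 0, 1, 0, 0, 1, 0])
print(c_statistic(risk, outcome))  # 12 concordant of 16 pairs -> 0.75
```

A c-statistic of 0.5 is chance-level ranking and 1.0 is perfect separation, which is why the reported 0.94 indicates high discrimination.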
|
18
|
Abstract
Deep learning with convolutional neural networks (CNNs) has experienced tremendous growth in multiple healthcare applications and has been shown to have high accuracy in semantic segmentation of medical (e.g., radiology and pathology) images. However, a key barrier in the required training of CNNs is obtaining large-scale and precisely annotated imaging data. We sought to address the lack of annotated data with eye tracking technology. As a proof of principle, our hypothesis was that segmentation masks generated with the help of eye tracking (ET) would be very similar to those rendered by hand annotation (HA). Additionally, our goal was to show that a CNN trained on ET masks would be equivalent to one trained on HA masks, the latter being the current standard approach. Step 1: Screen captures of 19 publicly available radiologic images of assorted structures within various modalities were analyzed. ET and HA masks for all regions of interest (ROIs) were generated from these image datasets. Step 2: Utilizing a similar approach, ET and HA masks for 356 publicly available T1-weighted postcontrast meningioma images were generated. Three hundred six of these image + mask pairs were used to train a CNN with U-net-based architecture. The remaining 50 images were used as the independent test set. Step 1: ET and HA masks for the nonneurological images had an average Dice similarity coefficient (DSC) of 0.86 between each other. Step 2: Meningioma ET and HA masks had an average DSC of 0.85 between each other. After separate training using both approaches, the ET approach performed virtually identically to HA on the test set of 50 images. The former had an area under the curve (AUC) of 0.88, while the latter had AUC of 0.87. ET and HA predictions had trimmed mean DSCs compared to the original HA maps of 0.73 and 0.74, respectively. These trimmed DSCs between ET and HA were found to be statistically equivalent with a p value of 0.015. 
We have demonstrated that ET can create segmentation masks suitable for deep learning semantic segmentation. Future work will integrate ET to produce masks in a faster, more natural manner that distracts less from typical radiology clinical workflow.
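The Dice similarity coefficient (DSC) used throughout this abstract to compare eye-tracking and hand-annotated masks is twice the overlap divided by the combined size of the two masks. A minimal NumPy version over boolean segmentation masks (the toy masks are illustrative):

```python
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """DSC = 2|A intersect B| / (|A| + |B|) for boolean masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy example: two 4x4 masks agreeing on 2 of 3 labeled pixels each
a = np.zeros((4, 4), dtype=bool); a[1, 1:3] = True; a[2, 1] = True
b = np.zeros((4, 4), dtype=bool); b[1, 1:3] = True; b[2, 2] = True
print(dice(a, b))  # 2*2 / (3+3) = 0.666...
```

A DSC of 1.0 means identical masks and 0.0 means no overlap, so the reported 0.85 to 0.86 between ET and HA masks indicates substantial but imperfect agreement.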
|
19
|
Impact of COVID-19 on Acute Stroke Presentation at a Comprehensive Stroke Center. Front Neurol 2020; 11:850. [PMID: 32922355 PMCID: PMC7456804 DOI: 10.3389/fneur.2020.00850] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2020] [Accepted: 07/07/2020] [Indexed: 01/08/2023] Open
Abstract
Background: COVID-19 has impacted healthcare in many ways, including presentation of acute stroke. Since time-sensitive thrombolysis is essential for reducing morbidity and mortality in acute stroke, any delays due to the pandemic can have serious consequences. Methods: We retrospectively reviewed the electronic medical records for patients presenting with acute ischemic stroke at a comprehensive stroke center in March–April 2020 (the early months of COVID-19) and compared to the same time period in 2019. Stroke metrics such as incidence, time to arrival, and immediate outcomes were assessed. Results: There were 48 acute ischemic strokes (of which 7 were transfers) in March–April 2020 compared to 64 (of which 12 were transfers) in 2019. The average last known well to arrival time (±SD) for stroke codes was 1,041 (±1682.1) min in 2020 and 554 (±604.9) min in 2019. Of the patients presenting directly to the ED with a known last known well time, 27.8% (10/36) presented in the first 4.5 h in 2020, in contrast to 40.5% (15/37) in 2019. Patients who died comprised 10.4% of the stroke cohort in 2020 (5/48) compared to 6.3% in 2019 (4/64). Conclusions: During the first 2 months of COVID-19, there were fewer overall stroke cases that presented to our hospital, and among these cases, presentation was delayed in comparison to the same time period in 2019. Recognizing how stroke presentation may be affected by COVID-19 would allow for optimization of established stroke triage algorithms to ensure safe and timely delivery of stroke care during a pandemic.
|
20
|
Artificial Intelligence in Neuroradiology: Current Status and Future Directions. AJNR Am J Neuroradiol 2020; 41:E52-E59. [PMID: 32732276 PMCID: PMC7658873 DOI: 10.3174/ajnr.a6681] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
Fueled by new techniques, computational tools, and broader availability of imaging data, artificial intelligence has the potential to transform the practice of neuroradiology. The recent exponential increase in publications related to artificial intelligence and the central focus on artificial intelligence at recent professional and scientific radiology meetings underscore the importance of this technology. There is growing momentum behind leveraging artificial intelligence techniques to improve workflow, diagnosis, and treatment and to enhance the value of quantitative imaging techniques. This article explores the reasons why neuroradiologists should care about the investments in new artificial intelligence applications, highlights current activities and the roles neuroradiologists are playing, and renders a few predictions regarding the near future of artificial intelligence in neuroradiology.
|
21
|
Applications of Artificial Intelligence to Prostate Multiparametric MRI (mpMRI): Current and Emerging Trends. Cancers (Basel) 2020; 12:E1204. [PMID: 32403240 PMCID: PMC7281682 DOI: 10.3390/cancers12051204] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2020] [Revised: 05/02/2020] [Accepted: 05/08/2020] [Indexed: 01/13/2023] Open
Abstract
Prostate carcinoma is one of the most prevalent cancers worldwide. Multiparametric magnetic resonance imaging (mpMRI) is a non-invasive tool that can improve prostate lesion detection, classification, and volume quantification. Machine learning (ML), a branch of artificial intelligence, can rapidly and accurately analyze mpMRI images. ML could provide better standardization and consistency in identifying prostate lesions and enhance prostate carcinoma management. This review summarizes ML applications to prostate mpMRI and focuses on prostate organ segmentation, lesion detection and segmentation, and lesion characterization. A literature search was conducted to find studies that have applied ML methods to prostate mpMRI. To date, prostate organ segmentation and volume approximation have been well executed using various ML techniques. Prostate lesion detection and segmentation are much more challenging tasks for ML and were attempted in several studies. They largely remain unsolved problems due to data scarcity and the limitations of current ML algorithms. By contrast, prostate lesion characterization has been successfully completed in several studies because of better data availability. Overall, ML is well situated to become a tool that enhances radiologists' accuracy and speed.
|
22
|
|
23
|
Abstract
Bone age assessment (BAA) is a commonly performed diagnostic study in pediatric radiology to assess skeletal maturity. The most commonly utilized method for assessment of BAA is the Greulich and Pyle method (Pediatr Radiol 46.9:1269-1274, 2016; Arch Dis Child 81.2:172-173, 1999) atlas. The evaluation of BAA can be a tedious and time-consuming process for the radiologist. As such, several computer-assisted detection/diagnosis (CAD) methods have been proposed for automation of BAA. Classical CAD tools have traditionally relied on hard-coded algorithmic features for BAA which suffer from a variety of drawbacks. Recently, the advent and proliferation of convolutional neural networks (CNNs) has shown promise in a variety of medical imaging applications. There have been at least two published applications of using deep learning for evaluation of bone age (Med Image Anal 36:41-51, 2017; JDI 1-5, 2017). However, current implementations are limited by a combination of both architecture design and relatively small datasets. The purpose of this study is to demonstrate the benefits of a customized neural network algorithm carefully calibrated to the evaluation of bone age utilizing a relatively large institutional dataset. In doing so, this study will aim to show that advanced architectures can be successfully trained from scratch in the medical imaging domain and can generate results that outperform any existing proposed algorithm. The training data consisted of 10,289 images of different skeletal age examinations, 8909 from the hospital Picture Archiving and Communication System at our institution and 1383 from the public Digital Hand Atlas Database. The data was separated into four cohorts, one each for male and female children above the age of 8, and one each for male and female children below the age of 10. The testing set consisted of 20 radiographs of each 1-year-age cohort from 0 to 1 years to 14-15+ years, half male and half female. 
The testing set included left-hand radiographs done for bone age assessment, trauma evaluation without significant findings, and skeletal surveys. A customized 14-hidden-layer neural network was designed for this study. The network included several state-of-the-art techniques, including residual-style connections, inception layers, and spatial transformer layers. Data augmentation was applied to the network inputs to prevent overfitting. A linear regression output was utilized. Mean square error was used as the network loss function and mean absolute error (MAE) was utilized as the primary performance metric. Validation and test set MAEs for young females were 0.654 and 0.561, respectively. For older females, validation and test MAEs were 0.662 and 0.497, respectively. For young males, validation and test MAEs were 0.649 and 0.585, respectively. Finally, for older males, validation and test set MAEs were 0.581 and 0.501, respectively. The female cohorts were trained for 900 epochs each and the male cohorts were trained for 600 epochs. An eightfold cross-validation set was employed for hyperparameter tuning. Test error was obtained after training on the full data set with the selected hyperparameters. Using our proposed customized neural network architecture on our large available data, we achieved aggregate validation and test set mean absolute errors of 0.637 and 0.536, respectively. To date, this is the best published performance utilizing deep learning for bone age assessment. Our results support our initial hypothesis that customized, purpose-built neural networks provide improved performance over networks derived from pre-trained imaging data sets. We build on that initial work by showing that the addition of state-of-the-art techniques such as residual connections and inception architecture further improves prediction accuracy.
This is important because the current assumption for use of residual and/or inception architectures is that a large pre-trained network is required for successful implementation, given the relatively small datasets in medical imaging. Instead, we show that a small, customized architecture incorporating advanced CNN strategies can indeed be trained from scratch, yielding significant improvements in algorithm accuracy. Notably, for all four cohorts, test error was lower than validation error. One reason for this is that the ground truth for our test set was obtained by averaging two pediatric radiologist reads, whereas the training data relied on a single read. This suggests that despite relatively noisy training data, the algorithm could successfully model the variation between observers and generate estimates that are close to the expected ground truth.
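The loss and metric pairing described above (mean squared error for training, mean absolute error for reporting) and the averaging of two radiologist reads into test-set ground truth can be sketched in a few lines. The ages below are made-up illustrative values, not study data:

```python
import numpy as np

# Network predictions of bone age (years) for four hypothetical test cases
predicted = np.array([5.2, 9.8, 12.1, 7.4])

# Test-set ground truth: average of two pediatric radiologist reads
read1 = np.array([5.0, 10.5, 11.5, 8.0])
read2 = np.array([5.0, 10.5, 11.7, 7.8])
ground_truth = (read1 + read2) / 2

errors = predicted - ground_truth
mse = np.mean(errors ** 2)       # training loss
mae = np.mean(np.abs(errors))    # reported performance metric
print(f"MSE={mse:.4f}, MAE={mae:.4f}")
```

MAE is reported rather than MSE because it stays in the units of the prediction (years of skeletal age) and is less dominated by occasional large errors.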
|
24
|
Hybrid 3D/2D Convolutional Neural Network for Hemorrhage Evaluation on Head CT. AJNR Am J Neuroradiol 2018; 39:1609-1616. [PMID: 30049723 DOI: 10.3174/ajnr.a5742] [Citation(s) in RCA: 122] [Impact Index Per Article: 20.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2018] [Accepted: 06/06/2018] [Indexed: 01/28/2023]
Abstract
BACKGROUND AND PURPOSE Convolutional neural networks are a powerful technology for image recognition. This study evaluates a convolutional neural network optimized for the detection and quantification of intraparenchymal, epidural/subdural, and subarachnoid hemorrhages on noncontrast CT. MATERIALS AND METHODS This study was performed in 2 phases. First, a training cohort of all NCCTs acquired at a single institution between January 1, 2017, and July 31, 2017, was used to develop and cross-validate a custom hybrid 3D/2D mask ROI-based convolutional neural network architecture for hemorrhage evaluation. Second, the trained network was applied prospectively to all NCCTs ordered from the emergency department between February 1, 2018, and February 28, 2018, in an automated inference pipeline. Hemorrhage-detection accuracy, area under the curve, sensitivity, specificity, positive predictive value, and negative predictive value were assessed for full and balanced datasets and were further stratified by hemorrhage type and size. Quantification was assessed by the Dice score coefficient and the Pearson correlation. RESULTS A 10,159-examination training cohort (512,598 images; 901/8.1% hemorrhages) and an 862-examination test cohort (23,668 images; 82/12% hemorrhages) were used in this study. Accuracy, area under the curve, sensitivity, specificity, positive predictive value, and negative predictive value for hemorrhage detection were 0.975, 0.983, 0.971, 0.975, 0.793, and 0.997 on training cohort cross-validation and 0.970, 0.981, 0.951, 0.973, 0.829, and 0.993 for the prospective test set. Dice scores for intraparenchymal hemorrhage, epidural/subdural hemorrhage, and SAH were 0.931, 0.863, and 0.772, respectively. CONCLUSIONS A customized deep learning tool is accurate in the detection and quantification of hemorrhage on NCCT.
The high performance demonstrated on prospective NCCTs ordered from the emergency department suggests the clinical viability of the proposed deep learning tool.
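The sensitivity, specificity, PPV, and NPV reported above all derive from the four cells of a confusion matrix. A minimal sketch with illustrative counts (chosen only so the outputs land near the reported test-set values, not taken from the study):

```python
def detection_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard screening metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),  # fraction of hemorrhages detected
        "specificity": tn / (tn + fp),  # fraction of normals correctly cleared
        "ppv": tp / (tp + fp),          # how trustworthy a positive call is
        "npv": tn / (tn + fn),          # how trustworthy a negative call is
    }

# Hypothetical counts for a low-prevalence test cohort
m = detection_metrics(tp=78, fp=16, tn=759, fn=4)
print({k: round(v, 3) for k, v in m.items()})
```

Note how PPV (about 0.83 here) lags far behind NPV (about 0.99) even with high sensitivity and specificity: at low hemorrhage prevalence, even a small false-positive rate yields many false alarms relative to true positives, which mirrors the gap between the reported PPV and NPV.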
|
25
|
Local Glioma Cells Are Associated with Vascular Dysregulation. AJNR Am J Neuroradiol 2018; 39:507-514. [PMID: 29371254 DOI: 10.3174/ajnr.a5526] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2017] [Accepted: 11/09/2017] [Indexed: 12/20/2022]
Abstract
BACKGROUND AND PURPOSE Malignant glioma is a highly infiltrative malignancy that causes variable disruptions to the structure and function of the cerebrovasculature. While many of these structural disruptions have known correlative histopathologic alterations, the mechanisms underlying vascular dysfunction identified by resting-state blood oxygen level-dependent imaging are not yet known. The purpose of this study was to characterize the alterations that correlate with a blood oxygen level-dependent biomarker of vascular dysregulation. MATERIALS AND METHODS Thirty-two stereotactically localized biopsies were obtained from contrast-enhancing (n = 16) and nonenhancing (n = 16) regions during open surgical resection of malignant glioma in 17 patients. Preoperative resting-state blood oxygen level-dependent fMRI was used to evaluate the relationships between radiographic and histopathologic characteristics. Signal intensity for a blood oxygen level-dependent biomarker was compared with scores of tumor infiltration and microvascular proliferation as well as total cell and neuronal density. RESULTS Biopsies corresponded to a range of blood oxygen level-dependent signals, ranging from relatively normal (z = -4.79) to markedly abnormal (z = 8.84). Total cell density was directly related to blood oxygen level-dependent signal abnormality (P = .013, R2 = 0.19), while the neuronal labeling index was inversely related to blood oxygen level-dependent signal abnormality (P = .016, R2 = 0.21). The blood oxygen level-dependent signal abnormality was also related to tumor infiltration (P = .014) and microvascular proliferation (P = .045). CONCLUSIONS The relationship between local, neoplastic characteristics and a blood oxygen level-dependent biomarker of vascular function suggests that local effects of glioma cell infiltration contribute to vascular dysregulation.
|
26
|
A novel method of quantifying brain atrophy associated with age-related hearing loss. NEUROIMAGE-CLINICAL 2017; 16:205-209. [PMID: 28808617 PMCID: PMC5544491 DOI: 10.1016/j.nicl.2017.07.021] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/26/2017] [Revised: 07/04/2017] [Accepted: 07/22/2017] [Indexed: 11/28/2022]
Abstract
A growing body of evidence has shown that a relationship exists between age-related hearing loss and structural brain changes. However, no method yet exists to measure brain atrophy associated with hearing loss from a single MRI study (i.e., without an interval comparison study) that produces an independently interpretable output. Such a method would be beneficial for studying patterns of structural brain changes on a large scale. Here, we introduce such a method. Audiometric evaluations and mini-mental state exams were obtained in 34 subjects over the age of 80 who had undergone brain MRI in the past 6 years. CSF and parenchymal brain volumes (whole brain and by lobe) were obtained through a novel, fully automated algorithm. Atrophy was calculated by taking the ratio of CSF to parenchyma. High-frequency hearing loss was associated with disproportionate temporal lobe atrophy relative to whole brain atrophy, independent of age (r = 0.471, p = 0.005). Mental state was associated with frontoparietal atrophy but not with temporal lobe atrophy, which is consistent with known results. Our method demonstrates that hearing loss is associated with temporal lobe atrophy and generalized whole brain atrophy. Our algorithm is efficient, fully automated, and able to detect significant associations in a small cohort. A novel, fully automated method measuring brain atrophy using CSF to brain parenchymal volume ratios is introduced. Brain atrophy is obtained from a single MRI study and, unlike brain volume, is interpretable without relative comparison. Age-related hearing loss is significantly associated with both temporal lobe and generalized whole brain atrophy.
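The atrophy measure described above is simply the ratio of CSF volume to parenchymal volume from a single study, which is then correlated with hearing thresholds. A minimal sketch with made-up volumes and thresholds (the correlation here is not the study's r = 0.471, and no age adjustment is shown):

```python
import numpy as np

# Hypothetical segmented volumes (cc) for five subjects
csf = np.array([210.0, 250.0, 305.0, 280.0, 330.0])
parenchyma = np.array([1150.0, 1100.0, 1010.0, 1060.0, 980.0])

# Atrophy ratio: higher CSF relative to parenchyma means more atrophy,
# and the ratio is interpretable without a prior comparison scan
atrophy = csf / parenchyma

# Hypothetical high-frequency hearing thresholds (dB HL)
hearing_loss_db = np.array([25.0, 35.0, 60.0, 45.0, 70.0])
r = np.corrcoef(hearing_loss_db, atrophy)[0, 1]
print(f"Pearson r = {r:.3f}")
```

Because each subject's ratio is meaningful on its own, the method works from one MRI per subject, in contrast to longitudinal volume-change measurements that require an interval study.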
|
27
|
Sodium Fluorescein Facilitates Guided Sampling of Diagnostic Tumor Tissue in Nonenhancing Gliomas. Neurosurgery 2017. [DOI: 10.1093/neuros/nyx271] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND
Accurate tissue sampling in nonenhancing (NE) gliomas is a unique surgical challenge due to their intratumoral histological heterogeneity and absence of contrast enhancement as a guide for intraoperative stereotactic guidance. Instead, T2/fluid-attenuated inversion-recovery (FLAIR) hyperintensity on MRI is commonly used as an imaging surrogate for pathological tissue, but sampling from this region can yield nondiagnostic or underdiagnostic brain tissue. Sodium fluorescein is an intraoperative fluorescent dye that has a high predictive value for tumor identification in areas of contrast enhancement and NE in glioblastomas. However, the underlying histopathological alterations in fluorescent regions of NE gliomas remain undefined.
OBJECTIVE
To evaluate whether fluorescein can identify diagnostic tissue and differentiate regions with higher malignant potential during surgery for NE gliomas, thus improving sampling accuracy.
METHODS
Thirteen patients who presented with NE, T2/FLAIR hyperintense lesions suspicious for glioma received fluorescein (10%, 3 mg/kg intravenously) during surgical resection.
RESULTS
Patchy fluorescence was identified within the T2/FLAIR hyperintense area in 10 of 13 (77%) patients. Samples taken from fluorescent regions were more likely to demonstrate diagnostic glioma tissue and cytologic atypia (P < .05). Fluorescein demonstrated a 95% positive predictive value for the presence of diagnostic tissue. Samples from areas of fluorescence also demonstrated greater total cell density and higher Ki-67 labeling than nonfluorescent biopsies (P < .05).
CONCLUSION
Fluorescence in NE gliomas is highly predictive of diagnostic tumor tissue and regions of higher cell density and proliferative activity.
|
28
|
A Multiparametric Model for Mapping Cellularity in Glioblastoma Using Radiographically Localized Biopsies. AJNR Am J Neuroradiol 2017; 38:890-898. [PMID: 28255030 DOI: 10.3174/ajnr.a5112] [Citation(s) in RCA: 77] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2016] [Accepted: 12/09/2016] [Indexed: 11/07/2022]
Abstract
BACKGROUND AND PURPOSE The complex MR imaging appearance of glioblastoma is a function of underlying histopathologic heterogeneity. A better understanding of these correlations, particularly the influence of infiltrating glioma cells and vasogenic edema on T2 and diffusivity signal in nonenhancing areas, has important implications in the management of these patients. With localized biopsies, the objective of this study was to generate a model capable of predicting cellularity at each voxel within an entire tumor volume as a function of signal intensity, thus providing a means of quantifying tumor infiltration into surrounding brain tissue. MATERIALS AND METHODS Ninety-one localized biopsies were obtained from 36 patients with glioblastoma. Signal intensities corresponding to these samples were derived from T1-postcontrast subtraction, T2-FLAIR, and ADC sequences by using an automated coregistration algorithm. Cell density was calculated for each specimen by using an automated cell-counting algorithm. Signal intensity was plotted against cell density for each MR image. RESULTS T2-FLAIR (r = -0.61) and ADC (r = -0.63) sequences were inversely correlated with cell density. T1-postcontrast (r = 0.69) subtraction was directly correlated with cell density. Combining these relationships yielded a multiparametric model with improved correlation (r = 0.74), suggesting that each sequence offers different and complementary information. CONCLUSIONS Using localized biopsies, we have generated a model that illustrates a quantitative and significant relationship between MR signal and cell density. Projecting this relationship over the entire tumor volume allows mapping of the intratumoral heterogeneity in both the contrast-enhancing tumor core and nonenhancing margins of glioblastoma and may be used to guide extended surgical resection, localized biopsies, and radiation field mapping.
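A sketch of the kind of multiparametric model this abstract describes: cell density regressed jointly on the three MR signals, combining a sequence that correlates positively with cellularity (T1 postcontrast subtraction) with two that correlate negatively (T2-FLAIR, ADC). The data are synthetic and the study's actual coefficients are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 91  # the study analyzed 91 localized biopsies

# Simulated, normalized voxel signal intensities at biopsy sites
t1_post = rng.normal(0, 1, n)  # T1-postcontrast subtraction
flair = rng.normal(0, 1, n)    # T2-FLAIR
adc = rng.normal(0, 1, n)      # apparent diffusion coefficient

# Simulated cellularity: rises with T1-post, falls with FLAIR and ADC
cell_density = 0.7 * t1_post - 0.6 * flair - 0.6 * adc + rng.normal(0, 0.5, n)

# Ordinary least squares fit of the multiparametric model
X = np.column_stack([np.ones(n), t1_post, flair, adc])
beta, *_ = np.linalg.lstsq(X, cell_density, rcond=None)

# Correlation of fitted vs. observed cellularity, analogous to the
# improved multiparametric r the study reports over single sequences
predicted = X @ beta
r_multi = np.corrcoef(predicted, cell_density)[0, 1]
print(f"multiparametric r = {r_multi:.2f}")
```

Once fitted, applying the model voxelwise across the coregistered volumes yields the whole-tumor cellularity map the authors use to characterize nonenhancing margins.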
|
29
|
Fully Convolutional Deep Residual Neural Networks for Brain Tumor Segmentation. BRAINLESION: GLIOMA, MULTIPLE SCLEROSIS, STROKE AND TRAUMATIC BRAIN INJURIES 2016. [DOI: 10.1007/978-3-319-55524-9_11] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/04/2022]
|