1. Gandomkar Z, Brennan PC, Suleiman ME. Optimizing Radiologic Detection of COVID-19. Artif Intell Med 2022. doi: 10.1007/978-3-030-64573-1_285
2. Optimizing Radiologic Detection of COVID-19. Artif Intell Med 2021. doi: 10.1007/978-3-030-58080-3_285-1
3. Frank SM, Qi A, Ravasio D, Sasaki Y, Rosen EL, Watanabe T. Supervised Learning Occurs in Visual Perceptual Learning of Complex Natural Images. Curr Biol 2020;30:2995-3000.e3. PMID: 32502415. doi: 10.1016/j.cub.2020.05.050
Abstract
There have been long-standing debates regarding whether supervised or unsupervised learning mechanisms are involved in visual perceptual learning (VPL) [1-14]. However, these debates have been based on the effects of simple feedback only about response accuracy in detection or discrimination tasks of low-level visual features such as orientation [15-22]. Here, we examined whether the content of response feedback plays a critical role for the acquisition and long-term retention of VPL of complex natural images. We trained three groups of human subjects (n = 72 in total) to better detect "grouped microcalcifications" or "architectural distortion" lesions (referred to as calcification and distortion in the following) in mammograms either with no trial-by-trial feedback, partial trial-by-trial feedback (response correctness only), or detailed trial-by-trial feedback (response correctness and target location). Distortion lesions consist of more complex visual structures than calcification lesions [23-26]. We found that partial feedback is necessary for VPL of calcifications, whereas detailed feedback is required for VPL of distortions. Furthermore, detailed feedback during training is necessary for VPL of distortion and calcification lesions to be retained for 6 months. These results show that although supervised learning is heavily involved in VPL of complex natural images, the extent of supervision for VPL varies across different types of complex natural images. Such differential requirements for VPL to improve the detectability of lesions in mammograms are potentially informative for the professional training of radiologists.
Affiliations
- Sebastian M Frank, Andrea Qi, Daniela Ravasio, Yuka Sasaki, Takeo Watanabe: Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, 190 Thayer Street, Providence, RI 02912, USA
- Eric L Rosen: Department of Radiology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA; Department of Radiology, University of Colorado Denver, 12401 East 17th Avenue, Aurora, CO 80045, USA
4. Grimm LJ, Zhang J, Lo JY, Johnson KS, Ghate SV, Walsh R, Mazurowski MA. Radiology Trainee Performance in Digital Breast Tomosynthesis: Relationship Between Difficulty and Error-Making Patterns. J Am Coll Radiol 2016;13:198-202. PMID: 26577878. doi: 10.1016/j.jacr.2015.09.025
Abstract
PURPOSE The aim of this study was to better understand the relationship between digital breast tomosynthesis (DBT) difficulty and radiology trainee performance. METHODS Twenty-seven radiology residents and fellows and three expert breast imagers reviewed 60 DBT studies consisting of unilateral craniocaudal and mediolateral oblique views. Trainees had no prior DBT experience. All readers provided difficulty ratings and final BI-RADS scores. Expert breast imager consensus interpretations were used as the ground truth. Trainee sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were calculated for low- and high-difficulty subsets of cases, as assessed both by each trainee (self-assessed difficulty) and by expert consensus (expert-assessed difficulty). RESULTS For self-assessed difficulty, the trainee AUC was 0.696 for high-difficulty and 0.704 for low-difficulty cases (P = .753). Trainee sensitivity was 0.776 for high-difficulty and 0.538 for low-difficulty cases (P < .001). Trainee specificity was 0.558 for high-difficulty and 0.810 for low-difficulty cases (P < .001). For expert-assessed difficulty, the trainee AUC was 0.645 for high-difficulty and 0.816 for low-difficulty cases (P < .001). Trainee sensitivity was 0.612 for high-difficulty and 0.784 for low-difficulty cases (P < .001). Trainee specificity was 0.654 for high-difficulty and 0.765 for low-difficulty cases (P = .021). CONCLUSIONS Cases deemed difficult by experts were associated with decreases in trainee AUC, sensitivity, and specificity. In contrast, for self-assessed difficult cases, the trainee AUC was unchanged because increased sensitivity was offset by decreased specificity. Educators should incorporate these findings when developing educational materials to teach interpretation of DBT.
Affiliations
- Lars J Grimm, Karen S Johnson, Sujata V Ghate, Ruth Walsh: Department of Radiology, Duke University Medical Center, Durham, North Carolina
- Jing Zhang: Department of Radiology, Duke University Medical Center, Durham, North Carolina; Department of Computer Science, Lamar University, Beaumont, Texas
- Joseph Y Lo: Department of Radiology, Duke University Medical Center; Departments of Electrical and Computer Engineering and Biomedical Engineering, Duke University, Durham, North Carolina
- Maciej A Mazurowski: Department of Radiology, Duke University Medical Center; Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina
5. Does Breast Imaging Experience During Residency Translate Into Improved Initial Performance in Digital Breast Tomosynthesis? J Am Coll Radiol 2015;12:728-32. doi: 10.1016/j.jacr.2015.02.025
6. Zhang J, Lo JY, Kuzmiak CM, Ghate SV, Yoon SC, Mazurowski MA. Using computer-extracted image features for modeling of error-making patterns in detection of mammographic masses among radiology residents. Med Phys 2014;41:091907. PMID: 25186394. doi: 10.1118/1.4892173
Abstract
PURPOSE Mammography is the most widely accepted and utilized screening modality for early breast cancer detection. Providing high-quality mammography education to radiology trainees is essential, since excellent interpretation skills are needed to ensure the highest benefit of screening mammography for patients. The authors have previously proposed a computer-aided education system based on trainee models that relate human-assessed image characteristics to trainee error. In this study, the authors propose to build trainee models that use features automatically extracted from images by computer vision algorithms to predict the likelihood of the trainee missing each mass. This computer vision-based approach to trainee modeling will allow large databases of mammograms to be searched automatically to identify challenging cases for each trainee. METHODS The authors' algorithm for predicting the likelihood of missing a mass consists of three steps. First, a mammogram is segmented into air, pectoral muscle, fatty tissue, dense tissue, and mass using automated segmentation algorithms. Second, 43 features are extracted using computer vision algorithms for each abnormality identified by experts. Third, error-making models (classifiers) are applied to predict the likelihood of the trainee missing the abnormality based on the extracted features. The models are developed individually for each trainee using his or her previous reading data. The authors evaluated the predictive performance of the proposed algorithm using data from a reader study in which 10 subjects (7 residents and 3 novices) and 3 experts read 100 mammographic cases. Receiver operating characteristic (ROC) methodology was applied for the evaluation. RESULTS The average area under the ROC curve (AUC) of the error-making models for the task of predicting which masses will be detected and which will be missed was 0.607 (95% CI, 0.564-0.650). This value was statistically significantly different from 0.5 (p < 0.0001). For the 7 residents only, the AUC of the models was 0.590 (95% CI, 0.537-0.642), also significantly higher than 0.5 (p = 0.0009). In general, therefore, the models predicted better than chance which masses would be detected and which would be missed. CONCLUSIONS The authors proposed an algorithm that was able to predict which masses will be detected and which will be missed by each individual trainee. This confirms the existence of error-making patterns in the detection of masses among radiology trainees. Furthermore, the proposed methodology will allow difficult cases to be selected for each trainee in an automatic and efficient manner.
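The per-trainee error-making models described in this abstract amount to a standard supervised-learning pipeline. The sketch below is an illustrative stand-in only: the feature matrix is a random placeholder for the paper's 43 computer-vision features, the miss/detect labels are simulated, and the choice of a random forest classifier is an assumption rather than the authors' exact method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Toy stand-in for the 43 computer-vision features per annotated mass.
n_masses, n_features = 200, 43
X = rng.normal(size=(n_masses, n_features))

# Simulated labels: 1 = trainee missed the mass, 0 = trainee detected it.
# The first feature carries some signal so the model has something to learn.
y = (X[:, 0] + rng.normal(scale=1.0, size=n_masses) > 0).astype(int)

# One model per trainee, fit on that trainee's previous reading data;
# cross-validation mimics scoring masses the model has not seen before.
model = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_predict(model, X, y, cv=5, method="predict_proba")[:, 1]

auc = roc_auc_score(y, scores)  # chance level is 0.5
print(f"AUC = {auc:.3f}")
```

A cross-validated AUC above 0.5, as in this toy run, corresponds to the paper's finding (average AUC 0.607): the models predict misses better than chance.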
Affiliations
- Jing Zhang, Sujata V Ghate, Sora C Yoon: Department of Radiology, Duke University School of Medicine, Durham, North Carolina 27705
- Joseph Y Lo: Department of Radiology, Duke University School of Medicine; Duke Cancer Institute; Departments of Biomedical Engineering and Electrical & Computer Engineering, Duke University; Medical Physics Graduate Program, Duke University, Durham, North Carolina
- Cherie M Kuzmiak: Department of Radiology, University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, North Carolina 27599
- Maciej A Mazurowski: Department of Radiology, Duke University School of Medicine; Duke Cancer Institute; Medical Physics Graduate Program, Duke University, Durham, North Carolina
7. Zhang J, Silber JI, Mazurowski MA. Modeling false positive error making patterns in radiology trainees for improved mammography education. J Biomed Inform 2015;54:50-7. PMID: 25640462. doi: 10.1016/j.jbi.2015.01.007
Abstract
INTRODUCTION While mammography contributes notably to earlier detection of breast cancer, it has limitations, including a large number of false positive exams. Improved radiology education could help alleviate this issue. Toward this goal, in this paper we propose an algorithm for modeling false positive error making among radiology trainees. Identifying troublesome locations for the trainees could focus their training and in turn improve their performance. METHODS The algorithm proposed in this paper predicts locations that are likely to result in a false positive error for each trainee, based on the previous annotations made by that trainee. The algorithm consists of three steps. First, suspicious false positive locations are identified in mammograms using a Difference-of-Gaussian filter, and the suspicious regions are segmented by computer vision-based segmentation algorithms. Second, 133 features are extracted for each suspicious region to describe its distinctive characteristics. Third, a random forest classifier is applied to predict the likelihood of the trainee making a false positive error using the extracted features. The random forest classifier is trained using previous annotations made by the trainee. We evaluated the algorithm using data from a reader study in which 3 experts and 10 trainees interpreted 100 mammographic cases. RESULTS The algorithm identified locations where a trainee would commit a false positive error with higher accuracy than an algorithm that selects such locations randomly. Specifically, our algorithm found false positive locations with 40% accuracy when only 1 location was selected for all cases for each trainee, and with 12% accuracy when 10 locations were selected. The accuracies for randomly identified locations were 0% in both scenarios. CONCLUSIONS In this first study on the topic, we built computer models that could find locations for which a trainee will make a false positive error in images the trainee had not previously seen. Presenting trainees with such locations, rather than randomly selected ones, may improve their educational outcomes.
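As a rough illustration of the first step described in this abstract (flagging suspicious locations with a Difference-of-Gaussian filter), here is a minimal sketch on a synthetic image. The image, the planted blob, the sigma pair, and the percentile threshold are all assumptions for demonstration, not the paper's parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
image = rng.normal(size=(128, 128))

# Plant one bright blob-like structure that the filter should flag.
yy, xx = np.mgrid[:128, :128]
image += 4.0 * np.exp(-((yy - 64) ** 2 + (xx - 64) ** 2) / (2 * 3.0 ** 2))

# Difference of Gaussian: a band-pass response that peaks at blob-like
# structures whose scale lies between the two sigmas.
dog = gaussian_filter(image, sigma=2.0) - gaussian_filter(image, sigma=4.0)

# Keep the strongest responses as candidate suspicious locations.
threshold = np.percentile(dog, 99.9)
candidates = np.argwhere(dog > threshold)  # (row, col) coordinates
print(len(candidates), "candidate locations")
```

In the paper's pipeline, each such candidate region would then be segmented and described by 133 features before the per-trainee random forest scores it.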
Affiliations
- Jing Zhang: Department of Radiology, Duke University School of Medicine, Durham, NC, United States; Computer Science Department, Lamar University, Beaumont, TX, United States
- James I Silber: Department of Biomedical Engineering, Duke University Pratt School of Engineering, Durham, NC, United States
- Maciej A Mazurowski: Department of Radiology, Duke University School of Medicine; Duke Cancer Institute; Duke Medical Physics Program, Durham, NC, United States