1. Al Moteri M, Mahesh TR, Thakur A, Vinoth Kumar V, Khan SB, Alojail M. Enhancing accessibility for improved diagnosis with modified EfficientNetV2-S and cyclic learning rate strategy in women with disabilities and breast cancer. Front Med (Lausanne) 2024; 11:1373244. [PMID: 38515985; PMCID: PMC10954891; DOI: 10.3389/fmed.2024.1373244]
Abstract
Breast cancer, a prevalent cancer among women worldwide, necessitates precise and prompt detection for successful treatment. While conventional histopathological examination is the benchmark, it is a lengthy process and prone to variation among observers. Automating breast cancer diagnosis with machine learning is a viable alternative that aims to improve both precision and speed. Previous studies have primarily applied various machine learning and deep learning models to the classification of breast cancer images, leveraging convolutional neural networks (CNNs) and other advanced algorithms to differentiate benign from malignant tumors in histopathological images. Despite their potential, current models face obstacles related to generalizability, computational performance, and handling imbalanced datasets. Many also lack the transparency and interpretability that are vital for medical diagnostic purposes. To address these limitations, our study introduces an advanced machine learning model based on EfficientNetV2. This model incorporates state-of-the-art techniques in image processing and neural network architecture, aiming to improve accuracy, efficiency, and robustness in classification. We fine-tuned the EfficientNetV2 model for the specific task of breast cancer image classification, with rigorous training and validation on the BreakHis dataset, which includes diverse histopathological images. Advanced data preprocessing, augmentation techniques, and a cyclical learning rate strategy were implemented to enhance model performance. The introduced model exhibited remarkable efficacy, attaining an accuracy of 99.68%, balanced precision and recall as indicated by a high F1 score, and a high Cohen's kappa value.
These indicators highlight the model's proficiency in correctly categorizing histopathological images, surpassing current techniques in reliability and effectiveness. The research emphasizes improved accessibility, catering to individuals with disabilities and the elderly. By enhancing visual representation and interpretability, the proposed approach aims to make strides in inclusive medical image interpretation, ensuring equitable access to diagnostic information.
Affiliation(s)
- Moteeb Al Moteri: Department of Management Information Systems, College of Business Administration, King Saud University, Riyadh, Saudi Arabia
- T. R. Mahesh: Department of Computer Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), Bangalore, India
- Arastu Thakur: Department of Computer Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), Bangalore, India
- V. Vinoth Kumar: School of Computer Science Engineering and Information Systems, Vellore Institute of Technology, Vellore, India
- Surbhi Bhatia Khan: Department of Data Science, School of Science Engineering and Environment, University of Salford, Manchester, United Kingdom; Department of Electrical and Computer Engineering, Lebanese American University, Byblos, Lebanon
- Mohammed Alojail: Department of Management Information Systems, College of Business Administration, King Saud University, Riyadh, Saudi Arabia
2. Brunyé TT, Booth K, Hendel D, Kerr KF, Shucard H, Weaver DL, Elmore JG. Machine learning classification of diagnostic accuracy in pathologists interpreting breast biopsies. J Am Med Inform Assoc 2024; 31:552-562. [PMID: 38031453; PMCID: PMC10873842; DOI: 10.1093/jamia/ocad232]
Abstract
OBJECTIVE: This study explores the feasibility of using machine learning to predict accurate versus inaccurate diagnoses made by pathologists based on their spatiotemporal viewing behavior when evaluating digital breast biopsy images.
MATERIALS AND METHODS: The study gathered data from 140 pathologists of varying experience levels who each reviewed a set of 14 digital whole slide images of breast biopsy tissue. Pathologists' viewing behavior, including zooming and panning actions, was recorded during image evaluation. A total of 30 features were extracted from the viewing behavior data, and 4 machine learning algorithms were used to build classifiers for predicting diagnostic accuracy.
RESULTS: The Random Forest classifier demonstrated the best overall performance, achieving a test accuracy of 0.81 and an area under the receiver operating characteristic curve of 0.86. Features related to attention distribution and focus on critical regions of interest were important predictors of diagnostic accuracy. Further including case-level and pathologist-level information incrementally improved classifier performance.
DISCUSSION: Results suggest that pathologists' viewing behavior during digital image evaluation can be leveraged to predict diagnostic accuracy, affording automated feedback and decision support systems based on viewing behavior to aid in training and, ultimately, clinical practice. They also carry implications for basic research examining the interplay between perception, thought, and action in diagnostic decision-making.
CONCLUSION: The classifiers developed herein have potential applications in training and clinical settings to provide timely feedback and support to pathologists during diagnostic decision-making. Further research could explore the generalizability of these findings to other medical domains and varied levels of expertise.
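The pipeline this abstract outlines can be sketched with scikit-learn as below. The feature matrix is random stand-in data (the study's 30 extracted viewing-behavior features are not reproduced here), so the resulting scores are meaningless placeholders; only the workflow shape matches the description.

```python
# Sketch: Random Forest prediction of diagnostic accuracy from viewing-behavior
# features. Data are random placeholders, not the study's features or labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_reviews = 1960                               # 140 pathologists x 14 slides
X = rng.normal(size=(n_reviews, 30))           # 30 viewing-behavior features
y = rng.integers(0, 2, size=n_reviews)         # 1 = accurate diagnosis

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])  # test-set AUC

# Rank features by importance, as in the study's predictor analysis.
top = np.argsort(clf.feature_importances_)[::-1][:5]
```

With real labels, `feature_importances_` is what would surface attention-distribution features as the strongest predictors.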
Affiliation(s)
- Tad T Brunyé: Center for Applied Brain and Cognitive Sciences, Tufts University, Medford, MA 02155, United States; Department of Psychology, Tufts University, Medford, MA 02155, United States
- Kelsey Booth: Center for Applied Brain and Cognitive Sciences, Tufts University, Medford, MA 02155, United States
- Dalit Hendel: Center for Applied Brain and Cognitive Sciences, Tufts University, Medford, MA 02155, United States
- Kathleen F Kerr: Department of Biostatistics, University of Washington, Seattle, WA 98105, United States
- Hannah Shucard: Department of Biostatistics, University of Washington, Seattle, WA 98105, United States
- Donald L Weaver: Department of Pathology and Laboratory Medicine, Larner College of Medicine, University of Vermont and Vermont Cancer Center, Burlington, VT 05405, United States
- Joann G Elmore: Department of Medicine, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA 90095, United States
3. Brunyé TT, Balla A, Drew T, Elmore JG, Kerr KF, Shucard H, Weaver DL. From Image to Diagnosis: Characterizing Sources of Error in Histopathologic Interpretation. Mod Pathol 2023; 36:100162. [PMID: 36948400; DOI: 10.1016/j.modpat.2023.100162]
Abstract
An accurate histopathologic diagnosis on surgical biopsy material is necessary for the clinical management of patients and has important implications for research, clinical trial design/enrollment, and public health education. This study used a mixed methods approach to isolate sources of diagnostic error while residents and attending pathologists interpreted digitized breast biopsy slides. Ninety participants including pathology residents and attendings at major United States medical centers reviewed a set of 14 digitized whole slide images of breast biopsies. Each case had a consensus-defined diagnosis and critical region of interest (cROI) representing the most significant pathology on the slide. Participants were asked to view unmarked digitized slides, draw their own participant region of interest (pROI), describe its features, and render a diagnosis. Participants' review behavior was tracked using case viewer software and an eye-tracking device. Diagnostic accuracy was calculated in comparison to the consensus diagnosis. We measured the frequency of errors emerging during four interpretive phases: 1) detecting the cROI, 2) recognizing its relevance, 3) using the correct terminology to describe findings in the pROI, and 4) making a diagnostic decision. According to eye tracking data, both trainees and attending pathologists were very likely (about 94% of the time) to find the cROI when inspecting a slide. However, trainees were less likely to consider the cROI relevant to their diagnosis. Pathology trainees were more likely (41% of cases) to use incorrect terminology to describe pROI features than attending pathologists (21% of cases). Failure to accurately describe features was the only factor strongly associated with an incorrect diagnosis. 
Identifying where errors emerge in the interpretive and/or descriptive process and working on building organ-specific feature recognition and verbal fluency in describing those features are critical steps for achieving competency in diagnostic decision making.
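One concrete way to operationalize phase 1 (detecting the cROI) is to score how well a participant's drawn pROI overlaps the consensus cROI, for example with intersection-over-union. The rectangles and the 0.5 threshold below are invented for illustration; the study itself assessed detection via eye tracking, not region overlap.

```python
# Sketch: overlap between a participant ROI (pROI) and the consensus critical
# ROI (cROI), using intersection-over-union on axis-aligned rectangles.
def iou(a, b):
    """a, b: rectangles as (x0, y0, x1, y1) with x0 < x1, y0 < y1."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)

    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])

    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

cROI = (100, 100, 300, 250)   # consensus critical region (invented)
pROI = (150, 120, 320, 260)   # participant-drawn region (invented)
detected = iou(pROI, cROI) >= 0.5  # illustrative detection threshold
```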
Affiliation(s)
- Tad T Brunyé: Center for Applied Brain and Cognitive Sciences, Tufts University, 177 College Ave., Suite 090, Medford, MA 02155; Department of Psychology, Tufts University, 490 Boston Ave., Medford, MA 02155
- Agnes Balla: Department of Pathology, University of Vermont and Vermont Cancer Center, 89 Beaumont Ave., Burlington, VT 05405
- Trafton Drew: Department of Psychology, University of Utah, 380 S 1530 E Beh S 502, Salt Lake City, UT 84112
- Joann G Elmore: Department of Medicine, David Geffen School of Medicine, University of California, Los Angeles, 885 Tiverton Drive, Los Angeles, CA 90095
- Kathleen F Kerr: Department of Biostatistics, University of Washington, 1705 NE Pacific Street, Seattle, WA 98195
- Hannah Shucard: Department of Biostatistics, University of Washington, 1705 NE Pacific Street, Seattle, WA 98195
- Donald L Weaver: Department of Pathology, University of Vermont and Vermont Cancer Center, 89 Beaumont Ave., Burlington, VT 05405
4. Nelson KC, Brown AE, Herrmann A, Dorsey C, Simon JM, Wilson JM, Savory SA, Haydu LE. Validation of a Novel Cutaneous Neoplasm Diagnostic Self-Efficacy Instrument (CNDSEI) for Evaluating User-Perceived Confidence With Dermoscopy. Dermatol Pract Concept 2020; 10:e2020088. [PMID: 33150029; DOI: 10.5826/dpc.1004a88]
Abstract
Background: Accurate medical image interpretation is an essential proficiency for multiple medical specialties, including dermatologists and primary care providers. A dermatoscope, a ×10-×20 magnifying lens paired with a light source, enables enhanced visualization of skin cancer structures beyond standard visual inspection. Skilled interpretation of dermoscopic images improves diagnostic accuracy for skin cancer.
Objective: To design and validate the Cutaneous Neoplasm Diagnostic Self-Efficacy Instrument (CNDSEI), a new tool to assess dermatology residents' confidence in dermoscopic diagnosis of skin tumors.
Methods: In the 2018-2019 academic year, the authors administered the CNDSEI and the Long Dermoscopy Assessment (LDA), which measures dermoscopic image interpretation accuracy, to residents in 9 dermatology residency programs prior to any dermoscopy educational intervention. The authors conducted CNDSEI item analysis with inspection of response distribution histograms, assessed internal reliability using Cronbach's coefficient alpha (α), and assessed construct validity by comparing baseline CNDSEI and LDA results for corresponding lesions with one-way analysis of variance (ANOVA).
Results: At baseline, residents demonstrated significantly higher CNDSEI scores for lesions they diagnosed correctly on the LDA than for lesions they diagnosed incorrectly (P = 0.001). The internal consistency reliability of CNDSEI responses for the majority (13/15) of lesion types was excellent (α ≥ 0.9) or good (0.8 ≤ α < 0.9).
Conclusions: The CNDSEI pilot established that the tool reliably measures user confidence in dermoscopic image interpretation and that self-efficacy correlates with diagnostic accuracy. Precise alignment of medical image diagnostic performance and self-efficacy instrument content offers an opportunity for construct validation of novel medical image interpretation self-efficacy instruments.
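Cronbach's coefficient alpha, the internal-consistency statistic this validation reports, can be computed in a few lines. The function below is a standard formulation; the example data are invented.

```python
# Sketch: Cronbach's coefficient alpha for a respondents x items score matrix.
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = respondents, columns = items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                           # number of items
    item_vars = scores.var(axis=0, ddof=1)        # per-item variance
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```

As a sanity check, perfectly correlated items yield α = 1, and α ≥ 0.9 corresponds to the "excellent" band the abstract cites.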
Affiliation(s)
- Kelly C Nelson: Department of Dermatology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Ashley E Brown: McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, TX, USA
- Amanda Herrmann: Department of Pathology, The University of Texas Health Science Center at Houston, Houston, TX, USA
- Chloe Dorsey: Department of Surgical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Julie M Simon: Department of Surgical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Janice M Wilson: Department of Dermatology, The University of Texas Medical Branch, Galveston, TX, USA
- Stephanie A Savory: Department of Dermatology, The University of Texas Southwestern Medical Center, Dallas, TX, USA
- Lauren E Haydu: Department of Surgical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
5. Brunyé TT, Drew T, Kerr KF, Shucard H, Weaver DL, Elmore JG. Eye tracking reveals expertise-related differences in the time-course of medical image inspection and diagnosis. J Med Imaging (Bellingham) 2020; 7:051203. [PMID: 37476351; PMCID: PMC10355124; DOI: 10.1117/1.jmi.7.5.051203]
Abstract
Purpose: Physicians' eye movements provide insights into relative reliance on different visual features during medical image review and diagnosis. Current theories posit that increasing expertise is associated with relatively holistic viewing strategies activated early in the image viewing experience. This study examined whether early image viewing behavior is associated with experience level and diagnostic accuracy when pathologists and trainees interpreted breast biopsies.
Approach: Ninety-two residents in training and experienced pathologists at nine major U.S. medical centers interpreted digitized whole slide images of breast biopsy cases while eye movements were monitored. The breadth of visual attention and the frequency and duration of eye fixations on critical image regions were recorded. We dissociated eye movements occurring early during initial viewing (prior to first zoom) versus later viewing, examining seven viewing behaviors of interest.
Results: Residents and faculty pathologists were similarly likely to detect critical image regions during early image viewing, but faculty members showed more and longer-duration eye fixations in these regions. Among pathology residents, year of residency predicted increasingly higher odds of fixating on critical image regions during early viewing. No viewing behavior was significantly associated with diagnostic accuracy.
Conclusions: Results suggest early detection and recognition of critical image features by experienced pathologists, with relatively directed and efficient search behavior. The results also suggest that the immediate distribution of eye movements over medical images warrants further exploration as a potential metric for the objective monitoring and evaluation of progress during medical training.
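The early-versus-late split the authors use (fixations before versus after the first zoom) reduces to simple event filtering over a fixation log. The log format, coordinates, and timings below are invented for illustration; only the splitting logic mirrors the described analysis.

```python
# Sketch: count and time fixations landing in a critical region of interest
# (cROI) during "early" viewing, i.e. before the first zoom event.
first_zoom_t = 4.2  # seconds; onset of the first zoom action (invented)

fixations = [  # (onset_s, x, y, duration_s) -- invented log entries
    (0.5, 120, 140, 0.30),
    (1.1, 410, 390, 0.25),
    (3.0, 415, 400, 0.40),
    (6.5, 430, 405, 0.35),
]
cROI = (400, 380, 460, 420)  # critical region as (x0, y0, x1, y1)

def in_roi(x, y, r):
    return r[0] <= x <= r[2] and r[1] <= y <= r[3]

early = [f for f in fixations if f[0] < first_zoom_t]       # pre-zoom only
early_croi = [f for f in early if in_roi(f[1], f[2], cROI)]
n_early_croi = len(early_croi)                    # fixation count in cROI
dwell_early_croi = sum(f[3] for f in early_croi)  # total dwell time (s)
```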
Affiliation(s)
- Tad T. Brunyé: Center for Applied Brain and Cognitive Sciences, Medford, Massachusetts, United States; Department of Psychology, Tufts University, Medford, Massachusetts, United States
- Trafton Drew: Department of Psychology, University of Utah, Salt Lake City, Utah, United States
- Kathleen F. Kerr: Department of Biostatistics, University of Washington, Seattle, Washington, United States
- Hannah Shucard: Department of Biostatistics, University of Washington, Seattle, Washington, United States
- Donald L. Weaver: Department of Pathology, Larner College of Medicine and UVM Cancer Center, University of Vermont, Burlington, Vermont, United States
- Joann G. Elmore: Department of Medicine, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California, United States
6. Kok EM, van Geel K, van Merriënboer JJG, Robben SGF. What We Do and Do Not Know about Teaching Medical Image Interpretation. Front Psychol 2017; 8:309. [PMID: 28316582; PMCID: PMC5334326; DOI: 10.3389/fpsyg.2017.00309]
Abstract
Educators in medical image interpretation have difficulty finding scientific evidence as to how they should design their instruction. We review and comment on 81 papers that investigated instructional design in medical image interpretation. We distinguish between studies that evaluated complete offline courses and curricula, studies that evaluated e-learning modules, and studies that evaluated specific educational interventions. Twenty-three percent of all studies evaluated the implementation of complete courses or curricula, and 44% of the studies evaluated the implementation of e-learning modules. We argue that these studies have encouraging results but provide little information for educators: too many differences exist between conditions to unambiguously attribute the learning effects to specific instructional techniques. Moreover, concepts are not uniformly defined and methodological weaknesses further limit the usefulness of evidence provided by these studies. Thirty-two percent of the studies evaluated a specific interventional technique. We discuss three theoretical frameworks that informed these studies: diagnostic reasoning, cognitive schemas and study strategies. Research on diagnostic reasoning suggests teaching students to start with non-analytic reasoning and subsequently applying analytic reasoning, but little is known on how to train non-analytic reasoning. Research on cognitive schemas investigated activities that help the development of appropriate cognitive schemas. Finally, research on study strategies supports the effectiveness of practice testing, but more study strategies could be applicable to learning medical image interpretation. Our commentary highlights the value of evaluating specific instructional techniques, but further evidence is required to optimally inform educators in medical image interpretation.
Affiliation(s)
- Ellen M Kok: Department of Educational Development and Research, School of Health Professions Education, Maastricht University, Maastricht, Netherlands
- Koos van Geel: Department of Educational Development and Research, School of Health Professions Education, Maastricht University, Maastricht, Netherlands
- Jeroen J G van Merriënboer: Department of Educational Development and Research, School of Health Professions Education, Maastricht University, Maastricht, Netherlands
- Simon G F Robben: Department of Radiology, Maastricht University Medical Centre, Maastricht, Netherlands