1. Murphy PM. Visual Image Annotation for Bowel Obstruction: Repeatability and Agreement with Manual Annotation and Neural Networks. J Digit Imaging 2023; 36:2179-2193. [PMID: 37278918] [PMCID: PMC10502000] [DOI: 10.1007/s10278-023-00825-w]
Abstract
Bowel obstruction is a common cause of acute abdominal pain. The development of algorithms for automated detection and characterization of bowel obstruction on CT has been limited by the effort required for manual annotation. Visual image annotation with an eye tracking device may mitigate that limitation. The purpose of this study is to assess the agreement between visual and manual annotations for bowel segmentation and diameter measurement, and to assess agreement with convolutional neural networks (CNNs) trained using that data. Sixty CT scans of 50 patients with bowel obstruction from March to June 2022 were retrospectively included and partitioned into training and test data sets. An eye tracking device was used to record 3-dimensional coordinates within the scans while a radiologist cast their gaze at the centerline of the bowel and adjusted the size of a superimposed ROI to approximate the diameter of the bowel. For each scan, 59.4 ± 15.1 segments, 847.9 ± 228.1 gaze locations, and 5.8 ± 1.2 m of bowel were recorded. 2D and 3D CNNs were trained on these data to predict bowel segmentation and diameter maps from the CT scans. For comparisons between two repetitions of visual annotation, CNN predictions, and manual annotations, Dice scores for bowel segmentation ranged from 0.69 ± 0.17 to 0.81 ± 0.04, and intraclass correlations [95% CI] for diameter measurement ranged from 0.672 [0.490-0.782] to 0.940 [0.933-0.947]. Thus, visual image annotation is a promising technique for training CNNs to perform bowel segmentation and diameter measurement in CT scans of patients with bowel obstruction.
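The segmentation agreement above is reported as a Dice overlap between pairs of bowel masks (visual vs. manual annotation, or CNN prediction vs. annotation). A minimal sketch of that comparison, assuming two boolean NumPy masks of the same shape; the function and array names are illustrative, not the paper's code:

```python
import numpy as np

def dice_score(mask_a, mask_b):
    """Dice coefficient between two boolean segmentation masks of equal shape."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    # Two empty masks are treated as perfect agreement.
    return 2.0 * intersection / total if total > 0 else 1.0
```

A Dice of 1.0 indicates identical masks, so the reported range of 0.69-0.81 corresponds to moderate-to-good overlap between annotation methods.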
Affiliation(s)
- Paul M Murphy
- University of California-San Diego, 9500 Gilman Dr, 92093, La Jolla, CA, USA.
- UCSD Radiology, 200 W Arbor Dr, 92103, San Diego, CA, USA.
2. Image Annotation by Eye Tracking: Accuracy and Precision of Centerlines of Obstructed Small-Bowel Segments Placed Using Eye Trackers. J Digit Imaging 2020; 32:855-864. [PMID: 31144146] [DOI: 10.1007/s10278-018-0169-5]
Abstract
Small-bowel obstruction (SBO) is a common and important disease, for which machine learning tools have yet to be developed. Image annotation is a critical first step for development of such tools. This study assesses whether image annotation by eye tracking is sufficiently accurate and precise to serve as a first step in the development of machine learning tools for detection of SBO on CT. Seven subjects diagnosed with SBO by CT were included in the study. For each subject, an obstructed segment of bowel was chosen. Three observers annotated the centerline of the segment by manual fiducial placement and by visual fiducial placement using a Tobii 4c eye tracker. Each annotation was repeated three times. The distance between centerlines was calculated after alignment using dynamic time warping (DTW) and statistically compared to clinical thresholds for diagnosis of SBO. Intra-observer DTW distance between manual and visual centerlines was calculated as a measure of accuracy. These distances were 1.1 ± 0.2, 1.3 ± 0.4, and 1.8 ± 0.2 cm for the three observers and were less than 1.5 cm for two of three observers (P < 0.01). Intra- and inter-observer DTW distances between centerlines placed with each method were calculated as measures of precision. These distances were 0.6 ± 0.1 and 0.8 ± 0.2 cm for manual centerlines, 1.1 ± 0.4 and 1.9 ± 0.6 cm for visual centerlines, and were less than 3.0 cm in all cases (P < 0.01). Results suggest that eye tracking-based annotation is sufficiently accurate and precise for small-bowel centerline annotation for use in machine learning-based applications.
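The centerline comparison above aligns two fiducial sequences with dynamic time warping (DTW) before measuring their separation. A sketch of that idea, assuming each centerline is an ordered (n, 3) NumPy array of coordinates in cm and reporting the mean point-to-point separation along the warping path as one plausible summary (the function name and this particular summary statistic are assumptions, not the study's actual implementation):

```python
import numpy as np

def dtw_mean_distance(path_a, path_b):
    """Mean matched distance between two 3D centerlines after DTW alignment."""
    na, nb = len(path_a), len(path_b)
    # Pairwise Euclidean distances between all points of the two centerlines.
    cost = np.linalg.norm(path_a[:, None, :] - path_b[None, :, :], axis=-1)
    # Classic DTW accumulation with steps (i-1, j), (i, j-1), (i-1, j-1).
    acc = np.full((na + 1, nb + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, na + 1):
        for j in range(1, nb + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    # Backtrack the optimal warping path and average the matched distances.
    i, j, matched = na, nb, []
    while i > 1 or j > 1:
        matched.append(cost[i - 1, j - 1])
        step = np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    matched.append(cost[0, 0])
    return float(np.mean(matched))
```

With centerlines expressed in cm, the returned value can be compared directly against thresholds such as the 1.5 cm and 3.0 cm figures cited in the abstract.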
3. John KK, Jensen JD, King AJ, Pokharel M, Grossman D. Emerging applications of eye-tracking technology in dermatology. J Dermatol Sci 2018; 91:S0923-1811(18)30156-7. [PMID: 29655589] [PMCID: PMC6173990] [DOI: 10.1016/j.jdermsci.2018.04.002]
Abstract
Eye-tracking technology has been used within a multitude of disciplines to provide data linking eye movements to the visual processing of various stimuli (e.g., x-rays, situational positioning, printed information, and warnings). Despite the benefits eye-tracking provides in allowing visual attention to be identified and quantified, the discipline of dermatology has yet to see broad application of the technology. Notwithstanding dermatologists' heavy reliance upon visual patterns and cues to discriminate between benign and atypical nevi, literature that applies eye-tracking to the study of dermatology is sparse, and literature specific to patient-initiated behaviors, such as skin self-examination (SSE), is largely non-existent. The current article provides a review of eye-tracking research in various medical fields, culminating in a discussion of current applications and advantages of eye-tracking for dermatology research.
Affiliation(s)
- Kevin K John
- School of Communication, Brigham Young University, United States.
- Jakob D Jensen
- Department of Communication, University of Utah, United States; Cancer Control & Population Science Program, Huntsman Cancer Institute, United States
- Andy J King
- Department of Public Relations, Texas Tech University, United States
- Douglas Grossman
- Departments of Dermatology and Oncological Sciences, University of Utah, United States; Huntsman Cancer Institute, University of Utah, United States
4. Ebner L, Tall M, Choudhury KR, Ly DL, Roos JE, Napel S, Rubin GD. Variations in the functional visual field for detection of lung nodules on chest computed tomography: Impact of nodule size, distance, and local lung complexity. Med Phys 2017; 44:3483-3490. [PMID: 28419484] [DOI: 10.1002/mp.12277]
Abstract
PURPOSE: To explore the characteristics that impact lung nodule detection by peripheral vision when searching for lung nodules on chest CT scans.
METHODS: This study was approved by the local IRB and is HIPAA compliant. A simulated primary (1°) target mass (2 × 2 × 5 cm) was embedded into 5 cm thick subvolumes (SV) extracted from three unenhanced lung MDCT scans (64 row, 1.25 mm thickness, 0.7 mm increment). One of 30 solid secondary nodules, with diameters of either 3-4 mm or 5-8 mm, was embedded into 192 of 207 SVs. The secondary nodule was placed at a random depth within each SV, at a transverse distance of 2.5, 5, 7.5, or 10 mm, and along one of eight rays cast every 45° from the center of the 1° mass. Video recordings of transverse paging in the craniocaudal direction were created for each SV (frame rate three sections/sec). Six radiologists observed each cine-loop once while gaze-tracking hardware assured that gaze was centered on the 1° mass. Each radiologist assigned a confidence rating (0-5) to the detection of a secondary nodule and indicated its location. Detection sensitivity was analyzed relative to secondary nodule size, transverse distance, radial orientation, and lung complexity. Lung complexity was characterized by the number of particles (connected pixels) above a -500 HU threshold and the sum of the area of all such particles within regions of interest around the 1° mass and secondary nodule.
RESULTS: Using a proportional odds logistic regression model and eliminating redundant predictors, models fit individually to each reader yielded the following decreasing order of association, based on greatest reduction in the Akaike Information Criterion: secondary nodule diameter (6/6 readers, P < 0.001), distance from the central mass (6/6 readers, P < 0.001), lung complexity particle count (5/6 readers, P = 0.05), and lung complexity particle area (3/6 readers, P = 0.03). Substantial inter-reader differences in sensitivity to decreasing nodule diameter, distance, and complexity characteristics were observed.
CONCLUSIONS: Of the investigated parameters, secondary nodule size, distance from the gaze center, and lung complexity (particle number and area) significantly impact nodule detection with peripheral vision.
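The "lung complexity" covariates above (particle count and summed particle area above a -500 HU threshold) amount to a connected-component pass over a region of interest. A sketch under that reading, assuming a 2D NumPy array of HU values extracted around the mass or nodule; the ROI extraction and pixel size are hypothetical inputs, not details from the paper:

```python
import numpy as np
from scipy import ndimage

def lung_complexity(roi_hu, threshold=-500, pixel_area=1.0):
    """Particle count and summed particle area above a HU threshold in an ROI."""
    particles = roi_hu > threshold                  # vessels/soft tissue within aerated lung
    labels, n_particles = ndimage.label(particles)  # connected-component labeling
    total_area = particles.sum() * pixel_area       # summed area of all particles
    return n_particles, total_area
```

These two scalars would then enter the per-reader proportional odds models as the complexity predictors described in the results.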
Affiliation(s)
- Lukas Ebner
- Department of Radiology, Duke University Medical Center, Durham, NC, 27710, USA
- Martin Tall
- Department of Radiology, Duke University Medical Center, Durham, NC, 27710, USA
- Donald L Ly
- Department of Radiology, Stanford School of Medicine, Stanford, CA, 94305, USA
- Justus E Roos
- Department of Radiology, Duke University Medical Center, Durham, NC, 27710, USA
- Sandy Napel
- Department of Radiology, Stanford School of Medicine, Stanford, CA, 94305, USA
- Geoffrey D Rubin
- Department of Radiology, Duke University Medical Center, Durham, NC, 27710, USA
5. Venjakob AC, Mello-Thoms CR. Review of prospects and challenges of eye tracking in volumetric imaging. J Med Imaging (Bellingham) 2015; 3:011002. [PMID: 27081663] [DOI: 10.1117/1.jmi.3.1.011002]
Abstract
While eye tracking research in conventional radiography has flourished over the past decades, the number of eye tracking studies examining multislice images lags behind. A possible reason for the lack of studies in this area is that the eye tracking methodology used in conventional radiography cannot be applied directly to volumetric imaging material. The challenges of eye tracking in volumetric imaging concern in particular the selection of stimulus material, the detection of events in the eye tracking data, the calculation of meaningful eye tracking parameters, and the reporting of abnormalities. However, all of these challenges can be addressed in the design of the experiment. If this is done, eye tracking studies using volumetric imaging material offer almost unlimited opportunity for perception research and are highly relevant, as the number of volumetric images that are acquired and interpreted is rising.
Affiliation(s)
- Antje C Venjakob
- Technische Universität Berlin, Chair of Human-Machine Systems, Department of Psychology and Ergonomics, Marchstraße 23, 10587 Berlin, Germany
- Claudia R Mello-Thoms
- University of Sydney, Medical Imaging and Radiation Sciences, Faculty of Health Science, 94 Mallet Street, Level 2, Room 204, Sydney, NSW 2150, Australia; University of Pittsburgh, Department of Biomedical Informatics, 5607 Baum Boulevard, Room 423, Pittsburgh, Pennsylvania 15206-3701, United States
6. Alvare G, Gordon R. CT brush and CancerZap!: two video games for computed tomography dose minimization. Theor Biol Med Model 2015; 12:7. [PMID: 25962597] [PMCID: PMC4469010] [DOI: 10.1186/s12976-015-0003-4]
Abstract
BACKGROUND: X-ray dose from computed tomography (CT) scanners has become a significant public health concern. All CT scanners spray x-ray photons across a patient, including those using compressive sensing algorithms. New technologies make it possible to aim x-ray beams where they are most needed to form a diagnostic or screening image. We have designed a computer game, CT Brush, that takes advantage of this new flexibility. It uses a standard MART algorithm (Multiplicative Algebraic Reconstruction Technique), but with a user-defined, dynamically selected subset of the rays. The image appears as the player moves the CT brush over an initially blank scene, with dose accumulating with every "mouse down" move. The goal is to find the "tumor" with as few moves (least dose) as possible.
RESULTS: We have successfully implemented CT Brush in Java and made it available publicly, requesting crowdsourced feedback on improving the open source code. With this experience, we also outline a "shoot 'em up" game, CancerZap!, for photon-limited CT.
CONCLUSIONS: We anticipate that human computing games like these, analyzed by methods similar to those used to understand eye tracking, will lead to new object-dependent CT algorithms that require significantly less dose than object-independent nonlinear and compressive sensing algorithms that depend on sprayed photons. Preliminary results suggest that substantial dose reduction is achievable.
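CT Brush itself is implemented in Java, but the core idea, a MART reconstruction restricted to a user-selected subset of rays, can be sketched briefly. The following Python is illustrative only: the dense system matrix A, measurement vector b, relaxation parameter lam, and function name are assumptions, not the game's actual code:

```python
import numpy as np

def mart_subset(A, b, ray_subset, n_iters=10, lam=1.0, x0=None):
    """MART reconstruction using only a user-selected subset of rays.

    A: (n_rays, n_pixels) nonnegative system matrix.
    b: (n_rays,) measured projections.
    ray_subset: indices of the rays actually acquired (the 'brush strokes').
    """
    n_rays, n_pixels = A.shape
    x = np.ones(n_pixels) if x0 is None else x0.copy()
    for _ in range(n_iters):
        for i in ray_subset:
            est = A[i] @ x                      # forward-project current image along ray i
            if est <= 0 or b[i] <= 0:
                continue                        # skip rays with no usable signal
            x *= (b[i] / est) ** (lam * A[i])   # multiplicative, element-wise MART update
    return x
```

Because only the rays in ray_subset contribute, the accumulated dose in the game scales with the number of brush strokes rather than with a full spray of projections.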
Affiliation(s)
- Graham Alvare
- BioInformation Technology Laboratory, Department of Plant Science, University of Manitoba, E2-532 EITC, Winnipeg, R3T 2N2, MB, Canada; Current address: Faculty of Medicine, University of Manitoba, Box 107, Winnipeg, Canada.
- Richard Gordon
- Embryogenesis Center, Gulf Specimen Aquarium and Marine Laboratory, 222 Clark Drive, Panacea, FL, 32346, USA; C.S. Mott Center for Human Growth and Development, Department of Obstetrics and Gynecology, Wayne State University, 275 E. Hancock, Detroit, MI, 48201, USA; Stellarray, 9210 Cameron Road Suite #300, Austin, TX, 78754, USA.
7. Rubin GD, Roos JE, Tall M, Harrawood B, Bag S, Ly DL, Seaman DM, Hurwitz LM, Napel S, Roy Choudhury K. Characterizing search, recognition, and decision in the detection of lung nodules on CT scans: elucidation with eye tracking. Radiology 2014; 274:276-86. [PMID: 25325324] [DOI: 10.1148/radiol.14132918]
Abstract
PURPOSE: To determine the effectiveness of radiologists' search, recognition, and acceptance of lung nodules on computed tomographic (CT) images by using eye tracking.
MATERIALS AND METHODS: This study was performed with a protocol approved by the institutional review board. All study subjects provided informed consent, and all private health information was protected in accordance with HIPAA. A remote eye tracker was used to record time-varying gaze paths while 13 radiologists interpreted 40 lung CT images with an average of 3.9 synthetic nodules (5-mm diameter) embedded randomly in the lung parenchyma. The radiologists' gaze volumes (GVs) were defined as the portion of the lung parenchyma within 50 pixels (approximately 3 cm) of all gaze points. The fraction of the total lung volume encompassed within the GVs, the fraction of lung nodules encompassed within each GV (search effectiveness), the fraction of lung nodules within the GV detected by the reader (recognition-acceptance effectiveness), and the overall sensitivity of lung nodule detection were measured.
RESULTS: Detected nodules were within 50 pixels of the nearest gaze point for 990 of 992 correct detections. On average, radiologists searched 26.7% of the lung parenchyma in 3 minutes and 16 seconds and encompassed between 86 and 143 of 157 nodules within their GVs. Once nodules were encompassed within the GV, the average sensitivity of nodule recognition and acceptance ranged from 47 of 100 nodules to 103 of 124 nodules (sensitivity, 0.47-0.82). Overall sensitivity ranged from 47 to 114 of 157 nodules (sensitivity, 0.30-0.73) and showed moderate correlation (r = 0.62, P = .02) with the fraction of lung volume searched.
CONCLUSION: Relationships between reader search, recognition and acceptance, and overall lung nodule detection rate can be studied with eye tracking. Radiologists appear to actively search less than half of the lung parenchyma, with substantial interreader variation in volume searched, fraction of nodules included within the search volume, sensitivity for nodules within the search volume, and overall detection rate.
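The gaze-volume (GV) metrics above reduce to distance queries between recorded gaze points and lung voxels or nodule centers. A sketch of that computation, assuming voxel-space coordinates and SciPy's cKDTree; the array names and the treatment of the 50-pixel radius as an isotropic voxel distance are illustrative assumptions, not the study's implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

def gaze_volume_coverage(lung_mask, gaze_points, nodule_centers, radius=50.0):
    """Fraction of lung within `radius` of any gaze point, and nodules inside that GV.

    lung_mask: boolean 3D array of lung parenchyma voxels.
    gaze_points: (n_gaze, 3) voxel coordinates of recorded gaze locations.
    nodule_centers: (n_nodules, 3) voxel coordinates of nodule centers.
    """
    tree = cKDTree(gaze_points)
    lung_voxels = np.argwhere(lung_mask)        # (n_lung, 3) voxel coordinates
    d_lung, _ = tree.query(lung_voxels)         # distance to nearest gaze point
    lung_fraction = (d_lung <= radius).mean()   # fraction of lung searched
    d_nod, _ = tree.query(nodule_centers)
    nodules_in_gv = d_nod <= radius             # nodules encompassed by the GV
    return lung_fraction, nodules_in_gv
```

Dividing the number of detected nodules by nodules_in_gv.sum() would then give the recognition-acceptance sensitivity within the searched volume described above.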
Affiliation(s)
- Geoffrey D Rubin
- From the Duke Clinical Research Institute, Box 17969, 2400 Pratt St, Durham, NC 27715 (G.D.R., K.R.C.); Department of Radiology, Duke University School of Medicine, Durham, NC (G.D.R., J.E.R., M.T., B.H., S.B., D.M.S., L.M.H.); Department of Medical Imaging, University of Toronto, Toronto, ON, Canada (D.L.L.); and Department of Radiology, Stanford University School of Medicine, Stanford, Calif (S.N.)