1
Liu Y, Wolfe JM, Trueblood JS. Risky hybrid foraging: The impact of risk, reward value, and prevalence on foraging behavior in hybrid visual search. J Exp Psychol Gen 2024; 154:2025-45615-001. [PMID: 39541518 PMCID: PMC12075627 DOI: 10.1037/xge0001652]
Abstract
In hybrid foraging, foragers search for multiple targets in multiple patches throughout the foraging session, mimicking a range of real-world scenarios. This research examines outcome uncertainty, the prevalence of different target types, and the reward value of targets in human hybrid foraging. Our empirical findings show a consistent tendency toward risk-averse behavior in hybrid foraging. That is, people display a preference for certainty and actively avoid taking risks. While altering the prevalence or reward value of the risky targets does influence people's aversion to risk, the overall effect of risk remains dominant. Additionally, simulation results suggest that the observed risk-averse strategy is suboptimal in the sense that it prevents foragers from maximizing their overall returns. These results underscore the crucial role of outcome uncertainty in shaping hybrid foraging behavior and shed light on potential theoretical developments bridging theories in decision making and hybrid foraging. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Affiliation(s)
- Yanjun Liu
- School of Psychology, University of New South Wales
- Department of Psychological and Brain Sciences, Indiana University
2
Wolfe JM, Hulleman J, Mitra A, Si W. In simple but challenging search tasks, most errors are stochastic. Atten Percept Psychophys 2024; 86:2289-2300. [PMID: 39160388 DOI: 10.3758/s13414-024-02938-y]
Abstract
In visual search tasks in the lab and in the real world, people routinely miss targets that are clearly visible: so-called "look but fail to see" (LBFTS) errors. If search displays are shown to the same observer twice, we can ask about the probability of joint errors, where the target is missed both times. If errors are "deterministic," then the probability of a second error on the same display, given that the target was missed the first time, should be high. If errors are "stochastic," the probability of joint errors should be the product of the error rates for the first and second appearances. Here, we report on two versions of a T-among-Ls search with somewhat degraded letters to make search more difficult. In Experiment 1, Ts could appear either amidst crowded "clumps" of Ls or more in isolation. Observers made more errors when the T was in a clump, but these errors were mainly stochastic. In Experiment 2, the task was made harder by making Ts and Ls more similar. Again, errors were predominantly stochastic. If other, socially important errors are also stochastic, this would suggest that "double reading," where two observers (human or otherwise) look at each stimulus, could reduce overall error rates.
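The stochastic/deterministic distinction in this abstract reduces to a simple independence test: if misses are stochastic, the joint-miss rate over repeated displays should approximate the product of the two marginal miss rates. A minimal simulation (hypothetical parameters and data, not the authors' analysis code) illustrates the logic:

```python
import random

random.seed(1)

def simulate(n_displays, p_miss, deterministic_frac):
    """Simulate two presentations of each display.

    A `deterministic_frac` of displays are always missed (deterministic
    errors); the rest are missed independently with probability `p_miss`
    on each presentation (stochastic errors).
    """
    first, second = [], []
    for _ in range(n_displays):
        if random.random() < deterministic_frac:
            first.append(True)    # this display is always missed
            second.append(True)
        else:
            first.append(random.random() < p_miss)
            second.append(random.random() < p_miss)
    return first, second

def joint_vs_predicted(first, second):
    """Observed joint-miss rate vs. the rate predicted by independence."""
    n = len(first)
    p1 = sum(first) / n
    p2 = sum(second) / n
    joint = sum(a and b for a, b in zip(first, second)) / n
    return joint, p1 * p2

# Purely stochastic errors: observed joint rate tracks the product.
f, s = simulate(100_000, p_miss=0.2, deterministic_frac=0.0)
print(joint_vs_predicted(f, s))

# Partly deterministic errors: the joint rate exceeds the product.
f, s = simulate(100_000, p_miss=0.2, deterministic_frac=0.1)
print(joint_vs_predicted(f, s))
```

Comparing the observed joint-miss rate to the independence prediction is the diagnostic: an excess of joint misses signals a deterministic component.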
Affiliation(s)
- Jeremy M Wolfe
- Brigham and Women's Hospital, 900 Commonwealth Ave, 3rd Floor, Boston, MA, 02215, USA.
- Harvard Medical School, Boston, MA, USA.
- Ava Mitra
- Brigham and Women's Hospital, 900 Commonwealth Ave, 3rd Floor, Boston, MA, 02215, USA
3
Li A, Hulleman J, Wolfe JM. Errors in visual search: Are they stochastic or deterministic? Cogn Res Princ Implic 2024; 9:15. [PMID: 38502280 PMCID: PMC10951178 DOI: 10.1186/s41235-024-00543-z]
Abstract
In any visual search task in the lab or in the world, observers will make errors. Those errors can be categorized as "deterministic": If you miss this target in this display once, you will definitely miss it again. Alternatively, errors can be "stochastic," occurring randomly with some probability from trial to trial. Researchers and practitioners have sought to reduce errors in visual search, but different types of errors might require different techniques for mitigation. To empirically categorize errors in a simple search task, our observers searched for the letter "T" among "L" distractors, with each display presented twice. When the letters were clearly visible (white letters on a gray background), the errors were almost completely stochastic (Exp. 1): an error made on the first appearance of a display did not predict that an error would be made on the second appearance. When the visibility of the letters was manipulated (letters of different gray levels on a noisy background), the errors became a mix of stochastic and deterministic; unsurprisingly, lower-contrast targets produced more deterministic errors (Exp. 2). Using the stimuli of Exp. 2, we tested whether errors could be reduced using cues that guided attention around the display but knew nothing about the content of that display (Exps. 3a and 3b). This had no effect, but cueing all item locations did succeed in reducing deterministic errors (Exp. 3c).
Affiliation(s)
- Aoqi Li
- The University of Manchester, Manchester, UK.
- Jeremy M Wolfe
- Brigham and Women's Hospital, Boston, MA, USA
- Harvard Medical School, Boston, MA, USA
4
Pershin I, Mustafaev T, Ibragimova D, Ibragimov B. Changes in Radiologists' Gaze Patterns Against Lung X-rays with Different Abnormalities: a Randomized Experiment. J Digit Imaging 2023; 36:767-775. [PMID: 36622464 PMCID: PMC9838425 DOI: 10.1007/s10278-022-00760-2]
Abstract
The workload of some radiologists has increased dramatically in the last several years, potentially reducing the quality of diagnosis: it has been demonstrated that radiologists' diagnostic accuracy drops significantly at the end of work shifts. This study investigates how radiologists cover chest X-rays with their gaze in the presence of different chest abnormalities and high workload. We designed a randomized experiment to quantitatively assess how radiologists' image reading patterns change with the radiological workload. Four radiologists read chest X-rays on a radiological workstation equipped with an eye-tracker. The lung fields on the X-rays were automatically segmented with a U-Net neural network, allowing us to measure the coverage of the lungs with the radiologists' gaze. The images were randomly split so that each image was shown at a different time to a different radiologist. Regression models were fit to the gaze data to calculate the trends in lung coverage for individual radiologists and chest abnormalities. For the study, a database of 400 chest X-rays with reference diagnoses was assembled. The average lung coverage with gaze ranged from 55 to 65% per radiologist. For every 100 X-rays read, the lung coverage declined by 1.3 to 7.6%, depending on the radiologist. The coverage reduction trends were consistent across abnormalities, ranging from 3.4% per 100 X-rays for cardiomegaly to 4.1% per 100 X-rays for atelectasis. In short, the more images radiologists read, the smaller the part of the lung fields they cover with their gaze. This pattern is very stable across abnormality types and is not affected by the exact order in which the abnormalities are viewed. The proposed randomized experiment captured and quantified consistent changes in X-ray reading for different lung abnormalities that occur under high workload.
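The trend estimate reported here (percent coverage lost per 100 X-rays read) is the slope of a linear regression of gaze coverage against image order. A minimal sketch with synthetic data (the coverage values and noise level are invented for illustration, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic session: lung coverage (%) drifts down as more X-rays are read.
n_read = np.arange(400)  # position of each image within the session
coverage = 62.0 - 0.04 * n_read + rng.normal(0.0, 3.0, n_read.size)

# Fit coverage ~ intercept + slope * n_read; a negative slope indicates
# declining coverage. Scale by 100 to express it per 100 X-rays read.
slope, intercept = np.polyfit(n_read, coverage, 1)
decline_per_100 = -slope * 100
print(f"coverage declines ~{decline_per_100:.1f}% per 100 X-rays read")
```

Fitting one such regression per radiologist (or per abnormality type) yields the per-reader and per-abnormality trends summarized in the abstract.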
Affiliation(s)
- Ilya Pershin
- Innopolis University, Republic of Tatarstan, Innopolis, Russia
- Tamerlan Mustafaev
- Innopolis University, Republic of Tatarstan, Innopolis, Russia
- Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA, USA
5
Hout MC, Papesh MH, Masadeh S, Sandin H, Walenchok SC, Post P, Madrid J, White B, Pinto JDG, Welsh J, Goode D, Skulsky R, Rodriguez MC. The Oddity Detection in Diverse Scenes (ODDS) database: Validated real-world scenes for studying anomaly detection. Behav Res Methods 2023; 55:583-599. [PMID: 35353316 PMCID: PMC8966608 DOI: 10.3758/s13428-022-01816-5]
Abstract
Many applied screening tasks (e.g., medical image or baggage screening) involve challenging searches for which standard laboratory search is rarely equivalent. For example, whereas laboratory search frequently requires observers to look for precisely defined targets among isolated, non-overlapping images randomly arrayed on clean backgrounds, medical images present unspecified targets in noisy, yet spatially regular scenes. Those unspecified targets are typically oddities, elements that do not belong. To develop a closer laboratory analogue to this, we created a database of scenes containing subtle, ill-specified "oddity" targets. These scenes have similar perceptual densities and spatial regularities to those found in expert search tasks, and each includes 16 variants of the unedited scene wherein an oddity (a subtle deformation of the scene) is hidden. In Experiment 1, eight volunteers searched thousands of scene variants for an oddity. Regardless of their search accuracy, they were then shown the highlighted anomaly and rated its subtlety. Subtlety ratings reliably predicted search performance (accuracy and response times) and did so better than image statistics. In Experiment 2, we conducted a conceptual replication in which a larger group of naïve searchers scanned subsets of the scene variants. Prior subtlety ratings reliably predicted search outcomes. Whereas medical image targets are difficult for naïve searchers to detect, our database contains thousands of interior and exterior scenes that vary in difficulty, but are nevertheless searchable by novices. In this way, the stimuli will be useful for studying visual search as it typically occurs in expert domains: Ill-specified search for anomalies in noisy displays.
Affiliation(s)
- Michael C Hout
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA.
- National Science Foundation, Alexandria, VA, USA.
- Megan H Papesh
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Saleem Masadeh
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Hailey Sandin
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Phillip Post
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Jessica Madrid
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Bryan White
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Julian Welsh
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Dre Goode
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Rebecca Skulsky
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Mariana Cazares Rodriguez
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
6
Wolfe JM, Lyu W, Dong J, Wu CC. What eye tracking can tell us about how radiologists use automated breast ultrasound. J Med Imaging (Bellingham) 2022; 9:045502. [PMID: 35911209 PMCID: PMC9315059 DOI: 10.1117/1.jmi.9.4.045502]
Abstract
Purpose: Automated breast ultrasound (ABUS) presents three-dimensional (3D) representations of the breast in the form of stacks of coronal and transverse plane images. ABUS is especially useful for the assessment of dense breasts. Here, we present the first eye tracking data showing how radiologists search and evaluate ABUS cases. Approach: Twelve readers evaluated single-breast cases in 20-min sessions. Positive findings were present in 56% of the evaluated cases. Eye position and the currently visible coronal and transverse slice were tracked, allowing for reconstruction of 3D "scanpaths." Results: Individual readers had consistent search strategies. Most readers had strategies that involved examination of all available images. Overall accuracy was 0.74 (sensitivity = 0.66 and specificity = 0.84). The 20 false negative errors across all readers can be classified using Kundel's (1978) taxonomy: 17 are "decision" errors (readers found the target but misclassified it as normal or benign). There was one recognition error and two "search" errors. This is an unusually high proportion of decision errors. Readers spent essentially the same proportion of time viewing coronal and transverse images, regardless of whether the case was positive or negative, correct or incorrect. Readers tended to use a "scanner" strategy when viewing coronal images and a "driller" strategy when viewing transverse images. Conclusions: These results suggest that ABUS errors are more likely to be errors of interpretation than of search. Further research could determine if readers' exploration of all images is useful or if, in some negative cases, search of transverse images is redundant following a search of coronal images.
Affiliation(s)
- Jeremy M Wolfe
- Brigham and Women's Hospital, Boston, Massachusetts, United States
- Harvard Medical School, Boston, Massachusetts, United States
- Wanyi Lyu
- Brigham and Women's Hospital, Boston, Massachusetts, United States
- Jeffrey Dong
- Beth Israel Deaconess Medical Center, Boston, Massachusetts, United States
- Chia-Chien Wu
- Brigham and Women's Hospital, Boston, Massachusetts, United States
- Harvard Medical School, Boston, Massachusetts, United States
7
Pershin I, Kholiavchenko M, Maksudov B, Mustafaev T, Ibragimova D, Ibragimov B. Artificial Intelligence for the Analysis of Workload-Related Changes in Radiologists' Gaze Patterns. IEEE J Biomed Health Inform 2022; 26:4541-4550. [PMID: 35704540 DOI: 10.1109/jbhi.2022.3183299]
Abstract
Around 60-80% of radiological errors are attributed to overlooked abnormalities, the rate of which increases at the end of work shifts. In this study, we ran an experiment to investigate whether artificial intelligence (AI) can assist in detecting radiologists' gaze patterns that correlate with fatigue. A retrospective database of lung X-ray images with reference diagnoses was used. The X-ray images were acquired from 400 subjects with a mean age of 49 ± 17 years, 61% of them men. Four practicing radiologists read these images while their eye movements were recorded. The radiologists passed a series of concentration tests at prearranged breaks in the experiment. A U-Net neural network was adapted to annotate lung anatomy on the X-rays and to calculate coverage and information-gain features from the radiologists' eye movements over the lung fields. The lung coverage, information gain, and eye tracker-based features were compared against a cumulative work done (CWD) label for each radiologist. The gaze-traveled distance, X-ray coverage, and lung coverage deteriorated statistically significantly (p < 0.01) with CWD for three out of four radiologists. The reading time and information gain over the lungs deteriorated statistically significantly for all four radiologists. We discovered a novel AI-based metric blending reading time, speed, and organ coverage, which can be used to predict changes in fatigue-related image reading patterns.
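The lung-coverage feature used in this line of work can be thought of as the fraction of lung-mask pixels falling within some radius of any recorded fixation. A toy sketch (the rectangular mask, radius, and fixation coordinates are invented; the study derived its masks from a U-Net segmentation):

```python
import numpy as np

def lung_coverage(mask, gaze_points, radius=10):
    """Fraction of lung-mask pixels within `radius` of any fixation.

    mask        : 2-D boolean array, True inside the lung fields
    gaze_points : iterable of (row, col) fixation coordinates
    """
    rows, cols = np.indices(mask.shape)
    covered = np.zeros(mask.shape, dtype=bool)
    for r, c in gaze_points:
        # Mark every pixel within a disk of `radius` around this fixation.
        covered |= (rows - r) ** 2 + (cols - c) ** 2 <= radius ** 2
    return (covered & mask).sum() / mask.sum()

# Toy "lung field": a rectangle in a 100x100 image.
mask = np.zeros((100, 100), dtype=bool)
mask[20:80, 30:70] = True

fixations = [(30, 40), (50, 50), (70, 60)]
print(f"lung coverage: {lung_coverage(mask, fixations):.2f}")
```

Tracking this fraction against cumulative work done is what reveals the coverage decline the abstract reports.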
8
Wu CC, Wolfe JM. The Functional Visual Field(s) in simple visual search. Vision Res 2022; 190:107965. [PMID: 34775158 PMCID: PMC8976560 DOI: 10.1016/j.visres.2021.107965]
Abstract
During a visual search for a target among distractors, observers do not fixate every location in the search array. Rather, processing is thought to occur within a Functional Visual Field (FVF) surrounding each fixation. We argue that there are three questions that can be asked at each fixation and that these imply three different senses of the FVF. 1) Can I identify what is at location XY? This defines a Resolution FVF. 2) To what shall I attend during this fixation? This defines an Attentional FVF. 3) Where should I fixate next? This defines an Exploratory FVF. We examine FVFs 2 and 3 using eye movements in visual search. In three experiments, we collected eye movements during visual search for the target letter T among distractor letter Ls (Exps. 1 and 3) or for a color X orientation conjunction (Exp. 2). Saccades that do not go to the target can be used to define the Exploratory FVF. The saccade that goes to the target can be used to define the Attentional FVF, since the target was probably covertly detected during the prior fixation. The Exploratory FVF was larger than the Attentional FVF in all three experiments. Interestingly, the probability that the next saccade would go to the target was always well below 1.0, even when the current fixation was close to the target and well within any reasonable estimate of the FVF. Measuring search-based Exploratory and Attentional FVFs sheds light on how we can miss clearly visible targets.
Affiliation(s)
- Chia-Chien Wu
- Harvard Medical School, Boston, MA, USA; Brigham & Women's Hospital, Boston, MA, USA.
- Jeremy M Wolfe
- Harvard Medical School, Boston, MA, USA; Brigham & Women's Hospital, Boston, MA, USA