1. Kumar A, Pundlik S, Peli E, Luo G. Comparison of visual SLAM and IMU in tracking head movement outdoors. Behav Res Methods 2023;55:2787-2799. PMID: 35953662; PMCID: PMC10775920; DOI: 10.3758/s13428-022-01941-1.
Abstract
Tracking head movement during outdoor activities is more challenging than in controlled indoor lab environments, because large-magnitude head scanning is common under natural conditions. Compensatory gaze (head and eye) scanning while walking may be critical for people with visual field loss. We compared the accuracy of two outdoor head-tracking methods: differential inertial measurement units (IMU) and simultaneous localization and mapping (SLAM). In a fixed-location gaze-aiming experiment, SLAM outperformed the IMU in terms of error (IMU: 9.6°, SLAM: 4.47°). In an urban street-walking experiment with five patients with hemifield loss, the IMU drift, quantified by root-mean-square deviation, was as high as 68.1°, while the drift of SLAM was only 5.3°. However, the SLAM method suffered from data loss due to tracking failure (~10% overall, and ~18% when crossing streets). Our results show that the SLAM and IMU methods have complementary properties. Because it produces no data gaps, the differential IMU method may be preferable to SLAM in settings where the signal drift can be removed in post-processing and small gaze estimation errors can be tolerated.
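As an illustration of the drift metric quoted above, the following sketch (our own, not from the paper; the traces and drift rate are hypothetical) computes a root-mean-square deviation between a drifting IMU heading trace and a drift-free reference:

```python
import numpy as np

def rms_drift_deg(estimate_deg: np.ndarray, reference_deg: np.ndarray) -> float:
    """Root-mean-square deviation between two time-aligned heading traces.

    The difference is wrapped to [-180, 180) degrees so that crossings
    of the +/-180 deg boundary do not inflate the deviation.
    """
    diff = (estimate_deg - reference_deg + 180.0) % 360.0 - 180.0
    return float(np.sqrt(np.mean(diff ** 2)))

# Hypothetical traces: an IMU yaw signal with slow linear drift
# versus a drift-free reference, sampled at 100 Hz for 60 s.
t = np.arange(0, 60, 0.01)
reference = 30 * np.sin(0.2 * t)   # head scanning, deg
imu = reference + 0.5 * t          # 0.5 deg/s gyro drift
print(f"RMSD: {rms_drift_deg(imu, reference):.1f} deg")
```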
Affiliation(s)
- Ayush Kumar, Schepens Eye Research Institute of Mass Eye & Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- Shrinivas Pundlik, Schepens Eye Research Institute of Mass Eye & Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- Eli Peli, Schepens Eye Research Institute of Mass Eye & Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- Gang Luo, Schepens Eye Research Institute of Mass Eye & Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
2. Eye movement behavior in a real-world virtual reality task reveals ADHD in children. Sci Rep 2022;12:20308. PMID: 36434040; PMCID: PMC9700686; DOI: 10.1038/s41598-022-24552-4.
Abstract
Eye movements and other rich data obtained in virtual reality (VR) environments that resemble the situations in which symptoms manifest could support the objective detection of symptoms in clinical conditions. In the present study, 37 children with attention deficit hyperactivity disorder and 36 typically developing controls (9-13 years old) played a lifelike prospective memory game using a head-mounted display with a built-in 90-Hz eye tracker. Eye-movement patterns showed prominent group differences, but these were dispersed across the full performance time rather than associated with specific events or stimulus features. A support vector machine classifier trained on the eye-movement data showed excellent discrimination ability, with an area under the curve of 0.92, significantly higher than for task performance measures or for eye movements obtained in a visual search task. We demonstrate that a naturalistic VR task combined with eye tracking allows accurate prediction of attention deficits, paving the way for precision diagnostics.
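A minimal sketch of the kind of classification pipeline described, using scikit-learn; the feature matrix, labels, and dimensions here are synthetic stand-ins for the study's per-participant eye-movement features, not its data:

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Hypothetical per-participant features (e.g., fixation durations,
# saccade amplitudes): 73 participants = 37 ADHD + 36 controls.
X = rng.normal(size=(73, 20))
y = rng.integers(0, 2, size=73)   # 1 = ADHD, 0 = control

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
# Cross-validated probabilities give an unbiased AUC estimate.
scores = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
print(f"AUC: {roc_auc_score(y, scores):.2f}")
```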
3. Evaluating Eye Movement Event Detection: A Review of the State of the Art. Behav Res Methods 2022. PMID: 35715615; DOI: 10.3758/s13428-021-01763-7.
Abstract
Detecting eye movements in raw eye-tracking data is a well-established research area in itself, as well as a common pre-processing step before any subsequent analysis. As in any field, however, progress and successful collaboration can only be achieved given a shared understanding of the pursued goal. This is often formalised by defining metrics that express the quality of an approach to solving the posed problem. Both the big-picture intuition behind the evaluation strategies and seemingly small implementation details influence the resulting measures, making even studies with outwardly similar procedures essentially incomparable and impeding a common understanding. In this review, we systematically describe and analyse the evaluation methods and measures employed in the eye movement event detection field to date. While recently developed evaluation strategies tend to quantify the detector's mistakes at the level of whole eye movement events rather than individual gaze samples, they typically do not separate establishing correspondences between true and predicted events from quantifying the discovered errors. In our analysis we separate these two steps where possible, enabling almost arbitrary combinations of them in an evaluation pipeline. We also present the first large-scale empirical analysis of event matching strategies in the literature, examining these various combinations both in practice and in theory. We examine the particular benefits and downsides of the evaluation methods, providing recommendations towards more intuitive and informative assessment. We have implemented the evaluation strategies on which this work focuses in a single publicly available library: https://github.com/r-zemblys/EM-event-detection-evaluation
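One common way to establish correspondences between true and predicted events, separately from quantifying errors, is greedy matching by temporal overlap (intersection over union). The sketch below is a generic illustration of that idea, not the API of the linked library:

```python
def match_events(true_events, pred_events, min_iou=0.5):
    """Greedily match predicted to ground-truth events by temporal IoU.

    Events are (onset, offset) tuples in samples or seconds. Each
    ground-truth event is matched to at most one prediction; precision
    and recall follow from the number of matches.
    """
    def iou(a, b):
        inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
        union = max(a[1], b[1]) - min(a[0], b[0])
        return inter / union if union > 0 else 0.0

    pairs = sorted(
        ((iou(t, p), i, j) for i, t in enumerate(true_events)
         for j, p in enumerate(pred_events)),
        reverse=True)
    used_t, used_p, matches = set(), set(), []
    for score, i, j in pairs:
        if score < min_iou:
            break
        if i not in used_t and j not in used_p:
            used_t.add(i); used_p.add(j)
            matches.append((i, j, score))
    return matches

true_fix = [(0, 200), (250, 500), (560, 800)]
pred_fix = [(10, 190), (260, 480), (610, 640)]
m = match_events(true_fix, pred_fix)
print(f"matched {len(m)}/{len(true_fix)} events")  # third pair falls below IoU 0.5
```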
4. David EJ, Lebranchu P, Perreira Da Silva M, Le Callet P. What are the visuo-motor tendencies of omnidirectional scene free-viewing in virtual reality? J Vis 2022;22:12. PMID: 35323868; PMCID: PMC8963670; DOI: 10.1167/jov.22.4.12.
Abstract
Central and peripheral vision during visual tasks have been extensively studied on two-dimensional screens, highlighting their perceptual and functional disparities. This study has two objectives: replicating on-screen gaze-contingent experiments that remove the central or peripheral field of view in virtual reality, and identifying visuo-motor biases specific to the exploration of 360° scenes with a wide field of view. Our results are useful for vision modelling, with applications in gaze position prediction (e.g., content compression and streaming). We ask how previous on-screen findings translate to conditions where observers can use their head to explore stimuli. We implemented a gaze-contingent paradigm to simulate loss of vision in virtual reality while participants freely viewed omnidirectional natural scenes. This protocol allows the simulation of vision loss with an extended field of view (>80°) and the study of the head's contribution to visual attention. The time course of visuo-motor variables in our pure free-viewing task reveals long fixations and short saccades during the first seconds of exploration, contrary to the literature on instruction-guided visual tasks. We show that the effect of vision loss is reflected primarily in eye movements, in a manner consistent with the two-dimensional screen literature. We hypothesize that head movements mainly serve to explore the scenes during free-viewing; the presence of masks did not significantly impact head scanning behaviour. We present new fixational and saccadic visuo-motor tendencies in a 360° context that we hope will help in the creation of gaze prediction models dedicated to virtual reality.
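At its core, a gaze-contingent mask of the kind used here reduces to a per-frame eccentricity test against the current gaze direction. A minimal sketch, assuming unit direction vectors in an arbitrary camera frame (the conventions and radius default are ours, not the paper's):

```python
import numpy as np

def eccentricity_deg(gaze_dir: np.ndarray, point_dir: np.ndarray) -> float:
    """Angle in degrees between two unit direction vectors."""
    cos = np.clip(np.dot(gaze_dir, point_dir), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos)))

def masked(gaze_dir, point_dir, radius_deg=6.0, mask="central"):
    """True if a scene point falls inside the simulated scotoma:
    within radius_deg of gaze for a central mask, beyond it for a
    peripheral mask."""
    ecc = eccentricity_deg(gaze_dir, point_dir)
    return ecc <= radius_deg if mask == "central" else ecc > radius_deg

gaze = np.array([0.0, 0.0, 1.0])   # looking straight ahead
point = np.array([np.sin(np.radians(10)), 0.0, np.cos(np.radians(10))])
print(masked(gaze, point, mask="central"))   # False: 10 deg > 6 deg radius
```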
Affiliation(s)
- Erwan Joël David, Department of Psychology, Goethe-Universität, Frankfurt, Germany
- Pierre Lebranchu, LS2N UMR CNRS 6004, University of Nantes and Nantes University Hospital, Nantes, France
- Patrick Le Callet, LS2N UMR CNRS 6004, University of Nantes, Nantes, France. http://pagesperso.ls2n.fr/~lecallet-p/index.html
5. Gong H, Hsieh SS, Holmes D, Cook D, Inoue A, Bartlett D, Baffour F, Takahashi H, Leng S, Yu L, McCollough CH, Fletcher JG. An interactive eye-tracking system for measuring radiologists' visual fixations in volumetric CT images: Implementation and initial eye-tracking accuracy validation. Med Phys 2021;48:6710-6723. PMID: 34534365; PMCID: PMC8595866; DOI: 10.1002/mp.15219.
Abstract
PURPOSE: Eye-tracking approaches have been used to understand the visual search process in radiology. However, previous eye-tracking work in computed tomography (CT) has been limited largely to single cross-sectional images or video playback of the reconstructed volume, which do not accurately reflect radiologists' visual search activities or their interactivity with three-dimensional image data at a computer workstation (e.g., scroll, pan, and zoom) during visual evaluation of diagnostic imaging targets. We have developed a platform that integrates eye-tracking hardware with in-house-developed reader workstation software to allow monitoring of the visual search process and reader-image interactions in clinically relevant reader tasks. The purpose of this work is to validate the spatial accuracy of eye-tracking data collected with this platform under different eye-tracking data acquisition modes.
METHODS: An eye-tracker was integrated with a previously developed workstation designed for reader performance studies. The integrated system captured real-time eye movement and workstation events at a 1000-Hz sampling frequency. The eye-tracker was operated either in head-stabilized mode or in free-movement mode. In head-stabilized mode, the reader positioned their head on a manufacturer-provided chinrest. In free-movement mode, a biofeedback tool emitted an audio cue when the head position was outside the data collection range (general biofeedback) or outside a narrower range of positions near the calibration position (strict biofeedback). Four radiologists and one resident participated in three studies to determine eye-tracking spatial accuracy under three constraint conditions: head-stabilized mode (i.e., with use of a chinrest), free movement with general biofeedback, and free movement with strict biofeedback. Study 1 evaluated the impact of head stabilization versus general or strict biofeedback using a cross-hair target, prior to integration of the eye-tracker with the image viewing workstation. In Study 2, after integration of the eye-tracker and reader workstation, readers were asked to fixate on targets randomly distributed within a volumetric digital phantom. In Study 3, readers used the integrated system to scroll through volumetric patient CT angiographic images while fixating on the centerline of designated blood vessels (from the left coronary artery to the dorsalis pedis artery). Spatial accuracy was quantified as the offset between the center of the intended target and the detected fixation, in units of image pixels and degrees of visual angle.
RESULTS: The three head-position constraint conditions yielded comparable accuracy in the studies using digital phantoms. In Study 1, involving the digital crosshairs, the median ± standard deviation of offset values among readers was 15.2 ± 7.0 image pixels with the chinrest, 14.2 ± 3.6 image pixels with strict biofeedback, and 19.1 ± 6.5 image pixels with general biofeedback. In Study 2, using the random-dot phantom, the median ± standard deviation of offset values was 16.7 ± 28.8 pixels with the chinrest, 16.5 ± 24.6 pixels with strict biofeedback, and 18.0 ± 22.4 pixels with general biofeedback, which translated to a visual angle of about 0.8° for all three conditions. We found no obvious association between eye-tracking accuracy and target size or view time. In Study 3, viewing patient images, the chinrest and strict biofeedback demonstrated comparable accuracy, while general biofeedback demonstrated slightly worse accuracy. The median ± standard deviation of offset values was 14.8 ± 11.4 pixels with the chinrest, 21.0 ± 16.2 pixels with strict biofeedback, and 29.7 ± 20.9 image pixels with general biofeedback, corresponding to visual angles of 0.7° to 1.3°.
CONCLUSIONS: An integrated eye-tracker system to assess reader eye movement and interactive viewing in relation to imaging targets demonstrated reasonable spatial accuracy for assessment of visual fixation. The head-free movement condition with audio biofeedback performed similarly to the head-stabilized mode.
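The conversion between pixel offsets and degrees of visual angle underlying these results is standard geometry. The sketch below uses a hypothetical pixel pitch and viewing distance, not the study's workstation parameters, so its outputs will not reproduce the paper's figures:

```python
import math

def pixels_to_visual_angle_deg(offset_px: float, px_size_mm: float,
                               viewing_distance_mm: float) -> float:
    """Convert an on-screen offset in pixels to degrees of visual angle,
    given the physical pixel pitch and the viewing distance."""
    return math.degrees(2 * math.atan((offset_px * px_size_mm) /
                                      (2 * viewing_distance_mm)))

# Hypothetical geometry: 0.25 mm pixel pitch, 65 cm viewing distance.
for px in (14.8, 21.0, 29.7):
    print(f"{px:5.1f} px -> {pixels_to_visual_angle_deg(px, 0.25, 650):.2f} deg")
```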
Affiliation(s)
- Hao Gong, Department of Radiology, Mayo Clinic, Rochester, MN 55901
- Scott S. Hsieh, Department of Radiology, Mayo Clinic, Rochester, MN 55901
- David Holmes, Department of Physiology & Biomedical Engineering, Mayo Clinic, Rochester, MN 55901
- David Cook, Department of Internal Medicine, Mayo Clinic, Rochester, MN 55901
- Akitoshi Inoue, Department of Radiology, Mayo Clinic, Rochester, MN 55901
- David Bartlett, Department of Radiology, Mayo Clinic, Rochester, MN 55901
- Shuai Leng, Department of Radiology, Mayo Clinic, Rochester, MN 55901
- Lifeng Yu, Department of Radiology, Mayo Clinic, Rochester, MN 55901
6. David EJ, Beitner J, Võ MLH. The importance of peripheral vision when searching 3D real-world scenes: A gaze-contingent study in virtual reality. J Vis 2021;21:3. PMID: 34251433; PMCID: PMC8287039; DOI: 10.1167/jov.21.7.3.
Abstract
Visual search in natural scenes is a complex task that relies on peripheral vision to detect potential targets and on central vision to verify them. This division of labor between the visual fields has been established largely through on-screen experiments. We conducted a gaze-contingent experiment in virtual reality to test how the established roles of central and peripheral vision translate to more natural settings. The use of everyday scenes in virtual reality allowed us to study visual attention with a fairly ecological protocol that cannot be implemented in the real world. Central or peripheral vision was masked during visual search, with target objects selected according to scene-semantic rules. Analyzing the resulting search behavior, we found that target objects that were not spatially constrained to a probable location within the scene impacted search measures negatively. Our results diverge from on-screen studies in that search performance was only slightly affected by loss of central vision. In particular, a central mask did not impact verification times when the target was grammatically constrained to an anchor object. Our findings demonstrate that the role of central vision (up to 6° of eccentricity) in identifying objects in natural scenes seems to be minor, while the role of peripheral preprocessing of targets in immersive real-world searches may have been underestimated by on-screen experiments.
Affiliation(s)
- Erwan Joël David, Department of Psychology, Goethe-Universität, Frankfurt, Germany
- Julia Beitner, Department of Psychology, Goethe-Universität, Frankfurt, Germany
7. GlassesViewer: Open-source software for viewing and analyzing data from the Tobii Pro Glasses 2 eye tracker. Behav Res Methods 2020;52:1244-1253. PMID: 31898293; PMCID: PMC7280338; DOI: 10.3758/s13428-019-01314-1.
Abstract
We present GlassesViewer, open-source software for viewing and analyzing eye-tracking data from the Tobii Pro Glasses 2 head-mounted eye tracker, as well as the scene and eye videos and other data streams (pupil size, gyroscope, accelerometer, and TTL input) that this headset can record. The software, written in MATLAB, provides the following functionality: (1) a graphical interface for navigating the study and recording structure produced by the Tobii Glasses 2; (2) functionality to unpack, parse, and synchronize the various data and video streams comprising a Glasses 2 recording; and (3) a graphical interface for viewing the Glasses 2's gaze direction, pupil size, gyroscope, and accelerometer time-series data, along with the recorded scene and eye camera videos. In this latter interface, segments of data can furthermore be labeled through user-provided event classification algorithms or by means of manual annotation. Lastly, the toolbox provides integration with the GazeCode tool by Benjamins et al. (2018), enabling a completely open-source workflow for analyzing Tobii Pro Glasses 2 recordings.
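GlassesViewer itself is MATLAB; purely to illustrate the synchronization step it performs (aligning streams recorded at different rates onto common timestamps), here is a language-neutral Python sketch with made-up stream names and rates, not the toolbox's actual API:

```python
import numpy as np

def resample_to(ts_target: np.ndarray, ts_source: np.ndarray,
                values: np.ndarray) -> np.ndarray:
    """Linearly interpolate one stream (e.g., gyroscope samples) onto the
    timestamps of another (e.g., gaze samples) so the streams align."""
    return np.interp(ts_target, ts_source, values)

gaze_ts = np.arange(0.0, 5.0, 0.02)   # hypothetical 50-Hz gaze timestamps, s
gyro_ts = np.arange(0.0, 5.0, 0.01)   # hypothetical 100-Hz gyro timestamps, s
gyro_yaw = np.sin(gyro_ts)            # placeholder signal, deg/s
gyro_on_gaze = resample_to(gaze_ts, gyro_ts, gyro_yaw)
print(gaze_ts.shape, gyro_on_gaze.shape)   # one gyro value per gaze sample
```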
8. David E, Beitner J, Võ MLH. Effects of Transient Loss of Vision on Head and Eye Movements during Visual Search in a Virtual Environment. Brain Sci 2020;10:E841. PMID: 33198116; PMCID: PMC7696943; DOI: 10.3390/brainsci10110841.
Abstract
Central and peripheral fields of view extract information of different quality and serve different roles during visual tasks. Past research has studied this dichotomy on-screen, in conditions remote from natural situations where the scene would be omnidirectional and the entire field of view could be of use. In this study, participants looked for objects in simulated everyday rooms in virtual reality. By implementing a gaze-contingent protocol, we masked central or peripheral vision (masks of 6° radius) during trials. We analyzed the impact of vision loss on visuo-motor variables related to fixations (duration) and saccades (amplitude and relative direction). An important novelty is that we segregated eye, head, and overall gaze movements in our analyses. Additionally, we studied these measures after separating trials into two search phases (scanning and verification). Our results generally replicate the past on-screen literature and shed light on the roles of eye and head movements. We show that the scanning phase is dominated by short fixations and long saccades for exploration, and the verification phase by long fixations and short saccades for analysis. One finding indicates that eye movements are strongly driven by visual stimulation, while head movements serve a higher behavioral goal of exploring omnidirectional scenes. Moreover, losing central vision has a smaller impact than reported on-screen, hinting at the importance of peripheral scene processing for visual search with an extended field of view. Our findings provide more information on how knowledge gathered on-screen may transfer to more natural conditions, and attest to the experimental usefulness of eye tracking in virtual reality.
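Segregating eye, head, and gaze movements requires expressing the eye-in-head direction in world coordinates via the head pose. A minimal sketch using SciPy quaternions; the coordinate conventions here are assumptions of ours, not those reported in the paper:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def gaze_in_world(head_quat_xyzw: np.ndarray,
                  eye_in_head_dir: np.ndarray) -> np.ndarray:
    """Rotate an eye-in-head gaze direction into world coordinates using
    the head orientation, so eye, head, and gaze can be analyzed apart."""
    return R.from_quat(head_quat_xyzw).apply(eye_in_head_dir)

head = R.from_euler("y", 30, degrees=True).as_quat()   # head turned 30 deg
eye = np.array([0.0, 0.0, 1.0])                        # eye straight in head
gaze = gaze_in_world(head, eye)
print(np.degrees(np.arctan2(gaze[0], gaze[2])))        # ~30 deg gaze azimuth
```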
Affiliation(s)
- Erwan David, Scene Grammar Lab, Department of Psychology, Johann Wolfgang Goethe-Universität, Theodor-W.-Adorno-Platz 6, 60323 Frankfurt, Germany
9. Billington J, Webster RJ, Sherratt TN, Wilkie RM, Hassall C. The (Under)Use of Eye-Tracking in Evolutionary Ecology. Trends Ecol Evol 2020;35:495-502. PMID: 32396816; DOI: 10.1016/j.tree.2020.01.003.
Abstract
To survive and pass on their genes, animals must perform many tasks that affect their fitness, such as mate choice, foraging, and predator avoidance. The ability to make rapid decisions depends on what information must be sampled from the environment and how it is processed. We highlight the need to consider visual attention within sensory ecology and advocate the use of eye-tracking methods to better understand how animals prioritise the sampling of information from their environments prior to making a goal-directed decision. We consider ways in which eye-tracking can be used to determine how animals work within attentional constraints and how environmental pressures may exploit these limitations.
Affiliation(s)
- J Billington, School of Psychology, University of Leeds, Leeds, UK
- R J Webster, Department of Biology, Carleton University, Ottawa, Ontario, Canada
- T N Sherratt, Department of Biology, Carleton University, Ottawa, Ontario, Canada
- R M Wilkie, School of Psychology, University of Leeds, Leeds, UK
- C Hassall, School of Biology, Faculty of Biological Sciences, University of Leeds, Leeds, UK
10. Kothari R, Yang Z, Kanan C, Bailey R, Pelz JB, Diaz GJ. Gaze-in-wild: A dataset for studying eye and head coordination in everyday activities. Sci Rep 2020;10:2539. PMID: 32054884; PMCID: PMC7018838; DOI: 10.1038/s41598-020-59251-5.
Abstract
The study of gaze behavior has primarily been constrained to controlled environments in which the head is fixed. Consequently, little effort has been invested in developing algorithms for categorizing gaze events (e.g., fixations, pursuits, saccades, gaze shifts) while the head is free and thus contributes to the velocity signals upon which classification algorithms typically operate. Our approach was to collect a novel, naturalistic, multimodal dataset of eye + head movements from subjects performing everyday tasks while wearing a mobile eye tracker equipped with an inertial measurement unit and a 3D stereo camera. This Gaze-in-the-Wild dataset (GW) includes eye + head rotational velocities (deg/s), infrared eye images, and scene imagery (RGB + D). A portion was labelled by coders into gaze motion events with a mutual agreement of 0.74 sample-based Cohen's κ. This labelled data was used to train and evaluate two machine learning algorithms, a Random Forest and a Recurrent Neural Network model, for gaze event classification. Assessment involved the application of established and novel event-based performance metrics. The classifiers achieve ~87% of human performance in detecting fixations and saccades, but fall short (50%) in detecting pursuit movements. Moreover, pursuit classification is far worse in the absence of head movement information. A subsequent analysis of feature significance in our best-performing model revealed that classification can be done using only the magnitudes of eye and head movements, potentially removing the need for calibration between the head and eye tracking systems. The GW dataset, trained classifiers, and evaluation metrics will be made publicly available with the intention of facilitating growth in the emerging area of head-free gaze event classification.
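Sample-based Cohen's κ, the agreement measure quoted above, can be computed directly from two coders' per-sample labels. A small self-contained sketch with synthetic labels (the label proportions here are arbitrary, not the dataset's):

```python
import numpy as np

def cohens_kappa(labels_a: np.ndarray, labels_b: np.ndarray) -> float:
    """Sample-based Cohen's kappa: chance-corrected agreement between two
    coders' per-sample event labels (e.g., fixation/saccade/pursuit)."""
    classes = np.union1d(labels_a, labels_b)
    p_obs = np.mean(labels_a == labels_b)                      # observed agreement
    p_exp = sum(np.mean(labels_a == c) * np.mean(labels_b == c)
                for c in classes)                              # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

a = np.array(["fix"] * 60 + ["sac"] * 20 + ["pur"] * 20)
b = np.array(["fix"] * 55 + ["sac"] * 25 + ["pur"] * 20)
print(f"kappa = {cohens_kappa(a, b):.2f}")
```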
Affiliation(s)
- Rakshit Kothari, Chester F. Carlson Center for Imaging Science, RIT, Rochester, NY, USA
- Zhizhuo Yang, Golisano College of Computing and Information Sciences, RIT, Rochester, NY, USA
- Christopher Kanan, Chester F. Carlson Center for Imaging Science, RIT, Rochester, NY, USA
- Reynold Bailey, Golisano College of Computing and Information Sciences, RIT, Rochester, NY, USA
- Jeff B Pelz, Chester F. Carlson Center for Imaging Science, RIT, Rochester, NY, USA
- Gabriel J Diaz, Chester F. Carlson Center for Imaging Science, RIT, Rochester, NY, USA
11. Hessels RS, Niehorster DC, Nyström M, Andersson R, Hooge ITC. Is the eye-movement field confused about fixations and saccades? A survey among 124 researchers. R Soc Open Sci 2018;5:180502. PMID: 30225041; PMCID: PMC6124022; DOI: 10.1098/rsos.180502.
Abstract
Eye movements have been extensively studied in a wide range of research fields. While new methods such as mobile eye tracking and eye tracking in virtual/augmented reality are emerging quickly, eye-movement terminology has scarcely been revised. We assert that this may cause confusion about two of the main concepts: fixations and saccades. In this study, we assessed the definitions of fixations and saccades held in the eye-movement field by surveying 124 eye-movement researchers. These researchers held a variety of definitions of fixations and saccades, whose breadth seems even wider than what is reported in the literature. Moreover, these definitions did not seem to be related to researcher background or experience. We urge researchers to make their definitions more explicit by specifying all the relevant components of the eye movement under investigation: (i) the oculomotor component: e.g., whether the eye moves slowly or quickly; (ii) the functional component: what purpose the eye movement (or lack thereof) serves; (iii) the coordinate system used: relative to what the eye moves; (iv) the computational definition: how the event is represented in the eye-tracker signal. This should enable eye-movement researchers from different fields to have a discussion without misunderstandings.
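As an example of making the computational definition (component iv) explicit, a velocity-threshold classifier in the I-VT style states its fixation/saccade criterion as a single number. This sketch is illustrative only, with a hypothetical sampling rate and threshold rather than any definition endorsed by the survey:

```python
import numpy as np

def classify_ivt(x_deg, y_deg, fs_hz, sacc_thresh_deg_s=30.0):
    """I-VT-style classification: label each gaze sample 'saccade' when
    its angular velocity exceeds the threshold, else 'fixation'. The
    threshold itself is the explicit computational definition."""
    vx = np.gradient(x_deg) * fs_hz
    vy = np.gradient(y_deg) * fs_hz
    speed = np.hypot(vx, vy)   # angular speed, deg/s
    return np.where(speed > sacc_thresh_deg_s, "saccade", "fixation")

# Hypothetical 500-Hz trace: a fixation, a rapid 10-deg shift, a fixation.
x = np.concatenate([np.zeros(100), np.linspace(0, 10, 10), np.full(100, 10.0)])
labels = classify_ivt(x, np.zeros_like(x), fs_hz=500)
print((labels == "saccade").sum(), "saccade samples")
```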
Affiliation(s)
- Roy S. Hessels, Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands; Developmental Psychology, Utrecht University, Utrecht, The Netherlands
- Diederick C. Niehorster, Lund University Humanities Lab, Lund University, Lund, Sweden; Department of Psychology, Lund University, Lund, Sweden
- Marcus Nyström, Lund University Humanities Lab, Lund University, Lund, Sweden
- Ignace T. C. Hooge, Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands