101. A new jsPsych plugin for psychophysics, providing accurate display duration and stimulus onset asynchrony. Behav Res Methods 2020; 53:301-310. PMID: 32700239. DOI: 10.3758/s13428-020-01445-w.
Abstract
A JavaScript framework named 'jsPsych' developed by de Leeuw (2015) is widely used for conducting Web-based experiments, and its functionality can be enhanced by using plugins. This article introduces a new jsPsych plugin which enables experimenters to set different onset times for geometric figures, images, sounds, and moving objects, and present them synchronized with the refresh of the display. Moreover, this study evaluated the stimulus onset asynchronies (SOAs) using visual and audio stimuli. The results showed that: (i) the deviations from the intended SOAs between two visual stimuli were less than 10 ms, (ii) the variability across browser-computer combinations was reduced compared with the no-plugin condition, and (iii) the variability of the SOAs between visual and audio stimuli was relatively large (about 50 ms). This study concludes that although the use of audio stimuli is somewhat limited, the new plugin provides experimenters with useful and accurate methods for conducting psychophysical experiments online. The latest version of the plugin can be downloaded freely from https://jspsychophysics.hes.kyushu-u.ac.jp under the MIT license.
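The sub-10 ms deviations reported for visual SOAs follow naturally from frame quantization: a plugin that synchronizes drawing with the display refresh can realise an intended onset only at whole-frame boundaries. A minimal Python sketch of that bound (illustrative only, not the plugin's code; the 60 Hz refresh rate is an assumption):

```python
# Illustrative sketch (not the jsPsych plugin's code): with drawing locked
# to the display refresh, an intended onset is rounded to the nearest
# frame, so the deviation is at most half a frame (~8.3 ms at 60 Hz).

def frame_locked_onset(intended_ms: float, refresh_hz: float = 60.0) -> float:
    """Onset time actually achievable when presentation is synchronized
    with the nearest display refresh."""
    frame_ms = 1000.0 / refresh_hz
    return round(intended_ms / frame_ms) * frame_ms
```

At 60 Hz, an intended 50 ms SOA lands exactly on 3 frames, while 40 ms is quantized to 2 frames (33.3 ms), a 6.7 ms deviation, consistent with the "less than 10 ms" result above.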
102. Wu Z, Luu CD, Hodgson LA, Caruso E, Chen FK, Chakravarthy U, Arnold JJ, Heriot WJ, Runciman J, Guymer RH. Examining the added value of microperimetry and low luminance deficit for predicting progression in age-related macular degeneration. Br J Ophthalmol 2020; 105:711-715. PMID: 32606079. DOI: 10.1136/bjophthalmol-2020-315935.
Abstract
PURPOSE To examine the added predictive value of microperimetric sensitivity and low luminance deficit (LLD; difference between photopic and low luminance visual acuity (VA)) to information from colour fundus photography (CFP) for progression to late age-related macular degeneration (AMD) in individuals with bilateral large drusen. METHODS 140 participants with bilateral large drusen underwent baseline microperimetry testing, VA measurements and CFP. They were then reviewed at 6-monthly intervals to 36 months, to determine late AMD progression. Microperimetry pointwise sensitivity SD (PSD), LLD and the presence of pigmentary abnormalities on CFPs were determined. Predictive models based on these parameters were developed and examined. RESULTS Baseline microperimetry PSD and presence of pigmentary abnormalities were both significantly associated with time to develop late AMD (p≤0.004), but LLD was not (p=0.471). The area under the receiver operating characteristic curve (AUC) for discriminating between eyes that progressed to late AMD based on models using microperimetry PSD (AUC=0.68) and LLD (AUC=0.58) alone was significantly lower than that based on CFP grading for the presence of pigmentary abnormalities (AUC=0.80; both p<0.005). Addition of microperimetry and/or LLD information to a model that included CFP grading did not result in any improvement in its predictive performance (AUC=0.80 for all; all p≥0.66). CONCLUSIONS While microperimetry, but not LLD, was significantly and independently associated with AMD progression at the population level, this study observed that both measures were suboptimal at predicting progression at the individual level when compared to conventional CFP grading and their addition to the latter did not improve predictive performance.
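The AUC comparisons above reduce to a simple probability: the chance that a randomly chosen progressing eye receives a higher predictor score than a randomly chosen non-progressing eye. A sketch of that computation (the scores below are hypothetical, not the study's data):

```python
def auc(progressed, stable):
    """Mann-Whitney estimate of the ROC area: P(score of a progressing
    eye > score of a stable eye), with ties counted as 0.5."""
    pairs = [(p, s) for p in progressed for s in stable]
    wins = sum(1.0 if p > s else (0.5 if p == s else 0.0) for p, s in pairs)
    return wins / len(pairs)
```

An AUC of 0.5 is chance discrimination; 0.80, as reported for CFP grading, means a progressing eye outscores a stable eye on 80% of such comparisons.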
103. Sato H, Motoyoshi I. Distinct strategies for estimating the temporal average of numerical and perceptual information. Vision Res 2020; 174:41-49. PMID: 32521341. DOI: 10.1016/j.visres.2020.05.004.
Abstract
Humans can estimate global trends in dynamic information presented either as perceptual features or as symbolic codes such as numbers. Previous studies on temporal statistics estimation have shown that observers judge the temporal average of visual attributes according to information from the last few frames of the presentation sequence (in what is referred to as the recency effect). Here, we investigated how humans estimate the temporal average of number vs. orientation using identical stimuli for the two tasks. In Experiment 1, a randomly-selected single-digit number was serially presented at orientations randomly varying over time. In Experiment 2, a texture comprising a random number of Gabor elements was shown at orientations randomly varying over time. In both experiments, observers judged the temporal averages of the numerical values and orientations in separate blocks. Results showed that observers judging the temporal average of orientation relied upon information from later frames as predicted by a typical model of perceptual decision making. By contrast, for the judgement of numerical values, we found that the impacts of each temporal frame were constant or varied little across temporal frames regardless of whether the numerical information was given as digits or by the number of texture elements. The results are interpreted as evidence that distinct computational strategies may be involved in estimating the temporal averages of perceptual features and numerical information.
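The contrast between the two strategies can be expressed as different temporal weighting profiles: near-uniform weights for numerical averaging versus weights that grow toward the final frames for perceptual averaging (the recency effect). A sketch under those assumptions (the exponential form and the tau parameter are illustrative, not the authors' fitted model):

```python
import numpy as np

def temporal_average(frames, tau=None):
    """Weighted average of a frame sequence. tau=None gives uniform
    weights (the pattern reported for number judgements); a finite tau
    weights later frames exponentially more (recency effect)."""
    frames = np.asarray(frames, dtype=float)
    n = len(frames)
    w = np.ones(n) if tau is None else np.exp(np.arange(n) / tau)
    return float(np.sum(w * frames) / np.sum(w))
```

With a strong recency weighting the estimate is dominated by the last frames, whereas the uniform profile reproduces the true arithmetic mean.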
104. Dufour A, Després O, Pebayle T, Lithfous S. Thermal sensitivity in humans at the depth of thermal receptor endings beneath the skin: validation of a heat transfer model of the skin using high-temporal resolution stimuli. Eur J Appl Physiol 2020; 120:1509-1518. PMID: 32361772. DOI: 10.1007/s00421-020-04372-y.
Abstract
PURPOSE The bioheat transfer equation predicts temperature distribution in living tissues such as the skin. This study aimed at psychophysically validating this model in humans. METHODS Three experiments were performed. In the first, participants were asked to judge the thermal intensity of stimuli with combinations of intensity and duration that yielded, according to the model, identical temperatures at the thermoreceptor's depth. In experiment 2, participants' thermal detection thresholds for stimuli of different durations were measured to verify whether these thresholds correspond, according to the model, to equivalent temperatures at the thermoreceptor's location. In experiment 3, an alternative forced choice method was used, in which subjects indicated which of the two consecutive thermal stimulations was more intense. RESULTS The model predicted results that agreed with subjects' perceptions. Participants judged stimuli of different combinations of intensities and durations yielding identical temperature at the receptor level as having equivalent intensity. Moreover, although cold detection thresholds for stimuli of different durations differed for temperatures of the stimulating probe, stimulations using the model's parameters showed equivalence at the depth of the thermal receptors. Furthermore, stimuli with temperature/duration combinations for which the model predicts temperature equivalence at the depth of the receptors corresponded to subjective equalization. CONCLUSION These findings indicate that heat transfer models provide good estimates of temperatures at the thermal receptors. Use of these models may facilitate comparisons among studies using different stimulation devices and may facilitate the establishment of standards involving all stimulation parameters.
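The model's core prediction, that different surface intensity/duration combinations can yield the same temperature at receptor depth, can be illustrated with a bare one-dimensional heat-diffusion sketch (all parameter values below are illustrative assumptions; the bioheat equation proper additionally includes perfusion and metabolic terms):

```python
import numpy as np

def temp_at_depth(surface_rise, duration_s, depth_m=0.00015,
                  alpha=1.1e-7, L=0.002, nx=80):
    """Explicit finite-difference solution of the 1-D heat equation:
    skin initially at 0 (relative), surface stepped to `surface_rise`,
    deep boundary held at 0. Returns the temperature rise at `depth_m`.
    alpha ~ thermal diffusivity of skin (m^2/s), assumed value."""
    dx = L / nx
    dt = 0.4 * dx * dx / alpha          # stable explicit time step
    T = np.zeros(nx + 1)
    T[0] = surface_rise                 # surface boundary condition
    for _ in range(int(duration_s / dt)):
        T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    return float(np.interp(depth_m, np.linspace(0, L, nx + 1), T))
```

A 10-degree surface step reaches roughly three quarters of its amplitude at 0.15 mm after 1 s but much less after 0.2 s, so a shorter stimulus must be more intense to produce the same receptor-level temperature, which is the equivalence the experiments tested.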
105.
Abstract
Animal models have significantly contributed to understanding the pathophysiology of chronic subjective tinnitus. They are useful because they control etiology, which in humans is heterogeneous; employ random group assignment; and often use methods not permissible in human studies. Animal models can be broadly categorized as either operant or reflexive, based on methodology. Operant methods use variants of established psychophysical procedures to reveal what an animal hears. Reflexive methods do the same using elicited behavior, for example, the acoustic startle reflex. All methods contrast the absence of sound and presence of sound, because tinnitus cannot by definition be perceived as silence.
106. Nonlinear transduction of emotional facial expression. Vision Res 2020; 170:1-11. PMID: 32217366. DOI: 10.1016/j.visres.2020.03.004.
Abstract
To create neural representations of external stimuli, the brain performs a number of processing steps that transform its inputs. For fundamental attributes, such as stimulus contrast, this involves one or more nonlinearities that are believed to optimise the neural code to represent features of the natural environment. Here we ask if the same is also true of more complex stimulus dimensions, such as emotional facial expression. We report the results of three experiments combining morphed facial stimuli with electrophysiological and psychophysical methods to measure the function mapping emotional expression intensity to internal response. The results converge on a nonlinearity that accelerates over weak expressions, and then becomes shallower for stronger expressions, similar to the situation for lower level stimulus properties. We further demonstrate that the nonlinearity is not attributable to the morphing procedure used in stimulus generation.
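The shape described, accelerating over weak expressions and shallower over strong ones, is the classic form of a divisive transducer long used for stimulus contrast. A sketch with hypothetical parameters (not the values fitted in the paper):

```python
def transducer(c, p=2.4, q=2.0, z=0.1):
    """Divisive (Legge-Foley-style) transducer: response grows roughly as
    c**p for weak inputs (accelerating) and as c**(p-q) for strong inputs
    (compressive). Parameter values are illustrative assumptions."""
    return c ** p / (c ** q + z ** q)
```

Doubling a weak input more than doubles the response, while doubling a strong input less than doubles it, reproducing the accelerating-then-shallow profile reported for expression intensity.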
107. Chow-Wing-Bom H, Dekker TM, Jones PR. The worse eye revisited: Evaluating the impact of asymmetric peripheral vision loss on everyday function. Vision Res 2020; 169:49-57. PMID: 32179339. DOI: 10.1016/j.visres.2019.10.012.
Abstract
In instances of asymmetric peripheral vision loss (e.g., glaucoma), binocular performance on simple psychophysical tasks (e.g., static threshold perimetry) is well-predicted by the better seeing eye alone. This suggests that peripheral vision is largely 'better-eye limited'. In the present study, we examine whether this also holds true for real-world tasks, or whether even a degraded fellow eye contributes important information for tasks of daily living. Twelve normally-sighted adults performed an everyday visually-guided action (finding a mobile phone) in a virtual-reality domestic environment, while levels of peripheral vision loss were independently manipulated in each eye (gaze-contingent blur). The results showed that even when vision in the better eye was held constant, participants were significantly slower to locate the target, and made significantly more head- and eye-movements, as peripheral vision loss in the worse eye increased. A purely unilateral peripheral impairment increased response times by up to 25%, although the effect of bilateral vision loss was much greater (>200%). These findings indicate that even a degraded visual field still contributes important information for performing everyday visually-guided actions. This may have clinical implications for how patients with visual field loss are managed or prioritized, and for our understanding of how binocular information in the periphery is integrated.
108. Halovic S, Kroos C, Stevens C. Adaptation aftereffects influence the perception of specific emotions from walking gait. Acta Psychol (Amst) 2020; 204:103026. PMID: 32087419. DOI: 10.1016/j.actpsy.2020.103026.
Abstract
We investigated the existence and nature of adaptation aftereffects on the visual perception of basic emotions displayed through walking gait. Stimuli were previously validated gender-ambiguous point-light walker models displaying various basic emotions (happy, sad, anger and fear). Results indicated that both facilitative and inhibitive aftereffects influenced the perception of all displayed emotions. Facilitative aftereffects were found between theoretically opposite emotions (i.e. happy/sad and anger/fear). Evidence suggested that low-level and high-level visual processes contributed to both stimulus aftereffect and conceptual aftereffect mechanisms. Significant aftereffects were more frequently evident for the time required to identify the displayed emotion than for emotion identification rates. The perception of basic emotions from walking gait is influenced by a number of different perceptual mechanisms which shift the categorical boundaries of each emotion as a result of perceptual experience.
109. Kopco N, Doreswamy KK, Huang S, Rossi S, Ahveninen J. Cortical auditory distance representation based on direct-to-reverberant energy ratio. Neuroimage 2020; 208:116436. PMID: 31809885. PMCID: PMC6997045. DOI: 10.1016/j.neuroimage.2019.116436.
Abstract
Auditory distance perception and its neuronal mechanisms are poorly understood, mainly because 1) it is difficult to separate distance processing from intensity processing, 2) multiple intensity-independent distance cues are often available, and 3) the cues are combined in a context-dependent way. A recent fMRI study identified a human auditory cortical area representing intensity-independent distance for sources presented along the interaural axis (Kopco et al. PNAS, 109, 11019-11024). For these sources, two intensity-independent cues are available: interaural level difference (ILD) and direct-to-reverberant energy ratio (DRR). Thus, the observed activations may reflect contributions not only from distance-related neuron populations but also from direction-encoding populations sensitive to ILD. Here, the paradigm from the previous study was used to examine DRR-based distance representation for sounds originating in front of the listener, where ILD is not available. In a virtual environment, we performed behavioral and fMRI experiments, combined with computational analyses, to identify the neural representation of distance based on DRR. The stimuli varied in distance (15-100 cm) while their received intensity was varied randomly and independently of distance. Behavioral performance showed that intensity-independent distance discrimination is accurate for frontal stimuli, even though it is worse than for lateral stimuli. fMRI activations for sounds varying in frontal distance, as compared to varying only in intensity, increased bilaterally in the posterior banks of Heschl's gyri, the planum temporale, and posterior superior temporal gyrus regions. Taken together, these results suggest that posterior human auditory cortex areas contain neuron populations that are sensitive to distance independent of intensity and of binaural cues relevant for directional hearing.
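The DRR cue itself is straightforward to compute from a room impulse response: energy within a short window around the direct-path arrival versus the energy of the reverberant tail. A sketch (the 2.5 ms direct window is a common convention, not necessarily the study's choice):

```python
import numpy as np

def drr_db(ir, fs, direct_ms=2.5):
    """Direct-to-reverberant energy ratio in dB: energy in the first
    `direct_ms` after the direct-path peak of impulse response `ir`
    (sample rate `fs`) versus all energy arriving after that window."""
    onset = int(np.argmax(np.abs(ir)))
    split = onset + int(fs * direct_ms / 1000.0)
    direct = np.sum(ir[onset:split] ** 2)
    reverb = np.sum(ir[split:] ** 2)
    return 10.0 * np.log10(direct / reverb)
```

Because the direct-path energy falls with source distance while the diffuse reverberant energy stays roughly constant, DRR decreases with distance independently of overall received intensity, which is what makes it usable as an intensity-independent cue.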
110. Lak A, Okun M, Moss MM, Gurnani H, Farrell K, Wells MJ, Reddy CB, Kepecs A, Harris KD, Carandini M. Dopaminergic and Prefrontal Basis of Learning from Sensory Confidence and Reward Value. Neuron 2020; 105:700-711.e6. PMID: 31859030. PMCID: PMC7031700. DOI: 10.1016/j.neuron.2019.11.018.
Abstract
Deciding between stimuli requires combining their learned value with one's sensory confidence. We trained mice in a visual task that probes this combination. Mouse choices reflected not only present confidence and past rewards but also past confidence. Their behavior conformed to a model that combines signal detection with reinforcement learning. In the model, the predicted value of the chosen option is the product of sensory confidence and learned value. We found precise correlates of this variable in the pre-outcome activity of midbrain dopamine neurons and of medial prefrontal cortical neurons. However, only the latter played a causal role: inactivating medial prefrontal cortex before outcome strengthened learning from the outcome. Dopamine neurons played a causal role only after outcome, when they encoded reward prediction errors graded by confidence, influencing subsequent choices. These results reveal neural signals that combine reward value with sensory confidence and guide subsequent learning.
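The model's key quantity, the predicted value of the chosen option as the product of sensory confidence and learned value, with a confidence-graded prediction error driving learning, can be sketched in a few lines (the learning rate is an arbitrary illustration, not the paper's fit):

```python
def choice_value(confidence, learned_value):
    """Predicted value of the chosen option: the product of sensory
    confidence (0..1) and the learned value of that option."""
    return confidence * learned_value

def update_value(value, confidence, reward, lr=0.2):
    """One reinforcement-learning step: the reward prediction error is
    graded by confidence, so an error on a high-confidence trial moves
    the learned value more. Returns (new_value, rpe)."""
    rpe = reward - choice_value(confidence, value)
    return value + lr * rpe, rpe
```

This captures the asymmetry in the abstract: an unrewarded choice made with high confidence yields a large negative prediction error, whereas the same outcome under low confidence is only weakly surprising.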
111. Baucum M, John R. The Psychophysics of Terror Attack Casualty Counts. Risk Anal 2020; 40:399-407. PMID: 31483513. DOI: 10.1111/risa.13396.
Abstract
In communicating the risk that terror attacks pose to the public, government agencies and other organizations must understand which characteristics of an attack contribute to the public's perception of its severity. An attack's casualty count is one of the most commonly used metrics of a terror attack's severity, yet it is unclear whether the public responds to information about casualty count when forming affective and cognitive reactions to terror attacks. This study sought to characterize the "psychophysical function" relating terror attack casualty counts to the severity of the affective and cognitive reactions they elicit. We recruited n = 684 Mechanical Turk participants to read a realistic vignette depicting either a biological or radiological terror attack, whose death toll ranged from 20 to 50,000, and rated their levels of fear and anger along with the attack's severity. Even when controlling for the perceived plausibility of the scenarios, participants' severity ratings of each attack were logarithmic with respect to casualty count, while ratings of fear and anger did not significantly depend on casualty count. These results were consistent across attack weapon (biological vs. radiological) and time horizon of the casualties (same-day or anticipated to occur over several years). These results complement past work on life loss valuation and highlight a potential bifurcation between the public's affective and cognitive evaluations of terror attacks.
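A logarithmic severity function implies that equal ratios of casualty counts produce equal increments in rated severity: each tenfold increase in deaths adds a constant amount. A sketch with made-up coefficients (the study did not report this parameterization):

```python
import math

def severity(casualties, a=2.0, b=1.5):
    """Hypothetical logarithmic psychophysical function: rated severity
    grows with log10 of the casualty count. Coefficients a and b are
    made-up illustrations, not fitted values from the study."""
    return a + b * math.log10(casualties)
```

Under this form, going from 20 to 200 deaths raises rated severity by exactly as much as going from 200 to 2,000, which is why a 2,500-fold range of casualties (20 to 50,000) compresses into a modest range of severity ratings.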
112. Mathôt S, Ivanov Y. The effect of pupil size and peripheral brightness on detection and discrimination performance. PeerJ 2019; 7:e8220. PMID: 31875153. PMCID: PMC6925951. DOI: 10.7717/peerj.8220.
Abstract
It is easier to read dark text on a bright background (positive polarity) than to read bright text on a dark background (negative polarity). This positive-polarity advantage is often linked to pupil size: A bright background induces small pupils, which in turn increases visual acuity. Here we report that pupil size, when manipulated through peripheral brightness, has qualitatively different effects on discrimination of fine stimuli in central vision and detection of faint stimuli in peripheral vision. Small pupils are associated with improved discrimination performance, consistent with the positive-polarity advantage, but only for very small stimuli that are at the threshold of visual acuity. In contrast, large pupils are associated with improved detection performance. These results are likely due to two pupil-size related factors: Small pupils increase visual acuity, which improves discrimination of fine stimuli; and large pupils increase light influx, which improves detection of faint stimuli. Light scatter is likely also a contributing factor: When a display is bright, light scatter creates a diffuse veil of retinal illumination that reduces perceived image contrast, thus impairing detection performance. We further found that pupil size was larger during the detection task than during the discrimination task, even though both tasks were equally difficult and similar in visual input; this suggests that the pupil may automatically assume an optimal size for the current task. Our results may explain why pupils dilate in response to arousal: This may reflect an increased emphasis on detection of unpredictable danger, which is crucially important in many situations that are characterized by high levels of arousal. Finally, we discuss the implications of our results for the ergonomics of display design.
113. Determination of scotopic and photopic conventional visual acuity and hyperacuity. Graefes Arch Clin Exp Ophthalmol 2019; 258:129-135. PMID: 31754827. DOI: 10.1007/s00417-019-04505-w.
Abstract
PURPOSE Visual acuity (VA) is an important determinant of visual function. Here we establish procedures and recommendations for VA testing extending beyond classical VA, making them available for future studies of visual function in health and disease. Specifically, we provide reference values for photopic and scotopic conventional uncrowded visual acuity (cVA) and Vernier hyperacuity (hVA) and assess their reproducibility and dependence on contrast polarity. METHODS For ten observers with normal vision, we determined photopic ("p"; maximal luminance 220 cd/m2) and scotopic ("s"; maximal luminance 0.004 cd/m2; 40 min of dark adaptation) cVA and hVA for two contrast polarities, i.e., black optotypes on a white background and vice versa. To assess intersession effects, two sets of measurements were obtained on different days. RESULTS Compared to pcVA (1.32 decimal VA; -0.12 ± 0.02 LogMAR), phVA (14.45 decimal VA; -1.16 ± 0.04 LogMAR) scaled (in terms of decimal visual acuity) on average by a factor of 11.0, scVA (0.12 decimal VA; 0.91 ± 0.03 LogMAR) by a factor of 0.1, and shVA (1.47 decimal VA; -0.17 ± 0.02 LogMAR) by a factor of 1.1. There were neither significant effects of contrast polarity (p > 0.12) nor of session (p > 0.28). CONCLUSIONS Our approach optimises integrated photopic and scotopic cVA and hVA measurements for general use and thus encourages the inclusion of these important measures of scotopic visual function in future studies. The absence of strong intersession effects demonstrates that no dedicated training session is needed to obtain scotopic and hVA measurements. The combined measures of scotopic and photopic VAs open a field of applications to study the interplay and plasticity of the retinal photoreceptor systems and cortical processing in health and visual disease. As a rule of thumb, hyperacuity is 10× higher than conventional acuity in both the photopic and scotopic range; thus, scotopic hyperacuity is close to photopic conventional acuity.
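The decimal-acuity factors quoted above follow directly from the LogMAR scale, since decimal VA = 10^(−LogMAR). A quick check against the reported means:

```python
import math

def decimal_from_logmar(logmar: float) -> float:
    """Decimal visual acuity from LogMAR: decimal VA = 10**(-LogMAR)."""
    return 10.0 ** (-logmar)

def logmar_from_decimal(decimal_va: float) -> float:
    """Inverse conversion: LogMAR = -log10(decimal VA)."""
    return -math.log10(decimal_va)
```

For example, -0.12 LogMAR converts to 1.32 decimal and -1.16 LogMAR to 14.45 decimal, reproducing both the pcVA/phVA values and their ratio of about 11.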
114. Self-reported Sensory Hypersensitivity Moderates Association Between Tactile Psychophysical Performance and Autism-Related Traits in Neurotypical Adults. J Autism Dev Disord 2019; 49:3159-3172. PMID: 31073751. DOI: 10.1007/s10803-019-04043-8.
Abstract
Atypical responses to tactile stimulation have been linked to core domains of dysfunction in individuals with autism spectrum disorder (ASD) and phenotypic traits associated with ASD in neurotypical individuals. We investigated (a) the extent to which two psychophysically derived measures of tactile sensitivity-detection threshold and dynamic range-relate to traits associated with ASD and (b) whether those relations vary according to the presence of self-reported sensory hypersensitivities in neurotypical individuals. A narrow dynamic range was associated with increased autism-related traits in individuals who reported greater sensory hypersensitivity. In contrast, in individuals less prone to sensory hypersensitivity, a narrow dynamic range was associated with reduced autism-related traits. Findings highlight the potential importance of considering dynamic psychophysical metrics in future studies.
115. Mangalam M, Chen R, McHugh TR, Singh T, Kelty-Stephen DG. Bodywide fluctuations support manual exploration: Fractal fluctuations in posture predict perception of heaviness and length via effortful touch by the hand. Hum Mov Sci 2019; 69:102543. PMID: 31715380. DOI: 10.1016/j.humov.2019.102543.
Abstract
The human haptic perceptual system respects a bodywide organization that responds to local stimulation through full-bodied coordination of nested tensions and compressions across multiple nonoverlapping scales. Under such an organization, the suprapostural task of manually hefting objects to perceive their heaviness and length should depend on roots extending into the postural control for maintaining upright balance on the ground surface. Postural sway of the whole body should thus carry signatures predicting what the hand can extract by hefting an object. We found that fractal fluctuations in Euclidean displacement in the participants' center of pressure (CoP) contributed to perceptual judgments by moderating how the participants' hand picked up the informational variable of the moment of inertia. The role of fractality in CoP displacement in supporting heaviness and length judgments increased across trials, indicating that the participants progressively implicate their fractal scaling in their perception of heaviness and length. Traditionally, we had to measure fractality in hand movements to predict perceptual judgments by manual hefting. However, our findings suggest that we can observe what is happening at hand in the relatively distant-from-hand measure of CoP. Our findings reveal the complex relationship through which posture supports manual exploration, entailing perception of the intended properties of hefted objects (heaviness or length) putatively through the redistribution of forces throughout the body.
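Fractal fluctuations of this kind are typically quantified with detrended fluctuation analysis (DFA), whose scaling exponent distinguishes uncorrelated noise (α ≈ 0.5) from fractal, 1/f-like variability (α ≈ 1). A minimal first-order DFA sketch (illustrative; not necessarily the authors' exact pipeline or scale choices):

```python
import numpy as np

def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
    """First-order detrended fluctuation analysis: integrate the series,
    detrend it linearly within windows of each scale, and return the
    slope of log F(n) vs log n. Alpha ~ 0.5 indicates white noise;
    alpha ~ 1.0 indicates fractal (1/f-like) fluctuations."""
    y = np.cumsum(np.asarray(x, dtype=float) - np.mean(x))
    F = []
    for n in scales:
        resid = []
        for i in range(len(y) // n):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            fit = np.polyval(np.polyfit(t, seg, 1), t)
            resid.append(np.mean((seg - fit) ** 2))
        F.append(np.sqrt(np.mean(resid)))
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return float(slope)
```

Applied to a CoP displacement series, an exponent approaching 1 would indicate the fractal temporal structure that the study found to moderate heaviness and length judgments.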
116.
Abstract
Behavioral testing in perceptual or cognitive domains requires querying a subject multiple times in order to quantify his or her ability in the corresponding domain. These queries must be conducted sequentially, and any additional testing domains are also typically tested sequentially, such as with distinct tests comprising a test battery. As a result, existing behavioral tests are often lengthy and do not offer comprehensive evaluation. The use of active machine-learning kernel methods for behavioral assessment provides extremely flexible yet efficient estimation tools to more thoroughly investigate perceptual or cognitive processes without incurring the penalty of excessive testing time. Audiometry represents perhaps the simplest test case to demonstrate the utility of these techniques. In pure-tone audiometry, hearing is assessed in the two-dimensional input space of frequency and intensity, and the test is repeated for both ears. Although an individual's ears are not linked physiologically, they share many features in common that lead to correlations suitable for exploitation in testing. The bilateral audiogram estimates hearing thresholds in both ears simultaneously by conjoining their separate input domains into a single search space, which can be evaluated efficiently with modern machine-learning methods. The result is the introduction of the first conjoint psychometric function estimation procedure, which consistently delivers accurate results in significantly less time than sequential disjoint estimators.
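The conjoint idea, treating the two ears as one search space whose correlations can be exploited, can be sketched with a toy Gaussian-process regressor whose kernel couples the ears (the kernel form, the coupling rho, and the length scale are illustrative assumptions, not the paper's model):

```python
import numpy as np

def kernel(X1, X2, rho=0.9, ell=1.0):
    """Conjoint kernel over (ear, frequency) points: an RBF over
    frequency, multiplied by rho whenever the ears differ. rho=0
    recovers disjoint per-ear estimation; rho near 1 lets data from
    one ear inform the other."""
    e1, f1 = X1[:, :1], X1[:, 1:]
    e2, f2 = X2[:, :1], X2[:, 1:]
    ear = np.where(e1 == e2.T, 1.0, rho)
    return ear * np.exp(-0.5 * (f1 - f2.T) ** 2 / ell ** 2)

def gp_mean(X_train, y, X_test, noise=1e-4, **kw):
    """Posterior mean of GP regression over the conjoined input space."""
    K = kernel(X_train, X_train, **kw) + noise * np.eye(len(y))
    return kernel(X_test, X_train, **kw) @ np.linalg.solve(K, np.asarray(y, float))
```

With a single threshold observation in the left ear, the predicted threshold at the same frequency in the untested right ear is pulled toward it in proportion to rho, which is the mechanism that lets a bilateral estimator finish in less time than two disjoint ones.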
117. Wijetillake AA, van Hoesel RJM, Cowan R. Sequential stream segregation with bilateral cochlear implants. Hear Res 2019; 383:107812. PMID: 31630083. DOI: 10.1016/j.heares.2019.107812.
Abstract
Sequential stream segregation on the basis of binaural 'ear-of-entry', modulation rate and electrode place-of-stimulation cues was investigated in bilateral cochlear implant (CI) listeners using a rhythm anisochrony detection task. Sequences of alternating 'A' and 'B' bursts were presented via direct electrical stimulation and comprised either an isochronous timing structure or an anisochronous structure that was generated by delaying just the 'B' bursts. 'B' delay thresholds that enabled rhythm anisochrony detection were determined. Higher thresholds were assumed to indicate a greater likelihood of stream segregation, resulting specifically from stream integration breakdown. Results averaged across subjects showed that thresholds were significantly higher when monaural 'A' and 'B' bursts were presented contralaterally rather than ipsilaterally, and that diotic presentation of 'A', with a monaural 'B', yielded intermediate thresholds. When presented monaurally and ipsilaterally, higher thresholds were also found when successive bursts had mismatched rather than matched modulation rates. In agreement with previous studies, average delay thresholds also increased as electrode separation between bursts increased when presented ipsilaterally. No interactions were found between ear-of-entry, modulation rate and place-of-stimulation. However, combining moderate electrode difference cues with either diotic-'A' ear-of-entry cues or modulation-rate mismatch cues did yield greater threshold increases than observed with any of those cues alone. The results from the present study indicate that sequential stream segregation can be elicited in bilateral CI users by differences in the signal across ears (binaural cues), in modulation rate (monaural cues) and in place-of-stimulation (monaural cues), and that those differences can be combined to further increase segregation.
Collapse
|
118
|
Abstract
This review addresses the adverse influences of neurotoxic exposures on the ability to smell and taste. These chemical senses largely determine the flavor of foods and beverages, impact food intake, and ultimately nutrition, and provide a warning for spoiled or poisonous food, leaking natural gas, smoke, airborne pollutants, and other hazards. Hence, toxicants that damage these senses have a significant impact on everyday function. As noted in detail, a large number of toxicants encountered in urban and industrial air pollution, including smoke, solvents, metals, and particulate matter can alter the ability to smell. Their influence on taste, i.e., sweet, sour, bitter, salty, and savory (umami) sensations, is not well documented. Given the rather direct exposure of olfactory receptors to the outside environment, olfaction is particularly vulnerable to damage from toxicants. Some toxicants, such as nanoparticles, have the potential to damage not only the olfactory receptor cells, but also the central nervous system structures by their entrance into the brain through the olfactory mucosa.
Collapse
|
119
|
Nolden AA, Lenart G, Hayes JE. Putting out the fire - Efficacy of common beverages in reducing oral burn from capsaicin. Physiol Behav 2019; 208:112557. [PMID: 31121171 PMCID: PMC6620146 DOI: 10.1016/j.physbeh.2019.05.018] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2019] [Revised: 05/08/2019] [Accepted: 05/18/2019] [Indexed: 11/28/2022]
Abstract
Capsaicin is classically considered an irritant, due to the warming and burning sensations it elicits. Widespread consumption of chilis suggests many individuals enjoy this burn, but these sensations can be overwhelming if the burn is too intense. While substantial folklore exists on the ability of specific beverages to mitigate capsaicin burn, quantitative data to support these claims are generally lacking. Here, we systematically tested various beverages for their ability to reduce oral burn following consumption of capsaicin in tomato juice. Participants (n = 72, 42 women, 30 men) rated the burn of 30 mL of spicy tomato juice on a general Labeled Magnitude Scale (gLMS) immediately after swallowing, and again every 10 s for 2 min. On 7 of 8 trials, a test beverage (40 mL) was consumed after the tomato juice was swallowed: skim milk, whole milk, seltzer water, Cherry Kool-Aid, non-alcoholic beer, cola, or water. Participants also answered questions regarding intake frequency and liking of spicy food. Initial burn of tomato juice alone was rated below "strong" but above "moderate" on the gLMS and decayed over the 2 min to a mean just above "weak". All beverages significantly reduced the burn of the tomato juice. To quantify efficacy over time, area under the curve (AUC) values were calculated, and the largest reductions in burn were observed for whole milk, skim milk, and Kool-Aid. More work is needed to determine the mechanism(s) by which these beverages reduce burn (i.e., partitioning due to fat, binding by protein, or sucrose analgesia). Present data suggest milk is the best choice to mitigate burn, regardless of fat content, suggesting the presence of protein may be more relevant than lipid content.
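The AUC summary used in this abstract is a standard trapezoidal integration of the ratings collected every 10 s. A minimal sketch, assuming illustrative rating values (the function name and all numbers below are ours, not the study's data):

```python
def burn_auc(ratings, dt_s=10.0):
    """Area under the curve of burn ratings sampled at a fixed interval,
    computed with the trapezoidal rule."""
    return sum(dt_s * (a + b) / 2 for a, b in zip(ratings, ratings[1:]))

# Hypothetical gLMS burn ratings at 10 s intervals over 2 min:
no_rinse = [30, 28, 25, 22, 20, 18, 16, 15, 14, 13, 12, 11, 10]
with_milk = [30, 20, 14, 10, 8, 7, 6, 5, 5, 4, 4, 3, 3]

# A smaller AUC reflects a larger cumulative reduction in burn.
assert burn_auc(with_milk) < burn_auc(no_rinse)
```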
Collapse
|
120
|
Andrade MJOD, Rezende MTC, Figueiredo BGDD, Farias CA, Santos NAD. Psychophysical measure of visual luminance contrast during a daily rhythm. Chronobiol Int 2019; 36:1496-1503. [PMID: 31409141 DOI: 10.1080/07420528.2019.1653904] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Abstract
This study evaluated visual sensitivity to luminance contrast during a daily period. Twenty-eight young male adults (M = 24.85; SD = 2.4) with normal color vision and 20/20 visual acuity participated in this study. The circadian pattern was assessed using the Karolinska Sleepiness Scale (KSS), the Pittsburgh Sleep Quality Index (PSQI), and a sleep diary. To measure the luminance contrast, we used version 11.0 of the Metropsis software with sinusoidal grating stimuli at spatial frequencies of 0.2, 0.6, 1, 3.1, 6.1, 8.8, 13.2, and 15.6 cycles per degree of visual angle (cpd). The stimuli were presented on a 19-inch color cathode ray tube (CRT) video monitor with a resolution of 1024 × 768 pixels, a refresh rate of 100 Hz, and a photopic luminance of 39.6 cd/m2. There was a significant difference in KSS on the weekdays [χ2(2) = 20.27; p = .001] and in the luminance contrast for frequencies of 13.2 cpd [χ2(2) = 8.27; p = .001] and 15.6 cpd [χ2(2) = 13.72; p = .041]. The results showed greater stability of the measurement during the afternoon and a reduction in the visual sensitivity in the high spatial frequencies during the night.
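Contrast sensitivity measurements of the kind described here conventionally use the Michelson contrast of the grating and its reciprocal relation to sensitivity. A sketch under standard definitions (the abstract does not state its formulas; the peak/trough luminances below are illustrative, chosen around the reported 39.6 cd/m2 mean):

```python
def michelson_contrast(l_max, l_min):
    """Michelson contrast of a sinusoidal grating from its peak and
    trough luminances (cd/m^2): (Lmax - Lmin) / (Lmax + Lmin)."""
    return (l_max - l_min) / (l_max + l_min)

def sensitivity(threshold_contrast):
    """Contrast sensitivity is the reciprocal of the threshold contrast."""
    return 1.0 / threshold_contrast

# A grating modulated +/- 4 cd/m^2 around a 39.6 cd/m^2 mean luminance:
c = michelson_contrast(43.6, 35.6)  # about 0.10
```

Lower detectable contrasts at a given spatial frequency correspond to higher sensitivity, which is what the study reports declining at 13.2 and 15.6 cpd at night.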
Collapse
|
121
|
Watson MR, Voloh B, Thomas C, Hasan A, Womelsdorf T. USE: An integrative suite for temporally-precise psychophysical experiments in virtual environments for human, nonhuman, and artificially intelligent agents. J Neurosci Methods 2019; 326:108374. [PMID: 31351974 DOI: 10.1016/j.jneumeth.2019.108374] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2019] [Revised: 06/24/2019] [Accepted: 07/24/2019] [Indexed: 11/30/2022]
Abstract
BACKGROUND There is a growing interest in complex, active, and immersive behavioral neuroscience tasks. However, the development and control of such tasks present unique challenges. NEW METHOD The Unified Suite for Experiments (USE) is an integrated set of hardware and software tools for the design and control of behavioral neuroscience experiments. The software, developed using the Unity video game engine, supports both active tasks in immersive 3D environments and static 2D tasks used in more traditional visual experiments. The custom USE SyncBox hardware, based around an Arduino Mega2560 board, integrates and synchronizes multiple data streams from different pieces of experimental hardware. The suite addresses three key issues with developing cognitive neuroscience experiments in Unity: tight experimental control, accurate sub-ms timing, and accurate gaze target identification. RESULTS USE is a flexible framework to realize experiments, enabling (i) nested control over complex tasks, (ii) flexible use of 3D or 2D scenes and objects, (iii) touchscreen-, button-, joystick- and gaze-based interaction, and (iv) complete offline reconstruction of experiments for post-processing and temporal alignment of data streams. COMPARISON WITH EXISTING METHODS Most existing experiment-creation tools are not designed to support the development of video-game-like tasks. Those that do are built on older or less popular game engines, and are neither as feature-rich nor as precise in their timing control as USE. CONCLUSIONS USE provides an integrated, open source framework for a wide variety of active behavioral neuroscience experiments using human and nonhuman participants, and artificially-intelligent agents.
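Temporal alignment of data streams, as USE's SyncBox enables, generically reduces to mapping each device's clock onto a reference clock using timestamps of shared sync pulses. The abstract does not describe USE's internal algorithm; the least-squares sketch below (function `align_clocks`, all timestamps illustrative) shows the general technique only.

```python
def align_clocks(t_local, t_ref):
    """Least-squares linear map from one hardware clock to another,
    fitted from timestamps of shared sync pulses recorded on both.
    Returns a function converting local timestamps to reference time."""
    n = len(t_local)
    mx = sum(t_local) / n
    my = sum(t_ref) / n
    sxx = sum((x - mx) ** 2 for x in t_local)
    sxy = sum((x - mx) * (y - my) for x, y in zip(t_local, t_ref))
    slope = sxy / sxx          # corrects clock-rate drift
    offset = my - slope * mx   # corrects clock offset
    return lambda t: slope * t + offset

# Three sync pulses seen at 0/1000/2000 ms locally, slightly later
# and drifting on the reference clock:
to_ref = align_clocks([0, 1000, 2000], [5.0, 1005.2, 2005.4])
```

Once fitted, every event timestamp in the local stream can be converted to reference time, which is the precondition for the offline reconstruction the abstract describes.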
Collapse
|
122
|
The human task-evoked pupillary response function is linear: Implications for baseline response scaling in pupillometry. Behav Res Methods 2019; 51:865-878. [PMID: 30264368 DOI: 10.3758/s13428-018-1134-4] [Citation(s) in RCA: 47] [Impact Index Per Article: 9.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
Abstract
The human task-evoked pupillary response provides a sensitive physiological index of the intensity and online resource demands of numerous cognitive processes (e.g., memory retrieval, problem solving, or target detection). Cognitive pupillometry is a well-established technique that relies upon precise measurement of these subtle response functions. Baseline variability of pupil diameter is a complex artifact that typically necessitates mathematical correction. A methodological paradox within pupillometry is that linear and nonlinear forms of baseline scaling both remain accepted baseline correction techniques, despite yielding highly disparate results. The task-evoked pupillary response (TEPR) could potentially scale nonlinearly, similar to autonomic functions such as heart rate, in which the amplitude of an evoked response diminishes as the baseline rises. Alternatively, the TEPR could scale similarly to the cortical hemodynamic response, as a linear function that is independent of its baseline. However, the TEPR cannot scale both linearly and nonlinearly. Our aim was to adjudicate between linear and nonlinear scaling of human TEPR. We manipulated baseline pupil size by modulating the illuminance in the testing room as participants heard abrupt pure-tone transitions (Exp. 1) or visually monitored word lists (Exp. 2). Phasic pupillary responses scaled according to a linear function across all lighting (dark, mid, bright) and task (tones, words) conditions, demonstrating that the TEPR is independent of its baseline amplitude. We discuss methodological implications and identify a need to reevaluate past pupillometry studies.
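The two baseline-correction schemes the study adjudicates between are commonly implemented as subtractive (linear) versus divisive (nonlinear) corrections. A minimal sketch, assuming illustrative pupil diameters (function names and values are ours, not the study's):

```python
def subtractive_correction(trace, baseline):
    """Linear (subtractive) correction: the evoked response is treated
    as independent of baseline pupil size, consistent with the study's
    finding."""
    return [x - baseline for x in trace]

def divisive_correction(trace, baseline):
    """Nonlinear (divisive) correction: the response is expressed as a
    proportion of baseline, so it shrinks as the baseline grows."""
    return [(x - baseline) / baseline for x in trace]

trace = [4.0, 4.2, 4.5, 4.3]  # pupil diameter (mm), illustrative
sub = subtractive_correction(trace, baseline=4.0)
div = divisive_correction(trace, baseline=4.0)
```

Because the two corrections yield disparate amplitudes whenever baselines differ across conditions, the paper's evidence for linear scaling favors the subtractive form.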
Collapse
|
123
|
Noel JP, Faivre N, Magosso E, Blanke O, Alais D, Wallace M. Multisensory perceptual awareness: Categorical or graded? Cortex 2019; 120:169-180. [PMID: 31323457 DOI: 10.1016/j.cortex.2019.05.018] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2018] [Revised: 03/31/2019] [Accepted: 05/30/2019] [Indexed: 11/18/2022]
Abstract
Neural evidence suggests that mechanisms associated with conscious access (i.e., the ability to report on a conscious state) are "all-or-none". Upon crossing some threshold, neural signals are globally broadcast throughout the brain and allow conscious reports. However, whether subjective experience (phenomenal consciousness) is categorical (i.e., transitioning abruptly from unconscious to conscious states) or graded (i.e., characterized by multiple intermediate states) remains an open question. To address this issue, we built a series of artificial neural networks containing distinct feedback connectivity from "multisensory" to "unisensory" cortices. In line with consciousness theories, we operationalized perceptual consciousness by the presence of feedback from higher-order nodes back to unisensory nodes which allow 'neural ignition' - a rapid, non-linear boost in response putatively leading to phenomenal consciousness. When simulating how these networks responded to unisensory and multisensory inputs, we found the fastest responses for multisensory presentations associated with multisensory feedback, and the slowest responses for multisensory presentations without feedback. Most interestingly, despite being built in line with "all-or-none" models of consciousness, multisensory stimuli associated with unisensory feedback (i.e., auditory or visual), and hence consistent with unisensory phenomenology according to theories of consciousness, generated intermediate reaction times. To extend these models to human perception and performance, we conducted extensive psychophysical testing in 29 subjects who each completed 10 h of a multisensory cue-congruency task. Consistent with the modeling results, we found that reaction times to multisensory cues reported as unisensory were intermediate between those of fully aware and fully unaware cues. These results support the existence of graded forms of phenomenological consciousness that can be instantiated by simple neural networks built in line with "all-or-none" models of consciousness.
Collapse
|
124
|
Creamer MS, Mano O, Tanaka R, Clark DA. A flexible geometry for panoramic visual and optogenetic stimulation during behavior and physiology. J Neurosci Methods 2019; 323:48-55. [PMID: 31103713 DOI: 10.1016/j.jneumeth.2019.05.005] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2019] [Revised: 05/11/2019] [Accepted: 05/12/2019] [Indexed: 11/26/2022]
Abstract
BACKGROUND To study visual processing, it is necessary to precisely control visual stimuli while recording neural and behavioral responses. It can be important to present stimuli over a broad area of the visual field, which can be technically difficult. NEW METHOD We present a simple geometry that can be used to display panoramic stimuli. A single digital light projector generates images that are reflected by mirrors onto flat screens that surround an animal. It can be used for behavioral and neurophysiological measurements, so virtually identical stimuli can be presented. Moreover, this geometry permits light from the projector to be used to activate optogenetic tools. RESULTS Using this geometry, we presented panoramic visual stimulation to Drosophila in three paradigms. We presented drifting contrast gratings while recording walking and turning speed. We used the same projector to activate optogenetic channels during visual stimulation. Finally, we used two-photon microscopy to record responses in direction-selective cells to drifting gratings. COMPARISON WITH EXISTING METHOD(S) Existing methods have typically required custom hardware or curved screens, while this method requires only flat back projection screens and a digital light projector. The projector generates images in real time and does not require pre-generated images. Finally, while many setups are large, this geometry occupies a 30 × 20 cm footprint with a 25 cm height. CONCLUSIONS This flexible geometry enables measurements of behavioral and neural responses to panoramic stimuli. This allows moderate throughput behavioral experiments with simultaneous optogenetic manipulation, with easy comparisons between behavior and neural activity using virtually identical stimuli.
Collapse
|
125
|
Abstract
Cerebellar plasticity is a critical mechanism for optimal feedback control. While Purkinje cell activity of the oculomotor vermis predicts eye movement speed and direction, more lateral areas of the cerebellum may play a role in more complex tasks, including decision-making. It remains an open question how this motor-cognitive functional dichotomy between medial and lateral areas of the cerebellum contributes to optimal feedback control. Here we show that elite athletes performing a trajectory-prediction go/no-go task exhibit superior subsecond trajectory prediction, accompanied by optimal eye movements and changes in cognitive load dynamics. Moreover, while interacting with the cerebral cortex, both the medial and lateral cerebellar networks are prominently activated during the fast feedback stage of the task, regardless of whether the correct response required a motor action. Our results show that cortico-cerebellar interactions are widespread during dynamic feedback and that experience can result in superior task-specific decision skills.
Collapse
|