1. Data Visualization Using R for Researchers Who Do Not Use R. Advances in Methods and Practices in Psychological Science 2022. DOI: 10.1177/25152459221074654.
Abstract
In addition to benefiting reproducibility and transparency, one of the advantages of using R is that researchers have a much larger range of fully customizable data visualization options than are typically available in point-and-click software, because of the open-source nature of R. These visualization options not only look attractive but can also increase transparency about the distribution of the underlying data, rather than relying on commonly used visualizations of aggregations, such as bar charts of means. In this tutorial, we provide a practical introduction to data visualization using R, aimed specifically at researchers who have little to no prior experience of using R. First, we detail the rationale for using R for data visualization and introduce the “grammar of graphics” that underlies data visualization using the ggplot2 package. The tutorial then walks the reader through how to replicate plots that are commonly available in point-and-click software, such as histograms and box plots, and shows how the code for these “basic” plots can be easily extended to less commonly available options, such as violin-box plots. The data set and code used in this tutorial, along with an interactive version with activity solutions, additional resources, and advanced plotting options, are available at https://osf.io/bj83f/.
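The "basic plot easily extended" idea the tutorial describes is language-agnostic. As a rough sketch of what a violin-plus-box plot involves (here in Python with matplotlib rather than the tutorial's own ggplot2 code, which lives at the OSF link; the simulated data and labels are illustrative only):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
# Simulated reaction times for two conditions (illustrative data only)
groups = [rng.normal(500, 50, 200), rng.normal(550, 60, 200)]

fig, ax = plt.subplots()
parts = ax.violinplot(groups, showextrema=False)  # distribution shape
ax.boxplot(groups, widths=0.15)                   # narrow box plot overlaid
ax.set_xticks([1, 2])
ax.set_xticklabels(["Condition A", "Condition B"])
ax.set_ylabel("Reaction time (ms)")
fig.savefig("violin_box.png")
```

In ggplot2 the analogous step is layering: adding a `geom_violin()` call before a narrow `geom_boxplot()` on the same plot object.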
2.
Abstract
We form very rapid personality impressions about speakers on hearing a single word. This implies that the acoustical properties of the voice (e.g., pitch) are very powerful cues when forming social impressions. Here, we aimed to explore how personality impressions for brief social utterances transfer across languages and whether acoustical properties play a similar role in driving personality impressions. Additionally, we examined whether evaluations are similar in the native and a foreign language of the listener. In two experiments, we asked Spanish listeners to evaluate personality traits from different instances of the Spanish word "Hola" (Experiment 1) and the English word "Hello" (Experiment 2), their native and a foreign language, respectively. The results revealed that listeners across languages form very similar personality impressions irrespective of whether the voices belong to the native or the foreign language of the listener. A social voice space was summarized by two main personality traits, one emphasizing valence (e.g., trust) and the other strength (e.g., dominance). Conversely, the acoustical properties that listeners pay attention to when judging others' personality vary across languages. These results provide evidence that social voice perception contains certain elements that are invariant across cultures/languages, while others are modulated by the cultural/linguistic background of the listener.
3. Correction: The sound of trustworthiness: Acoustic-based modulation of perceived voice personality. PLoS One 2019; 14:e0211282. PMID: 30653619. PMCID: PMC6336380. DOI: 10.1371/journal.pone.0211282.
4. Judgements of a speaker's personality are correlated across differing content and stimulus type. PLoS One 2018; 13:e0204991. PMID: 30286148. PMCID: PMC6171871. DOI: 10.1371/journal.pone.0204991.
Abstract
It has previously been shown that first impressions of a speaker's personality, whether accurate or not, can be judged from short utterances of vowels and greetings, as well as from prolonged sentences and readings of complex paragraphs. From these studies, it is established that listeners' judgements are highly consistent with one another, suggesting that different people judge personality traits in a similar fashion, with three key personality traits being related to measures of valence (associated with trustworthiness), dominance, and attractiveness. Yet, particularly in voice perception, limited research has established the reliability of such personality judgements across stimulus types of varying lengths. Here we investigate whether first impressions of trustworthiness, dominance, and attractiveness of novel speakers are related when a judgement is made on hearing both one word and one sentence from the same speaker. Secondly, we test whether what is said, that is, the content, influences the stability of personality ratings. Sixty Scottish voices (30 female) were recorded reading two texts: one of ambiguous content and one with socially relevant content. One word (~500 ms) and one sentence (~3000 ms) were extracted from each recording for each speaker. A total of 181 participants (138 female) rated either male or female voices across both content conditions (ambiguous, socially relevant) and both stimulus types (word, sentence) on one of the three personality traits (trustworthiness, dominance, attractiveness). Pearson correlations showed that personality ratings for words and sentences were strongly correlated, with no significant influence of content. In short, when establishing an impression of a novel speaker, judgements of three key personality traits are highly related whether one hears a single word or a full sentence, irrespective of what is being said. This finding is consistent with initial personality judgements serving as elucidators of approach or avoidance behaviour, without modulation by time or content. All data and sounds are available on OSF (osf.io/s3cxy).
5. The sound of trustworthiness: Acoustic-based modulation of perceived voice personality. PLoS One 2017; 12:e0185651. PMID: 29023462. PMCID: PMC5638233. DOI: 10.1371/journal.pone.0185651.
Abstract
When we hear a new voice we automatically form a "first impression" of the voice owner's personality; a single word is sufficient to yield ratings that are highly consistent across listeners. Past studies have shown correlations between personality ratings and acoustical parameters of the voice, suggesting a potential acoustical basis for voice personality impressions, but its nature and extent remain unclear. Here we used data-driven voice computational modelling to investigate the link between acoustics and perceived trustworthiness in the single word "hello". Two prototypical voice stimuli were generated based on the acoustical features of voices rated low or high in perceived trustworthiness, respectively, as well as a continuum of stimuli interpolated and extrapolated between these two prototypes. Five hundred listeners provided trustworthiness ratings on the stimuli via an online interface. We observed an extremely tight relationship between trustworthiness ratings and position along the trustworthiness continuum (r = 0.99). Not only were trustworthiness ratings higher for the high than for the low prototype, but the difference could be modulated quasi-linearly by reducing or exaggerating the acoustical difference between the prototypes, resulting in a strong caricaturing effect. The f0 trajectory, or intonation, appeared to be a parameter of particular relevance: hellos rated high in trustworthiness were characterized by a high starting f0, then a marked decrease at mid-utterance, finishing on a strong rise. These results demonstrate a strong acoustical basis for voice personality impressions, opening the door to multiple potential applications.
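The continuum logic (interpolation between prototypes, extrapolation for caricature) can be sketched as linear morphing in an acoustic feature space. This is a schematic only: it assumes each prototype reduces to a feature vector, the feature names and values are invented for illustration, and the study's actual stimuli were generated with voice-morphing software.

```python
import numpy as np

# Hypothetical prototypes: each a vector of acoustic features
# (e.g. mean f0 in Hz, f0 slope, HNR); values are illustrative only.
low_trust = np.array([110.0, -5.0, 12.0])
high_trust = np.array([135.0, 8.0, 18.0])

def morph(alpha):
    """alpha in [0, 1] interpolates between the prototypes;
    alpha > 1 (or < 0) extrapolates, i.e. caricatures the difference."""
    return low_trust + alpha * (high_trust - low_trust)

midpoint = morph(0.5)    # halfway along the trustworthiness continuum
caricature = morph(1.5)  # exaggerated "high trust" acoustics
```

The quasi-linear rating effect reported in the abstract corresponds to listener judgements tracking `alpha` almost perfectly along such a continuum.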
6. Low Vocal Pitch Preference Drives First Impressions Irrespective of Context in Male Voices but Not in Female Voices. Perception 2016; 45:946-963. DOI: 10.1177/0301006616643675.
Abstract
Vocal pitch has been found to influence judgments of perceived trustworthiness and dominance from a novel voice. However, the majority of findings arise from using only male voices and in context-specific scenarios. In two experiments, we first explore the influence of average vocal pitch on first-impression judgments of perceived trustworthiness and dominance, before establishing the existence of an overall preference for high or low pitch across genders. In Experiment 1, pairs of high- and low-pitched temporally reversed recordings of male and female vocal utterances were presented in a two-alternative forced-choice task. Results revealed a tendency to select the low-pitched voice over the high-pitched voice as more trustworthy, for both genders, and more dominant, for male voices only. Experiment 2 tested an overall preference for low-pitched voices, and whether judgments were modulated by speech content, using forward and reversed speech to manipulate context. Results revealed an overall preference for low pitch, irrespective of direction of speech, in male voices only. No such overall preference was found for female voices. We propose that an overall preference for low pitch is a default prior in male voices irrespective of context, whereas pitch preferences in female voices are more context- and situation-dependent. The present study confirms the important role of vocal pitch in the formation of first-impression personality judgments and advances understanding of the impact of context on pitch preferences across genders.
7. The human voice areas: Spatial organization and inter-individual variability in temporal and extra-temporal cortices. Neuroimage 2015; 119:164-74. PMID: 26116964. PMCID: PMC4768083. DOI: 10.1016/j.neuroimage.2015.06.050.
Abstract
fMRI studies increasingly examine functions and properties of non-primary areas of human auditory cortex. However, there is currently no standardized localization procedure to reliably identify specific areas across individuals, such as the standard ‘localizers’ available in the visual domain. Here we present an fMRI ‘voice localizer’ scan allowing rapid and reliable localization of the voice-sensitive ‘temporal voice areas’ (TVA) of human auditory cortex. We describe results obtained using this standardized localizer scan in a large cohort of normal adult subjects. Most participants (94%) showed bilateral patches of significantly greater response to vocal than non-vocal sounds along the superior temporal sulcus/gyrus (STS/STG). Individual activation patterns, although reproducible, showed high inter-individual variability in precise anatomical location. Cluster analysis of individual peaks from the large cohort highlighted three bilateral clusters of voice sensitivity, or “voice patches”, along posterior (TVAp), mid (TVAm), and anterior (TVAa) STS/STG, respectively. A series of extra-temporal areas, including bilateral inferior prefrontal cortex and the amygdalae, showed small but reliable voice sensitivity as part of a large-scale cerebral voice network. Stimuli for the voice localizer scan and probabilistic maps in MNI space are available for download. Highlights: three “voice patches” along the human superior temporal gyrus/sulcus; anatomical location reproducible within but variable between individuals; the extended voice-processing network includes the amygdala and prefrontal cortex; stimulus material for the “voice localizer” scan is available for download.
8. Familiarity with interest breeds gossip: contributions of emotion, expectation, and reputation. PLoS One 2014; 9:e104916. PMID: 25119267. PMCID: PMC4132070. DOI: 10.1371/journal.pone.0104916.
Abstract
Although gossip serves several important social functions, it has relatively infrequently been the topic of systematic investigation. In two experiments, we advance a cognitive-informational approach to gossip. Specifically, we sought to determine which informational components engender gossip. In Experiment 1, participants read brief passages about other people and indicated their likelihood to share this information. We manipulated target familiarity (celebrity, non-celebrity) and story interest (interesting, boring). While participants were more likely to gossip about celebrity than non-celebrity targets and interesting than boring stories, they were even more likely to gossip about celebrity targets embedded within interesting stories. In Experiment 2, we additionally probed participants' reactions to the stories concerning emotion, expectation, and reputation information conveyed. Analyses showed that while such information partially mediated target familiarity and story interest effects, only expectation and reputation accounted for the interactive pattern of gossip behavior. Our findings provide novel insights into the essential components and processing mechanisms of gossip.
9. Experience in judging intent to harm modulates parahippocampal activity: An fMRI study with experienced CCTV operators. Cortex 2014; 57:74-91. DOI: 10.1016/j.cortex.2014.02.026.
10. How do you say 'hello'? Personality impressions from brief novel voices. PLoS One 2014; 9:e90779. PMID: 24622283. PMCID: PMC3951273. DOI: 10.1371/journal.pone.0090779.
Abstract
On hearing a novel voice, listeners readily form personality impressions of that speaker. Accurate or not, these impressions are known to affect subsequent interactions; yet the underlying psychological and acoustical bases remain poorly understood. Furthermore, studies have hitherto focussed on extended speech as opposed to analysing the instantaneous impressions we obtain from first experience. In this paper, through a mass online rating experiment, 320 participants rated 64 sub-second vocal utterances of the word ‘hello’ on one of 10 personality traits. We show that: (1) personality judgements of brief utterances from unfamiliar speakers are consistent across listeners; (2) a two-dimensional ‘social voice space’ with axes mapping Valence (Trust, Likeability) and Dominance, each driven by differing combinations of vocal acoustics, adequately summarises ratings in both male and female voices; and (3) a positive combination of Valence and Dominance results in increased perceived male vocal Attractiveness, whereas perceived female vocal Attractiveness is largely controlled by increasing Valence. Results are discussed in relation to the rapid evaluation of personality and, in turn, the intent of others, as being driven by survival mechanisms via approach or avoidance behaviours. These findings provide empirical bases for predicting personality impressions from acoustical analyses of short utterances and for generating desired personality impressions in artificial voices.
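A two-dimensional ‘social voice space’ is the kind of structure that falls out of a dimensionality reduction over trait ratings. A minimal sketch, assuming a 64-voice by 10-trait ratings matrix of the shape described in the abstract (the rating values below are simulated, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical ratings: 64 voices x 10 personality traits (simulated)
ratings = rng.normal(size=(64, 10))
ratings -= ratings.mean(axis=0)  # centre each trait before PCA

# PCA via SVD: the first two components span a 2-D "social voice space"
_, s, vt = np.linalg.svd(ratings, full_matrices=False)
coords = ratings @ vt[:2].T                 # each voice's two coordinates
explained = (s**2 / (s**2).sum())[:2]       # variance share of each axis
```

In the study, the two recovered axes corresponded to Valence and Dominance; whether two components "adequately summarise" real ratings is judged from the explained-variance shares.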
11. Distinct patterns of functional brain connectivity correlate with objective performance and subjective beliefs. Proc Natl Acad Sci U S A 2013; 110:11577-82. PMID: 23801762. PMCID: PMC3710822. DOI: 10.1073/pnas.1301353110.
Abstract
The degree of correspondence between objective performance and subjective beliefs varies widely across individuals. Here we demonstrate that functional brain network connectivity measured before exposure to a perceptual decision task covaries with individual objective (type-I performance) and subjective (type-II performance) accuracy. Increases in connectivity with type-II performance were observed in networks measured while participants directed attention inward (focus on respiration), but not in networks measured during states of neutral (resting state) or exogenous attention. Measures of type-I performance were less sensitive to the subjects' specific attentional states from which the networks were derived. These results suggest the existence of functional brain networks indexing objective performance and accuracy of subjective beliefs distinctively expressed in a set of stable mental states.
12. Uni- and multisensory brain areas are synchronised across spectators when watching unedited dance recordings. Iperception 2013; 4:265-84. PMID: 24349687. PMCID: PMC3859570. DOI: 10.1068/i0536.
Abstract
The superior temporal sulcus (STS) and gyrus (STG) are commonly identified as functionally relevant for multisensory integration of audiovisual (AV) stimuli. However, most neuroimaging studies on AV integration used stimuli of short duration in explicit evaluative tasks. Importantly though, many of our AV experiences are of a long duration and ambiguous. It is unclear whether the enhanced activity in audio, visual, and AV brain areas would also be synchronised over time across subjects when they are exposed to such multisensory stimuli. We used intersubject correlation to investigate which brain areas are synchronised across novices for uni- and multisensory versions of a 6-min 26-s unedited recording of an unfamiliar Indian dance (Bharatanatyam). In Bharatanatyam, music and dance are choreographed together in a highly intermodal-dependent manner. Activity in the middle and posterior STG was significantly correlated between subjects and also showed significant enhancement for AV integration when the functional magnetic resonance signals were contrasted against each other using a general linear model conjunction analysis. These results extend previous studies by showing an intermediate step of synchronisation for novices: while there was a consensus across subjects' brain activity in areas relevant for unisensory processing and AV integration of related audio and visual stimuli, we found no evidence for synchronisation of higher-level cognitive processes, suggesting these were idiosyncratic.
13. Norm-based coding of voice identity in human auditory cortex. Curr Biol 2013; 23:1075-80. PMID: 23707425. PMCID: PMC3690478. DOI: 10.1016/j.cub.2013.04.055.
Abstract
Listeners exploit small interindividual variations around a generic acoustical structure to discriminate and identify individuals from their voice—a key requirement for social interactions. The human brain contains temporal voice areas (TVA) [1] involved in an acoustic-based representation of voice identity [2, 3, 4, 5, 6], but the underlying coding mechanisms remain unknown. Indirect evidence suggests that identity representation in these areas could rely on a norm-based coding mechanism [4, 7, 8, 9, 10, 11]. Here, we show by using fMRI that voice identity is coded in the TVA as a function of acoustical distance to two internal voice prototypes (one male, one female)—approximated here by averaging a large number of same-gender voices by using morphing [12]. Voices more distant from their prototype are perceived as more distinctive and elicit greater neuronal activity in voice-sensitive cortex than closer voices—a phenomenon not merely explained by neuronal adaptation [13, 14]. Moreover, explicit manipulations of distance-to-mean by morphing voices toward (or away from) their prototype elicit reduced (or enhanced) neuronal activity. These results indicate that voice-sensitive cortex integrates relevant acoustical features into a complex representation referenced to idealized male and female voice prototypes. More generally, they shed light on remarkable similarities in cerebral representations of facial and vocal identity. Highlights: identity coding in the temporal voice area in terms of acoustical distance to prototypes; description of the “voice space” in terms of simple acoustical measures; male and female prototypes are ideally smooth versions of the population means; comparable coding mechanism for identity across sensory modalities.
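The norm-based (distance-to-prototype) idea can be sketched numerically. This is a toy illustration: it assumes each voice reduces to an acoustic feature vector and approximates the prototype by the same-gender population mean, whereas the study built its prototypes by auditory morphing.

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical acoustic feature vectors for 32 same-gender voices
# (simulated; features might be f0, formants, HNR, etc.)
voices = rng.normal(size=(32, 5))

# Prototype approximated here as the population mean
prototype = voices.mean(axis=0)

# Norm-based coding: predicted response grows with distance to the prototype
distances = np.linalg.norm(voices - prototype, axis=1)
most_distinctive = int(np.argmax(distances))  # voice predicted to elicit the strongest response
```

Under this scheme, morphing a voice toward the prototype shrinks its distance (predicting reduced activity) and morphing away enlarges it, matching the manipulation described in the abstract.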
14. Using a Novel Motion Index to Study the Neural Basis of Event Segmentation. Iperception 2012. DOI: 10.1068/id225.
15. Intention perception in high functioning people with Autism Spectrum Disorders using animacy displays derived from human actions. J Autism Dev Disord 2011; 41:1053-63. PMID: 21069445. DOI: 10.1007/s10803-010-1130-8.
Abstract
The perception of intent in Autism Spectrum Disorders (ASD) often relies on synthetic animacy displays. This study tests intention perception in ASD via animacy stimuli derived from human motion. Using a forced-choice task, 28 participants (14 with ASD; 14 age- and verbal-IQ-matched controls) categorized displays of Chasing, Fighting, Flirting, Following, Guarding, and Playing, from two viewpoints (side, overhead), in both animacy and full-video displays. Detailed analysis revealed no differences between populations in accuracy or response patterns. Collapsing across groups revealed Following and video displays to be most accurately perceived. The stimuli and intentions used are compared to those of previous studies, and the implications of our results for the understanding of Theory of Mind in ASD are discussed.
16. Do distinct atypical cortical networks process biological motion information in adults with Autism Spectrum Disorders? Neuroimage 2011; 59:1524-33. PMID: 21888982. DOI: 10.1016/j.neuroimage.2011.08.033.
Abstract
Whether people with Autism Spectrum Disorders (ASDs) have a specific deficit when processing biological motion has been a topic of much debate. We used psychophysical methods to determine individual behavioural thresholds in a point-light direction discrimination paradigm for small but carefully matched groups of adults (N=10 per group) with and without ASDs. These thresholds were used to derive individual stimulus levels in an identical fMRI task, with the purpose of equalising task performance across all participants whilst inside the scanner. The results of this investigation show that, despite comparable behavioural performance both inside and outside the scanner, the group with ASDs shows a different pattern of BOLD activation from the TD group in response to the same stimulus levels. Furthermore, connectivity analysis suggests that the main difference between the groups is that the TD group utilise a unitary network with information passing from temporal to parietal regions, whilst the ASD group utilise two distinct networks: one utilising motion-sensitive areas and another utilising form-selective areas. In addition, a temporal-parietal link that is present in the TD group is missing in the ASD group. We tentatively propose that these differences may occur due to early dysfunctional connectivity in the brains of people with ASDs, which to some extent is compensated for by rewiring in high-functioning adults.
17. How Does Your Brain See “Living” Circles: A Study of Animacy and Intention Using fMRI. Iperception 2011. DOI: 10.1068/i200.
Abstract
It is widely reported that the perception of animacy can occur from simple displays of moving shapes, with participants attributing such qualities as goals, beliefs, and intentions. Furthermore, neuroimaging studies have shown that a network of brain areas, including regions of the temporal and frontal lobes, processes the percept. However, problems exist that prevent bridging fMRI studies on the perception of animacy and intention in shapes to the same percept of human movement. First, prior displays were often poorly controlled in terms of low-level visual cues, which blurs the actual root of the effect. Second, synthetically generated displays bear an unclear relationship to actual human movement, a problem previously addressed in behavioural studies via a systematic reduction of live visual footage of human actors. Therefore, we propose experiments that incorporate both synthetically generated animacy stimuli and displays derived from human motion. Following the classic Tremoulet and Feldman displays, stimuli are created that allow for manipulation of animacy and intent whilst controlling low-level visual cues. These displays are then used in a whole-brain fMRI study to locate neural regions sensitive to the perception of animacy and intention. Finally, within these regions, a region-of-interest analysis is performed to examine the change in brain activation when viewing animacy displays derived from human movement with varying intent (e.g., chasing or following). This study develops the relationship between the previous animacy literature and the real-world perception of intent.
18. Action expertise reduces brain activity for audiovisual matching actions: an fMRI study with expert drummers. Neuroimage 2011; 56:1480-92. PMID: 21397699. DOI: 10.1016/j.neuroimage.2011.03.009.
Abstract
When we observe someone perform a familiar action, we can usually predict what kind of sound that action will produce. Musical actions are over-experienced by musicians and not by non-musicians, and thus offer a unique way to examine how action expertise affects brain processes when the predictability of the produced sound is manipulated. We used functional magnetic resonance imaging to scan 11 drummers and 11 age- and gender-matched novices who made judgments on point-light drumming movements presented with sound. In Experiment 1, sound was synchronized or desynchronized with drumming strikes, while in Experiment 2 sound was always synchronized, but the natural covariation between sound intensity and velocity of the drumming strike was maintained or eliminated. Prior to MRI scanning, each participant completed psychophysical testing to identify personal levels of synchronous and asynchronous timing to be used in the two fMRI activation tasks. In both experiments, the drummers' brain activation was reduced in motor and action representation brain regions when sound matched the observed movements, and was similar to that of novices when sound was mismatched. This reduction in neural activity occurred bilaterally in the cerebellum and left parahippocampal gyrus in Experiment 1, and in the right inferior parietal lobule, inferior temporal gyrus, middle frontal gyrus and precentral gyrus in Experiment 2. Our results indicate that brain functions in action-sound representation areas are modulated by multimodal action expertise.
19. Book Review: Obstetric Anesthesia Handbook, Fifth edition. Anaesth Intensive Care 2011. DOI: 10.1177/0310057x1103900127.
20. Audiovisual integration of emotional signals from music improvisation does not depend on temporal correspondence. Brain Res 2010; 1323:139-48. DOI: 10.1016/j.brainres.2010.02.012.
21. Audiovisual congruence and the processing of synchrony in swing groove drumming. J Vis 2010. DOI: 10.1167/7.9.874.
22. Intention recognition in autistic spectrum condition (ASC) using video recordings and their corresponding animacy displays. J Vis 2010. DOI: 10.1167/6.6.1035.
23.

24. Contribution of configural information in a direction discrimination task: Evidence using a novel masking paradigm. Vision Res 2009; 49:2503-8. DOI: 10.1016/j.visres.2009.08.008.
25.

26.

27. Treatment of facial verrucae with topical imiquimod cream in a patient with human immunodeficiency virus. Acta Derm Venereol 2000; 80:134-5. PMID: 10877136.
Abstract
Imiquimod is a recently developed imidazoquinoline heterocyclic amine that acts as an immune response modifier. Treatment with topical 5% imiquimod cream has shown promising results in the treatment of genital warts in immunocompetent individuals. We report here the first case of facial verrucae successfully treated with topical 5% imiquimod cream in an individual with human immunodeficiency virus.
28.
Abstract
Psoriasis is commonly reported in association with HIV in adults. A 3-month-old girl with HIV presented with a widespread eruption and was diagnosed with psoriasis. This is the first infant reported with psoriasis in association with HIV infection. The relationship between the two entities is discussed, as is the role of treatment with zidovudine.