1. Fini C, Era V, Cuomo G, Falcinelli I, Gervasi MA, Candidi M, Mazzuca C, Liuzza MT, Winter B, Borghi AM. Digital connection, real bonding: Brief online chats boost interpersonal closeness regardless of the conversational topic. Heliyon 2025; 11:e42526. PMID: 40028546; PMCID: PMC11869026; DOI: 10.1016/j.heliyon.2025.e42526.
Abstract
This study explores how the quality of brief dyadic written exchanges (lasting under 5 min) on a virtual platform and the nature of the conversational topic (abstract or concrete) influence physical, interpersonal, and psychological closeness between interlocutors. In the first experiment, participants engaged in written conversations on either an abstract or a concrete topic under two conditions: (i) an interactive condition, where participants exchanged messages with another person, and (ii) a non-interactive condition, where participants wrote independently on the same topic, aware that another person was simultaneously doing the same. Results indicated that participants in the interactive condition reported feeling significantly closer to their interlocutor than those in the non-interactive condition. In addition, greater perceived pleasantness, intimacy, and importance of the other person's contribution to the conversation were associated with increased feelings of closeness. However, evidence was inconclusive regarding whether the other person's contribution interacted with the abstractness of the conversational topic in fostering feelings of closeness. The second experiment focused only on the interactive condition, where we examined interpersonal dynamics across different subcategories of abstract (e.g., philosophical/spiritual, emotional, social, physical/spatio-temporal) and concrete topics (e.g., tools, animals, food). The results of the first experiment were replicated, reinforcing the idea that the quality of the virtual exchange, rather than the topic itself, plays a crucial role in fostering feelings of closeness between individuals.
Affiliation(s)
- Chiara Fini
- Department of Dynamic and Clinical Psychology, and Health Studies, Sapienza University of Rome, Rome, Italy
- Vanessa Era
- Department of Psychology, Sapienza University of Rome, Rome, Italy
- IRCCS, Fondazione Santa Lucia, 00185, Rome, Italy
- Giovanna Cuomo
- Department of Psychology, Sapienza University of Rome, Rome, Italy
- IRCCS, Fondazione Santa Lucia, 00185, Rome, Italy
- Matteo Candidi
- Department of Psychology, Sapienza University of Rome, Rome, Italy
- IRCCS, Fondazione Santa Lucia, 00185, Rome, Italy
- Claudia Mazzuca
- Department of Dynamic and Clinical Psychology, and Health Studies, Sapienza University of Rome, Rome, Italy
- Marco Tullio Liuzza
- Department of Medical and Surgical Sciences, “Magna Graecia” University of Catanzaro, Catanzaro, Italy
- Bodo Winter
- Department of English Language and Literature, University of Birmingham, Birmingham, United Kingdom
- Anna M. Borghi
- Department of Dynamic and Clinical Psychology, and Health Studies, Sapienza University of Rome, Rome, Italy
- Institute of Cognitive Sciences and Technologies, Italian National Research Council, Rome, Italy
2. Lobben M, Laeng B. Zooming in and out of semantics: proximal-distal construal levels and prominence hierarchies. Front Psychol 2024; 15:1371538. PMID: 39323580; PMCID: PMC11423544; DOI: 10.3389/fpsyg.2024.1371538.
Abstract
We argue that the "Prominence Hierarchy" within linguistics can be subsumed under the "Construal Level Theory" within psychology and that a wide spectrum of grammatical phenomena, ranging from case assignment to number, definiteness, verbal agreement, voice, direct/inverse morphology, and syntactic word-order, responds to Prominence Hierarchies (PH), or semantic scales. In fact, the field of prominence hierarchies, as expressed through the languages of the world, continues to be riddled with riddles. We identify a set of conundrums: (A) vantage point and animacy, (B) individuation and narrow reference phenomena, (C) fronting mechanisms, (D) abstraction, and (E) cultural variance and flexibility. We here propose an account for the existence of these hierarchies and their pervasive effects on grammar by relying on psychological Construal Level Theory (CLT). We suggest that both PH and CLT structure the external world according to proximity or distance from the "Me, Here and Now" (MHN) perspective. In language, MHN has the effect of structuring grammars; in cognition, it structures our lives, our preferences, and choices.
Affiliation(s)
- Marit Lobben
- Department of Psychology, University of Oslo, Oslo, Norway
- Bruno Laeng
- Department of Psychology, University of Oslo, Oslo, Norway
3. Kwon J, Kotani H. Head motion synchrony in unidirectional and bidirectional verbal communication. PLoS One 2023; 18:e0286098. PMID: 37224121; DOI: 10.1371/journal.pone.0286098.
Abstract
Interpersonal communication includes verbal and nonverbal communication. Verbal communication comprises one-way forms (e.g., a speech or lecture) and interactive forms (e.g., daily conversations or meetings), both of which we frequently encounter. Nonverbal communication has considerable influence on interpersonal communication, and body motion synchrony is known to be an important factor for successful communication and social interaction. However, most research on body motion synchrony has examined either one-way verbal transmission settings or interactive verbal settings, and it remains unclear whether verbal directionality and interactivity affect body motion synchrony. One-way and two-way (interactive) verbal communication differ in their designed or undesigned leader-follower relationships and in the complexity and diversity of interpersonal interaction, with two-way verbal communication being more complex and diverse than one-way communication. In this study, we compared head motion synchrony between a one-way verbal communication condition (in which the roles of speaker and listener were fixed) and a two-way verbal communication condition (in which speaker and listener could freely engage in conversation). Although no statistically significant difference was found in synchrony activity (relative frequency), statistically significant differences were observed in synchrony direction (the temporal lead-lag structure, as in mimicry) and intensity. Specifically, the synchrony direction in two-way verbal communication was close to zero, whereas in one-way verbal communication the listener's movement was predominantly delayed relative to the speaker's. Furthermore, synchrony intensity, in terms of the degree of variation in the phase difference distribution, was significantly higher in one-way than in two-way verbal communication, with bigger time-shifts observed in the latter. This result suggests that verbal interaction does not affect the overall frequency of head motion synchrony but does affect its temporal lead-lag structure and coherence.
Affiliation(s)
- Jinhwan Kwon
- Department of Education, Kyoto University of Education, Kyoto, Japan
- Hiromi Kotani
- Department of Education, Kyoto University of Education, Kyoto, Japan
4. Reece A, Cooney G, Bull P, Chung C, Dawson B, Fitzpatrick C, Glazer T, Knox D, Liebscher A, Marin S. The CANDOR corpus: Insights from a large multimodal dataset of naturalistic conversation. Sci Adv 2023; 9:eadf3197. PMID: 37000886; PMCID: PMC10065445; DOI: 10.1126/sciadv.adf3197.
Abstract
People spend a substantial portion of their lives engaged in conversation, and yet, our scientific understanding of conversation is still in its infancy. Here, we introduce a large, novel, and multimodal corpus of 1656 conversations recorded in spoken English. This 7+ million word, 850-hour corpus totals more than 1 terabyte of audio, video, and transcripts, with moment-to-moment measures of vocal, facial, and semantic expression, together with an extensive survey of speakers' postconversation reflections. By taking advantage of the considerable scope of the corpus, we explore many examples of how this large-scale public dataset may catalyze future research, particularly across disciplinary boundaries, as scholars from a variety of fields appear increasingly interested in the study of conversation.
Affiliation(s)
- Gus Cooney
- University of Pennsylvania, Philadelphia, PA 19104, USA
- Peter Bull
- DrivenData Inc., Berkeley, CA, 94709, USA
- Dean Knox
- University of Pennsylvania, Philadelphia, PA 19104, USA
5. Boytos AS, Costabile KA. Shared reality, memory goal satisfaction, and psychological well-being during conversational remembering. Memory 2023; 31:689-704. PMID: 36933230; DOI: 10.1080/09658211.2023.2188643.
Abstract
Conversational remembering, or sharing autobiographical memories with others, occurs frequently in everyday communication. The current project examined how the experience of shared reality with a conversation partner when describing autobiographical memories to them can operate to enhance the self, social, and directive uses of a recalled memory and explored the role of shared reality experienced as a result of conversational remembering in psychological well-being. In this project, conversational remembering was examined using experimental (Study 1) and daily diary (Study 2) methodologies. Results indicated that experiencing a shared reality during conversational remembering of an autobiographical memory enhanced self, social, and directive memory goal fulfilment and was positively associated with greater psychological well-being. The current investigation highlights important benefits of sharing our life stories with others, especially those with whom we develop a sense of shared reality.
6. Morin O. The puzzle of ideography. Behav Brain Sci 2022; 46:e233. PMID: 36254782; DOI: 10.1017/s0140525x22002801.
Abstract
An ideography is a general-purpose code made of pictures that do not encode language, which can be used autonomously - not just as a mnemonic prop - to encode information on a broad range of topics. Why are viable ideographies so hard to find? I contend that self-sufficient graphic codes need to be narrowly specialized. Writing systems are only an apparent exception: At their core, they are notations of a spoken language. Even if they also encode nonlinguistic information, they are useless to someone who lacks linguistic competence in the encoded language or a related one. The versatility of writing is thus vicarious: Writing borrows it from spoken language. Why is it so difficult to build a fully generalist graphic code? The most widespread answer points to a learnability problem. We possess specialized cognitive resources for learning spoken language, but lack them for graphic codes. I argue in favor of a different account: What is difficult about graphic codes is not so much learning or teaching them as getting every user to learn and teach the same code. This standardization problem does not affect spoken or signed languages as much. Those are based on cheap and transient signals, allowing for easy online repairing of miscommunication, and require face-to-face interactions where the advantages of common ground are maximized. Graphic codes lack these advantages, which makes them smaller in size and more specialized.
Affiliation(s)
- Olivier Morin
- Max Planck Institute for Geoanthropology, Minds and Traditions Research Group, Jena, Germany ; https://www.shh.mpg.de/94549/themintgroup
- Institut Jean Nicod, CNRS, ENS, PSL University, Paris, France
7. Facial Emotion Expressions in Human–Robot Interaction: A Survey. Int J Soc Robot 2022. DOI: 10.1007/s12369-022-00867-0.
Abstract
Facial expressions are an ideal means of communicating one’s emotions or intentions to others. This overview focuses on human facial expression recognition as well as robotic facial expression generation. For human facial expression recognition, both recognition on predefined datasets and recognition in real time are covered. For robotic facial expression generation, both hand-coded and automated methods are covered, i.e., methods in which a robot’s facial features (eyes, mouth) are moved either by hand-coding or automatically using machine learning techniques. There are already plenty of studies that achieve high accuracy for emotion expression recognition on predefined datasets, but accuracy for facial expression recognition in real time is comparatively lower. In the case of expression generation in robots, while most robots are capable of making basic facial expressions, few studies enable robots to do so automatically. In this overview, state-of-the-art research in facial emotion expressions during human–robot interaction is discussed, leading to several possible directions for future research.
8. Itzchakov G, Weinstein N, Saluk D, Amar M. Connection Heals Wounds: Feeling Listened to Reduces Speakers' Loneliness Following a Social Rejection Disclosure. Pers Soc Psychol Bull 2022:1461672221100369. PMID: 35726696; DOI: 10.1177/01461672221100369.
Abstract
Memories of rejection contribute to feeling lonely. However, high-quality listening that conveys well-meaning attention and understanding when speakers discuss social rejection may help them to reconnect. Speakers may experience less loneliness because they feel close and connected (relatedness) to the listener and because listening supports self-congruent expression (autonomy). Five experiments (total N = 1,643) manipulated listening during visualized (Studies 1, 4, 5) and actual (Studies 2, 3) conversations. We used different methods (video vignettes; in-person; computer-mediated; recall; written scenarios) to compare high-quality with regular (all studies) and poor (Study 1) listening. Findings across studies showed that high-quality listening reduced speakers' state loneliness after they shared past experiences of social rejection. Parallel mediation analyses indicated that both feeling related to the listener and autonomy satisfaction (particularly its self-congruence component; Study 5) mediated the effect of listening on loneliness. These results provide novel insights into the hitherto unexplored effect of listening on state loneliness.
Affiliation(s)
- Moty Amar
- Ono Academic College, Kiryat Ono, Israel
9. Itzchakov G, Reis HT, Weinstein N. How to foster perceived partner responsiveness: High-quality listening is key. Soc Personal Psychol Compass 2021. DOI: 10.1111/spc3.12648.
Affiliation(s)
- Guy Itzchakov
- Department of Human Services, University of Haifa, Haifa, Israel
- Harry T. Reis
- Department of Psychology, University of Rochester, Rochester, New York, USA
10. Kriz TD, Kluger AN, Lyddy CJ. Feeling Heard: Experiences of Listening (or Not) at Work. Front Psychol 2021; 12:659087. PMID: 34381396; PMCID: PMC8350774; DOI: 10.3389/fpsyg.2021.659087.
Abstract
Listening has been identified as a key workplace skill, important for ensuring high-quality communication, building relationships, and motivating employees. However, recent research has increasingly suggested that speaker perceptions of good listening do not necessarily align with researcher or listener conceptions of good listening. While many of the benefits of workplace listening rely on employees feeling heard, little is known about what constitutes this subjective perception. To better understand what leaves employees feeling heard or unheard, we conducted 41 interviews with bank employees, who collectively provided 81 stories about listening interactions they had experienced at work. Whereas prior research has typically characterized listening as something that is perceived through responsive behaviors within conversation, our findings suggest conversational behaviors alone are often insufficient to distinguish between stories of feeling heard vs. feeling unheard. Instead, our interviewees felt heard or unheard only when listeners met their subjective needs and expectations. Sometimes their needs and expectations could be fulfilled through conversation alone, and other times action was required. Notably, what would be categorized objectively as good listening during an initial conversation could later be counteracted by a failure to follow through in ways expected by the speaker. In concert, these findings contribute to both theory and practice by clarifying how listening behaviors take on meaning from the speakers' perspective and the circumstances under which action is integral to feeling heard. Moreover, they point toward the various ways listeners can engage to help speakers feel heard in critical conversations.
Affiliation(s)
- Tiffany D Kriz
- Department of Organizational Behaviour, Human Resources Management, and Management, MacEwan University, Edmonton, AB, Canada
- Avraham N Kluger
- Department of Organizational Behavior, School of Business Administration, The Hebrew University, Jerusalem, Israel
- Christopher J Lyddy
- Department of Management, School of Business, Providence College, Providence, RI, United States
11. Oertel C, Jonell P, Kontogiorgos D, Mora KF, Odobez JM, Gustafson J. Towards an Engagement-Aware Attentive Artificial Listener for Multi-Party Interactions. Front Robot AI 2021; 8:555913. PMID: 34277714; PMCID: PMC8280470; DOI: 10.3389/frobt.2021.555913.
Abstract
Listening to one another is essential to human-human interaction. In fact, we humans spend a substantial part of our day listening to other people, in private as well as in work settings. Attentive listening serves the function of gathering information for oneself, but at the same time, it also signals to the speaker that he/she is being heard. To deduce whether our interlocutor is listening to us, we rely on reading his/her nonverbal cues, very much like how we also use nonverbal cues to signal our attention. Such signaling becomes more complex when we move from dyadic to multi-party interactions. Understanding how humans use nonverbal cues in a multi-party listening context not only increases our understanding of human-human communication but also aids the development of successful human-robot interactions. This paper brings together previous analyses of listener behavior in human-human multi-party interaction and provides novel insights into gaze patterns between the listeners in particular. We investigate whether the gaze patterns and feedback behavior observed in human-human dialogue are also beneficial for the perception of a robot in multi-party human-robot interaction. To answer this question, we implement an attentive listening system that generates multi-modal listening behavior based on our human-human analysis. We compare our system to a baseline system that does not differentiate between different listener types in its behavior generation, and evaluate it in terms of participants' perception of the robot and their behavior, as well as the perception of third-party observers.
Affiliation(s)
- Catharine Oertel
- Department of Intelligent Systems, Interactive Intelligence, Delft University of Technology, Delft, Netherlands
- Patrik Jonell
- Department of Intelligent Systems, Division of speech music and hearing, KTH Royal Institute of Technology, Stockholm, Sweden
- Dimosthenis Kontogiorgos
- Department of Intelligent Systems, Division of speech music and hearing, KTH Royal Institute of Technology, Stockholm, Sweden
- Jean-Marc Odobez
- Perception and Activity Understanding, Idiap Research Institute, Martigny, Switzerland
- Joakim Gustafson
- Department of Intelligent Systems, Division of speech music and hearing, KTH Royal Institute of Technology, Stockholm, Sweden
12. Comparing Frequency of Listener Responses Between Adolescents with and Without ASD During Conversation. J Autism Dev Disord 2021; 52:1007-1018. PMID: 33840008; PMCID: PMC8854326; DOI: 10.1007/s10803-021-04996-9.
Abstract
In conversation, the listener plays an active role in conversation success, specifically by providing listener feedback which signals comprehension and interest. Previous work has shown that frequency of feedback positively correlates with conversation success. Because individuals with ASD are known to struggle with various conversational skills, e.g., turn-taking and commenting, this study examines their use of listener feedback by comparing the frequency of feedback produced by 20 adolescents with ASD and 23 neurotypical (NT) adolescents. We coded verbal and nonverbal listener feedback during the time when participants were listening in a semi-structured interview with a research assistant. Results show that ASD participants produced significantly fewer instances of listener feedback than NT adolescents, which likely contributes to difficulties with social interactions.
13. Breil C, Böckler A. Look away to listen: the interplay of emotional context and eye contact in video conversations. Vis Cogn 2021. DOI: 10.1080/13506285.2021.1908470.
Affiliation(s)
- Christina Breil
- Department of Psychology, Leibniz-University Hanover, Hanover, Germany
- Anne Böckler
- Department of Psychology, Leibniz-University Hanover, Hanover, Germany
- Max-Planck-Institute for Human Cognitive and Brain Science, Leipzig, Germany
14. Paxton A, Roche JM, Ibarra A, Tanenhaus MK. Predictions of Miscommunication in Verbal Communication During Collaborative Joint Action. J Speech Lang Hear Res 2021; 64:613-627. PMID: 33502916; PMCID: PMC8632505; DOI: 10.1044/2020_jslhr-20-00137.
Abstract
Purpose: The purpose of the current study was to examine the lexical and pragmatic factors that may contribute to turn-by-turn failures in communication (i.e., miscommunication) that arise regularly in interactive communication. Method: Using a corpus from a collaborative dyadic building task, we investigated what differentiated successful from unsuccessful communication and potential factors associated with the choice to provide greater lexical information to a conversation partner. Results: We found that more successful dyads' language tended to be associated with greater lexical density, lower ambiguity, and fewer questions. We also found participants were more lexically dense when accepting and integrating a partner's information (i.e., grounding) but were less lexically dense when responding to a question. Finally, an exploratory analysis suggested that dyads tended to spend more lexical effort when responding to an inquiry and used assent language accurately, that is, only when communication was successful. Conclusion: Together, the results suggest that miscommunication both emerges from and benefits from ambiguous and lexically dense utterances.
Affiliation(s)
- Alexandra Paxton
- Department of Psychological Sciences, University of Connecticut, Storrs
- Center for the Ecological Study of Perception and Action, University of Connecticut, Storrs
- Jennifer M. Roche
- Department of Speech Pathology & Audiology, School of Health Sciences, Kent State University, OH
- Alyssa Ibarra
- Department of Brain & Cognitive Sciences, University of Rochester, NY
15. The many minds problem: disclosure in dyadic versus group conversation. Curr Opin Psychol 2020; 31:22-27. DOI: 10.1016/j.copsyc.2019.06.032.
16. Ivey AE, Daniels T. Systematic Interviewing Microskills and Neuroscience: Developing Bridges between the Fields of Communication and Counseling Psychology. Int J Listen 2016. DOI: 10.1080/10904018.2016.1173815.
17. Grysman A, Denney A. Gender, experimenter gender and medium of report influence the content of autobiographical memory report. Memory 2016; 25:132-145. PMID: 26775811; DOI: 10.1080/09658211.2015.1133829.
Abstract
In this study, we examined the role of context in autobiographical memory narratives, specifically as it pertains to gender among emerging adults. Male and female participants reported stressful events in their lives in the presence of an experimenter, and were randomly assigned either to report events verbally or type them, and to report in the presence of a male or female experimenter. Narratives were coded for factual and interpretive content. Results revealed that men verbally reporting to women reported longer narratives than all other groups. Women's narrative length did not vary by medium of report or conversational partner, but women used proportionally fewer internal state phrases when verbally reporting to men than when reporting to women. Women also used proportionally fewer evaluative statements in verbal reports than in typed narratives. Of these important interactions among context, gender, and experimenter gender, some findings, such as men's longer narratives and women's reduced internal states, were counter to expectations. These findings highlight the importance of methodological influences in autobiographical memory studies, in regard to both the context generated by experimental methods, and how gender differences are understood.
Affiliation(s)
- Azriel Grysman
- Psychology Department, Hamilton College, Clinton, NY, USA
- Amelia Denney
- Psychology Department, Hamilton College, Clinton, NY, USA
18. Don’t be fooled! Attentional responses to social cues in a face-to-face and video magic trick reveals greater top-down control for overt than covert attention. Cognition 2016; 146:136-42. DOI: 10.1016/j.cognition.2015.08.005.
19. Technology-Based Support for Older Adult Communication in Safety-Critical Domains. Psychol Learn Motiv 2016. DOI: 10.1016/bs.plm.2015.09.008.
20. Moran N, Hadley LV, Bader M, Keller PE. Perception of 'Back-Channeling' Nonverbal Feedback in Musical Duo Improvisation. PLoS One 2015; 10:e0130070. PMID: 26086593; PMCID: PMC4473276; DOI: 10.1371/journal.pone.0130070.
Abstract
In witnessing face-to-face conversation, observers perceive authentic communication according to the social contingency of nonverbal feedback cues (‘back-channeling’) by non-speaking interactors. The current study investigated the generality of this function by focusing on nonverbal communication in musical improvisation. A perceptual experiment was conducted to test whether observers can reliably identify genuine versus fake (mismatched) duos from musicians’ nonverbal cues, and how this judgement is affected by observers’ musical background and rhythm perception skill. Twenty-four musicians were recruited to perform duo improvisations, which included solo episodes, in two styles: standard jazz (where rhythm is based on a regular pulse) or free improvisation (where rhythm is non-pulsed). The improvisations were recorded using a motion capture system to generate 16 ten-second point-light displays (with audio) of the soloist and the silent non-soloing musician (‘back-channeler’). Sixteen further displays were created by splicing soloists with back-channelers from different duos. Participants (N = 60) with various musical backgrounds were asked to rate the point-light displays as either real or fake. Results indicated that participants were sensitive to the real/fake distinction in the free improvisation condition independently of musical experience. Individual differences in rhythm perception skill did not account for performance in the free condition, but were positively correlated with accuracy in the standard jazz condition. These findings suggest that the perception of back-channeling in free improvisation is not dependent on music-specific skills but is a general ability. The findings invite further study of the links between interpersonal dynamics in conversation and musical interaction.
Affiliation(s)
- Nikki Moran
- Institute for Music in Human and Social Development (IMHSD), Reid School of Music, University of Edinburgh, Edinburgh, United Kingdom
- Lauren V. Hadley
- Institute for Music in Human and Social Development (IMHSD), Reid School of Music, University of Edinburgh, Edinburgh, United Kingdom
- Psychology, School of Philosophy, Psychology and Language Sciences, University of Edinburgh, Edinburgh, United Kingdom
- Maria Bader
- Research Group: Music Cognition and Action, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Peter E. Keller
- Research Group: Music Cognition and Action, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Music Cognition and Action Group, The MARCS Institute, University of Western Sydney, Penrith, Australia
|
21
|
Levinson SC, Torreira F. Timing in turn-taking and its implications for processing models of language. Front Psychol 2015; 6:731. [PMID: 26124727 PMCID: PMC4464110 DOI: 10.3389/fpsyg.2015.00731] [Citation(s) in RCA: 151] [Impact Index Per Article: 15.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2015] [Accepted: 05/16/2015] [Indexed: 12/03/2022] Open
Abstract
The core niche for language use is in verbal interaction, involving the rapid exchange of turns at talking. This paper reviews the extensive literature about this system, adding new statistical analyses of behavioral data where they have been missing, demonstrating that turn-taking has the systematic properties originally noted by Sacks et al. (1974; hereafter SSJ). This system poses some significant puzzles for current theories of language processing: the gaps between turns are short (of the order of 200 ms), but the latencies involved in language production are much longer (over 600 ms). This seems to imply that participants in conversation must predict (or 'project' as SSJ have it) the end of the current speaker's turn in order to prepare their response in advance. This in turn implies some overlap between production and comprehension despite their use of common processing resources. Collecting together what is known behaviorally and experimentally about the system, the space for systematic explanations of language processing for conversation can be significantly narrowed, and we sketch a first model of the mental processes involved for the participant preparing to speak next.
Affiliation(s)
- Stephen C. Levinson
- Language and Cognition Department, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
- Francisco Torreira
- Language and Cognition Department, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
|
22
|
Pasupathi M, Billitteri J. Being and Becoming through Being Heard: Listener Effects on Stories and Selves. Int J Listening 2015. [DOI: 10.1080/10904018.2015.1029363] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
|
23
|
Holler J, Kendrick KH. Unaddressed participants' gaze in multi-person interaction: optimizing recipiency. Front Psychol 2015; 6:98. [PMID: 25709592 PMCID: PMC4321333 DOI: 10.3389/fpsyg.2015.00098] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2014] [Accepted: 01/19/2015] [Indexed: 12/02/2022] Open
Abstract
One of the most intriguing aspects of human communication is its turn-taking system. It requires the ability to process on-going turns at talk while planning the next, and to launch this next turn without considerable overlap or delay. Recent research has investigated the eye movements of observers of dialogs to gain insight into how we process turns at talk. More specifically, this research has focused on the extent to which we are able to anticipate the end of current and the beginning of next turns. At the same time, there has been a call for shifting experimental paradigms exploring social-cognitive processes away from passive observation toward on-line processing. Here, we present research that responds to this call by situating state-of-the-art technology for tracking interlocutors’ eye movements within spontaneous, face-to-face conversation. Each conversation involved three native speakers of English. The analysis focused on question–response sequences involving just two of those participants, thus rendering the third momentarily unaddressed. Temporal analyses of the unaddressed participants’ gaze shifts from current to next speaker revealed that unaddressed participants are able to anticipate next turns, and moreover, that they often shift their gaze toward the next speaker before the current turn ends. However, an analysis of the complex structure of turns at talk revealed that the planning of these gaze shifts virtually coincides with the points at which the turns first become recognizable as possibly complete. We argue that the timing of these eye movements is governed by an organizational principle whereby unaddressed participants shift their gaze at a point that appears interactionally most optimal: It provides unaddressed participants with access to much of the visual, bodily behavior that accompanies both the current speaker’s and the next speaker’s turn, and it allows them to display recipiency with regard to both speakers’ turns.
Affiliation(s)
- Judith Holler
- Language and Cognition Department, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Kobin H Kendrick
- Language and Cognition Department, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
|
24
|
Abstract
BACKGROUND Nonverbal communication is a critical feature of successful social interaction and interpersonal rapport. Social exclusion is a feature of schizophrenia. This experimental study investigated if the undisclosed presence of a patient with schizophrenia in interaction changes nonverbal communication (ie, speaker gesture and listener nodding). METHOD 3D motion-capture techniques recorded 20 patient (1 patient, 2 healthy participants) and 20 control (3 healthy participants) interactions. Participants rated their experience of rapport with each interacting partner. Patients' symptoms, social cognition, and executive functioning were assessed. Four hypotheses were tested: (1) Compared to controls, patients display fewer speaking gestures and listener nods. (2) Patients' increased symptom severity and poorer social cognition are associated with patients' reduced gesture and nods. (3) Patients' partners compensate for patients' reduced nonverbal behavior by gesturing more when speaking and nodding more when listening. (4) Patients' reduced nonverbal behavior, increased symptom severity, and poorer social cognition are associated with others experiencing poorer rapport with the patient. RESULTS Patients gestured less when speaking. Patients with more negative symptoms nodded less as listeners, while their partners appeared to compensate by gesturing more as speakers. Patients with more negative symptoms also gestured more when speaking, which, alongside increased negative symptoms and poorer social cognition, was associated with others experiencing poorer patient rapport. CONCLUSIONS Patients' symptoms are associated with the nonverbal behavior of patients and their partners. Patients' increased negative symptoms and gesture use are associated with poorer interpersonal rapport. This study provides specific evidence about how negative symptoms impact patients' social interactions.
Affiliation(s)
- Mary Lavelle
- School of Electronic Engineering & Computer Science, University of London, London, UK.
|
25
|
Bodie GD, St. Cyr K, Pence M, Rold M, Honeycutt J. Listening Competence in Initial Interactions I: Distinguishing Between What Listening Is and What Listeners Do. Int J Listening 2012. [DOI: 10.1080/10904018.2012.639645] [Citation(s) in RCA: 40] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
|