1. Witharana P, Chang L, Maier R, Ogundimu E, Wilkinson C, Athanasiou T, Akowuah E. Feasibility study of rehabilitation for cardiac patients aided by an artificial intelligence web-based programme: a randomised controlled trial (RECAP trial) - a study protocol. BMJ Open 2024; 14:e079404. PMID: 38688664; PMCID: PMC11086203; DOI: 10.1136/bmjopen-2023-079404.
Abstract
INTRODUCTION: Cardiac rehabilitation (CR) delivered by rehabilitation specialists in a healthcare setting is effective in improving functional capacity and reducing readmission rates after cardiac surgery. It is also associated with a reduction in cardiac mortality and recurrent myocardial infarction. This trial assesses the feasibility of a home-based CR programme delivered using a mobile application (app).
METHODS: The Rehabilitation through Exercise prescription for Cardiac patients using an Artificial intelligence web-based Programme (RECAP) randomised controlled feasibility trial is a single-centre prospective study in which patients will be allocated in a 1:1 ratio to a home-based CR programme delivered using a mobile app with accelerometers, or to standard hospital-based rehabilitation classes. The home-based CR programme will employ artificial intelligence to prescribe weekly exercise goals to the participants. The trial will recruit 70 patients in total. The primary objectives are to evaluate participant recruitment and dropout rates, assess the feasibility of randomisation, determine acceptability to participants and staff, assess the rates of potential outcome measures, and determine hospital resource allocation to inform the design of a larger randomised controlled trial for clinical efficacy and health economic evaluation. Secondary objectives include evaluation of health-related quality of life and 6-minute walk distance.
ETHICS AND DISSEMINATION: The RECAP trial received a favourable opinion from the Berkshire Research Ethics Committee in September 2022 (IRAS 315483). Trial results will be made available through publication in peer-reviewed journals and presented at relevant scientific meetings.
TRIAL REGISTRATION NUMBER: ISRCTN97352737.
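The abstract specifies only the 1:1 allocation ratio, not the allocation mechanism. As a hypothetical illustration of how 1:1 allocation is commonly implemented in trials of this size, here is a minimal permuted-block sketch in Python; the block size, seed, and arm labels are assumptions, not details from the protocol.

```python
# Minimal permuted-block 1:1 allocation sketch (illustrative only).
# Block size 4 and the arm labels are assumptions; the RECAP protocol
# does not describe its randomisation mechanism in the abstract.
import random

def permuted_block_allocation(n_participants, block_size=4, seed=2022):
    """Return a 1:1 allocation list built from randomly shuffled blocks."""
    assert block_size % 2 == 0, "block size must be even for a 1:1 ratio"
    rng = random.Random(seed)
    arms = []
    while len(arms) < n_participants:
        block = ["app-based CR", "hospital-based CR"] * (block_size // 2)
        rng.shuffle(block)  # each block contains equal numbers of both arms
        arms.extend(block)
    return arms[:n_participants]

allocation = permuted_block_allocation(70)  # the trial targets 70 patients
print(allocation[:8])
```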
Affiliation(s)
- Pasan Witharana
- Academic Cardiovascular Unit, South Tees Hospitals NHS Foundation Trust, Middlesbrough, UK
- Department of Surgery and Cancer, Imperial College London, London, UK
- Lisa Chang
- Academic Cardiovascular Unit, South Tees Hospitals NHS Foundation Trust, Middlesbrough, UK
- Rebecca Maier
- Academic Cardiovascular Unit, South Tees Hospitals NHS Foundation Trust, Middlesbrough, UK
- Christopher Wilkinson
- Academic Cardiovascular Unit, South Tees Hospitals NHS Foundation Trust, Middlesbrough, UK
- Hull York Medical School, University of York, York, UK
- Thanos Athanasiou
- Department of Surgery and Cancer, Imperial College London, London, UK
- Enoch Akowuah
- Academic Cardiovascular Unit, South Tees Hospitals NHS Foundation Trust, Middlesbrough, UK
2. Bi T, Luo W, Wu J, Shao B, Tan Q, Kou H. Effect of facial emotion recognition learning transfers across emotions. Front Psychol 2024; 15:1310101. PMID: 38312392; PMCID: PMC10834736; DOI: 10.3389/fpsyg.2024.1310101.
Abstract
Introduction: Perceptual learning of facial expressions has been shown to be specific to the trained expression, indicating separate encoding of the emotional content of different expressions. However, little is known about the specificity of emotion recognition training with the visual search paradigm, or about the sensitivity of learning to near-threshold stimuli.
Methods: In the present study, we adopted a visual search paradigm to measure the recognition of facial expressions. In Experiments 1, 2, and 3 (Exp1, Exp2, Exp3), subjects were trained for 8 days to search for a target expression in an array of faces presented for 950 ms, 350 ms, and 50 ms, respectively. In Experiment 4 (Exp4), we trained subjects to search for a triangle target and then tested them on a facial expression search task. Before and after training, subjects were tested on the trained and untrained facial expressions presented for 950 ms, 650 ms, 350 ms, or 50 ms.
Results: Training led to large improvements in the recognition of facial emotions only when the faces were presented long enough (Exp1: 85.89%; Exp2: 46.05%). Furthermore, the training effect transferred to the untrained expression. When the faces were presented briefly (Exp3), however, the training effect was small (6.38%). In Exp4, the training effect did not transfer across stimulus categories.
Discussion: Our findings reveal cross-emotion transfer of facial expression recognition training in a visual search task. In addition, learning hardly affected the recognition of near-threshold expressions.
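As a worked illustration of how improvement percentages like those above are typically derived from pre- and post-training performance in perceptual-learning studies, here is a minimal Python sketch; the formula and the numbers are assumptions for illustration, not the authors' data or analysis code.

```python
# Hedged sketch: relative improvement and a simple transfer index,
# as commonly computed in perceptual-learning studies. Illustrative only.
def percent_improvement(pre, post):
    """Relative gain in performance from pre-test to post-test, in percent."""
    return (post - pre) / pre * 100.0

def transfer_index(trained_gain, untrained_gain):
    """Improvement on the untrained stimulus relative to the trained one
    (1.0 would indicate complete transfer)."""
    return untrained_gain / trained_gain

# Illustrative accuracy values, not data from the study:
gain_trained = percent_improvement(pre=0.45, post=0.83)
gain_untrained = percent_improvement(pre=0.47, post=0.80)
print(f"trained gain: {gain_trained:.1f}%  "
      f"transfer index: {transfer_index(gain_trained, gain_untrained):.2f}")
```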
Affiliation(s)
- Taiyong Bi
- Research Center of Humanities and Medicine, Zunyi Medical University, Zunyi, China
- Wei Luo
- The Institute of Ethnology and Anthropology, Chinese Academy of Social Sciences, Beijing, China
- Jia Wu
- Research Center of Humanities and Medicine, Zunyi Medical University, Zunyi, China
- Boyao Shao
- Research Center of Humanities and Medicine, Zunyi Medical University, Zunyi, China
- Qingli Tan
- Research Center of Humanities and Medicine, Zunyi Medical University, Zunyi, China
- Hui Kou
- Research Center of Humanities and Medicine, Zunyi Medical University, Zunyi, China
3. Matsufuji Y, Ueji K, Yamamoto T. Predicting Perceived Hedonic Ratings through Facial Expressions of Different Drinks. Foods 2023; 12:3490. PMID: 37761199; PMCID: PMC10528552; DOI: 10.3390/foods12183490.
Abstract
Previous studies have established the utility of facial expressions as an objective approach for assessing the hedonics (overall pleasure) of food and beverages. This study aimed to validate those earlier findings, showing that facial expressions elicited by tastants can predict the perceived hedonic ratings of those tastants. Facial expressions of 29 female participants, aged 18-55 years, were captured with a digital camera while they consumed various concentrations of solutions representing the five basic tastes. The expressions were assessed with FaceReader, a widely used facial expression analysis application, which scores seven emotions (surprise, happiness, fear, neutral, disgust, sadness, and anger) from 0 to 1 as a measure of emotional intensity. Simultaneously, participants rated the hedonics of each solution on a scale from -5 (extremely unpleasant) to +5 (extremely pleasant). A predictive model for perceived hedonic ratings was built using multiple linear regression. The model was then evaluated on emotion scores from 11 additional taste solutions sampled from 20 other participants. The predicted hedonic ratings showed strong correlation and agreement with the observed ratings, supporting the validity of the earlier findings even with different software, taste stimuli, and participants. We discuss some limitations and practical implications of our technique for predicting food and beverage hedonics from facial expressions.
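A minimal Python sketch of the pipeline described above follows: fit a multiple linear regression from seven emotion scores to hedonic ratings, then apply it to held-out solutions from new participants. The synthetic data, column ordering, and sample shapes are placeholders, not the study's data; FaceReader itself is not called here.

```python
# Hedged sketch of the hedonic-prediction pipeline: seven emotion scores
# (0-1) -> perceived hedonic rating (-5..+5) via multiple linear regression.
# All data below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

EMOTIONS = ["surprise", "happiness", "fear", "neutral",
            "disgust", "sadness", "anger"]

rng = np.random.default_rng(0)
# X: one row per (participant, solution) with seven emotion scores;
# y: the corresponding perceived hedonic rating.
X_train = rng.uniform(0, 1, size=(29 * 10, len(EMOTIONS)))  # 29 raters x 10 solutions
y_train = rng.uniform(-5, 5, size=X_train.shape[0])

model = LinearRegression().fit(X_train, y_train)

# Apply the fitted formula to new solutions from other participants.
X_new = rng.uniform(0, 1, size=(20 * 11, len(EMOTIONS)))    # 20 raters x 11 solutions
predicted_hedonics = model.predict(X_new)
print(dict(zip(EMOTIONS, model.coef_.round(2))))            # per-emotion weights
```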
Affiliation(s)
- Takashi Yamamoto
- Department of Nutrition, Faculty of Health Sciences, Kio University, 4-2-2 Umami-naka, Koryo, Kitakatsuragi, Nara 635-0832, Japan (shared by Y.M. and K.U.)
4. Hendel E, Gallant A, Mazerolle MP, Cyr SI, Roy-Charland A. Exploration of visual factors in the disgust-anger confusion: the importance of the mouth. Cogn Emot 2023; 37:835-851. PMID: 37190958; DOI: 10.1080/02699931.2023.2212892.
Abstract
According to the perceptual-attentional limitations hypothesis, the confusion between expressions of disgust and anger may be due to the difficulty of perceptually distinguishing the two, or to insufficient attention to their distinctive cues. The objective of the current study was to test this hypothesis as an explanation for the disgust-anger confusion in adults using eye movements. In Experiment 1, participants identified each emotion across 96 trials composed of anger and disgust prototypes. In Experiment 2, fixation points oriented participants' attention toward the eyes, the nose, or the mouth of each prototype. Results revealed that disgust was recognised less accurately than anger (Experiments 1 and 2), especially when the mouth was open (Experiments 1 and 2), even when attention was oriented toward the distinctive features of disgust (Experiment 2). Additionally, when attention was oriented toward specific zones, the eyes (which contain characteristics of anger) received the longest dwell times, followed by the nose (which contains characteristics of disgust; Experiment 2). Thus, although participants may attend to the distinguishing features of disgust and anger, doing so may not help them accurately recognise each prototype.
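Dwell time per facial region is the key eye-movement measure here. Below is a hedged Python sketch of how fixations are commonly aggregated into dwell times per area of interest (AOI); the AOI coordinates and the fixation record format are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: summing fixation durations into dwell time per AOI
# (eyes, nose, mouth). AOI rectangles are illustrative pixel coordinates.
from collections import defaultdict

# Each AOI is (x_min, y_min, x_max, y_max) in screen pixels (assumed).
AOIS = {
    "eyes":  (300, 200, 500, 260),
    "nose":  (360, 260, 440, 330),
    "mouth": (340, 330, 460, 390),
}

def dwell_times(fixations):
    """fixations: iterable of (x, y, duration_ms) -> total dwell per AOI."""
    totals = defaultdict(float)
    for x, y, duration in fixations:
        for name, (x0, y0, x1, y1) in AOIS.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += duration
                break  # AOIs assumed non-overlapping
    return dict(totals)

print(dwell_times([(400, 230, 180.0), (390, 300, 220.0), (410, 350, 140.0)]))
```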
Affiliation(s)
- Emalie Hendel
- École de psychologie, Université de Moncton, Moncton, Canada
- Adèle Gallant
- École de psychologie, Université de Moncton, Moncton, Canada
5. FEDA: Fine-grained emotion difference analysis for facial expression recognition. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104209.
6. Decoding six basic emotions from brain functional connectivity patterns. Sci China Life Sci 2022; 66:835-847. PMID: 36378473; DOI: 10.1007/s11427-022-2206-3.
Abstract
Although distinctive neural and physiological states are thought to underlie the six basic emotions, these emotions often cannot be distinguished from one another on the basis of functional magnetic resonance imaging (fMRI) voxelwise activation (VA) patterns. Here, we hypothesize that functional connectivity (FC) patterns across brain regions may carry emotion-representation information beyond VA patterns. We collected whole-brain fMRI data while human participants viewed pictures of faces expressing one of the six basic emotions (anger, disgust, fear, happiness, sadness, and surprise) or showing neutral expressions. We obtained whole-brain FC patterns for each emotion and applied multivariate pattern decoding in the FC representation space. The whole-brain FC patterns successfully distinguished not only the six basic emotions from neutral expressions but also each basic emotion from the others. For each basic emotion, we identified an emotion-representation network extending beyond the classical emotion-processing regions. Finally, we demonstrated that, within the same brain regions, FC-based decoding consistently outperformed VA-based decoding. Taken together, our findings reveal that FC patterns contain emotional information and argue for further attention to the contribution of FCs to emotion processing.
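A minimal Python sketch of FC-based decoding as described follows: build a correlation matrix from ROI time series, vectorise its upper triangle, and classify emotions with cross-validated linear classification. The ROI count, classifier choice, and random data are assumptions; the authors' exact pipeline may differ.

```python
# Hedged sketch of FC-pattern decoding with synthetic data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

def fc_features(timeseries):
    """timeseries: (n_timepoints, n_rois) -> vectorised upper-triangle FC."""
    fc = np.corrcoef(timeseries.T)      # (n_rois, n_rois) correlation matrix
    iu = np.triu_indices_from(fc, k=1)  # off-diagonal upper triangle only
    return fc[iu]

rng = np.random.default_rng(0)
n_samples, n_rois, n_tp = 120, 90, 200  # e.g., 20 samples per emotion x 6 emotions
X = np.array([fc_features(rng.standard_normal((n_tp, n_rois)))
              for _ in range(n_samples)])
y = np.repeat(np.arange(6), n_samples // 6)  # six basic-emotion labels

clf = make_pipeline(StandardScaler(), LinearSVC(max_iter=5000))
print(cross_val_score(clf, X, y, cv=5).mean())  # ~1/6 chance on random data
```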
7. Rinck M, Primbs MA, Verpaalen IAM, Bijlstra G. Face masks impair facial emotion recognition and induce specific emotion confusions. Cogn Res Princ Implic 2022; 7:83. PMID: 36065042; PMCID: PMC9444085; DOI: 10.1186/s41235-022-00430-5.
Abstract
Face masks are now worn frequently to reduce the spread of the SARS-CoV-2 virus. Their health benefits are indisputable, but covering the lower half of one's face also makes it harder for others to recognize facial expressions of emotion. Three experiments were conducted to determine how strongly masks impair the recognition of different facial expressions, and which emotions are confused with each other. In each experiment, participants had to recognize facial expressions of happiness, sadness, anger, surprise, fear, and disgust, as well as a neutral expression, displayed by male and female actors of the Radboud Faces Database. On half of the 168 trials, the lower part of the face was covered by a face mask. In all experiments, facial emotion recognition (FER) was about 20 percentage points worse for masked faces than for unmasked ones (68% correct vs. 88%). The impairment was largest for disgust, followed by fear, surprise, sadness, and happiness; it was not significant for anger or the neutral expression. As predicted, participants frequently confused emotions that share activation of the visible muscles in the upper half of the face. They also displayed response biases in these confusions: they frequently misinterpreted disgust as anger, fear as surprise, and sadness as neutral, whereas the opposite confusions were less frequent. We conclude that face masks cause a marked impairment of FER, and that a person perceived as angry, surprised, or neutral may actually be disgusted, fearful, or sad, respectively. This may lead to misunderstandings, confusion, and inadequate reactions by perceivers.
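The response-bias finding above rests on an asymmetric confusion matrix. Here is a small Python sketch of that computation; the label set and trial data are illustrative placeholders, not the study's responses.

```python
# Hedged sketch: confusion matrix and confusion asymmetry from trial-level
# emotion-recognition responses. Data below are illustrative only.
import numpy as np
from sklearn.metrics import confusion_matrix

LABELS = ["happy", "sad", "angry", "surprised", "fearful", "disgusted", "neutral"]

true = ["disgusted", "disgusted", "angry", "fearful",   "sad",     "sad"]
pred = ["angry",     "disgusted", "angry", "surprised", "neutral", "sad"]

cm = confusion_matrix(true, pred, labels=LABELS)

def confusion_asymmetry(cm, labels, a, b):
    """How often a is misread as b, minus the reverse (positive = bias toward b)."""
    i, j = labels.index(a), labels.index(b)
    return int(cm[i, j] - cm[j, i])

print(confusion_asymmetry(cm, LABELS, "disgusted", "angry"))  # disgust -> anger bias
```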
Affiliation(s)
- Mike Rinck
- Behavioural Science Institute, Radboud University Nijmegen, PO Box 9104, 6500 HE Nijmegen, The Netherlands
- Maximilian A Primbs
- Behavioural Science Institute, Radboud University Nijmegen, PO Box 9104, 6500 HE Nijmegen, The Netherlands
- Iris A M Verpaalen
- Behavioural Science Institute, Radboud University Nijmegen, PO Box 9104, 6500 HE Nijmegen, The Netherlands
- Gijsbert Bijlstra
- Behavioural Science Institute, Radboud University Nijmegen, PO Box 9104, 6500 HE Nijmegen, The Netherlands
8. Posterior-prefrontal and medial orbitofrontal regions play crucial roles in happiness and sadness recognition. Neuroimage Clin 2022; 35:103072. PMID: 35689975; PMCID: PMC9192961; DOI: 10.1016/j.nicl.2022.103072.
Abstract
Highlights:
- Brain areas underlying trade-off relations between emotions were identified.
- Damage to the posterior-prefrontal (PPF) area reduces the accuracy of happiness recognition.
- Damage to the PPF increases the accuracy of sadness recognition.
- A similar tendency was observed in medial orbitofrontal regions for sadness recognition.
- Only the deficit in sadness recognition, not happiness, persisted into the chronic phase.
The core brain regions responsible for basic human emotions are not yet fully understood. We investigated the key areas responsible for emotion recognition of facial expressions of happiness and sadness using data obtained from patients who underwent local brain resection. A total of 44 patients with right cerebral hemispheric brain tumors and 33 healthy volunteers were enrolled and subjected to a facial expression recognition test. Voxel-based lesion-symptom mapping was performed to investigate the relationship between the accuracy of emotion recognition and the resected regions. Consequently, trade-off relationships were discovered: the posterior-prefrontal region was related to a low score of happiness recognition and a high score of sadness recognition (disorder-of-happiness group), whereas the medial orbitofrontal region was related to a low score of sadness recognition and a high score of happiness recognition (disorder-of-sadness group). The emotion recognition score in both the happiness and sadness disorder groups was significantly lower than that in the control group (p = 0.0009 and p = 0.021, respectively). Interestingly, the deficit in happiness recognition was temporary, whereas the deficit in sadness recognition persisted during the chronic phase. Using graph theoretical analysis, we identified structural connectivity between the posterior-prefrontal and medial orbitofrontal regions. When either of these regions was damaged, the tract volume connecting them was significantly reduced (p = 0.013). These results indicate that the posterior-prefrontal and medial orbitofrontal regions may be crucial for maintaining a balance between happiness and sadness recognition in humans. Investigating the clinical impact of certain area resections using lesion studies combined with connectivity analysis is a useful neuroimaging method for understanding neural networks.
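Voxel-based lesion-symptom mapping, the core method above, compares behavioural scores between patients whose resections include versus spare each voxel. Below is a hedged mass-univariate sketch in Python with synthetic data; thresholding and the multiple-comparison correction a real analysis requires are omitted for brevity.

```python
# Hedged sketch of classic VLSM: per-voxel t-test of behavioural scores,
# lesioned vs. spared patients. All data below are synthetic placeholders.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_patients, n_voxels = 44, 5000                     # flattened voxel grid (illustrative)
lesions = rng.random((n_patients, n_voxels)) < 0.1  # True where a voxel is resected
scores = rng.normal(0.8, 0.1, n_patients)           # e.g., happiness recognition accuracy

t_map = np.full(n_voxels, np.nan)
for v in range(n_voxels):
    inside, outside = scores[lesions[:, v]], scores[~lesions[:, v]]
    if inside.size >= 5 and outside.size >= 5:      # minimum-overlap criterion
        t_map[v] = ttest_ind(inside, outside).statistic

print(np.nanmin(t_map), np.nanmax(t_map))           # candidate lesion-symptom voxels
```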
9. Hildebrandt T, Peyser D, Sysko R. Lessons learned developing and testing family-based interoceptive exposure for adolescents with low-weight eating disorders. Int J Eat Disord 2021; 54:2037-2045. PMID: 34528269; PMCID: PMC8712094; DOI: 10.1002/eat.23605.
Abstract
BACKGROUND: Anorexia nervosa (AN) usually develops in early adolescence and is characterized by high rates of morbidity and mortality. Family-based therapy (FBT) is the leading evidence-based treatment for adolescents with AN, but not all patients experience sufficient improvement. The purpose of this manuscript is to describe the development of, and subsequent experience with, Family-Based Interoceptive Exposure (FBT-IE) for adolescents with a broader range of low-weight eating disorders.
METHODS: The novel IE-based behavioral intervention is a six-session family-based treatment module designed to directly target and modify disgust by altering prefrontal regulation of the insula response to aversive stimuli and decreasing visceral sensitivity (e.g., bloating). Each session teaches a new skill for tolerating distress from the visceral sensations associated with disgust, along with an in-vivo "IE exercise" in which the family is provided with a meal-replacement shake of unknown content and caloric density.
RESULTS: In this novel treatment, the patient learns to tolerate disgust in the context of a challenging food stimulus as a way to increase consumption of restricted foods outside of session.
CONCLUSION: We discuss successes and challenges in delivering this treatment to patients with low-weight eating disorders and propose future directions for the intervention.
Affiliation(s)
- Tom Hildebrandt
- Eating and Weight Disorders Program, Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Deena Peyser
- Eating and Weight Disorders Program, Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Robyn Sysko
- Eating and Weight Disorders Program, Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, New York, USA
10. Liu L, du Toit M, Weidemann G. Infants are sensitive to cultural differences in emotions at 11 months. PLoS One 2021; 16:e0257655. PMID: 34591863; PMCID: PMC8483341; DOI: 10.1371/journal.pone.0257655.
Abstract
A myriad of emotion perception studies has shown that infants can discriminate between emotional categories, yet there has been little investigation of infants' perception of cultural differences in emotions. Hence, little is known about the extent to which culture-specific emotion information is recognised at the beginning of life. Caucasian Australian infants aged 10-12 months participated in a visual paired-comparison task in which their preferential looking patterns to three types of infant-directed emotions (anger, happiness, surprise) from two different cultures (Australian, Japanese) were examined. Differences in racial appearance were controlled. Infants exhibited preferential looking to Japanese over Caucasian Australian mothers' angry and surprised expressions, whereas no difference was observed in trials involving East-Asian Australian mothers. In addition, infants preferred Caucasian Australian mothers' happy expressions. These findings suggest that 11-month-olds are sensitive to cultural differences in spontaneous infant-directed emotional expressions when these are combined with a difference in racial appearance.
Affiliation(s)
- Liquan Liu
- School of Psychology, Western Sydney University, Sydney, Australia
- MARCS Institute for Brain and Behaviour, Western Sydney University, Sydney, Australia
- Center for Multilingualism in Society Across the Lifespan, University of Oslo, Oslo, Norway
- Mieke du Toit
- School of Psychology, Western Sydney University, Sydney, Australia
- Gabrielle Weidemann
- School of Psychology, Western Sydney University, Sydney, Australia
- MARCS Institute for Brain and Behaviour, Western Sydney University, Sydney, Australia
11. Analysis of facial expressions in response to basic taste stimuli using artificial intelligence to predict perceived hedonic ratings. PLoS One 2021; 16:e0250928. PMID: 33945568; PMCID: PMC8096070; DOI: 10.1371/journal.pone.0250928.
Abstract
Taste stimuli can induce a variety of physiological reactions depending on the quality and/or hedonics (overall pleasure) of the tastants, and objective methods for measuring these reactions have long been desired. In this study, we used artificial intelligence (AI) technology to analyze facial expressions, with the aim of assessing its utility as an objective method for evaluating food and beverage hedonics compared with conventional subjective (perceived) evaluation methods. The face of each participant (10 females; age range, 21-22 years) was photographed with a smartphone camera a few seconds after she drank each of 10 solutions containing the five basic tastes at different hedonic tones. Each image was then uploaded to an AI application, which returned scores from 0 to 100 for eight emotions (surprise, happiness, fear, neutral, disgust, sadness, anger, and embarrassment). For the perceived evaluations, each participant also rated the hedonics of each solution from -10 (extremely unpleasant) to +10 (extremely pleasant). From these data, we conducted a multiple linear regression analysis to obtain a formula predicting perceived hedonic ratings. The applicability of the formula was examined by applying it to emotion scores for another 11 taste solutions obtained from another 12 participants of both genders (age range, 22-59 years). The predicted hedonic ratings showed good correlation and concordance with the perceived ratings. To our knowledge, this is the first study to demonstrate a model that predicts hedonic ratings from emotional facial expressions elicited by food and beverage stimuli.
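One standard way to quantify the "good correlation and concordance" between predicted and perceived ratings reported above is Lin's concordance correlation coefficient (CCC). The Python sketch below is illustrative; the paper's exact agreement statistic is not specified in the abstract, and the rating vectors are placeholders.

```python
# Hedged sketch: Lin's concordance correlation coefficient between
# predicted and perceived hedonic ratings. Data are illustrative only.
import numpy as np

def concordance_ccc(x, y):
    """Lin's CCC between two paired rating vectors (1.0 = perfect agreement)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                    # population variances (ddof=0)
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

predicted = [3.1, -2.0, 0.5, 4.2, -4.1]
perceived = [2.8, -2.5, 1.0, 4.0, -3.6]
print(round(concordance_ccc(predicted, perceived), 3))
```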
12. Namba S, Kambara T. Semantics Based on the Physical Characteristics of Facial Expressions Used to Produce Japanese Vowels. Behav Sci (Basel) 2020; 10:E157. PMID: 33066229; PMCID: PMC7602070; DOI: 10.3390/bs10100157.
Abstract
Previous studies have reported that verbal sounds are associated, non-arbitrarily, with specific meanings (e.g., sound symbolism and onomatopoeia), including visual information such as facial expressions; however, it remains unclear how the mouth shapes used to utter each vowel create our semantic impressions. We asked 81 Japanese participants to evaluate the mouth shapes associated with five Japanese vowels using 10 five-item semantic differential scales. The results reveal that the physical characteristics of the facial expressions (mouth shapes) induced specific evaluations. For example, the mouth shape made to voice the vowel "a" had the largest, widest, and highest facial components of the five, and people perceived words containing that vowel sound as bigger. The mouth shape used to pronounce the vowel "i" was perceived as more likable than those of the other four vowels. These findings indicate that the mouth shapes producing vowels carry specific meanings. Our study provides clues about the meaning of verbal sounds and about what facial expressions in communication represent to the perceiver.
Affiliation(s)
- Shushi Namba
- Psychological Process Team, BZP, Robotics Project, RIKEN, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 6190288, Japan
- Toshimune Kambara
- Department of Psychology, Graduate School of Education, Hiroshima University, 1-1-1 Kagamiyama, Higashi-Hiroshima, Hiroshima 7398524, Japan