1. Imbriaco G, Capitano M, Rocchi M, Suhan A, Tacci A, Monesi A, Sebastiani S, Samolsky Dekel BG. Relationship between noise levels and intensive care patients' clinical complexity: An observational simulation study. Nurs Crit Care 2024;29:555-563. PMID: 37265028. DOI: 10.1111/nicc.12934.
Abstract
BACKGROUND Noise pollution in intensive care units is a relevant problem, associated with psychological and physiological consequences for patients and healthcare staff. Sources of noise pollution include medical equipment, alarms, communication tools, staff activities, and conversations. AIMS To explore the cumulative effects of noise caused by an increasing number and type of medical devices in an intensive care setting on simulated patients with increasing clinical complexity, and, secondly, to measure the sound levels of medical device alarms and nursing activities, evaluating their role as potentially disruptive noises. STUDY DESIGN Observational simulation study (reported according to the STROBE checklist). Using an electronic sound meter, the sound levels of an intensive care room were measured in seven simulated clinical scenarios on a single day (9 March 2022), each featuring an increasing number of devices, hypothetically corresponding to greater patient clinical complexity. Noise levels of medical device alarms and of specific nursing activities performed at a distance of three meters from the sound meter were then analysed. RESULTS The empty room's mean baseline noise level was 37.8 (±0.7) dBA; across the simulated scenarios, noise ranged between 45.3 (±1.0) and 53.5 (±1.5) dBA. Alarms ranged between 76.4 and 81.3 dBA, while nursing tasks (closing a drawer, opening a saline bag overwrap, or opening sterile packages) and speaking were all over 80 dBA. The noisiest activity was opening a sterile package (98 dBA). CONCLUSION An increased number of medical devices, an expression of patients' higher clinical complexity, is not a significant cause of increased noise. Some specific nursing activities and conversations produce higher noise levels than medical devices and alarms. These findings suggest that further research is needed to assess the relationships between these factors and to encourage adequate noise reduction strategies.
RELEVANCE TO CLINICAL PRACTICE Excessive noise levels in the intensive care unit are a clinical issue that negatively affects patients' and healthcare providers' well-being. The increase in baseline room noise from medical devices is generally limited; typical nursing tasks and conversations produce higher noise levels than medical devices and alarms. These findings could help raise awareness among healthcare professionals of noise sources. The noisiest components of the environment can be modified through staff behaviour, promoting noise reduction strategies and improving the critical care environment.
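The small gap between the empty-room baseline (37.8 dBA) and the busiest scenario (53.5 dBA) is consistent with how incoherent sound sources combine: decibels are logarithmic, so several similar devices add only a few dBA to the total, while a single loud event dominates everything. A minimal sketch of that arithmetic (the source levels below are hypothetical, chosen for illustration, not the study's measurements):

```python
import math

def combine_db(levels):
    """Combine sound pressure levels (dBA) from independent, incoherent sources."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels))

# Two equal 45 dBA sources combine to ~48 dBA, not 90 dBA.
print(round(combine_db([45.0, 45.0]), 1))  # 48.0

# Four devices at 45 dBA barely register once a single 98 dBA event
# (e.g., opening a sterile package) occurs: the loud event dominates.
print(round(combine_db([45.0] * 4 + [98.0]), 1))  # 98.0
```

This is why doubling the number of similar devices raises the room level by only about 3 dBA, whereas behavioural sources near 98 dBA swamp the device contribution entirely.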
Affiliation(s)
- Guglielmo Imbriaco
- Centrale Operativa 118 Emilia Est, Prehospital Emergency Dispatch Center, Helicopter Emergency Medical Service, Maggiore Hospital Carlo Alberto Pizzardi, Bologna, Italy
- Critical Care Nursing Master Course, University of Bologna, Bologna, Italy
- Martina Capitano
- Emergency Department, Maggiore Hospital Carlo Alberto Pizzardi, Azienda USL di Bologna, Bologna, Italy
- Margherita Rocchi
- Intensive Care Unit, Nuovo San Giovanni di Dio hospital, AUSL Toscana Centro, Florence, Italy
- Aglaia Suhan
- Medical Department (COVID-19), Madre Teresa di Calcutta hospital, Padova, Italy
- Alice Tacci
- Neonatal Intensive Care Unit, Maggiore Hospital, AOU Parma, Parma, Italy
- Alessandro Monesi
- Critical Care Nursing Master Course, University of Bologna, Bologna, Italy
- Intensive Care Unit, Maggiore hospital Carlo Alberto Pizzardi, Azienda USL di Bologna, Bologna, Italy
- Stefano Sebastiani
- Critical Care Nursing Master Course, University of Bologna, Bologna, Italy
- IRCCS Azienda Ospedaliero-Universitaria di Bologna, Bologna, Italy
- Boaz Gedaliahu Samolsky Dekel
- Critical Care Nursing Master Course, University of Bologna, Bologna, Italy
- IRCCS Azienda Ospedaliero-Universitaria di Bologna, Bologna, Italy
- Department of Medical and Surgical Sciences (DIMEC), University of Bologna, Bologna, Italy
2. Wieczorek K, Ananth S, Valazquez-Pimentel D. Acoustic biomarkers in asthma: a systematic review. J Asthma 2024:1-18. PMID: 38634718. DOI: 10.1080/02770903.2024.2344156.
Abstract
OBJECTIVE Current asthma monitoring methods, such as peak expiratory flow testing, have important limitations. The emergence of automated acoustic sound analysis, capturing cough, wheeze, and inhaler use, offers a promising avenue for improving asthma diagnosis and monitoring. This systematic review evaluated the validity of acoustic biomarkers in supporting the diagnosis and monitoring of asthma. DATA SOURCES A search was performed using two databases (PubMed and Embase) for all relevant studies published before November 2023. STUDY SELECTION 27 studies were included for analysis. Eligible studies focused on acoustic signals as digital biomarkers in asthma, utilizing recording devices to register or analyze sound. RESULTS Various respiratory acoustic signal types were analyzed, with cough and wheeze being predominant. Data collection methods included smartphones, custom sensors, and digital stethoscopes. Across all studies, automated acoustic algorithms achieved an average cough and wheeze detection accuracy of 88.7% (range: 61.0%-100.0%), with a median of 92.0%. The sensitivity of sound detection ranged from 54.0% to 100.0%, with a median of 90.3%; specificity ranged from 67.0% to 99.7%, with a median of 95.0%. Moreover, a risk of bias was identified in 70.4% (19/27) of studies. CONCLUSIONS This systematic review establishes the promising role of acoustic biomarkers, particularly cough and wheeze, in supporting the diagnosis and monitoring of asthma. The evidence suggests the potential for clinical integration of acoustic biomarkers, emphasizing the need for further validation in larger, clinically diverse populations.
Affiliation(s)
- Sachin Ananth
- London North West University Healthcare Trust, London, UK
3. Santos-Silva C, Ferreira-Cardoso H, Silva S, Vieira-Marques P, Valente JC, Almeida R, Fonseca JA, Santos C, Azevedo I, Jácome C. Feasibility and Acceptability of Pediatric Smartphone Lung Auscultation by Parents: Cross-Sectional Study. JMIR Pediatr Parent 2024;7:e52540. PMID: 38602309. PMCID: PMC11024396. DOI: 10.2196/52540.
Abstract
Background The use of a smartphone's built-in microphone for auscultation is a feasible alternative to the use of a stethoscope when applied by physicians. Objective This cross-sectional study aims to assess the feasibility of this technology when used by parents-the real intended end users. Methods Physicians recruited 46 children (male: n=33, 72%; age: mean 11.3, SD 3.1 y; children with asthma: n=24, 52%) during medical visits in a pediatric department of a tertiary hospital. Smartphone auscultation using an app was performed at 4 locations (trachea, right anterior chest, and right and left lung bases), first by a physician (recordings: n=297) and later by a parent (recordings: n=344). All recordings (N=641) were classified by 3 annotators for quality and the presence of adventitious sounds. Parents completed a questionnaire to provide feedback on the app, using a Likert scale ranging from 1 ("totally disagree") to 5 ("totally agree"). Results Most recordings were of adequate quality (physicians' recordings: 253/297, 85.2%; parents' recordings: 266/346, 76.9%). The proportions of physicians' recordings (34/253, 13.4%) and parents' recordings (31/266, 11.7%) with adventitious sounds were similar. Parents found the app easy to use (questionnaire: median 5, IQR 5-5) and were willing to use it (questionnaire: median 5, IQR 5-5). Conclusions Our results show that smartphone auscultation is feasible when performed by parents in the clinical context, but further investigation is needed to test its feasibility in real life.
Affiliation(s)
- Sónia Silva
- Department of Pediatrics, Centro Hospitalar Universitário de São João, Porto, Portugal
- Pedro Vieira-Marques
- CINTESIS - Center for Health Technology and Services Research, Faculty of Medicine, Universidade do Porto, Porto, Portugal
- José Carlos Valente
- MEDIDA – Serviços em Medicina, Educação, Investigação, Desenvolvimento e Avaliação, Porto, Portugal
- Rute Almeida
- CINTESIS@RISE, Department of Community Medicine, Information and Health Decision Sciences (MEDCIDS), Faculty of Medicine, University of Porto, Porto, Portugal
- João A Fonseca
- MEDIDA – Serviços em Medicina, Educação, Investigação, Desenvolvimento e Avaliação, Porto, Portugal
- CINTESIS@RISE, Department of Community Medicine, Information and Health Decision Sciences (MEDCIDS), Faculty of Medicine, University of Porto, Porto, Portugal
- Cristina Santos
- CINTESIS@RISE, Department of Community Medicine, Information and Health Decision Sciences (MEDCIDS), Faculty of Medicine, University of Porto, Porto, Portugal
- Inês Azevedo
- Department of Pediatrics, Centro Hospitalar Universitário de São João, Porto, Portugal
- Department of Obstetrics, Gynecology and Pediatrics, Faculty of Medicine, Universidade do Porto, Porto, Portugal
- EpiUnit, Institute of Public Health, Universidade do Porto, Porto, Portugal
- Cristina Jácome
- CINTESIS@RISE, Department of Community Medicine, Information and Health Decision Sciences (MEDCIDS), Faculty of Medicine, University of Porto, Porto, Portugal
4. Campanella C, Byun K, Senerat A, Li L, Zhang R, Aristizabal S, Porter P, Bauer B. The Efficacy of a Multimodal Bedroom-Based 'Smart' Alarm System on Mitigating the Effects of Sleep Inertia. Clocks Sleep 2024;6:183-199. PMID: 38534801. DOI: 10.3390/clockssleep6010013.
Abstract
Previous work has demonstrated the modest impact of environmental interventions that manipulate lighting, sound, or temperature on sleep inertia symptoms. The current study sought to expand on previous work and measure the impact of a multimodal intervention that collectively manipulated light, sound, and ambient temperature on sleep inertia. Participants slept in the lab for four nights and were awoken each morning by either a traditional alarm clock or the multimodal intervention. Feelings of sleep inertia were measured each morning through Psychomotor Vigilance Test (PVT) assessments and ratings of sleepiness and mood at five time points. While the intervention had little overall impact, participants' chronotype and the length of the lighting exposure on intervention mornings both influenced sleep inertia symptoms. Moderate evening types who received a shorter lighting exposure (≤15 min) demonstrated more lapses relative to the control condition, whereas intermediate types exhibited a better response speed and fewer lapses. Conversely, moderate evening types who experienced a longer light exposure (>15 min) during the intervention exhibited fewer false alarms over time. The results suggest that the length of the environmental intervention may play a role in mitigating feelings of sleep inertia, particularly for groups who might exhibit stronger feelings of sleep inertia, including evening types.
Affiliation(s)
- Carolina Campanella
- Delos Living LLC, New York, NY 10014, USA
- Well Living Lab, Inc., Rochester, MN 55902, USA
- Kunjoon Byun
- Delos Living LLC, New York, NY 10014, USA
- Well Living Lab, Inc., Rochester, MN 55902, USA
- Araliya Senerat
- Well Living Lab, Inc., Rochester, MN 55902, USA
- International Society for Urban Health, New York, NY 10003, USA
- Linhao Li
- Delos Living LLC, New York, NY 10014, USA
- Well Living Lab, Inc., Rochester, MN 55902, USA
- Sara Aristizabal
- Delos Living LLC, New York, NY 10014, USA
- Well Living Lab, Inc., Rochester, MN 55902, USA
- Paige Porter
- Well Living Lab, Inc., Rochester, MN 55902, USA
- School of Environment and Sustainability, University of Michigan, Ann Arbor, MI 48109, USA
- Brent Bauer
- Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
5. Kohler I, Perrotta MV, Ferreira T, Eagleman DM. Cross-Modal Sensory Boosting to Improve High-Frequency Hearing Loss: Device Development and Validation. JMIRx Med 2024;5:e49969. PMID: 38345294. PMCID: PMC11008433. DOI: 10.2196/49969.
Abstract
Background High-frequency hearing loss is one of the most common problems in the aging population and in those with a history of exposure to loud noise. This type of hearing loss can be frustrating and disabling, making it difficult to understand speech and interact effectively with the world. Objective This study aimed to examine the impact of spatially unique haptic vibrations representing high-frequency phonemes on the self-perceived ability to understand conversations in everyday situations. Methods To address high-frequency hearing loss, a multi-motor wristband was developed that uses machine learning to listen for specific high-frequency phonemes. The wristband vibrates in spatially unique locations to represent which phoneme is present in real time. A total of 16 participants with high-frequency hearing loss were recruited and asked to wear the wristband for 6 weeks. The degree of disability associated with hearing loss was measured weekly using the Abbreviated Profile of Hearing Aid Benefit (APHAB). Results By the end of the 6-week study, the average APHAB benefit score across all participants reached 12.39 points, from a baseline of 40.32 to a final score of 27.93 (SD 13.11; N=16; P=.002, 2-tailed dependent t test). Those without hearing aids showed a 10.78-point larger improvement in average APHAB benefit score at 6 weeks than those with hearing aids (t14=2.14; P=.10, 2-tailed independent t test). The average benefit score across all participants was 15.44 for ease of communication (SD 13.88; N=16; P<.001, 2-tailed dependent t test), 10.88 for background noise (SD 17.54; N=16; P=.03), and 10.84 for reverberation (SD 16.95; N=16; P=.02).
Conclusions These findings show that vibrotactile sensory substitution delivered by a wristband that produces spatially distinguishable vibrations in correspondence with high-frequency phonemes helps individuals with high-frequency hearing loss improve their perceived understanding of verbal communication. Vibrotactile feedback provides benefits whether or not a person wears hearing aids, albeit in slightly different ways. Finally, individuals with the greatest perceived difficulty understanding speech experienced the greatest amount of perceived benefit from vibrotactile feedback.
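The "2-tailed dependent t test" reported above pairs each participant's baseline and week-6 APHAB scores and tests whether the mean within-participant change differs from zero. A minimal sketch of that statistic (the score arrays below are invented for illustration and are not the study's data):

```python
import math
from statistics import mean, stdev

def paired_t(before, after):
    """Paired (dependent) t statistic and degrees of freedom:
    t = mean(differences) / (sd(differences) / sqrt(n))."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n)), n - 1

# Hypothetical APHAB global scores (lower = less perceived disability):
baseline = [52, 38, 45, 30, 41, 55, 36, 48]
week6 = [40, 30, 31, 22, 29, 38, 27, 35]
t, df = paired_t(baseline, week6)
print(f"t({df}) = {t:.2f}")  # t(7) = 10.41
```

The two-tailed P value then comes from the t distribution with n-1 degrees of freedom; pairing removes between-participant variability, which is why the same design recurs for the ease-of-communication, background-noise, and reverberation subscales.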
Affiliation(s)
- David M Eagleman
- Neosensory, Los Altos, CA, United States
- Department of Psychiatry, Stanford University, Stanford, CA, United States
6. Pérez Varela I, Shear G, Cobas C. Molecular Melodies: Unraveling the Hidden Harmonies of NMR Spectroscopy. Molecules 2024;29:762. PMID: 38398514. PMCID: PMC10893351. DOI: 10.3390/molecules29040762.
Abstract
This work explores the evolution of auditory analysis in NMR spectroscopy, tracing its journey from a supplementary tool to visual methods such as oscilloscopes, to a technique sidelined due to technological advancements. Despite its renaissance in the late 1990s with artistic and scientific applications, widespread adoption was hindered by the necessity for hardware modifications and reliance on specialized software. Addressing these barriers, this paper introduces a new feature in Mnova NMR software that facilitates the easy auditory interpretation of NMR signals. We discuss new applications of this tool, emphasizing its utility in aiding the identification of specific functional groups by auditory analysis of the spectrum's multiplets, such as distinguishing between aromatic, olefinic, or aliphatic protons, thereby enriching the interpretative capabilities of NMR data.
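Auditory rendering of a multiplet can be sketched by mapping each line of the multiplet to a sine partial offset from an audible base pitch, so that the J-coupling splitting becomes an audible beat rate. The mapping below is purely illustrative; the base pitch, sample rate, and equal line weighting are assumptions, not Mnova's actual sonification scheme.

```python
import math

SAMPLE_RATE = 8000  # Hz, audio output rate (assumed for illustration)

def sonify_multiplet(offsets_hz, duration=0.5, base_hz=220.0):
    """Render multiplet lines as sine partials offset from a base audio pitch.
    Returns normalized samples in [-1, 1]."""
    n = int(SAMPLE_RATE * duration)
    freqs = [base_hz + o for o in offsets_hz]
    return [
        sum(math.sin(2 * math.pi * f * t / SAMPLE_RATE) for f in freqs) / len(freqs)
        for t in range(n)
    ]

# A triplet with J = 7 Hz maps to three partials 7 Hz apart, producing an
# audible ~7 Hz beating that directly encodes the coupling constant.
samples = sonify_multiplet([-7.0, 0.0, 7.0])
```

A singlet would sound as a steady tone, while doublets, triplets, and more complex multiplets acquire characteristic beating patterns, which is the kind of cue the auditory analysis described above exploits.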
Affiliation(s)
- Iria Pérez Varela
- Centro de Investigación Mestrelab (CIM), Av. Barcelona 7, 15706 Santiago de Compostela, Spain
- Gavin Shear
- Mestrelab Research, 15706 Santiago de Compostela, Spain
- Carlos Cobas
- Mestrelab Research, 15706 Santiago de Compostela, Spain
7. Nardini A, Cochard H, Mayr S. Talk is cheap: rediscovering sounds made by plants. Trends Plant Sci 2024:S1360-1385(23)00382-5. PMID: 38218649. DOI: 10.1016/j.tplants.2023.11.023.
Abstract
A recent study and related commentaries have raised new interest in the phenomenon of ultrasonic sound production by plants exposed to stress, especially drought. While recent technological advancements have allowed the demonstration that these sounds can propagate in the air surrounding plants, we remind readers here that research on sound production by plants is more than 100 years old. The mechanisms and patterns of sound emission from plants subjected to different stress factors are also reasonably understood, thanks to the pioneering work of John Milburn and others. By contrast, experimental evidence for a role of these sounds in plant-animal or plant-plant communication remains lacking and, at present, these ideas remain highly speculative.
Affiliation(s)
- Andrea Nardini
- Dipartimento di Scienze della Vita, Università di Trieste, Via L. Giorgieri 10, 34127 Trieste, Italy
- Hervé Cochard
- Université Clermont-Auvergne, INRAE, PIAF, Clermont-Ferrand 63000, France
- Stefan Mayr
- Department of Botany, University of Innsbruck, 6020 Innsbruck, Austria
8. Parizek D, Visnovcova N, Hamza Sladicekova K, Veternik M, Jakus J, Jakusova J, Visnovcova Z, Ferencova N, Tonhajzerova I. Effect of Selected Music Soundtracks on Cardiac Vagal Control and Complexity Assessed by Heart Rate Variability. Physiol Res 2023;72:587-596. PMID: 38015758. PMCID: PMC10751054. DOI: 10.33549/physiolres.935114.
Abstract
Listening to music is experimentally associated with a positive stress-reducing effect on the human organism. However, therapists' opinions about this complementary non-invasive therapy still differ. PURPOSE The aim of our study was to investigate the effect of selected passive music therapy frequencies without vocals on selected cardio-vagal and complexity indices of short-term heart rate variability (HRV) in healthy youth, in terms of their calming effect. MAIN METHODS 30 probands (15 male, average age: 19.7+/-1.4 years, BMI: 23.3+/-3.8 kg/m2) were examined during a protocol of alternating phases: Silence baseline, Music 1 (20-1000 Hz), Silence 1, Music 2 (250-2000 Hz), Silence 2, Music 3 (1000-16000 Hz), and Silence 3. The evaluated HRV parameters in the time, spectral, and geometrical domains represent indices of cardio-vagal and emotional regulation. Additionally, HRV complexity was quantified by approximate entropy and sample entropy (SampEn), and the subjective characteristics of each phase were rated on a Likert scale. RESULTS The distance between subsequent R-waves in the electrocardiogram (RR intervals [ms]) and SampEn were significantly higher during Music 3 compared to Silence 3 (p=0.015 and p=0.021, respectively). The geometrical cardio-vagal index was significantly higher during Music 2 than during Silence 2 (p=0.006). In the participants' subjective perception rated on the Likert scale, the music phases were perceived as significantly more pleasant than the silent phases (p<0.001, p=0.008, p=0.003, respectively). CONCLUSIONS Our findings revealed a rise in cardio-vagal modulation and higher complexity assessed by short-term HRV indices, suggesting a positive relaxing effect of music, especially of higher frequencies, on the human organism.
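Sample entropy (SampEn), used above as a complexity index of the RR interval series, is -ln(A/B), where B and A count pairs of subsequences of length m and m+1 that match within a tolerance r. A minimal sketch (m=2 and r=0.2×SD are common defaults; the study's exact settings are an assumption here, and the RR series below are invented for illustration):

```python
import math

def sample_entropy(series, m=2, r=0.2):
    """SampEn of a series (e.g., RR intervals in ms): -ln(A/B), where B and A
    count template pairs of length m and m+1 matching within r * SD."""
    n = len(series)
    mu = sum(series) / n
    sd = (sum((x - mu) ** 2 for x in series) / (n - 1)) ** 0.5
    tol = r * sd

    def matches(length):
        templates = [series[i:i + length] for i in range(n - length + 1)]
        return sum(
            1
            for i in range(len(templates))
            for j in range(i + 1, len(templates))
            if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= tol
        )

    b, a = matches(m), matches(m + 1)
    return math.log(b / a) if a > 0 and b > 0 else float("inf")

# A nearly periodic RR series scores lower (more regular) than an irregular one:
regular = [800, 810, 800, 810, 800, 810, 800, 810, 800, 810]
irregular = [800, 770, 845, 792, 860, 765, 830, 788, 852, 770]
print(sample_entropy(regular) < sample_entropy(irregular))
```

Higher SampEn thus indicates a less predictable beat-to-beat pattern, which is the sense in which "higher complexity" is reported during Music 3.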
Affiliation(s)
- D Parizek
- Department of Medical Biophysics, Jessenius Faculty of Medicine in Martin, Comenius University in Bratislava, Martin, Slovak Republic
9. Bergez L, Jourdain G, De Luca D. Noise Produced by Neonatal Ventilators Inside and Outside of the Incubators. Respir Care 2023;68:1693-1700. PMID: 37147103. PMCID: PMC10676250. DOI: 10.4187/respcare.10989.
Abstract
BACKGROUND Insufficient data are available about the noise produced by modern neonatal ventilators. We aimed to measure their noise under different ventilatory modes and parameters. METHODS This was a bench study measuring the noise produced by 9 neonatal ventilators set in conventional or high-frequency oscillatory ventilation (HFOV), nasal mask-delivered CPAP with variable- or continuous-flow configuration, or bi-level positive airway pressure (considered as noninvasive ventilation [NIV]). Conventional ventilation and HFOV were tested in 2 distinct settings with moderate or higher parameters. Sound measurements were performed inside and outside an incubator mimicking the clinical setting and using a high-end meter meeting the international ISO 226:2003 standard. RESULTS Four ventilators remained below the internationally recommended safety threshold but only for measurements outside the incubator. Conventional ventilation (49.1 [3.4] dBA) and HFOV (56.3 [5.2] dBA) were the least and most noisy respiratory support technique, respectively. Noise was greater inside than outside the incubators (P < .0001) and different between the ventilators (P < .0001); better results were achieved by Servo-u and Fabian family devices for conventional ventilation; by fabian HFO for HFOV; and by Servo-u, VN500, and fabian family devices for CPAP and NIV. Noise levels were similar when using moderate or higher parameters in conventional ventilation (P = .81) and in HFOV (P = .45). CONCLUSIONS Modern ventilators often produce relevant noise, independent of the respiratory support modality, with acceptable noise levels being measured only outside the incubator. Better results were achieved with Servo-u, VN500, and Fabian family devices.
Affiliation(s)
- Lea Bergez
- Division of Pediatrics and Neonatal Critical Care, "A.Beclere" Medical Center, Paris Saclay University Hospitals, APHP, Paris, France
- Gilles Jourdain
- Division of Pediatrics and Neonatal Critical Care, "A.Beclere" Medical Center, Paris Saclay University Hospitals, APHP, Paris, France
- Daniele De Luca
- Division of Pediatrics and Neonatal Critical Care, "A.Beclere" Medical Center, Paris Saclay University Hospitals, APHP, Paris, France; and Physiopathology and Therapeutic Innovation Unit-INSERM U999, Paris Saclay University, Paris, France
10. Pastras CJ, Curthoys IS. Vestibular Testing-New Physiological Results for the Optimization of Clinical VEMP Stimuli. Audiol Res 2023;13:910-928. PMID: 37987337. PMCID: PMC10660708. DOI: 10.3390/audiolres13060079.
Abstract
Both auditory and vestibular primary afferent neurons can be activated by sound and vibration. This review relates the differences between them to the different receptor/synaptic mechanisms of the two systems, as shown by indicators of peripheral function-cochlear and vestibular compound action potentials (cCAPs and vCAPs)-to click stimulation as recorded in animal studies. Sound- and vibration-sensitive type 1 receptors at the striola of the utricular macula are enveloped by the unique calyx afferent ending, which has three modes of synaptic transmission. Glutamate is the transmitter for both cochlear and vestibular primary afferents; however, blocking glutamate transmission has very little effect on vCAPs but greatly reduces cCAPs. We suggest that the ultrafast non-quantal synaptic mechanism called resistive coupling is the cause of the short latency vestibular afferent responses and related results-failure of transmitter blockade, masking, and temporal precision. This "ultrafast" non-quantal transmission is effectively electrical coupling that is dependent on the membrane potentials of the calyx and the type 1 receptor. The major clinical implication is that decreasing stimulus rise time increases vCAP response, corresponding to the increased VEMP response in human subjects. Short rise times are optimal in human clinical VEMP testing, whereas long rise times are mandatory for audiometric threshold testing.
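The rise-time effect can be pictured by generating tone bursts with different onset ramps: a short ramp concentrates stimulus energy at onset, the property linked above to larger vCAP and VEMP responses, while a long ramp spreads it out, as required for audiometric threshold testing. A sketch with illustrative parameters (500 Hz carrier, linear ramps; not a clinical protocol):

```python
import math

RATE = 48000  # audio sample rate, Hz

def tone_burst(freq=500.0, plateau_ms=2.0, rise_ms=0.5):
    """Sine tone burst with linear rise and fall ramps of rise_ms each."""
    rise_n = max(1, int(RATE * rise_ms / 1000))
    plateau_n = int(RATE * plateau_ms / 1000)
    total = 2 * rise_n + plateau_n
    out = []
    for i in range(total):
        if i < rise_n:                # onset ramp
            env = i / rise_n
        elif i < rise_n + plateau_n:  # full amplitude
            env = 1.0
        else:                         # offset ramp
            env = (total - 1 - i) / rise_n
        out.append(env * math.sin(2 * math.pi * freq * i / RATE))
    return out

short = tone_burst(rise_ms=0.5)   # steep onset: VEMP-style stimulus
long_ = tone_burst(rise_ms=10.0)  # gentle onset: audiometry-style stimulus
```

Only the envelope differs between the two bursts; the review's point is that the vestibular afferents' ultrafast resistive coupling makes them sensitive to exactly this onset steepness.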
Affiliation(s)
- Christopher J. Pastras
- Faculty of Science and Engineering, School of Engineering, Macquarie University, Sydney, NSW 2109, Australia
- Ian S. Curthoys
- Vestibular Research Laboratory, School of Psychology, The University of Sydney, Sydney, NSW 2006, Australia
11. Loughrin JH, Parekh RR, Agga GE, Silva PJ, Sistani KR. Microbiome Diversity of Anaerobic Digesters Is Enhanced by Microaeration and Low Frequency Sound. Microorganisms 2023;11:2349. PMID: 37764193. PMCID: PMC10535533. DOI: 10.3390/microorganisms11092349.
Abstract
Biogas is produced by a consortium of bacteria and archaea. We studied how the microbiome of poultry litter digestate was affected by time and by treatments that enhanced biogas production. The microbiome was analyzed at 6, 23, and 42 weeks of incubation. Starting at week 7, the digesters underwent four treatments: control, microaeration with 6 mL of air per liter of digestate per day, treatment with a 1000 Hz sine wave, or treatment with the sound wave plus microaeration. Both microaeration and sound enhanced biogas production relative to the control, while their combination was not as effective as microaeration alone. At week 6, over 80% of the microbiome of the four digesters was composed of the three phyla Actinobacteria, Proteobacteria, and Firmicutes, with less than 10% Euryarchaeota and Bacteroidetes. At week 23, the digester microbiomes were more diverse, with the phyla Spirochaetes, Synergistetes, and Verrucomicrobia increasing in proportion and the abundance of Actinobacteria decreasing. At week 42, Firmicutes, Bacteroidetes, Euryarchaeota, and Actinobacteria were the most dominant phyla, comprising 27.8%, 21.4%, 17.6%, and 12.3% of the microbiome, respectively. Other than the relative proportion of Firmicutes being increased and that of Bacteroidetes being decreased by the treatments, no systematic shifts in the microbiomes were observed due to treatment; rather, microbial diversity was enhanced relative to the control. Given that both air and sound treatment increased biogas production, it is likely that they improved poultry litter breakdown and thereby promoted microbial growth.
Affiliation(s)
- John H. Loughrin
- United States Department of Agriculture, Agricultural Research Service, Food Animal Environmental Systems Research Unit, 2413 Nashville Road, Suite B5, Bowling Green, KY 42101, USA
12. Olczak K, Penar W, Nowicki J, Magiera A, Klocek C. The Role of Sound in Livestock Farming-Selected Aspects. Animals (Basel) 2023;13:2307. PMID: 37508083. PMCID: PMC10376870. DOI: 10.3390/ani13142307.
Abstract
To ensure the optimal living conditions of farm animals, it is essential to understand how their senses work and the way in which they perceive their environment. Most animals have a different hearing range compared to humans; thus, some aversive sounds may go unnoticed by caretakers. The auditory pathways may act through the nervous system on the cardiovascular, gastrointestinal, endocrine, and immune systems. Therefore, noise may lead to behavioral activation (arousal), pain, and sleep disorders. Sounds on farms may be produced by machines, humans, or animals themselves. It is worth noting that vocalization may be very informative to the breeder as it is an expression of an emotional state. This information can be highly beneficial in maintaining a high level of livestock welfare. Moreover, understanding learning theory, conditioning, and the potential benefits of certain sounds can guide the deliberate use of techniques in farm management to reduce the aversiveness of certain events.
Affiliation(s)
- Katarzyna Olczak
- Department of Horse Breeding, National Research Institute of Animal Production, Krakowska St. 1, 32-083 Balice, Poland
- Weronika Penar
- Department of Animal Genetics, Breeding and Ethology, Faculty of Animal Sciences, University of Agriculture in Kraków, 24/28 Mickiewicza Ave., 30-059 Cracow, Poland
- Jacek Nowicki
- Department of Animal Genetics, Breeding and Ethology, Faculty of Animal Sciences, University of Agriculture in Kraków, 24/28 Mickiewicza Ave., 30-059 Cracow, Poland
- Angelika Magiera
- Department of Animal Genetics, Breeding and Ethology, Faculty of Animal Sciences, University of Agriculture in Kraków, 24/28 Mickiewicza Ave., 30-059 Cracow, Poland
- Czesław Klocek
- Department of Animal Genetics, Breeding and Ethology, Faculty of Animal Sciences, University of Agriculture in Kraków, 24/28 Mickiewicza Ave., 30-059 Cracow, Poland
13
Berengueres J, AlKuwaiti M, Abduljabbar M, Taher F. Adding Sound Transparency to a Spacesuit: Effect on Cognitive Performance in Females. IEEE Open J Eng Med Biol 2023; 4:190-194. [PMID: 38226364 PMCID: PMC10789458 DOI: 10.1109/ojemb.2023.3288740] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2023] [Revised: 05/07/2023] [Accepted: 06/19/2023] [Indexed: 01/17/2024] Open
Abstract
Spacesuits may block external sound, inducing sensory deprivation; one side effect is lower cognitive performance, which can increase the risk of an accident. This undesirable effect can be mitigated by designing suits with sound transparency. If an atmosphere is available, as on Mars, sound transparency can be realized by augmenting and processing external sounds. If no atmosphere is available, as on the Moon, an Earth-like sound environment can be re-created via generative AR techniques. We measure the effect of adding sound transparency to an Intra-Vehicular Activity suit by means of the Koh Block test. The results indicate that participants complete the test more quickly when wearing a suit with sound transparency.
14
Campeau S, McNulty C, Stanley JT, Gerber AN, Sasse SK, Dowell RD. Determination of steady-state transcriptome modifications associated with repeated homotypic stress in the rat rostral posterior hypothalamic region. Front Neurosci 2023; 17:1173699. [PMID: 37360161 PMCID: PMC10288150 DOI: 10.3389/fnins.2023.1173699] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2023] [Accepted: 05/18/2023] [Indexed: 06/28/2023] Open
Abstract
Chronic stress is epidemiologically correlated with physical and psychiatric disorders. Whereas many animal models of chronic stress induce symptoms of psychopathology, repeated homotypic exposure to stimuli of moderate intensity typically reduces stress-related responses with fewer, if any, pathological symptoms. Recent results indicate that the rostral posterior hypothalamic (rPH) region is a significant component of the brain circuitry underlying the response reductions (habituation) associated with repeated homotypic stress. To test whether posterior hypothalamic transcriptional regulation is associated with the neuroendocrine modifications induced by repeated homotypic stress, RNA-seq was performed on the rPH dissected from adult male rats that experienced either no stress or 1, 3, or 7 stressful loud noise exposures. Plasma samples displayed reliable increases of corticosterone in all stressed groups, with the smallest increase in the group exposed to 7 loud noises, indicating significant habituation compared to the other stressed groups. While few or no differentially expressed genes were detected 24 h after one or three loud noise exposures, relatively large numbers of transcripts were differentially expressed in the group exposed to 7 loud noises compared with either the control or the 3-stress group, in line with the observed habituation of the corticosterone response. Gene ontology analyses indicated multiple significant functional terms related to neuron differentiation, neural membrane potential, pre- and post-synaptic elements, chemical synaptic transmission, vesicles, axon guidance and projection, and glutamatergic and GABAergic neurotransmission. Some of the differentially expressed genes (Myt1l, Zmat4, Dlx6, Csrnp3) encode transcription factors that were independently predicted by transcription factor enrichment analysis to target other differentially regulated genes in this study.
A similar experiment employing in situ hybridization histochemical analysis in additional animals validated the direction of change of the 5 transcripts investigated (Camk4, Gabrb2, Gad1, Grin2a and Slc32a) with a high level of temporal and regional specificity for the rPH. In aggregate, the results suggest that distinct patterns of gene regulation are obtained in response to a repeated homotypic stress regimen; they also point to a significant reorganization of the rPH region that may critically contribute to the phenotypic modifications associated with repeated homotypic stress habituation.
Affiliation(s)
- Serge Campeau
- Department of Psychology and Neuroscience, University of Colorado, Boulder, CO, United States
- Connor McNulty
- Department of Psychology and Neuroscience, University of Colorado, Boulder, CO, United States
- Jacob T. Stanley
- Molecular, Cellular and Developmental Biology, University of Colorado, Boulder, CO, United States
- BioFrontiers Institute, University of Colorado, Boulder, CO, United States
- Anthony N. Gerber
- Department of Medicine, National Jewish Health, Denver, CO, United States
- Department of Medicine, University of Colorado, Aurora, CO, United States
- Sarah K. Sasse
- Department of Medicine, National Jewish Health, Denver, CO, United States
- Robin D. Dowell
- Molecular, Cellular and Developmental Biology, University of Colorado, Boulder, CO, United States
- BioFrontiers Institute, University of Colorado, Boulder, CO, United States
- Department of Computer Science, University of Colorado, Boulder, CO, United States
15
Natarajan S, Thangamuthu M, Gnanasekaran S, Rakkiyannan J. Digital Twin-Driven Tool Condition Monitoring for the Milling Process. Sensors (Basel) 2023; 23:5431. [PMID: 37420597 DOI: 10.3390/s23125431] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/06/2023] [Revised: 06/02/2023] [Accepted: 06/05/2023] [Indexed: 07/09/2023]
Abstract
Accurate monitoring and forecasting of tool condition fundamentally affect cutting performance, bringing improved workpiece machining accuracy and lower machining costs. Because of the complexity and time-varying nature of the cutting process, existing methodologies cannot achieve ideal supervision in real time. A technique based on Digital Twins (DT) is proposed to achieve high accuracy in monitoring and predicting tool conditions. The technique builds a virtual instrument framework that fully mirrors the physical system. Data collection from the physical system (a milling machine) is initialized and sensory data are acquired: a National Instruments data acquisition system captures vibration data through a uni-axial accelerometer, and a USB-based microphone sensor acquires the sound signals. The data are used to train different Machine Learning (ML) classification algorithms. Prediction accuracy is calculated from a confusion matrix, with the highest accuracy of 91% achieved by a Probabilistic Neural Network (PNN); this result was obtained by extracting statistical features from the vibration data. Testing was performed with the trained model to validate its accuracy. The DT was then modeled in MATLAB-Simulink under a data-driven approach. Physical-virtual synchronization of the DT model enables detailed mapping of the real-time state of the tool's condition. The tool condition monitoring system is deployed through the DT model using the machine learning technique, and the DT model can predict the different tool conditions based on sensory data.
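The 91% figure reported in this abstract is an overall accuracy read off a confusion matrix. As a minimal illustration of that metric (the matrix values and class names below are hypothetical, not data from the study):

```python
# Accuracy from a confusion matrix: correct predictions (the diagonal)
# divided by all predictions. Rows = true class, columns = predicted class.
def accuracy(confusion):
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total

# Hypothetical 3-class tool-condition matrix (e.g. sharp / worn / broken).
matrix = [
    [45, 3, 2],   # true "sharp"
    [4, 41, 5],   # true "worn"
    [1, 3, 46],   # true "broken"
]
print(round(accuracy(matrix), 2))  # → 0.88
```

Per-class precision and recall can be read off the same matrix, which is why confusion matrices are the usual reporting tool for multi-class classifiers of this kind.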
Affiliation(s)
- Sriraamshanjiev Natarajan
- Department of Mechanical Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Coimbatore 641112, India
- Mohanraj Thangamuthu
- Department of Mechanical Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Coimbatore 641112, India
- Sakthivel Gnanasekaran
- Centre for Automation, School of Mechanical Engineering, Vellore Institute of Technology (VIT), Chennai 600127, India
- Jegadeeshwaran Rakkiyannan
- Centre for Automation, School of Mechanical Engineering, Vellore Institute of Technology (VIT), Chennai 600127, India
16
Leone MJ, Dashti HS, Coughlin B, Tesh RA, Quadri SA, Bucklin AA, Adra N, Krishnamurthy PV, Ye EM, Hemmige A, Rajan S, Panneerselvam E, Higgins J, Ayub MA, Ganglberger W, Paixao L, Houle TT, Thompson BT, Johnson-Akeju O, Saxena R, Kimchi E, Cash SS, Thomas RJ, Westover MB. Sound and light levels in intensive care units in a large urban hospital in the United States. Chronobiol Int 2023; 40:759-768. [PMID: 37144470 PMCID: PMC10524721 DOI: 10.1080/07420528.2023.2207647] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2022] [Revised: 11/18/2022] [Accepted: 04/21/2023] [Indexed: 05/06/2023]
Abstract
Intensive care units (ICUs) may disrupt sleep. Quantitative ICU studies of concurrent and continuous sound and light levels and timings remain sparse in part due to the lack of ICU equipment that monitors sound and light. Here, we describe sound and light levels across three adult ICUs in a large urban United States tertiary care hospital using a novel sensor. The novel sound and light sensor is composed of a Gravity Sound Level Meter for sound level measurements and an Adafruit TSL2561 digital luminosity sensor for light levels. Sound and light levels were continuously monitored in the rooms of 136 patients (mean age = 67.0 (8.7) years, 44.9% female) enrolled in the Investigation of Sleep in the Intensive Care Unit study (ICU-SLEEP; Clinicaltrials.gov: #NCT03355053), at the Massachusetts General Hospital. The duration of available sound and light data per patient ranged from 24.0 to 72.2 hours. Average sound and light levels oscillated throughout the day and night. On average, the loudest hour was 17:00 and the quietest hour was 02:00. Average light levels were brightest at 09:00 and dimmest at 04:00. For all participants, average nightly sound levels exceeded the WHO guideline of < 35 decibels. Meanwhile, mean nightly light levels varied across participants (minimum: 1.00 lux, maximum: 577.05 lux). Sound and light events were more frequent between 08:00 and 20:00 than between 20:00 and 08:00 and were largely similar on weekdays and weekend days. Peaks in distinct alarm frequencies (Alarm 1) occurred at 01:00, 06:00, and 20:00. Alarms at other frequencies (Alarm 2) were relatively consistent throughout the day and night, with a small peak at 20:00. In conclusion, we present a sound and light data collection method and results from a cohort of critically ill patients, demonstrating excess sound and light levels across multiple ICUs in a large tertiary care hospital in the United States. ClinicalTrials.gov, #NCT03355053.
Registered 28 November 2017, https://clinicaltrials.gov/ct2/show/NCT03355053.
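A detail worth noting when comparing nightly levels against the WHO 35 dB guideline: decibels are logarithmic, so averaging readings requires converting to linear power first (an "energy average") rather than taking a plain arithmetic mean. A small sketch, with invented readings for illustration:

```python
import math

def mean_sound_level_db(levels_db):
    """Energy-average a series of sound-level readings given in decibels.

    Readings are converted to linear power, averaged arithmetically,
    and converted back to dB; a plain arithmetic mean of dB values
    would understate the contribution of loud events.
    """
    powers = [10 ** (level / 10) for level in levels_db]
    return 10 * math.log10(sum(powers) / len(powers))

# Hypothetical hourly nighttime readings in dBA (illustrative only):
# mostly quiet, with one loud alarm-like event.
night = [33.0, 34.0, 48.0, 36.0, 35.0]
energy_mean = mean_sound_level_db(night)
arithmetic_mean = sum(night) / len(night)
print(round(energy_mean, 1), round(arithmetic_mean, 1))  # → 41.7 37.2
```

Against a 35 dB guideline, this hypothetical night fails on the energy average even though most individual readings sit near the threshold: the single 48 dBA event dominates the result.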
Affiliation(s)
- Michael J Leone
- Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Clinical Data Animation Center, Massachusetts General Hospital, Boston, Massachusetts, USA
- Hassan S Dashti
- Center for Genomic Medicine, Massachusetts General Hospital, Boston, Massachusetts, USA
- Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, Massachusetts, USA
- Brain Data Science Platform, Broad Institute, Cambridge, Massachusetts, USA
- Division of Sleep Medicine, Harvard Medical School, Boston, Massachusetts, USA
- Brian Coughlin
- Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Ryan A Tesh
- Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Clinical Data Animation Center, Massachusetts General Hospital, Boston, Massachusetts, USA
- Syed A Quadri
- Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Clinical Data Animation Center, Massachusetts General Hospital, Boston, Massachusetts, USA
- Abigail A Bucklin
- Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Clinical Data Animation Center, Massachusetts General Hospital, Boston, Massachusetts, USA
- Noor Adra
- Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Clinical Data Animation Center, Massachusetts General Hospital, Boston, Massachusetts, USA
- Parimala Velpula Krishnamurthy
- Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Clinical Data Animation Center, Massachusetts General Hospital, Boston, Massachusetts, USA
- Elissa M Ye
- Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Clinical Data Animation Center, Massachusetts General Hospital, Boston, Massachusetts, USA
- Aashritha Hemmige
- Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Clinical Data Animation Center, Massachusetts General Hospital, Boston, Massachusetts, USA
- Subapriya Rajan
- Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Clinical Data Animation Center, Massachusetts General Hospital, Boston, Massachusetts, USA
- Ezhil Panneerselvam
- Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Clinical Data Animation Center, Massachusetts General Hospital, Boston, Massachusetts, USA
- Jasmine Higgins
- Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Clinical Data Animation Center, Massachusetts General Hospital, Boston, Massachusetts, USA
- Muhammad Abubakar Ayub
- Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Clinical Data Animation Center, Massachusetts General Hospital, Boston, Massachusetts, USA
- Division of Pulmonary and Critical Care, Department of Medicine, Massachusetts General Hospital, and Harvard Medical School, Boston, Massachusetts, USA
- Wolfgang Ganglberger
- Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Clinical Data Animation Center, Massachusetts General Hospital, Boston, Massachusetts, USA
- Sleep & Health Zurich, University of Zurich, Zurich, Switzerland
- Henry and Allison McCance Center for Brain Health, Massachusetts General Hospital, Boston, Massachusetts, USA
- Luis Paixao
- Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Clinical Data Animation Center, Massachusetts General Hospital, Boston, Massachusetts, USA
- Department of Neurology, Washington University School of Medicine in St. Louis, St. Louis, Missouri, USA
- Timothy T Houle
- Sleep & Health Zurich, University of Zurich, Zurich, Switzerland
- Department of Medicine, Massachusetts General Hospital, Boston, Massachusetts, USA
- B Taylor Thompson
- Division of Pulmonary and Critical Care, Department of Medicine, Massachusetts General Hospital, and Harvard Medical School, Boston, Massachusetts, USA
- Oluwaseun Johnson-Akeju
- Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, Massachusetts, USA
- Richa Saxena
- Center for Genomic Medicine, Massachusetts General Hospital, Boston, Massachusetts, USA
- Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, Massachusetts, USA
- Brain Data Science Platform, Broad Institute, Cambridge, Massachusetts, USA
- Division of Sleep Medicine, Harvard Medical School, Boston, Massachusetts, USA
- Eyal Kimchi
- Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Sydney S Cash
- Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Clinical Data Animation Center, Massachusetts General Hospital, Boston, Massachusetts, USA
- Robert J Thomas
- Clinical Data Animation Center, Massachusetts General Hospital, Boston, Massachusetts, USA
- Department of Medicine, Division of Pulmonary, Critical Care & Sleep, Beth Israel Deaconess Medical Center, Boston, Massachusetts, USA
- M Brandon Westover
- Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Clinical Data Animation Center, Massachusetts General Hospital, Boston, Massachusetts, USA
- Department of Medicine, Massachusetts General Hospital, Boston, Massachusetts, USA
17
Curthoys IS, Smith CM, Burgess AM, Dlugaiczyk J. A Review of Neural Data and Modelling to Explain How a Semicircular Canal Dehiscence (SCD) Causes Enhanced VEMPs, Skull Vibration Induced Nystagmus (SVIN), and the Tullio Phenomenon. Audiol Res 2023; 13:418-430. [PMID: 37366683 DOI: 10.3390/audiolres13030037] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2023] [Revised: 05/24/2023] [Accepted: 05/30/2023] [Indexed: 06/28/2023] Open
Abstract
Angular acceleration stimulation of a semicircular canal causes an increased firing rate in primary canal afferent neurons that results in nystagmus in healthy adult animals. However, an increased firing rate in canal afferent neurons can also be caused by sound or vibration in patients after a semicircular canal dehiscence, and so these unusual stimuli will also cause nystagmus. Recent data and a model by Iversen and Rabbitt show that sound or vibration may increase firing rate either by neural activation locked to the individual cycles of the stimulus or by slow changes in firing rate due to fluid pumping ("acoustic streaming"), which causes cupula deflection. Both mechanisms act to increase the primary afferent firing rate and so trigger nystagmus. Primary afferent data in guinea pigs indicate that in some situations these two mechanisms may oppose each other. This review shows how three clinical phenomena (skull vibration-induced nystagmus, enhanced vestibular evoked myogenic potentials, and the Tullio phenomenon) have a common tie: they are caused by the new response of semicircular canal afferent neurons to sound and vibration after a semicircular canal dehiscence.
Affiliation(s)
- Ian S Curthoys
- Vestibular Research Laboratory, School of Psychology, University of Sydney, Sydney, NSW 2006, Australia
- Christopher M Smith
- Center for Anatomy and Functional Morphology, Icahn School of Medicine at Mount Sinai, Annenberg Building, Room 12-90, 1468 Madison Ave., New York, NY 10029, USA
- Ann M Burgess
- Vestibular Research Laboratory, School of Psychology, University of Sydney, Sydney, NSW 2006, Australia
- Julia Dlugaiczyk
- Department of Otorhinolaryngology, Head and Neck Surgery & Interdisciplinary Center of Vertigo, Balance and Ocular Motor Disorders, University Hospital Zurich (USZ), University of Zurich (UZH), CH-8091 Zürich, Switzerland
18
Le VL, Kim D, Cho E, Jang H, Reyes RD, Kim H, Lee D, Yoon IY, Hong J, Kim JW. Real-Time Detection of Sleep Apnea Based on Breathing Sounds and Prediction Reinforcement Using Home Noises: Algorithm Development and Validation. J Med Internet Res 2023; 25:e44818. [PMID: 36811943 PMCID: PMC9996414 DOI: 10.2196/44818] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2022] [Revised: 12/29/2022] [Accepted: 01/11/2023] [Indexed: 02/24/2023] Open
Abstract
BACKGROUND Multinight monitoring can be helpful for the diagnosis and management of obstructive sleep apnea (OSA). For this purpose, it is necessary to be able to detect OSA in real time in a noisy home environment. Sound-based OSA assessment holds great potential since it can be integrated with smartphones to provide full noncontact monitoring of OSA at home. OBJECTIVE The purpose of this study is to develop a predictive model that can detect OSA in real time, even in a home environment where various noises exist. METHODS This study included 1018 polysomnography (PSG) audio data sets, 297 smartphone audio data sets synced with PSG, and a home noise data set containing 22,500 noises to train the model to predict breathing events, such as apneas and hypopneas, based on breathing sounds that occur during sleep. The whole breathing sound of each night was divided into 30-second epochs and labeled as "apnea," "hypopnea," or "no-event," and the home noises were used to make the model robust to a noisy home environment. The performance of the prediction model was assessed using epoch-by-epoch prediction accuracy and OSA severity classification based on the apnea-hypopnea index (AHI). RESULTS Epoch-by-epoch OSA event detection showed an accuracy of 86% and a macro F1-score of 0.75 for the 3-class OSA event detection task. The model had an accuracy of 92% for "no-event," 84% for "apnea," and 51% for "hypopnea." Most misclassifications were made for "hypopnea," with 15% and 34% of "hypopnea" being wrongly predicted as "apnea" and "no-event," respectively. The sensitivity and specificity of the OSA severity classification (AHI≥15) were 0.85 and 0.84, respectively. CONCLUSIONS Our study presents a real-time epoch-by-epoch OSA detector that works in a variety of noisy home environments. Based on this, additional research is needed to verify the usefulness of various multinight monitoring and real-time diagnostic technologies in the home environment.
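The macro F1-score cited above is the unweighted mean of the per-class F1 scores, which is why it is pulled down by the weak "hypopnea" class even when overall accuracy is high. A self-contained sketch of the metric (the epoch labels below are invented, not study data):

```python
def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores over all observed classes."""
    classes = sorted(set(y_true) | set(y_pred))
    f1_scores = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        denom = precision + recall
        f1_scores.append(2 * precision * recall / denom if denom else 0.0)
    return sum(f1_scores) / len(f1_scores)

# Illustrative 30-second epoch labels: hypopnea epochs are the ones
# most often confused, mirroring the pattern described in the abstract.
truth = ["no-event", "apnea", "hypopnea", "no-event", "apnea", "no-event"]
pred  = ["no-event", "apnea", "apnea",    "no-event", "apnea", "hypopnea"]
print(round(macro_f1(truth, pred), 3))  # → 0.533
```

Here the "apnea" and "no-event" classes each score F1 = 0.8, but "hypopnea" scores 0, dragging the macro average down to about 0.53 despite two-thirds of epochs being correct.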
Affiliation(s)
- Hyeryung Jang
- Department of Artificial Intelligence, Dongguk University, Seoul, Republic of Korea
- In-Young Yoon
- Department of Psychiatry, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Seoul National University College of Medicine, Seoul, Republic of Korea
- Jeong-Whun Kim
- Seoul National University College of Medicine, Seoul, Republic of Korea
- Department of Otorhinolaryngology, Seoul National University Bundang Hospital, Seongnam-si, Republic of Korea
19
Kosters J, Janus SIM, van den Bosch KA, Andringa TC, Hoop EO, de Boer MR, Elburg RAJ, Warmelink S, Zuidema SU, Luijendijk HJ. Soundscape Awareness Intervention Reduced Neuropsychiatric Symptoms in Nursing Home Residents With Dementia: A Cluster-Randomized Trial With MoSART. J Am Med Dir Assoc 2023; 24:192-198.e5. [PMID: 36528077 DOI: 10.1016/j.jamda.2022.11.010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2022] [Revised: 11/09/2022] [Accepted: 11/11/2022] [Indexed: 12/15/2022]
Abstract
OBJECTIVES Auditory environments as perceived by an individual, also called soundscapes, are often suboptimal for nursing home residents. Poor soundscapes have been associated with neuropsychiatric symptoms (NPS). We evaluated the effect of the Mobile Soundscape Appraisal and Recording Technology sound awareness intervention (MoSART+) on NPS in nursing home residents with dementia. DESIGN A 15-month, stepped-wedge, cluster-randomized trial. Every 3 months, a nursing home switched from care as usual to the use of the intervention. INTERVENTION The 3-month MoSART+ intervention involved ambassador training, staff performing sound measurements with the MoSART application, meetings, and implementation of microinterventions. The goal was to raise awareness about soundscapes and their influence on residents. SETTING AND PARTICIPANTS We included 110 residents with dementia in 5 Dutch nursing homes. Exclusion criteria were palliative sedation and deafness. METHODS The primary outcome was NPS severity measured with the Neuropsychiatric Inventory-Nursing Home version (NPI-NH) by the resident's primary nurse. Secondary outcomes were quality of life (QUALIDEM), psychotropic drug use (ATC), staff workload (workload questionnaire), and staff job satisfaction (Maastricht Questionnaire of Job Satisfaction). RESULTS The mean age of the residents (n = 97) at enrollment was 86.5 ± 6.7 years, and 76 were female (76.8%). The mean NPI-NH score was 17.5 ± 17.3. One nursing home did not implement the intervention because of staff shortages. Intention-to-treat analysis showed a clinically relevant reduction in NPS between the study groups (-8.0, 95% CI -11.7, -2.6). There was no clear effect on quality of life [odds ratio (OR) 2.8, 95% CI -0.7, 6.3], psychotropic drug use (1.2, 95% CI 0.9, 1.7), staff workload (-0.3, 95% CI -0.3, 0.8), or staff job satisfaction (-0.2, 95% CI -1.2, 0.7). 
CONCLUSIONS AND IMPLICATIONS MoSART+ empowered staff to adapt the local soundscape, and the intervention effectively reduced staff-reported levels of NPS in nursing home residents with dementia. Nursing homes should consider implementing interventions to improve the soundscape.
20
Eagleman DM, Perrotta MV. The future of sensory substitution, addition, and expansion via haptic devices. Front Hum Neurosci 2023; 16:1055546. [PMID: 36712151 PMCID: PMC9880183 DOI: 10.3389/fnhum.2022.1055546] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2022] [Accepted: 12/23/2022] [Indexed: 01/14/2023] Open
Abstract
Haptic devices use the sense of touch to transmit information to the nervous system. As an example, a sound-to-touch device processes auditory information and sends it to the brain via patterns of vibration on the skin for people who have lost hearing. We here summarize the current directions of such research and draw upon examples in industry and academia. Such devices can be used for sensory substitution (replacing a lost sense, such as hearing or vision), sensory expansion (widening an existing sensory experience, such as detecting electromagnetic radiation outside the visible light spectrum), and sensory addition (providing a novel sense, such as magnetoreception). We review the relevant literature, the current status, and possible directions for the future of sensory manipulation using non-invasive haptic devices.
Affiliation(s)
- David M. Eagleman
- Department of Psychiatry, Stanford University School of Medicine, Stanford, CA, United States
- Neosensory, Palo Alto, CA, United States
21
Liu Y, Liu S, Tang C, Tang K, Liu D, Chen M, Mao Z, Xia X. Transcranial alternating current stimulation combined with sound stimulation improves cognitive function in patients with Alzheimer's disease: Study protocol for a randomized controlled trial. Front Aging Neurosci 2023; 14:1068175. [PMID: 36698862 PMCID: PMC9869764 DOI: 10.3389/fnagi.2022.1068175] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2022] [Accepted: 12/14/2022] [Indexed: 01/12/2023] Open
Abstract
Background: The number of patients with Alzheimer's disease (AD) worldwide is increasing yearly, but existing treatments have poor efficacy. Transcranial alternating current stimulation (tACS) is a new treatment for AD, but its offline (post-stimulation) effect is insufficient. To prolong the offline effect, we designed this trial to combine tACS with sound stimulation in order to maintain a long-term post-effect. Materials and methods: The trial will explore the safety and effectiveness of tACS combined with sound stimulation and its impact on cognition in AD patients. It will recruit 87 patients with mild to moderate AD, randomly divided into three groups. The change in Alzheimer's Disease Assessment Scale-Cognitive (ADAS-Cog) scores from the day before treatment to the end of treatment and 3 months after treatment will serve as the primary outcome. We will also explore post-treatment changes in the brain structural network, functional network, and metabolic network of AD patients in each group. Discussion: We expect to show that tACS combined with sound stimulation is safe and tolerable in 87 patients with mild to moderate AD under three standardized treatment regimens, and that, compared with tACS alone or sound alone, the combination produces a significant long-term effect on cognitive improvement, thereby identifying a better treatment plan for AD patients. tACS combined with sound stimulation is a previously unexplored, non-invasive joint intervention to improve patients' cognitive status. This study may also identify the potential mechanism of tACS combined with sound stimulation in treating mild to moderate AD. Clinical Trial Registration: Clinicaltrials.gov, NCT05251649. Registered on February 22, 2022.
Affiliation(s)
- Yang Liu
- Department of Neurosurgery, Affiliated Hospital of Guilin Medical University, Guilin, China
- Can Tang
- Department of Neurosurgery, Affiliated Hospital of Guilin Medical University, Guilin, China
- Keke Tang
- Guangzhou Kangzhi Digital Technology Co., Ltd., Guangzhou, China
- Di Liu
- Guangzhou Kangzhi Digital Technology Co., Ltd., Guangzhou, China
- Meilian Chen
- Guangzhou Kangzhi Digital Technology Co., Ltd., Guangzhou, China
- Zhiqi Mao
- Department of Neurosurgery, Chinese PLA General Hospital, Beijing, China
- Xuewei Xia
- Department of Neurosurgery, Affiliated Hospital of Guilin Medical University, Guilin, China
22
Chen QY, Wan J, Wang M, Hong S, Zhuo M. Sound-induced analgesia cannot always be observed in adult mice. Mol Pain 2023; 19:17448069231197158. [PMID: 37606554 PMCID: PMC10467218 DOI: 10.1177/17448069231197158] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2023] [Revised: 08/04/2023] [Accepted: 08/08/2023] [Indexed: 08/23/2023] Open
Abstract
Music seems promising as an adjuvant pain treatment in humans, although its mechanism remains to be elucidated. In rodent models of chronic pain, few studies have reported an analgesic effect of music. Recently, Zhou et al. stated that the analgesic effects of sound depended on a low (5 dB) signal-to-noise ratio (SNR) relative to ambient noise in mice. However, despite employing multiple behavioral analysis approaches, we were unable to extend these findings to a mouse model of chronic pain exposed to the 5 dB SNR sound.
Affiliation(s)
- Qi-Yu Chen
- CAS Key Laboratory of Brain Connectome and Manipulation, Interdisciplinary Center for Brain Information, Chinese Academy of Sciences Shenzhen Institute of Advanced Technology, Shenzhen, China
- International Institute for Brain Research, Qingdao International Academician Park, Qingdao, China
- Jinjin Wan
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Key Laboratory of Alzheimer's Disease of Zhejiang Province, Zhejiang Provincial Clinical Research Center for Mental Disorders, Institute of Aging, The Affiliated Wenzhou Kangning Hospital, Wenzhou Medical University, Wenzhou, China
- Mianxian Wang
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Wenzhou, China
- Shanshan Hong
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Wenzhou, China
- Min Zhuo
- CAS Key Laboratory of Brain Connectome and Manipulation, Interdisciplinary Center for Brain Information, Chinese Academy of Sciences Shenzhen Institute of Advanced Technology, Shenzhen, China
- International Institute for Brain Research, Qingdao International Academician Park, Qingdao, China
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Wenzhou, China
- Department of Physiology, Faculty of Medicine, University of Toronto, Toronto, ON, Canada

23
Rohde BB, Cooperband MF, Canlas I, Mankin RW. Evidence of Receptivity to Vibroacoustic Stimuli in the Spotted Lanternfly Lycorma delicatula (Hemiptera: Fulgoridae). J Econ Entomol 2022; 115:2116-2120. PMID: 36305621; DOI: 10.1093/jee/toac167.
Abstract
The spotted lanternfly Lycorma delicatula White (Hemiptera: Fulgoridae) is a polyphagous insect pest that invaded the United States in 2014, in Berks County, Pennsylvania. It has since spread to several northeastern states and poses a significant threat to northeastern grape production. Most studied species of Hemiptera are known to communicate intraspecifically using some form of substrate-borne vibrational signal, although such behavior has not yet been reported in L. delicatula. This report demonstrates that adult and fourth-instar L. delicatula were attracted towards broadcasts of 60-Hz vibroacoustic stimuli directed to a laboratory arena and test substrate, which suggests that both adults and fourth-instar nymphs can perceive and respond to vibrational stimuli.
Affiliation(s)
- Barukh B Rohde
- USDA-ARS, Subtropical Horticulture Research Station, Miami, FL, USA
- Miriam F Cooperband
- Forest Pest Methods Laboratory, USDA-APHIS-PPQ-S&T, 1398 West Truck Road, Buzzards Bay, MA, USA
- Isaiah Canlas
- Forest Pest Methods Laboratory, USDA-APHIS-PPQ-S&T, 1398 West Truck Road, Buzzards Bay, MA, USA
- Richard W Mankin
- Center for Medical, Agricultural, and Veterinary Entomology, USDA-ARS, Gainesville, FL, USA

24
Oliveira AS, Pirscoveanu CI, Rasmussen J. Predicting Vertical Ground Reaction Forces in Running from the Sound of Footsteps. Sensors (Basel) 2022; 22:9640. PMID: 36560009; PMCID: PMC9787899; DOI: 10.3390/s22249640.
Abstract
From a measurement standpoint, footstep sounds offer a simple, wearable, and inexpensive sensing opportunity for assessing running biomechanics. The aim of this study was therefore to investigate whether the sounds of footsteps can be used to predict vertical ground reaction force profiles during running. Thirty-seven recreational runners performed overground running; their footstep sounds were recorded from four microphones while the vertical ground reaction force was recorded using a force plate. We generated nine different combinations of microphone data, ranging from individual recordings to all four microphones combined, trained machine learning models on each combination, and predicted the ground reaction force profiles using a leave-one-out approach at the subject level. There were no significant differences in prediction accuracy between the microphone combinations (p > 0.05). The machine learning model predicted the ground reaction force profiles with a mean Pearson correlation coefficient of 0.99 (range 0.79-0.999), a mean relative root-mean-square error of 9.96% (range 2-23%), and a mean accuracy of 77% in classifying rearfoot versus forefoot strikes. Our results demonstrate the feasibility of using footstep sounds, combined with machine learning algorithms based on Fourier transforms, to predict ground reaction force curves. The results are encouraging for the development of wearable technology to assess ground reaction force profiles in runners, in the interests of injury prevention and performance optimization.
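The pipeline described here (Fourier-transform features from footstep audio feeding a machine learning model, validated leave-one-subject-out) can be sketched in miniature. Everything below is synthetic and illustrative: the data are random stand-ins tied to an invented linear audio-to-force relationship, and ridge regression is simply one convenient learner, not necessarily the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)
N_COEF, N_STEPS, CLIP, GRF_LEN, N_SUBJ = 16, 20, 256, 50, 10

def fft_features(audio):
    """Magnitudes of the first N_COEF Fourier coefficients of a footstep clip."""
    return np.abs(np.fft.rfft(audio))[:N_COEF]

# One shared (invented) linear audio->GRF relationship, so there is
# something for the model to learn across subjects.
W_TRUE = rng.normal(size=(N_COEF, GRF_LEN))

def make_subject():
    audio = rng.normal(size=(N_STEPS, CLIP))           # 20 footstep "recordings"
    grf = np.array([fft_features(a) @ W_TRUE for a in audio])
    return audio, grf + 0.01 * rng.normal(size=grf.shape)

subjects = [make_subject() for _ in range(N_SUBJ)]

def ridge_fit(X, Y, lam=1e-3):
    """Closed-form ridge regression: solve (X'X + lam*I) W = X'Y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

# Leave-one-subject-out evaluation, as in the paper's validation scheme.
corrs = []
for held_out in range(N_SUBJ):
    X_tr = np.vstack([[fft_features(a) for a in subjects[s][0]]
                      for s in range(N_SUBJ) if s != held_out])
    Y_tr = np.vstack([subjects[s][1] for s in range(N_SUBJ) if s != held_out])
    W = ridge_fit(X_tr, Y_tr)
    X_te = np.array([fft_features(a) for a in subjects[held_out][0]])
    preds = X_te @ W
    r = [np.corrcoef(p, t)[0, 1] for p, t in zip(preds, subjects[held_out][1])]
    corrs.append(float(np.mean(r)))

print(f"mean leave-one-subject-out Pearson r = {np.mean(corrs):.3f}")
```

On this synthetic data the held-out correlations are high because the audio-force link is linear by construction; real footstep audio would require the richer spectral features and models the paper evaluates.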
Affiliation(s)
- John Rasmussen
- Department of Materials and Production, Aalborg University, DK-9220 Aalborg East, Denmark

25
Iosifyan M, Sidoroff-Dorso A, Wolfe J. Cross-modal associations between paintings and sounds: Effects of embodiment. Perception 2022; 51:871-888. PMID: 36217800; PMCID: PMC9720465; DOI: 10.1177/03010066221126452.
Abstract
The present study investigated cross-modal associations between a series of paintings and sounds. We examined the effects of sound congruency (congruent vs. non-congruent sounds) and embodiment (embodied vs. synthetic sounds) on the evaluation of abstract and figurative paintings. Participants evaluated figurative and abstract paintings paired with congruent and non-congruent embodied and synthetic sounds, rating each painting's perceived meaningfulness, aesthetic value, and immersive experience. Embodied sounds (sounds associated with bodily sensations, bodily movements, and touch) were more strongly associated with figurative paintings, while synthetic (non-embodied) sounds were more strongly associated with abstract paintings. Sound congruency increased the perceived meaningfulness, immersive experience, and aesthetic value of the paintings; sound embodiment increased their immersive experience.
Affiliation(s)
- Judith Wolfe
- School of Divinity, University of St Andrews, UK

26
Janicka W, Wilk I, Próchniak T, Janczarek I. Can Sound Alone Act as a Virtual Barrier for Horses? A Preliminary Study. Animals (Basel) 2022; 12:ani12223151. PMID: 36428379; PMCID: PMC9686701; DOI: 10.3390/ani12223151.
Abstract
Virtual fencing is an innovative alternative to conventional fences, and different systems have been studied, including electric-impulse-free ones. We tested the potential of a self-applied acoustic stimulus to deter horses from further movement. Thirty warmblood horses were individually introduced to a designated corridor leading toward a food reward (variant F) or a familiar horse (variant S). As a subject reached a distance of 30, 15, or 5 m from the finish line, an acute alarming sound was played. In general, a sudden, unfamiliar sound was perceived by the horses as a threat, causing increased vigilance and sympathetic activation. The horses' behaviour and the barrier's effectiveness (80% for F vs. 20% for S) depended on the motivator, while the cardiac response, indicating some level of stress, was similar in both variants. The motivation for social interaction was too strong to stop the horses from crossing the designated boundary. Conversely, the sound exposure distance did not affect barrier effectiveness, but it did differentiate heart rate variability (HRV) responses, with the strongest sympathetic activation noted at a distance of 5 m; the timing of a sound playback therefore has important welfare implications. Given the limited potential of sound as a virtual barrier, auditory cues cannot be used as an alternative to conventional fencing.
Affiliation(s)
- Wiktoria Janicka
- Department of Horse Breeding and Use, Faculty of Animal Sciences and Bioeconomy, University of Life Sciences in Lublin, 20-950 Lublin, Poland
- Izabela Wilk (Correspondence)
- Department of Horse Breeding and Use, Faculty of Animal Sciences and Bioeconomy, University of Life Sciences in Lublin, 20-950 Lublin, Poland
- Tomasz Próchniak
- Institute of Biological Basis of Animal Production, Faculty of Animal Sciences and Bioeconomy, University of Life Sciences in Lublin, 20-950 Lublin, Poland
- Iwona Janczarek
- Department of Horse Breeding and Use, Faculty of Animal Sciences and Bioeconomy, University of Life Sciences in Lublin, 20-950 Lublin, Poland

27
Ascari E, Cerchiai M, Fredianelli L, Licitra G. Statistical Pass-By for Unattended Road Traffic Noise Measurement in an Urban Environment. Sensors (Basel) 2022; 22:8767. PMID: 36433368; PMCID: PMC9695770; DOI: 10.3390/s22228767.
Abstract
Low-noise road surfaces have become a common mitigation action in the last decade, and different measurement methods have been established to evaluate their efficacy. Among these, the Close Proximity (CPX) method evaluates noise emission by means of multiple runs at different speeds performed with a vehicle equipped with a reference tire and acoustic sensors close to the wheel. However, CPX is source-oriented: its analysis does not consider the real traffic flow at the studied site, as a receiver-oriented approach would. These aspects are addressed by the Statistical Pass-By (SPB) method, which is based on live detection of vehicle pass-by events, with noise and speed acquisitions performed at the roadside in real scenarios. Unfortunately, the measurement-setup requirements of the standard SPB method do not allow evaluation in an urban context unless a special setup is used, which may alter the acoustic context of the measurement. The present paper illustrates the testing and validation of a method named Urban Pass-By (U-SPB), developed during the LIFE NEREiDE project. U-SPB originates from standard SPB, exploits unattended measurements, and adds an in-lab event detection and feature extraction procedure. U-SPB extends the before/after comparison of the efficacy of low-noise surfaces to the urban context, while combining the estimation of long-term noise levels and traffic parameters for other environmental noise purposes, such as noise mapping and action planning.
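Pass-by methods such as SPB reduce each vehicle event to a maximum sound level and then average many events per vehicle category. Because decibel levels are logarithmic, that average must be energetic rather than arithmetic. A minimal sketch of this general acoustics convention (the pass-by levels are invented, and this is not the specific U-SPB procedure):

```python
import math

def energetic_mean(levels_db):
    """Energetic (logarithmic) mean of sound levels in dB:
    10*log10(mean(10^(L/10))). Sound levels add on an energy scale,
    so averaging pass-by maxima arithmetically would underestimate."""
    return 10 * math.log10(sum(10 ** (L / 10) for L in levels_db) / len(levels_db))

# Hypothetical maximum pass-by levels (dBA) from individual car events.
passbys = [71.2, 73.5, 69.8, 75.1, 72.4]
print(f"energetic mean:  {energetic_mean(passbys):.1f} dBA")
print(f"arithmetic mean: {sum(passbys) / len(passbys):.1f} dBA")
```

The energetic mean always lies at or above the arithmetic mean, with the gap growing as the spread between individual events increases.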
Affiliation(s)
- Elena Ascari
- Institute for Chemical-Physical Processes of the Italian Research Council (CNR-IPCF), Via Giuseppe Moruzzi 1, 56124 Pisa, Italy
- Mauro Cerchiai
- Environmental Protection Agency of Tuscany Region, Pisa Department, Via Vittorio Veneto 27, 56127 Pisa, Italy
- Luca Fredianelli
- Institute for Chemical-Physical Processes of the Italian Research Council (CNR-IPCF), Via Giuseppe Moruzzi 1, 56124 Pisa, Italy
- Gaetano Licitra
- Institute for Chemical-Physical Processes of the Italian Research Council (CNR-IPCF), Via Giuseppe Moruzzi 1, 56124 Pisa, Italy
- Environmental Protection Agency of Tuscany Region, Pisa Department, Via Vittorio Veneto 27, 56127 Pisa, Italy

28
Ikonen E. The Sonic Meanings of the Life and Death of My Disabled Brother: Listening as a Method for Qualitative Research. Qual Health Res 2022; 32:2019-2029. PMID: 36190174; DOI: 10.1177/10497323221130846.
Abstract
Long neglected, listening is an underdeveloped element in Western epistemology in general and in qualitative research in particular. However, recent developments in philosophy, sound art, anthropology, and qualitative research open promising pathways for mastering listening as a method and metaphor of inquiry in health research, where understanding the multiple layers of emotionally challenging experiences is crucial yet often elusive. Through a sonic analysis of autobiographical data about life, death, and my disabled brother, I try to demonstrate how paying attention to listening and sounds in research can add evolving layers of meaning to the data-gathering and analysis processes. Ultimately, such an analysis provides insight into a sibling's role as a caregiver, revealing mechanisms of nonconscious emotional coping, and into the acts of a disabled sibling as a teacher of subtle ways of listening.
Affiliation(s)
- Essi Ikonen
- Faculty of Arts, School of Culture and Society, Aarhus University, Aarhus, Denmark

29
Graham ME. Ambient ageism: Exploring ageism in acoustic representations of older adults in AgeTech advertisements. Front Sociol 2022; 7:1007836. PMID: 36299412; PMCID: PMC9588956; DOI: 10.3389/fsoc.2022.1007836.
Abstract
Ageing-in-place environments are increasingly marked by ambient digital technologies designed to keep older adults safe while they live independently at home. These AgeTech companies market their products by constructing imagined visual and aural worlds of the smart home, usually deploying ageist representations of ageing and older adults. The advertisements are multimodal, and while what is seen on screen is often considered most important in a visuo-centric western culture, scholars have argued that it is what audiences hear that has the greatest impact. The acoustic domain of AgeTech advertisements and its relationship to ageism in marketing has not yet been explored. Accordingly, this paper will address this gap by following Van Leeuwen's framework for critical analysis of musical discourse to explore what AgeTech companies say about ageing, older adults, and ageing-in-place technologies using sound in an illustrative set of smart home advertisements for ageing-in-place. The paper will discuss how music, voice, and sound are semiotic resources that are used to construct stereotypical (both negative and positive) portrayals of older adults, reinforce the narrative of "technology as saviour," and trouble the private/public boundaries of the ageing-in-place smart home.
Affiliation(s)
- Megan E. Graham
- Department of Sociology and Anthropology, Faculty of Arts and Social Sciences, Carleton University, Ottawa, ON, Canada
- Department of Sociology, Trent University, Peterborough, ON, Canada

30
Lin YHT, Hamid N, Shepherd D, Kantono K, Spence C. Musical and Non-Musical Sounds Influence the Flavour Perception of Chocolate Ice Cream and Emotional Responses. Foods 2022; 11:1784. PMID: 35741981; DOI: 10.3390/foods11121784.
Abstract
Auditory cues, such as real-world sounds or music, influence how we perceive food. The main aim of the present study was to investigate the influence of negatively and positively valenced mixtures of musical and non-musical sounds on participants' affective states and their perception of chocolate ice cream. Consuming ice cream while listening to liked music (LM) and while listening to liked music combined with pleasant sound (LMPS) gave rise to more positive emotions than listening to pleasant sound (PS) alone. Consuming ice cream during the LM condition resulted in the longest duration of perceived sweetness, whereas the PS and LMPS conditions resulted in cocoa dominating for longer. Bitterness and roasted notes were dominant under the disliked music and unpleasant sound (DMUS) and disliked music (DM) conditions, respectively. Positive emotions correlated well with the temporal sensory perception of sweetness and cocoa when consuming chocolate ice cream under the positively valenced auditory conditions; in contrast, negative emotions were associated with bitter and roasted tastes/flavours under the negatively valenced conditions. The combination of pleasant music and non-musical sound evoked more positive emotions than either presented in isolation. Taken together, these results support the view that sensory attributes correlate well with the emotions evoked when consuming ice cream under auditory conditions varying in valence.
31
Spence C, Di Stefano N. Coloured hearing, colour music, colour organs, and the search for perceptually meaningful correspondences between colour and sound. Iperception 2022; 13:20416695221092802. PMID: 35572076; PMCID: PMC9099070; DOI: 10.1177/20416695221092802.
Abstract
There has long been interest in the nature of the relationship(s) between hue and pitch or, in other words, between colour and musical/pure tones, stretching back at least as far as Newton, Goethe, Helmholtz, and beyond. In this narrative historical review, we take a closer look at the motivations behind the various assertions that have been made in the literature concerning the analogies, and possible perceptual similarities, between colour and sound. During the last century, a number of experimental psychologists have also investigated the nature of the correspondence between these two primary dimensions of perceptual experience. The multitude of crossmodal mappings that have been put forward over the centuries is summarized, and a distinction drawn between physical/structural and psychological correspondences, the latter further subdivided into perceptual and affective categories. Interest in physical correspondences has typically been motivated by the structural similarities (analogous mappings) between the organization of the perceptible dimensions of auditory and visual experience: emphasis has been placed both on the similar number of basic categories into which pitch and colour can be arranged and on the fact that both can be conceptualized as circular dimensions. A distinction is drawn between those commentators who have argued for a dimensional alignment of pitch and hue (based on a structural mapping) and those who appear to have been motivated instead by specific correspondences between particular pairs of auditory and visual stimuli (often, as we will see, based on the idiosyncratic correspondences reported by synaesthetes). Ultimately, though, the emotional-mediation account currently appears to provide the most parsimonious explanation for whatever affinity most people experience between musical sounds and colour.
32
Brengman M, Willems K, De Gauquier L. Customer Engagement in Multi-Sensory Virtual Reality Advertising: The Effect of Sound and Scent Congruence. Front Psychol 2022; 13:747456. PMID: 35386898; PMCID: PMC8977604; DOI: 10.3389/fpsyg.2022.747456.
Abstract
Despite the power of VR to immerse viewers in an experience, it generally targets them only via visual and auditory cues. Human beings use more senses to gather information, so the full potential of the medium is presumably not yet being tapped. This study contributes to answering two research questions: (1) how can conventional VR ads be enriched by also addressing the forgotten sense of smell?; and (2) does doing so indeed instill more engaging experiences? A 2 × 3 between-subjects study (n = 235) was conducted in which an existing branded VR commercial (Boursin Sensorium Experience) was augmented with "sound" (on/off) and (congruent/incongruent/no) "scents." The power of these sensory augmentations was evaluated by inspecting emotional, cognitive, and conative dimensions of customer engagement. The results identify product-scent congruence (with sound) as a deal-maker, although product-scent incongruence is not necessarily a deal-breaker. The article concludes with further research avenues and a translation into managerial implications.
Affiliation(s)
- Malaika Brengman
- Marketing & Consumer Behavior Cluster, Business Department, Faculty of Social Sciences & Solvay Business School, Vrije Universiteit Brussel, Brussel, Belgium
- Kim Willems
- Marketing & Consumer Behavior Cluster, Business Department, Faculty of Social Sciences & Solvay Business School, Vrije Universiteit Brussel, Brussel, Belgium
- Marketing & Strategy Department, Faculty of Business Economics, Hasselt University, Hasselt, Belgium
- Laurens De Gauquier
- Marketing & Consumer Behavior Cluster, Business Department, Faculty of Social Sciences & Solvay Business School, Vrije Universiteit Brussel, Brussel, Belgium

33
Barratt MJ, Maddox A, Smith N, Davis JL, Goold L, Winstock AR, Ferris JA. Who uses digital drugs? An international survey of 'binaural beat' consumers. Drug Alcohol Rev 2022; 41:1126-1130. PMID: 35353927; DOI: 10.1111/dar.13464.
Abstract
INTRODUCTION: Digital drugs, or binaural beats claimed to elicit specific cognitive or emotional states, are a phenomenon about which little is known. In this brief report, we describe demographic and drug use correlates of binaural beat use, patterns of use, reasons for use, and methods of access. METHODS: The Global Drug Survey 2021 was translated into 11 languages; 30 896 responses were gathered from 22 countries. RESULTS: The use of binaural beats to experience altered states was reported by 5.3% of the sample (median age 27; 60.5% male), with the highest rates in the United States, Mexico, Brazil, Poland, Romania, and the United Kingdom. Controlling for all variables, age and non-male gender predicted binaural beat use, as did recent use of cannabis, psychedelics, and novel/new drugs. Respondents most commonly used binaural beats 'to relax or fall asleep' (72.2%) and 'to change my mood' (34.7%), while 11.7% reported trying 'to get a similar effect to that of other drugs'. This latter motivation was more commonly reported among those who used classic psychedelics (16.5% vs. 7.9%; P < 0.001). The majority sought to connect with themselves (53.1%) or 'something bigger than themselves' (22.5%) through the experience. Binaural beats were accessed primarily through video streaming sites via mobile phones. DISCUSSION AND CONCLUSIONS: This paper establishes the existence of the phenomenon of listening to binaural beats to elicit changes in embodied and psychological states. Future research directions include the cultural context for consumption and proximate experiences, including co-use with ingestible drugs and other auditory phenomena.
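The phenomenon surveyed here is easy to make concrete: a binaural beat arises when each ear receives a pure tone at a slightly different frequency, and the listener perceives an illusory beat at the difference frequency. A minimal generation sketch (the carrier and beat frequencies are arbitrary illustrative choices, not values from the survey):

```python
import numpy as np

def binaural_beat(carrier_hz=200.0, beat_hz=10.0, seconds=5.0, sr=44100):
    """Return a stereo signal whose left and right channels differ in
    frequency by beat_hz; heard over headphones, the two channels reach
    separate ears and the brain perceives a beat at beat_hz."""
    t = np.arange(int(seconds * sr)) / sr
    left = np.sin(2 * np.pi * carrier_hz * t)
    right = np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)
    return np.stack([left, right], axis=1)  # shape: (samples, 2)

stereo = binaural_beat()
print(stereo.shape)
```

Writing `stereo` to a WAV file at the chosen sample rate and playing it over headphones at a moderate volume reproduces the effect; over loudspeakers the two tones mix acoustically and produce an ordinary (monaural) beat instead.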
Affiliation(s)
- Monica J Barratt
- Social and Global Studies Centre and Digital Ethnography Research Centre, RMIT University, Melbourne, Australia
- National Drug and Alcohol Research Centre, UNSW Sydney, Sydney, Australia
- Alexia Maddox
- Blockchain Innovation Hub, RMIT University, Melbourne, Australia
- Naomi Smith
- School of Arts, Humanities and Social Sciences, Federation University Australia, Gippsland, Australia
- Jenny L Davis
- School of Sociology, Australian National University, Canberra, Australia
- Lachlan Goold
- School of Business and Creative Industries, University of the Sunshine Coast, Sunshine Coast, Australia
- Adam R Winstock
- Institute of Epidemiology and Health Care, University College London, London, UK
- Global Drug Survey, London, UK
- Jason A Ferris
- Centre for Health Services Research, The University of Queensland, Brisbane, Australia

34
Zeng Z, Koponen LM, Hamdan R, Li Z, Goetz SM, Peterchev AV. Modular multilevel TMS device with wide output range and ultrabrief pulse capability for sound reduction. J Neural Eng 2022; 19. PMID: 35189604; PMCID: PMC9425059; DOI: 10.1088/1741-2552/ac572c.
Abstract
Objective: This article presents a novel transcranial magnetic stimulation (TMS) pulse generator with a wide range of pulse shapes, amplitudes, and widths. Approach: Based on a modular multilevel TMS (MM-TMS) topology we had proposed previously, we realized the first such device operating at full TMS energy levels. It consists of ten cascaded H-bridge modules, each implemented with insulated-gate bipolar transistors, enabling both novel high-amplitude ultrabrief pulses and pulses with conventional amplitude and duration. The MM-TMS device can output pulses comprising up to 21 voltage levels with a step size of up to 1100 V, allowing relatively flexible generation of various pulse waveforms and sequences. The circuit further allows charging the energy storage capacitor on each of the ten cascaded modules with a conventional TMS power supply. Main results: The MM-TMS device can output peak coil voltages and currents of 11 kV and 10 kA, respectively, enabling suprathreshold ultrabrief pulses (>8.25 μs active electric field phase). Further, it can generate a wide range of near-rectangular monophasic and biphasic pulses, as well as more complex staircase-approximated sinusoidal, polyphasic, and amplitude-modulated pulses. At matched estimated stimulation strength, briefer pulses emit less sound, which could enable quieter TMS. Finally, the MM-TMS device can instantaneously increase or decrease the amplitude from one pulse to the next in discrete steps by adding or removing modules in series, enabling rapid pulse sequences and paired-pulse protocols with variable pulse shapes and amplitudes. Significance: The MM-TMS device allows unprecedented control of pulse characteristics, which could enable novel protocols and quieter pulses.
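The 21-level figure follows directly from the cascaded-H-bridge arithmetic: each of the 10 modules contributes -1100 V, 0, or +1100 V, so the series stack reaches 2 x 10 + 1 = 21 distinct levels and an 11 kV peak. A small numeric sketch of the staircase-approximated sinusoid mentioned in the abstract (illustrative geometry only, not the device's actual control algorithm):

```python
import numpy as np

def staircase_sine(n_modules=10, v_step=1100.0, n_samples=200):
    """Approximate one sine period using only the discrete voltage levels a
    stack of n_modules cascaded H-bridges can produce: each module adds
    -v_step, 0, or +v_step, so the output has 2*n_modules + 1 levels."""
    t = np.linspace(0, 2 * np.pi, n_samples, endpoint=False)
    target = n_modules * v_step * np.sin(t)        # ideal continuous waveform
    return np.round(target / v_step) * v_step      # snap to nearest reachable level

wave = staircase_sine()
print(f"distinct levels used: {len(np.unique(wave))}")
print(f"peak output: {wave.max():.0f} V")
```

Doubling the number of modules would double the level count and halve the relative step size, trading hardware complexity for a smoother staircase.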
Affiliation(s)
- Zhiyong Zeng
- Department of Psychiatry and Behavioral Sciences, Duke University, Durham, NC, 27710, United States of America
- Lari M Koponen
- Department of Psychiatry and Behavioral Sciences, Duke University, Durham, NC, 27710, United States of America
- Rena Hamdan
- Department of Psychiatry and Behavioral Sciences, Duke University, Durham, NC, 27710, United States of America
- Zhongxi Li
- Department of Electrical and Computer Engineering, Duke University, Durham, NC, 27708, United States of America
- Stefan M Goetz
- Department of Psychiatry and Behavioral Sciences, Duke University, Durham, NC, 27710, United States of America
- Department of Electrical and Computer Engineering, Duke University, Durham, NC, 27708, United States of America
- Duke Institute for Brain Sciences, Duke University, Durham, NC, 27708, United States of America
- Department of Neurosurgery, Duke University, Durham, NC, 27710, United States of America
- Angel V Peterchev
- Department of Psychiatry and Behavioral Sciences, Duke University, Durham, NC, 27710, United States of America
- Department of Electrical and Computer Engineering, Duke University, Durham, NC, 27708, United States of America
- Duke Institute for Brain Sciences, Duke University, Durham, NC, 27708, United States of America
- Department of Neurosurgery, Duke University, Durham, NC, 27710, United States of America
- Department of Biomedical Engineering, Duke University, Durham, NC, 27708, United States of America

35
Mlynski R. Headphone Audio in Training Systems or Systems That Convey Important Sound Information. Int J Environ Res Public Health 2022; 19:2579. PMID: 35270271; DOI: 10.3390/ijerph19052579.
Abstract
In the work environment, miniature electroacoustic transducers are often used for communication, for the transmission of warning signals, or during training. They can be used in headphones or mounted in personal protective equipment, and it is often important that they reproduce sounds accurately. The purpose of this work was to assess audio strips by comparing the frequency response of the signal at the electrical outputs of six common-purpose devices, and to assess the level of noise exposure with respect to the risk of hearing damage. The following headphones were investigated: low-budget closed-back, open-back for instant messengers, open-back for music, and in-ear. A head and torso simulator with a transfer function was used. The most uniform frequency response at the electrical outputs was found in smartphones. Sound cards integrated into laptop motherboards had highly unequal characteristics (variations of up to 23 dB), and in one laptop the upper limit of the transmitted frequency range was the 12,500 Hz band. An external sound card or wireless headphones can improve the situation. In the worst-case scenario, i.e., rock music, the safe listening time was limited to 2 h and 18 min.
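A safe-listening-time figure like "2 h and 18 min" typically comes from the equal-energy principle used in occupational noise assessment: with a 3 dB exchange rate, every 3 dB above the criterion level halves the permissible exposure time. A small sketch under the common 85 dBA / 8 h criterion (the criterion and the example levels are illustrative assumptions, not values taken from this paper):

```python
def allowed_minutes(laeq_db, criterion_db=85.0, criterion_hours=8.0, exchange_db=3.0):
    """Permissible daily exposure time under the equal-energy principle:
    each exchange_db increase above criterion_db halves the allowed time."""
    return criterion_hours * 60.0 / 2 ** ((laeq_db - criterion_db) / exchange_db)

for level in (85, 88, 91, 94):
    print(f"{level} dBA -> {allowed_minutes(level):.0f} min")

# An equivalent level of roughly 90.4 dBA gives about 138 min, i.e. ~2 h 18 min.
print(f"90.4 dBA -> {allowed_minutes(90.4):.0f} min")
```

The same formula, run backwards, recovers the approximate listening level implied by any reported safe-duration figure.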
36
Sivakumaran K, Ritonja JA, Waseem H, AlShenaibar L, Morgan E, Ahmadi SA, Denning A, Michaud DS, Morgan RL. Impact of Noise Exposure on Risk of Developing Stress-Related Health Effects Related to the Cardiovascular System: A Systematic Review and Meta-Analysis. Noise Health 2022; 24:107-129. PMID: 36124520; PMCID: PMC9743313; DOI: 10.4103/nah.nah_83_21.
Abstract
Background: Exposure to acute noise can cause an increase in biological stress reactions, which provides biological plausibility for a potential association between sustained noise exposure and stress-related health effects. However, the certainty in the evidence for an association between noise exposure and short- and long-term biomarkers of stress has not been widely explored. The objective of this review was to evaluate the strength of evidence between noise exposure and changes in the biological parameters known to contribute to the development of stress-related adverse cardiovascular responses. Materials and Methods: This systematic review comprises English-language comparative studies available in the PubMed, Cochrane Central, EMBASE, and CINAHL databases from January 1, 1980 to December 29, 2021. Where possible, random-effects meta-analyses were used to examine the effect of noise exposure from various sources on stress-related cardiovascular biomarkers. The risk of bias of individual studies was assessed using the risk of bias in nonrandomized studies of exposures instrument. The certainty of the body of evidence for each outcome was assessed using the Grading of Recommendations Assessment, Development, and Evaluation approach. Results: The search identified 133 primary studies reporting on blood pressure, hypertension, heart rate, cardiac arrhythmia, vascular resistance, and cardiac output. Meta-analyses of blood pressure, hypertension, and heart rate suggested there may be signals of increased risk in response to a higher noise threshold or incrementally higher levels of noise. Across all outcomes, the certainty of the evidence was very low due to concerns with the risk of bias, inconsistency across exposure sources, populations, and studies, and imprecision in the estimates of effects.
Conclusions: This review identifies that exposure to higher levels of noise may increase the risk of some short- and long-term cardiovascular events; however, the certainty of the evidence was very low. This likely reflects the inability to compare across the totality of the evidence for each outcome, underscoring the value of continued research in this area. Findings from this review may be used to inform policies on noise reduction or mitigation interventions.
Affiliation(s)
- Kapeena Sivakumaran
- Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada; Evidence Foundation, Cleveland Heights, OH, USA
- Jennifer A. Ritonja
- Université de Montréal Hospital Research Centre (CRCHUM), Montreal, QC, Canada; Department of Social and Preventive Medicine, Université de Montréal, Montreal, QC, Canada
- Haya Waseem
- Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada; Evidence Foundation, Cleveland Heights, OH, USA
- Leena AlShenaibar
- Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada; Evidence Foundation, Cleveland Heights, OH, USA
- Elissa Morgan
- Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada; Evidence Foundation, Cleveland Heights, OH, USA
- Salman A. Ahmadi
- Department of Public Health Sciences, Queen’s University, Kingston, ON, Canada
- Allison Denning
- Health Canada, Environmental and Radiation Health Sciences Directorate, Consumer & Clinical Radiation Protection Bureau, Ottawa, ON, Canada
- David S. Michaud
- Health Canada, Environmental and Radiation Health Sciences Directorate, Consumer & Clinical Radiation Protection Bureau, Ottawa, ON, Canada; Address for correspondence: David S. Michaud, 775 Brookfield Road, Ottawa, ON, K1A1C1, Canada
- Rebecca L. Morgan
- Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada; Evidence Foundation, Cleveland Heights, OH, USA
37
Sivakumaran K, Ritonja JA, Waseem H, AlShenaibar L, Morgan E, Ahmadi SA, Denning A, Michaud D, Morgan RL. Impact of Noise Exposure on Risk of Developing Stress-Related Obstetric Health Effects: A Systematic Review and Meta-Analysis. Noise Health 2022; 24:137-144. PMID: 36124522; PMCID: PMC9743309; DOI: 10.4103/nah.nah_22_22.
Abstract
Background: Exposure to noise can increase biological stress reactions, which could increase the risk of stress-related prenatal effects, including adverse obstetric outcomes; however, the association between noise exposure and adverse obstetric outcomes has not been extensively explored. The objective of this review was to evaluate the evidence on the association between noise exposure and adverse obstetric outcomes, specifically preeclampsia, gestational diabetes, and gestational hypertension. Materials and Methods: A systematic review of English-language comparative studies available in the PubMed, Cochrane Central, EMBASE, and CINAHL databases between January 1, 1980 and December 29, 2021 was performed. Risk of bias for individual studies was assessed using the Risk of Bias Instrument for Nonrandomized Studies of Exposures, and the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) approach was used to assess the certainty of the body of evidence for each outcome. Results: Six studies reporting on preeclampsia, gestational diabetes, and gestational hypertension were identified. Although some studies suggested there may be signals of increased responses to increased noise exposure for preeclampsia and gestational hypertension, the certainty in the evidence of an effect of increased noise on all outcomes was very low due to concerns with risk of bias, inconsistency across studies, and imprecision in the effect estimates. Conclusions: While the certainty of the evidence for noise exposure and adverse obstetric outcomes was very low, the findings from this review may be useful for directing further research in this area, as there is currently limited evidence available. These findings may also be useful for informing guidelines and policies involving noise exposure situations or environments.
Affiliation(s)
- Kapeena Sivakumaran
- Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Ontario, Canada; Evidence Foundation, Cleveland Heights, Ohio, USA
- Jennifer A. Ritonja
- University of Montreal Hospital Research Centre (CRCHUM), Montreal, Quebec, Canada; Department of Social and Preventive Medicine, University of Montreal, Montreal, Quebec, Canada
- Haya Waseem
- Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Ontario, Canada; Evidence Foundation, Cleveland Heights, Ohio, USA
- Leena AlShenaibar
- Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Ontario, Canada; Evidence Foundation, Cleveland Heights, Ohio, USA
- Elissa Morgan
- Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Ontario, Canada; Evidence Foundation, Cleveland Heights, Ohio, USA
- Salman A. Ahmadi
- Department of Public Health Sciences, Queen’s University, Kingston, Ontario, Canada
- Allison Denning
- Health Canada, Environmental and Radiation Health Sciences Directorate, Consumer and Clinical Radiation Protection Bureau, Ottawa, Ontario, Canada
- David Michaud
- Health Canada, Environmental and Radiation Health Sciences Directorate, Consumer and Clinical Radiation Protection Bureau, Ottawa, Ontario, Canada; Address for correspondence: David S. Michaud, Research Scientist, 775 Brookfield Road, Ottawa, Ontario, K1A1C1, Canada
- Rebecca L. Morgan
- Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Ontario, Canada; Evidence Foundation, Cleveland Heights, Ohio, USA
38
Babak KV, Fomenko LP, Fomenko AN, Fraifeld VE. [Visual and acoustic interventions for improving mental health in advanced age]. Adv Gerontol 2022; 35:559-568. PMID: 36401866.
Abstract
Data accumulated in recent years indicate that certain visual and acoustic interventions have geroprotective potential. Among them are bright light, white noise, and rhythmic sensory stimulation (flickering light, binaural rhythms). Notably, visual and acoustic interventions are simple to use and safe, have practically no adverse side effects, and do not require special medical supervision. Here, we review studies on the use of visual and acoustic interventions for improving mental health in advanced age and age-related pathology. We also discuss possible mechanisms of their therapeutic action and directions for future investigation.
Affiliation(s)
- K V Babak
- OOO «Vechnaya molodost», 1 Tchaikovsky str., St. Petersburg 191187, Russian Federation
- L P Fomenko
- OOO «Vechnaya molodost», 1 Tchaikovsky str., St. Petersburg 191187, Russian Federation
- A N Fomenko
- OOO «Vechnaya molodost», 1 Tchaikovsky str., St. Petersburg 191187, Russian Federation
- V E Fraifeld
- Ben-Gurion University of the Negev, Beer Sheva 8410501, Israel
39
Carlini A, Bigand E. Does Sound Influence Perceived Duration of Visual Motion? Front Psychol 2021; 12:751248. PMID: 34925155; PMCID: PMC8675101; DOI: 10.3389/fpsyg.2021.751248.
Abstract
Multimodal perception is a key factor in obtaining a rich and meaningful representation of the world. However, how each stimulus combines to determine the overall percept remains a matter of research. The present work investigates the effect of sound on the bimodal perception of motion. A visual moving target was presented to the participants, associated with a concurrent sound, in a time reproduction task. Particular attention was paid to the structure of both the auditory and the visual stimuli. Four different laws of motion were tested for the visual motion, one of which was biological. Nine different sound profiles were tested, from a simple constant sound to more variable and complex pitch profiles, always presented synchronously with the motion. Participants' responses show that constant sounds produce the worst duration-estimation performance, even worse than the silent condition; more complex sounds, by contrast, yield significantly better performance. The structure of the visual stimulus and that of the auditory stimulus appear to influence performance independently. Biological motion provides the best performance, while motion with a constant-velocity profile provides the worst. Results clearly show that a concurrent sound influences the unified perception of motion; the type and magnitude of the bias depend on the structure of the sound stimulus. Contrary to expectations, the best performance is generated not by the simplest stimuli, but by more complex stimuli that are richer in information.
Affiliation(s)
- Alessandro Carlini
- Laboratory for Research on Learning and Development, CNRS UMR 5022, University of Burgundy, Dijon, France
- Emmanuel Bigand
- Laboratory for Research on Learning and Development, CNRS UMR 5022, University of Burgundy, Dijon, France
40
Kim JW, Shin J, Lee K, Won TB, Rhee CS, Cho SW. Prediction of Oxygen Desaturation by Using Sound Data From a Noncontact Device: A Proof-of-Concept Study. Laryngoscope 2021; 132:901-905. PMID: 34873695; DOI: 10.1002/lary.29971.
Abstract
OBJECTIVES/HYPOTHESIS Prediction of the apnea-hypopnea index (AHI) from breathing sounds during sleep could be used to prescreen for obstructive sleep apnea (OSA). In addition, the oxygen desaturation index (ODI) is a known risk factor for developing cardiovascular disease in OSA patients. This study focused on noncontact estimation of the ODI from sleep breathing sounds. STUDY DESIGN Retrospective study. METHODS Patients who visited the sleep center because of snoring or sleep apnea underwent overnight in-laboratory polysomnography. Sound recordings were made during polysomnography using a microphone. After noise reduction, the sound data were segmented into 5-second windows and features were extracted. Binary classification and regression analyses were performed to estimate the ODI during sleep (model 1). This was retested after inclusion of body mass index (BMI) and age as additional features (model 2: BMI only; model 3: BMI and age). RESULTS We included 116 patients. The mean age and AHI of all patients were 50.4 ± 16.7 years and 23.0 ± 24.0 events/hr. In binary classification, for ODI cutoff values of 5, 15, and 30 events/hr, the areas under the curve were 0.88, 0.93, and 0.91, respectively, and accuracies were 85.34%, 86.21%, and 87.07%, respectively. In regression analysis, the correlation coefficient and mean absolute error were 0.80 and 9.60 events/hr, respectively. In models 2 and 3, the correlation coefficient and mean absolute error were 0.82 and 9.44 events/hr, and 0.81 and 9.6 events/hr, respectively. CONCLUSION Prediction of the ODI from sleep sounds appears feasible. Additional clinical features such as BMI may increase overall predictability. LEVEL OF EVIDENCE IV Laryngoscope, 2021.
Affiliation(s)
- Jeong-Whun Kim
- Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, South Korea; Sensory Organ Research Institute, Seoul National University Medical Research Center, Seoul, Korea
- Jaeyoung Shin
- Music and Audio Research Group, Graduate School of Convergence Science and Technology, Seoul National University, Suwon, South Korea
- Kyogu Lee
- Music and Audio Research Group, Graduate School of Convergence Science and Technology, Seoul National University, Suwon, South Korea
- Tae-Bin Won
- Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, South Korea; Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, South Korea
- Chae-Seo Rhee
- Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, South Korea; Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, South Korea; Sensory Organ Research Institute, Seoul National University Medical Research Center, Seoul, Korea
- Sung-Woo Cho
- Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, South Korea
41
Adadi P, Harris A, Bremer P, Silcock P, Ganley ARD, Jeffs AG, Eyres GT. The Effect of Sound Frequency and Intensity on Yeast Growth, Fermentation Performance and Volatile Composition of Beer. Molecules 2021; 26:7239. PMID: 34885824; DOI: 10.3390/molecules26237239.
Abstract
This study investigated the impact of varying sound conditions (frequency and intensity) on yeast growth, fermentation performance and production of volatile organic compounds (VOCs) in beer. Fermentations were carried out in plastic bags suspended in large water-filled containers fitted with underwater speakers. Ferments were subjected to either 200-800 or 800-2000 Hz at 124 and 140 dB @ 20 µPa. Headspace solid-phase microextraction (HS-SPME) coupled with gas chromatography-mass spectrometry (GC-MS) was used to identify and measure the relative abundance of the VOCs produced. Sound treatment had significant effects on the number of viable yeast cells in suspension at 10 and 24 h (p < 0.05), with control (silence) samples having the highest cell numbers. For wort gravity, there were significant differences between treatments at 24 and 48 h, with the silence control showing the lowest density before all ferments converged to the same final gravity at 140 h. A total of 33 VOCs were identified in the beer samples, including twelve esters, nine alcohols, three acids, three aldehydes, and six hop-derived compounds. Only the abundance of some alcohols showed any consistent response to the sound treatments. These results show that the application of audible sound via underwater transmission to a beer fermentation elicited limited changes to wort gravity and VOCs during fermentation.
42
Kraus N. Descending Control in the Auditory System: A Perspective. Front Neurosci 2021; 15:769192. PMID: 34733138; PMCID: PMC8558241; DOI: 10.3389/fnins.2021.769192.
Affiliation(s)
- Nina Kraus
- Departments of Communication Sciences, Neurobiology, and Otolaryngology, Northwestern University, Evanston, IL, United States
43
Li LPH, Han JY, Zheng WZ, Huang RJ, Lai YH. Improved Environment-Aware-Based Noise Reduction System for Cochlear Implant Users Based on a Knowledge Transfer Approach: Development and Usability Study. J Med Internet Res 2021; 23:e25460. PMID: 34709193; PMCID: PMC8587190; DOI: 10.2196/25460.
Abstract
BACKGROUND Cochlear implant technology is a well-known approach to help deaf individuals hear speech again and can improve speech intelligibility in quiet conditions; however, it still has room for improvement in noisy conditions. More recently, it has been shown that deep learning-based noise reduction, such as noise classification combined with a deep denoising autoencoder (NC+DDAE), can benefit the intelligibility performance of patients with cochlear implants compared to classical noise reduction algorithms. OBJECTIVE Following the successful implementation of the NC+DDAE model in our previous study, this study aimed to propose an advanced noise reduction system using knowledge transfer technology, called NC+DDAE_T; examine the proposed NC+DDAE_T noise reduction system using objective evaluations and subjective listening tests; and investigate which layer substitution of the knowledge transfer technology in the NC+DDAE_T noise reduction system provides the best outcome. METHODS Knowledge transfer technology was adopted to reduce the number of parameters of the NC+DDAE_T compared with the NC+DDAE. We investigated which layer should be substituted using short-time objective intelligibility and perceptual evaluation of speech quality scores, as well as t-distributed stochastic neighbor embedding to visualize the features in each model layer. Moreover, we enrolled 10 cochlear implant users for listening tests to evaluate the benefits of the newly developed NC+DDAE_T. RESULTS The experimental results showed that substituting the middle layer (ie, the second layer in this study) of the noise-independent DDAE (NI-DDAE) model achieved the best performance gain regarding short-time objective intelligibility and perceptual evaluation of speech quality scores. Therefore, the parameters of layer 3 in the NI-DDAE were chosen to be replaced, thereby establishing the NC+DDAE_T. Both objective and listening test results showed that the proposed NC+DDAE_T noise reduction system achieved performance similar to that of the previous NC+DDAE in several noisy test conditions, while requiring only a quarter of the number of parameters. CONCLUSIONS This study demonstrated that knowledge transfer technology can help reduce the number of parameters in an NC+DDAE while maintaining similar performance. This suggests that the proposed NC+DDAE_T model may reduce the implementation costs of this noise reduction system and provide more benefits for cochlear implant users.
Affiliation(s)
- Lieber Po-Hung Li
- Department of Otolaryngology, Cheng Hsin General Hospital, Taipei, Taiwan; Faculty of Medicine, Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Medical Research, China Medical University Hospital, China Medical University, Taichung, Taiwan; Department of Speech Language Pathology and Audiology, College of Health Technology, National Taipei University of Nursing and Health Sciences, Taipei, Taiwan
- Ji-Yan Han
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Wei-Zhong Zheng
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Ren-Jie Huang
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Ying-Hui Lai
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan
44
Mathiesen SL, Aadal L, Uldbæk ML, Astrup P, Byrne DV, Wang QJ. Music Is Served: How Acoustic Interventions in Hospital Dining Environments Can Improve Patient Mealtime Wellbeing. Foods 2021; 10:2590. PMID: 34828871; PMCID: PMC8622365; DOI: 10.3390/foods10112590.
Abstract
Eating-related challenges and discomforts arising from moderate acquired brain injury (ABI)—including physiological and cognitive difficulties—can interfere with patients’ eating experience and impede the recovery process. At the same time, external environmental factors have been shown to influence our mealtime experience. This experimental pilot study investigates whether redesigning the sonic environment in hospital dining areas can positively influence ABI patients’ (n = 17) nutritional state and mealtime experience. Using a three-phase between-subjects interventional design, we investigate the effects of installing sound-proofing materials and playing music during lunch meals at a specialised ABI hospital unit. Comprising both quantitative and qualitative research approaches and data acquisition methods, this project provides multidisciplinary and holistic insights into the importance of attending to sound in hospital surroundings. Our results demonstrate that improved acoustics and music playback during lunch meals might improve the mealtime atmosphere, patient well-being, and social interaction, potentially supporting patients’ food intake and nutritional state. The results are discussed in terms of potential future implications for the healthcare sector.
Affiliation(s)
- Signe Lund Mathiesen
- Department of Food Science, Faculty of Technical Sciences, Aarhus University, 8200 Aarhus N, Denmark
- Correspondence: Tel.: +45-2577-2779
- Lena Aadal
- Hammel Neurorehabilitation and Research Center, 8450 Hammel, Denmark; Department of Clinical Medicine, Faculty of Health, Aarhus University, 8000 Aarhus C, Denmark
- Peter Astrup
- Test and Development Center for Welfaretech, 8800 Viborg, Denmark
- Derek Victor Byrne
- Department of Food Science, Faculty of Technical Sciences, Aarhus University, 8200 Aarhus N, Denmark
- Qian Janice Wang
- Department of Food Science, Faculty of Technical Sciences, Aarhus University, 8200 Aarhus N, Denmark
45
Biag AD, Belen VL. Development and Psychometric Testing of a Self-Rated Scale Based on National Nursing Core Competency Standards. J Nurs Meas 2021:JNM-D-20-00049. PMID: 34518416; DOI: 10.1891/JNM-D-20-00049.
Abstract
BACKGROUND AND PURPOSE The objectives of this study were to develop a National Nursing Core Competency Standards (NNCCS)-based instrument and determine its construct validity and internal consistency reliability. METHODS A methodologic research design was used to validate the 59-item scale based on the responses of 600 nurses. The scale items were culled from the client care, leadership and management, and research competencies identified in the NNCCS. RESULTS The analyses confirmed 53 items and gave rise to a five-factor solution. The five dimensions are leadership, management, research, ethico-legal, and strategic competencies. CONCLUSIONS This initial psychometric testing provided evidence of acceptable validity and reliability for the proposed instrument. Further testing was recommended to strengthen the psychometric soundness of the instrument.
46
Dozio N, Maggioni E, Pittera D, Gallace A, Obrist M. Corrigendum: May I Smell Your Attention: Exploration of Smell and Sound for Visuospatial Attention in Virtual Reality. Front Psychol 2021; 12:749419. PMID: 34489845; PMCID: PMC8417906; DOI: 10.3389/fpsyg.2021.749419.
Abstract
[This corrects the article DOI: 10.3389/fpsyg.2021.671470.].
Affiliation(s)
- Nicolò Dozio
- Department of Mechanical Engineering, Politecnico di Milano, Milan, Italy; Sussex Computer-Human Interaction Lab, Department of Informatics, University of Sussex, Brighton, United Kingdom
- Emanuela Maggioni
- Sussex Computer-Human Interaction Lab, Department of Informatics, University of Sussex, Brighton, United Kingdom; Department of Computer Science, University College London, London, United Kingdom
- Dario Pittera
- Sussex Computer-Human Interaction Lab, Department of Informatics, University of Sussex, Brighton, United Kingdom; Ultraleap Ltd., Bristol, United Kingdom
- Alberto Gallace
- Mind and Behavior Technological Center - MibTec, University of Milano-Bicocca, Milan, Italy
- Marianna Obrist
- Sussex Computer-Human Interaction Lab, Department of Informatics, University of Sussex, Brighton, United Kingdom; Department of Computer Science, University College London, London, United Kingdom
47
Spence C. Musical Scents: On the Surprising Absence of Scented Musical/Auditory Events, Entertainments, and Experiences. Iperception 2021; 12:20416695211038747. PMID: 34589196; PMCID: PMC8474342; DOI: 10.1177/20416695211038747.
Abstract
The matching of scents with music is both one of the most natural (or intuitive) crossmodal correspondences and, at the same time, one of the least frequently explored combinations of senses in an entertainment and multisensory experiential design context. This narrative review highlights the various occasions over the last century or two when scents and sounds have coincided, and the various motivations of those who have chosen to bring these senses together: this has included everything from the masking of malodour to the matching of the semantic meaning or arousal potential of the two senses, through to the longstanding and recently reemerging interest in the crossmodal correspondences (now that they have been distinguished from the superficially similar phenomenon of synaesthesia, with which they were previously often confused). As such, there exist a number of ways in which these two senses can be combined into meaningful multisensory experiences that can potentially resonate with the public. Having explored the deliberate combination of scent and music (or sound) in everything from "scent-sory" marketing through to fragrant discos and olfactory storytelling, I end by summarizing some of the opportunities around translating such unusual multisensory experiences from the public to the private sphere. This will likely occur via the widespread dissemination of sensory apps that promise to convert (or translate) from one sense (likely scent) to another (e.g., music), as has, for example, already started to occur in the world of music selections matched to the flavour of specific wines.
Affiliation(s)
- Charles Spence
- Crossmodal Research Laboratory, University of Oxford, Oxford, United Kingdom
48
Rodríguez B, Arroyo C, Reyes LH, Reinoso-Carvalho F. Promoting Healthier Drinking Habits: Using Sound to Encourage the Choice for Non-Alcoholic Beers in E-Commerce. Foods 2021; 10:2063. PMID: 34574172; DOI: 10.3390/foods10092063.
Abstract
Important institutions, such as the World Health Organization, recommend reducing alcohol consumption by encouraging healthier drinking habits. This could be achieved, for example, by employing more effective promotion of non-alcoholic beverages. For such purposes, in this study, we assessed the role of experiential beer packaging sounds during the e-commerce experience of a non-alcoholic beer (NAB). Here, we designed two experiments. Experiment 1 evaluated the influence of different experiential beer packaging sounds on consumers' general emotions and sensory expectations. Experiment 2 assessed how the sounds that evoked more positive results in Experiment 1 would influence emotions and sensory expectations related to a NAB digital image. The obtained results revealed that a beer bottle pouring sound helped suppress some of the negativity that is commonly associated with the experience of a NAB. Based on such findings, brands and organizations interested in more effectively promoting NAB may feel encouraged to involve beer packaging sounds as part of their virtual shopping environments.
49
Abstract
Enrichment is presented to improve the welfare of captive animals but sound is frequently presented with the assumption that it is enriching without assessing individuals' preferences. Typically, presented sounds are unnatural and animals are unable to choose which sounds they can listen to or escape them. We examined preferences of three zoo-housed gorillas for six categories of sound. The gorillas selected unique icons on a computer touchscreen that initiated brief samples of silence, white noise, nature, animal, percussion, and electronic instrumental sounds. Following training, gorillas selected each sound paired with silence (Phase 2), each sound paired with each other sound (Phase 3), and one sound among all other sound categories (Phase 4). Initially, a single sound was associated with each icon, but additional exemplars of the category were added in phases 5-8. Preferences were generally stable and one gorilla showed a strong preference for silence. Although there were individual differences, a surprising general preference for unnatural over natural sounds was revealed. These results indicate the importance of assessing preferences for individuals before introducing auditory stimulation in captive habitats.
Affiliation(s)
- Jordyn Truax
- Department of Psychology, Oakland University, Rochester, MI, USA
- Jennifer Vonk
- Department of Psychology, Oakland University, Rochester, MI, USA
50
Restin T, Gaspar M, Bassler D, Kurtcuoglu V, Scholkmann F, Haslbeck FB. Newborn Incubators Do Not Protect from High Noise Levels in the Neonatal Intensive Care Unit and Are Relevant Noise Sources by Themselves. Children (Basel) 2021; 8:704. [PMID: 34438595 PMCID: PMC8394397 DOI: 10.3390/children8080704] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/25/2021] [Revised: 08/09/2021] [Accepted: 08/13/2021] [Indexed: 11/24/2022]
Abstract
BACKGROUND While meaningful sound exposure has been shown to be important for newborn development, an excess of noise can delay the proper development of the auditory cortex. AIM The aim of this study was to assess the acoustic environment of a preterm baby in an incubator on a neonatal intensive care unit (NICU). METHODS An empty but running incubator (Giraffe Omnibed, GE Healthcare) was used to evaluate the incubator frequency response with 60 measurements. In addition, a full day and night period outside and inside the incubator at the NICU of the University Hospital Zurich was acoustically analyzed. RESULTS The fan construction inside the incubator generates noise in the frequency range of 1.3-1.5 kHz with a weighted sound pressure level (SPL) of 40.5 dB(A). The construction of the incubator narrows the transmitted frequency spectrum of sound entering the incubator to lower frequencies, but it does not substantially attenuate transient noises such as alarms or the opening and closing of cabinet doors. Alarms, as generated by the monitors, the incubator, and additional devices, still pass to the newborn. CONCLUSIONS The incubator provides only insufficient protection from noise coming from the NICU. The transmitted frequency spectrum is changed, limiting the impact of NICU noise on the neonate, but also limiting the neonate's perception of voices. The incubator, in particular its fan, as well as alarms from patient monitors, are major sources of noise. Further optimizations with regard to sound exposure in the NICU, as well as studies on the role of the incubator as a noise source and modulator, are needed to meet preterm infants' multi-sensory needs.
Affiliation(s)
- Tanja Restin
- Department of Neonatology, Newborn Research Zurich, University Hospital Zurich, 8091 Zurich, Switzerland; (D.B.); (F.S.); (F.B.H.)
- Institute of Physiology, University of Zurich, 8057 Zurich, Switzerland; (M.G.); (V.K.)
- Mikael Gaspar
- Institute of Physiology, University of Zurich, 8057 Zurich, Switzerland; (M.G.); (V.K.)
- Dirk Bassler
- Department of Neonatology, Newborn Research Zurich, University Hospital Zurich, 8091 Zurich, Switzerland; (D.B.); (F.S.); (F.B.H.)
- Vartan Kurtcuoglu
- Institute of Physiology, University of Zurich, 8057 Zurich, Switzerland; (M.G.); (V.K.)
- Felix Scholkmann
- Department of Neonatology, Newborn Research Zurich, University Hospital Zurich, 8091 Zurich, Switzerland; (D.B.); (F.S.); (F.B.H.)
- Friederike Barbara Haslbeck
- Department of Neonatology, Newborn Research Zurich, University Hospital Zurich, 8091 Zurich, Switzerland; (D.B.); (F.S.); (F.B.H.)