1. Ketelhut S, Brand R, Martin-Niedecken AL, Hug D. Sounds and Sights of Motivation: Using Digital Encouragement and Dissociation Strategies during Cardiopulmonary Exercise Testing to Improve Patient Engagement and Diagnostic Quality. Sports Med Open 2025;11:43. PMID: 40274647; PMCID: PMC12022186; DOI: 10.1186/s40798-025-00847-4.
Abstract
The cardiopulmonary exercise test (CPET) is a fundamental assessment in sports and health science, crucial for evaluating physical fitness, tailoring training regimens, and diagnosing health conditions. An essential requirement of the test is that participants exert maximal effort, as insufficient effort can compromise the validity of the results. While reliable results are obtained in physically active individuals, reliability is not guaranteed in exercise-naïve, less fit, and clinical populations who lack experience in exerting themselves to exhaustion. This can result in inaccurate assessments, misdiagnoses, misinterpretation of intervention results, and unsuitable exercise recommendations. Various strategies, including verbal, audio, and video stimuli, are used to elicit maximal effort during exercise. While music and verbal encouragement are well studied, non-musical sound, video, virtual reality, and augmented reality remain underexplored, with inconsistent or absent CPET-specific guidelines. Surprisingly, innovative approaches that combine multisensory digital methods are lacking. Future research should systematically evaluate these strategies to create more immersive and engaging testing experiences, thereby increasing effort and standardizing encouragement. Adaptive audio-visual methods could improve test reliability, validity, and workflows while enhancing participant enjoyment. Realizing this potential requires interdisciplinary collaboration among sound, graphic, and video designers, exercise physiologists, and psychologists. By moving beyond conventional approaches, CPET could be transformed into a more engaging and effective tool for diverse populations.
Affiliation(s)
- Sascha Ketelhut
- Institute of Sport Science, University of Bern, Bremgartenstrasse 145, Bern, 3013, Switzerland.
- Ralf Brand
- Sport and Exercise Psychology, University of Potsdam, Potsdam, Germany
- Daniel Hug
- Institute for Computer Music and Sound Technology, Zurich University of the Arts, Zurich, Switzerland
2. Barkasi M, Bansal A, Jörges B, Harris LR. Online reach adjustments induced by real-time movement sonification. Hum Mov Sci 2024;96:103250. PMID: 38964027; DOI: 10.1016/j.humov.2024.103250.
Abstract
Movement sonification can improve motor control in both healthy individuals (e.g., when learning or refining a sport skill) and those with sensorimotor deficits (e.g., stroke patients and deafferented individuals). It is not known whether the improvements in motor control and learning derived from movement sonification are driven by feedback-based real-time ("online") trajectory adjustments, by adjustments to internal models over multiple trials, or by both. We searched for evidence of online trajectory adjustments (muscle twitches) in response to movement sonification feedback by comparing the kinematics and error of reaches made with online (i.e., real-time) versus terminal sonification feedback. Reaches made with online feedback were significantly jerkier than reaches made with terminal feedback, indicating increased muscle twitching (i.e., online trajectory adjustment). In a between-subjects design, online feedback was associated with better motor learning of a reach path and target than terminal feedback; however, in a within-subjects design, switching participants who had learned with online sonification feedback to terminal feedback was associated with a decrease in error. Thus, our results suggest that, with our task and sonification, movement sonification leads to online trajectory adjustments that improve internal models over multiple trials but are not themselves helpful online corrections.
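The smoothness comparison above rests on a jerk metric computed from the reach kinematics. The sketch below shows one minimal way to obtain such a metric from sampled hand positions; it is not the authors' analysis pipeline, and the sampling rate, RMS summary, and synthetic example data are assumptions for illustration.

```python
# Illustrative only: quantifying reach "jerkiness" from sampled hand positions.
# Not the authors' analysis code; sampling rate, smoothing, and normalization
# choices here are assumptions.
import numpy as np

def rms_jerk(positions: np.ndarray, fs: float) -> float:
    """Root-mean-square jerk of a reach.

    positions: (n_samples, 3) array of hand coordinates in metres.
    fs: sampling frequency in Hz.
    """
    dt = 1.0 / fs
    velocity = np.gradient(positions, dt, axis=0)      # m/s
    acceleration = np.gradient(velocity, dt, axis=0)   # m/s^2
    jerk = np.gradient(acceleration, dt, axis=0)       # m/s^3
    return float(np.sqrt(np.mean(np.sum(jerk**2, axis=1))))

# Example: compare a smooth and a "twitchy" synthetic reach sampled at 120 Hz.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 120)[:, None]
smooth = np.hstack([t, t**2, np.zeros_like(t)])
twitchy = smooth + 0.002 * rng.standard_normal(smooth.shape)
print(rms_jerk(smooth, 120.0), rms_jerk(twitchy, 120.0))
```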
Affiliation(s)
- Michael Barkasi
- Centre for Vision Research, York University, 4700 Keele Street, Toronto M3J 1P3, Ontario, Canada; Department of Neuroscience, Washington University School of Medicine in St. Louis, 660 S. Euclid Ave., St. Louis 63110-1010, MO, USA.
- Ambika Bansal
- Centre for Vision Research, York University, 4700 Keele Street, Toronto M3J 1P3, Ontario, Canada.
- Björn Jörges
- Centre for Vision Research, York University, 4700 Keele Street, Toronto M3J 1P3, Ontario, Canada.
- Laurence R Harris
- Centre for Vision Research, York University, 4700 Keele Street, Toronto M3J 1P3, Ontario, Canada.
3. Pinardi M, Di Stefano N, Di Pino G, Spence C. Exploring crossmodal correspondences for future research in human movement augmentation. Front Psychol 2023;14:1190103. PMID: 37397340; PMCID: PMC10308310; DOI: 10.3389/fpsyg.2023.1190103.
Abstract
"Crossmodal correspondences" are the consistent mappings between perceptual dimensions or stimuli from different sensory domains, which have been widely observed in the general population and investigated by experimental psychologists in recent years. At the same time, the emerging field of human movement augmentation (i.e., the enhancement of an individual's motor abilities by means of artificial devices) has been struggling with the question of how to relay supplementary information concerning the state of the artificial device and its interaction with the environment to the user, which may help the latter to control the device more effectively. To date, this challenge has not been explicitly addressed by capitalizing on our emerging knowledge concerning crossmodal correspondences, despite these being tightly related to multisensory integration. In this perspective paper, we introduce some of the latest research findings on the crossmodal correspondences and their potential role in human augmentation. We then consider three ways in which the former might impact the latter, and the feasibility of this process. First, crossmodal correspondences, given the documented effect on attentional processing, might facilitate the integration of device status information (e.g., concerning position) coming from different sensory modalities (e.g., haptic and visual), thus increasing their usefulness for motor control and embodiment. Second, by capitalizing on their widespread and seemingly spontaneous nature, crossmodal correspondences might be exploited to reduce the cognitive burden caused by additional sensory inputs and the time required for the human brain to adapt the representation of the body to the presence of the artificial device. Third, to accomplish the first two points, the benefits of crossmodal correspondences should be maintained even after sensory substitution, a strategy commonly used when implementing supplementary feedback.
Affiliation(s)
- Mattia Pinardi
- NeXT Lab, Neurophysiology and Neuroengineering of Human-Technology Interaction Research Unit, Università Campus Bio-Medico di Roma, Rome, Italy
- Nicola Di Stefano
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
- Giovanni Di Pino
- NeXT Lab, Neurophysiology and Neuroengineering of Human-Technology Interaction Research Unit, Università Campus Bio-Medico di Roma, Rome, Italy
- Charles Spence
- Crossmodal Research Laboratory, University of Oxford, Oxford, United Kingdom
4. Matinfar S, Salehi M, Suter D, Seibold M, Dehghani S, Navab N, Wanivenhaus F, Fürnstahl P, Farshad M, Navab N. Sonification as a reliable alternative to conventional visual surgical navigation. Sci Rep 2023;13:5930. PMID: 37045878; PMCID: PMC10097653; DOI: 10.1038/s41598-023-32778-z.
Abstract
Despite the undeniable advantages of image-guided surgical assistance systems in terms of accuracy, such systems have not yet fully met surgeons' needs or expectations regarding usability, time efficiency, and integration into the surgical workflow. On the other hand, perceptual studies have shown that presenting independent but causally correlated information via multimodal feedback involving different sensory modalities can improve task performance. This article investigates an alternative method for computer-assisted surgical navigation, introduces a novel sonification methodology for navigated pedicle screw placement, and discusses advanced solutions based on multisensory feedback. The proposed method sonifies alignment tasks in four degrees of freedom using frequency modulation synthesis. We compared the accuracy and execution time of the proposed sonification method with those of visual navigation, which is currently considered the state of the art. In a phantom study, 17 surgeons executed a pedicle screw placement task in the lumbar spine, guided by either the proposed sonification-based method or the traditional visual navigation method. The results demonstrate that the proposed method is as accurate as the state of the art while reducing the surgeon's need to shift attention away from the surgical tools and targeted anatomy to visual navigation displays during task execution.
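As a rough illustration of the kind of frequency-modulation synthesis mapping described above, the sketch below renders a single normalized alignment error as an FM tone whose modulation index grows with the error. The parameter choices and the single-error mapping are assumptions for illustration and do not reproduce the authors' four-degree-of-freedom mapping.

```python
# Minimal FM-synthesis sonification sketch: larger alignment error -> richer,
# rougher spectrum. Carrier/modulator frequencies and the error-to-index
# mapping are illustrative assumptions, not the published parameters.
import numpy as np

def fm_tone(alignment_error: float, duration=0.2, fs=44100,
            carrier_hz=440.0, mod_hz=110.0, max_mod_index=8.0) -> np.ndarray:
    """Return an FM tone whose modulation index grows with alignment error.

    alignment_error: normalized error in [0, 1]; 0 = perfectly aligned.
    """
    t = np.arange(int(duration * fs)) / fs
    mod_index = max_mod_index * float(np.clip(alignment_error, 0.0, 1.0))
    # Classic FM synthesis: the carrier phase is modulated by a sinusoid.
    return np.sin(2 * np.pi * carrier_hz * t
                  + mod_index * np.sin(2 * np.pi * mod_hz * t))

# A well-aligned tool yields a pure tone; a misaligned one sounds increasingly
# bright and rough, giving a continuous auditory cue without a display.
clean = fm_tone(0.0)
rough = fm_tone(0.8)
```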
Affiliation(s)
- Sasan Matinfar
- Computer Aided Medical Procedures (CAMP), Technical University of Munich, 85748, Munich, Germany.
- Nuklearmedizin rechts der Isar, Technical University of Munich, 81675, Munich, Germany.
- Mehrdad Salehi
- Computer Aided Medical Procedures (CAMP), Technical University of Munich, 85748, Munich, Germany
- Daniel Suter
- Department of Orthopaedics, Balgrist University Hospital, 8008, Zurich, Switzerland
- Matthias Seibold
- Computer Aided Medical Procedures (CAMP), Technical University of Munich, 85748, Munich, Germany
- Research in Orthopedic Computer Science (ROCS), Balgrist University Hospital, University of Zurich, Balgrist Campus, 8008, Zurich, Switzerland
- Shervin Dehghani
- Computer Aided Medical Procedures (CAMP), Technical University of Munich, 85748, Munich, Germany
- Nuklearmedizin rechts der Isar, Technical University of Munich, 81675, Munich, Germany
- Navid Navab
- Topological Media Lab, Concordia University, Montreal, H3G 2W1, Canada
- Florian Wanivenhaus
- Department of Orthopaedics, Balgrist University Hospital, 8008, Zurich, Switzerland
- Philipp Fürnstahl
- Research in Orthopedic Computer Science (ROCS), Balgrist University Hospital, University of Zurich, Balgrist Campus, 8008, Zurich, Switzerland
- Mazda Farshad
- Department of Orthopaedics, Balgrist University Hospital, 8008, Zurich, Switzerland
- Nassir Navab
- Computer Aided Medical Procedures (CAMP), Technical University of Munich, 85748, Munich, Germany
5. Minciacchi D, Bravi R, Rosenboom D. Editorial: Sonification, aesthetic representation of physical quantities. Front Neurosci 2023;17:1162383. PMID: 37008216; PMCID: PMC10064135; DOI: 10.3389/fnins.2023.1162383.
Affiliation(s)
- Diego Minciacchi
- Physiological Sciences Section, Department of Experimental and Clinical Medicine, University of Florence, Florence, Italy
- Correspondence: Diego Minciacchi
- Riccardo Bravi
- Physiological Sciences Section, Department of Experimental and Clinical Medicine, University of Florence, Florence, Italy
- David Rosenboom
- The Herb Alpert School of Music, California Institute of the Arts, Valencia, CA, United States
6. Kibushi B, Okada J. Auditory sEMG biofeedback for reducing muscle co-contraction during pedaling. Physiol Rep 2022;10:e15288. PMID: 35611763; PMCID: PMC9131599; DOI: 10.14814/phy2.15288.
Abstract
Co-contraction of agonist and antagonist muscles often causes low energy efficiency or disturbs movement. Surface electromyography biofeedback (sEMG-BF) has been used to train muscle activation or relaxation, but it is unknown whether sEMG-BF reduces muscle co-contraction. We hypothesized that auditory sEMG-BF reduces muscle co-contraction, and our purpose was to investigate whether it is effective in doing so. Thirteen participants pedaled on a road bike under four auditory sEMG-BF conditions while surface electromyography was recorded from lower-limb muscles. Vastus lateralis (VL) and semitendinosus (ST) activities were individually transformed into distinct beep sounds. The four feedback conditions were no feedback, VL feedback, ST feedback, and combined VL and ST feedback. We compared the co-contraction index (COI) of the knee extensor-flexor muscles and of the hip flexor-extensor muscles across conditions. There were no significant differences in COIs among the conditions (p = 0.83 for the knee extensor-flexor COI; p = 0.32 for the hip flexor-extensor COI). To reduce muscle co-contraction with sEMG-BF, it may be necessary to feed back the co-contraction itself rather than individual muscle activations. We conclude that sEMG-BF of individual muscles does not immediately reduce muscle co-contraction during pedaling.
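For readers unfamiliar with the outcome measure, the sketch below computes a co-contraction index from two normalized sEMG envelopes of an antagonistic pair, following a common formulation (twice the overlapping activity divided by the summed activity). The exact COI definition and signal processing used in the paper are not reproduced here; the function and example data are illustrative.

```python
# Illustrative co-contraction index (COI) from two rectified, smoothed sEMG
# envelopes (e.g., VL and ST). Follows a common formulation; not necessarily
# the exact definition used in the paper.
import numpy as np

def co_contraction_index(emg_a: np.ndarray, emg_b: np.ndarray) -> float:
    """COI (%) over one pedaling cycle for an antagonistic muscle pair.

    emg_a, emg_b: normalized sEMG envelopes, same length, values >= 0.
    """
    overlap = np.minimum(emg_a, emg_b).sum()   # co-active (shared) area
    total = emg_a.sum() + emg_b.sum()          # combined activity
    return float(100.0 * 2.0 * overlap / total)

# Example with synthetic envelopes over one crank cycle (360 samples).
phase = np.linspace(0, 2 * np.pi, 360)
vl = np.clip(np.sin(phase), 0, None)                # extensor burst
st = np.clip(np.sin(phase + 0.8 * np.pi), 0, None)  # phase-shifted flexor burst
print(co_contraction_index(vl, st))
```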
Affiliation(s)
- Benio Kibushi
- Graduate School of Human Development and Environment, Kobe University, Kobe, Japan
- Junichi Okada
- Faculty of Sport Sciences, Waseda University, Tokorozawa, Saitama, Japan