1. Baltieri M, Iizuka H, Witkowski O, Sinapayen L, Suzuki K. Hybrid Life: Integrating biological, artificial, and cognitive systems. Wiley Interdiscip Rev Cogn Sci 2023; 14:e1662. PMID: 37403661. DOI: 10.1002/wcs.1662.
Abstract
Artificial life is a research field studying what processes and properties define life, based on a multidisciplinary approach spanning the physical, natural, and computational sciences. Artificial life aims to foster a comprehensive study of life beyond "life as we know it" and toward "life as it could be," with theoretical, synthetic, and empirical models of the fundamental properties of living systems. While still a relatively young field, artificial life has flourished as an environment for researchers with different backgrounds, welcoming ideas and contributions from a wide range of subjects. Hybrid Life brings attention to some of the most recent developments within the artificial life community, rooted in more traditional artificial life studies but looking at new challenges emerging from interactions with other fields. Hybrid Life aims to cover studies that can lead to an understanding, from first principles, of what systems are and of how biological and artificial systems can interact and integrate to form new kinds of hybrid (living) systems, individuals, and societies. To do so, it focuses on three complementary perspectives: theories of systems and agents, hybrid augmentation, and hybrid interaction. Theories of systems and agents are used to define systems, how they differ (e.g., biological or artificial, autonomous or nonautonomous), and how multiple systems relate in order to form new hybrid systems. Hybrid augmentation focuses on implementations of systems so tightly connected that they act as a single, integrated one. Hybrid interaction is centered on interactions within a heterogeneous group of distinct living and nonliving systems. After discussing some of the major sources of inspiration for these themes, we focus on an overview of the works that appeared in the Hybrid Life special sessions hosted by the annual Artificial Life Conference between 2018 and 2022.
This article is categorized under: Neuroscience > Cognition; Philosophy > Artificial Intelligence; Computer Science and Robotics > Robotics.
Affiliation(s)
- Manuel Baltieri: Araya Inc., Tokyo, Japan; Department of Informatics, University of Sussex, Brighton, UK
- Hiroyuki Iizuka: Faculty of Information Science and Technology, Hokkaido University, Sapporo, Japan; Center for Human Nature, Artificial Intelligence and Neuroscience (CHAIN), Hokkaido University, Sapporo, Japan
- Olaf Witkowski: Center for Human Nature, Artificial Intelligence and Neuroscience (CHAIN), Hokkaido University, Sapporo, Japan; Cross Labs, Cross Compass, Kyoto, Japan; College of Arts and Sciences, University of Tokyo, Tokyo, Japan
- Lana Sinapayen: Sony Computer Science Laboratories, Kyoto, Japan; National Institute for Basic Biology, Okazaki, Japan
- Keisuke Suzuki: Center for Human Nature, Artificial Intelligence and Neuroscience (CHAIN), Hokkaido University, Sapporo, Japan
2. Neugebauer A, Rifai K, Getzlaff M, Wahl S. Navigation aid for blind persons by visual-to-auditory sensory substitution: A pilot study. PLoS One 2020; 15:e0237344. PMID: 32818953. PMCID: PMC7446825. DOI: 10.1371/journal.pone.0237344.
Abstract
PURPOSE In this study, we investigate to what degree augmented reality technology can be used to create and evaluate a visual-to-auditory sensory substitution device that improves the performance of blind persons in navigation and recognition tasks. METHODS A sensory substitution algorithm that translates 3D visual information into audio feedback was designed. This algorithm was integrated into an augmented reality-based mobile phone application. Using the mobile device as a sensory substitution device, a study with blind participants (n = 7) was performed. The participants navigated through pseudo-randomized obstacle courses using either the sensory substitution device, a white cane, or a combination of both. In a second task, participants had to identify virtual 3D objects and structures using the same sensory substitution device. RESULTS The realized mobile application enabled participants to complete the navigation and object recognition tasks in an experimental environment within the first trials, without prior training. This demonstrates the general feasibility and low entry barrier of the designed sensory substitution algorithm. Within the ten-hour study duration, however, the sensory substitution device did not offer a statistically significant improvement in navigation compared with the white cane.
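The abstract does not spell out the translation itself, but the general shape of a visual-to-auditory mapping can be sketched. The toy function below is hypothetical and not the authors' algorithm: it sweeps a small depth grid column by column (left to right as time), maps row position to pitch and proximity to loudness, in the spirit of classic visual-to-auditory devices. The parameters `f_min`, `f_max`, and `max_range` are assumptions for illustration only.

```python
def sonify(depth, f_min=200.0, f_max=4000.0, max_range=3.5):
    """Toy visual-to-auditory mapping: sweep a depth grid column by
    column (column index = time slot), map row position to pitch
    (top row = highest frequency, log-spaced) and proximity to
    loudness. Empty cells (None) or cells beyond max_range metres
    stay silent."""
    rows, cols = len(depth), len(depth[0])
    events = []  # (time_slot, frequency_hz, amplitude) triples
    for c in range(cols):
        for r in range(rows):
            d = depth[r][c]
            if d is None or d > max_range:
                continue
            frac = 1.0 - r / (rows - 1) if rows > 1 else 1.0
            freq = f_min * (f_max / f_min) ** frac  # log-spaced pitch
            amp = 1.0 - d / max_range               # nearer -> louder
            events.append((c, round(freq, 1), round(amp, 2)))
    return events

# A 2x2 grid: obstacle at 0.5 m bottom-right, nothing top-right.
print(sonify([[1.0, None], [3.0, 0.5]]))
```

A real device would render each event as a sine tone or filtered noise burst; the sketch stops at the symbolic event list, which is enough to show how spatial layout becomes a time-frequency-loudness pattern.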
Affiliation(s)
- Alexander Neugebauer: ZEISS Vision Science Lab, Eberhard-Karls-University Tuebingen, Tübingen, Germany
- Katharina Rifai: ZEISS Vision Science Lab, Eberhard-Karls-University Tuebingen, Tübingen, Germany; Carl Zeiss Vision International GmbH, Aalen, Germany
- Mathias Getzlaff: Institute for Applied Physics, Heinrich-Heine University Duesseldorf, Duesseldorf, Germany
- Siegfried Wahl: ZEISS Vision Science Lab, Eberhard-Karls-University Tuebingen, Tübingen, Germany; Carl Zeiss Vision International GmbH, Aalen, Germany
3. Lloyd-Esenkaya T, Lloyd-Esenkaya V, O'Neill E, Proulx MJ. Multisensory inclusive design with sensory substitution. Cogn Res Princ Implic 2020; 5:37. PMID: 32770416. PMCID: PMC7415050. DOI: 10.1186/s41235-020-00240-7.
Abstract
Sensory substitution techniques exploit perceptual and cognitive phenomena to represent information from one sensory modality through an alternative one. Current applications of sensory substitution techniques typically focus on the development of assistive technologies whereby visually impaired users can acquire visual information via auditory and tactile cross-modal feedback. Yet despite their evident success in scientific research and in furthering theory development in cognition, sensory substitution techniques have not gained widespread adoption within sensory-impaired populations. Here we argue that shifting the focus from assistive to mainstream applications may resolve some of the current issues regarding the use of sensory substitution devices to improve outcomes for those with disabilities. This article provides a tutorial guide on how to use research into multisensory processing and sensory substitution techniques from the cognitive sciences to design new inclusive cross-modal displays. A greater focus on developing inclusive mainstream applications could lead to innovative technologies that can be enjoyed by every person.
Affiliation(s)
- Tayfun Lloyd-Esenkaya: Crossmodal Cognition Lab, University of Bath, Bath, BA2 7AY, UK; Department of Computer Science, University of Bath, Bath, UK
- Eamonn O'Neill: Department of Computer Science, University of Bath, Bath, UK
- Michael J Proulx: Crossmodal Cognition Lab, University of Bath, Bath, BA2 7AY, UK; Department of Psychology, University of Bath, Bath, UK
4. Chebat DR, Schneider FC, Ptito M. Spatial Competence and Brain Plasticity in Congenital Blindness via Sensory Substitution Devices. Front Neurosci 2020; 14:815. PMID: 32848575. PMCID: PMC7406645. DOI: 10.3389/fnins.2020.00815.
Abstract
In congenital blindness (CB), tactile and auditory information can be reinterpreted by the brain to compensate for visual information through mechanisms of brain plasticity triggered by training. Visual deprivation does not cause a cognitive spatial deficit, since blind people are able to acquire spatial knowledge about the environment; this spatial competence takes longer to achieve but is eventually reached through training-induced plasticity. Congenitally blind individuals can further improve their spatial skills with the extensive use of sensory substitution devices (SSDs), either visual-to-tactile or visual-to-auditory. Using a combination of functional and anatomical neuroimaging techniques, our recent work has demonstrated the impact of spatial training with both visual-to-tactile and visual-to-auditory SSDs on brain plasticity, cortical processing, and the achievement of certain forms of spatial competence. The comparison of performance between CB and sighted people using several different sensory substitution devices in perceptual and sensory-motor tasks uncovered the striking ability of the brain to rewire itself during perceptual learning and to interpret novel sensory information even during adulthood. We discuss here the implications of these findings for helping blind people in navigation tasks and for increasing their access to both real and virtual environments.
Affiliation(s)
- Daniel-Robert Chebat: Visual and Cognitive Neuroscience Laboratory (VCN Lab), Department of Psychology, Faculty of Social Sciences and Humanities, Ariel University, Ariel, Israel; Navigation and Accessibility Research Center of Ariel University (NARCA), Ariel, Israel
- Fabien C. Schneider: Department of Radiology, University of Lyon, Saint-Etienne, France; Neuroradiology Unit, University Hospital of Saint-Etienne, Saint-Etienne, France
- Maurice Ptito: BRAIN Lab, Department of Neuroscience and Pharmacology, University of Copenhagen, Copenhagen, Denmark; Chaire de Recherche Harland Sanders en Sciences de la Vision, École d’Optométrie, Université de Montréal, Montréal, QC, Canada
5. Cross-modal size-contrast illusion: Acoustic increases in intensity and bandwidth modulate haptic representation of object size. Sci Rep 2019; 9:14440. PMID: 31595003. PMCID: PMC6783429. DOI: 10.1038/s41598-019-50912-8.
Abstract
Changes in the retinal size of stationary objects provide a cue to the observer's motion in the environment: increases indicate forward motion, and decreases indicate backward motion. In this study, a series of images, each comprising a pair of pine-tree figures, was translated into the auditory modality using sensory substitution software. The resulting auditory stimuli were presented in an ascending sequence (i.e., increasing in intensity and bandwidth, compatible with forward motion), a descending sequence (i.e., decreasing in intensity and bandwidth, compatible with backward motion), or in a scrambled order. During the presentation of the stimuli, blindfolded participants estimated the lengths of wooden sticks by haptics. Results showed that those exposed to the stimuli compatible with forward motion underestimated the lengths of the sticks. This consistent underestimation may share some aspects with visual size-contrast effects such as the Ebbinghaus illusion. In contrast, participants in the other two conditions did not show such a magnitude of error in size estimation, which is consistent with the "adaptive perceptual bias" toward acoustic increases in intensity and bandwidth. In sum, we report a novel cross-modal size-contrast illusion, which reveals that auditory motion cues compatible with listeners' forward motion modulate haptic representations of object size.
6. Evaluation of an Audio-haptic Sensory Substitution Device for Enhancing Spatial Awareness for the Visually Impaired. Optom Vis Sci 2019; 95:757-765. PMID: 30153241. PMCID: PMC6133230. DOI: 10.1097/opx.0000000000001284.
Abstract
Supplemental digital content is available in the text. SIGNIFICANCE Visually impaired participants were surprisingly fast in learning a new sensory substitution device, which allows them to detect obstacles within a 3.5-m radius and to find the optimal path between them. Within a few hours of training, participants performed complex navigation as well as they did with the white cane. PURPOSE Globally, millions of people live with vision impairment, yet effective assistive devices to increase their independence remain scarce. A promising approach is the use of sensory substitution devices, human-machine interfaces that transform visual information into auditory or tactile information. The Sound of Vision (SoV) system continuously encodes visual elements of the environment into audio-haptic signals. Here, we evaluated the SoV system in complex navigation tasks to compare performance between the SoV system and the white cane, quantify training effects, and collect user feedback. METHODS Six visually impaired participants received eight hours of training with the SoV system, completed a usability questionnaire, and repeatedly performed assessments in which they navigated through standardized scenes. In each assessment, participants had to avoid collisions with obstacles using the SoV system, the white cane, or both assistive devices. RESULTS The results show rapid and substantial learning with the SoV system, with fewer collisions and higher obstacle awareness. After four hours of training, visually impaired people were able to avoid collisions in a difficult navigation task as successfully as when using the cane, although they still needed more time. Overall, participants rated the SoV system's usability favorably. CONCLUSIONS Unlike the cane, the SoV system enables users to detect the best free space between objects within a 3.5-m (up to 10-m) radius and, importantly, elevated and dynamic obstacles. All in all, we consider that visually impaired people can learn to adapt to the haptic-auditory representation and achieve expertise in its use through well-defined training within an acceptable time.
7. Richardson M, Thar J, Alvarez J, Borchers J, Ward J, Hamilton-Fletcher G. How Much Spatial Information Is Lost in the Sensory Substitution Process? Comparing Visual, Tactile, and Auditory Approaches. Perception 2019; 48:1079-1103. PMID: 31547778. DOI: 10.1177/0301006619873194.
Abstract
Sensory substitution devices (SSDs) can convey visuospatial information through spatialised auditory or tactile stimulation using wearable technology. However, the level of information loss associated with this transformation is unknown. In this study, novice users discriminated the location of two objects at 1.2 m using devices that transformed a 16 × 8 depth map into spatially distributed patterns of light, sound, or touch on the abdomen. Results showed that through active sensing, participants could discriminate the vertical position of objects to a visual angle of 1°, 14°, and 21°, and their distance to 2 cm, 8 cm, and 29 cm using these visual, auditory, and haptic SSDs, respectively. Visual SSDs significantly outperformed auditory and tactile SSDs on vertical localisation, whereas for depth perception, all devices significantly differed from one another (visual > auditory > haptic). Our findings highlight the high level of acuity possible for SSDs even with low spatial resolutions (e.g., 16 × 8) and quantify the level of information loss attributable to this transformation for the SSD user. Finally, we discuss ways of closing this “modality gap” found in SSDs and conclude that this process is best benchmarked against performance with SSDs that return to their primary modality (e.g., visuospatial into visual).
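One concrete place where information loss enters is the reduction of a dense depth image to the 16 × 8 grid the devices display. The sketch below is a hypothetical reduction step, not the study's actual preprocessing (which the abstract does not specify): it min-pools each block, so the nearest obstacle in the block is the value that survives and everything else in the block is discarded.

```python
def downsample_min(depth, out_h=8, out_w=16):
    """Reduce a dense depth image (list of rows of distances) to an
    out_h x out_w grid by min-pooling: each output cell keeps the
    nearest (smallest) distance in its block. Assumes the input
    dimensions are divisible by out_h and out_w."""
    h, w = len(depth), len(depth[0])
    bh, bw = h // out_h, w // out_w  # block size per output cell
    return [
        [
            min(depth[by * bh + y][bx * bw + x]
                for y in range(bh) for x in range(bw))
            for bx in range(out_w)
        ]
        for by in range(out_h)
    ]

# A synthetic 16x32 depth ramp reduced to an 8x16 grid.
grid = downsample_min([[float(x + y) for x in range(32)] for y in range(16)])
print(len(grid), len(grid[0]))
```

Whatever within-block structure existed in the original image is gone after this step, which is exactly the kind of loss the study quantifies behaviourally by measuring localisation acuity through each output modality.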
Affiliation(s)
- Jan Thar: Media Computing Group, RWTH Aachen University, Germany
- James Alvarez: Department of Psychology, University of Sussex, Brighton, UK
- Jan Borchers: Media Computing Group, RWTH Aachen University, Germany
- Jamie Ward: Department of Psychology, University of Sussex, Brighton, UK; Sackler Centre for Consciousness Science, University of Sussex, Brighton, UK
- Giles Hamilton-Fletcher: Department of Psychology, University of Sussex, Brighton, UK; Neuroimaging and Visual Science Laboratory, New York University Langone Health, NY, USA
8. Arnold G, Pesnot-Lerousseau J, Auvray M. Individual Differences in Sensory Substitution. Multisens Res 2017; 30:579-600. DOI: 10.1163/22134808-00002561.
Abstract
Sensory substitution devices were developed in the context of perceptual rehabilitation. They aim to compensate for one or several functions of a deficient sensory modality by converting stimuli normally accessed through that modality into stimuli accessible by another sensory modality; for instance, they can convert visual information into sounds or tactile stimuli. In this article, we review studies that investigated individual differences at the behavioural, neural, and phenomenological levels in the use of sensory substitution devices. We highlight how taking individual differences into account has consequences for the optimization and learning of sensory substitution devices. We also discuss the extent to which these studies allow a better understanding of the experience with sensory substitution devices, and in particular how the resulting experience is not akin to a single sensory modality. Rather, it should be conceived as a multisensory experience, involving both perceptual and cognitive processes, and emerging from each user’s pre-existing sensory and cognitive capacities.
Affiliation(s)
- Gabriel Arnold: Institut des Systèmes Intelligents et de Robotique, CNRS UMR 7222, Université Pierre et Marie Curie, 4 place Jussieu, 75005 Paris, France
- Jacques Pesnot-Lerousseau: Institut des Systèmes Intelligents et de Robotique, CNRS UMR 7222, Université Pierre et Marie Curie, 4 place Jussieu, 75005 Paris, France
- Malika Auvray: Institut des Systèmes Intelligents et de Robotique, CNRS UMR 7222, Université Pierre et Marie Curie, 4 place Jussieu, 75005 Paris, France
9. Cecchetti L, Kupers R, Ptito M, Pietrini P, Ricciardi E. Are Supramodality and Cross-Modal Plasticity the Yin and Yang of Brain Development? From Blindness to Rehabilitation. Front Syst Neurosci 2016; 10:89. PMID: 27877116. PMCID: PMC5099160. DOI: 10.3389/fnsys.2016.00089.
Abstract
Research in blind individuals has long focused primarily on the brain plastic reorganization that occurs in early visual areas. Only more recently have scientists developed innovative strategies to understand to what extent vision is truly a mandatory prerequisite for the brain's fine morphological architecture to develop and function. As a whole, the studies conducted to date in sighted and congenitally blind individuals have provided ample evidence that several "visual" cortical areas develop independently of visual experience and process information content regardless of the sensory modality through which a particular stimulus is conveyed: a property named supramodality. At the same time, lack of vision leads to a structural and functional reorganization within "visual" brain areas, a phenomenon known as cross-modal plasticity. Cross-modal recruitment of the occipital cortex in visually deprived individuals represents an adaptive compensatory mechanism that mediates the processing of non-visual inputs. Supramodality and cross-modal plasticity appear to be the "yin and yang" of brain development: supramodal is what takes place despite the lack of vision, whereas cross-modal is what happens because of the lack of vision. Here we provide a critical overview of the research in this field and discuss the implications that these novel findings have for the development of educative/rehabilitation approaches and sensory substitution devices (SSDs) in sensory-impaired individuals.
Affiliation(s)
- Luca Cecchetti: Department of Surgical, Medical, Molecular Pathology and Critical Care, University of Pisa, Pisa, Italy; Clinical Psychology Branch, Pisa University Hospital, Pisa, Italy
- Ron Kupers: BRAINlab, Department of Neuroscience and Pharmacology, Panum Institute, University of Copenhagen, Copenhagen, Denmark; Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Maurice Ptito: Laboratory of Neuropsychiatry, Psychiatric Centre Copenhagen, Copenhagen, Denmark; School of Optometry, Université de Montréal, Montréal, QC, Canada
- Emiliano Ricciardi: Department of Surgical, Medical, Molecular Pathology and Critical Care, University of Pisa, Pisa, Italy; MOMILab, IMT School for Advanced Studies Lucca, Lucca, Italy