1. Wang X, Song Y, Liao M, Liu T, Liu L, Reynaud A. Corrective mechanisms of motion extrapolation. J Vis 2024; 24:6. PMID: 38512248; PMCID: PMC10960225; DOI: 10.1167/jov.24.3.6
Abstract
Transmission and processing of sensory information in the visual system take time. For motion perception, our brain can overcome this intrinsic neural delay through extrapolation mechanisms and accurately predict the current position of a continuously moving object. But how does the system behave when the motion abruptly changes and the prediction becomes wrong? Here we address this question by studying how human observers perceive the position of a moving object undergoing various abrupt motion changes. We developed a task in which a bar moves steadily in the horizontal direction and then suddenly stops, reverses, or disappears and then reverses, around two stationary vertical reference lines. Our results showed that participants overestimated the position of the stopping bar but did not perceive an overshoot in the motion-reversal condition. When a temporal gap was added at the reversal point, the perceptual overshoot of the end point scaled with the gap duration. Our model suggests that the overestimation of the object's position when it disappears is not a linear function of its speed but gradually fades out. These results can thus be reconciled within a single process in which cortical motion-prediction mechanisms interact with late transient sensory visual inputs.
Affiliation(s)
- Xi Wang
- Department of Ophthalmology, and Laboratory of Optometry and Vision Sciences, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- McGill Vision Research Unit, Department of Ophthalmology & Visual Sciences, McGill University, Montreal, Quebec, Canada
- Yutong Song
- Department of Ophthalmology, and Laboratory of Optometry and Vision Sciences, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Meng Liao
- Department of Ophthalmology, and Laboratory of Optometry and Vision Sciences, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Tong Liu
- Department of Ophthalmology, and Laboratory of Optometry and Vision Sciences, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Longqian Liu
- Department of Ophthalmology, and Laboratory of Optometry and Vision Sciences, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Alexandre Reynaud
- McGill Vision Research Unit, Department of Ophthalmology & Visual Sciences, McGill University, Montreal, Quebec, Canada

2. Turner W, Sexton C, Hogendoorn H. Neural mechanisms of visual motion extrapolation. Neurosci Biobehav Rev 2024; 156:105484. PMID: 38036162; DOI: 10.1016/j.neubiorev.2023.105484
Abstract
Because neural processing takes time, the brain only has delayed access to sensory information. When localising moving objects, this is problematic, as an object will have moved on by the time its position has been determined. Here, we consider predictive motion extrapolation as a fundamental delay-compensation strategy. From a population-coding perspective, we outline how extrapolation can be achieved by a forwards shift in the population-level activity distribution. We identify general mechanisms underlying such shifts, involving various asymmetries that facilitate the targeted 'enhancement' and/or 'dampening' of population-level activity. We classify these on the basis of their potential implementation (intra- vs. inter-regional processes) and consider specific examples in different visual regions. We consider how motion extrapolation can be achieved during inter-regional signaling, and how asymmetric connectivity patterns which support extrapolation can emerge spontaneously from local synaptic learning rules. Finally, we consider how more abstract 'model-based' predictive strategies might be implemented. Overall, we present an integrative framework for understanding how the brain determines the real-time position of moving objects despite neural delays.
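
To make the population-level "forwards shift" described in this abstract concrete, here is a minimal, illustrative Python sketch (not from the review; the Gaussian tuning curve, 80 ms delay and 10°/s speed are arbitrary assumptions). A delayed position code is simply shifted along the motion direction by velocity × delay:

```python
import numpy as np

def gaussian_population(x, center, sigma=1.0):
    # Idealized population activity over neurons labelled by preferred position x.
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

def extrapolate(activity, x, velocity, delay):
    # Shift the whole activity profile forward by velocity * delay,
    # the population-level analogue of the "forwards shift" described above.
    dx = x[1] - x[0]
    return np.roll(activity, int(round(velocity * delay / dx)))

x = np.linspace(-10.0, 10.0, 201)                  # preferred positions (deg)
delayed = gaussian_population(x, center=0.0)       # code built from ~80 ms-old input
corrected = extrapolate(delayed, x, velocity=10.0, delay=0.08)

print("delayed peak (deg):  ", x[np.argmax(delayed)])
print("corrected peak (deg):", x[np.argmax(corrected)])
```
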
Affiliation(s)
- William Turner
- Queensland University of Technology, Brisbane 4059, Australia; The University of Melbourne, Melbourne 3010, Australia.
- Hinze Hogendoorn
- Queensland University of Technology, Brisbane 4059, Australia; The University of Melbourne, Melbourne 3010, Australia

3. Grimaldi A, Perrinet LU. Learning heterogeneous delays in a layer of spiking neurons for fast motion detection. Biol Cybern 2023; 117:373-387. PMID: 37695359; DOI: 10.1007/s00422-023-00975-8
Abstract
The precise timing of spikes emitted by neurons plays a crucial role in shaping the response of efferent biological neurons. This temporal dimension of neural activity holds significant importance in understanding information processing in neurobiology, especially for the performance of neuromorphic hardware, such as event-based cameras. Nonetheless, many artificial neural models disregard this critical temporal dimension of neural activity. In this study, we present a model designed to efficiently detect temporal spiking motifs using a layer of spiking neurons equipped with heterogeneous synaptic delays. Our model capitalizes on the diverse synaptic delays present on the dendritic tree, enabling specific arrangements of temporally precise synaptic inputs to synchronize upon reaching the basal dendritic tree. We formalize this process as a time-invariant logistic regression, which can be trained using labeled data. To demonstrate its practical efficacy, we apply the model to naturalistic videos transformed into event streams, simulating the output of the biological retina or event-based cameras. To evaluate the robustness of the model in detecting visual motion, we conduct experiments by selectively pruning weights and demonstrate that the model remains efficient even under significantly reduced workloads. In conclusion, by providing a comprehensive, event-driven computational building block, the incorporation of heterogeneous delays has the potential to greatly improve the performance of future spiking neural network algorithms, particularly in the context of neuromorphic chips.
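
As a purely illustrative companion to this abstract, the following Python toy shows the basic mechanism it describes: each synapse imposes its own delay on an incoming spike train, and a time-invariant logistic readout responds when the delayed inputs coincide. The weights are random placeholders here (in the paper they are learned from labelled data), and the variable names are mine, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_syn, T = 8, 50                          # synapses and number of time steps
delays = rng.integers(0, 10, n_syn)       # heterogeneous synaptic delays (steps)
weights = rng.normal(0.0, 1.0, n_syn)     # placeholders; fitted by logistic regression in the paper
bias = -1.0

spikes = (rng.random((n_syn, T)) < 0.1).astype(float)   # binary input spike trains

def apply_delay(train, d):
    # Shift a spike train by its synaptic delay (zero-padded at the start).
    out = np.zeros_like(train)
    out[d:] = train[:T - d]
    return out

# Each synapse contributes its delayed, weighted train; coincident arrivals add up.
drive = sum(w * apply_delay(spikes[i], delays[i]) for i, w in enumerate(weights))
p_motif = 1.0 / (1.0 + np.exp(-(drive + bias)))          # time-invariant logistic readout
print("motif probability, first 10 steps:", np.round(p_motif[:10], 2))
```
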
Affiliation(s)
- Antoine Grimaldi
- Institut de Neurosciences de la Timone, Aix Marseille Univ, CNRS, 27 boulevard Jean Moulin, 13005, Marseille, France
- Laurent U Perrinet
- Institut de Neurosciences de la Timone, Aix Marseille Univ, CNRS, 27 boulevard Jean Moulin, 13005, Marseille, France

4. Sexton CM, Burkitt AN, Hogendoorn H. Spike-timing dependent plasticity partially compensates for neural delays in a multi-layered network of motion-sensitive neurons. PLoS Comput Biol 2023; 19:e1011457. PMID: 37672532; PMCID: PMC10506708; DOI: 10.1371/journal.pcbi.1011457
Abstract
The ability of the brain to represent the external world in real time is limited by the fact that neural processing takes time. Because neural delays accumulate as information progresses through the visual system, representations encoded at each hierarchical level are based on input that is progressively outdated with respect to the external world. This 'representational lag' is particularly relevant to the task of localizing a moving object: because the object's location changes with time, neural representations of its location potentially lag behind its true location. Converging evidence suggests that the brain has evolved mechanisms that allow it to compensate for its inherent delays by extrapolating the position of moving objects along their trajectory. We have previously shown how spike-timing dependent plasticity (STDP) can achieve motion extrapolation in a two-layer, feedforward network of velocity-tuned neurons, by shifting the receptive fields of second-layer neurons in the direction opposite to a moving stimulus. The current study extends this work by implementing two important changes to the network to bring it more into line with biology: we expanded the network to multiple layers to reflect the depth of the visual hierarchy, and we implemented more realistic synaptic time courses. We investigate the accumulation of STDP-driven receptive field shifts across several layers, observing a velocity-dependent reduction in representational lag. These results highlight the role of STDP, operating purely along the feedforward pathway, as a developmental strategy for delay compensation.
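
The following is a deliberately over-simplified, single-output Python sketch of the kind of mechanism the abstract describes (my own toy construction, not the authors' multi-layer model): a stimulus repeatedly sweeps across position-tuned inputs, pair-based STDP potentiates synapses whose spikes arrive just before the delayed output spike, and the output neuron's receptive-field centre of mass drifts opposite to the motion direction:

```python
import numpy as np

n_in, trials = 20, 200
w = np.full(n_in, 0.5)                     # feedforward weights from position-tuned inputs
a_plus, a_minus, tau = 0.02, 0.021, 3.0    # STDP amplitudes and time constant (steps)
axonal_delay = 2                           # feedforward transmission delay (steps)

for _ in range(trials):
    pre_times = np.arange(n_in)            # stimulus sweeps rightward, one input per step
    arrival = pre_times + axonal_delay     # when each input reaches the output neuron
    # The output neuron spikes once enough weighted input has accumulated.
    post_time = arrival[np.argmax(np.cumsum(w) > 0.5 * w.sum())]
    dt = post_time - arrival               # post-minus-pre timing for every synapse
    dw = np.where(dt >= 0,  a_plus * np.exp(-dt / tau),   # pre before post: potentiate
                  -a_minus * np.exp( dt / tau))           # pre after post: depress
    w = np.clip(w + dw, 0.0, 1.0)

com = (w * np.arange(n_in)).sum() / w.sum()
print("receptive-field centre of mass after learning:", round(float(com), 2), "(initially 9.5)")
```
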
Affiliation(s)
- Charlie M. Sexton
- Melbourne School of Psychological Sciences, The University of Melbourne, Victoria, Australia
- Anthony N. Burkitt
- Department of Biomedical Engineering, The University of Melbourne, Victoria, Australia
- Graeme Clark Institute for Biomedical Engineering, The University of Melbourne, Victoria, Australia
- Hinze Hogendoorn
- Melbourne School of Psychological Sciences, The University of Melbourne, Victoria, Australia
- School of Psychology and Counselling, Queensland University of Technology, Queensland, Australia

5. Grimaldi A, Gruel A, Besnainou C, Jérémie JN, Martinet J, Perrinet LU. Precise spiking motifs in neurobiological and neuromorphic data. Brain Sci 2022; 13:68. PMID: 36672049; PMCID: PMC9856822; DOI: 10.3390/brainsci13010068
Abstract
Why do neurons communicate through spikes? By definition, spikes are all-or-none neural events that occur at continuous times. In other words, spikes are, on the one hand, binary (they either occur or they do not) and, on the other, can occur at any asynchronous time, without the need for a centralized clock. This stands in stark contrast to the analog representation of values and the discretized timing classically used in digital processing and at the base of modern-day neural networks. As neural systems almost systematically use this so-called event-based representation in the living world, a better understanding of this phenomenon remains a fundamental challenge in neurobiology in order to better interpret the profusion of recorded data. With the growing need for intelligent embedded systems, it also emerges as a new computing paradigm to enable the efficient operation of a new class of sensors and event-based computers, called neuromorphic, which could enable significant gains in computation time and energy consumption, a major societal issue in the era of the digital economy and global warming. In this review paper, we provide evidence from biology, theory and engineering that the precise timing of spikes plays a crucial role in our understanding of the efficiency of neural networks.
Affiliation(s)
- Antoine Grimaldi
- INT UMR 7289, Aix Marseille Univ, CNRS, 27 Bd Jean Moulin, 13005 Marseille, France
- Amélie Gruel
- SPARKS, Côte d’Azur, CNRS, I3S, 2000 Rte des Lucioles, 06900 Sophia-Antipolis, France
- Camille Besnainou
- INT UMR 7289, Aix Marseille Univ, CNRS, 27 Bd Jean Moulin, 13005 Marseille, France
- Jean-Nicolas Jérémie
- INT UMR 7289, Aix Marseille Univ, CNRS, 27 Bd Jean Moulin, 13005 Marseille, France
- Jean Martinet
- SPARKS, Côte d’Azur, CNRS, I3S, 2000 Rte des Lucioles, 06900 Sophia-Antipolis, France
- Laurent U. Perrinet
- INT UMR 7289, Aix Marseille Univ, CNRS, 27 Bd Jean Moulin, 13005 Marseille, France

6. Le Bec B, Troncoso XG, Desbois C, Passarelli Y, Baudot P, Monier C, Pananceau M, Frégnac Y. Horizontal connectivity in V1: prediction of coherence in contour and motion integration. PLoS One 2022; 17:e0268351. PMID: 35802625; PMCID: PMC9269411; DOI: 10.1371/journal.pone.0268351
Abstract
This study demonstrates the functional importance of the surround context relayed laterally in V1 by the horizontal connectivity in controlling the latency and the gain of the cortical response to the feedforward visual drive. We report here four main findings: 1) a centripetal apparent-motion sequence shortens the spiking latency of V1 cells when the orientation of the local inducer and the global motion axis are both co-aligned with the receptive field (RF) orientation preference; 2) this contextual effect grows with visual flow speed, peaking at 150–250°/s, when it matches the propagation speed of horizontal connectivity (0.15–0.25 mm/ms); 3) for this speed range, the axial sensitivity of V1 cells is tilted by 90° to become co-aligned with the orientation preference axis; 4) the strength of modulation by the surround context correlates with the spatiotemporal coherence of the apparent-motion flow. Our results suggest an internally generated binding process, linking local (orientation/position) and global (motion/direction) features as early as V1. This long-range diffusion process constitutes a plausible substrate in V1 of the human psychophysical bias in speed estimation for collinear motion. Since it is demonstrated in the anesthetized cat, this novel form of contextual control of the cortical gain and phase is a built-in property of V1, whose expression does not require behavioral attention or top-down control from higher cortical areas. We propose that horizontal connectivity participates in the propagation of an internal “prediction” wave, shaped by visual experience, which links contour co-alignment and global axial motion at an apparent speed in the range of saccade-like eye movements.
Affiliation(s)
- Benoit Le Bec
- NeuroPSI-UNIC, Paris-Saclay Institute of Neuroscience, CNRS, Paris-Saclay University, Gif-sur-Yvette, France
- Xoana G. Troncoso
- NeuroPSI-UNIC, Paris-Saclay Institute of Neuroscience, CNRS, Paris-Saclay University, Gif-sur-Yvette, France
- Christophe Desbois
- NeuroPSI-UNIC, Paris-Saclay Institute of Neuroscience, CNRS, Paris-Saclay University, Gif-sur-Yvette, France
- Ecole Nationale Vétérinaire d’Alfort, Maisons-Alfort, France
- Yannick Passarelli
- NeuroPSI-UNIC, Paris-Saclay Institute of Neuroscience, CNRS, Paris-Saclay University, Gif-sur-Yvette, France
- Pierre Baudot
- NeuroPSI-UNIC, Paris-Saclay Institute of Neuroscience, CNRS, Paris-Saclay University, Gif-sur-Yvette, France
- Cyril Monier
- NeuroPSI-UNIC, Paris-Saclay Institute of Neuroscience, CNRS, Paris-Saclay University, Gif-sur-Yvette, France
- Marc Pananceau
- NeuroPSI-UNIC, Paris-Saclay Institute of Neuroscience, CNRS, Paris-Saclay University, Gif-sur-Yvette, France
- Yves Frégnac
- NeuroPSI-UNIC, Paris-Saclay Institute of Neuroscience, CNRS, Paris-Saclay University, Gif-sur-Yvette, France

7. Debat G, Chauhan T, Cottereau BR, Masquelier T, Paindavoine M, Baures R. Event-based trajectory prediction using spiking neural networks. Front Comput Neurosci 2021; 15:658764. PMID: 34108870; PMCID: PMC8180888; DOI: 10.3389/fncom.2021.658764
Abstract
In recent years, event-based sensors have been combined with spiking neural networks (SNNs) to create a new generation of bio-inspired artificial vision systems. These systems can process spatio-temporal data in real time and are highly energy efficient. In this study, we used a new hybrid event-based camera in conjunction with a multi-layer spiking neural network trained with a spike-timing-dependent plasticity learning rule. We showed that neurons learn from repeated and correlated spatio-temporal patterns in an unsupervised way and become selective to motion features, such as direction and speed. This motion selectivity can then be used to predict the ball's trajectory by adding a simple read-out layer composed of polynomial regressions trained in a supervised manner. Hence, we show that an SNN receiving inputs from an event-based sensor can extract relevant spatio-temporal patterns to process and predict ball trajectories.
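
For illustration of the supervised read-out stage only (the SNN front end and event-camera input are replaced here by the ball's recent positions, and all values are made up), a Python sketch of a polynomial-regression predictor might look like this:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 50)
x_true = 0.2 + 1.5 * t - 0.8 * t ** 2       # toy parabolic 1-D ball trajectory

history, horizon = 30, 10
coeffs = np.polyfit(t[:history], x_true[:history], deg=2)    # supervised fit on past samples
x_pred = np.polyval(coeffs, t[history:history + horizon])    # extrapolate forward in time

err = np.abs(x_pred - x_true[history:history + horizon]).max()
print("max prediction error over the horizon:", round(float(err), 6))
```
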
Affiliation(s)
- Guillaume Debat
- CERCO UMR 5549, CNRS-Université Toulouse 3, Toulouse, France
- Tushar Chauhan
- CERCO UMR 5549, CNRS-Université Toulouse 3, Toulouse, France
- Michel Paindavoine
- Laboratory for Research on Learning and Development (LEAD), University of Burgundy, CNRS UMR, Dijon, France
- Robin Baures
- CERCO UMR 5549, CNRS-Université Toulouse 3, Toulouse, France

8. Khoei MA, Masson GS, Perrinet LU. The flash-lag effect as a motion-based predictive shift. PLoS Comput Biol 2017; 13:e1005068. PMID: 28125585; PMCID: PMC5268412; DOI: 10.1371/journal.pcbi.1005068
Abstract
Due to its inherent neural delays, the visual system has only outdated access to sensory information about the current position of moving objects. In contrast, living organisms are remarkably able to track and intercept moving objects under a large range of challenging environmental conditions. Physiological, behavioral and psychophysical evidence strongly suggests that position coding is extrapolated using an explicit and reliable representation of the object's motion, but it is still unclear how these two representations interact. For instance, the so-called flash-lag effect supports the idea of a differential processing of position between moving and static objects. Although elucidating such mechanisms is crucial to our understanding of the dynamics of visual processing, a theory is still missing to explain the different facets of this visual illusion. Here, we reconsider several of the key aspects of the flash-lag effect in order to explore the role of motion in the neural coding of objects' position. First, we formalize the problem using a Bayesian modeling framework which includes a graded representation of the degree of belief about visual motion. We introduce a motion-based prediction model as a candidate explanation for the perception of coherent motion. By including the knowledge of a fixed delay, we can model the dynamics of sensory information integration by extrapolating the information acquired at previous instants in time. Next, we simulate the optimal estimation of object position with and without delay compensation and compare it with human perception under a broad range of different psychophysical conditions. Our computational study suggests that the explicit, probabilistic representation of velocity information is crucial in explaining position coding, and therefore the flash-lag effect. We discuss these theoretical results in light of the putative corrective mechanisms that can be used to cancel out the detrimental effects of neural delays and illuminate the more general question of the dynamical representation of spatial information at the present time in the visual pathways. Visual illusions are powerful tools for exploring the limits and constraints of human perception. One of them has received considerable empirical and theoretical interest: the so-called “flash-lag effect”. When a visual stimulus moves along a continuous trajectory, it may be seen ahead of its veridical position with respect to an unpredictable event such as a punctate flash. This illusion tells us something important about the visual system: contrary to classical computers, neural activity travels at a relatively slow speed. It is largely accepted that the resulting delays cause this perceived spatial lag of the flash. Still, after three decades of debate, there is no consensus regarding the underlying mechanisms. Herein, we re-examine the original hypothesis that this effect may be caused by the extrapolation of the stimulus’ motion that is naturally generated in order to compensate for neural delays. Contrary to classical models, we propose a novel theoretical framework, called parodiction, that optimizes this process by explicitly using the precision of both sensory and predicted motion. Using numerical simulations, we show that the parodiction theory subsumes many of the previously proposed models and empirical studies. More generally, the parodiction hypothesis proposes that neural systems implement generic neural computations that can systematically compensate for the existing neural delays in order to represent the predicted visual scene at the present time. It calls for new experimental approaches to directly explore the relationships between neural delays and predictive coding.
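
As a rough, generic illustration of delay compensation through motion extrapolation (a standard constant-velocity Kalman filter plus extrapolation by a known delay, not the paper's probabilistic parodiction model; all parameters are arbitrary):

```python
import numpy as np

dt, delay, v_true = 0.01, 0.08, 10.0          # time step (s), neural delay (s), speed (deg/s)
F = np.array([[1.0, dt], [0.0, 1.0]])         # constant-velocity dynamics
H = np.array([[1.0, 0.0]])                    # only position is observed
Q = np.diag([1e-4, 1e-3])                     # process noise
R = np.array([[0.04]])                        # observation noise variance

x, P = np.zeros(2), np.eye(2)                 # state estimate: [position, velocity]
rng = np.random.default_rng(1)
for k in range(200):
    true_pos = v_true * k * dt
    z = true_pos - v_true * delay + rng.normal(0.0, 0.2)   # delayed, noisy measurement
    x, P = F @ x, F @ P @ F.T + Q                          # predict one step ahead
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)                    # update with the measurement
    P = (np.eye(2) - K @ H) @ P

extrapolated = x[0] + x[1] * delay            # shift the estimate forward by the known delay
print("delayed estimate:", round(x[0], 2),
      "extrapolated:", round(extrapolated, 2),
      "true position:", round(v_true * 199 * dt, 2))
```
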
Affiliation(s)
- Mina A. Khoei
- Institut de Neurosciences de la Timone, UMR7289, CNRS / Aix-Marseille Université, Marseille, France
- Guillaume S. Masson
- Institut de Neurosciences de la Timone, UMR7289, CNRS / Aix-Marseille Université, Marseille, France
- Laurent U. Perrinet
- Institut de Neurosciences de la Timone, UMR7289, CNRS / Aix-Marseille Université, Marseille, France

9. Knight JC, Tully PJ, Kaplan BA, Lansner A, Furber SB. Large-scale simulations of plastic neural networks on neuromorphic hardware. Front Neuroanat 2016; 10:37. PMID: 27092061; PMCID: PMC4823276; DOI: 10.3389/fnana.2016.00037
Abstract
SpiNNaker is a digital neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning, since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network, which we simulate at scales of up to 2.0 × 10⁴ neurons and 5.1 × 10⁷ plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, to match the run-time of our SpiNNaker simulation, the supercomputer uses approximately 45× more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models.
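
For readers unfamiliar with event-driven synapse updates, here is a minimal Python sketch of the general trick such an implementation relies on (my simplification; BCPNN itself maintains several coupled state variables per synapse): between events a trace is left untouched, and the closed-form exponential decay is applied lazily only when the next spike arrives or the value is read out:

```python
import math

class EventDrivenTrace:
    """Low-pass-filtered spike trace z(t), updated lazily at spike events only."""
    def __init__(self, tau):
        self.tau, self.z, self.t_last = tau, 0.0, 0.0

    def value_at(self, t):
        # Closed-form solution of dz/dt = -z / tau between events.
        return self.z * math.exp(-(t - self.t_last) / self.tau)

    def on_spike(self, t, increment=1.0):
        self.z = self.value_at(t) + increment
        self.t_last = t
        return self.z

trace = EventDrivenTrace(tau=20.0)            # time constant in ms
for t_spike in [5.0, 12.0, 40.0]:             # three updates instead of one per time step
    print(f"t = {t_spike:5.1f} ms  z = {trace.on_spike(t_spike):.3f}")
print("read-out at t = 60 ms:", round(trace.value_at(60.0), 3))
```
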
Affiliation(s)
- James C Knight
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, UK
- Philip J Tully
- Department of Computational Biology, Royal Institute of Technology, Stockholm, Sweden; Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden; Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh, Edinburgh, UK
- Bernhard A Kaplan
- Department of Visualization and Data Analysis, Zuse Institute Berlin, Berlin, Germany
- Anders Lansner
- Department of Computational Biology, Royal Institute of Technology, Stockholm, Sweden; Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden; Department of Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden
- Steve B Furber
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, UK

10. Wall J, Glackin C. Spiking neural network connectivity and its potential for temporal sensory processing and variable binding. Front Comput Neurosci 2013; 7:182. PMID: 24391578; PMCID: PMC3867688; DOI: 10.3389/fncom.2013.00182
Affiliation(s)
- Julie Wall
- Multimedia and Vision Research Group, School of Electronic Engineering and Computer Science, Queen Mary, University of London, London, UK
- Cornelius Glackin
- Adaptive Systems Research Group, Department of Computer Science, University of Hertfordshire, Hatfield, Hertfordshire, UK