1
Hu Q, Hailstone M, Wang J, Wincott M, Stoychev D, Atilgan H, Gala D, Chaiamarit T, Parton RM, Antonello J, Packer AM, Davis I, Booth MJ. Universal adaptive optics for microscopy through embedded neural network control. Light Sci Appl 2023; 12:270. PMID: 37953294; PMCID: PMC10641083; DOI: 10.1038/s41377-023-01297-x. Received: 04/20/2023; Revised: 09/24/2023; Accepted: 10/01/2023.
Abstract
The resolution and contrast of microscope imaging are often affected by aberrations introduced by imperfect optical systems and inhomogeneous refractive structures in specimens. Adaptive optics (AO) compensates for these aberrations and restores diffraction-limited performance. A wide range of AO solutions has been introduced, often tailored to a specific microscope type or application. Until now, a universal AO solution, one that can be readily transferred between microscope modalities, has not been deployed. We propose versatile and fast aberration correction using a physics-based, machine-learning-assisted wavefront-sensorless AO control (MLAO) method. Unlike previous ML methods, we used a specially constructed neural network (NN) architecture, designed using physical understanding of general microscope image formation, that was embedded in the control loop of different microscope systems. This approach means that not only is the resulting NN orders of magnitude simpler than previous NN methods, but the concept is also translatable across microscope modalities. We demonstrated the method on a two-photon microscope, a three-photon microscope and a widefield three-dimensional (3D) structured illumination microscope. Results showed that the method outperformed commonly used modal-based sensorless AO methods. We also showed that our ML-based method was robust in a range of challenging imaging conditions, such as 3D sample structures, specimen motion, low signal-to-noise ratio and activity-induced fluorescence fluctuations. Moreover, because the bespoke architecture encapsulated physical understanding of the imaging process, the internal NN configuration was no longer a "black box" but provided physical insights into its internal workings, which could influence future designs.
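For context, the "modal-based sensorless AO methods" that the paper benchmarks against typically probe each aberration mode with a pair of bias aberrations and fit a parabola to an image-quality metric (the so-called 2N+1 scheme). The following is a minimal illustrative sketch of that baseline, not the authors' NN method: the metric is simulated and assumed quadratic in the aberration near its optimum (a standard assumption for this class of algorithms), and all variable names are the sketch's own.

```python
import numpy as np

# Illustrative sketch of the classical "2N+1" modal wavefront-sensorless AO
# baseline -- NOT the paper's neural-network controller. The image-quality
# metric is simulated and exactly quadratic, so the parabolic fit is exact.

rng = np.random.default_rng(0)
n_modes = 5                                     # Zernike modes to correct
true_coeffs = rng.uniform(-0.3, 0.3, n_modes)   # hidden specimen aberration

def metric(correction):
    """Simulated image-quality metric: maximal when the applied
    correction exactly cancels the hidden aberration."""
    residual = true_coeffs + correction
    return 1.0 - np.sum(residual ** 2)

bias = 0.3                       # bias amplitude applied per probed mode
m0 = metric(np.zeros(n_modes))   # one unbiased measurement ...
correction = np.zeros(n_modes)

for i in range(n_modes):         # ... plus two biased measurements per mode
    probe = np.zeros(n_modes)
    probe[i] = bias
    m_plus, m_minus = metric(probe), metric(-probe)
    # Vertex of the parabola through the three metric samples gives the
    # correction estimate for mode i.
    correction[i] = -0.5 * bias * (m_plus - m_minus) / (m_plus - 2 * m0 + m_minus)

initial_rms = np.sqrt(np.mean(true_coeffs ** 2))
residual_rms = np.sqrt(np.mean((true_coeffs + correction) ** 2))
print(f"rms aberration before: {initial_rms:.4f}, after: {residual_rms:.4f}")
```

With N modes this baseline costs 2N+1 image acquisitions per correction round, and a real (non-quadratic, noisy) metric makes the fit only approximate; the abstract reports that the MLAO approach outperforms methods of this kind.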
Affiliation(s)
- Qi Hu
- Department of Engineering Science, University of Oxford, Oxford, UK
- Jingyu Wang
- Department of Engineering Science, University of Oxford, Oxford, UK
- Matthew Wincott
- Department of Engineering Science, University of Oxford, Oxford, UK
- Danail Stoychev
- Department of Biochemistry, University of Oxford, Oxford, UK
- Huriye Atilgan
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, UK
- Dalia Gala
- Department of Biochemistry, University of Oxford, Oxford, UK
- Tai Chaiamarit
- Department of Biochemistry, University of Oxford, Oxford, UK
- Jacopo Antonello
- Department of Engineering Science, University of Oxford, Oxford, UK
- Adam M Packer
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, UK
- Ilan Davis
- Department of Biochemistry, University of Oxford, Oxford, UK
- Martin J Booth
- Department of Engineering Science, University of Oxford, Oxford, UK
2
Atilgan H, Doody M, Oliver DK, McGrath TM, Shelton AM, Echeverria-Altuna I, Tracey I, Vyazovskiy VV, Manohar SG, Packer AM. Human lesions and animal studies link the claustrum to perception, salience, sleep and pain. Brain 2022; 145:1610-1623. PMID: 35348621; PMCID: PMC9166552; DOI: 10.1093/brain/awac114. Received: 10/21/2021; Revised: 02/24/2022; Accepted: 02/26/2022.
Abstract
The claustrum is the most densely interconnected region in the human brain. Despite the accumulating data from clinical and experimental studies, the functional role of the claustrum remains unknown. Here, we systematically review claustrum lesion studies and discuss their functional implications. Claustral lesions are associated with an array of signs and symptoms, including changes in cognitive, perceptual and motor abilities; electrical activity; mental state; and sleep. The wide range of symptoms observed following claustral lesions does not provide compelling evidence to support prominent current theories of claustrum function, such as multisensory integration or salience computation. Conversely, the lesion studies support the hypothesis that the claustrum regulates cortical excitability. We argue that the claustrum is connected to, or part of, multiple brain networks that perform both fundamental and higher cognitive functions. Its role as a multifunctional node in numerous networks may explain the manifold effects of claustrum damage on brain and behaviour.
Affiliation(s)
- Huriye Atilgan
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3PT, UK
- Max Doody
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3PT, UK
- David K. Oliver
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3PT, UK
- Thomas M. McGrath
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3PT, UK
- Andrew M. Shelton
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3PT, UK
- Irene Tracey
- Wellcome Centre for Integrative Neuroimaging, FMRIB Centre, Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital and Merton College, University of Oxford, Oxford OX3 9DU, UK
- Sanjay G. Manohar
- Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, Oxford OX3 9DU, UK
- Adam M. Packer
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3PT, UK
3
Atilgan H, Bizley JK. Training enhances the ability of listeners to exploit visual information for auditory scene analysis. Cognition 2020; 208:104529. PMID: 33373937; PMCID: PMC7868888; DOI: 10.1016/j.cognition.2020.104529. Received: 04/16/2020; Revised: 11/24/2020; Accepted: 11/25/2020.
Abstract
The ability to use temporal relationships between cross-modal cues facilitates perception and behavior. Previously, we observed that temporally correlated changes in the size of a visual stimulus and the intensity of an auditory stimulus influenced the ability of listeners to perform an auditory selective attention task (Maddox, Atilgan, Bizley, & Lee, 2015). Participants detected timbral changes in a target sound while ignoring those in a simultaneously presented masker. When the visual stimulus was temporally coherent with the target sound, performance was significantly better than when the visual stimulus was temporally coherent with the masker, despite the visual stimulus conveying no task-relevant information. Here, we trained observers to detect audiovisual temporal coherence and asked whether this changed the way in which they were able to exploit visual information in the auditory selective attention task. We observed that, after training, participants were able to benefit from temporal coherence between the visual stimulus and both the target and masker streams, relative to the condition in which the visual stimulus was coherent with neither sound. However, we did not observe such changes in a second group that was trained to discriminate modulation-rate differences between temporally coherent audiovisual streams, although that group did show an improvement in overall performance. A control group did not change its performance between pre-test and post-test and did not change how it exploited visual information. These results provide insights into how cross-modal experience may optimize multisensory integration.
4
Atilgan H, Town SM, Wood KC, Jones GP, Maddox RK, Lee AKC, Bizley JK. Integration of Visual Information in Auditory Cortex Promotes Auditory Scene Analysis through Multisensory Binding. Neuron 2018; 97:640-655.e4. PMID: 29395914; PMCID: PMC5814679; DOI: 10.1016/j.neuron.2017.12.034. Received: 06/26/2017; Revised: 10/28/2017; Accepted: 12/22/2017.
Abstract
How and where in the brain audio-visual signals are bound to create multimodal objects remains unknown. One hypothesis is that temporal coherence between dynamic multisensory signals provides a mechanism for binding stimulus features across sensory modalities. Here, we report that when the luminance of a visual stimulus is temporally coherent with the amplitude fluctuations of one sound in a mixture, the representation of that sound is enhanced in auditory cortex. Critically, this enhancement extends to include both binding and non-binding features of the sound. We demonstrate that visual information conveyed from visual cortex via the phase of the local field potential is combined with auditory information within auditory cortex. These data provide evidence that early cross-sensory binding provides a bottom-up mechanism for the formation of cross-sensory objects and that one role for multisensory binding in auditory cortex is to support auditory scene analysis.
- Visual stimuli can shape how auditory cortical neurons respond to sound mixtures
- Temporal coherence between senses enhances sound features of a bound multisensory object
- Visual stimuli elicit changes in the phase of the local field potential in auditory cortex
- Vision-induced phase effects are lost when visual cortex is reversibly silenced
Affiliation(s)
- Huriye Atilgan
- The Ear Institute, University College London, London, UK
- Stephen M Town
- The Ear Institute, University College London, London, UK
- Gareth P Jones
- The Ear Institute, University College London, London, UK
- Ross K Maddox
- Department of Biomedical Engineering and Department of Neuroscience, Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY, USA; Institute for Learning and Brain Sciences and Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
- Adrian K C Lee
- Institute for Learning and Brain Sciences and Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
5
Abstract
Timbre distinguishes sounds of equal loudness, pitch, and duration; however, little is known about the neural mechanisms underlying timbre perception. Such understanding requires animal models such as the ferret in which neuronal and behavioral observation can be combined. The current study asked what spectral cues ferrets use to discriminate between synthetic vowels. Ferrets were trained to discriminate vowels differing in the position of the first (F1) and second (F2) formants, inter-formant distance, and spectral centroid. In experiment 1, ferrets responded to probe trials containing novel vowels in which the spectral cues of trained vowels were mismatched. Regression models fitted to behavioral responses determined that F2 and spectral centroid were stronger predictors of ferrets' behavior than either F1 or inter-formant distance. Experiment 2 examined responses to single-formant vowels and found that individual spectral peaks failed to account for multi-formant vowel perception. Experiment 3 measured responses to unvoiced vowels and showed that ferrets could generalize vowel identity across voicing conditions. Experiment 4 employed the same design as experiment 1 but with human participants. Their responses were also predicted by F2 and spectral centroid. Together, these findings further support the ferret as a model for studying the neural processes underlying timbre perception.
Affiliation(s)
- Stephen M Town
- Ear Institute, University College London, 332 Gray's Inn Road, London WC1X 8EE, United Kingdom
- Huriye Atilgan
- Ear Institute, University College London, 332 Gray's Inn Road, London WC1X 8EE, United Kingdom
- Katherine C Wood
- Ear Institute, University College London, 332 Gray's Inn Road, London WC1X 8EE, United Kingdom
- Jennifer K Bizley
- Ear Institute, University College London, 332 Gray's Inn Road, London WC1X 8EE, United Kingdom
6
Maddox RK, Atilgan H, Bizley JK, Lee AKC. Auditory selective attention is enhanced by a task-irrelevant temporally coherent visual stimulus in human listeners. eLife 2015; 4:e04995. PMID: 25654748; PMCID: PMC4337603; DOI: 10.7554/elife.04995. Received: 10/02/2014; Accepted: 12/27/2014.
Abstract
In noisy settings, listening is aided by correlated dynamic visual cues gleaned from a talker's face, an improvement often attributed to visually reinforced linguistic information. In this study, we aimed to test the effect of audio-visual temporal coherence alone on selective listening, free of linguistic confounds. We presented listeners with competing auditory streams whose amplitude varied independently and a visual stimulus with varying radius, while manipulating the cross-modal temporal relationships. Performance improved when the auditory target's time course matched that of the visual stimulus. The fact that the coherence was between task-irrelevant stimulus features suggests that the observed improvement stemmed from the integration of auditory and visual streams into cross-modal objects, enabling listeners to better attend to the target. These findings suggest that in everyday conditions, where listeners can often see the source of a sound, temporal cues provided by vision can help listeners to select one sound source from a mixture.
Affiliation(s)
- Ross K Maddox
- Institute for Learning and Brain Sciences, University of Washington, Seattle, United States
- Huriye Atilgan
- Ear Institute, University College London, London, United Kingdom
- Adrian KC Lee
- Institute for Learning and Brain Sciences, University of Washington, Seattle, United States
- Department of Speech and Hearing Sciences, University of Washington, Seattle, United States