1
Van der Burg E, Cass J, Olivers CNL. A CODE model bridging crowding in sparse and dense displays. Vision Res 2024; 215:108345. [PMID: 38142531] [DOI: 10.1016/j.visres.2023.108345]
Abstract
Visual crowding is arguably the strongest limitation imposed on extrafoveal vision, and is a relatively well-understood phenomenon. However, most investigations and theories are based on sparse displays consisting of a target and at most a handful of flanker objects. Recent findings suggest that the laws thought to govern crowding may not hold for densely cluttered displays, and that grouping and nearest neighbour effects may be more important. Here we present a computational model that accounts for crowding effects in both sparse and dense displays. The model is an adaptation and extension of an earlier model that has previously successfully accounted for spatial clustering, numerosity and object-based attention phenomena. Our model combines grouping by proximity and similarity with a nearest neighbour rule, and defines crowding as the extent to which target and flankers fail to segment. We show that when the model is optimized for explaining crowding phenomena in classic, sparse displays, it also does a good job in capturing novel crowding patterns in dense displays, in both existing and new data sets. The model thus ties together different principles governing crowding, specifically Bouma's law, grouping, and nearest neighbour similarity effects.
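The nearest-neighbour principle the abstract describes lends itself to a toy illustration. The sketch below is not the authors' CODE implementation; the exponential decay and its scale parameter are assumptions chosen only to contrast a nearest-neighbour account with a Bouma-window account, in which every flanker within a fixed radius would contribute:

```python
import math

def nearest_neighbour_crowding(target, flankers, scale=1.0):
    """Toy crowding score driven only by the target's nearest flanker.

    Unlike a Bouma-window account, where every flanker within a fixed
    radius contributes, only the single nearest neighbour matters here.
    `scale` (an assumed free parameter) sets how fast interference
    decays with distance.
    """
    if not flankers:
        return 0.0
    d_min = min(math.dist(target, f) for f in flankers)
    return math.exp(-d_min / scale)

# Adding a remote flanker leaves the score unchanged,
# while adding a closer one increases it.
base = nearest_neighbour_crowding((0, 0), [(2, 0)])
with_far = nearest_neighbour_crowding((0, 0), [(2, 0), (10, 0)])
with_near = nearest_neighbour_crowding((0, 0), [(2, 0), (0.5, 0)])
```

Under such a rule, adding a remote flanker leaves the crowding score untouched, whereas any account that pools over a fixed window would predict a change.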
Affiliation(s)
- John Cass
- MARCS Institute of Brain, Behaviour & Development, Western Sydney University, Australia
- Christian N L Olivers
- Institute for Brain and Behaviour Amsterdam, the Netherlands; Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, the Netherlands
2
Wichmann FA, Kornblith S, Geirhos R. Neither hype nor gloom do DNNs justice. Behav Brain Sci 2023; 46:e412. [PMID: 38054281] [DOI: 10.1017/s0140525x23001711]
Abstract
Neither the hype exemplified in some exaggerated claims about deep neural networks (DNNs), nor the gloom expressed by Bowers et al. do DNNs as models in vision science justice: DNNs rapidly evolve, and today's limitations are often tomorrow's successes. In addition, providing explanations as well as prediction and image-computability are model desiderata; one should not be favoured at the expense of the other.
Affiliation(s)
- Felix A Wichmann
- Neural Information Processing Group, University of Tübingen, Tübingen, Germany
3
Doerig A, Sommers RP, Seeliger K, Richards B, Ismael J, Lindsay GW, Kording KP, Konkle T, van Gerven MAJ, Kriegeskorte N, Kietzmann TC. The neuroconnectionist research programme. Nat Rev Neurosci 2023. [PMID: 37253949] [DOI: 10.1038/s41583-023-00705-w]
Abstract
Artificial neural networks (ANNs) inspired by biology are beginning to be widely used to model behavioural and neural data, an approach we call 'neuroconnectionism'. ANNs have been not only lauded as the current best models of information processing in the brain but also criticized for failing to account for basic cognitive functions. In this Perspective article, we propose that arguing about the successes and failures of a restricted set of current ANNs is the wrong approach to assess the promise of neuroconnectionism for brain science. Instead, we take inspiration from the philosophy of science, and in particular from Lakatos, who showed that the core of a scientific research programme is often not directly falsifiable but should be assessed by its capacity to generate novel insights. Following this view, we present neuroconnectionism as a general research programme centred around ANNs as a computational language for expressing falsifiable theories about brain computation. We describe the core of the programme, the underlying computational framework and its tools for testing specific neuroscientific hypotheses and deriving novel understanding. Taking a longitudinal view, we review past and present neuroconnectionist projects and their responses to challenges and argue that the research programme is highly progressive, generating new and otherwise unreachable insights into the workings of the brain.
Affiliation(s)
- Adrien Doerig
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany.
- Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands.
- Rowan P Sommers
- Department of Neurobiology of Language, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Katja Seeliger
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Blake Richards
- Department of Neurology and Neurosurgery, McGill University, Montréal, QC, Canada
- School of Computer Science, McGill University, Montréal, QC, Canada
- Mila, Montréal, QC, Canada
- Montréal Neurological Institute, Montréal, QC, Canada
- Learning in Machines and Brains Program, CIFAR, Toronto, ON, Canada
- Konrad P Kording
- Learning in Machines and Brains Program, CIFAR, Toronto, ON, Canada
- Bioengineering, Neuroscience, University of Pennsylvania, Pennsylvania, PA, USA
- Tim C Kietzmann
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
4
Kirubeswaran OR, Storrs KR. Inconsistent illusory motion in predictive coding deep neural networks. Vision Res 2023; 206:108195. [PMID: 36801664] [DOI: 10.1016/j.visres.2023.108195]
Abstract
Why do we perceive illusory motion in some static images? Several accounts point to eye movements, response latencies to different image elements, or interactions between image patterns and motion energy detectors. Recently PredNet, a recurrent deep neural network (DNN) based on predictive coding principles, was reported to reproduce the "Rotating Snakes" illusion, suggesting a role for predictive coding. We begin by replicating this finding, then use a series of "in silico" psychophysics and electrophysiology experiments to examine whether PredNet behaves consistently with human observers and non-human primate neural data. A pretrained PredNet predicted illusory motion for all subcomponents of the Rotating Snakes pattern, consistent with human observers. However, we found no simple response delays in internal units, unlike evidence from electrophysiological data. PredNet's detection of motion in gradients depended on contrast, whereas in humans it depends predominantly on luminance. Finally, we examined the robustness of the illusion across ten PredNets of identical architecture, retrained on the same video data. There was large variation across network instances in whether they reproduced the Rotating Snakes illusion, and what motion, if any, they predicted for simplified variants. Unlike human observers, no network predicted motion for greyscale variants of the Rotating Snakes pattern. Our results sound a cautionary note: even when a DNN successfully reproduces some idiosyncrasy of human vision, more detailed investigation can reveal inconsistencies between humans and the network, and between different instances of the same network. These inconsistencies suggest that predictive coding does not reliably give rise to human-like illusory motion.
Affiliation(s)
- Katherine R Storrs
- Department of Experimental Psychology, Justus Liebig University Giessen, Germany; Centre for Mind, Brain and Behaviour (CMBB), University of Marburg and Justus Liebig University Giessen, Germany; School of Psychology, University of Auckland, New Zealand
5
Choung OH, Gordillo D, Roinishvili M, Brand A, Herzog MH, Chkonia E. Intact and deficient contextual processing in schizophrenia patients. Schizophr Res Cogn 2022; 30:100265. [PMID: 36119400] [PMCID: PMC9477851] [DOI: 10.1016/j.scog.2022.100265]
Abstract
Schizophrenia patients are known to have deficits in contextual vision. However, results are often very mixed. In some paradigms, patients do not take the context into account and, hence, perform more veridically than healthy controls. In other paradigms, context deteriorates performance much more strongly in patients compared to healthy controls. These mixed results may be explained by differences in the paradigms as well as by small or biased samples, given the large heterogeneity of patients' deficits. Here, we show that mixed results may also come from idiosyncrasies of the stimuli used because in variants of the same visual paradigm, tested with the same participants, we found intact and deficient processing.
Affiliation(s)
- Oh-Hyeon Choung
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Corresponding author. http://lpsy.epfl.ch
- Dario Gordillo
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Maya Roinishvili
- Laboratory of Vision Physiology, Ivane Beritashvili Centre of Experimental Biomedicine, Tbilisi, Georgia
- Institute of Cognitive Neurosciences, Free University of Tbilisi, Tbilisi, Georgia
- Andreas Brand
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Michael H. Herzog
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Eka Chkonia
- Department of Psychiatry, Tbilisi State Medical University, Tbilisi, Georgia
6
Tiurina NA, Markov YA, Choung OH, Herzog MH, Pascucci D. Unlocking crowding by ensemble statistics. Curr Biol 2022; 32:4975-4981.e3. [PMID: 36309011] [DOI: 10.1016/j.cub.2022.10.003]
Abstract
In crowding, objects that can be easily recognized in isolation appear jumbled when surrounded by other elements. Traditionally, crowding is explained by local pooling mechanisms, but many findings have shown that the global configuration of the entire stimulus display, rather than local aspects, determines crowding. However, understanding global configurations is challenging because even slight changes can lead from crowding to uncrowding and vice versa. Unfortunately, the number of configurations to explore is virtually infinite. Here, we show that one does not need to know the specific configuration of flankers to determine crowding strength but only their ensemble statistics, which allow for the rapid computation of groups within the stimulus display. To investigate the role of ensemble statistics in (un)crowding, we used a classic vernier offset discrimination task in which the vernier was flanked by multiple squares. We manipulated the orientation statistics of the squares based on the following rationale: a central square with an orientation different from the mean orientation of the other squares stands out from the rest and groups with the vernier, causing strong crowding. If, on the other hand, all squares group together, the vernier is the only element that stands out, and crowding is weak. These effects should depend exclusively on the perceived ensemble statistics, i.e., on the mean orientation of the squares and not on their individual orientations. In two experiments, we confirmed these predictions.
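The ensemble statistic at stake, mean orientation, must be computed circularly, because orientations wrap around. A minimal sketch (the period parameter and the vector-averaging scheme are standard circular statistics, not the authors' analysis code):

```python
import math

def ensemble_mean_orientation(angles_deg, period=90.0):
    """Circular mean of orientations that repeat every `period` degrees
    (90 for squares, 180 for bars or verniers).

    Each orientation is mapped onto the full circle, averaged as a unit
    vector, and mapped back, so e.g. 1 deg and 89 deg average to 0 deg
    (mod 90) rather than 45 deg.
    """
    k = 360.0 / period
    sin_sum = sum(math.sin(math.radians(a * k)) for a in angles_deg)
    cos_sum = sum(math.cos(math.radians(a * k)) for a in angles_deg)
    mean = math.degrees(math.atan2(sin_sum, cos_sum)) / k
    return mean % period

# Two squares tilted 1 deg and 89 deg look nearly identical, so their
# ensemble mean lies near 0 (mod 90), not at 45.
m = ensemble_mean_orientation([1.0, 89.0])
```

A naive arithmetic mean would put the ensemble at 45 degrees, a configuration the observer never perceives; the circular mean respects the stimulus symmetry.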
7
Heinke D, Leonardis A, Leek EC. What do deep neural networks tell us about biological vision? Vision Res 2022; 198:108069. [PMID: 35561463] [DOI: 10.1016/j.visres.2022.108069]
Affiliation(s)
- Dietmar Heinke
- School of Psychology, University of Birmingham, United Kingdom.
- Ales Leonardis
- School of Computer Science, University of Birmingham, United Kingdom
- E Charles Leek
- Department of Psychology, University of Liverpool, United Kingdom
8
Neri P. Deep networks may capture biological behavior for shallow, but not deep, empirical characterizations. Neural Netw 2022; 152:244-266. [PMID: 35567948] [DOI: 10.1016/j.neunet.2022.04.023]
Abstract
We assess whether deep convolutional networks (DCN) can account for a most fundamental property of human vision: detection/discrimination of elementary image elements (bars) at different contrast levels. The human visual process can be characterized to varying degrees of "depth," ranging from percentage of correct detection to detailed tuning and operating characteristics of the underlying perceptual mechanism. We challenge deep networks with the same stimuli/tasks used with human observers and apply equivalent characterization of the stimulus-response coupling. In general, we find that popular DCN architectures do not account for signature properties of the human process. For shallow depth of characterization, some variants of network-architecture/training-protocol produce human-like trends; however, more articulate empirical descriptors expose glaring discrepancies. Networks can be coaxed into learning those richer descriptors by shadowing a human surrogate in the form of a tailored circuit perturbed by unstructured input, thus ruling out the possibility that human-model misalignment in standard protocols may be attributable to insufficient representational power. These results urge caution in assessing whether neural networks do or do not capture human behavior: ultimately, our ability to assess "success" in this area can only be as good as afforded by the depth of behavioral characterization against which the network is evaluated. We propose a novel set of metrics/protocols that impose stringent constraints on the evaluation of DCN behavior as an adequate approximation to biological processes.
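The "shallow" end of such characterization, percent correct as a function of contrast, is conventionally summarized by a psychometric function. A minimal sketch under assumed parameters (Weibull form, fixed slope, brute-force maximum-likelihood fit on simulated 2AFC data; not the paper's actual analysis pipeline):

```python
import math
import random

def weibull(contrast, threshold, slope, guess=0.5, lapse=0.01):
    """Weibull psychometric function: probability correct as a function
    of stimulus contrast, rising from the guess rate to 1 - lapse."""
    p = 1.0 - math.exp(-((contrast / threshold) ** slope))
    return guess + (1.0 - guess - lapse) * p

def fit_threshold(contrasts, n_correct, n_trials, slope=3.0):
    """Maximum-likelihood threshold estimate by brute-force grid search
    (a minimal stand-in for a proper optimizer)."""
    best_t, best_ll = None, -math.inf
    for i in range(1, 400):
        t = i / 1000.0  # candidate thresholds 0.001 .. 0.399
        ll = 0.0
        for c, k, n in zip(contrasts, n_correct, n_trials):
            p = min(max(weibull(c, t, slope), 1e-9), 1 - 1e-9)
            ll += k * math.log(p) + (n - k) * math.log(1 - p)
        if ll > best_ll:
            best_t, best_ll = t, ll
    return best_t

# Simulate a 2AFC detection experiment with a true threshold of 0.1.
random.seed(0)
contrasts = [0.02, 0.05, 0.1, 0.2, 0.4]
n_trials = [200] * len(contrasts)
n_correct = [sum(random.random() < weibull(c, 0.1, 3.0) for _ in range(n))
             for c, n in zip(contrasts, n_trials)]
t_hat = fit_threshold(contrasts, n_correct, n_trials)
```

The paper's point is precisely that such a summary is shallow: the same percent-correct curve can hide very different tuning and operating characteristics underneath.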
Affiliation(s)
- Peter Neri
- Laboratoire des Systèmes Perceptifs (UMR8248), École normale supérieure, PSL Research University, Paris, France.
9
Charles Leek E, Leonardis A, Heinke D. Deep neural networks and image classification in biological vision. Vision Res 2022; 197:108058. [PMID: 35487146] [DOI: 10.1016/j.visres.2022.108058]
Abstract
In this paper we consider recent advances in the use of deep convolutional neural networks to understanding biological vision. We focus on claims about the plausibility of feedforward deep convolutional neural networks (fDCNNs) as models of image classification in the biological system. Despite the putative similarity of these networks to some properties of the biological vision system, and the remarkable levels of performance accuracy of some fDCNNs, we argue that their plausibility as a framework for understanding image classification remains unclear. We highlight two key issues that we suggest are relevant to the evaluation of any form of DNN used to examine biological vision: (1) Network transparency under analysis - that is, the challenge of understanding what networks do, and how they do it. (2) Identifying appropriate benchmarks for comparing network performance and the biological system using both quantitative and qualitative performance measures. We show that there are important divergences between fDCNNs and biological vision that reflect fundamental differences in computational architectures, and representational structures, supporting image classification in these networks and the biological system.
Affiliation(s)
- Dietmar Heinke
- School of Computer Science, University of Birmingham, UK
10
Jimenez M, Kimchi R, Yashar A. Mixture-modeling approach reveals global and local processes in visual crowding. Sci Rep 2022; 12:6726. [PMID: 35468981] [DOI: 10.1038/s41598-022-10685-z]
Abstract
Crowding refers to the inability to recognize objects in clutter, setting a fundamental limit on various perceptual tasks such as reading and facial recognition. While prevailing models suggest that crowding is a unitary phenomenon occurring at an early level of processing, recent studies have shown that crowding might also occur at higher levels of representation. Here we investigated whether local and global crowding interference co-occurs within the same display. To do so, we tested the distinctive contribution of local flanker features and global configurations of the flankers on the pattern of crowding errors. Observers (n = 27) estimated the orientation of a target when presented alone or surrounded by flankers. Flankers were grouped into a global configuration, forming an illusory rectangle when aligned or a rectangular configuration when misaligned. We analyzed the error distributions by fitting probabilistic mixture models. Results showed that participants often misreported the orientation of a flanker instead of that of the target. Interestingly, in some trials the orientation of the global configuration was misreported. These results suggest that crowding occurs simultaneously across multiple levels of visual processing and crucially depends on the spatial configuration of the stimulus. Our results pose a challenge to models of crowding with an early single pooling stage and might be better explained by models which incorporate the possibility of multilevel crowding and account for complex target-flanker interactions.
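The mixture-model logic for error distributions can be sketched as follows. This toy version (assumed Gaussian components with known means and fixed width, estimating only the mixing weight by EM) is far simpler than the models fitted in the paper, but shows how flanker misreports are separated from target reports:

```python
import math
import random

def fit_report_mixture(errors, flanker_offset, sigma=10.0, n_iter=100):
    """EM for a two-component mixture of report errors (in degrees):
    reports centred on the target (mean 0) vs. on a flanker
    (mean `flanker_offset`). Component means and sigma are fixed for
    simplicity; only the mixing weight is estimated.

    Returns the estimated probability of reporting the target.
    """
    def gauss(x, mu):
        z = (x - mu) / sigma
        return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

    w = 0.5  # initial weight of the target component
    for _ in range(n_iter):
        resp = []
        for e in errors:
            pt = w * gauss(e, 0.0)
            pf = (1 - w) * gauss(e, flanker_offset)
            resp.append(pt / (pt + pf))  # posterior: target component
        w = sum(resp) / len(resp)        # M-step: update mixing weight
    return w

# Simulate: 70% target reports, 30% flanker misreports at +40 deg.
random.seed(1)
errors = [random.gauss(0.0, 10.0) if random.random() < 0.7
          else random.gauss(40.0, 10.0) for _ in range(2000)]
w_target = fit_report_mixture(errors, flanker_offset=40.0)
```

In the study, additional components for individual flanker features and for the global configuration were compared; the estimated weights are what reveal whether errors are local, global, or both.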
11
Abstract
Anterior regions of the ventral visual stream encode substantial information about object categories. Are top-down category-level forces critical for arriving at this representation, or can this representation be formed purely through domain-general learning of natural image structure? Here we present a fully self-supervised model which learns to represent individual images, rather than categories, such that views of the same image are embedded nearby in a low-dimensional feature space, distinctly from other recently encountered views. We find that category information implicitly emerges in the local similarity structure of this feature space. Further, these models learn hierarchical features which capture the structure of brain responses across the human ventral visual stream, on par with category-supervised models. These results provide computational support for a domain-general framework guiding the formation of visual representation, where the proximate goal is not explicitly about category information, but is instead to learn unique, compressed descriptions of the visual world.
Affiliation(s)
- Talia Konkle
- Department of Psychology & Center for Brain Science, Harvard University, Cambridge, MA, USA.
| | - George A Alvarez
- Department of Psychology & Center for Brain Science, Harvard University, Cambridge, MA, USA.
12
Ernst MR, Burwick T, Triesch J. Recurrent processing improves occluded object recognition and gives rise to perceptual hysteresis. J Vis 2021; 21:6. [PMID: 34905052] [PMCID: PMC8684313] [DOI: 10.1167/jov.21.13.6]
Abstract
Over the past decades, object recognition has been predominantly studied and modelled as a feedforward process. This notion was supported by the fast response times in psychophysical and neurophysiological experiments and the recent success of deep feedforward neural networks for object recognition. Recently, however, this prevalent view has shifted and recurrent connectivity in the brain is now believed to contribute significantly to object recognition — especially under challenging conditions, including the recognition of partially occluded objects. Moreover, recurrent dynamics might be the key to understanding perceptual phenomena such as perceptual hysteresis. In this work we investigate if and how artificial neural networks can benefit from recurrent connections. We systematically compare architectures comprised of bottom-up, lateral, and top-down connections. To evaluate the impact of recurrent connections for occluded object recognition, we introduce three stereoscopic occluded object datasets, which span the range from classifying partially occluded hand-written digits to recognizing three-dimensional objects. We find that recurrent architectures perform significantly better than parameter-matched feedforward models. An analysis of the hidden representation of the models suggests that occluders are progressively discounted in later time steps of processing. We demonstrate that feedback can correct the initial misclassifications over time and that the recurrent dynamics lead to perceptual hysteresis. Overall, our results emphasize the importance of recurrent feedback for object recognition in difficult situations.
Affiliation(s)
- Markus R Ernst
- Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany; Goethe-Universität Frankfurt, Frankfurt am Main, Germany
- Thomas Burwick
- Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany; Goethe-Universität Frankfurt, Frankfurt am Main, Germany
- Jochen Triesch
- Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany; Goethe-Universität Frankfurt, Frankfurt am Main, Germany; https://www.fias.science/en/fellows/detail/triesch-jochen/
13
Bornet A, Choung OH, Doerig A, Whitney D, Herzog MH, Manassi M. Global and high-level effects in crowding cannot be predicted by either high-dimensional pooling or target cueing. J Vis 2021; 21:10. [PMID: 34812839] [PMCID: PMC8626847] [DOI: 10.1167/jov.21.12.10]
Abstract
In visual crowding, the perception of a target deteriorates in the presence of nearby flankers. Traditionally, target-flanker interactions have been considered as local, mostly deleterious, low-level, and feature specific, occurring when information is pooled along the visual processing hierarchy. Recently, a vast literature of high-level effects in crowding (grouping effects and face-holistic crowding in particular) led to a different understanding of crowding, as a global, complex, and multilevel phenomenon that cannot be captured or explained by simple pooling models. It was recently argued that these high-level effects may still be captured by more sophisticated pooling models, such as the Texture Tiling model (TTM). Unlike simple pooling models, the high-dimensional pooling stage of the TTM preserves rich information about a crowded stimulus and, in principle, this information may be sufficient to drive high-level and global aspects of crowding. In addition, it was proposed that grouping effects in crowding may be explained by post-perceptual target cueing. Here, we extensively tested the predictions of the TTM on the results of six different studies that highlighted high-level effects in crowding. Our results show that the TTM cannot explain any of these high-level effects, and that the behavior of the model is equivalent to a simple pooling model. In addition, we show that grouping effects in crowding cannot be predicted by post-perceptual factors, such as target cueing. Taken together, these results reinforce once more the idea that complex target-flanker interactions determine crowding and that crowding occurs at multiple levels of the visual hierarchy.
Affiliation(s)
- Alban Bornet
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Oh-Hyeon Choung
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Adrien Doerig
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Donders Institute for Brain, Cognition and Behaviour, Nijmegen, Netherlands
- David Whitney
- Department of Psychology, University of California, Berkeley, California, USA
- Helen Wills Neuroscience Institute, University of California, Berkeley, California, USA
- Vision Science Group, University of California, Berkeley, California, USA
- Michael H Herzog
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Mauro Manassi
- School of Psychology, University of Aberdeen, King's College, Aberdeen, UK
14
Lesica NA, Mehta N, Manjaly JG, Deng L, Wilson BS, Zeng F. Harnessing the power of artificial intelligence to transform hearing healthcare and research. Nat Mach Intell 2021; 3:840-9. [DOI: 10.1038/s42256-021-00394-z]
15
Abstract
Deep neural network (DNN) models for computer vision are capable of human-level object recognition. Consequently, similarities between DNN and human vision are of interest. Here, we characterize DNN representations of Scintillating grid visual illusion images in which white disks are perceived to be partially black. Specifically, we use VGG-19 and ResNet-101 DNN models that were trained for image classification and consider the representational dissimilarity (L1 distance in the penultimate layer) between pairs of images: one with white Scintillating grid disks and the other with disks of decreasing luminance levels. Results showed a nonmonotonic relation, such that decreasing disk luminance led to an increase and subsequently a decrease in representational dissimilarity. That is, the Scintillating grid image with white disks was closer, in terms of the representation, to images with black disks than images with gray disks. In control nonillusion images, such nonmonotonicity was rare. These results suggest that nonmonotonicity in a deep computational representation is a potential test for illusion-like response geometry in DNN models.
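The dissimilarity measure and the nonmonotonicity test can be sketched directly. Note that the feature vectors below are made up for illustration; in the study they would be penultimate-layer activations of VGG-19 or ResNet-101:

```python
def l1_distance(u, v):
    """Representational dissimilarity: L1 distance between two
    feature vectors (e.g. penultimate-layer activations)."""
    return sum(abs(a - b) for a, b in zip(u, v))

def is_nonmonotonic(dissimilarities):
    """True if the sequence rises and then falls somewhere, the
    signature pattern reported for the Scintillating grid images."""
    rose = fell_after_rise = False
    for prev, cur in zip(dissimilarities, dissimilarities[1:]):
        if cur > prev:
            rose = True
        elif cur < prev and rose:
            fell_after_rise = True
    return rose and fell_after_rise

# Illustrative only: features for images whose disk luminance steps
# from white down to black (these vectors are invented, not real
# network activations).
reference = [1.0, 0.0, 2.0]
features_by_luminance = [
    [1.0, 0.0, 2.0],   # white disks: identical to the reference
    [1.5, 0.8, 2.9],   # gray disks: most dissimilar
    [1.2, 0.3, 2.2],   # black disks: closer again
]
dists = [l1_distance(reference, f) for f in features_by_luminance]
```

For a nonillusion control image, the same sweep of disk luminance should instead yield a monotonically changing distance, which is what makes nonmonotonicity a usable test.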
Affiliation(s)
- Eric D Sun
- Mather House, Harvard University, Cambridge, MA, USA.
- Ron Dekel
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel.
16
Abstract
In crowding, perception of a target deteriorates in the presence of nearby flankers. Surprisingly, perception can be rescued from crowding if additional flankers are added (uncrowding). Uncrowding is a major challenge for all classic models of crowding and vision in general, because the global configuration of the entire stimulus is crucial. However, it is unclear which characteristics of the configuration impact (un)crowding. Here, we systematically dissected flanker configurations and showed that (un)crowding cannot be easily explained by the effects of the sub-parts or low-level features of the stimulus configuration. Our modeling results suggest that (un)crowding requires global processing. These results are well in line with previous studies showing the importance of global aspects in crowding.
Affiliation(s)
- Oh-Hyeon Choung
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Alban Bornet
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Adrien Doerig
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Michael H Herzog
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
17
Lonnqvist B, Bornet A, Doerig A, Herzog MH. A comparative biology approach to DNN modeling of vision: A focus on differences, not similarities. J Vis 2021; 21:17. [PMID: 34551062] [PMCID: PMC8475290] [DOI: 10.1167/jov.21.10.17]
Abstract
Deep neural networks (DNNs) have revolutionized computer science and are now widely used for neuroscientific research. A hot debate has ensued about the usefulness of DNNs as neuroscientific models of the human visual system; the debate centers on to what extent certain shortcomings of DNNs are real failures and to what extent they are redeemable. Here, we argue that the main problem is that we often do not understand which human functions need to be modeled and, thus, what counts as a falsification. Hence, not only is there a problem on the DNN side, but there is also one on the brain side (i.e., with the explanandum, the thing to be explained). For example, should DNNs reproduce illusions? We posit that we can make better use of DNNs by adopting an approach of comparative biology by focusing on the differences, rather than the similarities, between DNNs and humans to improve our understanding of visual information processing in general.
Affiliation(s)
- Ben Lonnqvist
  - Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Alban Bornet
  - Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Adrien Doerig
  - Donders Institute for Brain, Cognition and Behaviour, Nijmegen, Netherlands
- Michael H Herzog
  - Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
18
Bornet A, Doerig A, Herzog MH, Francis G, Van der Burg E. Shrinking Bouma's window: How to model crowding in dense displays. PLoS Comput Biol 2021; 17:e1009187. [PMID: 34228703 PMCID: PMC8284675 DOI: 10.1371/journal.pcbi.1009187]
Abstract
In crowding, perception of a target deteriorates in the presence of nearby flankers. Traditionally, it is thought that visual crowding obeys Bouma's law, i.e., all elements within a certain distance interfere with the target, and that adding more elements always leads to stronger crowding. Crowding is predominantly studied using sparse displays (a target surrounded by a few flankers). However, many studies have shown that this approach leads to wrong conclusions about human vision. Van der Burg and colleagues proposed a paradigm to measure crowding in dense displays using genetic algorithms. Displays were selected and combined over several generations to maximize human performance. In contrast to Bouma's law, only the target's nearest neighbours affected performance. Here, we tested various models to explain these results. We used the same genetic algorithm, but instead of selecting displays based on human performance we selected displays based on the model's outputs. We found that all models based on the traditional feedforward pooling framework of vision were unable to reproduce human behaviour. In contrast, all models involving a dedicated grouping stage explained the results successfully. We show how traditional models can be improved by adding a grouping stage.
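The selection procedure described above can be sketched in a few lines. Everything below is an illustrative assumption, not the authors' implementation: displays are reduced to bit-vectors of occupied flanker slots, and a toy nearest-neighbour rule stands in for the candidate model whose outputs drive selection.

```python
import random

random.seed(0)

# A display is a bit-vector: which of 12 candidate flanker slots are occupied.
# Slots 5 and 6 are taken to be the target's nearest neighbours (an assumption
# for this sketch; the real displays were dense search arrays).
N_ITEMS, POP_SIZE, N_GENERATIONS = 12, 20, 30

def model_performance(display):
    # Hypothetical nearest-neighbour model: performance suffers mainly when
    # the slots adjacent to the target are occupied.
    nearest = display[5] + display[6]
    clutter = sum(display) / N_ITEMS
    return 1.0 - 0.4 * nearest - 0.05 * clutter

def crossover(a, b):
    cut = random.randrange(1, N_ITEMS)
    return a[:cut] + b[cut:]

def mutate(d, rate=0.05):
    return [1 - g if random.random() < rate else g for g in d]

pop = [[random.randint(0, 1) for _ in range(N_ITEMS)] for _ in range(POP_SIZE)]
initial_best = max(map(model_performance, pop))

for _ in range(N_GENERATIONS):
    pop.sort(key=model_performance, reverse=True)
    parents = pop[: POP_SIZE // 2]  # keep the fittest displays unchanged
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    pop = parents + children

final_best = max(map(model_performance, pop))
```

Because the fittest half is carried over unchanged, the best fitness can only improve across generations; running the same loop with human accuracy in place of `model_performance` recovers the behavioural version of the paradigm.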
Affiliation(s)
- Alban Bornet
  - Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Adrien Doerig
  - Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
  - Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Michael H. Herzog
  - Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Gregory Francis
  - Department of Psychological Sciences, Purdue University, West Lafayette, Indiana, United States of America
- Erik Van der Burg
  - TNO, Human Factors, Soesterberg, The Netherlands
  - Brain and Cognition, University of Amsterdam, Amsterdam, The Netherlands
19
Abstract
Artistic composition (the structural organization of pictorial elements) is often characterized by some basic rules and heuristics, but art history does not offer quantitative tools for segmenting individual elements, measuring their interactions and related operations. To discover whether a metric description of this kind is even possible, we exploit a deep-learning algorithm that attempts to capture the perceptual mechanism underlying composition in humans. We rely on a robust behavioral marker with known relevance to higher-level vision: orientation judgements, that is, telling whether a painting is hung “right-side up.” Humans can perform this task, even for abstract paintings. To account for this finding, existing models rely on “meaningful” content or specific image statistics, often in accordance with explicit rules from art theory. Our approach does not commit to any such assumptions/schemes, yet it outperforms previous models and for a larger database, encompassing a wide range of painting styles. Moreover, our model correctly reproduces human performance across several measurements from a new web-based experiment designed to test whole paintings, as well as painting fragments matched to the receptive-field size of different depths in the model. By exploiting this approach, we show that our deep learning model captures relevant characteristics of human orientation perception across styles and granularities. Interestingly, the more abstract the painting, the more our model relies on extended spatial integration of cues, a property supported by deeper layers.
Affiliation(s)
- Pierre Lelièvre
  - Laboratoire des systèmes perceptifs, Département d'études cognitives, Science Arts Création Recherche (EA 7410), Paris, France
  - École normale supérieure, PSL University, CNRS, Paris, France
  - http://plelievre.com
- Peter Neri
  - École normale supérieure, PSL University, CNRS, Paris, France
  - Laboratoire des systèmes perceptifs, Département d'études cognitives, Paris, France
  - https://sites.google.com/site/neripeter/
20
Funke CM, Borowski J, Stosio K, Brendel W, Wallis TSA, Bethge M. Five points to check when comparing visual perception in humans and machines. J Vis 2021; 21:16. [PMID: 33724362 PMCID: PMC7980041 DOI: 10.1167/jov.21.3.16]
Abstract
With the rise of machines to human-level performance in complex recognition tasks, a growing amount of work is directed toward comparing information processing in humans and machines. These studies are an exciting chance to learn about one system by studying the other. Here, we propose ideas on how to design, conduct, and interpret experiments such that they adequately support the investigation of mechanisms when comparing human and machine perception. We demonstrate and apply these ideas through three case studies. The first case study shows how human bias can affect the interpretation of results and that several analytic tools can help to overcome this human reference point. In the second case study, we highlight the difference between necessary and sufficient mechanisms in visual reasoning tasks. Thereby, we show that contrary to previous suggestions, feedback mechanisms might not be necessary for the tasks in question. The third case study highlights the importance of aligning experimental conditions. We find that a previously observed difference in object recognition does not hold when adapting the experiment to make conditions more equitable between humans and machines. In presenting a checklist for comparative studies of visual reasoning in humans and machines, we hope to highlight how to overcome potential pitfalls in design and inference.
Affiliation(s)
- Karolina Stosio
  - University of Tübingen, Tübingen, Germany
  - Bernstein Center for Computational Neuroscience, Tübingen and Berlin, Germany
  - Volkswagen Group Machine Learning Research Lab, Munich, Germany
- Wieland Brendel
  - University of Tübingen, Tübingen, Germany
  - Bernstein Center for Computational Neuroscience, Tübingen and Berlin, Germany
  - Werner Reichardt Centre for Integrative Neuroscience, Tübingen, Germany
- Thomas S A Wallis
  - University of Tübingen, Tübingen, Germany
  - Present address: Amazon.com, Tübingen
- Matthias Bethge
  - University of Tübingen, Tübingen, Germany
  - Bernstein Center for Computational Neuroscience, Tübingen and Berlin, Germany
  - Werner Reichardt Centre for Integrative Neuroscience, Tübingen, Germany
21
Herrera-Esposito D, Coen-Cagli R, Gomez-Sena L. Flexible contextual modulation of naturalistic texture perception in peripheral vision. J Vis 2021; 21:1. [PMID: 33393962 PMCID: PMC7794279 DOI: 10.1167/jov.21.1.1]
Abstract
Peripheral vision comprises most of our visual field, and is essential in guiding visual behavior. Its characteristic capabilities and limitations, which distinguish it from foveal vision, have been explained by the most influential theory of peripheral vision as the product of representing the visual input using summary statistics. Despite its success, this account may provide a limited understanding of peripheral vision, because it neglects processes of perceptual grouping and segmentation. To test this hypothesis, we studied how contextual modulation, namely the modulation of the perception of a stimulus by its surrounds, interacts with segmentation in human peripheral vision. We used naturalistic textures, which are directly related to summary-statistics representations. We show that segmentation cues affect contextual modulation, and that this is not captured by our implementation of the summary-statistics model. We then characterize the effects of different texture statistics on contextual modulation, providing guidance for extending the model, as well as for probing neural mechanisms of peripheral vision.
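A minimal illustration of what a summary-statistics representation discards (a toy sketch in the spirit of such models, not the implementation the authors tested): the image is carved into pooling regions and each region is reduced to a handful of statistics, so the exact spatial arrangement within a region is lost.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))  # stand-in for a grayscale texture image

def summary_stats(img, region=16):
    """Reduce each pooling region to a few statistics (a toy feature set)."""
    stats = []
    for i in range(0, img.shape[0], region):
        for j in range(0, img.shape[1], region):
            patch = img[i:i + region, j:j + region]
            gx = np.diff(patch, axis=1)  # crude horizontal / vertical
            gy = np.diff(patch, axis=0)  # "filter" responses
            stats.append([patch.mean(), patch.var(),
                          (gx ** 2).mean(), (gy ** 2).mean()])
    return np.array(stats)

S = summary_stats(img)  # one row of statistics per pooling region

# Shuffling pixels *within* a region changes the image but leaves its mean and
# variance untouched: arrangement is lost, statistics remain.
patch = img[:16, :16]
shuffled = patch.flatten()  # flatten() returns a copy
rng.shuffle(shuffled)
shuffled = shuffled.reshape(16, 16)
```

Segmentation cues of the kind the abstract describes operate on exactly the spatial structure that such per-region statistics throw away, which is why they are a useful probe of the model's limits.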
Affiliation(s)
- Daniel Herrera-Esposito
  - Laboratorio de Neurociencias, Facultad de Ciencias, Universidad de la República, Montevideo, Uruguay
- Ruben Coen-Cagli
  - Department of Systems and Computational Biology and Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- Leonel Gomez-Sena
  - Laboratorio de Neurociencias, Facultad de Ciencias, Universidad de la República, Montevideo, Uruguay
22
Abstract
Visual clutter affects our ability to see. Objects that would be identifiable on their own may become unrecognizable when presented close together ("crowding"), but the psychophysical characteristics of crowding have resisted simplification. Image properties initially thought to produce crowding have paradoxically yielded unexpected results; for example, adding flanking objects can ameliorate crowding (Manassi, Sayim, & Herzog, 2012; Herzog, Sayim, Chicherov, & Manassi, 2015; Pachai, Doerig, & Herzog, 2016). The resulting theory revisions have been sufficiently complex and specialized as to make it difficult to discern what principles may underlie the observed phenomena. Here, a generalized formulation of simple visual contrast energy is presented, arising from straightforward analyses of center and surround neurons in the early visual stream. Extant contrast measures, such as root mean square contrast, are easily shown to fall out as reduced special cases. The new generalized contrast energy metric surprisingly predicts the principal findings of a broad range of crowding studies. These early crowding phenomena may thus be said to arise predominantly from contrast or are, at least, severely confounded by contrast effects. Note that these findings may be distinct from accounts of other, likely downstream, "configural" or "semantic" instances of crowding, suggesting at least two separate forms of crowding that may resist unification. The new fundamental contrast energy formulation provides a candidate explanatory framework that addresses multiple psychophysical phenomena beyond crowding.
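The claim that RMS contrast falls out of a centre-surround contrast-energy measure as a special case can be checked numerically. This is a sketch under assumed definitions, not the paper's exact formulation: contrast energy is taken as the pooled squared response of a difference-of-Gaussians filter, and when the "surround" degenerates to the global mean luminance, the energy reduces to the luminance variance, whose square root divided by the mean is RMS contrast.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.uniform(0.2, 0.8, size=(32, 32))  # toy luminance image

def gaussian_blur(img, sigma):
    # Separable Gaussian filtering with plain numpy (no scipy dependency).
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def contrast_energy(img, sigma_c, sigma_s):
    # Pooled squared response of a difference-of-Gaussians (centre - surround).
    return np.mean((gaussian_blur(img, sigma_c) - gaussian_blur(img, sigma_s)) ** 2)

def rms_contrast(img):
    return img.std() / img.mean()

# Limiting case: centre = the pixel itself, surround = global mean luminance.
# The pooled energy is then the luminance variance, linking the two measures.
limit_energy = np.mean((img - img.mean()) ** 2)
```

Here `np.sqrt(limit_energy) / img.mean()` equals `rms_contrast(img)` exactly, which is the sense in which RMS contrast is a reduced special case of the more general centre-surround energy.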
Affiliation(s)
- Antonio Rodriguez
  - Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Richard Granger
  - Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
23
van Bergen RS, Kriegeskorte N. Going in circles is the way forward: the role of recurrence in visual inference. Curr Opin Neurobiol 2020; 65:176-193. [PMID: 33279795 DOI: 10.1016/j.conb.2020.11.009]
Abstract
Biological visual systems exhibit abundant recurrent connectivity. State-of-the-art neural network models for visual recognition, by contrast, rely heavily or exclusively on feedforward computation. Any finite-time recurrent neural network (RNN) can be unrolled along time to yield an equivalent feedforward neural network (FNN). This important insight suggests that computational neuroscientists may not need to engage recurrent computation, and that computer-vision engineers may be limiting themselves to a special case of FNN if they build recurrent models. Here we argue, to the contrary, that FNNs are a special case of RNNs and that computational neuroscientists and engineers should engage recurrence to understand how brains and machines can (1) achieve greater and more flexible computational depth, (2) compress complex computations into limited hardware, (3) integrate priors and priorities into visual inference through expectation and attention, (4) exploit sequential dependencies in their data for better inference and prediction, and (5) leverage the power of iterative computation.
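The unrolling argument at the heart of the abstract is easy to demonstrate (shapes, nonlinearity, and weight values below are arbitrary choices for illustration): running a recurrent step for T time steps computes exactly the same function as a T-layer feedforward stack whose layers share the recurrent weights.

```python
import numpy as np

rng = np.random.default_rng(1)

W_in = rng.normal(size=(4, 3))   # input -> hidden weights
W_rec = rng.normal(size=(4, 4))  # hidden -> hidden (recurrent) weights
x = rng.normal(size=3)           # a static input, re-presented at every step
T = 5

def relu(z):
    return np.maximum(z, 0.0)

# Recurrent computation: h_t = relu(W_rec @ h_{t-1} + W_in @ x)
h = np.zeros(4)
for _ in range(T):
    h = relu(W_rec @ h + W_in @ x)

# The same computation "unrolled" into a T-layer feedforward network whose
# layers all tie (share) the recurrent weights.
layers = [(W_rec, W_in)] * T
a = np.zeros(4)
for W_r, W_i in layers:
    a = relu(W_r @ a + W_i @ x)
```

The two loops are step-for-step identical, which is the sense in which a finite-time RNN "is" an FNN; the differences the authors emphasize (weight sharing, hardware reuse, flexible depth) live in the constraints, not in the input-output mapping.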
Affiliation(s)
- Ruben S van Bergen
  - Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States
- Nikolaus Kriegeskorte
  - Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States
  - Department of Psychology, Columbia University, New York, NY, United States
  - Department of Neuroscience, Columbia University, New York, NY, United States
  - Affiliated member, Electrical Engineering, Columbia University, New York, NY, United States
24
Doerig A, Schmittwilken L, Sayim B, Manassi M, Herzog MH. Capsule networks as recurrent models of grouping and segmentation. PLoS Comput Biol 2020; 16:e1008017. [PMID: 32692780 PMCID: PMC7394447 DOI: 10.1371/journal.pcbi.1008017]
Abstract
Classically, visual processing is described as a cascade of local feedforward computations. Feedforward Convolutional Neural Networks (ffCNNs) have shown how powerful such models can be. However, using visual crowding as a well-controlled challenge, we previously showed that no classic model of vision, including ffCNNs, can explain human global shape processing. Here, we show that Capsule Neural Networks (CapsNets), combining ffCNNs with recurrent grouping and segmentation, solve this challenge. We also show that ffCNNs and standard recurrent CNNs do not, suggesting that the grouping and segmentation capabilities of CapsNets are crucial. Furthermore, we provide psychophysical evidence that grouping and segmentation are implemented recurrently in humans, and show that CapsNets reproduce these results well. We discuss why recurrence seems needed to implement grouping and segmentation efficiently. Together, we provide mutually reinforcing psychophysical and computational evidence that a recurrent grouping and segmentation process is essential to understand the visual system and create better models that harness global shape computations.
Affiliation(s)
- Adrien Doerig
  - Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Lynn Schmittwilken
  - Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
  - Dept. Computational Psychology, Institute of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Berlin, Germany
- Bilge Sayim
  - Institute of Psychology, University of Bern, Bern, Switzerland
  - Univ. Lille, CNRS, UMR 9193 SCALab, Sciences Cognitives et Sciences Affectives, F-59000 Lille, France
- Mauro Manassi
  - School of Psychology, University of Aberdeen, Scotland, United Kingdom
- Michael H. Herzog
  - Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
25
Baker N, Lu H, Erlikhman G, Kellman PJ. Local features and global shape information in object classification by deep convolutional neural networks. Vision Res 2020; 172:46-61. [PMID: 32413803 DOI: 10.1016/j.visres.2020.04.003]
Abstract
Deep convolutional neural networks (DCNNs) show impressive similarities to the human visual system. Recent research, however, suggests that DCNNs have limitations in recognizing objects by their shape. We tested the hypothesis that DCNNs are sensitive to an object's local contour features but have no access to global shape information that predominates human object recognition. We employed transfer learning to assess local and global shape processing in trained networks. In Experiment 1, we used restricted and unrestricted transfer learning to retrain AlexNet, VGG-19, and ResNet-50 to classify circles and squares. We then probed these networks with stimuli with conflicting global shape and local contour information. We presented networks with overall square shapes comprised of curved elements and circles comprised of corner elements. Networks classified the test stimuli by local contour features rather than global shapes. In Experiment 2, we changed the training data to include circles and squares comprised of different elements so that the local contour features of the object were uninformative. This considerably increased the network's tendency to produce global shape responses, but deeper analyses in Experiment 3 revealed the network still showed no sensitivity to the spatial configuration of local elements. These findings demonstrate that DCNNs' performance is an inversion of human performance with respect to global and local shape processing. Whereas abstract relations of elements predominate in human perception of shape, DCNNs appear to extract only local contour fragments, with no representation of how they spatially relate to each other to form global shapes.
Affiliation(s)
- Nicholas Baker
  - Department of Psychology, University of California, Los Angeles, Los Angeles, CA, United States
- Hongjing Lu
  - Department of Psychology, University of California, Los Angeles, Los Angeles, CA, United States
  - Department of Statistics, University of California, Los Angeles, Los Angeles, CA, United States
- Gennady Erlikhman
  - Department of Psychology, University of California, Los Angeles, Los Angeles, CA, United States
- Philip J Kellman
  - Department of Psychology, University of California, Los Angeles, Los Angeles, CA, United States
26
Lonnqvist B, Clarke ADF, Chakravarthi R. Crowding in humans is unlike that in convolutional neural networks. Neural Netw 2020; 126:262-274. [PMID: 32272430 DOI: 10.1016/j.neunet.2020.03.021]
Abstract
Object recognition is a primary function of the human visual system. It has recently been claimed that the highly successful ability to recognise objects in a set of emergent computer vision systems, Deep Convolutional Neural Networks (DCNNs), can form a useful guide to recognition in humans. To test this assertion, we systematically evaluated visual crowding, a dramatic breakdown of recognition in clutter, in DCNNs and compared their performance to extant research in humans. We examined crowding in three architectures of DCNNs with the same methodology as that used among humans. We manipulated multiple stimulus factors including inter-letter spacing, letter colour, size, and flanker location to assess the extent and shape of crowding in DCNNs. We found that crowding followed a predictable pattern across architectures that was different from that in humans. Some characteristic hallmarks of human crowding, such as invariance to size, the effect of target-flanker similarity, and confusions between target and flanker identities, were completely missing, minimised or even reversed. These data show that DCNNs, while proficient in object recognition, likely achieve this competence through a set of mechanisms that are distinct from those in humans. They are not necessarily equivalent models of human or primate object recognition, and caution must be exercised when inferring mechanisms derived from their operation.
Affiliation(s)
- Ben Lonnqvist
  - Business School, University of Aberdeen, United Kingdom
- Alasdair D F Clarke
  - Department of Psychology, University of Essex, United Kingdom
- Ramakrishna Chakravarthi
  - School of Psychology, University of Aberdeen, United Kingdom