Abstract
Neural maps combine the representation of data by codebook vectors, like a vector quantizer, with the property of topography, like a continuous function. While the quantization error is simple to compute and to compare between different maps, the topography of a map is difficult to define and to quantify. Yet topography is an advantageous property of a neural map, e.g. in the presence of noise in a transmission channel, in data visualization, and in numerous other applications. In this article we review some conceptual aspects of definitions of topography, as well as some recently proposed measures to quantify it. We first apply the measures to neural maps trained on synthetic data sets and check them for properties such as reproducibility, scalability, and systematic dependence of the value of the measure on the topology of the map. We then test the measures on maps generated for four real-world data sets: a chaotic time series, speech data, and two sets of image data. The measures are found to do an imperfect but adequate job of selecting a topographically optimal output space dimension, while they consistently single out particular maps as non-topographic.
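The quantization error referred to above has a standard form: the average distance between each data point and its best-matching codebook vector. The sketch below is a hedged illustration of that computation, not code from the paper; the array names, the NumPy dependency, and the choice of squared Euclidean distance are assumptions.

```python
import numpy as np

def quantization_error(data: np.ndarray, codebook: np.ndarray) -> float:
    """Mean squared distance from each sample to its nearest codebook vector."""
    # Pairwise squared Euclidean distances, shape (n_samples, n_units)
    d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    # For each sample, keep only the distance to the winning (nearest) unit
    return float(d2.min(axis=1).mean())

# Illustrative usage: random data quantized by a small random codebook
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))   # 1000 samples in R^3
W = rng.normal(size=(16, 3))     # 16 codebook vectors
print(quantization_error(X, W))
```

Because this error depends only on the codebook vectors and not on their arrangement in the output space, it can be compared directly across maps of different topologies, which is precisely why a separate measure of topography is needed.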