1. Newman BA, D’Angelo GJ. A Review of Cervidae Visual Ecology. Animals (Basel) 2024; 14:420. [PMID: 38338063; PMCID: PMC10854973; DOI: 10.3390/ani14030420]
Abstract
This review examines the visual systems of cervids in relation to their ability to meet their ecological needs and how their visual systems are specialized for particular tasks. Cervidae encompasses a diverse group of mammals that serve as important ecological drivers within their ecosystems. Despite evidence of highly specialized visual systems, a large portion of cervid research ignores or fails to consider the realities of cervid vision as it relates to their ecology. Failure to account for an animal's visual ecology during research can lead to unintentional biases and uninformed conclusions regarding the decision-making and behaviors of a species or population. Our review addresses core behaviors and their interrelationship with cervid visual characteristics. Historically, the study of cervid visual characteristics has been restricted to specific areas of inquiry, such as color vision, with limited integration into broader ecological and behavioral research. The purpose of our review is to bridge these gaps by offering a comprehensive review of cervid visual ecology that emphasizes the interplay between the visual adaptations of cervids and their interactions with habitats and other species. Ultimately, a better understanding of cervid visual ecology allows researchers to gain deeper insights into their behavior and ecology, providing critical information for conservation and management efforts.
Affiliation(s)
- Blaise A. Newman
- Warnell School of Forestry and Natural Resources, University of Georgia, Athens, GA 30602, USA
2. Zeil J. Visual navigation: properties, acquisition and use of views. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2022. [PMID: 36515743; DOI: 10.1007/s00359-022-01599-2]
Abstract
Panoramic views offer information on heading direction and on location to visually navigating animals. This review covers the properties of panoramic views and the information they provide to navigating animals, irrespective of image representation. Heading direction can be retrieved by alignment matching between memorized and currently experienced views, and a gradient descent in image differences can lead back to the location at which a view was memorized (positional image matching). Central place foraging insects, such as ants, bees and wasps, conduct distinctly choreographed learning walks and learning flights upon first leaving their nest that are likely to be designed to systematically collect scene memories tagged with information provided by path integration on the direction of and the distance to the nest. Equally, traveling along routes, ants have been shown to engage in scanning movements, in particular when routes are unfamiliar, again suggesting a systematic process of acquiring and comparing views. The review discusses what we know and do not know about how view memories are represented in the brain of insects, how they are acquired and how they are subsequently used for traveling along routes and for pinpointing places.
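The alignment-matching step described above can be made concrete with a few lines of code. The sketch below (Python/NumPy; the panoramic image format, its resolution and the root-mean-square difference measure are illustrative assumptions, not the review's implementation) computes a rotational image difference function and reads the best heading off its minimum.

```python
import numpy as np

def rms_difference(a: np.ndarray, b: np.ndarray) -> float:
    """Root-mean-square pixel difference between two equally sized panoramic views."""
    return float(np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2)))

def rotational_idf(current: np.ndarray, memory: np.ndarray) -> np.ndarray:
    """Image difference for every azimuthal (column-wise) shift of the current view.
    Azimuth is mapped onto columns, so np.roll along axis 1 simulates a body rotation."""
    width = current.shape[1]
    return np.array([rms_difference(np.roll(current, -s, axis=1), memory)
                     for s in range(width)])

def heading_from_alignment(current: np.ndarray, memory: np.ndarray) -> float:
    """Heading (degrees) that best aligns the current view with the memorized view."""
    idf = rotational_idf(current, memory)
    return 360.0 * int(np.argmin(idf)) / current.shape[1]

# Toy usage: the memorized view seen again after a 90 degree turn.
rng = np.random.default_rng(0)
memory = rng.random((10, 72))            # 10 x 72 "pixels", 5 degrees per column
current = np.roll(memory, 18, axis=1)    # agent rotated by 18 columns = 90 degrees
print(heading_from_alignment(current, memory))   # -> 90.0
```

Positional image matching works analogously in the translational domain: the difference between the current view and a stored snapshot shrinks smoothly as the agent approaches the place where the snapshot was taken, so gradient descent on this quantity leads back to that place.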
3. Stankiewicz J, Webb B. Looking down: a model for visual route following in flying insects. Bioinspir Biomim 2021; 16:055007. [PMID: 34243169; DOI: 10.1088/1748-3190/ac1307]
Abstract
Insect visual navigation is often assumed to depend on panoramic views of the horizon, and how these change as the animal moves. However, it is known that honey bees can visually navigate in flat, open meadows where visual information at the horizon is minimal, or would remain relatively constant across a wide range of positions. In this paper we hypothesise that these animals can navigate using view memories of the ground. We find that in natural scenes, low resolution views from an aerial perspective of ostensibly self-similar terrain (e.g. within a field of grass) provide surprisingly robust descriptors of precise spatial locations. We propose a new visual route following approach that makes use of transverse oscillations to centre a flight path along a sequence of learned views of the ground. We deploy this model on an autonomous quadcopter and demonstrate that it provides robust performance in the real world on journeys of up to 30 m. The success of our method is contingent on a robust view matching process which can evaluate the familiarity of a view with a degree of translational invariance. We show that a previously developed wavelet based bandpass orientated filter approach fits these requirements well, exhibiting double the catchment area of standard approaches. Using a realistic simulation package, we evaluate the robustness of our approach to variations in heading direction and aircraft height between inbound and outbound journeys. We also demonstrate that our approach can operate using a vision system with a biologically relevant visual acuity and viewing direction.
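The view-matching core of this model can be sketched compactly. Below is a hedged illustration (Python/NumPy with SciPy): a difference-of-Gaussians band-pass filter stands in for the wavelet-based oriented band-pass filtering the paper describes, and the normalised correlation used as a familiarity score, along with all parameter values, is an assumption for illustration only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bandpass(view: np.ndarray, low_sigma: float = 1.0, high_sigma: float = 4.0) -> np.ndarray:
    """Difference-of-Gaussians band-pass filter (a stand-in for the paper's wavelet filters).
    It removes absolute brightness and very fine texture, which is what gives the
    matching some tolerance to small translations of the ground view."""
    v = view.astype(float)
    return gaussian_filter(v, low_sigma) - gaussian_filter(v, high_sigma)

def familiarity(current: np.ndarray, memory: np.ndarray) -> float:
    """Normalised cross-correlation of band-pass filtered, downward-facing views:
    1.0 means identical, values near 0 mean unfamiliar."""
    a, b = bandpass(current).ravel(), bandpass(memory).ravel()
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.mean(a * b))
```

In the full model this score is evaluated on alternating legs of the transverse oscillation, and the flight path is re-centred toward the side on which the downward view is more familiar.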
Affiliation(s)
- J Stankiewicz
- School of Informatics, University of Edinburgh, 10 Crichton Street, Edinburgh EH8 9AB, United Kingdom
- B Webb
- School of Informatics, University of Edinburgh, 10 Crichton Street, Edinburgh EH8 9AB, United Kingdom
4. Wystrach A. Movements, embodiment and the emergence of decisions. Insights from insect navigation. Biochem Biophys Res Commun 2021; 564:70-77. [PMID: 34023071; DOI: 10.1016/j.bbrc.2021.04.114]
Abstract
We readily infer that animals make decisions, but what this implies is usually not clearly defined. The notion of 'decision-making' ultimately stems from human introspection, and is thus loaded with anthropomorphic assumptions. Notably, the decision is made internally, is based on information, and precedes the goal directed behaviour. Also, making a decision implies that 'something' did it, thus hints at the presence of a cognitive mind, whose existence is independent of the decision itself. This view may convey some truth, but here I take the opposite stance. Using examples from research in insect navigation, this essay highlights how apparent decisions can emerge without a brain, how actions can precede information or how sophisticated goal directed behaviours can be implemented without neural decisions. This perspective requires us to shake off the idea that behaviour is a consequence of the brain; and embrace the concept that movements arise from - as much as participate in - distributed interactions between various computational centres - including the body - that reverberate in closed-loop with the environment. From this perspective we may start to picture how a cognitive mind can be the consequence, rather than the cause, of such neural and body movements.
Affiliation(s)
- Antoine Wystrach
- Research Centre on Animal Cognition, Centre for Integrative Biology, CNRS, University of Toulouse, 118 route de Narbonne, F-31062 Toulouse, France.
5. Multi-modal cue integration in the black garden ant. Anim Cogn 2020; 23:1119-1127. [DOI: 10.1007/s10071-020-01360-9]
6. Le Möel F, Wystrach A. Opponent processes in visual memories: A model of attraction and repulsion in navigating insects' mushroom bodies. PLoS Comput Biol 2020; 16:e1007631. [PMID: 32023241; PMCID: PMC7034919; DOI: 10.1371/journal.pcbi.1007631]
Abstract
Solitary foraging insects display stunning navigational behaviours in visually complex natural environments. Current literature assumes that these insects are mostly driven by attractive visual memories, which are learnt when the insect's gaze is precisely oriented toward the goal direction, typically along its familiar route or towards its nest. That way, an insect could return home by simply moving in the direction that appears most familiar. Here we show using virtual reconstructions of natural environments that this principle suffers from fundamental drawbacks, notably, a given view of the world does not provide information about whether the agent should turn or not to reach its goal. We propose a simple model where the agent continuously compares its current view with both goal and anti-goal visual memories, which are treated as attractive and repulsive respectively. We show that this strategy effectively results in an opponent process, albeit not at the perceptual level, such as those proposed for colour vision or polarisation detection, but at the level of the environmental space. This opponent process results in a signal that strongly correlates with the angular error of the current body orientation so that a single view of the world now suffices to indicate whether the agent should turn or not. By incorporating this principle into a simple agent navigating in reconstructed natural environments, we show that it overcomes the usual shortcomings and produces a step-increase in navigation effectiveness and robustness. Our findings provide a functional explanation to recent behavioural observations in ants and why and how so-called aversive and appetitive memories must be combined. We propose a likely neural implementation based on insects' mushroom bodies' circuitry that produces behavioural and neural predictions contrasting with previous models.
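A minimal sketch of the opponent integration, assuming a generic pixel-based familiarity measure in place of the paper's mushroom-body model and an illustrative logistic steering rule (none of the constants come from the paper):

```python
import numpy as np

def opponent_signal(current_view: np.ndarray,
                    attractive_memories: list[np.ndarray],
                    repulsive_memories: list[np.ndarray]) -> float:
    """Attraction minus repulsion: positive when the current view resembles the
    goal-facing memories more than the anti-goal memories."""
    def best_familiarity(memories):
        # Familiarity = negative mean absolute difference to the closest stored view.
        return max(-np.mean(np.abs(current_view.astype(float) - m.astype(float)))
                   for m in memories)
    return best_familiarity(attractive_memories) - best_familiarity(repulsive_memories)

def turn_rate(signal: float, gain: float = 1.0) -> float:
    """Turn strongly when the opponent signal is negative (the view looks anti-goal-like),
    go straight when it is positive -- a single view suffices to decide whether to turn."""
    return gain / (1.0 + np.exp(signal))
```

The key point of the model survives in this toy form: neither familiarity value alone says whether to turn, but their difference tracks the angular error of the current body orientation.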
Affiliation(s)
- Florent Le Möel
- Research Centre on Animal Cognition, University Paul Sabatier/CNRS, Toulouse, France
- Antoine Wystrach
- Research Centre on Animal Cognition, University Paul Sabatier/CNRS, Toulouse, France
7. Hoffmann S, Bley A, Matthes M, Firzlaff U, Luksch H. The Neural Basis of Dim-Light Vision in Echolocating Bats. Brain Behav Evol 2019; 94:61-70. [PMID: 31747669; DOI: 10.1159/000504124]
Abstract
Echolocating bats evolved a sophisticated biosonar imaging system that allows for a life in dim-light habitats. However, especially for far-range operations such as homing, bats can support biosonar by vision. Large eyes and a retina that mainly consists of rods are assumed to be the optical adjustments that enable bats to use visual information at low light levels. In addition to optical mechanisms, many nocturnal animals evolved neural adaptations such as elongated integration times or enlarged spatial sampling areas to further increase the sensitivity of their visual system by temporal or spatial summation of visual information. The neural mechanisms that underlie the visual capabilities of echolocating bats have, however, so far not been investigated. To shed light on spatial and temporal response characteristics of visual neurons in an echolocating bat, Phyllostomus discolor, we recorded extracellular multiunit activity in the retino-recipient superficial layers of the superior colliculus (SC). We discovered that response latencies of these neurons were generally in the mammalian range, whereas neural spatial sampling areas were unusually large compared to those measured in the SC of other mammals. From this we suggest that echolocating bats likely use spatial but not temporal summation of visual input to improve visual performance under dim-light conditions. Furthermore, we hypothesize that bats compensate for the loss of visual spatial precision, which is a byproduct of spatial summation, by integration of spatial information provided by both the visual and the biosonar systems. Given that knowledge about neural adaptations to dim-light vision is mainly based on studies done in non-mammalian species, our novel data provide a valuable contribution to the field and demonstrate the suitability of echolocating bats as a nocturnal animal model to study the neurophysiological aspects of dim-light vision.
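As a brief reminder of why spatial summation raises sensitivity (a textbook signal-to-noise argument, not a result of this study): pooling the responses of N retinal inputs whose photon noise is independent gives roughly

```latex
\mathrm{SNR}_{\text{pooled}} \approx \sqrt{N}\,\mathrm{SNR}_{\text{single}},
\qquad
\Delta\rho_{\text{eff}} \approx \sqrt{N}\,\Delta\rho ,
```

so the gain in sensitivity is paid for with a coarser effective acceptance angle, which is the loss of spatial precision the authors suggest bats offset by combining vision with biosonar.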
Affiliation(s)
- Susanne Hoffmann
- Chair of Zoology, Technische Universität München, Freising-Weihenstephan, Germany; Max Planck Institute for Ornithology, Department of Behavioural Neurobiology, Seewiesen, Germany
- Alexandra Bley
- Chair of Zoology, Technische Universität München, Freising-Weihenstephan, Germany
- Mariana Matthes
- Chair of Zoology, Technische Universität München, Freising-Weihenstephan, Germany
- Uwe Firzlaff
- Chair of Zoology, Technische Universität München, Freising-Weihenstephan, Germany
- Harald Luksch
- Chair of Zoology, Technische Universität München, Freising-Weihenstephan, Germany
8. Taylor GJ, Tichit P, Schmidt MD, Bodey AJ, Rau C, Baird E. Bumblebee visual allometry results in locally improved resolution and globally improved sensitivity. eLife 2019; 8:e40613. [PMID: 30803484; PMCID: PMC6391067; DOI: 10.7554/elife.40613]
Abstract
The quality of visual information that is available to an animal is limited by the size of its eyes. Differences in eye size can be observed even between closely related individuals, yet we understand little about how this affects vision. Insects are good models for exploring the effects of size on visual systems because many insect species exhibit size polymorphism. Previous work has been limited by difficulties in determining the 3D structure of eyes. We have developed a novel method based on x-ray microtomography to measure the 3D structure of insect eyes and to calculate predictions of their visual capabilities. We used our method to investigate visual allometry in the bumblebee Bombus terrestris and found that size affects specific aspects of vision, including binocular overlap, optical sensitivity, and dorsofrontal visual resolution. This reveals that differential scaling between eye areas provides flexibility that improves the visual capabilities of larger bumblebees.

Bees fly through complex environments in search of nectar from flowers. They are aided in this quest by excellent eyesight. Scientists have extensively studied the eyesight of honeybees to learn more about how such tiny eyes work and how they process and learn visual information. Less is known about the honeybee’s larger cousins, the bumblebees, which are also important pollinators. Bumblebees come in different sizes and one question scientists have is how eye size affects vision. Bigger bumblebees are known to have bigger eyes, and bigger eyes are usually better. But which aspects of vision are improved in larger eyes is not clear. For example, does the size of a bee’s eyes affect how large their field of view is, or how sensitive they are to light? Or does it impact their visual acuity, a measurement of the smallest objects the eye can see? Scaling up an eye would likely improve all these aspects of sight slightly, but changes in a small area of the eye might more drastically improve some parts of vision. Now, Taylor et al. show that larger bumblebees with bigger eyes have better vision than their smaller counterparts. In the experiments, a technique called microtomography was used to measure the 3D structure of bumblebee eyes. The measurements were then applied to build 3D models of the bumblebee eyes, and computational geometry was used to calculate the sensitivity, acuity, and viewing direction across the entire surface of each model eye. Taylor et al. found that larger bees had improved ability to see small objects in front or slightly above them. They had a bigger area of overlap between the sight in both eyes when they looked forward and up. They were also more sensitive to light across the eye. The experiments show that improvements in eyesight with larger size are very specific and likely help larger bees to adapt to their environment. Behavioral studies could help scientists better understand how these changes help bigger bees and how the traits evolved. These findings might also help engineers trying to design miniature cameras to help small, flying autonomous vehicles navigate. Bees fly through complex environments and face challenges similar to those small flying vehicles would face. Emulating the design of bee eyes and how they change with size might lead to the development of better cameras for these vehicles.
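For context, the 'optical sensitivity' of a compound eye in such analyses is usually computed with the standard sensitivity equation of Warrant and Nilsson (quoted here as the conventional formulation; whether the authors used exactly this variant is not stated in the abstract):

```latex
S \;=\; \left(\frac{\pi}{4}\right)^{2} A^{2} \left(\frac{d}{f}\right)^{2} \frac{k\,l}{2.3 + k\,l}
\qquad [\mu\mathrm{m}^{2}\,\mathrm{sr}],
```

where A is the facet (aperture) diameter, d and l the rhabdom diameter and length, f the focal length and k the absorption coefficient of the photoreceptors. Larger facets and wider rhabdoms, as measured in the bigger bumblebees, therefore raise S directly.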
Affiliation(s)
- Pierre Tichit
- Department of Biology, Lund University, Lund, Sweden
- Marie D Schmidt
- Department of Biology, Lund University, Lund, Sweden; Westphalian University of Applied Sciences, Bocholt, Germany
- Emily Baird
- Department of Biology, Lund University, Lund, Sweden; Department of Zoology, Stockholm University, Stockholm, Sweden
9. Palavalli-Nettimi R, Ogawa Y, Ryan LA, Hart NS, Narendra A. Miniaturisation reduces contrast sensitivity and spatial resolving power in ants. J Exp Biol 2019; 222:jeb203018. [DOI: 10.1242/jeb.203018]
Abstract
Vision is crucial for animals to find prey, locate conspecifics, and to navigate within cluttered landscapes. Animals need to discriminate objects against a visually noisy background. However, the ability to detect spatial information is limited by eye size. In insects, as individuals become smaller, the space available for the eyes reduces, which affects the number of ommatidia, the size of the lens and the downstream information processing capabilities. The evolution of small body size in a lineage, known as miniaturisation, is common in insects. Here, using pattern electroretinography with vertical sinusoidal gratings as stimuli, we studied how miniaturisation affects spatial resolving power and contrast sensitivity in four diurnal ants that live in a similar environment but varied in their body and eye size. We found that ants with fewer and smaller ommatidial facets had lower spatial resolving power and contrast sensitivity. The spatial resolving power was highest in the largest ant, Myrmecia tarsata, at 0.60 cycles per degree (cpd), compared with 0.48 cpd in the ant with the smallest eyes, Rhytidoponera inornata. Maximum contrast sensitivity (minimum contrast threshold) in M. tarsata (2627 facets) was 15.51 (6.4% contrast detection threshold) at 0.1 cpd, while the smallest ant, R. inornata (227 facets), had a maximum contrast sensitivity of 1.34 (74.1% contrast detection threshold) at 0.05 cpd. This is the first study to physiologically investigate contrast sensitivity in the context of insect allometry. Miniaturisation thus dramatically decreases maximum contrast sensitivity and also reduces spatial resolution, which could have implications for visually guided behaviours.
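The two ways the abstract reports the same measurement are related by a simple reciprocal, since contrast sensitivity is defined as the inverse of the Michelson contrast at threshold:

```latex
C_s = \frac{1}{c_{\mathrm{th}}}:\qquad
\frac{1}{15.51} \approx 0.064\ (6.4\%\ \text{threshold}),\qquad
\frac{1}{1.34} \approx 0.75\ (\approx 74\%\ \text{threshold}).
```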
Affiliation(s)
- Yuri Ogawa
- Department of Biological Sciences, Macquarie University, Sydney, NSW 2109, Australia
- Laura A. Ryan
- Department of Biological Sciences, Macquarie University, Sydney, NSW 2109, Australia
- Nathan S. Hart
- Department of Biological Sciences, Macquarie University, Sydney, NSW 2109, Australia
- Ajay Narendra
- Department of Biological Sciences, Macquarie University, Sydney, NSW 2109, Australia
10. Stone T, Mangan M, Wystrach A, Webb B. Rotation invariant visual processing for spatial memory in insects. Interface Focus 2018; 8:20180010. [PMID: 29951190; PMCID: PMC6015815; DOI: 10.1098/rsfs.2018.0010]
Abstract
Visual memory is crucial to navigation in many animals, including insects. Here, we focus on the problem of visual homing, that is, using comparison of the view at a current location with a view stored at the home location to control movement towards home by a novel shortcut. Insects show several visual specializations that appear advantageous for this task, including almost panoramic field of view and ultraviolet light sensitivity, which enhances the salience of the skyline. We discuss several proposals for subsequent processing of the image to obtain the required motion information, focusing on how each might deal with the problem of yaw rotation of the current view relative to the home view. Possible solutions include tagging of views with information from the celestial compass system, using multiple views pointing towards home, or rotation invariant encoding of the view. We illustrate briefly how a well-known shape description method from computer vision, Zernike moments, could provide a compact and rotation invariant representation of sky shapes to enhance visual homing. We discuss the biological plausibility of this solution, and also a fourth strategy, based on observed behaviour of insects, that involves transfer of information from visual memory matching to the compass system.
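To make the Zernike-moment suggestion concrete, here is a small, self-contained sketch (Python/NumPy) that computes low-order Zernike moment magnitudes of a sky/ground image projected onto the unit disc; the discretisation, the chosen orders and the binary test image are illustrative assumptions, not details from the paper. Because rotating the image only multiplies each moment A_nm by a phase factor e^(-i m alpha), the magnitudes |A_nm| form a rotation-invariant descriptor, which is exactly the yaw invariance the homing problem requires.

```python
import numpy as np
from math import factorial

def radial_poly(n: int, m: int, r: np.ndarray) -> np.ndarray:
    """Zernike radial polynomial R_nm(r) for n >= |m|, n - |m| even."""
    m = abs(m)
    out = np.zeros_like(r)
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * factorial(n - s)
             / (factorial(s) * factorial((n + m) // 2 - s) * factorial((n - m) // 2 - s)))
        out += c * r ** (n - 2 * s)
    return out

def zernike_magnitudes(img: np.ndarray, orders=((2, 0), (2, 2), (3, 1), (4, 0))) -> np.ndarray:
    """Rotation-invariant |A_nm| of an image sampled on the unit disc."""
    h, w = img.shape
    y, x = np.mgrid[-1:1:h * 1j, -1:1:w * 1j]
    r, theta = np.hypot(x, y), np.arctan2(y, x)
    disc = r <= 1.0
    f = img.astype(float) * disc
    mags = []
    for n, m in orders:
        basis = radial_poly(n, m, r) * np.exp(-1j * m * theta) * disc
        # (n+1)/pi * integral of f * conj(V_nm), with an approximate area element.
        a_nm = (n + 1) / np.pi * np.sum(f * basis) * (2.0 / h) * (2.0 / w)
        mags.append(abs(a_nm))
    return np.array(mags)

# Toy check: the descriptor is unchanged when the "sky shape" is rotated by 90 degrees.
rng = np.random.default_rng(1)
sky = (rng.random((65, 65)) > 0.5).astype(float)
print(zernike_magnitudes(sky))
print(zernike_magnitudes(np.rot90(sky)))   # approximately equal magnitudes
```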
Affiliation(s)
- Thomas Stone
- School of Informatics, University of Edinburgh, 10 Crichton Street, Edinburgh EH8 9AB, UK
- Michael Mangan
- Sheffield Robotics, Department of Computer Science, University of Sheffield, Regent Court, Sheffield S1 4DP, UK
- Antoine Wystrach
- CNRS, Université Paul Sabatier, Toulouse, 31062 cedex 09, France
- Barbara Webb
- School of Informatics, University of Edinburgh, 10 Crichton Street, Edinburgh EH8 9AB, UK
11. Binhi VN. A limit in the dynamic increase in the accuracy of group migration. Biosystems 2018; 166:19-25. [DOI: 10.1016/j.biosystems.2018.02.003]
12. Murray T, Zeil J. Quantifying navigational information: The catchment volumes of panoramic snapshots in outdoor scenes. PLoS One 2017; 12:e0187226. [PMID: 29088300; PMCID: PMC5663442; DOI: 10.1371/journal.pone.0187226]
Abstract
Panoramic views of natural environments provide visually navigating animals with two kinds of information: they define locations because image differences increase smoothly with distance from a reference location and they provide compass information, because image differences increase smoothly with rotation away from a reference orientation. The range over which a given reference image can provide navigational guidance (its 'catchment area') has to date been quantified from the perspective of walking animals by determining how image differences develop across the ground plane of natural habitats. However, to understand the information available to flying animals there is a need to characterize the 'catchment volumes' within which panoramic snapshots can provide navigational guidance. We used recently developed camera-based methods for constructing 3D models of natural environments and rendered panoramic views at defined locations within these models with the aim of mapping navigational information in three dimensions. We find that in relatively open woodland habitats, catchment volumes are surprisingly large extending for metres depending on the sensitivity of the viewer to image differences. The size and the shape of catchment volumes depend on the distance of visual features in the environment. Catchment volumes are smaller for reference images close to the ground and become larger for reference images at some distance from the ground and in more open environments. Interestingly, catchment volumes become smaller when only above horizon views are used and also when views include a 1 km distant panorama. We discuss the current limitations of mapping navigational information in natural environments and the relevance of our findings for our understanding of visual navigation in animals and autonomous robots.
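The catchment idea can be illustrated with a short sketch: given a function that renders a panoramic view at any 3D location (here a hypothetical render_view stub standing in for rendering from the paper's camera-based 3D models), the catchment region of a reference snapshot is the set of positions from which greedy descent on the image difference function ends at the reference location. The step size, neighbourhood and difference measure below are illustrative assumptions.

```python
import numpy as np

def image_difference(a: np.ndarray, b: np.ndarray) -> float:
    """RMS pixel difference between two rendered panoramic views."""
    return float(np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2)))

def descend_idf(render_view, start: np.ndarray, reference_view: np.ndarray,
                step: float = 0.1, max_steps: int = 500) -> np.ndarray:
    """Greedy descent on the image difference function in 3D.

    `render_view(pos)` must return the panoramic image at position `pos`
    (a stand-in for rendering from a photogrammetric model of the habitat).
    The walk ends at a local minimum of the IDF; if that minimum lies at the
    reference location, `start` is inside the snapshot's catchment volume.
    """
    pos = np.asarray(start, dtype=float)
    offsets = step * np.array([[dx, dy, dz]
                               for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)
                               if (dx, dy, dz) != (0, 0, 0)])
    for _ in range(max_steps):
        here = image_difference(render_view(pos), reference_view)
        candidates = [(image_difference(render_view(pos + o), reference_view), o)
                      for o in offsets]
        best_diff, best_off = min(candidates, key=lambda t: t[0])
        if best_diff >= here:          # local minimum reached
            break
        pos = pos + best_off
    return pos
```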
Affiliation(s)
- Trevor Murray
- Research School of Biology, Australian National University, Canberra, Australia
- Jochen Zeil
- Research School of Biology, Australian National University, Canberra, Australia
13. Graham P, Philippides A. Vision for navigation: What can we learn from ants? Arthropod Struct Dev 2017; 46:718-722. [PMID: 28751148; DOI: 10.1016/j.asd.2017.07.001]
Abstract
The visual systems of all animals are used to provide information that can guide behaviour. In some cases insects demonstrate particularly impressive visually-guided behaviour and then we might reasonably ask how the low-resolution vision and limited neural resources of insects are tuned to particular behavioural strategies. Such questions are of interest to both biologists and to engineers seeking to emulate insect-level performance with lightweight hardware. One behaviour that insects share with many animals is the use of learnt visual information for navigation. Desert ants, in particular, are expert visual navigators. Across their foraging life, ants can learn long idiosyncratic foraging routes. What's more, these routes are learnt quickly and the visual cues that define them can be implemented for guidance independently of other social or personal information. Here we review the style of visual navigation in solitary foraging ants and consider the physiological mechanisms that underpin it. Our perspective is to consider that robust navigation comes from the optimal interaction between behavioural strategy, visual mechanisms and neural hardware. We consider each of these in turn, highlighting the value of ant-like mechanisms in biomimetic endeavours.
Affiliation(s)
- Paul Graham
- Centre for Computational Neuroscience and Robotics, University of Sussex, Brighton, BN1 9QG, UK.
- Andrew Philippides
- Centre for Computational Neuroscience and Robotics, University of Sussex, Brighton, BN1 9QG, UK
14. Ramirez-Esquivel F, Leitner NE, Zeil J, Narendra A. The sensory arrays of the ant, Temnothorax rugatulus. Arthropod Struct Dev 2017; 46:552-563. [PMID: 28347859; DOI: 10.1016/j.asd.2017.03.005]
Abstract
Individual differences in response thresholds to task-related stimuli may be one mechanism driving task allocation among social insect workers. These differences may arise at various stages in the nervous system. We investigate variability in the peripheral nervous system as a simple mechanism that can introduce inter-individual differences in sensory information. In this study we describe size-dependent variation of the compound eyes and the antennae in the ant Temnothorax rugatulus. Head width in T. rugatulus varies between 0.4 and 0.7 mm (2.6-3.8 mm body length). But despite this limited range of worker sizes we find sensory array variability. We find that the number of ommatidia and of some, but not all, antennal sensilla types vary with head width. The antennal array of T. rugatulus displays the full complement of sensillum types observed in other species of ants, although at much lower quantities than other, larger, studied species. In addition, we describe what we believe to be a new type of sensillum in Hymenoptera that occurs on the antennae and on all body segments. T. rugatulus has apposition compound eyes with 45-76 facets per eye, depending on head width, with average lens diameters of 16.5 μm, rhabdom diameters of 5.7 μm and inter-ommatidial angles of 16.8°. The optical system of T. rugatulus ommatidia is severely under-focussed, but the absolute sensitivity of the eyes is unusually high. We discuss the functional significance of these findings and the extent to which the variability of sensory arrays may correlate with task allocation.
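For orientation, the reported inter-ommatidial angle already bounds the eye's spatial resolving power: the sampling (Nyquist) limit of a compound eye is roughly nu_s = 1/(2 * delta_phi), so with delta_phi ≈ 16.8°,

```latex
\nu_s \;=\; \frac{1}{2\,\Delta\phi} \;=\; \frac{1}{2 \times 16.8^{\circ}} \;\approx\; 0.03\ \text{cycles deg}^{-1},
```

more than an order of magnitude coarser than the 0.48-0.60 cpd measured for the larger ants in entry 9. This back-of-the-envelope estimate ignores the hexagonal-lattice correction and optical blur and is not a value reported in the paper.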
Affiliation(s)
- Nicole E Leitner
- Department of Ecology and Evolutionary Biology, University of Arizona, PO Box 210088, Tucson, AZ 85721-0088, USA.
- Jochen Zeil
- Research School of Biology, The Australian National University, Canberra, ACT 2601, Australia.
- Ajay Narendra
- Department of Biological Sciences, Macquarie University, Sydney, NSW 2109, Australia.
15. Towne WF, Ritrovato AE, Esposto A, Brown DF. Honeybees use the skyline in orientation. J Exp Biol 2017; 220:2476-2485. [PMID: 28450409; DOI: 10.1242/jeb.160002]
Abstract
In view-based navigation, animals acquire views of the landscape from various locations and then compare the learned views with current views in order to orient in certain directions or move toward certain destinations. One landscape feature of great potential usefulness in view-based navigation is the skyline, the silhouette of terrestrial objects against the sky, as it is distant, relatively stable and easy to detect. The skyline has been shown to be important in the view-based navigation of ants, but no flying insect has yet been shown definitively to use the skyline in this way. Here, we show that honeybees do indeed orient using the skyline. A feeder was surrounded with an artificial replica of the natural skyline there, and the bees' departures toward the nest were recorded from above with a video camera under overcast skies (to eliminate celestial cues). When the artificial skyline was rotated, the bees' departures were rotated correspondingly, showing that the bees oriented by the artificial skyline alone. We discuss these findings in the context of the likely importance of the skyline in long-range homing in bees, the likely importance of altitude in using the skyline, the likely role of ultraviolet light in detecting the skyline, and what we know about the bees' ability to resolve skyline features.
Affiliation(s)
- William F Towne
- Department of Biology, Kutztown University of Pennsylvania, Kutztown, PA 19529, USA
- Antonina Esposto
- Department of Biology, Kutztown University of Pennsylvania, Kutztown, PA 19529, USA
- Duncan F Brown
- Department of Biology, Kutztown University of Pennsylvania, Kutztown, PA 19529, USA
16. Fleischmann PN, Christian M, Müller VL, Rössler W, Wehner R. Ontogeny of learning walks and the acquisition of landmark information in desert ants, Cataglyphis fortis. J Exp Biol 2016; 219:3137-3145. [PMID: 27481270; DOI: 10.1242/jeb.140459]
Abstract
At the beginning of their foraging lives, desert ants (Cataglyphis fortis) are for the first time exposed to the visual world within which they henceforth must accomplish their navigational tasks. Their habitat, North African salt pans, is barren, and the nest entrance, a tiny hole in the ground, is almost invisible. Although natural landmarks are scarce and the ants mainly depend on path integration for returning to the starting point, they can also learn and use landmarks successfully to navigate through their largely featureless habitat. Here, we studied how the ants acquire this information at the beginning of their outdoor lives within a nest-surrounding array of three artificial black cylinders. Individually marked 'newcomers' exhibit a characteristic sequence of learning walks. The meandering learning walks covering all directions of the compass first occur only within a few centimeters of the nest entrance, but then increasingly widen, until after three to seven learning walks, foraging starts. When displaced to a distant test field in which an identical array of landmarks has been installed, the ants shift their search density peaks more closely to the fictive goal position, the more learning walks they have performed. These results suggest that learning of a visual landmark panorama around a goal is a gradual rather than an instantaneous process.
Affiliation(s)
- Pauline N Fleischmann
- Behavioral Physiology and Sociobiology (Zoology II), Biozentrum, University of Würzburg, Am Hubland, Würzburg 97074, Germany
- Marcelo Christian
- Behavioral Physiology and Sociobiology (Zoology II), Biozentrum, University of Würzburg, Am Hubland, Würzburg 97074, Germany
- Valentin L Müller
- Behavioral Physiology and Sociobiology (Zoology II), Biozentrum, University of Würzburg, Am Hubland, Würzburg 97074, Germany
- Wolfgang Rössler
- Behavioral Physiology and Sociobiology (Zoology II), Biozentrum, University of Würzburg, Am Hubland, Würzburg 97074, Germany
- Rüdiger Wehner
- Brain Research Institute, University of Zürich, Winterthurerstrasse 190, Zürich CH-8057, Switzerland
17. Autonomous Visual Navigation of an Indoor Environment Using a Parsimonious, Insect Inspired Familiarity Algorithm. PLoS One 2016; 11:e0153706. [PMID: 27119720; PMCID: PMC4847926; DOI: 10.1371/journal.pone.0153706]
Abstract
The navigation of bees and ants from hive to food and back has captivated people for more than a century. Recently, the Navigation by Scene Familiarity Hypothesis (NSFH) has been proposed as a parsimonious approach that is congruent with the limited neural elements of these insects' brains. In the NSFH approach, an agent completes an initial training excursion, storing images along the way. To retrace the path, the agent scans the area and compares the current scenes to those previously experienced. By turning and moving to minimize the pixel-by-pixel differences between encountered and stored scenes, the agent is guided along the path without having memorized the sequence. An important premise of the NSFH is that the visual information of the environment is adequate to guide navigation without aliasing. Here we demonstrate that an image landscape of an indoor setting possesses ample navigational information. We produced a visual landscape of our laboratory and part of the adjoining corridor consisting of 2816 panoramic snapshots arranged in a grid at 12.7-cm centers. We show that pixel-by-pixel comparisons of these images yield robust translational and rotational visual information. We also produced a simple algorithm that tracks previously experienced routes within our lab based on an insect-inspired scene familiarity approach and demonstrate that adequate visual information exists for an agent to retrace complex training routes, including those where the path's end is not visible from its origin. We used this landscape to systematically test the interplay of sensor morphology, angles of inspection, and similarity threshold with the recapitulation performance of the agent. Finally, we compared the relative information content and chance of aliasing within our visually rich laboratory landscape to scenes acquired from indoor corridors with more repetitive scenery.
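A minimal sketch of the scene-familiarity steering loop the study builds on (the scan range, step size, familiarity threshold and the get_view lookup into a gridded image landscape are illustrative assumptions, not the authors' parameters):

```python
import numpy as np

def view_difference(a: np.ndarray, b: np.ndarray) -> float:
    """Pixel-by-pixel difference used as the (un)familiarity of a scene."""
    return float(np.mean(np.abs(a.astype(float) - b.astype(float))))

def most_familiar_heading(get_view, pos, heading, training_views,
                          scan_deg=np.arange(-60, 61, 10)):
    """Scan headings around the current one and return the absolute heading whose
    view best matches ANY stored training snapshot; the route is retraced without
    memorising the sequence of snapshots.
    `get_view(pos, heading)` must return the view seen at `pos` when facing `heading`
    (e.g. resampled from a gridded panoramic image landscape)."""
    best_heading, best_score = heading, np.inf
    for offset in scan_deg:
        view = get_view(pos, heading + offset)
        score = min(view_difference(view, mem) for mem in training_views)
        if score < best_score:
            best_heading, best_score = heading + offset, score
    return best_heading, best_score

def recapitulate(get_view, start_pos, training_views,
                 step=0.127, n_steps=200, threshold=30.0):
    """Repeatedly turn to the most familiar heading and step forward, stopping when
    nothing looks familiar enough (a crude guard against aliasing)."""
    pos, heading, path = np.asarray(start_pos, dtype=float), 0.0, []
    for _ in range(n_steps):
        heading, score = most_familiar_heading(get_view, pos, heading, training_views)
        if score > threshold:
            break
        pos = pos + step * np.array([np.cos(np.radians(heading)),
                                     np.sin(np.radians(heading))])
        path.append(pos.copy())
    return np.array(path)
```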