1. Falkingham PL. Reconstructing dinosaur locomotion. Biol Lett 2025;21:20240441. PMID: 39809325; PMCID: PMC11732409; DOI: 10.1098/rsbl.2024.0441.
Abstract
Dinosaur locomotor biomechanics is a subject of major interest. An animal's locomotion affects many, if not most, aspects of life reconstruction, including behaviour, performance, ecology and appearance. Yet locomotion is one aspect of non-avian dinosaurs that we cannot directly observe. To shed light on how dinosaurs moved, we must draw from multiple sources of evidence. Extant taxa provide the basic principles of locomotion, bracket soft-tissue reconstructions and provide validation data for methods and hypotheses applied to dinosaurs. The skeletal evidence itself can be used directly to reconstruct posture, range of motion and mass (segment and whole-body). Building on skeletal reconstructions, musculoskeletal models inform muscle function and form the basis of simulations to test hypotheses of locomotor performance. Finally, fossilized footprints are our only direct record of motion and can provide important snapshots of extinct animals, shedding light on speed, gait and posture. Building confident reconstructions of dinosaur locomotion requires evidence from all four sources of information. This review explores recent work in these areas, with a methodological focus.
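The mass-estimation step mentioned in the abstract is often implemented volumetrically: wrap each body segment of the skeletal reconstruction in a convex hull, inflate it toward a plausible soft-tissue volume, and multiply by tissue density. The following is a minimal Python sketch of that idea; the segment names, densities, and expansion factor are illustrative assumptions, not values from the review.

```python
# Minimal sketch of volumetric mass estimation from a skeletal
# reconstruction. All numbers below are illustrative placeholders;
# published studies calibrate them on extant taxa.

SEGMENT_DENSITY_KG_M3 = {
    "torso": 850.0,   # lungs/air sacs lower the effective density
    "neck": 900.0,
    "tail": 1000.0,
    "limbs": 1050.0,
}

def estimate_body_mass(hull_volumes_m3, expansion=1.21):
    """Whole-body mass from per-segment convex-hull volumes.

    hull_volumes_m3: dict mapping segment name -> convex-hull volume (m^3).
    expansion: factor inflating skeletal hulls to soft-tissue volume
               (hypothetical value here, not taken from the paper).
    """
    mass = 0.0
    for segment, volume in hull_volumes_m3.items():
        mass += volume * expansion * SEGMENT_DENSITY_KG_M3[segment]
    return mass

# Example with made-up hull volumes for a mid-sized theropod:
hulls = {"torso": 0.55, "neck": 0.06, "tail": 0.12, "limbs": 0.10}
print(f"Estimated body mass: {estimate_body_mass(hulls):.0f} kg")
```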
Affiliation(s)
- Peter L. Falkingham, School of Biological and Environmental Sciences, Liverpool John Moores University, Liverpool, UK
2. Machuca MDB, Israel JH, Keefe DF, Stuerzlinger W. Toward More Comprehensive Evaluations of 3D Immersive Sketching, Drawing, and Painting. IEEE Transactions on Visualization and Computer Graphics 2024;30:4648-4664. PMID: 37186537; DOI: 10.1109/tvcg.2023.3276291.
Abstract
To understand current practice and explore the potential for more comprehensive evaluations of 3D immersive sketching, drawing, and painting, we present a survey of evaluation methodologies used in existing 3D sketching research, a breakdown and discussion of important phases (sub-tasks) in the 3D sketching process, and a framework that suggests how these factors can inform evaluation strategies in future 3D sketching research. Existing evaluations identified in the survey are organized and discussed within three high-level categories: 1) evaluating the 3D sketching activity, 2) evaluating 3D sketching tools, and 3) evaluating 3D sketching artifacts. The new framework suggests targeting evaluations to one or more of these categories and identifying relevant user populations. In addition, building upon the discussion of the different phases of the 3D sketching process, the framework suggests evaluating relevant sketching tasks, which may range from low-level perception and hand movements to high-level conceptual design. Finally, we discuss limitations and challenges that arise when evaluating 3D sketching, including a lack of standardization of evaluation methods and multiple, potentially conflicting, ways to evaluate the same task and user interface usability; we also identify opportunities for more holistic evaluations. We hope the results can contribute to accelerating research in this domain and, ultimately, to broad adoption of immersive sketching systems.
3. Novotny J, Laidlaw DH. Evaluating Text Reading Speed in VR Scenes and 3D Particle Visualizations. IEEE Transactions on Visualization and Computer Graphics 2024;30:2602-2612. PMID: 38437104; DOI: 10.1109/tvcg.2024.3372093.
Abstract
This work reports how text size and other rendering conditions affect reading speeds in a virtual reality environment and a scientific data analysis application. Displaying text legibly yet space-efficiently is a challenging problem in immersive displays. Effective text displays that enable users to read at their maximum speed must account for the variety of virtual reality (VR) display hardware and possible visual exploration tasks. We investigate how text size and display parameters affect reading speed and legibility in three state-of-the-art VR displays: two head-mounted displays and one CAVE. In our perception experiments, we establish the limits at which reading speed declines as text size approaches the so-called critical print size (CPS) of each display, which can inform the design of uniform reading experiences across different VR systems. We observe an inverse correlation between display resolution and CPS. Yet, even in high-fidelity VR systems, the measured CPS was larger than in comparable physical text displays, highlighting the value of increased VR display resolutions in certain visualization scenarios. Our findings indicate that CPS can be an effective metric for evaluating VR display usability. Additionally, we evaluate the effects of text panel placement, orientation, and occlusion-reducing rendering methods on reading speeds in generic volumetric particle visualizations. Our study provides insights into the trade-off between text representation and legibility in cluttered immersive environments, offers specific suggestions for visualization designers, and highlights areas for further research.
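The key quantities in this line of work are the angular size of the text and the display's angular resolution. A brief Python sketch of those calculations follows; the display parameters and text size are illustrative values, not the hardware measured in the paper.

```python
import math

def angular_size_arcmin(height_m, distance_m):
    """Visual angle subtended by text of a given height at a given distance."""
    return math.degrees(2 * math.atan(height_m / (2 * distance_m))) * 60

def pixels_per_degree(horizontal_pixels, horizontal_fov_deg):
    """Crude average angular resolution of an HMD (ignores lens distortion)."""
    return horizontal_pixels / horizontal_fov_deg

# Illustrative numbers only: 2 cm letters viewed at 1 m, on a display
# with 1440 horizontal pixels per eye across a 100-degree field of view.
text_arcmin = angular_size_arcmin(height_m=0.02, distance_m=1.0)
ppd = pixels_per_degree(horizontal_pixels=1440, horizontal_fov_deg=100)

# Pixels available per letter height; legibility roughly requires that
# this stays above some display-specific minimum tied to the CPS.
pixels_per_letter = ppd * (text_arcmin / 60)
print(f"text size: {text_arcmin:.1f} arcmin, {pixels_per_letter:.1f} px per letter height")
```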
4. Turner ML, Falkingham PL, Gatesy SM. What Is Stance Phase on Deformable Substrates? Integr Comp Biol 2022;62:icac009. PMID: 35325150; DOI: 10.1093/icb/icac009.
Abstract
The stance phase of walking is when forces are applied to the environment to support, propel, and maneuver the body. Unlike solid surfaces, deformable substrates yield under load, allowing the foot to sink to varying degrees. For bipedal birds and their dinosaurian ancestors, a shared response to walking on these substrates has been identified in the looping path the digits follow underground. Because a volume of substrate preserves a 3-D record of stance phase in the form of footprints or tracks, understanding how the bipedal stride cycle relates to this looping motion is critical for building a track-based framework for the study of walking in extinct taxa. Here we used biplanar X-ray imaging to record and analyze 161 stance phases from 81 trials of three Helmeted Guineafowl (Numida meleagris) walking on radiolucent substrates of different consistency (solid, dry granular, firm to semi-liquid muds). Across all substrates, the feet sank to a range of depths up to 78% of hip height. With increasing substrate hydration, the majority of foot motion shifted from above to below ground. Walking kinematics sampled across all stride cycles revealed six sequential gait-based events originating from both feet, conserved throughout the spectrum of substrate consistencies during normal alternating walking. On all substrates that yielded, five sub-phases of gait were drawn out in space and formed a loop of varying shape. We describe the two-footed coordination and weight distribution that likely contributed to the observed looping patterns of an individual foot. Given such complex subsurface foot motion during normal alternating walking and some atypical walking behaviors, we discuss the definition of "stance phase" on deformable substrates. We also discuss implications of the gait-based origins of subsurface looping on the interpretation of locomotory information preserved in fossil dinosaur tracks.
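As a worked example of one measurement central to this study, the Python sketch below computes maximum sink depth as a percentage of hip height from a toe-tip trajectory; the trajectory and hip height are invented values, not the guineafowl data reported above.

```python
import numpy as np

def sink_depth_percent(toe_z, surface_z, hip_height):
    """Maximum penetration of the toe below the substrate surface,
    expressed as a percentage of standing hip height.

    toe_z: 1-D array of toe-tip heights over one stance phase (m),
           e.g. digitized from biplanar X-ray data.
    surface_z: height of the undisturbed sediment surface (m).
    hip_height: standing hip height of the animal (m).
    """
    penetration = surface_z - np.asarray(toe_z)   # positive = below surface
    return 100.0 * max(penetration.max(), 0.0) / hip_height

# Made-up trajectory: the toe descends 4 cm below the surface and exits.
toe_z = np.array([0.01, -0.01, -0.03, -0.04, -0.035, -0.01, 0.02])
print(f"sink depth: {sink_depth_percent(toe_z, surface_z=0.0, hip_height=0.12):.0f}% of hip height")
```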
Affiliation(s)
- Morgan L Turner, Department of Ecology, Evolution, and Organismal Biology, Division of Biology and Medicine, Brown University, Providence, RI 02912, USA; Department of Computer Science and Engineering, University of Minnesota, Minneapolis, MN 55455, USA
- Peter L Falkingham, School of Biological and Environmental Sciences, Liverpool John Moores University, Liverpool, UK
- Stephen M Gatesy, Department of Ecology, Evolution, and Organismal Biology, Division of Biology and Medicine, Brown University, Providence, RI 02912, USA
5. Cieri RL, Turner ML, Carney RM, Falkingham PL, Kirk AM, Wang T, Jensen B, Novotny J, Tveite J, Gatesy SM, Laidlaw DH, Kaplan H, Moorman AFM, Howell M, Engel B, Cruz C, Smith A, Gerichs W, Lian Y, Schultz JT, Farmer CG. Virtual and augmented reality: New tools for visualizing, analyzing, and communicating complex morphology. J Morphol 2021;282:1785-1800. PMID: 34689352; DOI: 10.1002/jmor.21421.
Abstract
Virtual and augmented reality (VR/AR) are new technologies with the power to revolutionize the study of morphology. Modern imaging approaches such as computed tomography, laser scanning, and photogrammetry have opened up a new digital world, enabling researchers to share and analyze morphological data electronically and in great detail. Because these digital data exist on a computer screen, however, they can remain difficult to understand and unintuitive to interact with. VR/AR technologies bridge the analog-to-digital divide by letting users interact with 3D data much as they would with actual anatomy, while also providing a more immersive experience and greater possibilities for exploration. This manuscript describes VR/AR hardware, software, and techniques, and is designed to give practicing morphologists and educators a primer on using these technologies in their research, pedagogy, and communication to a wide variety of audiences. We also include a series of case studies from the presentations and workshop given at the 2019 International Congress of Vertebrate Morphology, and suggest best practices for the use of VR/AR in comparative morphology.
Affiliation(s)
- Robert L Cieri, School of Biological Sciences, University of Utah, Salt Lake City, Utah, USA; School of Science and Engineering, University of the Sunshine Coast, Maroochydore, Queensland, Australia
- Morgan L Turner, Department of Ecology, Evolution, and Organismal Biology, Brown University, Providence, Rhode Island, USA; Department of Computer Science and Engineering, University of Minnesota, Minneapolis, Minnesota, USA
- Ryan M Carney, Department of Integrative Biology, University of South Florida, Tampa, Florida, USA
- Peter L Falkingham, School of Biological and Environmental Sciences, Liverpool John Moores University, Liverpool, UK
- Alexander M Kirk, Department of Integrative Biology, University of South Florida, Tampa, Florida, USA
- Tobias Wang, Department of Biology, Zoophysiology, Aarhus University, Aarhus, Denmark
- Bjarke Jensen, Department of Medical Biology, Amsterdam Cardiovascular Sciences, Amsterdam University Medical Centres, Amsterdam, the Netherlands
- Johannes Novotny, VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH, Vienna, Austria
- Joshua Tveite, Department of Computer Science, Brown University, Providence, Rhode Island, USA
- Stephen M Gatesy, Department of Ecology, Evolution, and Organismal Biology, Brown University, Providence, Rhode Island, USA
- David H Laidlaw, Department of Computer Science, Brown University, Providence, Rhode Island, USA
- Howard Kaplan, Advanced Visualization Center, University of South Florida, Tampa, Florida, USA
- Antoon F M Moorman, Department of Medical Biology, Amsterdam Cardiovascular Sciences, Amsterdam University Medical Centres, Amsterdam, the Netherlands
- Mark Howell, School of Biological Sciences, University of Utah, Salt Lake City, Utah, USA
- Benjamin Engel, School of Dentistry, University of Utah, Salt Lake City, Utah, USA
- Cole Cruz, School of Computing, University of Utah, Salt Lake City, Utah, USA
- Adam Smith, School of Computing, University of Utah, Salt Lake City, Utah, USA
- William Gerichs, School of Computing, University of Utah, Salt Lake City, Utah, USA
- Yingjie Lian, School of Computing, University of Utah, Salt Lake City, Utah, USA
- Johanna T Schultz, School of Science and Engineering, University of the Sunshine Coast, Maroochydore, Queensland, Australia
- C G Farmer, School of Biological Sciences, University of Utah, Salt Lake City, Utah, USA
6. Krekhov A, Cmentowski S, Waschk A, Kruger J. Deadeye Visualization Revisited: Investigation of Preattentiveness and Applicability in Virtual Environments. IEEE Transactions on Visualization and Computer Graphics 2020;26:547-557. PMID: 31425106; DOI: 10.1109/tvcg.2019.2934370.
Abstract
Visualizations rely on highlighting to attract and guide our attention. To make an object of interest stand out independently from a number of distractors, the underlying visual cue, e.g., color, has to be preattentive. In our prior work, we introduced Deadeye as an instantly recognizable highlighting technique that works by rendering the target object for one eye only. In contrast to prior approaches, Deadeye excels by not modifying any visual properties of the target. However, in the case of 2D visualizations, the method requires an additional setup to allow dichoptic presentation, which is a considerable drawback. As a follow-up to requests from the community, this paper explores Deadeye as a highlighting technique for 3D visualizations, because such stereoscopic scenarios support dichoptic presentation out of the box. Deadeye suppresses binocular disparities for the target object, so the applicability of our technique in stereoscopic settings cannot be taken for granted. With this motivation, the paper presents quantitative evaluations of Deadeye in VR, including configurations with multiple heterogeneous distractors as an important robustness challenge. After confirming the preserved preattentiveness (all average accuracies above 90%) under such real-world conditions, we explore VR volume rendering as an example application scenario for Deadeye. We depict a possible workflow for integrating our technique, conduct an exploratory survey to demonstrate benefits and limitations, and finally provide related design implications.
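The core mechanism is simple enough to express in a few lines: the highlighted object is drawn for one eye and omitted for the other, with no change to its color, size, or other visual properties. The Python sketch below illustrates the idea independently of any real graphics API; mapping it onto per-eye cameras with culling masks in a game engine is an assumption about one possible integration, not the authors' implementation.

```python
# Minimal sketch of the Deadeye idea: the target is simply omitted
# from one eye's draw list, leaving its visual properties untouched.

def build_eye_draw_lists(scene_objects, target, suppressed_eye="right"):
    """Return per-eye draw lists for dichoptic presentation.

    scene_objects: list of object identifiers in the scene.
    target: the object to highlight.
    suppressed_eye: the eye that does NOT see the target.
    """
    lists = {"left": list(scene_objects), "right": list(scene_objects)}
    lists[suppressed_eye].remove(target)
    return lists

scene = ["distractor_a", "distractor_b", "target_region"]   # hypothetical names
per_eye = build_eye_draw_lists(scene, target="target_region")
assert "target_region" in per_eye["left"]
assert "target_region" not in per_eye["right"]
# In a real stereo renderer each list would be submitted to the
# corresponding eye's render pass; in VR this needs no extra hardware.
```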
7. Bock A, Axelsson E, Costa J, Payne G, Acinapura M, Trakinski V, Emmart C, Silva C, Hansen C, Ynnerman A. OpenSpace: A System for Astrographics. IEEE Transactions on Visualization and Computer Graphics 2020;26:633-642. PMID: 31425082; DOI: 10.1109/tvcg.2019.2934259.
Abstract
Human knowledge about the cosmos is rapidly increasing as instruments and simulations generate new data that support the formation of theory and an understanding of the vastness and complexity of the universe. OpenSpace is a software system that takes on the mission of providing an integrated view of all these sources of data, supporting interactive exploration of the known universe from the millimeter scale, showing instruments on spacecraft, to the billions of light years involved in visualizing the early universe. The ambition is to support research in astronomy and space exploration, science communication at museums and in planetariums, and the introduction of exploratory astrographics to the classroom. A multitude of challenges must be met in reaching this goal, such as data variety, multiple spatio-temporal scales, and collaboration capabilities. Furthermore, the system has to be flexible and modular to enable rapid prototyping and the inclusion of new research results or space mission data, thereby shortening the time from discovery to dissemination. To support the different use cases, the system has to be hardware agnostic and support a range of platforms and interaction paradigms. In this paper we describe how OpenSpace meets these challenges in an open-source effort that is paving the way for the next generation of interactive astrographics.
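Rendering coherently across such a spatial range is a nontrivial numerical problem in its own right: single-precision GPU coordinates cannot represent small offsets at astronomical distances. The sketch below demonstrates camera-relative rendering, one common remedy; it illustrates the general technique, not necessarily OpenSpace's exact dynamic scene-graph implementation, and real systems layer scaled scene graphs on top for the most extreme ranges.

```python
import numpy as np

def to_camera_relative_f32(world_positions_f64, camera_position_f64):
    """Subtract the camera position in float64, then downcast to the
    float32 a GPU typically consumes. Detail near the camera survives
    the downcast because the large common offset is removed first."""
    rel = np.asarray(world_positions_f64, dtype=np.float64) - camera_position_f64
    return rel.astype(np.float32)

# Illustrative scene: a camera ~1 au from the origin (in metres) and a
# probe 25 cm in front of it.
camera = np.array([1.496e11, 0.0, 0.0])
probe = np.array([[1.496e11 + 0.25, 0.0, 2.0]])

# Naive float32 world coordinates collapse the 25 cm offset to zero,
# because float32 spacing at 1.5e11 m is roughly 16 km.
naive_f32 = probe.astype(np.float32) - camera.astype(np.float32)
relative_f32 = to_camera_relative_f32(probe, camera)
print(naive_f32[0][0], relative_f32[0][0])   # 0.0 vs 0.25
```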