1. Beger AW, Hannan S, Patel R, Sweeney EM. Virtual escape rooms in anatomy education: case studies from two institutions. Adv Physiol Educ 2025;49:621-632. PMID: 40178833. DOI: 10.1152/advan.00248.2024.
Abstract
Virtual escape rooms (ERs) require learners to solve puzzles and answer riddles while trying to "escape" a digital room. Although the educational merit of such gamified learning activities continues to be realized, guides on the development of ERs are lacking, as well as student perceptions on how, if, and where they should be integrated into medical curricula. Therefore, the aim of this study was to describe the experiences of building anatomy-themed virtual ERs of differing formats at two separate institutions, Queen's University Belfast (QUB) and Edward Via College of Osteopathic Medicine (VCOM), focusing on abdominal and upper limb anatomy, respectively. Google Workspace applications served as the primary platform. Three-dimensional (3-D) models were built with photogrammetry techniques or Virtual Human Dissector software (www.toltech.net) and integrated into the ER. Of 69 students and staff invited at QUB, 9 (13%) participated in the in-person virtual ER in teams of two or three (7 medical students, 2 anatomy instructors). Of 27 VCOM medical students invited, 8 (30%) agreed to participate and individually completed VCOM's virtual ER remotely. Anonymous surveys and a focus group revealed the ERs to be enjoyable and engaging and that they encouraged participants to think about material in a new way while helping them to identify knowledge gaps. Strengths and weaknesses of different designs (linear vs. nonlinear), delivery methods (in person vs. remote), and grouping of participants (team based vs. individual) were realized and discussed, revealing opportunities for optimizing the experience. Future studies would benefit from increasing sample sizes to assess the learning gain of such activities. NEW & NOTEWORTHY Virtual escape rooms (ERs) offer an innovative way to expose students to educational material in a creative, engaging way, particularly when they incorporate three-dimensional (3-D) models. Activities can be readily built with Google Workspace. Offering this activity to teams in a physical setting may promote collaboration and maximize the educational utility, whereas having learners complete it remotely on an individual basis may be more convenient, allowing them to fit it in their study schedule at their own convenience.
Affiliation(s)
- Aaron W Beger
- Department of Biomedical Sciences, Edward Via College of Osteopathic Medicine, Blacksburg, Virginia, United States
- Sarah Hannan
- Centre for Biomedical Sciences Education, Queen's University Belfast, Belfast, United Kingdom
- Riya Patel
- Department of Biomedical Sciences, Edward Via College of Osteopathic Medicine, Blacksburg, Virginia, United States
- Eva M Sweeney
- Centre for Biomedical Sciences Education, Queen's University Belfast, Belfast, United Kingdom
2. Canever JB, Nonnenmacher CH, Lima KMM. Reliability of range of motion measurements obtained by goniometry, photogrammetry and smartphone applications in lower limb: A systematic review. J Bodyw Mov Ther 2025;42:793-802. PMID: 40325757. DOI: 10.1016/j.jbmt.2025.01.009.
Abstract
OBJECTIVES To systematically review studies on the reliability of lower limb joint range of motion (ROM) measurements obtained by goniometry, photogrammetry and smartphone applications in young, healthy subjects. DATA SOURCES The search was conducted between December 2020 and January 2021 in the PubMed, Embase, LILACS, OVID, and SciELO databases. STUDY SELECTION OR ELIGIBILITY CRITERIA Studies that evaluated the reliability of lower limb joint ROM measurements. STUDY APPRAISAL AND SYNTHESIS METHODS The studies were independently selected and classified by two reviewers according to the COSMIN checklist. A narrative synthesis of the included studies was performed. RESULTS Twelve studies were included. The intraclass correlation coefficient of ROM measurements ranged from 0.18 to 0.99 for goniometry, 0.78 to 1.00 for photogrammetry and 0.33 to 0.98 for smartphone applications. LIMITATIONS The number of goniometry studies included was higher than the number of photogrammetry and smartphone application studies. CONCLUSION Goniometry showed the lowest reliability values. Photogrammetry obtained the highest reliability values, but it is not widely used in clinical practice. Smartphone applications are relatively new and have average reliability values. SYSTEMATIC REVIEW REGISTRATION NUMBER CRD42021225396. CONTRIBUTION OF PAPER (1) Goniometry reliability is variable, but the method is useful in the clinical field because of its accessibility and short acquisition time; (2) smartphone app reliability ranges from medium to excellent but depends on phone size and on access/validation in some countries; (3) photogrammetry is recommended for researchers because of its better reliability, but for clinicians it requires software knowledge and time; (4) we recommend familiarization with ROM techniques to reduce reliability variability.
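The reliability figures above are intraclass correlation coefficients (ICCs). As a generic illustration of the statistic behind these ranges (not the analysis performed in this review, which narratively synthesises published values), the sketch below computes a single-measurement, absolute-agreement ICC, ICC(2,1), from a subjects-by-raters matrix; the ROM values in the example are invented placeholders.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.

    ratings: (n_subjects, k_raters) array, e.g. ROM measurements in degrees.
    """
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * np.sum((x.mean(axis=1) - grand) ** 2)   # between-subjects
    ss_cols = n * np.sum((x.mean(axis=0) - grand) ** 2)   # between-raters
    ss_err = np.sum((x - grand) ** 2) - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical knee-flexion ROM (degrees) for 5 subjects rated twice.
rom = np.array([[135.0, 137.0],
                [128.0, 126.0],
                [142.0, 143.0],
                [120.0, 121.0],
                [131.0, 130.0]])
print(f"ICC(2,1) = {icc_2_1(rom):.2f}")
```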
Affiliation(s)
- Jaquelini Betta Canever
- Post Graduate Program in Neurosciences, Center for Biological Sciences, Federal University of Santa Catarina, Trindade, Florianópolis, Santa Catarina, Brazil
- Carolina Holz Nonnenmacher
- Department of Health Sciences, Federal University of Santa Catarina (UFSC), 88906-072, Araranguá, SC, Brazil
- Kelly Mônica Marinho Lima
- Department of Health Sciences, Federal University of Santa Catarina (UFSC), 88906-072, Araranguá, SC, Brazil
3. Durrani Z, Penrose F, Anderson J, Ricci E, Carr S, Ressel L. A complete workflow from embalmed specimens to life-like 3D virtual models for veterinary anatomy teaching. J Anat 2025;246:857-868. PMID: 39707160. PMCID: PMC11996711. DOI: 10.1111/joa.14192.
Abstract
Understanding normal structural and functional anatomy is critical for health professionals across various fields such as medicine, veterinary, and dental courses. The landscape of anatomical education has evolved tremendously due to several challenges and advancements in blended learning approaches, which have led to the adoption of the use of high-fidelity 3D digital models in anatomical education. Cost-effective methods such as photogrammetry, which creates digital 3D models from aligning 2D photographs, provide a viable alternative to expensive imaging techniques (i.e. computed tomography and magnetic resonance imaging) whilst maintaining photorealism and serving multiple purposes, including surgical planning and research. This study outlines a comprehensive workflow for producing realistic 3D digital models from embalmed veterinary specimens. The process begins with the preservation of specimens using the modified-WhitWell (WhitWell-Liverpool) embalming protocol, which ensures optimal tissue rigidity and improved colour enhancement, facilitating easier manipulation and better photogrammetry outcomes. Once embalmed, specimens are photographed to create digital 3D models using photogrammetry. Briefly, all images are processed to generate a sparse point cloud, which is then rendered into a 3D mesh. The mesh undergoes decimation and smoothing to reduce computational load, and a texture is applied to create a lifelike model. Additional colour enhancements and adjustments are made using digital tools to restore the natural appearance of the specimens. The 3D models are stored on a cloud repository and integrated into the University of Liverpool's Virtual Learning Environment, providing continuous, remote access to high-quality anatomical resources. The switch to embalmed specimens during the COVID-19 pandemic allowed for longer-term use and detailed dissections, enhancing the quality of digital models. Fresh specimens, though naturally coloured, are less stable for photogrammetry, making embalmed specimens preferable for accurate 3D modelling. Our method ensures embalmed specimens are rigid enough for precise modelling while allowing texture adjustments to enhance digital representation. This approach has improved logistical efficiency, educational delivery, and specimen quality. Innovative embalming techniques and advanced photogrammetry have the power to revolutionise anatomical education with the creation of a vast digital library accessible online to students at any time. This approach paves the way for integrating digital 3D models into immersive environments and assessing their impact on learning outcomes.
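The decimation and smoothing steps in this workflow can be outlined with the open-source Open3D library; the abstract does not name the software used, so the snippet below is only a minimal sketch under that assumption, and the file names and target triangle count are placeholders.

```python
import open3d as o3d

# Load the dense mesh exported from the photogrammetry reconstruction (placeholder path).
mesh = o3d.io.read_triangle_mesh("dense_reconstruction.ply")

# Decimation: reduce the triangle count to lower the computational load.
decimated = mesh.simplify_quadric_decimation(target_number_of_triangles=150_000)

# Smoothing: a few Taubin iterations suppress reconstruction noise while
# largely preserving the specimen's overall geometry.
smoothed = decimated.filter_smooth_taubin(number_of_iterations=10)
smoothed.compute_vertex_normals()

# Export for texturing and upload to the virtual learning environment.
o3d.io.write_triangle_mesh("teaching_model.obj", smoothed)
```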
Affiliation(s)
- Zeeshan Durrani
- Department of Veterinary Anatomy, Physiology and Pathology, Institute of Infection, Veterinary and Ecological Sciences, University of Liverpool, Liverpool, UK
- Fay Penrose
- Department of Veterinary Anatomy, Physiology and Pathology, Institute of Infection, Veterinary and Ecological Sciences, University of Liverpool, Liverpool, UK
- James Anderson
- Department of Veterinary Anatomy, Physiology and Pathology, Institute of Infection, Veterinary and Ecological Sciences, University of Liverpool, Liverpool, UK
- Emanuele Ricci
- Department of Veterinary Anatomy, Physiology and Pathology, Institute of Infection, Veterinary and Ecological Sciences, University of Liverpool, Liverpool, UK
- Stephanie Carr
- Department of Veterinary Anatomy, Physiology and Pathology, Institute of Infection, Veterinary and Ecological Sciences, University of Liverpool, Liverpool, UK
- Lorenzo Ressel
- Department of Veterinary Anatomy, Physiology and Pathology, Institute of Infection, Veterinary and Ecological Sciences, University of Liverpool, Liverpool, UK
4. Ricci E, Leeming G, Ressel L. Photogrammetry: Adding Another Dimension to Virtual Gross Pathology Teaching. J Vet Med Educ 2025;52:41-45. PMID: 39504188. DOI: 10.3138/jvme-2023-0159.
Abstract
Pathology is a discipline that relies on the description and interpretation of changes occurring in organs and tissues, and it is largely a "hands-on" experience, both during training and professional practice. Instigated by the need to provide a solution for online learning and teaching, a plethora of different approaches have been tested during the Covid-19 pandemic. The enforced inability to meet in person created the necessity to quickly replace the hands-on experience of practical classes, routinely considered the "gold standard" in undergraduate pathology teaching, with alternative and innovative digital solutions that could allow the students to appreciate most, if not all, features of the specimen to describe and interpret. Here we present a successful deployment of photogrammetry for the purpose of teaching gross veterinary pathology to undergraduate students. Fresh specimens obtained during routine diagnostic post-mortem activity have been photographed using Digital Single-Lens Reflex cameras and rendered into high quality 3D models, preserving almost unaltered morphology, color, and texture, when compared to the original specimen. Once processed using photogrammetry software, exported and uploaded into an online repository, 3D models become readily available via our digital learning platform (CANVAS) to all undergraduate students for self-study and consolidation, as well as to teaching staff for use during online lectures, traditional face-to-face classes, small group teaching and seminars. Preliminary data collected from students' feedback highlighted the positive reception from users, and the enriched learning experience, while prolonging indefinitely the availability of rare and perishable teaching material.
Affiliation(s)
- Emanuele Ricci
- Department of Veterinary Anatomy, Physiology and Pathology, Institute of Infection, Veterinary and Ecological Sciences, University of Liverpool, Leahurst Campus, Chester High Road, Neston CH64 7TE, UK
- Gail Leeming
- Department of Veterinary Anatomy, Physiology and Pathology, Institute of Infection, Veterinary and Ecological Sciences, University of Liverpool, Leahurst Campus, Chester High Road, Neston CH64 7TE, UK
- Lorenzo Ressel
- Department of Veterinary Anatomy, Physiology and Pathology, Institute of Infection, Veterinary and Ecological Sciences, University of Liverpool, Leahurst Campus, Chester High Road, Neston CH64 7TE, UK
5. Feddema JC, Chiu LZF. Accuracy and repeatability of 3D photogrammetry to digitally reconstruct bones. Morphologie 2024;108:100793. PMID: 38964273. DOI: 10.1016/j.morpho.2024.100793.
Abstract
Advances in computer hardware and software permit the reconstruction of physical objects digitally from digital camera images. Given the varying shapes and sizes of human bones, a comprehensive assessment is required to establish the accuracy of digital bone reconstructions from three-dimensional (3D) photogrammetry. Five human bones (femur, radius, scapula, vertebra, patella) were marked with pencil, to establish between 9 and 29 landmarks. The distances between landmarks were measured from the physical bones and digitized from 3D reconstructions. Images used for reconstructions were taken on two separate days, allowing for repeatability to be established. In comparison to physical measurements, the mean (±standard deviation) absolute differences were between 0.2±0.1mm and 0.4±0.2mm. The mean (±standard deviation) absolute differences between reconstructions were between 0.3±<0.1mm and 0.4±0.4mm. The 3D photogrammetry procedures described are accurate and repeatable, permitting quantitative analyses to be conducted from digital reconstructions. Moreover, 3D photogrammetry may be used to capture and preserve anatomical materials for anatomy education.
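The accuracy assessment described here amounts to comparing landmark-to-landmark distances measured on the physical bone with the same distances digitised from the 3D reconstruction. A minimal sketch of that comparison follows; the distance values are invented placeholders, not the study's data.

```python
import numpy as np

# Hypothetical landmark-to-landmark distances (mm) on one bone:
# caliper measurements on the physical specimen vs. the same distances
# digitised from the photogrammetric reconstruction.
physical = np.array([52.4, 31.7, 88.2, 12.9, 47.5])
digital = np.array([52.1, 31.9, 88.5, 13.1, 47.2])

abs_diff = np.abs(digital - physical)
print(f"mean absolute difference = {abs_diff.mean():.2f} mm "
      f"(SD {abs_diff.std(ddof=1):.2f} mm)")
```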
Affiliation(s)
- J C Feddema
- Neuromusculoskeletal Mechanics Research Program, Faculty of Kinesiology, Sport, and Recreation, University of Alberta, Edmonton, AB T6G 2H9, Canada
- L Z F Chiu
- Neuromusculoskeletal Mechanics Research Program, Faculty of Kinesiology, Sport, and Recreation, University of Alberta, Edmonton, AB T6G 2H9, Canada
6. Titmus M, Whittaker G, Radunski M, Ellery P, Ir de Oliveira B, Radley H, Helmholz P, Sun Z. A workflow for the creation of photorealistic 3D cadaveric models using photogrammetry. J Anat 2023;243:319-333. PMID: 37432760. DOI: 10.1111/joa.13872.
Abstract
Three-dimensional (3D) representations of anatomical specimens are increasingly used as learning resources. Photogrammetry is a well-established technique that can be used to generate 3D models and has only been recently applied to produce visualisations of cadaveric specimens. This study has developed a semi-standardised photogrammetry workflow to produce photorealistic models of human specimens. Eight specimens, each with unique anatomical characteristics, were successfully digitised into interactive 3D models using the described workflow and the strengths and limitations of the technique are described. Various tissue types were reconstructed with apparent preservation of geometry and texture which visually resembled the original specimen. Using this workflow, an institution could digitise their existing cadaveric resources, facilitating the delivery of novel educational experiences.
Affiliation(s)
- Morgan Titmus
- Curtin Medical School, Curtin University, Perth, Australia
- Gary Whittaker
- Curtin Medical School, Curtin University, Perth, Australia
- Milo Radunski
- Curtin Medical School, Curtin University, Perth, Australia
- Paul Ellery
- Curtin Medical School, Curtin University, Perth, Australia
- Hannah Radley
- Curtin Medical School, Curtin University, Perth, Australia
- Petra Helmholz
- School of Earth and Planetary Sciences, Curtin University, Perth, Australia
- Zhonghua Sun
- Curtin Medical School, Curtin University, Perth, Australia
7. To JK, Wang JN, Vu AN, Ediriwickrema LS, Browne AW. Optimization of a Novel Automated, Low Cost, Three-Dimensional Photogrammetry System (PHACE). medRxiv [Preprint] 2023:2023.04.21.23288659. PMID: 37131650. PMCID: PMC10153329. DOI: 10.1101/2023.04.21.23288659.
Abstract
Introduction Clinical tools to monitor volumetric or morphological changes in the periorbital region and ocular adnexa due to pathology such as oculofacial trauma, thyroid eye disease, and the natural aging process are neither standardized nor ubiquitous. We have developed a low-cost, three-dimensionally printed PHotogrammetry for Automated CarE (PHACE) system to evaluate three-dimensional (3D) measurements of periocular and adnexal tissue. Methods The PHACE system uses two Google Pixel 3 smartphones attached to automatic rotating platforms to image a subject's face through a cutout board patterned with registration marks. Photographs of faces were taken from many perspectives by the cameras placed on the rotating platforms. Faces were imaged with and without 3D printed hemispheric phantom lesions (black domes) affixed on the forehead above the brow. Images were rendered into 3D models in Metashape (Agisoft, St. Petersburg, Russia) and then processed and analyzed in CloudCompare (CC) and Autodesk's Meshmixer. The 3D printed hemispheres affixed to the face were then quantified within Meshmixer and compared to their known volumes. Finally, we compared digital exophthalmometry measurements with results from a standard Hertel exophthalmometer in a subject with and without an orbital prosthesis. Results Quantification of 3D printed phantom volumes using optimized stereophotogrammetry demonstrated a 2.5% error for a 244 μL phantom and a 7.6% error for a 27.5 μL phantom. Digital exophthalmometry measurements differed by 0.72 mm from a standard exophthalmometer. Conclusion We demonstrated an optimized workflow using our custom apparatus to analyze and quantify oculofacial volumetric and dimensional changes with a resolution of 244 μL. This apparatus is a low-cost tool that can be used in clinical settings to objectively monitor volumetric and morphological changes in periorbital anatomy.
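The reported accuracy is a percentage error of each reconstructed phantom volume against its known printed volume. The sketch below shows that calculation; the "measured" values are hypothetical and were chosen only so that they reproduce the reported 2.5% and 7.6% errors.

```python
def percent_error(measured_ul, known_ul):
    """Absolute percentage error of a measured volume against the known volume (uL)."""
    return abs(measured_ul - known_ul) / known_ul * 100.0

print(f"{percent_error(250.1, 244.0):.1f}% error")  # ~2.5% for the 244 uL phantom
print(f"{percent_error(29.6, 27.5):.1f}% error")    # ~7.6% for the 27.5 uL phantom
```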
Affiliation(s)
- Josiah K To
- Gavin Herbert Eye Institute, Department of Ophthalmology, University of California Irvine, Irvine, California
- Jenny N Wang
- School of Medicine, University of California Irvine, Irvine, California
- Anderson N Vu
- Gavin Herbert Eye Institute, Department of Ophthalmology, University of California Irvine, Irvine, California
- Lilangi S Ediriwickrema
- Gavin Herbert Eye Institute, Department of Ophthalmology, University of California Irvine, Irvine, California
- Institute for Clinical and Translational Science, University of California Irvine, Irvine, California
- Andrew W Browne
- Gavin Herbert Eye Institute, Department of Ophthalmology, University of California Irvine, Irvine, California
- School of Medicine, University of California Irvine, Irvine, California
- Department of Biomedical Engineering, University of California Irvine, Irvine, California
- Institute for Clinical and Translational Science, University of California Irvine, Irvine, California
8. Gianotto I, Coutts A, Pérez-Pachón L, Gröning F. Evaluating a Photogrammetry-Based Video for Undergraduate Anatomy Education. Adv Exp Med Biol 2023;1421:63-78. PMID: 37524984. DOI: 10.1007/978-3-031-30379-1_4.
Abstract
Modern anatomy education has benefitted from the development of a wide range of digital 3D resources in the past decades, but the impact of the COVID-19 pandemic has sparked an additional demand for high-quality online learning resources. Photogrammetry provides a low-cost technique for departments to create their own photo-realistic 3D models of cadaveric specimens. However, to ensure accessibility, the design of the resulting learning resources should be carefully considered. We aimed to address this by creating a video based on a photogrammetry model of a cadaveric human lung. Students evaluated three different versions of this video in a Likert-type online survey. Most responding students found this type of video useful for their learning and helpful for the identification of anatomical structures in real cadaveric specimens. Respondents also showed a preference for specific design features such as a short video length, white text on black background, and the presence of captions. The positive student feedback is promising for the future development of photogrammetry-based videos for anatomy education and this study has provided pilot data to improve the accessibility of such videos.
Affiliation(s)
- Irene Gianotto
- School of Medicine, Medical Sciences and Nutrition, University of Aberdeen, Aberdeen, UK
- Alexander Coutts
- School of Medicine, Medical Sciences and Nutrition, University of Aberdeen, Aberdeen, UK
- Laura Pérez-Pachón
- School of Medicine, Medical Sciences and Nutrition, University of Aberdeen, Aberdeen, UK
- Flora Gröning
- School of Medicine, Medical Sciences and Nutrition, University of Aberdeen, Aberdeen, UK
9. Rendón-Medina MA, Hanson-Viana E, Mendoza-Velez MDLA, Hernandez-Ordoñez R, Vazquez-Morales HL, Pacheco-López RC. Comparison of Nasal Analysis by Photographs (2D) against Low-cost Surface Laser Imaging (3D) and against Computed Axial Tomography Imaging. Indian J Plast Surg 2022;56:147-152. PMID: 37153340. PMCID: PMC10159724. DOI: 10.1055/s-0042-1759724.
Abstract
Introduction In aesthetic surgery, we have a few evaluation tools that numerically and objectively measure the changes we make in patients. This article aimed to evaluate the systematic nasal analysis and compare findings between three systems of nasal evaluation: 2D photographs, 3D surface imaging with the Kinect system, and 3D CT scan imaging.
Methods We designed a longitudinal, descriptive, prospective study with simple non-blind randomization to compare the systematic nasal analysis between the three methods. If the findings are similar, all three methods would be useful in independent clinical scenarios.
Results A total of 42 observations were included; the minimum age was 21 years and the mean age 28 years. Also, 64% were female, 93% had adequate facial proportions, and 50% were Fitzpatrick III. For outcome statistics, we found a differential nasal deviation between 3D images with a mean of 6.53 mm. When comparing the nasal dorsum length, we found a borderline statistically significant difference (p = 0.051). When comparing the nasal dorsum length index, we found no significant difference (p = 0.32). We also found no statistically significant difference when comparing the nasofrontal angle and the tip rotation angle (p = 1 for both).
Conclusion We found that the population we serve has characteristics of the Hispanic mestizo nose. The three methods seem to evaluate the systematic nasal analysis in a very similar way, and any of them can be used depending on the scenario and the needs of the plastic surgeon.
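The abstract does not state which statistical test produced the quoted p values; one common choice for comparing two measurement systems applied to the same subjects is a paired test, sketched below with SciPy. The dorsum-length values are invented placeholders, not study data.

```python
import numpy as np
from scipy import stats

# Hypothetical paired nasal dorsum lengths (mm) measured on the same subjects
# by 2D photographs and by 3D surface imaging.
dorsum_2d = np.array([48.1, 51.3, 45.0, 49.8, 52.6, 47.4])
dorsum_3d = np.array([47.5, 52.0, 44.2, 50.5, 53.1, 46.8])

t_stat, p_value = stats.ttest_rel(dorsum_2d, dorsum_3d)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```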
Affiliation(s)
- Erik Hanson-Viana
- Department of Plastic and Reconstructive Surgery in The General Hospital Ruben Leñero, Mexico City, Mexico
- Rubén Hernandez-Ordoñez
- Department of Plastic and Reconstructive Surgery in The General Hospital Ruben Leñero, Mexico City, Mexico
- Hecly Lya Vazquez-Morales
- Department of Plastic and Reconstructive Surgery in The General Hospital Ruben Leñero, Mexico City, Mexico
- Ricardo C. Pacheco-López
- Department of Plastic and Reconstructive Surgery in The General Hospital Ruben Leñero, Mexico City, Mexico
10. Zhao C, Xiao H, Zhao Z, Wang G. Prediction and Optimization Algorithm for Intersection Point of Spatial Multi-Lines Based on Photogrammetry. Sensors (Basel) 2022;22:9821. PMID: 36560189. PMCID: PMC9785157. DOI: 10.3390/s22249821.
Abstract
The basic theory of photogrammetry is mature and widely used in engineering. Engineering environments are often complex, however, so corners or multi-line intersection points may be occluded and cannot be measured directly. To solve this problem, a prediction and optimization algorithm for the intersection point of spatial multi-lines based on photogrammetry is proposed. The coordinates of points on the space lines are calculated by a photogrammetric algorithm. Because of image point distortion and point selection error, many lines do not strictly intersect at one point. The equations of the space lines are used to fit an initial estimate of the intersection point. The initial intersection point is projected onto each image, and the distances between the projection point and each line on the image plane, combined with the information entropy, are used to weight the calculated spatial lines. The intersection point coordinates are then re-fitted, and the intersection point is repeatedly projected and recalculated until the error is less than the threshold value or the set number of iterations is reached. Three different scenarios were selected for experiments. The experimental results show that the proposed algorithm significantly improves the prediction accuracy of the intersection point.
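The fitting step at the core of this algorithm, finding the point closest to a set of weighted 3D lines that do not strictly intersect, reduces to a small linear least-squares problem. The sketch below illustrates only that step (the image-plane reprojection and entropy-based reweighting described above are omitted), and the example coordinates are illustrative.

```python
import numpy as np

def nearest_point_to_lines(points, directions, weights=None):
    """Weighted least-squares point closest to a set of 3D lines.

    points:     (m, 3) a point on each line
    directions: (m, 3) direction of each line (need not be unit length)
    weights:    (m,) optional per-line weights
    """
    points = np.asarray(points, dtype=float)
    d = np.asarray(directions, dtype=float)
    d = d / np.linalg.norm(d, axis=1, keepdims=True)   # unit directions
    w = np.ones(len(points)) if weights is None else np.asarray(weights, dtype=float)

    A = np.zeros((3, 3))
    b = np.zeros(3)
    for a_i, d_i, w_i in zip(points, d, w):
        P = np.eye(3) - np.outer(d_i, d_i)             # projector orthogonal to the line
        A += w_i * P
        b += w_i * (P @ a_i)
    return np.linalg.solve(A, b)

# Two nearly intersecting lines (illustrative values only).
pts = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.05]]
dirs = [[1.0, 1.0, 0.0], [-1.0, 1.0, 0.0]]
print(nearest_point_to_lines(pts, dirs))               # ~[0.5, 0.5, 0.025]
```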
Affiliation(s)
- Chengli Zhao
- School of Transportation and Logistics Engineering, Wuhan University of Technology, Wuhan 430063, China
- Hao Xiao
- CCCC Second Harbor Engineering Company Ltd., Wuhan 430040, China
- Key Laboratory of Large-Span Bridge Construction Technology, Wuhan 430040, China
- Research and Development Center of Transport Industry of Intelligent Manufacturing Technologies of Transport Infrastructure, Wuhan 430040, China
- Zhangyan Zhao
- School of Transportation and Logistics Engineering, Wuhan University of Technology, Wuhan 430063, China
- Guoxian Wang
- School of Transportation and Logistics Engineering, Wuhan University of Technology, Wuhan 430063, China
11. Detection of Breast Cancer Lump and BRCA1/2 Genetic Mutation under Deep Learning. Comput Intell Neurosci 2022;2022:9591781. PMID: 36172325. PMCID: PMC9512604. DOI: 10.1155/2022/9591781.
Abstract
To diagnose and cure breast cancer early, and thus reduce the mortality of patients with breast cancer, a wavelet transform (WT)-based method was proposed to determine the threshold for image segmentation. It was used to obtain information about the general area of breast lumps by making a rough segmentation of the suspected lump area on the mammogram. The boundary of the lump was then obtained by region-growing calculation or a local active contour model. Meanwhile, multiplex polymerase chain reaction (mPCR) and mPCR next-generation sequencing (mPCR-NGS) were used to detect BRCA1/2 genetic mutations. Sanger sequencing was used to verify the sites of newly identified highly pathogenic mutations. The results were compared with the information marked by experts in the database. Using Daubechies wavelet coefficients, the average measurement accuracy was 92.9% and the average false positive rate per image was 86%. According to mPCR-NGS, there was no pathogenic mutation in the 7 patients with high-risk BRCA1/2 genetic mutations. A single nucleotide polymorphism (SNP) in a nonsynonymous coding region was detected, consistent with the Sanger sequencing results. This method effectively isolated the lump area in human mammograms, and mPCR-NGS had high specificity and sensitivity in detecting BRCA1/2 genetic mutation sites. Compared with traditional Sanger sequencing and target sequence capture testing, it also offers advantages such as easy operation, short duration, and low consumable cost, making it worthy of further promotion and adoption.
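The abstract does not detail how the wavelet transform is used to set the segmentation threshold. One plausible reading, a rough segmentation driven by a global threshold on the low-frequency approximation band, is sketched below with PyWavelets and scikit-image; this is an illustrative interpretation, not the authors' implementation, and the input image is synthetic.

```python
import numpy as np
import pywt
from skimage.filters import threshold_otsu

def rough_lump_mask(image, wavelet="db4", level=2):
    """Rough segmentation of bright regions from the wavelet approximation band."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    approx = coeffs[0]                       # low-frequency approximation
    thr = threshold_otsu(approx)             # global threshold on that band
    mask_small = approx > thr
    # Upsample the coarse mask back to (at least) the image grid and crop.
    factor = 2 ** level
    mask = np.kron(mask_small.astype(int), np.ones((factor, factor), dtype=int))
    return mask[: image.shape[0], : image.shape[1]].astype(bool)

# Synthetic stand-in for a mammogram: a bright "lump" plus noise.
rng = np.random.default_rng(0)
img = 0.1 * rng.standard_normal((256, 256))
img[100:140, 120:170] += 1.0
mask = rough_lump_mask(img)
print(mask.sum(), "pixels flagged as the suspected lump area")
```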
12. Application of 3D printing in assessment and demonstration of stab injuries. Int J Legal Med 2022;136:1431-1442. PMID: 35657431. PMCID: PMC9375752. DOI: 10.1007/s00414-022-02846-6.
Abstract
In stabbing-related fatalities, the forensic pathologist has to assess the direction of the wound track (and thus the direction of the stabbing) and the weapon's possible characteristics by examining the stab wound. These characteristics can be determined only with a high level of uncertainty, and the precise direction of the stabbing is often difficult to assess if only soft tissues are injured. Previously reported techniques for assessing these wound characteristics have substantial limitations. This manuscript presents a method that uses today's easily accessible three-dimensional (3D) printing technology for blade-wound comparison and wound track determination. Scanning and 3D printing of knives is a useful method to identify weapons and determine the precise stabbing direction in a stabbing incident without compromising the trace evidence or the autopsy results. Ballistic gel and dynamic stabbing test experiments show that the method can be applied safely, without compromising the autopsy results. Identification of the exact knife is not possible with complete certainty, but excluding certain knives decreases the number of necessary DNA examinations and can therefore lower the burden on forensic genetic laboratories. The method addresses many of the shortcomings of previously used methods of probe insertion or post-mortem CT. Insertion of the printed knife into the wound gives a good visual demonstration of the stabbing direction, thus easing the forensic reconstruction of the stabbing incident. After combining the 3D printing with photogrammetry, the resulting 3D visualization is useful for courtroom demonstration and educational purposes.