1. Kubota Y, Kodera S, Hirata A. A novel transfer learning framework for non-uniform conductivity estimation with limited data in personalized brain stimulation. Phys Med Biol 2025;70:105002. PMID: 40280154. DOI: 10.1088/1361-6560/add105. Received 2024-11-29; accepted 2025-04-25.
Abstract
Objective. Personalized transcranial magnetic stimulation (TMS) requires individualized head models that incorporate non-uniform conductivity to enable target-specific stimulation. Accurately estimating non-uniform conductivity in individualized head models remains a challenge due to the difficulty of obtaining precise ground truth data. To address this issue, we have developed a novel transfer learning-based approach for automatically estimating non-uniform conductivity in a human head model with limited data. Approach. The proposed method complements the limitations of the previous conductivity network (CondNet) and improves the conductivity estimation accuracy. This method generates a segmentation model from T1- and T2-weighted magnetic resonance images, which is then used for conductivity estimation via transfer learning. To enhance the model's representation capability, a Transformer was incorporated into the segmentation model, while the conductivity estimation model was designed using a combination of Attention Gates and Residual Connections, enabling efficient learning even with a small amount of data. Main results. The proposed method was evaluated using 1494 images, demonstrating a 2.4% improvement in segmentation accuracy and a 29.1% increase in conductivity estimation accuracy compared with CondNet. Furthermore, the proposed method achieved superior conductivity estimation accuracy even with only three training cases, outperforming CondNet, which was trained on an adequate number of cases. The conductivity maps generated by the proposed method yielded better results in brain electrical field simulations than CondNet. Significance. These findings demonstrate the high utility of the proposed method in brain electrical field simulations and suggest its potential applicability to other medical image analysis tasks and simulations.
Affiliation(s)
- Yoshiki Kubota
- Department of Electrical and Mechanical Engineering, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya 466-8555, Japan
- Sachiko Kodera
- Department of Electrical and Mechanical Engineering, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya 466-8555, Japan
- Akimasa Hirata
- Department of Electrical and Mechanical Engineering, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya 466-8555, Japan
2. Chamberland M, Yang JYM, Aydogan DB. Real-time tractography: computation and visualization. Brain Struct Funct 2025;230:62. PMID: 40328906. DOI: 10.1007/s00429-025-02928-2. Received 2025-03-24; accepted 2025-04-27.
Abstract
Did you know that even though tractography is often considered a computationally expensive and offline process, the latest algorithms can now be performed in real-time without sacrificing accuracy? Interactive real-time tractography has proven to be valuable in surgical planning and has the potential to enhance neuromodulation therapies, highlighting the importance of speed and precision in the generation of tractograms. This demand has driven the development of nearly 50 visualization tools over the past two decades, with advances in interactive real-time tractography offering new possibilities and providing rich insights into brain connectivity.
Affiliation(s)
- Maxime Chamberland
- Department of Mathematics and Computer Science, Eindhoven University of Technology, Eindhoven, The Netherlands.
- Joseph Yuan-Mou Yang
- Department of Neurosurgery, Neuroscience Advanced Clinical Imaging Service (NACIS), The Royal Children's Hospital, Melbourne, Australia
- Neuroscience Research, Murdoch Children's Research Institute, Parkville, Melbourne, Australia
- Department of Paediatrics, University of Melbourne, Parkville, Melbourne, Australia
- Dogu Baran Aydogan
- A.I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, Kuopio, Finland
3. Cabral BP, Braga LAM, Conte Filho CG, Penteado B, Freire de Castro Silva SL, Castro L, Fornazin M, Mota F. Future Use of AI in Diagnostic Medicine: 2-Wave Cross-Sectional Survey Study. J Med Internet Res 2025;27:e53892. PMID: 40053779. PMCID: PMC11907171. DOI: 10.2196/53892. Received 2023-10-23; revised 2024-05-06; accepted 2024-10-18. Open access.
Abstract
BACKGROUND The rapid evolution of artificial intelligence (AI) presents transformative potential for diagnostic medicine, offering opportunities to enhance diagnostic accuracy, reduce costs, and improve patient outcomes. OBJECTIVE This study aimed to assess the expected future impact of AI on diagnostic medicine by comparing global researchers' expectations using 2 cross-sectional surveys. METHODS The surveys were conducted in September 2020 and February 2023. Each survey captured a 10-year projection horizon, gathering insights from more than 3700 researchers with expertise in AI and diagnostic medicine from all over the world. The surveys sought to understand the perceived benefits, integration challenges, and evolving attitudes toward AI use in diagnostic settings. RESULTS Results indicated a strong expectation among researchers that AI will substantially influence diagnostic medicine within the next decade. Key anticipated benefits include enhanced diagnostic reliability, reduced screening costs, improved patient care, and decreased physician workload, addressing the growing demand for diagnostic services outpacing the supply of medical professionals. Specifically, x-ray diagnosis, heart rhythm interpretation, and skin malignancy detection were identified as the diagnostic tools most likely to be integrated with AI technologies due to their maturity and existing AI applications. The surveys highlighted the growing optimism regarding AI's ability to transform traditional diagnostic pathways and enhance clinical decision-making processes. Furthermore, the study identified barriers to the integration of AI in diagnostic medicine. The primary challenges cited were the difficulties of embedding AI within existing clinical workflows, ethical and regulatory concerns, and data privacy issues. Respondents emphasized uncertainties around legal responsibility and accountability for AI-supported clinical decisions, data protection challenges, and the need for robust regulatory frameworks to ensure safe AI deployment. Ethical concerns, particularly those related to algorithmic transparency and bias, were noted as increasingly critical, reflecting a heightened awareness of the potential risks associated with AI adoption in clinical settings. Differences between the 2 survey waves indicated a growing focus on ethical and regulatory issues, suggesting an evolving recognition of these challenges over time. CONCLUSIONS Despite these barriers, there was notable consistency in researchers' expectations across the 2 survey periods, indicating a stable and sustained outlook on AI's transformative potential in diagnostic medicine. The findings show the need for interdisciplinary collaboration among clinicians, AI developers, and regulators to address ethical and practical challenges while maximizing AI's benefits. This study offers insights into the projected trajectory of AI in diagnostic medicine, guiding stakeholders, including health care providers, policy makers, and technology developers, on navigating the opportunities and challenges of AI integration.
Affiliation(s)
- Bernardo Pereira Cabral
- Cellular Communication Laboratory, Oswaldo Cruz Institute, Oswaldo Cruz Foundation, Rio de Janeiro, Brazil
- Department of Economics, Faculty of Economics, Federal University of Bahia, Salvador, Brazil
- Luiza Amara Maciel Braga
- Cellular Communication Laboratory, Oswaldo Cruz Institute, Oswaldo Cruz Foundation, Rio de Janeiro, Brazil
- Bruno Penteado
- Fiocruz Strategy for the 2030 Agenda, Oswaldo Cruz Foundation, Rio de Janeiro, Brazil
- Sandro Luis Freire de Castro Silva
- National Cancer Institute, Rio de Janeiro, Brazil
- Graduate Program in Management and Strategy, Federal Rural University of Rio de Janeiro, Seropedica, Brazil
- Leonardo Castro
- Fiocruz Strategy for the 2030 Agenda, Oswaldo Cruz Foundation, Rio de Janeiro, Brazil
- National School of Public Health, Oswaldo Cruz Foundation, Rio de Janeiro, Brazil
- Marcelo Fornazin
- Fiocruz Strategy for the 2030 Agenda, Oswaldo Cruz Foundation, Rio de Janeiro, Brazil
- National School of Public Health, Oswaldo Cruz Foundation, Rio de Janeiro, Brazil
- Fabio Mota
- Cellular Communication Laboratory, Oswaldo Cruz Institute, Oswaldo Cruz Foundation, Rio de Janeiro, Brazil
4. Al-Haj Husain A, Stojicevic M, Hainc N, Stadlinger B. Advanced Imaging and Preoperative MR-Based Cinematic Rendering Reconstructions for Neoplasms in the Oral and Maxillofacial Region. Diagnostics (Basel) 2024;15:33. PMID: 39795561. PMCID: PMC11720703. DOI: 10.3390/diagnostics15010033. Received 2024-10-12; revised 2024-11-19; accepted 2024-12-21. Open access.
Abstract
This case study highlights the use of cinematic rendering (CR) in preoperative planning for the excision of a cyst in the oral and maxillofacial region of a 60-year-old man. The patient presented with a firm, non-tender mass in the right cheek, clinically suspected to be an epidermoid cyst. Conventional imaging, including dental magnetic resonance imaging (MRI) protocols, confirmed the lesion's size, location, and benign nature. CR reconstructions, combining advanced algorithms and novel skin presets, allow for the generation of highly realistic, three-dimensional visualizations from conventional imaging datasets. CR provided an enhanced, detailed depiction of the lesion within its anatomical context, significantly improving spatial understanding for surgical planning. The surgical excision was performed without complications, and histological analysis confirmed the diagnosis of a benign epidermoid cyst with no evidence of dysplasia or malignancy. This case demonstrates the potential of CR to refine preoperative planning, especially in complex anatomical regions such as the face and jaw, by offering superior visualization of superficial and deep structures. Thus, the integration of CR into clinical workflows has the potential to lead to improved diagnostic accuracy and better surgical outcomes.
Affiliation(s)
- Adib Al-Haj Husain
- Clinic of Cranio-Maxillofacial and Oral Surgery, Center of Dental Medicine, University of Zurich, 8032 Zurich, Switzerland
- Department of Cranio-Maxillofacial and Oral Surgery, University Hospital Zurich, University of Zurich, 8032 Zurich, Switzerland
- Department of Neuroradiology, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, 8091 Zurich, Switzerland
- Department of Cranio-Maxillofacial Surgery, GROW School for Oncology and Reproduction, Maastricht University Medical Centre, 6229 Maastricht, The Netherlands
- Milica Stojicevic
- Clinic of Cranio-Maxillofacial and Oral Surgery, Center of Dental Medicine, University of Zurich, 8032 Zurich, Switzerland
- Nicolin Hainc
- Department of Neuroradiology, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, 8091 Zurich, Switzerland
- Bernd Stadlinger
- Clinic of Cranio-Maxillofacial and Oral Surgery, Center of Dental Medicine, University of Zurich, 8032 Zurich, Switzerland
5. Macrì S, Di-Poï N. The SmARTR pipeline: A modular workflow for the cinematic rendering of 3D scientific imaging data. iScience 2024;27:111475. PMID: 39720527. PMCID: PMC11667014. DOI: 10.1016/j.isci.2024.111475. Received 2024-07-11; revised 2024-09-19; accepted 2024-11-21. Open access.
Abstract
Advancements in noninvasive surface and internal imaging techniques, along with computational methods, have revolutionized 3D visualization of organismal morphology, enhancing research and medical anatomical analysis and facilitating the preservation and digital archiving of scientific specimens. We introduce the SmARTR pipeline (Small Animal Realistic Three-dimensional Rendering), a comprehensive workflow integrating wet lab procedures, 3D data acquisition, and processing to produce photorealistic scientific data through 3D cinematic rendering. This versatile pipeline supports multiscale visualizations, from tissue-level to whole-organism details, across diverse living organisms and is adaptable to various imaging sources. Its modular design and customizable rendering scenarios, enabled by the global illumination modeling and programming modules available in the free MeVisLab software and seamlessly integrated into detailed SmARTR networks, make it a powerful tool for 3D data analysis. Accessible to a broad audience, the SmARTR pipeline serves as a valuable resource across multiple life science research fields and for education, diagnosis, outreach, and artistic endeavors.
Affiliation(s)
- Simone Macrì
- Institute of Biotechnology, Helsinki Institute of Life Science, University of Helsinki, 00014 Helsinki, Finland
- Nicolas Di-Poï
- Institute of Biotechnology, Helsinki Institute of Life Science, University of Helsinki, 00014 Helsinki, Finland
6. Huellner MW, Engel K, Morand GB, Stadlinger B. Cinematic rendering of [18F]FDG-PET/MR. Eur J Nucl Med Mol Imaging 2024;51:3805-3806. PMID: 38951188. PMCID: PMC11445327. DOI: 10.1007/s00259-024-06812-9. Received 2024-05-17; accepted 2024-06-16.
Affiliation(s)
- Martin W Huellner
- Department of Nuclear Medicine, University Hospital Zurich, University of Zurich, Raemistrasse 100, Zurich, CH-8091, Switzerland.
- Grégoire B Morand
- Department of Otolaryngology-Head and Neck Surgery, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Bernd Stadlinger
- Clinic of Cranio-Maxillofacial and Oral Surgery, Center for Dental Medicine, University of Zurich, Zurich, Switzerland
7. Zitnik M, Li MM, Wells A, Glass K, Morselli Gysi D, Krishnan A, Murali TM, Radivojac P, Roy S, Baudot A, Bozdag S, Chen DZ, Cowen L, Devkota K, Gitter A, Gosline SJC, Gu P, Guzzi PH, Huang H, Jiang M, Kesimoglu ZN, Koyuturk M, Ma J, Pico AR, Pržulj N, Przytycka TM, Raphael BJ, Ritz A, Sharan R, Shen Y, Singh M, Slonim DK, Tong H, Yang XH, Yoon BJ, Yu H, Milenković T. Current and future directions in network biology. Bioinformatics Advances 2024;4:vbae099. PMID: 39143982. PMCID: PMC11321866. DOI: 10.1093/bioadv/vbae099. Received 2023-09-27; revised 2024-05-31; accepted 2024-07-08.
Abstract
Summary: Network biology is an interdisciplinary field bridging computational and biological sciences that has proved pivotal in advancing the understanding of cellular functions and diseases across biological systems and scales. Although the field has been around for two decades, it remains nascent. It has witnessed rapid evolution, accompanied by emerging challenges. These stem from various factors, notably the growing complexity and volume of data together with the increased diversity of data types describing different tiers of biological organization. We discuss prevailing research directions in network biology, focusing on molecular/cellular networks but also on other biological network types such as biomedical knowledge graphs, patient similarity networks, brain networks, and social/contact networks relevant to disease spread. In more detail, we highlight areas of inference and comparison of biological networks, multimodal data integration and heterogeneous networks, higher-order network analysis, machine learning on networks, and network-based personalized medicine. Following the overview of recent breakthroughs across these five areas, we offer a perspective on future directions of network biology. Additionally, we discuss scientific communities, educational initiatives, and the importance of fostering diversity within the field. This article establishes a roadmap for an immediate and long-term vision for network biology. Availability and implementation: Not applicable.
Affiliation(s)
- Marinka Zitnik
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA 02115, United States
- Michelle M Li
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA 02115, United States
- Aydin Wells
- Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN 46556, United States
- Lucy Family Institute for Data and Society, University of Notre Dame, Notre Dame, IN 46556, United States
- Eck Institute for Global Health, University of Notre Dame, Notre Dame, IN 46556, United States
- Kimberly Glass
- Channing Division of Network Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02115, United States
- Deisy Morselli Gysi
- Channing Division of Network Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02115, United States
- Department of Statistics, Federal University of Paraná, Curitiba, Paraná 81530-015, Brazil
- Department of Physics, Northeastern University, Boston, MA 02115, United States
- Arjun Krishnan
- Department of Biomedical Informatics, University of Colorado Anschutz Medical Campus, Aurora, CO 80045, United States
- T M Murali
- Department of Computer Science, Virginia Tech, Blacksburg, VA 24061, United States
- Predrag Radivojac
- Khoury College of Computer Sciences, Northeastern University, Boston, MA 02115, United States
- Sushmita Roy
- Department of Biostatistics and Medical Informatics, University of Wisconsin-Madison, Madison, WI 53715, United States
- Wisconsin Institute for Discovery, Madison, WI 53715, United States
- Anaïs Baudot
- Aix Marseille Université, INSERM, MMG, Marseille, France
- Serdar Bozdag
- Department of Computer Science and Engineering, University of North Texas, Denton, TX 76203, United States
- Department of Mathematics, University of North Texas, Denton, TX 76203, United States
- Danny Z Chen
- Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN 46556, United States
- Lenore Cowen
- Department of Computer Science, Tufts University, Medford, MA 02155, United States
- Kapil Devkota
- Department of Computer Science, Tufts University, Medford, MA 02155, United States
- Anthony Gitter
- Department of Biostatistics and Medical Informatics, University of Wisconsin-Madison, Madison, WI 53715, United States
- Morgridge Institute for Research, Madison, WI 53715, United States
- Sara J C Gosline
- Biological Sciences Division, Pacific Northwest National Laboratory, Seattle, WA 98109, United States
- Pengfei Gu
- Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN 46556, United States
- Pietro H Guzzi
- Department of Medical and Surgical Sciences, University Magna Graecia of Catanzaro, Catanzaro, 88100, Italy
- Heng Huang
- Department of Computer Science, University of Maryland College Park, College Park, MD 20742, United States
- Meng Jiang
- Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN 46556, United States
- Ziynet Nesibe Kesimoglu
- Department of Computer Science and Engineering, University of North Texas, Denton, TX 76203, United States
- National Center of Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD 20814, United States
- Mehmet Koyuturk
- Department of Computer and Data Sciences, Case Western Reserve University, Cleveland, OH 44106, United States
- Jian Ma
- Ray and Stephanie Lane Computational Biology Department, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, United States
- Alexander R Pico
- Institute of Data Science and Biotechnology, Gladstone Institutes, San Francisco, CA 94158, United States
- Nataša Pržulj
- Department of Computer Science, University College London, London, WC1E 6BT, England
- ICREA, Catalan Institution for Research and Advanced Studies, Barcelona, 08010, Spain
- Barcelona Supercomputing Center (BSC), Barcelona, 08034, Spain
- Teresa M Przytycka
- National Center of Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD 20814, United States
- Benjamin J Raphael
- Department of Computer Science, Princeton University, Princeton, NJ 08544, United States
- Anna Ritz
- Department of Biology, Reed College, Portland, OR 97202, United States
- Roded Sharan
- School of Computer Science, Tel Aviv University, Tel Aviv, 69978, Israel
- Yang Shen
- Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX 77843, United States
- Mona Singh
- Department of Computer Science, Princeton University, Princeton, NJ 08544, United States
- Lewis-Sigler Institute for Integrative Genomics, Princeton University, Princeton, NJ 08544, United States
- Donna K Slonim
- Department of Computer Science, Tufts University, Medford, MA 02155, United States
- Hanghang Tong
- Department of Computer Science, University of Illinois Urbana-Champaign, Urbana, IL 61801, United States
- Xinan Holly Yang
- Department of Pediatrics, University of Chicago, Chicago, IL 60637, United States
- Byung-Jun Yoon
- Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX 77843, United States
- Computational Science Initiative, Brookhaven National Laboratory, Upton, NY 11973, United States
- Haiyuan Yu
- Department of Computational Biology, Weill Institute for Cell and Molecular Biology, Cornell University, Ithaca, NY 14853, United States
- Tijana Milenković
- Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN 46556, United States
- Lucy Family Institute for Data and Society, University of Notre Dame, Notre Dame, IN 46556, United States
- Eck Institute for Global Health, University of Notre Dame, Notre Dame, IN 46556, United States
8. Shim J, Lee Y. No-Reference-Based and Noise Level Evaluations of Cinematic Rendering in Bone Computed Tomography. Bioengineering (Basel) 2024;11:563. PMID: 38927799. PMCID: PMC11201129. DOI: 10.3390/bioengineering11060563. Received 2024-04-11; revised 2024-05-07; accepted 2024-05-30. Open access.
Abstract
Cinematic rendering (CR) is a new 3D post-processing technology widely used to produce bone computed tomography (CT) images. This study aimed to evaluate the performance quality of CR in bone CT images using blind quality and noise level evaluations. Bone CT images of the face, shoulder, lumbar spine, and wrist were acquired. Volume rendering (VR), which is widely used in the field of diagnostic medical imaging, was additionally set along with CR. A no-reference-based blind/referenceless image spatial quality evaluator (BRISQUE) and coefficient of variation (COV) were used to evaluate the overall quality of the acquired images. The average BRISQUE values derived from the four areas were 39.87 and 46.44 in CR and VR, respectively, a ratio of approximately 1.16 in favor of CR (lower BRISQUE values indicate better perceived quality); the gap between the two techniques widened particularly in bone CT images where metal artifacts were observed. In addition, we confirmed that the COV value improved by 2.20 times on average when using CR compared to VR. This study showed that CR is useful in reconstructing bone CT 3D images and that various applications in the diagnostic medical field will be possible.
Affiliation(s)
- Jina Shim
- Department of Diagnostic Radiology, Severance Hospital, 50-1, Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea
- Youngjin Lee
- Department of Radiological Science, Gachon University, Incheon 21936, Republic of Korea
9. Brookmeyer C, Chu LC, Rowe SP, Fishman EK. Clinical implementation of cinematic rendering. Curr Probl Diagn Radiol 2024;53:313-328. PMID: 38365458. DOI: 10.1067/j.cpradiol.2024.01.010. Received 2023-10-28; accepted 2024-01-16.
Abstract
Cinematic rendering is a recently developed photorealistic display technique for standard volumetric data sets. It has broad-reaching applications in cardiovascular, musculoskeletal, abdominopelvic, and thoracic imaging. It has been used for surgical planning and has emerging use in educational settings. We review the logistics of performing this post-processing step and its integration into existing workflow.
Affiliation(s)
- Claire Brookmeyer
- Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, United States.
- Linda C Chu
- Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, United States
- Steven P Rowe
- Department of Radiology, University of North Carolina School of Medicine, Chapel Hill, NC, United States
- Elliot K Fishman
- Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, United States
10. Layden N, Brassil C, Jha N, Saundankar J, Yim D, Andrews D, Patukale A, Srigandan S, Murray CP. Cinematic versus volume rendered imaging for the depiction of complex congenital heart disease. J Med Imaging Radiat Oncol 2023;67:487-491. PMID: 36916320. DOI: 10.1111/1754-9485.13518. Received 2022-08-11; accepted 2023-01-31.
Abstract
INTRODUCTION Planning for surgical intervention for patients with complex congenital heart disease requires a comprehensive understanding of the individual's anatomy. Cinematic rendering (CR) is a novel technique that purportedly builds on traditional volume rendering (VR) by converting CT image data into clearly defined 3D reconstructions through the simulation and propagation of light rays. The purpose of this study was to compare CR to VR for the understanding of critical anatomy in unoperated complex congenital heart disease. METHODS In this retrospective study, CT data sets from 20 sequentially scanned cases of unoperated paediatric patients with complex congenital heart disease were included. 3D images were produced at standardised and selected orientations, matched for both VR and CR. The images were then independently reviewed by two cardiologists, two radiologists and two surgeons for overall image quality, depth perception and the visualisation of surgically relevant anatomy, the coronary arteries and the pulmonary veins. RESULTS Cinematic rendering demonstrated significantly superior image quality, depth perception and visualisation of surgically relevant anatomy compared with VR. CONCLUSION Cinematic rendering is a novel 3D CT-rendering technique that may surpass the traditionally used volumetric rendering technique in the provision of actionable pre-operative anatomical detail for complex congenital heart disease.
Affiliation(s)
- Natalie Layden
- Department of Medical Imaging, Perth Children's Hospital, Perth, Western Australia, Australia
- Nihar Jha
- Department of Medical Imaging, Perth Children's Hospital, Perth, Western Australia, Australia
- Jelena Saundankar
- Department of Cardiology, Perth Children's Hospital, Perth, Western Australia, Australia
- Deane Yim
- Department of Cardiology, Perth Children's Hospital, Perth, Western Australia, Australia
- David Andrews
- Department of Cardiothoracic Surgery, Perth Children's Hospital, Perth, Western Australia, Australia
- Aditya Patukale
- Department of Cardiothoracic Surgery, Perth Children's Hospital, Perth, Western Australia, Australia
- Shrivuthsun Srigandan
- Department of Medical Imaging, Mazankowski Alberta Heart Institute, University of Alberta, Edmonton, Alberta, Canada
11. Niedermair JF, Antipova V, Manhal S, Siwetz M, Wimmer-Röll M, Hammer N, Fellner FA. On the added benefit of virtual anatomy for dissection-based skills. Anat Sci Educ 2023;16:439-451. PMID: 36453060. DOI: 10.1002/ase.2234. Received 2022-06-09; revised 2022-11-15; accepted 2022-11-23.
Abstract
Technological approaches deploying three-dimensional visualization to integrate virtual anatomy are increasingly used to provide medical students with state-of-the-art teaching. It is unclear to date to which extent virtual anatomy may help replace the dissection course. Medical students of Johannes Kepler University attend both a dissection and a virtual anatomy course. This virtual anatomy course is based on Cinematic Rendering and radiological imaging and teaches anatomy and pathology. This study aims to substantiate student benefits achieved from this merged teaching approach. Following their dissection course, 120 second-year students took part in objective structured practical examinations (OSPE) conducted on human specimens prior to and following a course on Cinematic Rendering virtual anatomy. Likert-based and open-ended surveys were conducted to evaluate student perceptions of both courses and their utility. Virtual anatomy teaching was found to be unrelated to improvements in students' ability to identify anatomical structures in anatomical prosections, yielding only a 1.5% increase in the OSPE score. While the students rated the dissection course as being more important and impactful, the virtual anatomy course helped them display the learning content in a more comprehensible and clinically applicable way. It is likely that Cinematic Rendering-based virtual anatomy affects knowledge gain in domains other than the recognition of anatomical structures in anatomical prosections. These findings underline students' preference for the pedagogic strategy of the dissection course and for blending this classical approach with novel developments like Cinematic Rendering, thus preparing future doctors for their clinical work.
Affiliation(s)
- Veronica Antipova
- Department of Macroscopic and Clinical Anatomy, Gottfried Schatz Research Center, Medical University of Graz, Graz, Austria
- Simone Manhal
- Office of the Vice Rector for Studies and Teaching, Medical University of Graz, Graz, Austria
- Monika Wimmer-Röll
- Institute of Anatomy and Cell Biology, Johannes Kepler University, Linz, Austria
- Niels Hammer
- Department of Macroscopic and Clinical Anatomy, Gottfried Schatz Research Center, Medical University of Graz, Graz, Austria
- Department of Orthopedic and Trauma Surgery, University of Leipzig, Leipzig, Germany
- Medical Branch, Fraunhofer Institute for Machine Tools and Forming Technology (IWU), Chemnitz, Germany
- Franz A Fellner
- Central Radiology Institute, Johannes Kepler University Hospital, Linz, Austria
- Division of Virtual Morphology, Institute of Anatomy and Cell Biology, Johannes Kepler University, Linz, Austria
12
Willershausen I, Necker F, Kloeckner R, Seidel CL, Paulsen F, Gölz L, Scholz M. Cinematic rendering to improve visualization of supplementary and ectopic teeth using CT datasets. Dentomaxillofac Radiol 2023; 52:20230058. PMID: 37015249. PMCID: PMC10170174. DOI: 10.1259/dmfr.20230058.
Abstract
OBJECTIVES Ectopic, impacted, and supplementary teeth are the number one reason for cross-sectional imaging in pediatric dentistry. The accurate post-processing of acquired data sets is crucial to obtain precise, yet also intuitively understandable three-dimensional (3D) models, which facilitate clinical decision-making and improve treatment outcomes. Cinematic rendering (CR) is a novel visualization technique using physically based volume rendering to create photorealistic images from DICOM data. The aim of the present study was to tailor pre-existing CR reconstruction parameters for use in dental imaging, using the example of the diagnostic 3D visualization of ectopic, impacted, and supplementary teeth. METHODS CR was employed for the volumetric image visualization of midface CT data sets. Predefined reconstruction parameters were specifically modified to visualize the presented dental pathologies, dentulous jaw, and isolated teeth. The 3D spatial relationship of the teeth, as well as their structural relationship with the antagonizing dentition, could immediately be investigated and highlighted by separate, interactive 3D visualization after segmentation through windowing. RESULTS To the best of our knowledge, CR has not been implemented for the visualization of supplementary and ectopic teeth segmented from the surrounding bone because the software has not yet provided appropriate customized reconstruction parameters for dental imaging. When employing our new, modified reconstruction parameters, its application presents a fast approach to obtain realistic visualizations of both dental and osseous structures. CONCLUSIONS CR enables dentists and oral surgeons to gain an improved 3D understanding of anatomical structures, allowing for more intuitive treatment planning and patient communication.
Affiliation(s)
- Ines Willershausen
- Department of Orthodontics and Orofacial Orthopedics, Friedrich-Alexander-University Erlangen-Nürnberg, Gluecksstrasse, Erlangen, Germany
- Roman Kloeckner
- Institute of Interventional Radiology, University Hospital of Schleswig-Holstein-Campus Lübeck, Ratzeburger Allee, Lübeck, Germany
- Corinna Lesley Seidel
- Department of Orthodontics and Orofacial Orthopedics, Friedrich-Alexander-University Erlangen-Nürnberg, Gluecksstrasse, Erlangen, Germany
- Friedrich Paulsen
- Institute of Functional and Clinical Anatomy, Friedrich-Alexander-University Erlangen-Nürnberg, Krankenhausstrasse, Erlangen, Germany
- Lina Gölz
- Department of Orthodontics and Orofacial Orthopedics, Friedrich-Alexander-University Erlangen-Nürnberg, Gluecksstrasse, Erlangen, Germany
- Michael Scholz
- Institute of Functional and Clinical Anatomy, Friedrich-Alexander-University Erlangen-Nürnberg, Krankenhausstrasse, Erlangen, Germany
13
Cardobi N, Nocini R, Molteni G, Favero V, Fior A, Marchioni D, Montemezzi S, D’Onofrio M. Path Tracing vs. Volume Rendering Technique in Post-Surgical Assessment of Bone Flap in Oncologic Head and Neck Reconstructive Surgery: A Preliminary Study. J Imaging 2023; 9:24. PMID: 36826943. PMCID: PMC9967273. DOI: 10.3390/jimaging9020024.
Abstract
This study aims to compare a relatively novel three-dimensional rendering technique called Path Tracing (PT) to the Volume Rendering technique (VR) in the post-surgical assessment of head and neck oncologic surgery followed by bone flap reconstruction. This retrospective study included 39 oncologic patients who underwent head and neck surgery with free bone flap reconstructions. All exams were acquired using a 64-slice Multi-Detector CT (MDCT). PT and VR images were created on a dedicated workstation. Five readers with different levels of expertise in bone flap reconstructive surgery independently reviewed the images (two radiologists, one head and neck surgeon, and two otorhinolaryngologists). Every observer evaluated the images according to a 5-point Likert scale. The parameters assessed were image quality, anatomical accuracy, bone flap evaluation, and metal artefact. Mean and median values for all parameters across observers were calculated. The scores of both reconstruction methods were compared using a Wilcoxon matched-pairs signed-rank test. Inter-reader agreement was calculated using Spearman's rank correlation coefficient. PT was considered significantly superior to VR 3D reconstructions by all readers (p < 0.05). Inter-reader agreement was moderate to strong across four out of five readers. The agreement was stronger with PT images than with VR images. In conclusion, PT reconstructions were rated significantly better than VR ones. Although they did not modify patient outcomes, they may improve the post-surgical evaluation of bone-free flap reconstructions following major head and neck surgery.
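The paired comparison described in this abstract (Wilcoxon matched-pairs signed-rank test on 5-point Likert ratings) can be sketched in a few lines of plain Python. The ratings below are invented for illustration, and only the rank-sum statistics are computed, not the p-value:

```python
# Minimal sketch of the Wilcoxon matched-pairs signed-rank statistic,
# as used in the study above to compare PT and VR Likert scores.
def wilcoxon_w(x, y):
    """Return (W+, W-): rank sums of positive and negative paired differences."""
    diffs = [a - b for a, b in zip(x, y) if a != b]  # drop zero differences
    # Rank |d| ascending, assigning average ranks to ties.
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of the tied run i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return w_plus, w_minus

pt = [5, 4, 5, 4, 5, 4, 5, 5, 4, 5]  # Path Tracing ratings (invented)
vr = [4, 3, 4, 4, 4, 3, 4, 4, 3, 4]  # Volume Rendering ratings (invented)
print(wilcoxon_w(pt, vr))  # (45.0, 0): every nonzero difference favours PT
```

In practice one would compare the smaller rank sum against a critical-value table (or use `scipy.stats.wilcoxon`, which also returns a p-value); the sketch only shows where the statistic comes from.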
Affiliation(s)
- Nicolò Cardobi
- Radiology Unit, Department of Pathology and Diagnostics, University Hospital of Verona, Piazzale Aristide Stefani, 1, 37126 Verona, Italy
- Riccardo Nocini
- Otolaryngology-Head and Neck Surgery Department, University Hospital of Verona, Piazzale Aristide Stefani, 1, 37126 Verona, Italy
- Gabriele Molteni
- Otolaryngology-Head and Neck Surgery Department, University Hospital of Verona, Piazzale Aristide Stefani, 1, 37126 Verona, Italy
- Vittorio Favero
- Unit of Maxillo-Facial Surgery and Dentistry, University of Verona, P.le L.A. Scuro 10, 37134 Verona, Italy
- Andrea Fior
- Unit of Maxillo-Facial Surgery and Dentistry, University of Verona, P.le L.A. Scuro 10, 37134 Verona, Italy
- Daniele Marchioni
- Otolaryngology-Head and Neck Surgery Department, University Hospital of Verona, Piazzale Aristide Stefani, 1, 37126 Verona, Italy
- Stefania Montemezzi
- Radiology Unit, Department of Pathology and Diagnostics, University Hospital of Verona, Piazzale Aristide Stefani, 1, 37126 Verona, Italy
- Mirko D’Onofrio
- Department of Radiology, G.B. Rossi University Hospital, University of Verona, 37134 Verona, Italy
14
Rowe SP, Pomper MG, Leal JP, Schneider R, Krüger S, Chu LC, Fishman EK. Photorealistic three-dimensional visualization of fusion datasets: cinematic rendering of PET/CT. Abdom Radiol (NY) 2022; 47:3916-3920. PMID: 35916942. DOI: 10.1007/s00261-022-03614-1.
Abstract
PURPOSE Cinematic rendering (CR) is a method of photorealistic 3D visualization of volumetric imaging data. We applied this technique to fusion PET/CT data. METHODS Two recent PET/CT cases were selected, one each of prostate-specific membrane antigen (PSMA)-targeted 18F-DCFPyL and somatostatin-receptor-targeted 68Ga-DOTATATE. Targeted radiotracers were selected in order to provide high-contrast images for this proof-of-principle study. Cinematic rendering was performed with an enhanced algorithm that incorporated internal lighting within the PET-avid organs and lesions to allow for a distinct visual signature. RESULTS The use of internal lighting for PET data enabled CR of fused PET/CT scans. The interpreting radiologist must make judicious use of presets and cut planes in order to ensure important findings are not missed. CONCLUSIONS CR of PET/CT data provides a photorealistic means of visualizing complex fusion imaging datasets. Such visualizations may aid anatomic understanding for surgical or procedural applications, may improve teaching of trainees, and may allow improved communication with patients.
Affiliation(s)
- Steven P Rowe
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 600 N. Wolfe St, Baltimore, MD, 21287, USA
- Martin G Pomper
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 600 N. Wolfe St, Baltimore, MD, 21287, USA
- Jeffrey P Leal
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 600 N. Wolfe St, Baltimore, MD, 21287, USA
- Linda C Chu
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 600 N. Wolfe St, Baltimore, MD, 21287, USA
- Elliot K Fishman
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 600 N. Wolfe St, Baltimore, MD, 21287, USA
15
Lakhani DA, Deib G. Photorealistic Depiction of Intracranial Tumors Using Cinematic Rendering of Volumetric 3T MRI Data. Acad Radiol 2022; 29:e211-e218. PMID: 35033449. DOI: 10.1016/j.acra.2021.12.017.
Abstract
RATIONALE AND OBJECTIVES Cinematic Rendering (CR) incorporates a complex lighting model that creates photorealistic models from isotropic 3D imaging data. The utility of CR in depicting volumetric MRI data for pre-therapeutic planning is discussed, with intracranial tumors as a demonstrative example. MATERIALS AND METHODS We present a series of cinematically rendered intracranial tumors and discuss their utility in multidisciplinary pre-therapeutic evaluation. Isotropic, high-resolution, volumetric MRI data were collected, and CR was performed utilizing a proprietary application, "Anatomy Education" (Siemens, Munich, Germany). RESULTS Discrimination of cortex from white matter, brain surface from vessels, subarachnoid space from cortex, and skull from intracranial structures was achieved and optimized using various display settings of the Anatomy Education application. Progressive removal of tissue layers allowed for a comprehensive assessment of the entire region of interest. Complex, small structures were demonstrated in very high detail. The depth and architecture of the sulci were appreciated in a format that more closely mimicked gross pathology than traditional imaging modalities. With appropriate display settings, the relationship of the cortical surface to the adjacent vasculature was also delineated. CONCLUSION CR depicts the anatomic location of brain tumors in a format that conveys the relative proximity of adjacent structures in all dimensions and degrees of freedom. This allows for better conceptualization of the pathology and greater ease of communication between radiologists and other clinical teams, especially in the context of pre-therapeutic planning.
Affiliation(s)
- Dhairya A Lakhani
- Department of Radiology (D.A.L.), West Virginia University, 1 Medical Center Drive, Morgantown, West Virginia 26506, USA; Department of Neuroradiology (G.D.), West Virginia University, Morgantown, West Virginia, USA
- Gerard Deib
- Department of Radiology (D.A.L.), West Virginia University, 1 Medical Center Drive, Morgantown, West Virginia 26506, USA; Department of Neuroradiology (G.D.), West Virginia University, Morgantown, West Virginia, USA
16
Hu R, Zhang XY, Liu J, Wu JH, Wang RP, Zeng XC. Clinical application of cinematic rendering in maxillofacial fractures. Int J Oral Maxillofac Surg 2022; 51:1562-1569. PMID: 35680483. DOI: 10.1016/j.ijom.2022.05.003.
Abstract
The purpose of this study was to evaluate the clinical application of cinematically rendered reconstructions of maxillofacial fractures. Ten surgeons and eight radiologists were shown three-dimensional images of 25 different patient cases, generated using both the volume rendering (VR) technique and the cinematic rendering (CR) technique. They were asked to mark the site of the fracture on the three-dimensional images and to record the time this activity took. The effectiveness of the reconstructions for communication with patients was assessed through the opinions of the surgeons and radiologists, as well as of the 25 patients. Subjective evaluations of the clinical value of the images were performed by the 18 surgeons and radiologists using a 10-item questionnaire. The percentages of correctly identified fractures of the nasal bone (P = 0.034), fracture dislocation (P < 0.001), and free bone fragments (P < 0.001) were significantly higher for CR images when compared to VR images, and identification took an average of 20.81 seconds for CR and 27.48 seconds for VR (P < 0.001). CR images were found to be more beneficial for communication with patients and scored higher for the display of fracture dislocation and free bone fragments than VR images (P < 0.05). CR images were found to have high clinical value in the visualization of maxillofacial fractures.
Affiliation(s)
- Rong Hu
- School of Public Health, Guizhou Medical University, Guiyang, China; Department of Radiology, Guizhou Provincial People's Hospital, Guiyang, China
- Xiao-Yong Zhang
- Department of Radiology, Guizhou Provincial People's Hospital, Guiyang, China
- Jian Liu
- Department of Radiology, Guizhou Provincial People's Hospital, Guiyang, China
- Jia-Hong Wu
- School of Basic Medicine, Guizhou Medical University, Guiyang, China
- Rong-Pin Wang
- Department of Radiology, Guizhou Provincial People's Hospital, Guiyang, China
- Xian-Chun Zeng
- Department of Radiology, Guizhou Provincial People's Hospital, Guiyang, China
17
Cheng X, Zhang X, Gu F, Tian C, Wang R, Chen J, Liu J, Zeng X. Multiple systemic arteries to pulmonary artery malformations: a case description. Quant Imaging Med Surg 2021; 11:4671-4675. PMID: 34737933. DOI: 10.21037/qims-21-109.
Affiliation(s)
- Xinge Cheng
- Department of Graduate School, Zunyi Medical University, Zunyi, China; Department of Radiology, Guizhou Provincial People's Hospital, Key Laboratory of Intelligent Medical Imaging Analysis and Accurate Diagnosis of Guizhou Province, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, Guiyang, China
- Xiaoyong Zhang
- Department of Radiology, Guizhou Provincial People's Hospital, Key Laboratory of Intelligent Medical Imaging Analysis and Accurate Diagnosis of Guizhou Province, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, Guiyang, China
- Fujia Gu
- Department of Interventional Radiology, Guizhou Provincial People's Hospital, Guiyang, China
- Chong Tian
- Department of Radiology, Guizhou Provincial People's Hospital, Key Laboratory of Intelligent Medical Imaging Analysis and Accurate Diagnosis of Guizhou Province, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, Guiyang, China
- Rongpin Wang
- Department of Radiology, Guizhou Provincial People's Hospital, Key Laboratory of Intelligent Medical Imaging Analysis and Accurate Diagnosis of Guizhou Province, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, Guiyang, China
- Jiaxiang Chen
- Guizhou University School of Medicine, Guiyang, China
- Jian Liu
- Department of Graduate School, Zunyi Medical University, Zunyi, China; Department of Radiology, Guizhou Provincial People's Hospital, Key Laboratory of Intelligent Medical Imaging Analysis and Accurate Diagnosis of Guizhou Province, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, Guiyang, China
- Xianchun Zeng
- Department of Radiology, Guizhou Provincial People's Hospital, Key Laboratory of Intelligent Medical Imaging Analysis and Accurate Diagnosis of Guizhou Province, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, Guiyang, China
18
Hauck T, Arkudas A, Horch RE, Ströbel A, May MS, Binder J, Krautz C, Ludolph I. The third dimension in perforator mapping: comparison of Cinematic Rendering and maximum intensity projection in abdominal-based autologous breast reconstruction. J Plast Reconstr Aesthet Surg 2021; 75:536-543. PMID: 34756655. DOI: 10.1016/j.bjps.2021.09.011.
Abstract
BACKGROUND Cinematic Rendering (CR) is a recently introduced post-processing three-dimensional (3D) visualization imaging tool. The aim of this study was to assess its clinical value in the preoperative planning of deep inferior epigastric artery perforator (DIEP) or muscle-sparing transverse rectus abdominis myocutaneous (MS-TRAM) flaps, and to compare it with maximum intensity projection (MIP) images. The study presents the first application of CR for perforator mapping prior to autologous breast reconstruction. METHODS Two senior surgeons independently analyzed CR and MIP images based on computed tomography angiography (CTA) datasets of 20 patients in terms of vascular pedicle characteristics, the possibility to harvest a DIEP or MS-TRAM flap, and the side of the flap harvest. We calculated inter- and intra-observer agreement in order to examine the concordance of the two imaging techniques. RESULTS We observed good inter- and intra-observer agreement concerning the type of flap and the side of the flap harvest. However, agreement on the pedicle characteristics varied depending on the variable considered. Both investigators identified a significantly higher number of perforators with MIP compared with CR (observer 1, p < 0.0001; observer 2, p < 0.0385). CONCLUSION The current study serves as an explorative study, reporting first experiences with CR in abdominal-based autologous breast reconstruction. In addition to MIP images, CR might improve the surgeon's understanding of the individual's anatomy. Future studies are required to compare CR with other 3D visualization tools and to assess its possible effects on operative parameters.
Affiliation(s)
- Theresa Hauck
- Department of Plastic and Hand Surgery and Laboratory for Tissue Engineering and Regenerative Medicine, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Germany
- Andreas Arkudas
- Department of Plastic and Hand Surgery and Laboratory for Tissue Engineering and Regenerative Medicine, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Germany
- Raymund E Horch
- Department of Plastic and Hand Surgery and Laboratory for Tissue Engineering and Regenerative Medicine, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Germany
- Armin Ströbel
- Center for Clinical Studies, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Germany
- Matthias S May
- Department of Radiology, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Germany
- Johannes Binder
- Department of Surgery, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Germany
- Christian Krautz
- Department of Surgery, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Germany
- Ingo Ludolph
- Department of Plastic and Hand Surgery and Laboratory for Tissue Engineering and Regenerative Medicine, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Germany
19
Sermesant M, Delingette H, Cochet H, Jaïs P, Ayache N. Applications of artificial intelligence in cardiovascular imaging. Nat Rev Cardiol 2021; 18:600-609. PMID: 33712806. DOI: 10.1038/s41569-021-00527-2.
Abstract
Research into artificial intelligence (AI) has made tremendous progress over the past decade. In particular, the AI-powered analysis of images and signals has reached human-level performance in many applications owing to the efficiency of modern machine learning methods, in particular deep learning using convolutional neural networks. Research into the application of AI to medical imaging is now very active, especially in the field of cardiovascular imaging because of the challenges associated with acquiring and analysing images of this dynamic organ. In this Review, we discuss the clinical questions in cardiovascular imaging that AI can be used to address and the principal methodological AI approaches that have been developed to solve the related image analysis problems. Some approaches are purely data-driven and rely mainly on statistical associations, whereas others integrate anatomical and physiological information through additional statistical, geometric and biophysical models of the human heart. In a structured manner, we provide representative examples of each of these approaches, with particular attention to the underlying computational imaging challenges. Finally, we discuss the remaining limitations of AI approaches in cardiovascular imaging (such as generalizability and explainability) and how they can be overcome.
Affiliation(s)
- Hubert Cochet
- IHU Liryc, CHU Bordeaux, Université Bordeaux, Inserm 1045, Pessac, France
- Pierre Jaïs
- IHU Liryc, CHU Bordeaux, Université Bordeaux, Inserm 1045, Pessac, France
20
Claassen H, Busch C, May MS, Schicht M, Scholz M, Schulze M, Paulsen F, Wree A. Cor Triatriatum Sinistrum Combined with Changes in Atrial Septum and Right Atrium in a 60-Year-Old Woman. Medicina (Kaunas) 2021; 57:777. PMID: 34440984. PMCID: PMC8402230. DOI: 10.3390/medicina57080777.
Abstract
Background and Objectives: A rare case of cor triatriatum sinistrum in combination with anomalies in the atrial septum and in the right atrium of a 60-year-old female body donor is described here. Materials and Methods: In addition to classical dissection, ultrasound, magnetic resonance imaging, computed tomography, and cinematic rendering were performed. In a reference series of 59 regularly formed hearts (33 men, 26 women), we looked for features in the left and right atrium or atrial septum. In addition, we measured the atrial and ventricular wall thickness in 15 regularly formed hearts (7 men, 8 women). Results: In the case described, the left atrium was partly divided into two chambers by an intra-atrial membrane penetrated by two small openings. The 2.5 cm-high membrane originated in the upper level of the oval fossa and left an opening of about 4 cm in diameter. Apparently, the membrane did not lead to a functionally significant flow obstruction due to the broad intra-atrial communication between the proximal and distal chambers of the left atrium. In concordance with this fact, left atrial wall thickness was not elevated in the cor triatriatum sinistrum when compared with the 15 regularly formed hearts. In addition, two further anomalies were found: 1. the oval fossa was deepened and arched in the direction of the left atrium; 2. the right atrium showed a membrane-like structure at its posterior and lateral walls, which began at the lower edge of the oval fossa. It probably corresponds to a strongly developed eustachian valve (valve of the inferior vena cava). Conclusions: The case described suggests that malformations in the development of the atrial septum and in the regression of the valve of the right sinus vein are involved in the pathogenesis of cor triatriatum sinistrum.
Affiliation(s)
- Horst Claassen
- Department of Anatomy, Rostock University Medical Center, Gertrudenstrasse 9, 18057 Rostock, Germany
- Institute of Functional and Clinical Anatomy, Friedrich-Alexander-University Erlangen-Nürnberg, Universitätsstrasse 19, 91054 Erlangen, Germany
- Correspondence: Tel.: +49-381-494-8438
- Christian Busch
- Department of Internal Medicine, Federal Armed Forces Hospital, Lesserstrasse 180, 22049 Hamburg, Germany
- Matthias Stefan May
- Department of Radiology, University Hospital Erlangen, Friedrich-Alexander-University Erlangen-Nürnberg, Maximiliansplatz 3, 91054 Erlangen, Germany
- Imaging Science Institute Erlangen, Ulmenweg 18, 91054 Erlangen, Germany
- Martin Schicht
- Institute of Functional and Clinical Anatomy, Friedrich-Alexander-University Erlangen-Nürnberg, Universitätsstrasse 19, 91054 Erlangen, Germany
- Michael Scholz
- Institute of Functional and Clinical Anatomy, Friedrich-Alexander-University Erlangen-Nürnberg, Universitätsstrasse 19, 91054 Erlangen, Germany
- Marko Schulze
- Department of Anatomy, Rostock University Medical Center, Gertrudenstrasse 9, 18057 Rostock, Germany
- AG 3 Anatomie und Zellbiologie, Medizinische Fakultät OWL, Universität Bielefeld, Morgenbreede 1, 33615 Bielefeld, Germany
- Friedrich Paulsen
- Institute of Functional and Clinical Anatomy, Friedrich-Alexander-University Erlangen-Nürnberg, Universitätsstrasse 19, 91054 Erlangen, Germany
- Department of Topographic Anatomy and Operative Surgery, Sechenov University, Rossolimo Street 15/13, 119992 Moscow, Russia
- Andreas Wree
- Department of Anatomy, Rostock University Medical Center, Gertrudenstrasse 9, 18057 Rostock, Germany
21
Bueno MR, Estrela C, Granjeiro JM, Estrela MRDA, Azevedo BC, Diogenes A. Cone-beam computed tomography cinematic rendering: clinical, teaching and research applications. Braz Oral Res 2021; 35:e024. PMID: 33624709. DOI: 10.1590/1807-3107bor-2021.vol35.0024.
Abstract
Cone-beam computed tomography (CBCT) is an essential imaging method that increases the accuracy of diagnosis, planning, and follow-up of complex endodontic cases. Image postprocessing and subsequent visualization rely on software for three-dimensional navigation and on indexation tools that provide clinically useful information from a set of volumetric data. Image postprocessing has a crucial impact on diagnostic quality, and various techniques have been employed on computed tomography (CT) and magnetic resonance imaging (MRI) data sets. These include multiplanar reformations (MPR), maximum intensity projection (MIP), and volume rendering (VR). A recent advance in 3D data visualization is the new cinematic rendering reconstruction method, a technique that generates photorealistic 3D images from conventional CT and MRI data. This review discusses the importance of CBCT cinematic rendering for clinical decision-making, teaching, and research in Endodontics, and presents a series of cases that illustrate the diagnostic value of 3D cinematic rendering in clinical care.
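Of the post-processing techniques this review lists, maximum intensity projection (MIP) is simple enough to sketch directly: the volume is projected onto a plane by keeping the brightest voxel along each ray. The tiny 2x2x3 volume below is invented purely for illustration:

```python
# Minimal sketch of maximum intensity projection (MIP) on an invented
# 3D volume: take the maximum voxel value along one axis.
volume = [  # 2 slices (depth) x 2 rows x 3 columns of intensity values
    [[10, 50, 20], [5, 80, 15]],
    [[30, 40, 90], [60, 25, 70]],
]

# Project along the depth axis: elementwise max across all slices.
mip = [
    [max(volume[z][y][x] for z in range(len(volume)))
     for x in range(len(volume[0][0]))]
    for y in range(len(volume[0]))
]
print(mip)  # [[30, 50, 90], [60, 80, 70]]
```

On real CT/CBCT data the same operation is a one-liner with NumPy (`volume.max(axis=0)`); the nested loops just make the per-ray maximum explicit.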
Affiliation(s)
- Carlos Estrela
- Universidade Federal de Goiás - UFGO, School of Dentistry, Stomatologic Science Department, Goiânia, GO, Brazil
- José Mauro Granjeiro
- Instituto Nacional de Metrologia, Qualidade e Tecnologia - Inmetro, Duque de Caxias, RJ, Brazil
- Bruno Correa Azevedo
- University of Louisville, School of Dentistry, Oral Radiology Department, Louisville, KY, USA
- Anibal Diogenes
- University of Texas Health at San Antonio, School of Dentistry, Endodontics Department, San Antonio, TX, USA
22
Gehrsitz P, Rompel O, Schöber M, Cesnjevar R, Purbojo A, Uder M, Dittrich S, Alkassar M. Cinematic Rendering in Mixed-Reality Holograms: A New 3D Preoperative Planning Tool in Pediatric Heart Surgery. Front Cardiovasc Med 2021; 8:633611. PMID: 33634174. PMCID: PMC7900175. DOI: 10.3389/fcvm.2021.633611.
Abstract
Cinematic rendering (CR) is based on a new algorithm that creates a photo-realistic three-dimensional (3D) picture from cross-sectional images. Previous studies have shown its positive impact on preoperative planning. To date, CR presentation has only been possible on 2D screens, which limited natural 3D perception. To depict CR hearts spatially, we used mixed-reality technology and mapped the corresponding hearts as holograms in 3D space. Our aim was to assess the benefits of CR holograms in the preoperative planning of cardiac surgery. Including 3D prints allowed a direct comparison of two spatially resolved display methods. Twenty-six patients were recruited between February and September 2019. CT or MRI was used to visualize each patient's heart preoperatively. The surgeon was shown the anatomy in cross-sections on a 2D screen, followed by spatial representations as a 3D print and as a high-resolution hologram. The holographic representation was carried out using mixed-reality glasses (HoloLens®). To create the 3D prints, corresponding structures were segmented to create STL files, which were printed out of resin. In 22 questions, divided into 5 categories (3D-imaging effect, representation of pathology, structure resolution, cost/benefit ratio, influence on surgery), the surgeons compared each spatial representation with the 2D method using a five-level Likert scale. The surgical preparation time was assessed by comparing retrospectively matched patient pairs, using a paired t-test. CR holograms surpassed 2D-monitor imaging in all categories. CR holograms were superior to 3D prints in all categories (mean Likert scale 4.4 ± 1.0 vs. 3.7 ± 1.3, P < 0.05). Compared to 3D prints, they especially improved depth perception (4.7 ± 0.7 vs. 3.7 ± 1.2) and the representation of the pathology (4.4 ± 0.9 vs. 3.6 ± 1.2). 3D imaging reduced the intraoperative preparation time (n = 24, 59 ± 23 min vs. 73 ± 43 min, P < 0.05).
In conclusion, the combination of an extremely photo-realistic presentation via cinematic rendering and the spatial presentation in 3D space via mixed-reality technology allows a previously unattained level of comprehension of anatomy and pathology in preoperative planning.
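The matched-pairs comparison of preparation times reported above can be sketched with a paired t-test on within-pair differences. A minimal sketch follows; the values are synthetic stand-ins generated for illustration, not the study's data:

```python
import math
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical preparation times (minutes) for 24 retrospectively matched
# patient pairs: planning supported by 3D imaging vs. conventional 2D planning.
with_3d = rng.normal(59, 23, size=24)
with_2d = with_3d + rng.normal(14, 20, size=24)  # paired: same pairs, longer times

# Paired t-test: test whether the mean within-pair difference is zero.
d = with_3d - with_2d
t = d.mean() / (d.std(ddof=1) / math.sqrt(d.size))

# |t| > 2.069 (two-sided critical value for df = 23 at alpha = 0.05)
# would indicate a significant reduction in preparation time.
print(f"mean difference = {d.mean():.1f} min, t = {t:.2f}")
```

The pairing matters: differencing within matched pairs removes between-patient variability before testing, which is why the study compared matched pairs rather than independent groups.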
Affiliation(s)
- Pia Gehrsitz
- Department of Pediatric Cardiology, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
- Oliver Rompel
- Institute of Radiology, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
- Martin Schöber
- Department of Pediatric Cardiology, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
- Robert Cesnjevar
- Department of Pediatric Cardiac Surgery, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
- Ariawan Purbojo
- Department of Pediatric Cardiac Surgery, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
- Michael Uder
- Institute of Radiology, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
- Sven Dittrich
- Department of Pediatric Cardiology, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
- Muhannad Alkassar
- Department of Pediatric Cardiology, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
23
Binder JS, Scholz M, Ellmann S, Uder M, Grützmann R, Weber GF, Krautz C. Cinematic Rendering in Anatomy: A Crossover Study Comparing a Novel 3D Reconstruction Technique to Conventional Computed Tomography. ANATOMICAL SCIENCES EDUCATION 2021; 14:22-31. [PMID: 32521121 DOI: 10.1002/ase.1989] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/30/2019] [Revised: 06/02/2020] [Accepted: 06/02/2020] [Indexed: 06/11/2023]
Abstract
Integration of medical imaging into preclinical anatomy courses is already underway in many medical schools. However, interpretation of two-dimensional grayscale images is difficult, and conventional volume rendering techniques provide only images of limited quality. In this regard, a more photorealistic visualization provided by Cinematic Rendering (CR) may be more suitable for anatomical education. A randomized, two-period crossover study was conducted from July to December 2018 at the University Hospital of Erlangen, Germany, to compare CR and conventional computed tomography (CT) imaging for speed and comprehension of anatomy. Sixteen students were randomized into two assessment sequences. During each assessment period, participants had to answer 15 anatomy-related questions that were divided into three categories: parenchymal, musculoskeletal, and vascular anatomy. After a washout period of 14 days, assessments were crossed over to the respective second reconstruction technique. The mean interperiod differences for the time to answer differed significantly between the CR-CT sequence (-204.21 ± 156.0 seconds) and the CT-CR sequence (243.33 ± 113.83 seconds; P < 0.001). Overall time reduction by CR was 65.56%. Cinematic Rendering visualization of musculoskeletal and vascular anatomy was rated higher than CT visualization (P < 0.001 and P = 0.003), whereas CT visualization of parenchymal anatomy received a higher score than CR visualization (P < 0.001). No carryover effects were observed. A questionnaire revealed that students consider CR to be beneficial for medical education. These results suggest that CR has the potential to enhance knowledge acquisition and transfer from medical imaging data in medical education.
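The interperiod-difference analysis used in two-period crossover designs like the one above can be illustrated with a minimal numeric sketch. The values below are toy numbers chosen for illustration, not the study's data:

```python
import numpy as np

# Toy time-to-answer values (seconds) for a 2x2 crossover design.
# Rows are participants; columns are period 1 and period 2.
cr_ct = np.array([[310.0, 520.0],   # sequence: CR first, then CT
                  [280.0, 500.0],
                  [295.0, 515.0],
                  [300.0, 480.0]])
ct_cr = np.array([[540.0, 300.0],   # sequence: CT first, then CR
                  [510.0, 290.0],
                  [530.0, 310.0],
                  [495.0, 285.0]])

# Interperiod difference (period 1 minus period 2) per participant.
diff_cr_ct = cr_ct[:, 0] - cr_ct[:, 1]   # negative when CR is faster
diff_ct_cr = ct_cr[:, 0] - ct_cr[:, 1]   # positive when CR is faster

# A treatment effect appears as opposite-signed mean interperiod
# differences in the two sequences; a carryover effect would instead
# shift the per-participant period *sums* between sequences.
treatment_contrast = diff_ct_cr.mean() - diff_cr_ct.mean()
print(diff_cr_ct.mean(), diff_ct_cr.mean(), treatment_contrast)
```

Comparing the two sequences' interperiod differences (rather than raw times) is what lets a crossover design separate the treatment effect from period and carryover effects.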
Affiliation(s)
- Johannes S Binder
- Klinik für Allgemein- und Viszeralchirurgie, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Michael Scholz
- Institut für Funktionelle und Klinische Anatomie, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Stephan Ellmann
- Institut für Radiologie, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Michael Uder
- Institut für Radiologie, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Robert Grützmann
- Klinik für Allgemein- und Viszeralchirurgie, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Georg F Weber
- Klinik für Allgemein- und Viszeralchirurgie, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Christian Krautz
- Klinik für Allgemein- und Viszeralchirurgie, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
24
Mixed Reality Interaction and Presentation Techniques for Medical Visualisations. ADVANCES IN EXPERIMENTAL MEDICINE AND BIOLOGY 2020. [PMID: 33211310 DOI: 10.1007/978-3-030-47483-6_7] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/25/2023]
Abstract
Mixed, Augmented and Virtual reality technologies are burgeoning, with new applications and use cases appearing rapidly. This chapter provides a brief overview of the fundamental display presentation methods: head-worn, hand-held and projector-based displays. We present a summary of visualisation methods that employ these technologies in the medical domain, with a diverse range of examples spanning diagnosis and exploration, intervention and clinical use, interaction and gestures, and education.
25
Schapher M, Koch M, Weidner D, Scholz M, Wirtz S, Mahajan A, Herrmann I, Singh J, Knopf J, Leppkes M, Schauer C, Grüneboom A, Alexiou C, Schett G, Iro H, Muñoz LE, Herrmann M. Neutrophil Extracellular Traps Promote the Development and Growth of Human Salivary Stones. Cells 2020; 9:cells9092139. [PMID: 32971767 PMCID: PMC7564068 DOI: 10.3390/cells9092139] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2020] [Revised: 09/13/2020] [Accepted: 09/16/2020] [Indexed: 12/16/2022] Open
Abstract
Salivary gland stones, or sialoliths, are the most common cause of the obstruction of salivary glands. The mechanism behind the formation of sialoliths has been elusive. Symptomatic sialolithiasis has a prevalence of 0.45% in the general population, is characterized by recurrent painful periprandial swelling of the affected gland, and often results in sialadenitis with the need for surgical intervention. Here, we show by the use of immunohistochemistry, immunofluorescence, computed tomography (CT) scans and reconstructions, special dye techniques, bacterial genotyping, and enzyme activity analyses that neutrophil extracellular traps (NETs) initiate the formation and growth of sialoliths in humans. The deposition of neutrophil granulocyte extracellular DNA around small crystals results in the dense aggregation of the latter, and the subsequent mineralization creates alternating layers of dense mineral, which are predominantly calcium salt deposits and DNA. The further agglomeration and appositional growth of these structures promotes the development of macroscopic sialoliths that finally occlude the efferent ducts of the salivary glands, causing clinical symptoms and salivary gland dysfunction. These findings provide an entirely novel insight into the mechanism of sialolithogenesis, in which an immune system-mediated response essentially participates in the physicochemical process of concrement formation and growth.
Affiliation(s)
- Mirco Schapher
- Friedrich-Alexander-University Erlangen-Nürnberg (FAU), Department of Otolaryngology, Head and Neck Surgery, Universitätsklinikum Erlangen, Waldstrasse 1, 91054 Erlangen, Germany; (M.S.); (M.K.); (C.A.); (H.I.)
- Michael Koch
- Friedrich-Alexander-University Erlangen-Nürnberg (FAU), Department of Otolaryngology, Head and Neck Surgery, Universitätsklinikum Erlangen, Waldstrasse 1, 91054 Erlangen, Germany; (M.S.); (M.K.); (C.A.); (H.I.)
- Daniela Weidner
- Friedrich-Alexander-University Erlangen-Nürnberg (FAU), Department of Internal Medicine 3, Universitätsklinikum Erlangen, Ulmenweg 18, 91054 Erlangen, Germany; (D.W.); (A.M.); (I.H.); (J.S.); (J.K.); (C.S.); (A.G.); (G.S.); (L.E.M.)
- Friedrich-Alexander-University Erlangen-Nürnberg (FAU), Deutsches Zentrum für Immuntherapie, Universitätsklinikum Erlangen, Ulmenweg 18, 91054 Erlangen, Germany; (S.W.); (M.L.)
- Michael Scholz
- Friedrich-Alexander-University Erlangen-Nürnberg (FAU), Institute of Functional and Clinical Anatomy, Universitätsstrasse 19, 91054 Erlangen, Germany;
- Stefan Wirtz
- Friedrich-Alexander-University Erlangen-Nürnberg (FAU), Deutsches Zentrum für Immuntherapie, Universitätsklinikum Erlangen, Ulmenweg 18, 91054 Erlangen, Germany; (S.W.); (M.L.)
- Friedrich-Alexander-University Erlangen-Nürnberg (FAU), Department of Internal Medicine 1, Universitätsklinikum Erlangen, Ulmenweg 18, 91054 Erlangen, Germany
- Aparna Mahajan
- Friedrich-Alexander-University Erlangen-Nürnberg (FAU), Department of Internal Medicine 3, Universitätsklinikum Erlangen, Ulmenweg 18, 91054 Erlangen, Germany; (D.W.); (A.M.); (I.H.); (J.S.); (J.K.); (C.S.); (A.G.); (G.S.); (L.E.M.)
- Friedrich-Alexander-University Erlangen-Nürnberg (FAU), Deutsches Zentrum für Immuntherapie, Universitätsklinikum Erlangen, Ulmenweg 18, 91054 Erlangen, Germany; (S.W.); (M.L.)
- Irmgard Herrmann
- Friedrich-Alexander-University Erlangen-Nürnberg (FAU), Department of Internal Medicine 3, Universitätsklinikum Erlangen, Ulmenweg 18, 91054 Erlangen, Germany; (D.W.); (A.M.); (I.H.); (J.S.); (J.K.); (C.S.); (A.G.); (G.S.); (L.E.M.)
- Friedrich-Alexander-University Erlangen-Nürnberg (FAU), Deutsches Zentrum für Immuntherapie, Universitätsklinikum Erlangen, Ulmenweg 18, 91054 Erlangen, Germany; (S.W.); (M.L.)
- Jeeshan Singh
- Friedrich-Alexander-University Erlangen-Nürnberg (FAU), Department of Internal Medicine 3, Universitätsklinikum Erlangen, Ulmenweg 18, 91054 Erlangen, Germany; (D.W.); (A.M.); (I.H.); (J.S.); (J.K.); (C.S.); (A.G.); (G.S.); (L.E.M.)
- Friedrich-Alexander-University Erlangen-Nürnberg (FAU), Deutsches Zentrum für Immuntherapie, Universitätsklinikum Erlangen, Ulmenweg 18, 91054 Erlangen, Germany; (S.W.); (M.L.)
- Jasmin Knopf
- Friedrich-Alexander-University Erlangen-Nürnberg (FAU), Department of Internal Medicine 3, Universitätsklinikum Erlangen, Ulmenweg 18, 91054 Erlangen, Germany; (D.W.); (A.M.); (I.H.); (J.S.); (J.K.); (C.S.); (A.G.); (G.S.); (L.E.M.)
- Friedrich-Alexander-University Erlangen-Nürnberg (FAU), Deutsches Zentrum für Immuntherapie, Universitätsklinikum Erlangen, Ulmenweg 18, 91054 Erlangen, Germany; (S.W.); (M.L.)
- Moritz Leppkes
- Friedrich-Alexander-University Erlangen-Nürnberg (FAU), Deutsches Zentrum für Immuntherapie, Universitätsklinikum Erlangen, Ulmenweg 18, 91054 Erlangen, Germany; (S.W.); (M.L.)
- Friedrich-Alexander-University Erlangen-Nürnberg (FAU), Department of Internal Medicine 1, Universitätsklinikum Erlangen, Ulmenweg 18, 91054 Erlangen, Germany
- Christine Schauer
- Friedrich-Alexander-University Erlangen-Nürnberg (FAU), Department of Internal Medicine 3, Universitätsklinikum Erlangen, Ulmenweg 18, 91054 Erlangen, Germany; (D.W.); (A.M.); (I.H.); (J.S.); (J.K.); (C.S.); (A.G.); (G.S.); (L.E.M.)
- Friedrich-Alexander-University Erlangen-Nürnberg (FAU), Deutsches Zentrum für Immuntherapie, Universitätsklinikum Erlangen, Ulmenweg 18, 91054 Erlangen, Germany; (S.W.); (M.L.)
- Anika Grüneboom
- Friedrich-Alexander-University Erlangen-Nürnberg (FAU), Department of Internal Medicine 3, Universitätsklinikum Erlangen, Ulmenweg 18, 91054 Erlangen, Germany; (D.W.); (A.M.); (I.H.); (J.S.); (J.K.); (C.S.); (A.G.); (G.S.); (L.E.M.)
- Friedrich-Alexander-University Erlangen-Nürnberg (FAU), Deutsches Zentrum für Immuntherapie, Universitätsklinikum Erlangen, Ulmenweg 18, 91054 Erlangen, Germany; (S.W.); (M.L.)
- Christoph Alexiou
- Friedrich-Alexander-University Erlangen-Nürnberg (FAU), Department of Otolaryngology, Head and Neck Surgery, Universitätsklinikum Erlangen, Waldstrasse 1, 91054 Erlangen, Germany; (M.S.); (M.K.); (C.A.); (H.I.)
- Georg Schett
- Friedrich-Alexander-University Erlangen-Nürnberg (FAU), Department of Internal Medicine 3, Universitätsklinikum Erlangen, Ulmenweg 18, 91054 Erlangen, Germany; (D.W.); (A.M.); (I.H.); (J.S.); (J.K.); (C.S.); (A.G.); (G.S.); (L.E.M.)
- Friedrich-Alexander-University Erlangen-Nürnberg (FAU), Deutsches Zentrum für Immuntherapie, Universitätsklinikum Erlangen, Ulmenweg 18, 91054 Erlangen, Germany; (S.W.); (M.L.)
- Heinrich Iro
- Friedrich-Alexander-University Erlangen-Nürnberg (FAU), Department of Otolaryngology, Head and Neck Surgery, Universitätsklinikum Erlangen, Waldstrasse 1, 91054 Erlangen, Germany; (M.S.); (M.K.); (C.A.); (H.I.)
- Luis E. Muñoz
- Friedrich-Alexander-University Erlangen-Nürnberg (FAU), Department of Internal Medicine 3, Universitätsklinikum Erlangen, Ulmenweg 18, 91054 Erlangen, Germany; (D.W.); (A.M.); (I.H.); (J.S.); (J.K.); (C.S.); (A.G.); (G.S.); (L.E.M.)
- Friedrich-Alexander-University Erlangen-Nürnberg (FAU), Deutsches Zentrum für Immuntherapie, Universitätsklinikum Erlangen, Ulmenweg 18, 91054 Erlangen, Germany; (S.W.); (M.L.)
- Martin Herrmann
- Friedrich-Alexander-University Erlangen-Nürnberg (FAU), Department of Internal Medicine 3, Universitätsklinikum Erlangen, Ulmenweg 18, 91054 Erlangen, Germany; (D.W.); (A.M.); (I.H.); (J.S.); (J.K.); (C.S.); (A.G.); (G.S.); (L.E.M.)
- Friedrich-Alexander-University Erlangen-Nürnberg (FAU), Deutsches Zentrum für Immuntherapie, Universitätsklinikum Erlangen, Ulmenweg 18, 91054 Erlangen, Germany; (S.W.); (M.L.)
- Correspondence:
26
Wang DD, Qian Z, Vukicevic M, Engelhardt S, Kheradvar A, Zhang C, Little SH, Verjans J, Comaniciu D, O'Neill WW, Vannan MA. 3D Printing, Computational Modeling, and Artificial Intelligence for Structural Heart Disease. JACC Cardiovasc Imaging 2020; 14:41-60. [PMID: 32861647 DOI: 10.1016/j.jcmg.2019.12.022] [Citation(s) in RCA: 61] [Impact Index Per Article: 12.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/02/2019] [Revised: 11/27/2019] [Accepted: 12/02/2019] [Indexed: 01/19/2023]
Abstract
Structural heart disease (SHD) is a new field within cardiovascular medicine. Traditional imaging modalities fall short in supporting the needs of SHD interventions, as they have been constructed around the concept of disease diagnosis. SHD interventions disrupt traditional concepts of imaging by requiring imaging to plan, simulate, and predict intraprocedural outcomes. In transcatheter SHD interventions, the absence of a gold-standard open-cavity surgical field deprives physicians of the opportunity for tactile feedback and visual confirmation of cardiac anatomy. Hence, dependency on imaging for periprocedural guidance has led to the evolution of a new generation of procedural skillsets, the concept of a visual field, and technologies in the periprocedural planning period to accelerate preclinical device development and physician and patient education. Adoption of 3-dimensional (3D) printing in clinical care and procedural planning has demonstrated a reduction in the early-operator learning curve for transcatheter interventions. Integration of computational modeling with 3D printing has accelerated the research and development understanding of fluid mechanics in device testing. Application of 3D printing, computational modeling, and ultimately the incorporation of artificial intelligence is changing the landscape of physician training and the delivery of patient-centric care. Transcatheter structural heart interventions require an in-depth periprocedural understanding of cardiac pathophysiology and device interactions not afforded by traditional imaging metrics.
Affiliation(s)
- Dee Dee Wang
- Center for Structural Heart Disease, Division of Cardiology, Henry Ford Health System, Detroit, Michigan, USA.
- Zhen Qian
- Hippocrates Research Lab, Tencent America, Palo Alto, California, USA
- Marija Vukicevic
- Department of Cardiology, Methodist DeBakey Heart Center, Houston Methodist Hospital, Houston, Texas, USA
- Sandy Engelhardt
- Artificial Intelligence in Cardiovascular Medicine, Heidelberg University Hospital, Heidelberg, Germany
- Arash Kheradvar
- Department of Biomedical Engineering, Edwards Lifesciences Center for Advanced Cardiovascular Technology, University of California, Irvine, California, USA
- Chuck Zhang
- H. Milton Stewart School of Industrial & Systems Engineering and Georgia Tech Manufacturing Institute, Georgia Institute of Technology, Atlanta, Georgia, USA
- Stephen H Little
- Department of Cardiology, Methodist DeBakey Heart Center, Houston Methodist Hospital, Houston, Texas, USA
- Johan Verjans
- Australian Institute for Machine Learning, University of Adelaide, Adelaide, South Australia, Australia
- Dorin Comaniciu
- Siemens Healthineers, Medical Imaging Technologies, Princeton, New Jersey, USA
- William W O'Neill
- Center for Structural Heart Disease, Division of Cardiology, Henry Ford Health System, Detroit, Michigan, USA
- Mani A Vannan
- Hippocrates Research Lab, Tencent America, Palo Alto, California, USA
27
Chu LC, Park S, Kawamoto S, Yuille AL, Hruban RH, Fishman EK. Pancreatic Cancer Imaging: A New Look at an Old Problem. Curr Probl Diagn Radiol 2020; 50:540-550. [PMID: 32988674 DOI: 10.1067/j.cpradiol.2020.08.002] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2020] [Accepted: 08/21/2020] [Indexed: 12/18/2022]
Abstract
Computed tomography is the most commonly used imaging modality to detect and stage pancreatic cancer. Previous advances in pancreatic cancer imaging have focused on optimizing image acquisition parameters and reporting standards. However, current state-of-the-art imaging approaches still misdiagnose some potentially curable pancreatic cancers and do not provide prognostic information or inform optimal management strategies beyond stage. Several recent developments in pancreatic cancer imaging, including artificial intelligence and advanced visualization techniques, are rapidly changing the field. The purpose of this article is to review how these recent advances have the potential to revolutionize pancreatic cancer imaging.
Affiliation(s)
- Linda C Chu
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD.
- Seyoun Park
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD
- Satomi Kawamoto
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD
- Alan L Yuille
- Department of Computer Science, Johns Hopkins University, Baltimore, MD
- Ralph H Hruban
- Sol Goldman Pancreatic Cancer Research Center, Department of Pathology, Johns Hopkins University School of Medicine, Baltimore, MD
- Elliot K Fishman
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD
28
Elshafei M, Binder J, Baecker J, Brunner M, Uder M, Weber GF, Grützmann R, Krautz C. Comparison of Cinematic Rendering and Computed Tomography for Speed and Comprehension of Surgical Anatomy. JAMA Surg 2020; 154:738-744. [PMID: 31141115 PMCID: PMC6705138 DOI: 10.1001/jamasurg.2019.1168] [Citation(s) in RCA: 44] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
Abstract
Question Does the use of cinematic rendering improve the comprehension of the surgical anatomy? Findings In this German preclinical randomized crossover study, visualization with cinematic rendering allowed a more correct and faster comprehension of the surgical anatomy compared with conventional computed tomography, independent of the level of surgical experience. Meaning Cinematic rendering is a tool that may assist general surgeons with preoperative preparation and intraoperative guidance through an improved interpretation of computed tomography imaging data. Importance Three-dimensional (3-D) volume rendering has been shown to improve visualization in general surgery. Cinematic rendering (CR), a novel 3-D visualization technology for postprocessing of computed tomography (CT) images, provides photorealistic images with the potential to improve visualization of anatomic details. Objective To determine the value of CR for the comprehension of the surgical anatomy. Design, Setting, and Participants This preclinical, randomized, 2-sequence crossover study was conducted from February to November 1, 2018, at University Hospital of Erlangen, Germany. The 40 patient cases were evaluated by 18 resident and attending surgeons using a prepared set of CT and CR images. The patient cases were randomized to 2 assessment sequences (CR-CT and CT-CR). During each assessment period, participants answered 1 question per case that addressed crucial issues of anatomic understanding, preoperative planning, and intraoperative strategies. After a washout period of 2 weeks, case evaluations were crossed over to the respective second image modality. Main Outcomes and Measures The primary outcome measure was the correctness of answers. The secondary outcome was the time needed to answer. Results The mean (SD) interperiod differences for the percentage of correct answers in the CR-CT sequence (8.5% [7.0%]) differed significantly from those in the CT-CR sequence (−13.1% [6.3%]) (P < .001).
The mean (SD) interperiod differences for the time spent to answer the questions in the CR-CT sequence (−18.3 [76.9] seconds) also differed significantly from those in the CT-CR sequence (52.4 [88.5] seconds) (P < .001). Subgroup analysis revealed that residents as well as attending physicians benefitted from CR visualization. Analysis of the case assessment questionnaire showed that CR added significant value to the comprehension of the surgical anatomy (overall mean [SD] score, 4.53 [0.75]). No carryover or period effects were observed. Conclusions and Relevance The visualization with CR allowed a more correct and faster comprehension of the surgical anatomy compared with conventional CT imaging, independent of level of surgeon experience. Therefore, CR may assist general surgeons with preoperative preparation and intraoperative guidance.
Affiliation(s)
- Moustafa Elshafei
- Department of Surgery, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen Nürnberg, Erlangen, Germany
- Johannes Binder
- Department of Surgery, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen Nürnberg, Erlangen, Germany
- Justus Baecker
- Department of Surgery, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen Nürnberg, Erlangen, Germany
- Maximilian Brunner
- Department of Surgery, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen Nürnberg, Erlangen, Germany
- Michael Uder
- Institute of Radiology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen Nürnberg, Erlangen, Germany
- Georg F Weber
- Department of Surgery, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen Nürnberg, Erlangen, Germany
- Robert Grützmann
- Department of Surgery, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen Nürnberg, Erlangen, Germany
- Christian Krautz
- Department of Surgery, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen Nürnberg, Erlangen, Germany
29
Rashed EA, Gomez-Tames J, Hirata A. Deep Learning-Based Development of Personalized Human Head Model With Non-Uniform Conductivity for Brain Stimulation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:2351-2362. [PMID: 31995479 DOI: 10.1109/tmi.2020.2969682] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Electromagnetic stimulation of the human brain is a key tool for neurophysiological characterization and the diagnosis of several neurological disorders. Transcranial magnetic stimulation (TMS) is a commonly used clinical procedure. However, personalized TMS requires a pipeline for individual head model generation to provide target-specific stimulation. This process includes intensive segmentation of several head tissues based on magnetic resonance imaging (MRI), which has significant potential for segmentation error, especially for low-contrast tissues. Additionally, a uniform electrical conductivity is assigned to each tissue in the model, which is an unrealistic assumption based on conventional volume conductor modeling. This study proposes a novel approach for fast and automatic estimation of the electric conductivity in the human head for volume conductor models without anatomical segmentation. A convolutional neural network is designed to estimate personalized electrical conductivity values based on anatomical information obtained from T1- and T2-weighted MRI scans. This approach can avoid the time-consuming process of tissue segmentation and maximize the advantages of position-dependent conductivity assignment based on the water content values estimated from MRI intensity values. The computational results of the proposed approach provide similar but smoother electric field distributions of the brain than that provided by conventional approaches.
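The idea of position-dependent conductivity assignment can be caricatured with a toy voxelwise mapping. The `voxel_conductivity` helper and its linear blend below are hypothetical placeholders invented for illustration; the cited work learns this mapping with a convolutional network rather than using any closed-form rule:

```python
import numpy as np

def voxel_conductivity(t1, t2, sigma_min=0.05, sigma_max=2.0):
    """Toy per-voxel conductivity map in S/m.

    Instead of one uniform conductivity per segmented tissue, every voxel
    gets its own value derived from normalized T1-/T2-weighted intensities.
    The T2-dominant fraction is used here as a crude proxy for water
    content (wetter tissue -> higher conductivity); this blend is an
    assumption for illustration, not the paper's learned mapping.
    """
    t1 = np.asarray(t1, dtype=float)
    t2 = np.asarray(t2, dtype=float)
    w = np.clip(t2 / (t1 + t2 + 1e-9), 0.0, 1.0)  # pseudo water-content in [0, 1]
    return sigma_min + (sigma_max - sigma_min) * w

# Example: a tiny 2x2 "slice" of co-registered T1w/T2w intensities.
t1w = np.array([[200.0, 120.0], [90.0, 400.0]])
t2w = np.array([[100.0, 300.0], [350.0, 50.0]])
sigma = voxel_conductivity(t1w, t2w)
```

The payoff of such a map in a volume conductor model is exactly what the abstract describes: conductivity varies smoothly with position, so the simulated electric field avoids the hard jumps that uniform per-tissue assignment produces at segmentation boundaries.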
30
Etiopathogenesis of lacrimal sac mucopeptide concretions: insights from cinematic rendering techniques. Graefes Arch Clin Exp Ophthalmol 2020; 258:2299-2303. [DOI: 10.1007/s00417-020-04793-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2020] [Revised: 04/09/2020] [Accepted: 06/06/2020] [Indexed: 12/16/2022] Open
31
Is CT-based cinematic rendering superior to volume rendering technique in the preoperative evaluation of multifragmentary intraarticular lower extremity fractures? Eur J Radiol 2020; 126:108911. [PMID: 32171910 DOI: 10.1016/j.ejrad.2020.108911] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2018] [Revised: 01/07/2020] [Accepted: 02/16/2020] [Indexed: 12/14/2022]
Abstract
PURPOSE Cinematic rendering (CR), a recently launched, FDA-approved rendering technique, converts CT image datasets into nearly photorealistic 3D reconstructions by using a unique lighting model. The purpose of this study was to compare CR to volume rendering technique (VRT) images in the preoperative visualization of multifragmentary intraarticular lower extremity fractures. METHOD In this retrospective study, CT datasets of 41 consecutive patients (female: n = 13; male: n = 28; mean age: 52.3 ± 17.9 years) with multifragmentary intraarticular lower extremity fractures (calcaneus: n = 16; tibial pilon: n = 19; acetabulum: n = 6) were included. All datasets were acquired using a 128-row dual-source CT. A dedicated workstation was used to reconstruct CR and VRT images, which were reviewed independently by two experienced board-certified traumatologists trained in special trauma surgery. Image quality, anatomical accuracy, and fracture visualization were assessed on a 6-point Likert scale (1 = non-diagnostic; 6 = excellent). The regular CT image reconstructions served as the reference standard. For each score, median values between both readers were calculated. Scores of both reconstruction methods were compared using a Wilcoxon rank-sum test, with p < 0.05 indicating statistical significance. Inter-reader agreement was calculated using Spearman's rank correlation coefficient. RESULTS Compared to VRT, CR demonstrated a higher image quality (VRT: 2.5; CR: 6.0; p < 0.001), a higher anatomical accuracy (VRT: 3.5; CR: 5.5; p < 0.001) and provided a more detailed visualization of the fracture (VRT: 2.5; CR: 6.0; p < 0.001). An additional benefit of CR reconstructions compared to VRT reconstructions was reported by both readers in 65.9% (27/41) of all patients. CONCLUSIONS CR reconstructions are superior to VRT due to higher image quality and higher anatomical accuracy. Traumatologists found that CR reconstructions improve the visualization of lower extremity fractures; CR should thus be used for fracture demonstration during interdisciplinary conferences.
32
Caton MT, Wiggins WF, Nunez D. Three‐Dimensional Cinematic Rendering to Optimize Visualization of Cerebrovascular Anatomy and Disease in CT Angiography. J Neuroimaging 2020; 30:286-296. [DOI: 10.1111/jon.12697] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2019] [Revised: 02/01/2020] [Accepted: 02/04/2020] [Indexed: 12/12/2022] Open
Affiliation(s)
- M. Travis Caton
- Department of Radiology, Brigham and Women's Hospital, Boston, MA
- Harvard Medical School, Harvard University, Boston, MA
- Walter F. Wiggins
- Department of Radiology, Brigham and Women's Hospital, Boston, MA
- Harvard Medical School, Harvard University, Boston, MA
- Diego Nunez
- Department of Radiology, Brigham and Women's Hospital, Boston, MA
- Harvard Medical School, Harvard University, Boston, MA
33
Baldi D, Tramontano L, Punzo B, Orsini M, Cavaliere C. CT cinematic rendering for glomus jugulare tumor with intracranial extension. Quant Imaging Med Surg 2020; 10:522-526. [PMID: 32190578 DOI: 10.21037/qims.2019.12.13] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/27/2022]
34
Petroulia V, Surial B, Verma RK, Hauser C, Hakim A. Calvarial osteomyelitis in secondary syphilis: evaluation by MRI and CT, including cinematic rendering. Heliyon 2020; 6:e03090. [PMID: 31938744 PMCID: PMC6953708 DOI: 10.1016/j.heliyon.2019.e03090] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2019] [Revised: 09/23/2019] [Accepted: 12/17/2019] [Indexed: 11/02/2022] Open
Abstract
This is a case of a 22-year-old, HIV-negative, male patient with asymptomatic syphilitic osteomyelitis of the skull in the context of secondary syphilis. The diagnosis was made based on serology as well as CT and MRI scans. CT volumetric data was post-processed with cinematic rendering, which is a novel algorithm that allows for a photorealistic visualization of the lesions. Imaging and follow-up scans after treatment confirmed the diagnosis without the need to perform invasive procedures such as a biopsy.
Affiliation(s)
- Valentina Petroulia
- University Institute of Diagnostic and Interventional Neuroradiology, Inselspital, Bern University Hospital, University of Bern, Switzerland
- Bernard Surial
- Department of Infectious Diseases, Inselspital, Bern University Hospital, University of Bern, Switzerland
- Rajeev Kumar Verma
- University Institute of Diagnostic and Interventional Neuroradiology, Inselspital, Bern University Hospital, University of Bern, Switzerland
- Christoph Hauser
- Department of Infectious Diseases, Inselspital, Bern University Hospital, University of Bern, Switzerland
- Arsany Hakim
- University Institute of Diagnostic and Interventional Neuroradiology, Inselspital, Bern University Hospital, University of Bern, Switzerland
35
Vercauteren T, Unberath M, Padoy N, Navab N. CAI4CAI: The Rise of Contextual Artificial Intelligence in Computer Assisted Interventions. PROCEEDINGS OF THE IEEE. INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS 2020; 108:198-214. [PMID: 31920208 PMCID: PMC6952279 DOI: 10.1109/jproc.2019.2946993] [Citation(s) in RCA: 53] [Impact Index Per Article: 10.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/01/2019] [Revised: 09/12/2019] [Accepted: 10/04/2019] [Indexed: 05/10/2023]
Abstract
Data-driven computational approaches have evolved to enable extraction of information from medical images with a reliability, accuracy and speed which is already transforming their interpretation and exploitation in clinical practice. While similar benefits are longed for in the field of interventional imaging, this ambition is challenged by a much higher heterogeneity. Clinical workflows within interventional suites and operating theatres are extremely complex and typically rely on poorly integrated intra-operative devices, sensors, and support infrastructures. Taking stock of some of the most exciting developments in machine learning and artificial intelligence for computer assisted interventions, we highlight the crucial need to take context and human factors into account in order to address these challenges. Contextual artificial intelligence for computer assisted intervention, or CAI4CAI, arises as an emerging opportunity feeding into the broader field of surgical data science. Central challenges being addressed in CAI4CAI include how to integrate the ensemble of prior knowledge and instantaneous sensory information from experts, sensors and actuators; how to create and communicate a faithful and actionable shared representation of the surgery among a mixed human-AI actor team; how to design interventional systems and associated cognitive shared control schemes for online uncertainty-aware collaborative decision making ultimately producing more precise and reliable interventions.
Affiliation(s)
- Tom Vercauteren
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London WC2R 2LS, U.K.
- Mathias Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Nicolas Padoy
- ICube institute, CNRS, IHU Strasbourg, University of Strasbourg, 67081 Strasbourg, France
- Nassir Navab
- Fakultät für Informatik, Technische Universität München, 80333 Munich, Germany
36
Chu L, Rowe S, Fishman E. Cinematic rendering of focal liver masses. Diagn Interv Imaging 2019; 100:467-476. [DOI: 10.1016/j.diii.2019.04.003] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2019] [Revised: 04/09/2019] [Accepted: 04/09/2019] [Indexed: 12/12/2022]
37
Stadlinger B, Valdec S, Wacht L, Essig H, Winklhofer S. 3D-cinematic rendering for dental and maxillofacial imaging. Dentomaxillofac Radiol 2019; 49:20190249. [PMID: 31356110 DOI: 10.1259/dmfr.20190249] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022] Open
Abstract
OBJECTIVES The aim of this technical note is to show the applicability of cinematic rendering (CR) for a photorealistic 3-dimensional (3D) visualization of maxillofacial structures. The focus is on maxillofacial hard tissue pathologies. METHODS High-density maxillofacial pathologies were selected in which CR is applicable. Data from both CT and cone beam CT (CBCT) were postprocessed using a prototype CR software. RESULTS CR 3D postprocessing of CT and CBCT imaging data is applicable to high-density structures and pathologies such as bones, teeth, and tissue calcifications. Image reconstruction allows for a detailed visualization of surface structures, their plasticity, and 3D configuration. CONCLUSIONS CR allows for the generation of photorealistic 3D reconstructions of high-density structures and pathologies. Potential applications for maxillofacial bone and tooth imaging are given, and examples for CT and CBCT images are displayed.
Affiliation(s)
- Bernd Stadlinger
- Clinic of Cranio-Maxillofacial and Oral Surgery, University Hospital Zurich, University of Zurich, Switzerland
- Silvio Valdec
- Clinic of Cranio-Maxillofacial and Oral Surgery, University Hospital Zurich, University of Zurich, Switzerland
- Lorenz Wacht
- Department of Radiology and Nuclear Medicine, Triemli Hospital Zurich, Switzerland; Department of Neuroradiology, University Hospital Zurich, University of Zurich, Switzerland
- Harald Essig
- Clinic of Cranio-Maxillofacial and Oral Surgery, University Hospital Zurich, University of Zurich, Switzerland
- Sebastian Winklhofer
- Department of Neuroradiology, University Hospital Zurich, University of Zurich, Switzerland
38
Zheng Q, Delingette H, Ayache N. Explainable cardiac pathology classification on cine MRI with motion characterization by semi-supervised learning of apparent flow. Med Image Anal 2019; 56:80-95. [PMID: 31200290 DOI: 10.1016/j.media.2019.06.001] [Citation(s) in RCA: 43] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2018] [Revised: 03/27/2019] [Accepted: 06/04/2019] [Indexed: 12/28/2022]
Abstract
We propose a method to classify cardiac pathology based on a novel approach to extract image-derived features to characterize the shape and motion of the heart. An original semi-supervised learning procedure, which makes efficient use of a large amount of non-segmented images and a small amount of images segmented manually by experts, is developed to generate pixel-wise apparent flow between two time points of a 2D+t cine MRI image sequence. Combining the apparent flow maps and cardiac segmentation masks, we obtain a local apparent flow corresponding to the 2D motion of myocardium and ventricular cavities. This leads to the generation of time series of the radius and thickness of myocardial segments to represent cardiac motion. These time series of motion features are reliable and explainable characteristics of pathological cardiac motion. Furthermore, they are combined with shape-related features to classify cardiac pathologies. Using only nine feature values as input, we propose an explainable, simple and flexible model for pathology classification. On the ACDC training and testing sets, the model achieves classification accuracies of 95% and 94%, respectively. Its performance is hence comparable to that of the state-of-the-art. Comparison with various other models is performed to outline some advantages of our model.
Affiliation(s)
- Qiao Zheng
- Université Côte d'Azur, Inria, 2004 Route des Lucioles, 06902 Sophia Antipolis, France.
- Hervé Delingette
- Université Côte d'Azur, Inria, 2004 Route des Lucioles, 06902 Sophia Antipolis, France
- Nicholas Ayache
- Université Côte d'Azur, Inria, 2004 Route des Lucioles, 06902 Sophia Antipolis, France
39
Cinematic rendering of skin and subcutaneous soft tissues: potential applications in acute trauma. Emerg Radiol 2019; 26:573-580. [DOI: 10.1007/s10140-019-01697-w] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2019] [Accepted: 05/17/2019] [Indexed: 12/28/2022]
40
Yang J, Liu X, Liao C, Li Q, Han D. Cinematic rendering: a new imaging approach for ulcerative colitis. Jpn J Radiol 2019; 37:590-596. [DOI: 10.1007/s11604-019-00844-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2019] [Accepted: 05/16/2019] [Indexed: 12/11/2022]
41
Evaluation of the Applicability of 3d Models as Perceived by the Students of Health Sciences. J Med Syst 2019; 43:108. [PMID: 30887131 DOI: 10.1007/s10916-019-1238-0] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2019] [Accepted: 03/06/2019] [Indexed: 10/27/2022]
Abstract
The methodology and style of teaching anatomy in the faculties of Health Sciences is evolving due to the changes being introduced as a result of the application of new technologies. This fosters a more positive attitude in the students, enabling active participation during the lessons. One of these new technologies is the creation of 3D models that reliably recreate the anatomical details of real bone pieces and allow anatomy students access to bone pieces that are not damaged and possess easily identifiable anatomical details. In our work, we presented previously created 3D models of the skull and jaw to the students of anatomy in the Faculties of Health Sciences of the University of Salamanca, Spain. The faculties included were odontology, medicine, occupational therapy, nursing, health sciences and physiotherapy. A survey was carried out to assess the usefulness of these 3D models in the practical study of anatomy. The total number of students included in the survey was 280. The analysis of the results shows a positive evaluation of the use of 3D models by the students studying anatomy in different Faculties of Health Sciences.
42
Leveraging medical imaging for medical education — A cinematic rendering-featured lecture. Ann Anat 2019; 222:159-165. [DOI: 10.1016/j.aanat.2018.12.004] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2018] [Accepted: 12/11/2018] [Indexed: 02/07/2023]
43
Shen Y, Lu Q, Chen R, Yang X, Zhang H, Lu C. False negative results for eosinophils in the 00-19E (Build 5) Information Processing Unit version of automated hematology analyzer sysmex XN series. Scandinavian Journal of Clinical and Laboratory Investigation 2019; 79:126-128. [PMID: 30600719 DOI: 10.1080/00365513.2018.1550669] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Affiliation(s)
- Yajuan Shen
- Department of Clinical Laboratory, Shandong Provincial Hospital Affiliated to Shandong University, Jinan 250021, China
- Qifeng Lu
- Department of Clinical Laboratory, Shandong Provincial Hospital Affiliated to Shandong University, Jinan 250021, China
- Ruidan Chen
- Department of Clinical Laboratory, Shandong Provincial Hospital Affiliated to Shandong University, Jinan 250021, China
- Xixian Yang
- Department of Clinical Laboratory, Shandong Provincial Hospital Affiliated to Shandong University, Jinan 250021, China
- Hanyue Zhang
- Department of Clinical Laboratory, Shandong Provincial Hospital Affiliated to Shandong University, Jinan 250021, China
- Chao Lu
- Department of Clinical Laboratory, Shandong Provincial Hospital Affiliated to Shandong University, Jinan 250021, China
44
Cinematic rendering of pancreatic neoplasms: preliminary observations and opportunities. Abdom Radiol (NY) 2018; 43:3009-3015. [PMID: 29550959 DOI: 10.1007/s00261-018-1559-3] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
Abstract
Pancreatic cancer is the third most common cause of cancer death and CT is the most commonly used modality for the initial evaluation of suspected pancreatic cancer. Post-processing of CT data into 2D multiplanar and 3D reconstructions has been shown to improve tumor visualization and assessment of tumor resectability compared to axial slices, and is considered the standard of care. Cinematic rendering (CR) is a new 3D-rendering technique that produces photorealistic images, and it has the potential to more accurately depict anatomic detail compared to traditional 3D reconstruction techniques. The purpose of this article is to describe the potential application of CR to imaging of pancreatic neoplasms. CR has the potential to improve visualization of subtle pancreatic neoplasms, differentiation of solid and cystic pancreatic neoplasms, assessment of local tumor extension and vascular invasion, and visualization of metastatic disease.
45
Yang J, Li K, Deng H, Feng J, Fei Y, Jin Y, Liao C, Li Q. CT cinematic rendering for pelvic primary tumor photorealistic visualization. Quant Imaging Med Surg 2018; 8:804-818. [PMID: 30306061 DOI: 10.21037/qims.2018.09.21] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
Abstract
Pelvic tumors can be both complicated and challenging, and computed tomography (CT) has played an important role in the diagnosis and treatment planning of these conditions. Cinematic rendering (CR) is a new method of 3D imaging using CT volumetric data. Unlike traditional 3D methods, CR uses the global illumination model to produce high-definition surface details and shadow effects to generate photorealistic images. In this pictorial review, a series of primary pelvic tumor cases are presented to demonstrate the potential value of CR relative to conventional volume rendering (VR). This technique holds great potential in disease diagnosis, preoperative planning, medical education and patient communication.
Affiliation(s)
- Jun Yang
- Department of Radiology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming 650118, China
- Kun Li
- Department of Radiology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming 650118, China
- Huiyuan Deng
- Department of Radiology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming 650118, China
- Jun Feng
- Department of Radiology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming 650118, China
- Yong Fei
- Department of Radiology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming 650118, China
- Yiren Jin
- Department of Radiology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming 650118, China
- Chengde Liao
- Department of Radiology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming 650118, China
- Qinqing Li
- Department of Radiology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming 650118, China
46
Mahmood F, Chen R, Sudarsky S, Yu D, Durr NJ. Deep learning with cinematic rendering: fine-tuning deep neural networks using photorealistic medical images. Phys Med Biol 2018; 63:185012. [PMID: 30113015 DOI: 10.1088/1361-6560/aada93] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
Abstract
Deep learning has emerged as a powerful artificial intelligence tool to interpret medical images for a growing variety of applications. However, the paucity of medical imaging data with high-quality annotations that is necessary for training such methods ultimately limits their performance. Medical data is challenging to acquire due to privacy issues, shortage of experts available for annotation, limited representation of rare conditions and cost. This problem has previously been addressed by using synthetically generated data. However, networks trained on synthetic data often fail to generalize to real data. Cinematic rendering simulates the propagation and interaction of light passing through tissue models reconstructed from CT data, enabling the generation of photorealistic images. In this paper, we present one of the first applications of cinematic rendering in deep learning, in which we propose to fine-tune synthetic data-driven networks using cinematically rendered CT data for the task of monocular depth estimation in endoscopy. Our experiments demonstrate that: (a) convolutional neural networks (CNNs) trained on synthetic data and fine-tuned on photorealistic cinematically rendered data adapt better to real medical images and demonstrate more robust performance when compared to networks with no fine-tuning, (b) these fine-tuned networks require less training data to converge to an optimal solution, and (c) fine-tuning with data from a variety of photorealistic rendering conditions of the same scene prevents the network from learning patient-specific information and aids in generalizability of the model. Our empirical evaluation demonstrates that networks fine-tuned with cinematically rendered data predict depth with 56.87% less error for rendered endoscopy images and 27.49% less error for real porcine colon endoscopy images.
Affiliation(s)
- Faisal Mahmood
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
47

48
Hosny A, Keating SJ, Dilley JD, Ripley B, Kelil T, Pieper S, Kolb D, Bader C, Pobloth AM, Griffin M, Nezafat R, Duda G, Chiocca EA, Stone JR, Michaelson JS, Dean MN, Oxman N, Weaver JC. From Improved Diagnostics to Presurgical Planning: High-Resolution Functionally Graded Multimaterial 3D Printing of Biomedical Tomographic Data Sets. 3D PRINTING AND ADDITIVE MANUFACTURING 2018; 5:103-113. [DOI: 10.1089/3dp.2017.0140] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/05/2025]
Affiliation(s)
- Ahmed Hosny
- Wyss Institute for Biologically Inspired Engineering, Harvard University, Cambridge, Massachusetts
| | | | - Joshua D. Dilley
- Department of Anesthesia, Critical Care and Pain Medicine, Harvard Medical School, Massachusetts General Hospital, Boston, Massachusetts
| | - Beth Ripley
- Department of Radiology, University of Washington, Seattle, Washington
| | - Tatiana Kelil
- Department of Radiology, University of California, San Francisco, California
| | - Steve Pieper
- Surgical Planning Laboratory, Department of Radiology, Brigham and Women's Hospital, Boston, Massachusetts
- Isomics, Inc., Cambridge, Massachusetts
| | | | | | - Anne-Marie Pobloth
- Department of Biomechanics and Musculoskeletal Regeneration, Julius Wolff Institute and Berlin-Brandenburg Center for Regenerative Therapies, Charité—Universitätsmedizin Berlin, Berlin, Germany
| | - Molly Griffin
- Division of Surgical Oncology, Gillette Center for Women's Cancers, Massachusetts General Hospital, Boston, Massachusetts
| | - Reza Nezafat
- Department of Medicine (Cardiovascular Division), Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts
| | - Georg Duda
- Department of Biomechanics and Musculoskeletal Regeneration, Julius Wolff Institute and Berlin-Brandenburg Center for Regenerative Therapies, Charité—Universitätsmedizin Berlin, Berlin, Germany
| | - Ennio A. Chiocca
- Department of Neurosurgery at Harvard Medical School, Center for Neuro-oncology at Dana-Farber Cancer Institute, Boston, Massachusetts
- Department of Neurosurgery and Institute for the Neurosciences at Brigham and Women's Hospital, Center for Neuro-oncology at Dana-Farber Cancer Institute, Boston, Massachusetts
| | - James R. Stone
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts
| | - James S. Michaelson
- Department of Pathology and Surgery, Massachusetts General Hospital, Boston, Massachusetts
- Department of Pathology, Harvard Medical School, Cambridge, Massachusetts
| | - Mason N. Dean
- Department of Biomaterials, Max Planck Institute of Colloids and Interfaces, Potsdam, Germany
| | | | - James C. Weaver
- Wyss Institute for Biologically Inspired Engineering, Harvard University, Cambridge, Massachusetts
| |
49
Menon KV, Raniga SB. Trabecular Anatomy of the Axis Vertebra: A Study of Shaded Volume-Rendered Computed Tomography Images. World Neurosurg 2018; 110:526-532.e10. [PMID: 29433177 DOI: 10.1016/j.wneu.2017.06.185] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2017] [Revised: 06/24/2017] [Accepted: 06/28/2017] [Indexed: 11/26/2022]
Abstract
BACKGROUND To date, trabecular morphology studies have been conducted on thin-section computed tomography (CT) scans of cadaveric bone. Here we describe the trabecular anatomy of the axis vertebra as revealed by an innovative imaging tool. METHODS Ten patients who underwent thin-slice CT scans for suspected cervical spine injury were prospectively subjected to shaded volume-rendered 3-dimensional reconstruction of the images. The trabecular anatomy thus depicted was recreated, and the mechanical vectors were deduced independently by a senior radiologist and a spine surgeon and then matched. The clinical implications were postulated. RESULTS The most striking trabeculae are the vertical compression trabeculae connecting the C1 facet to the C3 body. The center of the body of C2 has a space with sparse trabeculae; similarly, the pars interarticularis also has a clear void. The dens contains predominantly tensile trabeculae that are retained even in older patients. Midline remnants of the odontoid body synchondrosis persist even into late adulthood. CONCLUSIONS Shaded volume-rendered imaging appears to be an excellent tool for studying the trabecular anatomy of cancellous bone. The weight-bearing trabeculae run from the C1-2 facet to the C3 body; the inferior facet contributes little to weight-bearing.
Affiliation(s)
- K Venugopal Menon
- Department of Orthopedics, Khoula Hospital, Mina al Fahal, Muscat, Oman.
- Sameer B Raniga
- Department of Radiology and Molecular Imaging, Sultan Qaboos University Hospital, Muscat, Oman
50
Zheng J, Miao S, Jane Wang Z, Liao R. Pairwise domain adaptation module for CNN-based 2-D/3-D registration. J Med Imaging (Bellingham) 2018; 5:021204. [PMID: 29376104 DOI: 10.1117/1.jmi.5.2.021204] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2017] [Accepted: 12/05/2017] [Indexed: 11/14/2022] Open
Abstract
Accurate two-dimensional to three-dimensional (2-D/3-D) registration of preoperative 3-D data and intraoperative 2-D x-ray images is a key enabler for image-guided therapy. Recent advances in 2-D/3-D registration formulate the problem as a learning-based approach and exploit the modeling power of convolutional neural networks (CNN) to significantly improve the accuracy and efficiency of 2-D/3-D registration. However, for surgery-related applications, collecting a large clinical dataset with accurate annotations for training can be very challenging or impractical. Therefore, deep learning-based 2-D/3-D registration methods are often trained with synthetically generated data, and a performance gap is often observed when testing the trained model on clinical data. We propose a pairwise domain adaptation (PDA) module to adapt the model trained on source domain (i.e., synthetic data) to target domain (i.e., clinical data) by learning domain invariant features with only a few paired real and synthetic data. The PDA module is designed to be flexible for different deep learning-based 2-D/3-D registration frameworks, and it can be plugged into any pretrained CNN model such as a simple Batch-Norm layer. The proposed PDA module has been quantitatively evaluated on two clinical applications using different frameworks of deep networks, demonstrating its significant advantages of generalizability and flexibility for 2-D/3-D medical image registration when a small number of paired real-synthetic data can be obtained.
Affiliation(s)
- Jiannan Zheng
- Siemens Healthineers, Princeton, New Jersey, United States; University of British Columbia, Faculty of Applied Science, Department of Electrical and Computer Engineering, Vancouver, British Columbia, Canada
- Shun Miao
- Siemens Healthineers, Princeton, New Jersey, United States
- Z Jane Wang
- University of British Columbia, Faculty of Applied Science, Department of Electrical and Computer Engineering, Vancouver, British Columbia, Canada
- Rui Liao
- Siemens Healthineers, Princeton, New Jersey, United States