1. Zhou Z, Yin P, Liu Y, Hu J, Qian X, Chen G, Hu C, Dai Y. Uncertain prediction of deformable image registration on lung CT using multi-category features and supervised learning. Med Biol Eng Comput 2024;62:2669-2686. PMID: 38658497. DOI: 10.1007/s11517-024-03092-1.
Abstract
The assessment of deformable registration uncertainty is important for the safety and reliability of registration methods in clinical applications, but it is typically performed through a manual, time-consuming procedure. We propose a novel automatic method to predict registration uncertainty based on multi-category features and supervised learning. Three types of features (deformation field statistical features, deformation field physiologically realistic features, and image similarity features) are introduced and calculated to train a random forest regressor for local registration uncertainty prediction. Deformation field statistical features represent the numerical stability of the registration optimization and correlate with the uncertainty of the deformation field; deformation field physiologically realistic features represent the biomechanical properties of organ motion and mathematically reflect the physiological reality of the deformation; image similarity features reflect the similarity between the warped image and the fixed image. Together, the multi-category features comprehensively characterize registration uncertainty. A strategy of spatially adaptive random perturbations is also introduced to accurately simulate the spatial distribution of registration uncertainty, making the deformation field statistical features more discriminative. Experiments were conducted on three publicly available thoracic CT image datasets: 17 randomly selected image pairs were used to train the random forest model, and 9 image pairs were used to evaluate the prediction model. The quantitative experiments on lung CT images show that the proposed method outperforms the baseline method for uncertainty prediction of both classical iterative optimization-based registration and deep learning-based registration across different registration qualities.
The proposed method achieves good performance for registration uncertainty prediction and has great potential to improve its accuracy.
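The feature pipeline described above can be sketched in a few lines. The block below is a minimal, hypothetical illustration, not the authors' code: for a 2D deformation field it computes one representative feature per category (displacement magnitude for the statistical category, the Jacobian determinant for physiological realism, and the squared intensity difference for image similarity), and it uses a simple k-NN regressor as a dependency-free stand-in for the paper's random forest.

```python
import numpy as np

def jacobian_det_2d(dfield):
    """Per-pixel Jacobian determinant of a 2D deformation field.

    dfield has shape (H, W, 2): displacements (in pixels) along y and x.
    Values near 1 indicate locally volume-preserving, physiologically
    plausible deformation; values <= 0 indicate folding.
    """
    dy_dy, dy_dx = np.gradient(dfield[..., 0])
    dx_dy, dx_dx = np.gradient(dfield[..., 1])
    return (1.0 + dy_dy) * (1.0 + dx_dx) - dy_dx * dx_dy

def feature_matrix(dfield, warped, fixed):
    """Stack one feature per category into an (H*W, 3) matrix."""
    stat = np.linalg.norm(dfield, axis=-1)   # field-statistics feature
    phys = jacobian_det_2d(dfield)           # physiological-realism feature
    sim = (warped - fixed) ** 2              # image-similarity feature
    return np.stack([stat.ravel(), phys.ravel(), sim.ravel()], axis=1)

def knn_predict(train_X, train_y, test_X, k=5):
    """k-NN regression standing in for the paper's random forest."""
    d = np.linalg.norm(test_X[:, None, :] - train_X[None, :, :], axis=-1)
    idx = np.argsort(d, axis=1)[:, :k]
    return train_y[idx].mean(axis=1)
```

In a real pipeline each voxel's feature vector would be paired with a simulated local error (e.g. from the spatially adaptive perturbations) to form the supervised training set.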
Affiliation(s)
- Zhiyong Zhou
  - Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
  - School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Pengfei Yin
  - Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
  - School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Yuhang Liu
  - School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing, 210094, China
- Jisu Hu
  - Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
  - School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Xusheng Qian
  - Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
  - School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Guangqiang Chen
  - The Second Affiliated Hospital of Soochow University, Suzhou, 215163, China
- Chunhong Hu
  - The First Affiliated Hospital of Soochow University, Suzhou, 215163, China
- Yakang Dai
  - Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
  - School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
2. Hong J, Hnatyshyn R, Santos EAD, Maciejewski R, Isenberg T. A Survey of Designs for Combined 2D+3D Visual Representations. IEEE Trans Vis Comput Graph 2024;30:2888-2902. PMID: 38648152. DOI: 10.1109/TVCG.2024.3388516.
Abstract
We examine visual representations of data that make use of combinations of both 2D and 3D data mappings. Combining 2D and 3D representations is a common technique that allows viewers to understand multiple facets of the data with which they are interacting. While 3D representations focus on the spatial character of the data or on a dedicated 3D data mapping, 2D representations often show abstract data properties and take advantage of the unique benefits of mapping to a plane. Many systems have effectively used unique combinations of both types of data mappings, yet there has been no systematic review of methods for linking 2D and 3D representations. We systematically survey the relationships between 2D and 3D visual representations in major visualization publications (IEEE VIS, IEEE TVCG, and EuroVis) from 2012 to 2022, closely examining 105 articles in which 2D and 3D representations are connected visually, interactively, or through animation. We characterize these designs by their visual environment, the relationships between their visual representations, and their possible layouts. Through our analysis, we introduce a design space and provide design guidelines for effectively linking 2D and 3D visual representations.
3. Sambri A, Fiore M, Rottoli M, Bianchi G, Pignatti M, Bortoli M, Ercolino A, Ancetti S, Perrone AM, De Iaco P, Cipriani R, Brunocilla E, Donati DM, Gargiulo M, Poggioli G, De Paolis M. A Planned Multidisciplinary Surgical Approach to Treat Primary Pelvic Malignancies. Curr Oncol 2023;30:1106-1115. PMID: 36661733. PMCID: PMC9857743. DOI: 10.3390/curroncol30010084.
Abstract
The pelvic anatomy poses great challenges to orthopedic surgeons. Pelvic sarcomas are often large and typically confined within the narrow space of the pelvis, in close proximity to vital structures. The aim of this study is to report a systematic, planned multidisciplinary surgical approach to treating pelvic sarcomas. Seventeen patients with bone and soft tissue sarcomas of the pelvis were treated using a planned multidisciplinary surgical approach combining the expertise of orthopedic oncology with colleagues from urology, vascular surgery, abdominal surgery, gynecology, and plastic surgery. Seven patients were treated with hindquarter amputation; 10 underwent excision of the tumor. Bone defects were reconstructed in six patients with a custom-made 3D-printed pelvic prosthesis. Thirteen patients experienced at least one complication. Well-organized multidisciplinary collaboration among the subspecialties is the cornerstone of managing patients with pelvic sarcomas, which should be undertaken in specialized centers. A multidisciplinary surgical approach is of paramount importance for obtaining successful surgical results and adequate margins, and thus acceptable outcomes.
Affiliation(s)
- Andrea Sambri
  - Orthopedic and Traumatology Unit, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy
- Michele Fiore
  - Orthopedic and Traumatology Unit, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy
- Matteo Rottoli
  - General Surgery Unit, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy
- Marco Pignatti
  - Plastic Surgery Unit, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy
- Marta Bortoli
  - Orthopedic and Traumatology Unit, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy
- Amelio Ercolino
  - Division of Urology, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy
- Stefano Ancetti
  - Vascular Surgery Unit, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy
- Anna Myriam Perrone
  - Gynecologic Oncology Unit, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy
- Pierandrea De Iaco
  - Gynecologic Oncology Unit, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy
- Riccardo Cipriani
  - Plastic Surgery Unit, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy
- Eugenio Brunocilla
  - Division of Urology, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy
- Mauro Gargiulo
  - Vascular Surgery Unit, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy
- Gilberto Poggioli
  - General Surgery Unit, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy
- Massimiliano De Paolis
  - Orthopedic and Traumatology Unit, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy
4. Zhou L, Fan M, Hansen C, Johnson CR, Weiskopf D. A Review of Three-Dimensional Medical Image Visualization. Health Data Sci 2022;2022:9840519. PMID: 38487486. PMCID: PMC10880180. DOI: 10.34133/2022/9840519.
Abstract
Importance: Medical images are essential for modern medicine and an important research subject in visualization. However, medical experts are often not aware of the many advanced three-dimensional (3D) medical image visualization techniques that could increase their capabilities in data analysis and assist the decision-making process for specific medical problems. Our paper provides a review of 3D visualization techniques for medical images, intending to bridge the gap between medical experts and visualization researchers.
Highlights: Fundamental visualization techniques are revisited for various medical imaging modalities, from computed tomography to diffusion tensor imaging, featuring techniques that enhance spatial perception, which is critical for medical practice. The state of the art of medical visualization is reviewed based on a procedure-oriented classification of medical problems for studies of individuals and populations. This paper summarizes free software tools for different modalities of medical images, designed for various purposes including visualization, analysis, and segmentation, and provides the respective Internet links.
Conclusions: Visualization techniques are a useful tool for medical experts to tackle specific medical problems in their daily work. Our review provides a quick reference to such techniques given the medical problem and the modalities of the associated medical images. We summarize fundamental techniques and readily available visualization tools to help medical experts better understand and utilize medical imaging data. This paper can contribute to the joint effort of the medical and visualization communities to advance precision medicine.
Affiliation(s)
- Liang Zhou
  - National Institute of Health Data Science, Peking University, Beijing, China
- Mengjie Fan
  - National Institute of Health Data Science, Peking University, Beijing, China
- Charles Hansen
  - Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, USA
- Chris R. Johnson
  - Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, USA
- Daniel Weiskopf
  - Visualization Research Center (VISUS), University of Stuttgart, Stuttgart, Germany
5. Jadhav S, Dmitriev K, Marino J, Barish M, Kaufman AE. 3D Virtual Pancreatography. IEEE Trans Vis Comput Graph 2022;28:1457-1468. PMID: 32870794. PMCID: PMC8884473. DOI: 10.1109/TVCG.2020.3020958.
Abstract
We present 3D virtual pancreatography (VP), a novel visualization procedure and application for non-invasive diagnosis and classification of pancreatic lesions, the precursors of pancreatic cancer. Currently, non-invasive screening is performed through visual inspection of 2D axis-aligned CT images, though the relevant features are often not clearly visible nor automatically detected. VP is an end-to-end visual diagnosis system that includes: a machine-learning-based automatic segmentation of the pancreatic gland and the lesions, a semi-automatic approach to extract the primary pancreatic duct, a machine-learning-based automatic classification of lesions into four prominent types, and specialized 3D and 2D exploratory visualizations of the pancreas, lesions, and surrounding anatomy. We combine volume rendering with pancreas- and lesion-centric visualizations and measurements for effective diagnosis. We designed VP through close collaboration with and feedback from expert radiologists, and evaluated it on multiple real-world CT datasets with various pancreatic lesions, including case studies examined by the expert radiologists.
6. Allgaier M, Neyazi B, Preim B, Saalfeld S. Distance and force visualisations for improved simulation of intracranial aneurysm clipping. Int J Comput Assist Radiol Surg 2021;16:1297-1304. PMID: 34053014. PMCID: PMC8295166. DOI: 10.1007/s11548-021-02413-1.
Abstract
Purpose: The treatment of cerebral aneurysms has shifted from microsurgical to endovascular therapy, but for some difficult aneurysm configurations, e.g. wide-neck aneurysms, microsurgical clipping remains better suited. From this combination of limited intervention numbers and case complexity arises the need for improved training possibilities for young neurosurgeons.
Method: We designed and implemented a clipping simulation that requires only a monoscopic display, mouse, and keyboard. After a virtual craniotomy, the user can apply a clip to the aneurysm, which is deformed based on a mass-spring model. Additionally, we implemented concepts for visualising distances as well as force. The distance visualisations aim to enhance spatial relations, improving navigation of the clip; the force visualisations display the force exerted on the vessel surface by the applied clip. The developed concepts include colour maps and visualisations based on rays, single objects, and glyphs.
Results: The concepts were quantitatively evaluated via an online survey and qualitatively evaluated by a neurosurgeon. For force visualisation, a colour map proved the most appropriate concept. The necessity of distance visualisations became apparent, as the expert was unable to estimate distances and properly navigate the clip without them; the distance rays were the only concept that supported navigation appropriately.
Conclusion: This easily accessible surgical training simulation for aneurysm clipping benefits from visualisation of distances and simulated forces.
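The mass-spring deformation mentioned in the Method section can be sketched as a single explicit-Euler time step over a set of vertices connected by springs. This is a generic textbook formulation, not the authors' implementation; the stiffness, damping, and time-step values are illustrative assumptions.

```python
import numpy as np

def mass_spring_step(pos, vel, springs, rest, k=10.0, mass=1.0,
                     damping=0.5, dt=0.01):
    """One explicit-Euler step of a mass-spring surface model.

    pos, vel: (N, 3) vertex positions and velocities.
    springs:  (M, 2) index pairs of connected vertices.
    rest:     (M,) spring rest lengths.
    """
    force = np.zeros_like(pos)
    i, j = springs[:, 0], springs[:, 1]
    d = pos[j] - pos[i]                                # spring vectors
    length = np.linalg.norm(d, axis=1)
    # Hooke's law along each spring; guard against zero-length springs.
    f = k * (length - rest)[:, None] * d / np.maximum(length, 1e-12)[:, None]
    np.add.at(force, i, f)                             # pull i toward j
    np.add.at(force, j, -f)                            # and j toward i
    force -= damping * vel                             # simple velocity damping
    vel = vel + dt * force / mass
    pos = pos + dt * vel
    return pos, vel
```

Applying a clip would amount to imposing displacement constraints on the clipped vertices and stepping the remaining ones until the system settles.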
Affiliation(s)
- Mareen Allgaier
  - Faculty of Computer Science, Otto-von-Guericke University Magdeburg, Universitätsplatz 2, 39106, Magdeburg, Germany
- Belal Neyazi
  - University Hospital Magdeburg, Leipziger Str. 44, 39120, Magdeburg, Germany
- Bernhard Preim
  - Faculty of Computer Science, Otto-von-Guericke University Magdeburg, Universitätsplatz 2, 39106, Magdeburg, Germany
- Sylvia Saalfeld
  - Faculty of Computer Science, Otto-von-Guericke University Magdeburg, Universitätsplatz 2, 39106, Magdeburg, Germany
  - Forschungscampus STIMULATE, Magdeburg, Germany
7. Liu J, Aviles-Rivero AI, Ji H, Schönlieb CB. Rethinking medical image reconstruction via shape prior, going deeper and faster: Deep joint indirect registration and reconstruction. Med Image Anal 2020;68:101930. PMID: 33378731. DOI: 10.1016/j.media.2020.101930.
Abstract
Indirect image registration is a promising technique for improving image reconstruction quality by providing a shape prior for the reconstruction task. In this paper, we propose a novel hybrid method that seeks to reconstruct high-quality images from few measurements at low computational cost. To this end, our framework intertwines the indirect registration and reconstruction tasks in a single functional. It is based on two major novelties. Firstly, we introduce a deep-net model to solve the indirect registration problem, in which the inversion and registration mappings are recurrently connected through a fixed-point iteration based on sparse optimisation. Secondly, we introduce specific inversion blocks that use the explicit physical forward operator to map the acquired measurements to the image reconstruction, along with registration blocks based on deep nets to predict the registration parameters and warp transformation accurately and efficiently. We demonstrate, through extensive numerical and visual experiments, that our framework significantly outperforms classic reconstruction schemes and other bi-task methods in terms of both image quality and computational time. Finally, we show the generalisation capabilities of our approach by demonstrating its performance on fast Magnetic Resonance Imaging (MRI), sparse-view computed tomography (CT), and low-dose CT with measurements far below the Nyquist limit.
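As a rough, hypothetical sketch of how a fixed-point scheme can intertwine reconstruction with a shape prior, the block below runs a gradient-type iteration on a linear forward operator with a quadratic prior-attachment term. The paper's learned inversion and registration blocks are replaced here by an explicit gradient and a fixed prior vector; the step size and weight are illustrative assumptions.

```python
import numpy as np

def joint_fixed_point(A, y, prior, tau=0.5, lam=0.1, iters=500):
    """Gradient-type fixed-point iteration coupling reconstruction to a prior.

    Minimises 0.5*||A x - y||^2 + 0.5*lam*||x - prior||^2 via
        x_{k+1} = x_k - tau * (A^T (A x_k - y) + lam * (x_k - prior)).
    Assumes A is scaled to unit spectral norm so the default step is stable.
    In the paper, learned inversion and registration blocks play the role of
    this explicit gradient and of the fixed `prior` vector.
    """
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y) + lam * (x - prior)
        x = x - tau * grad
    return x
```

When the prior agrees with the measurements (as in the test below), the iteration recovers the underlying signal exactly; a mismatched prior instead biases the reconstruction toward the registered shape.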
Affiliation(s)
- Jiulong Liu
  - Department of Mathematics, National University of Singapore, Singapore (https://github.com/jiulongliu/Deep-Joint-Indirect-Registration-and-Reconstruction)
- Hui Ji
  - Department of Mathematics, National University of Singapore, Singapore
8. Jonsson D, Steneteg P, Sunden E, Englund R, Kottravel S, Falk M, Ynnerman A, Hotz I, Ropinski T. Inviwo - A Visualization System with Usage Abstraction Levels. IEEE Trans Vis Comput Graph 2020;26:3241-3254. PMID: 31180858. DOI: 10.1109/TVCG.2019.2920639.
Abstract
The complexity of today's visualization applications demands specific visualization systems tailored for the development of such applications. Frequently, these systems utilize levels of abstraction to improve the development process, for instance by providing a data flow network editor. Unfortunately, these abstractions result in several issues that need to be circumvented through an abstraction-centered system design. Often, a high level of abstraction hides low-level details and makes it difficult to directly access the underlying computing platform, which would be important for achieving optimal performance. We therefore propose a layer structure, developed for modern and sustainable visualization systems, that allows developers to interact with all contained abstraction levels. We refer to these interaction capabilities as usage abstraction levels, since we target application developers with various levels of experience. We formulate the requirements for such a system, derive the desired architecture, and present how the concepts have been realized, as an example, within the Inviwo visualization system. Furthermore, we address several specific challenges that arise during the realization of such a layered architecture, such as communication between different computing platforms, performance-centered encapsulation, and layer-independent development supported by cross-layer documentation and debugging capabilities.
9. Visual Analytics for the Representation, Exploration, and Analysis of High-Dimensional, Multi-faceted Medical Data. Adv Exp Med Biol 2019;1138:137-162. PMID: 31313263. DOI: 10.1007/978-3-030-14227-8_10.
Abstract
Medicine is among the research fields with the most significant impact on humans and their health. For decades, medicine has maintained a tight coupling with the visualization domain, proving the importance of developing visualization techniques designed exclusively for this discipline. However, medical data is steadily increasing in complexity with the appearance of heterogeneous, multi-modal, multi-parametric, cohort or population, and uncertain data. The field of Visual Analytics has emerged to deal with this kind of complex data. In this chapter, we discuss the many dimensions and facets of medical data. Based on this classification, we provide a general overview of state-of-the-art visualization systems and solutions dealing with high-dimensional, multi-faceted data. Our particular focus is on multi-modal, multi-parametric data, on data from cohort or population studies, and on uncertain data, especially with respect to Visual Analytics applications for the representation, exploration, and analysis of high-dimensional, multi-faceted medical data.
10. Sokooti H, Saygili G, Glocker B, Lelieveldt BPF, Staring M. Quantitative error prediction of medical image registration using regression forests. Med Image Anal 2019;56:110-121. PMID: 31226661. DOI: 10.1016/j.media.2019.05.005.
Abstract
Predicting registration error can be useful for evaluating registration procedures, which is important for the adoption of registration techniques in the clinic, and quantitative error prediction can also help improve registration quality. The task is demanding due to the lack of ground truth in medical images. This paper proposes a new automatic method to predict registration error quantitatively, applied to chest CT scans. A random regression forest is utilized to predict the registration error locally. The forest is built with features related to the transformation model and features related to the dissimilarity after registration, and it is trained and tested using manually annotated corresponding points between pairs of chest CT scans in two experiments: SPREAD (trained and tested on SPREAD) and inter-database (including the three databases SPREAD, DIR-Lab-4DCT, and DIR-Lab-COPDgene). The mean absolute errors of regression are 1.07 ± 1.86 mm and 1.76 ± 2.59 mm for the SPREAD and inter-database experiments, respectively. The overall accuracy of classification into three classes (correct, poor, and wrong registration) is 90.7% and 75.4%, respectively. The good performance of the proposed method enables important applications such as automatic quality control in large-scale image analysis.
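The evaluation step described above (continuous error regression, then a three-class quality assessment) can be sketched with two small helpers. The 3 mm and 6 mm class cut-offs below are illustrative assumptions, not the thresholds used in the paper.

```python
def quality_class(error_mm, t_poor=3.0, t_wrong=6.0):
    """Map a predicted local registration error (in mm) to a quality label.

    The cut-offs are hypothetical: errors below t_poor count as 'correct',
    between t_poor and t_wrong as 'poor', and above t_wrong as 'wrong'.
    """
    if error_mm < t_poor:
        return "correct"
    if error_mm < t_wrong:
        return "poor"
    return "wrong"

def mean_absolute_error(pred, truth):
    """MAE between predicted and annotated errors at corresponding points."""
    return sum(abs(p - t) for p, t in zip(pred, truth)) / len(pred)
```

In the paper's setup, the regression forest supplies the per-point error estimates, and the class labels are derived from them for quality control.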
Affiliation(s)
- Hessam Sokooti
  - Leiden University Medical Center, Leiden, the Netherlands
- Gorkem Saygili
  - Leiden University Medical Center, Leiden, the Netherlands
- Boudewijn P. F. Lelieveldt
  - Leiden University Medical Center, Leiden, the Netherlands
  - Delft University of Technology, Delft, the Netherlands
- Marius Staring
  - Leiden University Medical Center, Leiden, the Netherlands
  - Delft University of Technology, Delft, the Netherlands
11. Smit N, Bruckner S. Towards Advanced Interactive Visualization for Virtual Atlases. Adv Exp Med Biol 2019;1156:85-96. PMID: 31338779. DOI: 10.1007/978-3-030-19385-0_6.
Abstract
An atlas is generally defined as a bound collection of tables, charts, or illustrations describing a phenomenon. In an anatomical atlas, for example, a collection of representative illustrations and text describes anatomy for the purpose of communicating anatomical knowledge. The atlas serves as a reference frame for comparing and integrating data from different sources by spatially or semantically relating collections of drawings, imaging data, and/or text. In the field of medical image processing, atlas information is often constructed from a collection of regions of interest, based on medical images annotated by domain experts. Such an atlas may be employed, for example, for automatic segmentation of medical imaging data. The combination of interactive visualization techniques with atlas information opens up new possibilities for content creation, curation, and navigation in virtual atlases. With interactive visualization of atlas information, students can inspect and explore anatomical atlases in ways that were not possible with the traditional book format, such as viewing the illustrations from other viewpoints. With advanced interaction techniques, it becomes possible to query the data that forms the basis of the atlas, empowering researchers to access a wealth of information in new ways. So far, atlas-based visualization has been employed mainly for medical education as well as biological research. In this survey, we provide an overview of current digital biomedical atlas tasks and applications and summarize relevant visualization techniques. We discuss recent approaches for providing next-generation visual interfaces to navigate atlas data that go beyond common text-based search and hierarchical lists. Finally, we reflect on open challenges and opportunities for the next steps in interactive atlas visualization.
Affiliation(s)
- Noeska Smit
  - Department of Informatics, University of Bergen, Bergen, Norway
  - Mohn Medical Imaging and Visualization Centre, Haukeland University Hospital, Bergen, Norway
- Stefan Bruckner
  - Department of Informatics, University of Bergen, Bergen, Norway
12. Tang Z, Yap PT, Shen D. A New Multi-Atlas Registration Framework for Multimodal Pathological Images Using Conventional Monomodal Normal Atlases. IEEE Trans Image Process 2018;28. PMID: 30571622. PMCID: PMC6579720. DOI: 10.1109/TIP.2018.2884563.
Abstract
Using multi-atlas registration (MAR), information carried by atlases can be transferred onto a new input image for tasks such as region-of-interest (ROI) segmentation and anatomical landmark detection. Conventional atlases used in MAR methods are monomodal and contain only normal anatomical structures, so the majority of MAR methods cannot handle multimodal pathological input images, which are often collected in routine image-based diagnosis. Registering monomodal atlases with normal appearance to multimodal pathological images involves two major problems: (1) missing imaging modalities in the monomodal atlases, and (2) influence from pathological regions. In this paper, we propose a new MAR framework to tackle these problems. In this framework, deep-learning-based image synthesizers are applied to synthesize multimodal normal atlases from conventional monomodal normal atlases. To reduce the influence of pathological regions, we further propose a multimodal low-rank approach to recover multimodal normal-looking images from multimodal pathological images. Finally, the multimodal normal atlases can be registered to the recovered multimodal images in a multi-channel way. We evaluate our MAR framework via brain ROI segmentation of multimodal tumor brain images. Owing to the utilization of multimodal information and the reduced influence of pathological regions, experimental results show that registration based on our method is more accurate and robust, leading to significantly improved brain ROI segmentation compared with state-of-the-art methods.
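Once the atlases are registered to the target (the hard part this framework addresses), transferring their labels reduces to label fusion. The block below sketches plain per-voxel majority voting over already-warped atlas label maps, a common MAR baseline rather than the paper's full framework.

```python
import numpy as np

def majority_vote_fusion(warped_labels):
    """Fuse label maps from registered atlases by per-voxel majority vote.

    warped_labels: sequence of integer label maps of identical shape,
    each already warped into the target image space; returns the label
    receiving the most votes at every voxel (ties go to the lowest label).
    """
    labels = np.asarray(warped_labels)
    values = np.unique(labels)
    # Count, for each candidate label, how many atlases vote for it per voxel.
    votes = np.stack([(labels == v).sum(axis=0) for v in values])
    return values[np.argmax(votes, axis=0)]
```

More elaborate fusion schemes weight each atlas's vote by local image similarity; the majority vote above is the simplest special case with uniform weights.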
Affiliation(s)
- Zhenyu Tang
  - Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
  - School of Computer Science and Technology, Anhui University
- Pew-Thian Yap
  - Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Dinggang Shen
  - Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27599, USA
  - Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
13. Asensio Romero L, Asensio Gómez M, Prats-Galino A, Juanes Méndez JA. 3D Models of Female Pelvis Structures Reconstructed and Represented in Combination with Anatomical and Radiological Sections. J Med Syst 2018;42:37. PMID: 29333592. DOI: 10.1007/s10916-018-0891-z.
Abstract
We present a computer program designed to visualize and interact with three-dimensional models of the main anatomical structures of the female pelvis. The models are reconstructed from serial corpse sections from the Visible Human Project of the United States National Library of Medicine and from serial high-resolution magnetic resonance sections. These three-dimensional structures can be represented in any spatial orientation, together with sectional corpse and magnetic resonance images in the three planes of space (axial, coronal, and sagittal), which facilitates anatomical understanding and the identification of the visceral structures of this body region. Few studies to date have analyzed the radiological anatomy of the female pelvis in detail using three-dimensional models together with sectional images, making use of open applications for representing virtual scenes on low-cost Windows® platforms. Our technological development allows observation of the main female pelvic viscera in three dimensions with a very intuitive graphical interface. This computer application represents an important training tool both for medical students and for specialists in gynecology, and serves as a preliminary step in planning pelvic floor surgery.
Affiliation(s)
- L. Asensio Romero
  - Department of Human Anatomy and Histology, School of Medicine, University of Salamanca, Salamanca, Spain
- M. Asensio Gómez
  - Department of Human Anatomy and Histology, School of Medicine, University of Salamanca, Salamanca, Spain
- A. Prats-Galino
  - Department of Human Anatomy and Embryology, School of Medicine, University of Barcelona, Barcelona, Spain
- J. A. Juanes Méndez
  - Department of Human Anatomy and Histology, School of Medicine, University of Salamanca, Salamanca, Spain