1
Salg GA, Steinle V, Labode J, Wagner W, Studier-Fischer A, Reiser J, Farjallah E, Guettlein M, Albers J, Hilgenfeld T, Giese NA, Stiller W, Nickel F, Loos M, Michalski CW, Kauczor HU, Hackert T, Dullin C, Mayer P, Kenngott HG. Multiscale and multimodal imaging for three-dimensional vascular and histomorphological organ structure analysis of the pancreas. Sci Rep 2024; 14:10136. [PMID: 38698049] [PMCID: PMC11065985] [DOI: 10.1038/s41598-024-60254-9]
Abstract
Exocrine and endocrine pancreas are interconnected anatomically and functionally, with vasculature facilitating bidirectional communication. Our understanding of this network remains limited, largely because two-dimensional histology is rarely combined with three-dimensional imaging. In this study, a multiscale 3D-imaging process was used to analyze a porcine pancreas. Clinical computed tomography, digital volume tomography, micro-computed tomography and synchrotron-based propagation-based imaging were applied consecutively. Fields of view correlated inversely with attainable resolution, ranging from the whole-organism level down to capillary structures with a voxel edge length of 2.0 µm. Segmented vascular networks from 3D-imaging data were correlated with tissue sections stained by immunohistochemistry, revealing highly vascularized regions to be intra-islet capillaries of the islets of Langerhans. The generated 3D datasets allowed for qualitative and quantitative three-dimensional organ and vessel structure analysis. Beyond this study, the method shows potential for application across a wide range of patho-morphological analyses and might provide microstructural blueprints for biotissue engineering.
Affiliation(s)
- Gabriel Alexander Salg
- Clinic for General-, Visceral- and Transplantation Surgery, University Hospital Heidelberg, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany.
- Medical Faculty, Heidelberg University, Heidelberg, Germany.
- Verena Steinle
- Clinic for Diagnostic and Interventional Radiology, University Hospital Heidelberg, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- Division of Radiology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120, Heidelberg, Germany
- Jonas Labode
- Institute of Functional and Applied Anatomy, Hannover Medical School, Carl-Neuberg-Str. 1, 30625, Hannover, Germany
- Willi Wagner
- Clinic for Diagnostic and Interventional Radiology, University Hospital Heidelberg, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- Translational Lung Research Center, Member of the German Center for Lung Research, University of Heidelberg, Im Neuenheimer Feld 130.3, 69120, Heidelberg, Germany
- Alexander Studier-Fischer
- Clinic for General-, Visceral- and Transplantation Surgery, University Hospital Heidelberg, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- Johanna Reiser
- Clinic for General-, Visceral- and Transplantation Surgery, University Hospital Heidelberg, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- Clinic for Diagnostic and Interventional Radiology, University Hospital Heidelberg, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- Elyes Farjallah
- Clinic for General-, Visceral- and Transplantation Surgery, University Hospital Heidelberg, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- Michelle Guettlein
- Clinic for Diagnostic and Interventional Radiology, University Hospital Heidelberg, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- Jonas Albers
- Hamburg Unit, European Molecular Biology Laboratory, c/o Deutsches Elektronen-Synchrotron DESY Hamburg, Notkestr. 85, 22607, Hamburg, Germany
- Tim Hilgenfeld
- Department of Neuroradiology, University Hospital Heidelberg, Im Neuenheimer Feld 400, 69120, Heidelberg, Germany
- Nathalia A Giese
- Clinic for General-, Visceral- and Transplantation Surgery, University Hospital Heidelberg, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- Wolfram Stiller
- Clinic for Diagnostic and Interventional Radiology, University Hospital Heidelberg, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- Translational Lung Research Center, Member of the German Center for Lung Research, University of Heidelberg, Im Neuenheimer Feld 130.3, 69120, Heidelberg, Germany
- Felix Nickel
- Clinic for General-, Visceral- and Transplantation Surgery, University Hospital Heidelberg, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- Clinic for General-, Visceral- and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246, Hamburg, Germany
- Martin Loos
- Clinic for General-, Visceral- and Transplantation Surgery, University Hospital Heidelberg, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- Christoph W Michalski
- Clinic for General-, Visceral- and Transplantation Surgery, University Hospital Heidelberg, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- Hans-Ulrich Kauczor
- Clinic for Diagnostic and Interventional Radiology, University Hospital Heidelberg, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- Translational Lung Research Center, Member of the German Center for Lung Research, University of Heidelberg, Im Neuenheimer Feld 130.3, 69120, Heidelberg, Germany
- Thilo Hackert
- Clinic for General-, Visceral- and Transplantation Surgery, University Hospital Heidelberg, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- Clinic for General-, Visceral- and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246, Hamburg, Germany
- Christian Dullin
- Clinic for Diagnostic and Interventional Radiology, University Hospital Heidelberg, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- Translational Lung Research Center, Member of the German Center for Lung Research, University of Heidelberg, Im Neuenheimer Feld 130.3, 69120, Heidelberg, Germany
- Institute for Diagnostic and Interventional Radiology, University Medical Center Goettingen, Robert-Koch-Str. 40, Goettingen, Germany
- Translational Molecular Imaging, Max Planck Institute for Multidisciplinary Sciences, Hermann-Rein-Str. 3, Göttingen, Germany
- Philipp Mayer
- Clinic for Diagnostic and Interventional Radiology, University Hospital Heidelberg, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- Hannes Goetz Kenngott
- Clinic for General-, Visceral- and Transplantation Surgery, University Hospital Heidelberg, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
2
Banerjee S, Nysjö F, Toumpanakis D, Dhara AK, Wikström J, Strand R. Streamlining neuroradiology workflow with AI for improved cerebrovascular structure monitoring. Sci Rep 2024; 14:9245. [PMID: 38649692] [PMCID: PMC11035663] [DOI: 10.1038/s41598-024-59529-y]
Abstract
Radiological imaging to examine intracranial blood vessels is critical for preoperative planning and postoperative follow-up. Automated segmentation of cerebrovascular anatomy from Time-Of-Flight Magnetic Resonance Angiography (TOF-MRA) can provide radiologists with a more detailed and precise view of these vessels. This paper introduces a domain generalized artificial intelligence (AI) solution for volumetric monitoring of cerebrovascular structures from multi-center MRAs. Our approach utilizes a multi-task deep convolutional neural network (CNN) with a topology-aware loss function to learn voxel-wise segmentation of the cerebrovascular tree. We use Decorrelation Loss to achieve domain regularization for the encoder network and auxiliary tasks to provide additional regularization and enable the encoder to learn higher-level intermediate representations for improved performance. We compare our method to six state-of-the-art 3D vessel segmentation methods using retrospective TOF-MRA datasets from multiple private and public data sources scanned at six hospitals, with and without vascular pathologies. The proposed model achieved the best scores in all the qualitative performance measures. Furthermore, we have developed an AI-assisted Graphical User Interface (GUI) based on our research to assist radiologists in their daily work and establish a more efficient work process that saves time.
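The abstract names a Decorrelation Loss for domain regularization of the encoder but does not define it; one common formulation penalizes the off-diagonal entries of the feature covariance matrix so that channels carry non-redundant, domain-robust information. Below is a minimal NumPy sketch of that formulation, not necessarily the authors' exact definition (function name and shapes are illustrative):

```python
import numpy as np

def decorrelation_loss(features):
    """Penalize off-diagonal covariance between feature channels.

    features: (n_samples, n_channels) array of encoder activations.
    Returns the squared Frobenius norm of the off-diagonal part of
    the channel covariance matrix (zero when channels are uncorrelated).
    """
    centered = features - features.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / (features.shape[0] - 1)
    off_diag = cov - np.diag(np.diag(cov))
    return float(np.sum(off_diag ** 2))

# Uncorrelated channels give zero loss; duplicated channels are penalized.
print(decorrelation_loss(np.array([[1., 0.], [0., 1.], [-1., 0.], [0., -1.]])))
print(decorrelation_loss(np.array([[1., 1.], [-1., -1.]])))
```

In training, such a term would be added to the segmentation loss with a small weight, acting only on the encoder's intermediate representations.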
Affiliation(s)
- Subhashis Banerjee
- Department of Information Technology, Uppsala University, Uppsala, Sweden.
- Fredrik Nysjö
- Department of Information Technology, Uppsala University, Uppsala, Sweden
- Dimitrios Toumpanakis
- Department of Surgical Sciences, Neuroradiology, Uppsala University, Uppsala, Sweden
- Ashis Kumar Dhara
- Department of Electrical Engineering, National Institute of Technology Durgapur, Durgapur, India
- Johan Wikström
- Department of Surgical Sciences, Neuroradiology, Uppsala University, Uppsala, Sweden
- Robin Strand
- Department of Information Technology, Uppsala University, Uppsala, Sweden
3
Pal SC, Toumpanakis D, Wikstrom J, Ahuja CK, Strand R, Dhara AK. Multi-Level Residual Dual Attention Network for Major Cerebral Arteries Segmentation in MRA Toward Diagnosis of Cerebrovascular Disorders. IEEE Trans Nanobioscience 2024; 23:167-175. [PMID: 37486852] [DOI: 10.1109/tnb.2023.3298444]
Abstract
Segmentation of major brain vessels is very important for the diagnosis of cerebrovascular disorders and subsequent surgical planning. Vessel segmentation is an important preprocessing step for a wide range of algorithms for the automatic diagnosis or treatment of several vascular pathologies, and as such, it is valuable to have a well-performing vascular segmentation pipeline. In this article, we propose an end-to-end multiscale residual dual attention deep neural network for resilient major brain vessel segmentation. In the proposed network, the encoder and decoder blocks of the U-Net are replaced with multi-level atrous residual blocks to enhance the learning capability by increasing the receptive field to extract various semantic coarse- and fine-grained features. A dual attention block is incorporated in the bottleneck to perform effective multiscale information fusion and obtain the detailed structure of blood vessels. The methods were evaluated on the publicly available TubeTK data set. The proposed method outperforms state-of-the-art techniques with a Dice coefficient of 0.79 on the whole-brain prediction. The statistical and visual assessments indicate that the proposed network is robust to outliers and maintains higher consistency in vessel continuity than the traditional U-Net and its variations.
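The whole-brain score above is the Dice similarity coefficient, the standard overlap measure for binary segmentations: 2|A∩B| / (|A| + |B|). A minimal sketch (illustrative arrays, not the paper's data):

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Half-overlapping masks score 0.5.
print(dice(np.array([1, 1, 0, 0]), np.array([0, 1, 1, 0])))
```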
4
Zhang H, Zhu L, Zhang Q, Wang Y, Song A. Online view enhancement for exploration inside medical volumetric data using virtual reality. Comput Biol Med 2023; 163:107217. [PMID: 37450968] [DOI: 10.1016/j.compbiomed.2023.107217]
Abstract
BACKGROUND AND OBJECTIVE: Medical image visualization is an essential tool for conveying anatomical information. Ray-casting-based volume rendering is commonly used for generating visualizations of raw medical images. However, exposing a target area inside the skin often requires manual tuning of transfer functions or segmentation of original images, as preset parameters in volume rendering may not work well for arbitrary scanned data. This process is tedious and unnatural. To address this issue, we propose a volume visualization system that enhances the view inside the skin, enabling flexible exploration of medical volumetric data using virtual reality. METHODS: In our proposed system, we design a virtual reality interface that allows users to walk inside the data. We introduce a view-dependent occlusion weakening method based on geodesic distance transform to support this interaction. By combining these methods, we develop a virtual reality system with intuitive interactions, facilitating online view enhancement for medical data exploration and annotation inside the volume. RESULTS: Our rendering results demonstrate that the proposed occlusion weakening method effectively weakens obstacles while preserving the target area. Furthermore, comparative analysis with other alternative solutions highlights the advantages of our method in virtual reality. We conducted user studies to evaluate our system, including area annotation and line drawing tasks. The results showed that our method with enhanced views achieved 47.73% and 35.29% higher accuracy compared to the group with traditional volume rendering. Additionally, subjective feedback from medical experts further supported the effectiveness of the designed interactions in virtual reality. CONCLUSIONS: We successfully address the occlusion problems in the exploration of medical volumetric data within a virtual reality environment. Our system allows for flexible integration of scanned medical volumes without requiring extensive manual preprocessing. The results of our user studies demonstrate the feasibility and effectiveness of walk-in interaction for medical data exploration.
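The occlusion-weakening step is built on a geodesic distance transform, i.e. a distance that accumulates a traversal cost through the data rather than straight-line distance. A minimal 2D sketch using Dijkstra's algorithm over a 4-connected grid (the cost map and connectivity are illustrative; the paper's view-dependent weighting is not reproduced here):

```python
import heapq
import numpy as np

def geodesic_distance(cost, seed):
    """Geodesic distance transform on a 2D grid with 4-connectivity.

    cost: non-negative per-cell traversal cost; seed: (row, col) start cell.
    Returns the minimal accumulated cost from the seed to every cell.
    """
    dist = np.full(cost.shape, np.inf)
    dist[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist[r, c]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < cost.shape[0] and 0 <= nc < cost.shape[1]:
                nd = d + cost[nr, nc]  # pay the cost of the entered cell
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return dist

# Uniform cost reduces to Manhattan distance from the seed.
d = geodesic_distance(np.ones((3, 3)), (0, 0))
print(d)
```

In an occlusion-weakening setting, voxels with large geodesic distance from the viewer-to-target path would have their opacity attenuated.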
Affiliation(s)
- Hongkun Zhang
- State Key Laboratory of Digital Medical Engineering, Jiangsu Key Lab of Remote Measurement and Control, School of Instrument Science and Engineering, Southeast University, Nanjing, Jiangsu, PR China
- Lifeng Zhu
- State Key Laboratory of Digital Medical Engineering, Jiangsu Key Lab of Remote Measurement and Control, School of Instrument Science and Engineering, Southeast University, Nanjing, Jiangsu, PR China
- Yunhai Wang
- Department of Computer Science, Shandong University, Shandong, PR China
- Aiguo Song
- State Key Laboratory of Digital Medical Engineering, Jiangsu Key Lab of Remote Measurement and Control, School of Instrument Science and Engineering, Southeast University, Nanjing, Jiangsu, PR China
5
Weng W, Ding H, Bai J, Zhou W, Wang G. Cerebrovascular Segmentation in Phase-Contrast Magnetic Resonance Angiography by a Radon Projection Composition Network. Comput Med Imaging Graph 2023; 107:102228. [PMID: 37054491] [DOI: 10.1016/j.compmedimag.2023.102228]
Abstract
Cerebrovascular segmentation based on phase-contrast magnetic resonance angiography (PC-MRA) provides patient-specific intracranial vascular structures for neurosurgery planning. However, the complex topology and spatial sparsity of the vasculature make the task challenging. Inspired by computed tomography reconstruction, this paper proposes a Radon Projection Composition Network (RPC-Net) for cerebrovascular segmentation in PC-MRA, aiming to enhance the distribution probability of vessels and fully capture vascular topological information. Multi-directional Radon projections of the images are introduced, and a two-stream network is used to learn features of the 3D images and projections. The projection-domain features are remapped to the 3D image domain by a filtered back-projection transform to obtain joint image-projection features for predicting vessel voxels. A four-fold cross-validation experiment was performed on a local dataset containing 128 PC-MRA scans. The average Dice similarity coefficient, precision and recall of the RPC-Net reached 86.12%, 85.91% and 86.50%, respectively, while the average completeness and validity of the vessel structure were 85.50% and 92.38%, respectively. The proposed method outperformed existing methods, with especially significant improvement in the extraction of small and low-intensity vessels. Moreover, the applicability of the segmentation to electrode trajectory planning was also validated. The results demonstrate that RPC-Net achieves accurate and complete cerebrovascular segmentation and has potential applications in assisting neurosurgical preoperative planning.
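The forward Radon projection at the core of RPC-Net integrates image intensity along parallel lines at multiple angles. A minimal nearest-neighbor NumPy sketch for 2D images (the paper works on 3D volumes and also applies filtered back-projection remapping, which is not reproduced here):

```python
import numpy as np

def radon(image, angles_deg):
    """Naive Radon transform of a square 2D image.

    For each angle, the image is resampled on a grid rotated about its
    center (nearest-neighbor) and summed along columns, giving one
    parallel-beam projection per angle.
    """
    n = image.shape[0]  # assumes a square image
    center = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    sinogram = np.zeros((len(angles_deg), n))
    for i, ang in enumerate(angles_deg):
        t = np.deg2rad(ang)
        xr = np.cos(t) * (xs - center) - np.sin(t) * (ys - center) + center
        yr = np.sin(t) * (xs - center) + np.cos(t) * (ys - center) + center
        xi = np.clip(np.rint(xr).astype(int), 0, n - 1)
        yi = np.clip(np.rint(yr).astype(int), 0, n - 1)
        rotated = image[yi, xi]
        sinogram[i] = rotated.sum(axis=0)  # integrate along one direction
    return sinogram

# A single bright voxel projects to a single bright bin at every angle.
img = np.zeros((4, 4))
img[1, 2] = 1.0
print(radon(img, [0.0, 90.0]))
```

Projecting a sparse vessel volume in this way concentrates vessel mass into a few sinogram bins, which is what makes the projection domain useful as an auxiliary representation.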
Affiliation(s)
- Wenhai Weng
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Hui Ding
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Jianjun Bai
- Department of Epilepsy Center, Tsinghua University Yuquan Hospital, No.5 Shijingshan Road, Beijing 100049, China
- Wenjing Zhou
- Department of Epilepsy Center, Tsinghua University Yuquan Hospital, No.5 Shijingshan Road, Beijing 100049, China
- Guangzhi Wang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
6
Miao R. Emotion Analysis and Opinion Monitoring of Social Network Users Under Deep Convolutional Neural Network. Journal of Global Information Management 2023. [DOI: 10.4018/jgim.319309]
Abstract
With the development of the internet, the user behavior and emotional characteristics behind social networks have attracted scholars' attention. Meanwhile, identifying user emotion can promote the development of mobile communication technology and the industrialization of network intelligence. On this basis, this work explores the emotions of social network users and analyzes public comments through the posts of social network users. After 100 rounds of training, the F1 score of the BiLSTM algorithm reaches 97.32% and its loss falls to 1.33%, reducing the impact of the loss on emotion recognition. The exploration is significant for analyzing the emotional behavior of social network users and provides a reference for the intelligent and systematic development of internet social models and information management.
Affiliation(s)
- Ruomu Miao
- School of Media and Communication, Shanghai Jiao Tong University, China
7
Afzal S, Ghani S, Hittawe MM, Rashid SF, Knio OM, Hadwiger M, Hoteit I. Visualization and Visual Analytics Approaches for Image and Video Datasets: A Survey. ACM Trans Interact Intell Syst 2023. [DOI: 10.1145/3576935]
Abstract
Image and video data analysis has become an increasingly important research area with applications in different domains such as security surveillance, healthcare, augmented and virtual reality, video and image editing, activity analysis and recognition, synthetic content generation, distance education, telepresence, remote sensing, sports analytics, art, non-photorealistic rendering, search engines, and social media. Recent advances in Artificial Intelligence (AI) and particularly deep learning have sparked new research challenges and led to significant advancements, especially in image and video analysis. These advancements have also resulted in significant research and development in other areas such as visualization and visual analytics, and have created new opportunities for future lines of research. In this survey paper, we present the current state of the art at the intersection of visualization and visual analytics, and image and video data analysis. We categorize the visualization papers included in our survey based on different taxonomies used in visualization and visual analytics research. We review these papers in terms of task requirements, tools, datasets, and application areas. We also discuss insights based on our survey results, trends and patterns, the current focus of visualization research, and opportunities for future research.
Affiliation(s)
- Shehzad Afzal
- King Abdullah University of Science & Technology, Saudi Arabia
- Sohaib Ghani
- King Abdullah University of Science & Technology, Saudi Arabia
- Omar M Knio
- King Abdullah University of Science & Technology, Saudi Arabia
- Markus Hadwiger
- King Abdullah University of Science & Technology, Saudi Arabia
- Ibrahim Hoteit
- King Abdullah University of Science & Technology, Saudi Arabia
8
Wang Q, Chen Z, Wang Y, Qu H. A Survey on ML4VIS: Applying Machine Learning Advances to Data Visualization. IEEE Trans Vis Comput Graph 2022; 28:5134-5153. [PMID: 34437063] [DOI: 10.1109/tvcg.2021.3106142]
Abstract
Inspired by the great success of machine learning (ML), researchers have applied ML techniques to visualizations to achieve better design, development, and evaluation of visualizations. This branch of studies, known as ML4VIS, has gained increasing research attention in recent years. To successfully adapt ML techniques for visualizations, a structured understanding of the integration of ML4VIS is needed. In this article, we systematically survey 88 ML4VIS studies, aiming to answer two motivating questions: "what visualization processes can be assisted by ML?" and "how ML techniques can be used to solve visualization problems?" This survey reveals seven main processes where the employment of ML techniques can benefit visualizations: Data Processing4VIS, Data-VIS Mapping, Insight Communication, Style Imitation, VIS Interaction, VIS Reading, and User Profiling. The seven processes are related to existing visualization theoretical models in an ML4VIS pipeline, aiming to illuminate the role of ML-assisted visualization in general visualizations. Meanwhile, the seven processes are mapped into main learning tasks in ML to align the capabilities of ML with the needs of visualization. Current practices and future opportunities of ML4VIS are discussed in the context of the ML4VIS pipeline and the ML-VIS mapping. While more studies are still needed in the area of ML4VIS, we hope this article can provide a stepping-stone for future exploration. A web-based interactive browser of this survey is available at https://ml4vis.github.io.
9
Design and Implementation of a Multidimensional Visualization Reconstruction System for Old Urban Spaces Based on Neural Networks. Comput Intell Neurosci 2022; 2022:4253128. [PMID: 35694601] [PMCID: PMC9184188] [DOI: 10.1155/2022/4253128]
Abstract
This article presents an in-depth study and analysis of the construction of a convolutional neural network model and multidimensional visualization system for old urban spaces, and proposes the design of a multidimensional visualization reconstruction system for old urban space based on a neural network. It quantitatively analyzes the essential spatial attributes of urban shadow areas, as nodes of the overall urban dynamic network, along three dimensions (spatial connection strength, spatial connection distance, and spatial connection direction), summarizes the characteristics of old urban spatial structure from the perspective of a dynamic network, and then proposes a model of old urban spatial design from that perspective. A shallow network structure is used to reduce the parameters learned by the reconfigurable convolutional neural network from the data sets, so that the model learns more general features. Where data sets are small, data augmentation is used to expand their size and improve the recognition accuracy of the reconfigurable convolutional neural network. A real-time update method for multidimensional data visualization in big-data scenarios is proposed and implemented to reduce the network load and latency caused by charts of multidimensional data changes, reduce the data error rate, and maintain system stability in concurrent old-urban-space scenarios.
10
Bai R, Liu X, Jiang S, Sun H. Deep Learning Based Real-Time Semantic Segmentation of Cerebral Vessels and Cranial Nerves in Microvascular Decompression Scenes. Cells 2022; 11:1830. [PMID: 35681525] [PMCID: PMC9180010] [DOI: 10.3390/cells11111830]
Abstract
Automatic extraction of cerebral vessels and cranial nerves has important clinical value in the treatment of trigeminal neuralgia (TGN) and hemifacial spasm (HFS). However, because of the great similarity between different cerebral vessels and between different cranial nerves, it is challenging to segment cerebral vessels and cranial nerves in real time on the basis of true-color microvascular decompression (MVD) images. In this paper, we propose a lightweight, fast semantic segmentation Microvascular Decompression Network (MVDNet) for MVD scenarios which achieves a good trade-off between segmentation accuracy and speed. Specifically, we designed a Light Asymmetric Bottleneck (LAB) module in the encoder to encode context features, and introduced a Feature Fusion Module (FFM) into the decoder to effectively combine high-level semantic features and underlying spatial details. The proposed network requires no pretrained model and has few parameters and a fast inference speed: MVDNet achieved 76.59% mIoU on the MVD test set with 0.72 M parameters and an inference speed of 137 FPS on a single GTX 2080Ti card.
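The headline 76.59% figure is mean intersection-over-union (mIoU): per-class IoU averaged over the classes present. A minimal sketch (illustrative label arrays, not the MVD data):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union over classes present in pred or target."""
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent from both masks: skip it
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

# class 0: IoU 1/2; class 1: IoU 2/3; mIoU = 7/12
print(mean_iou(np.array([0, 0, 1, 1]), np.array([0, 1, 1, 1]), 2))
```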
Affiliation(s)
- Ruifeng Bai
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Xinrui Liu
- University of Chinese Academy of Sciences, Beijing 100049, China
- Department of Neurosurgery, The First Hospital of Jilin University, Changchun 130021, China
- Shan Jiang
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- Haijiang Sun
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
11
Lu X. Deep Learning Based Emotion Recognition and Visualization of Figural Representation. Front Psychol 2022; 12:818833. [PMID: 35069400] [PMCID: PMC8770983] [DOI: 10.3389/fpsyg.2021.818833]
Abstract
This exploration aims to study the emotion recognition of speech and the graphic visualization of learners' expressions in an intelligent internet learning environment. After comparing the performance of several deep learning neural network algorithms, an improved convolutional neural network with Bi-directional Long Short-Term Memory (CNN-BiLSTM) algorithm is proposed, and a simulation experiment is conducted to verify its performance. The experimental results indicate that the accuracy of the CNN-BiLSTM algorithm reported here reaches 98.75%, at least 3.15% higher than that of the other algorithms. Besides, its recall is at least 7.13% higher than that of the other algorithms, and the recognition rate is not less than 90%. Evidently, the improved CNN-BiLSTM algorithm achieves good recognition results and provides a useful experimental reference for research on learners' emotion recognition and graphic visualization of expressions in an intelligent learning environment.
Affiliation(s)
- Xiaofeng Lu
- Department of Fine Arts, Shandong University of Arts, Jinan, China
12
Ghodrati V, Rivenson Y, Prosper A, de Haan K, Ali F, Yoshida T, Bedayat A, Nguyen KL, Finn JP, Hu P. Automatic segmentation of peripheral arteries and veins in ferumoxytol-enhanced MR angiography. Magn Reson Med 2021; 87:984-998. [PMID: 34611937] [DOI: 10.1002/mrm.29026]
Abstract
PURPOSE: To automate the segmentation of the peripheral arteries and veins in the lower extremities based on ferumoxytol-enhanced MR angiography (FE-MRA). METHODS: Our automated pipeline has 2 sequential stages. In the first stage, we used a 3D U-Net with local attention gates, which was trained based on a combination of the Focal Tversky loss with region mutual loss under a deep supervision mechanism to segment the vasculature from the high-resolution FE-MRA datasets. In the second stage, we used time-resolved images to separate the arteries from the veins. Because the ultimate segmentation quality of the arteries and veins relies on the performance of the first stage, we thoroughly evaluated the different aspects of the segmentation network and compared its performance in blood vessel segmentation with currently accepted state-of-the-art networks, including Volumetric-Net, DeepVesselNet-FCN, and Uception. RESULTS: We achieved a competitive F1 = 0.8087 and recall = 0.8410 for blood vessel segmentation compared with F1 = (0.7604, 0.7573, 0.7651) and recall = (0.7791, 0.7570, 0.7774) obtained with Volumetric-Net, DeepVesselNet-FCN, and Uception. For the artery and vein separation stage, we achieved F1 = (0.8274/0.7863) in the calf region, which is the most challenging region in peripheral arteries and veins segmentation. CONCLUSION: Our pipeline is capable of fully automatic vessel segmentation based on FE-MRA without need for human interaction in <4 min. This method improves upon manual segmentation by radiologists, which routinely takes several hours.
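The Focal Tversky loss named above generalizes Dice loss: the Tversky index TI = TP / (TP + α·FN + β·FP) weights false negatives and false positives asymmetrically, and a focal exponent γ concentrates training on hard examples. A minimal NumPy sketch with commonly used parameter values (illustrative, not necessarily the authors' settings):

```python
import numpy as np

def focal_tversky_loss(prob, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky loss for binary segmentation probabilities.

    TI = TP / (TP + alpha*FN + beta*FP); loss = (1 - TI)**gamma.
    alpha > beta penalizes false negatives more, favoring recall on
    thin structures such as small vessels.
    """
    prob = prob.ravel().astype(float)
    target = target.ravel().astype(float)
    tp = (prob * target).sum()
    fn = ((1 - prob) * target).sum()
    fp = (prob * (1 - target)).sum()
    ti = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1 - ti) ** gamma

# Perfect prediction -> loss 0; completely wrong prediction -> loss near 1.
print(focal_tversky_loss(np.array([1.0, 0.0]), np.array([1.0, 0.0])))
print(focal_tversky_loss(np.array([1.0, 0.0]), np.array([0.0, 1.0])))
```

In a training framework the same expression would operate on soft network outputs, so it stays differentiable.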
Affiliation(s)
- Vahid Ghodrati
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA
- Biomedical Physics Inter-Departmental Graduate Program, University of California, Los Angeles, California, USA
- Yair Rivenson
- Electrical and Computer Engineering Department, University of California, Los Angeles, California, USA
- Ashley Prosper
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA
- Kevin de Haan
- Electrical and Computer Engineering Department, University of California, Los Angeles, California, USA
- Fadil Ali
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA
- Biomedical Physics Inter-Departmental Graduate Program, University of California, Los Angeles, California, USA
- Takegawa Yoshida
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA
- Arash Bedayat
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA
- Kim-Lien Nguyen
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA
- Department of Medicine (Cardiology), David Geffen School of Medicine, University of California, Los Angeles, California, USA
- J Paul Finn
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA
- Peng Hu
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA
- Biomedical Physics Inter-Departmental Graduate Program, University of California, Los Angeles, California, USA
Collapse
|
13
|
Cai W, Wang Y, Gu L, Ji X, Shen Q, Ren X. Detection of 3D Arterial Centerline Extraction in Spiral CT Coronary Angiography. JOURNAL OF HEALTHCARE ENGINEERING 2021; 2021:2670793. [PMID: 34471506 PMCID: PMC8405334 DOI: 10.1155/2021/2670793] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/18/2021] [Revised: 08/08/2021] [Accepted: 08/12/2021] [Indexed: 11/17/2022]
Abstract
This paper presents an in-depth study and analysis of the 3D arterial centerline in spiral CT coronary angiography and constructs a technique for its detection and extraction. A first distance-transform pass locates the boundary of the original figure; a second pass computes the distance-transform values of all voxels and, according to those values, deletes unnecessary voxels, completing an initial contraction of the vascular region and reducing the computational cost of subsequent steps. The remaining voxels are then used to construct a maximal inscribed-sphere model and to find skeletal voxels that reflect the shape of the original figure. Finally, the skeletal lines are optimized over these initially extracted skeletal voxels using a dichotomy-like principle to obtain the final coronary artery centerline. Experimental evaluation shows that the algorithm extracts the coronary centerline accurately. The segmentation method is evaluated on the test set with two kinds of indexes: segmentation-result metrics, including the Dice coefficient, accuracy, specificity, and sensitivity; and clinical-diagnosis metrics, in which the segmentation result is refined for vessel diameter detection. The results were compared with physicians' labeling. In terms of network performance, the Dice coefficient was 0.89, the accuracy 98.36%, the sensitivity 93.36%, and the specificity 98.76%, comparing favorably with advanced methods proposed by previous authors.
In terms of clinical evaluation indexes, skeleton-line extraction and diameter calculation on the results of the proposed segmentation method yielded an absolute error of 0.382 and a relative error of 0.112 against the diameters of the labeled images, indicating that the method recovers vessel contours accurately. Coronary centerline extraction was then evaluated with and without fine-branch elimination, showing higher centerline accuracy after fine-branch elimination. The algorithm was also applied to the complete coronary artery tree, with good centerline extraction results for the full vascular tree.
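The two distance-transform passes described in the abstract can be sketched with SciPy on a toy 2D mask (a simplified sketch under stated assumptions: a clean binary vessel mask, a single `min_depth` threshold for the contraction; the maximal inscribed-sphere model and skeleton optimization steps are omitted):

```python
import numpy as np
from scipy import ndimage

def contract_vessel_region(mask, min_depth=1):
    """Boundary search and initial contraction of a binary vessel mask.

    The Euclidean distance transform assigns each foreground voxel its
    distance to the nearest background voxel. Voxels at distance 1 form
    the boundary of the original figure (pass 1); voxels whose distance
    does not exceed min_depth are deleted according to their distance
    values (pass 2), contracting the vascular region before skeletal
    voxels are extracted.
    """
    depth = ndimage.distance_transform_edt(mask)
    boundary = depth == 1
    contracted = depth > min_depth
    return boundary, depth, contracted

# Toy "vessel": a 5x5 foreground block inside a 7x7 slice.
mask = np.zeros((7, 7), dtype=bool)
mask[1:6, 1:6] = True
boundary, depth, contracted = contract_vessel_region(mask)
print(depth.max())            # → 3.0 (deepest voxel, at the center)
print(int(boundary.sum()))    # → 16 (outer ring of the 5x5 block)
print(int(contracted.sum()))  # → 9  (3x3 interior survives contraction)
```

The surviving interior voxels are exactly those whose inscribed spheres extend past the boundary, which is why the contraction is a cheap precursor to the maximal inscribed-sphere skeleton step.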
Collapse
Affiliation(s)
- Wenjuan Cai
- Changshu Hospital of Chinese Medicine, Changshu 215516, Jiangsu, China
- Yanzhe Wang
- Changshu Hospital of Chinese Medicine, Changshu 215516, Jiangsu, China
- Liya Gu
- Changshu Hospital of Chinese Medicine, Changshu 215516, Jiangsu, China
- Xuefeng Ji
- Changshu Hospital of Chinese Medicine, Changshu 215516, Jiangsu, China
- Qiusheng Shen
- Changshu Hospital of Chinese Medicine, Changshu 215516, Jiangsu, China
- Xiaogang Ren
- Changshu Hospital of Chinese Medicine, Changshu 215516, Jiangsu, China
- School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, Jiangsu, China
Collapse
|