1. Chen X, Xie H, Li Z, Cheng G, Leng M, Wang FL. Information fusion and artificial intelligence for smart healthcare: a bibliometric study. Inf Process Manag 2023. [DOI: 10.1016/j.ipm.2022.103113]

2. Chao Z, Duan X, Jia S, Guo X, Liu H, Jia F. Medical image fusion via discrete stationary wavelet transform and an enhanced radial basis function neural network. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.108542]

3. Chao Z, Xu W. A New General Maximum Intensity Projection Technology via the Hybrid of U-Net and Radial Basis Function Neural Network. J Digit Imaging 2021; 34:1264-1278. [PMID: 34508300] [PMCID: PMC8432629] [DOI: 10.1007/s10278-021-00504-8]
Abstract
Maximum intensity projection (MIP) is a computer visualization method that projects three-dimensional spatial data onto a visualization plane. Depending on the purpose, a specific slab thickness and projection direction can be selected. The technique displays structures such as blood vessels, arteries, veins, and bronchi from different directions, giving doctors more intuitive and comprehensive views for diagnosing related diseases. However, with traditional projection, the details of a small projected target are not clearly visualized when the target differs little from its surroundings, which can lead to missed diagnosis or misdiagnosis. A technique that displays the angiogram more clearly is therefore urgently needed, yet, to the best of our knowledge, research in this area is scarce. To fill this gap, the present study proposes a new method based on a hybrid of a convolutional neural network (CNN) and a radial basis function neural network (RBFNN) to synthesize the projection image. We first adopt a U-Net to obtain the feature or enhanced images to be projected; the RBF neural network then performs further synthesis on these data; finally, the projection images are obtained. To increase the robustness of the proposed algorithm, three different types of experimental data were adopted: vascular projections of the brain, bronchial projections of the lung parenchyma, and vascular projections of the liver. In addition, radiologist evaluation and five classic image-definition metrics were used for analysis. Compared with traditional MIP technology and other network structures, the large amount of heterogeneous data and the superior experimental results demonstrate the versatility and robustness of the proposed method.
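
For orientation, the baseline operation this abstract builds on, conventional MIP, simply takes the per-ray maximum over a slab of slices along the projection axis. The sketch below is a minimal NumPy illustration of that baseline (array shapes, slab bounds, and axis are illustrative assumptions); it is not the authors' U-Net/RBFNN hybrid.

```python
import numpy as np

def mip(volume, axis=0, start=None, stop=None):
    """Conventional maximum intensity projection over an optional slab.

    volume      : 3-D NumPy array of intensities (e.g. a CT or MR volume).
    axis        : projection direction (0, 1, or 2).
    start, stop : slab bounds along `axis`; None means the full extent.
    """
    slicer = [slice(None)] * volume.ndim
    slicer[axis] = slice(start, stop)             # restrict to the chosen slab
    return volume[tuple(slicer)].max(axis=axis)   # per-ray maximum

# Example: project a 10-slice axial slab of a synthetic 64x128x128 volume.
vol = np.random.rand(64, 128, 128).astype(np.float32)
axial_mip = mip(vol, axis=0, start=20, stop=30)
print(axial_mip.shape)  # (128, 128)
```

Varying `start` and `stop` changes the slab thickness mentioned in the abstract.
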
Affiliations
- Zhen Chao: College of Artificial Intelligence and Big Data for Medical Sciences, Shandong First Medical University & Shandong Academy of Medical Sciences, Huaiyin District, 6699 Qingdao Road, Jinan, 250117, Shandong, China; Research Lab for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Department of Radiation Convergence Engineering, College of Health Science, Yonsei University, 1 Yonseidae-gil, Wonju, Gangwon, 26493, South Korea.
- Wenting Xu: Department of Radiation Convergence Engineering, College of Health Science, Yonsei University, 1 Yonseidae-gil, Wonju, Gangwon, 26493, South Korea.

4. Gao Y, Ma S, Liu J, Liu Y, Zhang X. Fusion of medical images based on salient features extraction by PSO optimized fuzzy logic in NSST domain. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102852]

5. Guo K, Li X, Zang H, Fan T. Multi-Modal Medical Image Fusion Based on FusionNet in YIQ Color Space. Entropy 2020; 22:e22121423. [PMID: 33348893] [PMCID: PMC7766984] [DOI: 10.3390/e22121423]
Abstract
To retain as much of the physiological information and as many key features of the source images as possible, improve the visual effect and clarity of the fused image, and reduce computation, a multi-modal medical image fusion framework based on feature reuse is proposed. The framework consists of intuitive fuzzy processing (IFP), a capture-image-details network (CIDN), fusion, and decoding. First, the membership function of the image is redefined to remove redundant features and obtain an image with complete features. Then, inspired by DenseNet, a new encoder is proposed to capture all of the medical information features in the source image. In the fusion layer, the weight of each feature map in the required fusion coefficient is calculated according to the trajectory of the feature map. Finally, the filtered medical information is concatenated and decoded to reproduce the required fused image. In the encoding and image-reconstruction networks, a mixed loss function of cross entropy and structural similarity is adopted to greatly reduce information loss during fusion. To assess performance, three sets of experiments were conducted on medical images of different grayscales and colors. The experimental results show that, compared with other algorithms, the proposed algorithm has advantages not only in detail and structure recognition but also in visual quality and time complexity.
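
As a point of reference for the color handling described above, the sketch below shows the standard RGB-to-YIQ round trip with a trivial pixel-wise maximum standing in for the paper's learned encoder and fusion layer; the fusion rule and image shapes are illustrative assumptions, not the authors' network.

```python
import numpy as np

# NTSC RGB -> YIQ transform (coefficients rounded); the inverse is computed numerically.
RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523,  0.311]])
YIQ2RGB = np.linalg.inv(RGB2YIQ)

def fuse_yiq(rgb_a, gray_b):
    """Fuse a color image (H, W, 3, values in [0, 1]) with a grayscale image (H, W)
    in YIQ space: replace the luminance (Y) channel by a simple pixel-wise maximum
    and keep the chrominance (I, Q) of the color source."""
    yiq = rgb_a @ RGB2YIQ.T                       # per-pixel RGB -> YIQ
    yiq[..., 0] = np.maximum(yiq[..., 0], gray_b) # placeholder fusion rule on Y only
    return np.clip(yiq @ YIQ2RGB.T, 0.0, 1.0)     # back to RGB

# Example: fuse a color functional image with a grayscale anatomical slice.
a = np.random.rand(256, 256, 3)
b = np.random.rand(256, 256)
fused = fuse_yiq(a, b)
```

Operating on Y only preserves the chrominance of the color source, which is the usual motivation for fusing in a luminance-chrominance space such as YIQ.
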
Affiliations
- Kai Guo: Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China; College of Computer Science and Technology, Jilin University, Changchun 130012, China.
- Xiongfei Li: Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China; College of Computer Science and Technology, Jilin University, Changchun 130012, China.
- Hongrui Zang: Information and Communication Company, State Grid Jilin Electric Power Co., Ltd., Changchun 130022, China.
- Tiehu Fan (correspondence; Tel.: +86-15590549925): College of Instrumentation and Electrical Engineering, Jilin University, Changchun 130012, China.

6. Zhang S, Liu PX, Zheng M, Shi W. A diffeomorphic unsupervised method for deformable soft tissue image registration. Comput Biol Med 2020; 120:103708. [PMID: 32217285] [DOI: 10.1016/j.compbiomed.2020.103708]
Abstract
Background and objectives: Image registration methods for deformable soft tissues use nonlinear transformations to align a pair of images precisely. When there is a large gray-scale difference or large deformation between the images to be registered, the deformation field tends to fold at some local voxels, which breaks the one-to-one mapping between the images and reduces the invertibility of the deformation field. To address this issue, a novel registration approach based on unsupervised learning is presented for deformable soft tissue image registration.
Methods: The approach consists of a registration network, a velocity field integration module, and a grid sampling module. The main contributions are: (1) a novel encoder-decoder network for estimating the stationary velocity field; (2) a Jacobian determinant based penalty term (Jacobian loss) that reduces folded voxels and improves the invertibility of the deformation field.
Results and conclusions: The experimental results show that a new pair of images can be accurately registered using the trained registration model. Compared with the conventional state-of-the-art method SyN, the invertibility of the deformation field, accuracy, and speed are all improved. Compared with the deep learning based method VoxelMorph, the proposed method improves the invertibility of the deformation field.
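
The Jacobian loss mentioned in the methods can be made concrete as a penalty on voxels where the determinant of the Jacobian of phi(x) = x + u(x) becomes non-positive. The sketch below is a minimal 2-D NumPy version of that idea; the paper works with a learned 3-D field and may weight or formulate the term differently.

```python
import numpy as np

def jacobian_folding_penalty(disp):
    """Mean penalty on folded pixels of a 2-D displacement field.

    disp : array of shape (H, W, 2); disp[..., 0] is the x-displacement u,
           disp[..., 1] is the y-displacement v. The mapping is
           phi(x) = x + u(x), so its Jacobian is I + grad(u), and
           det J = (1 + du/dx)(1 + dv/dy) - (du/dy)(dv/dx).
    Returns mean(max(0, -det J)), which is zero wherever the deformation
    is locally orientation-preserving (det J > 0).
    """
    du_dy, du_dx = np.gradient(disp[..., 0])   # axis 0 = y (rows), axis 1 = x (cols)
    dv_dy, dv_dx = np.gradient(disp[..., 1])
    det = (1.0 + du_dx) * (1.0 + dv_dy) - du_dy * dv_dx
    return float(np.mean(np.maximum(0.0, -det)))

# Example: score a synthetic displacement field.
u = 0.5 * np.random.randn(128, 128, 2)
print(jacobian_folding_penalty(u))
```
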
Affiliations
- Shuo Zhang: School of Mechanical, Electronic and Control Engineering, Beijing Jiaotong University, Beijing, 100044, PR China.
- Peter Xiaoping Liu: Department of Systems and Computer Engineering, Carleton University, Ottawa, ON K1S 5B6, Canada.
- Minhua Zheng: School of Mechanical, Electronic and Control Engineering, Beijing Jiaotong University, Beijing, 100044, PR China.
- Wen Shi: School of Mechanical, Electronic and Control Engineering, Beijing Jiaotong University, Beijing, 100044, PR China.

7. Multi-modality medical images fusion based on local-features fuzzy sets and novel sum-modified-Laplacian in non-subsampled shearlet transform domain. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2019.101724]

8. Chao Z, Kim D, Kim HJ. Multiplanar reconstruction with incomplete data via enhanced fuzzy radial basis function neural networks. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2019.101766]

9. Chao Z, Kim HJ. Slice interpolation of medical images using enhanced fuzzy radial basis function neural networks. Comput Biol Med 2019; 110:66-78. [PMID: 31129416] [DOI: 10.1016/j.compbiomed.2019.05.013]
Abstract
Volume data composed of complete slice images play an indispensable role in medical diagnosis, but system or human factors often lead to the loss of slice images. In recent years, various interpolation algorithms have been proposed to address this problem. Although these algorithms are effective, the interpolated images have shortcomings such as less accurate recovery and missing details. In this study, we propose a new method based on an enhanced fuzzy radial basis function neural network to improve interpolation performance. The network comprises an input layer (six input neurons), three hidden layers, and an output layer (one output neuron), and a patch-matching method is proposed to select the input variables of the network. Accordingly, the two intact adjacent slices between which interpolation is performed serve as the input, and the final output is obtained by applying the trained network. On four groups of medical images, the proposed method outperforms five other methods, achieving the highest image-similarity metric (ESSIM) values of 0.96, 0.95, 0.94, and 0.92 and the lowest mean squared difference (MSD) values of 35.5, 41.2, 50.9, and 47.1. In addition, for a whole-brain MRI volume experiment, the average MSD and ESSIM values of the proposed method and the other methods are (41.62, 0.95) and (57.13, 0.90), respectively. The results indicate that the proposed method is superior to the other methods.
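
For context, a common baseline for slice interpolation is a linear blend of the two neighbouring slices, and MSD is the plain mean squared difference against the held-out ground-truth slice. The sketch below illustrates both (array shapes and the linear rule are illustrative assumptions, not the paper's fuzzy RBFNN); the ESSIM metric is not reproduced here.

```python
import numpy as np

def linear_slice_interpolation(slice_prev, slice_next, t=0.5):
    """Baseline linear interpolation of a missing slice between two neighbours.
    t is the fractional position of the missing slice (0.5 = midway)."""
    return (1.0 - t) * slice_prev + t * slice_next

def mean_squared_difference(estimate, reference):
    """MSD between an interpolated slice and the ground-truth slice (lower is better)."""
    diff = estimate.astype(np.float64) - reference.astype(np.float64)
    return float(np.mean(diff ** 2))

# Example: drop a middle slice from a synthetic volume, re-estimate it, and score it.
vol = np.random.rand(8, 256, 256)
estimate = linear_slice_interpolation(vol[3], vol[5])
print(mean_squared_difference(estimate, vol[4]))
```
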
Affiliations
- Zhen Chao: Department of Radiation Convergence Engineering, College of Health Science, Yonsei University, 1 Yonseidae-gil, Wonju, Gangwon, 220-710, South Korea.
- Hee-Joung Kim: Department of Radiation Convergence Engineering, College of Health Science, Yonsei University, 1 Yonseidae-gil, Wonju, Gangwon, 220-710, South Korea; Department of Radiological Science, College of Health Science, Yonsei University, 1 Yonseidae-gil, Wonju, Gangwon, 220-710, South Korea.