1. Tang L, Hui Y, Yang H, Zhao Y, Tian C. Medical image fusion quality assessment based on conditional generative adversarial network. Front Neurosci 2022;16:986153. PMID: 36033610; PMCID: PMC9400712; DOI: 10.3389/fnins.2022.986153.
Abstract
Multimodal medical image fusion (MMIF) has been proven to effectively improve the efficiency of disease diagnosis and treatment. However, few works have explored dedicated evaluation methods for MMIF. This paper proposes a novel quality assessment method for MMIF based on conditional generative adversarial networks. First, with the mean opinion score (MOS) as the guiding condition, the feature information of the two source images is extracted separately through a dual-channel encoder-decoder. The features at different levels of the encoder-decoder are fed hierarchically into the self-attention feature block, a fusion strategy that self-identifies favorable features. Then, a discriminator is used to improve the fusion objective of the generator. Finally, we calculate the structural similarity index between each generated (fake) image and the true image, and the MOS corresponding to the maximum similarity is taken as the final quality assessment of the fused image. On the established MMIF database, the proposed method achieves state-of-the-art performance among the compared methods, with excellent agreement with subjective evaluations, indicating that the method is effective for quality assessment of medical fusion images.
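The selection step described above (pick the MOS whose generated image is most similar to the fused image under test) can be sketched as follows. This is an illustrative sketch, not the authors' implementation: `generate_fake` is a hypothetical stand-in for the trained conditional generator, and the standard windowed SSIM is reduced to a single global window.

```python
import numpy as np

def ssim_global(x, y, L=255.0, k1=0.01, k2=0.03):
    # Simplified single-window SSIM over the whole image
    # (the paper presumably uses the standard local-window SSIM).
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

def assess_quality(fused, generate_fake, candidate_mos):
    # generate_fake(mos) stands in for the trained conditional generator;
    # the MOS whose generated image is most similar to `fused` wins.
    scores = [ssim_global(generate_fake(m), fused) for m in candidate_mos]
    return candidate_mos[int(np.argmax(scores))]

# Toy check with a lookup-table "generator": the best MOS is the one whose
# fake image equals the fused image exactly (SSIM = 1).
rng = np.random.default_rng(0)
images = {m: rng.uniform(0, 255, (8, 8)) for m in (1, 2, 3, 4, 5)}
fused = images[4]
best = assess_quality(fused, lambda m: images[m], [1, 2, 3, 4, 5])
print(best)  # 4
```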
Affiliation(s)
- Lu Tang: School of Medical Imaging, Xuzhou Medical University, Xuzhou, China
- Yu Hui: School of Medical Imaging, Xuzhou Medical University, Xuzhou, China
- Hang Yang: School of Medical Imaging, Xuzhou Medical University, Xuzhou, China
- Yinghong Zhao: School of Medical Imaging, Xuzhou Medical University, Xuzhou, China
- Chuangeng Tian: School of Information and Electrical Engineering, Xuzhou University of Technology, Xuzhou, China
2. Goyal B, Dogra A, Khoond R, Al-Turjman F. An Efficient Medical Assistive Diagnostic Algorithm for Visualisation of Structural and Tissue Details in CT and MRI Fusion. Cognit Comput 2021. DOI: 10.1007/s12559-021-09958-y.
3. Lévêque L, Outtas M, Liu H, Zhang L. Comparative study of the methodologies used for subjective medical image quality assessment. Phys Med Biol 2021;66. PMID: 34225264; DOI: 10.1088/1361-6560/ac1157.
Abstract
Healthcare professionals increasingly view medical images and videos in their routine clinical practice, and in a wide variety of environments. Both the perception and interpretation of medical visual information, across all branches of practice or medical specialties (e.g. diagnostic, therapeutic, or surgical medicine), career stages, and practice settings (e.g. emergency care), appear to be critical for patient care. However, medical images and videos are not self-explanatory and therefore need to be interpreted by humans, i.e. medical experts. In addition, various types of degradations and artifacts may appear during image acquisition or processing, and consequently affect medical imaging data. Such distortions tend to impact viewers' quality of experience, as well as their clinical practice. It is accordingly essential to better understand how medical experts perceive the quality of visual content. Thankfully, progress has been made in the recent literature towards such understanding. In this article, we present an up-to-date state of the art of relatively recent (i.e. published within the last ten years) studies on the subjective quality assessment of medical images and videos, as well as research works using task-based approaches. Furthermore, we discuss the merits and drawbacks of the methodologies used, and we provide recommendations about experimental designs and statistical processes to evaluate the perception of medical images and videos for future studies, which could then be used to optimise the visual experience of image readers in real clinical practice. Finally, we tackle the issue of the lack of available annotated medical image and video quality databases, which appear to be indispensable for the development of new dedicated objective metrics.
Affiliation(s)
- Lucie Lévêque: Nantes Laboratory of Digital Sciences (LS2N), University of Nantes, Nantes, France
- Meriem Outtas: Department of Industrial Computer Science and Electronics, National Institute of Applied Sciences, Rennes, France
- Hantao Liu: School of Computer Science and Informatics, Cardiff University, Cardiff, United Kingdom
- Lu Zhang: Department of Industrial Computer Science and Electronics, National Institute of Applied Sciences, Rennes, France
4. Zhang YD, Dong Z, Wang SH, Yu X, Yao X, Zhou Q, Hu H, Li M, Jiménez-Mesa C, Ramirez J, Martinez FJ, Gorriz JM. Advances in multimodal data fusion in neuroimaging: Overview, challenges, and novel orientation. Inf Fusion 2020;64:149-187. PMID: 32834795; PMCID: PMC7366126; DOI: 10.1016/j.inffus.2020.07.006.
Abstract
Multimodal fusion in neuroimaging combines data from multiple imaging modalities to overcome the fundamental limitations of individual modalities. Neuroimaging fusion can achieve higher temporal and spatial resolution, enhance contrast, correct imaging distortions, and bridge physiological and cognitive information. In this study, we analyzed over 450 references from PubMed, Google Scholar, IEEE, ScienceDirect, Web of Science, and various sources published from 1978 to 2020. We provide a review that encompasses (1) an overview of current challenges in multimodal fusion, (2) the current medical applications of fusion for specific neurological diseases, (3) strengths and limitations of available imaging modalities, (4) fundamental fusion rules, (5) fusion quality assessment methods, and (6) the applications of fusion for atlas-based segmentation and quantification. Overall, multimodal fusion shows significant benefits in clinical diagnosis and neuroscience research. Widespread education and further research amongst engineers, researchers, and clinicians will benefit the field of multimodal neuroimaging.
Affiliation(s)
- Yu-Dong Zhang: School of Informatics, University of Leicester, Leicester, LE1 7RH, Leicestershire, UK; Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Zhengchao Dong: Department of Psychiatry, Columbia University, USA; New York State Psychiatric Institute, New York, NY 10032, USA
- Shui-Hua Wang: Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia; School of Architecture, Building and Civil Engineering, Loughborough University, Loughborough, LE11 3TU, UK; School of Mathematics and Actuarial Science, University of Leicester, LE1 7RH, UK
- Xiang Yu: School of Informatics, University of Leicester, Leicester, LE1 7RH, Leicestershire, UK
- Xujing Yao: School of Informatics, University of Leicester, Leicester, LE1 7RH, Leicestershire, UK
- Qinghua Zhou: School of Informatics, University of Leicester, Leicester, LE1 7RH, Leicestershire, UK
- Hua Hu: Department of Psychiatry, Columbia University, USA; Department of Neurology, The Second Affiliated Hospital of Soochow University, China
- Min Li: Department of Psychiatry, Columbia University, USA; School of Internet of Things, Hohai University, Changzhou, China
- Carmen Jiménez-Mesa: Department of Signal Theory, Networking and Communications, University of Granada, Granada, Spain
- Javier Ramirez: Department of Signal Theory, Networking and Communications, University of Granada, Granada, Spain
- Francisco J Martinez: Department of Signal Theory, Networking and Communications, University of Granada, Granada, Spain
- Juan Manuel Gorriz: Department of Signal Theory, Networking and Communications, University of Granada, Granada, Spain; Department of Psychiatry, University of Cambridge, Cambridge CB2 1TN, UK
5. Wang K, Zheng M, Wei H, Qi G, Li Y. Multi-Modality Medical Image Fusion Using Convolutional Neural Network and Contrast Pyramid. Sensors (Basel) 2020;20:2169. PMID: 32290472; PMCID: PMC7218740; DOI: 10.3390/s20082169.
Abstract
Medical image fusion techniques can fuse medical images from different modalities to make medical diagnosis more reliable and accurate, and they play an increasingly important role in many clinical applications. To obtain a fused image with high visual quality and clear structural details, this paper proposes a convolutional neural network (CNN) based medical image fusion algorithm. The proposed algorithm uses a trained Siamese convolutional network to fuse the pixel activity information of the source images and generate a weight map. Meanwhile, a contrast pyramid is used to decompose the source images. The source images are then integrated across spatial frequency bands using a weighted fusion operator. The results of comparative experiments show that the proposed fusion algorithm can effectively preserve the detailed structural information of the source images and achieve good human visual effects.
Affiliation(s)
- Kunpeng Wang: School of Information Engineering, Southwest University of Science and Technology, Mianyang 621010, China; Robot Technology Used for Special Environment Key Laboratory of Sichuan Province, Mianyang 621010, China
- Mingyao Zheng: College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Hongyan Wei: College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Guanqiu Qi: Computer Information Systems Department, State University of New York at Buffalo State, Buffalo, NY 14222, USA
- Yuanyuan Li: College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
6. Fusion of Enhanced and Synthetic Vision System Images for Runway and Horizon Detection. Sensors (Basel) 2019;19:3802. PMID: 31484303; PMCID: PMC6749261; DOI: 10.3390/s19173802.
Abstract
Networked operation of unmanned air vehicles (UAVs) demands fusion of information from disparate sources for accurate flight control. In this investigation, a novel sensor fusion architecture for detecting aircraft runways and horizons, as well as enhancing awareness of the surrounding terrain, is introduced based on fusion of enhanced vision system (EVS) and synthetic vision system (SVS) images. EVS and SVS image fusion has yet to be implemented in real-world situations due to signal misalignment. We address this through a registration step that aligns EVS and SVS images. Four fusion rules combining discrete wavelet transform (DWT) sub-bands are formulated, implemented, and evaluated. The resulting procedure is tested on real EVS-SVS image pairs and pairs containing simulated turbulence. Evaluations reveal that runways and horizons can be detected accurately even in poor visibility. Furthermore, it is demonstrated that different aspects of EVS and SVS images can be emphasized by using different DWT fusion rules. The procedure is autonomous throughout landing, irrespective of weather. The fusion architecture developed in this study holds promise for incorporation into manned heads-up displays (HUDs) and UAV remote displays to assist pilots landing aircraft in poor lighting and varying weather. The algorithm also provides a basis for rule selection in other signal fusion applications.
7. Liang Y, Mao Y, Xia J, Xiang Y, Liu J. Scale-invariant structure saliency selection for fast image fusion. Neurocomputing 2019. DOI: 10.1016/j.neucom.2019.04.043.
8. Parvathy VS, Pothiraj S. Multi-modality medical image fusion using hybridization of binary crow search optimization. Health Care Manag Sci 2019;23:661-669. PMID: 31292844; DOI: 10.1007/s10729-019-09492-2.
Abstract
In clinical applications, single-modality images do not provide sufficient diagnostic information. Therefore, it is necessary to combine the advantages or complementarities of images from different modalities. In this paper, we propose an efficient medical image fusion system based on the discrete wavelet transform and the binary crow search optimization (BCSO) algorithm. We consider images of two different modalities as the input of the system, and the output is the fused image. In this approach, a median filter is first applied to remove the noise present in each input image. Then, we apply a discrete wavelet transform to both input modalities. Next, the approximation coefficients of modality 1 are combined with the detail coefficients of modality 2; similarly, the approximation coefficients of modality 2 are combined with the detail coefficients of modality 1. Finally, we fuse the information of the two modalities using a novel fusion rule whose parameters are optimally selected using the BCSO algorithm. To evaluate the performance of the proposed method, we used quality metrics such as the structural similarity index measure (SSIM), fusion factor (FF), and entropy. The presented model shows superior results, with an entropy of 6.63, an SSIM of 0.849, and an FF of 5.9.
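The sub-band cross-combination described in this abstract can be sketched with a single-level Haar DWT. This is a minimal sketch under stated assumptions, not the authors' implementation: `alpha` stands in for the fusion-rule parameter that the paper tunes with BCSO, and the median-filter denoising step is omitted.

```python
import numpy as np

def haar2(x):
    # Single-level 2-D Haar DWT: approximation + horizontal/vertical/diagonal details.
    p, q = x[0::2, 0::2], x[0::2, 1::2]
    r, s = x[1::2, 0::2], x[1::2, 1::2]
    return ((p + q + r + s) / 2, (p + q - r - s) / 2,
            (p - q + r - s) / 2, (p - q - r + s) / 2)

def ihaar2(a, h, v, d):
    # Exact inverse of haar2 (perfect reconstruction).
    x = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    x[0::2, 0::2] = (a + h + v + d) / 2
    x[0::2, 1::2] = (a + h - v - d) / 2
    x[1::2, 0::2] = (a - h + v - d) / 2
    x[1::2, 1::2] = (a - h - v + d) / 2
    return x

def cross_fuse(m1, m2, alpha=0.5):
    # Approximation of one modality paired with the details of the other,
    # then the two reconstructions are blended; `alpha` stands in for the
    # BCSO-optimized fusion-rule parameter.
    a1, h1, v1, d1 = haar2(m1)
    a2, h2, v2, d2 = haar2(m2)
    r12 = ihaar2(a1, h2, v2, d2)
    r21 = ihaar2(a2, h1, v1, d1)
    return alpha * r12 + (1 - alpha) * r21

rng = np.random.default_rng(1)
img = rng.uniform(0, 255, (16, 16))
# Fusing an image with itself must return the image unchanged.
print(np.allclose(cross_fuse(img, img), img))  # True
```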
Affiliation(s)
- Velmurugan Subbiah Parvathy: Department of Electronics and Communication Engineering, Kalasalingam Academy of Research and Education, Tamil Nadu, India
- Sivakumar Pothiraj: Department of Electronics and Communication Engineering, Kalasalingam Academy of Research and Education, Tamil Nadu, India
9. A New Deep Learning Based Multi-Spectral Image Fusion Method. Entropy 2019;21:570. PMID: 33267284; PMCID: PMC7515058; DOI: 10.3390/e21060570.
Abstract
In this paper, we present a new effective infrared (IR) and visible (VIS) image fusion method using a deep neural network. In our method, a Siamese convolutional neural network (CNN) is applied to automatically generate a weight map which represents the saliency of each pixel for a pair of source images. The CNN automatically encodes an image into a feature domain for classification. By applying the proposed method, the key problems in image fusion, namely activity-level measurement and fusion-rule design, can be solved in one shot. The fusion is carried out through multi-scale image decomposition based on the wavelet transform, and the reconstructed result is more perceptually pleasing to the human visual system. In addition, the visual qualitative effectiveness of the proposed fusion method is evaluated by comparing pedestrian detection results with those of other methods, using the YOLOv3 object detector on a public benchmark dataset. The experimental results show that the proposed method achieves competitive results in terms of both quantitative assessment and visual quality.
10. Li W, Du J, Zhao Z, Long J. Fusion of Medical Sensors Using Adaptive Cloud Model in Local Laplacian Pyramid Domain. IEEE Trans Biomed Eng 2019;66:1172-1183. DOI: 10.1109/tbme.2018.2869432.
11. Du J, Li W, Xiao B. Fusion of anatomical and functional images using parallel saliency features. Inf Sci (N Y) 2018. DOI: 10.1016/j.ins.2017.12.008.
12. Tensor Sparse Representation for 3-D Medical Image Fusion Using Weighted Average Rule. IEEE Trans Biomed Eng 2018;65:2622-2633. PMID: 29993511; DOI: 10.1109/tbme.2018.2811243.
Abstract
OBJECTIVE: The technique of fusing multimodal medical images into a single image has a great impact on clinical diagnosis. Previous works mostly concern two-dimensional (2-D) image fusion performed on each slice individually, which may destroy the 3-D correlation across adjacent slices. To address this issue, this paper proposes a novel 3-D image fusion scheme based on tensor sparse representation (TSR). METHODS: First, each medical volume is arranged as a third-order tensor and represented by TSR with learned dictionaries. Second, a novel "weighted average" rule is calculated from the tensor sparse coefficients using a 3-D local-to-global strategy. The weights are then employed to combine the multimodal medical volumes through a weighted average. RESULTS: The visual and objective comparisons show that the proposed method is competitive with existing methods on various medical volumes in different imaging modalities. CONCLUSION: The TSR-based 3-D fusion approach with the weighted average rule can preserve the 3-D structure of a medical volume, and reduce low contrast and artifacts in the fused product. SIGNIFICANCE: The designed weights offer an effective weight assignment and an accurate measure of salience levels, which improves the performance of the fusion approach.
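As a rough illustration of a salience-driven weighted average: the paper's rule works local-to-global on tensor sparse coefficients over learned dictionaries, whereas the sketch below simply takes the global L1 norm of hypothetical coefficient arrays as each volume's salience.

```python
import numpy as np

def salience_weighted_fuse(v1, v2, c1, c2, eps=1e-12):
    # Salience of each volume measured as the L1 norm of its (hypothetical)
    # sparse coefficients; the resulting scalar weight blends the volumes.
    s1, s2 = np.abs(c1).sum(), np.abs(c2).sum()
    w = (s1 + eps) / (s1 + s2 + 2 * eps)
    return w * v1 + (1 - w) * v2

rng = np.random.default_rng(2)
v1 = rng.uniform(0, 1, (4, 4, 4))   # toy 3-D volumes
v2 = rng.uniform(0, 1, (4, 4, 4))
c = rng.normal(size=(4, 4, 4))      # identical codes -> equal weights
fused = salience_weighted_fuse(v1, v2, c, c)
print(np.allclose(fused, (v1 + v2) / 2))  # True
```

With identical coefficients the rule degenerates to a plain average; when one volume's codes dominate, its content dominates the fused product.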
13. Du J, Li W, Xiao B. Anatomical-functional image fusion by information of interest in local Laplacian filtering domain. IEEE Trans Image Process 2017;26:5855-5866. PMID: 28858799; DOI: 10.1109/tip.2017.2745202.
Abstract
A novel method for performing anatomical (MRI)-functional (PET or SPECT) image fusion is presented. The method merges specific feature information from input image signals of a single or multiple medical imaging modalities into a single fused image, preserving more information and generating less distortion. The proposed method uses a local Laplacian filtering based technique realized through a novel multi-scale system architecture. First, the input images are decomposed into a multi-scale representation and processed using local Laplacian filtering. Second, at each scale, the decomposed images are combined to produce fused approximation images using a local energy maximum scheme, and fused residual images using an information-of-interest-based scheme. Finally, a fused image is obtained using a reconstruction process analogous to that of the conventional Laplacian pyramid transform. Experimental results computed using individual multi-scale analysis-based decomposition schemes or fusion rules clearly demonstrate the superiority of the proposed method through subjective observation as well as objective metrics. Furthermore, the proposed method obtains better performance compared with state-of-the-art fusion methods.
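The combine-then-reconstruct pipeline is analogous to conventional Laplacian pyramid fusion, sketched below. The max-magnitude and elementwise-energy rules are crude stand-ins for the paper's information-of-interest and windowed local-energy-maximum schemes, so treat this only as a structural illustration.

```python
import numpy as np

def down(x):  # 2x2 average downsampling
    return (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 4

def up(x):    # nearest-neighbour upsampling
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def laplacian_pyramid(x, levels):
    pyr = []
    for _ in range(levels):
        lo = down(x)
        pyr.append(x - up(lo))  # residual (detail) layer
        x = lo
    pyr.append(x)               # coarsest approximation
    return pyr

def fuse_pyramids(p1, p2):
    # Residuals: keep the larger-magnitude coefficient; approximation:
    # keep the larger-energy pixel (the paper uses windowed local energy).
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(p1[:-1], p2[:-1])]
    fused.append(np.where(p1[-1] ** 2 >= p2[-1] ** 2, p1[-1], p2[-1]))
    return fused

def reconstruct(pyr):
    x = pyr[-1]
    for lap in reversed(pyr[:-1]):
        x = up(x) + lap
    return x

rng = np.random.default_rng(3)
img = rng.uniform(0, 255, (16, 16))
p = laplacian_pyramid(img, 3)
# Fusing a pyramid with itself and reconstructing recovers the image.
print(np.allclose(reconstruct(fuse_pyramids(p, p)), img))  # True
```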
14. Haddadpour M, Daneshvar S, Seyedarabi H. PET and MRI image fusion based on combination of 2-D Hilbert transform and IHS method. Biomed J 2017;40:219-225. PMID: 28918910; PMCID: PMC6136288; DOI: 10.1016/j.bj.2017.05.002.
Abstract
Background: Medical image fusion combines two or more medical images, such as magnetic resonance imaging (MRI) and positron emission tomography (PET), into a single fused image. The purpose of our study is to assist physicians in diagnosing and treating diseases in the least time possible. Methods: We used MRI and PET images as inputs and fused them using a combination of the two-dimensional Hilbert transform (2-D HT) and the intensity-hue-saturation (IHS) method. Results: We used three common evaluation metrics to assess the method: average gradient (AGk), which evaluates spatial features; discrepancy (Dk), which assesses spectral features; and overall performance (O.P), which verifies the suitability of the proposed method. Simulated and numerical results show the desired performance of the proposed method. Conclusions: Since the main purpose of medical image fusion is preserving both the spatial and spectral features of the input images, the numerical results of the evaluation metrics (AGk, Dk, and O.P) and the desired simulated results indicate that our proposed method preserves both.
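The IHS substitution at the core of such PET-MRI fusion can be sketched with the fast linear variant below. The paper's 2-D Hilbert transform stage is omitted, so this shows only the intensity-substitution step, under the assumption that intensity is the channel mean.

```python
import numpy as np

def ihs_fuse(pet_rgb, mri_gray):
    # Fast linear IHS substitution: shift every channel by the difference
    # between the MRI intensity and the PET intensity (I = channel mean),
    # so the fused image inherits MRI spatial detail while the hue and
    # saturation of the PET image are preserved.
    intensity = pet_rgb.mean(axis=2)
    return pet_rgb + (mri_gray - intensity)[..., None]

rng = np.random.default_rng(4)
pet = rng.uniform(0, 1, (8, 8, 3))   # toy colour PET slice
mri = rng.uniform(0, 1, (8, 8))      # toy MRI slice
fused = ihs_fuse(pet, mri)
# The fused image carries the MRI signal as its intensity component.
print(np.allclose(fused.mean(axis=2), mri))  # True
```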
Affiliation(s)
- Mozhdeh Haddadpour: Department of Electrical and Computer Engineering, University of Tabriz, Tabriz, Iran
- Sabalan Daneshvar: Department of Electrical and Computer Engineering, University of Tabriz, Tabriz, Iran
- Hadi Seyedarabi: Department of Electrical and Computer Engineering, University of Tabriz, Tabriz, Iran
15. An Automatic Multi-Target Independent Analysis Framework for Non-Planar Infrared-Visible Registration. Sensors (Basel) 2017;17:1696. PMID: 28933724; PMCID: PMC5579876; DOI: 10.3390/s17081696.
Abstract
In this paper, we propose a novel automatic multi-target registration framework for non-planar infrared-visible videos. Previous approaches usually analyzed multiple targets together and then estimated a global homography for the whole scene; however, these cannot achieve precise multi-target registration when the scenes are non-planar. Our framework solves the problem using feature matching and multi-target tracking. The key idea is to analyze and register each target independently. We present a fast and robust feature matching strategy, where only the features on the corresponding foreground pairs are matched. In addition, new reservoirs based on the Gaussian criterion are created for all targets, and a multi-target tracking method is adopted to determine the relationships between the reservoirs and foreground blobs. With the matches in the corresponding reservoir, the homography of each target is computed according to its moving state. We tested our framework on both public near-planar and non-planar datasets. The results demonstrate that the proposed framework outperforms the state-of-the-art global registration method and the manual global registration matrix on all tested datasets.
16.

17. Liu X, Mei W, Du H. Structure tensor and nonsubsampled shearlet transform based algorithm for CT and MRI image fusion. Neurocomputing 2017. DOI: 10.1016/j.neucom.2017.01.006.
18. Medical Image Fusion Based on Feature Extraction and Sparse Representation. Int J Biomed Imaging 2017;2017:3020461. PMID: 28321246; PMCID: PMC5339635; DOI: 10.1155/2017/3020461.
Abstract
As a novel multiscale geometric analysis tool, sparse representation has shown many advantages over conventional image representation methods. However, standard sparse representation does not take intrinsic structure and time complexity into consideration. In this paper, a new fusion mechanism for multimodal medical images based on sparse representation and decision maps is proposed to deal with these problems simultaneously. Three decision maps are designed, including a structure information map (SM), an energy information map (EM), and a combined structure and energy map (SEM), to make the results preserve more energy and edge information. SM contains the local structure feature captured by the Laplacian of a Gaussian (LOG), and EM contains the energy and energy-distribution feature detected by the mean square deviation. The decision map is added to the normal sparse representation based method to improve the speed of the algorithm. The proposed approach also improves the quality of the fused results by enhancing the contrast and preserving more structure and energy information from the source images. The experimental results on 36 groups of CT/MR, MR-T1/MR-T2, and CT/PET images demonstrate that the method based on SR and SEM outperforms five state-of-the-art methods.
19. Du J, Li W, Xiao B, Nawaz Q. Medical image fusion by combining parallel features on multi-scale local extrema scheme. Knowl Based Syst 2016. DOI: 10.1016/j.knosys.2016.09.008.
20.

21. Kottayil NK, Bogdanova R, Cheng I, Basu A. Investigation of gaze patterns in multi view laparoscopic surgery. Annu Int Conf IEEE Eng Med Biol Soc 2016;2016:4031-4034. PMID: 28269168; DOI: 10.1109/embc.2016.7591611.
Abstract
Laparoscopic surgery (LS) is a modern surgical technique whereby the surgery is performed through an incision with tools and a camera, as opposed to conventional open surgery. This promises minimal recovery times and less hemorrhaging. Multi-view LS is the latest development in the field, where the system uses multiple cameras to give the surgeon more information about the surgical site, potentially making the surgery easier. In this publication, we study the gaze patterns of high-performing subjects in a multi-view LS environment and compare them with those of novices to detect differences in gaze behavior. This was done by conducting a user study with 20 university students with varying levels of expertise in multi-view LS. The subjects performed a laparoscopic task in simulation with three cameras (front/top/side). The subjects were then separated into high and low performers depending on performance times, and their data were analyzed. Our results show statistically significant differences between the two behaviors. This opens up new areas, from training novices in multi-view LS to building smart displays that show the optimum view depending on the situation.
22. Multispectral MRI Image Fusion for Enhanced Visualization of Meningioma Brain Tumors and Edema Using Contourlet Transform and Fuzzy Statistics. J Med Biol Eng 2016. DOI: 10.1007/s40846-016-0149-5.
23. Du J, Li W, Xiao B, Nawaz Q. Union Laplacian pyramid with multiple features for medical image fusion. Neurocomputing 2016. DOI: 10.1016/j.neucom.2016.02.047.
24. Log-Gabor energy based multimodal medical image fusion in NSCT domain. Comput Math Methods Med 2014;2014:835481. PMID: 25214889; PMCID: PMC4158263; DOI: 10.1155/2014/835481.
Abstract
Multimodal medical image fusion is a powerful tool in clinical applications such as noninvasive diagnosis, image-guided radiotherapy, and treatment planning. In this paper, a novel nonsubsampled contourlet transform (NSCT) based method for multimodal medical image fusion is presented, which is approximately shift invariant and can effectively suppress pseudo-Gibbs phenomena. The source medical images are initially transformed by NSCT, followed by fusing the low- and high-frequency components. Phase congruency, which provides a contrast- and brightness-invariant representation, is applied to fuse the low-frequency coefficients, whereas the Log-Gabor energy, which can efficiently determine the frequency coefficients from the clear and detailed parts, is employed to fuse the high-frequency coefficients. The proposed fusion method has been compared with discrete wavelet transform (DWT), fast discrete curvelet transform (FDCT), and dual-tree complex wavelet transform (DTCWT) based image fusion methods and other NSCT-based methods. Visual and quantitative experimental results indicate that the proposed fusion method can obtain more effective and accurate fusion results for multimodal medical images than the other algorithms. Further, the applicability of the proposed method has been verified with a clinical example of a woman affected by a recurrent tumor.
25. Multimodal medical volumetric data fusion using 3-D discrete shearlet transform and global-to-local rule. IEEE Trans Biomed Eng 2013;61:197-206. PMID: 23974522; DOI: 10.1109/tbme.2013.2279301.
Abstract
Traditional two-dimensional (2-D) fusion frameworks usually suffer from the loss of between-slice information in the third dimension. For example, the fusion of three-dimensional (3-D) MRI slices must account for the information not only within a given slice but also in the adjacent slices. In this paper, a fusion method is developed in 3-D shearlet space to overcome this drawback. On the other hand, the popularly used average-maximum fusion rule can capture only local information, and none of the global information, because it is implemented in a local window region. Thus, a global-to-local fusion rule is proposed. We first show that the 3-D shearlet coefficients of the high-pass subbands are highly non-Gaussian. Then, we show that this heavy-tailed phenomenon can be modeled by the generalized Gaussian density (GGD), and that the global information between two subbands can be described by the Kullback-Leibler distance (KLD) of two GGDs. The fused global information is finally selected according to the asymmetry of the KLD. Experiments on synthetic and real data demonstrate that better fusion results can be obtained by the proposed method.
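The global rule above hinges on the KLD between two GGD fits. A commonly cited closed form (following Do and Vetterli's GGD texture-retrieval derivation; shown here for reference, with scale α and shape β) is:

```latex
% GGD with scale \alpha and shape \beta:
p(x;\alpha,\beta) = \frac{\beta}{2\alpha\,\Gamma(1/\beta)}
                    \exp\!\bigl(-(|x|/\alpha)^{\beta}\bigr)

% Closed-form KLD between two GGDs:
D\bigl(p_1 \,\|\, p_2\bigr)
  = \ln\!\frac{\beta_1\,\alpha_2\,\Gamma(1/\beta_2)}
              {\beta_2\,\alpha_1\,\Gamma(1/\beta_1)}
  + \Bigl(\frac{\alpha_1}{\alpha_2}\Bigr)^{\beta_2}
    \frac{\Gamma\bigl((\beta_2+1)/\beta_1\bigr)}{\Gamma(1/\beta_1)}
  - \frac{1}{\beta_1}
```

The asymmetry the abstract exploits is simply that D(p1‖p2) and D(p2‖p1) generally differ.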