1
Abstract
Content-based medical image retrieval (CBMIR) is an active research area for disease diagnosis and treatment, but it can be problematic given the small visual variations between anatomical structures. We propose a retrieval method based on a bag-of-visual-words (BoVW) model that identifies discriminative characteristics between different medical images with a Pruned Dictionary based on Latent Semantic Topic description; we refer to this as PD-LST retrieval. Our method has two main components. First, we calculate a topic-word significance value for each visual word given a certain latent topic, to evaluate how the word is connected to this latent topic. The latent topics are learnt from the relationship between the images and words, and are employed to bridge the gap between low-level visual features and high-level semantics. These latent topics describe the images and words semantically and can thus facilitate more meaningful comparisons between the words. Second, we compute an overall-word significance value to evaluate the significance of a visual word within the entire dictionary. We designed an iterative ranking method to measure overall-word significance by considering the relationship between all latent topics and words. The words with higher values are considered meaningful, with more significant discriminative power in differentiating medical images. We evaluated our method on two public medical imaging datasets and it showed improved retrieval accuracy and efficiency.
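To make the dictionary-pruning idea concrete, the sketch below ranks visual words by an iterative significance scheme over a topic-word matrix and keeps the top-ranked words. It is only a minimal illustration under assumed inputs: the topic-word significance values are taken from any LDA-style model, and the simple power-iteration ranking stands in for the PD-LST formulation in the paper rather than reproducing it.

```python
# Illustrative sketch of latent-topic-driven dictionary pruning.
# Assumption: `topic_word` is a nonnegative (n_topics, n_words) matrix of
# topic-word significance values from an LDA-style model; the ranking below
# is a generic power iteration over the bipartite topic-word graph, not the
# authors' exact PD-LST computation.
import numpy as np

def overall_word_significance(topic_word, n_iter=50, tol=1e-9):
    """Score each visual word by iterating between topic and word weights."""
    # Normalise each topic's weights over words (rows sum to 1).
    P = topic_word / (topic_word.sum(axis=1, keepdims=True) + 1e-12)
    n_words = P.shape[1]
    word_score = np.full(n_words, 1.0 / n_words)
    for _ in range(n_iter):
        # Words push weight to topics, topics push it back to words.
        topic_score = P @ word_score
        topic_score /= topic_score.sum()
        new_word_score = P.T @ topic_score
        new_word_score /= new_word_score.sum()
        if np.abs(new_word_score - word_score).max() < tol:
            word_score = new_word_score
            break
        word_score = new_word_score
    return word_score

def prune_dictionary(topic_word, keep_ratio=0.5):
    """Return indices of the most significant visual words to retain."""
    scores = overall_word_significance(topic_word)
    n_keep = max(1, int(keep_ratio * scores.size))
    return np.argsort(scores)[::-1][:n_keep]
```

In this reading, retrieval then uses only the retained words' histogram bins, which is what makes the pruned dictionary both smaller and more discriminative.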
2
Zhang F, Song Y, Cai W, Liu S, Liu S, Pujol S, Kikinis R, Xia Y, Fulham MJ, Feng DD, Alzheimer's Disease Neuroimaging Initiative. Pairwise Latent Semantic Association for Similarity Computation in Medical Imaging. IEEE Trans Biomed Eng 2015; 63:1058-1069. [PMID: 26372117] [DOI: 10.1109/tbme.2015.2478028]
Abstract
Retrieving medical images that present similar diseases is an active research area for diagnostics and therapy. However, it can be problematic given the visual variations between anatomical structures. In this paper, we propose a new feature extraction method for similarity computation in medical imaging. Instead of relying on low-level visual appearance, we design a CCA-PairLDA feature representation method to capture the similarity between images with high-level semantics. First, we extract the PairLDA topics to represent an image as a mixture of latent semantic topics in an image-pair context. Second, we generate a CCA-correlation model to represent the semantic association between an image pair for similarity computation. While PairLDA adjusts the latent topics for all image pairs, CCA-correlation helps to associate an individual image pair. In this way, the semantic descriptions of an image pair are closely correlated and naturally correspond to similarity computation between images. We evaluated our method on two public medical imaging datasets for image retrieval and showed improved performance.
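As a rough illustration of coupling topic representations with a correlation model, the sketch below uses scikit-learn's CCA on LDA topic vectors. It is an assumption-laden stand-in rather than the CCA-PairLDA method itself: `train_a`/`train_b` are presumed topic vectors of image pairs known to be similar, and the pair similarity is simply the cosine of the canonical projections.

```python
# A minimal sketch, not the paper's CCA-PairLDA: topic vectors are assumed to
# come from an LDA model over BoVW histograms, and scikit-learn's CCA stands
# in for the CCA-correlation model described in the abstract.
import numpy as np
from sklearn.cross_decomposition import CCA

def fit_pair_cca(train_a, train_b, n_components=5):
    """Learn a correlation model from topic vectors of paired (similar) images."""
    cca = CCA(n_components=n_components)
    cca.fit(train_a, train_b)
    return cca

def pair_similarity(cca, theta_x, theta_y):
    """Similarity of one image pair: cosine of their canonical projections."""
    u, v = cca.transform(theta_x.reshape(1, -1), theta_y.reshape(1, -1))
    u, v = u.ravel(), v.ravel()
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))
```

A query image would then be compared against each database image by projecting the pair through the learned model and ranking candidates by this correlation-based score.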
Affiliation(s)
- Fan Zhang
- Biomedical and Multimedia Information Technology Research Group, School of Information Technologies, University of Sydney, Sydney, N.S.W., Australia
- Yang Song
- Biomedical and Multimedia Information Technology Research Group, School of Information Technologies, University of Sydney
- Weidong Cai
- Biomedical and Multimedia Information Technology Research Group, School of Information Technologies, University of Sydney
- Sidong Liu
- Biomedical and Multimedia Information Technology Research Group, School of Information Technologies, University of Sydney
- Siqi Liu
- Biomedical and Multimedia Information Technology Research Group, School of Information Technologies, University of Sydney
- Sonia Pujol
- Surgical Planning Lab, Brigham & Women's Hospital, Harvard Medical School
- Ron Kikinis
- Surgical Planning Lab, Brigham & Women's Hospital, Harvard Medical School
- Yong Xia
- Shaanxi Key Lab of Speech and Image Information Processing, School of Computer Science and Technology, Northwestern Polytechnical University
- Michael J Fulham
- Department of PET and Nuclear Medicine, Royal Prince Alfred Hospital
- David Dagan Feng
- BMIT Research Group, School of Information Technologies, University of Sydney
3
Liu S, Cai W, Liu S, Zhang F, Fulham M, Feng D, Pujol S, Kikinis R. Multimodal neuroimaging computing: the workflows, methods, and platforms. Brain Inform 2015; 2:181-195. [PMID: 27747508] [PMCID: PMC4737665] [DOI: 10.1007/s40708-015-0020-4]
Abstract
The last two decades have witnessed explosive growth in the development and use of noninvasive neuroimaging technologies that advance research on the human brain under normal and pathological conditions. Multimodal neuroimaging has become a major driver of current neuroimaging research due to the recognition of the clinical benefits of multimodal data and better access to hybrid devices. Multimodal neuroimaging computing is very challenging, and requires sophisticated computing methods to address the variations in spatiotemporal resolution and to merge the biophysical/biochemical information. We review the current workflows and methods for multimodal neuroimaging computing, and also demonstrate how to conduct research using the established neuroimaging computing packages and platforms.
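As a small practical aside on the resolution differences such workflows must handle, the snippet below resamples a PET volume onto an MRI voxel grid with nibabel and nilearn. The file names are placeholders, and this is only an illustrative pre-processing step, not one of the platforms reviewed in the paper.

```python
# Bringing two modalities onto a common voxel grid before fusion.
# Assumption: the NIfTI file names below are placeholders for co-acquired
# structural MRI and FDG-PET volumes of one subject.
import nibabel as nib
from nilearn.image import resample_to_img

mri = nib.load("subject01_t1.nii.gz")    # structural MRI (higher resolution)
pet = nib.load("subject01_fdg.nii.gz")   # FDG-PET (lower resolution)

# Resample the PET image onto the MRI grid so voxels correspond across modalities.
pet_on_mri = resample_to_img(pet, mri, interpolation="continuous")
nib.save(pet_on_mri, "subject01_fdg_on_t1.nii.gz")
```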
Affiliation(s)
- Sidong Liu
- School of IT, The University of Sydney, Sydney, Australia.
- Weidong Cai
- School of IT, The University of Sydney, Sydney, Australia
- Siqi Liu
- School of IT, The University of Sydney, Sydney, Australia
- Fan Zhang
- School of IT, The University of Sydney, Sydney, Australia
- Surgical Planning Laboratory, Harvard Medical School, Boston, USA
- Michael Fulham
- Department of PET and Nuclear Medicine, Royal Prince Alfred Hospital, Sydney Medical School, The University of Sydney, Sydney, Australia
- Dagan Feng
- School of IT, The University of Sydney, Sydney, Australia
- Med-X Research Institute, Shanghai Jiao Tong University, Shanghai, China
- Sonia Pujol
- Surgical Planning Laboratory, Harvard Medical School, Boston, USA
- Ron Kikinis
- Surgical Planning Laboratory, Harvard Medical School, Boston, USA
4
Liu S, Liu S, Cai W, Che H, Pujol S, Kikinis R, Feng D, Fulham MJ. Multimodal neuroimaging feature learning for multiclass diagnosis of Alzheimer's disease. IEEE Trans Biomed Eng 2014; 62:1132-40. [PMID: 25423647] [DOI: 10.1109/tbme.2014.2372011]
Abstract
The accurate diagnosis of Alzheimer's disease (AD) is essential for patient care, particularly early in the course of the disease, and will be increasingly important as disease-modifying agents become available. Although studies have applied machine learning methods to the computer-aided diagnosis of AD, previous methods showed a bottleneck in diagnostic performance due to the lack of efficient strategies for representing neuroimaging biomarkers. In this study, we designed a novel diagnostic framework with a deep learning architecture to aid the diagnosis of AD. This framework uses a zero-masking strategy for data fusion to extract complementary information from multiple data modalities. Compared to previous state-of-the-art workflows, our method is capable of fusing multimodal neuroimaging features in one setting and has the potential to require less labeled data. A performance gain was achieved in both binary classification and multiclass classification of AD. The advantages and limitations of the proposed framework are discussed.
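The zero-masking idea can be pictured as a denoising-style autoencoder in which one modality's features are randomly zeroed during training, so the network must reconstruct them from the other modality and thereby learns complementary cross-modal representations. The PyTorch sketch below is an illustrative reduction under assumed feature dimensions and a single hidden layer, not the architecture reported in the paper.

```python
# Minimal sketch of zero-masking data fusion for two feature modalities.
# Assumptions: MRI and PET features are precomputed vectors; the network
# size, optimiser, and masking probability are illustrative choices only.
import torch
import torch.nn as nn

class ZeroMaskingAE(nn.Module):
    def __init__(self, dim_mri, dim_pet, dim_hidden=128):
        super().__init__()
        self.dim_mri = dim_mri
        d = dim_mri + dim_pet
        self.encoder = nn.Sequential(nn.Linear(d, dim_hidden), nn.ReLU())
        self.decoder = nn.Linear(dim_hidden, d)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_step(model, optimiser, x_mri, x_pet, p_mask=0.5):
    """One training step: mask one modality per sample, reconstruct both."""
    x = torch.cat([x_mri, x_pet], dim=1)
    corrupted = x.clone()
    for i in range(x.shape[0]):
        if torch.rand(1).item() < p_mask:
            if torch.rand(1).item() < 0.5:
                corrupted[i, :model.dim_mri] = 0.0   # zero-mask MRI features
            else:
                corrupted[i, model.dim_mri:] = 0.0   # zero-mask PET features
    loss = nn.functional.mse_loss(model(corrupted), x)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()
```

The learned hidden representation (the encoder output) would then feed a downstream classifier for the binary or multiclass diagnosis task.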
5
Liu S, Cai W, Wen L, Feng DD, Pujol S, Kikinis R, Fulham MJ, Eberl S. Multi-Channel neurodegenerative pattern analysis and its application in Alzheimer's disease characterization. Comput Med Imaging Graph 2014; 38:436-44. [PMID: 24933011] [PMCID: PMC4135007] [DOI: 10.1016/j.compmedimag.2014.05.003]
Abstract
Neuroimaging has played an important role in the non-invasive diagnosis and differentiation of neurodegenerative disorders, such as Alzheimer's disease and Mild Cognitive Impairment. Various features have been extracted from neuroimaging data to characterize these disorders, and they can be roughly divided into global and local features. Recent studies show a tendency towards using local features in disease characterization, since they are capable of identifying the subtle disease-specific patterns associated with the effects of the disease on the human brain. However, problems arise if the neuroimaging database involves multiple disorders or progressive disorders, as disorders of different types or at different progressive stages might exhibit different degenerative patterns. It is difficult for researchers to reach consensus on which brain regions can effectively distinguish multiple disorders or multiple progression stages. In this study, we proposed a Multi-Channel pattern analysis approach to identify the most discriminative local brain metabolism features for neurodegenerative disorder characterization. We compared our method to global methods and to other pattern analysis methods based on clinical expertise or statistical tests. The preliminary results suggest that the proposed Multi-Channel pattern analysis method outperformed the other approaches in Alzheimer's disease characterization, and also provided important insights into the underlying pathology of Alzheimer's disease and Mild Cognitive Impairment.
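To illustrate how local (ROI-level) features might be ranked for a multiclass setting, the sketch below scores each ROI with a simple Fisher criterion separately for every pair of diagnostic groups ("channels") and pools the top-ranked features. It is only an assumed simplification; the paper's Multi-Channel analysis is more elaborate, and `X`/`y` are presumed precomputed ROI metabolism features and class labels.

```python
# Illustration only: per-class-pair ("channel") ranking of ROI features by a
# univariate Fisher score, then pooling the top features across channels.
import numpy as np
from itertools import combinations

def fisher_score(x_a, x_b):
    """Univariate Fisher score per feature between two groups."""
    mean_diff = (x_a.mean(axis=0) - x_b.mean(axis=0)) ** 2
    pooled_var = x_a.var(axis=0) + x_b.var(axis=0) + 1e-12
    return mean_diff / pooled_var

def select_multichannel_features(X, y, top_k=10):
    """X: (n_subjects, n_rois) ROI features; y: class labels (e.g. NC/MCI/AD)."""
    selected = set()
    for a, b in combinations(np.unique(y), 2):   # one "channel" per class pair
        scores = fisher_score(X[y == a], X[y == b])
        selected.update(np.argsort(scores)[::-1][:top_k].tolist())
    return sorted(selected)
```

The pooled feature subset can then be passed to any multiclass classifier, which is the sense in which such local selection supports characterization across disorders and progression stages.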
Affiliation(s)
- Sidong Liu
- Biomedical and Multimedia Information Technology (BMIT) Research Group, School of Information Technologies, University of Sydney, Australia; Surgical Planning Laboratory (SPL), Brigham and Women's Hospital, Harvard Medical School, United States.
- Weidong Cai
- Biomedical and Multimedia Information Technology (BMIT) Research Group, School of Information Technologies, University of Sydney, Australia
- Lingfeng Wen
- Biomedical and Multimedia Information Technology (BMIT) Research Group, School of Information Technologies, University of Sydney, Australia; Department of PET and Nuclear Medicine, Royal Prince Alfred Hospital, Sydney, Australia
- David Dagan Feng
- Biomedical and Multimedia Information Technology (BMIT) Research Group, School of Information Technologies, University of Sydney, Australia; Med-X Research Institute, Shanghai Jiao Tong University, China
- Sonia Pujol
- Surgical Planning Laboratory (SPL), Brigham and Women's Hospital, Harvard Medical School, United States
- Ron Kikinis
- Surgical Planning Laboratory (SPL), Brigham and Women's Hospital, Harvard Medical School, United States
- Michael J Fulham
- Biomedical and Multimedia Information Technology (BMIT) Research Group, School of Information Technologies, University of Sydney, Australia; Department of PET and Nuclear Medicine, Royal Prince Alfred Hospital, Sydney, Australia; Sydney Medical School, University of Sydney, Australia
- Stefan Eberl
- Biomedical and Multimedia Information Technology (BMIT) Research Group, School of Information Technologies, University of Sydney, Australia; Department of PET and Nuclear Medicine, Royal Prince Alfred Hospital, Sydney, Australia