1
Gupta D, Loane R, Gayen S, Demner-Fushman D. Medical Image Retrieval via Nearest Neighbor Search on Pre-trained Image Features. Knowl Based Syst 2023; 278:110907. [PMID: 37780058] [PMCID: PMC10540469] [DOI: 10.1016/j.knosys.2023.110907]
Abstract
Nearest neighbor search (NNS) is a technique for locating the points in a high-dimensional space that are closest to a given query point. It has multiple applications in medicine, such as searching large medical imaging databases, disease classification, and diagnosis. However, when the number of points is very large, brute-force search becomes computationally infeasible, so various approaches have been developed to make the search faster and more efficient. With a focus on medical imaging, this paper proposes DenseLinkSearch (DLS), an effective and efficient algorithm that searches and retrieves relevant images from heterogeneous sources of medical images. Given a medical database, the proposed algorithm builds an index consisting of precomputed links for each point in the database; the search algorithm uses this index to traverse the database efficiently in search of the nearest neighbors. We also explore the role of medical image feature representation in content-based medical image retrieval and propose a Transformer-based feature representation technique that outperformed existing pre-trained Transformer-based approaches on benchmark medical image retrieval datasets. We extensively tested the proposed NNS approach against state-of-the-art NNS approaches on benchmark datasets and on our newly created medical image datasets. The proposed approach outperformed the existing approaches in both the accuracy of the retrieved neighbors and retrieval speed: compared with existing approximate NNS approaches, DLS achieved a lower average time per query and ≥ 99% R@10 on 11 out of 13 benchmark datasets. We also found that the proposed medical feature representation approach represents medical images better than existing pre-trained image models.
The proposed feature extraction strategy obtained an improvement of 9.37%, 7.0%, and 13.33% in terms of P@5, P@10, and P@20, respectively, in comparison to the best-performing pre-trained image model. The source code and datasets of our experiments are available at https://github.com/deepaknlp/DLS.
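As a concrete point of reference, the exhaustive baseline that index-based methods such as DenseLinkSearch are designed to avoid can be sketched in a few lines. This is not the paper's algorithm (the index of precomputed links is not reproduced here); the array shapes and names are illustrative.

```python
import numpy as np

def brute_force_knn(features: np.ndarray, query: np.ndarray, k: int = 10) -> np.ndarray:
    """Exhaustive k-nearest-neighbor search: compute the distance from the
    query to every database point and keep the k smallest. This O(n)
    scan is the baseline that indexed NNS methods aim to outperform."""
    dists = np.linalg.norm(features - query, axis=1)  # Euclidean distance to every point
    return np.argsort(dists)[:k]                      # indices of the k closest points

# Illustrative usage on random 128-dimensional "image features"
rng = np.random.default_rng(0)
db = rng.normal(size=(1000, 128))
neighbors = brute_force_knn(db, db[42], k=5)  # querying with a database point
print(neighbors)  # index 42 itself comes first (distance 0)
```

The cost of this scan grows linearly with the database size, which is exactly why the abstract calls it infeasible for large collections.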
Affiliation(s)
- Deepak Gupta, Russell Loane, Soumya Gayen, Dina Demner-Fushman: Lister Hill National Center for Biomedical Communications, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
2
Wu Y, Gao D, Fang Y, Xu X, Gao H, Ju Z. SDE-YOLO: A Novel Method for Blood Cell Detection. Biomimetics (Basel) 2023; 8:404. [PMID: 37754155] [PMCID: PMC10526168] [DOI: 10.3390/biomimetics8050404]
Abstract
This paper proposes an improved target detection algorithm, SDE-YOLO, based on the YOLOv5s framework, to address the low detection accuracy, false detections, and missed detections that existing single-stage and two-stage algorithms exhibit in blood cell detection. First, the Swin Transformer is integrated into the back end of the backbone to extract features more effectively. Then, the 32 × 32 network layer in the path-aggregation network (PANet) is removed to decrease the number of parameters while increasing accuracy on small targets, and PANet's standard convolutions are replaced with depthwise-separable convolutions to recognize small targets accurately while maintaining speed. Finally, replacing the complete intersection over union (CIOU) loss function with the Euclidean intersection over union (EIOU) loss function helps address the imbalance between positive and negative samples and speeds up convergence. SDE-YOLO achieves a mAP of 99.5%, 95.3%, and 93.3% on the BCCD blood cell dataset for white blood cells, red blood cells, and platelets, respectively, improving on single-stage and two-stage algorithms such as SSD, YOLOv4, and YOLOv5s. It also compares favorably with YOLOv7 and YOLOv8 in accuracy and real-time blood cell detection performance.
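Both CIOU- and EIOU-style detection losses mentioned above extend the plain intersection-over-union between a predicted and a ground-truth box with extra penalty terms. A minimal sketch of that underlying IoU computation (the box format and function name are assumptions for illustration, not taken from the paper):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as
    (x1, y1, x2, y2). IoU is the base quantity that CIOU/EIOU-style
    losses augment with distance and shape penalty terms."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)  # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # overlap 1 / union 7 ≈ 0.1428
```

An IoU-based loss is then typically `1 - iou(pred, gt)` plus the method-specific penalties.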
Affiliation(s)
- Yonglin Wu, Hongwei Gao: School of Automation and Electrical Engineering, Shenyang Ligong University, Shenyang 110158, China
- Dongxu Gao, Zhaojie Ju: School of Computing, University of Portsmouth, Portsmouth PO1 3HE, UK
- Yinfeng Fang: School of Telecommunication Engineering, Hangzhou Dianzi University, Hangzhou 311305, China
- Xue Xu: China Tobacco Zhejiang Industrial Co., Ltd., Hangzhou 311500, China
3
Atasever S, Azginoglu N, Terzi DS, Terzi R. A comprehensive survey of deep learning research on medical image analysis with focus on transfer learning. Clin Imaging 2023; 94:18-41. [PMID: 36462229] [DOI: 10.1016/j.clinimag.2022.11.003]
Abstract
This survey aims to identify commonly used methods, datasets, future trends, knowledge gaps, constraints, and limitations in the field, providing an overview of current solutions in medical image analysis alongside the rapid developments in transfer learning (TL). Unlike previous studies, this survey groups studies published between January 2017 and February 2021 by anatomical region and details the modality, medical task, TL method, source data, target data, and public or private datasets used in medical imaging. It also provides readers with detailed information on technical challenges, opportunities, and future research trends, helping researchers select the most effective and efficient methods and access widely used, publicly available medical datasets, as well as the research gaps and limitations of the available literature.
Affiliation(s)
- Sema Atasever: Computer Engineering Department, Nevsehir Hacı Bektas Veli University, Nevsehir, Turkey
- Nuh Azginoglu: Computer Engineering Department, Kayseri University, Kayseri, Turkey
- Ramazan Terzi: Computer Engineering Department, Amasya University, Amasya, Turkey
4
Rasoolijaberi M, Babaei M, Riasatian A, Hemati S, Ashrafi P, Gonzalez R, Tizhoosh HR. Multi-Magnification Image Search in Digital Pathology. IEEE J Biomed Health Inform 2022; 26:4611-4622. [PMID: 35687644] [DOI: 10.1109/jbhi.2022.3181531]
Abstract
This paper investigates the effect of magnification on content-based image search in digital pathology archives and proposes to use multi-magnification image representation. Image search in large archives of digital pathology slides provides researchers and medical professionals with an opportunity to match records of current and past patients and learn from evidently diagnosed and treated cases. When working with microscopes, pathologists switch between different magnification levels while examining tissue specimens to find and evaluate various morphological features. Inspired by the conventional pathology workflow, we have investigated several magnification levels in digital pathology and their combinations to minimize the gap between AI-enabled image search methods and clinical settings. The proposed searching framework does not rely on any regional annotation and potentially applies to millions of unlabelled (raw) whole slide images. This paper suggests two approaches for combining magnification levels and compares their performance. The first approach obtains a single-vector deep feature representation for a digital slide, whereas the second approach works with a multi-vector deep feature representation. We report the search results of 20×, 10×, and 5× magnifications and their combinations on a subset of The Cancer Genome Atlas (TCGA) repository. The experiments verify that cell-level information at the highest magnification is essential for searching for diagnostic purposes. In contrast, low-magnification information may improve this assessment depending on the tumor type. Our multi-magnification approach achieved up to 11% F1-score improvement in searching among the urinary tract and brain tumor subtypes compared to the single-magnification image search.
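The two combination strategies the abstract contrasts (a single fused vector per slide vs. one vector per magnification) can be sketched in a simplified form. All names are illustrative, and "single-vector" is shown here as plain concatenation, which is only one of several plausible fusion choices:

```python
import numpy as np

def single_vector(features_by_mag):
    """Fuse per-magnification deep features into one vector per slide
    (here by simple concatenation in a fixed magnification order)."""
    return np.concatenate([features_by_mag[m] for m in sorted(features_by_mag)])

def multi_vector(features_by_mag):
    """Keep one feature vector per magnification; a search can then
    compare slides magnification-by-magnification and aggregate."""
    return {m: np.asarray(v, dtype=float) for m, v in features_by_mag.items()}

# Toy per-magnification features for one slide at 5x, 10x, and 20x
feats = {20: np.ones(4), 10: np.zeros(4), 5: np.full(4, 0.5)}
print(single_vector(feats).shape)  # three 4-dim vectors concatenated -> (12,)
```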
5
Khaled R, Helal M, Alfarghaly O, Mokhtar O, Elkorany A, El Kassas H, Fahmy A. Categorized contrast enhanced mammography dataset for diagnostic and artificial intelligence research. Sci Data 2022; 9:122. [PMID: 35354835] [PMCID: PMC8967853] [DOI: 10.1038/s41597-022-01238-0]
Abstract
Contrast-enhanced spectral mammography (CESM) is a relatively recent imaging modality with increased diagnostic accuracy compared to digital mammography (DM). New deep learning (DL) models have been developed with accuracies equal to that of an average radiologist; however, most studies trained the DL models on DM images, as no datasets existed for CESM images. We aim to resolve this limitation by releasing a Categorized Digital Database for Low-energy and Subtracted Contrast-Enhanced Spectral Mammography images (CDD-CESM) to evaluate decision support systems. The dataset includes 2006 images with an average resolution of 2355 × 1315, consisting of 310 mass images, 48 architectural distortion images, 222 asymmetry images, 238 calcification images, 334 mass enhancement images, 184 non-mass enhancement images, 159 postoperative images, 8 post neoadjuvant chemotherapy images, and 751 normal images, with 248 images having more than one finding. This is the first dataset to incorporate data selection, segmentation annotation, medical reports, and pathological diagnosis for all cases. Moreover, we propose and evaluate a DL-based technique to automatically segment abnormal findings in images.
Measurement(s): Dual-Energy Contrast-Enhanced Digital Spectral Mammography
Technology Type(s): digital curation
Sample Characteristic - Organism: Homo sapiens (Breast)
Sample Characteristic - Location: Egypt
6
Shamna P, Govindan V, Abdul Nazeer K. Content-based medical image retrieval by spatial matching of visual words. Journal of King Saud University - Computer and Information Sciences 2022. [DOI: 10.1016/j.jksuci.2018.10.002]
7
Medical Image Retrieval Using Empirical Mode Decomposition with Deep Convolutional Neural Network. Biomed Res Int 2021; 2020:6687733. [PMID: 33426062] [PMCID: PMC7781707] [DOI: 10.1155/2020/6687733]
Abstract
Content-based medical image retrieval (CBMIR) systems attempt to search medical image databases to narrow the semantic gap in medical image analysis. Representing high-level medical information effectively with features is a major challenge in CBMIR systems, as features play a vital role in the accuracy and speed of the search process. In this paper, we propose a deep convolutional neural network (CNN)-based framework to learn a concise feature vector for medical image retrieval. The medical images are decomposed into five components using empirical mode decomposition (EMD), the deep CNN is trained in a supervised way with the multicomponent input, and the learned features are used to retrieve medical images. The IRMA dataset, containing 11,000 X-ray images across 116 classes, is used to validate the proposed method. We achieve a total IRMA error of 43.21 and a mean average precision of 0.86 on the retrieval task, and an IRMA error of 68.48 and an F1 measure of 0.66 on the classification task, the best results reported for this dataset in the existing literature.
8
Kalra S, Tizhoosh HR, Shah S, Choi C, Damaskinos S, Safarpoor A, Shafiei S, Babaie M, Diamandis P, Campbell CJV, Pantanowitz L. Pan-cancer diagnostic consensus through searching archival histopathology images using artificial intelligence. NPJ Digit Med 2020; 3:31. [PMID: 32195366] [PMCID: PMC7064517] [DOI: 10.1038/s41746-020-0238-2]
Abstract
The emergence of digital pathology has opened new horizons for histopathology. Artificial intelligence (AI) algorithms are able to operate on digitized slides to assist pathologists with different tasks. Whereas AI-involving classification and segmentation methods have obvious benefits for image analysis, image search represents a fundamental shift in computational pathology. Matching the pathology of new patients with already diagnosed and curated cases offers pathologists a new approach to improve diagnostic accuracy through visual inspection of similar cases and computational majority vote for consensus building. In this study, we report the results from searching the largest public repository (The Cancer Genome Atlas, TCGA) of whole-slide images from almost 11,000 patients. We successfully indexed and searched almost 30,000 high-resolution digitized slides constituting 16 terabytes of data comprised of 20 million 1000 × 1000 pixels image patches. The TCGA image database covers 25 anatomic sites and contains 32 cancer subtypes. High-performance storage and GPU power were employed for experimentation. The results were assessed with conservative "majority voting" to build consensus for subtype diagnosis through vertical search and demonstrated high accuracy values for both frozen section slides (e.g., bladder urothelial carcinoma 93%, kidney renal clear cell carcinoma 97%, and ovarian serous cystadenocarcinoma 99%) and permanent histopathology slides (e.g., prostate adenocarcinoma 98%, skin cutaneous melanoma 99%, and thymoma 100%). The key finding of this validation study was that computational consensus appears to be possible for rendering diagnoses if a sufficiently large number of searchable cases are available for each cancer subtype.
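The "computational majority vote" described above amounts to tallying the diagnoses of retrieved similar slides. A minimal sketch (the function name and label strings are illustrative, not from the study):

```python
from collections import Counter

def consensus_diagnosis(retrieved_labels):
    """Majority vote over the subtype labels of retrieved similar cases,
    returning the winning label and its share of the votes."""
    label, votes = Counter(retrieved_labels).most_common(1)[0]
    return label, votes / len(retrieved_labels)

# Five retrieved slides, three of which carry the LUAD subtype label
print(consensus_diagnosis(["LUAD", "LUAD", "LUSC", "LUAD", "LUSC"]))  # ('LUAD', 0.6)
```

The study's observation is that this vote becomes reliable only when each subtype has enough searchable cases behind it.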
Affiliation(s)
- Shivam Kalra: Huron Digital Pathology, St. Jacobs, ON, Canada; Kimia Lab, University of Waterloo, Waterloo, ON, Canada
- H. R. Tizhoosh: Kimia Lab, University of Waterloo, Waterloo, ON, Canada; Vector Institute, MaRS Centre, Toronto, ON, Canada
- Clinton J. V. Campbell: Stem Cell and Cancer Research Institute and Department of Pathology and Molecular Medicine, McMaster University, Hamilton, Canada
- Liron Pantanowitz: Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, PA, USA
9
Wu L, Yang X, Cao W, Zhao K, Li W, Ye W, Chen X, Zhou Z, Liu Z, Liang C. Multiple Level CT Radiomics Features Preoperatively Predict Lymph Node Metastasis in Esophageal Cancer: A Multicentre Retrospective Study. Front Oncol 2020; 9:1548. [PMID: 32039021] [PMCID: PMC6985546] [DOI: 10.3389/fonc.2019.01548]
Abstract
Background: Lymph node (LN) metastasis is the most important prognostic factor in esophageal squamous cell carcinoma (ESCC). Traditional clinical factors and existing methods based on CT images are insufficiently effective for diagnosing LN metastasis, so a more efficient method to predict LN status from CT images is needed. Methods: In this multicenter retrospective study, 411 patients with pathologically confirmed ESCC were registered from two hospitals. Quantitative image features, including handcrafted, computer-vision (CV), and deep features, were extracted from preoperative arterial-phase CT images for each patient. Handcrafted-, CV-, and deep-radiomics signatures were built, and multiple radiomics models were then constructed by merging independent clinical risk factors into the radiomics signatures. The performance of the models was evaluated with respect to discrimination, calibration, and clinical usefulness, and an independent external validation cohort was used to validate predictive performance. Results: Five, seven, and nine of the extracted features were selected to build the handcrafted-, CV-, and deep-radiomics signatures, respectively. These signatures differed significantly between LN-positive and LN-negative patients in all cohorts (p < 0.001). The multiple-level CT radiomics model, which integrates the radiomics signatures with clinical risk factors, was superior to traditional clinical factors and previously reported methods, achieving satisfactory discrimination with a C-statistic of 0.875 in the development cohort, 0.874 in the internal validation cohort, and 0.840 in the independent external validation cohort. Nomogram and decision curve analysis (DCA) further confirmed that the method may serve as an effective tool for clinicians to evaluate the risk of LN metastasis in patients with ESCC and to choose a treatment strategy.
Conclusions: The proposed multiple-level CT radiomics model, which integrates multiple levels of radiomics features with clinical risk factors, can be used for preoperative prediction of LN metastasis in patients with ESCC.
Affiliation(s)
- Lei Wu, Xiaojun Yang, Ke Zhao, Zaiyi Liu, Changhong Liang: School of Medicine, South China University of Technology, Guangzhou, China; Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Wuteng Cao: School of Medicine, South China University of Technology, Guangzhou, China; Department of Radiology, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Wenli Li, Zhiyang Zhou: Department of Radiology, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Weitao Ye, Xin Chen: Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
10
Khatami A, Araghi S, Babaei T. Evaluating the performance of different classification methods on medical X-ray images. SN Applied Sciences 2019. [DOI: 10.1007/s42452-019-1174-0]
11
Ahn E, Kumar A, Fulham M, Feng D, Kim J. Convolutional sparse kernel network for unsupervised medical image analysis. Med Image Anal 2019; 56:140-151. [DOI: 10.1016/j.media.2019.06.005]
12
Oliveira PH, Scabora LC, Cazzolato MT, Oliveira WD, Paixao RS, Traina AJM, Traina C. Employing Domain Indexes to Efficiently Query Medical Data From Multiple Repositories. IEEE J Biomed Health Inform 2018; 23:2220-2229. [PMID: 30452381] [DOI: 10.1109/jbhi.2018.2881381]
Abstract
Content-based retrieval remains one of the main problems among the controversies and challenges of digital healthcare over big data. Properly addressing it requires efficient computational techniques, especially in scenarios involving queries across multiple data repositories. In such scenarios, the common computational approach searches the repositories separately and combines the results into one final response, which slows down the whole process. To improve query performance in that kind of scenario, we present the Domain Index, a new category of index structures intended to efficiently query a data domain across multiple repositories, regardless of the repository to which the data belong. To evaluate our method, we carried out experiments involving content-based queries, namely range and k-nearest-neighbor (kNN) queries, 1) over real-world data from a public dataset of mammograms and 2) over synthetic data for scalability evaluations. The results show that images from any repository are seamlessly retrieved, with performance gains of up to 53% in range queries and up to 81% in kNN queries. Regarding scalability, our proposal scaled well as we increased 1) the cardinality of the data (gains of up to 59%) and 2) the number of queried repositories (gains of up to 71%). Hence, our method enables significant performance improvements and should be of particular value to maintainers of medical data repositories and to physicians' IT support.
13
Blažun Vošner H, Železnik D, Kokol P. Bibliometric analysis of the International Medical Informatics Association official journals. Inform Health Soc Care 2018; 44:405-421. [PMID: 30351983] [DOI: 10.1080/17538157.2018.1525734]
Abstract
Objectives: This research article aims to analyze the bibliometric characteristics of the four official International Medical Informatics Association (IMIA) journals: the International Journal of Medical Informatics, Methods of Information in Medicine, Applied Clinical Informatics, and Informatics for Health and Social Care.
Method: We used descriptive bibliometrics to study trends in literature production and to identify document types, the most prolific authors, institutions, and countries, and the most cited publications of all four IMIA journals. Additionally, we visualized the content of the published work using bibliometric mapping to identify the journals' main themes and the most prolific and most cited research terms.
Results: In total, 6,837 publications appeared in the four IMIA journals. Among them were 5,137 original articles, making articles the leading document type. Research is conducted globally among various research institutions; the most prolific countries are the United States of America, the United Kingdom, Germany, the Netherlands, and Canada. Thematic cluster analyses show that themes overlap across all four journals.
Conclusion: The journals contribute to advances in technology related to health information systems, knowledge-based and decision-making systems, health literacy, and electronic health records.
Affiliation(s)
- Helena Blažun Vošner: Community Healthcare Center Dr. Adolf Drolc Maribor, Department for Science and Research, Maribor, Slovenia
- Danica Železnik: University College of Health Sciences Slovenj Gradec, Research Institute, Slovenj Gradec, Slovenia
- Peter Kokol: University of Maribor, Faculty of Electrical Engineering and Computer Science, Maribor, Slovenia
14
15
Gefeller O, Aronsky D, Leong TY, Sarkar IN, Bergemann D, Lindberg DAB, van Bemmel JH, Haux R, McCray AT. The Birth and Evolution of a Discipline Devoted to Information in Biomedicine and Health Care. Methods Inf Med 2018; 50:491-507. [DOI: 10.3414/me11-06-0001]
Abstract
Background: The journal Methods of Information in Medicine, founded in 1962, has now completed its 50th volume. Its publications during the last five decades reflect the formation of a discipline that deals with information in biomedicine and health care.
Objectives: To report on 1) the journal's origin, 2) the individuals who have significantly contributed to it, 3) trends in the journal's aims and scope, 4) influential papers, and 5) major topics published in Methods over the years.
Methods: Methods included analysing the correspondence and journal issues in the archives of the editorial office and of the publisher, citation analysis using the ISI and Scopus databases, and analysing the articles' Medical Subject Headings (MeSH) in MEDLINE.
Results: In the journal's first 50 years, 208 editorial board members and/or editors contributed to the journal's development, with most individuals coming from Europe and North America. The median time of service was 11 years. At the time of analysis, 2,456 articles had been indexed with MeSH. Topics included computerized systems of various types, informatics methodologies, and topics related to specific medical domains. Some MeSH topic entries were heavily and regularly represented in each of the journal's five decades (e.g., information systems and medical records), while others were important in a particular decade but not in others (e.g., punched-card systems and systems integration). Seven papers were cited more than 100 times; these also covered a broad range of themes, such as knowledge representation, analysis of biomedical data and knowledge, clinical decision support, and electronic patient records.
Conclusions: Methods of Information in Medicine is the oldest international journal in biomedical informatics, and its development over the last 50 years correlates with the formation of this new discipline. It has stressed, and continues to stress, the basic methodology and scientific fundamentals of organizing, representing, and analysing data, information, and knowledge in biomedicine and health care, and it continues to stimulate multi-disciplinary communication on research devoted to high-quality, efficient health care, to quality of life, and to the progress of biomedicine and the health sciences.
16
Lee RS, Gimenez F, Hoogi A, Miyake KK, Gorovoy M, Rubin DL. A curated mammography data set for use in computer-aided detection and diagnosis research. Sci Data 2017; 4:170177. [PMID: 29257132] [PMCID: PMC5735920] [DOI: 10.1038/sdata.2017.177]
Abstract
Published research results are difficult to replicate due to the lack of a standard evaluation data set in the area of decision support systems in mammography; most computer-aided diagnosis (CADx) and detection (CADe) algorithms for breast cancer in mammography are evaluated on private data sets or on unspecified subsets of public databases. This causes an inability to directly compare the performance of methods or to replicate prior results. We seek to resolve this substantial challenge by releasing an updated and standardized version of the Digital Database for Screening Mammography (DDSM) for evaluation of future CADx and CADe systems (sometimes referred to generally as CAD) research in mammography. Our data set, the CBIS-DDSM (Curated Breast Imaging Subset of DDSM), includes decompressed images, data selection and curation by trained mammographers, updated mass segmentation and bounding boxes, and pathologic diagnosis for training data, formatted similarly to modern computer vision data sets. The data set contains 753 calcification cases and 891 mass cases, providing a data-set size capable of analyzing decision support systems in mammography.
Affiliation(s)
- Rebecca Sawyer Lee: Biomedical Informatics Training Program, Stanford University, Stanford, CA 94305, USA
- Francisco Gimenez: Biomedical Informatics Training Program, Stanford University, Stanford, CA 94305, USA
- Assaf Hoogi: Department of Radiology and Medicine (Biomedical Informatics Research), Stanford University, Stanford, CA 94305, USA
- Kanae Kawai Miyake: Department of Radiology (Breast Imaging), Stanford University, Stanford, CA 94305, USA
- Mia Gorovoy: Department of Radiology (Breast Imaging), Stanford University, Stanford, CA 94305, USA
- Daniel L. Rubin: Department of Radiology and Medicine (Biomedical Informatics Research), Stanford University, Stanford, CA 94305, USA
17. Li Z, Zhang X, Müller H, Zhang S. Large-scale retrieval for medical image analytics: A comprehensive review. Med Image Anal 2017; 43:66-84. [PMID: 29031831] [DOI: 10.1016/j.media.2017.09.007]
Abstract
Over the past decades, medical image analytics has been greatly facilitated by the explosion of digital imaging techniques, with huge amounts of medical images produced at ever-increasing quality and diversity. However, conventional methods for analyzing medical images have achieved limited success because they cannot cope with such large volumes of image data. In this paper, we review state-of-the-art approaches for large-scale medical image analysis, which are mainly based on recent advances in computer vision, machine learning, and information retrieval. Specifically, we first present the general pipeline of large-scale retrieval and summarize the challenges and opportunities of medical image analytics at large scale. Then, we provide a comprehensive review of algorithms and techniques relevant to the major processes in the pipeline, including feature representation, feature indexing, and searching. On the basis of existing work, we introduce the evaluation protocols and multiple applications of large-scale medical image retrieval, covering a variety of exploratory and diagnostic scenarios. Finally, we discuss future directions of large-scale retrieval that can further improve the performance of medical image analysis.
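The retrieval pipeline surveyed above (feature representation, indexing, searching) bottoms out in nearest-neighbor search. A minimal brute-force baseline, with illustrative names not taken from the review, is the exhaustive scan that every index structure tries to beat:

```python
import numpy as np

def nearest_neighbors(db, query, k=5):
    """Brute-force k-NN by Euclidean distance over a feature matrix.

    db: (n_images, dim) feature matrix; query: (dim,) feature vector.
    This O(n) exhaustive scan is the baseline that index structures
    (trees, hashing, graph links) aim to accelerate.
    """
    d2 = ((db - query) ** 2).sum(axis=1)  # squared distance to every point
    idx = np.argsort(d2)[:k]              # indices of the k closest points
    return idx, np.sqrt(d2[idx])
```

Any approximate method from the review can be checked against this scan, since it returns the exact neighbors.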
Affiliation(s)
- Zhongyu Li: Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC 28223, USA
- Xiaofan Zhang: Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC 28223, USA
- Henning Müller: Information Systems Institute, HES-SO Valais, Sierre, Switzerland
- Shaoting Zhang: Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC 28223, USA
18. Ahmad J, Sajjad M, Mehmood I, Baik SW. SiNC: Saliency-injected neural codes for representation and efficient retrieval of medical radiographs. PLoS One 2017; 12:e0181707. [PMID: 28771497] [PMCID: PMC5542646] [DOI: 10.1371/journal.pone.0181707]
Abstract
Medical image collections contain a wealth of information that can assist radiologists and medical experts in diagnosis and disease detection, supporting well-informed decisions. However, this objective can only be realized if efficient access is provided to semantically relevant cases in ever-growing medical image repositories. In this paper, we present an efficient method for representing medical images by incorporating visual saliency and deep features obtained from a fine-tuned convolutional neural network (CNN) pre-trained on natural images. A saliency detector is employed to automatically identify regions of interest, such as tumors, fractures, and calcified spots, prior to feature extraction. Neuronal activation features, termed neural codes, from different CNN layers are comprehensively studied to identify the most appropriate features for representing radiographs. This study revealed that neural codes from the last fully connected layer of the fine-tuned CNN are the most suitable for representing medical images. The neural codes extracted from the entire image and from the salient part of the image are fused to obtain the saliency-injected neural codes (SiNC) descriptor, which is used for indexing and retrieval. Finally, locality-sensitive hashing is applied to the SiNC descriptor to acquire short binary codes that allow efficient retrieval in large-scale image collections. Comprehensive experimental evaluations on the radiology images dataset reveal that the proposed framework achieves high retrieval accuracy and efficiency for scalable image retrieval applications and compares favorably with existing approaches.
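The final hashing step can be sketched with random-hyperplane locality-sensitive hashing; the abstract does not specify which LSH family the authors use, so the family and the function names below are illustrative:

```python
import numpy as np

def lsh_codes(features, n_bits=16, seed=0):
    """Random-hyperplane LSH: the sign pattern of projections onto
    random hyperplanes yields a short binary code per descriptor."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((features.shape[1], n_bits))
    return (features @ planes > 0).astype(np.uint8)

def hamming(a, b):
    """Hamming distance between two binary codes (cheap to compare)."""
    return int((a != b).sum())
```

Similar descriptors fall on the same side of most hyperplanes, so small Hamming distance between codes approximates high cosine similarity between the original descriptors.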
Affiliation(s)
- Jamil Ahmad: College of Software and Convergence Technology, Department of Software, Sejong University, Seoul, Republic of Korea
- Muhammad Sajjad: Digital Image Processing Lab, Department of Computer Science, Islamia College, Peshawar, Pakistan
- Irfan Mehmood: Department of Computer Science and Engineering, Sejong University, Seoul, Republic of Korea
- Sung Wook Baik: College of Software and Convergence Technology, Department of Software, Sejong University, Seoul, Republic of Korea
19. Xu Y, Shen F, Xu X, Gao L, Wang Y, Tan X. Large-scale image retrieval with supervised sparse hashing. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2016.05.109]
20. de Lima SML, da Silva-Filho AG, Dos Santos WP. Detection and classification of masses in mammographic images in a multi-kernel approach. Comput Methods Programs Biomed 2016; 134:11-29. [PMID: 27480729] [DOI: 10.1016/j.cmpb.2016.04.029]
Abstract
BACKGROUND AND OBJECTIVE According to the World Health Organization, breast cancer is the main cause of cancer death among adult women in the world. Although breast cancer occurs indiscriminately in countries at all levels of social and economic development, mortality rates remain high in developing and underdeveloped countries due to the low availability of early detection technologies. From the clinical point of view, mammography is still the most effective diagnostic technology, given the wide diffusion of the use and interpretation of these images. METHODS In this work we propose a method to detect and classify mammographic lesions using regions of interest of images. Our proposal consists of decomposing each image using multi-resolution wavelets. Zernike moments are extracted from each wavelet component. Using this approach, we can combine texture and shape features, which can be applied to both the detection and classification of mammary lesions. We used 355 images of fatty breast tissue from the IRMA database, with 233 normal instances (no lesion), 72 benign cases, and 83 malignant cases. RESULTS Classification was performed using SVM and ELM networks with modified kernels in order to optimize accuracy rates, reaching 94.11%. Considering both accuracy rates and training times, we defined the ratio of average percentage accuracy to average training time; our proposal's ratio was 50 times higher than that obtained using state-of-the-art approaches. CONCLUSIONS Because the proposed model combines a high accuracy rate with a short learning time, it can save hours of learning whenever new data are received, relative to the best state-of-the-art method.
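One level of the multi-resolution decomposition the method builds on can be sketched with the Haar wavelet; this is a minimal illustration, and the paper's exact wavelet family and the Zernike-moment extraction step are omitted:

```python
import numpy as np

def haar_level(img):
    """One level of a 2-D Haar wavelet transform on an even-sized image.

    Returns the approximation band (ll) and three detail bands built
    from the four pixels of each 2x2 block.
    """
    a = img[0::2, 0::2]; b = img[0::2, 1::2]   # top-left, top-right
    c = img[1::2, 0::2]; d = img[1::2, 1::2]   # bottom-left, bottom-right
    ll = (a + b + c + d) / 4.0   # block averages (approximation)
    lh = (a - b + c - d) / 4.0   # differences across columns
    hl = (a + b - c - d) / 4.0   # differences across rows
    hh = (a - b - c + d) / 4.0   # diagonal differences
    return ll, lh, hl, hh
```

Shape descriptors such as Zernike moments would then be computed on each band, combining texture (detail bands) and shape information as the abstract describes.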
Affiliation(s)
- Sidney M L de Lima: Center of Informatics-CIn, Federal University of Pernambuco, UFPE, Recife, Brazil
21.
Abstract
Content-based medical image retrieval (CBMIR) is an active research area for disease diagnosis and treatment, but it can be problematic given the small visual variations between anatomical structures. We propose a retrieval method based on a bag-of-visual-words (BoVW) model to identify discriminative characteristics between different medical images, using a Pruned Dictionary based on Latent Semantic Topic description; we refer to this as PD-LST retrieval. Our method has two main components. First, we calculate a topic-word significance value for each visual word given a certain latent topic to evaluate how the word is connected to this latent topic. The latent topics are learnt based on the relationship between the images and words, and are employed to bridge the gap between low-level visual features and high-level semantics. These latent topics describe the images and words semantically and can thus facilitate more meaningful comparisons between the words. Second, we compute an overall-word significance value to evaluate the significance of a visual word within the entire dictionary. We designed an iterative ranking method to measure overall-word significance by considering the relationship between all latent topics and words. The words with higher values are considered meaningful, with more significant discriminative power in differentiating medical images. We evaluated our method on two public medical imaging datasets, and it showed improved retrieval accuracy and efficiency.
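The BoVW backbone of the method reduces an image to a histogram over a visual dictionary; the PD-LST pruning and significance ranking are specific to the paper and not reproduced here, so this sketch with illustrative names covers only the standard quantization step:

```python
import numpy as np

def bovw_histogram(descriptors, dictionary):
    """Assign each local descriptor to its nearest visual word and
    return the normalized word-count histogram describing the image.

    descriptors: (n_desc, dim) local features; dictionary: (n_words, dim).
    """
    d2 = ((descriptors[:, None, :] - dictionary[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)                       # nearest word per descriptor
    hist = np.bincount(words, minlength=len(dictionary)).astype(float)
    return hist / hist.sum()
```

Word pruning in the PD-LST spirit would amount to dropping low-significance columns of this histogram before comparing images.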
22. Markonis D, Holzer M, Baroz F, De Castaneda RLR, Boyer C, Langs G, Müller H. User-oriented evaluation of a medical image retrieval system for radiologists. Int J Med Inform 2015; 84:774-83. [DOI: 10.1016/j.ijmedinf.2015.04.003]
23. Zhang F, Song Y, Cai W, Liu S, Liu S, Pujol S, Kikinis R, Xia Y, Fulham MJ, Feng DD; Alzheimer's Disease Neuroimaging Initiative. Pairwise Latent Semantic Association for Similarity Computation in Medical Imaging. IEEE Trans Biomed Eng 2015; 63:1058-1069. [PMID: 26372117] [DOI: 10.1109/tbme.2015.2478028]
Abstract
Retrieving medical images that present similar diseases is an active research area for diagnostics and therapy. However, it can be problematic given the visual variations between anatomical structures. In this paper, we propose a new feature extraction method for similarity computation in medical imaging. Instead of the low-level visual appearance, we design a CCA-PairLDA feature representation method to capture the similarity between images with high-level semantics. First, we extract the PairLDA topics to represent an image as a mixture of latent semantic topics in an image pair context. Second, we generate a CCA-correlation model to represent the semantic association between an image pair for similarity computation. While PairLDA adjusts the latent topics for all image pairs, CCA-correlation helps to associate an individual image pair. In this way, the semantic descriptions of an image pair are closely correlated, and naturally correspond to similarity computation between images. We evaluated our method on two public medical imaging datasets for image retrieval and showed improved performance.
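The CCA-correlation step rests on canonical correlation analysis. A compact, QR-based way to compute canonical correlations between two views (for instance, the two topic descriptions of an image pair) is sketched below; it is illustrative, not the authors' implementation:

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between two data views X (n, p) and Y (n, q).

    Center each view, take orthonormal bases via QR, and read the
    correlations off as singular values of Qx^T Qy.
    """
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return np.clip(s, 0.0, 1.0)   # guard tiny numerical overshoot
```

High canonical correlations indicate strongly associated semantic descriptions, which is the quantity the CCA-PairLDA similarity builds on.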
Affiliation(s)
- Fan Zhang: Biomedical and Multimedia Information Technology (BMIT) Research Group, School of Information Technologies, University of Sydney, Sydney, NSW, Australia
- Yang Song: BMIT Research Group, School of Information Technologies, University of Sydney
- Weidong Cai: BMIT Research Group, School of Information Technologies, University of Sydney
- Sidong Liu: BMIT Research Group, School of Information Technologies, University of Sydney
- Siqi Liu: BMIT Research Group, School of Information Technologies, University of Sydney
- Sonia Pujol: Surgical Planning Lab, Brigham & Women's Hospital, Harvard Medical School
- Ron Kikinis: Surgical Planning Lab, Brigham & Women's Hospital, Harvard Medical School
- Yong Xia: Shaanxi Key Lab of Speech and Image Information Processing, School of Computer Science and Technology, Northwestern Polytechnical University
- Michael J Fulham: Department of PET and Nuclear Medicine, Royal Prince Alfred Hospital
- David Dagan Feng: BMIT Research Group, School of Information Technologies, University of Sydney
24. Cao Y, Steffey S, He J, Xiao D, Tao C, Chen P, Müller H. Medical Image Retrieval: A Multimodal Approach. Cancer Inform 2015; 13:125-36. [PMID: 26309389] [PMCID: PMC4533857] [DOI: 10.4137/cin.s14053]
Abstract
Medical imaging is becoming a vital component of the war on cancer. Tremendous amounts of medical image data are captured and recorded in digital format during cancer care and cancer research. Facing such an unprecedented volume of image data with heterogeneous image modalities, it is necessary to develop effective and efficient content-based medical image retrieval systems for cancer clinical practice and research. While substantial progress has been made in different areas of content-based image retrieval (CBIR) research, direct application of existing CBIR techniques to medical images has produced unsatisfactory results because of the unique characteristics of medical images. In this paper, we develop a new multimodal medical image retrieval approach based on recent advances in statistical graphical models and deep learning. Specifically, we first investigate an extended probabilistic Latent Semantic Analysis model to integrate the visual and textual information from medical images and bridge the semantic gap. We then develop a deep Boltzmann machine-based multimodal learning model that learns a joint density over the multimodal information in order to derive a missing modality. Experimental results with a large volume of real-world medical images show that our new approach is a promising solution for next-generation medical image indexing and retrieval systems.
Affiliation(s)
- Yu Cao: Department of Computer Science, The University of Massachusetts Lowell, Lowell, MA, USA
- Shawn Steffey: Department of Computer Science, The University of Massachusetts Lowell, Lowell, MA, USA
- Jianbiao He: School of Information Science and Engineering, Central South University, Changsha, PR China
- Degui Xiao: College of Computer Science and Electronic Engineering, Hunan University, Changsha, PR China
- Cui Tao: School of Biomedical Informatics, The University of Texas, Health Science Center at Houston, Houston, TX, USA
- Ping Chen: Department of Computer Science, University of Massachusetts Boston, Boston, MA, USA
- Henning Müller: Department of Business Information Systems, University of Applied Sciences Western Switzerland (HES-SO), Medical Informatics, University Hospitals and University of Geneva, Geneva, Switzerland
25. Accelerating content-based image retrieval via GPU-adaptive index structure. ScientificWorldJournal 2014; 2014:829059. [PMID: 24782668] [PMCID: PMC3980781] [DOI: 10.1155/2014/829059]
Abstract
A tremendous amount of work has been conducted in content-based image retrieval (CBIR) on designing effective index structures to accelerate the retrieval process. Most of them improve retrieval efficiency via complex index structures, and few take into account their parallel implementation on the underlying hardware, so existing index structures suffer from a low degree of parallelism. In this paper, a novel graphics processing unit (GPU) adaptive index structure, termed the plane semantic ball (PSB), is proposed to simultaneously reduce the work of the retrieval process and exploit the parallel acceleration of the underlying hardware. In PSB, semantics are embedded into the generation of representative pivots, and multiple balls are selected to cover more informative reference features. With PSB, the online retrieval stage of CBIR is factorized into independent components that are implemented efficiently on the GPU. Comparative experiments with a GPU-based brute-force approach demonstrate that the proposed approach achieves high speedup with little information loss. Furthermore, PSB is compared with a state-of-the-art approach, random ball cover (RBC), on two standard image datasets, Corel 10K and GIST 1M. Experimental results show that our approach achieves higher speedup than RBC at the same accuracy level.
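The pivot-and-ball idea behind PSB and RBC (assign points to representative pivots, then probe only the balls whose pivots lie near the query) can be sketched on the CPU; the names and the n_probe parameter below are illustrative, and the GPU factorization is not reproduced:

```python
import numpy as np

def build_balls(db, pivots):
    """Assign every database point to its nearest pivot ('ball')."""
    d2 = ((db[:, None, :] - pivots[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

def ball_search(db, pivots, owner, query, n_probe=1):
    """Probe only the n_probe balls whose pivots are closest to the
    query, scanning just their members instead of the whole database."""
    pd = ((pivots - query) ** 2).sum(-1)
    probe = np.argsort(pd)[:n_probe]
    cand = np.where(np.isin(owner, probe))[0]    # members of probed balls
    cd = ((db[cand] - query) ** 2).sum(-1)
    return cand[cd.argmin()]                     # best candidate's db index
```

Because each ball's scan is independent, exactly this kind of decomposition maps naturally onto parallel hardware, which is the point the paper exploits.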
26. A novel similarity learning method via relative comparison for content-based medical image retrieval. J Digit Imaging 2014; 26:850-65. [PMID: 23563792] [DOI: 10.1007/s10278-013-9591-x]
Abstract
Nowadays, the huge volume of medical images represents an enormous challenge for health-care organizations, as it is often hard for clinicians and researchers to manage, access, and share the image database easily. Content-based medical image retrieval (CBMIR) techniques are employed to facilitate this process. A few concrete factors, including the visual attributes extracted from images, the measures encoding the similarity between images, and user interaction, play important roles in determining retrieval performance. This paper concentrates on the similarity learning problem of CBMIR. A novel similarity learning paradigm is proposed via relative comparison, and a large database composed of 5,000 images is utilized to evaluate retrieval performance. Extensive experimental results and comprehensive statistical analysis demonstrate the superiority of the newly introduced learning paradigm over several conventional supervised and semi-supervised similarity learning methods in the presented CBMIR application.
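The core idea of learning similarity from relative comparisons ("image i is more similar to j than to k") can be sketched as hinge-loss subgradient descent on a diagonal feature weighting; this specific learner is a simple illustration under assumed names, not the paper's formulation:

```python
import numpy as np

def learn_diag_metric(triplets, X, dim, lr=0.1, epochs=200):
    """Learn a nonnegative diagonal feature weighting w from relative
    comparisons (i, j, k) meaning 'i is closer to j than to k',
    via subgradient descent on a unit-margin hinge loss."""
    w = np.ones(dim)
    for _ in range(epochs):
        for i, j, k in triplets:
            near = (X[i] - X[j]) ** 2
            far = (X[i] - X[k]) ** 2
            margin = 1.0 + w @ near - w @ far
            if margin > 0:                 # comparison violated: update w
                w -= lr * (near - far)
        w = np.maximum(w, 0.0)             # keep the weighting nonnegative
    return w
```

Dimensions that contradict the comparisons get driven toward zero weight, so the learned distance focuses on the features that matter for the relative judgments.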
27. De S, Stanley RJ, Cheng B, Antani S, Long R, Thoma G. Automated Text Detection and Recognition in Annotated Biomedical Publication Images. International Journal of Healthcare Information Systems and Informatics 2014. [DOI: 10.4018/ijhisi.2014040103]
Abstract
Images in biomedical publications often convey important information related to an article's content. When referenced properly, these images aid in clinical decision support. Annotations such as text labels and symbols, as provided by medical experts, are used to highlight regions of interest within the images. These annotations, if extracted automatically, could be used in conjunction with either the image caption text or the image citations (mentions) in the articles to improve biomedical information retrieval. In the current study, automatic detection and recognition of text labels in biomedical publication images was investigated. This paper presents both image analysis and feature-based approaches to extract and recognize specific regions of interest (text labels) within images in biomedical publications. Experiments were performed on 6515 characters extracted from text labels present in 200 biomedical publication images. These images are part of the data set from ImageCLEF 2010. Automated character recognition experiments were conducted using geometry-, region-, exemplar-, and profile-based correlation features and Fourier descriptors extracted from the characters. Correct recognition as high as 92.67% was obtained with a support vector machine classifier, compared to a 75.90% correct recognition rate with a benchmark Optical Character Recognition technique.
Affiliation(s)
- Soumya De: Department of Electrical and Computer Engineering, Missouri University of Science and Technology, Rolla, Missouri, USA
- R. Joe Stanley: Department of Electrical and Computer Engineering, Missouri University of Science and Technology, Rolla, Missouri, USA
- Beibei Cheng: Department of Electrical and Computer Engineering, Missouri University of Science and Technology, Rolla, Missouri, USA
- Sameer Antani: Lister Hill National Center for Biomedical Communications, National Library of Medicine, National Institutes of Health, Bethesda, Maryland, USA
- Rodney Long: Lister Hill National Center for Biomedical Communications, National Library of Medicine, National Institutes of Health, Bethesda, Maryland, USA
- George Thoma: Lister Hill National Center for Biomedical Communications, National Library of Medicine, National Institutes of Health, Bethesda, Maryland, USA
28. Harmsen M, Fischer B, Schramm H, Seidl T, Deserno TM. Support vector machine classification based on correlation prototypes applied to bone age assessment. IEEE J Biomed Health Inform 2012. [PMID: 23192601] [DOI: 10.1109/titb.2012.2228211]
Abstract
Bone age assessment (BAA) on hand radiographs is a frequent and time-consuming task in radiology. We present a method for (semi)automatic BAA performed in several steps: (i) extract 14 epiphyseal regions from the radiographs; (ii) for each region, obtain image features using the IRMA framework; (iii) use these features to build a classifier model (training phase); (iv) evaluate performance with cross-validation schemes (testing phase); (v) classify unknown hand images (application phase). In this paper, we combine a support vector machine (SVM) with cross-correlation to a prototype image for each class. These prototypes are obtained by choosing one random hand per class. A systematic evaluation is presented comparing nominal- and real-valued SVMs with k-nearest-neighbor (kNN) classification on 1,097 hand radiographs of 30 diagnostic classes (0-19 years). Mean error in age prediction is 1.0 and 0.83 years for 5-NN and SVM, respectively. Accuracy of the nominal- and real-valued SVM based on 6 prominent regions (prototypes) is 91.57% and 96.16%, respectively, for an accepted age range of about two years.
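The correlation-prototype features feeding the SVM can be sketched as normalized cross-correlation of a region against one prototype image per class; the function names are illustrative and the IRMA feature step is omitted:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized images:
    +1 for identical patterns, -1 for inverted ones."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def prototype_features(img, prototypes):
    """Feature vector: correlation of the image with one prototype per
    class; a classifier (the SVM in the paper) is trained on these."""
    return np.array([ncc(img, p) for p in prototypes])
```

A trivial nearest-prototype classifier simply takes the argmax of this feature vector; the paper instead trains an SVM on the full correlation vectors.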
29. Cavallaro A, Kriegel HP, Petri M, Schubert M. Semantic localization-driven partial image retrieval in CT series. Methods Inf Med 2012; 51:557-65. [PMID: 23154618] [DOI: 10.3414/me11-02-0028]
Abstract
BACKGROUND Picture archiving and communication systems (PACS) contain very large amounts of computed tomography (CT) data. When querying a PACS for a particular series, the user is often not interested in the complete series but in a certain region of interest (ROI), described e.g. by an example view in another series or by an anatomical concept. OBJECTIVES Restricting a retrieval query to such an ROI saves both loading time and navigational effort. In this paper, we propose an efficient method for defining and retrieving ROIs. METHODS We employ interpolation and regression techniques to map the slices of a series onto a newly generated standardized height atlas of the human body. RESULTS Examinations of the accuracy and the saved input/output (I/O) costs of our new method on a repository of 1,360 CT series demonstrate the advantages of our system. Depending on the scope of the retrieval query, we can save up to 99% of the total loading time. CONCLUSION Our proposed method for flexible, context-based, partial image retrieval enables the user to focus directly on the relevant portion of the image material, and it exploits the high potential for I/O cost reduction in a common PACS.
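The slice-to-atlas mapping can be sketched as a least-squares line fit from landmark slice indices to standardized atlas heights, inverted to obtain the slice range covering an atlas ROI; the names here are illustrative, and the paper's interpolation details are not reproduced:

```python
import numpy as np

def fit_slice_to_atlas(slice_idx, atlas_height):
    """Least-squares line mapping slice indices of a CT series to
    standardized atlas heights (height = a * index + b)."""
    A = np.vstack([slice_idx, np.ones_like(slice_idx)]).T
    (a, b), *_ = np.linalg.lstsq(A, atlas_height, rcond=None)
    return a, b

def slices_for_roi(a, b, lo, hi):
    """Invert the mapping to get the slice range covering the atlas
    interval [lo, hi], so only those slices need to be loaded."""
    return (lo - b) / a, (hi - b) / a
```

Loading only the returned slice range, instead of the whole series, is exactly where the reported I/O savings come from.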
Affiliation(s)
- A Cavallaro: Institute for Informatics, Ludwig-Maximilians-Universität München, Oettingenstr. 67, 80538 Munich, Germany
30. Welter P, Fischer B, Günther RW, Deserno né Lehmann TM. Generic integration of content-based image retrieval in computer-aided diagnosis. Comput Methods Programs Biomed 2012; 108:589-599. [PMID: 21975083] [DOI: 10.1016/j.cmpb.2011.08.010]
Abstract
Content-based image retrieval (CBIR) offers proven benefits for computer-aided diagnosis (CAD) but is still not well established in radiological routine. An essential factor is the integration gap between CBIR systems and clinical information systems. The international initiative Integrating the Healthcare Enterprise (IHE) aims at improving the interoperability of medical computer systems. We took into account deficiencies in the IHE compliance of current picture archiving and communication systems (PACS) and developed an intermediate integration scheme based on the IHE post-processing workflow integration profile (PWF), adapted to CBIR in CAD. The Image Retrieval in Medical Applications (IRMA) framework was used to apply our integration scheme exemplarily, resulting in an application called IRMAcon. The novel IRMAcon scheme provides a generic, convenient, and reliable integration of CBIR systems into clinical systems and workflows. Based on the IHE PWF and designed to grow in step with the IHE compliance of the particular PACS, it provides sustainability and fosters CBIR in CAD.
Affiliation(s)
- Petra Welter: Department of Medical Informatics, RWTH Aachen University of Technology, and Department of Diagnostic Radiology, RWTH Aachen University Hospital, Pauwelsstraße 30, 52074 Aachen, Germany
31. Deserno TM, Welter P, Horsch A. Towards a repository for standardized medical image and signal case data annotated with ground truth. J Digit Imaging 2012; 25:213-26. [PMID: 22075810] [DOI: 10.1007/s10278-011-9428-4]
Abstract
Validation of medical signal and image processing systems requires quality-assured, representative and generally acknowledged databases accompanied by appropriate reference (ground truth) and clinical metadata, which are composed laboriously for each project and are not shared with the scientific community. In our vision, such data will be stored centrally in an open repository. We propose an architecture for a standardized case data and ground truth information repository supporting the evaluation and analysis of computer-aided diagnosis based on (a) the Reference Model for an Open Archival Information System (OAIS) provided by the NASA Consultative Committee for Space Data Systems (ISO 14721:2003), (b) the Dublin Core Metadata Initiative (DCMI) Element Set (ISO 15836:2009), (c) the Open Archive Initiative (OAI) Protocol for Metadata Harvesting, and (d) the Image Retrieval in Medical Applications (IRMA) framework. In our implementation, a portal bunches all of the functionalities that are needed for data submission and retrieval. The complete life cycle of the data (define, create, store, sustain, share, use, and improve) is managed. Sophisticated search tools make it easier to use the datasets, which may be merged from different providers. An integrated history record guarantees reproducibility. A standardized creation report is generated with a permanent digital object identifier. This creation report must be referenced by all of the data users. Peer-reviewed e-publishing of these reports will create a reputation for the data contributors and will form de-facto standards regarding image and signal datasets. Good practice guidelines for validation methodology complement the concept of the case repository. This procedure will increase the comparability of evaluation studies for medical signal and image processing methods and applications.
Affiliation(s)
- Thomas M Deserno: Dept. of Medical Informatics, RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
32. Wang S, Summers RM. Machine learning and radiology. Med Image Anal 2012; 16:933-51. [PMID: 22465077] [PMCID: PMC3372692] [DOI: 10.1016/j.media.2012.02.005]
Abstract
In this paper, we give a short introduction to machine learning and survey its applications in radiology. We focus on six categories of applications in radiology: medical image segmentation; registration; computer-aided detection and diagnosis; brain function or activity analysis and neurological disease diagnosis from fMR images; content-based image retrieval systems for CT or MRI images; and text analysis of radiology reports using natural language processing (NLP) and natural language understanding (NLU). This survey shows that machine learning plays a key role in many radiology applications. Machine learning identifies complex patterns automatically and helps radiologists make intelligent decisions on radiology data such as conventional radiographs, CT, MRI, and PET images, and radiology reports. In many applications, the performance of machine learning-based automatic detection and diagnosis systems has been shown to be comparable to that of well-trained and experienced radiologists. Technology development in machine learning and radiology will benefit each other in the long run. Key contributions and common characteristics of machine learning techniques in radiology are discussed. We also discuss the problem of translating machine learning applications to the radiology clinical setting, including advantages and potential barriers.
Affiliation(s)
- Shijun Wang: Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Building 10 Room 1C224D MSC 1182, Bethesda, MD 20892-1182
- Ronald M. Summers: Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Building 10 Room 1C224D MSC 1182, Bethesda, MD 20892-1182
33. Welter P, Deserno TM, Fischer B, Günther RW, Spreckelsen C. Towards case-based medical learning in radiological decision making using content-based image retrieval. BMC Med Inform Decis Mak 2011; 11:68. [PMID: 22032775] [PMCID: PMC3217894] [DOI: 10.1186/1472-6947-11-68]
Abstract
BACKGROUND Radiologists' training is based on intensive practice and can be improved with the use of diagnostic training systems. However, existing systems typically require laboriously prepared training cases and lack integration into the clinical environment with a proper learning scenario. Consequently, diagnostic training systems advancing decision-making skills are not well established in radiological education. METHODS We investigated didactic concepts and appraised methods appropriate to the radiology domain, as follows: (i) Adult learning theories stress the importance of work-related practice gained in a team of problem-solvers; (ii) Case-based reasoning (CBR) parallels the human problem-solving process; (iii) Content-based image retrieval (CBIR) can be useful for computer-aided diagnosis (CAD). To overcome the known drawbacks of existing learning systems, we developed the concept of image-based case retrieval for radiological education (IBCR-RE). The IBCR-RE diagnostic training is embedded into a didactic framework based on the Seven Jump approach, which is well established in problem-based learning (PBL). In order to provide a learning environment that is as similar as possible to radiological practice, we have analysed the radiological workflow and environment. RESULTS We mapped the IBCR-RE diagnostic training approach into the Image Retrieval in Medical Applications (IRMA) framework, resulting in the proposed concept of the IRMAdiag training application. IRMAdiag makes use of the modular structure of IRMA and comprises (i) the IRMA core, i.e., the IRMA CBIR engine; and (ii) the IRMAcon viewer. We propose embedding IRMAdiag into hospital information technology (IT) infrastructure using the standard protocols Digital Imaging and Communications in Medicine (DICOM) and Health Level Seven (HL7). Furthermore, we present a case description and a scheme of planned evaluations to comprehensively assess the system. 
CONCLUSIONS The IBCR-RE paradigm incorporates a novel combination of essential aspects of diagnostic learning in radiology: (i) Provision of work-relevant experiences in a training environment integrated into the radiologist's working context; (ii) Up-to-date training cases that do not require cumbersome preparation because they are provided by routinely generated electronic medical records; (iii) Support of the way adults learn while remaining suitable for the patient- and problem-oriented nature of medicine. Future work will address unanswered questions to complete the implementation of the IRMAdiag trainer.
Affiliation(s)
- Petra Welter
- Department of Medical Informatics, RWTH Aachen University of Technology, Germany.
34
Terrestrial Remotely Sensed Imagery in Support of Public Health: New Avenues of Research Using Object-Based Image Analysis. Remote Sensing 2011. [DOI: 10.3390/rs3112321]
35
Zhou X, Stern R, Müller H. Case-based fracture image retrieval. Int J Comput Assist Radiol Surg 2011; 7:401-11. [PMID: 21800188 DOI: 10.1007/s11548-011-0643-8]
Abstract
PURPOSE Case-based fracture image retrieval can assist surgeons in decisions regarding new cases by supplying visually similar past cases. This tool may guide fracture fixation and management through comparison of long-term outcomes in similar cases. METHODS A fracture image database collected over 10 years at the orthopedic service of the University Hospitals of Geneva was used. This database contains 2,690 fracture cases associated with 43 classes (based on the AO/OTA classification). A case-based retrieval engine was developed and evaluated using retrieval precision as a performance metric. Only cases in the same class as the query case are considered as relevant. The scale-invariant feature transform (SIFT) is used for image analysis. Performance evaluation was computed in terms of mean average precision (MAP) and early precision (P10, P30). Retrieval results produced with the GNU image finding tool (GIFT) were used as a baseline. Two sampling strategies were evaluated. One used a dense 40 × 40 pixel grid sampling, and the second one used the standard SIFT features. Based on dense pixel grid sampling, three unsupervised feature selection strategies were introduced to further improve retrieval performance. With dense pixel grid sampling, the image is divided into 1,600 (40 × 40) square blocks. The goal is to emphasize the salient regions (blocks) and ignore irrelevant regions. Regions are considered as important when a high variance of the visual features is found. The first strategy is to calculate the variance of all descriptors on the global database. The second strategy is to calculate the variance of all descriptors for each case. A third strategy is to perform a thumbnail image clustering in a first step and then to calculate the variance for each cluster. Finally, a fusion between a SIFT-based system and GIFT is performed. 
RESULTS A first comparison on the selection of sampling strategies using SIFT features shows that dense sampling using a pixel grid (MAP = 0.18) outperformed the SIFT detector-based sampling approach (MAP = 0.10). In a second step, three unsupervised feature selection strategies were evaluated. A grid parameter search is applied to optimize parameters for feature selection and clustering. Results show that using half of the regions (700 or 800) obtains the best performance for all three strategies. Increasing the number of clusters in clustering can also improve the retrieval performance. The SIFT descriptor variance in each case gave the best indication of saliency for the regions (MAP = 0.23), better than the other two strategies (MAP = 0.20 and 0.21). Combining GIFT (MAP = 0.23) and the best SIFT strategy (MAP = 0.23) produced significantly better results (MAP = 0.27) than each system alone. CONCLUSIONS A case-based fracture retrieval engine was developed and is available for online demonstration. SIFT is used to extract local features, and three feature selection strategies were introduced and evaluated. A baseline using the GIFT system was used to evaluate the salient point-based approaches. Without supervised learning, SIFT-based systems with optimized parameters slightly outperformed the GIFT system. A fusion of the two approaches shows that the information contained in the two approaches is complementary. Supervised learning on the feature space is foreseen as the next step of this study.
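The dense-grid sampling and variance-based region selection described above can be sketched in a few lines. This is a simplified illustration, not the authors' code: each grid block is scored by raw pixel variance rather than by the variance of SIFT descriptors, and only the 40 × 40 grid and keep-roughly-half settings follow the abstract; everything else is an assumption.

```python
import numpy as np

def block_variances(image, grid=(40, 40)):
    """Split `image` into a grid of equal blocks and return each block's
    pixel variance, used here as a crude saliency score."""
    gh, gw = grid
    h, w = image.shape
    bh, bw = h // gh, w // gw
    img = image[:bh * gh, :bw * gw]            # crop to an exact grid fit
    blocks = img.reshape(gh, bh, gw, bw).swapaxes(1, 2).reshape(gh * gw, -1)
    return blocks.var(axis=1)

def select_salient_blocks(image, grid=(40, 40), keep=800):
    """Indices of the `keep` highest-variance blocks (about half of the
    1,600 regions, which the paper found worked best)."""
    var = block_variances(image, grid)
    return np.argsort(var)[::-1][:keep]

rng = np.random.default_rng(0)
img = rng.random((400, 400))                   # stand-in for a radiograph
idx = select_salient_blocks(img, grid=(40, 40), keep=800)
```

In the paper the retained regions then feed the SIFT-based retrieval engine; here the selection step alone is shown.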
36
Depeursinge A, Fischer B, Müller H, Deserno TM. Prototypes for content-based image retrieval in clinical practice. Open Med Inform J 2011; 5:58-72. [PMID: 21892374 PMCID: PMC3149811 DOI: 10.2174/1874431101105010058]
Abstract
Content-based image retrieval (CBIR) has been proposed as a key technology for computer-aided diagnostics (CAD). This paper reviews the state of the art and future challenges in CBIR for CAD applied to clinical practice. We define applicability to clinical practice as having recently demonstrated the CBIR system at one of the CAD demonstration workshops held at international conferences such as SPIE Medical Imaging, CARS, SIIM, RSNA, and IEEE ISBI. From 2009 to 2011, the programs of CADdemo@CARS and the CAD Demonstration Workshop at SPIE Medical Imaging were searched for the keyword "retrieval" in the title. The systems identified were analyzed and compared according to the hierarchy of gaps for CBIR systems. In total, 70 software demonstrations were analyzed, and 5 systems were identified that met the criteria. The fields of application are (i) bone age assessment, (ii) bone fractures, (iii) interstitial lung diseases, and (iv) mammography. Bridging the particular gaps of semantics, feature extraction, feature structure, and evaluation has been addressed most frequently. In specific application domains, CBIR technology is available for clinical practice. While system development has mainly focused on bridging content and feature gaps, performance and usability have become increasingly important. Evaluation must be based on a larger set of reference data, and workflow integration must be achieved before CBIR-CAD is truly established in clinical practice.
Affiliation(s)
- Adrien Depeursinge
- Business Information Systems, University of Applied Sciences Western Switzerland (HES–SO), TechnoArk 3, 3960 Sierre, Switzerland
- Service of Medical Informatics, University and University Hospitals of Geneva (HUG), Rue Gabrielle–Perret–Gentil 4,1211 Geneva 14, Switzerland
- Benedikt Fischer
- Department of Medical Informatics, RWTH Aachen University, Pauwelsstr. 30, D-52057 Aachen, Germany
- Henning Müller
- Business Information Systems, University of Applied Sciences Western Switzerland (HES–SO), TechnoArk 3, 3960 Sierre, Switzerland
- Service of Medical Informatics, University and University Hospitals of Geneva (HUG), Rue Gabrielle–Perret–Gentil 4,1211 Geneva 14, Switzerland
- Thomas M Deserno
- Department of Medical Informatics, RWTH Aachen University, Pauwelsstr. 30, D-52057 Aachen, Germany
37
Gao XW, Qian Y, Hui R. The state of the art of medical imaging technology: from creation to archive and back. Open Med Inform J 2011; 5 Suppl 1:73-85. [PMID: 21915232 PMCID: PMC3170936 DOI: 10.2174/1874431101105010073]
Abstract
Medical imaging has become well integrated into modern medicine and has revolutionized the medical industry over the last 30 years. Radiology was born with the discovery of X-rays by Nobel laureate Wilhelm Roentgen, leading to the creation of large quantities of digital images as opposed to film-based media. While this rich supply of images provides immeasurable information that would otherwise be impossible to obtain, medical images pose great challenges: they must be archived safe from corruption, loss, and misuse; remain retrievable from databases of huge sizes with varying forms of metadata; and stay reusable as new tools for data mining and new media for data storage become available. This paper provides a summative account of the creation of medical imaging tomography, the development of image archiving systems, and innovation from the existing pools of acquired image data. The focus of this paper is content-based image retrieval (CBIR), in particular for 3D images, exemplified by our online e-learning system, MIRAGE, home to a repository of medical images spanning a variety of domains and dimensions. As its main novelty, the system implements CBIR for 3D images coupled with fully automatic image annotation, pointing towards versatile, flexible, and sustainable medical image databases that can support new innovations.
Affiliation(s)
- Xiaohong W Gao
- School of Engineering and Information Sciences, Middlesex University, London, NW4 4BT, UK
- Yu Qian
- School of Engineering and Information Sciences, Middlesex University, London, NW4 4BT, UK
- Rui Hui
- School of Engineering and Information Sciences, Middlesex University, London, NW4 4BT, UK
- Department of Neurosurgery, General Navy Hospital, Beijing, P.R. China
38
Web-based bone age assessment by content-based image retrieval for case-based reasoning. Int J Comput Assist Radiol Surg 2011; 7:389-99. [DOI: 10.1007/s11548-011-0627-8]
39
Avni U, Greenspan H, Konen E, Sharon M, Goldberger J. X-ray categorization and retrieval on the organ and pathology level, using patch-based visual words. IEEE Trans Med Imaging 2011; 30:733-746. [PMID: 21118769 DOI: 10.1109/tmi.2010.2095026]
Abstract
In this study, we present an efficient image categorization and retrieval system applied to medical image databases, in particular large radiograph archives. The methodology is based on local patch representation of the image content, using a "bag of visual words" approach. We explore the effects of various parameters on system performance, and show best results using dense sampling of simple features with spatial content, and a nonlinear kernel-based support vector machine (SVM) classifier. In a recent international competition the system was ranked first in discriminating orientation and body regions in X-ray images. In addition to organ-level discrimination, we show an application to pathology-level categorization of chest X-ray data, the most popular examination in radiology. The system discriminates between healthy and pathological cases, and is also shown to successfully identify specific pathologies in a set of chest radiographs taken from a routine hospital examination. This is a first step towards similarity-based categorization, which has major clinical implications for computer-assisted diagnostics.
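At retrieval time, the patch-based "bag of visual words" representation described above reduces to quantizing each local descriptor against a codebook and histogramming the assignments. Below is a minimal sketch; the random codebook and descriptors are stand-ins (in practice the codebook would be learned, e.g. by k-means over densely sampled training patches, and the histogram would feed a kernel SVM as in the paper).

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest visual word and return
    an L1-normalized word histogram describing the whole image."""
    # Squared Euclidean distance from every descriptor to every codeword.
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)                  # nearest codeword per patch
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(1)
codebook = rng.random((50, 16))      # 50 visual words, 16-dim patch features
descriptors = rng.random((300, 16))  # densely sampled patch descriptors
h = bovw_histogram(descriptors, codebook)
```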
Affiliation(s)
- Uri Avni
- Department of Biomedical Engineering, Tel-Aviv University, 69978 Tel Aviv, Israel.
40

41
Cheng C, Stokes TH, Hang S, Wang MD. TissueWiki Mobile: an Integrative Protein Expression Image Browser for Pathological Knowledge Sharing and Annotation on a Mobile Device. IEEE Int Conf Bioinformatics Biomed Workshops 2010; 2010:473-480. [PMID: 27532057 PMCID: PMC4983421 DOI: 10.1109/bibmw.2010.5703848]
Abstract
Doctors need fast and convenient access to medical data, which motivates the use of mobile devices for knowledge retrieval and sharing. We have developed TissueWikiMobile on the Apple iPhone and iPad to seamlessly access TissueWiki, a three-terabyte database of antibody information and histology images from the Human Protein Atlas (HPA). Using TissueWikiMobile, users can extract knowledge from protein expression data, add annotations to highlight regions of interest on images, and share their professional insight. Through an intuitive human-computer interface, users can efficiently operate TissueWikiMobile to access important biomedical data without losing mobility. TissueWikiMobile furnishes the health community with a ubiquitous way to collaborate and share expert opinions, not only on the performance of various antibody stains but also on histology image annotation.
Affiliation(s)
- Chihwen Cheng
- Electrical and Computer Engineering, Georgia Institute of Technology
- Sovandy Hang
- Biomedical Engineering, Georgia Institute of Technology
- May D. Wang
- Electrical and Computer Engineering, Georgia Institute of Technology
- Biomedical Engineering, Georgia Institute of Technology
42
de Oliveira JEE, Machado AMC, Chavez GC, Lopes APB, Deserno TM, Araújo ADA. MammoSys: A content-based image retrieval system using breast density patterns. Comput Methods Programs Biomed 2010; 99:289-297. [PMID: 20207441 DOI: 10.1016/j.cmpb.2010.01.005]
Abstract
In this paper, we present a content-based image retrieval system designed to retrieve mammograms from a large medical image database. The system is based on breast density, according to the four categories defined by the American College of Radiology, and is integrated with the database of the Image Retrieval in Medical Applications (IRMA) project, which provides images with classification ground truth. Two-dimensional principal component analysis is used for breast density texture characterization, in order to effectively represent texture and allow for dimensionality reduction. A support vector machine is used to perform the retrieval process. Average precision rates range from 83% to 97% on a data set of 5,024 images. The results indicate the potential of the system as the first stage of a computer-aided diagnosis framework.
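The two-dimensional PCA step mentioned above operates on image matrices directly instead of flattened vectors: a small w × w image covariance matrix is built from the centered images, and each image is projected onto its top eigenvectors. A minimal sketch under the standard 2D-PCA formulation (the matrix sizes and data here are arbitrary illustrations, not the paper's mammography setup):

```python
import numpy as np

def two_d_pca(images, k):
    """2D-PCA: return the top-k projection axes and the projected
    (h x k) feature matrix of every image."""
    X = np.asarray(images, dtype=float)          # shape (n, h, w)
    C = X - X.mean(axis=0)                       # center the image stack
    # Image covariance: average of (A - mean)^T (A - mean), shape (w, w).
    G = np.einsum('nhw,nhv->wv', C, C) / len(X)
    vals, vecs = np.linalg.eigh(G)               # ascending eigenvalues
    V = vecs[:, ::-1][:, :k]                     # top-k eigenvectors
    feats = X @ V                                # (n, h, k) feature matrices
    return V, feats

rng = np.random.default_rng(3)
imgs = rng.random((12, 8, 6))                    # 12 toy "images"
V, feats = two_d_pca(imgs, k=2)
```

Texture features of this form (reshaped or compared row-wise) would then be classified or ranked, e.g. by the SVM the paper uses.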
Affiliation(s)
- Júlia E E de Oliveira
- Universidade Federal de Minas Gerais, Departamento de Ciência da Computação, Av. Antônio Carlos, 6627, 31270-901, Belo Horizonte, MG, Brazil.
43
Workflow management of content-based image retrieval for CAD support in PACS environments based on IHE. Int J Comput Assist Radiol Surg 2010; 5:393-400. [PMID: 20379792 DOI: 10.1007/s11548-010-0416-9]
Abstract
PURPOSE Content-based image retrieval (CBIR) bears great potential for computer-aided diagnosis (CAD). However, current CBIR systems generally cannot integrate with clinical workflow and PACS. Scheduling, long applied and proven for modalities and image acquisition, is one essential factor in this setting; we now establish it for CBIR. METHODS Our workflow is based on the IHE integration profile 'Post-Processing Workflow' (PPW) and the use of a DICOM work list. RESULTS We configured dcm4chee PACS and its included IHE actors for the application of CBIR. To achieve a convenient interface for integrating arbitrary CBIR systems, we realized an adapter between the CBIR system and PACS. Our system architecture consists of modular components communicating over standard protocols. CONCLUSION The proposed workflow management system makes it possible to embed CBIR conveniently into PACS environments. We achieve a chain of references that fills the information gap between acquisition and post-processing. Our approach takes into account the tight and solid organization of scheduled and performed tasks in clinical settings.
44
Bosman HHWJ, Petkov N, Jonkman MF. Comparison of color representations for content-based image retrieval in dermatology. Skin Res Technol 2010; 16:109-13. [DOI: 10.1111/j.1600-0846.2009.00405.x]
45
Coatrieux G, Le Guillou C, Cauvin JM, Roux C. Reversible watermarking for knowledge digest embedding and reliability control in medical images. IEEE Trans Inf Technol Biomed 2009; 13:158-65. [PMID: 19272858 DOI: 10.1109/titb.2008.2007199]
Abstract
To improve medical image sharing in applications such as e-learning or remote diagnosis aid, we propose to make the image more usable by watermarking it with a digest of its associated knowledge. The aim of such a knowledge digest (KD) is for it to be used for retrieving similar images with either the same findings or differential diagnoses. It summarizes the symbolic descriptions of the image, the symbolic descriptions of the findings semiology, and the similarity rules that contribute to balancing the importance of previous descriptors when comparing images. Instead of modifying the image file format by adding some extra header information, watermarking is used to embed the KD in the pixel gray-level values of the corresponding images. When shared through open networks, watermarking also helps to convey reliability proofs (integrity and authenticity) of an image and its KD. The interest of these new image functionalities is illustrated in the updating of the distributed users' databases within the framework of an e-learning application demonstrator of endoscopic semiology.
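Reversible watermarking of the kind described above can be illustrated with the classic histogram-shifting scheme: shift the grey levels between the histogram peak and a zero bin by one to free a slot, then hide one bit in each pixel that sat at the peak. This is a generic textbook sketch, not the authors' method (their payload is a structured knowledge digest plus reliability proofs), and it assumes the image has an empty histogram bin above the peak.

```python
import numpy as np

def hs_embed(img, bits):
    """Embed `bits` reversibly via histogram shifting.

    Levels strictly between the peak bin `p` and the first empty bin `z`
    above it are shifted up by one, freeing bin p+1; one bit is then
    encoded per pixel that originally held the peak value.
    Capacity is the peak-bin count."""
    hist = np.bincount(img.ravel(), minlength=256)
    p = int(hist.argmax())
    zeros = np.flatnonzero(hist == 0)
    z = int(zeros[zeros > p][0])               # assumes such a bin exists
    out = img.copy()
    out[(out > p) & (out < z)] += 1            # open up bin p+1
    flat = out.ravel()                         # view into `out`
    slots = np.flatnonzero(flat == p)[:len(bits)]
    flat[slots] += np.asarray(bits, dtype=out.dtype)
    return out, p, z

def hs_extract(marked, p, z, nbits):
    """Read the payload back and restore the original image exactly."""
    flat = marked.ravel()
    slots = np.flatnonzero((flat == p) | (flat == p + 1))[:nbits]
    bits = (flat[slots] == p + 1).astype(int)
    restored = marked.copy()
    restored[(restored > p) & (restored <= z)] -= 1   # undo the shift
    return bits, restored

rng = np.random.default_rng(2)
img = rng.integers(0, 100, size=(64, 64), dtype=np.uint8)
payload = [1, 0, 1, 1, 0, 0, 1]
marked, p, z = hs_embed(img, payload)
bits, restored = hs_extract(marked, p, z, len(payload))
```

Reversibility is the key property for medical use: after extraction the pixel data are bit-for-bit identical to the original.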
46
Névéol A, Deserno TM, Darmoni SJ, Güld MO, Aronson AR. Natural Language Processing Versus Content-Based Image Analysis for Medical Document Retrieval. J Am Soc Inf Sci Technol 2009; 60:123-134. [PMID: 19633735 DOI: 10.1002/asi.20955]
Abstract
One of the most significant recent advances in health information systems has been the shift from paper to electronic documents. While research on automatic text and image processing has taken separate paths, there is a growing need for joint efforts, particularly for electronic health records and biomedical literature databases. This work aims at comparing text-based versus image-based access to multimodal medical documents using state-of-the-art methods of processing text and image components. A collection of 180 medical documents containing an image accompanied by a short text describing it was divided into training and test sets. Content-based image analysis and natural language processing techniques are applied individually and combined for multimodal document analysis. The evaluation consists of an indexing task and a retrieval task based on the "gold standard" codes manually assigned to corpus documents. The performance of text-based and image-based access, as well as combined document features, is compared. Image analysis proves more adequate for both the indexing and retrieval of the images. In the indexing task, multimodal analysis outperforms both independent image and text analysis. This experiment shows that text describing images can be usefully analyzed in the framework of a hybrid text/image retrieval system.
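Combining text-based and image-based access as described above is commonly realized as late fusion: normalize each engine's per-document relevance scores, then combine them linearly. A minimal sketch; the toy scores and the 0.6 image weight are illustrative assumptions, not values from the paper.

```python
import numpy as np

def min_max(scores):
    """Rescale scores to [0, 1]; a constant score list maps to zeros."""
    s = np.asarray(scores, dtype=float)
    lo, hi = s.min(), s.max()
    return (s - lo) / (hi - lo) if hi > lo else np.zeros_like(s)

def late_fusion(text_scores, image_scores, w_image=0.5):
    """Linear late fusion of per-document text and image relevance scores."""
    return (1 - w_image) * min_max(text_scores) + w_image * min_max(image_scores)

# Toy scores for three documents from a text engine and an image engine.
fused = late_fusion([2.0, 5.0, 3.0], [0.9, 0.2, 0.5], w_image=0.6)
```

Documents are then ranked by the fused score; the weight would normally be tuned on held-out queries.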
Affiliation(s)
- Aurélie Névéol
- U.S. National Library of Medicine, National Institutes of Health, 8600 Rockville Pike, Bethesda, MD 20894.
47
A framework and baseline results for the CLEF medical automatic annotation task. Pattern Recognit Lett 2008. [DOI: 10.1016/j.patrec.2008.05.020]
48
Pourghassem H, Ghassemian H. Content-based medical image classification using a new hierarchical merging scheme. Comput Med Imaging Graph 2008; 32:651-61. [PMID: 18789648 DOI: 10.1016/j.compmedimag.2008.07.006]
Abstract
Automatic medical image classification assigns a medical image to one of a number of image categories. Because it narrows the search space and thus reduces computational complexity, it is an important step in content-based image retrieval (CBIR). In this paper, we propose a two-level hierarchical medical image classification method using a comprehensive set of shape and texture features. Furthermore, a tessellation-based spectral feature as well as a directional histogram is proposed. In each level of the hierarchical classifier, homogeneous (semantic) classes are created from overlapping classes in the database using a new merging scheme and multilayer perceptron (MLP) classifiers (merging-based classification). The proposed merging scheme employs three measures to detect overlapping classes: accuracy, misclassification ratio, and dissimilarity. The first two measures realize a supervised classification method, and the last realizes an unsupervised clustering technique. In each level, the merging-based classification is applied to a merged class from the previous level and splits it into several classes; this procedure is repeated progressively to obtain more classes. The proposed algorithm is evaluated on a database of 9,100 medical X-ray images from 40 classes. It achieves an accuracy rate of 90.83% on 25 merged classes in the first level; if the correct class is considered within the best three matches, this value increases to 97.9%.
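The merging idea described above can be illustrated with a toy sketch that uses only a confusion-matrix cue: flag class pairs whose mutual misclassification rate is high, then union the flagged pairs into merged (semantic) groups. The 0.2 threshold and the tiny matrix are illustrative assumptions; the paper additionally uses accuracy and a dissimilarity measure, with MLP classifiers producing the confusion data.

```python
import numpy as np

def overlapping_pairs(conf, threshold=0.2):
    """Class pairs whose mutual misclassification rate exceeds `threshold`
    (a stand-in for the paper's misclassification-ratio measure)."""
    conf = np.asarray(conf, dtype=float)
    n = conf.shape[0]
    totals = conf.sum(axis=1)                  # samples per true class
    pairs = []
    for i in range(n):
        for j in range(i + 1, n):
            ratio = (conf[i, j] + conf[j, i]) / (totals[i] + totals[j])
            if ratio > threshold:
                pairs.append((i, j))
    return pairs

def merge_classes(n, pairs):
    """Union the flagged pairs into merged class groups (union-find)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]      # path halving
            x = parent[x]
        return x
    for i, j in pairs:
        parent[find(i)] = find(j)
    groups = {}
    for k in range(n):
        groups.setdefault(find(k), []).append(k)
    return sorted(groups.values())

conf = np.array([[8, 2, 0],
                 [3, 7, 0],
                 [0, 1, 9]])                   # classes 0 and 1 overlap
pairs = overlapping_pairs(conf, threshold=0.2)
groups = merge_classes(3, pairs)
```

Each merged group would then be split again by a dedicated classifier at the next level of the hierarchy.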
Affiliation(s)
- Hossein Pourghassem
- School of Electrical and Computer Engineering, Tarbiat Modares University, Tehran, Iran.
49
Ontology of gaps in content-based image retrieval. J Digit Imaging 2008; 22:202-15. [PMID: 18239964 DOI: 10.1007/s10278-007-9092-x]
Abstract
Content-based image retrieval (CBIR) is a promising technology to enrich the core functionality of picture archiving and communication systems (PACS), with the potential for a strong impact on diagnostics, research, and education. Research reported in the scientific literature, however, has not made significant inroads in the form of medical CBIR applications incorporated into routine clinical medicine or medical research. The cause is often attributed (without supporting analysis) to the inability of these applications to overcome the "semantic gap": the divide between the high-level scene understanding and interpretation available with human cognitive capabilities and the low-level pixel analysis of computers based on mathematical processing and artificial intelligence methods. In this paper, we suggest a more systematic and comprehensive view of the concept of "gaps" in medical CBIR research. In particular, we define an ontology of 14 gaps that addresses image content and features as well as system performance and usability. In addition to these gaps, we identify seven system characteristics that impact CBIR applicability and performance. The framework we have created can be used a posteriori to compare medical CBIR systems and approaches for specific biomedical image domains and goals, and a priori during the design phase of a medical CBIR application, as the systematic analysis of gaps provides detailed insight into system comparison and helps to direct future research.
50
Deserno TM, Molander B, Güld MO, Thies C, Gröndahl HG. Content-based access to oral and maxillofacial radiographs. Dentomaxillofac Radiol 2007; 36:328-35. [PMID: 17699702 DOI: 10.1259/dmfr/11645252]
Abstract
OBJECTIVES Content-based access (CBA) to medical image archives, i.e., data retrieval by means of automatically computed image-based numerical features, has the potential to improve diagnostics, research and education. In this study, the applicability of CBA methods in dentomaxillofacial radiology is evaluated. METHODS Recent research has discovered numerical features that were successfully applied for automatic categorization of radiographs. In our experiments, oral and maxillofacial radiographs were obtained from the day-to-day routine of a university hospital and labelled by an experienced dental radiologist regarding the technique and direction of imaging, as well as the displayed anatomy and biosystem. In total, 2000 radiographs of 71 classes with at least 10 samples per class were analysed. A combination of co-occurrence-based texture features and correlation-based similarity measures was used in leave-one-out experiments for automatic classification. The impact of automatic detection and separation of multi-field images and the automatic separability of biosystems were analysed. RESULTS Automatic categorization yielded error rates of 23.20%, 7.95% and 4.40% with respect to a correct match within the first, fifth and tenth best returns. These figures improved to 23.05%, 7.00% and 4.20% when automatic decomposition was applied, and to 20.05%, 5.65% and 3.25% when the classifier was optimized for the dentomaxillofacial imagery. The dentulous and implant systems were difficult to distinguish. Experiments on non-dental radiographs (10,000 images of 57 classes) yielded 12.6%, 5.6% and 3.6%. CONCLUSION Using the same numerical features as in medical radiology, oral and maxillofacial radiographs can be reliably indexed by global texture features for CBA and data mining.
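The co-occurrence texture features and correlation-based similarity described above can be sketched as follows: build a grey-level co-occurrence matrix (GLCM) for one displacement, derive a few Haralick-style statistics, and compare images by the Pearson correlation of their feature vectors. The single horizontal displacement, the 8-level quantization, and the particular statistics are illustrative assumptions, not the paper's exact feature set.

```python
import numpy as np

def glcm_features(img, levels=8):
    """Contrast, energy, and homogeneity from a horizontal-offset GLCM."""
    # Quantize 8-bit grey levels down to `levels` bins.
    q = np.minimum((img.astype(float) / 256 * levels).astype(int), levels - 1)
    left, right = q[:, :-1].ravel(), q[:, 1:].ravel()   # horizontal pairs
    M = np.zeros((levels, levels))
    np.add.at(M, (left, right), 1)             # accumulate co-occurrences
    P = M / M.sum()                            # normalize to probabilities
    i, j = np.indices(P.shape)
    contrast = ((i - j) ** 2 * P).sum()
    energy = (P ** 2).sum()
    homogeneity = (P / (1 + np.abs(i - j))).sum()
    return np.array([contrast, energy, homogeneity])

def correlation_similarity(f1, f2):
    """Pearson correlation of two feature vectors as retrieval similarity."""
    return float(np.corrcoef(f1, f2)[0, 1])

rng = np.random.default_rng(4)
img = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
f = glcm_features(img)
```

In a leave-one-out experiment, each image's feature vector would be compared by this similarity against all remaining images and assigned the class of its best matches.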
Affiliation(s)
- T M Deserno
- Department of Medical Informatics, Aachen University of Technology (RWTH), Aachen, Germany.