1. Freedman D, Bagga B, Melamud K, O'Donnell T, Vega E, Westerhoff M, Dane B. Quality assessment of expedited AI generated reformatted images for ED acquired CT abdomen and pelvis imaging. Abdom Radiol (NY) 2025;50:1441-1447. PMID: 39292278; DOI: 10.1007/s00261-024-04578-0.
Abstract
PURPOSE: To retrospectively compare image quality, radiologist diagnostic confidence, and time for images to reach PACS for contrast-enhanced abdominopelvic CT examinations created on the scanner console by technologists versus those generated automatically by thin-client artificial intelligence (AI) mechanisms.
METHODS: A retrospective PACS search identified adults who underwent an emergency department contrast-enhanced abdominopelvic CT in 07/2022 (Console cohort) and 07/2023 (Server cohort). Coronal and sagittal multiplanar reformatted (MPR) images were created by AI software in the Server cohort. Time to completion of MPR images was compared using two-sample t-tests for all patients in both cohorts. Two radiologists qualitatively assessed image quality and diagnostic confidence on 5-point Likert scales for 50 consecutive examinations from each cohort, and additionally assessed for acute abdominopelvic findings. Continuous variables and qualitative scores were compared with the Mann-Whitney U test, with p < 0.05 indicating statistical significance.
RESULTS: Mean [SD] time to exam completion in PACS was 8.7 [11.1] minutes in the Console cohort (n = 728) and 4.6 [6.6] minutes in the Server cohort (n = 892), p < 0.001. Fifty examinations from the Console cohort (28 women, 22 men; 51 [19] years) and 50 from the Server cohort (27 women, 23 men; 57 [19] years) were included for radiologist review. Age, sex, CTDIvol, and DLP did not differ significantly between the cohorts (all p > 0.05). There was no significant difference in image quality or diagnostic confidence for either reader between the Console and Server cohorts (all p > 0.05).
CONCLUSION: Examinations using AI-generated MPRs on a thin-client architecture were completed approximately 50% faster than those using reconstructions generated at the console, with no statistical difference in diagnostic confidence or image quality.
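A minimal Python sketch of the statistical comparison the abstract describes (a two-sample t-test on completion times, a Mann-Whitney U test on ordinal Likert scores); all data below are simulated placeholders, not the study's values:

```python
import numpy as np
from scipy import stats

# Simulated placeholders shaped like the reported cohorts; real values
# would come from the PACS query described in the abstract.
rng = np.random.default_rng(0)
console_minutes = rng.normal(8.7, 11.1, 728).clip(min=0.5)  # Console cohort
server_minutes = rng.normal(4.6, 6.6, 892).clip(min=0.5)    # Server cohort

# Two-sample t-test on time to PACS completion.
t_stat, t_p = stats.ttest_ind(console_minutes, server_minutes, equal_var=False)

# Mann-Whitney U test on ordinal 5-point Likert quality scores.
console_quality = rng.integers(3, 6, 50)  # placeholder reader scores (3-5)
server_quality = rng.integers(3, 6, 50)
u_stat, u_p = stats.mannwhitneyu(console_quality, server_quality)

print(f"t-test p = {t_p:.4g}; Mann-Whitney U p = {u_p:.4g}")  # compare to 0.05
```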
Affiliation(s)
- Barun Bagga, New York University Langone Medical Center, New York, USA
- Kira Melamud, New York University Langone Medical Center, New York, USA
- Emilio Vega, New York University Langone Medical Center, New York, USA
- Bari Dane, New York University Langone Medical Center, New York, USA
2. Sohrabniya F, Hassanzadeh-Samani S, Ourang SA, Jafari B, Farzinnia G, Gorjinejad F, Ghalyanchi-Langeroudi A, Mohammad-Rahimi H, Tichy A, Motamedian SR, Schwendicke F. Exploring a decade of deep learning in dentistry: A comprehensive mapping review. Clin Oral Investig 2025;29:143. PMID: 39969623; DOI: 10.1007/s00784-025-06216-5.
Abstract
OBJECTIVES: Artificial intelligence (AI), particularly deep learning, has significantly impacted healthcare, including dentistry, by improving diagnostics, treatment planning, and prognosis prediction. This systematic mapping review explores the current applications of deep learning in dentistry, offering a comprehensive overview of trends, models, and their clinical significance.
MATERIALS AND METHODS: Following a structured methodology, relevant studies published from January 2012 to September 2023 were identified through database searches in PubMed, Scopus, and Embase. Key data, including clinical purpose, deep learning tasks, model architectures, and data modalities, were extracted for qualitative synthesis.
RESULTS: From 21,242 screened studies, 1,007 were included. Of these, 63.5% targeted diagnostic tasks, primarily with convolutional neural networks (CNNs). Classification (43.7%) and segmentation (22.9%) were the main methods, and imaging data, such as cone-beam computed tomography and orthopantomograms, were used in 84.4% of cases. Most studies (95.2%) applied fully supervised learning, emphasizing the need for annotated data. Pathology (21.5%), radiology (17.5%), and orthodontics (10.2%) were prominent fields, with 24.9% of studies relating to more than one specialty.
CONCLUSION: This review charts the advancement of deep learning in dentistry, particularly for diagnostics, and identifies areas for further improvement. While CNNs have been used successfully, it is essential to explore emerging model architectures, learning approaches, and ways to obtain diverse and reliable data. Furthermore, fostering trust among all stakeholders by advancing explainable AI and addressing ethical considerations is crucial for transitioning AI from research to clinical practice.
CLINICAL RELEVANCE: This review offers a comprehensive overview of a decade of deep learning in dentistry, showcasing its significant growth in recent years. By mapping its key applications and identifying research trends, it provides a valuable guide for future studies and highlights emerging opportunities for advancing AI-driven dental care.
Affiliation(s)
- Fatemeh Sohrabniya, ITU/WHO/WIPO Global Initiative on Artificial Intelligence for Health - Dental Diagnostics and Digital Dentistry, Geneva, Switzerland
- Sahel Hassanzadeh-Samani, ITU/WHO/WIPO Global Initiative on Artificial Intelligence for Health - Dental Diagnostics and Digital Dentistry, Geneva, Switzerland; Dentofacial Deformities Research Center, Research Institute of Dental Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Seyed AmirHossein Ourang, Dentofacial Deformities Research Center, Research Institute of Dental Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Bahare Jafari, Division of Orthodontics, The Ohio State University, Columbus, OH 43210, USA
- Fatemeh Gorjinejad, ITU/WHO/WIPO Global Initiative on Artificial Intelligence for Health - Dental Diagnostics and Digital Dentistry, Geneva, Switzerland
- Azadeh Ghalyanchi-Langeroudi, Medical Physics & Biomedical Engineering Department, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Biomedical Technologies and Robotics (RCBTR), Advanced Medical Technology and Equipment Institute (AMTEI), Tehran University of Medical Sciences (TUMS), Tehran, Iran
- Hossein Mohammad-Rahimi, Department of Dentistry and Oral Health, Aarhus University, Vennelyst Boulevard 9, 8000 Aarhus C, Denmark; Department of Conservative Dentistry and Periodontology, LMU University Hospital, LMU Munich, Munich, Germany
- Antonin Tichy, Department of Conservative Dentistry and Periodontology, LMU University Hospital, LMU Munich, Munich, Germany; Institute of Dental Medicine, First Faculty of Medicine of the Charles University and General University Hospital, Prague, Czech Republic
- Saeed Reza Motamedian, Dentofacial Deformities Research Center, Research Institute of Dental Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran; Department of Orthodontics, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Falk Schwendicke, Department of Conservative Dentistry and Periodontology, LMU University Hospital, LMU Munich, Munich, Germany
3. Xu M, Liu B, Luo Z, Ma H, Sun M, Wang Y, Yin N, Tang X, Song T. Using a New Deep Learning Method for 3D Cephalometry in Patients With Cleft Lip and Palate. J Craniofac Surg 2023;34:1485-1488. PMID: 36944601; DOI: 10.1097/scs.0000000000009299.
Abstract
Deep learning algorithms for automatic 3-dimensional (3D) cephalometric landmark detection in people without craniomaxillofacial deformities have achieved good results, but there has been no previous report on cleft lip and palate. The purpose of this study was to apply a new deep learning method, based on a 3D point cloud graph convolutional neural network, to predict and locate landmarks in patients with cleft lip and palate using the relationships between points. The authors used the PointNet++ model to perform automatic 3D cephalometric landmarking, and the mean distance error of the center coordinate position and the success detection rate (SDR) were used to evaluate the accuracy of systematic labeling. A total of 150 patients were enrolled. The mean distance error across all 27 landmarks was 1.33 mm; 9 of the 27 landmarks (33%) showed SDRs at 2 mm above 90%, and 3 (11%) showed SDRs at 2 mm below 70%. Automatic 3D cephalometric landmarking took 16 seconds per dataset. In summary, the training sets were derived from computed tomography of patients with cleft lip with/without cleft palate to achieve accurate results. The 3D cephalometry system based on the graph convolutional neural network algorithm may be suitable for cleft lip and palate cases, and more accurate results may be obtained if the cleft lip and palate training set is expanded in the future.
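The two reported evaluation metrics are straightforward to compute; the sketch below derives the mean distance error and SDR from hypothetical predicted and ground-truth 3D landmark arrays:

```python
import numpy as np

def landmark_metrics(pred, gt, threshold_mm=2.0):
    """Mean distance error and success detection rate (SDR) for 3D landmarks.

    pred, gt: arrays of shape (n_cases, n_landmarks, 3), in millimetres.
    Returns per-landmark mean error and per-landmark SDR at the threshold.
    """
    dists = np.linalg.norm(pred - gt, axis=-1)        # (n_cases, n_landmarks)
    mean_error = dists.mean(axis=0)                   # mean distance per landmark
    sdr = (dists <= threshold_mm).mean(axis=0) * 100  # % of cases within threshold
    return mean_error, sdr

# Hypothetical example: 150 cases, 27 landmarks.
rng = np.random.default_rng(42)
gt = rng.uniform(0, 100, size=(150, 27, 3))
pred = gt + rng.normal(0, 1.0, size=gt.shape)  # simulated prediction noise
mean_error, sdr = landmark_metrics(pred, gt)
print(f"overall mean error: {mean_error.mean():.2f} mm")
print(f"landmarks with SDR@2mm > 90%: {(sdr > 90).sum()}")
```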
Affiliation(s)
- Meng Xu, Cleft Lip and Palate Center, Plastic Surgery Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College
- Bingyang Liu, Maxillofacial Surgery Center, Plastic Surgery Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing
- Zhaoyang Luo, HaiChuang Future Medical Technology Co. Ltd, Hangzhou
- Hengyuan Ma, Digital Technology Center, Plastic Surgery Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Min Sun, Cleft Lip and Palate Center, Plastic Surgery Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College
- Yongqian Wang, Cleft Lip and Palate Center, Plastic Surgery Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College
- Ningbei Yin, Cleft Lip and Palate Center, Plastic Surgery Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College
- Xiaojun Tang, Maxillofacial Surgery Center, Plastic Surgery Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing
- Tao Song, Cleft Lip and Palate Center, Plastic Surgery Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College
4. Potočnik J, Foley S, Thomas E. Current and potential applications of artificial intelligence in medical imaging practice: A narrative review. J Med Imaging Radiat Sci 2023;54:376-385. PMID: 37062603; DOI: 10.1016/j.jmir.2023.03.033.
Abstract
BACKGROUND AND PURPOSE: Artificial intelligence (AI) is present in many areas of our lives. Much of the digital data generated in health care can be used to build automated systems that improve existing workflows and create a more personalised healthcare experience for patients. This review outlines select current and potential AI applications in medical imaging practice and provides a view of how diagnostic imaging suites will operate in the future. Challenges associated with potential applications are discussed, and the healthcare staff considerations necessary to benefit from AI-enabled solutions are outlined.
METHODS: Several electronic databases, including PubMed, ScienceDirect, Google Scholar, and the University College Dublin Library Database, were used to identify relevant articles with a Boolean search strategy. Textbooks, government sources, and vendor websites were also considered.
RESULTS/DISCUSSION: Many AI-enabled solutions in radiographic practice are available, with more automation on the horizon. Traditional workflow will become faster, more effective, and more user-friendly. AI can handle administrative or technical types of work, meaning it is applicable across all aspects of medical imaging practice.
CONCLUSION: AI offers significant potential to automate most manual tasks, ensure service consistency, and improve patient care. Radiographers, radiation therapists, and clinicians should ensure they have an adequate understanding of the technology to enable ethical oversight of its implementation.
Affiliation(s)
- Jaka Potočnik, University College Dublin School of Medicine, Radiography & Diagnostic Imaging, Room A223, Belfield, Dublin 4, Ireland
- Shane Foley, University College Dublin School of Medicine, Radiography & Diagnostic Imaging, Room A223, Belfield, Dublin 4, Ireland
- Edel Thomas, University College Dublin School of Medicine, Radiography & Diagnostic Imaging, Room A223, Belfield, Dublin 4, Ireland
5. Ao Y, Wu H. Feature Aggregation and Refinement Network for 2D Anatomical Landmark Detection. J Digit Imaging 2023;36:547-561. PMID: 36401132; PMCID: PMC10039137; DOI: 10.1007/s10278-022-00718-4.
Abstract
Localization of anatomical landmarks is essential for clinical diagnosis, treatment planning, and research. This paper proposes a novel deep network named the feature aggregation and refinement network (FARNet) for automatically detecting anatomical landmarks. FARNet employs an encoder-decoder architecture. To alleviate the problem of limited training data in the medical domain, we adopt a backbone network pre-trained on natural images as the encoder. The decoder includes a multi-scale feature aggregation module for multi-scale feature fusion and a feature refinement module for high-resolution heatmap regression. Coarse-to-fine supervision is applied to the two modules to facilitate end-to-end training. We further propose a novel loss function named the Exponential Weighted Center loss for accurate heatmap regression, which focuses on the losses from pixels near landmarks and suppresses the ones from far away. We evaluate FARNet on three publicly available anatomical landmark detection datasets, including cephalometric, hand, and spine radiographs. Our network achieves state-of-the-art performance on all three datasets. Code is available at https://github.com/JuvenileInWind/FARNet.
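The paper defines the exact Exponential Weighted Center loss; the following PyTorch sketch shows one plausible reading of the idea (exponentially up-weighting pixels near the target heatmap peak), with the weighting form and `alpha` as assumptions, not the authors' definition:

```python
import torch

def exp_weighted_heatmap_loss(pred, target, alpha=10.0):
    """Illustrative exponentially weighted heatmap-regression loss.

    pred, target: (batch, n_landmarks, H, W) heatmaps in [0, 1]. The weight
    peaks where the target heatmap peaks, so errors near a landmark dominate
    while far-away pixels are suppressed. One plausible form only.
    """
    weight = torch.exp(alpha * target)   # ~e^alpha at landmarks, ~1 far away
    sq_err = (pred - target) ** 2
    return (weight * sq_err).sum() / weight.sum()

# Hypothetical usage inside a training step.
pred = torch.rand(2, 19, 128, 128, requires_grad=True)
target = torch.rand(2, 19, 128, 128)
loss = exp_weighted_heatmap_loss(pred, target)
loss.backward()
```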
Affiliation(s)
- Yueyuan Ao, School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan 611731, China
- Hong Wu, School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan 611731, China
6. Lang Y, Lian C, Xiao D, Deng H, Thung KH, Yuan P, Gateno J, Kuang T, Alfi DM, Wang L, Shen D, Xia JJ, Yap PT. Localization of Craniomaxillofacial Landmarks on CBCT Images Using 3D Mask R-CNN and Local Dependency Learning. IEEE Trans Med Imaging 2022;41:2856-2866. PMID: 35544487; PMCID: PMC9673501; DOI: 10.1109/tmi.2022.3174513.
Abstract
Cephalometric analysis relies on accurate detection of craniomaxillofacial (CMF) landmarks from cone-beam computed tomography (CBCT) images. However, due to the complexity of CMF bony structures, it is difficult to localize landmarks efficiently and accurately. In this paper, we propose a deep learning framework to tackle this challenge by jointly digitizing 105 CMF landmarks on CBCT images. By explicitly learning the local geometrical relationships between the landmarks, our approach extends Mask R-CNN for end-to-end prediction of landmark locations. Specifically, we first apply a detection network to a down-sampled 3D image to leverage global contextual information and predict the approximate locations of the landmarks. We subsequently leverage local information provided by higher-resolution image patches to refine the landmark locations. On patients with varying non-syndromic jaw deformities, our method achieves an average detection accuracy of 1.38 ± 0.95 mm, outperforming a related state-of-the-art method.
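The coarse-to-fine pattern described here is generic enough to sketch; the placeholder `coarse_net` and `fine_net` callables below stand in for the paper's detection and refinement networks:

```python
import numpy as np

def coarse_to_fine_localize(volume, coarse_net, fine_net, scale=4, patch=32):
    """Generic coarse-to-fine landmark localization (illustrative only).

    coarse_net: callable returning an approximate landmark voxel on a
    downsampled volume; fine_net: callable returning a local correction on a
    high-resolution patch around it. Both stand in for the paper's networks;
    clamping at the upper volume edge is omitted for brevity.
    """
    small = volume[::scale, ::scale, ::scale]        # cheap global view
    approx = np.asarray(coarse_net(small)) * scale   # back to full-res coords
    lo = np.maximum(approx.astype(int) - patch // 2, 0)
    crop = volume[lo[0]:lo[0] + patch, lo[1]:lo[1] + patch, lo[2]:lo[2] + patch]
    return lo + np.asarray(fine_net(crop))           # refined voxel position
```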
7. Ewertowski NP, Schleich C, Abrar DB, Hosalkar HS, Bittersohl B. Automated measurement of alpha angle on 3D-magnetic resonance imaging in femoroacetabular impingement hips: a pilot study. J Orthop Surg Res 2022;17:370. PMID: 35907886; PMCID: PMC9338591; DOI: 10.1186/s13018-022-03256-5.
Abstract
BACKGROUND: Femoroacetabular impingement (FAI) syndrome is an established pre-osteoarthritic condition. Diagnosis is based on both clinical and radiographic parameters. An abnormal, manually calculated alpha angle on magnetic resonance imaging (MRI) is traditionally used to diagnose abnormal femoral head-neck offset. This pilot study aimed to assess the feasibility of automated alpha angle measurement in patients with FAI syndrome and to compare automated with manual measurements with regard to the time and effort needed by each method.
METHODS: Alpha angles were measured with manual and automated techniques, using post-processing software, in 19 hip MRIs of patients with FAI syndrome. Two observers conducted the manual measurements. Intra- and inter-observer reproducibility and the correlation of manual and automated alpha angle measurements were calculated using intra-class correlation (ICC) analysis. Both techniques were compared regarding the time taken (in minutes) and the effort required, measured as the number of mouse button presses performed.
RESULTS: The first observer's intra-observer reproducibility was good (ICC 0.77; p < 0.001), while the second observer's was good-to-excellent (ICC 0.93; p < 0.001). Inter-observer reproducibility between the observers in the first (ICC 0.45; p < 0.001) and second (ICC 0.56; p < 0.001) manual alpha angle assessments was moderate. The intra-class correlation coefficients between manual and automated alpha angle measurements were ICC = 0.24 (p = 0.052; observer 1, 1st measurement), ICC = 0.32 (p = 0.015; observer 1, 2nd measurement), ICC = 0.50 (p < 0.001; observer 2, 1st measurement), and ICC = 0.45 (p < 0.001; observer 2, 2nd measurement). The average runtime for automatic processing of the image data was 16.6 ± 1.9 min. Automatic alpha angle measurement took longer (time difference: 14.6 ± 3.9 min; p < 0.001) but required less effort (difference in button presses: 231 ± 23; p < 0.001), and the user can perform other tasks while the automatic processing runs.
CONCLUSIONS: This pilot study demonstrates that objective and reliable automated alpha angle measurement on MRI of FAI syndrome hips is feasible.
TRIAL REGISTRATION: The Ethics Committee of the University of Düsseldorf approved this study (Registry-ID: 2017084398).
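Assuming paired manual and automated measurements, the ICC values reported above can be computed with the standard Shrout-Fleiss two-way random-effects formula; a self-contained sketch with simulated alpha angles:

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: (n_subjects, n_raters) array, e.g. manual vs automated alpha
    angles. Standard ANOVA-based formula (Shrout & Fleiss); illustrative
    implementation, not the study's software.
    """
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-subject means
    col_means = x.mean(axis=0)   # per-rater means
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # subjects
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # raters
    mse = (((x - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
           / ((n - 1) * (k - 1)))                          # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical manual vs automated alpha angles for 19 hips.
rng = np.random.default_rng(1)
manual = rng.normal(60, 8, 19)
auto = manual + rng.normal(0, 6, 19)     # automated measurement with noise
print(f"ICC(2,1) = {icc_2_1(np.column_stack([manual, auto])):.2f}")
```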
Affiliation(s)
- Nastassja Pamela Ewertowski, Department for Orthopedics and Trauma Surgery, Medical Faculty and University Hospital Düsseldorf, Heinrich-Heine-University, Düsseldorf, Germany
- Daniel Benjamin Abrar, Department of Diagnostic and Interventional Radiology, Medical Faculty and University Hospital Düsseldorf, Heinrich-Heine-University, Düsseldorf, Germany
- Harish S Hosalkar, Paradise Valley Hospital, San Diego, CA, USA; Tri-City Medical Center, Oceanside, CA, USA; Sharp Grossmont Hospital, La Mesa, CA, USA; Scripps Hospital, San Diego, CA, USA
- Bernd Bittersohl, Department for Orthopedics and Trauma Surgery, Medical Faculty and University Hospital Düsseldorf, Heinrich-Heine-University, Düsseldorf, Germany
8. Zhao Y, Zeng K, Zhao Y, Bhatia P, Ranganath M, Kozhikkavil ML, Li C, Hermosillo G. Deep learning solution for medical image localization and orientation detection. Med Image Anal 2022;81:102529. PMID: 35870296; DOI: 10.1016/j.media.2022.102529.
Abstract
Magnetic Resonance (MR) imaging plays an important role in medical diagnosis and biomedical research. Because MR imaging has high in-slice resolution but low through-slice resolution, the usefulness of the reconstruction depends strongly on the positioning of the slice group. The traditional clinical workflow relies on time-consuming manual adjustment that cannot be easily reproduced, so automation of this task can bring important benefits in accuracy, speed, and reproducibility. Current auto-slice-positioning methods rely on automatically detected landmarks to derive the positioning, and previous studies suggest that a large, redundant set of landmarks is required to achieve robust results. However, a costly data curation procedure is needed to generate training labels for those landmarks, and the results can still be highly sensitive to landmark detection errors. More importantly, a set of anatomical landmark locations is not naturally produced during the standard clinical workflow, which makes online learning impossible. To address these limitations, we propose a novel framework for auto-slice-positioning that focuses on localizing the canonical planes within a 3D volume. The proposed framework consists of two major steps: a multi-resolution region proposal network first extracts a volume-of-interest, after which a V-net-like segmentation network segments the orientation planes. Importantly, our algorithm also includes a Performance Measurement Index as an indication of the algorithm's confidence. We evaluate the proposed framework on both knee and shoulder MR scans. Our method outperforms state-of-the-art automatic positioning algorithms in terms of accuracy and robustness.
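Once the orientation plane is segmented, a canonical plane can be recovered by least-squares fitting; the SVD-based sketch below is an illustrative post-processing step, not the paper's exact method:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit via SVD.

    points: (n, 3) coordinates of voxels labelled as the orientation plane.
    Returns (centroid, unit normal); the best-fit plane passes through the
    centroid with the smallest singular vector as its normal.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                      # direction of least variance
    return centroid, normal / np.linalg.norm(normal)

# Hypothetical segmented plane voxels with noise.
rng = np.random.default_rng(3)
xy = rng.uniform(-50, 50, size=(500, 2))
z = 0.2 * xy[:, 0] - 0.1 * xy[:, 1] + rng.normal(0, 0.5, 500)
centroid, normal = fit_plane(np.column_stack([xy, z]))
print("plane normal:", np.round(normal, 3))
```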
Affiliation(s)
- Yu Zhao, SYNGO Division, Siemens Medical Solutions, Malvern, PA 19355, USA
- Ke Zeng, SYNGO Division, Siemens Medical Solutions, Malvern, PA 19355, USA
- Yiyuan Zhao, SYNGO Division, Siemens Medical Solutions, Malvern, PA 19355, USA
- Parmeet Bhatia, SYNGO Division, Siemens Medical Solutions, Malvern, PA 19355, USA
- Chen Li, Thayer School of Engineering, Dartmouth College, Hanover, NH 03755, USA
9. Ichikawa S, Itadani H, Sugimori H. Toward automatic reformation at the orbitomeatal line in head computed tomography using object detection algorithm. Phys Eng Sci Med 2022;45:835-845. PMID: 35793033; DOI: 10.1007/s13246-022-01153-z.
Abstract
Consistent cross-sectional imaging is desirable to accurately detect lesions and facilitate follow-up in head computed tomography (CT). However, manual reformation causes image variations among technologists and requires additional time. We therefore developed a system that reformats head CT images at the orbitomeatal (OM) line and evaluated its performance using real-world clinical data. Retrospective data were obtained for 681 consecutive patients who underwent non-contrast head CT. The datasets were randomly divided into training, validation, and test sets. Four landmarks (bilateral eyes and external auditory canals) were detected with a trained You Only Look Once (YOLO)v5 model, and the head CT images were reformatted at the OM line. Precision, recall, and mean average precision at an intersection-over-union threshold of 0.5 were computed on the validation sets. Reformation quality in the test sets was evaluated by three radiological technologists on a qualitative 4-point scale. The precision, recall, and mean average precision of the trained YOLOv5 model across all categories were 0.688, 0.949, and 0.827, respectively. In our environment, the mean implementation time was 23.5 ± 2.4 s per case. The qualitative evaluation of the test sets showed that post-processed images from automatic reformation had clinically useful quality, with scores of 3 or 4 in 86.8%, 91.2%, and 94.1% of cases for observers 1, 2, and 3, respectively. Our system reformatted head CT images at the OM line with acceptable quality using an object detection algorithm and was highly time efficient.
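A hedged sketch of the reformation step: given the detected landmark midpoints, the OM-line tilt reduces to a single sagittal-plane rotation (the published pipeline likely handles additional axes and edge cases not shown):

```python
import numpy as np
from scipy.ndimage import rotate

def reformat_to_om_line(volume, eye_mid, eac_mid):
    """Rotate a head CT volume so the orbitomeatal (OM) line lies axially.

    volume: 3D array ordered (z, y, x); eye_mid / eac_mid: midpoints of the
    detected bilateral-eye and external-auditory-canal boxes, in voxels.
    A single sagittal-plane rotation is an illustrative simplification.
    """
    dz = eye_mid[0] - eac_mid[0]          # craniocaudal offset
    dy = eye_mid[1] - eac_mid[1]          # anteroposterior offset
    angle = np.degrees(np.arctan2(dz, dy))
    # Rotate in the (z, y) plane so the OM line becomes horizontal.
    return rotate(volume, angle, axes=(0, 1), reshape=False, order=1)

# Hypothetical use with detected landmark midpoints.
vol = np.zeros((64, 128, 128), dtype=np.float32)
reformatted = reformat_to_om_line(vol, eye_mid=(40, 30, 64), eac_mid=(32, 80, 64))
```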
Affiliation(s)
- Shota Ichikawa, Graduate School of Health Sciences, Hokkaido University, Kita-12, Nishi-5, Kita-ku, Sapporo 060-0812, Japan; Department of Radiological Technology, Kurashiki Central Hospital, 1-1-1 Miwa, Kurashiki, Okayama 710-8602, Japan
- Hideki Itadani, Department of Radiological Technology, Kurashiki Central Hospital, 1-1-1 Miwa, Kurashiki, Okayama 710-8602, Japan
- Hiroyuki Sugimori, Faculty of Health Sciences, Hokkaido University, Kita-12, Nishi-5, Kita-ku, Sapporo 060-0812, Japan
10. Artificial Intelligence and the Radiographer/Radiological Technologist Profession: A joint statement of the International Society of Radiographers and Radiological Technologists and the European Federation of Radiographer Societies. Radiography (Lond) 2021;26:93-95. PMID: 32252972; DOI: 10.1016/j.radi.2020.03.007.
11. Varçın F, Erbay H, Çetin E, Çetin İ, Kültür T. End-To-End Computerized Diagnosis of Spondylolisthesis Using Only Lumbar X-rays. J Digit Imaging 2021;34:85-95. PMID: 33432447; PMCID: PMC7887126; DOI: 10.1007/s10278-020-00402-5.
Abstract
Lumbar spondylolisthesis (LS) is the anterior slippage of one of the lower vertebrae relative to the subjacent vertebra. Several symptoms can indicate LS, but they are often not detected in its early stages, allowing the disease to progress unidentified. Automated diagnostic tools are therefore needed, as early diagnosis is crucial for rehabilitation and treatment planning. Herein, a transfer learning-based CNN model is developed that uses only lumbar X-rays. The model was trained with 1,922 images, 187 images were used for validation, and the model was then tested with 598 images. During training, the model extracts regions of interest (ROIs) via YOLOv3, and the ROIs are split into training and validation sets. The ROIs are then fed into a fine-tuned MobileNet CNN to accomplish the training. During testing, images enter the model and are classified as spondylolisthesis or normal. The end-to-end transfer learning-based CNN model reached a test accuracy of 99%, with a test sensitivity of 98% and a test specificity of 99%. These performance results are encouraging and indicate that the model can be used in outpatient clinics where no experts are present.
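Stage two of the described pipeline, fine-tuning a pretrained MobileNet on YOLO-extracted ROIs, might look as follows in PyTorch; the freezing policy, input size, and hyperparameters are assumptions, not the authors' configuration:

```python
import torch
import torch.nn as nn
from torchvision import models

# Fine-tune a MobileNet classifier on lumbar-spine ROIs (stage 1, the YOLOv3
# ROI detector, is assumed to have produced the crops).
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
for param in model.features.parameters():
    param.requires_grad = False            # freeze pretrained backbone
model.classifier[1] = nn.Linear(model.last_channel, 2)  # spondylolisthesis vs normal

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One hypothetical training step on a batch of ROI crops.
rois = torch.rand(8, 3, 224, 224)          # cropped, resized ROIs
labels = torch.randint(0, 2, (8,))
loss = criterion(model(rois), labels)
loss.backward()
optimizer.step()
```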
Affiliation(s)
- Fatih Varçın, Department of Computer Engineering, Faculty of Engineering, Kırıkkale University, 71451 Kırıkkale, Turkey
- Hasan Erbay, Department of Computer Engineering, Faculty of Engineering, University of Turkish Aeronautical Association, 06790 Ankara, Turkey
- Eyüp Çetin, Department of Neurosurgery, Faculty of Medicine, Van Yüzüncü Yıl University, 65080 Van, Turkey
- İhsan Çetin, Department of Medical Biochemistry, Faculty of Medicine, Hitit University, 19040 Corum, Turkey
- Turgut Kültür, Department of Physical Medicine and Rehabilitation, Faculty of Medicine, Kırıkkale University, 71450 Kırıkkale, Turkey
12. Zhang J, Liu M, Wang L, Chen S, Yuan P, Li J, Shen SGF, Tang Z, Chen KC, Xia JJ, Shen D. Context-guided fully convolutional networks for joint craniomaxillofacial bone segmentation and landmark digitization. Med Image Anal 2019;60:101621. PMID: 31816592; DOI: 10.1016/j.media.2019.101621.
Abstract
Cone-beam computed tomography (CBCT) scans are commonly used in diagnosing and planning surgical or orthodontic treatment to correct craniomaxillofacial (CMF) deformities. Based on CBCT images, it is clinically essential to generate an accurate 3D model of CMF structures (e.g., midface, and mandible) and digitize anatomical landmarks. This process often involves two tasks, i.e., bone segmentation and anatomical landmark digitization. Because landmarks usually lie on the boundaries of segmented bone regions, the tasks of bone segmentation and landmark digitization could be highly associated. Also, the spatial context information (e.g., displacements from voxels to landmarks) in CBCT images is intuitively important for accurately indicating the spatial association between voxels and landmarks. However, most of the existing studies simply treat bone segmentation and landmark digitization as two standalone tasks without considering their inherent relationship, and rarely take advantage of the spatial context information contained in CBCT images. To address these issues, we propose a Joint bone Segmentation and landmark Digitization (JSD) framework via context-guided fully convolutional networks (FCNs). Specifically, we first utilize displacement maps to model the spatial context information in CBCT images, where each element in the displacement map denotes the displacement from a voxel to a particular landmark. An FCN is learned to construct the mapping from the input image to its corresponding displacement maps. Using the learned displacement maps as guidance, we further develop a multi-task FCN model to perform bone segmentation and landmark digitization jointly. We validate the proposed JSD method on 107 subjects, and the experimental results demonstrate that our method is superior to the state-of-the-art approaches in both tasks of bone segmentation and landmark digitization.
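The displacement-map encoding is easy to make concrete: each voxel's regression target is its offset to a landmark. A numpy sketch of building such targets (normalisation details in the paper may differ):

```python
import numpy as np

def displacement_maps(shape, landmarks):
    """Build per-landmark displacement maps as regression targets.

    shape: (D, H, W) of the CBCT volume; landmarks: (n_landmarks, 3) voxel
    coordinates. Output has shape (n_landmarks, 3, D, H, W), where each voxel
    stores its (dz, dy, dx) offset to the landmark, mirroring the abstract's
    spatial-context encoding in illustrative form.
    """
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"))
    maps = []
    for lm in landmarks:
        maps.append(lm.reshape(3, 1, 1, 1) - grid)  # displacement to landmark
    return np.stack(maps).astype(np.float32)

# Hypothetical 32^3 volume with two landmarks.
targets = displacement_maps((32, 32, 32), np.array([[10, 12, 8], [20, 25, 16]]))
print(targets.shape)  # (2, 3, 32, 32, 32)
```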
Affiliation(s)
- Jun Zhang, Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC 27599, USA
- Mingxia Liu, Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC 27599, USA
- Li Wang, Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC 27599, USA
- Si Chen, Department of Orthodontics, Peking University School and Hospital of Stomatology, Beijing 100191, China
- Peng Yuan, Surgical Planning Laboratory, Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX 77030, USA
- Jianfu Li, Surgical Planning Laboratory, Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX 77030, USA
- Steve Guo-Fang Shen, Surgical Planning Laboratory, Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX 77030, USA
- Zhen Tang, Surgical Planning Laboratory, Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX 77030, USA
- Ken-Chung Chen, Surgical Planning Laboratory, Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX 77030, USA
- James J Xia, Surgical Planning Laboratory, Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX 77030, USA
- Dinggang Shen, Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC 27599, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
13. Zhang J, Liu M, Shen D. Detecting Anatomical Landmarks From Limited Medical Imaging Data Using Two-Stage Task-Oriented Deep Neural Networks. IEEE Trans Image Process 2017;26:4753-4764. PMID: 28678706; PMCID: PMC5729285; DOI: 10.1109/tip.2017.2721106.
Abstract
One of the major challenges in anatomical landmark detection based on deep neural networks is the limited availability of medical imaging data for network learning. To address this problem, we present a two-stage task-oriented deep learning method to detect large-scale anatomical landmarks simultaneously in real time using limited training data. Our method consists of two deep convolutional neural networks (CNNs), each focusing on one specific task. In the first stage, to alleviate the problem of limited training data, we propose a CNN-based regression model that uses millions of image patches as input, aiming to learn inherent associations between local image patches and target anatomical landmarks. To further model the correlations among image patches, in the second stage we develop another CNN model, which includes (a) a fully convolutional network that shares the same architecture and network weights as the CNN used in the first stage and (b) several extra layers to jointly predict the coordinates of multiple anatomical landmarks. Importantly, our method can jointly detect large-scale sets of landmarks (e.g., thousands) in real time. We conducted experiments detecting 1,200 brain landmarks from 3D T1-weighted magnetic resonance images of 700 subjects, and 7 prostate landmarks from 3D computed tomography images of 73 subjects. The experimental results show the effectiveness of our method in terms of both accuracy and efficiency in anatomical landmark detection.
14. Li S, Jiang H, Yao YD, Yang B. Organ Location Determination and Contour Sparse Representation for Multiorgan Segmentation. IEEE J Biomed Health Inform 2017;22:852-861. PMID: 28534802; DOI: 10.1109/jbhi.2017.2705037.
Abstract
Organ segmentation in computed tomography (CT) images is of great importance in medical diagnosis and treatment. This paper proposes organ location determination and contour sparse representation methods (OLD-CSR) for multiorgan segmentation (liver, kidney, and spleen) in abdominal CT images using an extreme learning machine classifier. First, a location determination method is designed to obtain location information for each organ, which is used for coarse segmentation. Second, for coarse-to-fine segmentation, a feature point extraction method based on contour gradient and rate of change is proposed, and a sparse optimization model is developed to refine the contour feature points. Experiments with 153 CT images demonstrate the performance advantages of OLD-CSR compared with related work.
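The extreme learning machine used as the classifier has a compact closed-form training rule: random hidden weights plus a regularised least-squares solve for the output layer. An illustrative implementation, not the authors' code:

```python
import numpy as np

class ELMClassifier:
    """Minimal extreme learning machine: random hidden layer, closed-form
    output weights via regularised least squares. Illustrative of the
    classifier family used in OLD-CSR, not the paper's implementation."""

    def __init__(self, n_hidden=200, reg=1e-3, seed=0):
        self.n_hidden, self.reg = n_hidden, reg
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        T = np.eye(int(y.max()) + 1)[y]                 # one-hot targets
        # Ridge-regularised normal-equation solution for output weights.
        self.beta = np.linalg.solve(H.T @ H + self.reg * np.eye(self.n_hidden),
                                    H.T @ T)
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta).argmax(axis=1)

# Hypothetical voxel/contour features labelled by organ class.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))
y = rng.integers(0, 3, 300)                             # liver/kidney/spleen
print((ELMClassifier().fit(X, y).predict(X) == y).mean())
```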
15. Longitudinal Computed Tomography Monitoring of Pelvic Bones in Patients With Breast Cancer Using Automated Bone Subtraction Software. Invest Radiol 2017;52:288-294. DOI: 10.1097/rli.0000000000000343.
16. Zhang J, Gao Y, Gao Y, Munsell BC, Shen D. Detecting Anatomical Landmarks for Fast Alzheimer's Disease Diagnosis. IEEE Trans Med Imaging 2016;35:2524-2533. PMID: 27333602; PMCID: PMC5153382; DOI: 10.1109/tmi.2016.2582386.
Abstract
Structural magnetic resonance imaging (MRI) is a very popular and effective technique used to diagnose Alzheimer's disease (AD). The success of computer-aided diagnosis methods using structural MRI data is largely dependent on two time-consuming steps: 1) nonlinear registration across subjects, and 2) brain tissue segmentation. To overcome this limitation, we propose a landmark-based feature extraction method that requires neither nonlinear registration nor tissue segmentation. In the training stage, in order to distinguish AD subjects from healthy controls (HCs), group comparisons based on local morphological features are first performed to identify brain regions with significant group differences. In general, the centers of the identified regions become landmark locations (AD landmarks for short) capable of differentiating AD subjects from HCs. In the testing stage, using the learned AD landmarks, the corresponding landmarks are detected in a testing image using an efficient technique based on a shape-constrained regression-forest algorithm. To improve detection accuracy, an additional set of salient and consistent landmarks is also identified to guide the AD landmark detection. Based on the identified AD landmarks, morphological features are extracted to train a support vector machine (SVM) classifier capable of predicting the AD condition. In the experiments, our method is evaluated on landmark detection and AD classification sequentially. Specifically, the landmark detection error (manually annotated versus automatically detected) of the proposed landmark detector is 2.41 mm, and our landmark-based AD classification accuracy is 83.7%. Lastly, the AD classification performance of our method is comparable to, or even better than, that achieved by existing region-based and voxel-based methods, while being approximately 50 times faster.
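The final stage, an SVM on landmark-derived morphological features, can be sketched with scikit-learn; the features here are random placeholders for the paper's morphology descriptors:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Sketch of the classification stage: features extracted around detected
# AD landmarks feed a linear SVM. Feature extraction is simulated here.
rng = np.random.default_rng(0)
n_subjects, n_landmarks, feat_per_landmark = 100, 50, 8
X = rng.normal(size=(n_subjects, n_landmarks * feat_per_landmark))
y = rng.integers(0, 2, n_subjects)               # 0 = healthy control, 1 = AD

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
clf.fit(X[:80], y[:80])                          # train on 80 subjects
print("held-out accuracy:", clf.score(X[80:], y[80:]))
```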
Affiliation(s)
- Jun Zhang, Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC, USA
- Yue Gao, Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC, USA
- Yaozong Gao, Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC, USA; Department of Computer Science, University of North Carolina, Chapel Hill, NC, USA
- Brent C. Munsell, Department of Computer Science, College of Charleston, Charleston, SC, USA
- Dinggang Shen, Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
17. Horger M, Thaiss WM, Ditt H, Weisel K, Fritz J, Nikolaou K, Liao S, Kloth C. Improved MDCT monitoring of pelvic myeloma bone disease through the use of a novel longitudinal bone subtraction post-processing algorithm. Eur Radiol 2016;27:2969-2977. PMID: 27882427; DOI: 10.1007/s00330-016-4642-6.
Abstract
PURPOSE: To evaluate the diagnostic performance of a novel CT post-processing software that generates subtraction maps of baseline and follow-up CT examinations for monitoring the course of myeloma bone lesions.
MATERIALS AND METHODS: This study included 61 consecutive myeloma patients who underwent repeated whole-body reduced-dose MDCT at our institution between November 2013 and June 2015. CT subtraction maps were used to classify progressive disease (PD) versus stable disease (SD)/remission. Bone subtraction maps (BSMs) alone and in combination with 1-mm source images (BSM+) were compared with 5-mm axial/MPR scans.
RESULTS: Haematological response categories at follow-up were complete remission (n = 9), very good partial remission (n = 2), partial remission (n = 17), and SDh (n = 19) vs. PDh (n = 14). Five-millimetre CT reading yielded PD (n = 14) and SD/remission (n = 47), whereas BSM+ reading resulted in PD (n = 18) and SD/remission (n = 43). Sensitivity/specificity/accuracy for 5-mm/1-mm/BSM (alone)/BSM+ in "lesion-by-lesion" reading was 89.4%/98.9%/98.3%/99.5%; 69.1%/96.9%/72%/92.1%; and 83.8%/98.4%/92.1%/98.3%, respectively. The use of BSM+ resulted in a change of response classification from SD to PD in 9.8% of patients (n = 6).
CONCLUSION: BSM reading is more accurate for monitoring myeloma than axial scans, whereas BSM+ yields results similar to 1-mm reading (the gold standard) with significantly reduced reading time.
KEY POINTS:
- CT evaluation of myeloma bone disease using a longitudinal bone subtraction post-processing algorithm.
- The bone subtraction post-processing algorithm is more accurate for assessment of therapy response.
- Bone subtraction allowed improved and more efficient detection of myeloma bone lesions.
- The post-processing tool changed response classification in 9.8% of patients (all to PD).
- Reading time could be substantially shortened compared with regular CT assessment.
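Conceptually, a longitudinal bone subtraction map is a registration followed by a voxel-wise difference; a minimal SimpleITK sketch with hypothetical file names (the vendor algorithm adds bone masking and refinement not shown):

```python
import SimpleITK as sitk

# Rigidly register the baseline CT to the follow-up CT, then subtract.
baseline = sitk.ReadImage("baseline_ct.nii.gz", sitk.sitkFloat32)  # hypothetical paths
followup = sitk.ReadImage("followup_ct.nii.gz", sitk.sitkFloat32)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(50)
reg.SetOptimizerAsRegularStepGradientDescent(1.0, 1e-4, 200)
reg.SetInitialTransform(
    sitk.CenteredTransformInitializer(followup, baseline,
                                      sitk.Euler3DTransform(),
                                      sitk.CenteredTransformInitializerFilter.GEOMETRY))
reg.SetInterpolator(sitk.sitkLinear)
transform = reg.Execute(followup, baseline)

# Resample baseline into follow-up space; air (-1024 HU) fills the borders.
aligned = sitk.Resample(baseline, followup, transform, sitk.sitkLinear, -1024.0)
subtraction = followup - aligned     # new/denser lesions appear as signal
sitk.WriteImage(subtraction, "bone_subtraction_map.nii.gz")
```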
Affiliation(s)
- Marius Horger, Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University Tübingen, Hoppe-Seyler-Str. 3, D-72076 Tübingen, Germany
- Wolfgang M Thaiss, Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University Tübingen, Hoppe-Seyler-Str. 3, D-72076 Tübingen, Germany
- Hendrik Ditt, Siemens AG Healthcare, Sector Imaging and Interventional Radiology, Siemensstr. 1, D-91301 Forchheim, Germany
- Katja Weisel, Department of Internal Medicine II, Eberhard-Karls-University Tübingen, D-72076 Tübingen, Germany
- Jan Fritz, Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 601 N. Caroline Street, JHOC 3142, Baltimore, MD 21287, USA
- Konstantin Nikolaou, Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University Tübingen, Hoppe-Seyler-Str. 3, D-72076 Tübingen, Germany
- Shu Liao, Siemens Medical Solutions, Malvern, PA 19355, USA
- Christopher Kloth, Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University Tübingen, Hoppe-Seyler-Str. 3, D-72076 Tübingen, Germany
18. Zhang J, Gao Y, Wang L, Tang Z, Xia JJ, Shen D. Automatic Craniomaxillofacial Landmark Digitization via Segmentation-Guided Partially-Joint Regression Forest Model and Multiscale Statistical Features. IEEE Trans Biomed Eng 2016;63:1820-1829. PMID: 26625402; PMCID: PMC4879598; DOI: 10.1109/tbme.2015.2503421.
Abstract
OBJECTIVE: To automatically digitize craniomaxillofacial (CMF) landmarks efficiently and accurately from cone-beam computed tomography (CBCT) images, addressing the challenges caused by large morphological variations across patients and by CBCT image artifacts.
METHODS: We propose a segmentation-guided partially-joint regression forest (S-PRF) model to automatically digitize CMF landmarks. In this model, a regression voting strategy is first adopted to localize each landmark by aggregating evidence from context locations, thus potentially relieving the problem caused by image artifacts near the landmark. Second, CBCT image segmentation is utilized to remove uninformative voxels caused by morphological variations across patients. Third, a partially-joint model is proposed to localize landmarks separately, based on the coherence of landmark positions, to improve digitization reliability. In addition, we propose a fast vector quantization method to extract high-level multiscale statistical features describing a voxel's appearance; these features have low dimensionality and high efficiency and are invariant to the local inhomogeneity caused by artifacts.
RESULTS: Mean digitization errors for 15 landmarks, in comparison to the ground truth, are all less than 2 mm.
CONCLUSION: Our model addresses the challenges of both inter-patient morphological variations and imaging artifacts. Experiments on a CBCT dataset show that our approach achieves clinically acceptable accuracy for landmark digitization.
SIGNIFICANCE: Our automatic landmark digitization method can be used clinically to reduce labor cost and improve digitization consistency.
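The regression-voting strategy can be illustrated with an off-the-shelf random forest: each context voxel predicts its offset to the landmark and the votes are aggregated robustly. Features below are random placeholders for the multiscale statistical features described in the abstract:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Train a forest to map a voxel's appearance features to its 3D offset
# from the landmark (features are placeholders here, so the learned map
# is meaningless; the point is the voting mechanism).
rng = np.random.default_rng(0)
landmark = np.array([40.0, 52.0, 31.0])
voxels = rng.uniform(0, 100, size=(2000, 3))     # context voxel positions
features = rng.normal(size=(2000, 16))           # placeholder appearance features
offsets = landmark - voxels                      # regression targets

forest = RandomForestRegressor(n_estimators=50).fit(features, offsets)

# Test-time voting: each voxel casts the vote "my position + predicted offset".
test_voxels = rng.uniform(0, 100, size=(500, 3))
test_features = rng.normal(size=(500, 16))
votes = test_voxels + forest.predict(test_features)
estimate = np.median(votes, axis=0)              # robust aggregation of votes
```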
Affiliation(s)
- Jun Zhang, Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC, USA
- Yaozong Gao, Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC, USA; Department of Computer Science, University of North Carolina, Chapel Hill, NC, USA
- Li Wang, Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC, USA
- Zhen Tang, Houston Methodist Hospital, Houston, TX, USA
- James J. Xia, Houston Methodist Hospital, Houston, TX, USA; Weill Medical College, Cornell University, New York, USA
- Dinggang Shen, Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
19. Liao S, Zhan Y, Dong Z, Yan R, Gong L, Zhou XS, Salganicoff M, Fei J. Automatic Lumbar Spondylolisthesis Measurement in CT Images. IEEE Trans Med Imaging 2016;35:1658-1669. PMID: 26849859; DOI: 10.1109/tmi.2016.2523452.
Abstract
Lumbar spondylolisthesis is one of the most common spinal diseases. It is caused by the anterior shift of a lumbar vertebra relative to the subjacent vertebra. In current clinical practice, staging of spondylolisthesis is often conducted qualitatively. Although Meyerding grading opens the door to staging spondylolisthesis more quantitatively, it relies on manual measurement, which is time-consuming and irreproducible. An automatic measurement algorithm is therefore desirable for spondylolisthesis diagnosis and staging. However, there are two challenges. 1) Accurate detection of the most anterior and posterior points on the superior and inferior surfaces of each lumbar vertebra: due to the small size of the vertebrae, slight detection errors may lead to significant measurement errors and, hence, wrong disease stages. 2) Automatic localization and labeling of each lumbar vertebra, which is required to give the measurement its semantic meaning: this is difficult because different lumbar vertebrae are highly similar in both shape and image appearance. To resolve these challenges, a new automatic measurement framework is proposed with two major contributions. First, a learning-based spine labeling method that integrates both image appearance and spine geometry information is designed to detect lumbar vertebrae. Second, a hierarchical method using both population information from atlases and domain-specific information in the target image is proposed for positioning the most anterior and posterior points. Validated on 258 CT spondylolisthesis patients, our method shows results very similar to manual measurements by radiologists and significantly increases measurement efficiency.
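Once the corner points are detected, Meyerding grading is simple geometry; a sketch with hypothetical sagittal coordinates and the conventional grade boundaries at 25% slip increments (the paper's exact measurement conventions may differ):

```python
import numpy as np

def meyerding_grade(upper_post, lower_post, lower_ant):
    """Grade spondylolisthesis from detected corner points (illustrative).

    Points are 2D sagittal (x, y) coordinates, anterior = +x: the posterior
    corner of the slipped vertebra's inferior surface, and the posterior and
    anterior corners of the subjacent vertebra's superior surface. Slip is
    the anterior shift expressed as a fraction of the endplate width.
    """
    endplate = np.linalg.norm(np.subtract(lower_ant, lower_post))
    slip = np.subtract(upper_post, lower_post)[0]   # anterior (+x) shift
    ratio = abs(slip) / endplate
    # Grades I-IV per 25% quartile of slip; over 100% is grade V.
    return min(int(ratio * 4) + 1, 5)

# Hypothetical corner points detected by a measurement framework.
print(meyerding_grade(upper_post=(12.0, 40.0),
                      lower_post=(4.0, 36.0),
                      lower_ant=(40.0, 36.0)))  # ~22% slip -> grade 1
```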
20. Dai X, Gao Y, Shen D. Online updating of context-aware landmark detectors for prostate localization in daily treatment CT images. Med Phys 2015;42:2594-2606. PMID: 25979051; PMCID: PMC4409630; DOI: 10.1118/1.4918755.
Abstract
PURPOSE: In image-guided radiation therapy, it is crucial to localize the prostate quickly and accurately in daily treatment images. To this end, the authors propose an online update scheme for landmark-guided prostate segmentation, which fully exploits valuable patient-specific information contained in previous treatment images and achieves improved performance in landmark detection and prostate segmentation.
METHODS: To localize the prostate in daily treatment images, the authors first automatically detect six anatomical landmarks on the prostate boundary by adopting a context-aware landmark detection method. Specifically, a two-layer regression forest is trained as a detector for each target landmark. Once all newly detected landmarks from new treatment images are reviewed or adjusted (if necessary) by clinicians, they are included in the training pool as new patient-specific information to update all the two-layer regression forests for the next treatment day. As more treatment images of the current patient are acquired, the two-layer regression forests can be continually updated by incorporating the patient-specific information into the training procedure. After all target landmarks are detected, a multi-atlas random sample consensus (multi-atlas RANSAC) method segments the entire prostate by fusing multiple previously segmented prostates of the current patient after they are aligned to the current treatment image. The segmented prostate of the current treatment image is again reviewed (or adjusted if needed) by clinicians before it is included as a new shape example in the prostate shape dataset, helping localize the entire prostate in the next treatment image.
RESULTS: Experimental results on 330 images of 24 patients show the effectiveness of the proposed online update scheme in improving the accuracy of both landmark detection and prostate segmentation. Compared with other state-of-the-art prostate segmentation methods, the authors' method achieves the best performance.
CONCLUSIONS: By appropriate use of the valuable patient-specific information contained in previous treatment images, the proposed online update scheme obtains satisfactory results for both landmark detection and prostate segmentation.
Affiliation(s)
- Xiubin Dai, College of Geographic and Biologic Information, Nanjing University of Posts and Telecommunications, Nanjing, Jiangsu 210015, China; IDEA Lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, 130 Mason Farm Road, Chapel Hill, North Carolina 27510, USA
- Yaozong Gao, IDEA Lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, 130 Mason Farm Road, Chapel Hill, North Carolina 27510, USA
- Dinggang Shen, IDEA Lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, 130 Mason Farm Road, Chapel Hill, North Carolina 27510, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
21. 3D Deep Learning for Efficient and Robust Landmark Detection in Volumetric Data. Lecture Notes in Computer Science 2015. DOI: 10.1007/978-3-319-24553-9_69.
22. Shao Y, Gao Y, Guo Y, Shi Y, Yang X, Shen D. Hierarchical lung field segmentation with joint shape and appearance sparse learning. IEEE Trans Med Imaging 2014;33:1761-1780. PMID: 25181734; DOI: 10.1109/tmi.2014.2305691.
Abstract
Lung field segmentation in the posterior-anterior (PA) chest radiograph is important for pulmonary disease diagnosis and hemodialysis treatment. Due to high shape variation and boundary ambiguity, accurate lung field segmentation from chest radiograph is still a challenging task. To tackle these challenges, we propose a joint shape and appearance sparse learning method for robust and accurate lung field segmentation. The main contributions of this paper are: 1) a robust shape initialization method is designed to achieve an initial shape that is close to the lung boundary under segmentation; 2) a set of local sparse shape composition models are built based on local lung shape segments to overcome the high shape variations; 3) a set of local appearance models are similarly adopted by using sparse representation to capture the appearance characteristics in local lung boundary segments, thus effectively dealing with the lung boundary ambiguity; 4) a hierarchical deformable segmentation framework is proposed to integrate the scale-dependent shape and appearance information together for robust and accurate segmentation. Our method is evaluated on 247 PA chest radiographs in a public dataset. The experimental results show that the proposed local shape and appearance models outperform the conventional shape and appearance models. Compared with most of the state-of-the-art lung field segmentation methods under comparison, our method also shows a higher accuracy, which is comparable to the inter-observer annotation variation.
23. Gao Y, Zhan Y, Shen D. Incremental learning with selective memory (ILSM): towards fast prostate localization for image guided radiotherapy. IEEE Trans Med Imaging 2014;33:518-534. PMID: 24495983; PMCID: PMC4379484; DOI: 10.1109/tmi.2013.2291495.
Abstract
Image-guided radiotherapy (IGRT) requires fast and accurate localization of the prostate in 3-D treatment-guided radiotherapy, which is challenging due to low tissue contrast and large anatomical variation across patients. On the other hand, the IGRT workflow involves collecting a series of computed tomography (CT) images from the same patient under treatment. These images contain valuable patient-specific information yet are often neglected by previous works. In this paper, we propose a novel learning framework, namely incremental learning with selective memory (ILSM), to effectively learn the patient-specific appearance characteristics from these patient-specific images. Specifically, starting with a population-based discriminative appearance model, ILSM aims to "personalize" the model to fit patient-specific appearance characteristics. The model is personalized in two steps: backward pruning that discards obsolete population-based knowledge, and forward learning that incorporates patient-specific characteristics. By effectively combining the patient-specific characteristics with the general population statistics, the incrementally learned appearance model can localize the prostate of a specific patient much more accurately. This work has three contributions: 1) the proposed incremental learning framework can capture patient-specific characteristics more effectively than traditional learning schemes, such as pure patient-specific learning, population-based learning, and mixture learning with patient-specific and population data; 2) the learning framework makes no parametric model assumption, hence allowing the adoption of any discriminative classifier; and 3) using ILSM, the prostate can be localized in treatment CTs accurately (DSC ∼0.89) and quickly (∼4 s), satisfying the real-world clinical requirements of IGRT.
Affiliation(s)
- Yaozong Gao, Department of Computer Science and Department of Radiology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Yiqiang Zhan, SYNGO Division, Siemens Medical Solutions, Malvern, PA 19355, USA
- Dinggang Shen, Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 136-701, Korea
24. Gao Y, Zhan Y, Shen D. Incremental learning with selective memory (ILSM): towards fast prostate localization for image guided radiotherapy. Med Image Comput Comput Assist Interv 2013;16:378-386. PMID: 24579163; PMCID: PMC3939625; DOI: 10.1007/978-3-642-40763-5_47.
Abstract
Image-guided radiotherapy (IGRT) requires fast and accurate localization of the prostate in treatment CTs, which is challenging due to low tissue contrast and large anatomical variations across patients. On the other hand, in the IGRT workflow, a series of CT images is acquired from the same patient under treatment; these images contain valuable patient-specific information yet are often neglected by previous works. In this paper, we propose a novel learning framework, namely incremental learning with selective memory (ILSM), to effectively learn patient-specific appearance characteristics from these images. Specifically, starting with a population-based discriminative appearance model, ILSM aims to "personalize" the model to fit patient-specific appearance characteristics. The model is personalized in two steps: backward pruning that discards obsolete population-based knowledge, and forward learning that incorporates patient-specific characteristics. By effectively combining the patient-specific characteristics with the general population statistics, the incrementally learned appearance model can localize the prostate of the specific patient much more accurately. Validated on a large dataset (349 CT scans), our method achieved high localization accuracy (DSC approximately 0.87) in 4 seconds.
Affiliation(s)
- Yaozong Gao, Department of Radiology and BRIC, University of North Carolina at Chapel Hill
- Yiqiang Zhan, Siemens Medical Solutions USA, Inc., Malvern, USA
- Dinggang Shen, Department of Radiology and BRIC, University of North Carolina at Chapel Hill
25. Zhang S, Zhan Y, Metaxas DN. Deformable segmentation via sparse representation and dictionary learning. Med Image Anal 2012;16:1385-1396. PMID: 22959839; DOI: 10.1016/j.media.2012.07.007.
26. Automatic Scan Planning for Magnetic Resonance Imaging of the Knee Joint. Ann Biomed Eng 2012;40:2033-2042. DOI: 10.1007/s10439-012-0552-1.
27. Robust MR spine detection using hierarchical learning and local articulated model. Med Image Comput Comput Assist Interv 2012;15:141-148. PMID: 23285545; DOI: 10.1007/978-3-642-33415-3_18.
Abstract
A clinically acceptable automatic spine detection system, i.e., one that localizes and labels vertebrae and intervertebral discs, must be highly robust, in particular to severe diseases (e.g., scoliosis) and imaging artifacts (e.g., metal artifacts in MR). Our method aims to achieve this goal with two novel components. First, instead of treating vertebrae/discs as either repetitive components or completely independent entities, we emulate a radiologist and use a hierarchical strategy to learn detectors dedicated to anchor (distinctive) vertebrae, bundle (non-distinctive) vertebrae, and intervertebral discs, respectively. At run time, anchor vertebrae are detected concurrently to provide redundant and distributed appearance cues robust to local imaging artifacts. Bundle vertebra detectors provide candidates for vertebrae with subtle appearance differences, whose labels are mutually determined by anchor vertebrae for additional robustness. Disc locations are derived from a cloud of responses from disc detectors, which is robust to sporadic voxel-level errors. Second, owing to the non-rigidity of spine anatomy, we employ a local articulated model to effectively capture the spatial relations across vertebrae and discs. The local articulated model fuses appearance cues from different detectors in a way that is robust to abnormal spine geometry resulting from severe diseases. Our method is validated on 300 MR spine scout scans and exhibits robust performance, especially in cases with severe diseases and imaging artifacts.