1
Liao W, Luo X, Li L, Xu J, He Y, Huang H, Zhang S. Automatic cervical lymph nodes detection and segmentation in heterogeneous computed tomography images using deep transfer learning. Sci Rep 2025; 15:4250. [PMID: 39905029] [PMCID: PMC11794882] [DOI: 10.1038/s41598-024-84804-3]
Abstract
This study developed a deep learning model using transfer learning for automatic detection and segmentation of neck lymph nodes (LNs) in computed tomography (CT) images. It included 11,013 annotated LNs with a short-axis diameter ≥ 3 mm from 626 head and neck cancer patients across four hospitals. The nnUNet model was used as a baseline, pre-trained on a large-scale head and neck dataset, and then fine-tuned with 4,729 LNs from hospital A for detection and segmentation. Validation was conducted on an internal testing cohort (ITC A) and three external testing cohorts (ETCs B, C, and D), with 1,684 and 4,600 LNs, respectively. Detection was evaluated via sensitivity, positive predictive value (PPV), and false positive rate per case (FP/vol), while segmentation was assessed using the Dice similarity coefficient (DSC) and Hausdorff distance (HD95). For detection, the sensitivity, PPV, and FP/vol in ITC A were 54.6%, 69.0%, and 3.4, respectively. In the ETCs, sensitivity ranged from 45.7% at 3.9 FP/vol to 63.5% at 5.8 FP/vol. Segmentation achieved a mean DSC of 0.72 in ITC A and 0.72 to 0.74 in the ETCs, as well as a mean HD95 of 3.78 mm in ITC A and 2.73 mm to 2.85 mm in the ETCs. No significant sensitivity difference was found between contrast-enhanced and unenhanced CT images (p = 0.502) or repeated CT images (p = 0.815) during adaptive radiotherapy. The model's segmentation accuracy was comparable to that of experienced oncologists. The model shows promise in automatically detecting and segmenting neck LNs in CT images, potentially reducing oncologists' segmentation workload.
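The detection metrics quoted above (sensitivity, PPV, and FP/vol) can be sketched as follows. This is a generic illustration with made-up per-volume counts, not the authors' evaluation code:

```python
# Hypothetical sketch of lymph-node detection metrics: per CT volume we
# count true positives (detected annotated LNs), false positives (spurious
# detections), and false negatives (missed annotated LNs).

def detection_metrics(cases):
    """cases: list of (tp, fp, fn) tuples, one per CT volume."""
    tp = sum(c[0] for c in cases)
    fp = sum(c[1] for c in cases)
    fn = sum(c[2] for c in cases)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0  # recall over all LNs
    ppv = tp / (tp + fp) if tp + fp else 0.0          # precision
    fp_per_vol = fp / len(cases)                      # FP/vol
    return sensitivity, ppv, fp_per_vol

# Illustrative numbers only, not the study's data:
sens, ppv, fpv = detection_metrics([(6, 3, 4), (5, 4, 5)])
```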
Affiliation(s)
- Wenjun Liao
- Department of Radiation Oncology, Sichuan Cancer Hospital and Institute, Sichuan Cancer Center, Cancer Hospital Affiliate to School of Medicine, University of Electronic Science and Technology of China, Chengdu, 610041, China
- Xiangde Luo
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Lu Li
- Department of Radiation Oncology, Sichuan Cancer Hospital and Institute, Sichuan Cancer Center, Cancer Hospital Affiliate to School of Medicine, University of Electronic Science and Technology of China, Chengdu, 610041, China
- Jinfeng Xu
- Department of Radiation Oncology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Yuan He
- Department of Radiation Oncology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 23000, Anhui, China
- Hui Huang
- Cancer Center, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu, 610072, China
- Shichuan Zhang
- Department of Radiation Oncology, Sichuan Cancer Hospital and Institute, Sichuan Cancer Center, Cancer Hospital Affiliate to School of Medicine, University of Electronic Science and Technology of China, Chengdu, 610041, China
2
Al Hasan MM, Ghazimoghadam S, Tunlayadechanont P, Mostafiz MT, Gupta M, Roy A, Peters K, Hochhegger B, Mancuso A, Asadizanjani N, Forghani R. Automated Segmentation of Lymph Nodes on Neck CT Scans Using Deep Learning. Journal of Imaging Informatics in Medicine 2024; 37:2955-2966. [PMID: 38937342] [PMCID: PMC11612088] [DOI: 10.1007/s10278-024-01114-w]
Abstract
Early and accurate detection of cervical lymph nodes is essential for the optimal management and staging of patients with head and neck malignancies. Pilot studies have demonstrated the potential for radiomic and artificial intelligence (AI) approaches in increasing diagnostic accuracy for the detection and classification of lymph nodes, but implementation of many of these approaches in real-world clinical settings would necessitate an automated lymph node segmentation pipeline as a first step. In this study, we aim to develop a non-invasive deep learning (DL) algorithm for detecting and automatically segmenting cervical lymph nodes in 25,119 CT slices from 221 normal neck contrast-enhanced CT scans from patients without head and neck cancer. We focused on the most challenging task of segmentation of small lymph nodes, evaluated multiple architectures, and employed U-Net and our adapted spatial context network to detect and segment small lymph nodes measuring 5-10 mm. The developed algorithm achieved a Dice score of 0.8084, indicating its effectiveness in detecting and segmenting cervical lymph nodes despite their small size. A segmentation framework successful in this task could represent an essential initial block for future algorithms aiming to evaluate small objects such as lymph nodes in different body parts, including small lymph nodes looking normal to the naked human eye but harboring early nodal metastases.
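The Dice score reported here (0.8084) is the standard overlap metric between predicted and ground-truth binary masks, Dice = 2|A & B| / (|A| + |B|). A minimal sketch, not the authors' implementation:

```python
import numpy as np

# Dice similarity coefficient over binary segmentation masks.
def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom

# Toy 2D example: 4-voxel prediction vs. 6-voxel ground truth, 4 overlapping.
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True
score = dice(a, b)  # 2*4 / (4+6) = 0.8
```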
Affiliation(s)
- Md Mahfuz Al Hasan
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, 1600 SW Archer Road, Gainesville, FL, 32610-0374, USA
- Department of Electrical and Computer Engineering, University of Florida College of Medicine, Gainesville, FL, USA
- Saba Ghazimoghadam
- Augmented Intelligence and Precision Health Laboratory, Research Institute of the McGill University Health Centre, Montreal, QC, Canada
- Padcha Tunlayadechanont
- Augmented Intelligence and Precision Health Laboratory, Research Institute of the McGill University Health Centre, Montreal, QC, Canada
- Department of Diagnostic and Therapeutic Radiology and Research, Faculty of Medicine Ramathibodi Hospital, Ratchathewi, Bangkok, Thailand
- Mohammed Tahsin Mostafiz
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, 1600 SW Archer Road, Gainesville, FL, 32610-0374, USA
- Department of Electrical and Computer Engineering, University of Florida College of Medicine, Gainesville, FL, USA
- Manas Gupta
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, 1600 SW Archer Road, Gainesville, FL, 32610-0374, USA
- Antika Roy
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, 1600 SW Archer Road, Gainesville, FL, 32610-0374, USA
- Department of Electrical and Computer Engineering, University of Florida College of Medicine, Gainesville, FL, USA
- Keith Peters
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, 1600 SW Archer Road, Gainesville, FL, 32610-0374, USA
- Department of Radiology, University of Florida College of Medicine, Gainesville, FL, USA
- Bruno Hochhegger
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, 1600 SW Archer Road, Gainesville, FL, 32610-0374, USA
- Department of Radiology, University of Florida College of Medicine, Gainesville, FL, USA
- Anthony Mancuso
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, 1600 SW Archer Road, Gainesville, FL, 32610-0374, USA
- Department of Radiology, University of Florida College of Medicine, Gainesville, FL, USA
- Navid Asadizanjani
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, 1600 SW Archer Road, Gainesville, FL, 32610-0374, USA
- Department of Electrical and Computer Engineering, University of Florida College of Medicine, Gainesville, FL, USA
- Reza Forghani
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, 1600 SW Archer Road, Gainesville, FL, 32610-0374, USA
- Department of Radiology, University of Florida College of Medicine, Gainesville, FL, USA
- Division of Medical Physics, University of Florida College of Medicine, Gainesville, FL, USA
- Department of Neurology, Division of Movement Disorders, University of Florida College of Medicine, Gainesville, FL, USA
- Augmented Intelligence and Precision Health Laboratory, Research Institute of the McGill University Health Centre, Montreal, QC, Canada
3
Cao Y, Feng J, Wang C, Yang F, Wang X, Xu J, Huang C, Zhang S, Li Z, Mao L, Zhang T, Jia B, Li T, Li H, Zhang B, Shi H, Li D, Zhang N, Yu Y, Meng X, Zhang Z. LNAS: a clinically applicable deep-learning system for mediastinal enlarged lymph nodes segmentation and station mapping without regard to the pathogenesis using unenhanced CT images. La Radiologia Medica 2024; 129:229-238. [PMID: 38108979] [DOI: 10.1007/s11547-023-01747-x]
Abstract
BACKGROUND The accurate identification and evaluation of lymph nodes on CT images is of great significance for disease diagnosis, treatment, and prognosis. PURPOSE To assess lymph node segmentation, size, and station by artificial intelligence (AI) on unenhanced chest CT images and evaluate its value in clinical scenarios. MATERIAL AND METHODS This retrospective study proposed an end-to-end Lymph Nodes Analysis System (LNAS) consisting of three models: the Lymph Node Segmentation model (LNS), the Mediastinal Organ Segmentation model (MOS), and the Lymph Node Station Registration model (LNR). We selected a healthy chest CT image as the template image and annotated 14 lymph node station masks according to the IASLC to build the lymph node station mapping template. The exact contours and stations of the lymph nodes were annotated by two junior radiologists and reviewed by a senior radiologist. Patients aged 18 and above who had undergone unenhanced chest CT and had at least one suspicious enlarged mediastinal lymph node in imaging reports were included. Patients who had undergone thoracic surgery in the past 2 weeks, or whose CT images had artifacts affecting lymph node observation, were excluded. The system was trained on 6725 consecutive chest CTs from Tianjin Medical University General Hospital, among which 6249 patients had suspicious enlarged mediastinal lymph nodes. A total of 519 consecutive chest CTs from Qilu Hospital of Shandong University (Qingdao) were used for external validation. The gold standard for each CT was determined by two radiologists and reviewed by one senior radiologist. RESULTS The patient-level sensitivity of the LNAS system reached 93.94% and 92.89% in the internal and external test datasets, respectively, and the lesion-level sensitivity (recall) reached 89.48% and 85.97%. In the man-machine comparison, AI significantly shortened the average reading time (p < 0.001) and had better lesion-level and patient-level sensitivities. CONCLUSION AI improved the sensitivity of lymph node segmentation by radiologists, with an additional advantage in reading time.
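The two sensitivity levels reported above aggregate differently: lesion-level sensitivity (recall) counts every annotated node, while patient-level sensitivity counts a patient as detected if at least one of that patient's nodes is found. A sketch with illustrative data only:

```python
# Hypothetical sketch of lesion-level vs. patient-level sensitivity.
# Each patient is a list of booleans, one per annotated lymph node,
# True when the system detected that node.

def lesion_level_sensitivity(patients):
    hits = sum(sum(p) for p in patients)
    total = sum(len(p) for p in patients)
    return hits / total

def patient_level_sensitivity(patients):
    # A patient counts as detected if any of their nodes is found.
    return sum(any(p) for p in patients) / len(patients)

patients = [[True, False, True], [False, False], [True]]
lesion = lesion_level_sensitivity(patients)    # 3 of 6 nodes
patient = patient_level_sensitivity(patients)  # 2 of 3 patients
```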
Affiliation(s)
- Yang Cao
- Department of Radiology, Tianjin Medical University General Hospital, Tianjin, 300052, China
- Jintang Feng
- Department of Radiology, Tianjin Medical University General Hospital, Tianjin, 300052, China
- Department of Radiology, Tianjin Chest Hospital, Tianjin, China
- Fan Yang
- Department of Radiology, Tianjin Medical University General Hospital, Tianjin, 300052, China
- Xiaomeng Wang
- Department of Radiology, Tianjin Medical University General Hospital, Tianjin, 300052, China
- Li Mao
- Deepwise AI Lab, Beijing, China
- Tianzhu Zhang
- Department of Radiology, Tianjin Medical University General Hospital, Tianjin, 300052, China
- Bingzhen Jia
- Department of Radiology, Tianjin Medical University General Hospital, Tianjin, 300052, China
- Tongli Li
- Department of Radiology, Tianjin Medical University General Hospital, Tianjin, 300052, China
- Hui Li
- Department of Radiology, Qilu Hospital (Qingdao), Cheeloo College of Medicine, Shandong University, Jinan, Shandong, China
- Bingjin Zhang
- Department of Radiology, Qilu Hospital (Qingdao), Cheeloo College of Medicine, Shandong University, Jinan, Shandong, China
- Hongmei Shi
- Department of Radiology, Qilu Hospital (Qingdao), Cheeloo College of Medicine, Shandong University, Jinan, Shandong, China
- Dong Li
- Department of Radiology, Tianjin Medical University General Hospital, Tianjin, 300052, China
- Ningnannan Zhang
- Department of Radiology, Tianjin Medical University General Hospital, Tianjin, 300052, China
- Yizhou Yu
- Deepwise AI Lab, Beijing, China
- Department of Computer Science, The University of Hong Kong, Hong Kong, China
- Xiangshui Meng
- Department of Radiology, Qilu Hospital (Qingdao), Cheeloo College of Medicine, Shandong University, Jinan, Shandong, China
- Zhang Zhang
- Department of Radiology, Tianjin Medical University General Hospital, Tianjin, 300052, China
4
Manjunatha Y, Sharma V, Iwahori Y, Bhuyan MK, Wang A, Ouchi A, Shimizu Y. Lymph node detection in CT scans using modified U-Net with residual learning and 3D deep network. Int J Comput Assist Radiol Surg 2023; 18:723-732. [PMID: 36630071] [DOI: 10.1007/s11548-022-02822-w]
Abstract
PURPOSE Lymph node (LN) detection is a crucial step that complements the diagnosis and treatment involved in cancer investigations. However, the low-contrast structures in CT scan images and the nodes' varied shapes, sizes, and poses, along with their sparsely distributed locations, make the detection step challenging and lead to many false positives. Manual examination of the CT scan slices can be time-consuming, and false positives can divert the clinician's focus. To overcome these issues, our work provides an automated framework for LN detection that obtains more accurate detection results with few false positives. METHODS The proposed work consists of two stages: candidate generation and false positive reduction. The first stage generates volumes of interest (VOI) of probable LN candidates using a modified U-Net with ResNet architecture to obtain high sensitivity, but at the cost of increased false positives. The second stage processes the obtained candidate LNs for false positive reduction using a 3D convolutional neural network (CNN) classifier. We further present an analysis of various deep learning models while decomposing the 3D VOI into different representations. RESULTS The method is evaluated on two publicly available datasets containing CT scans of mediastinal and abdominal LNs. Our proposed approach yields sensitivities of 87% at 2.75 false positives per volume (FP/vol.) and 79% at 1.74 FP/vol. on the mediastinal and abdominal datasets, respectively. Our method showed competitive sensitivity compared with state-of-the-art methods while producing very few false positives. CONCLUSION We developed an automated framework for LN detection using a modified U-Net with residual learning and 3D CNNs. The results indicate that our method can achieve high sensitivity with relatively low false positives, which helps avoid ineffective treatments.
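Operating points like "87% sensitivity at 2.75 FP/vol." come from sweeping a score threshold over the second-stage classifier's candidates (a FROC curve). A minimal sketch of reading off one such point, with made-up candidates, not the paper's code:

```python
# Hypothetical FROC operating point: each candidate is (score, is_true_positive);
# n_gt is the number of annotated nodes, n_vol the number of CT volumes.

def froc_point(candidates, n_gt, n_vol, threshold):
    kept = [c for c in candidates if c[0] >= threshold]  # pass the classifier
    tp = sum(1 for _, is_tp in kept if is_tp)
    fp = len(kept) - tp
    return tp / n_gt, fp / n_vol  # (sensitivity, FP per volume)

# Toy candidates from 2 volumes with 4 ground-truth nodes:
cands = [(0.9, True), (0.8, False), (0.7, True), (0.6, True), (0.4, False)]
sens, fp_per_vol = froc_point(cands, n_gt=4, n_vol=2, threshold=0.5)
# sens = 3/4, fp_per_vol = 1/2
```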
Affiliation(s)
- Yashwanth Manjunatha
- Dept. of Electronics & Electrical Engineering, Indian Institute of Technology Guwahati, Guwahati, Assam, 781039, India
- Vanshali Sharma
- Dept. of Computer Science & Engineering, Indian Institute of Technology Guwahati, Guwahati, Assam, 781039, India
- Yuji Iwahori
- Dept. of Computer Science, Chubu University, Kasugai, 487-8501, Japan
- M K Bhuyan
- Dept. of Electronics & Electrical Engineering, Indian Institute of Technology Guwahati, Guwahati, Assam, 781039, India
- Mehta Family School of Data Science and Artificial Intelligence, Indian Institute of Technology Guwahati, Guwahati, Assam, 781039, India
- Aili Wang
- Higher Educational Key Laboratory for Measuring and Control Technology and Instrumentations of Heilongjiang, Harbin University of Science and Technology, Harbin, 150080, China
- Akira Ouchi
- Dept. of Gastroenterological Surgery, Aichi Cancer Center Hospital, Nagoya, 464-8681, Japan
- Yasuhiro Shimizu
- Dept. of Gastroenterological Surgery, Aichi Cancer Center Hospital, Nagoya, 464-8681, Japan
5
Jin D, Guo D, Ge J, Ye X, Lu L. Towards automated organs at risk and target volumes contouring: Defining precision radiation therapy in the modern era. Journal of the National Cancer Center 2022; 2:306-313. [PMID: 39036546] [PMCID: PMC11256697] [DOI: 10.1016/j.jncc.2022.09.003]
Abstract
Precision radiotherapy is a critical and indispensable cancer treatment means in the modern clinical workflow, with the goal of achieving "quality-up and cost-down" in patient care. The challenge of this therapy lies in developing computerized clinical-assistant solutions with precision, automation, and reproducibility built in to deliver it at scale. In this work, we provide a comprehensive, though necessarily incomplete, survey of and discussion on the recent progress of utilizing advanced deep learning, semantic organ parsing, multimodal imaging fusion, neural architecture search, and medical image analysis techniques to address four cornerstone problems or sub-problems required by all precision radiotherapy workflows, namely organs at risk (OARs) segmentation, gross tumor volume (GTV) segmentation, metastasized lymph node (LN) detection, and clinical target volume (CTV) segmentation. Without loss of generality, we mainly focus on esophageal and head-and-neck cancers as examples, but the methods can be extrapolated to other types of cancer. High-precision, automated, and highly reproducible OAR/GTV/LN/CTV auto-delineation techniques have demonstrated their effectiveness in reducing inter-practitioner variability and time cost, permitting rapid treatment planning and adaptive replanning for the benefit of patients. Through the presentation of the achievements and limitations of these techniques in this review, we hope to encourage more collective multidisciplinary precision radiotherapy workflows to transpire.
Affiliation(s)
- Dakai Jin
- DAMO Academy, Alibaba Group, New York, United States
- Dazhou Guo
- DAMO Academy, Alibaba Group, New York, United States
- Jia Ge
- Department of Radiation Oncology, The First Affiliated Hospital of Zhejiang University, Hangzhou, China
- Xianghua Ye
- Department of Radiation Oncology, The First Affiliated Hospital of Zhejiang University, Hangzhou, China
- Le Lu
- DAMO Academy, Alibaba Group, New York, United States
6
Wu C, Chang F, Su X, Wu Z, Wang Y, Zhu L, Zhang Y. Integrating features from lymph node stations for metastatic lymph node detection. Comput Med Imaging Graph 2022; 101:102108. [PMID: 36030621] [DOI: 10.1016/j.compmedimag.2022.102108]
Abstract
Metastasis to lymph nodes (LNs), the most common route of spread for primary tumor cells, is a sign of increased mortality. However, metastatic LNs are time-consuming and challenging to detect even for professional radiologists due to their small size, high sparsity, and ambiguous appearance. It is therefore desirable to leverage recent developments in deep learning to automatically detect metastatic LNs. Besides a two-stage detection network, we introduce an additional branch that leverages information about LN stations, an important reference for radiologists during metastatic LN diagnosis, as supplementary information for metastatic LN detection. The branch solves a closely related task at the LN station level, i.e., classifying whether an LN station contains metastatic LNs or not, so as to learn representations for LN stations. Considering that a metastatic LN station is expected to significantly affect nearby stations, the branch adopts a GCN-based structure to model the relationships among different LN stations. At the classification stage of metastatic LN detection, the learned LN station features, as well as features reflecting the distance between the LN candidate and the LN stations, are integrated with the LN features. We validate our method on a dataset containing 114 intravenous contrast-enhanced computed tomography (CT) images of oral squamous cell carcinoma (OSCC) patients and show that it outperforms several state-of-the-art methods in terms of mFROC, maxF1, and AUC.
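A GCN-based structure of the kind alluded to propagates features between adjacent stations through a normalized adjacency matrix. A minimal sketch of one standard GCN layer, H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W); the station graph, features, and weights below are all made up for illustration and are not the authors' architecture:

```python
import numpy as np

# One graph-convolution layer over a toy lymph-node-station graph.
def gcn_layer(A, H, W):
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # D^{-1/2}
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)  # ReLU

A = np.array([[0, 1, 0],   # 3 stations; 0 adjacent to 1, 1 adjacent to 2
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.eye(3)              # one-hot station features
W = np.ones((3, 2))        # toy weight matrix
out = gcn_layer(A, H, W)   # shape (3, 2): aggregated station features
```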
Affiliation(s)
- Chaoyi Wu
- Cooperative Medianet Innovation Center, Shanghai Jiao Tong University, Shanghai 200240, China
- Feng Chang
- Cooperative Medianet Innovation Center, Shanghai Jiao Tong University, Shanghai 200240, China
- Xiao Su
- Department of Radiology, School of Medicine, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University, Shanghai 200011, China
- Zhihan Wu
- School of Medicine, Shanghai Jiao Tong University, Shanghai 200025, China
- Yanfeng Wang
- Cooperative Medianet Innovation Center, Shanghai Jiao Tong University, Shanghai 200240, China; Shanghai AI Laboratory, Shanghai 200232, China
- Ling Zhu
- Department of Radiology, School of Medicine, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University, Shanghai 200011, China
- Ya Zhang
- Cooperative Medianet Innovation Center, Shanghai Jiao Tong University, Shanghai 200240, China; Shanghai AI Laboratory, Shanghai 200232, China
7
Wu H, Pang KKY, Pang GKH, Au-Yeung RKH. A soft-computing based approach to overlapped cells analysis in histopathology images with genetic algorithm. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.109279]
8
Reliable detection of lymph nodes in whole pelvic for radiotherapy. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103501]
9
Pang Y, Wang H, Li H. Medical Imaging Biomarker Discovery and Integration Towards AI-Based Personalized Radiotherapy. Front Oncol 2022; 11:764665. [PMID: 35111666] [PMCID: PMC8801459] [DOI: 10.3389/fonc.2021.764665]
Abstract
Intensity-modulated radiation therapy (IMRT) has been used to sculpt highly accurate physical dose distributions and to modulate different dose levels into the Gross Tumor Volume (GTV), Clinical Target Volume (CTV), and Planning Target Volume (PTV). The GTV, CTV, and PTV can be prescribed at different dose levels; however, their dose distributions are conventionally required to be uniform, despite the fact that most tumour types are heterogeneous. With traditional radiomics and artificial intelligence (AI) techniques, we can identify a biological target volume from functional images, as opposed to the conventional GTV derived from anatomical imaging. Functional imaging, such as multi-parameter MRI and PET, can be used to implement dose painting, which allows us to achieve dose escalation by increasing doses in therapy-resistant areas of the GTV and reducing doses in less aggressive areas. In this review, we first discuss several quantitative functional imaging techniques, including PET-CT and multi-parameter MRI. Furthermore, theoretical and experimental comparisons of dose painting by contours (DPBC) and dose painting by numbers (DPBN), along with outcome analysis after dose painting, are provided. State-of-the-art AI-based biomarker diagnosis techniques are reviewed. Finally, we summarize major challenges and future directions in AI-based biomarkers to improve cancer diagnosis and radiotherapy treatment.
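Dose painting by numbers (DPBN) prescribes a voxel-wise dose driven by a functional-image value. One common formulation is a linear mapping from uptake to dose; the dose range and the linear rescaling below are illustrative assumptions, not a clinically validated prescription:

```python
import numpy as np

# Hypothetical DPBN sketch: rescale a functional-image voxel value
# (e.g. PET SUV) linearly onto a dose between a base and an escalated level.
def dpbn_dose(intensity, i_min, i_max, d_min=60.0, d_max=74.0):
    """Map intensity in [i_min, i_max] to dose in Gy, clipped at the ends."""
    frac = np.clip((intensity - i_min) / (i_max - i_min), 0.0, 1.0)
    return d_min + frac * (d_max - d_min)

suv = np.array([2.0, 5.0, 8.0])              # toy uptake values
dose = dpbn_dose(suv, i_min=2.0, i_max=8.0)  # [60.0, 67.0, 74.0] Gy
```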
Affiliation(s)
- Yaru Pang
- Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Hui Wang
- Department of Chemical Engineering, University College London, London, United Kingdom
- He Li
- Department of Engineering, University of Cambridge, Cambridge, United Kingdom
10
Ariji Y, Kise Y, Fukuda M, Kuwada C, Ariji E. Segmentation of metastatic cervical lymph nodes from CT images of oral cancers using deep learning technology. Dentomaxillofac Radiol 2022; 51:20210515. [PMID: 35113725] [PMCID: PMC9499194] [DOI: 10.1259/dmfr.20210515]
Abstract
OBJECTIVE The purpose of this study was to establish a deep learning model for segmenting the cervical lymph nodes of oral cancer patients and diagnosing metastatic versus non-metastatic lymph nodes from contrast-enhanced computed tomography (CT) images. METHODS CT images of 158 metastatic and 514 non-metastatic lymph nodes were prepared and assigned to training, validation, and test datasets. Color-annotated lymph node images were prepared together with the original images for the training and validation datasets. Learning was performed for 200 epochs using the neural network U-Net. Performance in segmenting lymph nodes and diagnosing metastasis was evaluated. RESULTS Performance in segmenting metastatic lymph nodes showed a recall of 0.742, a precision of 0.942, and an F1 score of 0.831. The recall for metastatic lymph nodes at level II was 0.875, the highest value. The diagnostic performance for identifying metastasis showed an area under the curve (AUC) of 0.950, which was significantly higher than that of radiologists (0.896). CONCLUSIONS A deep learning model was created to automatically segment the cervical lymph nodes of oral squamous cell carcinomas. Segmentation performance still needs improvement, but metastases in the segmented lymph nodes were diagnosed more accurately than by human evaluation.
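The F1 score is the harmonic mean of precision and recall, so the reported recall of 0.742 and precision of 0.942 reproduce an F1 of about 0.83 (the small gap to the quoted 0.831 is consistent with rounding of the inputs). A quick check:

```python
# F1 as the harmonic mean of precision and recall.
def f1_score(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

f1 = f1_score(precision=0.942, recall=0.742)  # ~0.830
```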
Affiliation(s)
- Yoshiko Ariji
- Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan
- Department of Oral Radiology, Osaka Dental University, Osaka, Japan
- Yoshitaka Kise
- Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan
- Motoki Fukuda
- Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan
- Chiaki Kuwada
- Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan
- Eiichiro Ariji
- Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan
11
Schmid D, Scholz VB, Kircher PR, Lautenschlaeger IE. Employing deep convolutional neural networks for segmenting the medial retropharyngeal lymph nodes in CT studies of dogs. Vet Radiol Ultrasound 2022; 63:763-770. [PMID: 35877815] [PMCID: PMC9796347] [DOI: 10.1111/vru.13132]
Abstract
While still in its infancy, the application of deep convolutional neural networks in veterinary diagnostic imaging is a rapidly growing field. Convolutional neural networks are the preferred deep learning architecture, as they provide a structure well suited to the analysis of medical images. With this retrospective exploratory study, the applicability of such networks to the task of delineating certain organs from their surrounding tissues was tested. More precisely, a deep convolutional neural network was trained to segment the medial retropharyngeal lymph nodes in a study dataset consisting of CT scans of canine heads. With a limited dataset of 40 patients, the network, in conjunction with image augmentation techniques, achieved an overall fair intersection-over-union (median 39%; 25th percentile, 22%; 75th percentile, 51%). The results indicate that these architectures can indeed be trained to segment anatomic structures in anatomically complicated areas with breed-related variation, such as the head, possibly even using small training sets. As these conditions are quite common in veterinary medical imaging, all routines were published as an open-source Python package in the hope of simplifying future research projects in the community.
Affiliation(s)
- David Schmid
- Clinic of Diagnostic Imaging, Vetsuisse Faculty, University of Zurich, Zurich, Switzerland
- Volkher B. Scholz
- Clinic of Diagnostic Imaging, Vetsuisse Faculty, University of Zurich, Zurich, Switzerland
- Patrick R. Kircher
- Clinic of Diagnostic Imaging, Vetsuisse Faculty, University of Zurich, Zurich, Switzerland
12
Yousefirizi F, Decazes P, Amyar A, Ruan S, Saboury B, Rahmim A. AI-Based Detection, Classification and Prediction/Prognosis in Medical Imaging: Towards Radiophenomics. PET Clin 2021; 17:183-212. [PMID: 34809866] [DOI: 10.1016/j.cpet.2021.09.010]
Abstract
Artificial intelligence (AI) techniques have significant potential to enable effective, robust, and automated image phenotyping, including the identification of subtle patterns. AI-based detection searches the image space to find regions of interest based on patterns and features. There is a spectrum of tumor histologies, from benign to malignant, that can be identified by AI-based classification approaches using image features. The extraction of minable information from images gives rise to the field of "radiomics" and can be explored via explicit (handcrafted/engineered) and deep radiomics frameworks. Radiomics analysis has the potential to be used as a noninvasive technique for the accurate characterization of tumors to improve diagnosis and treatment monitoring. This work reviews AI-based techniques, with a special focus on oncological PET and PET/CT imaging, for different detection, classification, and prediction/prognosis tasks. We also discuss the efforts needed to enable the translation of AI techniques to routine clinical workflows, and potential improvements and complementary techniques such as the use of natural language processing on electronic health records and neuro-symbolic AI techniques.
Affiliation(s)
- Fereshteh Yousefirizi
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada
- Pierre Decazes
- Department of Nuclear Medicine, Henri Becquerel Centre, Rue d'Amiens - CS 11516 - 76038 Rouen Cedex 1, France; QuantIF-LITIS, Faculty of Medicine and Pharmacy, Research Building - 1st floor, 22 boulevard Gambetta, 76183 Rouen Cedex, France
- Amine Amyar
- QuantIF-LITIS, Faculty of Medicine and Pharmacy, Research Building - 1st floor, 22 boulevard Gambetta, 76183 Rouen Cedex, France; General Electric Healthcare, Buc, France
- Su Ruan
- QuantIF-LITIS, Faculty of Medicine and Pharmacy, Research Building - 1st floor, 22 boulevard Gambetta, 76183 Rouen Cedex, France
- Babak Saboury
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, MD, USA; Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore County, Baltimore, MD, USA; Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, PA, USA
- Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada; Department of Radiology, University of British Columbia, Vancouver, British Columbia, Canada; Department of Physics, University of British Columbia, Vancouver, British Columbia, Canada

13
Leveraging network using controlled weight learning approach for thyroid cancer lymph node detection. Biocybern Biomed Eng 2021. [DOI: 10.1016/j.bbe.2021.10.003] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
14
Li J, Sun W, Feng X, Xing G, von Deneen KM, Wang W, Zhang Y, Cui G. A dense connection encoding–decoding convolutional neural network structure for semantic segmentation of thymoma. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.04.023] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
15
Godinho DM, Felício JM, Castela T, Silva NA, Orvalho MDL, Fernandes CA, Conceição RC. Development of MRI-based axillary numerical models and estimation of axillary lymph node dielectric properties for microwave imaging. Med Phys 2021; 48:5974-5990. [PMID: 34338335 DOI: 10.1002/mp.15143] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2021] [Revised: 07/20/2021] [Accepted: 07/22/2021] [Indexed: 11/11/2022] Open
Abstract
PURPOSE Microwave imaging (MWI) has been studied as a complementary imaging modality to improve the sensitivity and specificity of diagnosis of axillary lymph nodes (ALNs), which can be metastasized by breast cancer. The feasibility of such a system rests on the dielectric contrast between healthy and metastasized ALNs. However, reliable information such as anatomically realistic numerical models and matching dielectric properties of the axillary region and ALNs, which are crucial to develop MWI systems, is still limited in the literature. The purpose of this work is to develop a methodology to infer dielectric properties of structures from magnetic resonance imaging (MRI), in particular ALNs. We further use this methodology, which is tailored for structures farther away from MR coils, to create MRI-based numerical models of the axillary region and share them with the scientific community through an open-access repository. METHODS We use a dataset of breast MRI scans of 40 patients, 15 of them with metastasized ALNs. We apply image processing techniques to minimize the artifacts in MR images and segment the tissues of interest. The background, lung cavity, and skin are segmented using thresholding techniques, and the remaining tissues are segmented using a K-means clustering algorithm. The ALNs are segmented by combining the clustering results of two MRI sequences. The performance of this methodology was evaluated using qualitative criteria. We then apply a piecewise linear interpolation between voxel signal intensities and known dielectric properties, which allows us to create dielectric property maps within an MRI and consequently infer ALN properties. Finally, we compare healthy and metastasized ALN dielectric properties within and between patients, and we create an open-access repository of axillary region numerical models which can be used for electromagnetic simulations. RESULTS The proposed methodology allowed us to create anatomically realistic models of the axillary region, segmenting 80 ALNs and analyzing the corresponding dielectric properties. The estimated relative permittivity of those ALNs ranged from 16.6 to 49.3 at 5 GHz. We observe high variability in the dielectric properties of ALNs, which can be mainly related to ALN size and, consequently, composition. We verified an average dielectric contrast of 29% between healthy and metastasized ALNs. Our repository comprises 10 numerical models of the axillary region, from five patients, with a variable number of metastasized ALNs and varying body mass index. CONCLUSIONS The observed contrast between healthy and metastasized ALNs is a good indicator of the feasibility of an MWI system aiming to diagnose ALNs. This paper presents new contributions regarding anatomical modeling and dielectric property characterization, in particular for axillary region applications.
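The piecewise linear interpolation step described above can be sketched with `numpy.interp`; the anchor points below are invented for illustration and are NOT the study's calibration values:

```python
import numpy as np

# Hypothetical mapping from normalized voxel signal intensity to relative
# permittivity at 5 GHz. Anchor values are illustrative assumptions only.
intensity_anchors = [0.0, 0.3, 0.7, 1.0]        # normalized MRI intensity
permittivity_anchors = [5.0, 16.6, 40.0, 49.3]  # relative permittivity

voxels = np.array([0.1, 0.5, 0.9])  # normalized voxel intensities
eps_r = np.interp(voxels, intensity_anchors, permittivity_anchors)
print(eps_r)  # one permittivity estimate per voxel
```

Applied voxel-wise over a segmented MRI volume, this kind of lookup yields the dielectric property maps the abstract describes.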
Affiliation(s)
- Daniela M Godinho
- Instituto de Biofísica e Engenharia Biomédica, Faculdade de Ciências da Universidade de Lisboa, Lisbon, Portugal
- João M Felício
- Centro de Investigação Naval (CINAV), Escola Naval, Almada, Portugal; Instituto de Telecomunicações, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal
- Tiago Castela
- Departamento de Radiologia, Hospital da Luz Lisboa, Luz Saúde, Lisbon, Portugal
- Nuno A Silva
- Hospital da Luz Learning Health, Luz Saúde, Lisbon, Portugal
- Carlos A Fernandes
- Instituto de Telecomunicações, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal
- Raquel C Conceição
- Instituto de Biofísica e Engenharia Biomédica, Faculdade de Ciências da Universidade de Lisboa, Lisbon, Portugal

16
Wang PP, Deng CL, Wu B. Magnetic resonance imaging-based artificial intelligence model in rectal cancer. World J Gastroenterol 2021; 27:2122-2130. [PMID: 34025068 PMCID: PMC8117733 DOI: 10.3748/wjg.v27.i18.2122] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/10/2021] [Revised: 02/23/2021] [Accepted: 03/16/2021] [Indexed: 02/06/2023] Open
Abstract
Rectal magnetic resonance imaging (MRI) is the preferred method for the diagnosis of rectal cancer, as recommended by the guidelines. Rectal MRI can accurately evaluate tumor location, tumor stage, invasion depth, extramural vascular invasion, and the circumferential resection margin. We summarize the progress of research on the use of artificial intelligence (AI) in rectal cancer in recent years. AI, represented by machine learning, is being increasingly used in the medical field, and applications of AI models based on high-resolution MRI in rectal cancer have been increasingly reported. In addition to diagnosis, staging, and radiotherapy localization, a growing number of studies have reported that AI models based on high-resolution MRI can be used to predict the response to chemotherapy and patient prognosis.
Affiliation(s)
- Pei-Pei Wang
- Department of General Surgery, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China
- Chao-Lin Deng
- Department of General Surgery, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China
- Bin Wu
- Department of General Surgery, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China

17
Jeffrey Kuo CF, Hsun Lin K, Weng WH, Barman J, Huang CC, Chiu CW, Lee JL, Hsu HH. Complete fully automatic segmentation and 3-dimensional measurement of mediastinal lymph nodes for a new response evaluation criteria for solid tumors. Biocybern Biomed Eng 2021. [DOI: 10.1016/j.bbe.2021.03.008] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
18
Li Z, Xia Y. Deep Reinforcement Learning for Weakly-Supervised Lymph Node Segmentation in CT Images. IEEE J Biomed Health Inform 2021; 25:774-783. [PMID: 32749988 DOI: 10.1109/jbhi.2020.3008759] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Accurate and automated lymph node segmentation is pivotal for quantitatively assessing disease progression and potential therapeutics. The complex variation of lymph node morphology and the difficulty of acquiring voxel-wise manual annotations make lymph node segmentation a challenging task. Since the Response Evaluation Criteria in Solid Tumors (RECIST) annotation, which indicates the location, length, and width of a lymph node, is commonly available in hospital data archives, we advocate using RECIST annotations as the supervision, and thus formulate this segmentation task as a weakly-supervised learning problem. In this paper, we propose a deep reinforcement learning-based lymph node segmentation (DRL-LNS) model. Based on RECIST annotations, we segment RECIST-slices in an unsupervised way to produce pseudo ground truths, which are then used to train U-Net as a segmentation network. Next, we train a DRL model, in which the segmentation network interacts with the policy network to optimize the lymph node bounding boxes and segmentation results simultaneously. The proposed DRL-LNS model was evaluated against three widely used image segmentation networks on a public thoracoabdominal computed tomography (CT) dataset that contains 984 3D lymph nodes, and achieves a mean Dice similarity coefficient (DSC) of 77.17% and a mean Intersection over Union (IoU) of 64.78% in four-fold cross-validation. Our results suggest that the DRL-based bounding box prediction strategy outperforms the label propagation strategy and that the proposed DRL-LNS model achieves state-of-the-art performance on this weakly-supervised lymph node segmentation task.
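The RECIST-to-pseudo-label idea can be sketched as below, assuming an axis-aligned ellipse built from the annotated length and width; the paper's actual unsupervised RECIST-slice segmentation is more involved, so this is only an illustrative approximation (all names hypothetical):

```python
def recist_pseudo_mask(shape, center, length, width):
    """Pixels inside an axis-aligned ellipse whose long axis is `length`
    and short axis is `width` (both in pixels), centered at (row, col).
    Real RECIST axes may be oblique; this toy keeps them axis-aligned."""
    cy, cx = center
    a, b = length / 2.0, width / 2.0  # semi-axes (cols, rows)
    mask = set()
    for y in range(shape[0]):
        for x in range(shape[1]):
            if ((x - cx) / a) ** 2 + ((y - cy) / b) ** 2 <= 1.0:
                mask.add((y, x))
    return mask

# A 10 px long, 6 px wide node annotated at (16, 16) on a 32x32 slice:
mask = recist_pseudo_mask((32, 32), center=(16, 16), length=10, width=6)
print(len(mask))  # number of pixels in the pseudo ground truth
```

Such a pseudo mask could then play the role of the ground truth when training a segmentation network, in the spirit of the weak supervision described above.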
19
Xu G, Cao H, Udupa JK, Tong Y, Torigian DA. DiSegNet: A deep dilated convolutional encoder-decoder architecture for lymph node segmentation on PET/CT images. Comput Med Imaging Graph 2021; 88:101851. [PMID: 33465588 DOI: 10.1016/j.compmedimag.2020.101851] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2020] [Revised: 11/20/2020] [Accepted: 12/15/2020] [Indexed: 11/29/2022]
Abstract
PURPOSE Automated lymph node (LN) recognition and segmentation from cross-sectional medical images is an important step in the automated diagnostic assessment of patients with cancer. Yet it remains a difficult task owing to the low contrast between LNs and surrounding soft tissues, as well as the variation in nodal size and shape. In this paper, we present a novel LN segmentation method based on a newly designed neural network for positron emission tomography/computed tomography (PET/CT) images. METHODS This work addresses two problems in the LN segmentation task. First, an efficient loss function named cosine-sine (CS) is proposed for the voxel class-imbalance problem in the network training process. Second, a multi-stage, multi-scale atrous (dilated) spatial pyramid pooling sub-module, named MS-ASPP, is introduced into the encoder-decoder architecture (SegNet), with the aim of exploiting multi-scale information to improve LN segmentation performance. The new architecture is named DiSegNet (Dilated SegNet). RESULTS Four-fold cross-validation was performed on 63 PET/CT data sets. In each experiment, 10 data sets were selected randomly for testing and the other 53 for training. DiSegNet trained with the CS loss function reached an average Dice similarity coefficient of 77%, compared with 71% for a baseline SegNet trained with cross-entropy (CE) loss. CONCLUSIONS The performance of the proposed DiSegNet with the CS loss function suggests its potential clinical value for disease quantification.
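The atrous (dilated) convolution underlying an ASPP-style module can be illustrated in one dimension: a dilation rate r samples the input with gaps of r - 1, enlarging the receptive field without adding weights. A toy sketch, not the DiSegNet implementation:

```python
def dilated_conv1d(signal, kernel, rate):
    """Valid 1-D convolution (no flip) with dilation `rate`."""
    k = len(kernel)
    span = (k - 1) * rate + 1  # effective receptive field of the kernel
    out = []
    for i in range(len(signal) - span + 1):
        out.append(sum(kernel[j] * signal[i + j * rate] for j in range(k)))
    return out

x = [1, 2, 3, 4, 5, 6]
print(dilated_conv1d(x, [1, 1, 1], rate=1))  # dense: [6, 9, 12, 15]
print(dilated_conv1d(x, [1, 1, 1], rate=2))  # dilated: [9, 12]
```

An ASPP block runs several such convolutions with different rates in parallel and fuses the results, which is how multi-scale context is gathered.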
Affiliation(s)
- Guoping Xu
- School of Computer Sciences and Engineering, Wuhan Institute of Technology, Wuhan, Hubei, 430205, China; School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; Medical Image Processing Group, 602 Goddard Building, 3710 Hamilton Walk, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, United States
- Hanqiang Cao
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Jayaram K Udupa
- Medical Image Processing Group, 602 Goddard Building, 3710 Hamilton Walk, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, United States
- Yubing Tong
- Medical Image Processing Group, 602 Goddard Building, 3710 Hamilton Walk, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, United States
- Drew A Torigian
- Medical Image Processing Group, 602 Goddard Building, 3710 Hamilton Walk, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, United States

20
Jiang Y, Chen W, Liu M, Wang Y, Meijering E. 3D Neuron Microscopy Image Segmentation via the Ray-Shooting Model and a DC-BLSTM Network. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:26-37. [PMID: 32881683 DOI: 10.1109/tmi.2020.3021493] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
The morphology reconstruction (tracing) of neurons in 3D microscopy images is important to neuroscience research. However, this task remains very challenging because of the low signal-to-noise ratio (SNR) and the discontinuous segments of neurite patterns in the images. In this paper, we present a neuronal structure segmentation method based on the ray-shooting model and a Long Short-Term Memory (LSTM)-based network to enhance weak-signal neuronal structures and remove background noise in 3D neuron microscopy images. Specifically, the ray-shooting model is used to extract the intensity distribution features within a local region of the image, and we design a neural network based on a dual-channel bidirectional LSTM (DC-BLSTM) to detect the foreground voxels according to the voxel-intensity features and boundary-response features extracted by multiple ray-shooting models generated across the whole image. In this way, we transform the 3D image segmentation task into multiple 1D ray/sequence segmentation tasks, which makes it much easier to label the training samples than in many existing Convolutional Neural Network (CNN) based 3D neuron image segmentation methods. In the experiments, we evaluate the performance of our method on challenging 3D neuron images from two datasets, the BigNeuron dataset and the Whole Mouse Brain Sub-image (WMBS) dataset. Compared with neuron tracing results on segmented images produced by other state-of-the-art neuron segmentation methods, our method improves the distance scores by about 32% and 27% on the BigNeuron dataset, and by about 38% and 27% on the WMBS dataset.
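The ray-shooting idea of recasting volumetric segmentation as 1D sequence tasks can be sketched as follows; a 2D toy with nearest-neighbor sampling, all names hypothetical and not the authors' code:

```python
import numpy as np

def shoot_rays(image, seed, n_rays=8, length=5):
    """Sample an (n_rays, length) array of intensity profiles along rays
    cast from `seed` in evenly spaced directions (nearest-neighbor lookup)."""
    h, w = image.shape
    sy, sx = seed
    profiles = np.zeros((n_rays, length))
    for r in range(n_rays):
        theta = 2 * np.pi * r / n_rays  # evenly spaced ray directions
        for t in range(length):
            y = int(round(sy + t * np.sin(theta)))
            x = int(round(sx + t * np.cos(theta)))
            y = min(max(y, 0), h - 1)  # clamp to the image bounds
            x = min(max(x, 0), w - 1)
            profiles[r, t] = image[y, x]
    return profiles

img = np.arange(81, dtype=float).reshape(9, 9)  # toy 2-D "volume"
rays = shoot_rays(img, seed=(4, 4))
print(rays.shape)  # eight 1-D sequences, each ready for a sequence model
```

Each row of `rays` is a 1D profile that a sequence model such as an LSTM can label, which is the reformulation the abstract describes.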
21
Wang WA, Dong P, Zhang A, Wang WJ, Guo CA, Wang J, Liu HB. Artificial intelligence: A new budding star in gastric cancer. Artif Intell Gastroenterol 2020; 1:60-70. [DOI: 10.35712/aig.v1.i4.60] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/30/2020] [Revised: 11/01/2020] [Accepted: 11/27/2020] [Indexed: 02/06/2023] Open
Abstract
The pursuit of health has always been the driving force behind the advancement of human society, and social development is profoundly affected by every breakthrough in medicine. With the arrival of the information technology revolution, artificial intelligence (AI) has developed rapidly. AI has been combined with medicine, but it has been less studied in gastric cancer (GC). AI is a new budding star in GC, and its contribution to GC is mainly focused on diagnosis and treatment. For early GC, AI's impact is reflected not only in its high accuracy but also in its ability to quickly train primary doctors, improve the diagnosis rate of early GC, and reduce missed cases. At the same time, it can reduce the possibility of missed diagnosis of advanced GC in the cardia. Furthermore, it is used to assist imaging doctors in determining the location of lymph nodes and, more importantly, it can more effectively judge lymph node metastasis in GC, which is conducive to assessing patient prognosis. It also has great potential in the surgical treatment of GC. Robotic surgery is the latest technology in GC surgery; it is a bright star for minimally invasive treatment of GC and, together with laparoscopic surgery, has become a common treatment for GC. Through machine learning, robotic systems can reduce operator errors and patient trauma, and can help predict the prognosis of GC patients. Throughout centuries of development, surgery has gradually shifted from traumatic to minimally invasive approaches. In the future, AI will help GC patients reduce surgical trauma and further improve the efficiency of minimally invasive treatment of GC.
Affiliation(s)
- Wen-An Wang
- Graduate School, Gansu University of Traditional Chinese Medicine, Lanzhou 730000, Gansu Province, China
- Department of General Surgery, The 940th Hospital of Joint Logistics Support Force of Chinese People’s Liberation Army, Lanzhou 730050, Gansu Province, China
- Peng Dong
- Department of General Surgery, Lanzhou University Second Hospital, Lanzhou 730000, Gansu Province, China
- An Zhang
- Graduate School, Gansu University of Traditional Chinese Medicine, Lanzhou 730000, Gansu Province, China
- Department of General Surgery, The 940th Hospital of Joint Logistics Support Force of Chinese People’s Liberation Army, Lanzhou 730050, Gansu Province, China
- Wen-Jie Wang
- Department of General Surgery, Lanzhou University Second Hospital, Lanzhou 730000, Gansu Province, China
- Chang-An Guo
- Department of Emergency Medicine, Lanzhou University Second Hospital, Lanzhou 730000, Gansu Province, China
- Jing Wang
- Graduate School, Gansu University of Traditional Chinese Medicine, Lanzhou 730000, Gansu Province, China
- Department of General Surgery, The 940th Hospital of Joint Logistics Support Force of Chinese People’s Liberation Army, Lanzhou 730050, Gansu Province, China
- Hong-Bin Liu
- Department of General Surgery, The 940th Hospital of Joint Logistics Support Force of Chinese People’s Liberation Army, Lanzhou 730050, Gansu Province, China

22
Jin D, Guo D, Ho TY, Harrison AP, Xiao J, Tseng CK, Lu L. DeepTarget: Gross tumor and clinical target volume segmentation in esophageal cancer radiotherapy. Med Image Anal 2020; 68:101909. [PMID: 33341494 DOI: 10.1016/j.media.2020.101909] [Citation(s) in RCA: 40] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2020] [Revised: 09/10/2020] [Accepted: 11/13/2020] [Indexed: 12/19/2022]
Abstract
Gross tumor volume (GTV) and clinical target volume (CTV) delineation are two critical steps in cancer radiotherapy planning. The GTV defines the primary treatment area of the gross tumor, while the CTV outlines the sub-clinical malignant disease. Automatic GTV and CTV segmentation are both challenging for distinct reasons: GTV segmentation relies on the radiotherapy computed tomography (RTCT) image appearance, which suffers from poor contrast with the surrounding tissues, while CTV delineation relies on a mixture of predefined and judgement-based margins. High intra- and inter-user variability makes this a particularly difficult task. We develop tailored methods solving each task in esophageal cancer radiotherapy, together leading to a comprehensive solution for the target contouring task. Specifically, we integrate the RTCT and positron emission tomography (PET) modalities into a two-stream chained deep fusion framework, taking advantage of both modalities to facilitate more accurate GTV segmentation. For CTV segmentation, since it is highly context-dependent (it must encompass the GTV and involved lymph nodes while avoiding excessive exposure of the organs at risk), we formulate it as a deep contextual appearance-based problem using encoded spatial distances of these anatomical structures. This better emulates the margin- and appearance-based CTV delineation performed by oncologists. Adding to our contributions, for GTV segmentation we propose a simple yet effective progressive semantically-nested network (PSNN) backbone that outperforms more complicated models. Our work is the first to provide a comprehensive solution for esophageal GTV and CTV segmentation in radiotherapy planning. Extensive 4-fold cross-validation on 148 esophageal cancer patients, the largest analysis to date, was carried out for both tasks. The results demonstrate that our GTV and CTV segmentation approaches significantly improve on previous state-of-the-art work, e.g., an 8.7% increase in Dice score (DSC) and a 32.9 mm reduction in Hausdorff distance (HD) for GTV segmentation, and a 3.4% increase in DSC and a 29.4 mm reduction in HD for CTV segmentation.
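The "encoded spatial distances" used for CTV context can be illustrated with a brute-force distance map; a toy sketch (a real pipeline would use a fast distance transform), not the authors' code:

```python
import numpy as np

def distance_map(mask):
    """For each pixel of a 2-D boolean mask, the Euclidean distance to the
    nearest True pixel. O(N^2) brute force, fine for a toy grid."""
    h, w = mask.shape
    targets = np.argwhere(mask)  # coordinates of the structure's pixels
    dist = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            dist[y, x] = np.sqrt(((targets - (y, x)) ** 2).sum(axis=1)).min()
    return dist

gtv = np.zeros((5, 5), dtype=bool)
gtv[2, 2] = True  # a one-pixel "GTV"
d = distance_map(gtv)
print(d[2, 2], d[2, 4])  # 0.0 on the GTV, 2.0 two pixels away
```

Stacking such maps (one per anatomical structure) as extra input channels gives a network explicit geometric context, which is the spirit of the encoding described above.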
Affiliation(s)
- Jing Xiao
- Ping An Technology, Shenzhen, Guangdong, China
- Le Lu
- PAII Inc., Bethesda, MD, USA

23
Xu G, Udupa JK, Tong Y, Odhner D, Cao H, Torigian DA. AAR-LN-DQ: Automatic anatomy recognition based disease quantification in thoracic lymph node zones via FDG PET/CT images without Nodal Delineation. Med Phys 2020; 47:3467-3484. [PMID: 32418221 DOI: 10.1002/mp.14240] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2020] [Revised: 04/22/2020] [Accepted: 05/08/2020] [Indexed: 01/02/2023] Open
Abstract
PURPOSE The derivation of quantitative information from medical images in a practical manner is essential for quantitative radiology (QR) to become a clinical reality, but still faces a major hurdle because of image segmentation challenges. With the goal of performing disease quantification in lymph node (LN) stations without explicit nodal delineation, this paper presents a novel approach for disease quantification (DQ) by automatic recognition of LN zones and detection of malignant lymph nodes within thoracic LN zones via positron emission tomography/computed tomography (PET/CT) images. Named AAR-LN-DQ, this approach decouples DQ methods from explicit nodal segmentation via an LN recognition strategy involving a novel globular filter and a deep neural network called SegNet. METHODS The methodology consists of four main steps: (a) building lymph node zone models by the automatic anatomy recognition (AAR) method, incorporating novel aspects of model building that relate to finding an optimal hierarchy for organs and lymph node zones in the thorax; (b) recognizing lymph node zones with the built lymph node models; (c) detecting pathologic LNs in the recognized zones using a novel globular filter (g-filter) and a multi-level support vector machine (SVM) classifier, where we exploit the generally globular shape of LNs to first localize them and then identify pathologic LNs among those localized by the g-filter; alternatively, a deep neural network called SegNet is trained to directly recognize pathologic nodes within AAR-localized LN zones; and (d) quantifying disease based on the identified pathologic LNs within localized zones. A fuzzy disease map is devised to express the degree of disease burden at each voxel within the identified LNs, simultaneously handling several uncertain phenomena such as PET partial volume effects, uncertainty in LN localization, and gradation of disease content at the voxel level. We focused on the task of disease quantification in patients with lymphoma based on PET/CT acquisitions and devised a method of evaluation. Model building was carried out using 42 near-normal patient datasets via contrast-enhanced CT examinations of the thorax. PET/CT datasets from an additional 63 lymphoma patients were utilized for evaluating the AAR-LN-DQ methodology. We assess the accuracy of the three main processes involved in AAR-LN-DQ via fivefold cross-validation: lymph node zone recognition, abnormal lymph node localization, and disease quantification. RESULTS The recognition and scale error for LN zones were 12.28 mm ± 1.99 and 0.94 ± 0.02, respectively, on normal CT datasets. On abnormal PET/CT datasets, the sensitivity and specificity of pathologic LN recognition were 84.1% ± 0.115 and 98.5% ± 0.003, respectively, for the g-filter-SVM strategy, and 91.3% ± 0.110 and 96.1% ± 0.016, respectively, for the SegNet method. Finally, the mean absolute percent errors for disease quantification of the recognized abnormal LNs were 8% ± 0.09 and 14% ± 0.10 for the g-filter-SVM method and the best SegNet strategy, respectively. CONCLUSIONS Accurate disease quantification on PET/CT images without explicit delineation of lymph nodes is feasible following lymph node zone and pathologic LN localization. Performing LN zone recognition by AAR is very useful, as this step can cover most (95.8%) of the abnormal LNs, drastically reduce the regions to search for abnormal LNs, and significantly improve the specificity of deep networks such as SegNet. It is possible to exploit general shape information about LNs, such as their globular nature, via the g-filter and to achieve high recognition rates for abnormal LNs in conjunction with a traditional classifier such as SVM. Finally, the disease map concept is effective for estimating disease burden, irrespective of how the LNs are identified, handling various uncertainties without having to address them explicitly one by one.
Affiliation(s)
- Guoping Xu
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard Building, 3710 Hamilton Walk, Philadelphia, PA, 19104, USA
- Jayaram K Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard Building, 3710 Hamilton Walk, Philadelphia, PA, 19104, USA
- Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard Building, 3710 Hamilton Walk, Philadelphia, PA, 19104, USA
- Dewey Odhner
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard Building, 3710 Hamilton Walk, Philadelphia, PA, 19104, USA
- Hanqiang Cao
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Drew A Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard Building, 3710 Hamilton Walk, Philadelphia, PA, 19104, USA; Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, PA, 19104, USA

24
Zhao X, Xie P, Wang M, Li W, Pickhardt PJ, Xia W, Xiong F, Zhang R, Xie Y, Jian J, Bai H, Ni C, Gu J, Yu T, Tang Y, Gao X, Meng X. Deep learning-based fully automated detection and segmentation of lymph nodes on multiparametric-MRI for rectal cancer: A multicentre study. EBioMedicine 2020; 56:102780. [PMID: 32512507 PMCID: PMC7276514 DOI: 10.1016/j.ebiom.2020.102780] [Citation(s) in RCA: 44] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2019] [Revised: 04/09/2020] [Accepted: 04/21/2020] [Indexed: 12/16/2022] Open
Abstract
BACKGROUND Accurate lymph node (LN) assessment is important for rectal cancer (RC) staging on multiparametric magnetic resonance imaging (mpMRI). However, it is extremely time-consuming to identify all the LNs in the scan region. This study aims to develop and validate a deep-learning-based, fully automated lymph node detection and segmentation (auto-LNDS) model based on mpMRI. METHODS In total, 5789 annotated LNs (diameter ≥ 3 mm) on mpMRI from 293 patients with RC at a single center were enrolled. Fused T2-weighted images (T2WI) and diffusion-weighted images (DWI) provided input for the deep learning framework Mask R-CNN through transfer learning to generate the auto-LNDS model. The model was then validated on internal and external datasets consisting of 935 LNs and 1198 LNs, respectively. Detection performance was evaluated using sensitivity, positive predictive value (PPV), and false positive rate per case (FP/vol), and segmentation performance was evaluated using the Dice similarity coefficient (DSC). FINDINGS For LN detection, auto-LNDS achieved sensitivity, PPV, and FP/vol of 80.0%, 73.5%, and 8.6 in internal testing, and 62.6%, 64.5%, and 8.2 in external testing, respectively, significantly better than the performance of junior radiologists. The time taken for model detection and segmentation was 1.3 s/case, compared with 200 s/case for the radiologists. For LN segmentation, the DSC of the model was in the range of 0.81-0.82. INTERPRETATION This deep-learning-based auto-LNDS model can detect and segment pelvic LNs effectively on mpMRI for RC and holds great potential for facilitating N-staging in clinical practice.
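The detection metrics used above can be sketched as follows; the counts in the example are toy values, not the study's data:

```python
def detection_metrics(tp, fp, fn, n_cases):
    """Sensitivity = TP/(TP+FN), PPV = TP/(TP+FP), FP/vol = FP per case."""
    sensitivity = tp / (tp + fn)
    ppv = tp / (tp + fp)
    fp_per_case = fp / n_cases
    return sensitivity, ppv, fp_per_case

# Toy counts: 80 true positives, 29 false positives, 20 missed nodes, 10 cases.
sens, ppv, fpvol = detection_metrics(tp=80, fp=29, fn=20, n_cases=10)
print(round(sens, 2), round(ppv, 2), round(fpvol, 1))  # 0.8 0.73 2.9
```

Note that sensitivity and PPV are counted over individual nodes, while FP/vol is normalized per scanned case, which is why the three numbers in the abstract have different scales.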
Affiliation(s)
- Xingyu Zhao, Mengmeng Wang, Junming Jian, Honglin Bai: University of Science and Technology of China, No. 96 Jinzhai Road, Hefei, Anhui 230026, China; Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, No. 88 Keling Road, Suzhou, Jiangsu 215163, China
- Peiyi Xie, Wenru Li, Fei Xiong, Yao Xie, Xiaochun Meng: Department of Radiology, The Sixth Affiliated Hospital of Sun Yat-sen University, No. 26 Yuancunerheng Road, Guangzhou, Guangdong 510655, China
- Perry J Pickhardt: Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, USA
- Wei Xia, Rui Zhang, Yuguo Tang, Xin Gao: Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, No. 88 Keling Road, Suzhou, Jiangsu 215163, China
- Caifang Ni: The First Affiliated Hospital of Soochow University, No. 899 Pinghai Road, Suzhou, Jiangsu 215006, China
- Jinhui Gu: Chinese Academy of Traditional Chinese Medicine, No. 16 Inner South Street, Dongzhimen, Beijing 100700, China; Guiyang College of Traditional Chinese Medicine, No. 50 Shi Dong Road, Guiyang, Guizhou 550002, China; The People's Hospital of Suzhou National Hi-Tech District, 215129, China
- Tao Yu: Beijing Hospital General Surgery Department, National Center of Gerontology, No. 1 Donghua Dahua Road, Beijing 100730, China
|
25
|
Peng T, Wang Y, Xu TC, Shi L, Jiang J, Zhu S. Detection of Lung Contour with Closed Principal Curve and Machine Learning. J Digit Imaging 2018; 31:520-533. [PMID: 29450843] [DOI: 10.1007/s10278-018-0058-y]
Abstract
Radiation therapy plays an essential role in the treatment of cancer. In radiation therapy, the ideal radiation doses are delivered to the observed tumor while sparing neighboring normal tissues. In three-dimensional computed tomography (3D-CT) scans, the contours of tumors and organs-at-risk (OARs) are often manually delineated by radiologists. The task is complicated and time-consuming, and manual delineations vary between radiologists. We propose a semi-supervised contour detection algorithm, which first uses a few points of the region of interest (ROI) as an approximate initialization. Data sequences, consisting of the ordered projection indexes and the corresponding initial points, are then obtained by the closed polygonal line (CPL) algorithm. Finally, a smooth lung contour is obtained by training the data sequences with a backpropagation neural network model (BNNM). We measure the accuracy of the presented method on a private clinical dataset and on the public Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset. On the private dataset, using initial points as few as 15% of the manually delineated points, the Dice coefficient reaches 0.95 and the global error is as low as 1.47 × 10⁻². The proposed algorithm also outperforms the cubic spline interpolation (CSI) algorithm. On the public LIDC-IDRI dataset, our method achieves superior segmentation performance with an average Dice of 0.83.
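The Dice coefficient reported here (and in most entries of this list) measures the overlap between a predicted and a reference region; a minimal sketch over pixel coordinate sets (a simplification of how segmentation toolkits compute it on binary masks):

```python
def dice(pred, ref):
    """Dice similarity coefficient between two pixel sets.

    pred and ref are sets of (row, col) coordinates covered by each contour;
    DSC = 2|A ∩ B| / (|A| + |B|): 1.0 for perfect overlap, 0.0 for none.
    """
    if not pred and not ref:
        return 1.0  # both empty: treat as perfect agreement
    return 2 * len(pred & ref) / (len(pred) + len(ref))
```

A DSC of 0.95, as reported for the private dataset, thus means the automatic and manual contours agree on the vast majority of pixels.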
Affiliation(s)
- Tao Peng, Yihuai Wang, Thomas Canhao Xu, Lianmin Shi, Jianwu Jiang, Shilang Zhu: School of Computer Science & Technology, Soochow University, No. 1 Shizi Road, Suzhou, Jiangsu 215006, China
|
26
|
Oda H, Bhatia KK, Oda M, Kitasaka T, Iwano S, Homma H, Takabatake H, Mori M, Natori H, Schnabel JA, Mori K. Automated mediastinal lymph node detection from CT volumes based on intensity targeted radial structure tensor analysis. J Med Imaging (Bellingham) 2017; 4:044502. [PMID: 29152534] [PMCID: PMC5683200] [DOI: 10.1117/1.jmi.4.4.044502]
Abstract
This paper presents a local intensity structure analysis method based on an intensity targeted radial structure tensor (ITRST), and a blob-like structure enhancement filter built on it (the ITRST filter), for mediastinal lymph node detection in chest computed tomography (CT) volumes. Although a filter based on conventional radial structure tensor (RST) analysis can be used to detect lymph nodes, it misses nodes adjacent to regions with extremely high or low intensities. We therefore propose the ITRST filter, which integrates prior knowledge of the target intensity range into the RST filter. Our lymph node detection algorithm consists of two steps: (1) obtaining candidate regions using the ITRST filter and (2) removing false positives (FPs) using a support vector machine classifier. We evaluated the lymph node detection performance of the ITRST filter on 47 contrast-enhanced chest CT volumes and compared it with the RST and Hessian filters. The detection rate of the ITRST filter was 84.2% with 9.1 FPs/volume for lymph nodes with a short axis of at least 10 mm, outperforming both the RST and Hessian filters.
Affiliation(s)
- Hirohisa Oda: Nagoya University, Graduate School of Information Science, Furo-cho, Chikusa-ku, Nagoya, Japan
- Kanwal K. Bhatia, Julia A. Schnabel: King’s College London, Division of Imaging Sciences and Biomedical Engineering, St. Thomas’ Hospital, London, United Kingdom
- Masahiro Oda, Kensaku Mori: Nagoya University, Graduate School of Informatics, Furo-cho, Chikusa-ku, Nagoya, Japan
- Takayuki Kitasaka: Aichi Institute of Technology, School of Information Science, Yakusa-cho, Toyota, Japan
- Shingo Iwano: Nagoya University Graduate School of Medicine, Showa-ku, Nagoya, Japan
- Masaki Mori: Sapporo-Kosei General Hospital, Chuo-ku, Sapporo, Japan
|
27
|
Naziroglu RE, Puylaert CAJ, Tielbeek JAW, Makanyanga J, Menys A, Ponsioen CY, Hatzakis H, Taylor SA, Stoker J, van Vliet LJ, Vos FM. Semi-automatic bowel wall thickness measurements on MR enterography in patients with Crohn's disease. Br J Radiol 2017; 90:20160654. [PMID: 28401775] [DOI: 10.1259/bjr.20160654]
Abstract
OBJECTIVE To evaluate a semi-automatic method for delineation of the bowel wall and measurement of wall thickness in patients with Crohn's disease. METHODS 53 patients with suspected or proven Crohn's disease were selected. Two radiologists independently supervised the delineation of regions with active Crohn's disease on MRI, yielding manual annotations (Ano1, Ano2). Three observers manually measured the maximal bowel wall thickness of each annotated segment. An active contour segmentation approach semi-automatically delineated the bowel wall. For each active region, two segmentations (Seg1, Seg2) were obtained by independent observers, in which the maximum wall thickness was automatically determined. The overlap between Seg1 and Seg2 was compared with the overlap between Ano1 and Ano2 using Wilcoxon's signed rank test. The corresponding variances were compared using the Brown-Forsythe test. The variance of the semi-automatic thickness measurements was compared with the overall variance of the manual measurements through an F-test. Furthermore, the intraclass correlation coefficient (ICC) of the semi-automatic thickness measurements was compared with the ICC of the manual measurements through a likelihood-ratio test. RESULTS Patient demographics: median age, 30 years; interquartile range, 25-38 years; 33 females. The median overlap of the semi-automatic segmentations (Seg1 vs Seg2: 0.89) was significantly larger than the median overlap of the manual annotations (Ano1 vs Ano2: 0.72); p = 1.4 × 10⁻⁵. The variance in overlap of the semi-automatic segmentations was significantly smaller than that of the manual annotations (p = 1.1 × 10⁻⁹). The variance of the semi-automated measurements (0.46 mm²) was significantly smaller than the variance of the manual measurements (2.90 mm², p = 1.1 × 10⁻⁷). The ICC of semi-automatic measurement (0.88) was significantly higher than the ICC of manual measurement (0.45); p = 0.005. CONCLUSION The semi-automatic technique facilitates reproducible delineation of regions with active Crohn's disease, and the semi-automatic thickness measurement yields significantly improved interobserver agreement. Advances in knowledge: Automation of bowel wall thickness measurements strongly increases the reproducibility of these measurements, which are commonly used in MRI scoring systems of Crohn's disease activity.
Affiliation(s)
- Robiel E Naziroglu, Lucas J van Vliet: Department of Imaging Physics, Delft University of Technology, Delft, Netherlands
- Carl A J Puylaert, Jeroen A W Tielbeek, Cyriel Y Ponsioen, Jaap Stoker: Department of Radiology, Academic Medical Center, University of Amsterdam, Amsterdam, Netherlands
- Alex Menys, Stuart A Taylor: Center for Medical Imaging, University College London, London, UK
- Frans M Vos: Department of Imaging Physics, Delft University of Technology, Delft, Netherlands; Department of Radiology, Academic Medical Center, University of Amsterdam, Amsterdam, Netherlands
|
28
|
Three Aspects on Using Convolutional Neural Networks for Computer-Aided Detection in Medical Imaging. In: Deep Learning and Convolutional Neural Networks for Medical Image Computing. 2017. [DOI: 10.1007/978-3-319-42999-1_8]
|
29
|
Liu J, Hoffman J, Zhao J, Yao J, Lu L, Kim L, Turkbey EB, Summers RM. Mediastinal lymph node detection and station mapping on chest CT using spatial priors and random forest. Med Phys 2016; 43:4362. [PMID: 27370151] [PMCID: PMC4920813] [DOI: 10.1118/1.4954009]
Abstract
PURPOSE To develop an automated system for mediastinal lymph node detection and station mapping on chest CT. METHODS The contextual organs (trachea, lungs, and spine) are first automatically identified to locate the region of interest (ROI), the mediastinum. The authors employ shape features derived from Hessian analysis, local object scale, and circular transformation, computed per voxel in the ROI. Eight more anatomical structures are simultaneously segmented by multi-atlas label fusion. Spatial priors are defined as the relative multidimensional distance vectors corresponding to each structure. Intensity, shape, and spatial prior features are integrated and parsed by a random forest classifier for lymph node detection. The detected candidates are then segmented by a curve evolution process. Texture features are computed on the segmented lymph nodes, and a support vector machine committee is used for final classification. For lymph node station labeling, based on the segmentation of the above anatomical structures, the textual definitions of the mediastinal lymph node map of the International Association for the Study of Lung Cancer are converted into a patient-specific color-coded CT image, from which the station can be automatically assigned for each detected node. RESULTS Chest CT volumes from 70 patients with 316 enlarged mediastinal lymph nodes were used for validation. For lymph node detection, the system achieves 88% sensitivity at eight false positives per patient. For station labeling, 84.5% of lymph nodes are correctly assigned to their stations. CONCLUSIONS Multiple-channel shape, intensity, and spatial prior features aggregated by a random forest classifier improve mediastinal lymph node detection on chest CT. Using the locations of the anatomic structures segmented by the multi-atlas formulation enables accurate identification of lymph node stations.
Affiliation(s)
- Jiamin Liu, Joanne Hoffman, Jocelyn Zhao, Jianhua Yao, Le Lu, Lauren Kim, Evrim B Turkbey, Ronald M Summers: Imaging Biomarkers and Computer-aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Building 10, Room 1C224, MSC 1182, Bethesda, Maryland 20892-1182
|
30
|
Mahapatra D, Vos FM, Buhmann JM. Active learning based segmentation of Crohn's disease from abdominal MRI. Comput Methods Programs Biomed 2016; 128:75-85. [PMID: 27040833] [DOI: 10.1016/j.cmpb.2016.01.014]
Abstract
This paper proposes a novel active learning (AL) framework and combines it with semi-supervised learning (SSL) for segmenting Crohn's disease (CD) tissues from abdominal magnetic resonance (MR) images. Robust fully supervised learning (FSL) based classifiers require large amounts of labeled data covering different disease severities. Obtaining such data is time-consuming and requires considerable expertise. SSL methods use a few labeled samples and leverage the information from many unlabeled samples to train an accurate classifier. AL queries labels for the most informative samples and maximizes the gain from the labeling effort. Our primary contribution is a query strategy that combines novel context information with classification uncertainty and feature similarity. Combining SSL and AL gives a robust segmentation method that: (1) optimally uses few labeled samples and many unlabeled samples; and (2) requires lower training time. Experimental results show our method achieves higher segmentation accuracy than FSL methods with fewer samples and reduced training effort.
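The query strategy described above combines classification uncertainty with feature similarity; the following is a generic uncertainty-plus-diversity ranking sketch of that idea. The weights, function names, and scoring form are illustrative assumptions, not the authors' exact formulation, which additionally uses context information:

```python
def query_scores(unlabeled, classify, similarity, labeled, w_unc=0.7, w_div=0.3):
    """Rank unlabeled samples for annotation (most informative first).

    classify(x) returns the probability of the positive class; uncertainty
    peaks at p = 0.5. similarity(a, b) lies in [0, 1]; the diversity term
    favors samples far from everything already labeled.
    """
    scores = {}
    for x in unlabeled:
        p = classify(x)
        uncertainty = 1.0 - abs(p - 0.5) * 2.0   # 1 at p=0.5, 0 at p=0 or 1
        diversity = 1.0 - max((similarity(x, l) for l in labeled), default=0.0)
        scores[x] = w_unc * uncertainty + w_div * diversity
    return sorted(unlabeled, key=scores.get, reverse=True)
```

In each AL round, the top-ranked samples are sent to the expert, labeled, and moved into the training set before the classifier is retrained.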
Affiliation(s)
- Franciscus M Vos: Department of Radiology, Academic Medical Center, The Netherlands; Quantitative Imaging Group, Delft University of Technology, The Netherlands
|
31
|
Roth HR, Lu L, Liu J, Yao J, Seff A, Cherry K, Kim L, Summers RM. Improving Computer-Aided Detection Using Convolutional Neural Networks and Random View Aggregation. IEEE Trans Med Imaging 2016; 35:1170-81. [PMID: 26441412] [PMCID: PMC7340334] [DOI: 10.1109/tmi.2015.2482920]
Abstract
Automated computer-aided detection (CADe) has been an important tool in clinical practice and research. State-of-the-art methods often show high sensitivities at the cost of high false-positive (FP) rates per patient. We design a two-tiered coarse-to-fine cascade framework that first operates a candidate generation system at sensitivities of ∼100% but at high FP levels. By leveraging existing CADe systems, coordinates of regions or volumes of interest (ROI or VOI) are generated and serve as input for a second tier, which is our focus in this study. In this second stage, we generate 2D (two-dimensional) or 2.5D views via sampling through scale transformations, random translations, and rotations. These random views are used to train deep convolutional neural network (ConvNet) classifiers. In testing, the ConvNets assign class (e.g., lesion, pathology) probabilities for a new set of random views, which are then averaged to compute a final per-candidate classification probability. This second tier behaves as a highly selective process that rejects difficult false positives while preserving high sensitivities. The methods are evaluated on three data sets: 59 patients for sclerotic metastasis detection, 176 patients for lymph node detection, and 1,186 patients for colonic polyp detection. Experimental results show the ability of ConvNets to generalize well to different medical imaging CADe applications and scale elegantly to various data sets. Our proposed methods improve performance markedly in all cases: sensitivities improved from 57% to 70%, 43% to 77%, and 58% to 75% at 3 FPs per patient for sclerotic metastases, lymph nodes, and colonic polyps, respectively.
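The second-tier aggregation step can be sketched as follows: sample several randomly transformed views per candidate, score each with the view-level classifier, and average the probabilities. Names and the thresholding step are illustrative; the trained ConvNet and the random scale/translation/rotation sampler are stubbed out as callables:

```python
def classify_candidates(candidates, predict_view, n_views, sample_view,
                        threshold=0.5):
    """Second-tier candidate classification by random view aggregation.

    For each first-tier candidate, draw n_views randomly transformed 2D/2.5D
    views with sample_view, score each with predict_view (probability that the
    view shows a true lesion), and average; candidates whose mean probability
    falls below the threshold are rejected as false positives.
    """
    kept = []
    for cand in candidates:
        probs = [predict_view(sample_view(cand)) for _ in range(n_views)]
        if sum(probs) / n_views >= threshold:
            kept.append(cand)
    return kept
```

Averaging over many random views makes the per-candidate score robust to any single unlucky view, which is what lets this stage reject difficult false positives without sacrificing sensitivity.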
|
32
|
Shin HC, Roth HR, Gao M, Lu L, Xu Z, Nogues I, Yao J, Mollura D, Summers RM. Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning. IEEE Trans Med Imaging 2016; 35:1285-98. [PMID: 26886976] [PMCID: PMC4890616] [DOI: 10.1109/tmi.2016.2528162]
Abstract
Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully apply CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained on natural image datasets for medical image tasks. In this paper, we examine three important but previously understudied factors in applying deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters and vary in number of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve state-of-the-art performance on mediastinal LN detection, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high performance CAD systems for other medical imaging tasks.
Affiliation(s)
- Hoo-Chang Shin, Holger R. Roth: Imaging Biomarkers and Computer-Aided Diagnosis Laboratory
- Le Lu, Jianhua Yao, Ronald M. Summers: Imaging Biomarkers and Computer-Aided Diagnosis Laboratory; National Institutes of Health Clinical Center, Clinical Image Processing Service, Radiology and Imaging Sciences Department, Bethesda, MD 20892-1182, USA
- Ziyue Xu: Center for Infectious Disease Imaging
|
33
|
Suárez-Mejías C, Pérez-Carrasco JA, Serrano C, López-Guerra JL, Parra-Calderón C, Gómez-Cía T, Acha B. Three-dimensional segmentation of retroperitoneal masses using continuous convex relaxation and accumulated gradient distance for radiotherapy planning. Med Biol Eng Comput 2016; 55:1-15. [PMID: 27099157] [DOI: 10.1007/s11517-016-1505-x]
Abstract
An innovative algorithm has been developed for the segmentation of retroperitoneal tumors in 3D radiological images. This algorithm enables radiation oncologists and surgeons to semiautomatically select tumors for possible future radiation treatment and surgery. It is based on continuous convex relaxation methodology, the main novelty being the introduction of an accumulated gradient distance, which incorporates intensity and gradient information into the segmentation process. The algorithm was used to segment 26 CT image volumes, and the results were compared with manual contouring of the same tumors. The proposed algorithm achieved 90% sensitivity, 100% specificity and 84% positive predictive value, with a mean distance to the closest point of 3.20 pixels. The algorithm's dependence on the initial manual contour was also analyzed; the results show that the algorithm substantially reduced the variability of manual segmentations carried out by different specialists. The algorithm was also compared with four benchmark algorithms: thresholding, edge-based level set, region-based level set, and continuous max-flow with two labels. To the best of our knowledge, this is the first time the segmentation of retroperitoneal tumors for radiotherapy planning has been addressed.
Affiliation(s)
- Cristina Suárez-Mejías: Technological Innovation Group, Virgen del Rocío University Hospital, Seville, Spain; Signal Theory and Communications Department, University of Seville, Seville, Spain
- Carmen Serrano, Begoña Acha: Signal Theory and Communications Department, University of Seville, Seville, Spain
- Carlos Parra-Calderón: Technological Innovation Group, Virgen del Rocío University Hospital, Seville, Spain
- Tomás Gómez-Cía: Surgery Unit, Virgen del Rocío University Hospital, Seville, Spain
|
34
|
Abstract
OBJECTIVE Automated analysis of abdominal CT has advanced markedly over just the last few years. Fully automated assessment of organs, lymph nodes, adipose tissue, muscle, bowel, spine, and tumors is among the areas where tremendous progress has been made. Computer-aided detection of lesions has also improved dramatically. CONCLUSION This article reviews that progress and provides insights into what is in store in the near future for automated analysis of abdominal CT, ultimately leading to fully automated interpretation.
|
35
|
Oberkampf H, Zillner S, Overton JA, Bauer B, Cavallaro A, Uder M, Hammon M. Semantic representation of reported measurements in radiology. BMC Med Inform Decis Mak 2016; 16:5. [PMID: 26801764] [PMCID: PMC4722630] [DOI: 10.1186/s12911-016-0248-9]
Abstract
Background In radiology, a vast amount of diverse data is generated, and unstructured reporting is standard. Hence, much useful information is trapped in free-text form and often lost in translation and transmission. One relevant source of free-text data consists of reports covering the assessment of changes in tumor burden, which are needed for the evaluation of cancer treatment success. Any change of lesion size is a critical factor in follow-up examinations. It is difficult to retrieve specific information from unstructured reports and to compare them over time. Therefore, a prototype was implemented that demonstrates the structured representation of findings, allowing selective review in consecutive examinations and thus more efficient comparison over time. Methods We developed a semantic Model for Clinical Information (MCI) based on existing ontologies from the Open Biological and Biomedical Ontologies (OBO) library. MCI is used for the integrated representation of measured image findings and medical knowledge about the normal size of anatomical entities. An integrated view of the radiology findings is realized by a prototype implementation of a ReportViewer. Further, RECIST (Response Evaluation Criteria In Solid Tumors) guidelines are implemented by SPARQL queries on MCI. The evaluation is based on two data sets of German radiology reports: an oncologic data set consisting of 2584 reports on 377 lymphoma patients and a mixed data set consisting of 6007 reports on diverse medical and surgical patients. All measurement findings were automatically classified as abnormal/normal using formalized medical background knowledge, i.e., knowledge that has been encoded into an ontology. A radiologist evaluated 813 classifications as correct or incorrect; all unclassified findings were counted as incorrect. Results The proposed approach allows the automatic classification of findings with an accuracy of 96.4% for oncologic reports and 92.9% for mixed reports. The ReportViewer permits efficient comparison of measured findings from consecutive examinations. The implementation of RECIST guidelines with SPARQL enhances the quality of the selection and comparison of target lesions as well as the corresponding treatment response evaluation. Conclusions The developed MCI enables an accurate integrated representation of reported measurements and medical knowledge. Thus, measurements can be automatically classified and integrated in different decision processes. The structured representation is suitable for improved integration of clinical findings during decision-making. The proposed ReportViewer provides a longitudinal overview of the measurements.
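The RECIST logic that the prototype expresses as SPARQL queries over MCI reduces, for target lesions, to threshold rules on the sum of lesion diameters. A standalone sketch of those rules using the RECIST 1.1 thresholds (the ontology and SPARQL machinery are not reproduced here; the function name is ours):

```python
def recist_response(baseline_sum, nadir_sum, current_sum):
    """Classify target-lesion response from sums of lesion diameters (mm).

    CR: all target lesions disappeared; PD: >=20% increase over the nadir
    AND an absolute increase >=5 mm (PD takes precedence); PR: >=30%
    decrease from baseline; otherwise SD.
    """
    if current_sum == 0:
        return "CR"  # complete response
    growth = current_sum - nadir_sum
    if growth >= 0.2 * nadir_sum and growth >= 5:
        return "PD"  # progressive disease
    if baseline_sum - current_sum >= 0.3 * baseline_sum:
        return "PR"  # partial response
    return "SD"      # stable disease
```

Automating exactly this kind of rule is what structured measurement representation enables: once each finding is a typed measurement rather than free text, the response category follows mechanically from consecutive examinations.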
Collapse
Affiliation(s)
- Heiner Oberkampf
- Department of Computer Science, Software Methodologies for Distributed Systems, University of Augsburg, Universitätsstraße 6a, 86159, Augsburg, Germany; Corporate Technology, Siemens AG, Otto-Hahn-Ring 6, 81739, München, Germany.
- Sonja Zillner
- Corporate Technology, Siemens AG, Otto-Hahn-Ring 6, 81739, München, Germany; School of International Business and Entrepreneurship, Steinbeis University, Kalkofenstraße 53, 71083, Herrenberg, Germany.
- Bernhard Bauer
- Department of Computer Science, Software Methodologies for Distributed Systems, University of Augsburg, Universitätsstraße 6a, 86159, Augsburg, Germany.
- Alexander Cavallaro
- Department of Radiology, University Hospital Erlangen, Maximiliansplatz 1, 91054, Erlangen, Germany.
- Michael Uder
- Department of Radiology, University Hospital Erlangen, Maximiliansplatz 1, 91054, Erlangen, Germany.
- Matthias Hammon
- Department of Radiology, University Hospital Erlangen, Maximiliansplatz 1, 91054, Erlangen, Germany.
36
Shi Y, Gao Y, Liao S, Zhang D, Gao Y, Shen D. A Learning-Based CT Prostate Segmentation Method via Joint Transductive Feature Selection and Regression. Neurocomputing 2016; 173:317-331. [PMID: 26752809 PMCID: PMC4704800 DOI: 10.1016/j.neucom.2014.11.098] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
In recent years, there has been great interest in prostate segmentation, which is an important and challenging task for CT image-guided radiotherapy. In this paper, a learning-based segmentation method via joint transductive feature selection and transductive regression is presented, which incorporates the physician's simple manual specification (taking only a few seconds) to aid accurate segmentation, especially in cases with large irregular prostate motion. More specifically, for the current treatment image, an experienced physician first manually assigns labels to a small subset of prostate and non-prostate voxels, especially in the first and last slices of the prostate region. The proposed method then follows two steps: in the prostate-likelihood estimation step, two novel algorithms, tLasso and wLapRLS, are sequentially employed for transductive feature selection and transductive regression, respectively, to generate the prostate-likelihood map; in the multi-atlas-based label fusion step, the final segmentation result is obtained from the prostate-likelihood map and the previous images of the same patient. The proposed method has been extensively evaluated on a real prostate CT dataset of 24 patients with 330 CT images and compared with several state-of-the-art methods. Experimental results show that the proposed method outperforms the state of the art in terms of higher Dice ratio, higher true positive fraction, and lower centroid distance. The results also demonstrate that simple manual specification can help improve segmentation performance, which is clinically feasible in real practice.
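The evaluation metrics named in this abstract, Dice ratio and true positive fraction, have simple set-based definitions. A minimal sketch over voxel coordinate sets (helper names are hypothetical):

```python
def dice_ratio(seg_a, seg_b):
    """Dice similarity: 2*|A ∩ B| / (|A| + |B|) over voxel index sets."""
    if not seg_a and not seg_b:
        return 1.0  # both empty: perfect agreement by convention
    inter = len(seg_a & seg_b)
    return 2.0 * inter / (len(seg_a) + len(seg_b))

def true_positive_fraction(pred, truth):
    """Fraction of ground-truth voxels covered by the prediction."""
    if not truth:
        return 1.0
    return len(pred & truth) / len(truth)
```

With `pred = {(0,0),(0,1),(1,0)}` and `truth = {(0,1),(1,0),(1,1)}`, both metrics come out to 2/3.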
Collapse
Affiliation(s)
- Yinghuan Shi
- State Key Laboratory for Novel Software Technology, Nanjing University, China; Department of Radiology and BRIC, UNC Chapel Hill, USA
- Yaozong Gao
- Department of Radiology and BRIC, UNC Chapel Hill, USA
- Shu Liao
- Department of Radiology and BRIC, UNC Chapel Hill, USA
- Yang Gao
- State Key Laboratory for Novel Software Technology, Nanjing University, China
- Dinggang Shen
- Department of Radiology and BRIC, UNC Chapel Hill, USA
37
Automatic Lymph Node Cluster Segmentation Using Holistically-Nested Neural Networks and Structured Optimization in CT Images. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION – MICCAI 2016 2016. [DOI: 10.1007/978-3-319-46723-8_45] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/05/2022]
38

39
Seff A, Lu L, Barbu A, Roth H, Shin HC, Summers RM. Leveraging Mid-Level Semantic Boundary Cues for Automated Lymph Node Detection. LECTURE NOTES IN COMPUTER SCIENCE 2015. [DOI: 10.1007/978-3-319-24571-3_7] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/15/2023]
40
Bria A, Karssemeijer N, Tortorella F. Learning from unbalanced data: A cascade-based approach for detecting clustered microcalcifications. Med Image Anal 2014; 18:241-52. [DOI: 10.1016/j.media.2013.10.014] [Citation(s) in RCA: 57] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2013] [Revised: 10/18/2013] [Accepted: 10/31/2013] [Indexed: 11/29/2022]
41
Seff A, Lu L, Cherry KM, Roth HR, Liu J, Wang S, Hoffman J, Turkbey EB, Summers RM. 2D view aggregation for lymph node detection using a shallow hierarchy of linear classifiers. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2014; 17:544-52. [PMID: 25333161 PMCID: PMC4350911 DOI: 10.1007/978-3-319-10404-1_68] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
Enlarged lymph nodes (LNs) can provide important information for cancer diagnosis, staging, and measuring treatment response, making automated detection a highly sought goal. In this paper, we propose a new representation that decomposes the LN detection problem into a set of 2D object detection subtasks on sampled CT slices, largely alleviating the curse of dimensionality. Our 2D detection can be effectively formulated as linear classification on a single image feature type, Histogram of Oriented Gradients (HOG), covering a moderate field of view of 45 by 45 voxels. We exploit both max-pooling and sparse linear fusion schemes to aggregate these 2D detection scores for the final 3D LN detection. In this manner, detection is more tractable and does not need to perform perfectly at the instance level (the 2D detections act as weak hypotheses), since our aggregation process robustly harnesses collective information. Two datasets (90 patients with 389 mediastinal LNs and 86 patients with 595 abdominal LNs) are used for validation. Cross-validation demonstrates 78.0% sensitivity at 6 false positives/volume (FP/vol.) (86.1% at 10 FP/vol.) and 73.1% sensitivity at 6 FP/vol. (87.2% at 10 FP/vol.) for the mediastinal and abdominal datasets, respectively. Our results compare favorably to previous state-of-the-art methods.
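The two aggregation schemes named in the abstract, max-pooling and sparse linear fusion of per-slice 2D detection scores, can be sketched as follows (function names and the order-statistics weighting are assumptions, not the paper's exact formulation):

```python
def aggregate_max_pool(slice_scores):
    """Max-pool per-slice 2D detection scores into one 3D candidate score."""
    return max(slice_scores)

def aggregate_linear(slice_scores, weights):
    """Linear fusion: weight the sorted slice scores (a learned sparse
    weight vector would zero out most entries)."""
    ranked = sorted(slice_scores, reverse=True)  # order statistics of views
    return sum(w * s for w, s in zip(weights, ranked))
```

Sorting before weighting makes the fusion invariant to which slice produced which score, matching the set-based view of the 2D detections as weak hypotheses.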
42
Roth HR, Lu L, Seff A, Cherry KM, Hoffman J, Wang S, Liu J, Turkbey E, Summers RM. A new 2.5D representation for lymph node detection using random sets of deep convolutional neural network observations. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2014; 17:520-7. [PMID: 25333158 PMCID: PMC4295635 DOI: 10.1007/978-3-319-10404-1_65] [Citation(s) in RCA: 154] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/05/2022]
Abstract
Automated lymph node (LN) detection is an important clinical diagnostic task but very challenging due to the low contrast between LNs and surrounding structures in computed tomography (CT) and to their varying sizes, poses, shapes, and sparsely distributed locations. State-of-the-art studies report 52.9% sensitivity at 3.1 false positives per volume (FP/vol.), or 60.9% at 6.1 FP/vol., for mediastinal LNs using one-shot boosting on 3D Haar features. In this paper, we first run a preliminary candidate generation stage, targeting ~100% sensitivity at the cost of a high FP level (~40 per patient), to harvest volumes of interest (VOIs). Our 2.5D approach then decomposes each 3D VOI by resampling 2D reformatted orthogonal views N times, via random scalings, translations, and rotations with respect to the VOI centroid coordinates. These random views are used to train a deep convolutional neural network (CNN) classifier. At test time, the CNN assigns LN probabilities to all N random views, which are simply averaged (as a set) to compute the final classification probability per VOI. We validate the approach on two datasets: 90 CT volumes with 388 mediastinal LNs and 86 patients with 595 abdominal LNs. We achieve sensitivities of 70%/83% at 3 FP/vol. and 84%/90% at 6 FP/vol. in the mediastinum and abdomen, respectively, which drastically improves over the previous state of the art.
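The per-VOI averaging of CNN view probabilities, and the kind of random view parameters drawn for 2.5D resampling, might look like the following (parameter ranges and names are illustrative guesses, not the paper's settings):

```python
import random

def voi_probability(view_probs):
    """Final LN probability for a VOI: mean of the per-view CNN outputs."""
    return sum(view_probs) / len(view_probs)

def sample_view_params(n, max_shift_vox=3.0, seed=0):
    """Draw N random (scale, translation, rotation) tuples for resampling
    2D reformatted views around a VOI centroid."""
    rng = random.Random(seed)  # seeded for reproducibility
    return [(rng.uniform(0.8, 1.2),                                # scale
             tuple(rng.uniform(-max_shift_vox, max_shift_vox)
                   for _ in range(3)),                             # shift (vox)
             rng.uniform(0.0, 360.0))                              # rotation (deg)
            for _ in range(n)]
```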
Affiliation(s)
- Holger R. Roth
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD 20892-1182, USA
- Le Lu
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD 20892-1182, USA
- Ari Seff
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD 20892-1182, USA
- Kevin M. Cherry
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD 20892-1182, USA
- Joanne Hoffman
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD 20892-1182, USA
- Shijun Wang
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD 20892-1182, USA
- Jiamin Liu
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD 20892-1182, USA
- Evrim Turkbey
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD 20892-1182, USA
- Ronald M. Summers
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD 20892-1182, USA
43
Steger S, Bozoglu N, Kuijper A, Wesarg S. Application of radial ray based segmentation to cervical lymph nodes in CT images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2013; 32:888-900. [PMID: 23362249 DOI: 10.1109/tmi.2013.2242901] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
The 3D segmentation of lymph nodes in computed tomography images is required for staging and disease progression monitoring. Major challenges are shape and size variance, as well as low contrast, image noise, and pathologies. In this paper, radial ray based segmentation is applied to lymph nodes. From a seed point, rays are cast in all directions, and an optimization technique determines a radius for each ray based on image appearance and shape knowledge. Lymph node specific appearance cost functions are introduced and their optimal parameters determined. For the first time, the segmentation accuracy of different appearance cost functions and optimization strategies is compared. Further contributions are extensions that reduce the dependency on the seed point, support a larger variety of shapes, and enable interaction. The best results are obtained using graph cut on a combination of the direction-weighted image gradient and the accumulated intensities outside a predefined intensity range. Evaluation on 100 lymph nodes shows that, with an average symmetric surface distance of 0.41 mm, the segmentation accuracy is close to manual segmentation and outperforms existing radial ray and model based methods. The method's inter-observer variability of 5.9% for volume assessment is lower than the 15.9% obtained with manual segmentation.
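A minimal radial-ray sketch on a synthetic 2D image, using a crude intensity-jump criterion in place of the optimized appearance cost functions described in the abstract (all names and thresholds here are hypothetical):

```python
import math

def radial_ray_radii(image, seed, n_rays=8, max_r=20, edge_jump=50):
    """Cast rays from a seed point; a ray stops one step before the first
    position where intensity drops by more than `edge_jump`, a crude
    stand-in for minimizing an appearance cost along each ray."""
    sy, sx = seed
    radii = []
    for k in range(n_rays):
        theta = 2.0 * math.pi * k / n_rays
        prev = image[sy][sx]
        radius = max_r
        for r in range(1, max_r + 1):
            y = int(round(sy + r * math.sin(theta)))
            x = int(round(sx + r * math.cos(theta)))
            if not (0 <= y < len(image) and 0 <= x < len(image[0])):
                radius = r - 1  # ray left the image
                break
            if prev - image[y][x] > edge_jump:  # strong outward edge
                radius = r - 1  # boundary lies between r-1 and r
                break
            prev = image[y][x]
        radii.append(radius)
    return radii
```

On a synthetic disk of radius 5 (intensity 100 on a 0 background), the four axis-aligned rays each recover a radius of 5.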
44
Criminisi A, Robertson D, Konukoglu E, Shotton J, Pathak S, White S, Siddiqui K. Regression forests for efficient anatomy detection and localization in computed tomography scans. Med Image Anal 2013; 17:1293-303. [PMID: 23410511 DOI: 10.1016/j.media.2013.01.001] [Citation(s) in RCA: 118] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2011] [Revised: 01/08/2013] [Accepted: 01/09/2013] [Indexed: 11/24/2022]
Abstract
This paper proposes a new algorithm for the efficient, automatic detection and localization of multiple anatomical structures within three-dimensional computed tomography (CT) scans. Applications include selective retrieval of patients' images from PACS systems, semantic visual navigation, and tracking radiation dose over time. The main contribution of this work is a new, continuous parametrization of the anatomy localization problem, which allows it to be addressed effectively by multi-class random regression forests. Regression forests are similar to the more popular classification forests but are trained to predict continuous, multi-variate outputs, with training focused on maximizing the confidence of the output predictions. A single pass of our probabilistic algorithm enables the direct mapping from voxels to organ location and size. Quantitative validation is performed on a database of 400 highly variable CT scans. We show that the proposed method is more accurate and robust than techniques based on efficient multi-atlas registration and template-based nearest-neighbor detection. Due to the simplicity of the regressor's context-rich visual features and the algorithm's parallelism, these results are achieved in typical run-times of only ∼4 s on a conventional single-core machine.
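The regression-forest idea of mapping voxels directly to organ location can be caricatured as offset voting: each voxel predicts a displacement to the organ centre and the votes are pooled. A plain mean stands in for the forest's confidence-weighted prediction here, and all names are hypothetical:

```python
def localize_organ(voxel_positions, predicted_offsets):
    """Aggregate per-voxel regression votes: each voxel adds its
    predicted offset to its own position, and the resulting centre
    votes are averaged into a single estimate."""
    votes = [tuple(p + o for p, o in zip(pos, off))
             for pos, off in zip(voxel_positions, predicted_offsets)]
    n = len(votes)
    return tuple(sum(v[d] for v in votes) / n for d in range(3))
```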