1
Mittal S, Tong A, Young S, Jha P. Artificial intelligence applications in endometriosis imaging. Abdom Radiol (NY) 2025. [PMID: 40167644] [DOI: 10.1007/s00261-025-04897-w]
Abstract
Artificial intelligence (AI) has the potential to address existing diagnostic challenges in endometriosis imaging. To better direct future research, this descriptive review summarizes the general landscape of AI applications in endometriosis imaging. Articles from PubMed were selected to represent different approaches to AI applications in endometriosis imaging. Current endometriosis imaging literature focuses on AI applications in ultrasound (US) and magnetic resonance imaging (MRI). Most studies use US data, with MRI studies being limited at present. The majority of US studies employ transvaginal ultrasound (TVUS) data and aim to detect deep endometriosis implants, adenomyosis, endometriomas, and secondary signs of endometriosis. Most MRI studies evaluate endometriosis disease diagnosis and segmentation. Some studies analyze multi-modal methods for endometriosis imaging, combining US and MRI data or using imaging data in combination with clinical data. Current literature lacks generalizability and standardization. Most studies in this review utilize small sample sizes with retrospective approaches and single-center data. Existing models focus only on narrow disease detection or diagnosis questions and lack standardized ground truth. Overall, AI applications in endometriosis imaging analysis are in their early stages, and continued research is essential to develop and enhance these models.
Affiliation(s)
- Sneha Mittal
- University of Tennessee Health Science Center, Memphis, USA
2
Tang H, Huang Z, Li W, Wu Y, Yuan J, Yang Y, Zhang Y, Qin J, Zheng H, Liang D, Wang M, Hu Z. Automatic Brain Segmentation for PET/MR Dual-Modal Images Through a Cross-Fusion Mechanism. IEEE J Biomed Health Inform 2025; 29:1982-1994. [PMID: 40030515] [DOI: 10.1109/jbhi.2024.3516012]
Abstract
The precise segmentation of different brain regions and tissues is usually a prerequisite for the detection and diagnosis of various neurological disorders in neuroscience. Considering the abundance of functional and structural dual-modality information for positron emission tomography/magnetic resonance (PET/MR) images, we propose a novel 3D whole-brain segmentation network with a cross-fusion mechanism introduced to obtain 45 brain regions. Specifically, the network processes PET and MR images simultaneously, employing UX-Net and a cross-fusion block for feature extraction and fusion in the encoder. We test our method by comparing it with other deep learning-based methods, including 3DUXNET, SwinUNETR, UNETR, nnFormer, UNet3D, NestedUNet, ResUNet, and VNet. The experimental results demonstrate that the proposed method achieves better segmentation performance in terms of both visual and quantitative evaluation metrics and achieves more precise segmentation in three views while preserving fine details. In particular, the proposed method achieves superior quantitative results, with a Dice coefficient of 85.73% ± 0.01%, a Jaccard index of 76.68% ± 0.02%, a sensitivity of 85.00% ± 0.01%, a precision of 83.26% ± 0.03% and a Hausdorff distance (HD) of 4.4885 ± 14.85%. Moreover, the distribution and correlation of the SUV in the volume of interest (VOI) are also evaluated (PCC > 0.9), indicating consistency with the ground truth and the superiority of the proposed method. In future work, we will utilize our whole-brain segmentation method in clinical practice to assist doctors in accurately diagnosing and treating brain diseases.
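For readers unfamiliar with cross-fusion, the sketch below illustrates the general idea only: a minimal PyTorch block, not the authors' UX-Net-based architecture, in which each modality branch is re-weighted by a gate computed from both modalities. The channel count and gating design are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of one way to cross-fuse PET and MR
# feature maps inside a dual-branch 3D encoder.
import torch
import torch.nn as nn

class CrossFusionBlock(nn.Module):
    """Exchanges information between two modality branches: each branch is
    augmented by the other branch's features, scaled by a learned gate."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate_pet = nn.Sequential(
            nn.Conv3d(2 * channels, channels, kernel_size=1), nn.Sigmoid())
        self.gate_mr = nn.Sequential(
            nn.Conv3d(2 * channels, channels, kernel_size=1), nn.Sigmoid())

    def forward(self, f_pet, f_mr):
        joint = torch.cat([f_pet, f_mr], dim=1)          # (B, 2C, D, H, W)
        pet_out = f_pet + self.gate_pet(joint) * f_mr    # MR-informed PET branch
        mr_out = f_mr + self.gate_mr(joint) * f_pet      # PET-informed MR branch
        return pet_out, mr_out

# Example: fuse 32-channel feature maps from one encoder stage.
block = CrossFusionBlock(32)
f_pet = torch.randn(1, 32, 16, 16, 16)
f_mr = torch.randn(1, 32, 16, 16, 16)
f_pet, f_mr = block(f_pet, f_mr)
```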
3
Chen Z, Chambara N, Lo X, Liu SYW, Gunda ST, Han X, Ying MTC. Improving the diagnostic strategy for thyroid nodules: a combination of artificial intelligence-based computer-aided diagnosis system and shear wave elastography. Endocrine 2025; 87:744-757. [PMID: 39375254] [PMCID: PMC11811255] [DOI: 10.1007/s12020-024-04053-2]
Abstract
PURPOSE Thyroid nodules are highly prevalent in the general population, posing a clinical challenge in accurately distinguishing between benign and malignant cases. This study aimed to investigate the diagnostic performance of different strategies, utilizing a combination of a computer-aided diagnosis system (AmCAD) and shear wave elastography (SWE) imaging, to effectively differentiate benign and malignant thyroid nodules in ultrasonography. METHODS A total of 126 thyroid nodules with pathological confirmation were prospectively included in this study. The AmCAD was utilized to analyze the ultrasound imaging characteristics of the nodules, while the SWE was employed to measure their stiffness in both transverse and longitudinal thyroid scans. Twelve diagnostic patterns were formed by combining AmCAD diagnosis and SWE values, including isolation, series, parallel, and integration. The diagnostic performance was assessed using the receiver operating characteristic curve and area under the curve (AUC). Sensitivity, specificity, accuracy, missed malignancy rate, and unnecessary biopsy rate were also determined. RESULTS Various diagnostic schemes have shown specific advantages in terms of diagnostic performance. Overall, integrating AmCAD with SWE imaging in the transverse scan yielded the most favorable diagnostic performance, achieving an AUC of 72.2% (95% confidence interval (CI): 63.0-81.5%), outperforming other diagnostic schemes. Furthermore, in the subgroup analysis of nodules measuring <2 cm or 2-4 cm, the integrated scheme consistently exhibited promising diagnostic performance, with AUCs of 74.2% (95% CI: 61.9-86.4%) and 77.4% (95% CI: 59.4-95.3%) respectively, surpassing other diagnostic schemes. The integrated scheme also effectively addressed thyroid nodule management by reducing the missed malignancy rate to 9.5% and unnecessary biopsy rate to 22.2%. CONCLUSION The integration of AmCAD and SWE imaging in the transverse thyroid scan significantly enhances the diagnostic performance for distinguishing benign and malignant thyroid nodules. This strategy offers clinicians the advantage of obtaining more accurate clinical diagnoses and making well-informed decisions regarding patient management.
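As a minimal illustration of the isolation, series, parallel, and integration patterns named above, the sketch below combines a binary CAD call with a shear wave elastography reading. The 40 kPa cut-off and equal fusion weights are hypothetical values for illustration, not figures from the study.

```python
# Illustrative sketch (not the study's code) of four rules for combining an
# AI CAD diagnosis with an SWE stiffness measurement.
import numpy as np

def combine(cad_positive, swe_kpa, swe_threshold=40.0, mode="integration"):
    """Combine a binary CAD call with an SWE value (kPa); threshold is hypothetical."""
    swe_positive = swe_kpa >= swe_threshold
    if mode == "isolation_cad":
        return cad_positive                 # CAD alone
    if mode == "series":                    # positive only if BOTH are positive
        return cad_positive & swe_positive  # tends to raise specificity
    if mode == "parallel":                  # positive if EITHER is positive
        return cad_positive | swe_positive  # tends to raise sensitivity
    if mode == "integration":               # simple equal-weight fusion score
        score = 0.5 * cad_positive + 0.5 * swe_positive
        return score >= 0.5
    raise ValueError(mode)

cad = np.array([True, False, True])
swe = np.array([55.0, 30.0, 20.0])
print(combine(cad, swe, mode="series"))     # -> [ True False False]
```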
Affiliation(s)
- Ziman Chen
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong.
- Xina Lo
- Department of Surgery, North District Hospital, Sheung Shui, New Territories, Hong Kong
- Shirley Yuk Wah Liu
- Department of Surgery, The Chinese University of Hong Kong, Prince of Wales Hospital, Shatin, New Territories, Hong Kong
- Simon Takadiyi Gunda
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Xinyang Han
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Michael Tin Cheung Ying
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong.
4
Sharma R, Salman S, Gu Q, Freeman WD. Advancing Neurocritical Care with Artificial Intelligence and Machine Learning: The Promise, Practicalities, and Pitfalls ahead. Neurol Clin 2025; 43:153-165. [PMID: 39547739] [DOI: 10.1016/j.ncl.2024.08.003]
Abstract
Expansion of artificial intelligence (AI) in the field of medicine is changing the paradigm of clinical practice at a rapid pace. Incorporation of AI in medicine offers new tools as well as challenges, and physicians and learners need to adapt to assimilate AI into practice and education. AI can expedite early diagnosis and intervention with real-time multimodal monitoring. AI assistants can decrease the clerical burden of health care, improving workforce productivity while mitigating burnout. There are still no regulatory parameters for the use of AI, and a regulatory framework is needed for the implementation of AI systems in medicine to ensure transparency, accountability, and equitable access.
Affiliation(s)
- Rohan Sharma
- Department of Neurological Surgery, Neurology and Critical Care, Mayo Clinic, 4500 San Pablo Road S, Jacksonville, FL 32256, USA
- Saif Salman
- Department of Neurological Surgery, Neurology and Critical Care, Mayo Clinic, 4500 San Pablo Road S, Jacksonville, FL 32256, USA
- Qiangqiang Gu
- Department of Neurological Surgery, Neurology and Critical Care, Mayo Clinic, 4500 San Pablo Road S, Jacksonville, FL 32256, USA
- William D Freeman
- Department of Neurological Surgery, Neurology and Critical Care, Mayo Clinic, 4500 San Pablo Road S, Jacksonville, FL 32256, USA.
5
Li X, Zhao L, Zhang L, Wu Z, Liu Z, Jiang H, Cao C, Xu S, Li Y, Dai H, Yuan Y, Liu J, Li G, Zhu D, Yan P, Li Q, Liu W, Liu T, Shen D. Artificial General Intelligence for Medical Imaging Analysis. IEEE Rev Biomed Eng 2025; 18:113-129. [PMID: 39509310] [DOI: 10.1109/rbme.2024.3493775]
Abstract
Large-scale Artificial General Intelligence (AGI) models, including Large Language Models (LLMs) such as ChatGPT/GPT-4, have achieved unprecedented success in a variety of general domain tasks. Yet, when applied directly to specialized domains like medical imaging, which require in-depth expertise, these models face notable challenges arising from the medical field's inherent complexities and unique characteristics. In this review, we delve into the potential applications of AGI models in medical imaging and healthcare, with a primary focus on LLMs, Large Vision Models, and Large Multimodal Models. We provide a thorough overview of the key features and enabling techniques of LLMs and AGI, and further examine the roadmaps guiding the evolution and implementation of AGI models in the medical sector, summarizing their present applications, potentialities, and associated challenges. In addition, we highlight potential future research directions, offering a holistic view on upcoming ventures. This comprehensive review aims to offer insights into the future implications of AGI in medical imaging, healthcare, and beyond.
6
Huang X, Zhu Y, Shao M, Xia M, Shen X, Wang P, Wang X. Dual-branch Transformer for semi-supervised medical image segmentation. J Appl Clin Med Phys 2024; 25:e14483. [PMID: 39133901] [PMCID: PMC11466465] [DOI: 10.1002/acm2.14483]
Abstract
PURPOSE In recent years, the use of deep learning for medical image segmentation has become a popular trend, but its development also faces some challenges. Firstly, due to the specialized nature of medical data, precise annotation is time-consuming and labor-intensive. Training neural networks effectively with limited labeled data is a significant challenge in medical image analysis. Secondly, convolutional neural networks commonly used for medical image segmentation research often focus on local features in images. However, the recognition of complex anatomical structures or irregular lesions often requires the assistance of both local and global information, which has led to a bottleneck in its development. Addressing these two issues, in this paper, we propose a novel network architecture. METHODS We integrate a shift window mechanism to learn more comprehensive semantic information and employ a semi-supervised learning strategy by incorporating a flexible amount of unlabeled data. Specifically, a typical U-shaped encoder-decoder structure is applied to obtain rich feature maps. Each encoder is designed as a dual-branch structure, containing Swin modules equipped with windows of different sizes to capture features at multiple scales. To effectively utilize unlabeled data, a level set function is introduced to establish consistency between the function regression and pixel classification. RESULTS We conducted experiments on the COVID-19 CT dataset and DRIVE dataset and compared our approach with various semi-supervised and fully supervised learning models. On the COVID-19 CT dataset, we achieved a segmentation accuracy of up to 74.56%. Our segmentation accuracy on the DRIVE dataset was 79.79%. CONCLUSIONS The results demonstrate the outstanding performance of our method on several commonly used evaluation metrics. The high segmentation accuracy of our model demonstrates that utilizing Swin modules with different window sizes can enhance the feature extraction capability of the model, and the level set function can enable semi-supervised models to more effectively utilize unlabeled data. This provides meaningful insights for the application of deep learning in medical image segmentation. Our code will be released once the manuscript is accepted for publication.
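A minimal sketch of the level-set idea mentioned above, assuming the common signed-distance-map formulation: the ground-truth mask is converted to a signed distance map, and a consistency term ties a regression head to the pixel-classification head. This illustrates the technique class, not the paper's code.

```python
# Hedged sketch of level-set consistency for semi-supervised segmentation.
import numpy as np
from scipy.ndimage import distance_transform_edt
import torch
import torch.nn.functional as F

def signed_distance_map(mask: np.ndarray) -> np.ndarray:
    """Negative inside the object, positive outside, zero on the boundary."""
    if mask.sum() == 0:
        return distance_transform_edt(1 - mask)
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(1 - mask)
    return outside - inside

def level_set_consistency(sdm_pred: torch.Tensor, seg_logits: torch.Tensor):
    """Encourage agreement: a sharp, differentiable step of the predicted
    signed distance map should match the predicted foreground probability."""
    seg_from_sdm = torch.sigmoid(-1500.0 * sdm_pred)  # approx. Heaviside step
    return F.mse_loss(seg_from_sdm, torch.sigmoid(seg_logits))

# Toy usage on a 2D mask and random predictions:
mask = np.zeros((8, 8), dtype=np.uint8); mask[2:6, 2:6] = 1
target_sdm = torch.from_numpy(signed_distance_map(mask)).float()
loss = level_set_consistency(target_sdm + 0.1 * torch.randn(8, 8),
                             torch.randn(8, 8))
```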
Affiliation(s)
- Xiaojie Huang
- The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Yating Zhu
- Zhejiang University of Technology, Hangzhou, China
- Ming Xia
- Zhejiang University of Technology, Hangzhou, China
- Xiaoting Shen
- Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Hangzhou, China
- Pingli Wang
- The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
7
Yamada A, Hanaoka S, Takenaga T, Miki S, Yoshikawa T, Nomura Y. Investigation of distributed learning for automated lesion detection in head MR images. Radiol Phys Technol 2024; 17:725-738. [PMID: 39048847] [PMCID: PMC11341643] [DOI: 10.1007/s12194-024-00827-5]
Abstract
In this study, we investigated the application of distributed learning, including federated learning and cyclical weight transfer, in the development of computer-aided detection (CADe) software for (1) cerebral aneurysm detection in magnetic resonance (MR) angiography images and (2) brain metastasis detection in brain contrast-enhanced MR images. We used datasets collected from various institutions, scanner vendors, and magnetic field strengths for each target CADe software. We compared the performance of multiple strategies, including a centralized strategy, in which software development is conducted at a development institution after collecting de-identified data from multiple institutions. Our results showed that the performance of CADe software trained through distributed learning was equal to or better than that trained through the centralized strategy. However, the distributed learning strategy that achieved the highest performance depended on the target CADe software. Hence, distributed learning can become one of the strategies for CADe software development using data collected from multiple institutions.
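The sketch below illustrates cyclical weight transfer, one of the distributed strategies compared above: model weights, never images, move between institutions. The toy model, data, and training schedule are assumptions for illustration only.

```python
# Hedged sketch of cyclical weight transfer across institutions.
import torch
import torch.nn as nn

def train_one_epoch(model, loader, lr=1e-3):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

def cyclical_weight_transfer(model, site_loaders, cycles=3, epochs_per_site=2):
    """Weights cycle through sites in turn; raw data never leaves a site."""
    for _ in range(cycles):
        for loader in site_loaders:        # hand the weights to the next site
            for _ in range(epochs_per_site):
                train_one_epoch(model, loader)
    return model

# Toy usage with two "institutions" holding private data:
model = nn.Linear(10, 1)
site_a = [(torch.randn(8, 10), torch.rand(8, 1).round()) for _ in range(4)]
site_b = [(torch.randn(8, 10), torch.rand(8, 1).round()) for _ in range(4)]
model = cyclical_weight_transfer(model, [site_a, site_b])
```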
Affiliation(s)
- Aiki Yamada
- Department of Medical Engineering, Graduate School of Science and Engineering, Chiba University, 1-33 Yayoi-Cho, Inage-Ku, Chiba, 263-8522, Japan.
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan.
- Shouhei Hanaoka
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Tomomi Takenaga
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Soichiro Miki
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Takeharu Yoshikawa
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Yukihiro Nomura
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Center for Frontier Medical Engineering, Chiba University, 1-33 Yayoi-Cho, Inage-Ku, Chiba, 263-8522, Japan
8
Nomura Y, Hanaoka S, Hayashi N, Yoshikawa T, Koshino S, Sato C, Tatsuta M, Tanaka Y, Kano S, Nakaya M, Inui S, Kusakabe M, Nakao T, Miki S, Watadani T, Nakaoka R, Shimizu A, Abe O. Performance changes due to differences among annotating radiologists for training data in computerized lesion detection. Int J Comput Assist Radiol Surg 2024; 19:1527-1536. [PMID: 38625446] [PMCID: PMC11329536] [DOI: 10.1007/s11548-024-03136-9]
Abstract
PURPOSE The quality and bias of annotations by annotators (e.g., radiologists) affect the performance of computer-aided detection (CAD) software trained using machine learning. We hypothesized that the difference in the years of experience in image interpretation among radiologists contributes to annotation variability. In this study, we focused on how the performance of CAD software changes with retraining by incorporating cases annotated by radiologists with varying experience. METHODS We used two types of CAD software for lung nodule detection in chest computed tomography images and cerebral aneurysm detection in magnetic resonance angiography images. Twelve radiologists with different years of experience independently annotated the lesions, and the performance changes were investigated by repeating the retraining of the CAD software twice, with the addition of cases annotated by each radiologist. Additionally, we investigated the effects of retraining using integrated annotations from multiple radiologists. RESULTS The performance of the CAD software after retraining differed among annotating radiologists. In some cases, the performance was degraded compared to that of the initial software. Retraining using integrated annotations showed different performance trends depending on the target CAD software, notably in cerebral aneurysm detection, where the performance decreased compared to using annotations from a single radiologist. CONCLUSIONS Although the performance of the CAD software after retraining varied among the annotating radiologists, no direct correlation with their experience was found. The performance trends differed according to the type of CAD software used when integrated annotations from multiple radiologists were used.
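One simple way to form an "integrated annotation" like the one examined above is a voxel-wise majority vote over several radiologists' lesion masks; the sketch below shows that rule only as an illustration, since the study's actual integration procedure may differ.

```python
# Hedged sketch: voxel-wise majority vote over multiple annotators' masks.
import numpy as np

def majority_vote(masks, threshold=0.5):
    """Label a voxel as lesion if more than `threshold` of annotators marked it."""
    stacked = np.stack([m.astype(float) for m in masks], axis=0)
    return (stacked.mean(axis=0) > threshold).astype(np.uint8)

# Three annotators, toy 2D masks:
r1 = np.array([[1, 1, 0], [0, 0, 0]])
r2 = np.array([[1, 0, 0], [0, 1, 0]])
r3 = np.array([[1, 1, 0], [0, 0, 0]])
print(majority_vote([r1, r2, r3]))  # -> [[1 1 0]
                                    #     [0 0 0]]
```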
Affiliation(s)
- Yukihiro Nomura
- Center for Frontier Medical Engineering, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba, 263-8522, Japan.
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, Tokyo, Japan.
- Shouhei Hanaoka
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Division of Radiology and Biomedical Engineering, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Naoto Hayashi
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, Tokyo, Japan
- Takeharu Yoshikawa
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, Tokyo, Japan
- Saori Koshino
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Chiaki Sato
- Department of Radiology, Tokyo Metropolitan Bokutoh Hospital, Tokyo, Japan
- Momoko Tatsuta
- Department of Diagnostic Radiology, Kitasato University Hospital, Sagamihara, Kanagawa, Japan
- Yuya Tanaka
- Division of Radiology and Biomedical Engineering, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Shintaro Kano
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Moto Nakaya
- Division of Radiology and Biomedical Engineering, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Shohei Inui
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Takahiro Nakao
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, Tokyo, Japan
- Soichiro Miki
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, Tokyo, Japan
- Takeyuki Watadani
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Division of Radiology and Biomedical Engineering, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Ryusuke Nakaoka
- Division of Medical Devices, National Institute of Health Sciences, Kawasaki, Kanagawa, Japan
- Akinobu Shimizu
- Institute of Engineering, Tokyo University of Agriculture and Technology, Tokyo, Japan
- Osamu Abe
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Division of Radiology and Biomedical Engineering, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
9
Szymaszek P, Tyszka-Czochara M, Ortyl J. Application of Photoactive Compounds in Cancer Theranostics: Review on Recent Trends from Photoactive Chemistry to Artificial Intelligence. Molecules 2024; 29:3164. [PMID: 38999115] [PMCID: PMC11243723] [DOI: 10.3390/molecules29133164]
Abstract
According to the World Health Organization (WHO) and the International Agency for Research on Cancer (IARC), the number of cancer cases and deaths worldwide is predicted to nearly double by 2030, reaching 21.7 million cases and 13 million fatalities. The increase in cancer mortality is due to limitations in the diagnosis and treatment options that are currently available. The close relationship between diagnostics and medicine has made it possible for cancer patients to receive precise diagnoses and individualized care. This article discusses newly developed compounds with potential for photodynamic therapy and diagnostic applications, as well as those already in use. In addition, it discusses the use of artificial intelligence in the analysis of diagnostic images obtained using, among other things, theranostic agents.
Affiliation(s)
- Patryk Szymaszek
- Department of Biotechnology and Physical Chemistry, Faculty of Chemical Engineering and Technology, Cracow University of Technology, Warszawska 24, 31-155 Kraków, Poland
- Joanna Ortyl
- Department of Biotechnology and Physical Chemistry, Faculty of Chemical Engineering and Technology, Cracow University of Technology, Warszawska 24, 31-155 Kraków, Poland
- Photo HiTech Ltd., Bobrzyńskiego 14, 30-348 Kraków, Poland
- Photo4Chem Ltd., Juliusza Lea 114/416A-B, 31-133 Cracow, Poland
10
Liu X, Qu L, Xie Z, Zhao J, Shi Y, Song Z. Towards more precise automatic analysis: a systematic review of deep learning-based multi-organ segmentation. Biomed Eng Online 2024; 23:52. [PMID: 38851691] [PMCID: PMC11162022] [DOI: 10.1186/s12938-024-01238-8]
Abstract
Accurate segmentation of multiple organs in the head, neck, chest, and abdomen from medical images is an essential step in computer-aided diagnosis, surgical navigation, and radiation therapy. In the past few years, with a data-driven feature extraction approach and end-to-end training, automatic deep learning-based multi-organ segmentation methods have far outperformed traditional methods and become a new research topic. This review systematically summarizes the latest research in this field. We searched Google Scholar for papers published from January 1, 2016 to December 31, 2023, using keywords "multi-organ segmentation" and "deep learning", resulting in 327 papers. We followed the PRISMA guidelines for paper selection, and 195 studies were deemed to be within the scope of this review. We summarized the two main aspects involved in multi-organ segmentation: datasets and methods. Regarding datasets, we provided an overview of existing public datasets and conducted an in-depth analysis. Concerning methods, we categorized existing approaches into three major classes: fully supervised, weakly supervised and semi-supervised, based on whether they require complete label information. We summarized the achievements of these methods in terms of segmentation accuracy. In the discussion and conclusion section, we outlined and summarized the current trends in multi-organ segmentation.
Affiliation(s)
- Xiaoyu Liu
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
- Linhao Qu
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
- Ziyue Xie
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
- Jiayue Zhao
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
- Yonghong Shi
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China.
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China.
- Zhijian Song
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China.
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China.
11
Sharma N, Gupta S, Gupta D, Gupta P, Juneja S, Shah A, Shaikh A. UMobileNetV2 model for semantic segmentation of gastrointestinal tract in MRI scans. PLoS One 2024; 19:e0302880. [PMID: 38718092] [PMCID: PMC11078421] [DOI: 10.1371/journal.pone.0302880]
Abstract
Gastrointestinal (GI) cancer is the leading tumour of the gastrointestinal tract and the fourth most significant cause of tumour death in men and women. A common treatment for GI cancer is radiation therapy, which involves directing a high-energy X-ray beam onto the tumour while avoiding healthy organs. Delivering high doses of X-rays requires a system for accurately segmenting the GI tract organs. The study presents a UMobileNetV2 model for semantic segmentation of the small and large intestine and stomach in MRI images of the GI tract. The model uses MobileNetV2 as an encoder in the contraction path and UNet layers as a decoder in the expansion path. The UW-Madison database, which contains MRI scans from 85 patients and 38,496 images, is used for evaluation. This automated technology has the capability to enhance the pace of cancer therapy by aiding the radiation oncologist in segmenting the organs of the GI tract. The UMobileNetV2 model is compared to three transfer learning models: Xception, ResNet 101, and NASNet mobile, which are used as encoders in the UNet architecture. The model is analyzed using three distinct optimizers, i.e., Adam, RMS, and SGD. The UMobileNetV2 model with the Adam optimizer outperforms all other transfer learning models. It obtains a dice coefficient of 0.8984, an IoU of 0.8697, and a validation loss of 0.1310, proving its ability to reliably segment the stomach and intestines in MRI images of gastrointestinal cancer patients.
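The dice coefficient and IoU reported above are standard overlap metrics; the sketch below computes both from binary masks using their textbook definitions (not the authors' code).

```python
# Standard Dice and IoU on binary segmentation masks.
import numpy as np

def dice_and_iou(pred, gt, eps=1e-7):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)
    iou = (inter + eps) / (union + eps)
    return dice, iou

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt = np.array([[1, 1, 0], [0, 0, 1]])
print(dice_and_iou(pred, gt))  # Dice = 2*2/(3+3) ~= 0.667, IoU = 2/4 = 0.5
```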
Affiliation(s)
- Neha Sharma
- Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India
- Sheifali Gupta
- Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India
- Deepali Gupta
- Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India
- Punit Gupta
- University College Dublin, Dublin, Ireland
- Manipal University Jaipur, Jaipur, India
- Sapna Juneja
- International Islamic University, Kuala Lumpur, Malaysia
- Asadullah Shah
- International Islamic University, Kuala Lumpur, Malaysia
12
Wang D, Huai B, Ma X, Jin B, Wang Y, Chen M, Sang J, Liu R. Application of artificial intelligence-assisted image diagnosis software based on volume data reconstruction technique in medical imaging practice teaching. BMC Med Educ 2024; 24:405. [PMID: 38605345] [PMCID: PMC11010354] [DOI: 10.1186/s12909-024-05382-6]
Abstract
BACKGROUND In medical imaging courses, due to the complexity of anatomical relationships and the limited number of practical course hours and instructors, improving the teaching quality of practical skills and self-directed learning ability has always been a challenge for higher medical education. Artificial intelligence-assisted diagnostic (AISD) software based on volume data reconstruction (VDR) technique is gradually entering radiology. It converts two-dimensional images into three-dimensional images, and AI can assist in image diagnosis. However, the application of artificial intelligence in medical education is still in its early stages. The purpose of this study is to explore the application value of AISD software based on VDR technique in medical imaging practical teaching, and to provide a basis for improving medical imaging practical teaching. METHODS In total, 41 students majoring in clinical medicine (class of 2017) were enrolled as the experiment group. AISD software based on VDR was used in practical teaching of medical imaging to display 3D images and mark lesions; annotations were then provided and diagnostic suggestions were given. In addition, 43 students majoring in clinical medicine (class of 2016) were chosen as the control group and were taught with conventional film and multimedia teaching methods. The exam results and evaluation scales were compared statistically between groups. RESULTS The total skill scores of the experiment group were significantly higher compared with the control group (84.51 ± 3.81 vs. 80.67 ± 5.43). The scores of computed tomography (CT) diagnosis (49.93 ± 3.59 vs. 46.60 ± 4.89) and magnetic resonance (MR) diagnosis (17.41 ± 1.00 vs. 16.93 ± 1.14) of the experiment group were both significantly higher. The scores of academic self-efficacy (82.17 ± 4.67) and self-directed learning ability (235.56 ± 13.50) of the experiment group were significantly higher compared with the control group (78.93 ± 6.29, 226.35 ± 13.90). CONCLUSIONS Applying AISD software based on VDR to medical imaging practice teaching can enable students to timely obtain AI-annotated lesion information and 3D images, which may help improve their image reading skills and enhance their academic self-efficacy and self-directed learning abilities.
Affiliation(s)
- DongXu Wang
- Department of Medical Imaging, Second Affiliated Hospital of Qiqihar Medical University, 37 West Zhonghua Road, Qiqihar, Heilongjiang, 161006, China.
- BingCheng Huai
- Department of Medical Imaging, Second Affiliated Hospital of Qiqihar Medical University, 37 West Zhonghua Road, Qiqihar, Heilongjiang, 161006, China
- Xing Ma
- Center for Higher Education Research and Teaching Quality Evaluation, Harbin Medical University, Harbin, Heilongjiang, 150000, China
- BaiMing Jin
- School of Public Health, Qiqihar Medical University, 333 BuKui North Street, Qiqihar, Heilongjiang, 161006, China
- YuGuang Wang
- Department of Medical Imaging, Second Affiliated Hospital of Qiqihar Medical University, 37 West Zhonghua Road, Qiqihar, Heilongjiang, 161006, China
- MengYu Chen
- Academic Affairs Section, Second Affiliated Hospital of Qiqihar Medical University, 37 West Zhonghua Road, Qiqihar, Heilongjiang, 161006, China
- JunZhi Sang
- Department of Medical Imaging, Second Affiliated Hospital of Qiqihar Medical University, 37 West Zhonghua Road, Qiqihar, Heilongjiang, 161006, China
- RuiNan Liu
- Department of Medical Imaging, Second Affiliated Hospital of Qiqihar Medical University, 37 West Zhonghua Road, Qiqihar, Heilongjiang, 161006, China
13
Jeon SK, Joo I, Park J, Kim JM, Park SJ, Yoon SH. Fully-automated multi-organ segmentation tool applicable to both non-contrast and post-contrast abdominal CT: deep learning algorithm developed using dual-energy CT images. Sci Rep 2024; 14:4378. [PMID: 38388824] [PMCID: PMC10883917] [DOI: 10.1038/s41598-024-55137-y]
Abstract
A novel 3D nnU-Net-based algorithm was developed for fully-automated multi-organ segmentation in abdominal CT, applicable to both non-contrast and post-contrast images. The algorithm was trained using dual-energy CT (DECT)-obtained portal venous phase (PVP) and spatiotemporally-matched virtual non-contrast images, and tested using a single-energy (SE) CT dataset comprising PVP and true non-contrast (TNC) images. The algorithm showed robust accuracy in segmenting the liver, spleen, right kidney (RK), and left kidney (LK), with mean dice similarity coefficients (DSCs) exceeding 0.94 for each organ, regardless of contrast enhancement. However, pancreas segmentation demonstrated slightly lower performance with mean DSCs of around 0.8. In organ volume estimation, the algorithm demonstrated excellent agreement with ground-truth measurements for the liver, spleen, RK, and LK (intraclass correlation coefficients [ICCs] > 0.95), while the pancreas showed good agreement (ICC = 0.792 in SE-PVP, 0.840 in TNC). Accurate volume estimation within a 10% deviation from ground-truth was achieved in over 90% of cases involving the liver, spleen, RK, and LK. These findings indicate the efficacy of our 3D nnU-Net-based algorithm, developed using DECT images, which provides precise segmentation of the liver, spleen, RK, and LK in both non-contrast and post-contrast CT images, enabling reliable organ volumetry, albeit with relatively reduced performance for the pancreas.
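The volumetry step implied above reduces to counting labelled voxels and scaling by the voxel volume; the sketch below shows this together with the ±10% deviation check. The voxel spacing is an illustrative assumption, not a value from the study.

```python
# Organ volumetry from a segmentation mask plus a deviation check.
import numpy as np

def organ_volume_ml(mask, spacing_mm=(0.7, 0.7, 3.0)):
    """Volume in millilitres = voxel count x voxel volume (mm^3 -> mL)."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0

def within_deviation(pred_ml, gt_ml, tol=0.10):
    """True if the predicted volume is within tol (e.g., 10%) of ground truth."""
    return abs(pred_ml - gt_ml) <= tol * gt_ml

liver_mask = np.ones((200, 200, 40), dtype=np.uint8)   # toy segmentation
vol = organ_volume_ml(liver_mask)                      # ~= 2352 mL
print(vol, within_deviation(vol, 2300.0))              # -> 2352.0 True
```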
Affiliation(s)
- Sun Kyung Jeon
- Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Korea
- Department of Radiology, Seoul National University College of Medicine, Seoul, Korea
- Ijin Joo
- Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Korea.
- Department of Radiology, Seoul National University College of Medicine, Seoul, Korea.
- Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul National University Hospital, Seoul, Korea.
- Junghoan Park
- Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Korea
- Department of Radiology, Seoul National University College of Medicine, Seoul, Korea
- Soon Ho Yoon
- Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Korea
- Department of Radiology, Seoul National University College of Medicine, Seoul, Korea
- MEDICALIP Co., Ltd., Seoul, Korea
14
Selle M, Kircher M, Schwennen C, Visscher C, Jung K. Dimension reduction and outlier detection of 3-D shapes derived from multi-organ CT images. BMC Med Inform Decis Mak 2024; 24:49. [PMID: 38355504] [PMCID: PMC10865689] [DOI: 10.1186/s12911-024-02457-8]
Abstract
BACKGROUND Unsupervised clustering and outlier detection are important in medical research to understand the distributional composition of a collective of patients. A number of clustering methods exist, also for high-dimensional data after dimension reduction. Clustering and outlier detection may, however, become less robust or contradictory if multiple high-dimensional data sets per patient exist. Such a scenario is given when the focus is on 3-D data of multiple organs per patient, and a high-dimensional feature matrix per organ is extracted. METHODS We use principal component analysis (PCA), t-distributed stochastic neighbor embedding (t-SNE) and multiple co-inertia analysis (MCIA) combined with bagplots to study the distribution of multi-organ 3-D data taken by computed tomography scans. After point-set registration of multiple organs from two public data sets, multiple hundred shape features are extracted per organ. While PCA and t-SNE can only be applied to each organ individually, MCIA can project the data of all organs into the same low-dimensional space. RESULTS MCIA is the only approach, here, with which data of all organs can be projected into the same low-dimensional space. We studied how frequently (i.e., by how many organs) a patient was classified as belonging to the inner or outer 50% of the population, or as an outlier. Outliers could only be detected with MCIA and PCA. MCIA and t-SNE were more robust in judging the distributional location of a patient in contrast to PCA. CONCLUSIONS MCIA is more appropriate and robust in judging the distributional location of a patient in the case of multiple high-dimensional data sets per patient. It is still advisable to apply PCA or t-SNE in parallel with MCIA to study the location of individual organs.
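As a rough illustration of the per-organ analysis, the sketch below projects a toy shape-feature matrix with PCA and t-SNE and flags crude outliers from the PCA scores. Bagplots and MCIA need dedicated packages and are omitted; the percentile-distance outlier rule here is a simplification of the paper's bagplot approach.

```python
# Hedged sketch: per-organ dimension reduction and a crude outlier rule.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 300))      # 60 patients x 300 shape features (toy)

scores = PCA(n_components=2).fit_transform(X)
embed = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(X)

# Flag patients whose distance from the median exceeds the 95th percentile.
d = np.linalg.norm(scores - np.median(scores, axis=0), axis=1)
outliers = np.where(d > np.percentile(d, 95))[0]
print(outliers)                     # indices of flagged patients
```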
Affiliation(s)
- Michael Selle
- Institute of Animal Genomics, University of Veterinary Medicine Hannover, Hannover, Germany.
- Magdalena Kircher
- Institute of Animal Genomics, University of Veterinary Medicine Hannover, Hannover, Germany
- Cornelia Schwennen
- Institute for Animal Nutrition, University of Veterinary Medicine Hannover, Hannover, Germany
- Christian Visscher
- Institute for Animal Nutrition, University of Veterinary Medicine Hannover, Hannover, Germany
- Klaus Jung
- Institute of Animal Genomics, University of Veterinary Medicine Hannover, Hannover, Germany.
15
Jiao R, Zhang Y, Ding L, Xue B, Zhang J, Cai R, Jin C. Learning with limited annotations: A survey on deep semi-supervised learning for medical image segmentation. Comput Biol Med 2024; 169:107840. [PMID: 38157773] [DOI: 10.1016/j.compbiomed.2023.107840]
Abstract
Medical image segmentation is a fundamental and critical step in many image-guided clinical approaches. Recent success of deep learning-based segmentation methods usually relies on a large amount of labeled data, which is particularly difficult and costly to obtain, especially in the medical imaging domain where only experts can provide reliable and accurate annotations. Semi-supervised learning has emerged as an appealing strategy and been widely applied to medical image segmentation tasks to train deep models with limited annotations. In this paper, we present a comprehensive review of recently proposed semi-supervised learning methods for medical image segmentation and summarize both the technical novelties and empirical results. Furthermore, we analyze and discuss the limitations and several unsolved problems of existing approaches. We hope this review can inspire the research community to explore solutions to this challenge and further advance the field of medical image segmentation.
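As a concrete instance of one method family this survey covers, the sketch below combines a supervised loss on labeled images with a consistency term on unlabeled images; it is illustrative and not drawn from any single reviewed paper.

```python
# Hedged sketch of consistency-regularised semi-supervised training.
import torch
import torch.nn.functional as F

def semi_supervised_loss(model, x_lab, y_lab, x_unlab, lam=0.1):
    """Supervised cross-entropy on labeled data plus an MSE consistency
    penalty: unlabeled inputs must give stable predictions under noise."""
    sup = F.cross_entropy(model(x_lab), y_lab)
    noisy = x_unlab + 0.1 * torch.randn_like(x_unlab)   # simple perturbation
    p_clean = torch.softmax(model(x_unlab), dim=1)
    p_noisy = torch.softmax(model(noisy), dim=1)
    consistency = F.mse_loss(p_noisy, p_clean.detach())
    return sup + lam * consistency

# Toy usage with a linear "segmenter" over flattened patches:
model = torch.nn.Linear(16, 2)
loss = semi_supervised_loss(model,
                            torch.randn(4, 16), torch.tensor([0, 1, 0, 1]),
                            torch.randn(8, 16))
loss.backward()
```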
Affiliation(s)
- Rushi Jiao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China; School of Engineering Medicine, Beihang University, Beijing, 100191, China; Shanghai Artificial Intelligence Laboratory, Shanghai, 200232, China.
- Yichi Zhang
- School of Data Science, Fudan University, Shanghai, 200433, China; Artificial Intelligence Innovation and Incubation Institute, Fudan University, Shanghai, 200433, China.
- Le Ding
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China.
- Bingsen Xue
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China; Shanghai Artificial Intelligence Laboratory, Shanghai, 200232, China.
- Jicong Zhang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China; Hefei Innovation Research Institute, Beihang University, Hefei, 230012, China.
- Rong Cai
- School of Engineering Medicine, Beihang University, Beijing, 100191, China; Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, Beihang University, Beijing, 100191, China.
- Cheng Jin
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China; Shanghai Artificial Intelligence Laboratory, Shanghai, 200232, China; Beijing Anding Hospital, Capital Medical University, Beijing, 100088, China.
16
Creswell J, Vo LNQ, Qin ZZ, Muyoyeta M, Tovar M, Wong EB, Ahmed S, Vijayan S, John S, Maniar R, Rahman T, MacPherson P, Banu S, Codlin AJ. Early user perspectives on using computer-aided detection software for interpreting chest X-ray images to enhance access and quality of care for persons with tuberculosis. BMC Glob Public Health 2023; 1:30. [PMID: 39681961] [DOI: 10.1186/s44263-023-00033-2]
Abstract
Despite 30 years as a public health emergency, tuberculosis (TB) remains one of the world's deadliest diseases. Most deaths are among persons with TB who are not reached with diagnosis and treatment. Thus, timely screening and accurate detection of TB, particularly using sensitive tools such as chest radiography, are crucial for reducing the global burden of this disease. However, lack of qualified human resources represents a common limiting factor in many high TB-burden countries. Artificial intelligence (AI) has emerged as a powerful complement in many facets of life, including for the interpretation of chest X-ray images. However, while AI may serve as a viable alternative to human radiographers and radiologists, there is a high likelihood that those suffering from TB will not reap the benefits of this technological advance without appropriate, clinically effective use and cost-conscious deployment. The World Health Organization recommended the use of AI for TB screening in 2021, and early adopters of the technology have been using the technology in many ways. In this manuscript, we present a compilation of early user experiences from nine high TB-burden countries focused on practical considerations and best practices related to deployment, threshold and use case selection, and scale-up. While we offer technical and operational guidance on the use of AI for interpreting chest X-ray images for TB detection, our aim remains to maximize the benefit that programs, implementers, and ultimately TB-affected individuals can derive from this innovative technology.
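Threshold selection, one of the practical considerations discussed above, can be framed as choosing an operating point on a local calibration sample; the sketch below picks the highest CAD score threshold that preserves a target sensitivity. The synthetic data and the 90% target are assumptions for illustration only.

```python
# Hedged sketch of operating-threshold selection for CAD abnormality scores.
import numpy as np

def pick_threshold(scores, labels, target_sensitivity=0.90):
    """Return the highest threshold whose sensitivity meets the target.
    Sensitivity falls as the threshold rises, so the last qualifying
    threshold in ascending order is the highest one."""
    best = None
    for t in np.unique(scores):
        tp = np.sum((scores >= t) & (labels == 1))
        fn = np.sum((scores < t) & (labels == 1))
        if tp / (tp + fn) >= target_sensitivity:
            best = t
    return best

rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=500)                          # reference results
scores = np.clip(rng.normal(0.4 + 0.3 * labels, 0.15), 0, 1)   # synthetic CAD scores
print(pick_threshold(scores, labels))
```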
Affiliation(s)
- Luan Nguyen Quang Vo
- Friends for International TB Relief (FIT), Hanoi, Vietnam
- Department of Global Health, WHO Collaboration Centre On Tuberculosis and Social Medicine, Karolinska Institutet, Stockholm, Sweden
- Monde Muyoyeta
- Centre for Infectious Disease Research in Zambia, Lusaka, Zambia
- Emily Beth Wong
- Africa Health Research Institute, KwaZulu-Natal, South Africa
- Division of Infectious Diseases, Heersink School of Medicine, University of Alabama Birmingham, Birmingham, AL, USA
- Shahriar Ahmed
- International Centre for Diarrhoeal Disease Research, Bangladesh (icddr,b), Dhaka, Bangladesh
- Rabia Maniar
- Interactive Research and Development (IRD) Pakistan, Karachi, Pakistan
- Peter MacPherson
- School of Health & Wellbeing, University of Glasgow, Glasgow, UK
- Malawi-Liverpool-Wellcome Trust Clinical Research Programme, Blantyre, Malawi
- London School of Hygiene & Tropical Medicine, London, UK
- Sayera Banu
- International Centre for Diarrhoeal Disease Research, Bangladesh (icddr,b), Dhaka, Bangladesh
- Andrew James Codlin
- Friends for International TB Relief (FIT), Hanoi, Vietnam
- Department of Global Health, WHO Collaboration Centre On Tuberculosis and Social Medicine, Karolinska Institutet, Stockholm, Sweden
17
Glaser N, Bosman S, Madonsela T, van Heerden A, Mashaete K, Katende B, Ayakaka I, Murphy K, Signorell A, Lynen L, Bremerich J, Reither K. Incidental radiological findings during clinical tuberculosis screening in Lesotho and South Africa: a case series. J Med Case Rep 2023; 17:365. [PMID: 37620921] [PMCID: PMC10464059] [DOI: 10.1186/s13256-023-04097-4]
Abstract
BACKGROUND Chest X-ray offers high sensitivity and acceptable specificity as a tuberculosis screening tool, but in areas with a high burden of tuberculosis, there is often a lack of radiological expertise to interpret chest X-ray. Computer-aided detection systems based on artificial intelligence are therefore increasingly used to screen for tuberculosis-related abnormalities on digital chest radiographies. The CAD4TB software has previously been shown to demonstrate high sensitivity for chest X-ray tuberculosis-related abnormalities, but it is not yet calibrated for the detection of non-tuberculosis abnormalities. When screening for tuberculosis, users of computer-aided detection need to be aware that other chest pathologies are likely to be as prevalent as, or more prevalent than, active tuberculosis. However, non-tuberculosis chest X-ray abnormalities detected during chest X-ray screening for tuberculosis remain poorly characterized in the sub-Saharan African setting, with only minimal literature. CASE PRESENTATION In this case series, we report on four cases with non-tuberculosis abnormalities detected on CXR in TB TRIAGE + ACCURACY (ClinicalTrials.gov Identifier: NCT04666311), a study in adult presumptive tuberculosis cases at health facilities in Lesotho and South Africa to determine the diagnostic accuracy of two potential tuberculosis triage tests: computer-aided detection (CAD4TB v7, Delft, the Netherlands) and C-reactive protein (Alere Afinion, USA). The four Black African participants presented with the following chest X-ray abnormalities: a 59-year-old woman with pulmonary arteriovenous malformation, a 28-year-old man with pneumothorax, a 20-year-old man with massive bronchiectasis, and a 47-year-old woman with aspergilloma. CONCLUSIONS Solely using chest X-ray computer-aided detection systems based on artificial intelligence as a tuberculosis screening strategy in sub-Saharan Africa comes with benefits, but also risks. Due to the limitation of CAD4TB for non-tuberculosis-abnormality identification, the computer-aided detection software may miss significant chest X-ray abnormalities that require treatment, as exemplified in our four cases. Increased data collection, characterization of non-tuberculosis anomalies and research on the implications of these diseases for individuals and health systems in sub-Saharan Africa is needed to help improve existing artificial intelligence software programs and their use in countries with high tuberculosis burden.
Affiliation(s)
- Naomi Glaser
- Faculty of Medicine, University of Zürich, Zurich, Switzerland.
- Department of Health Sciences and Medicine, University of Lucerne, Lucerne, Switzerland.
- Shannon Bosman
- Center for Community Based Research, Human Sciences Research Council, Pietermaritzburg, South Africa
- Thandanani Madonsela
- Center for Community Based Research, Human Sciences Research Council, Pietermaritzburg, South Africa
- Alastair van Heerden
- Center for Community Based Research, Human Sciences Research Council, Pietermaritzburg, South Africa
- Irene Ayakaka
- SolidarMed, Partnerships for Health, Maseru, Lesotho
- Keelin Murphy
- Radboud University Medical Center, Nijmegen, The Netherlands
- Aita Signorell
- Swiss Tropical and Public Health Institute, Allschwil, Switzerland
- University of Basel, Basel, Switzerland
- Lutgarde Lynen
- Institute of Tropical Medicine Antwerp, Antwerp, Belgium
- Jens Bremerich
- Department of Radiology, Clinic of Radiology and Nuclear Medicine, University Hospital Basel, University of Basel, Basel, Switzerland
- Klaus Reither
- Swiss Tropical and Public Health Institute, Allschwil, Switzerland.
- University of Basel, Basel, Switzerland.
18
Suttels V, Guedes Da Costa S, Garcia E, Brahier T, Hartley MA, Agodokpessi G, Wachinou P, Fasseur F, Boillat-Blanco N. Barriers and facilitators to implementation of point-of-care lung ultrasonography in a tertiary centre in Benin: a qualitative study among general physicians and pneumologists. BMJ Open 2023; 13:e070765. [PMID: 37369423] [DOI: 10.1136/bmjopen-2022-070765]
Abstract
OBJECTIVES Owing to its ease of use and excellent diagnostic performance for the assessment of respiratory symptoms, point-of-care lung ultrasound (POC-LUS) has emerged as an attractive skill in resource-low settings, where limited access to specialist care and inconsistent radiology services erode health equity. To narrow the research-to-practice gap, this study aims to gain in-depth insights into the perceptions of POC-LUS and computer-assisted POC-LUS for the diagnosis of lower respiratory tract infections (LRTIs) in a low-income and middle-income country (LMIC) of sub-Saharan Africa. DESIGN AND SETTING Qualitative study using face-to-face semi-structured interviews with three pneumologists and five general physicians in a tertiary centre for pneumology and tuberculosis in Benin, West Africa. The center hosts a prospective cohort study on the diagnostic performance of POC-LUS for LRTI. In this context, all participants started a POC-LUS training programme 6 months before the current study. Transcripts were coded by the interviewer, checked for intercoder reliability by an independent psychologist, compared and thematically summarised according to grounded theory methods. RESULTS Various barriers (-) and facilitators (+) to POC-LUS implementation were identified, related to four principal categories: (1) hospital setting (eg, lack of resources for device renewal or maintenance (-), need for POC tests (+)), (2) physician's perceptions (eg, lack of opportunity to practice (-), willingness to appropriate the technique (+)), (3) tool characteristics (eg, unclear lifespan (-), expedited diagnosis (+)) and (4) patient's experience (no analogous image to keep (-), reduction in costs (+)). Furthermore, all interviewees had positive attitudes towards computer-assisted POC-LUS. CONCLUSIONS There is a clear need for POC affordable lung imaging techniques in LMIC, and physicians are willing to implement POC-LUS to optimise the diagnostic approach to LRTI with an affordable tool. Successful integration of POC-LUS into clinical routine will require adequate responses to local challenges related to the lack of available maintenance resources and limited opportunity for supervised practice for physicians.
Affiliation(s)
- Sofia Guedes Da Costa
- Research Center for Psychology of Health, Aging and Sport Examination (PHASE), University of Lausanne, Lausanne, Switzerland
- Elena Garcia
- Emergency Department, CHUV, Lausanne, Switzerland
- Mary-Anne Hartley
- Digital Global Health Department, University of Lausanne, Lausanne, Switzerland
- Intelligent Global Health Research Group, Swiss Institute of Technology (EPFL), Lausanne, Switzerland
- Gildas Agodokpessi
- National Hospital Center of Pneumology, University of Abomey-Calavi, Cotonou, Benin
- Prudence Wachinou
- National Hospital Center of Pneumology, University of Abomey-Calavi, Cotonou, Benin
- Fabienne Fasseur
- Research Center for Psychology of Health, Aging and Sport Examination (PHASE), University of Lausanne, Lausanne, Switzerland
19
Ramaekers M, Viviers CGA, Janssen BV, Hellström TAE, Ewals L, van der Wulp K, Nederend J, Jacobs I, Pluyter JR, Mavroeidis D, van der Sommen F, Besselink MG, Luyer MDP. Computer-Aided Detection for Pancreatic Cancer Diagnosis: Radiological Challenges and Future Directions. J Clin Med 2023; 12:4209. [PMID: 37445243] [DOI: 10.3390/jcm12134209]
Abstract
Radiological imaging plays a crucial role in the detection and treatment of pancreatic ductal adenocarcinoma (PDAC). However, there are several challenges associated with the use of these techniques in daily clinical practice. Determination of the presence or absence of cancer using radiological imaging is difficult and requires specific expertise, especially after neoadjuvant therapy. Early detection and characterization of tumors would potentially increase the number of patients who are eligible for curative treatment. Over the last decades, artificial intelligence (AI)-based computer-aided detection (CAD) has rapidly evolved as a means for improving the radiological detection of cancer and the assessment of the extent of disease. Although the results of AI applications seem promising, widespread adoption in clinical practice has not taken place. This narrative review provides an overview of current radiological CAD systems in pancreatic cancer, highlights challenges that are pertinent to clinical practice, and discusses potential solutions for these challenges.
Collapse
Affiliation(s)
- Mark Ramaekers
- Department of Surgery, Catharina Cancer Institute, Catharina Hospital Eindhoven, 5623 EJ Eindhoven, The Netherlands
| | - Christiaan G A Viviers
- Department of Electrical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
| | - Boris V Janssen
- Department of Surgery, Amsterdam UMC, University of Amsterdam, 1105 AZ Amsterdam, The Netherlands
- Cancer Center Amsterdam, 1081 HV Amsterdam, The Netherlands
| | - Terese A E Hellström
- Department of Electrical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
| | - Lotte Ewals
- Department of Radiology, Catharina Cancer Institute, Catharina Hospital Eindhoven, 5623 EJ Eindhoven, The Netherlands
| | - Kasper van der Wulp
- Department of Radiology, Catharina Cancer Institute, Catharina Hospital Eindhoven, 5623 EJ Eindhoven, The Netherlands
| | - Joost Nederend
- Department of Radiology, Catharina Cancer Institute, Catharina Hospital Eindhoven, 5623 EJ Eindhoven, The Netherlands
| | - Igor Jacobs
- Department of Hospital Services and Informatics, Philips Research, 5656 AE Eindhoven, The Netherlands
| | - Jon R Pluyter
- Department of Experience Design, Philips Design, 5656 AE Eindhoven, The Netherlands
| | - Dimitrios Mavroeidis
- Department of Data Science, Philips Research, 5656 AE Eindhoven, The Netherlands
| | - Fons van der Sommen
- Department of Electrical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
| | - Marc G Besselink
- Department of Surgery, Amsterdam UMC, University of Amsterdam, 1105 AZ Amsterdam, The Netherlands
- Cancer Center Amsterdam, 1081 HV Amsterdam, The Netherlands
| | - Misha D P Luyer
- Department of Surgery, Catharina Cancer Institute, Catharina Hospital Eindhoven, 5623 EJ Eindhoven, The Netherlands
| |
Collapse
|
20
|
Xu M, Chen Z, Zheng J, Zhao Q, Yuan Z. Artificial Intelligence-Aided Optical Imaging for Cancer Theranostics. Semin Cancer Biol 2023:S1044-579X(23)00094-9. [PMID: 37302519 DOI: 10.1016/j.semcancer.2023.06.003] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2022] [Revised: 06/08/2023] [Accepted: 06/08/2023] [Indexed: 06/13/2023]
Abstract
The use of artificial intelligence (AI) to assist biomedical imaging has demonstrated high accuracy and efficiency in medical decision-making for individualized cancer medicine. In particular, optical imaging methods can visualize both structural and functional information of tumor tissues with high contrast, at low cost, and noninvasively. However, no systematic work has reviewed the recent advances in AI-aided optical imaging for cancer theranostics. In this review, we demonstrate how AI, through computer vision, deep learning, and natural language processing, can guide optical imaging methods to improve accuracy in tumor detection, in automated analysis and prediction of histopathological sections, in monitoring during treatment, and in prognosis. The optical imaging techniques involved mainly comprise various tomography and microscopy methods, such as optical endoscopy imaging, optical coherence tomography, photoacoustic imaging, diffuse optical tomography, optical microscopy imaging, Raman imaging, and fluorescence imaging. Existing problems, possible challenges, and future prospects for AI-aided optical imaging protocols in cancer theranostics are also discussed. We expect the present work to open a new avenue for precision oncology using AI and optical imaging tools.
Collapse
Affiliation(s)
- Mengze Xu
- Center for Cognition and Neuroergonomics, State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Zhuhai, China; Cancer Center, Faculty of Health Sciences, University of Macau, Macau SAR, China; Centre for Cognitive and Brain Sciences, University of Macau, Macau SAR, China
| | - Zhiyi Chen
- Institute of Medical Imaging, Hengyang Medical School, University of South China, Hengyang, China
| | - Junxiao Zheng
- Cancer Center, Faculty of Health Sciences, University of Macau, Macau SAR, China; Centre for Cognitive and Brain Sciences, University of Macau, Macau SAR, China
| | - Qi Zhao
- Cancer Center, Faculty of Health Sciences, University of Macau, Macau SAR, China
| | - Zhen Yuan
- Cancer Center, Faculty of Health Sciences, University of Macau, Macau SAR, China; Centre for Cognitive and Brain Sciences, University of Macau, Macau SAR, China.
| |
Collapse
|
21
|
Al-Naser YA. The impact of artificial intelligence on radiography as a profession: A narrative review. J Med Imaging Radiat Sci 2023; 54:162-166. [PMID: 36376210 DOI: 10.1016/j.jmir.2022.10.196] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2022] [Revised: 09/27/2022] [Accepted: 10/14/2022] [Indexed: 11/13/2022]
Abstract
BACKGROUND AND PURPOSE Artificial intelligence (AI) algorithms, particularly deep learning, have made significant strides in image recognition and classification, achieving remarkable diagnostic accuracy across various diseases. This domain of AI has been the focus of many research papers, as it directly relates to the roles and responsibilities of radiologists. However, discussions of the impact of such technology on the radiography profession are often overlooked. To address this gap in the literature, this paper examines the application of AI in radiography and how AI's rapid emergence into healthcare is affecting not only standard radiographic protocols but also the role of the radiologic technologist. METHODS A review of the literature on AI and radiography was performed using the PubMed, Google Scholar, and ScienceDirect databases. Video presentations from YouTube were also used to weigh the varying opinions of world leaders at the forefront of artificial intelligence. RESULTS AI can augment routine radiographic protocols: it can automatically ensure optimal patient positioning within the gantry and automate image processing. As AI technologies continue to emerge in diagnostic imaging, practicing radiologic technologists are urged to achieve threshold computational and technical literacy to operate AI-driven imaging technology. CONCLUSION AI has many applications in radiography, including acquisition and image processing. In the near future, it will be important to meet the demand for radiographers skilled in AI-driven technologies.
Collapse
|
22
|
Lian S, Li L, Luo Z, Zhong Z, Wang B, Li S. Learning multi-organ segmentation via partial- and mutual-prior from single-organ datasets. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104339] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
|
23
|
Liu H, Fu Y, Zhang S, Liu J, Wang Y, Wang G, Fang J. GCHA-Net: Global context and hybrid attention network for automatic liver segmentation. Comput Biol Med 2023; 152:106352. [PMID: 36481761 DOI: 10.1016/j.compbiomed.2022.106352] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2022] [Revised: 11/15/2022] [Accepted: 11/23/2022] [Indexed: 11/27/2022]
Abstract
Liver segmentation is a critical step in liver cancer diagnosis and surgical planning. The U-Net architecture is one of the most efficient deep networks for medical image segmentation; however, its continuous downsampling operators cause a loss of spatial information. To solve this problem, we propose a global context and hybrid attention network, called GCHA-Net, to adaptively capture structural and detailed features. To capture global features, a global attention module (GAM) is designed to model interdependencies along the channel and positional dimensions. To capture local features, a feature aggregation module (FAM) is designed, in which a local attention module (LAM) captures spatial information. The LAM makes the model focus on local liver regions and suppress irrelevant information. Experimental results on the LiTS2017 dataset show that the dice per case (DPC) and dice global (DG) values for the liver were 96.5% and 96.9%, respectively. Compared with state-of-the-art models, our model has superior performance in liver segmentation. On the 3Dircadb dataset, our model likewise obtained the highest accuracy among closely related models. These results show that the proposed model can effectively capture global context information and build correlations between different convolutional layers. The code is available at: https://github.com/HuaxiangLiu/GCAU-Net.
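As a rough illustration of the kind of dual attention the GAM describes, the following PyTorch sketch models channel- and position-wise interdependencies over a 2D feature map; the layer sizes, names, and residual blend are our assumptions, not the authors' code.

```python
# A minimal sketch of a global attention module combining positional and
# channel attention, in the spirit of GCHA-Net's GAM (illustrative only).
import torch
import torch.nn as nn

class GlobalAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable blend weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Positional attention: every pixel attends to every other pixel.
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c//8)
        k = self.key(x).flatten(2)                     # (b, c//8, hw)
        pos = torch.softmax(q @ k, dim=-1)             # (b, hw, hw)
        v = self.value(x).flatten(2)                   # (b, c, hw)
        pos_out = (v @ pos.transpose(1, 2)).view(b, c, h, w)
        # Channel attention: channel-to-channel affinity map.
        feat = x.flatten(2)                            # (b, c, hw)
        chan = torch.softmax(feat @ feat.transpose(1, 2), dim=-1)  # (b, c, c)
        chan_out = (chan @ feat).view(b, c, h, w)
        return x + self.gamma * (pos_out + chan_out)

# Example: attend over a feature map from a U-Net encoder stage.
gam = GlobalAttention(64)
out = gam(torch.randn(1, 64, 32, 32))  # -> (1, 64, 32, 32)
```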
Collapse
Affiliation(s)
- Huaxiang Liu
- Institute of Intelligent Information Processing, Taizhou University, Taizhou, 318000, Zhejiang, China
| | - Youyao Fu
- Institute of Intelligent Information Processing, Taizhou University, Taizhou, 318000, Zhejiang, China
| | - Shiqing Zhang
- Institute of Intelligent Information Processing, Taizhou University, Taizhou, 318000, Zhejiang, China
| | - Jun Liu
- College of Mechanical Engineering, Quzhou University, Quzhou, 324000, Zhejiang, China
| | - Yong Wang
- School of Aeronautics and Astronautics, Sun Yat Sen University, Guangzhou, 510275, Guangdong, China
| | - Guoyu Wang
- Institute of Intelligent Information Processing, Taizhou University, Taizhou, 318000, Zhejiang, China
| | - Jiangxiong Fang
- Institute of Intelligent Information Processing, Taizhou University, Taizhou, 318000, Zhejiang, China; College of Mechanical Engineering, Quzhou University, Quzhou, 324000, Zhejiang, China.
| |
Collapse
|
24
|
Irkham I, Ibrahim AU, Nwekwo CW, Al-Turjman F, Hartati YW. Current Technologies for Detection of COVID-19: Biosensors, Artificial Intelligence and Internet of Medical Things (IoMT): Review. SENSORS (BASEL, SWITZERLAND) 2022; 23:426. [PMID: 36617023 PMCID: PMC9824404 DOI: 10.3390/s23010426] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/26/2022] [Revised: 12/14/2022] [Accepted: 12/21/2022] [Indexed: 06/17/2023]
Abstract
Although COVID-19 is no longer a global pandemic, thanks to the development and integration of different technologies for the diagnosis and treatment of the disease, technological advances in molecular biology, electronics, computer science, artificial intelligence, the Internet of Things, nanotechnology, and related fields have led to molecular approaches and computer-aided diagnosis for the detection of COVID-19. This study provides a holistic view of COVID-19 detection based on (1) molecular diagnosis, which includes RT-PCR, antigen-antibody, and CRISPR-based biosensors, and (2) computer-aided detection based on AI-driven models, which include deep learning and transfer learning approaches. The review also provides a comparison between these two emerging technologies and outlines open research issues for the development of smart IoMT-enabled platforms for the detection of COVID-19.
Collapse
Affiliation(s)
- Irkham Irkham
- Department of Chemistry, Faculty of Mathematics and Natural Sciences, Padjadjaran University, Bandung 40173, Indonesia
| | | | - Chidi Wilson Nwekwo
- Department of Biomedical Engineering, Near East University, Mersin 99138, Turkey
| | - Fadi Al-Turjman
- Research Center for AI and IoT, Faculty of Engineering, University of Kyrenia, Mersin 99138, Turkey
- Artificial Intelligence Engineering Department, AI and Robotics Institute, Near East University, Mersin 99138, Turkey
| | - Yeni Wahyuni Hartati
- Department of Chemistry, Faculty of Mathematics and Natural Sciences, Padjadjaran University, Bandung 40173, Indonesia
| |
Collapse
|
25
|
Artificial Intelligence (AI) in Breast Imaging: A Scientometric Umbrella Review. Diagnostics (Basel) 2022; 12:diagnostics12123111. [PMID: 36553119 PMCID: PMC9777253 DOI: 10.3390/diagnostics12123111] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2022] [Revised: 12/07/2022] [Accepted: 12/08/2022] [Indexed: 12/14/2022] Open
Abstract
Artificial intelligence (AI), a disruptive advance across a wide spectrum of applications, has continued to gain momentum over the past decades. Within breast imaging, AI, especially machine learning and deep learning honed by extensive cross-data/case referencing, has found great utility across four facets: screening and detection, diagnosis, disease monitoring, and data management as a whole. Breast cancer has for years topped the cumulative cancer risk ranking for women across the six continents; it exists in varied forms and presents a complicated context for medical decision-making. In view of the ever-increasing demand for quality healthcare, contemporary AI is envisioned to make great strides in clinical data management and perception, with the capability to flag findings of indeterminate significance, predict prognosis, and correlate available data into meaningful clinical endpoints. Here, the authors captured the review works of the past decades on AI in breast imaging and systematized them into one usable document, termed an umbrella review. Evidence-based scientometric analysis was performed in accordance with the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guideline, resulting in 71 included review works. The study synthesizes, collates, and correlates the included reviews, identifying the patterns, trends, quality, and types of works captured by the structured search strategy, and aims to provide a panoramic view of how AI is poised to enhance breast imaging procedures. It is intended to serve as a one-stop synthesis offering a holistic bird's-eye view to readers, from newcomers to existing researchers and relevant stakeholders, on the topic of interest.
Collapse
|
26
|
Rouvière O, Jaouen T, Baseilhac P, Benomar ML, Escande R, Crouzet S, Souchon R. Artificial intelligence algorithms aimed at characterizing or detecting prostate cancer on MRI: How accurate are they when tested on independent cohorts? – A systematic review. Diagn Interv Imaging 2022; 104:221-234. [PMID: 36517398 DOI: 10.1016/j.diii.2022.11.005] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2022] [Accepted: 11/22/2022] [Indexed: 12/14/2022]
Abstract
PURPOSE The purpose of this study was to perform a systematic review of the literature on the diagnostic performance, in independent test cohorts, of artificial intelligence (AI)-based algorithms aimed at characterizing or detecting prostate cancer on magnetic resonance imaging (MRI). MATERIALS AND METHODS Medline, Embase and Web of Science were searched for studies published between January 2018 and September 2022 that used a histological reference standard and assessed prostate cancer characterization/detection by AI-based MRI algorithms in test cohorts of more than 40 patients meeting at least one of the following independence criteria with respect to the training cohort: different institution, different population type, different MRI vendor, different magnetic field strength, or strict temporal splitting. RESULTS Thirty-five studies were selected. The overall risk of bias was low; however, 23 studies did not use predefined diagnostic thresholds, which may have optimistically biased the results. Test cohorts fulfilled one to three of the five independence criteria. The diagnostic performance of the algorithms used as standalone tools was good, challenging that of human reading. In the 12 studies with predefined diagnostic thresholds, radiomics-based computer-aided diagnosis systems (assessing regions of interest drawn by the radiologist) tended to provide more robust results than deep learning-based computer-aided detection systems (providing probability maps). Two of the six studies comparing unassisted and assisted reading showed significant improvement due to the algorithm, mostly through a reduction in false positive findings. CONCLUSION Prostate MRI AI-based algorithms showed promising results, especially for the relatively simple task of characterizing predefined lesions. How best to manage discrepancies between human reading and algorithm findings still needs to be defined.
Collapse
Affiliation(s)
- Olivier Rouvière
- Hospices Civils de Lyon, Hôpital Edouard Herriot, Department of Vascular and Urinary Imaging, Lyon 69003, France; Université Lyon 1, Faculté de médecine Lyon Est, Lyon 69003, France; LabTAU, INSERM, U1032, Lyon 69003, France.
| | | | - Pierre Baseilhac
- Hospices Civils de Lyon, Hôpital Edouard Herriot, Department of Vascular and Urinary Imaging, Lyon 69003, France
| | - Mohammed Lamine Benomar
- LabTAU, INSERM, U1032, Lyon 69003, France; University of Ain Temouchent, Faculty of Science and Technology, Algeria
| | - Raphael Escande
- Hospices Civils de Lyon, Hôpital Edouard Herriot, Department of Vascular and Urinary Imaging, Lyon 69003, France
| | - Sébastien Crouzet
- Université Lyon 1, Faculté de médecine Lyon Est, Lyon 69003, France; LabTAU, INSERM, U1032, Lyon 69003, France; Hospices Civils de Lyon, Hôpital Edouard Herriot, Department of Urology, Lyon 69003, France
| | | |
Collapse
|
27
|
Ikerionwu C, Ugwuishiwu C, Okpala I, James I, Okoronkwo M, Nnadi C, Orji U, Ebem D, Ike A. Application of machine and deep learning algorithms in optical microscopic detection of Plasmodium: A malaria diagnostic tool for the future. Photodiagnosis Photodyn Ther 2022; 40:103198. [PMID: 36379305 DOI: 10.1016/j.pdpdt.2022.103198] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2022] [Revised: 11/07/2022] [Accepted: 11/08/2022] [Indexed: 11/14/2022]
Abstract
Machine and deep learning techniques are prevalent in the medical field owing to their high accuracy in disease diagnosis. One such disease is malaria, caused by Plasmodium falciparum and transmitted by the female Anopheles mosquito. According to the World Health Organization (WHO), millions of people are infected annually, leading to deaths in the infected population. Statistical records show that early detection of malaria parasites could prevent deaths, and machine learning (ML) has proved helpful in such early detection. Human error is identified as a major cause of inaccurate diagnosis in the traditional microscopy-based method, so the method would be more reliable if dependency on human experts were reduced or removed entirely; hence the motivation for this paper. This study presents a systematic review of the prevalent machine learning algorithms applied to low-cost, portable optical microscopes for automating blood film interpretation in malaria parasite detection. Peer-reviewed papers were retrieved from selected reputable databases, e.g., Elsevier, IEEE Xplore, PubMed, Scopus, and Web of Science. The extant literature suggests that convolutional neural networks (CNNs) and their variants (deep learning) account for 41.9% of ML-based microscopy malaria diagnosis, with a prediction accuracy of 99.23%. The findings thus suggest that early detection of the malaria parasite has improved through the application of CNNs and other ML algorithms to microscopic malaria parasite detection.
Collapse
Affiliation(s)
- Charles Ikerionwu
- Machine Learning on Disease Diagnosis Research Group, Nigeria; Department of Software Engineering, Federal University of Technology, Owerri, Imo State, Nigeria
| | - Chikodili Ugwuishiwu
- Machine Learning on Disease Diagnosis Research Group, Nigeria; Department of Computer Science, University of Nigeria, Nsukka, Enugu State, Nigeria.
| | - Izunna Okpala
- Machine Learning on Disease Diagnosis Research Group, Nigeria; Department of Information Technology, University of Cincinnati, USA
| | - Idara James
- Machine Learning on Disease Diagnosis Research Group, Nigeria; Department of Computer Science, Akwa Ibom State University, Nigeria
| | - Matthew Okoronkwo
- Machine Learning on Disease Diagnosis Research Group, Nigeria; Department of Computer Science, University of Nigeria, Nsukka, Enugu State, Nigeria
| | - Charles Nnadi
- Machine Learning on Disease Diagnosis Research Group, Nigeria; Department of Pharmaceutical and Medicinal Chemistry, Faculty of Pharmaceutical Sciences, University of Nigeria, Nsukka, Enugu State, Nigeria
| | - Ugochukwu Orji
- Machine Learning on Disease Diagnosis Research Group, Nigeria; Department of Computer Science, University of Nigeria, Nsukka, Enugu State, Nigeria
| | - Deborah Ebem
- Machine Learning on Disease Diagnosis Research Group, Nigeria; Department of Computer Science, University of Nigeria, Nsukka, Enugu State, Nigeria
| | - Anthony Ike
- Machine Learning on Disease Diagnosis Research Group, Nigeria; Department of Microbiology, University of Nigeria, Nsukka, Enugu State, Nigeria
| |
Collapse
|
28
|
Sabi S, Jacob JM, Gopi VP. CLASSIFICATION OF AGE-RELATED MACULAR DEGENERATION USING DAG-CNN ARCHITECTURE. BIOMEDICAL ENGINEERING: APPLICATIONS, BASIS AND COMMUNICATIONS 2022; 34. [DOI: 10.4015/s1016237222500375] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/01/2025]
Abstract
Age-related macular degeneration (AMD) is the prime cause of vision impairment observed in major countries worldwide; hence, accurate early detection of the disease is vital for further research in this area. Moreover, a thorough eye examination to detect AMD is a complex task. This paper introduces a Directed Acyclic Graph (DAG)-structured Convolutional Neural Network (CNN) architecture to better classify dry and wet AMD. The DAG architecture can combine features from multiple layers to provide better results, and the DAG model can learn multi-level visual properties to increase classification accuracy. Fine-tuning the DAG-based CNN model further improves network performance. The proposed model was trained and tested on the Mendeley dataset and achieved an accuracy of 99.2% with an AUC of 0.9999, along with strong results for precision, recall, and F1-score. Its performance is also compared with that of related works on the same dataset, demonstrating the ability of the proposed method to grade AMD images and thereby support early detection of the disease. The model is also computationally efficient for real-time applications, performing classification with few learnable parameters and few floating-point operations (FLOPs).
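The core DAG idea, combining features from several depths before classification, can be sketched as follows; the block sizes, input shape, and two-class head are illustrative assumptions rather than the paper's exact architecture.

```python
# A minimal sketch of a DAG-structured CNN: shallow, middle, and deep
# features all feed the classifier (not the authors' exact network).
import torch
import torch.nn as nn

class DagCnn(nn.Module):
    def __init__(self, num_classes: int = 2):  # assumed Dry/Wet output
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.block2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.block3 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.pool = nn.AdaptiveAvgPool2d(1)  # makes each branch a fixed-size vector
        self.head = nn.Linear(16 + 32 + 64, num_classes)

    def forward(self, x):
        f1 = self.block1(x)
        f2 = self.block2(f1)
        f3 = self.block3(f2)
        # DAG edges: multi-level features are concatenated before the head.
        feats = [self.pool(f).flatten(1) for f in (f1, f2, f3)]
        return self.head(torch.cat(feats, dim=1))

logits = DagCnn()(torch.randn(1, 1, 224, 224))  # e.g. a grayscale retinal image
```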
Collapse
Affiliation(s)
- S. Sabi
- Department of Electronics and Communication Engineering, Sree Buddha College of Engineering, Pattoor, APJ Abdul Kalam Technological University, Kerala, India
| | - Jaya Mary Jacob
- Department of Biotechnology and Biochemical Engineering, Sree Buddha College of Engineering, Pattoor, APJ Abdul Kalam Technological University, Kerala, India
| | - Varun P. Gopi
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tiruchirappalli, Tamilnadu 620015, India
| |
Collapse
|
29
|
Uncertainty-guided mutual consistency learning for semi-supervised medical image segmentation. Artif Intell Med 2022; 138:102476. [PMID: 36990583 DOI: 10.1016/j.artmed.2022.102476] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2021] [Revised: 11/20/2022] [Accepted: 12/13/2022] [Indexed: 12/23/2022]
Abstract
Medical image segmentation is a fundamental and critical step in many clinical approaches. Semi-supervised learning has been widely applied to medical image segmentation tasks, since it alleviates the heavy burden of acquiring expert-examined annotations and takes advantage of unlabeled data, which is much easier to acquire. Although consistency learning has proven effective by enforcing invariance of predictions under different distributions, existing approaches cannot make full use of region-level shape constraints and boundary-level distance information from unlabeled data. In this paper, we propose a novel uncertainty-guided mutual consistency learning framework that effectively exploits unlabeled data by integrating intra-task consistency learning from up-to-date predictions for self-ensembling with cross-task consistency learning from task-level regularization to exploit geometric shape information. The framework is guided by the models' estimated segmentation uncertainty to select relatively certain predictions for consistency learning, thereby exploiting more reliable information from unlabeled data. Experiments on two publicly available benchmark datasets showed that: (1) our proposed method achieves significant performance improvements by leveraging unlabeled data, with gains of up to 4.13% and 9.82% in Dice coefficient over the supervised baseline on left atrium segmentation and brain tumor segmentation, respectively; and (2) compared with other semi-supervised segmentation methods, our method achieves better segmentation performance under the same backbone network and task settings on both datasets, demonstrating its effectiveness, robustness, and potential transferability to other medical image segmentation tasks.
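The uncertainty-guided selection can be illustrated with a short sketch: Monte Carlo dropout passes yield a mean prediction and a per-voxel entropy, and the consistency penalty is applied only where entropy is low. All names, the entropy threshold, and the MSE form are assumptions; the paper's full framework combines intra- and cross-task terms.

```python
# A minimal sketch of uncertainty-guided consistency on unlabeled data
# (assumes the model contains dropout layers; shapes are (B, C, D, H, W)).
import torch

def uncertainty_guided_consistency(model, unlabeled, passes=8, thresh=0.5):
    model.train()  # keep dropout active for Monte Carlo sampling
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(unlabeled), dim=1)
                             for _ in range(passes)]).mean(0)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1)   # per-voxel uncertainty
    certain = (entropy < thresh).float()                      # mask of reliable voxels
    student = torch.softmax(model(unlabeled), dim=1)          # gradient flows here
    loss = ((student - probs) ** 2).sum(dim=1) * certain      # masked MSE consistency
    return loss.sum() / certain.sum().clamp(min=1.0)
```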
Collapse
|
30
|
Lee CE, Park H, Shin YG, Chung M. Voxel-wise adversarial semi-supervised learning for medical image segmentation. Comput Biol Med 2022; 150:106152. [PMID: 36208595 DOI: 10.1016/j.compbiomed.2022.106152] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2022] [Revised: 09/03/2022] [Accepted: 09/24/2022] [Indexed: 11/17/2022]
Abstract
BACKGROUND AND OBJECTIVE Semi-supervised learning for medical image segmentation is an important area of research for alleviating the huge cost associated with constructing reliable large-scale annotations in the medical domain. Recent semi-supervised approaches have demonstrated promising results by employing consistency regularization, pseudo-labeling techniques, and adversarial learning. These methods primarily attempt to learn the distribution of labeled and unlabeled data by enforcing consistency in predictions or embedded context. However, previous approaches have focused only on local discrepancy minimization or context relations across single classes. METHODS In this paper, we introduce a novel adversarial learning-based semi-supervised segmentation method that effectively embeds both local and global features from multiple hidden layers and learns context relations between multiple classes. Our voxel-wise adversarial learning method utilizes a voxel-wise feature discriminator, which takes multilayer voxel-wise features (involving both local and global features) as input by embedding class-specific voxel-wise feature distributions. Furthermore, our previous representation learning method is improved by overcoming information loss and learning stability problems, enabling rich representations of labeled data. RESULTS In the experiments, we used the Left Atrial Segmentation Challenge dataset and the Abdominal Multi-Organ dataset to demonstrate the effectiveness of our method in both single-class and multiclass segmentation. The experimental results demonstrate that our method outperforms the current best-performing state-of-the-art semi-supervised learning approaches. Our proposed adversarial learning-based semi-supervised segmentation method successfully leveraged unlabeled data to improve network performance by 2% in Dice similarity coefficient on the multi-organ dataset. CONCLUSION We compared our approach on a wide range of medical datasets and showed that our method can be adapted to embed class-specific features. Furthermore, visual interpretation of the feature space demonstrates that our proposed method yields a well-distributed and separated feature space from both labeled and unlabeled data, which improves the overall prediction results.
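One way to picture a voxel-wise feature discriminator is sketched below: hidden-layer features are resampled to a common grid and classified per voxel, so the adversarial signal is local rather than image-level. The shapes and the single 1x1x1 classification layer are our assumptions, not the authors' network.

```python
# A minimal sketch of a voxel-wise feature discriminator over multilayer
# features (illustrative; shapes and channel counts are placeholders).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VoxelFeatureDiscriminator(nn.Module):
    def __init__(self, feat_channels=(32, 64, 128)):
        super().__init__()
        self.cls = nn.Conv3d(sum(feat_channels), 1, kernel_size=1)

    def forward(self, feats):
        # feats: list of (B, C_i, D_i, H_i, W_i) taken from several hidden layers
        target = feats[0].shape[2:]
        up = [F.interpolate(f, size=target, mode='trilinear', align_corners=False)
              for f in feats]
        return self.cls(torch.cat(up, dim=1))  # (B, 1, D, H, W) real/fake logits

d = VoxelFeatureDiscriminator()
logits = d([torch.randn(1, 32, 16, 64, 64),
            torch.randn(1, 64, 8, 32, 32),
            torch.randn(1, 128, 4, 16, 16)])
```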
Collapse
Affiliation(s)
| | - Hyelim Park
- Department of Computer Science and Engineering, Seoul National University, Republic of Korea.
| | - Yeong-Gil Shin
- Department of Computer Science and Engineering, Seoul National University, Republic of Korea.
| | - Minyoung Chung
- School of Software, Soongsil University, 369 Sangdo-Ro, Dongjak-Gu, Seoul, 06978, Republic of Korea.
| |
Collapse
|
31
|
Nemoto M, Tanaka A, Kaida H, Kimura Y, Nagaoka T, Yamada T, Hanaoka K, Kitajima K, Tsuchitani T, Ishii K. Automatic detection of primary and metastatic lesions on cervicothoracic region and whole-body bone using a uniform machine-learnable approach for [18F]-FDG-PET/CT image analysis. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac9173] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2022] [Accepted: 09/12/2022] [Indexed: 11/12/2022]
Abstract
We propose a method to detect primary and metastatic lesions with fluorine-18 fluorodeoxyglucose (FDG) accumulation in the lung field, neck, mediastinum, and bony regions on FDG-PET/CT images. Searching for systemic lesions requires considering various anatomical structures. The proposed method therefore combines an extraction process for anatomical regions with a uniform lesion detection approach. The uniform approach does not rely on region-specific anatomical processing but provides a machine-learnable framework; it can thus serve as a lesion detector for a specific anatomical region once trained on data from that region. In this study, three lesion detection processes are obtained, for the whole-body bone region, the lung field, and the neck-mediastinum region. Each detection process comprises lesion candidate detection and false positive (FP) candidate elimination. Lesion candidate detection is based on voxel anomaly detection with a one-class support vector machine, and FP candidate elimination is performed using an AdaBoost classifier ensemble. The image features used by the ensemble are selected sequentially during training and are optimal for candidate classification. Three-fold cross-validation was used to evaluate detection performance on the 54 diseased FDG-PET/CT images. The mean sensitivity for detecting primary and metastatic lesions at 3 FPs per case was 0.89 with a 0.10 standard deviation (SD) in the bone region, 0.80 with a 0.10 SD in the lung field, and 0.87 with a 0.10 SD in the neck region. The average areas under the ROC curve were 0.887 with a 0.125 SD for detecting bone metastases, 0.900 with a 0.063 SD for detecting pulmonary lesions, and 0.927 with a 0.035 SD for detecting neck-mediastinum lesions. These detection performances indicate that the proposed method could be applied clinically, and they show that the uniform approach is highly versatile in providing various lesion detection processes.
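The two-stage pipeline maps naturally onto standard scikit-learn components, as in the hedged sketch below; the feature vectors, SVM parameters, and ensemble size are placeholders, not the study's settings.

```python
# A minimal sketch of the two-stage idea: a one-class SVM flags anomalous
# voxels as lesion candidates, then an AdaBoost ensemble prunes false
# positives using candidate features (all data here is placeholder).
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.ensemble import AdaBoostClassifier

# Stage 1: learn "normal" voxel appearance from healthy-region voxel features
# (e.g., SUV plus local statistics), then score all voxels of a new scan.
normal_voxels = np.random.rand(5000, 4)          # placeholder training features
ocsvm = OneClassSVM(kernel='rbf', nu=0.05, gamma='scale').fit(normal_voxels)
scan_voxels = np.random.rand(20000, 4)
candidates = scan_voxels[ocsvm.predict(scan_voxels) == -1]  # anomalies

# Stage 2: classify candidate regions as lesion vs. false positive.
cand_features = np.random.rand(200, 10)          # per-candidate image features
cand_labels = np.random.randint(0, 2, 200)       # from annotated training cases
fp_filter = AdaBoostClassifier(n_estimators=100).fit(cand_features, cand_labels)
kept = fp_filter.predict(cand_features) == 1     # surviving lesion detections
```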
Collapse
|
32
|
Zimmerer D, Full PM, Isensee F, Jager P, Adler T, Petersen J, Kohler G, Ross T, Reinke A, Kascenas A, Jensen BS, O'Neil AQ, Tan J, Hou B, Batten J, Qiu H, Kainz B, Shvetsova N, Fedulova I, Dylov DV, Yu B, Zhai J, Hu J, Si R, Zhou S, Wang S, Li X, Chen X, Zhao Y, Marimont SN, Tarroni G, Saase V, Maier-Hein L, Maier-Hein K. MOOD 2020: A Public Benchmark for Out-of-Distribution Detection and Localization on Medical Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:2728-2738. [PMID: 35468060 DOI: 10.1109/tmi.2022.3170077] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Detecting Out-of-Distribution (OoD) data is one of the greatest challenges in the safe and robust deployment of machine learning algorithms in medicine. When algorithms encounter cases that deviate from the distribution of the training data, they often produce incorrect and over-confident predictions. OoD detection algorithms aim to catch erroneous predictions in advance by analysing the data distribution and detecting potential instances of failure; flagging OoD cases may also support human readers in identifying incidental findings. Owing to the increased interest in OoD algorithms, benchmarks for different domains have recently been established, but in the medical imaging domain, where reliable predictions are often essential, an open benchmark has been missing. We introduce the Medical Out-of-Distribution Analysis Challenge (MOOD) as an open, fair, and unbiased benchmark for OoD methods in the medical imaging domain. The analysis of the submitted algorithms shows that performance correlates strongly with perceived difficulty and that all algorithms show high variance across different anomalies, making it as yet difficult to recommend them for clinical practice. We also see a strong correlation between challenge ranking and performance on a simple toy test set, indicating that such a set might be a valuable addition as a proxy dataset during anomaly detection algorithm development.
Collapse
|
33
|
Ma J, Zhang Y, Gu S, Zhu C, Ge C, Zhang Y, An X, Wang C, Wang Q, Liu X, Cao S, Zhang Q, Liu S, Wang Y, Li Y, He J, Yang X. AbdomenCT-1K: Is Abdominal Organ Segmentation a Solved Problem? IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2022; 44:6695-6714. [PMID: 34314356 DOI: 10.1109/tpami.2021.3100536] [Citation(s) in RCA: 68] [Impact Index Per Article: 22.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
With the unprecedented developments in deep learning, automatic segmentation of the main abdominal organs seems to be a solved problem, as state-of-the-art (SOTA) methods have achieved results comparable to inter-rater variability on many benchmark datasets. However, most existing abdominal datasets contain only single-center, single-phase, single-vendor, or single-disease cases, and it is unclear whether this excellent performance generalizes to more diverse datasets. This paper presents a large and diverse abdominal CT organ segmentation dataset, termed AbdomenCT-1K, with more than 1000 (1K) CT scans from 12 medical centers, including multi-phase, multi-vendor, and multi-disease cases. Furthermore, we conduct a large-scale study of liver, kidney, spleen, and pancreas segmentation and reveal unsolved segmentation problems of the SOTA methods, such as limited generalization across medical centers, phases, and unseen diseases. To advance these unsolved problems, we further build four organ segmentation benchmarks for fully supervised, semi-supervised, weakly supervised, and continual learning, which are currently challenging and active research topics. Accordingly, we develop a simple and effective method for each benchmark, which can be used as an out-of-the-box method and strong baseline. We believe the AbdomenCT-1K dataset will promote future in-depth research towards clinically applicable abdominal organ segmentation methods.
Collapse
|
34
|
Chen X, Wang X, Zhang K, Fung KM, Thai TC, Moore K, Mannel RS, Liu H, Zheng B, Qiu Y. Recent advances and clinical applications of deep learning in medical image analysis. Med Image Anal 2022; 79:102444. [PMID: 35472844 PMCID: PMC9156578 DOI: 10.1016/j.media.2022.102444] [Citation(s) in RCA: 275] [Impact Index Per Article: 91.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2021] [Revised: 03/09/2022] [Accepted: 04/01/2022] [Indexed: 02/07/2023]
Abstract
Deep learning has received extensive research interest for developing new medical image processing algorithms, and deep learning-based models have been remarkably successful in a variety of medical imaging tasks supporting disease detection and diagnosis. Despite this success, further improvement of deep learning models in medical image analysis is largely bottlenecked by the lack of large, well-annotated datasets. In the past five years, many studies have focused on addressing this challenge. In this paper, we review and summarize these recent studies to provide a comprehensive overview of applying deep learning methods to various medical image analysis tasks. In particular, we emphasize the latest progress and contributions of state-of-the-art unsupervised and semi-supervised deep learning in medical image analysis, summarized by application scenario, including classification, segmentation, detection, and image registration. We also discuss major technical challenges and suggest possible solutions for future research efforts.
Collapse
Affiliation(s)
- Xuxin Chen
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
| | - Ximin Wang
- School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
| | - Ke Zhang
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
| | - Kar-Ming Fung
- Department of Pathology, University of Oklahoma Health Sciences Center, Oklahoma City, OK 73104, USA
| | - Theresa C Thai
- Department of Radiology, University of Oklahoma Health Sciences Center, Oklahoma City, OK 73104, USA
| | - Kathleen Moore
- Department of Obstetrics and Gynecology, University of Oklahoma Health Sciences Center, Oklahoma City, OK 73104, USA
| | - Robert S Mannel
- Department of Obstetrics and Gynecology, University of Oklahoma Health Sciences Center, Oklahoma City, OK 73104, USA
| | - Hong Liu
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
| | - Bin Zheng
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
| | - Yuchen Qiu
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA.
| |
Collapse
|
35
|
Tai HC, Chen KY, Wu MH, Chang KJ, Chen CN, Chen A. Assessing Detection Accuracy of Computerized Sonographic Features and Computer-Assisted Reading Performance in Differentiating Thyroid Cancers. Biomedicines 2022; 10:biomedicines10071513. [PMID: 35884818 PMCID: PMC9313277 DOI: 10.3390/biomedicines10071513] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2022] [Revised: 06/20/2022] [Accepted: 06/24/2022] [Indexed: 11/16/2022] Open
Abstract
For ultrasound imaging of thyroid nodules, medical guidelines base their management recommendations on findings of sonographic features. With the recent development of artificial intelligence and machine learning (AI/ML) technologies, computer-assisted detection (CAD) software devices have become available for clinical use to detect and quantify the sonographic features of thyroid nodules. This study aims to validate the accuracy of the computerized sonographic features (CSF) produced by a CAD software device, namely AmCAD-UT, and then to assess how clinicians' (readers') reading performance can be improved when the computerized features are provided. Feature detection accuracy was tested against a ground truth established by a panel of thyroid specialists, and a multiple-reader multiple-case (MRMC) study was performed to assess sequential reading performance with the assistance of the CSF. Five computerized features were tested: anechoic area, hyperechoic foci, hypoechoic pattern, heterogeneous texture, and indistinct margin, with AUCs ranging over 0.888–0.946, 0.825–0.913, 0.812–0.847, 0.627–0.770, and 0.676–0.766, respectively. With the five CSFs, the sequential reading performance of 18 clinicians improved significantly, with the AUC increasing from 0.720 without CSF to 0.776 with CSF. Our study shows that the computerized features are consistent with clinicians' findings and provide additional value in assisting sonographic diagnosis.
Collapse
Affiliation(s)
- Hao-Chih Tai
- Department of Surgery, National Taiwan University Hospital and College of Medicine, Taipei 100225, Taiwan; (H.-C.T.); (K.-Y.C.); (M.-H.W.); (K.-J.C.)
| | - Kuen-Yuan Chen
- Department of Surgery, National Taiwan University Hospital and College of Medicine, Taipei 100225, Taiwan; (H.-C.T.); (K.-Y.C.); (M.-H.W.); (K.-J.C.)
| | - Ming-Hsun Wu
- Department of Surgery, National Taiwan University Hospital and College of Medicine, Taipei 100225, Taiwan; (H.-C.T.); (K.-Y.C.); (M.-H.W.); (K.-J.C.)
| | - King-Jen Chang
- Department of Surgery, National Taiwan University Hospital and College of Medicine, Taipei 100225, Taiwan; (H.-C.T.); (K.-Y.C.); (M.-H.W.); (K.-J.C.)
| | - Chiung-Nien Chen
- Department of Surgery, National Taiwan University Hospital and College of Medicine, Taipei 100225, Taiwan; (H.-C.T.); (K.-Y.C.); (M.-H.W.); (K.-J.C.)
- Correspondence: (C.-N.C.); (A.C.)
| | - Argon Chen
- Graduate Institute of Industrial Engineering, National Taiwan University, Taipei 106216, Taiwan
- Correspondence: (C.-N.C.); (A.C.)
| |
Collapse
|
36
|
Lee S, Shin HJ, Kim S, Kim EK. Successful Implementation of an Artificial Intelligence-Based Computer-Aided Detection System for Chest Radiography in Daily Clinical Practice. Korean J Radiol 2022; 23:847-852. [PMID: 35762186 PMCID: PMC9434734 DOI: 10.3348/kjr.2022.0193] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2022] [Revised: 04/25/2022] [Accepted: 05/19/2022] [Indexed: 12/31/2022] Open
Affiliation(s)
- Seungsoo Lee
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin, Korea
| | - Hyun Joo Shin
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin, Korea
| | - Sungwon Kim
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
| | - Eun-Kyung Kim
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin, Korea.
| |
Collapse
|
37
|
Wang Y, Cai H, Pu Y, Li J, Yang F, Yang C, Chen L, Hu Z. The value of AI in the Diagnosis, Treatment, and Prognosis of Malignant Lung Cancer. FRONTIERS IN RADIOLOGY 2022; 2:810731. [PMID: 37492685 PMCID: PMC10365105 DOI: 10.3389/fradi.2022.810731] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/07/2021] [Accepted: 03/30/2022] [Indexed: 07/27/2023]
Abstract
Malignant tumors are a serious public health threat. Among them, lung cancer, which has the highest fatality rate globally, significantly endangers human health. With the development of artificial intelligence (AI) and its integration with medicine, AI research on malignant lung tumors has become critical. This article reviews the value of computer-aided detection (CAD), deep learning with neural networks, radiomics, molecular biomarkers, and digital pathology for the diagnosis, treatment, and prognosis of malignant lung tumors.
Collapse
Affiliation(s)
- Yue Wang
- Department of PET/CT Center, Cancer Center of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
| | - Haihua Cai
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Yongzhu Pu
- Department of PET/CT Center, Cancer Center of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
| | - Jindan Li
- Department of PET/CT Center, Cancer Center of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
| | - Fake Yang
- Department of PET/CT Center, Cancer Center of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
| | - Conghui Yang
- Department of PET/CT Center, Cancer Center of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
| | - Long Chen
- Department of PET/CT Center, Cancer Center of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
| | - Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| |
Collapse
|
38
|
Li J, Qi L, Chen Q, Zhang YD, Qian X. A dual meta-learning framework based on idle data for enhancing segmentation of pancreatic cancer. Med Image Anal 2022; 78:102342. [PMID: 35354108 DOI: 10.1016/j.media.2021.102342] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2021] [Revised: 11/08/2021] [Accepted: 12/23/2021] [Indexed: 11/16/2022]
Abstract
Automated segmentation of pancreatic cancer is vital for clinical diagnosis and treatment. However, the small size and inconspicuous boundaries of these tumors limit segmentation performance, which is further exacerbated for deep learning techniques when training samples are few, owing to the high threshold of image acquisition and annotation. To alleviate this small-scale dataset issue, we collected idle multi-parametric MRIs of pancreatic cancer from different studies to construct a relatively large dataset for enhancing CT pancreatic cancer segmentation. We therefore propose a deep learning segmentation model with a dual meta-learning framework for pancreatic cancer. It integrates the common knowledge of tumors obtained from idle MRIs with salient knowledge from CT images, making high-level features more discriminative. Specifically, random intermediate modalities between MRI and CT are first generated to smoothly bridge the gap in visual appearance and to provide rich intermediate representations for the ensuing meta-learning scheme. Subsequently, we employ intermediate-modality-based model-agnostic meta-learning to capture and transfer commonalities. Finally, a meta-optimizer is used to adaptively learn the salient features within CT data, alleviating interference due to internal differences. Comprehensive experimental results demonstrated that our method achieves promising segmentation performance, with a maximum Dice score of 64.94% on our private dataset, and outperforms state-of-the-art methods on a public pancreatic cancer CT dataset. The proposed method is an effective pancreatic cancer segmentation framework that can be easily integrated into other segmentation networks, and it thus promises to be a potential paradigm for alleviating data scarcity challenges using idle data.
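The model-agnostic meta-learning step at the heart of such a framework can be sketched as follows (assuming PyTorch >= 2.0 for torch.func); the loss, networks, and task sampling are placeholders, and the paper's dual framework adds a meta-optimizer on top of this general technique.

```python
# A minimal sketch of one MAML step: adapt on a support batch from one
# "task" (e.g., an intermediate modality), evaluate on a query batch.
import torch

def maml_step(model, loss_fn, support, query, inner_lr=0.01):
    x_s, y_s = support            # batch from one intermediate modality ("task")
    x_q, y_q = query              # held-out batch from the same task
    params = {n: p for n, p in model.named_parameters()}
    # Inner loop: one gradient step on the support set (create_graph keeps
    # second-order gradients available for the outer update).
    inner_loss = loss_fn(torch.func.functional_call(model, params, (x_s,)), y_s)
    grads = torch.autograd.grad(inner_loss, list(params.values()), create_graph=True)
    adapted = {n: p - inner_lr * g for (n, p), g in zip(params.items(), grads)}
    # Outer objective: loss of the adapted parameters on the query set.
    return loss_fn(torch.func.functional_call(model, adapted, (x_q,)), y_q)

# Usage: meta_loss = maml_step(...); meta_loss.backward(); meta_opt.step()
```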
Collapse
Affiliation(s)
- Jun Li
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, China
| | - Liang Qi
- Department of Radiology, the First Affiliated Hospital with Nanjing Medical University, Nanjing, 210009, China
| | - Qingzhong Chen
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, China
| | - Yu-Dong Zhang
- Department of Radiology, the First Affiliated Hospital with Nanjing Medical University, Nanjing, 210009, China
| | - Xiaohua Qian
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, China.
| |
Collapse
|
39
|
Fiscal MRC, Treviño V, Treviño LJR, López RO, Cardona Huerta S, Javier Lara-Díaz V, Peña JGT. COVID-19 classification using thermal images. JOURNAL OF BIOMEDICAL OPTICS 2022; 27:JBO-210328GRR. [PMID: 35585679 PMCID: PMC9116467 DOI: 10.1117/1.jbo.27.5.056003] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/24/2021] [Accepted: 04/12/2022] [Indexed: 06/15/2023]
Abstract
SIGNIFICANCE There is a scarcity of published research on the potential role of thermal imaging in the remote detection of respiratory issues due to coronavirus disease-19 (COVID-19). This comprehensive study explores the potential of this imaging technology, whose convenience makes it highly accessible: it is contactless, noninvasive, free of harmful radiation effects, and does not require a complicated installation process. AIM We aim to investigate the role of thermal imaging, specifically thermal video, in identifying SARS-CoV-2-infected people using infrared technology, and to explore the role of breathing patterns in different parts of the thorax in identifying possible COVID-19 infection. APPROACH We used signal moment, signal texture, and shape moment features extracted from five body regions of interest (whole upper body, chest, face, back, and side) in images obtained from thermal video clips processed with optical flow and super-resolution. These features were classified as COVID-19 positive or negative using machine learning strategies. RESULTS COVID-19 detection for male models [area under the receiver operating characteristic curve (AUC) = 0.605, 95% confidence interval (CI) 0.58 to 0.64] is more reliable than for female models (AUC = 0.577, 95% CI 0.55 to 0.61). Overall, thermal imaging is neither very sensitive nor specific in detecting COVID-19; the metrics were below 60% except for the chest view in males. CONCLUSIONS We conclude that, although it may be possible to remotely identify some individuals affected by COVID-19, at this time the diagnostic performance of current body thermal imaging methods is not good enough for use as a mass screening tool.
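A schematic of the pipeline shape, dense optical flow over thermal frames followed by simple moment features and a conventional classifier, is given below; the descriptors and classifier are illustrative stand-ins for the study's signal moment, texture, and shape features.

```python
# A minimal sketch (ROI selection and labels are placeholders) of thermal-video
# breathing features via Farneback optical flow, fed to a standard classifier.
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def breathing_features(frames):
    """Per-clip features from dense optical flow magnitudes."""
    mags = []
    for prev, nxt in zip(frames, frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mags.append(np.linalg.norm(flow, axis=2).mean())  # mean motion per frame pair
    m = np.asarray(mags)
    return [m.mean(), m.std(), m.max(), np.percentile(m, 90)]  # simple signal moments

clips = [np.random.randint(0, 255, (30, 120, 160), np.uint8) for _ in range(20)]
X = np.array([breathing_features(list(c)) for c in clips])
y = np.random.randint(0, 2, len(clips))        # placeholder COVID-19 labels
clf = RandomForestClassifier(n_estimators=200).fit(X, y)
```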
Collapse
Affiliation(s)
| | - Victor Treviño
- Tecnologico de Monterrey, Escuela de Medicina y Ciencias de la Salud, Monterrey, Nuevo León, México
- Tecnologico de Monterrey, The Institute for Obesity Research, Integrative Biology Unit, Monterrey, Nuevo Leon, México
| | | | - Rocio Ortiz López
- Tecnologico de Monterrey, Escuela de Medicina y Ciencias de la Salud, Monterrey, Nuevo León, México
- Tecnologico de Monterrey, The Institute for Obesity Research, Integrative Biology Unit, Monterrey, Nuevo Leon, México
- Tecnologico de Monterrey, Hospital Zambrano Hellion, San Pedro Garza García, Nuevo León, México
| | - Servando Cardona Huerta
- Tecnologico de Monterrey, Hospital Zambrano Hellion, San Pedro Garza García, Nuevo León, México
| | - Victor Javier Lara-Díaz
- Tecnologico de Monterrey, Escuela de Medicina y Ciencias de la Salud, Monterrey, Nuevo León, México
| | - José Gerardo Tamez Peña
- Tecnologico de Monterrey, Escuela de Medicina y Ciencias de la Salud, Monterrey, Nuevo León, México
| |
Collapse
|
40
|
Huhtanen H, Nyman M, Mohsen T, Virkki A, Karlsson A, Hirvonen J. Automated detection of pulmonary embolism from CT-angiograms using deep learning. BMC Med Imaging 2022; 22:43. [PMID: 35282821 PMCID: PMC8919639 DOI: 10.1186/s12880-022-00763-z] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2021] [Accepted: 02/21/2022] [Indexed: 12/22/2022] Open
Abstract
Background The aim of this study was to develop and evaluate a deep neural network model for the automated detection of pulmonary embolism (PE) from computed tomography pulmonary angiograms (CTPAs) using only weakly labelled training data. Methods We developed a deep neural network model consisting of two parts: a convolutional neural network architecture called InceptionResNet V2 and a long short-term memory (LSTM) network to process whole CTPA stacks as sequences of slices. Two versions of the model were created using either chest X-rays (Model A) or natural images (Model B) as pre-training data. We retrospectively collected 600 CTPAs for training and validation and 200 CTPAs for testing. CTPAs were annotated only with binary labels at both the stack and slice levels. Performance of the models was evaluated with ROC and precision–recall curves, specificity, sensitivity, accuracy, and positive and negative predictive values. Results Both models performed well at both the stack and slice levels. At the stack level, Model A reached a specificity and sensitivity of 93.5% and 86.6%, respectively, slightly outperforming Model B (specificity 90.7%, sensitivity 83.5%). However, the difference between their ROC AUC scores was not statistically significant (0.94 vs 0.91, p = 0.07). Conclusions We show that a deep learning model trained with a relatively small, weakly annotated dataset can achieve excellent performance in detecting PE from CTPAs. Supplementary Information The online version contains supplementary material available at 10.1186/s12880-022-00763-z.
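The two-part design can be sketched generically as below; a torchvision ResNet-18 stands in for InceptionResNet V2 (which torchvision does not provide), and the stack length, hidden size, and head are assumptions.

```python
# A minimal sketch of the CNN + LSTM design: a 2D CNN encodes each CTPA slice,
# and an LSTM aggregates the slice sequence into a stack-level PE probability.
import torch
import torch.nn as nn
from torchvision import models

class SlicesToStack(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        backbone = models.resnet18(weights=None)  # stand-in for InceptionResNet V2
        backbone.fc = nn.Identity()               # 512-d feature per slice
        self.cnn = backbone
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, stack):                     # stack: (B, S, 3, H, W)
        b, s = stack.shape[:2]
        feats = self.cnn(stack.flatten(0, 1)).view(b, s, -1)
        _, (h, _) = self.lstm(feats)              # final hidden state summarizes the stack
        return torch.sigmoid(self.head(h[-1]))    # stack-level PE probability

prob = SlicesToStack()(torch.randn(2, 40, 3, 224, 224))
```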
Collapse
Affiliation(s)
- Heidi Huhtanen
- Department of Radiology, University of Turku and Turku University Hospital, Turku, Finland.
| | - Mikko Nyman
- Department of Radiology, University of Turku and Turku University Hospital, Turku, Finland
| | | | - Arho Virkki
- Auria Clinical Informatics, Turku University Hospital, Turku, Finland; Department of Mathematics and Statistics, University of Turku, Turku, Finland
| | - Antti Karlsson
- Auria Biobank, Turku University Hospital, University of Turku, Turku, Finland
| | - Jussi Hirvonen
- Department of Radiology, University of Turku and Turku University Hospital, Turku, Finland
| |
Collapse
|
41
|
Advancements in Oncology with Artificial Intelligence—A Review Article. Cancers (Basel) 2022; 14:cancers14051349. [PMID: 35267657 PMCID: PMC8909088 DOI: 10.3390/cancers14051349] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2022] [Revised: 02/26/2022] [Accepted: 02/28/2022] [Indexed: 02/05/2023] Open
Abstract
Simple Summary: With the advancement of artificial intelligence (AI), including machine learning, the field of oncology has seen promising results in cancer detection and classification, epigenetics, drug discovery, and prognostication. In this review, we describe what artificial intelligence is and how it functions, and comprehensively summarize its evolution and role in breast, colorectal, and central nervous system cancers. Understanding its origins and current accomplishments may be essential to improving the quality, accuracy, generalizability, cost-effectiveness, and reliability of AI models that can be used in worldwide clinical practice. Students and researchers in the medical field will benefit from a deeper understanding of how to use integrative AI in oncology for innovation and research.
Abstract: Well-trained machine learning (ML) and artificial intelligence (AI) systems can provide clinicians with therapeutic assistance, potentially increasing efficiency and improving efficacy. ML has demonstrated high accuracy in oncology-related diagnostic imaging, including screening mammography interpretation, colon polyp detection, and glioma classification and grading. ML techniques greatly reduce the manual steps of detecting and segmenting lesions. ML-based tumor imaging analysis is independent of the experience level of the evaluating physicians, and its results are expected to be more standardized and accurate. One of the biggest challenges is worldwide generalizability. The current detection and screening methods for colon polyps and breast cancer are backed by vast amounts of data, making them ideal areas for studying the global standardization of artificial intelligence. Central nervous system (CNS) cancers are rare and have poor prognoses under current management standards. ML offers the prospect of unraveling undiscovered features from routinely acquired neuroimaging to improve treatment planning, prognostication, monitoring, and response assessment of CNS tumors such as gliomas. Studying AI in such rare cancer types may improve standard management methods by augmenting personalized/precision medicine. This review aims to provide clinicians and medical researchers with a basic understanding of how ML works and of its role in oncology, especially in breast cancer, colorectal cancer, and primary and metastatic brain cancer. Understanding AI basics, current achievements, and future challenges is crucial to advancing the use of AI in oncology.
Collapse
|
42
|
Hong J, Yu SCH, Chen W. Unsupervised domain adaptation for cross-modality liver segmentation via joint adversarial learning and self-learning. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.108729] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
|
43
|
Pellicer-Valero OJ, Marenco Jiménez JL, Gonzalez-Perez V, Casanova Ramón-Borja JL, Martín García I, Barrios Benito M, Pelechano Gómez P, Rubio-Briones J, Rupérez MJ, Martín-Guerrero JD. Deep learning for fully automatic detection, segmentation, and Gleason grade estimation of prostate cancer in multiparametric magnetic resonance images. Sci Rep 2022; 12:2975. [PMID: 35194056 PMCID: PMC8864013 DOI: 10.1038/s41598-022-06730-6] [Citation(s) in RCA: 35] [Impact Index Per Article: 11.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2021] [Accepted: 02/03/2022] [Indexed: 02/07/2023] Open
Abstract
Although the emergence of multi-parametric magnetic resonance imaging (mpMRI) has had a profound impact on the diagnosis of prostate cancer (PCa), analyzing these images remains complex even for experts. This paper proposes a fully automatic Deep Learning system that performs localization, segmentation, and Gleason grade group (GGG) estimation of PCa lesions from prostate mpMRIs. It uses 490 mpMRIs for training/validation and 75 for testing from two different datasets: ProstateX and the Valencian Oncology Institute Foundation (IVO). In the test set, it achieves an excellent lesion-level AUC/sensitivity/specificity for the GGG ≥ 2 significance criterion of 0.96/1.00/0.79 on the ProstateX dataset and 0.95/1.00/0.80 on the IVO dataset. At the patient level, the results are 0.87/1.00/0.375 on ProstateX and 0.91/1.00/0.762 on IVO. Furthermore, on the online ProstateX grand challenge, the model obtained an AUC of 0.85 (0.87 when trained only on the ProstateX data, tying with the original winner of the challenge). For expert comparison, the IVO radiologist's PI-RADS 4 sensitivity/specificity were 0.88/0.56 at the lesion level and 0.85/0.58 at the patient level. The full code for the ProstateX-trained model is openly available at https://github.com/OscarPellicer/prostate_lesion_detection. We hope that this will serve as a landmark for future research to use, compare against, and improve upon.
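The patient-level figures above are derived from lesion-level outputs. One common aggregation rule (the abstract does not state the exact rule used, so this is an assumption) scores each patient by their most suspicious lesion; a minimal sketch with made-up scores:

```python
# Hypothetical sketch: aggregate lesion-level GGG >= 2 scores to patient
# level by taking each patient's maximum lesion score, then compute AUC.
# Patient IDs, scores, and labels below are invented for illustration.
from sklearn.metrics import roc_auc_score

lesion_scores = {"pt1": [0.20, 0.91], "pt2": [0.12], "pt3": [0.44, 0.31]}
patient_labels = {"pt1": 1, "pt2": 0, "pt3": 0}  # any GGG >= 2 lesion?

y_true = [patient_labels[p] for p in lesion_scores]
y_score = [max(scores) for scores in lesion_scores.values()]
print("patient-level AUC:", roc_auc_score(y_true, y_score))
```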
Collapse
Affiliation(s)
- Oscar J Pellicer-Valero
- Intelligent Data Analysis Laboratory, Department of Electronic Engineering, ETSE (Engineering School), Universitat de València (UV), Av. Universitat, sn, 46100, Burjassot, Valencia, Spain.
| | - José L Marenco Jiménez
- Department of Urology, Fundación Instituto Valenciano de Oncología (FIVO), Beltrán Báguena, 8, 46009, Valencia, Spain
| | - Victor Gonzalez-Perez
- Department of Medical Physics, Fundación Instituto Valenciano de Oncología (FIVO), Beltrán Báguena, 8, 46009, Valencia, Spain
| | | | - Isabel Martín García
- Department of Radiodiagnosis, Fundación Instituto Valenciano de Oncología (FIVO), Beltrán Báguena, 8, 46009, Valencia, Spain
| | - María Barrios Benito
- Department of Radiodiagnosis, Fundación Instituto Valenciano de Oncología (FIVO), Beltrán Báguena, 8, 46009, Valencia, Spain
| | - Paula Pelechano Gómez
- Department of Radiodiagnosis, Fundación Instituto Valenciano de Oncología (FIVO), Beltrán Báguena, 8, 46009, Valencia, Spain
| | - José Rubio-Briones
- Department of Urology, Fundación Instituto Valenciano de Oncología (FIVO), Beltrán Báguena, 8, 46009, Valencia, Spain
| | - María José Rupérez
- Instituto de Ingeniería Mecánica y Biomecánica, Universitat Politècnica de València (UPV), Camino de Vera, sn, 46022, Valencia, Spain
| | - José D Martín-Guerrero
- Intelligent Data Analysis Laboratory, Department of Electronic Engineering, ETSE (Engineering School), Universitat de València (UV), Av. Universitat, sn, 46100, Burjassot, Valencia, Spain
| |
Collapse
|
44
|
AI musculoskeletal clinical applications: how can AI increase my day-to-day efficiency? Skeletal Radiol 2022; 51:293-304. [PMID: 34341865 DOI: 10.1007/s00256-021-03876-8] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/28/2021] [Revised: 07/21/2021] [Accepted: 07/21/2021] [Indexed: 02/02/2023]
Abstract
Artificial intelligence (AI) is expected to bring greater efficiency to radiology by performing tasks that would otherwise require human intelligence, and at a much faster rate than humans can achieve. In recent years, milestone deep learning models with unprecedentedly low error rates and high computational efficiency have shown remarkable performance on lesion detection, classification, and segmentation tasks. However, the growing field of AI has significant implications for radiology that are not limited to visual tasks, with essential applications for optimizing imaging workflows and improving noninterpretive tasks. This article offers an overview of the recent AI literature, focusing on the musculoskeletal imaging chain: initial patient scheduling, optimized protocoling, magnetic resonance imaging reconstruction, image enhancement, medical image-to-image translation, and AI-aided image interpretation. The substantial development of advanced algorithms, the emergence of massive quantities of medical data, and the interest of researchers and clinicians reveal the potential for growing applications of AI to augment the day-to-day efficiency of musculoskeletal radiologists.
Collapse
|
45
|
Jordan P, Adamson PM, Bhattbhatt V, Beriwal S, Shen S, Radermecker O, Bose S, Strain LS, Offe M, Fraley D, Principi S, Ye DH, Wang AS, Van Heteren J, Vo NJ, Schmidt TG. Pediatric chest-abdomen-pelvis and abdomen-pelvis CT images with expert organ contours. Med Phys 2022; 49:3523-3528. [PMID: 35067940 PMCID: PMC9090951 DOI: 10.1002/mp.15485] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2021] [Revised: 12/26/2021] [Accepted: 12/31/2021] [Indexed: 11/09/2022] Open
Abstract
PURPOSE Organ autosegmentation efforts to date have largely focused on adult populations, owing to the limited availability of pediatric training data, and pediatric patients may present additional challenges for organ segmentation. This paper describes a dataset of 359 pediatric chest-abdomen-pelvis and abdomen-pelvis CT images with expert contours of up to 29 anatomical organ structures to aid in the evaluation and development of autosegmentation algorithms for pediatric CT imaging. ACQUISITION AND VALIDATION METHODS The dataset consists of axial CT images in DICOM format from 180 male and 179 female pediatric chest-abdomen-pelvis or abdomen-pelvis exams acquired on one of three CT scanners at Children's Wisconsin. The datasets represent random pediatric cases scanned for routine clinical indications. Subjects ranged in age from 5 days to 16 years, with a mean age of seven years. The CT acquisition, contrast, and reconstruction protocols varied across scanner models and patients, with specifications available in the DICOM headers. Expert contours were manually labeled for up to 29 organ structures per subject. Not all contours are available for all subjects, owing to limited field of view or unreliable contouring caused by high noise. DATA FORMAT AND USAGE NOTES The data are available on TCIA (https://www.cancerimagingarchive.net/) under the collection Pediatric-CT-SEG. The axial CT image slices for each subject are available in DICOM format. The expert contours are stored in a single DICOM RTSTRUCT file for each subject, with contour names as listed in Table 2. POTENTIAL APPLICATIONS This dataset will enable the evaluation and development of organ autosegmentation algorithms for pediatric populations, which exhibit variations in organ shape and size across ages. Automated organ segmentation from CT images has numerous applications, including radiation therapy, diagnostic tasks, surgical planning, and patient-specific organ dose estimation.
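For readers who want to work with the collection, the per-subject RTSTRUCT file described above can be parsed with standard DICOM tooling. A minimal sketch using pydicom follows; the file path is hypothetical, while the attribute names are standard DICOM RTSTRUCT fields:

```python
# Minimal sketch, assuming pydicom is installed and the Pediatric-CT-SEG
# files have been downloaded from TCIA; the path below is hypothetical.
import numpy as np
import pydicom

def load_rtstruct_contours(rtstruct_path):
    """Read organ names and raw contour points from a DICOM RTSTRUCT file."""
    ds = pydicom.dcmread(rtstruct_path)
    # Map ROI numbers to organ names as stored in the structure set.
    roi_names = {roi.ROINumber: roi.ROIName for roi in ds.StructureSetROISequence}
    contours = {}
    for roi in ds.ROIContourSequence:
        name = roi_names[roi.ReferencedROINumber]
        points = []
        for c in getattr(roi, "ContourSequence", []):
            # ContourData is a flat list of x, y, z patient coordinates (mm).
            points.append(np.array(c.ContourData, dtype=float).reshape(-1, 3))
        contours[name] = points
    return contours

contours = load_rtstruct_contours("Pediatric-CT-SEG/case_001/rtstruct.dcm")
print(sorted(contours))  # organ structure names available for this subject
```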
Collapse
Affiliation(s)
| | | | | | | | | | | | | | | | - Michael Offe
- Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, Milwaukee, WI
| | - David Fraley
- Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, Milwaukee, WI
| | - Sara Principi
- Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, Milwaukee, WI
| | - Dong Hye Ye
- Department of Electrical Engineering, Marquette University, Milwaukee, WI
| | - Adam S Wang
- Department of Radiology, Stanford University, Stanford, CA
| | | | - Nghia-Jack Vo
- Department of Radiology, Medical College of Wisconsin, Milwaukee, WI
| | - Taly Gilat Schmidt
- Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, Milwaukee, WI
| |
Collapse
|
46
|
Automated Detection and Characterization of Colon Cancer with Deep Convolutional Neural Networks. J Healthc Eng 2022; 2022:5269913. [PMID: 36704098 PMCID: PMC9873459 DOI: 10.1155/2022/5269913] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/13/2021] [Revised: 06/22/2022] [Accepted: 07/14/2022] [Indexed: 01/31/2023]
Abstract
Colon cancer is a major cause of illness and death, and its conclusive diagnosis is made through histological examination. With the introduction of whole-slide imaging, convolutional neural networks are being used to analyze colon cancer via digital image processing, and accurate categorization of colon cancers is necessary for competent analysis. Our objective is to develop a system for detecting and classifying colon adenocarcinomas by applying a deep convolutional neural network (DCNN) model, together with preprocessing techniques, to digital histopathology images. Although both traditional and modern methods can identify cancerous regions of various types after examining large numbers of colon cancer images, colon cancer remains a leading cause of cancer-related death. The fundamental problem for colon histopathologists is differentiating benign from malignant disease, which is complicated by several factors. A cancer diagnosis can be automated through artificial intelligence (AI), enabling clinicians to evaluate more patients in less time and at lower cost; modern deep learning (MDL) and digital image processing (DIP) approaches are used to accomplish this. The results indicate that the proposed architecture can classify cancer tissues with an accuracy of up to 99.80%. By implementing this approach, medical practitioners gain an automated and reliable system for detecting various forms of colon cancer. Moreover, CAD systems may be built in the near future that extract numerous features from colonoscopic images as a preprocessing module for colon cancer diagnosis.
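To make the described pipeline concrete, here is a minimal sketch of a histopathology patch classifier. The paper does not publish its architecture, so the ResNet50 backbone, the transforms, and the two-class head below are our own illustrative assumptions, not the authors' model:

```python
# Illustrative sketch of a DCNN colon-histopathology classifier; the
# backbone, input size, and normalization constants are assumptions.
import torch
import torch.nn as nn
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),  # standardize patch size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)  # benign vs. adenocarcinoma
model.eval()

def predict(patch_pil):
    """Return class probabilities for one histopathology patch (PIL image)."""
    x = preprocess(patch_pil).unsqueeze(0)
    with torch.no_grad():
        return torch.softmax(model(x), dim=1)[0]
```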
Collapse
|
47
|
Lee CE, Chung M, Shin YG. Voxel-level Siamese Representation Learning for Abdominal Multi-Organ Segmentation. Comput Methods Programs Biomed 2022; 213:106547. [PMID: 34839269 DOI: 10.1016/j.cmpb.2021.106547] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/12/2021] [Revised: 10/18/2021] [Accepted: 11/15/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND AND OBJECTIVE Recent works in medical image segmentation have actively explored various deep learning architectures and objective functions to encode high-level features from volumetric data, owing to limited image annotations. However, most existing approaches tend to ignore cross-volume global context and define context relations in the decision space. In this work, we propose a novel voxel-level Siamese representation learning method for abdominal multi-organ segmentation that improves the representation space. METHODS The proposed method enforces voxel-wise feature relations in the representation space to leverage limited datasets more comprehensively and achieve better performance. Inspired by recent progress in contrastive learning, we constrain voxel-wise features from the same class to be projected to the same point, without using negative samples. Moreover, we introduce a multi-resolution context aggregation method that aggregates features from multiple hidden layers, encoding both global and local context for segmentation. RESULTS In our experiments on a multi-organ dataset, the proposed method outperformed existing approaches by 2% in Dice coefficient. Qualitative visualizations of the representation spaces demonstrate that the improvements were gained primarily from a disentangled feature space. CONCLUSION Our new representation learning method successfully encoded high-level features in the representation space using a limited dataset and showed superior accuracy in medical image segmentation compared with other contrastive-loss-based methods. Moreover, our method can easily be applied to other networks without additional parameters at inference.
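To make the core idea concrete (same-class voxel features pulled toward a common point, with no negative samples), here is an illustrative PyTorch sketch. It is a simplification under our own assumptions, pooling each class into a prototype per view, and is not the authors' implementation:

```python
# Sketch of a negative-free, voxel-wise consistency loss in the spirit
# described above; the exact formulation in the paper may differ.
import torch
import torch.nn.functional as F

def voxelwise_positive_loss(feat_a, feat_b, labels):
    """feat_a, feat_b: (N, C, D, H, W) features from two augmented views.
    labels: (N, D, H, W) voxel-wise class labels."""
    a = F.normalize(feat_a, dim=1)
    b = F.normalize(feat_b, dim=1)
    classes = labels.unique()
    loss = feat_a.new_zeros(())
    for cls in classes:
        mask = labels == cls  # voxels belonging to one class
        # Class prototype per view: mean feature over the masked voxels.
        proto_a = a.permute(0, 2, 3, 4, 1)[mask].mean(dim=0)
        proto_b = b.permute(0, 2, 3, 4, 1)[mask].mean(dim=0)
        # Pull same-class representations together; no negatives are used.
        loss = loss + (1.0 - F.cosine_similarity(proto_a, proto_b, dim=0))
    return loss / len(classes)
```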
Collapse
Affiliation(s)
- Chae Eun Lee
- Department of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea
| | - Minyoung Chung
- School of Software, Soongsil University, 369 Sangdo-Ro, Dongjak-Gu, Seoul, 06978, Republic of Korea.
| | - Yeong-Gil Shin
- Department of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea
| |
Collapse
|
48
|
Joo B, Choi HS, Ahn SS, Cha J, Won SY, Sohn B, Kim H, Han K, Kim HP, Choi JM, Lee SM, Kim TG, Lee SK. A Deep Learning Model with High Standalone Performance for Diagnosis of Unruptured Intracranial Aneurysm. Yonsei Med J 2021; 62:1052-1061. [PMID: 34672139 PMCID: PMC8542476 DOI: 10.3349/ymj.2021.62.11.1052] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/22/2021] [Revised: 07/29/2021] [Accepted: 08/30/2021] [Indexed: 02/07/2023] Open
Abstract
PURPOSE This study aimed to investigate whether a deep learning model for automated detection of unruptured intracranial aneurysms on time-of-flight (TOF) magnetic resonance angiography (MRA) can achieve a target diagnostic performance comparable to that of human radiologists, as required for approval by the Korean Ministry of Food and Drug Safety as artificial intelligence-applied software. MATERIALS AND METHODS In this single-center, retrospective, confirmatory clinical trial, the diagnostic performance of the model was evaluated on a predetermined test set. After sample-size estimation, the test set consisted of 135 aneurysm-containing examinations with 168 intracranial aneurysms and 197 aneurysm-free examinations. The target sensitivity and specificity were set at 87% and 92%, respectively. The patient-wise sensitivity and specificity of the model were analyzed, and the lesion-wise sensitivity and false-positive detection rate per case were also investigated. RESULTS The sensitivity and specificity of the model were 91.11% [95% confidence interval (CI): 84.99, 95.32] and 93.91% (95% CI: 89.60, 96.81), respectively, meeting the target performance values. The lesion-wise sensitivity was 92.26%, and the overall false-positive detection rate per case was 0.123. Of the 168 aneurysms, 13 aneurysms from 12 examinations were missed by the model. CONCLUSION The present deep learning model for automated detection of unruptured intracranial aneurysms on TOF MRA achieved a target diagnostic performance comparable to that of human radiologists. With high standalone performance, this model may be useful for the accurate and efficient diagnosis of intracranial aneurysms.
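The confidence intervals above can be checked against the patient-wise counts implied by the abstract (123 of 135 aneurysm-containing examinations detected; 185 of 197 aneurysm-free examinations correctly cleared). The sketch below uses an exact Clopper-Pearson interval; the paper does not state which interval method was used, so that choice is an assumption:

```python
# Sketch: patient-wise sensitivity/specificity with exact 95% CIs,
# using counts derived from the abstract (91.11% of 135; 93.91% of 197).
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) two-sided confidence interval for k/n."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

tp, fn = 123, 12   # aneurysm-containing exams: detected vs. missed
tn, fp = 185, 12   # aneurysm-free exams: cleared vs. false positives
print(f"sensitivity {tp / (tp + fn):.4f}, 95% CI {clopper_pearson(tp, tp + fn)}")
print(f"specificity {tn / (tn + fp):.4f}, 95% CI {clopper_pearson(tn, tn + fp)}")
```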
Collapse
Affiliation(s)
- Bio Joo
- Department of Radiology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
| | - Hyun Seok Choi
- Department of Radiology, Seoul Medical Center, Seoul, Korea
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Image Data Science, Yonsei University College of Medicine, Seoul, Korea.
| | - Sung Soo Ahn
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Image Data Science, Yonsei University College of Medicine, Seoul, Korea
| | - Jihoon Cha
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Image Data Science, Yonsei University College of Medicine, Seoul, Korea
| | - So Yeon Won
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Image Data Science, Yonsei University College of Medicine, Seoul, Korea
| | - Beomseok Sohn
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Image Data Science, Yonsei University College of Medicine, Seoul, Korea
| | - Hwiyoung Kim
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Image Data Science, Yonsei University College of Medicine, Seoul, Korea
| | - Kyunghwa Han
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Image Data Science, Yonsei University College of Medicine, Seoul, Korea
| | | | | | | | | | - Seung-Koo Lee
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Image Data Science, Yonsei University College of Medicine, Seoul, Korea
| |
Collapse
|
49
|
Nomura Y, Hanaoka S, Takenaga T, Nakao T, Shibata H, Miki S, Yoshikawa T, Watadani T, Hayashi N, Abe O. Preliminary study of generalized semiautomatic segmentation for 3D voxel labeling of lesions based on deep learning. Int J Comput Assist Radiol Surg 2021; 16:1901-1913. [PMID: 34652606 DOI: 10.1007/s11548-021-02504-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2021] [Accepted: 09/17/2021] [Indexed: 11/28/2022]
Abstract
PURPOSE The three-dimensional (3D) voxel labeling of lesions requires significant radiologist effort in the development of computer-aided detection software. To reduce the time required for 3D voxel labeling, we aimed to develop a generalized semiautomatic segmentation method based on deep learning via a data augmentation-based domain generalization framework. In this study, we investigated whether a generalized semiautomatic segmentation model trained using two types of lesion can segment previously unseen types of lesion. METHODS We targeted lung nodules in chest CT images, liver lesions in hepatobiliary-phase images of Gd-EOB-DTPA-enhanced MR imaging, and brain metastases in contrast-enhanced MR images. For each lesion, a 32 × 32 × 32 isotropic volume of interest (VOI) around the lesion's center of gravity was extracted. The VOI was input into a 3D U-Net model to define the label of the lesion. For each type of target lesion, we compared five types of data augmentation and two types of input data. RESULTS For all target lesions, the highest Dice coefficients among the training patterns were obtained when combining the existing data augmentation-based domain generalization framework with random monochrome inversion and when using the resized VOI as the input image. The Dice coefficients were 0.639 ± 0.124 for lung nodules, 0.660 ± 0.137 for liver lesions, and 0.727 ± 0.115 for brain metastases. CONCLUSIONS Our generalized semiautomatic segmentation model could label three previously unseen types of lesion with contrasts that differ from their surroundings. In addition, using the resized VOI as the input image enables adaptation to lesions of various sizes, even when the size distribution differs between the training and test sets.
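The 32 × 32 × 32 VOI-extraction step lends itself to a short illustration. In the sketch below, the function names and the use of scipy for the center of gravity and trilinear resizing are our assumptions, not the authors' code; the crop is clamped so the cube stays inside the volume (assuming every volume dimension is at least 32):

```python
# Sketch: crop a cubic VOI around a lesion's center of gravity, and resize
# an arbitrary crop to the network's fixed input size.
import numpy as np
from scipy.ndimage import center_of_mass, zoom

def extract_voi(volume, lesion_mask, size=32):
    """Crop a size^3 VOI centered on the lesion's center of gravity."""
    center = np.round(center_of_mass(lesion_mask)).astype(int)
    half = size // 2
    # Clamp the start index so the crop lies fully inside the volume.
    start = np.clip(center - half, 0, np.array(volume.shape) - size)
    sl = tuple(slice(int(s), int(s) + size) for s in start)
    return volume[sl]

def resize_voi(voi, size=32):
    """Resize an arbitrary crop to the fixed input size (trilinear)."""
    factors = [size / s for s in voi.shape]
    return zoom(voi, factors, order=1)
```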
Collapse
Affiliation(s)
- Yukihiro Nomura
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan; Center for Frontier Medical Engineering, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba, 263-8522, Japan.
| | - Shouhei Hanaoka
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| | - Tomomi Takenaga
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| | - Takahiro Nakao
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| | - Hisaichi Shibata
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| | - Soichiro Miki
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| | - Takeharu Yoshikawa
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| | - Takeyuki Watadani
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| | - Naoto Hayashi
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| | - Osamu Abe
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| |
Collapse
|
50
|
Luo Y, Zhang Y, Sun X, Dai H, Chen X. Intelligent Solutions in Chest Abnormality Detection Based on YOLOv5 and ResNet50. J Healthc Eng 2021; 2021:2267635. [PMID: 34691373 PMCID: PMC8528629 DOI: 10.1155/2021/2267635] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/31/2021] [Revised: 08/31/2021] [Accepted: 09/18/2021] [Indexed: 11/23/2022]
Abstract
Computer-aided diagnosis (CAD) has a history of nearly fifty years and has assisted many clinicians in diagnosis. With the development of technology, recent research has applied deep learning methods to achieve highly accurate results in CAD systems. With CAD, the computer output can serve as a second opinion for radiologists and help doctors reach the final, correct decision. Chest abnormality detection is a classic detection and classification problem: researchers need to classify common thoracic lung diseases and localize critical findings. For the detection problem, there are two families of deep learning methods: one-stage and two-stage detectors. In this paper, we introduce and analyze representative models, such as RCNN, SSD, and the YOLO series. To better solve the chest abnormality detection problem, we propose a new model based on YOLOv5 and ResNet50. YOLOv5 is the latest of the YOLO series and is more flexible than earlier one-stage detection algorithms; its function in our paper is to localize the abnormal region. We use ResNet for classification, as its residual connections avoid gradient problems in deep networks, and we then filter the combined results: if ResNet recognizes that the image is not abnormal, the YOLOv5 detection result is discarded. The dataset was collected via VinBigData's web-based platform, VinLab. We train our model on this dataset using the PyTorch framework and use mAP, precision, and F1-score as the metrics to evaluate performance. In our experiments, our method achieved superior performance over other classical approaches on the same dataset: our model's mAP is 0.010, 0.020, and 0.023 higher than those of YOLOv5, Fast RCNN, and EfficientDet, respectively. Our model also performs better on precision, reaching 0.512, which is 0.018, 0.027, and 0.033 higher than that of YOLOv5, Fast RCNN, and EfficientDet, respectively.
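The filtering rule is simple enough to show directly. In the sketch below, the model objects, their output conventions, and the 0.5 threshold are illustrative assumptions; only the decision rule (drop YOLOv5 boxes when the classifier judges the image normal) comes from the abstract:

```python
# Sketch of the described YOLOv5 + ResNet50 filtering rule.
import torch

def filtered_detections(image, detector, classifier, threshold=0.5):
    """Return detector boxes only if the classifier flags an abnormality.

    image: (C, H, W) tensor; detector and classifier are trained modules,
    with the classifier assumed to output one abnormality logit per image.
    """
    with torch.no_grad():
        p_abnormal = torch.sigmoid(classifier(image.unsqueeze(0)))[0, 0]
        if p_abnormal < threshold:
            return []  # classifier says "no finding": discard all boxes
        return detector(image.unsqueeze(0))  # keep YOLOv5 localizations
```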
Collapse
Affiliation(s)
- Yu Luo
- China Three Gorges University, College of Computer and Information Technology, Yichang, China
| | - Yifan Zhang
- School of Software, Nanchang University, Nanchang, China
| | - Xize Sun
- Chenggong Campus, Yunnan University, Kunming, China
| | - Hengwei Dai
- Southwest University, College of Computer and Information Science, Chongqing, China
| | - Xiaohui Chen
- China Three Gorges University, College of Computer and Information Technology, Yichang, China
| |
Collapse
|