1
Wen T, Tong B, Fu Y, Li Y, Ling M, Chen X. A novel adjunctive diagnostic method for bone cancer: Osteosarcoma cell segmentation based on Twin Swin Transformer with multi-scale feature fusion. J Bone Oncol 2024; 49:100647. PMID: 39584045; PMCID: PMC11585710; DOI: 10.1016/j.jbo.2024.100647.
Abstract
Background Osteosarcoma, the most common primary bone tumor originating from osteoblasts, poses a significant challenge in medical practice, particularly among adolescents. Conventional diagnostic methods rely heavily on manual analysis of magnetic resonance imaging (MRI) scans, which often falls short of providing accurate and timely diagnosis. This underscores the critical need for advances in medical imaging technologies to improve the detection and characterization of osteosarcoma. Methods In this study, we sought to address the limitations of current diagnostic approaches by leveraging Hoechst-stained images of osteosarcoma cells obtained via fluorescence microscopy. Our primary objective was to enhance the segmentation of osteosarcoma cells, a crucial step in precise diagnosis and treatment planning. Recognizing the shortcomings of existing feature extraction networks in capturing detailed cellular structures, we propose a novel approach utilizing a Twin Swin Transformer architecture for osteosarcoma cell segmentation, with a focus on multi-scale feature fusion. Results The experimental findings demonstrate the effectiveness of the proposed Twin Swin Transformer with multi-scale feature fusion in significantly improving osteosarcoma cell segmentation. Compared to conventional techniques, our method achieves superior segmentation performance, highlighting its potential utility in clinical settings. Conclusion The Twin Swin Transformer with multi-scale feature fusion represents a significant advance in medical imaging for osteosarcoma diagnosis. By harnessing advanced computational techniques and high-resolution imaging data, our approach offers enhanced accuracy and efficiency in osteosarcoma cell segmentation, ultimately supporting better patient care and clinical decision-making.
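Multi-scale feature fusion of the kind described in this abstract can be illustrated schematically: coarse-scale feature maps are upsampled to the fine scale and combined element-wise. The following is a minimal, hypothetical 1-D sketch of that idea, not the authors' implementation:

```python
def upsample_nearest(features, factor):
    """Nearest-neighbour upsampling of a 1-D feature sequence."""
    return [v for v in features for _ in range(factor)]

def fuse_multiscale(fine, coarse):
    """Fuse a fine-scale feature sequence with a coarser one whose length
    divides it: upsample the coarse features, then add element-wise."""
    up = upsample_nearest(coarse, len(fine) // len(coarse))
    return [f + c for f, c in zip(fine, up)]
```

In a real network the fusion would operate on 2-D feature tensors and use learned upsampling, but the additive-fusion pattern is the same.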
Affiliation(s)
- Tingxi Wen
- College of Engineering, Huaqiao University, Quanzhou 362021, China
- Binbin Tong
- College of Engineering, Huaqiao University, Quanzhou 362021, China
- Yuqing Fu
- College of Engineering, Huaqiao University, Quanzhou 362021, China
- Yunfeng Li
- College of Engineering, Huaqiao University, Quanzhou 362021, China
- Mengde Ling
- College of Engineering, Huaqiao University, Quanzhou 362021, China
- Xinwen Chen
- College of Engineering, Huaqiao University, Quanzhou 362021, China
2
Yousefirizi F, Shiri I, O JH, Bloise I, Martineau P, Wilson D, Bénard F, Sehn LH, Savage KJ, Zaidi H, Uribe CF, Rahmim A. Semi-supervised learning towards automated segmentation of PET images with limited annotations: application to lymphoma patients. Phys Eng Sci Med 2024; 47:833-849. PMID: 38512435; DOI: 10.1007/s13246-024-01408-x.
Abstract
Manual segmentation poses a time-consuming challenge for disease quantification, therapy evaluation, treatment planning, and outcome prediction. Convolutional neural networks (CNNs) hold promise in accurately identifying tumor locations and boundaries in PET scans. However, a major hurdle is the extensive amount of supervised, annotated data necessary for training. To overcome this limitation, this study explores semi-supervised approaches utilizing unlabeled data, specifically focusing on PET images of diffuse large B-cell lymphoma (DLBCL) and primary mediastinal large B-cell lymphoma (PMBCL) obtained from two centers. We considered 2-[18F]FDG PET images of 292 patients (PMBCL: n = 104; DLBCL: n = 188), with n = 232 for training and validation and n = 60 for external testing. We harnessed classical wisdom embedded in traditional segmentation methods, such as the fuzzy clustering (FCM) loss function, to tailor the training strategy for a 3D U-Net model, incorporating both supervised and unsupervised learning. Various supervision levels were explored, including fully supervised methods with labeled FCM and a unified focal/Dice loss, unsupervised methods with robust FCM (RFCM) and Mumford-Shah (MS) loss, and semi-supervised methods combining the MS loss with supervised Dice loss (MS + Dice) or RFCM with labeled FCM (RFCM + FCM). The unified loss function yielded higher Dice scores (0.73 ± 0.11; 95% CI 0.67-0.8) than the Dice loss alone (p < 0.01). Among the semi-supervised approaches, RFCM + αFCM (α = 0.3) showed the best performance, with a Dice score of 0.68 ± 0.10 (95% CI 0.45-0.77), outperforming MS + αDice at any supervision level (any α) (p < 0.01). Another semi-supervised approach, MS + αDice (α = 0.2), achieved a Dice score of 0.59 ± 0.09 (95% CI 0.44-0.76), surpassing its other supervision levels (p < 0.01). Given the time-consuming nature of manual delineations and the inconsistencies they may introduce, semi-supervised approaches hold promise for automating medical imaging segmentation workflows.
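The α-weighted combination of an unsupervised and a supervised loss term that this abstract describes can be sketched as follows. This is a toy illustration: the smoothness penalty is a hypothetical stand-in for the Mumford-Shah loss, and the soft Dice term is simplified, so this is not the paper's implementation.

```python
def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss over flattened probability maps (lists of floats)."""
    inter = sum(p * t for p, t in zip(pred, target))
    sums = sum(pred) + sum(target)
    return 1.0 - (2.0 * inter + eps) / (sums + eps)

def smoothness_term(pred):
    """Toy unsupervised term: penalises differences between neighbouring
    predictions, standing in for a Mumford-Shah-style regulariser."""
    return sum((a - b) ** 2 for a, b in zip(pred, pred[1:]))

def semi_supervised_loss(pred, target, alpha):
    """Unsupervised term plus an alpha-weighted supervised term,
    mirroring the MS + alpha*Dice scheme schematically."""
    return smoothness_term(pred) + alpha * dice_loss(pred, target)
```

With α = 0 the loss is purely unsupervised; increasing α raises the weight of the labeled supervision, which is the knob the study varies.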
Affiliation(s)
- Fereshteh Yousefirizi
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Joo Hyun O
- College of Medicine, Seoul St. Mary's Hospital, The Catholic University of Korea, Seoul, Republic of Korea
- Don Wilson
- BC Cancer, Vancouver, BC, Canada
- Department of Radiology, University of British Columbia, Vancouver, Canada
- Laurie H Sehn
- BC Cancer, Vancouver, BC, Canada
- Centre for Lymphoid Cancer, BC Cancer, Vancouver, Canada
- Kerry J Savage
- BC Cancer, Vancouver, BC, Canada
- Centre for Lymphoid Cancer, BC Cancer, Vancouver, Canada
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
- University Research and Innovation Center, Óbuda University, Budapest, Hungary
- Carlos F Uribe
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
- BC Cancer, Vancouver, BC, Canada
- Department of Radiology, University of British Columbia, Vancouver, Canada
- Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
- BC Cancer, Vancouver, BC, Canada
- Department of Radiology, University of British Columbia, Vancouver, Canada
- Departments of Physics and Biomedical Engineering, University of British Columbia, Vancouver, Canada
3
Xu K, Kang H. A Review of Machine Learning Approaches for Brain Positron Emission Tomography Data Analysis. Nucl Med Mol Imaging 2024; 58:203-212. PMID: 38932757; PMCID: PMC11196571; DOI: 10.1007/s13139-024-00845-6.
Abstract
Positron emission tomography (PET) imaging has advanced medical diagnostics and research across various domains, including cardiology, neurology, infection detection, and oncology. The integration of machine learning (ML) algorithms into PET data analysis has further enhanced its capabilities, including disease diagnosis and classification, image segmentation, and quantitative analysis. ML algorithms empower researchers and clinicians to extract valuable insights from large, complex PET datasets, enabling automated pattern recognition, predictive modeling of health outcomes, and more efficient data analysis. This review explains the basics of PET imaging, statistical methods for PET image analysis, and the challenges of PET data analysis. We also discuss how combining PET data with machine learning algorithms improves analysis capabilities, and the applications of this combination across PET image research. The review also highlights current trends and future directions in PET imaging, emphasizing the critical role of machine learning and big PET image data analytics in improving diagnostic accuracy and personalizing medical care. The integration of PET imaging and machine learning will shape the future of medical diagnosis and research.
Affiliation(s)
- Ke Xu
- Department of Biostatistics, Vanderbilt University Medical Center, 2525 West End Avenue, Suite 1100, Nashville, TN 37203, USA
- Hakmook Kang
- Department of Biostatistics, Vanderbilt University Medical Center, 2525 West End Avenue, Suite 1100, Nashville, TN 37203, USA
4
Li W, Gou F, Wu J. Artificial intelligence auxiliary diagnosis and treatment system for breast cancer in developing countries. J Xray Sci Technol 2024; 32:395-413. PMID: 38189731; DOI: 10.3233/xst-230194.
Abstract
BACKGROUND In many developing countries, a significant number of breast cancer patients are unable to receive timely treatment due to a large population base, high patient numbers, and limited medical resources. OBJECTIVE This paper proposes a breast cancer assisted diagnosis system based on electronic medical records. The goal of this system is to address the limitations of existing systems, which primarily rely on structured electronic records and may miss crucial information stored in unstructured records. METHODS The system utilizes breast cancer enhanced convolutional neural networks with semantic initialization filters (BC-INIT-CNN). It extracts highly relevant tumor markers from unstructured medical records to aid breast cancer staging diagnosis, effectively utilizing the important information present in unstructured records. RESULTS The model's performance is assessed using various evaluation metrics, such as accuracy, ROC curves, and precision-recall curves. Comparative analysis demonstrates that the BC-INIT-CNN model outperforms several existing methods in terms of accuracy and computational efficiency. CONCLUSIONS The proposed system showcases the potential to address the challenges developing countries face in providing timely treatment to breast cancer patients. By leveraging unstructured medical records and extracting relevant tumor markers, the system enables accurate staging diagnosis and enhances the utilization of valuable information.
Affiliation(s)
- Wenxiu Li
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, China
- Fangfang Gou
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, China
- Jia Wu
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, China
- Research Center for Artificial Intelligence, Monash University, Clayton, VIC, Australia
5
Xing F, Silosky M, Ghosh D, Chin BB. Location-Aware Encoding for Lesion Detection in 68Ga-DOTATATE Positron Emission Tomography Images. IEEE Trans Biomed Eng 2024; 71:247-257. PMID: 37471190; DOI: 10.1109/tbme.2023.3297249.
Abstract
OBJECTIVE Lesion detection with positron emission tomography (PET) imaging is critical for tumor staging, treatment planning, and advancing novel therapies to improve patient outcomes, especially for neuroendocrine tumors (NETs). Current lesion detection methods often require manual cropping of regions/volumes of interest (ROIs/VOIs) a priori, rely on multi-stage cascaded models, or use multi-modality imaging to detect lesions in PET images. This leads to significant inefficiency, high variability, and/or potentially cumulative errors in lesion quantification. To tackle this issue, we propose a novel single-stage lesion detection method using only PET images. METHODS We design and incorporate a new, plug-and-play codebook learning module into a U-Net-like neural network to promote lesion location-specific feature learning at multiple scales. We explicitly regularize the codebook learning with direct supervision at the network's multi-level hidden layers, encouraging the network to learn multi-scale discriminative features for predicting lesion positions. The network automatically combines the predictions from the codebook learning module and other layers via a learnable fusion layer. RESULTS We evaluate the proposed method on a real-world clinical 68Ga-DOTATATE PET image dataset; our method produces significantly better lesion detection performance than recent state-of-the-art approaches. CONCLUSION We present a novel deep learning method for single-stage lesion detection in PET imaging data, with no advance ROI/VOI cropping, no multi-stage modeling, and no multi-modality data. SIGNIFICANCE This study provides a new perspective for effective and efficient lesion identification in PET, potentially accelerating novel therapeutic regimen development for NETs and ultimately improving patient outcomes, including survival.
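The "learnable fusion layer" combining predictions from several branches can be pictured as a weighted average whose weights are trainable parameters. A minimal sketch, assuming softmax-normalised scalar weights (the paper's actual fusion layer may differ):

```python
import math

def fuse_predictions(preds, weights):
    """Combine per-branch prediction maps (lists of floats, equal length)
    with softmax-normalised weights - a schematic stand-in for a
    learnable fusion layer whose weights are trained with the network."""
    exps = [math.exp(w) for w in weights]
    z = sum(exps)
    norm = [e / z for e in exps]
    return [sum(w * p[i] for w, p in zip(norm, preds))
            for i in range(len(preds[0]))]
```

During training, gradients would flow through the weights so the network can learn how much to trust each branch.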
6
Jaakkola MK, Rantala M, Jalo A, Saari T, Hentilä J, Helin JS, Nissinen TA, Eskola O, Rajander J, Virtanen KA, Hannukainen JC, López-Picón F, Klén R. Segmentation of Dynamic Total-Body [18F]-FDG PET Images Using Unsupervised Clustering. Int J Biomed Imaging 2023; 2023:3819587. PMID: 38089593; PMCID: PMC10715853; DOI: 10.1155/2023/3819587.
Abstract
Clustering the time-activity curves of PET images has been used to separate clinically relevant areas of the brain or tumours. However, PET image segmentation at the multiorgan level is much less studied, as the available total-body data have been limited to animal studies. Now that new PET scanners able to acquire total-body scans of humans are becoming more common, many clinically interesting opportunities are opening up. Organ-level segmentation of PET images therefore has important applications, yet it lacks sufficient research. In this proof-of-concept study, we evaluate whether previously used segmentation approaches are suitable for segmenting dynamic human total-body PET images at the organ level. Our focus is on general-purpose unsupervised methods that are independent of external data and can be used for all tracers, organisms, and health conditions. Additional anatomical image modalities, such as CT or MRI, are not used; segmentation is done purely from the dynamic PET images. The tested methods are common building blocks of more sophisticated methods rather than final methods as such, and our goal is to evaluate whether these basic tools are suited to the emerging task of human total-body PET image segmentation. First, we excluded methods that were computationally too demanding for the large datasets produced by human total-body PET scanners. These criteria filtered out most commonly used approaches, leaving only two clustering methods, k-means and the Gaussian mixture model (GMM), for further analysis. We combined k-means with two preprocessing approaches, principal component analysis (PCA) and independent component analysis (ICA), and selected a suitable number of clusters using 10 images. Finally, we tested how well the usable approaches segment the remaining PET images at the organ level, highlight the best approaches together with their limitations, and discuss how further research could tackle the observed shortcomings. In this study, we utilised 40 total-body [18F]fluorodeoxyglucose PET images of rats to mimic the coming large human PET images, plus a few actual human total-body images to ensure that our conclusions from the rat data generalise to human data. Our results show that ICA combined with k-means performs worse than the other two computationally usable approaches and that certain organs are easier to segment than others. While GMM performed sufficiently well, it was by far the slowest of the tested approaches, making k-means combined with PCA the most promising candidate for further development. However, even with the best methods, the mean Jaccard index was slightly below 0.5 for the easiest tested organ and below 0.2 for the most challenging organ. Thus, we conclude that an accurate and computationally light general-purpose segmentation method for dynamic total-body PET images is still lacking.
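The Jaccard index used to score the organ segmentations is the intersection-over-union of two binary masks. A minimal sketch over flattened masks (lists of 0/1 values):

```python
def jaccard_index(mask_a, mask_b):
    """Intersection-over-union of two equal-length binary masks.
    Two empty masks are treated as a perfect match."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a or b)
    return inter / union if union else 1.0
```

A mean Jaccard below 0.5, as reported here, means that on average less than half of the union of predicted and true organ voxels is shared.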
Affiliation(s)
- Maria K. Jaakkola
- Turku PET Centre, University of Turku, Turku, Finland
- Turku PET Centre, Turku University Hospital, Turku, Finland
- Maria Rantala
- Turku PET Centre, University of Turku, Turku, Finland
- Anna Jalo
- MediCity Research Laboratory, University of Turku, Turku, Finland
- PET Preclinical Laboratory, Turku PET Centre, University of Turku, Turku, Finland
- Teemu Saari
- Turku PET Centre, University of Turku, Turku, Finland
- Turku PET Centre, Turku University Hospital, Turku, Finland
- Jatta S. Helin
- MediCity Research Laboratory, University of Turku, Turku, Finland
- PET Preclinical Laboratory, Turku PET Centre, University of Turku, Turku, Finland
- Tuuli A. Nissinen
- MediCity Research Laboratory, University of Turku, Turku, Finland
- PET Preclinical Laboratory, Turku PET Centre, University of Turku, Turku, Finland
- Olli Eskola
- Radiopharmaceutical Chemistry Laboratory, Turku PET Centre, University of Turku, Turku, Finland
- Johan Rajander
- Accelerator Laboratory, Turku PET Centre, Åbo Akademi University, Turku, Finland
- Kirsi A. Virtanen
- Turku PET Centre, University of Turku, Turku, Finland
- Turku PET Centre, Turku University Hospital, Turku, Finland
- Francisco López-Picón
- Turku PET Centre, University of Turku, Turku, Finland
- MediCity Research Laboratory, University of Turku, Turku, Finland
- PET Preclinical Laboratory, Turku PET Centre, University of Turku, Turku, Finland
- Riku Klén
- Turku PET Centre, University of Turku, Turku, Finland
- Turku PET Centre, Turku University Hospital, Turku, Finland
7
Zhan X, Liu J, Long H, Zhu J, Tang H, Gou F, Wu J. An Intelligent Auxiliary Framework for Bone Malignant Tumor Lesion Segmentation in Medical Image Analysis. Diagnostics (Basel) 2023; 13:223. PMID: 36673032; PMCID: PMC9858155; DOI: 10.3390/diagnostics13020223.
Abstract
Bone malignant tumors are metastatic and aggressive, with poor treatment outcomes and prognosis. Rapid and accurate diagnosis is crucial for limb salvage and increasing the survival rate. There is a lack of research on deep learning for segmenting bone malignant tumor lesions in medical images with complex backgrounds and blurred boundaries. We therefore propose a new intelligent auxiliary framework for medical image segmentation of bone malignant tumor lesions, built around a supervised edge-attention guidance segmentation network (SEAGNET). We design a boundary key-point selection module to supervise the learning of edge attention in the model and retain fine-grained edge feature information. We precisely locate malignant tumors with instance segmentation networks while extracting feature maps of tumor lesions in medical images. The rich context-dependent information in the feature maps is captured by mixed attention to better handle the uncertainty and ambiguity of the boundary, and edge-attention learning guides the segmentation network to focus on the fuzzy boundary of the tumor region. We conduct extensive experiments on real-world medical data to validate our model. The results demonstrate the superiority of our method over the latest segmentation methods, achieving the best performance in terms of Dice similarity coefficient (0.967), precision (0.968), and accuracy (0.996), and show the framework's value in helping doctors improve diagnostic accuracy and clinical efficiency.
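The Dice similarity coefficient reported here measures overlap between the predicted and ground-truth lesion masks. A minimal sketch over flattened binary masks:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient of two equal-length binary masks:
    twice the intersection divided by the total number of positives.
    Two empty masks are treated as a perfect match."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0
```

Dice relates to the Jaccard index J as Dice = 2J / (1 + J), so a Dice of 0.967 corresponds to a very high overlap.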
Affiliation(s)
- Xiangbing Zhan
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Jun Liu
- The Second People’s Hospital of Huaihua, Huaihua 418000, China
- Huiyun Long
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Jun Zhu
- The First People’s Hospital of Huaihua, Huaihua 418000, China
- Collaborative Innovation Center for Medical Artificial Intelligence and Big Data Decision Making Assistance, Hunan University of Medicine, Huaihua 418000, China
- Haoyu Tang
- The First People’s Hospital of Huaihua, Huaihua 418000, China
- Fangfang Gou
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- The First People’s Hospital of Huaihua, Huaihua 418000, China
- Collaborative Innovation Center for Medical Artificial Intelligence and Big Data Decision Making Assistance, Hunan University of Medicine, Huaihua 418000, China
- Jia Wu
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- The First People’s Hospital of Huaihua, Huaihua 418000, China
- Collaborative Innovation Center for Medical Artificial Intelligence and Big Data Decision Making Assistance, Hunan University of Medicine, Huaihua 418000, China
- Research Center for Artificial Intelligence, Monash University, Clayton, VIC 3800, Australia
8
Tang H, Huang H, Liu J, Zhu J, Gou F, Wu J. AI-Assisted Diagnosis and Decision-Making Method in Developing Countries for Osteosarcoma. Healthcare (Basel) 2022; 10:2313. PMID: 36421636; PMCID: PMC9690527; DOI: 10.3390/healthcare10112313.
Abstract
Osteosarcoma is a malignant tumor derived from primitive osteogenic mesenchymal cells; it is extremely harmful to the human body and has a high mortality rate. Early diagnosis and treatment are necessary to improve patient survival, and MRI is an effective tool for detecting osteosarcoma. However, because of the tumor's complex structure and variable location, cancer cells are highly heterogeneous and prone to aggregation and overlap, making it easy for doctors to mispredict the lesion area. In addition, in developing countries lacking professional medical systems, doctors must examine large numbers of osteosarcoma MRI images per patient, which is time-consuming, inefficient, and prone to misjudgment and omission. To reduce labor costs and improve detection efficiency, this paper proposes an Attention Condenser-based MRI image segmentation system for osteosarcoma (OMSAS), which helps physicians quickly locate the lesion area and achieve accurate segmentation of the osteosarcoma tumor region. Using the idea of AttendSeg, we construct an Attention Condenser-based residual structure network (ACRNet), which greatly reduces structural complexity and lowers hardware requirements while preserving segmentation accuracy. The model was tested on more than 4000 samples from two hospitals in China. The experimental results demonstrate that our model offers higher efficiency, higher accuracy, and a lighter structure for osteosarcoma MRI image segmentation than existing models.
Affiliation(s)
- Haojun Tang
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Hui Huang
- The First People’s Hospital of Huaihua, Huaihua 418000, China
- Jun Liu
- The Second People’s Hospital of Huaihua, Huaihua 418000, China
- Jun Zhu
- The First People’s Hospital of Huaihua, Huaihua 418000, China
- Collaborative Innovation Center for Medical Artificial Intelligence and Big Data Decision Making Assistance, Hunan University of Medicine, Huaihua 418000, China
- Fangfang Gou
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Jia Wu
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- The First People’s Hospital of Huaihua, Huaihua 418000, China
- Collaborative Innovation Center for Medical Artificial Intelligence and Big Data Decision Making Assistance, Hunan University of Medicine, Huaihua 418000, China
- Research Center for Artificial Intelligence, Monash University, Clayton, VIC 3800, Australia
9
Gou F, Liu J, Zhu J, Wu J. A Multimodal Auxiliary Classification System for Osteosarcoma Histopathological Images Based on Deep Active Learning. Healthcare (Basel) 2022; 10:2189. PMID: 36360530; PMCID: PMC9690420; DOI: 10.3390/healthcare10112189.
Abstract
Histopathological examination is an important criterion in the clinical diagnosis of osteosarcoma. With improvements in hardware technology and computing power, pathological image analysis systems based on artificial intelligence have been widely used. However, classifying numerous intricate pathology images by hand is a tiresome task for pathologists, and the lack of labeled data makes such systems costly and difficult to build. This study constructs a classification assistance system (OHIcsA) based on active learning (AL) and a generative adversarial network (GAN). The system initially uses a small labeled training set to train the classifier. Then, the most informative samples from the unlabeled images are selected for expert annotation, and the chosen images are added to the labeled dataset to retrain the network. Experiments on real datasets show that our proposed method achieves high classification performance, with an AUC of 0.995 and an accuracy of 0.989, using only a small amount of labeled data, reducing the cost of building a medical system. The system's findings can aid clinical diagnosis and increase doctors' efficiency and diagnostic accuracy.
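The active-learning step of selecting "the most informative samples from the unlabeled images" is often implemented as uncertainty sampling. A minimal sketch using predictive entropy as the uncertainty score; the function names and the entropy criterion are illustrative assumptions, not necessarily the selection strategy used in the paper:

```python
import math

def predictive_entropy(probs):
    """Shannon entropy of a class-probability vector; higher means the
    classifier is less certain about the sample."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_most_informative(unlabeled, predict_proba, k):
    """Return the k unlabeled samples whose predictions are most
    uncertain, i.e. have the highest predictive entropy; these would be
    sent for expert annotation and added to the training set."""
    ranked = sorted(unlabeled,
                    key=lambda x: predictive_entropy(predict_proba(x)),
                    reverse=True)
    return ranked[:k]
```

Each annotation round would retrain the classifier on the enlarged labeled set and repeat the selection, which is the loop the abstract describes.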
Affiliation(s)
- Fangfang Gou
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Jun Liu
- The Second People’s Hospital of Huaihua, Huaihua 418000, China
- Jun Zhu
- The First People’s Hospital of Huaihua, Huaihua 418000, China
- Collaborative Innovation Center for Medical Artificial Intelligence and Big Data Decision Making Assistance, Hunan University of Medicine, Huaihua 418000, China
- Jia Wu
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Research Center for Artificial Intelligence, Monash University, Clayton, VIC 3800, Australia
10
Auxiliary Segmentation Method of Osteosarcoma in MRI Images Based on Denoising and Local Enhancement. Healthcare (Basel) 2022; 10:healthcare10081468. PMID: 36011123; PMCID: PMC9408522; DOI: 10.3390/healthcare10081468.
Abstract
Osteosarcoma is a malignant bone tumor. Doctors face many difficulties when manually reviewing patients’ MRI images to complete a diagnosis: osteosarcoma in MRI images is very complex, making its recognition and segmentation resource-consuming. Automatic segmentation of the osteosarcoma area can alleviate these problems to a certain extent. However, existing studies usually fail to balance segmentation accuracy and efficiency; they are either sensitive to noise, with low accuracy, or time-consuming. We therefore propose an auxiliary segmentation method based on denoising and local enhancement. The method first optimizes the osteosarcoma images, removing noise with the Edge Enhancement based Transformer for Medical Image Denoising (Eformer) and using a non-parametric method to localize and enhance the tumor region in MRI images. The osteosarcoma is then segmented by Deep Feature Aggregation for Real-Time Semantic Segmentation (DFANet). Our method achieves impressive segmentation accuracy and is efficient in both time and space, providing information about the location and extent of the osteosarcoma as a basis for further diagnosis.
11
Wu J, Zhou L, Gou F, Tan Y. A Residual Fusion Network for Osteosarcoma MRI Image Segmentation in Developing Countries. Comput Intell Neurosci 2022; 2022:7285600. PMID: 35965771; PMCID: PMC9365532; DOI: 10.1155/2022/7285600.
Abstract
Among primary bone cancers, osteosarcoma is the most common, peaking between the ages of a child's rapid bone growth and adolescence. Diagnosing osteosarcoma requires observing the radiological appearance of the affected bones. A common approach is MRI, but manual diagnosis of MRI images is prone to observer bias and inaccuracy and is rather time-consuming. The MRI images of osteosarcoma contain semantic information at several different resolutions, which is often ignored by current segmentation techniques, leading to low generalizability and accuracy. Meanwhile, the boundaries between osteosarcoma and bones or other tissues are sometimes too ambiguous to separate, making it challenging for inexperienced doctors to draw a line between them. In this paper, we propose a multiscale residual fusion network to handle the MRI images. We place a novel subnetwork after the encoders to exchange information between the feature maps of different resolutions and fuse the information they contain. The outputs are then directed both to the decoders and to a shape-flow block used to improve the spatial accuracy of the segmentation map. We tested over 80,000 osteosarcoma MRI images from the PET-CT center of a well-known hospital in China. Our approach significantly improves the effectiveness of semantic segmentation of osteosarcoma images, achieving higher F1, DSC, and IoU than other models while maintaining a comparable number of parameters and FLOPs.
Affiliation(s)
- Jia Wu
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Research Center for Artificial Intelligence, Monash University, Clayton VIC 3800, Melbourne, Australia
- Luting Zhou
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Fangfang Gou
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Yanlin Tan
- PET-CT Center, The Second Xiangya Hospital of Central South University, Changsha 410083, China
|
12
|
Li Y, Wu Y, Huang M, Zhang Y, Bai Z. Automatic prostate and peri-prostatic fat segmentation based on pyramid mechanism fusion network for T2-weighted MRI. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 223:106918. [PMID: 35779461 DOI: 10.1016/j.cmpb.2022.106918] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/01/2022] [Revised: 05/10/2022] [Accepted: 05/25/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVE Automatic and accurate segmentation of the prostate and peri-prostatic fat in male pelvic MRI images is a critical step in the diagnosis and prognosis of prostate cancer. The boundary of prostate tissue is unclear, which makes automatic segmentation very challenging. The main issues, especially for the peri-prostatic fat, which is addressed here for the first time, are hazy boundaries and large shape variation. METHODS We propose a pyramid mechanism fusion network (PMF-Net) to learn global features and more comprehensive context information. In the proposed PMF-Net, we devise two pyramid techniques. A pyramid mechanism module composed of dilated convolutions with varying rates is inserted before each downsampling step of the encoder in the basic network architecture. The module is intended to address the loss of information during feature coding, particularly segmentation-object boundary information. In the transition from encoder to decoder, a pyramid fusion module is designed to extract global features. The features of the decoder integrate not only the up-sampled features of the previous stage and the output features of the pyramid mechanism, but also the features transmitted by skip connections from the encoder at the same scale. RESULTS Segmentation results for the prostate and peri-prostatic fat on numerous diverse male pelvic MRI datasets show that our proposed PMF-Net outperforms existing methods. The average surface distance (ASD) and Dice similarity coefficient (DSC) of the prostate segmentation results reached 10.06 and 90.21%, respectively; for peri-prostatic fat, ASD and DSC reached 50.96 and 82.41%. CONCLUSIONS Our segmentation results are substantially correlated and consistent with those of expert manual segmentation. Furthermore, peri-prostatic fat segmentation is a new problem, and good automatic segmentation has substantial clinical implications.
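The pyramid mechanism module described above builds on parallel dilated convolutions with varying rates. A minimal single-channel NumPy sketch of the idea (an illustration, not the authors' PMF-Net code) might look like:

```python
import numpy as np

def dilated_conv2d(x, k, rate):
    """Naive 'same'-padded 2-D convolution with dilation `rate`.
    x: (H, W) input, k: (kh, kw) kernel."""
    kh, kw = k.shape
    ph, pw = rate * (kh // 2), rate * (kw // 2)
    xp = np.pad(x, ((ph, ph), (pw, pw)))  # zero padding keeps output size
    out = np.zeros_like(x, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * xp[i * rate : i * rate + x.shape[0],
                                j * rate : j * rate + x.shape[1]]
    return out

def pyramid_module(x, kernel, rates=(1, 2, 4)):
    """Parallel dilated convolutions at several rates, stacked as channels."""
    return np.stack([dilated_conv2d(x, kernel, r) for r in rates])

x = np.random.rand(16, 16)
k = np.ones((3, 3)) / 9.0
feats = pyramid_module(x, k)
print(feats.shape)  # (3, 16, 16)
```

Larger rates enlarge the receptive field without extra parameters, which is why such pyramids are used to preserve boundary context before downsampling.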
Affiliation(s)
- Yuchun Li
- State Key Laboratory of Marine Resource Utilization in South China Sea, College of Information Science and Technology, Hainan University, Haikou 570288, China
- Yuanyuan Wu
- State Key Laboratory of Marine Resource Utilization in South China Sea, College of Information Science and Technology, Hainan University, Haikou 570288, China
- Mengxing Huang
- State Key Laboratory of Marine Resource Utilization in South China Sea, College of Information Science and Technology, Hainan University, Haikou 570288, China
- Yu Zhang
- School of Computer Science and Technology, Hainan University, Haikou 570288, China
- Zhiming Bai
- Haikou Municipal People's Hospital and Central South University Xiangya Medical College Affiliated Hospital, Haikou 570288, China
|
13
|
Wu J, Liu Z, Gou F, Zhu J, Tang H, Zhou X, Xiong W. BA-GCA Net: Boundary-Aware Grid Contextual Attention Net in Osteosarcoma MRI Image Segmentation. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:3881833. [PMID: 35942441 PMCID: PMC9356797 DOI: 10.1155/2022/3881833] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/30/2022] [Revised: 06/22/2022] [Accepted: 07/05/2022] [Indexed: 12/11/2022]
Abstract
Osteosarcoma is one of the most common bone tumors occurring in adolescents. Doctors often use magnetic resonance imaging (MRI) obtained through biosensors to diagnose and predict osteosarcoma. However, in many osteosarcoma MRI images the tumor's shape boundary is vague, complex, or irregular, which causes difficulties in diagnosis and also makes some deep learning methods lose segmentation details or fail to locate the osteosarcoma region. In this article, we propose a novel boundary-aware grid contextual attention net (BA-GCA Net) to address insufficient accuracy in osteosarcoma MRI image segmentation. First, a novel grid contextual attention (GCA) module is designed to better capture the texture details of the tumor area. Then the statistical texture learning block (STLB) and the spatial transformer block (STB) are integrated into the network to improve its ability to extract statistical texture features and locate tumor areas. Over 80,000 MRI images of osteosarcoma from the Second Xiangya Hospital were adopted as the dataset for training, testing, and ablation studies. Results show that our proposed method achieves higher segmentation accuracy than existing methods with only a slight increase in the number of parameters and computational complexity.
Affiliation(s)
- Jia Wu
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Research Center for Artificial Intelligence, Monash University, Melbourne, Clayton VIC 3800, Australia
- The First People's Hospital of Huaihua, Huaihua, Hunan, China
- Collaborative Innovation Center for Medical Artificial Intelligence and Big Data Decision Making Assistance, Hunan University of Medicine, Changsha, China
- Zikang Liu
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Fangfang Gou
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Jun Zhu
- The First People's Hospital of Huaihua, Huaihua, Hunan, China
- Collaborative Innovation Center for Medical Artificial Intelligence and Big Data Decision Making Assistance, Hunan University of Medicine, Changsha, China
- Haoyu Tang
- The First People's Hospital of Huaihua, Huaihua, Hunan, China
- Collaborative Innovation Center for Medical Artificial Intelligence and Big Data Decision Making Assistance, Hunan University of Medicine, Changsha, China
- Xian Zhou
- Jiangxi University of Chinese Medicine, Nanchang 330004, Jiangxi, China
- Wangping Xiong
- Jiangxi University of Chinese Medicine, Nanchang 330004, Jiangxi, China
|
14
|
Wu J, Guo Y, Gou F, Dai Z. A medical assistant segmentation method for MRI images of osteosarcoma based on DecoupleSegNet. INT J INTELL SYST 2022. [DOI: 10.1002/int.22949] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Affiliation(s)
- Jia Wu
- School of Computer Science and Engineering, Central South University, Changsha, China
- Research Center for Artificial Intelligence, Monash University, Melbourne, Clayton, Victoria, Australia
- Yuxuan Guo
- School of Computer Science and Engineering, Central South University, Changsha, China
- Fangfang Gou
- School of Computer Science and Engineering, Central South University, Changsha, China
- Zhehao Dai
- Department of Spine Surgery, The Second Xiangya Hospital, Central South University, Changsha, China
|
15
|
Wu J, Xiao P, Huang H, Gou F, Zhou Z, Dai Z. An artificial intelligence multiprocessing scheme for the diagnosis of osteosarcoma MRI images. IEEE J Biomed Health Inform 2022; 26:4656-4667. [PMID: 35727772 DOI: 10.1109/jbhi.2022.3184930] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Osteosarcoma is the most common malignant bone tumor, and most developing countries face great challenges in its diagnosis due to the lack of medical resources. Magnetic resonance imaging (MRI) has always been an important tool for the detection of osteosarcoma, but manually identifying MRI images is a time-consuming and labor-intensive task for doctors; it is highly subjective and prone to misdiagnosis. Existing computer-aided diagnosis methods for osteosarcoma MRI images focus only on accuracy, ignoring the lack of computing resources in developing countries. In addition, the large amount of redundant and noisy data generated during imaging should also be considered. To alleviate the inefficiency of osteosarcoma diagnosis in developing countries, this paper proposes an artificial intelligence multiprocessing scheme for pre-screening, noise reduction, and segmentation of osteosarcoma MRI images. For pre-screening, we propose the Slide Block Filter to remove useless images. Next, we introduce a fast non-local means algorithm that uses integral images to denoise noisy images. We then segment the filtered and denoised MRI images using a U-shaped network (ETUNet) embedded with a transformer layer, which enhances the functionality and robustness of the traditional U-shaped architecture. Finally, we further optimize the segmented tumor boundaries using conditional random fields. We conducted experiments on more than 70,000 MRI images of osteosarcoma from three hospitals in China. The experimental results show that our proposed methods achieve good results and better performance in pre-screening, noise reduction, and segmentation.
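The fast non-local means step above relies on integral images (summed-area tables), which turn any box sum into an O(1) lookup. A minimal sketch of that building block, independent of the authors' implementation:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a leading row/column of zeros."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(0).cumsum(1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] via four table lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

img = np.arange(16, dtype=float).reshape(4, 4)
ii = integral_image(img)
print(box_sum(ii, 1, 1, 3, 3))  # central 2x2 patch: 5+6+9+10 = 30
```

In fast non-local means, such tables are built over squared pixel differences so that patch distances for every offset cost constant time instead of a full patch scan.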
|
16
|
Multi-Scale Tumor Localization Based on Priori Guidance-Based Segmentation Method for Osteosarcoma MRI Images. MATHEMATICS 2022. [DOI: 10.3390/math10122099] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/28/2022]
Abstract
Osteosarcoma is a malignant bone tumor that is extremely harmful to human health. Magnetic resonance imaging (MRI) is one of the commonly used methods for the imaging examination of osteosarcoma. Due to the large amount of osteosarcoma MRI image data and the complexity of detection, manual identification of osteosarcoma in MRI images is a time-consuming and labor-intensive task for doctors, and it is highly subjective, which can easily lead to missed diagnoses and misdiagnoses. AI-assisted medical image diagnosis alleviates this problem. However, the brightness of MRI images and the multi-scale nature of osteosarcoma mean that existing studies still face great challenges in identifying tumor boundaries. On this basis, this study proposes a prior guidance-based assisted segmentation method for MRI images of osteosarcoma, which uses the few-shot technique for tumor segmentation and fine fitting. It not only solves the problem of multi-scale tumor localization but also greatly improves the recognition accuracy of tumor boundaries. First, we preprocess the MRI images using prior generation and normalization algorithms to reduce the model performance degradation caused by irrelevant regions and high-level features. Then, we use a prior-guided feature network to perform small-sample segmentation of tumors of different sizes based on features in the processed MRI images. Finally, in experiments using more than 80,000 MRI images from the Second Xiangya Hospital, the DOU value of the method proposed in this paper reached 0.945, at least 4.3% higher than the other models in the experiment. Our method thus has higher prediction accuracy and lower resource consumption.
|
17
|
Ouyang T, Yang S, Gou F, Dai Z, Wu J. Rethinking U-Net from an Attention Perspective with Transformers for Osteosarcoma MRI Image Segmentation. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:7973404. [PMID: 35707196 PMCID: PMC9192230 DOI: 10.1155/2022/7973404] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/27/2022] [Revised: 04/24/2022] [Accepted: 04/28/2022] [Indexed: 12/17/2022]
Abstract
Osteosarcoma is one of the most common primary malignancies of bone in the pediatric and adolescent populations. The morphology and size of osteosarcoma in MRI images often show great variability and randomness across patients. In developing countries, with large populations and a lack of medical resources, it is difficult to address the difficulties of early diagnosis of osteosarcoma with limited physician manpower alone. In addition, with the rise of precision medicine, existing MRI image segmentation models for osteosarcoma face the challenges of insufficient segmentation accuracy and high resource consumption. Inspired by the transformer's self-attention mechanism, this paper proposes a lightweight osteosarcoma image segmentation architecture, UATransNet, which adds a multilevel guided self-aware attention module (MGAM) to the encoder-decoder architecture of U-Net. We successively perform dataset classification optimization and remove irrelevant background from the MRI images. UATransNet then places a transformer self-attention component (TSAC) and a global context aggregation component (GCAC) at the bottom of the encoder-decoder architecture to integrate local features with global dependencies and aggregate context into the learned features. In addition, we apply dense residual learning to the convolution module and combine it with multiscale skip connections to improve the feature extraction capability. We experimentally evaluate more than 80,000 osteosarcoma MRI images and show that our UATransNet yields more accurate segmentation performance: the IoU and DSC values for osteosarcoma are 0.922 ± 0.03 and 0.921 ± 0.04, respectively, providing intuitive, accurate, and efficient decision support for physicians.
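The transformer self-attention component at the U-Net bottleneck builds on standard scaled dot-product self-attention over flattened feature-map positions. A generic NumPy sketch of that primitive (not the UATransNet implementation; the weight matrices here are random placeholders):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # stable softmax
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of feature vectors.
    x: (n, d) tokens, e.g. flattened bottleneck feature-map positions."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ v, weights

rng = np.random.default_rng(0)
n, d = 6, 8
x = rng.standard_normal((n, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out, w = self_attention(x, Wq, Wk, Wv)
print(out.shape, w.shape)  # (6, 8) (6, 6)
```

Because every position attends to every other, the bottleneck gains the global dependencies that plain convolutions in the U-Net encoder lack.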
Affiliation(s)
- Tianxiang Ouyang
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Shun Yang
- Department of Spine Surgery, The Second Xiangya Hospital, Central South University, Changsha 410011, China
- Fangfang Gou
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Zhehao Dai
- Department of Spine Surgery, The Second Xiangya Hospital, Central South University, Changsha 410011, China
- Jia Wu
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Research Center for Artificial Intelligence, Monash University, Melbourne, Clayton, VIC 3800, Australia
|
18
|
Deep Active Learning Framework for Lymph Node Metastasis Prediction in Medical Support System. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:4601696. [PMID: 35592722 PMCID: PMC9113892 DOI: 10.1155/2022/4601696] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/21/2022] [Revised: 03/27/2022] [Accepted: 04/23/2022] [Indexed: 12/21/2022]
Abstract
Assessing the extent of cancer spread by histopathological analysis of sentinel axillary lymph nodes is an important part of breast cancer staging. With the maturity and prevalence of deep learning technology, building auxiliary medical systems can help relieve the burden on pathologists and increase diagnostic precision and accuracy. However, such histopathological images have complex patterns that are difficult for non-experts to understand and require professional medical practitioners to annotate, which increases the cost of constructing such medical systems. To reduce annotation cost while improving model performance as much as possible (in other words, to obtain the greatest performance improvement from as few labeled samples as possible), we propose a deep learning framework with a three-stage query strategy and a novel model update strategy. The framework first trains an auto-encoder on all samples to obtain a global representation in a low-dimensional space. In the query stage, unlabeled samples are first selected according to uncertainty; then coreset-based methods are employed to reduce sample redundancy; finally, distribution differences between labeled and unlabeled samples are evaluated, and samples that can quickly eliminate these differences are selected. This method achieves faster iterative efficiency than uncertainty, representative, or hybrid strategies on the lymph node slice dataset and other commonly used datasets, reaching the performance of training with all data while using only 50% of the labeled samples. During the model update process, we randomly freeze some weights and train the task model only on new labeled samples with a smaller learning rate. Compared with fine-tuning the task model on new samples, large-scale performance degradation is avoided; compared with the retraining and replay strategies, it reduces the training cost of updating the task model by 79.87% and 90.07%, respectively.
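The first two query stages, uncertainty filtering followed by coreset-based redundancy reduction, can be sketched as follows. This is an illustrative reconstruction under assumed inputs (the third, distribution-matching stage is omitted), not the authors' code:

```python
import numpy as np

def uncertainty_filter(probs, m):
    """Stage 1: keep the m samples with highest predictive entropy."""
    ent = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.argsort(-ent)[:m]

def kcenter_greedy(feats, labeled_idx, budget):
    """Stage 2: greedy k-center (coreset) selection to reduce redundancy.
    Repeatedly picks the point farthest from everything chosen so far."""
    dists = np.min(
        np.linalg.norm(feats[:, None] - feats[labeled_idx][None], axis=-1),
        axis=1)
    chosen = []
    for _ in range(budget):
        i = int(np.argmax(dists))
        chosen.append(i)
        dists = np.minimum(dists, np.linalg.norm(feats - feats[i], axis=1))
    return chosen

rng = np.random.default_rng(1)
probs = rng.dirichlet(np.ones(3), size=20)   # stand-in softmax outputs
feats = rng.standard_normal((20, 5))         # stand-in auto-encoder codes
cand = uncertainty_filter(probs, m=10)       # most uncertain half
picked = kcenter_greedy(feats[cand], labeled_idx=[0], budget=3)
print([int(cand[i]) for i in picked])
```

Chaining the stages this way is what lets the framework query informative yet mutually diverse samples instead of many near-duplicates of one hard case.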
|
19
|
An Attention-Preserving Network-Based Method for Assisted Segmentation of Osteosarcoma MRI Images. MATHEMATICS 2022. [DOI: 10.3390/math10101665] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Osteosarcoma is a malignant bone tumor that is extremely dangerous to human health. Manually outlining the lesion area in an image with traditional methods is not only labor-intensive but also complicated. With the development of computer-aided diagnostic techniques, more and more researchers are focusing on automatic segmentation techniques for osteosarcoma analysis. However, existing methods ignore the size of osteosarcomas, making it difficult to identify and segment smaller tumors, which is very detrimental to early diagnosis. Therefore, this paper proposes a Contextual Axial-Preserving Attention Network (CaPaN)-based MRI image-assisted segmentation method for osteosarcoma detection. Building on Res2Net, a parallel decoder is added to aggregate high-level features, effectively combining the local and global features of osteosarcoma. In addition, channel feature pyramid (CFP) and axial attention (A-RA) mechanisms are used. The lightweight CFP can extract feature mappings and contextual information of different sizes. A-RA uses axial attention to distinguish tumor tissues, which reduces computational costs and thus improves the generalization performance of the model. We conducted experiments using a real dataset provided by the Second Xiangya Affiliated Hospital, and the results showed that our proposed method achieves better segmentation results than alternative models. In particular, our method shows significant advantages in small-target segmentation: its precision is about 2% higher than the average of the other models, and for small objects the DSC value of CaPaN is 0.021 higher than that of the commonly used U-Net method.
|
20
|
Qiao X, Jiang C, Li P, Yuan Y, Zeng Q, Bi L, Song S, Kim J, Feng DD, Huang Q. Improving Breast Tumor Segmentation in PET via Attentive Transformation Based Normalization. IEEE J Biomed Health Inform 2022; 26:3261-3271. [PMID: 35377850 DOI: 10.1109/jbhi.2022.3164570] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Positron Emission Tomography (PET) has become a preferred imaging modality for cancer diagnosis, radiotherapy planning, and treatment response monitoring. Accurate and automatic tumor segmentation is a fundamental requirement for these clinical applications, and deep convolutional neural networks have become the state of the art in PET tumor segmentation. Normalization is one of the key components for accelerating network training and improving network performance. However, existing normalization methods either introduce batch noise into individual PET images by calculating statistics at the batch level, or introduce background noise into every pixel by sharing the same learnable parameters spatially. In this paper, we propose an attentive transformation (AT)-based normalization method for PET tumor segmentation. We exploit the distinguishability of breast tumors in PET images and dynamically generate dedicated, pixel-dependent learnable parameters in normalization via a transformation on a combination of channel-wise and spatial-wise attentive responses. The attentive learnable parameters allow features to be re-calibrated pixel by pixel to focus on the high-uptake area while attenuating the background noise of PET images. Our experimental results on two real clinical datasets show that the AT-based normalization method improves breast tumor segmentation performance compared with existing normalization methods.
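The core idea, normalization whose affine parameters vary per pixel according to attentive responses, can be illustrated with a minimal single-channel sketch. The attention map here is an arbitrary stand-in, not the paper's AT module:

```python
import numpy as np

def attentive_norm(x, gamma_map, beta_map, eps=1e-5):
    """Instance-style normalization whose affine parameters vary per pixel.
    x, gamma_map, beta_map: (H, W) maps (scalars also broadcast); in the
    paper the maps would come from channel- and spatial-attention branches."""
    xn = (x - x.mean()) / np.sqrt(x.var() + eps)
    return gamma_map * xn + beta_map

rng = np.random.default_rng(2)
x = rng.standard_normal((8, 8)) * 3 + 5
att = 1.0 / (1.0 + np.exp(-x))          # stand-in spatial attention in (0, 1)
y = attentive_norm(x, gamma_map=att, beta_map=0.1 * att)
print(y.shape)  # (8, 8)
```

With shared scalar gamma and beta this reduces to ordinary instance normalization; the per-pixel maps are what let high-uptake regions keep stronger responses than the background.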
|
21
|
Osteosarcoma MRI Image-Assisted Segmentation System Base on Guided Aggregated Bilateral Network. MATHEMATICS 2022. [DOI: 10.3390/math10071090] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
Osteosarcoma is a primary malignant tumor that is difficult to cure and expensive to treat. Diagnosis is generally made by analyzing MRI images of patients, and in clinical practice the mainstream method is still time-consuming and laborious manual screening. Modern computer image segmentation technology can automatically process the original osteosarcoma images and assist doctors in diagnosis. However, models that achieve good segmentation tend to be complex, and the hardware available in developing countries is limited, making such models difficult to use directly. Given this situation, we propose an osteosarcoma-aided segmentation method based on a guided aggregated bilateral network (OSGABN), which improves the segmentation accuracy of the model and greatly reduces the parameter scale, effectively alleviating the above problems. The fast bilateral segmentation network (FaBiNet) is used to segment images. It is a high-precision model with a detail branch that captures low-level information and a lightweight semantic branch that captures high-level semantic context. We used more than 80,000 osteosarcoma MRI images from three hospitals in China for evaluation, and the results showed that our model achieves an accuracy of around 0.95 with only 2.33 M parameters.
|
22
|
Intelligent Segmentation Medical Assistance System for MRI Images of Osteosarcoma in Developing Countries. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2022; 2022:7703583. [PMID: 35096135 PMCID: PMC8791734 DOI: 10.1155/2022/7703583] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/05/2021] [Accepted: 12/27/2021] [Indexed: 12/25/2022]
Abstract
Osteosarcoma is the most common primary malignant bone tumor in children and adolescents. It has a high degree of malignancy and a poor prognosis in developing countries. Manual interpretation of magnetic resonance imaging (MRI) by doctors suffers from subjectivity and fatigue. In addition, the structure, shape, and position of osteosarcoma are complicated, and there is a lot of noise in MRI images. Directly feeding the original dataset into an automatic segmentation system introduces this noise and degrades the model's segmentation accuracy. Therefore, this paper proposes an osteosarcoma MRI image segmentation system based on a deep convolutional neural network, which addresses the overfitting caused by noisy data and improves the generalization performance of the model. First, we use Mean Teacher to optimize the dataset: the noisy data are put into a second round of training to improve the robustness of the model. Then, we segment the image using a deep separable U-shaped network (SepUNet) and a conditional random field (CRF). SepUNet can segment lesion regions of different sizes at multiple scales, and the CRF further optimizes the boundary. Finally, the system calculates the area of the tumor region, which provides a more intuitive reference for assisting doctors in diagnosis. More than 80,000 MRI images of osteosarcoma from three hospitals in China were tested. The results show that the proposed method balances speed, accuracy, and cost while improving accuracy.
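The final step above, computing the tumor area from the segmentation mask, amounts to counting foreground pixels and scaling by the pixel spacing. A minimal sketch with an assumed spacing (the actual spacing would come from the MRI metadata):

```python
import numpy as np

def tumor_area_mm2(mask, spacing_mm=(1.0, 1.0)):
    """Physical area of a binary segmentation mask, given per-axis pixel
    spacing in millimetres."""
    return float(mask.sum()) * spacing_mm[0] * spacing_mm[1]

mask = np.zeros((10, 10), dtype=bool)
mask[2:5, 3:7] = True                      # 3 x 4 tumor region
print(tumor_area_mm2(mask, (0.5, 0.5)))    # 12 pixels x 0.25 mm^2 = 3.0
```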
|
23
|
Zhan X, Long H, Gou F, Duan X, Kong G, Wu J. A Convolutional Neural Network-Based Intelligent Medical System with Sensors for Assistive Diagnosis and Decision-Making in Non-Small Cell Lung Cancer. SENSORS 2021; 21:s21237996. [PMID: 34884000 PMCID: PMC8659811 DOI: 10.3390/s21237996] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/25/2021] [Revised: 11/26/2021] [Accepted: 11/28/2021] [Indexed: 12/15/2022]
Abstract
In many regions of the world, early diagnosis of non-small cell lung cancer (NSCLC) is a major challenge due to large populations and a lack of medical resources, and it is difficult to address effectively with limited physician manpower alone. We therefore developed a convolutional neural network (CNN)-based assisted diagnosis and decision-making intelligent medical system with sensors. This system analyzes the medical records of NSCLC patients, collected via sensors, to assist in staging diagnosis and provides recommended treatment plans to physicians. To address the problem of unbalanced case samples across pathological stages, we used transfer learning and dynamic sampling techniques to reconstruct and iteratively train the model and improve the accuracy of the prediction system. All data for training and testing the system were obtained from the medical records of 2,789,675 patients with NSCLC, recorded in three hospitals in China over a five-year period. When the number of case samples reached 8,000, the system achieved an accuracy of 0.84, which is close to that of doctors (accuracy: 0.86). The experimental results prove that the system can quickly and accurately analyze patient data and provide decision support for physicians.
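Dynamic sampling for unbalanced pathological stages can be approximated by resampling with inverse-frequency weights, so rare stages are seen more often during training. This is a generic illustration under assumed data, not the system's actual sampler:

```python
import numpy as np

def balanced_resample(labels, rng):
    """Draw a bootstrap sample with probabilities inversely proportional to
    class frequency, so rare classes are over-represented."""
    classes, counts = np.unique(labels, return_counts=True)
    freq = dict(zip(classes, counts))
    w = np.array([1.0 / freq[y] for y in labels])
    w /= w.sum()                            # normalize to a distribution
    return rng.choice(len(labels), size=len(labels), replace=True, p=w)

rng = np.random.default_rng(3)
labels = np.array([0] * 90 + [1] * 10)      # 9:1 stage imbalance
idx = balanced_resample(labels, rng)
print(np.bincount(labels[idx]))             # roughly balanced counts
```

Each class ends up with total sampling weight 0.5 here, so the resampled batch is approximately balanced regardless of the original 9:1 ratio.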
Affiliation(s)
- Xiangbing Zhan
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Huiyun Long
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Correspondence: (H.L.); (J.W.)
- Fangfang Gou
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Xun Duan
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Guangqian Kong
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Jia Wu
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Research Center for Artificial Intelligence, Monash University, Clayton, VIC 3800, Australia
- Correspondence: (H.L.); (J.W.)
|
24
|
Yu G, Chen Z, Wu J, Tan Y. A diagnostic prediction framework on auxiliary medical system for breast cancer in developing countries. Knowl Based Syst 2021. [DOI: 10.1016/j.knosys.2021.107459] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
|
25
|
A Staging Auxiliary Diagnosis Model for Nonsmall Cell Lung Cancer Based on the Intelligent Medical System. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2021; 2021:6654946. [PMID: 33628327 PMCID: PMC7886591 DOI: 10.1155/2021/6654946] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/05/2020] [Revised: 01/15/2021] [Accepted: 01/30/2021] [Indexed: 11/17/2022]
Abstract
At present, human health is threatened by many diseases, and lung cancer is one of the most dangerous tumors. In most developing countries, due to large populations and a lack of medical resources, it is difficult for doctors to meet patients' needs by relying on manual diagnosis alone. Built on massive medical information, intelligent decision-making systems have played a great role in assisting doctors in analyzing patients' conditions, improving the accuracy of clinical diagnosis, and reducing the workload of medical staff. This article is based on the data of 8,920 nonsmall cell lung cancer patients collected by different medical systems in three hospitals in China. On the basis of the intelligent medical system, this paper constructs a nonsmall cell lung cancer staging auxiliary diagnosis model based on a convolutional neural network (CNNSAD). CNNSAD converts patient medical records into word sequences, uses convolutional neural networks to extract semantic features from the records, and combines dynamic sampling and transfer learning to construct a balanced dataset. The experimental results show that the model is superior to other methods in terms of accuracy, recall, and precision: when the number of samples reaches 3,000, the accuracy of the system exceeds 80%, effectively realizing the auxiliary diagnosis of nonsmall cell lung cancer.
|