1. Gou F, Liu J, Xiao C, Wu J. Research on Artificial-Intelligence-Assisted Medicine: A Survey on Medical Artificial Intelligence. Diagnostics (Basel) 2024; 14:1472. PMID: 39061610; PMCID: PMC11275417; DOI: 10.3390/diagnostics14141472.
Abstract
With the improvement of economic conditions and the rise in living standards, people are paying ever more attention to their health. They are beginning to place their hopes in machines, expecting artificial intelligence (AI) to provide a more humanized medical environment and personalized services, thereby greatly expanding supply and bridging the gap between the supply of and demand for medical resources. The development of IoT technology, the arrival of the 5G and 6G communication era, and, in particular, the growth of computing power have further promoted the development and application of AI-assisted healthcare. Research on, and the application of, artificial intelligence in medical assistance is continuously deepening and expanding. AI holds immense economic value and has many potential applications for medical institutions, patients, and healthcare professionals. It can enhance medical efficiency, reduce healthcare costs, improve the quality of healthcare services, and provide a more intelligent and humanized service experience for healthcare professionals and patients. This study reviews the history and timeline of AI development in the medical field, the types of AI technologies used in healthcare informatics, the applications of AI in medicine, and the opportunities and challenges AI faces in the field. The combination of healthcare and artificial intelligence has a profound impact on human life, improving health and quality of life and changing the way people live.
Affiliation(s)
- Fangfang Gou
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Jun Liu
- The Second People's Hospital of Huaihua, Huaihua 418000, China
- Chunwen Xiao
- The Second People's Hospital of Huaihua, Huaihua 418000, China
- Jia Wu
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Research Center for Artificial Intelligence, Monash University, Melbourne, Clayton, VIC 3800, Australia
2. Wu Y, Li J, Wang X, Zhang Z, Zhao S. DECIDE: A decoupled semantic and boundary learning network for precise osteosarcoma segmentation by integrating multi-modality MRI. Comput Biol Med 2024; 174:108308. PMID: 38581998; DOI: 10.1016/j.compbiomed.2024.108308.
Abstract
Automated Osteosarcoma Segmentation in Multi-modality MRI (AOSMM) holds clinical significance for effective tumor evaluation and treatment planning. However, the precision of AOSMM is challenged by the diverse characteristics of multi-modality MRI and the inherent heterogeneity and boundary ambiguity of osteosarcoma. While numerous methods have made significant strides in automated osteosarcoma segmentation, they primarily focus on a single MRI modality and overlook the potential benefits of integrating complementary information from other MRI modalities. Furthermore, they do not adequately model the long-range dependencies of complex tumor features, which may lead to insufficiently discriminative feature representations. To address these issues, we propose a decoupled semantic and boundary learning network (DECIDE) that achieves precise AOSMM with three functional modules. The Multi-modality Feature Fusion and Recalibration (MFR) module adaptively fuses and recalibrates multi-modality features by exploiting their channel-wise dependencies to compute low-rank attention weights for effectively aggregating useful information from different MRI modalities, which promotes complementary learning between MRI modalities and enables a more comprehensive tumor characterization. The Lesion Attention Enhancement (LAE) module employs spatial and channel attention mechanisms to capture global contextual dependencies over local features, significantly enhancing the discriminability and representational capacity of intricate tumor features. The Boundary Context Aggregation (BCA) module further enhances semantic representations by utilizing boundary information for effective context aggregation while also ensuring intra-class consistency in cases of boundary ambiguity. Extensive experiments demonstrate that DECIDE achieves exceptional performance in osteosarcoma segmentation, surpassing state-of-the-art methods in terms of accuracy and stability.
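The abstract gives no implementation details, but the channel-wise recalibration idea behind the MFR module can be illustrated with a minimal squeeze-and-excitation-style sketch; the shapes and module name below are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn

class ChannelRecalibration(nn.Module):
    """Illustrative channel-wise recalibration of fused multi-modality features."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, H, W), e.g. per-modality feature maps concatenated along channels.
        w = self.fc(x.mean(dim=(2, 3)))   # squeeze (global average pool) + excitation
        return x * w[:, :, None, None]    # reweight each channel before further fusion

# Hypothetical usage: fuse 32-channel T1- and T2-weighted feature maps.
t1, t2 = torch.randn(1, 32, 64, 64), torch.randn(1, 32, 64, 64)
fused = ChannelRecalibration(64)(torch.cat([t1, t2], dim=1))
```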
Affiliation(s)
- Yinhao Wu
- Department of Intelligent Systems Engineering, Sun Yat-sen University, Shenzhen 518107, China
- Jianqi Li
- The First Affiliated Hospital of Sun Yat-sen University, Guangzhou 510080, China
- Xinxin Wang
- Department of Intelligent Systems Engineering, Sun Yat-sen University, Shenzhen 518107, China
- Zhaohui Zhang
- The First Affiliated Hospital of Sun Yat-sen University, Guangzhou 510080, China
- Shen Zhao
- Department of Intelligent Systems Engineering, Sun Yat-sen University, Shenzhen 518107, China
3. He Z, Liu J, Gou F, Wu J. An Innovative Solution Based on TSCA-ViT for Osteosarcoma Diagnosis in Resource-Limited Settings. Biomedicines 2023; 11:2740. PMID: 37893113; PMCID: PMC10604772; DOI: 10.3390/biomedicines11102740.
Abstract
Identifying and managing osteosarcoma pose significant challenges, especially in resource-constrained developing nations. Advanced diagnostic methods involve isolating the nucleus from cancer cells for comprehensive analysis. However, two main challenges persist: mitigating image noise during the capture and transmission of cellular sections, and providing an efficient, accurate, and cost-effective solution for cell nucleus segmentation. To tackle these issues, we introduce the Twin-Self and Cross-Attention Vision Transformer (TSCA-ViT). This pioneering AI-based system employs a directed filtering algorithm for noise reduction and features an innovative transformer architecture with a twin attention mechanism for effective segmentation. The model also incorporates cross-attention-enabled skip connections to augment spatial information. We evaluated our method on a dataset of 1000 osteosarcoma pathology slide images from the Second People's Hospital of Huaihua, achieving a remarkable average precision of 97.7%. This performance surpasses traditional methodologies. Furthermore, TSCA-ViT offers enhanced computational efficiency owing to its fewer parameters, which results in reduced time and equipment costs. These findings underscore the superior efficacy and efficiency of TSCA-ViT, offering a promising approach for addressing the ongoing challenges in osteosarcoma diagnosis and treatment, particularly in settings with limited resources.
Affiliation(s)
- Zengxiao He
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Jun Liu
- The Second People's Hospital of Huaihua, Huaihua 418000, China
- Fangfang Gou
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Jia Wu
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Research Center for Artificial Intelligence, Monash University, Melbourne, Clayton, VIC 3800, Australia
4. Lim CC, Ling AHW, Chong YF, Mashor MY, Alshantti K, Aziz ME. Comparative Analysis of Image Processing Techniques for Enhanced MRI Image Quality: 3D Reconstruction and Segmentation Using 3D U-Net Architecture. Diagnostics (Basel) 2023; 13:2377. PMID: 37510120; PMCID: PMC10377862; DOI: 10.3390/diagnostics13142377.
Abstract
Osteosarcoma is a common type of bone tumor, particularly prevalent in children and adolescents between the ages of 5 and 25 who are experiencing growth spurts during puberty. Manual delineation of tumor regions in MRI images can be laborious and time-consuming, and results may be subjective and difficult to replicate. Therefore, a convolutional neural network (CNN) was developed to automatically segment osteosarcoma cancerous cells in three types of MRI images. The study consisted of five main stages. First, 3692 DICOM format MRI images were acquired from 46 patients, including T1-weighted, T2-weighted, and T1-weighted with injection of Gadolinium (T1W + Gd) images. Contrast stretching and a median filter were applied to enhance image intensity and remove noise, and the pre-processed images were reconstructed into NIfTI format files for deep learning. The MRI images were then transformed to fit the CNN's requirements. A 3D U-Net architecture was proposed with optimized parameters to build an automatic segmentation model capable of segmenting osteosarcoma from the MRI images. The 3D U-Net segmentation model achieved excellent results, with mean dice similarity coefficients (DSC) of 83.75%, 85.45%, and 87.62% for T1W, T2W, and T1W + Gd images, respectively. However, the study found that the proposed method had some limitations, including poorly defined borders, missing lesion portions, and other confounding factors. In summary, an automatic segmentation method based on a CNN has been developed to address the challenge of manually segmenting osteosarcoma cancerous cells in MRI images. While the proposed method showed promise, the study revealed limitations that need to be addressed to improve its efficacy.
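As a rough illustration of the pre-processing step described above (contrast stretching followed by a median filter), the following sketch operates on a single 2D slice; the percentile bounds and kernel size are assumptions, not the values used in the study.

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess_slice(img: np.ndarray, low_pct: float = 1, high_pct: float = 99,
                     kernel: int = 3) -> np.ndarray:
    """Percentile-based contrast stretching to [0, 1] followed by median filtering."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    stretched = np.clip((img - lo) / max(hi - lo, 1e-8), 0.0, 1.0)  # enhance intensity range
    return median_filter(stretched, size=kernel)                     # suppress impulse noise

slice_ = np.random.rand(256, 256).astype(np.float32)  # stand-in for one MRI slice
clean = preprocess_slice(slice_)
```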
Affiliation(s)
- Chee Chin Lim
- Faculty of Electronic Engineering & Technology, Universiti Malaysia Perlis, Arau 02600, Perlis, Malaysia
- Sport Engineering Research Centre (SERC), Universiti Malaysia Perlis, Arau 02600, Perlis, Malaysia
- Apple Ho Wei Ling
- Faculty of Electronic Engineering & Technology, Universiti Malaysia Perlis, Arau 02600, Perlis, Malaysia
- Yen Fook Chong
- Sport Engineering Research Centre (SERC), Universiti Malaysia Perlis, Arau 02600, Perlis, Malaysia
- Mohd Yusoff Mashor
- Faculty of Electronic Engineering & Technology, Universiti Malaysia Perlis, Arau 02600, Perlis, Malaysia
- Sport Engineering Research Centre (SERC), Universiti Malaysia Perlis, Arau 02600, Perlis, Malaysia
- Mohd Ezane Aziz
- Department of Radiology, Universiti Sains Malaysia, Kubang Kerian 16150, Kelantan, Malaysia
5. Lv B, Liu F, Li Y, Nie J, Gou F, Wu J. Artificial Intelligence-Aided Diagnosis Solution by Enhancing the Edge Features of Medical Images. Diagnostics (Basel) 2023; 13:1063. PMID: 36980371; PMCID: PMC10047640; DOI: 10.3390/diagnostics13061063.
Abstract
Bone malignant tumors are metastatic and aggressive. The manual screening of medical images is time-consuming and laborious, and computer technology is now being introduced to aid in diagnosis. Because osteosarcoma MRI images contain a large amount of noise and have blurred lesion edges, high-precision segmentation methods require large computational resources and are difficult to use in developing countries with limited resources. Therefore, this study proposes an artificial intelligence-aided diagnosis scheme that enhances image edge features. First, a threshold screening filter (TSF) was used to pre-screen the MRI images and filter out redundant data. Then, a fast NLM algorithm was introduced for denoising. Finally, a segmentation method with edge enhancement (TBNet) was designed to segment the pre-processed images by fusing a Transformer into the U-Net architecture. TBNet is built on a U-Net without plain skip connections and includes a channel-edge cross-fusion transformer and a combined loss function. This solution optimizes diagnostic efficiency and addresses the problem of segmenting blurred edges, providing more help and reference for doctors diagnosing osteosarcoma. Results based on more than 4000 osteosarcoma MRI images show that the proposed method has a good segmentation effect and performance, with a Dice Similarity Coefficient (DSC) of 0.949, and that other evaluation indexes such as Intersection over Union (IoU) and recall are better than those of other methods.
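For reference, the DSC and IoU figures quoted here are the standard overlap metrics for binary segmentation masks; a minimal NumPy sketch of their definitions (not tied to this paper's code) is:

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8):
    """Dice similarity coefficient and IoU between two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (np.logical_or(pred, target).sum() + eps)
    return float(dice), float(iou)
```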
Affiliation(s)
- Baolong Lv
- School of Modern Service Management, Shandong Youth University of Political Science, Jinan 250102, China
- Feng Liu
- School of Information Engineering, Shandong Youth University of Political Science, Jinan 250102, China
- New Technology Research and Development Center of Intelligent Information Controlling in Universities of Shandong, Jinan 250103, China
- Yulin Li
- School of Modern Service Management, Shandong Youth University of Political Science, Jinan 250102, China
- Jianhua Nie
- Shandong Provincial People's Government Administration Guarantee Center, Jinan 250011, China
- Fangfang Gou
- School of Computer Science and Engineering, Central South University, Changsha 410017, China
- Jia Wu
- School of Computer Science and Engineering, Central South University, Changsha 410017, China
- Research Center for Artificial Intelligence, Monash University, Melbourne, VIC 3800, Australia
6. Mămuleanu M, Urhuț CM, Săndulescu LD, Kamal C, Pătrașcu AM, Ionescu AG, Șerbănescu MS, Streba CT. An Automated Method for Classifying Liver Lesions in Contrast-Enhanced Ultrasound Imaging Based on Deep Learning Algorithms. Diagnostics (Basel) 2023; 13:1062. PMID: 36980369; PMCID: PMC10047233; DOI: 10.3390/diagnostics13061062.
Abstract
BACKGROUND Contrast-enhanced ultrasound (CEUS) is an important imaging modality in the diagnosis of liver tumors. By using a contrast agent, a more detailed image is obtained. Time-intensity curves (TICs) can be extracted using specialized software, and the signal can then be analyzed for further investigation. METHODS The purpose of the study was to build an automated method for extracting TICs and classifying liver lesions in CEUS liver investigations. The cohort contained 50 anonymized video investigations from 49 patients. Besides the CEUS investigations, clinical data from the patients were provided. A method comprising three modules was proposed. The first module, a lesion segmentation deep learning (DL) model, handled the frame-by-frame prediction of masks (regions of interest). The second module performed dilation on the mask and, after applying a colormap to the image, extracted the TIC and its parameters (area under the curve, time to peak, mean transit time, and maximum intensity). The third module, a feed-forward neural network, predicted the final diagnosis. It was trained on the TIC parameters extracted by the second module, together with other data: gender, age, hepatitis history, and cirrhosis history. RESULTS For the feed-forward classifier, five classes were chosen: hepatocarcinoma, metastasis, other malignant lesions, hemangioma, and other benign lesions. Being a multiclass classifier, appropriate performance metrics were observed: categorical accuracy, F1 micro, F1 macro, and Matthews correlation coefficient. The results showed that, due to class imbalance, the classifier was in some cases unable to predict specific lesions from the minority classes with high accuracy. However, for the majority classes, the classifier can predict the lesion type with high accuracy. CONCLUSIONS The main goal of the study was to develop an automated method of classifying liver lesions in CEUS video investigations. Being modular, the system can be a useful tool for gastroenterologists or medical students, either as a second-opinion system or as a tool to automatically extract TICs.
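The TIC parameters named in the abstract (area under the curve, time to peak, maximum intensity, mean transit time) can be computed from a sampled curve roughly as follows; the mean-transit-time formula below is a first-moment approximation and clinical definitions vary, so this is an illustrative sketch only.

```python
import numpy as np

def tic_parameters(t: np.ndarray, intensity: np.ndarray) -> dict:
    """Descriptive parameters of a sampled time-intensity curve (TIC)."""
    auc = np.trapz(intensity, t)              # area under the curve
    peak = int(np.argmax(intensity))
    return {
        "area_under_curve": float(auc),
        "time_to_peak": float(t[peak]),
        "max_intensity": float(intensity[peak]),
        # first-moment approximation; not necessarily the definition used in the study
        "mean_transit_time": float(np.trapz(t * intensity, t) / auc),
    }

t = np.linspace(0, 120, 240)                  # seconds
curve = np.exp(-((t - 30.0) ** 2) / 200.0)    # synthetic wash-in/wash-out signal
print(tic_parameters(t, curve))
```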
Affiliation(s)
- Mădălin Mămuleanu
- Department of Automatic Control and Electronics, University of Craiova, 200585 Craiova, Romania
- Oncometrics S.R.L., 200677 Craiova, Romania
- Larisa Daniela Săndulescu
- Department of Gastroenterology, Research Center of Gastroenterology and Hepatology, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Constantin Kamal
- Oncometrics S.R.L., 200677 Craiova, Romania
- Department of Pulmonology, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Ana-Maria Pătrașcu
- Oncometrics S.R.L., 200677 Craiova, Romania
- Department of Hematology, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Alin Gabriel Ionescu
- Oncometrics S.R.L., 200677 Craiova, Romania
- Department of History of Medicine, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Mircea-Sebastian Șerbănescu
- Oncometrics S.R.L., 200677 Craiova, Romania
- Department of Medical Informatics and Statistics, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Costin Teodor Streba
- Oncometrics S.R.L., 200677 Craiova, Romania
- Department of Gastroenterology, Research Center of Gastroenterology and Hepatology, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Department of Pulmonology, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
7. Zhan X, Liu J, Long H, Zhu J, Tang H, Gou F, Wu J. An Intelligent Auxiliary Framework for Bone Malignant Tumor Lesion Segmentation in Medical Image Analysis. Diagnostics (Basel) 2023; 13:223. PMID: 36673032; PMCID: PMC9858155; DOI: 10.3390/diagnostics13020223.
Abstract
Bone malignant tumors are metastatic and aggressive, with poor treatment outcomes and prognosis. Rapid and accurate diagnosis is crucial for limb salvage and increasing the survival rate. There is a lack of research on deep learning methods for segmenting bone malignant tumor lesions in medical images with complex backgrounds and blurred boundaries. Therefore, we propose a new intelligent auxiliary framework for the medical image segmentation of bone malignant tumor lesions, which consists of a supervised edge-attention guidance segmentation network (SEAGNET). We design a boundary key points selection module to supervise the learning of edge attention in the model and retain fine-grained edge feature information. We precisely locate malignant tumors with instance segmentation networks while extracting feature maps of tumor lesions in medical images. The rich context-dependent information in the feature map is captured by mixed attention to better handle the uncertainty and ambiguity of the boundary, and edge attention learning is used to guide the segmentation network to focus on the fuzzy boundary of the tumor region. We conduct extensive experiments on real-world medical data to validate our model. The results demonstrate the superiority of our method over the latest segmentation methods, achieving the best performance in terms of the Dice similarity coefficient (0.967), precision (0.968), and accuracy (0.996), and prove the framework's important contribution to helping doctors improve diagnostic accuracy and clinical efficiency.
Affiliation(s)
- Xiangbing Zhan
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Jun Liu
- The Second People's Hospital of Huaihua, Huaihua 418000, China
- Huiyun Long
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Jun Zhu
- The First People's Hospital of Huaihua, Huaihua 418000, China
- Collaborative Innovation Center for Medical Artificial Intelligence and Big Data Decision Making Assistance, Hunan University of Medicine, Huaihua 418000, China
- Haoyu Tang
- The First People's Hospital of Huaihua, Huaihua 418000, China
- Fangfang Gou
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- The First People's Hospital of Huaihua, Huaihua 418000, China
- Collaborative Innovation Center for Medical Artificial Intelligence and Big Data Decision Making Assistance, Hunan University of Medicine, Huaihua 418000, China
- Jia Wu
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- The First People's Hospital of Huaihua, Huaihua 418000, China
- Collaborative Innovation Center for Medical Artificial Intelligence and Big Data Decision Making Assistance, Hunan University of Medicine, Huaihua 418000, China
- Research Center for Artificial Intelligence, Monash University, Melbourne, VIC 3800, Australia
8. Hu Y, Mohammad Mirzaei N, Shahriyari L. Bio-Mechanical Model of Osteosarcoma Tumor Microenvironment: A Porous Media Approach. Cancers (Basel) 2022; 14:6143. PMID: 36551627; PMCID: PMC9777270; DOI: 10.3390/cancers14246143.
Abstract
Osteosarcoma is the most common malignant bone tumor in children and adolescents, with a poor prognosis. To describe the progression of osteosarcoma, we expanded a system of data-driven ODEs from a previous study into a system of Reaction-Diffusion-Advection (RDA) equations and coupled it with the Biot equations of poroelasticity to form a bio-mechanical model. The RDA system includes the spatio-temporal information of the key components of the tumor microenvironment. The Biot equations comprise an equation for the solid phase, which governs the movement of the solid tumor, and an equation for the fluid phase, which relates to the motion of cells. The model predicts the total number of cells and cytokines in the tumor microenvironment and simulates the growth of the tumor's size. We simulated different scenarios with this model to investigate the impact of several biomedical settings on tumor growth. The results indicate the importance of macrophages in tumor growth. In particular, we observed a high co-localization of macrophages and cancer cells, and the concentration of tumor cells increases as the number of macrophages increases.
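For readers unfamiliar with the model classes mentioned here, a generic reaction-diffusion-advection equation for a species concentration c_i and the quasi-static Biot equations of poroelasticity (solid displacement u, pore pressure p) take the following form; the coefficients and reaction terms are placeholders, and the paper's specific formulation differs.

```latex
% Generic RDA transport with velocity field v, diffusivity D_i and reaction term R_i,
% followed by quasi-static Biot poroelasticity; symbols here are illustrative placeholders.
\begin{align}
  \frac{\partial c_i}{\partial t} + \nabla \cdot (\mathbf{v}\, c_i)
    &= \nabla \cdot \bigl(D_i \nabla c_i\bigr) + R_i(c_1,\dots,c_n), \\
  -\nabla \cdot \bigl[\,2\mu\,\varepsilon(\mathbf{u})
    + \lambda (\nabla \cdot \mathbf{u})\,\mathbf{I} - \alpha p\,\mathbf{I}\,\bigr]
    &= \mathbf{f}, \\
  \frac{\partial}{\partial t}\bigl(S_{\varepsilon}\, p + \alpha \nabla \cdot \mathbf{u}\bigr)
    - \nabla \cdot \Bigl(\frac{\kappa}{\mu_f}\,\nabla p\Bigr) &= q.
\end{align}
```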
9. Tang H, Huang H, Liu J, Zhu J, Gou F, Wu J. AI-Assisted Diagnosis and Decision-Making Method in Developing Countries for Osteosarcoma. Healthcare (Basel) 2022; 10:2313. PMID: 36421636; PMCID: PMC9690527; DOI: 10.3390/healthcare10112313.
Abstract
Osteosarcoma is a malignant tumor derived from primitive osteogenic mesenchymal cells; it is extremely harmful to the human body and has a high mortality rate. Early diagnosis and treatment of this disease are necessary to improve the survival rate of patients, and MRI is an effective tool for detecting osteosarcoma. However, due to the complex structure and variable location of osteosarcoma, cancer cells are highly heterogeneous and prone to aggregation and overlap, making it easy for doctors to misjudge the lesion area. In addition, in developing countries lacking professional medical systems, doctors need to examine large numbers of osteosarcoma MRI images, which is time-consuming and inefficient and may result in misjudgment and omission. To reduce labor costs and improve detection efficiency, this paper proposes an Attention Condenser-based MRI image segmentation system for osteosarcoma (OMSAS), which can help physicians quickly locate the lesion area and achieve accurate segmentation of the osteosarcoma tumor region. Using the idea of AttendSeg, we constructed an Attention Condenser-based residual structure network (ACRNet), which greatly reduces structural complexity and hardware requirements while ensuring the accuracy of image segmentation. The model was tested on more than 4000 samples from two hospitals in China. The experimental results demonstrate that our model achieves higher efficiency and accuracy with a lighter structure for osteosarcoma MRI image segmentation than other existing models.
Affiliation(s)
- Haojun Tang
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Hui Huang
- The First People's Hospital of Huaihua, Huaihua 418000, China
- Jun Liu
- The Second People's Hospital of Huaihua, Huaihua 418000, China
- Jun Zhu
- The First People's Hospital of Huaihua, Huaihua 418000, China
- Collaborative Innovation Center for Medical Artificial Intelligence and Big Data Decision Making Assistance, Hunan University of Medicine, Huaihua 418000, China
- Fangfang Gou
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Jia Wu
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- The First People's Hospital of Huaihua, Huaihua 418000, China
- Collaborative Innovation Center for Medical Artificial Intelligence and Big Data Decision Making Assistance, Hunan University of Medicine, Huaihua 418000, China
- Research Center for Artificial Intelligence, Monash University, Melbourne, Clayton, VIC 3800, Australia
10. Liu F, Zhu J, Lv B, Yang L, Sun W, Dai Z, Gou F, Wu J. Auxiliary Segmentation Method of Osteosarcoma MRI Image Based on Transformer and U-Net. Comput Intell Neurosci 2022; 2022:9990092. PMID: 36419505; PMCID: PMC9678467; DOI: 10.1155/2022/9990092.
Abstract
One of the most prevalent malignant bone tumors is osteosarcoma. Its diagnosis and treatment cycle is long and its prognosis is poor. Manually identifying osteosarcoma from magnetic resonance imaging (MRI) takes a great deal of time. Medical image processing technology has greatly alleviated the problems faced in medical diagnosis. However, MRI images of osteosarcoma are characterized by high noise and blurred edges, and these complex features increase the difficulty of identifying the lesion area. Therefore, this study proposes an osteosarcoma MRI image segmentation method (OSTransnet) based on Transformer and U-Net. This technique primarily addresses the issues of fuzzy tumor-edge segmentation and overfitting brought on by data noise. First, we optimize the dataset by adjusting the spatial distribution of noise and applying image rotation for data augmentation. The tumor is then segmented by a model based on U-Net and Transformer with edge improvement, which compensates for the semantic limitations of U-Net by using channel-based transformers. Finally, we add an edge enhancement module (BAB) and a combined loss function to improve the performance of edge segmentation. The detection and training results based on more than 4,000 MRI images of osteosarcoma demonstrate the method's accuracy and stability, as well as its value as an adjunct to clinical diagnosis and treatment.
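The "combined loss function" mentioned above is not specified in the abstract; a common choice in edge-sensitive segmentation work is a weighted sum of binary cross-entropy and soft Dice loss, sketched here as an assumption rather than the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def combined_loss(logits: torch.Tensor, target: torch.Tensor,
                  dice_weight: float = 0.5, eps: float = 1e-6) -> torch.Tensor:
    """Weighted sum of BCE and soft Dice loss for a binary segmentation mask."""
    prob = torch.sigmoid(logits)
    bce = F.binary_cross_entropy_with_logits(logits, target)
    inter = (prob * target).sum()
    dice = 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)
    return (1.0 - dice_weight) * bce + dice_weight * dice

logits = torch.randn(2, 1, 128, 128)                 # raw network outputs
target = (torch.rand(2, 1, 128, 128) > 0.5).float()  # dummy ground-truth mask
loss = combined_loss(logits, target)
```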
Affiliation(s)
- Feng Liu
- School of Information Engineering, Shandong Youth University of Political Science, Jinan, Shandong, China
- New Technology Research and Development Center of Intelligent Information Controlling in Universities of Shandong, Jinan 250103, China
- Jun Zhu
- The First People's Hospital of Huaihua, Huaihua 418000, Hunan, China
- Collaborative Innovation Center for Medical Artificial Intelligence and Big Data Decision Making Assistance, Hunan University of Medicine, Huaihua 418000, Hunan, China
- Baolong Lv
- School of Modern Service Management, Shandong Youth University of Political Science, Jinan, China
- Lei Yang
- School of Computer Science and Technology, Shandong Jianzhu University, Jinan, China
- Wenyan Sun
- School of Information Engineering, Shandong Youth University of Political Science, Jinan, Shandong, China
- Zhehao Dai
- Department of Spine Surgery, The Second Xiangya Hospital, Central South University, Changsha 410011, China
- Fangfang Gou
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Jia Wu
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Research Center for Artificial Intelligence, Monash University, Melbourne, Clayton, Victoria 3800, Australia
11. Gou F, Liu J, Zhu J, Wu J. A Multimodal Auxiliary Classification System for Osteosarcoma Histopathological Images Based on Deep Active Learning. Healthcare (Basel) 2022; 10:2189. PMID: 36360530; PMCID: PMC9690420; DOI: 10.3390/healthcare10112189.
Abstract
Histopathological examination is an important criterion in the clinical diagnosis of osteosarcoma. With the improvement of hardware technology and computing power, pathological image analysis systems based on artificial intelligence have been widely used. However, classifying numerous intricate pathology images by hand is a tiresome task for pathologists, and the lack of labeled data makes such systems costly and difficult to build. This study constructs a classification assistance system (OHIcsA) based on active learning (AL) and a generative adversarial network (GAN). The system initially uses a small, labeled training set to train the classifier. Then, the most informative samples from the unlabeled images are selected for expert annotation. Finally, the chosen images are added to the initial labeled dataset to retrain the network. Experiments on real datasets show that our proposed method achieves high classification performance, with an AUC of 0.995 and an accuracy of 0.989, using a small amount of labeled data, which reduces the cost of building a medical system. The system's findings can aid clinical diagnosis and increase the effectiveness and verifiable accuracy of doctors' work.
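The abstract does not state which acquisition function OHIcsA uses to pick "the most informative samples"; entropy-based uncertainty sampling is one common choice in active learning and is sketched below purely for illustration.

```python
import numpy as np

def select_most_informative(probs: np.ndarray, k: int) -> np.ndarray:
    """Indices of the k unlabeled samples with the highest predictive entropy.

    probs: (n_samples, n_classes) softmax outputs of the current classifier.
    """
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(entropy)[::-1][:k]

# One active-learning round: score the unlabeled pool, query experts, then retrain.
pool_probs = np.random.dirichlet(np.ones(3), size=1000)  # stand-in model outputs
query_ids = select_most_informative(pool_probs, k=32)
```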
Affiliation(s)
- Fangfang Gou
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Jun Liu
- The Second People's Hospital of Huaihua, Huaihua 418000, China
- Jun Zhu
- The First People's Hospital of Huaihua, Huaihua 418000, China
- Collaborative Innovation Center for Medical Artificial Intelligence and Big Data Decision Making Assistance, Hunan University of Medicine, Huaihua 418000, China
- Jia Wu
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Research Center for Artificial Intelligence, Monash University, Melbourne, VIC 3800, Australia
12. Wu J, Zhou L, Gou F, Tan Y. A Residual Fusion Network for Osteosarcoma MRI Image Segmentation in Developing Countries. Comput Intell Neurosci 2022; 2022:7285600. PMID: 35965771; PMCID: PMC9365532; DOI: 10.1155/2022/7285600.
Abstract
Among primary bone cancers, osteosarcoma is the most common, with incidence peaking during the period of rapid bone growth in childhood and adolescence. The diagnosis of osteosarcoma requires observing the radiological appearance of the affected bones. A common approach is MRI, but the manual reading of MRI images is prone to observer bias and inaccuracy and is rather time-consuming. MRI images of osteosarcoma contain semantic information at several different resolutions, which is often ignored by current segmentation techniques, leading to low generalizability and accuracy. Meanwhile, the boundaries between osteosarcoma and bone or other tissues are sometimes too ambiguous to separate, making it challenging for inexperienced doctors to delineate them. In this paper, we propose a multiscale residual fusion network to handle the MRI images. We place a novel subnetwork after the encoders to exchange information between the feature maps of different resolutions and fuse the information they contain. The outputs are then directed to both the decoders and a shape flow block used to improve the spatial accuracy of the segmentation map. We tested our approach on over 80,000 osteosarcoma MRI images from the PET-CT center of a well-known hospital in China. The results show that it can significantly improve the effectiveness of the semantic segmentation of osteosarcoma images: our method achieves higher F1, DSC, and IoU than other models while maintaining a comparable number of parameters and FLOPs.
Affiliation(s)
- Jia Wu
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Research Center for Artificial Intelligence, Monash University, Melbourne, Clayton, VIC 3800, Australia
- Luting Zhou
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Fangfang Gou
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Yanlin Tan
- PET-CT Center, The Second Xiangya Hospital of Central South University, Changsha 410083, China
13. Wu J, Liu Z, Gou F, Zhu J, Tang H, Zhou X, Xiong W. BA-GCA Net: Boundary-Aware Grid Contextual Attention Net in Osteosarcoma MRI Image Segmentation. Comput Intell Neurosci 2022; 2022:3881833. PMID: 35942441; PMCID: PMC9356797; DOI: 10.1155/2022/3881833.
Abstract
Osteosarcoma is one of the most common bone tumors occurring in adolescents. Doctors often use magnetic resonance imaging (MRI) acquired through biosensors to diagnose and predict osteosarcoma. However, in many osteosarcoma MRI images the tumor boundary is vague, complex, or irregular, which makes diagnosis difficult for doctors and also causes some deep learning methods to lose segmentation details and fail to locate the osteosarcoma region. In this article, we propose a novel boundary-aware grid contextual attention net (BA-GCA Net) to solve the problem of insufficient accuracy in osteosarcoma MRI image segmentation. First, a novel grid contextual attention (GCA) is designed to better capture the texture details of the tumor area. Then the statistical texture learning block (STLB) and the spatial transformer block (STB) are integrated into the network to improve its ability to extract statistical texture features and locate tumor areas. Over 80,000 MRI images of osteosarcoma from the Second Xiangya Hospital were adopted as the dataset for training, testing, and ablation studies. Results show that our proposed method achieves higher segmentation accuracy than existing methods with only a slight increase in the number of parameters and computational complexity.
Affiliation(s)
- Jia Wu
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Research Center for Artificial Intelligence, Monash University, Melbourne, Clayton, VIC 3800, Australia
- The First People's Hospital of Huaihua, Huaihua, Hunan, China
- Collaborative Innovation Center for Medical Artificial Intelligence and Big Data Decision Making Assistance, Hunan University of Medicine, Changsha, China
- Zikang Liu
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Fangfang Gou
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Jun Zhu
- The First People's Hospital of Huaihua, Huaihua, Hunan, China
- Collaborative Innovation Center for Medical Artificial Intelligence and Big Data Decision Making Assistance, Hunan University of Medicine, Changsha, China
- Haoyu Tang
- The First People's Hospital of Huaihua, Huaihua, Hunan, China
- Collaborative Innovation Center for Medical Artificial Intelligence and Big Data Decision Making Assistance, Hunan University of Medicine, Changsha, China
- Xian Zhou
- Jiangxi University of Chinese Medicine, Nanchang 330004, Jiangxi, China
- Wangping Xiong
- Jiangxi University of Chinese Medicine, Nanchang 330004, Jiangxi, China