1
Liu P, Zheng G. CVCL: Context-aware Voxel-wise Contrastive Learning for label-efficient multi-organ segmentation. Comput Biol Med 2023;160:106995. PMID: 37187134. DOI: 10.1016/j.compbiomed.2023.106995.
Abstract
Despite the significant performance improvements that supervised deep learning-based methods have brought to multi-organ segmentation, their label-hungry nature hinders their application in practical disease diagnosis and treatment planning. Because obtaining expert-level, densely annotated multi-organ datasets is challenging, label-efficient segmentation, such as partially supervised segmentation trained on partially labeled datasets or semi-supervised medical image segmentation, has attracted increasing attention recently. However, most of these methods neglect or underestimate the challenging unlabeled regions during model training. To this end, we propose a novel Context-aware Voxel-wise Contrastive Learning method, referred to as CVCL, that takes full advantage of both labeled and unlabeled information in label-scarce datasets to improve multi-organ segmentation. Experimental results demonstrate that our proposed method achieves superior performance to other state-of-the-art methods.
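For orientation, a minimal PyTorch sketch of a voxel-wise InfoNCE-style contrastive loss of the general kind the abstract refers to; the context-aware positive/negative sampling specific to CVCL is not reproduced, and the function name, tensor shapes, and temperature are assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def voxel_contrastive_loss(embeddings, labels, temperature=0.1):
    """InfoNCE-style loss over a set of sampled voxel embeddings.

    embeddings: (N, D) voxel feature vectors sampled from the decoder output
    labels:     (N,)  organ/class index assigned to each sampled voxel
    """
    z = F.normalize(embeddings, dim=1)                 # cosine-similarity space
    sim = z @ z.t() / temperature                      # (N, N) similarity logits
    self_mask = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask

    sim = sim.masked_fill(self_mask, float('-inf'))    # never contrast a voxel with itself
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    # average log-probability of pulling each voxel toward its same-class voxels
    loss = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_count
    return loss.mean()
```

In a label-efficient setting, unlabeled voxels would typically contribute through pseudo-labels or augmented views rather than ground-truth classes.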
Affiliation(s)
- Peng Liu: Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, No. 800, Dongchuan Road, Shanghai, 200240, China
- Guoyan Zheng: Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, No. 800, Dongchuan Road, Shanghai, 200240, China
2
Xu X, Du J, Song J, Xue Z, Li A, Guan Z. Cluster-aware multiplex InfoMax for unsupervised graph representation learning. Neurocomputing 2023;532:94-105. DOI: 10.1016/j.neucom.2023.02.036.
3
Xie T, Wang Z, Li H, Wu P, Huang H, Zhang H, Alsaadi FE, Zeng N. Progressive attention integration-based multi-scale efficient network for medical imaging analysis with application to COVID-19 diagnosis. Comput Biol Med 2023;159:106947. PMID: 37099976. DOI: 10.1016/j.compbiomed.2023.106947.
Abstract
In this paper, a novel deep learning-based medical imaging analysis framework is developed to address the insufficient feature learning caused by imperfect imaging data. Named the multi-scale efficient network (MEN), the proposed method integrates different attention mechanisms to extract both detailed features and semantic information in a progressive learning manner. In particular, a fused-attention block is designed to extract fine-grained details from the input, where the squeeze-excitation (SE) attention mechanism is applied to make the model focus on potential lesion areas. A multi-scale low information loss (MSLIL)-attention block is proposed to compensate for potential global information loss and enhance the semantic correlations among features, where the efficient channel attention (ECA) mechanism is adopted. The proposed MEN is comprehensively evaluated on two COVID-19 diagnostic tasks, and the results show that, compared with other advanced deep learning models, the proposed method is competitive in accurate COVID-19 recognition, yielding the best accuracies of 98.68% and 98.85% on the two tasks, respectively, and exhibiting satisfactory generalization ability.
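A minimal PyTorch sketch of the two attention primitives named in the abstract, squeeze-excitation (SE) and efficient channel attention (ECA); how MEN wires them into its fused-attention and MSLIL blocks is not shown, and the reduction ratio and kernel size are assumed defaults.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-excitation: re-weight channels from globally pooled statistics."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                       # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))                  # squeeze: global average pool
        w = self.fc(w).view(x.size(0), -1, 1, 1)
        return x * w                            # excite: rescale channels

class ECABlock(nn.Module):
    """Efficient channel attention: 1-D conv over the pooled channel descriptor."""
    def __init__(self, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        w = x.mean(dim=(2, 3)).unsqueeze(1)     # (B, 1, C)
        w = torch.sigmoid(self.conv(w)).transpose(1, 2).unsqueeze(-1)
        return x * w                            # (B, C, 1, 1) weights broadcast over H, W
```

For example, `SEBlock(64)(torch.rand(1, 64, 32, 32))` returns a tensor of the same shape with channels rescaled.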
Affiliation(s)
- Tingyi Xie: School of Opto-electronic and Communication Engineering, Xiamen University of Technology, Xiamen 361024, China
- Zidong Wang: Department of Computer Science, Brunel University London, Uxbridge UB8 3PH, UK
- Han Li: Department of Instrumental and Electrical Engineering, Xiamen University, Fujian 361005, China
- Peishu Wu: Department of Instrumental and Electrical Engineering, Xiamen University, Fujian 361005, China
- Huixiang Huang: School of Opto-electronic and Communication Engineering, Xiamen University of Technology, Xiamen 361024, China
- Hongyi Zhang: School of Opto-electronic and Communication Engineering, Xiamen University of Technology, Xiamen 361024, China
- Fuad E Alsaadi: Communication Systems and Networks Research Group, Department of Electrical and Computer Engineering, Faculty of Engineering, King Abdulaziz University, Jeddah, Saudi Arabia
- Nianyin Zeng: Department of Instrumental and Electrical Engineering, Xiamen University, Fujian 361005, China
4
Rammal A, Ezukwoke K, Hoayek A, Batton-Hubert M. Root cause prediction for failures in semiconductor industry, a genetic algorithm-machine learning approach. Sci Rep 2023;13:4934. PMID: 36973298. DOI: 10.1038/s41598-023-30769-8.
Abstract
Failure analysis has become an important part of guaranteeing good quality in the electronic component manufacturing process. The conclusions of a failure analysis can be used to identify a component's flaws and to better understand the mechanisms and causes of failure, allowing remedial steps to be implemented to improve the product's quality and reliability. A failure reporting, analysis, and corrective action system is a method for organizations to report, classify, and evaluate failures, as well as plan corrective actions. These textual datasets must first be preprocessed with natural language processing techniques and converted to numeric form by vectorization methods before information can be extracted and predictive models built to predict the failure conclusion from a given failure description. However, not all textual information is useful for building predictive models suitable for failure analysis. Feature selection has been addressed with several variable selection methods, but some have not been adapted to large datasets or are difficult to tune, and others are not applicable to textual data. This article aims to develop a predictive model able to predict failure conclusions from the discriminating features of the failure descriptions. To this end, we combine a genetic algorithm with supervised learning methods for an optimal prediction of failure conclusions in terms of the discriminant features of failure descriptions. Since the dataset is unbalanced, we use the F1 score of supervised classifiers, namely a decision tree classifier and a support vector machine, as the fitness function. The resulting algorithms are called GA-DT and GA-SVM. Experiments on failure analysis textual datasets demonstrate the effectiveness of the proposed GA-DT method in creating a better predictive model of the failure conclusion than using the entire set of textual features or the limited features selected by a genetic algorithm based on an SVM. Quantitative measures such as the BLEU score and cosine similarity are used to compare the prediction performance of the different approaches.
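A minimal scikit-learn sketch of the fitness evaluation implied by the GA-DT variant: a chromosome is a binary mask over the vectorized text features, scored by the macro F1 of a decision tree on the selected columns. The GA loop itself (selection, crossover, mutation) and all names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def f1_fitness(chromosome, X, y, cv=3):
    """Fitness of one GA individual: macro F1 of a decision tree trained
    on the feature columns that the binary chromosome switches on."""
    selected = np.flatnonzero(chromosome)
    if selected.size == 0:          # an empty feature subset is worthless
        return 0.0
    clf = DecisionTreeClassifier(random_state=0)
    scores = cross_val_score(clf, X[:, selected], y, cv=cv, scoring='f1_macro')
    return scores.mean()

# Inside a GA loop one would rank the population by f1_fitness, keep the best
# masks, recombine and mutate them, and iterate until the score plateaus.
```

The macro F1 objective, rather than accuracy, is what makes the search robust to the unbalanced class distribution mentioned in the abstract.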
Affiliation(s)
- Abbas Rammal: Ecole des Mines de Saint-Etienne, Mathematics and Industrial Engineering, Organisation and Environmental Engineering, Henri FAYOL Institute, 42023, Saint-Etienne, France
- Kenneth Ezukwoke: Ecole des Mines de Saint-Etienne, Mathematics and Industrial Engineering, Organisation and Environmental Engineering, Henri FAYOL Institute, 42023, Saint-Etienne, France
- Anis Hoayek: Ecole des Mines de Saint-Etienne, Mathematics and Industrial Engineering, Organisation and Environmental Engineering, Henri FAYOL Institute, 42023, Saint-Etienne, France
- Mireille Batton-Hubert: Ecole des Mines de Saint-Etienne, Mathematics and Industrial Engineering, Organisation and Environmental Engineering, Henri FAYOL Institute, 42023, Saint-Etienne, France
5
Hwang JH, Lim M, Han G, Park H, Kim YB, Park J, Jun SY, Lee J, Cho JW. Preparing pathological data to develop an artificial intelligence model in the nonclinical study. Sci Rep 2023;13:3896. PMID: 36890209. DOI: 10.1038/s41598-023-30944-x.
Abstract
Artificial intelligence (AI)-based analysis has recently been adopted for the examination of histological slides via the digitization of glass slides using a digital scanner. In this study, we examined the effect of varying the staining color tone and magnification level of a dataset on the predictions of an AI model for hematoxylin and eosin stained whole slide images (WSIs). WSIs of liver tissues with fibrosis were used as an example, and three datasets (N20, B20, and B10) were prepared with different color tones and magnifications. Using these datasets, we built five models by training the Mask R-CNN algorithm on a single dataset or on mixed combinations of N20, B20, and B10, and we evaluated their performance on the test sets of all three datasets. The models trained on mixed datasets (models B20/N20 and B10/B20), which combine different color tones or magnifications, performed better than the models trained on a single dataset, and this superiority was confirmed in the actual predictions on the test images. We suggest that training the algorithm with image datasets of various staining color tones and multiple scales is better suited to achieving consistently strong performance in predicting the pathological lesions of interest.
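The core recommendation, training one model on tiles pooled from datasets with different color tones and magnifications rather than on a single dataset, reduces to concatenating datasets before building the data loader. The sketch below uses random tensors as stand-ins for real annotated WSI tiles and is not the study's pipeline.

```python
import torch
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

# Random stand-ins for tile datasets that differ in staining color tone
# and magnification (e.g. N20 vs. B20 in the study).
n20 = TensorDataset(torch.rand(64, 3, 256, 256), torch.randint(0, 2, (64,)))
b20 = TensorDataset(torch.rand(64, 3, 256, 256), torch.randint(0, 2, (64,)))

mixed = ConcatDataset([n20, b20])        # a "B20/N20"-style mixed training set
loader = DataLoader(mixed, batch_size=4, shuffle=True)

for images, targets in loader:           # every batch now mixes both domains
    pass                                 # the Mask R-CNN training step would go here
```

Shuffling the concatenated set is what exposes the model to both color tones within each batch, which is the property the study credits for the better generalization.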
Affiliation(s)
- Ji-Hee Hwang: Toxicologic Pathology Research Group, Department of Advanced Toxicology Research, Korea Institute of Toxicology, Daejeon, 34114, Korea
- Minyoung Lim: Toxicologic Pathology Research Group, Department of Advanced Toxicology Research, Korea Institute of Toxicology, Daejeon, 34114, Korea
- Gyeongjin Han: Toxicologic Pathology Research Group, Department of Advanced Toxicology Research, Korea Institute of Toxicology, Daejeon, 34114, Korea
- Heejin Park: Toxicologic Pathology Research Group, Department of Advanced Toxicology Research, Korea Institute of Toxicology, Daejeon, 34114, Korea
- Yong-Bum Kim: Department of Advanced Toxicology Research, Korea Institute of Toxicology, Daejeon, 34114, Korea
- Jinseok Park: Research and Development Team, LAC Inc., Seoul, 07807, Korea
- Sang-Yeop Jun: Research and Development Team, LAC Inc., Seoul, 07807, Korea
- Jaeku Lee: Research and Development Team, LAC Inc., Seoul, 07807, Korea
- Jae-Woo Cho: Toxicologic Pathology Research Group, Department of Advanced Toxicology Research, Korea Institute of Toxicology, Daejeon, 34114, Korea
6
Liao X, Liu Z, Zheng X, Ping Z, He X. Wind power prediction based on periodic characteristic decomposition and multi-layer attention network. Neurocomputing 2023. DOI: 10.1016/j.neucom.2023.02.061.
7
Liu M, Wang Z, Li H, Wu P, Alsaadi FE, Zeng N. AA-WGAN: Attention augmented Wasserstein generative adversarial network with application to fundus retinal vessel segmentation. Comput Biol Med 2023. PMID: 37019013. DOI: 10.1016/j.compbiomed.2023.106874.
Abstract
In this paper, a novel attention augmented Wasserstein generative adversarial network (AA-WGAN) is proposed for fundus retinal vessel segmentation, where a U-shaped network with attention augmented convolution and a squeeze-excitation module serves as the generator. In particular, the complex vascular structures make some tiny vessels hard to segment, while the proposed AA-WGAN can effectively handle such imperfect data properties, as the applied attention augmented convolution captures the dependencies among pixels in the whole image to highlight the regions of interest. By applying the squeeze-excitation module, the generator is able to attend to the important channels of the feature maps while suppressing useless information. In addition, the gradient penalty method is adopted in the WGAN backbone to alleviate the generation of large amounts of repeated images caused by excessive concentration on accuracy. The proposed model is comprehensively evaluated on three datasets, DRIVE, STARE, and CHASE_DB1, and the results show that AA-WGAN is a competitive vessel segmentation model compared with several other advanced models, obtaining accuracies of 96.51%, 97.19%, and 96.94% on the respective datasets. The effectiveness of the key components is validated by an ablation study, which also shows that the proposed AA-WGAN has considerable generalization ability.
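A minimal PyTorch sketch of the gradient penalty term used in a WGAN-GP backbone of the kind the abstract mentions; the attention-augmented generator and critic architectures are outside the scope of this snippet, and the penalty weight shown in the comment is the common default, not necessarily the paper's choice.

```python
import torch

def gradient_penalty(critic, real, fake, device='cpu'):
    """WGAN-GP penalty: push the critic's gradient norm toward 1 along
    random interpolations between real and generated images."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(outputs=scores, inputs=interp,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True, retain_graph=True)[0]
    grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)
    return ((grad_norm - 1) ** 2).mean()

# Typical critic objective:
#   critic(fake).mean() - critic(real).mean() + 10 * gradient_penalty(critic, real, fake)
```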
8
Waldauf P, Scales N, Shahin J, Schmidt M, van Beinum A, Hornby L, Shemie SD, Hogue M, Wind TJ, van Mook W, Dhanani S, Duska F. Machine learning determination of motivators of terminal extubation during the transition to end-of-life care in intensive care unit. Sci Rep 2023;13:2632. PMID: 36788319. DOI: 10.1038/s41598-023-29042-9.
Abstract
Procedural aspects of compassionate care such as terminal extubation are understudied. We used machine learning methods to determine the factors associated with the decision to extubate critically ill patients at the end of life, and whether terminal extubation shortens the dying process. We performed a secondary analysis of a large, prospective, multicentre cohort study, Death Prediction and Physiology after Removal of Therapy (DePPaRT), which collected baseline data as well as ECG, pulse oximeter, and arterial waveforms from the withdrawal of life-sustaining therapy (WLST) until 30 min after death. We analysed a priori defined factors associated with the decision to perform terminal extubation during WLST using the random forest method and logistic regression. Cox regression was used to analyse the effect of terminal extubation on the time from WLST to death. A total of 616 patients were included in the analysis, of whom 396 (64.3%) were terminally extubated. The study centre, low or no vasopressor support, and good respiratory function were significantly associated with the decision to extubate. Unadjusted time to death did not differ between patients with and without extubation (median survival time extubated vs. not extubated: 60 [95% CI: 46; 76] vs. 58 [95% CI: 45; 75] min). In contrast, after adjustment for confounders, the time to death of extubated patients was significantly shorter (49 [95% CI: 40; 62] vs. 85 [95% CI: 61; 115] min). The decision to terminally extubate is associated with specific centres and with less respiratory and/or vasopressor support. In this context, terminal extubation was associated with a shorter time to death.
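A minimal sketch, using scikit-learn and the lifelines package, of the general analysis pattern the abstract describes: a random forest ranking factors associated with the extubation decision and a Cox model for time from WLST to death. The data frame, column names, and values below are illustrative assumptions, not the study data.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from lifelines import CoxPHFitter

# Hypothetical analysis frame: one row per patient after WLST.
df = pd.DataFrame({
    'extubated':         [1, 0, 1, 0, 1, 1, 0, 0],
    'vasopressor_dose':  [0.0, 0.3, 0.0, 0.5, 0.1, 0.0, 0.4, 0.2],
    'spo2_fio2_ratio':   [310, 180, 290, 150, 260, 330, 170, 200],
    'time_to_death_min': [49, 85, 100, 90, 55, 47, 52, 95],
    'died':              [1, 1, 1, 1, 1, 1, 1, 1],
})

# Which factors are associated with the decision to extubate?
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(df[['vasopressor_dose', 'spo2_fio2_ratio']], df['extubated'])
print(dict(zip(['vasopressor_dose', 'spo2_fio2_ratio'], rf.feature_importances_)))

# Does extubation shorten time from WLST to death, adjusting for covariates?
cph = CoxPHFitter()
cph.fit(df[['time_to_death_min', 'died', 'extubated', 'vasopressor_dose']],
        duration_col='time_to_death_min', event_col='died')
cph.print_summary()
```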
9
Wu P, Wang Z, Zheng B, Li H, Alsaadi FE, Zeng N. AGGN: Attention-based glioma grading network with multi-scale feature extraction and multi-modal information fusion. Comput Biol Med 2023;152:106457. PMID: 36571937. DOI: 10.1016/j.compbiomed.2022.106457.
Abstract
In this paper, a novel magnetic resonance imaging (MRI)-oriented attention-based glioma grading network (AGGN) is proposed. By applying a dual-domain attention mechanism, both channel and spatial information are considered when assigning weights, which helps highlight the key modalities and locations in the feature maps. Multi-branch convolution and pooling operations are applied in a multi-scale feature extraction module to separately obtain shallow and deep features from each modality, and a multi-modal information fusion module merges the low-level detailed and high-level semantic features, promoting synergistic interaction among the different modalities. The proposed AGGN is comprehensively evaluated through extensive experiments, and the results demonstrate its effectiveness and superiority over other advanced models, as well as its high generalization ability and strong robustness. In addition, even without manually labeled tumor masks, AGGN achieves performance comparable to other state-of-the-art algorithms, which alleviates the excessive reliance on supervised information in the end-to-end learning paradigm.
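A minimal PyTorch sketch of a dual-domain (channel followed by spatial) attention block of the kind the abstract describes; this is the generic CBAM-style pattern rather than AGGN's exact design, and the channel count, reduction ratio, and kernel size are assumptions.

```python
import torch
import torch.nn as nn

class DualDomainAttention(nn.Module):
    """Channel attention followed by spatial attention over a feature map."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels))
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                                   # x: (B, C, H, W)
        # channel weights: which modality-derived feature maps matter
        avg = self.channel_mlp(x.mean(dim=(2, 3)))
        mx = self.channel_mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(x.size(0), -1, 1, 1)
        # spatial weights: which locations (e.g. tumor regions) matter
        stacked = torch.cat([x.mean(dim=1, keepdim=True),
                             x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial_conv(stacked))
```

For instance, `DualDomainAttention(128)(torch.rand(2, 128, 40, 40))` returns a re-weighted feature map of the same shape, ready for a downstream fusion module.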