1. Sletta Ø, Cheema A, Marthinsen AJ, Andreassen IM, Sletten CM, Galtung IT, Soler A, Molinas M. Newly identified Phonocardiography frequency bands for psychological stress detection with Deep Wavelet Scattering Network. Comput Biol Med 2024; 178:108722. [PMID: 38889628 DOI: 10.1016/j.compbiomed.2024.108722] [Received: 11/13/2023] [Revised: 04/10/2024] [Accepted: 06/06/2024]
Abstract
Timely detection of psychological stress can improve quality of life by preventing stress-induced behavioral and pathological consequences. This paper presents a novel framework that eliminates the need for electrocardiography (ECG)-based referencing of phonocardiography (PCG) signals in psychological stress detection. This stand-alone PCG-based methodology applies a wavelet scattering approach to data acquired from twenty-eight healthy adult male and female subjects. The acquired PCG signals are asynchronously segmented for analysis with the wavelet scattering transform. After removal of the noise bands, the optimized segment length (L) and the scattering network parameters, namely the invariance scale (J) and quality factor (Q), are used to compute the scattering features. The resulting scattering coefficients are fed to K-nearest neighbor (KNN) and Extreme Gradient Boosting (XGBoost) classifiers; with the XGBoost classifier, ten-fold cross-validation yields an accuracy of 94.30 %, sensitivity of 97.96 %, specificity of 88.01 %, and area under the curve (AUC) of 0.9298 for detecting psychological stress. Most importantly, the framework also identifies two frequency bands in PCG signals with high discriminatory power for psychological stress detection: 270-290 Hz and 380-390 Hz. Eliminating multi-modal data acquisition and analysis makes this approach cost-efficient and reduces computational complexity.
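As an illustrative sketch of the feature-extraction step described above (not the authors' implementation), the following pure-NumPy code computes first-order scattering-style features: a bank of Gaussian band-pass filters (Q per octave over J octaves) is applied in the frequency domain, the modulus nonlinearity is taken, and each envelope is averaged into a translation-invariant coefficient. The filter shapes, the toy 280 Hz / 385 Hz test signal, and all parameter values are assumptions for demonstration only.

```python
import numpy as np

def scattering_features(x, J=6, Q=2, fs=1000.0):
    """First-order wavelet-scattering-style features (illustrative sketch).

    Builds Q Gaussian band-pass filters per octave over J octaves in the
    frequency domain, takes the modulus of each filtered signal, and
    averages it over time -- the translation-invariant pooling step of a
    scattering transform.
    """
    n = len(x)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    X = np.fft.rfft(x)
    f_max = fs / 2
    feats = []
    for j in range(J):
        for q in range(Q):
            # geometrically spaced centre frequency for this band
            fc = f_max * 2.0 ** (-(j + q / Q + 1))
            bw = fc / Q  # narrower bands for larger Q
            H = np.exp(-0.5 * ((freqs - fc) / bw) ** 2)
            band = np.fft.irfft(X * H, n)
            env = np.abs(band)        # modulus nonlinearity
            feats.append(env.mean())  # low-pass / time average
    return np.array(feats)

# toy PCG-like signal with energy near the two reported bands, plus noise
rng = np.random.default_rng(0)
t = np.arange(2000) / 1000.0
x = np.sin(2 * np.pi * 280 * t) + 0.5 * np.sin(2 * np.pi * 385 * t)
x += 0.1 * rng.standard_normal(t.size)
f = scattering_features(x, J=6, Q=2)  # J * Q = 12 coefficients
```

A full scattering network would use Morlet wavelets and second-order paths; this sketch only conveys the filter-modulus-average structure that produces the (J, Q)-indexed coefficients fed to the classifier.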
Affiliation(s)
- Øystein Sletta
- Department of Engineering Cybernetics, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, Trondheim, Norway
- Amandeep Cheema
- Department of Engineering Cybernetics, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, Trondheim, Norway
- Anne Joo Marthinsen
- Department of Engineering Cybernetics, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, Trondheim, Norway
- Ida Marie Andreassen
- Department of Engineering Cybernetics, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, Trondheim, Norway
- Christian Moe Sletten
- Department of Engineering Cybernetics, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, Trondheim, Norway
- Ivar Tesdal Galtung
- Department of Engineering Cybernetics, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, Trondheim, Norway
- Andres Soler
- Department of Engineering Cybernetics, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, Trondheim, Norway
- Marta Molinas
- Department of Engineering Cybernetics, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, Trondheim, Norway
2. Das R, Bose S, Chowdhury RS, Maulik U. Dense Dilated Multi-Scale Supervised Attention-Guided Network for histopathology image segmentation. Comput Biol Med 2023; 163:107182. [PMID: 37379615 DOI: 10.1016/j.compbiomed.2023.107182] [Received: 03/03/2023] [Revised: 05/24/2023] [Accepted: 06/13/2023]
Abstract
Over the last couple of decades, the introduction and proliferation of whole-slide scanners has led to growing research interest in digital pathology. Although manual analysis of histopathological images is still the gold standard, the process is often tedious and time-consuming, and it suffers from intra- and interobserver variability. Separating structures or grading morphological changes can be difficult due to the architectural variability of these images. Deep learning techniques have shown great potential in histopathology image segmentation, drastically reducing the time needed for downstream analysis while supporting accurate diagnosis. However, few algorithms have seen clinical implementation. In this paper, we propose a new deep learning model, the Dense Dilated Multi-Scale Supervised Attention-Guided (D2MSA) Network, for histopathology image segmentation, which combines deep supervision with a hierarchical system of novel attention mechanisms. The proposed model surpasses state-of-the-art performance while using similar computational resources. Its performance has been evaluated on gland segmentation and nuclei instance segmentation, both clinically relevant tasks for assessing the state and progression of malignancy, using histopathology image datasets for three different types of cancer. We have also performed extensive ablation tests and hyperparameter tuning to ensure the validity and reproducibility of the model's performance. The proposed model is available at www.github.com/shirshabose/D2MSA-Net.
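The abstract does not specify the internals of the D2MSA attention mechanisms, but the general additive attention-gate pattern common in attention-guided segmentation networks can be sketched in NumPy. The shapes, weight matrices, and the `attention_gate` helper below are hypothetical illustrations of that generic pattern, not the D2MSA architecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, W_x, W_g, psi):
    """Generic additive attention gate over feature maps (sketch).

    x : (C_x, H, W) skip-connection features from the encoder
    g : (C_g, H, W) gating features from a coarser decoder stage
    Returns x scaled element-wise by attention coefficients in (0, 1),
    so spatially relevant regions of the skip path are emphasized.
    """
    # 1x1 convolutions reduce to channel-mixing matrix products
    q = np.einsum('ic,chw->ihw', W_x, x) + np.einsum('ic,chw->ihw', W_g, g)
    q = np.maximum(q, 0.0)                           # ReLU
    alpha = sigmoid(np.einsum('c,chw->hw', psi, q))  # (H, W) map in (0, 1)
    return x * alpha[None, :, :], alpha

rng = np.random.default_rng(1)
C_x, C_g, C_int, H, W = 4, 4, 8, 5, 5
x = rng.standard_normal((C_x, H, W))
g = rng.standard_normal((C_g, H, W))
out, alpha = attention_gate(x, g,
                            rng.standard_normal((C_int, C_x)),
                            rng.standard_normal((C_int, C_g)),
                            rng.standard_normal(C_int))
```

In a trained network the weights would be learned and the gate applied at every skip connection; here random weights merely demonstrate the data flow and output ranges.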
Affiliation(s)
- Rangan Das
- Department of Computer Science Engineering, Jadavpur University, Kolkata, 700032, West Bengal, India
- Shirsha Bose
- Department of Informatics, Technical University of Munich, Munich, Bavaria 85748, Germany
- Ritesh Sur Chowdhury
- Department of Electronics and Telecommunication Engineering, Jadavpur University, Kolkata, 700032, West Bengal, India
- Ujjwal Maulik
- Department of Computer Science Engineering, Jadavpur University, Kolkata, 700032, West Bengal, India
3. Razali NF, Isa IS, Sulaiman SN, A. Karim NK, Osman MK. CNN-Wavelet scattering textural feature fusion for classifying breast tissue in mammograms. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104683]
4. Dabass M, Dabass J. An Atrous Convolved Hybrid Seg-Net Model with residual and attention mechanism for gland detection and segmentation in histopathological images. Comput Biol Med 2023; 155:106690. [PMID: 36827788 DOI: 10.1016/j.compbiomed.2023.106690] [Received: 07/13/2022] [Revised: 02/06/2023] [Accepted: 02/14/2023]
Abstract
PURPOSE: A clinically compatible computerized segmentation model is presented that aims to supply clinically informative gland details by capturing every small and intricate variation in medical images, integrating second opinions, and reducing human error.
APPROACH: The model's enhanced learning capability extracts denser multi-scale gland-specific features, recovers the semantic gap during concatenation, and effectively handles resolution-degradation and vanishing-gradient problems. It has three proposed modules: an Atrous Convolved Residual Learning Module in the encoder and decoder, a Residual Attention Module in the skip-connection paths, and an Atrous Convolved Transitional Module as the transitional and output layer. Pre-processing techniques such as patch sampling, stain normalization, and augmentation are employed to develop its generalization capability. To verify its robustness and its invariance to digital variability, extensive experiments were carried out on three public datasets, GlaS (Gland Segmentation Challenge), CRAG (Colorectal Adenocarcinoma Gland), and LC-25000 (Lung Colon-25000), and a private HosC (Hospital Colon) dataset.
RESULTS: The presented model achieved competitive gland detection outcomes, with F1-scores of 0.957 (GlaS Test A), 0.926 (GlaS Test B), 0.935 (CRAG), 0.922 (LC-25000), and 0.963 (HosC); gland segmentation results with Object-Dice Index of 0.961 (GlaS Test A), 0.933 (GlaS Test B), 0.961 (CRAG), 0.940 (LC-25000), and 0.929 (HosC); and Object-Hausdorff Distance of 21.77 (GlaS Test A), 69.74 (GlaS Test B), 87.63 (CRAG), 95.85 (LC-25000), and 83.29 (HosC). In addition, validation scores supplied by proficient pathologists (GlaS Test A 0.945, GlaS Test B 0.937, CRAG 0.934, LC-25000 0.911, HosC 0.928) corroborate the applicability and appropriateness of the end segmentation results for assistance in clinical-level applications.
CONCLUSION: The proposed system will assist pathologists in devising precise diagnoses by offering a referential perspective during morphology assessment of colon histopathology images.
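As a minimal illustration of the atrous (dilated) convolution underlying the modules named above, the sketch below implements a 1-D dilated convolution in NumPy: inserting zeros between kernel taps enlarges the receptive field without adding parameters. The kernel and dilation rates are arbitrary examples, not the paper's configuration.

```python
import numpy as np

def atrous_conv1d(x, w, rate):
    """1-D atrous (dilated) convolution, 'valid' padding, correlation form.

    Inserts (rate - 1) implicit zeros between kernel taps, enlarging the
    receptive field to (len(w) - 1) * rate + 1 samples at no extra cost.
    """
    k = len(w)
    span = (k - 1) * rate + 1
    n_out = len(x) - span + 1
    y = np.empty(n_out)
    for i in range(n_out):
        y[i] = sum(w[j] * x[i + j * rate] for j in range(k))
    return y

x = np.arange(10, dtype=float)
w = np.array([1.0, 0.0, -1.0])      # simple difference kernel
y1 = atrous_conv1d(x, w, rate=1)    # x[i] - x[i+2]: every element -2.0
y2 = atrous_conv1d(x, w, rate=2)    # x[i] - x[i+4]: every element -4.0
```

The same principle extends to 2-D feature maps, where stacking several dilation rates (as in atrous modules) captures multi-scale context.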
Affiliation(s)
- Manju Dabass
- EECE Department, The NorthCap University, Gurugram, India
- Jyoti Dabass
- DBT Centre of Excellence Biopharmaceutical Technology, IIT, Delhi, India
5. Li J, Ke L, Du Q, Chen X, Ding X. Multi-modal cardiac function signals classification algorithm based on improved D-S evidence theory. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103078]
6. Wu J, Qiu X, Zhang J, Wu F, Kong Y, Yang G, Senhadji L, Shu H. Fractional Wavelet-Based Generative Scattering Networks. Front Neurorobot 2021; 15:752752. [PMID: 34764862 PMCID: PMC8577828 DOI: 10.3389/fnbot.2021.752752] [Received: 08/03/2021] [Accepted: 09/20/2021]
Abstract
Generative adversarial networks (GANs) and variational autoencoders (VAEs) provide impressive image generation from Gaussian white noise, but both are difficult to train, since they require a generator (or encoder) and a discriminator (or decoder) to be trained simultaneously, which can easily lead to unstable training. To solve or alleviate these synchronous training problems, researchers recently proposed generative scattering networks (GSNs), which use wavelet scattering networks (ScatNets) as the encoder to obtain features (ScatNet embeddings) and convolutional neural networks (CNNs) as the decoder to generate images. The advantage of GSNs is that the parameters of ScatNets do not need to be learned; the disadvantage is that the representational ability of ScatNets is slightly weaker than that of CNNs. In addition, dimensionality reduction by principal component analysis (PCA) can easily lead to overfitting during GSN training and therefore degrade the quality of images generated at test time. To further improve the quality of generated images while keeping the advantages of GSNs, this study proposes generative fractional scattering networks (GFRSNs), which use the more expressive fractional wavelet scattering networks (FrScatNets) instead of ScatNets as the encoder to obtain features (FrScatNet embeddings), with CNN decoders similar to those of GSNs generating the images. This study also develops a new dimensionality reduction method, feature-map fusion (FMF), as an alternative to PCA that better retains the information of FrScatNets, and discusses the effect of image fusion on the quality of the generated images. Experimental results on the CIFAR-10 and CelebA datasets show that the proposed GFRSNs generate better images than the original GSNs on the test sets. Experimental results of the proposed GFRSNs alongside deep convolutional GAN (DCGAN), progressive GAN (PGAN), and CycleGAN are also given.
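The abstract does not detail the FMF procedure, but the design trade-off it addresses can be sketched: PCA fits a projection to the data (and can therefore overfit), whereas a fixed fusion rule over feature maps has no fitted parameters. The sketch below contrasts an SVD-based PCA projection with a simple average-based fusion; the function names, shapes, and the averaging rule are assumptions for illustration, not the paper's method.

```python
import numpy as np

def pca_reduce(E, k):
    """Project embeddings E (n_samples, d) onto their top-k principal axes.

    The projection basis Vt is *fit to E*, which is where overfitting can
    arise when E is a small training set.
    """
    Ec = E - E.mean(axis=0)
    U, S, Vt = np.linalg.svd(Ec, full_matrices=False)
    return Ec @ Vt[:k].T

def feature_map_fusion(maps):
    """Fuse per-path feature maps by averaging across the path axis.

    maps : (n_paths, H, W) scattering feature maps for one image.
    A fixed rule: nothing is estimated from the dataset.
    """
    return maps.mean(axis=0)

rng = np.random.default_rng(2)
E = rng.standard_normal((50, 32))                        # 50 embeddings, d=32
Z = pca_reduce(E, 8)                                     # (50, 8) projection
M = feature_map_fusion(rng.standard_normal((16, 4, 4)))  # (4, 4) fused map
```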
Affiliation(s)
- Jiasong Wu
- Laboratory of Image Science and Technology, Key Laboratory of Computer Network and Information Integration, Southeast University, Ministry of Education, Nanjing, China
- Jiangsu Provincial Joint International Research Laboratory of Medical Information Processing, School of Computer Science and Engineering, Southeast University, Nanjing, China
- Univ Rennes, INSERM, LTSI-UMR 1099, Rennes, France
- Centre de Recherche en Information Biomédicale Sino-Français (CRIBs), Univ Rennes, INSERM, Rennes, France
- Xiang Qiu
- Laboratory of Image Science and Technology, Key Laboratory of Computer Network and Information Integration, Southeast University, Ministry of Education, Nanjing, China
- Centre de Recherche en Information Biomédicale Sino-Français (CRIBs), Univ Rennes, INSERM, Rennes, France
- Jing Zhang
- Laboratory of Image Science and Technology, Key Laboratory of Computer Network and Information Integration, Southeast University, Ministry of Education, Nanjing, China
- Centre de Recherche en Information Biomédicale Sino-Français (CRIBs), Univ Rennes, INSERM, Rennes, France
- Fuzhi Wu
- Laboratory of Image Science and Technology, Key Laboratory of Computer Network and Information Integration, Southeast University, Ministry of Education, Nanjing, China
- Centre de Recherche en Information Biomédicale Sino-Français (CRIBs), Univ Rennes, INSERM, Rennes, France
- Youyong Kong
- Laboratory of Image Science and Technology, Key Laboratory of Computer Network and Information Integration, Southeast University, Ministry of Education, Nanjing, China
- Jiangsu Provincial Joint International Research Laboratory of Medical Information Processing, School of Computer Science and Engineering, Southeast University, Nanjing, China
- Centre de Recherche en Information Biomédicale Sino-Français (CRIBs), Univ Rennes, INSERM, Rennes, France
- Guanyu Yang
- Laboratory of Image Science and Technology, Key Laboratory of Computer Network and Information Integration, Southeast University, Ministry of Education, Nanjing, China
- Jiangsu Provincial Joint International Research Laboratory of Medical Information Processing, School of Computer Science and Engineering, Southeast University, Nanjing, China
- Centre de Recherche en Information Biomédicale Sino-Français (CRIBs), Univ Rennes, INSERM, Rennes, France
- Lotfi Senhadji
- Univ Rennes, INSERM, LTSI-UMR 1099, Rennes, France
- Centre de Recherche en Information Biomédicale Sino-Français (CRIBs), Univ Rennes, INSERM, Rennes, France
- Huazhong Shu
- Laboratory of Image Science and Technology, Key Laboratory of Computer Network and Information Integration, Southeast University, Ministry of Education, Nanjing, China
- Jiangsu Provincial Joint International Research Laboratory of Medical Information Processing, School of Computer Science and Engineering, Southeast University, Nanjing, China
- Centre de Recherche en Information Biomédicale Sino-Français (CRIBs), Univ Rennes, INSERM, Rennes, France
7. Mei N, Wang H, Zhang Y, Liu F, Jiang X, Wei S. Classification of heart sounds based on quality assessment and wavelet scattering transform. Comput Biol Med 2021; 137:104814. [PMID: 34481179 DOI: 10.1016/j.compbiomed.2021.104814] [Received: 08/06/2021] [Accepted: 08/25/2021]
Abstract
Automatic classification of heart sounds plays an important role in the diagnosis of cardiovascular diseases. In this study, a heart sound classification method based on quality assessment and the wavelet scattering transform is proposed. First, the ratio of zero crossings (RZC) and the root mean square of successive differences (RMSSD) are used to assess the quality of the heart sound signal; for each continuous heart sound signal, the first segment meeting the threshold criteria is selected as the sample. The wavelet scattering coefficients are then expanded along the wavelet scale dimension to obtain the features. A support vector machine (SVM) is used for classification, and the per-sample results are obtained by voting across the wavelet scale dimension. The effects of RZC and RMSSD on the results are discussed in detail. On the PhysioNet Computing in Cardiology Challenge 2016 (CinC 2016) database, the proposed method yields 92.23% accuracy (Acc), 96.62% sensitivity (Se), 90.65% specificity (Sp), and 93.64% mean accuracy (MAcc). The results show that the proposed method can effectively classify normal and abnormal heart sound samples with high accuracy.
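The two quality indices named above are straightforward to compute; a minimal pure-Python sketch follows, with the toy signal chosen for illustration (the paper's threshold values are not given in the abstract and are therefore omitted).

```python
import math

def ratio_zero_crossings(x):
    """Fraction of consecutive sample pairs whose signs differ (RZC)."""
    crossings = sum(1 for a, b in zip(x, x[1:]) if (a >= 0) != (b >= 0))
    return crossings / (len(x) - 1)

def rmssd(x):
    """Root mean square of successive differences (RMSSD)."""
    diffs = [b - a for a, b in zip(x, x[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# toy signal: high RZC and RMSSD would flag a noisy, low-quality segment
x = [0.0, 1.0, -1.0, 1.0, -1.0, 0.5]
rzc = ratio_zero_crossings(x)  # 4 sign changes over 5 pairs -> 0.8
r = rmssd(x)                   # sqrt(mean of squared successive diffs)
```

In the paper's pipeline, segments would be accepted only when both indices fall within threshold bounds before scattering features are extracted.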
Affiliation(s)
- Na Mei
- School of Control Science and Engineering, Shandong University, Jinan, China
- Hongxia Wang
- School of Control Science and Engineering, Shandong University, Jinan, China
- Yatao Zhang
- School of Mechanical, Electrical & Information Engineering, Shandong University, Weihai, China
- Feifei Liu
- School of Science, Shandong Jianzhu University, Jinan, China
- Xinge Jiang
- School of Information Science and Electrical Engineering, Shandong Jiaotong University, Jinan, China
- Shoushui Wei
- School of Control Science and Engineering, Shandong University, Jinan, China
8.

9. Thakur N, Yoon H, Chong Y. Current Trends of Artificial Intelligence for Colorectal Cancer Pathology Image Analysis: A Systematic Review. Cancers (Basel) 2020; 12:1884. [PMID: 32668721 PMCID: PMC7408874 DOI: 10.3390/cancers12071884] [Received: 06/01/2020] [Revised: 07/06/2020] [Accepted: 07/09/2020]
Abstract
Colorectal cancer (CRC) is one of the most common cancers and requires early pathologic diagnosis using colonoscopy biopsy samples. Recently, artificial intelligence (AI) has made significant progress and shown promising results in the field of medicine despite several limitations. We performed a systematic review of AI use in CRC pathology image analysis to survey the state of the art. Studies published between January 2000 and January 2020 were searched in major online databases, including MEDLINE (PubMed), the Cochrane Library, and EMBASE. Query terms included "colorectal neoplasm," "histology," and "artificial intelligence." Of 9000 identified studies, only 30 studies, comprising 40 models, were selected for review. The algorithm features of the models were gland segmentation (n = 25, 62%), tumor classification (n = 8, 20%), tumor microenvironment characterization (n = 4, 10%), and prognosis prediction (n = 3, 8%). Only 20 gland segmentation models met the criteria for quantitative analysis, among which the model proposed by Ding et al. (2019) performed best. Studies with other features were at an elementary stage, although most showed impressive results. Overall, the state of the art is promising for CRC pathological analysis. However, the datasets in most studies were of relatively limited scale and quality for clinical application of this technique. Future studies with larger datasets and high-quality annotations are required for routine practice-level validation.
Affiliation(s)
- Nishant Thakur
- Department of Hospital Pathology, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, 10, 63-ro, Yeongdeungpo-gu, Seoul 07345, Korea
- Hongjun Yoon
- AI Lab, Deepnoid, #1305 E&C Venture Dream Tower 2, 55, Digital-ro 33-Gil, Guro-gu, Seoul 06216, Korea
- Yosep Chong
- Department of Hospital Pathology, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, 10, 63-ro, Yeongdeungpo-gu, Seoul 07345, Korea