1
Wei W, Xu X, Hu G, Shao Y, Wang Q. Deep Learning and Histogram-Based Grain Size Analysis of Images. Sensors (Basel) 2024; 24:4923. [PMID: 39123970] [PMCID: PMC11314959] [DOI: 10.3390/s24154923]
Abstract
Grain size analysis is used to study grain size and distribution. It is a critical indicator in sedimentary simulation experiments (SSEs), aiding in understanding hydrodynamic conditions and identifying the features of sedimentary environments. Existing image-based methods for grain size analysis primarily target scenarios where grain edges are distinct or grain arrangements are regular; they are not suitable for SSE images. We propose a deep learning model incorporating histogram layers for analyzing SSE images with fuzzy grain edges and irregular arrangements. First, ResNet18 extracts features from SSE images. These features are input into the histogram layer to obtain local histogram features, which are concatenated to form comprehensive histogram features for the entire image. Finally, the histogram features are passed to a fully connected layer to estimate the grain size corresponding to each cumulative volume percentage. In addition, an applied workflow was developed. The results demonstrate that the proposed method achieved higher accuracy than eight other models and was highly consistent with manual results in practice. The proposed method enhances the efficiency and accuracy of grain size analysis for images with irregular grain distributions and improves the quantification and automation of grain size analysis in SSEs. It can also be applied to grain size analysis in fields such as soil and geotechnical engineering.
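As a rough illustration of the pipeline this abstract describes, the PyTorch sketch below wires a ResNet18 backbone into a differentiable soft-binning histogram layer and a fully connected regression head. The bin count, value range, number of output percentiles, and the global (rather than local, concatenated) pooling are simplifying assumptions, not details from the paper.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class SoftHistogram(nn.Module):
    """Differentiable histogram layer: soft-assigns feature values to
    learnable bins via Gaussian membership, then pools spatially."""
    def __init__(self, channels, bins=16):          # bin count is an assumption
        super().__init__()
        self.centers = nn.Parameter(torch.linspace(-1, 1, bins).repeat(channels, 1))
        self.widths = nn.Parameter(torch.ones(channels, bins))

    def forward(self, x):                           # x: (B, C, H, W)
        B, C, H, W = x.shape
        x = x.view(B, C, 1, H * W)                  # broadcast against bins
        c = self.centers.view(1, C, -1, 1)
        w = self.widths.view(1, C, -1, 1)
        memb = torch.exp(-(w * (x - c)) ** 2)       # Gaussian bin membership
        return memb.mean(dim=-1).flatten(1)         # (B, C*bins) histogram features

class GrainSizeNet(nn.Module):
    """ResNet18 features -> histogram features -> FC grain-size regressor."""
    def __init__(self, bins=16, n_percentiles=9):   # percentile count is an assumption
        super().__init__()
        backbone = models.resnet18(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # (B, 512, h, w)
        self.hist = SoftHistogram(channels=512, bins=bins)
        self.head = nn.Linear(512 * bins, n_percentiles)  # sizes at cumulative percentages

    def forward(self, x):
        return self.head(self.hist(self.features(x)))

model = GrainSizeNet()
pred = model(torch.randn(2, 3, 224, 224))           # (2, 9) predicted grain sizes
```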
Affiliation(s)
- Qing Wang
- School of Geosciences, Yangtze University, Wuhan 430100, China
2
Wang J, Yang F, Wang B, Hu J, Liu M, Wang X, Dong J, Song G, Wang Z. Cell recognition based on features extracted by AFM and parameter optimization classifiers. Anal Methods 2024; 16:4626-4635. [PMID: 38921601] [DOI: 10.1039/d4ay00684d]
Abstract
Intelligent technology can assist in the diagnosis and treatment of disease, paving the way towards precision medicine in the coming decade. As a key focus of medical research, the diagnosis and prognosis of cancer play an important role in patients' future survival. In this work, a diagnostic method based on nano-resolution imaging was proposed to meet the demand for precise detection methods in medicine and scientific research. Cell images scanned by AFM were recognized through cell feature engineering and machine-learning classifiers. A feature-ranking method based on the importance of features to the response was used to screen the features most closely related to classification and to optimize feature combinations, which helps in understanding the feature differences between cell types at the micro level. The results showed that a Bayesian-optimized back-propagation neural network achieved accuracies of 90.37% and 92.68% on two cell datasets (HL-7702 & SMMC-7721 and GES-1 & SGC-7901), respectively. This provides an automatic analysis method for identifying cancerous or abnormal cells, which can help reduce the burden of medical and scientific work, decrease misjudgment, and promote precise medical care for society as a whole.
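A scikit-learn sketch of the workflow described above: rank features by their importance to the response, keep the top ones, then tune a back-propagation neural network (MLP). The feature matrix and labels are synthetic placeholders, and a randomized search stands in here for the paper's Bayesian optimization.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))                 # placeholder AFM-derived cell features
y = rng.integers(0, 2, size=200)               # placeholder cell-type labels

# Rank features by importance to the response; keep the top 10 (cutoff is an assumption).
rank = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top = np.argsort(rank.feature_importances_)[::-1][:10]

# Tune a back-propagation neural network on the selected feature combination.
X_tr, X_te, y_tr, y_te = train_test_split(X[:, top], y, random_state=0)
search = RandomizedSearchCV(
    make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000)),
    param_distributions={
        "mlpclassifier__hidden_layer_sizes": [(32,), (64,), (64, 32)],
        "mlpclassifier__alpha": np.logspace(-5, -1, 20),
    },
    n_iter=10, cv=5, random_state=0,
).fit(X_tr, y_tr)
print("test accuracy:", search.score(X_te, y_te))
```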
Affiliation(s)
- Junxi Wang
- International Research Centre for Nano Handling and Manufacturing of China, Changchun University of Science and Technology, Changchun 130022, China.
- Centre for Opto/Bio-Nano Measurement and Manufacturing, Zhongshan Institute of Changchun University of Science and Technology, Zhongshan 528400, China
- Ministry of Education Key Laboratory for Cross-Scale Micro and Nano Manufacturing, Changchun University of Science and Technology, Changchun 130022, China
- College of Physics, Changchun University of Science and Technology, Changchun 130022, China.
- Fan Yang
- International Research Centre for Nano Handling and Manufacturing of China, Changchun University of Science and Technology, Changchun 130022, China.
- Centre for Opto/Bio-Nano Measurement and Manufacturing, Zhongshan Institute of Changchun University of Science and Technology, Zhongshan 528400, China
- Ministry of Education Key Laboratory for Cross-Scale Micro and Nano Manufacturing, Changchun University of Science and Technology, Changchun 130022, China
- College of Physics, Changchun University of Science and Technology, Changchun 130022, China.
- Bowei Wang
- International Research Centre for Nano Handling and Manufacturing of China, Changchun University of Science and Technology, Changchun 130022, China.
- Centre for Opto/Bio-Nano Measurement and Manufacturing, Zhongshan Institute of Changchun University of Science and Technology, Zhongshan 528400, China
- Ministry of Education Key Laboratory for Cross-Scale Micro and Nano Manufacturing, Changchun University of Science and Technology, Changchun 130022, China
- College of Physics, Changchun University of Science and Technology, Changchun 130022, China.
- Jing Hu
- International Research Centre for Nano Handling and Manufacturing of China, Changchun University of Science and Technology, Changchun 130022, China.
- Centre for Opto/Bio-Nano Measurement and Manufacturing, Zhongshan Institute of Changchun University of Science and Technology, Zhongshan 528400, China
- Ministry of Education Key Laboratory for Cross-Scale Micro and Nano Manufacturing, Changchun University of Science and Technology, Changchun 130022, China
- Mengnan Liu
- International Research Centre for Nano Handling and Manufacturing of China, Changchun University of Science and Technology, Changchun 130022, China.
- Centre for Opto/Bio-Nano Measurement and Manufacturing, Zhongshan Institute of Changchun University of Science and Technology, Zhongshan 528400, China
- Ministry of Education Key Laboratory for Cross-Scale Micro and Nano Manufacturing, Changchun University of Science and Technology, Changchun 130022, China
- Xia Wang
- International Research Centre for Nano Handling and Manufacturing of China, Changchun University of Science and Technology, Changchun 130022, China.
- Centre for Opto/Bio-Nano Measurement and Manufacturing, Zhongshan Institute of Changchun University of Science and Technology, Zhongshan 528400, China
- Ministry of Education Key Laboratory for Cross-Scale Micro and Nano Manufacturing, Changchun University of Science and Technology, Changchun 130022, China
- College of Physics, Changchun University of Science and Technology, Changchun 130022, China.
- Jianjun Dong
- International Research Centre for Nano Handling and Manufacturing of China, Changchun University of Science and Technology, Changchun 130022, China.
- Centre for Opto/Bio-Nano Measurement and Manufacturing, Zhongshan Institute of Changchun University of Science and Technology, Zhongshan 528400, China
- Ministry of Education Key Laboratory for Cross-Scale Micro and Nano Manufacturing, Changchun University of Science and Technology, Changchun 130022, China
- College of Physics, Changchun University of Science and Technology, Changchun 130022, China.
- Guicai Song
- College of Physics, Changchun University of Science and Technology, Changchun 130022, China.
- Zuobin Wang
- International Research Centre for Nano Handling and Manufacturing of China, Changchun University of Science and Technology, Changchun 130022, China.
- Centre for Opto/Bio-Nano Measurement and Manufacturing, Zhongshan Institute of Changchun University of Science and Technology, Zhongshan 528400, China
- Ministry of Education Key Laboratory for Cross-Scale Micro and Nano Manufacturing, Changchun University of Science and Technology, Changchun 130022, China
- College of Physics, Changchun University of Science and Technology, Changchun 130022, China.
- JR3CN & IRAC, University of Bedfordshire, Luton LU1 3JU, UK
3
Ferrero A, Ghelichkhan E, Manoochehri H, Ho MM, Albertson DJ, Brintz BJ, Tasdizen T, Whitaker RT, Knudsen BS. HistoEM: A Pathologist-Guided and Explainable Workflow Using Histogram Embedding for Gland Classification. Mod Pathol 2024; 37:100447. [PMID: 38369187] [DOI: 10.1016/j.modpat.2024.100447]
Abstract
Pathologists have, over several decades, developed criteria for diagnosing and grading prostate cancer. However, this knowledge has not, so far, been incorporated into the design of convolutional neural networks (CNNs) for prostate cancer detection and grading. Furthermore, it is not known whether the features learned by machine-learning algorithms coincide with the diagnostic features used by pathologists. We propose a framework that compels algorithms to learn the cellular and subcellular differences between benign and cancerous prostate glands in digital slides of hematoxylin and eosin-stained tissue sections. After accurate gland segmentation and exclusion of the stroma, the central component of the pipeline, named HistoEM, utilizes a histogram embedding of features from the latent space of the CNN encoder. Each gland is represented by 128 feature-wise histograms that provide the input to a second network for benign versus cancer classification of the whole gland. Cancer glands are further processed by a U-Net-structured network to separate low-grade from high-grade cancer. Our model demonstrates performance similar to other state-of-the-art prostate cancer grading models with gland-level resolution. To understand the features learned by HistoEM, we first rank features by the distance between benign and cancer histograms and visualize the tissue origins of the two most important features. A heatmap of pixel activation by each feature is generated using Grad-CAM and overlaid on nuclear segmentation outlines. We conclude that HistoEM, like pathologists, uses nuclear features for the detection of prostate cancer. Altogether, this novel approach can be broadly deployed to visualize computer-learned features in histopathology images.
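A minimal sketch of the histogram-embedding idea (not the authors' code): for each segmented gland, one normalized histogram per latent feature channel is computed inside the gland mask, yielding a 128 x bins embedding for a downstream benign-versus-cancer classifier. The bin count, value range, and toy mask are assumptions.

```python
import torch

def gland_histogram_embedding(feats, mask, bins=32, lo=0.0, hi=1.0):
    """feats: (C, H, W) encoder latent features; mask: (H, W) bool gland mask.
    Returns (C, bins) feature-wise histograms, each normalized to sum to 1."""
    C = feats.shape[0]
    inside = feats[:, mask]                        # (C, N) latent values inside the gland
    hists = torch.stack([
        torch.histc(inside[c], bins=bins, min=lo, max=hi) for c in range(C)
    ])
    return hists / hists.sum(dim=1, keepdim=True).clamp_min(1e-8)

feats = torch.rand(128, 64, 64)                    # placeholder 128-channel latent features
mask = torch.zeros(64, 64, dtype=torch.bool)
mask[16:48, 16:48] = True                          # placeholder gland segmentation mask
emb = gland_histogram_embedding(feats, mask)       # (128, 32) histogram embedding
print(emb.shape)
```

Ranking features by the distance between the average benign and cancer histograms, as the abstract describes, then reduces to comparing rows of these embeddings across the two classes.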
Affiliation(s)
- Alessandro Ferrero
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
- Elham Ghelichkhan
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
- Hamid Manoochehri
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
- Man Minh Ho
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
- Tolga Tasdizen
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
- Ross T Whitaker
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
4
Tao H, Duan Q. Hierarchical attention network with progressive feature fusion for facial expression recognition. Neural Netw 2024; 170:337-348. [PMID: 38006736] [DOI: 10.1016/j.neunet.2023.11.033]
Abstract
Facial expression recognition (FER) in the wild is challenging due to disturbing factors such as pose variation, occlusion, and illumination variation. Attention mechanisms can relieve these issues by enhancing expression-relevant information and suppressing expression-irrelevant information. However, most methods apply the same attention mechanism to feature tensors whose spatial and channel sizes vary across network layers, disregarding the dynamically changing sizes of these tensors. To solve this issue, this paper proposes a hierarchical attention network with progressive feature fusion for FER. First, to aggregate diverse complementary features, a diverse feature extraction module based on several feature aggregation blocks is designed to exploit local and global context features, low-level and high-level features, and gradient features that are robust to illumination variation. Second, to effectively fuse these diverse features, a hierarchical attention module (HAM) is designed to progressively enhance discriminative features from key parts of the facial images and suppress task-irrelevant features from disturbing facial regions. Extensive experiments show that our model achieves the best performance among existing FER methods.
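A toy sketch of the core idea of layer-specific attention followed by progressive fusion: each attention block is sized to its own layer's channel count, and a deeper map is upsampled and merged with a shallower one. The reduction ratio, tensor shapes, and fusion-by-concatenation are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class SizeAwareAttention(nn.Module):
    """Channel attention instantiated per layer, so the squeeze/excite
    dimensions match that layer's channel count instead of a
    one-size-fits-all attention block."""
    def __init__(self, channels, reduction=8):     # reduction ratio is an assumption
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                          # x: (B, C, H, W)
        w = self.fc(x).unsqueeze(-1).unsqueeze(-1)
        return x * w                               # reweight channels

# Progressive fusion: attend to a shallow and a deep feature map,
# upsample the deep one, then merge.
shallow, deep = torch.randn(1, 64, 56, 56), torch.randn(1, 128, 28, 28)
a1, a2 = SizeAwareAttention(64), SizeAwareAttention(128)
deep_up = nn.functional.interpolate(a2(deep), scale_factor=2)
fused = torch.cat([a1(shallow), deep_up], dim=1)   # (1, 192, 56, 56)
```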
Affiliation(s)
- Huanjie Tao
- School of Computer Science, Northwestern Polytechnical University, Xi'an 710129, PR China; Engineering Research Center of Embedded System Integration, Ministry of Education, Xi'an 710129, PR China; National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, Xi'an 710129, PR China.
- Qianyue Duan
- School of Computer Science, Northwestern Polytechnical University, Xi'an 710129, PR China
5
Zhao Y, Zhu H, Chen X, Luo F, Li M, Zhou J, Chen S, Pan Y. Pose-invariant and occlusion-robust neonatal facial pain assessment. Comput Biol Med 2023; 165:107462. [PMID: 37716244] [DOI: 10.1016/j.compbiomed.2023.107462]
Abstract
Neonatal Facial Pain Assessment (NFPA) is essential for improving neonatal pain management. Pose variation and occlusion, which can significantly alter facial appearance, are two major and still unstudied barriers to NFPA. We bridge this gap in terms of both method and dataset. Techniques that tackle these challenges in other tasks either rely on pose/occlusion-invariant deep learning methods or first generate a normalized version of the input image before feature extraction; combining the two, we argue that it is more effective to jointly perform adversarial learning and end-to-end classification for their mutual benefit. To this end, we propose a Pose-invariant Occlusion-robust Pain Assessment (POPA) framework with two novelties. First, we incorporate adversarial-learning-based disturbance mitigation into end-to-end pain-level classification and propose a novel composite loss function for facial representation learning. Second, in contrast to a vanilla discriminator that determines occlusion and pose conditions implicitly, we propose a multi-scale discriminator that determines them explicitly, incorporating local discriminators to enhance the discrimination of key regions. For a comprehensive evaluation, we built the first neonatal pain dataset with disturbance annotations, involving 1091 neonates, and also applied the proposed POPA to the facial expression recognition task. Extensive qualitative and quantitative experiments demonstrate the superiority of POPA.
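A hedged sketch of what a discriminator that judges disturbance explicitly at both the whole-face scale and a key local region might look like. The layer sizes, the two-branch layout, and the choice of an eye-region crop as the local input are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    # Strided conv + LeakyReLU: a common downsampling unit in discriminators.
    return nn.Sequential(nn.Conv2d(cin, cout, 4, stride=2, padding=1),
                         nn.LeakyReLU(0.2))

class MultiScaleDiscriminator(nn.Module):
    """Outputs explicit disturbance scores at the image scale and at a
    key-region scale, rather than a single implicit real/fake score."""
    def __init__(self):
        super().__init__()
        self.global_d = nn.Sequential(conv_block(3, 32), conv_block(32, 64),
                                      nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(64, 1))
        self.local_d = nn.Sequential(conv_block(3, 32),
                                     nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                     nn.Linear(32, 1))

    def forward(self, face, key_region):
        return self.global_d(face), self.local_d(key_region)

d = MultiScaleDiscriminator()
g_score, l_score = d(torch.randn(2, 3, 128, 128),  # whole-face crop
                     torch.randn(2, 3, 32, 32))    # placeholder eye-region crop
```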
Affiliation(s)
- Yisheng Zhao
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China.
- Huaiyu Zhu
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China.
- Xiaofei Chen
- Nursing Department, The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou 310052, China.
- Feixiang Luo
- Nursing Department, The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou 310052, China.
- Mengting Li
- Nursing Department, The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou 310052, China.
- Jinyan Zhou
- Nursing Department, The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou 310052, China.
- Shuohui Chen
- Hospital Infection-Control Department, The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou 310052, China.
- Yun Pan
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China.
6
Haider I, Yang HJ, Lee GS, Kim SH. Robust Human Face Emotion Classification Using Triplet-Loss-Based Deep CNN Features and SVM. Sensors (Basel) 2023; 23:4770. [PMID: 37430689] [DOI: 10.3390/s23104770]
Abstract
Human facial emotion detection is one of the challenging tasks in computer vision. Owing to high inter-class variance, it is hard for machine learning models to predict facial emotions accurately. Moreover, a single person may display several facial emotions, which increases the diversity and complexity of the classification problem. In this paper, we propose a novel and intelligent approach to the classification of human facial emotions: a ResNet18 customized via transfer learning and trained with a triplet loss function (TLF), followed by an SVM classification model. The pipeline consists of a face detector that locates and refines the face bounding box and a classifier that identifies the facial expression class of the detected faces. RetinaFace extracts the identified face regions from the source image, and a ResNet18 model trained on the cropped face images with triplet loss retrieves their deep features. An SVM classifier then categorizes the facial expression based on these features. The proposed method outperforms state-of-the-art (SoTA) methods on the JAFFE and MMI datasets, achieving accuracies of 98.44% and 99.02%, respectively, across seven emotions; however, its performance on the FER2013 and AFFECTNET datasets still requires fine-tuning.
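A sketch of the described pipeline under stated assumptions: a ResNet18 embedder trained with PyTorch's triplet margin loss, whose frozen embeddings then feed an SVM emotion classifier. Face detection and cropping (RetinaFace in the paper) are assumed to happen upstream; the margin, embedding size, and toy data are placeholders.

```python
import torch
import torch.nn as nn
import torchvision.models as models
from sklearn.svm import SVC

# ResNet18 repurposed as an embedder: replace the classification head
# with a 128-d embedding layer (embedding size is an assumption).
embedder = models.resnet18(weights=None)
embedder.fc = nn.Linear(embedder.fc.in_features, 128)

# One illustrative triplet-loss step on random stand-ins for
# (anchor, positive, negative) face crops.
triplet = nn.TripletMarginLoss(margin=1.0)         # margin is an assumption
anchor, pos, neg = (torch.randn(8, 3, 224, 224) for _ in range(3))
loss = triplet(embedder(anchor), embedder(pos), embedder(neg))
loss.backward()                                    # gradients for one training step

# After training, freeze the embedder and fit an SVM on the deep features.
with torch.no_grad():
    feats = embedder(torch.randn(40, 3, 224, 224)).numpy()  # placeholder crops
labels = torch.randint(0, 7, (40,)).numpy()        # placeholder labels, 7 emotions
svm = SVC(kernel="rbf").fit(feats, labels)
print(svm.predict(feats[:5]))
```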
Affiliation(s)
- Irfan Haider
- Department of Artificial Intelligence Convergence, Chonnam National University, Gwangju 500-757, Republic of Korea
- Hyung-Jeong Yang
- Department of Artificial Intelligence Convergence, Chonnam National University, Gwangju 500-757, Republic of Korea
- Guee-Sang Lee
- Department of Artificial Intelligence Convergence, Chonnam National University, Gwangju 500-757, Republic of Korea
- Soo-Hyung Kim
- Department of Artificial Intelligence Convergence, Chonnam National University, Gwangju 500-757, Republic of Korea
7
Zhang Z, Tian X, Zhang Y, Guo K, Xu X. Enhanced Discriminative Global-Local Feature Learning with Priority for Facial Expression Recognition. Inf Sci (N Y) 2023. [DOI: 10.1016/j.ins.2023.02.056]