1. Cascella M, Schiavo D, Cuomo A, Ottaiano A, Perri F, Patrone R, Migliarelli S, Bignami EG, Vittori A, Cutugno F. Artificial Intelligence for Automatic Pain Assessment: Research Methods and Perspectives. Pain Res Manag 2023;2023:6018736. [PMID: 37416623] [PMCID: PMC10322534] [DOI: 10.1155/2023/6018736]
Abstract
Although proper pain evaluation is mandatory for establishing the appropriate therapy, self-reported pain level assessment has several limitations. Data-driven artificial intelligence (AI) methods can be employed for research on automatic pain assessment (APA). The goal is the development of objective, standardized, and generalizable instruments useful for pain assessment in different clinical contexts. The purpose of this article is to discuss the state of the art of research and perspectives on APA applications in both research and clinical scenarios. Principles of AI functioning will be addressed. For narrative purposes, AI-based methods are grouped into behavioral-based approaches and neurophysiology-based pain detection methods. Since pain is generally accompanied by spontaneous facial behaviors, several approaches for APA are based on image classification and feature extraction. Language features extracted through natural language processing, body postures, and respiratory-derived elements are other investigated behavioral-based approaches. Neurophysiology-based pain detection relies on electroencephalography, electromyography, electrodermal activity, and other biosignals. Recent approaches involve multimodal strategies that combine behaviors with neurophysiological findings. Concerning methods, early studies were conducted with classic machine learning algorithms such as support vector machine, decision tree, and random forest classifiers. More recently, artificial neural networks, such as convolutional and recurrent neural networks, have been implemented, sometimes in combination. Collaboration programs involving clinicians and computer scientists should aim at structuring and processing robust datasets that can be used in various settings, from acute to different chronic pain conditions. Finally, it is crucial to apply the concepts of explainability and ethics when examining AI applications for pain research and management.
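As a loose illustration of the "early studies" pipeline the review describes — hand-crafted features from biosignals fed to classic classifiers such as support vector machines or random forests — a minimal scikit-learn sketch might look like this. All features, sizes, and labels below are synthetic stand-ins invented for illustration, not data from any cited study:

```python
# Hypothetical sketch: classic ML classifier on hand-crafted biosignal features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 16))            # stand-in for e.g. EDA/EMG summary features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # synthetic pain / no-pain labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))              # held-out accuracy
```

The same skeleton applies with an SVM or decision tree by swapping the estimator; in the reviewed literature the decisive work is in the feature extraction, not the classifier call.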
Affiliation(s)
- Marco Cascella
- Division of Anesthesia and Pain Medicine, Istituto Nazionale Tumori IRCCS Fondazione G. Pascale, Naples 80131, Italy
- Daniela Schiavo
- Division of Anesthesia and Pain Medicine, Istituto Nazionale Tumori IRCCS Fondazione G. Pascale, Naples 80131, Italy
- Arturo Cuomo
- Division of Anesthesia and Pain Medicine, Istituto Nazionale Tumori IRCCS Fondazione G. Pascale, Naples 80131, Italy
- Alessandro Ottaiano
- SSD-Innovative Therapies for Abdominal Metastases, Istituto Nazionale Tumori di Napoli IRCCS “G. Pascale”, Via M. Semmola, Naples 80131, Italy
- Francesco Perri
- Head and Neck Oncology Unit, Istituto Nazionale Tumori IRCCS-Fondazione “G. Pascale”, Naples 80131, Italy
- Renato Patrone
- DIETI Department, University of Naples, Naples, Italy
- Division of Hepatobiliary Surgical Oncology, Istituto Nazionale Tumori IRCCS, Fondazione Pascale-IRCCS di Napoli, Naples, Italy
- Sara Migliarelli
- Department of Pharmacology, Faculty of Medicine and Psychology, Sapienza University of Rome, Rome, Italy
- Elena Giovanna Bignami
- Anesthesiology, Critical Care and Pain Medicine Division, Department of Medicine and Surgery, University of Parma, Parma, Italy
- Alessandro Vittori
- Department of Anesthesia and Critical Care, ARCO ROMA, Ospedale Pediatrico Bambino Gesù IRCCS, Rome 00165, Italy
- Francesco Cutugno
- Department of Electrical Engineering and Information Technologies, University of Naples “Federico II”, Naples 80100, Italy
2.
Abstract
Pain assessment is used to improve patients’ treatment outcomes. Human observers may be influenced by personal factors such as inexperience, and medical organizations face a shortage of experts. In this study, we developed a facial expressions-based automatic pain assessment system (FEAPAS) that notifies medical staff when a patient suffers pain by activating an alarm and recording the incident and pain level with the date and time. The model consists of two identical concurrent subsystems, each of which takes one of the model’s two inputs, i.e., the full face and the upper half of the same face. The subsystems extract the relevant input features via two pre-trained convolutional neural networks (CNNs), using one of VGG16, InceptionV3, ResNet50, or ResNeXt50, while freezing all convolutional blocks and replacing the classifier layer with a shallow CNN. The concatenated outputs of this stage are then sent to the model’s classifier. This approach mimics the human observer method and gives more importance to the upper part of the face, consistent with the Prkachin and Solomon Pain Intensity (PSPI) score. Additionally, we further optimized our models by applying four optimizers (SGD, Adam, RMSprop, and RAdam) to each model and testing them on the UNBC-McMaster shoulder pain expression archive dataset to find the optimal combination, InceptionV3-SGD. The optimal model showed an accuracy of 99.10% on 10-fold cross-validation, thus outperforming the state-of-the-art model on the UNBC-McMaster database, and it scored 90.56% on unseen-subject data. To speed up the system response time and reduce unnecessary alarms associated with temporary facial expressions, a small but effective subset of frames was inspected and classified. Two frame-selection criteria were evaluated; classifying only the two frames at the middle of a 30-frame sequence was optimal, with an average reaction time of at most 6.49 s and the ability to avoid unnecessary alarms.
3. Zhao B, Dong X, Guo Y, Jia X, Huang Y. PCA Dimensionality Reduction Method for Image Classification. Neural Process Lett 2021. [DOI: 10.1007/s11063-021-10632-5]
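The core step of PCA-based dimensionality reduction for image classification — centering flattened images and projecting them onto the top principal components before feeding a classifier — can be sketched with NumPy. The dimensions below are illustrative, not taken from the cited paper:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64))          # 100 flattened 8x8 "images" (synthetic)
Xc = X - X.mean(axis=0)                 # center the data before PCA

# Principal axes come from the SVD of the centered data matrix:
# rows of Vt are the principal directions, ordered by singular value.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 10                                  # keep the top-k components
Z = Xc @ Vt[:k].T                       # project onto the top-k axes
print(Z.shape)                          # (100, 10)
```

`Z` would then replace the raw pixels as input to any downstream classifier, trading a small reconstruction error for a much lower-dimensional representation.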
4. Wei B, Hao K, Gao L, Tang XS, Zhao Y. A biologically inspired visual integrated model for image classification. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.04.081]
5. Wei B, He H, Hao K, Gao L, Tang XS. Visual interaction networks: A novel bio-inspired computational model for image classification. Neural Netw 2020;130:100-110. [PMID: 32652433] [DOI: 10.1016/j.neunet.2020.06.019]
Abstract
Inspired by mechanisms and structures from neuroscience, many biologically inspired visual computational models have been proposed as new solutions for visual recognition tasks. For example, the convolutional neural network (CNN) was designed according to the hierarchical structure of biological vision and achieves superior performance in large-scale image classification. In this paper, we propose a new framework called visual interaction networks (VIN-Net), inspired by visual interaction mechanisms. More specifically, self-interaction, mutual-interaction, multi-interaction, and adaptive interaction are proposed in VIN-Net, together forming a complete set of visual interaction mechanisms. To further enhance the representational ability of visual features, an adaptive adjustment mechanism is integrated into VIN-Net. Finally, our model is evaluated on three benchmark datasets and two self-built textile defect datasets. The experimental results demonstrate that the proposed model is effective on visual classification tasks, and a textile industrial application shows that the proposed architecture outperforms state-of-the-art approaches in classification performance.
Affiliation(s)
- Bing Wei
- Engineering Research Center of Digitized Textile and Apparel Technology, Ministry of Education, Donghua University, Shanghai 201620, China; College of Information Sciences and Technology, Donghua University, Shanghai 201620, China
- Haibo He
- Department of Electrical, Computer, and Biomedical Engineering, University of Rhode Island, Kingston, RI 02881, USA
- Kuangrong Hao
- Engineering Research Center of Digitized Textile and Apparel Technology, Ministry of Education, Donghua University, Shanghai 201620, China; College of Information Sciences and Technology, Donghua University, Shanghai 201620, China
- Lei Gao
- Business School, Shandong Normal University, Ji'nan 250014, China; Commonwealth Scientific and Industrial Research Organization (CSIRO), SA 5064, Australia
- Xue-Song Tang
- Engineering Research Center of Digitized Textile and Apparel Technology, Ministry of Education, Donghua University, Shanghai 201620, China; College of Information Sciences and Technology, Donghua University, Shanghai 201620, China