1
Song W, Latham NK, Liu L, Rice HE, Sainlaire M, Min L, Zhang L, Thai T, Kang MJ, Li S, Tejeda C, Lipsitz S, Samal L, Carroll DL, Adkison L, Herlihy L, Ryan V, Bates DW, Dykes PC. Improved accuracy and efficiency of primary care fall risk screening of older adults using a machine learning approach. J Am Geriatr Soc 2024; 72:1145-1154. [PMID: 38217355] [PMCID: PMC11018490] [DOI: 10.1111/jgs.18776] [Received: 09/05/2023] [Revised: 11/21/2023] [Accepted: 12/22/2023]
Abstract
BACKGROUND While many falls are preventable, they remain a leading cause of injury and death in older adults. Primary care clinics largely rely on screening questionnaires to identify people at risk of falls. Limitations of standard fall risk screening questionnaires include suboptimal accuracy, missing data, and non-standard formats, which hinder early identification of risk and prevention of fall injury. We used machine learning methods to develop and evaluate electronic health record (EHR)-based tools to identify older adults at risk of fall-related injuries in a primary care population, and compared this approach to standard fall screening questionnaires. METHODS Using patient-level clinical data from an integrated healthcare system of 16 member institutions, we conducted a case-control study to develop and evaluate prediction models for fall-related injuries in older adults. We evaluated questionnaire-derived prediction using three questions from a commonly used fall risk screening tool. We then developed four temporal machine learning models using routinely available longitudinal EHR data to predict the future risk of fall injury. We also developed a fall injury-prevention clinical decision support (CDS) implementation prototype to link preventative interventions to patient-specific fall injury risk factors. RESULTS Questionnaire-based risk screening achieved an area under the receiver operating characteristic curve (AUROC) of up to 0.59, with 23% to 33% similarity between each pair of the three fall injury screening questions. EHR-based machine learning risk screening showed significantly improved performance (best AUROC = 0.76), with similar prediction performance between the 6-month and 1-year prediction models. CONCLUSIONS The current method of questionnaire-based fall risk screening of older adults is suboptimal, with redundant items, inadequate precision, and no linkage to prevention.
A machine learning fall injury prediction method can accurately predict risk with superior sensitivity while freeing up clinical time for initiating personalized fall prevention interventions. The developed algorithm and data science pipeline can impact routine primary care fall prevention practice.
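The AUROC figures above (0.59 for the questionnaire vs. 0.76 for the EHR model) measure how often a model ranks an injured patient above an uninjured one. This can be computed directly from the Mann-Whitney formulation; the scores and labels below are invented for illustration and are not the study's data:

```python
def auroc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive case is scored
    above a randomly chosen negative case (ties count as half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical risk scores for six patients (1 = sustained a fall injury).
labels = [1, 1, 1, 0, 0, 0]
questionnaire = [2, 1, 0, 1, 0, 1]              # coarse count of "yes" answers
model = [0.81, 0.66, 0.44, 0.52, 0.20, 0.31]    # continuous EHR-model risk

print(round(auroc(questionnaire, labels), 2))   # → 0.61
print(round(auroc(model, labels), 2))           # → 0.89
```

Because a short questionnaire yields only a handful of discrete score levels, ties depress its AUROC, which is one reason a continuous EHR-based risk score can discriminate better on the same patients.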
Affiliation(s)
- Wenyu Song
- Department of Medicine, Brigham and Women's Hospital, Boston, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
- Nancy K Latham
- Department of Medicine, Brigham and Women's Hospital, Boston, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
- Luwei Liu
- Department of Medicine, Brigham and Women's Hospital, Boston, Massachusetts, USA
- Hannah E Rice
- Department of Medicine, Brigham and Women's Hospital, Boston, Massachusetts, USA
- Michael Sainlaire
- Department of Medicine, Brigham and Women's Hospital, Boston, Massachusetts, USA
- Lillian Min
- Department of Internal Medicine, University of Michigan, Ann Arbor, Michigan, USA
- Linying Zhang
- Institute for Informatics, Data Science, and Biostatistics, Washington University School of Medicine, St. Louis, Missouri, USA
- Tien Thai
- Department of Medicine, Brigham and Women's Hospital, Boston, Massachusetts, USA
- Min-Jeoung Kang
- Department of Medicine, Brigham and Women's Hospital, Boston, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
- Siyun Li
- Department of Medicine, Brigham and Women's Hospital, Boston, Massachusetts, USA
- Christian Tejeda
- Department of Medicine, Brigham and Women's Hospital, Boston, Massachusetts, USA
- Stuart Lipsitz
- Department of Medicine, Brigham and Women's Hospital, Boston, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
- Lipika Samal
- Department of Medicine, Brigham and Women's Hospital, Boston, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
- Diane L Carroll
- Yvonne L. Munn Center for Nursing Research, Massachusetts General Hospital, Boston, Massachusetts, USA
- Lesley Adkison
- Department of Nursing and Patient Care Services, Newton Wellesley Hospital, Newton, Massachusetts, USA
- Lisa Herlihy
- Division of Nursing, Salem Hospital, Salem, Massachusetts, USA
- Virginia Ryan
- Division of Nursing, Brigham and Women's Faulkner Hospital, Jamaica Plain, Massachusetts, USA
- David W Bates
- Department of Medicine, Brigham and Women's Hospital, Boston, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
- Patricia C Dykes
- Department of Medicine, Brigham and Women's Hospital, Boston, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
2
Wang L, Liu G. Research on multi-robot collaborative operation in logistics and warehousing using A3C optimized YOLOv5-PPO model. Front Neurorobot 2024; 17:1329589. [PMID: 38322650] [PMCID: PMC10844514] [DOI: 10.3389/fnbot.2023.1329589] [Received: 10/29/2023] [Accepted: 12/27/2023]
Abstract
Introduction In the field of logistics warehousing robots, collaborative operation and coordinated control have always been challenging issues. Although deep learning and reinforcement learning methods have made progress on these problems, current research still has shortcomings. In particular, research on adaptive sensing and real-time decision-making for multi-robot swarms has not yet received sufficient attention. Methods To fill this research gap, we propose a YOLOv5-PPO model based on A3C optimization. This model combines the target detection capabilities of YOLOv5 with the PPO reinforcement learning algorithm, aiming to improve the efficiency and accuracy of collaborative operations among logistics and warehousing robot groups. Results Extensive experimental evaluation on multiple datasets and tasks shows that, across different scenarios, our model successfully achieves multi-robot collaborative operation, significantly improves task completion efficiency, and maintains high accuracy in target detection and environmental understanding. Discussion In addition, our model shows excellent robustness and adaptability, coping with dynamic changes in the environment and fluctuations in demand, and provides an effective method for solving the collaborative operation problem of logistics warehousing robots.
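The abstract does not specify how detector output is wired into the policy. One plausible glue step, sketched here with a hypothetical `Detection` record (not the authors' interface), is flattening the top-k YOLO detections into a fixed-size state vector for the PPO policy network:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    x: float       # normalized box center, 0..1
    y: float
    conf: float    # detector confidence

def detections_to_state(dets, max_objects=3):
    """Flatten the top-k detections (by confidence) into a fixed-length
    state vector a policy network can consume; pad with zeros when fewer
    objects are visible so the input dimension never changes."""
    top = sorted(dets, key=lambda d: -d.conf)[:max_objects]
    state = [v for d in top for v in (d.x, d.y, d.conf)]
    state += [0.0] * (3 * max_objects - len(state))
    return state

dets = [Detection("pallet", 0.2, 0.7, 0.9), Detection("robot", 0.6, 0.4, 0.8)]
print(detections_to_state(dets))  # 9 numbers: two real objects + one padded slot
```

A fixed-length state like this is what lets a per-frame detector feed a reinforcement-learning policy whose input layer cannot vary with the number of objects in view.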
Affiliation(s)
- Lei Wang
- School of Economy and Management, Hanjiang Normal University, Shiyan, Hubei, China
- Guangjun Liu
- School of Business, Wuchang University of Technology, Wuhan, Hubei, China
3
Dinesh MG, Bacanin N, Askar SS, Abouhawwash M. Diagnostic ability of deep learning in detection of pancreatic tumour. Sci Rep 2023; 13:9725. [PMID: 37322046] [PMCID: PMC10272117] [DOI: 10.1038/s41598-023-36886-8] [Received: 02/04/2023] [Accepted: 06/12/2023]
Abstract
Pancreatic cancer is associated with high mortality rates because of insufficient diagnostic techniques; it is often diagnosed at an advanced stage, when effective treatment is no longer possible. Automated systems that can detect cancer early are therefore crucial to improving diagnosis and treatment outcomes. Several algorithms are already in use in the medical field, and valid, interpretable data are essential for effective diagnosis and therapy, but there remains much room for cutting-edge computer systems to develop. The main objective of this research is to predict pancreatic cancer early using deep learning and metaheuristic techniques: we analyze medical imaging data, mainly CT scans, and identify vital features and cancerous growths in the pancreas using a Convolutional Neural Network (CNN) and a YOLO model-based CNN (YCNN). The paper evaluates the effectiveness of the novel YCNN approach against other modern methods in predicting pancreatic cancer, predicting the vital features from the CT scan and the proportion of cancer present in the pancreas using threshold parameters as markers. We employ a CNN model to predict pancreatic cancer from images, with the YCNN aiding the categorization process. Both biomarkers and a CT image dataset are used for testing. In a thorough review of comparative findings, the YCNN method achieved 100% accuracy, outperforming other modern techniques.
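The abstract's "proportion of cancer present in the pancreas using threshold parameters as markers" suggests an area-fraction computation over per-pixel model scores. A toy sketch of that idea follows; the score grid and threshold are invented for illustration and this is not the paper's YCNN pipeline:

```python
def tumor_fraction(ct_slice, threshold):
    """Fraction of pixels whose per-pixel model score meets a threshold —
    a crude proxy for the proportion of suspicious tissue in one slice."""
    flat = [p for row in ct_slice for p in row]
    return sum(p >= threshold for p in flat) / len(flat)

# Hypothetical 3x3 grid of per-pixel malignancy scores for one CT slice.
scores = [
    [0.1, 0.2, 0.9],
    [0.0, 0.8, 0.7],
    [0.1, 0.1, 0.3],
]
print(round(tumor_fraction(scores, 0.5), 3))  # → 0.333
```

Sweeping the threshold trades sensitivity against specificity, which is why threshold parameters act as tunable markers rather than fixed constants.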
Affiliation(s)
- M G Dinesh
- Department of Computer Science and Engineering, EASA College of Engineering and Technology, Coimbatore, India
- S S Askar
- Department of Statistics and Operations Research, College of Science, King Saud University, P.O. Box 2455, 11451, Riyadh, Saudi Arabia
- Mohamed Abouhawwash
- Department of Computational Mathematics, Science and Engineering (CMSE), College of Engineering, Michigan State University, East Lansing, MI, 48824, USA.
- Department of Mathematics, Faculty of Science, Mansoura University, Mansoura, 35516, Egypt.
4
Ahmad PN, Shah AM, Lee K. A Review on Electronic Health Record Text-Mining for Biomedical Name Entity Recognition in Healthcare Domain. Healthcare (Basel) 2023; 11:1268. [PMID: 37174810] [PMCID: PMC10178605] [DOI: 10.3390/healthcare11091268] [Received: 02/28/2023] [Revised: 04/24/2023] [Accepted: 04/26/2023]
Abstract
Biomedical named entity recognition (bNER) is critical in biomedical informatics. It identifies biomedical entities with special meanings, such as people, places, and organizations, as predefined semantic types in electronic health records (EHR). bNER is essential for discovering novel knowledge using computational methods and information technology. Early bNER systems were configured manually to include domain-specific features and rules; however, these systems were limited in handling the complexity of biomedical text. Recent advances in deep learning (DL) have led to the development of more powerful bNER systems. DL-based bNER systems can learn the patterns of biomedical text automatically, making them more robust and efficient than traditional rule-based systems. This paper reviews bNER in the healthcare domain, using DL techniques and artificial intelligence on clinical records for mining treatment prediction. bNER-based tools are categorized systematically by their input representation, context, and tag (encoder/decoder) architecture. Furthermore, to create a labeled dataset for our machine learning sentiment analyzer on a set of tweets, we used a manual coding approach and the multi-task learning method to inductively bias the training signals with domain knowledge. To conclude, we discuss the challenges facing bNER systems and future directions in the healthcare field.
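The manually configured, rule-based systems the review contrasts with DL models can be approximated by a gazetteer lookup. A minimal sketch follows; the dictionary and clinical note are invented examples, and real bNER systems use far richer features:

```python
import re

# Toy gazetteer — early rule-based bNER worked roughly like this,
# matching a curated term list against the text.
GAZETTEER = {
    "metformin": "DRUG",
    "type 2 diabetes": "DISEASE",
    "insulin": "DRUG",
}

def tag_entities(text):
    """Return (matched span, entity type) pairs in order of appearance."""
    hits = []
    for term, etype in GAZETTEER.items():
        for m in re.finditer(re.escape(term), text.lower()):
            hits.append((text[m.start():m.end()], etype))
    return sorted(hits, key=lambda h: text.lower().find(h[0].lower()))

note = "Patient with type 2 diabetes started on metformin."
print(tag_entities(note))
```

The brittleness is visible immediately: a misspelling or an unseen synonym ("glucophage") yields no match, which is exactly the limitation that motivated learned, context-aware DL taggers.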
Affiliation(s)
- Pir Noman Ahmad
- School of Computer Science, Harbin Institute of Technology, Harbin 150001, China
- Adnan Muhammad Shah
- Department of Computer Engineering, Gachon University, Seongnam 13120, Republic of Korea
- KangYoon Lee
- Department of Computer Engineering, Gachon University, Seongnam 13120, Republic of Korea
5
Park J, Artin MG, Lee KE, May BL, Park M, Hur C, Tatonetti NP. Structured deep embedding model to generate composite clinical indices from electronic health records for early detection of pancreatic cancer. Patterns (N Y) 2023; 4:100636. [PMID: 36699740] [PMCID: PMC9868652] [DOI: 10.1016/j.patter.2022.100636] [Received: 07/05/2022] [Revised: 08/18/2022] [Accepted: 10/24/2022]
Abstract
The high-dimensionality, complexity, and irregularity of electronic health records (EHR) data create significant challenges for both simplified and comprehensive health assessments, prohibiting an efficient extraction of actionable insights by clinicians. If we can provide human decision-makers with a simplified set of interpretable composite indices (i.e., combining information about groups of related measures into single representative values), it will facilitate effective clinical decision-making. In this study, we built a structured deep embedding model aimed at reducing the dimensionality of the input variables by grouping related measurements as determined by domain experts (e.g., clinicians). Our results suggest that composite indices representing liver function may consistently be the most important factor in the early detection of pancreatic cancer (PC). We propose our model as a basis for leveraging deep learning toward developing composite indices from EHR for predicting health outcomes, including but not limited to various cancers, with clinically meaningful interpretations.
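A composite index in the paper's sense can be illustrated by hand: z-score each measurement in a clinician-defined group against a reference range, then take a weighted average. The liver-lab names, reference means/SDs, and equal weights below are illustrative assumptions, not the model's learned embedding:

```python
def zscore(x, mean, sd):
    """Standardize one measurement against its reference mean and SD."""
    return (x - mean) / sd

def composite_index(labs, reference, weights):
    """Collapse a clinician-defined group of measurements (e.g. liver
    function labs) into a single representative value via a weighted
    average of z-scores."""
    total = sum(weights.values())
    return sum(
        weights[name] * zscore(labs[name], *reference[name])
        for name in labs
    ) / total

# Hypothetical reference ranges as (mean, SD) and one patient's labs.
reference = {"ALT": (30.0, 10.0), "AST": (28.0, 9.0), "bilirubin": (0.8, 0.3)}
weights = {"ALT": 1.0, "AST": 1.0, "bilirubin": 1.0}
labs = {"ALT": 50.0, "AST": 37.0, "bilirubin": 1.1}

print(round(composite_index(labs, reference, weights), 2))  # → 1.33
```

The deep embedding model in the paper learns the grouping weights rather than fixing them, but the output plays the same role: one interpretable number per organ-system group instead of dozens of raw lab values.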
Affiliation(s)
- Jiheum Park
- Department of Medicine, Columbia University Irving Medical Center, New York, NY 10032, USA
- Michael G. Artin
- Hospital of the University of Pennsylvania, Philadelphia, PA 19104, USA
- Kate E. Lee
- Duke University Medical Center, Durham, NC 27710, USA
- Benjamin L. May
- Herbert Irving Comprehensive Cancer Center, Columbia University Irving Medical Center, New York, NY 10032, USA
- Michael Park
- Applied Info Partners, Inc, Worlds Fair Drive, Somerset, NJ 08873, USA
- X-Mechanics, Cresskill, NJ 07626, USA
- Chin Hur
- Department of Medicine, Columbia University Irving Medical Center, New York, NY 10032, USA