1
Zalay O, Bontempi D, Bitterman DS, Birkbak N, Shyr D, Haugg F, Qian JM, Roberts H, Perni S, Prudente V, Pai S, Dekker A, Haibe-Kains B, Guthier C, Balboni T, Warren L, Krishan M, Kann BH, Swanton C, Ruysscher DD, Mak RH, Aerts HJWL. Decoding biological age from face photographs using deep learning. medRxiv 2023:2023.09.12.23295132. [PMID: 37745558] [PMCID: PMC10516042] [DOI: 10.1101/2023.09.12.23295132] [Indexed: 09/26/2023]
Abstract
Because humans age at different rates, a person's physical appearance may yield insights into their biological age and physiological health more reliably than their chronological age. In medicine, however, appearance is incorporated into medical judgments in a subjective and non-standardized fashion. In this study, we developed and validated FaceAge, a deep learning system that estimates biological age from easily obtainable, low-cost face photographs. FaceAge was trained on data from 58,851 healthy individuals, and its clinical utility was evaluated on data from 6,196 patients with cancer diagnoses from two institutions in the United States and The Netherlands. To assess the prognostic relevance of FaceAge estimates, we performed Kaplan-Meier survival analysis. To test a relevant clinical application, we assessed the performance of FaceAge in end-of-life patients with metastatic cancer who received palliative treatment by incorporating FaceAge into clinical prediction models. We found that, on average, patients with cancer look older than their chronological age, and that looking older is correlated with worse overall survival. FaceAge demonstrated significant independent prognostic performance across a range of cancer types and stages. We found that FaceAge can improve physicians' survival predictions for incurable patients receiving palliative treatment, highlighting the clinical utility of the algorithm in supporting end-of-life decision-making. FaceAge, unlike chronological age, was also significantly associated with molecular mechanisms of senescence in gene analyses. These findings may extend to diseases beyond cancer, motivating the use of deep learning algorithms to translate a patient's visual appearance into objective, quantitative, and clinically useful measures.
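The survival analysis described in this abstract rests on the Kaplan-Meier estimator. A minimal sketch of that estimator in plain Python is shown below; the sample data and the `older_looking` cohort name are purely illustrative assumptions, not values from the study.

```python
# Minimal Kaplan-Meier estimator (illustrative sketch; data below is made up).
def kaplan_meier(samples):
    """samples: list of (time, event) pairs, event=1 for death, 0 for censoring.
    Returns [(time, survival_probability)] at each time where a death occurs."""
    samples = sorted(samples)
    n_at_risk = len(samples)
    survival = 1.0
    curve = []
    i = 0
    while i < len(samples):
        t = samples[i][0]
        deaths = 0
        removed = 0
        # Group all subjects (deaths and censorings) sharing this time point.
        while i < len(samples) and samples[i][0] == t:
            deaths += samples[i][1]
            removed += 1
            i += 1
        if deaths:
            # Multiply in the conditional survival for this event time.
            survival *= 1.0 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= removed
    return curve

# Hypothetical cohort, e.g. patients whose FaceAge exceeds chronological age.
older_looking = [(2, 1), (3, 1), (3, 0), (5, 1)]
curve = kaplan_meier(older_looking)
```

Comparing curves like this between cohorts (for example, split by whether FaceAge exceeds chronological age) is the standard way such prognostic associations are visualized.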
Affiliation(s)
- Osbert Zalay
  - Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, United States of America
  - Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, United States of America
  - Division of Radiation Oncology, Queen’s University, Kingston, Canada
- Dennis Bontempi
  - Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, United States of America
  - Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, United States of America
  - Radiology and Nuclear Medicine, CARIM & GROW, Maastricht University, Maastricht, The Netherlands
  - Department of Radiation Oncology (MAASTRO), Maastricht University, Maastricht, The Netherlands
- Danielle S Bitterman
  - Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, United States of America
  - Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, United States of America
- Nicolai Birkbak
  - Department of Molecular Medicine, Aarhus University Hospital, Aarhus, Denmark
  - Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
  - Bioinformatics Research Center, Aarhus University, Aarhus, Denmark
- Derek Shyr
  - Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston
- Fridolin Haugg
  - Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, United States of America
  - Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, United States of America
- Jack M Qian
  - Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, United States of America
  - Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, United States of America
- Hannah Roberts
  - Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, United States of America
  - Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, United States of America
- Subha Perni
  - Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, United States of America
  - Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, United States of America
- Vasco Prudente
  - Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, United States of America
  - Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, United States of America
  - Radiology and Nuclear Medicine, CARIM & GROW, Maastricht University, Maastricht, The Netherlands
- Suraj Pai
  - Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, United States of America
  - Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, United States of America
  - Radiology and Nuclear Medicine, CARIM & GROW, Maastricht University, Maastricht, The Netherlands
- Andre Dekker
  - Department of Radiation Oncology (MAASTRO), Maastricht University, Maastricht, The Netherlands
- Benjamin Haibe-Kains
  - Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada
  - Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
- Christian Guthier
  - Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, United States of America
  - Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, United States of America
- Tracy Balboni
  - Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, United States of America
- Laura Warren
  - Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, United States of America
- Monica Krishan
  - Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, United States of America
- Benjamin H Kann
  - Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, United States of America
  - Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, United States of America
- Charles Swanton
  - Cancer Research UK Lung Cancer Centre of Excellence, University College London Cancer Institute, London, UK
  - Cancer Evolution and Genome Instability Laboratory, The Francis Crick Institute, London, UK
- Dirk De Ruysscher
  - Department of Radiation Oncology (MAASTRO), Maastricht University, Maastricht, The Netherlands
- Raymond H Mak
  - Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, United States of America
  - Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, United States of America
- Hugo JWL Aerts
  - Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, United States of America
  - Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, United States of America
  - Radiology and Nuclear Medicine, CARIM & GROW, Maastricht University, Maastricht, The Netherlands
  - Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, United States of America
3
Haugg F, Elgendi M, Menon C. Effectiveness of Remote PPG Construction Methods: A Preliminary Analysis. Bioengineering (Basel) 2022; 9:485. [PMID: 36290452] [PMCID: PMC9598377] [DOI: 10.3390/bioengineering9100485] [Received: 08/15/2022] [Revised: 09/05/2022] [Accepted: 09/09/2022] [Indexed: 11/24/2022]
Abstract
The contactless recording of a photoplethysmography (PPG) signal with a red-green-blue (RGB) camera is known as remote photoplethysmography (rPPG). Studies have reported on the positive impact of this technique, particularly for heart rate estimation, which has led to increased research interest in the topic. Constructing an rPPG signal from the raw RGB signals is therefore an important step. Eight rPPG methods (plane-orthogonal-to-skin (POS), local group invariance (LGI), the chrominance-based method (CHROM), orthogonal matrix image transformation (OMIT), GREEN, independent component analysis (ICA), principal component analysis (PCA), and blood volume pulse (PBV)) were assessed using dynamic time warping, power spectrum analysis, and Pearson’s correlation coefficient across different activities (at rest, while exercising in the gym, while talking, and while rotating the head) and four regions of interest (ROIs): the forehead, the left cheek, the right cheek, and a combination of all three. The best-performing methods in all categories were POS, LGI, and OMIT; each performed well in all activities. Recommendations for future work are provided.
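To make the construction step concrete, the sketch below shows a simplified, whole-signal version of the plane-orthogonal-to-skin (POS) projection. This is a hedged illustration only: the published POS method applies the projection in short sliding windows with overlap-add, whereas this sketch processes one window of per-frame mean RGB values at once.

```python
from statistics import mean, pstdev

def pos_rppg(r, g, b):
    """Simplified POS projection over a single window.
    r, g, b: equal-length lists of mean skin-pixel values per video frame.
    Returns a zero-mean pulse signal of the same length."""
    # Temporal normalization: divide each channel by its own mean.
    rn = [x / mean(r) for x in r]
    gn = [x / mean(g) for x in g]
    bn = [x / mean(b) for x in b]
    # Project the normalized RGB traces onto the two POS axes.
    s1 = [gi - bi for gi, bi in zip(gn, bn)]
    s2 = [gi + bi - 2 * ri for ri, gi, bi in zip(rn, gn, bn)]
    # Alpha-tune the second axis so both contribute comparable variance.
    alpha = pstdev(s1) / pstdev(s2) if pstdev(s2) > 0 else 0.0
    h = [a + alpha * s for a, s in zip(s1, s2)]
    # Remove the residual mean so the pulse signal oscillates around zero.
    m = mean(h)
    return [x - m for x in h]
```

A heart rate estimate would then typically be read off the dominant peak of the power spectrum of the returned signal, which is also how the abstract's power spectrum analysis compares the methods.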
Affiliation(s)
- Fridolin Haugg
  - Biomedical and Mobile Health Technology Lab, ETH Zurich, 8008 Zurich, Switzerland
  - Department of Mechanical Engineering, Karlsruhe Institute of Technology, 76131 Karlsruhe, Germany
- Mohamed Elgendi
  - Biomedical and Mobile Health Technology Lab, ETH Zurich, 8008 Zurich, Switzerland
  - Correspondence:
- Carlo Menon
  - Biomedical and Mobile Health Technology Lab, ETH Zurich, 8008 Zurich, Switzerland
4
Haugg F, Elgendi M, Menon C. Assessment of Blood Pressure Using Only a Smartphone and Machine Learning Techniques: A Systematic Review. Front Cardiovasc Med 2022; 9:894224. [PMID: 35770219] [PMCID: PMC9234172] [DOI: 10.3389/fcvm.2022.894224] [Received: 03/11/2022] [Accepted: 05/06/2022] [Indexed: 11/28/2022]
Abstract
Regular monitoring of blood pressure (BP) allows for early detection of hypertension and of symptoms related to cardiovascular disease. Measuring BP with a cuff requires equipment that is not always readily available and may be impractical for some patients. Smartphones are an integral part of most people's lives; detecting and monitoring hypertension with a smartphone is therefore likely to increase the ability to monitor BP, owing to its convenience for many patients. Smartphones lend themselves to assessing cardiovascular health because their built-in sensors and cameras provide a means of detecting arterial pulsations. To this end, several image processing and machine learning (ML) techniques for predicting BP using a smartphone have been developed. This literature review discusses several ML models that utilize smartphones. Of the 53 papers identified, seven publications were evaluated. The performance of the ML models was assessed based on classification accuracy, and on the mean error and standard deviation of error for regression. Artificial neural networks and support vector machines were the models most often used. Because a variety of factors influences the performance of an ML model, no clear preference could be determined. The number of input features ranged from five to 233, the most commonly used being demographic data and features extracted from photoplethysmogram signals. The number of participants per study ranged from 17 to 5,992. Comparisons with cuff-based measurements were mostly used to validate the results. Some of these ML models are already used to detect hypertension and BP but, to satisfy possible regulatory demands, improved reliability is needed under a wider range of conditions, including controlled and uncontrolled environments. A discussion of the advantages of the various ML techniques and the selected features is offered at the end of this systematic review.
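The regression metrics named in this abstract, mean error and standard deviation of error against cuff readings, can be computed in a few lines. The sketch below is illustrative only; the numeric values are invented, not taken from any reviewed paper.

```python
from statistics import mean, pstdev

def bp_error_stats(predicted, cuff_reference):
    """Mean error (bias) and standard deviation of error for a set of
    BP predictions, validated against cuff-based reference readings."""
    errors = [p - r for p, r in zip(predicted, cuff_reference)]
    return mean(errors), pstdev(errors)

# Hypothetical systolic predictions (mmHg) vs. cuff measurements.
me, sd = bp_error_stats([118, 135, 122, 141], [120, 130, 125, 140])
```

A low mean error with a large standard deviation indicates an unbiased but noisy predictor, which is why both figures are reported when validating models of this kind.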
Affiliation(s)
- Fridolin Haugg
  - Biomedical and Mobile Health Technology Lab, ETH Zurich, Zurich, Switzerland
  - Department of Mechanical Engineering, Karlsruhe Institute of Technology, Karlsruhe, Germany
- Mohamed Elgendi
  - Biomedical and Mobile Health Technology Lab, ETH Zurich, Zurich, Switzerland
- Carlo Menon
  - Biomedical and Mobile Health Technology Lab, ETH Zurich, Zurich, Switzerland