1
Xin K, Wei X, Shao J, Chen F, Liu Q, Liu B. Establishment of a novel tumor neoantigen prediction tool for personalized vaccine design. Hum Vaccin Immunother 2024; 20:2300881. [PMID: 38214336] [DOI: 10.1080/21645515.2023.2300881]
Abstract
The personalized neoantigen nanovaccine (PNVAC) platform for patients with gastric cancer that we established previously elicited promising anti-tumor immune responses. However, limited by the accuracy of traditional neoantigen prediction tools, a portion of the selected epitopes failed to induce a specific immune response. To identify more neoantigens and optimize our PNVAC platform, we developed a novel neoantigen prediction model, NUCC. This prediction tool, trained with a deep learning approach, outperforms existing prediction tools not only on two independent epitope datasets but also on a completely new epitope dataset that we constructed from scratch, comprising 150 candidate mutant peptides from 25 patients with advanced gastric cancer, 13 of which proved to be neoantigens in in vitro immunogenicity tests. Our work lays the foundation for future improvement of our PNVAC platform for gastric cancer.
Affiliation(s)
- Kai Xin
- Department of Oncology, Nanjing Drum Tower Hospital Clinical College of Nanjing University of Chinese Medicine, Nanjing, Jiangsu Province, China
- Xiao Wei
- Department of Pathology, Nanjing Drum Tower Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing, Jiangsu Province, China
- Jie Shao
- Department of Oncology, Nanjing Drum Tower Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing, Jiangsu Province, China
- Fangjun Chen
- Department of Oncology, Nanjing Drum Tower Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing, Jiangsu Province, China
- Qin Liu
- Department of Oncology, Nanjing Drum Tower Hospital Clinical College of Nanjing University of Chinese Medicine, Nanjing, Jiangsu Province, China
- Department of Oncology, Nanjing Drum Tower Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing, Jiangsu Province, China
- Baorui Liu
- Department of Oncology, Nanjing Drum Tower Hospital Clinical College of Nanjing University of Chinese Medicine, Nanjing, Jiangsu Province, China
- Department of Oncology, Nanjing Drum Tower Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing, Jiangsu Province, China
2
Ita K, Roshanaei S. Artificial intelligence for skin permeability prediction: deep learning. J Drug Target 2024; 32:334-346. [PMID: 38258521] [DOI: 10.1080/1061186x.2024.2309574]
Abstract
BACKGROUND AND OBJECTIVE Researchers have invested significant laboratory time and effort in measuring the permeability coefficient (Kp) of xenobiotics. To develop alternatives to this labour-intensive procedure, scientists have employed predictive models to describe the transport of xenobiotics across the skin. Most quantitative structure-permeability relationship (QSPR) models are derived statistically from experimental data. Recently, artificial intelligence-based computational drug delivery has attracted tremendous interest. Deep learning is an umbrella term for machine-learning algorithms built on deep neural networks (DNNs). Distinct network architectures, such as convolutional neural networks (CNNs), feedforward neural networks (FNNs), and recurrent neural networks (RNNs), can be employed for prediction. METHODS In this project, we used a convolutional neural network, a feedforward neural network, and a recurrent neural network to predict skin permeability coefficients from a publicly available database reported by Cheruvu et al. The dataset contains 476 records for 145 chemicals, xenobiotics, and pharmaceuticals applied in vitro to human epidermis from aqueous solutions of constant concentration, either saturated (infinite dose) or diluted. All computations were conducted in Python under the Anaconda and JupyterLab environments after importing the required Python, Keras, and TensorFlow modules. RESULTS We used a convolutional neural network, a feedforward neural network, and a recurrent neural network to predict log Kp. CONCLUSION This work shows that deep learning networks can successfully be used to digitally screen and predict the skin permeability of xenobiotics.
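As a minimal sketch of the feedforward-network approach described in this abstract, the toy below trains a one-hidden-layer regression network on synthetic descriptor data standing in for log Kp. It deliberately uses plain NumPy rather than the Keras/TensorFlow stack the authors used, and the two input descriptors and linear target are invented for illustration, not the cited dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "molecular descriptors -> log Kp" data (purely illustrative).
X = rng.normal(size=(200, 2))
y = (0.7 * X[:, 0] - 0.4 * X[:, 1] - 2.5).reshape(-1, 1)  # toy target

# One hidden layer (tanh) feeding a linear output unit.
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros((1, 1))

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

losses = []
lr = 0.05
for _ in range(500):
    h, pred = forward(X)
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Gradient descent on 0.5 * MSE, backpropagated through both layers.
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0, keepdims=True)
    dh = (err @ W2.T) * (1 - h ** 2)                 # tanh derivative
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0, keepdims=True)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
```

A Keras version would replace the manual loop with a compiled `Sequential` model, but the training dynamics are the same in outline.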
Affiliation(s)
- Kevin Ita
- College of Pharmacy, Touro University, Vallejo, CA, USA
3
Ouyang S, He B, Luo H, Jia F. SwinD-Net: a lightweight segmentation network for laparoscopic liver segmentation. Comput Assist Surg (Abingdon) 2024; 29:2329675. [PMID: 38504595] [DOI: 10.1080/24699322.2024.2329675]
Abstract
The real-time requirement for image segmentation in laparoscopic surgical assistance systems is extremely high. Although traditional deep learning models can ensure high segmentation accuracy, they impose a large computational burden. In the practical setting of most hospitals, where powerful computing resources are lacking, these models cannot meet real-time computational demands. We propose a novel network, SwinD-Net, based on skip connections and incorporating depthwise separable convolutions and Swin Transformer blocks. To reduce computational overhead, we eliminate the skip connection in the first layer and reduce the number of channels in shallow feature maps. Additionally, we introduce Swin Transformer blocks, which have a larger computational and parameter footprint, to extract global information and capture high-level semantic features. Through these modifications, our network achieves strong performance while maintaining a lightweight design. We conduct experiments on the CholecSeg8k dataset to validate the effectiveness of our approach. Compared to other models, our approach achieves high accuracy while significantly reducing computational and parameter overhead. Specifically, our model requires only 98.82 M floating-point operations (FLOPs) and 0.52 M parameters, with an inference time of 47.49 ms per image on a CPU. Compared to the recently proposed lightweight segmentation network UNeXt, our model not only outperforms it on the Dice metric but also has only 1/3 of the parameters and 1/22 of the FLOPs. In addition, our model achieves 2.4 times faster inference than UNeXt, demonstrating comprehensive improvements in both accuracy and speed. Our model effectively reduces parameter count and computational complexity, improving inference speed while maintaining comparable accuracy. The source code will be available at https://github.com/ouyangshuiming/SwinDNet.
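The parameter savings from depthwise separable convolutions, one of the two ingredients this abstract names, follow from simple counting. The channel and kernel sizes below are arbitrary examples, not the actual SwinD-Net configuration:

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (biases ignored)."""
    return c_in * c_out * k * k

def sep_conv_params(c_in, c_out, k):
    """Depthwise k x k convolution followed by a 1 x 1 pointwise convolution."""
    return c_in * k * k + c_in * c_out

standard = conv_params(64, 128, 3)       # 64 * 128 * 9  = 73,728 weights
separable = sep_conv_params(64, 128, 3)  # 576 + 8,192   =  8,768 weights
savings = standard / separable           # roughly 8.4x fewer parameters
```

The same factorization also cuts FLOPs, since each weight is applied once per output position; this is the standard argument behind lightweight architectures such as the one described above.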
Affiliation(s)
- Shuiming Ouyang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Baochun He
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Huoling Luo
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Fucang Jia
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, China
4
Wang T, Dremel J, Richter S, Polanski W, Uckermann O, Eyüpoglu I, Czarske JW, Kuschmierz R. Resolution-enhanced multi-core fiber imaging learned on a digital twin for cancer diagnosis. Neurophotonics 2024; 11:S11505. [PMID: 38298866] [PMCID: PMC10828892] [DOI: 10.1117/1.nph.11.s1.s11505]
Abstract
Significance Deep learning enables label-free all-optical biopsies and automated tissue classification. Endoscopic systems provide intraoperative diagnostics deep within tissue and speed up treatment without harmful tissue removal. However, conventional multi-core fiber (MCF) endoscopes suffer from low resolution and artifacts, which hinder tumor diagnostics. Aim We introduce a method for unpixelated, high-resolution tumor imaging through a given MCF with a diameter of around 0.65 mm, an arbitrary core arrangement, and inhomogeneous transmissivity. Approach Image reconstruction is based on deep learning and a digital twin concept: a single-reference-based simulation incorporating the inhomogeneous optical properties of the MCF, combined with transfer learning on a small experimental dataset of biological tissue. The reference provides physical information about the MCF during training. Results For the simulated data, hallucination caused by the MCF inhomogeneity was eliminated, and the average peak signal-to-noise ratio and structural similarity increased from 11.2 dB and 0.20 to 23.4 dB and 0.74, respectively. With transfer learning, the metrics on independent test images acquired experimentally on glioblastoma tissue ex vivo reached up to 31.6 dB and 0.97 at a computing speed of 14 fps. Conclusions With the proposed approach, only a single reference image is required in the pre-training stage, and laborious acquisition of training data is bypassed. Validation on glioblastoma cryosections with transfer learning on only 50 image pairs showed the capability for high-resolution deep-tissue retrieval and high clinical feasibility.
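The peak signal-to-noise ratio used to quantify the reconstruction gains in this abstract is straightforward to compute. A minimal NumPy version follows, with random toy images standing in for real MCF reconstructions (the two noise levels are invented to mimic a coarse vs. refined result):

```python
import numpy as np

def psnr(reference, image, data_range=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, data_range]."""
    mse = np.mean((reference - image) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(1)
reference = rng.random((64, 64))
# Two simulated reconstructions: heavy vs. mild residual noise.
coarse = np.clip(reference + rng.normal(scale=0.20, size=reference.shape), 0, 1)
refined = np.clip(reference + rng.normal(scale=0.05, size=reference.shape), 0, 1)
```

A reconstruction with lower residual error yields a higher PSNR, which is how improvements such as 11.2 dB to 23.4 dB in the abstract are read.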
Affiliation(s)
- Tijue Wang
- TU Dresden, Laboratory of Measurement and Sensor System Technique, Dresden, Germany
- TU Dresden, Competence Center BIOLAS, Dresden, Germany
- TU Dresden, Else Kröner Fresenius Center for Digital Health, Germany
- Jakob Dremel
- TU Dresden, Laboratory of Measurement and Sensor System Technique, Dresden, Germany
- TU Dresden, Competence Center BIOLAS, Dresden, Germany
- TU Dresden, Else Kröner Fresenius Center for Digital Health, Germany
- Sven Richter
- TU Dresden, Else Kröner Fresenius Center for Digital Health, Germany
- University Hospital Carl Gustav Carus, TU Dresden, Department of Neurosurgery, Dresden, Germany
- Witold Polanski
- TU Dresden, Else Kröner Fresenius Center for Digital Health, Germany
- University Hospital Carl Gustav Carus, TU Dresden, Department of Neurosurgery, Dresden, Germany
- Ortrud Uckermann
- TU Dresden, Else Kröner Fresenius Center for Digital Health, Germany
- University Hospital Carl Gustav Carus, TU Dresden, Department of Neurosurgery, Dresden, Germany
- University Hospital Carl Gustav Carus, TU Dresden, Division of Medical Biology, Department of Psychiatry, Faculty of Medicine, Dresden, Germany
- Ilker Eyüpoglu
- TU Dresden, Else Kröner Fresenius Center for Digital Health, Germany
- University Hospital Carl Gustav Carus, TU Dresden, Department of Neurosurgery, Dresden, Germany
- Jürgen W. Czarske
- TU Dresden, Laboratory of Measurement and Sensor System Technique, Dresden, Germany
- TU Dresden, Competence Center BIOLAS, Dresden, Germany
- TU Dresden, Else Kröner Fresenius Center for Digital Health, Germany
- TU Dresden, Excellence Cluster Physics of Life, Dresden, Germany
- TU Dresden, School of Science, Faculty of Physics, Dresden, Germany
- Robert Kuschmierz
- TU Dresden, Laboratory of Measurement and Sensor System Technique, Dresden, Germany
- TU Dresden, Competence Center BIOLAS, Dresden, Germany
- TU Dresden, Else Kröner Fresenius Center for Digital Health, Germany
5
Carvalho Macruz FBD, Dias ALMP, Andrade CS, Nucci MP, Rimkus CDM, Lucato LT, Rocha AJD, Kitamura FC. The new era of artificial intelligence in neuroradiology: current research and promising tools. Arq Neuropsiquiatr 2024; 82:1-12. [PMID: 38565188] [PMCID: PMC10987255] [DOI: 10.1055/s-0044-1779486]
Abstract
Radiology has a number of characteristics that make it an especially suitable medical discipline for early artificial intelligence (AI) adoption: a well-established digital workflow, standardized protocols for image storage, and numerous well-defined interpretive activities. The more than 200 commercial radiologic AI-based products recently approved by the Food and Drug Administration (FDA) to assist radiologists in narrow image-analysis tasks such as image enhancement, workflow triage, and quantification corroborate this observation. However, to leverage AI to boost efficacy and efficiency, and to overcome the substantial obstacles to widespread successful clinical use of these products, radiologists should become familiar with the emerging applications in their particular areas of expertise. In light of this, we survey the existing literature on the application of AI-based techniques in neuroradiology, focusing on conditions such as vascular diseases, epilepsy, and demyelinating and neurodegenerative conditions. We also introduce some of the algorithms behind the applications, briefly discuss the challenges of generalization in the use of AI models in neuroradiology, and review the most relevant commercially available solutions adopted in clinical practice. If well designed, AI algorithms have the potential to radically improve radiology, strengthening image analysis, enhancing the value of quantitative imaging techniques, and mitigating diagnostic errors.
Affiliation(s)
- Fabíola Bezerra de Carvalho Macruz
- Universidade de São Paulo, Hospital das Clínicas, Departamento de Radiologia e Oncologia, Seção de Neurorradiologia, Faculdade de Medicina, São Paulo SP, Brazil.
- Rede D'Or São Luiz, Departamento de Radiologia e Diagnóstico por Imagem, São Paulo SP, Brazil.
- Universidade de São Paulo, Laboratório de Investigação Médica em Ressonância Magnética (LIM 44), São Paulo SP, Brazil.
- Academia Nacional de Medicina, Rio de Janeiro RJ, Brazil.
- Mariana Penteado Nucci
- Universidade de São Paulo, Laboratório de Investigação Médica em Ressonância Magnética (LIM 44), São Paulo SP, Brazil.
- Carolina de Medeiros Rimkus
- Universidade de São Paulo, Hospital das Clínicas, Departamento de Radiologia e Oncologia, Seção de Neurorradiologia, Faculdade de Medicina, São Paulo SP, Brazil.
- Rede D'Or São Luiz, Departamento de Radiologia e Diagnóstico por Imagem, São Paulo SP, Brazil.
- Universidade de São Paulo, Laboratório de Investigação Médica em Ressonância Magnética (LIM 44), São Paulo SP, Brazil.
- Leandro Tavares Lucato
- Universidade de São Paulo, Hospital das Clínicas, Departamento de Radiologia e Oncologia, Seção de Neurorradiologia, Faculdade de Medicina, São Paulo SP, Brazil.
- Diagnósticos da América SA, São Paulo SP, Brazil.
- Felipe Campos Kitamura
- Diagnósticos da América SA, São Paulo SP, Brazil.
- Universidade Federal de São Paulo, São Paulo SP, Brazil.
6
Chacko R, Davis MJ, Levy J, LeBoeuf M. Integration of a deep learning basal cell carcinoma detection and tumor mapping algorithm into the Mohs micrographic surgery workflow and effects on clinical staffing: A simulated, retrospective study. JAAD Int 2024; 15:185-191. [PMID: 38651039] [PMCID: PMC11033206] [DOI: 10.1016/j.jdin.2024.02.014]
Abstract
Background Artificial intelligence (AI)-enabled tools have been proposed as one solution to improve health care delivery. However, research on the downstream effects of AI integration into the clinical workflow is lacking. Objective We aim to analyze how integration of an automated basal cell carcinoma detection and tumor mapping algorithm into a Mohs micrographic surgery unit affects the work efficiency of clinical and laboratory staff. Methods Slide, staff, and histotechnician waiting times were analyzed over a 20-day period in a Mohs micrographic surgery unit. A simulated AI workflow was created, and the time differences between the real and simulated workflows were compared. Results Simulated nonautonomous algorithm integration led to savings of 35.6% of slide waiting time, 18.4% of staff waiting time, and 18.6% of histotechnician waiting time per day. Algorithm integration on days with increased reconstruction complexity resulted in the greatest time savings. Limitations One Mohs micrographic surgery unit was analyzed, and simulated AI integration was performed retrospectively. Conclusions AI integration reduces staff waiting times, enabling increased productivity and a streamlined clinical workflow. Schedules containing surgical cases with either increased repair complexity or numerous tumor removal stages stand to benefit most. However, significant logistical challenges must be addressed before broad adoption into clinical practice is realistic.
Affiliation(s)
- Rachael Chacko
- Geisel School of Medicine at Dartmouth, Hanover, New Hampshire
- Matthew J. Davis
- Department of Dermatology, Dartmouth Health, Lebanon, New Hampshire
- Joshua Levy
- Geisel School of Medicine at Dartmouth, Hanover, New Hampshire
- Department of Dermatology, Dartmouth Health, Lebanon, New Hampshire
- Matthew LeBoeuf
- Geisel School of Medicine at Dartmouth, Hanover, New Hampshire
- Department of Dermatology, Dartmouth Health, Lebanon, New Hampshire
7
Madadi Y, Abu-Serhan H, Yousefi S. Domain Adaptation-Based Deep Learning Model for Forecasting and Diagnosis of Glaucoma Disease. Biomed Signal Process Control 2024; 92:106061. [PMID: 38463435] [PMCID: PMC10922017] [DOI: 10.1016/j.bspc.2024.106061]
Abstract
Glaucoma is the main cause of irreversible blindness, and early detection greatly reduces the risk of further vision loss. To address this problem, we developed a domain adaptation-based deep learning model called Glaucoma Domain Adaptation (GDA) based on 66,742 fundus photographs collected from 3272 eyes of 1636 subjects. GDA learns domain-invariant and domain-specific representations to extract both general and specific features. We also developed a progressive weighting mechanism to accurately transfer source-domain knowledge while mitigating the transfer of negative knowledge from the source to the target domain. We employed low-rank coding to align the source and target distributions. We trained GDA under three different scenarios, with eyes annotated as glaucoma due to 1) optic disc abnormalities regardless of visual field abnormalities, 2) optic disc or visual field abnormalities, excluding eyes that are glaucomatous due to both optic disc and visual field abnormalities at the same time, and 3) visual field abnormalities regardless of optic disc abnormalities. We then evaluated the generalizability of GDA on two independent datasets. The AUCs of GDA in forecasting glaucoma under the first, second, and third scenarios were 0.90, 0.88, and 0.80, and the accuracies were 0.82, 0.78, and 0.72, respectively. The AUCs of GDA in diagnosing glaucoma under the first, second, and third scenarios were 0.98, 0.96, and 0.93, and the accuracies were 0.93, 0.91, and 0.88, respectively. The proposed GDA model achieved high performance and generalizability for forecasting and diagnosis of glaucoma from fundus photographs. GDA may augment glaucoma research and clinical practice by identifying patients with glaucoma and forecasting those who may develop it, thus preventing future vision loss.
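The AUC figures reported in this abstract come from ranking predicted scores against labels. A dependency-free sketch of ROC AUC via the Mann-Whitney U statistic (assuming no tied scores; the example labels and scores are invented) is:

```python
import numpy as np

def roc_auc(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic; assumes no tied scores."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    order = scores.argsort()
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)  # rank 1 = lowest score
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    # Probability that a random positive outranks a random negative.
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

For perfectly separated scores, e.g. `roc_auc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9])`, the value is 1.0; an AUC of 0.90 as in the first scenario means 90% of positive-negative pairs are ranked correctly.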
Affiliation(s)
- Yeganeh Madadi
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN, USA
- Siamak Yousefi
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN, USA
- Department of Genetics, Genomics, and Informatics, University of Tennessee Health Science Center, Memphis, TN, USA
8
Niemeyer F, Galbusera F, Beukers M, Jonas R, Tao Y, Fusellier M, Tryfonidou MA, Neidlinger‐Wilke C, Kienle A, Wilke H. Automatic grading of intervertebral disc degeneration in lumbar dog spines. JOR Spine 2024; 7:e1326. [PMID: 38633660] [PMCID: PMC11022603] [DOI: 10.1002/jsp2.1326]
Abstract
Background Intervertebral disc degeneration is frequent in dogs and can be associated with symptoms and functional impairments. The degree of disc degeneration can be assessed on T2-weighted MRI scans using the Pfirrmann classification scheme, which was developed for the human spine; such grading could also be used to quantify the effectiveness of disc regeneration therapies. We developed and tested a deep learning tool able to automatically score the degree of disc degeneration in dog spines, starting from an existing model designed to process images of human patients. Methods MRI midsagittal scans of 5991 lumbar discs of canine patients were collected and manually evaluated with the Pfirrmann scheme and with a modified scheme including transitional grades. A deep learning model was trained to classify the disc images based on the two schemes and tested by comparing its performance with that of the model processing human images. Results The determination of the Pfirrmann grade showed sensitivities higher than 83% for all degeneration grades except grade 5, which is rare in dog spines, together with high specificities. In comparison, the corresponding human model had slightly higher sensitivities, on average 90% versus 85% for the canine model. The modified scheme with fractional grades did not show significant advantages over the original Pfirrmann grades. Conclusions The novel tool accurately and reliably scored the severity of disc degeneration in dogs, although with performance somewhat below that of the human model. The tool has potential in the clinical management of disc degeneration in canine patients, as well as in longitudinal studies evaluating regenerative therapies in dogs used as animal models of human disorders.
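Per-grade sensitivity and specificity of the kind reported in this abstract fall directly out of a confusion matrix. The 3-grade matrix below is invented for illustration and is not the study's data:

```python
import numpy as np

# Rows: true grade, columns: predicted grade (toy 3-grade example).
cm = np.array([[50,  3,  0],
               [ 4, 40,  6],
               [ 0,  5, 45]])

# Sensitivity (recall) per grade: correct predictions / all true cases.
sensitivity = np.diag(cm) / cm.sum(axis=1)

# Specificity per grade: true negatives / all true non-cases of that grade.
total = cm.sum()
tp = np.diag(cm)
fp = cm.sum(axis=0) - tp   # predicted as this grade but actually another
fn = cm.sum(axis=1) - tp   # this grade but predicted as another
tn = total - tp - fp - fn
specificity = tn / (tn + fp)
```

In this toy matrix the middle grade has sensitivity 40/50 = 0.80, illustrating how a "sensitivity higher than 83%" claim is verified grade by grade.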
Affiliation(s)
- Frank Niemeyer
- Institute for Orthopaedic Research and Biomechanics, Centre for Trauma Research, University Hospital Ulm, Ulm, Germany
- SpineServ GmbH & Co. KG, Ulm, Germany
- Fabio Galbusera
- Institute for Orthopaedic Research and Biomechanics, Centre for Trauma Research, University Hospital Ulm, Ulm, Germany
- SpineServ GmbH & Co. KG, Ulm, Germany
- Head, Research Group Spine, Spine Center, Schulthess Clinic, Zürich, Switzerland
- Martijn Beukers
- Department of Clinical Sciences, Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands
- René Jonas
- Institute for Orthopaedic Research and Biomechanics, Centre for Trauma Research, University Hospital Ulm, Ulm, Germany
- Marion Fusellier
- Maitre de Conférences Imagerie Médicale, INSERM UMRS1229, Regenerative Medicine and Skeleton (RMeS) Team STEP, School of Dental Surgery, Nantes, France
- Marianna A. Tryfonidou
- Department of Clinical Sciences, Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands
- Cornelia Neidlinger‐Wilke
- Institute for Orthopaedic Research and Biomechanics, Centre for Trauma Research, University Hospital Ulm, Ulm, Germany
- SpineServ GmbH & Co. KG, Ulm, Germany
- Hans‐Joachim Wilke
- Institute for Orthopaedic Research and Biomechanics, Centre for Trauma Research, University Hospital Ulm, Ulm, Germany
- SpineServ GmbH & Co. KG, Ulm, Germany
9
Thompson AR. A comparison of two learning approach inventories and their utility in predicting examination performance and study habits. Adv Physiol Educ 2024; 48:164-170. [PMID: 38269405] [DOI: 10.1152/advan.00227.2023]
Abstract
The revised two-factor Study Process Questionnaire and the Approaches and Study Skills Inventory for Students are two instruments commonly used to measure student learning approach. Although they are designed to measure similar constructs, it is unclear whether the metrics they provide differ in terms of their real-world classification of learning approach. The purpose of this study is to compare outcomes of these two inventories in a study population from an undergraduate (baccalaureate) human anatomy course. The three central goals of this study are to compare the inventories in terms of 1) how students are classified, 2) the relationship between examination performance, time spent studying, and learning approach, and 3) instrument reliability. Results demonstrate that student classifications of corresponding scales of each inventory are significantly correlated, suggesting they measure similar constructs. Although the inventories had similar reliability, neither was consistently strong in predicting examination performance or study habits. Overall, these results suggest that the two inventories are comparable in terms of how they measure learning approach, but the lack of correspondence between learning approach scores and measurement outcomes questions their validity as tools that can be used universally in classrooms. NEW & NOTEWORTHY Although learning approach inventories have been used extensively in education research, there has been no direct comparison of how student classification differs between instruments or how classification influences the interpretation of how learning approach impacts student performance. This is especially relevant in light of recent research questioning the validity of the Study Process Questionnaire (LoGiudice AB, Norman GR, Manzoor S, Monteiro S. Adv Health Sci Educ Theory Pract 28: 47-63, 2023; Johnson SN, Gallagher ED, Vagnozzi AM. PLoS One 16: e0250600, 2021).
Affiliation(s)
- Andrew R Thompson
- Department of Medical Education, University of Cincinnati College of Medicine, Cincinnati, Ohio, United States
10
Lee JH, Kim JY, Ryu K, Al-Masni MA, Kim TH, Han D, Kim HG, Kim DH. JUST-Net: Jointly unrolled cross-domain optimization based spatio-temporal reconstruction network for accelerated 3D myelin water imaging. Magn Reson Med 2024; 91:2483-2497. [PMID: 38342983] [DOI: 10.1002/mrm.30021]
Abstract
PURPOSE We introduce a novel reconstruction network, the jointly unrolled cross-domain optimization-based spatio-temporal reconstruction network (JUST-Net), aimed at accelerating 3D multi-echo gradient-echo (mGRE) data acquisition and improving the quality of the resulting myelin water imaging (MWI) maps. METHODS An unrolled cross-domain spatio-temporal reconstruction network was designed. The main idea is to combine frequency and spatio-temporal image feature representations and to sequentially apply convolution layers in both domains. The k-space subnetwork utilizes shared information from adjacent frames, whereas the image subnetwork applies separate convolutions in the spatial and temporal dimensions. The proposed reconstruction network was evaluated for both retrospectively and prospectively accelerated acquisitions. Furthermore, it was assessed in simulation studies and real-world cases with k-space corruptions to evaluate its potential for motion artifact reduction. RESULTS The proposed JUST-Net enabled highly reproducible and accelerated 3D mGRE acquisition for whole-brain MWI, reducing the acquisition time from 15:23 min (fully sampled) to 2:22 min, within a 3-min reconstruction time. The normalized root mean squared error of the reconstructed mGRE images increased by less than 4.0%, and the correlation coefficients for MWI exceeded 0.68 when compared to the fully sampled reference. Additionally, the proposed method demonstrated a mitigating effect in both simulated and clinical motion-corrupted cases. CONCLUSION The proposed JUST-Net achieves high acceleration factors for 3D mGRE-based MWI, which is expected to facilitate widespread clinical application of MWI.
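Retrospective acceleration of the kind evaluated in this abstract amounts to masking k-space lines and measuring the resulting error. A toy NumPy illustration of 4x undersampling with a zero-filled reconstruction and the normalized RMSE metric follows; the random image stands in for mGRE data, and no unrolled network is involved:

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy "fully sampled" image and its k-space.
image = rng.random((64, 64))
kspace = np.fft.fft2(image)

# Retrospective 4x undersampling: keep every 4th phase-encode line.
mask = np.zeros(kspace.shape, dtype=bool)
mask[::4, :] = True
zero_filled = np.fft.ifft2(np.where(mask, kspace, 0)).real

def nrmse(reference, reconstruction):
    """Normalized root mean squared error relative to the reference."""
    return np.linalg.norm(reconstruction - reference) / np.linalg.norm(reference)
```

The zero-filled result has a large NRMSE from aliasing and lost signal energy; a learned reconstruction such as the one described above aims to recover the missing lines so the NRMSE penalty of acceleration stays small.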
Affiliation(s)
- Jae-Hun Lee
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Artificial Intelligence and Robotics Institute, Korea Institute of Science and Technology, Seoul, Republic of Korea
- Jae-Yoon Kim
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Kanghyun Ryu
- Artificial Intelligence and Robotics Institute, Korea Institute of Science and Technology, Seoul, Republic of Korea
- Mohammed A Al-Masni
- Department of Artificial Intelligence, Sejong University, Seoul, Republic of Korea
- Tae Hyung Kim
- Department of Computer Engineering, Hongik University, Seoul, Republic of Korea
- Dongyeob Han
- Siemens Healthineers Ltd, Seoul, Republic of Korea
- Hyun Gi Kim
- Department of Radiology, Eunpyeong St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Dong-Hyun Kim
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
11
Kilic T, Liebig P, Demirel OB, Herrler J, Nagel AM, Ugurbil K, Akçakaya M. Unsupervised deep learning with convolutional neural networks for static parallel transmit design: A retrospective study. Magn Reson Med 2024; 91:2498-2507. [PMID: 38247050] [PMCID: PMC10997461] [DOI: 10.1002/mrm.30014]
Abstract
PURPOSE To mitigate $B_1^+$ inhomogeneity at 7T for multi-channel transmit arrays using unsupervised deep learning with convolutional neural networks (CNNs). METHODS Deep learning parallel transmit (pTx) pulse design has received attention, but such methods have relied on supervised training and did not use CNNs for multi-channel $B_1^+$ maps. In this work, we introduce an alternative approach that facilitates the use of CNNs with multi-channel $B_1^+$ maps while performing unsupervised training. The multi-channel $B_1^+$ maps are concatenated along the spatial dimension to enable shift-equivariant processing amenable to CNNs. Training is performed in an unsupervised manner using a physics-driven loss function that minimizes the discrepancy between the Bloch simulation and the target magnetization, which eliminates the calculation of reference transmit RF weights. The training database comprises 3824 2D sagittal, multi-channel $B_1^+$ maps of the healthy human brain from 143 subjects. $B_1^+$ data were acquired at 7T using an 8Tx/32Rx head coil. The proposed method is compared to the unregularized magnitude least-squares (MLS) solution for the target magnetization in static pTx design. RESULTS The proposed method outperformed the unregularized MLS solution in RMS error and coefficient of variation and had comparable energy consumption. Additionally, unlike the unregularized MLS solution, the proposed method did not show local phase singularities leading to distinct holes in the resulting magnetization. CONCLUSION The proposed unsupervised deep learning method with CNNs outperforms unregularized MLS in static pTx in terms of speed and robustness.
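The magnitude least-squares baseline mentioned above can be illustrated with a small sketch: the combined transmit field is the channel-weighted sum of the $B_1^+$ maps, and the cost penalizes deviation of its magnitude from a uniform target. The random maps, target, and crude random search (standing in for the CNN that would output the weights) are illustrative assumptions, not the paper's optimizer.

```python
import numpy as np

def mls_loss(b1_maps, weights, target):
    """Magnitude least-squares cost for static pTx: the combined field is
    the channel-weighted sum of B1+ maps; penalize |field| vs. target."""
    field = b1_maps @ weights                        # (n_voxels,) complex
    return np.mean((np.abs(field) - target) ** 2)

rng = np.random.default_rng(1)
n_vox, n_ch = 200, 8
b1 = rng.normal(size=(n_vox, n_ch)) + 1j * rng.normal(size=(n_vox, n_ch))
target = np.ones(n_vox)                              # uniform flip-angle target

# Circularly-polarized-like baseline vs. a crude random search,
# a hypothetical stand-in for the network that predicts the weights.
w_cp = np.ones(n_ch, complex) / n_ch
best_w, best_loss = w_cp, mls_loss(b1, w_cp, target)
for _ in range(500):
    w = rng.normal(size=n_ch) + 1j * rng.normal(size=n_ch)
    w /= np.linalg.norm(w)                           # fixed RF energy budget
    loss = mls_loss(b1, w, target)
    if loss < best_loss:
        best_w, best_loss = w, loss
```

The paper's physics-driven loss replaces this magnitude mismatch with the discrepancy of a full Bloch simulation from the target magnetization, which is what removes the need for precomputed reference weights.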
Affiliation(s)
- Toygan Kilic: Electrical and Computer Engineering, University of Minnesota, Minneapolis, Minnesota, USA; Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, Minnesota, USA
- Omer Burak Demirel: Electrical and Computer Engineering, University of Minnesota, Minneapolis, Minnesota, USA; Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, Minnesota, USA
- Armin M Nagel: Institute of Radiology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany; Division of Medical Physics in Radiology, German Cancer Research Centre (DKFZ), Heidelberg, Germany
- Kamil Ugurbil: Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, Minnesota, USA
- Mehmet Akçakaya: Electrical and Computer Engineering, University of Minnesota, Minneapolis, Minnesota, USA; Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, Minnesota, USA

12
Hilgers L, Ghaffari Laleh N, West NP, Westwood A, Hewitt KJ, Quirke P, Grabsch HI, Carrero ZI, Matthaei E, Loeffler CML, Brinker TJ, Yuan T, Brenner H, Brobeil A, Hoffmeister M, Kather JN. Automated curation of large-scale cancer histopathology image datasets using deep learning. Histopathology 2024; 84:1139-1153. [PMID: 38409878] [DOI: 10.1111/his.15159]
Abstract
BACKGROUND Artificial intelligence (AI) has numerous applications in pathology, supporting diagnosis and prognostication in cancer. However, most AI models are trained on highly selected data, typically one tissue slide per patient. In reality, especially for large surgical resection specimens, dozens of slides can be available for each patient. Manually sorting and labelling whole-slide images (WSIs) is a very time-consuming process, hindering the direct application of AI to the tissue samples collected from large cohorts. In this study we addressed this issue by developing a deep-learning (DL)-based method for automatic curation of large pathology datasets with several slides per patient. METHODS We collected multiple large multicentric datasets of colorectal cancer histopathological slides from the United Kingdom (FOXTROT, N = 21,384 slides; CR07, N = 7985 slides) and Germany (DACHS, N = 3606 slides). These datasets contained multiple types of tissue slides, including bowel resection specimens, endoscopic biopsies, lymph node resections, immunohistochemistry-stained slides, and tissue microarrays. We developed, trained, and tested a deep convolutional neural network model to predict the type of slide from the slide overview (thumbnail) image. The primary statistical endpoint was the macro-averaged area under the receiver operating characteristic curve (AUROC) for detection of the type of slide. RESULTS In the primary dataset (FOXTROT), the algorithm achieved high classification performance, with an AUROC of 0.995 (95% confidence interval [CI]: 0.994-0.996), and was able to accurately predict the type of slide from the thumbnail image alone. In the two external test cohorts (CR07, DACHS), AUROCs of 0.982 (95% CI: 0.979-0.985) and 0.875 (95% CI: 0.864-0.887) were observed, which indicates the generalizability of the trained model to unseen datasets. With a confidence threshold of 0.95, the model reached an accuracy of 94.6% (7331 classified cases) in CR07 and 85.1% (2752 classified cases) in the DACHS cohort. CONCLUSION Our findings show that the low-resolution thumbnail image is sufficient to accurately classify the type of slide in digital pathology. This can help researchers make the vast resource of existing pathology archives accessible to modern AI models with only minimal manual annotation.
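The confidence-threshold curation step described in the results — classify a slide only when the model's top probability reaches 0.95, then report accuracy over the classified cases — is easy to sketch. The toy probabilities and labels below are illustrative assumptions, not data from the study.

```python
import numpy as np

def curate(probs, threshold=0.95):
    """Keep only slides whose top class probability reaches the threshold;
    return (predicted_class, kept_mask)."""
    pred = probs.argmax(axis=1)
    kept = probs.max(axis=1) >= threshold
    return pred, kept

# Toy probabilities for 4 slides over 3 hypothetical slide types
p = np.array([[0.98, 0.01, 0.01],
              [0.50, 0.30, 0.20],
              [0.02, 0.96, 0.02],
              [0.40, 0.35, 0.25]])
pred, kept = curate(p)

# Accuracy is then computed only over the confidently classified slides;
# the low-confidence remainder would be routed to manual review.
labels = np.array([0, 1, 1, 2])
acc = (pred[kept] == labels[kept]).mean()
```

This trades coverage for accuracy: a higher threshold classifies fewer slides automatically but makes fewer mistakes on the ones it does classify.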
Affiliation(s)
- Lars Hilgers: Department of Medicine III, University Hospital RWTH Aachen, Aachen, Germany; Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany
- Narmin Ghaffari Laleh: Department of Medicine III, University Hospital RWTH Aachen, Aachen, Germany; Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany
- Nicholas P West: Pathology & Data Analytics, Leeds Institute of Medical Research at St James's, University of Leeds, Leeds, UK
- Alice Westwood: Pathology & Data Analytics, Leeds Institute of Medical Research at St James's, University of Leeds, Leeds, UK
- Katherine J Hewitt: Department of Medicine III, University Hospital RWTH Aachen, Aachen, Germany; Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany
- Philip Quirke: Pathology & Data Analytics, Leeds Institute of Medical Research at St James's, University of Leeds, Leeds, UK
- Heike I Grabsch: Pathology & Data Analytics, Leeds Institute of Medical Research at St James's, University of Leeds, Leeds, UK; Department of Pathology, GROW - Research Institute for Oncology and Reproduction, Maastricht University Medical Center+, Maastricht, The Netherlands
- Zunamys I Carrero: Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany
- Emylou Matthaei: Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany
- Chiara M L Loeffler: Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany
- Titus J Brinker: Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Tanwei Yuan: Division of Clinical Epidemiology and Aging Research, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Hermann Brenner: Division of Clinical Epidemiology and Aging Research, German Cancer Research Center (DKFZ), Heidelberg, Germany; Division of Preventive Oncology, German Cancer Research Center (DKFZ) and National Center for Tumor Diseases (NCT), Heidelberg, Germany; German Cancer Consortium (DKTK), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Alexander Brobeil: Institute of Pathology, University Hospital Heidelberg, Heidelberg, Germany; Tissue Bank, National Center for Tumor Diseases (NCT), University Hospital Heidelberg, Heidelberg, Germany
- Michael Hoffmeister: Division of Clinical Epidemiology and Aging Research, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Jakob Nikolas Kather: Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany; Pathology & Data Analytics, Leeds Institute of Medical Research at St James's, University of Leeds, Leeds, UK; Medical Oncology, National Center for Tumor Diseases (NCT), University Hospital Heidelberg, Heidelberg, Germany

13
Fernandez-Bermejo J, Martinez-Del-Rincon J, Dorado J, Toro XD, Santofimia MJ, Lopez JC. Edge Computing Transformers for Fall Detection in Older Adults. Int J Neural Syst 2024; 34:2450026. [PMID: 38490957] [DOI: 10.1142/s0129065724500266]
Abstract
The global trend of increasing life expectancy introduces new challenges with far-reaching implications. Among these, the risk of falls among older adults is particularly significant, affecting individual health and quality of life, and placing an additional burden on healthcare systems. Existing fall detection systems often have limitations, including delays due to continuous server communication, high false-positive rates, low adoption rates due to wearability and comfort issues, and high costs. In response to these challenges, this work presents a reliable, wearable, and cost-effective fall detection system. The proposed system consists of a fit-for-purpose device with an embedded algorithm and an Inertial Measurement Unit (IMU), enabling real-time fall detection. The algorithm combines a Threshold-Based Algorithm (TBA) and a neural network with a small number of parameters based on a Transformer architecture. The system demonstrates notable performance, with 95.29% accuracy, 93.68% specificity, and 96.66% sensitivity, while using only 0.38% of the trainable parameters of the comparison approach.
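The TBA-plus-network cascade described above can be sketched as a two-stage detector: a cheap threshold test on IMU acceleration magnitude gates a more expensive classifier. The threshold value, toy signals, and the lambda standing in for the Transformer are illustrative assumptions, not the paper's tuned design.

```python
import math

def accel_magnitude(sample):
    """Magnitude of one 3-axis accelerometer sample, in g."""
    ax, ay, az = sample
    return math.sqrt(ax * ax + ay * ay + az * az)

def threshold_trigger(window, threshold_g=2.5):
    """Stage 1 (TBA): wake the classifier only when acceleration
    exceeds a threshold, saving energy on the wearable."""
    return any(accel_magnitude(s) > threshold_g for s in window)

def detect_fall(window, classify):
    """Run the (expensive) neural classifier only on triggered windows."""
    if not threshold_trigger(window):
        return False
    return classify(window)

# Hypothetical stand-in for the small Transformer classifier.
toy_classifier = lambda w: max(accel_magnitude(s) for s in w) > 3.0

walking = [(0.0, 0.0, 1.0)] * 10                       # ~1 g, never triggers
fall = [(0.0, 0.0, 1.0)] * 8 + [(2.0, 2.0, 2.0), (0.1, 0.0, 0.2)]
```

The cascade design is what makes on-device (edge) inference practical: most windows are rejected by the threshold stage without ever running the network.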
Affiliation(s)
- Jesús Fernandez-Bermejo: Faculty of Social Science and Information Technology, University of Castilla-La Mancha, 45600 Talavera de la Reina, Toledo, Spain
- Jesús Martinez-Del-Rincon: The Centre for Secure Information Technologies (CSIT), Institute of Electronics, Communications & Information Technology, Queen's University of Belfast, Belfast BT3 9DT, UK
- Javier Dorado: School of Computer Engineering, University of Castilla-La Mancha, 13071 Ciudad Real, Ciudad Real, Spain
- Xavier Del Toro: School of Computer Engineering, University of Castilla-La Mancha, 13071 Ciudad Real, Ciudad Real, Spain
- María J Santofimia: School of Computer Engineering, University of Castilla-La Mancha, 13071 Ciudad Real, Ciudad Real, Spain
- Juan C Lopez: School of Computer Engineering, University of Castilla-La Mancha, 13071 Ciudad Real, Ciudad Real, Spain

14
S V A, G DB, Raman R. Automatic Identification and Severity Classification of Retinal Biomarkers in SD-OCT Using Dilated Depthwise Separable Convolution ResNet with SVM Classifier. Curr Eye Res 2024; 49:513-523. [PMID: 38251704] [DOI: 10.1080/02713683.2024.2303713]
Abstract
PURPOSE Diagnosis of uveitic macular edema (UME) using spectral-domain OCT (SD-OCT) is a promising method for early detection and monitoring of sight-threatening visual impairment. Viewing multiple B-scans and identifying biomarkers is challenging and time-consuming for clinical practitioners. To overcome these challenges, this paper proposes a hybrid image classification framework for predicting the presence of biomarkers such as intraretinal cysts (IRC), hyperreflective foci (HRF), hard exudates (HE), and neurosensory detachment (NSD) in OCT B-scans, along with their severity. METHODS A dataset of 10,880 B-scans from 85 uveitic patients was collected and graded by two board-certified ophthalmologists for the presence of biomarkers. A novel image classification framework, Dilated Depthwise Separable Convolution ResNet (DDSC-RN) with an SVM classifier, was developed to achieve network compression with a larger receptive field that captures both low- and high-level features of the biomarkers without loss of classification accuracy. The severity level of each biomarker was predicted from the feature map extracted by the proposed DDSC-RN network. RESULTS The proposed hybrid model was evaluated against ground-truth labels from the hospital. The deep learning model first identified the presence of biomarkers in B-scans, achieving an overall accuracy of 98.64%, comparable to the performance of other state-of-the-art models such as DRN-C-42 and ResNet-34. The SVM classifier then predicted the severity of each biomarker, achieving an overall accuracy of 89.3%. CONCLUSIONS The new hybrid model accurately identifies four retinal biomarkers and predicts their severity. It outperforms other methods for identifying multiple biomarkers in complex OCT B-scans, helping clinicians screen multiple B-scans of UME more effectively and leading to better treatment outcomes.
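The "network compression" claim behind depthwise separable convolution is a simple parameter-count argument, sketched below. The layer sizes are illustrative assumptions; dilation, which the DDSC-RN also uses, enlarges the receptive field without adding any parameters at all.

```python
def standard_conv_params(c_in, c_out, k):
    """Weight count of a standard 2D convolution (bias ignored)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """A depthwise k*k filter per input channel, followed by a 1x1
    pointwise convolution that mixes channels."""
    return c_in * k * k + c_in * c_out

# Example layer: 64 -> 128 channels with 3x3 kernels
std = standard_conv_params(64, 128, 3)          # 64*128*9
sep = depthwise_separable_params(64, 128, 3)    # 64*9 + 64*128
ratio = sep / std                               # roughly an order of magnitude smaller
```

This roughly 8-to-9-fold reduction per layer is what lets the compressed ResNet keep a large (dilated) receptive field while staying small enough for fast inference.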
Affiliation(s)
- Adithiya S V: School of Electronics Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India
- Dharani Bai G: School of Electronics Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India
- Rajiv Raman: Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya, Chennai, Tamil Nadu, India

15
Ramos JRC, Pinto J, Poiares-Oliveira G, Peeters L, Dumas P, Oliveira R. Deep hybrid modeling of a HEK293 process: Combining long short-term memory networks with first principles equations. Biotechnol Bioeng 2024; 121:1554-1568. [PMID: 38343176] [DOI: 10.1002/bit.28668]
Abstract
The combination of physical equations with deep learning is becoming a promising methodology for bioprocess digitalization. In this paper, we investigate for the first time the combination of long short-term memory (LSTM) networks with first principles equations in a hybrid workflow to describe human embryonic kidney 293 (HEK293) culture dynamics. Experimental data for 27 extracellular state variables in 20 fed-batch HEK293 cultures were collected in a parallel high-throughput 250 mL cultivation system in an industrial process development setting. The adaptive moment estimation method with stochastic regularization and cross-validation was employed for deep learning. A total of 784 hybrid models with varying deep neural network architectures, depths, layer sizes, and node activation functions were compared. In most scenarios, hybrid LSTM models outperformed classical hybrid feedforward neural network (FFNN) models in terms of training and testing error. Hybrid LSTM models also proved less sensitive to data resampling than hybrid FFNN models. As disadvantages, hybrid LSTM models are in general more complex (higher number of parameters) and have a higher computational cost than hybrid FFNN models. The hybrid model with the highest prediction accuracy consisted of an LSTM network with seven internal states connected in series with dynamic material balance equations. This hybrid model correctly predicted the dynamics of the 27 state variables (R2 = 0.93 on the test data set), including biomass, key substrates, amino acids, and metabolic by-products, over around 10 cultivation days.
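The serial hybrid structure — a learned model supplying specific rates to mechanistic material balance equations, which are then integrated over the culture — can be sketched with a two-state toy. The constant-rate lambda standing in for the LSTM, the rate values, feed term, and step size are all illustrative assumptions, not fitted values from the study.

```python
import numpy as np

def hybrid_step(x, feed, dt, rate_fn):
    """One explicit Euler step of the material balance
    dx/dt = r(x) * x + u, with specific rates r(x) from a learned model."""
    return x + dt * (rate_fn(x) * x + feed)

# Hypothetical stand-in for the LSTM: constant specific rates
# for [biomass growth, substrate consumption], in 1/h.
toy_rates = lambda x: np.array([0.04, -0.02])

x = np.array([1.0, 20.0])        # [biomass, substrate] concentrations
feed = np.array([0.0, 0.1])      # fed-batch substrate feed term
for _ in range(240):             # ~10 days at dt = 1 h
    x = hybrid_step(x, feed, 1.0, toy_rates)
```

In the actual hybrid model the rate function is the LSTM (so the rates depend on culture history), and integration is performed alongside training so the loss is evaluated on the simulated state trajectories.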
Affiliation(s)
- João R C Ramos: LAQV-REQUIMTE, Department of Chemistry, NOVA School of Science and Technology, NOVA University Lisbon, Caparica, Portugal
- José Pinto: LAQV-REQUIMTE, Department of Chemistry, NOVA School of Science and Technology, NOVA University Lisbon, Caparica, Portugal
- Gil Poiares-Oliveira: LAQV-REQUIMTE, Department of Chemistry, NOVA School of Science and Technology, NOVA University Lisbon, Caparica, Portugal
- Rui Oliveira: LAQV-REQUIMTE, Department of Chemistry, NOVA School of Science and Technology, NOVA University Lisbon, Caparica, Portugal

16
Rando HM, Graim K, Hampikian G, Greene CS. Many direct-to-consumer canine genetic tests can identify the breed of purebred dogs. J Am Vet Med Assoc 2024; 262:1-8. [PMID: 38417257] [DOI: 10.2460/javma.23.07.0372]
Abstract
OBJECTIVE To compare pedigree documentation and genetic test results to evaluate whether user-provided photographs influence the breed ancestry predictions of direct-to-consumer (DTC) genetic tests for dogs. ANIMALS 12 registered purebred pet dogs representing 12 different breeds. METHODS Each dog owner submitted 6 buccal swabs, 1 to each of 6 DTC genetic testing companies. Experimenters registered each sample per manufacturer instructions. For half of the dogs, the registration included a photograph of the DNA donor. For the other half of the dogs, photographs were swapped between dogs. DNA analysis and breed ancestry prediction were conducted by each company. The effect of condition (ie, matching vs shuffled photograph) was evaluated for each company's breed predictions. As a positive control, a convolutional neural network was also used to predict breed based solely on the photograph. RESULTS Results from 5 of the 6 tests always included the dog's registered breed. One test and the convolutional neural network were unlikely to identify the registered breed and frequently returned results that were more similar to the photograph than the DNA. Additionally, differences in the predictions made across all tests underscored the challenge of identifying breed ancestry, even in purebred dogs. CLINICAL RELEVANCE Veterinarians are likely to encounter patients who have conducted DTC genetic testing and may be asked to explain the results of genetic tests they did not order. This systematic comparison of commercially available tests provides context for interpreting results from consumer-grade DTC genetic testing kits.
Affiliation(s)
- Halie M Rando: Department of Biomedical Informatics, Anschutz School of Medicine, University of Colorado, Aurora, CO; Department of Computer Science, Smith College, Northampton, MA
- Kiley Graim: Department of Computer and Information Science and Engineering, Herbert Wertheim College of Engineering, University of Florida, Gainesville, FL
- Greg Hampikian: Department of Biological Sciences, College of Arts and Sciences, Boise State University, Boise, ID
- Casey S Greene: Department of Biomedical Informatics, Anschutz School of Medicine, University of Colorado, Aurora, CO

17
Motyka S, Weiser P, Bachrata B, Hingerl L, Strasser B, Hangel G, Niess E, Niess F, Zaitsev M, Robinson SD, Langs G, Trattnig S, Bogner W. Predicting dynamic, motion-related changes in B0 field in the brain at 7T MRI using a subject-specific fine-trained U-net. Magn Reson Med 2024; 91:2044-2056. [PMID: 38193276] [DOI: 10.1002/mrm.29980]
Abstract
PURPOSE Subject movement during the MR examination is inevitable and not only causes image artifacts but also deteriorates the homogeneity of the main magnetic field (B0), which is a prerequisite for high-quality data. Thus, characterization of changes to B0, for example induced by patient movement, is important for MR applications that are prone to B0 inhomogeneities. METHODS We propose a deep learning-based method to predict such changes within the brain from the change in head position, to facilitate retrospective or even real-time correction. A 3D U-net was trained on in vivo gradient-echo brain 7T MRI data. The input consisted of B0 maps and anatomical images at an initial position, and anatomical images at a different head position (obtained by applying a rigid-body transformation to the initial anatomical image). The output consisted of B0 maps at the new head positions. We further fine-tuned the network weights for each subject by measuring a limited number of head positions of the given subject and training the U-net with these data. RESULTS Our approach was compared to established dynamic B0 field mapping via interleaved navigators, which suffers from limited spatial resolution and the need for undesirable sequence modifications. Qualitative and quantitative comparison showed similar performance between an interleaved navigator-equivalent method and the proposed method. CONCLUSION It is feasible to predict B0 maps from rigid subject movement and, when combined with external tracking hardware, this information could be used to improve the quality of MR acquisitions without the use of navigators.
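The rigid-body transformation used to generate the network's "new head position" input can be sketched as a rotation plus translation of voxel coordinates. The in-plane rotation and the toy points below are illustrative assumptions; the study applies a general 3D rigid transform to the anatomical volume.

```python
import numpy as np

def rigid_transform(points, angle_deg, translation):
    """Apply an in-plane rigid-body motion (rotation about z, then
    translation) to an (N, 3) array of coordinates."""
    t = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(t), -np.sin(t), 0.0],
                    [np.sin(t),  np.cos(t), 0.0],
                    [0.0,        0.0,       1.0]])
    return points @ rot.T + np.asarray(translation)

pts = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0]])
moved = rigid_transform(pts, 90.0, (0.0, 0.0, 2.0))
```

Because the transform is rigid, inter-point distances are preserved; only the pose changes, which is exactly the situation the U-net is asked to map onto a new B0 distribution.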
Affiliation(s)
- Stanislav Motyka: High Field MR Center, Department of Biomedical Imaging and Image-Guided Therapy, Medical University of Vienna, Vienna, Austria; Christian Doppler Laboratory for Clinical Molecular MR Imaging, Vienna, Austria
- Paul Weiser: Computational Imaging Research Lab, Department of Biomedical Imaging and Image-Guided Therapy, Medical University of Vienna, Vienna, Austria; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, Massachusetts, USA
- Beata Bachrata: Department of Medical Engineering, Carinthia University of Applied Sciences, Klagenfurt, Austria
- Lukas Hingerl: High Field MR Center, Department of Biomedical Imaging and Image-Guided Therapy, Medical University of Vienna, Vienna, Austria
- Bernhard Strasser: High Field MR Center, Department of Biomedical Imaging and Image-Guided Therapy, Medical University of Vienna, Vienna, Austria
- Gilbert Hangel: High Field MR Center, Department of Biomedical Imaging and Image-Guided Therapy, Medical University of Vienna, Vienna, Austria; Department of Neurosurgery, Medical University of Vienna, Vienna, Austria
- Eva Niess: High Field MR Center, Department of Biomedical Imaging and Image-Guided Therapy, Medical University of Vienna, Vienna, Austria; Christian Doppler Laboratory for Clinical Molecular MR Imaging, Vienna, Austria
- Fabian Niess: High Field MR Center, Department of Biomedical Imaging and Image-Guided Therapy, Medical University of Vienna, Vienna, Austria
- Maxim Zaitsev: Department of Radiology - Medical Physics, University of Freiburg, Freiburg, Germany; Faculty of Medicine, University of Freiburg - Medical Centre, Freiburg, Germany
- Simon Daniel Robinson: High Field MR Center, Department of Biomedical Imaging and Image-Guided Therapy, Medical University of Vienna, Vienna, Austria
- Georg Langs: Computational Imaging Research Lab, Department of Biomedical Imaging and Image-Guided Therapy, Medical University of Vienna, Vienna, Austria
- Siegfried Trattnig: High Field MR Center, Department of Biomedical Imaging and Image-Guided Therapy, Medical University of Vienna, Vienna, Austria
- Wolfgang Bogner: High Field MR Center, Department of Biomedical Imaging and Image-Guided Therapy, Medical University of Vienna, Vienna, Austria; Christian Doppler Laboratory for Clinical Molecular MR Imaging, Vienna, Austria

18
Dhaygude AD. Optimization-enabled deep learning model for disease detection in IoT platform. Network 2024; 35:190-211. [PMID: 38155546] [DOI: 10.1080/0954898x.2023.2296568]
Abstract
Nowadays, the Internet of Things (IoT) and IoT platforms are extensively utilized in several healthcare applications. IoT devices produce a huge amount of data in the healthcare field that can be inspected on an IoT platform. In this paper, a novel algorithm, named the artificial flora optimization-based chameleon swarm algorithm (AFO-based CSA), is developed for optimal path finding. Here, data are collected by the sensors and transmitted to the base station (BS) using the proposed AFO-based CSA, which is derived by integrating artificial flora optimization (AFO) into the chameleon swarm algorithm (CSA). This integration lets the AFO-based CSA model combine the strengths of both AFO and CSA for optimal routing of medical data in IoT. Moreover, the proposed AFO-based CSA algorithm considers factors such as energy, delay, and distance for effective routing of the data. At the BS, prediction is conducted through stages of pre-processing, feature dimension reduction using Pearson's correlation, and disease detection by a recurrent neural network, which is trained by the proposed AFO-based CSA. Experimental results showed that the performance of the proposed AFO-based CSA is superior to competitive approaches in terms of energy consumption (0.538 J), accuracy (0.950), sensitivity (0.965), and specificity (0.937).
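Routing that "considers energy, delay, and distance" typically means minimizing a weighted cost over candidate paths. The sketch below illustrates only that cost-and-select idea: the weights, metric values, node names, and the exhaustive pick standing in for the AFO-based CSA optimizer are all illustrative assumptions.

```python
def route_fitness(energy, delay, distance, weights=(0.4, 0.3, 0.3)):
    """Weighted cost of a candidate route; lower is better.
    Metrics are assumed pre-normalized to [0, 1]."""
    w_e, w_dl, w_ds = weights
    return w_e * energy + w_dl * delay + w_ds * distance

def best_route(candidates):
    """Trivial exhaustive selection, standing in for the metaheuristic
    (AFO-based CSA) that would search a much larger path space."""
    return min(candidates, key=lambda c: route_fitness(*c["metrics"]))

routes = [
    {"path": ["n1", "n4", "BS"],        "metrics": (0.6, 0.2, 0.5)},
    {"path": ["n1", "n2", "n5", "BS"],  "metrics": (0.4, 0.5, 0.4)},
    {"path": ["n1", "n3", "BS"],        "metrics": (0.3, 0.3, 0.3)},
]
chosen = best_route(routes)
```

A metaheuristic is needed in practice because the number of candidate paths grows combinatorially with network size, making exhaustive evaluation infeasible.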
19
Zhang X, Zhang B, Zhang F. Stenosis Detection and Quantification of Coronary Artery Using Machine Learning and Deep Learning. Angiology 2024; 75:405-416. [PMID: 37399509] [DOI: 10.1177/00033197231187063]
Abstract
The aim of this review is to introduce applications of artificial intelligence (AI) algorithms for the detection and quantification of coronary stenosis using computed tomography angiography (CTA). Automatic or semi-automatic stenosis detection and quantification comprises the following steps: vessel central axis extraction, vessel segmentation, stenosis detection, and quantification. Many new AI techniques, such as machine learning and deep learning, have been widely used in medical image segmentation and stenosis detection. This review also summarizes recent progress in coronary stenosis detection and quantification and discusses development trends in the field. Through evaluation and comparison, researchers can better understand the research frontier in related fields, compare the advantages and disadvantages of various methods, and better optimize new technologies. Machine learning and deep learning will advance the automatic detection and quantification of coronary artery stenosis. However, these methods require large amounts of data, so they also face challenges due to the lack of professional image annotations (labels added manually by experts).
Affiliation(s)
- Xinhong Zhang: School of Software, Henan University, Kaifeng, China
- Boyan Zhang: School of Software, Henan University, Kaifeng, China
- Fan Zhang: Huaihe Hospital, Henan University, Kaifeng, China

20
Matveev AV, Nartova AV, Sankova NN, Okunev AG. DLgram cloud service for deep-learning analysis of microscopy images. Microsc Res Tech 2024; 87:991-998. [PMID: 38186233] [DOI: 10.1002/jemt.24480]
Abstract
To analyze images in various fields of science and technology, it is often necessary to count the observed objects and determine their parameters, which can be quite labor-intensive and time-consuming. This article presents DLgram, a universal, user-friendly cloud service developed for this purpose. It is based on deep learning technologies and does not require programming skills. The user labels several objects in an image and uploads it to the cloud, where a neural network is trained to recognize the objects being studied. The user receives the recognition results, which, if necessary, can be corrected by removing errors or adding missing objects. In addition, the obtained data can be processed mathematically to extract the sizes, areas, and coordinates of the observed objects. The article describes the service's features and discusses examples of its application. The DLgram service significantly reduces the time spent on quantitative image analysis, reduces the influence of subjective factors, and increases the accuracy of analysis. RESEARCH HIGHLIGHTS: DLgram automatically recognizes and counts objects in images and determines their parameters. DLgram is a universal service built on recent deep learning developments that does not require programming skills.
Affiliation(s)
- Andrey V Matveev: Institute of Intellectual Robototechnics, Novosibirsk State University, Novosibirsk, Russia
- Anna V Nartova: Institute of Intellectual Robototechnics, Novosibirsk State University, Novosibirsk, Russia; Department of Physico-Chemical Research Methods, Boreskov Institute of Catalysis SB RAS, Novosibirsk, Russia
- Natalya N Sankova: Institute of Intellectual Robototechnics, Novosibirsk State University, Novosibirsk, Russia; Department of Non-Traditional Catalytic Processes, Boreskov Institute of Catalysis SB RAS, Novosibirsk, Russia
- Alexey G Okunev: Institute of Intellectual Robototechnics, Novosibirsk State University, Novosibirsk, Russia

21
Tao X, Zhao X, Liu H, Wang J, Tian C, Liu L, Ding Y, Chen X, Liu Y. Automatic Recognition of Concealed Fish Bones under Laryngoscopy: A Practical AI Model Based on YOLO-V5. Laryngoscope 2024; 134:2162-2169. [PMID: 37983879] [DOI: 10.1002/lary.31175]
Abstract
BACKGROUND Fish bone impaction is one of the most common problems encountered in otolaryngology emergencies. Because fish bones are small and transparent, and pharyngeal anatomy is complex, identifying them efficiently under laryngoscopy requires substantial clinical experience. This study aims to create an AI model to assist clinicians in detecting pharyngeal fish bones more efficiently under laryngoscopy. METHODS In total, 3133 laryngoscopic images related to fish bones were collected for model training and validation. The images in the training dataset were used to train the YOLO-V5 algorithm model. After training, the model was validated and its performance evaluated on a test dataset. The model's predictions were compared to those of human experts. Seven laryngoscopic videos related to fish bones were used to validate the model's real-time target detection. RESULTS The model trained with YOLO-V5 demonstrated good generalization and performance, with an average precision of 0.857 at an intersection over union (IOU) threshold of 0.5. The precision, recall, and F1 score of the model were 0.909, 0.818, and 0.87, respectively. The overall accuracy of the model on the validation set was 0.821, comparable to that of ENT specialists. The model processed each image in 0.012 s, significantly faster than human processing (p < 0.001). Furthermore, the model exhibited outstanding performance in video recognition. CONCLUSION Our AI model based on YOLO-V5 effectively identifies and localizes fish bone foreign bodies in static laryngoscopic images and dynamic videos. It shows great potential for clinical application. LEVEL OF EVIDENCE 3 Laryngoscope, 134:2162-2169, 2024.
Affiliation(s)
- Xiaoyao Tao: Otorhinolaryngology Head and Neck Surgery Department, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Xu Zhao: Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Hairui Liu: School of Information Engineering, China University of Geosciences, Beijing, China
- Jinqiao Wang: Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Chunhui Tian: Otolaryngology-Head and Neck Surgery Department, Suzhou Hospital of Anhui Medical University, Suzhou, China
- Longsheng Liu: Otolaryngology-Head and Neck Surgery Department, Chaohu Hospital of Anhui Medical University, Hefei, China
- Yujie Ding: Otolaryngology-Head and Neck Surgery Department, Feixi County People's Hospital, Hefei, China
- Xue Chen: Otolaryngology-Head and Neck Surgery Department, Feidong County People's Hospital, Hefei, China
- Yehai Liu: Otorhinolaryngology Head and Neck Surgery Department, The First Affiliated Hospital of Anhui Medical University, Hefei, China

22
Zhou H, Hua Z, Gao J, Lin F, Chen Y, Zhang S, Zheng T, Wang Z, Shao H, Li W, Liu F, Li Q, Chen J, Wang X, Zhao F, Qu N, Xie H, Ma H, Zhang H, Mao N. Multitask Deep Learning-Based Whole-Process System for Automatic Diagnosis of Breast Lesions and Axillary Lymph Node Metastasis Discrimination from Dynamic Contrast-Enhanced-MRI: A Multicenter Study. J Magn Reson Imaging 2024; 59:1710-1722. [PMID: 37497811] [DOI: 10.1002/jmri.28913]
Abstract
BACKGROUND Accurate diagnosis of breast lesions and discrimination of axillary lymph node (ALN) metastases largely depend on radiologist experience. PURPOSE To develop a deep learning-based whole-process system (DLWPS) for segmentation and diagnosis of breast lesions and discrimination of ALN metastasis. STUDY TYPE Retrospective. POPULATION 1760 patients with breast lesions, divided into a training and validation set (1110 patients) and internal (476 patients) and external (174 patients) test sets. FIELD STRENGTH/SEQUENCE 3.0T/dynamic contrast-enhanced (DCE)-MRI sequence. ASSESSMENT The DLWPS was developed using segmentation and classification models. The DLWPS-based segmentation model was built on the U-Net framework, combining an attention module and an edge feature extraction module. The average of the output scores of three networks was used as the result of the DLWPS-based classification model. Moreover, the radiologists' diagnoses without and with DLWPS assistance were explored. To reveal the underlying biological basis of the DLWPS, genetic analysis was performed based on RNA-sequencing data. STATISTICAL TESTS Dice similarity coefficient (DI), area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, specificity, and kappa value. RESULTS The segmentation model reached a DI of 0.828 and 0.813 in the internal and external test sets, respectively. For breast lesion diagnosis, the DLWPS achieved AUCs of 0.973 in the internal test set and 0.936 in the external test set. For ALN metastasis discrimination, the DLWPS achieved AUCs of 0.927 in the internal test set and 0.917 in the external test set. With DLWPS assistance, radiologist agreement improved from 0.547 to 0.794 for breast lesion diagnosis and from 0.848 to 0.892 for ALN metastasis discrimination. Additionally, 10 breast cancers with ALN metastasis were associated with pathways of the aerobic electron transport chain and cytoplasmic translation.
DATA CONCLUSION The performance of the DLWPS indicates that it can support radiologists in the diagnosis of breast lesions and the discrimination of ALN metastasis versus nonmetastasis. LEVEL OF EVIDENCE 4 TECHNICAL EFFICACY STAGE: 3.
Affiliation(s)
- Heng Zhou: School of Information and Electronic Engineering, Shandong Technology and Business University, Yantai, Shandong, China
- Zhen Hua: School of Information and Electronic Engineering, Shandong Technology and Business University, Yantai, Shandong, China
- Jing Gao: Department of Radiology, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
- Fan Lin: Department of Radiology, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
- Yuqian Chen: School of Information and Electronic Engineering, Shandong Technology and Business University, Yantai, Shandong, China
- Shijie Zhang: Department of Radiology, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
- Tiantian Zheng: Department of Radiology, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
- Zhongyi Wang: Department of Radiology, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
- Huafei Shao: Department of Radiology, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
- Wenjuan Li: Department of Radiology, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
- Fengjie Liu: Department of Radiology, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
- Qin Li: Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China
- Jingjing Chen: Department of Radiology, Qingdao University Affiliated Hospital, Qingdao, Shandong, China
- Ximing Wang: Department of Radiology, Shandong Provincial Hospital, Jinan, Shandong, China
- Feng Zhao: School of Computer Science and Technology, Shandong Technology and Business University, Yantai, Shandong, China
- Nina Qu: Department of Ultrasound, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
- Haizhu Xie: Department of Radiology, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
- Heng Ma: Department of Radiology, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
- Haicheng Zhang: Department of Radiology, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China; Big Data and Artificial Intelligence Laboratory, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
- Ning Mao: Department of Radiology, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China; Big Data and Artificial Intelligence Laboratory, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China

23
Usuzaki T, Takahashi K, Inamori R, Morishita Y, Shizukuishi T, Takagi H, Ishikuro M, Obara T, Takase K. Identifying key factors for predicting O6-Methylguanine-DNA methyltransferase status in adult patients with diffuse glioma: a multimodal analysis of demographics, radiomics, and MRI by variable Vision Transformer. Neuroradiology 2024; 66:761-773. [PMID: 38472373] [PMCID: PMC11031474] [DOI: 10.1007/s00234-024-03329-8]
Abstract
PURPOSE This study aimed to perform multimodal analysis with a variable vision transformer (vViT) to predict O6-methylguanine-DNA methyltransferase (MGMT) promoter status in adult patients with diffuse glioma using demographics (sex and age), radiomic features, and MRI. METHODS The training and test datasets contained 122 patients with 1570 images and 30 patients with 484 images, respectively. The radiomic features were extracted from enhancing tumors (ET), necrotic tumor cores (NCR), and peritumoral edematous/infiltrated tissues (ED) using contrast-enhanced T1-weighted images (CE-T1WI) and T2-weighted images (T2WI). The vViT had 9 sectors: 1 demographic sector, 6 radiomic sectors (CE-T1WI ET, CE-T1WI NCR, CE-T1WI ED, T2WI ET, T2WI NCR, and T2WI ED), and 2 image sectors (CE-T1WI and T2WI). Accuracy and area under the receiver operating characteristic curve (AUC-ROC) were calculated for the test dataset. The performance of the vViT was compared with AlexNet, GoogleNet, VGG16, and ResNet using the McNemar and DeLong tests. Permutation importance (PI) analysis with the Mann-Whitney U test was performed. RESULTS In the patient-based analysis, the accuracy was 0.833 (95% confidence interval [95%CI]: 0.714-0.877) and the AUC-ROC was 0.840 (0.650-0.995). The vViT had higher accuracy than VGG16 and ResNet, and higher AUC-ROC than GoogleNet (p<0.05). The ED radiomic features extracted from T2WI demonstrated the highest importance (PI=0.239, 95%CI: 0.237-0.240) among all sectors (p<0.0001). CONCLUSION The vViT is a competent deep learning model for predicting MGMT status. The ED radiomic features of T2WI made the most dominant contribution.
Affiliation(s)
- Takuma Usuzaki: Department of Diagnostic Radiology, Tohoku University Hospital, 1-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi, 980-8574, Japan
- Kengo Takahashi: Tohoku University Graduate School of Medicine, 2-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi, 980-8573, Japan
- Ryusei Inamori: Tohoku University Graduate School of Medicine, 2-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi, 980-8573, Japan
- Yohei Morishita: Department of Diagnostic Radiology, Tohoku University Hospital, 1-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi, 980-8574, Japan
- Takashi Shizukuishi: Department of Diagnostic Radiology, Tohoku University Hospital, 1-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi, 980-8574, Japan
- Hidenobu Takagi: Department of Diagnostic Radiology, Tohoku University Hospital, 1-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi, 980-8574, Japan; Department of Advanced MRI Collaborative Research, Graduate School of Medicine, Tohoku University, 2-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi, 980-8573, Japan
- Mami Ishikuro: Division of Molecular Epidemiology, Tohoku University Graduate School of Medicine, 2-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi, 980-8573, Japan
- Taku Obara: Division of Molecular Epidemiology, Tohoku University Graduate School of Medicine, 2-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi, 980-8573, Japan; Department of Preventive Medicine and Epidemiology, Tohoku Medical Megabank Organization, Tohoku University, 2-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi, 980-8573, Japan; Department of Pharmaceutical Sciences, Tohoku University Hospital, 1-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi, 980-8574, Japan
- Kei Takase: Department of Diagnostic Radiology, Tohoku University Hospital, 1-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi, 980-8574, Japan

24
Duan C, Bian X, Cheng K, Lyu J, Xiong Y, Xiao S, Wang X, Duan Q, Li C, Huang J, Hu J, Wang ZJ, Zhou X, Lou X. Synthesized 7T MPRAGE From 3T MPRAGE Using Generative Adversarial Network and Validation in Clinical Brain Imaging: A Feasibility Study. J Magn Reson Imaging 2024; 59:1620-1629. [PMID: 37559435] [DOI: 10.1002/jmri.28944]
Abstract
BACKGROUND Ultra-high field 7T MRI can provide excellent tissue contrast and anatomical detail, but is often cost prohibitive and not widely accessible in clinical practice. PURPOSE To generate synthetic 7T images from widely acquired 3T images with deep learning and to evaluate the feasibility of this approach for brain imaging. STUDY TYPE Prospective. POPULATION 33 healthy volunteers and 89 patients with brain diseases, divided into training and evaluation datasets in a 4:1 ratio. SEQUENCE AND FIELD STRENGTH T1-weighted nonenhanced or contrast-enhanced magnetization-prepared rapid acquisition gradient-echo sequence at both 3T and 7T. ASSESSMENT A generative adversarial network (SynGAN) was developed to produce synthetic 7T images from 3T images as input. SynGAN training and evaluation were performed separately for nonenhanced and contrast-enhanced paired acquisitions. The qualitative image quality of acquired 3T and 7T images and of synthesized 7T images was evaluated by three radiologists in terms of overall image quality, artifacts, sharpness, contrast, and visualization of vessels using 5-point Likert scales. STATISTICAL TESTS Wilcoxon signed rank tests to compare synthetic 7T images with acquired 7T and 3T images, and intraclass correlation coefficients to evaluate interobserver variability. P < 0.05 was considered significant. RESULTS Of the 122 paired 3T and 7T MRI scans, 66 were acquired without contrast agent and 56 with contrast agent. The average time to generate synthetic images was ~11.4 msec per slice (2.95 sec per participant). The synthetic 7T images achieved significantly improved tissue contrast and sharpness compared with 3T images in both nonenhanced and contrast-enhanced subgroups. Meanwhile, there was no significant difference between acquired 7T and synthetic 7T images on any evaluation criterion in either subgroup (P ≥ 0.180).
DATA CONCLUSION The deep learning model has the potential to generate synthetic 7T images with image quality similar to that of acquired 7T images. LEVEL OF EVIDENCE 2 TECHNICAL EFFICACY: Stage 1.
Affiliation(s)
- Caohui Duan: Department of Radiology, Chinese PLA General Hospital, Beijing, China
- Xiangbing Bian: Department of Radiology, Chinese PLA General Hospital, Beijing, China
- Kun Cheng: Department of Radiology, Chinese PLA General Hospital, Beijing, China
- Jinhao Lyu: Department of Radiology, Chinese PLA General Hospital, Beijing, China
- Yongqin Xiong: Department of Radiology, Chinese PLA General Hospital, Beijing, China
- Sa Xiao: Key Laboratory of Magnetic Resonance in Biological Systems, State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, National Center for Magnetic Resonance in Wuhan, Wuhan Institute of Physics and Mathematics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences-Wuhan National Laboratory for Optoelectronics, Wuhan, China
- Xueyang Wang: Department of Radiology, Chinese PLA General Hospital, Beijing, China
- Qi Duan: Department of Radiology, Chinese PLA General Hospital, Beijing, China
- Chenxi Li: Department of Radiology, Chinese PLA General Hospital, Beijing, China
- Jiayu Huang: Department of Radiology, Chinese PLA General Hospital, Beijing, China
- Jianxing Hu: Department of Radiology, Chinese PLA General Hospital, Beijing, China
- Z Jane Wang: Department of Electrical and Computer Engineering, The University of British Columbia, Vancouver, British Columbia, Canada
- Xin Zhou: Key Laboratory of Magnetic Resonance in Biological Systems, State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, National Center for Magnetic Resonance in Wuhan, Wuhan Institute of Physics and Mathematics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences-Wuhan National Laboratory for Optoelectronics, Wuhan, China
- Xin Lou: Department of Radiology, Chinese PLA General Hospital, Beijing, China

25
He H, Wang L, Wang X, Zhang M. Artificial intelligence in serum protein electrophoresis: history, state of the art, and perspective. Crit Rev Clin Lab Sci 2024; 61:226-240. [PMID: 37909425] [DOI: 10.1080/10408363.2023.2274325]
Abstract
Serum protein electrophoresis (SPEP) is a valuable laboratory test that separates proteins from the blood based on their electrical charge and size. The test can detect and analyze various protein abnormalities, and the interpretation of graphic SPEP features plays a crucial role in the diagnosis and monitoring of conditions, such as myeloma. Furthermore, the advancement of artificial intelligence (AI) technology presents an opportunity to enhance the organization and optimization of analytical procedures by streamlining the process and reducing the potential for human error in SPEP analysis, thereby making the process more efficient and reliable. For instance, AI can assist in the identification of protein peaks, the calculation of their relative proportions, and the detection of abnormalities or inconsistencies. This review explores the characteristics and limitations of AI in SPEP, and the role of standardization in improving its clinical utility. It also offers guidance on the rational ordering and interpreting of SPEP results in conjunction with AI. Such integration can effectively reduce the time and resources required for manual analysis while improving the accuracy and consistency of the results.
Affiliation(s)
- He He: Department of Laboratory Medicine, West China Hospital of Sichuan University, Chengdu, China
- Lingfeng Wang: College of Computer Science, Sichuan University, Chengdu, China
- Xia Wang: Department of Laboratory Medicine, West China Hospital of Sichuan University, Chengdu, China
- Mei Zhang: Department of Laboratory Medicine, West China Hospital of Sichuan University, Chengdu, China

26
Lee JH, Song G, Lee J, Kang S, Moon KM, Choi Y, Shen J, Noh M, Yang D. Prediction of immunochemotherapy response for diffuse large B-cell lymphoma using artificial intelligence digital pathology. J Pathol Clin Res 2024; 10:e12370. [PMID: 38584594] [PMCID: PMC10999948] [DOI: 10.1002/2056-4538.12370]
Abstract
Diffuse large B-cell lymphoma (DLBCL) is a heterogeneous and prevalent subtype of aggressive non-Hodgkin lymphoma that poses diagnostic and prognostic challenges, particularly in predicting drug responsiveness. In this study, we used digital pathology and deep learning to predict responses to immunochemotherapy in patients with DLBCL. We retrospectively collected 251 slide images from 216 DLBCL patients treated with rituximab, cyclophosphamide, doxorubicin, vincristine, and prednisone (R-CHOP), together with their immunochemotherapy response labels. The digital pathology images were processed using contrastive learning for feature extraction. A multimodal prediction model was developed by integrating clinical data and pathology image features. Knowledge distillation was employed to mitigate overfitting on gigapixel histopathology images and to create a model that predicts responses based solely on pathology images. Based on the importance derived from the model's attention mechanism, we extracted histological features considered key textures associated with drug responsiveness. The multimodal prediction model achieved an area under the ROC curve of 0.856 and showed significant associations with clinical variables such as Ann Arbor stage, International Prognostic Index, and bulky disease. Survival analyses indicated the model's effectiveness in predicting relapse-free survival. External validation using TCGA datasets supported its ability to predict survival differences. Additionally, pathology-based predictions show promise as independent prognostic indicators. Histopathological analysis identified centroblastic and immunoblastic features as associated with treatment response, aligning with previous morphological classifications and highlighting the objectivity and reproducibility of artificial intelligence-based diagnosis.
This study introduces a novel approach that combines digital pathology and clinical data to predict the response to immunochemotherapy in patients with DLBCL. This model shows great promise as a diagnostic and prognostic tool for clinical management of DLBCL. Further research and genomic data integration hold the potential to enhance its impact on clinical practice, ultimately improving patient outcomes.
Affiliation(s)
- Jeong Hoon Lee: Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Ga-Young Song: Department of Hematology-Oncology, Chonnam National University Hwasun Hospital, Hwasun, Republic of Korea
- Jonghyun Lee: Department of Medical and Digital Engineering, Hanyang University College of Engineering, Seoul, Republic of Korea
- Sae-Ryung Kang: Department of Nuclear Medicine, Chonnam National University Hwasun Hospital and Medical School, Hwasun-gun, Republic of Korea
- Kyoung Min Moon: Division of Pulmonary and Allergy Medicine, Department of Internal Medicine, Chung-Ang University Hospital, Chung-Ang University College of Medicine, Seoul, Republic of Korea; Artificial Intelligence, Ziovision Co., Ltd., Chuncheon, Republic of Korea
- Yoo-Duk Choi: Department of Pathology, Chonnam National University Medical School, Gwangju, Republic of Korea
- Jeanne Shen: Department of Pathology and Center for Artificial Intelligence in Medicine & Imaging, Stanford University School of Medicine, Stanford, CA, USA
- Myung-Giun Noh: Department of Pathology, Chonnam National University Medical School, Gwangju, Republic of Korea; Department of Pathology, School of Medicine, Ajou University, Suwon, Republic of Korea
- Deok-Hwan Yang: Department of Hematology-Oncology, Chonnam National University Hwasun Hospital, Hwasun, Republic of Korea

27
Montolío A, Cegoñino J, Garcia-Martin E, Pérez Del Palomar A. The macular retinal ganglion cell layer as a biomarker for diagnosis and prognosis in multiple sclerosis: A deep learning approach. Acta Ophthalmol 2024; 102:e272-e284. [PMID: 37300357] [DOI: 10.1111/aos.15722]
Abstract
PURPOSE The macular ganglion cell layer (mGCL) is a strong potential biomarker of axonal degeneration in multiple sclerosis (MS). For this reason, this study aims to develop a computer-aided method to facilitate diagnosis and prognosis in MS. METHODS This paper combines a cross-sectional study of 72 MS patients and 30 healthy control subjects for diagnosis with a 10-year longitudinal study of the same MS patients for the prediction of disability progression, during which the mGCL was measured using optical coherence tomography (OCT). Deep neural networks were used as the automatic classifier. RESULTS For MS diagnosis, the greatest accuracy (90.3%) was achieved using 17 features as inputs. The neural network architecture comprised the input layer, two hidden layers, and the output layer with softmax activation. For the prediction of disability progression 8 years later, an accuracy of 81.9% was achieved with a neural network comprising two hidden layers and trained for 400 epochs. CONCLUSION We present evidence that, by applying deep learning techniques to clinical and mGCL thickness data, it is possible to identify MS and predict the course of the disease. This approach potentially constitutes a non-invasive, low-cost, easy-to-implement, and effective method.
Affiliation(s)
- Alberto Montolío: Biomaterials Group, Aragon Institute of Engineering Research (I3A), University of Zaragoza, Zaragoza, Spain; Mechanical Engineering Department, University of Zaragoza, Zaragoza, Spain
- José Cegoñino: Biomaterials Group, Aragon Institute of Engineering Research (I3A), University of Zaragoza, Zaragoza, Spain; Mechanical Engineering Department, University of Zaragoza, Zaragoza, Spain
- Elena Garcia-Martin: Ophthalmology Department, Miguel Servet University Hospital, Zaragoza, Spain; GIMSO Research and Innovation Group, Aragon Institute for Health Research (IIS Aragon), Zaragoza, Spain
- Amaya Pérez Del Palomar: Biomaterials Group, Aragon Institute of Engineering Research (I3A), University of Zaragoza, Zaragoza, Spain; Mechanical Engineering Department, University of Zaragoza, Zaragoza, Spain

28
Zhou H, Watson M, Bernadt CT, Lin SS, Lin CY, Ritter JH, Wein A, Mahler S, Rawal S, Govindan R, Yang C, Cote RJ. AI-guided histopathology predicts brain metastasis in lung cancer patients. J Pathol 2024; 263:89-98. [PMID: 38433721] [DOI: 10.1002/path.6263]
Abstract
Brain metastases can occur in nearly half of patients with early and locally advanced (stage I-III) non-small cell lung cancer (NSCLC). There are no reliable histopathologic or molecular means to identify those who are likely to develop brain metastases. We sought to determine if deep learning (DL) could be applied to routine H&E-stained primary tumor tissue sections from stage I-III NSCLC patients to predict the development of brain metastasis. Diagnostic slides from 158 patients with stage I-III NSCLC followed for at least 5 years for the development of brain metastases (Met+, 65 patients) versus no progression (Met-, 93 patients) were subjected to whole-slide imaging. Three separate iterations were performed by first selecting 118 cases (45 Met+, 73 Met-) to train and validate the DL algorithm, while 40 separate cases (20 Met+, 20 Met-) were used as the test set. The DL algorithm results were compared to a blinded review by four expert pathologists. The DL-based algorithm was able to distinguish the eventual development of brain metastases with an accuracy of 87% (p < 0.0001) compared with an average of 57.3% by the four pathologists and appears to be particularly useful in predicting brain metastases in stage I patients. The DL algorithm appears to focus on a complex set of histologic features. DL-based algorithms using routine H&E-stained slides may identify patients who are likely to develop brain metastases from those who will remain disease free over extended (>5 year) follow-up and may thus be spared systemic therapy. © 2024 The Authors. The Journal of Pathology published by John Wiley & Sons Ltd on behalf of The Pathological Society of Great Britain and Ireland.
Affiliation(s)
- Haowen Zhou: Department of Electrical Engineering, California Institute of Technology, Pasadena, CA, USA
- Mark Watson: Department of Pathology and Immunology, Washington University School of Medicine, Saint Louis, MO, USA
- Cory T Bernadt: Department of Pathology and Immunology, Washington University School of Medicine, Saint Louis, MO, USA
- Steven Siyu Lin: Department of Electrical Engineering, California Institute of Technology, Pasadena, CA, USA
- Chieh-Yu Lin: Department of Pathology and Immunology, Washington University School of Medicine, Saint Louis, MO, USA
- Jon H Ritter: Department of Pathology and Immunology, Washington University School of Medicine, Saint Louis, MO, USA
- Alexander Wein: Department of Pathology and Immunology, Washington University School of Medicine, Saint Louis, MO, USA
- Simon Mahler: Department of Electrical Engineering, California Institute of Technology, Pasadena, CA, USA
- Sid Rawal: Department of Pathology and Immunology, Washington University School of Medicine, Saint Louis, MO, USA
- Ramaswamy Govindan: Department of Medicine, Washington University School of Medicine, Saint Louis, MO, USA
- Changhuei Yang: Department of Electrical Engineering, California Institute of Technology, Pasadena, CA, USA
- Richard J Cote: Department of Pathology and Immunology, Washington University School of Medicine, Saint Louis, MO, USA

29
Kowlagi N, Kemppainen A, Panfilov E, McSweeney T, Saarakkala S, Nevalainen M, Niinimäki J, Karppinen J, Tiulpin A. Semiautomatic Assessment of Facet Tropism From Lumbar Spine MRI Using Deep Learning: A Northern Finland Birth Cohort Study. Spine (Phila Pa 1976) 2024; 49:630-639. [PMID: 38105615] [PMCID: PMC10997184] [DOI: 10.1097/brs.0000000000004909]
Abstract
STUDY DESIGN This is a retrospective, cross-sectional, population-based study that automatically measured facet joint (FJ) angles from T2-weighted axial magnetic resonance images (MRIs) of the lumbar spine using deep learning (DL). OBJECTIVE This work aimed to introduce a semiautomatic framework that measures FJ angles using DL and to study facet tropism (FT) in a large Finnish population-based cohort. SUMMARY OF BACKGROUND DATA T2-weighted axial MRIs of the lumbar spine (L3/4 through L5/S1) from participants (n=1288) in the NFBC1966 Finnish population-based cohort were used for this study. MATERIALS AND METHODS A DL model was developed and trained on 430 participants' MRI images. The authors computed FJ angles from the model's predictions for each level, that is, L3/4 through L5/S1, for the male and female subgroups. Inter-rater and intrarater reliability was analyzed for 60 participants using annotations made by two radiologists and a musculoskeletal researcher. With the developed method, the authors examined FT in the entire NFBC1966 cohort, adopting the literature definitions of FT thresholds at 7° and 10°. Rater agreement was evaluated both for the annotations and for the FJ angles computed from the annotations. FJ asymmetry (the difference between the left and right FJ angles) was used to evaluate the agreement and correlation between the raters. Bland-Altman analysis was used to assess agreement and systematic bias in the FJ asymmetry. The authors used the Dice score as the metric to compare annotations between the raters. The authors evaluated the model predictions on an independent test set and compared them against the ground truth annotations. RESULTS The model scored a Dice score of 92.7±0.1 and an intersection over union of 87.1±0.2, aggregated across all regions of interest, that is, vertebral body (VB), FJs, and posterior arch (PA). The mean FJ angles measured for the male and female subgroups were in agreement with the literature findings.
Intrarater reliability was high, with Dice scores of 97.3 (VB), 82.5 (FJ), and 90.3 (PA). Inter-rater reliability was better between the two radiologists, with Dice scores of 96.4 (VB), 75.5 (FJ), and 85.8 (PA), than between the radiologists and the musculoskeletal researcher. The prevalence of FT was higher in the male subgroup, with L4/5 found to be the most affected level. CONCLUSION The authors developed a DL-based framework that enabled the study of FT in a large cohort. Using the proposed method, the authors present the prevalence of FT in a Finnish population-based cohort.
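The Dice score and intersection-over-union figures reported above are standard overlap metrics for segmentation masks. As a minimal illustrative sketch on binary masks (not the authors' implementation):

```python
import numpy as np

def dice_score(pred, target):
    """Dice coefficient for binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * inter / denom if denom else 1.0

def iou_score(pred, target):
    """Intersection over union: |A∩B| / |A∪B|."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    return np.logical_and(pred, target).sum() / union if union else 1.0

# Toy 2x3 masks: 2 overlapping pixels, 3 positives in each mask
pred   = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_score(pred, target), 3))  # 0.667
print(round(iou_score(pred, target), 3))   # 0.5
```

In practice such scores are averaged per region of interest (VB, FJ, PA) and reported as percentages, as in the abstract.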
Collapse
Affiliation(s)
- Narasimharao Kowlagi
- Research Unit of Health Sciences and Technology, University of Oulu, Oulu, Finland
| | - Antti Kemppainen
- Department of Diagnostic Radiology, University Oulu Hospital, Oulu, Finland
| | - Egor Panfilov
- Research Unit of Health Sciences and Technology, University of Oulu, Oulu, Finland
| | - Terence McSweeney
- Research Unit of Health Sciences and Technology, University of Oulu, Oulu, Finland
| | - Simo Saarakkala
- Research Unit of Health Sciences and Technology, University of Oulu, Oulu, Finland
- Department of Diagnostic Radiology, University Oulu Hospital, Oulu, Finland
| | - Mika Nevalainen
- Research Unit of Health Sciences and Technology, University of Oulu, Oulu, Finland
- Department of Diagnostic Radiology, University Oulu Hospital, Oulu, Finland
| | - Jaakko Niinimäki
- Department of Diagnostic Radiology, University Oulu Hospital, Oulu, Finland
| | - Jaro Karppinen
- Research Unit of Health Sciences and Technology, University of Oulu, Oulu, Finland
- Rehabilitation Services of South Karelia Social and Health Care District, Lappeenranta, Finland
| | - Aleksei Tiulpin
- Research Unit of Health Sciences and Technology, University of Oulu, Oulu, Finland
- Neurocentral Oulu, Oulu University Hospital, Oulu, Finland
| |
Collapse
|
30
|
Qi H, Luo J, Chen G, Zhang J, Chen F, Li H, Shen C, Zhang C. Detection of peach soluble solids based on near-infrared spectroscopy with High Order Spatial Interaction network. J Sci Food Agric 2024; 104:4309-4319. [PMID: 38305465 DOI: 10.1002/jsfa.13316] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/08/2023] [Revised: 01/14/2024] [Accepted: 01/16/2024] [Indexed: 02/03/2024]
Abstract
BACKGROUND Due to the scalability of deep learning technology, researchers have applied it to the non-destructive testing of peach internal quality. Soluble solids content (SSC) is an important internal quality indicator that determines the quality of peaches: peaches with high SSC have a sweeter taste and better texture, making them popular in the market. SSC is therefore an important indicator for measuring peach internal quality and making harvesting decisions. RESULTS This article presents the High Order Spatial Interaction Network (HOSINet), which combines the Position Attention Module (PAM) and Channel Attention Module (CAM). Additionally, a feature wavelength selection algorithm similar to Group-based Clustering Subspace Representation (GCSR-C) is used to establish the Position and Channel Attention Module-High Order Spatial Interaction (PC-HOSI) model for peach SSC prediction. The accuracy of this model is compared with traditional machine learning and traditional deep learning models. Finally, a permutation algorithm is combined with the deep learning models to visually evaluate the importance of feature wavelengths. Increasing the order of the PC-HOSI model enhances its ability to learn spatial correlations in the dataset, thus improving its predictive performance. CONCLUSION The optimal model, PC-HOSI with order 3 (PC-HOSI-3), performed well, with a root mean square error of 0.421 °Brix and a coefficient of determination of 0.864. Compared with traditional machine learning and deep learning algorithms, the coefficient of determination for the prediction set was improved by 0.07 and 0.39, respectively. The permutation algorithm also provided interpretability analysis for the deep learning model's predictions, offering insights into the importance of spectral bands. These results contribute to the accurate prediction of SSC in peaches and support research on the interpretability of neural network models for prediction.
© 2024 Society of Chemical Industry.
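The permutation approach described above — shuffling one spectral band at a time and measuring the resulting drop in the coefficient of determination — can be sketched as follows. This is illustrative only: the `model` here is a stand-in callable, not the PC-HOSI network.

```python
import numpy as np

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Importance of each feature = mean drop in R^2 after shuffling it."""
    rng = np.random.default_rng(seed)

    def r2(y_true, y_pred):
        ss_res = np.sum((y_true - y_pred) ** 2)
        ss_tot = np.sum((y_true - y_true.mean()) ** 2)
        return 1.0 - ss_res / ss_tot

    baseline = r2(y, model(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # destroy band j only
            drops.append(baseline - r2(y, model(Xp)))
        importances[j] = np.mean(drops)
    return importances

# Toy check: only the first "band" carries signal for the target.
X = np.random.default_rng(42).normal(size=(200, 3))
y = 2.0 * X[:, 0]
model = lambda M: 2.0 * M[:, 0]  # stand-in for a trained SSC predictor
imp = permutation_importance(model, X, y)
print(imp[0] > imp[1] and imp[0] > imp[2])  # True
```

Bands whose shuffling barely changes R² contribute little to the prediction, which is the interpretability signal the abstract refers to.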
Collapse
Affiliation(s)
- Hengnian Qi
- School of Information Engineering, Huzhou University, Huzhou, China
| | - Jiahao Luo
- School of Information Engineering, Huzhou University, Huzhou, China
| | - Gang Chen
- Zhejiang Dekfeller Intelligent Machinery Manufacturing Co., Ltd, Hangzhou, China
| | - Jianyi Zhang
- Zhejiang Dekfeller Intelligent Machinery Manufacturing Co., Ltd, Hangzhou, China
| | - Fengnong Chen
- School of Automation, School of Artificial Intelligence, Hangzhou Dianzi University, Hangzhou, China
| | - Hongyang Li
- School of Information Engineering, Huzhou University, Huzhou, China
| | - Cong Shen
- School of Information Engineering, Huzhou University, Huzhou, China
| | - Chu Zhang
- School of Information Engineering, Huzhou University, Huzhou, China
| |
Collapse
|
31
|
Vargas-Cardona HD, Rodriguez-Lopez M, Arrivillaga M, Vergara-Sanchez C, García-Cifuentes JP, Bermúdez PC, Jaramillo-Botero A. Artificial intelligence for cervical cancer screening: Scoping review, 2009-2022. Int J Gynaecol Obstet 2024; 165:566-578. [PMID: 37811597 DOI: 10.1002/ijgo.15179] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2023] [Revised: 09/04/2023] [Accepted: 09/20/2023] [Indexed: 10/10/2023]
Abstract
BACKGROUND The intersection of artificial intelligence (AI) with cancer research is increasing, and many of the advances have focused on the analysis of cancer images. OBJECTIVES To describe and synthesize the literature on the diagnostic accuracy of AI in early imaging diagnosis of cervical cancer, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR). SEARCH STRATEGY The Arksey and O'Malley methodology was used, and the PubMed, Scopus, and Google Scholar databases were searched using a combination of English and Spanish keywords. SELECTION CRITERIA Identified titles and abstracts were screened to select original reports and cross-checked for overlap of cases. DATA COLLECTION AND ANALYSIS A descriptive summary was organized by the AI algorithm used, total number of images analyzed, data source, clinical comparison criteria, and diagnostic performance. MAIN RESULTS We identified 32 studies published between 2009 and 2022. The primary sources of images were digital colposcopy, cervicography, and mobile devices. The machine learning/deep learning (DL) algorithms applied in the articles included support vector machine (SVM), random forest classifier, k-nearest neighbors, multilayer perceptron, C4.5, Naïve Bayes, AdaBoost, XGBoost, conditional random fields, Bayes classifier, convolutional neural network (CNN; and variations), ResNet (several versions), YOLO+EfficientNetB0, and visual geometry group (VGG; several versions). SVM and DL methods (CNN, ResNet, VGG) showed the best diagnostic performance, with an accuracy of over 97%. CONCLUSION We concluded that the use of AI for cervical cancer screening has increased over the years, and some results (mainly from DL) are very promising. However, further research is necessary to validate these findings.
Collapse
Affiliation(s)
| | - Mérida Rodriguez-Lopez
- Faculty of Health Sciences, Universidad Icesi, Cali, Colombia
- Fundación Valle del Lili, Centro de Investigaciones Clínicas, Cali, Colombia
| | | | | | | | | | - Andres Jaramillo-Botero
- OMICAS Research Institute (iOMICAS), Pontificia Universidad Javeriana Cali, Cali, Colombia
- Chemistry and Chemical Engineering, California Institute of Technology, Pasadena, California, USA
| |
Collapse
|
32
|
Lyu S, Adegboye O, Adhinugraha KM, Emeto TI, Taniar D. Analysing the impact of comorbid conditions and media coverage on online symptom search data: a novel AI-based approach for COVID-19 tracking. Infect Dis (Lond) 2024; 56:348-358. [PMID: 38305899 DOI: 10.1080/23744235.2024.2311281] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/19/2023] [Accepted: 01/24/2024] [Indexed: 02/03/2024] Open
Abstract
BACKGROUND Web search data have proven to be a valuable early indicator of COVID-19 outbreaks. However, the influence of comorbid conditions with similar symptoms and the effect of media coverage on symptom-related searches are often overlooked, leading to potential inaccuracies in COVID-19 simulations. METHOD This study introduces a machine learning-based approach to estimate the magnitude of the impact of media coverage and comorbid conditions with similar symptoms on online symptom searches, based on two scenarios with quantile levels 10-90 and 25-75. An incremental batch learning RNN-LSTM model was then developed for COVID-19 simulation in Australia and New Zealand, allowing the model to dynamically simulate different infection rates and transmissibility of SARS-CoV-2 variants. RESULT Symptom searches made by COVID-19-infected persons were found to account for only a small proportion of the total search volume (on average 33.68% in Australia vs. 36.89% in New Zealand) compared with searches influenced by media coverage and comorbid conditions (on average 44.88% in Australia vs. 50.94% in New Zealand). The proposed method, which incorporates the estimated symptom component ratios into the RNN-LSTM embedding model, significantly improved COVID-19 simulation performance. CONCLUSION Media coverage and comorbid conditions with similar symptoms dominate the total number of online symptom searches, suggesting that direct use of online symptom search data in COVID-19 simulations may overestimate COVID-19 infections. Our approach provides new insights into the accurate estimation of COVID-19 infections using online symptom searches, thereby assisting governments in developing complementary methods for public health surveillance.
Collapse
Affiliation(s)
- Shiyang Lyu
- School of Computer Science, Monash University, Melbourne, Australia
| | - Oyelola Adegboye
- Menzies School of Health Research, Darwin, Charles Darwin University, NT, Australia
| | | | - Theophilus I Emeto
- Australian Institute of Tropical Health and Medicine, James Cook University, Townsville, QLD, Australia
| | - David Taniar
- School of Computer Science, Monash University, Melbourne, Australia
| |
Collapse
|
33
|
Hekler A, Maron RC, Haggenmüller S, Schmitt M, Wies C, Utikal JS, Meier F, Hobelsberger S, Gellrich FF, Sergon M, Hauschild A, French LE, Heinzerling L, Schlager JG, Ghoreschi K, Schlaak M, Hilke FJ, Poch G, Korsing S, Berking C, Heppt MV, Erdmann M, Haferkamp S, Drexler K, Schadendorf D, Sondermann W, Goebeler M, Schilling B, Kather JN, Krieghoff-Henning E, Brinker TJ. Using multiple real-world dermoscopic photographs of one lesion improves melanoma classification via deep learning. J Am Acad Dermatol 2024; 90:1028-1031. [PMID: 38199280 DOI: 10.1016/j.jaad.2023.11.065] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2023] [Revised: 10/22/2023] [Accepted: 11/27/2023] [Indexed: 01/12/2024]
Affiliation(s)
- Achim Hekler
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Roman C Maron
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Sarah Haggenmüller
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Max Schmitt
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Christoph Wies
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany; Medical Faculty, University Heidelberg, Heidelberg, Germany
| | - Jochen S Utikal
- Department of Dermatology, Venereology and Allergology, University Medical Center Mannheim, Ruprecht-Karl University of Heidelberg, Mannheim, Germany; Skin Cancer Unit, German Cancer Research Center (DKFZ), Heidelberg, Germany; DKFZ Hector Cancer Institute at the University Medical Center Mannheim, Mannheim, Germany
| | - Friedegund Meier
- Department of Dermatology, Skin Cancer Center at the University Cancer Center and National Center for Tumor Diseases Dresden, University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
| | - Sarah Hobelsberger
- Department of Dermatology, Skin Cancer Center at the University Cancer Center and National Center for Tumor Diseases Dresden, University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
| | - Frank F Gellrich
- Department of Dermatology, Skin Cancer Center at the University Cancer Center and National Center for Tumor Diseases Dresden, University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
| | - Mildred Sergon
- Department of Dermatology, Skin Cancer Center at the University Cancer Center and National Center for Tumor Diseases Dresden, University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
| | - Axel Hauschild
- Department of Dermatology, University Hospital (UKSH), Kiel, Germany
| | - Lars E French
- Department of Dermatology and Allergy, University Hospital, LMU Munich, Munich, Germany; Dr. Phillip Frost Department of Dermatology and Cutaneous Surgery, University of Miami, Miller School of Medicine, Miami, Florida
| | - Lucie Heinzerling
- Department of Dermatology and Allergy, University Hospital, LMU Munich, Munich, Germany; Department of Dermatology, University Hospital Erlangen, Comprehensive Cancer Center Erlangen - European Metropolitan Region Nürnberg, CCC Alliance WERA, Erlangen, Germany
| | - Justin G Schlager
- Department of Dermatology and Allergy, University Hospital, LMU Munich, Munich, Germany
| | - Kamran Ghoreschi
- Department of Dermatology, Venereology and Allergology, Charité - Universitätsmedizin Berlin, Corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
| | - Max Schlaak
- Department of Dermatology, Venereology and Allergology, Charité - Universitätsmedizin Berlin, Corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
| | - Franz J Hilke
- Department of Dermatology, Venereology and Allergology, Charité - Universitätsmedizin Berlin, Corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
| | - Gabriela Poch
- Department of Dermatology, Venereology and Allergology, Charité - Universitätsmedizin Berlin, Corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
| | - Sören Korsing
- Department of Dermatology, Venereology and Allergology, Charité - Universitätsmedizin Berlin, Corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
| | - Carola Berking
- Department of Dermatology, University Hospital Erlangen, Comprehensive Cancer Center Erlangen - European Metropolitan Region Nürnberg, CCC Alliance WERA, Erlangen, Germany
| | - Markus V Heppt
- Department of Dermatology, University Hospital Erlangen, Comprehensive Cancer Center Erlangen - European Metropolitan Region Nürnberg, CCC Alliance WERA, Erlangen, Germany
| | - Michael Erdmann
- Department of Dermatology, University Hospital Erlangen, Comprehensive Cancer Center Erlangen - European Metropolitan Region Nürnberg, CCC Alliance WERA, Erlangen, Germany
| | - Sebastian Haferkamp
- Department of Dermatology, University Hospital Regensburg, Regensburg, Germany
| | - Konstantin Drexler
- Department of Dermatology, University Hospital Regensburg, Regensburg, Germany
| | - Dirk Schadendorf
- Department of Dermatology, Venereology and Allergology, University Hospital Essen, Essen, Germany
| | - Wiebke Sondermann
- Department of Dermatology, Venereology and Allergology, University Hospital Essen, Essen, Germany
| | - Matthias Goebeler
- Department of Dermatology, Venereology and Allergology, University Hospital Würzburg and National Center for Tumor Diseases (NCT) WERA Würzburg, Würzburg, Germany
| | - Bastian Schilling
- Department of Dermatology, Venereology and Allergology, University Hospital Würzburg and National Center for Tumor Diseases (NCT) WERA Würzburg, Würzburg, Germany
| | - Jakob N Kather
- Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany
| | - Eva Krieghoff-Henning
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Titus J Brinker
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany.
| |
Collapse
|
34
|
Jang SJ, Alpaugh K, Kunze KN, Li TY, Mayman DJ, Vigdorchik JM, Jerabek SA, Gausden EB, Sculco PK. Deep-Learning Automation of Preoperative Radiographic Parameters Associated With Early Periprosthetic Femur Fracture After Total Hip Arthroplasty. J Arthroplasty 2024; 39:1191-1198.e2. [PMID: 38007206 DOI: 10.1016/j.arth.2023.11.021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/09/2023] [Revised: 11/13/2023] [Accepted: 11/15/2023] [Indexed: 11/27/2023] Open
Abstract
BACKGROUND The radiographic assessment of bone morphology impacts implant selection and fixation type in total hip arthroplasty (THA) and is important to minimize the risk of periprosthetic femur fracture (PFF). We utilized a deep-learning algorithm to automate femoral radiographic parameters and determined which automated parameters were associated with early PFF. METHODS Radiographs from a publicly available database and from patients undergoing primary cementless THA at a high-volume institution (2016 to 2020) were obtained. A U-Net algorithm was trained to segment femoral landmarks for bone morphology parameter automation. Automated parameters were compared against that of a fellowship-trained surgeon and compared in an independent cohort of 100 patients who underwent THA (50 with early PFF and 50 controls matched by femoral component, age, sex, body mass index, and surgical approach). RESULTS On the independent cohort, the algorithm generated 1,710 unique measurements for 95 images (5% lesser trochanter identification failure) in 22 minutes. Medullary canal width, femoral cortex width, canal flare index, morphological cortical index, canal bone ratio, and canal calcar ratio had good-to-excellent correlation with surgeon measurements (Pearson's correlation coefficient: 0.76 to 0.96). Canal calcar ratios (0.43 ± 0.08 versus 0.40 ± 0.07) and canal bone ratios (0.39 ± 0.06 versus 0.36 ± 0.06) were higher (P < .05) in the PFF cohort when comparing the automated parameters. CONCLUSIONS Deep-learning automated parameters demonstrated differences in patients who had and did not have early PFF after cementless primary THA. This algorithm has the potential to complement and improve patient-specific PFF risk-prediction tools.
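The group comparison above (e.g., canal calcar ratio 0.43 ± 0.08 versus 0.40 ± 0.07, n = 50 per cohort) can be reproduced from the summary statistics alone with Welch's t-test. A sketch under the assumption that the ± values are standard deviations (the abstract does not specify, and this may not be the authors' exact test):

```python
import math

def welch_t(m1, sd1, n1, m2, sd2, n2):
    """Welch's t statistic and degrees of freedom from summary statistics."""
    v1, v2 = sd1 ** 2 / n1, sd2 ** 2 / n2      # squared standard errors
    t = (m1 - m2) / math.sqrt(v1 + v2)
    # Welch-Satterthwaite approximation for degrees of freedom
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# Canal calcar ratio, early-PFF cohort vs. matched controls (n = 50 each)
t, df = welch_t(0.43, 0.08, 50, 0.40, 0.07, 50)
print(round(t, 2), round(df, 1))  # ~2.0, ~96.3
```

Under these assumptions the statistic sits near the two-sided 5% threshold for ~96 degrees of freedom, consistent with the reported P < .05.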
Collapse
Affiliation(s)
- Seong J Jang
- Weill Cornell College of Medicine, New York, New York; Department of Orthopedic Surgery, Hospital for Special Surgery, New York, New York
| | - Kyle Alpaugh
- Department of Orthopaedic Surgery, Massachusetts General Hospital, Boston, Massachusetts
| | - Kyle N Kunze
- Department of Orthopedic Surgery, Hospital for Special Surgery, New York, New York
| | - Tim Y Li
- Weill Cornell College of Medicine, New York, New York
| | - David J Mayman
- Department of Orthopedic Surgery, Adult Reconstruction and Joint Replacement Service, Hospital for Special Surgery, New York, New York
| | - Jonathan M Vigdorchik
- Department of Orthopedic Surgery, Adult Reconstruction and Joint Replacement Service, Hospital for Special Surgery, New York, New York
| | - Seth A Jerabek
- Department of Orthopedic Surgery, Adult Reconstruction and Joint Replacement Service, Hospital for Special Surgery, New York, New York
| | - Elizabeth B Gausden
- Department of Orthopedic Surgery, Adult Reconstruction and Joint Replacement Service, Hospital for Special Surgery, New York, New York
| | - Peter K Sculco
- Department of Orthopedic Surgery, Adult Reconstruction and Joint Replacement Service, Hospital for Special Surgery, New York, New York
| |
Collapse
|
35
|
Laddi A, Goyal S, Himani, Savlania A. Vein segmentation and visualization of upper and lower extremities using convolution neural network. BIOMED ENG-BIOMED TE 2024; 0:bmt-2023-0331. [PMID: 38651783 DOI: 10.1515/bmt-2023-0331] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2023] [Accepted: 04/03/2024] [Indexed: 04/25/2024]
Abstract
OBJECTIVES The study focused on developing a reliable real-time framework for venous localization, identification, and visualization, based on a deep learning (DL) self-parameterized Convolutional Neural Network (CNN) for segmenting the venous map of both upper and lower limbs from a dataset acquired under unconstrained conditions with a near-infrared (NIR) imaging setup, specifically to assist vascular surgeons during venipuncture, vascular surgery, or Chronic Venous Disease (CVD) treatment. METHODS A portable image acquisition setup was designed to collect venous data (upper and lower extremities) from 72 subjects. A manually annotated image dataset was used to train and compare the performance of existing well-known CNN-based architectures, such as ResNet and VGGNet, with the self-parameterized U-Net, improving automated vein segmentation and visualization. RESULTS Experimental results indicated that the self-parameterized U-Net segments the unconstrained dataset better than conventional CNN feature-based learning models, with a Dice score of 0.58 and 96.7% accuracy for real-time vein visualization, making it appropriate for locating veins in real time under unconstrained conditions. CONCLUSIONS The self-parameterized U-Net for vein segmentation and visualization has the potential to reduce risks associated with traditional venipuncture or CVD treatments by outperforming conventional CNN architectures, providing vascular assistance, and improving patient care and treatment outcomes.
Collapse
Affiliation(s)
- Amit Laddi
- Biomedical Applications Group, CSIR-Central Scientific Instruments Organisation (CSIO), Chandigarh-160030, India
- Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, Uttar Pradesh- 201 002, India
| | - Shivalika Goyal
- Biomedical Applications Group, CSIR-Central Scientific Instruments Organisation (CSIO), Chandigarh-160030, India
- Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, Uttar Pradesh- 201 002, India
| | | | - Ajay Savlania
- Department of General Surgery, PGIMER, Chandigarh, India
| |
Collapse
|
36
|
Håkansson J, Quinn BL, Shultz AL, Swartz SM, Corcoran AJ. Application of a novel deep learning-based 3D videography workflow to bat flight. Ann N Y Acad Sci 2024. [PMID: 38652595 DOI: 10.1111/nyas.15143] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/25/2024]
Abstract
Studying the detailed biomechanics of flying animals requires accurate three-dimensional coordinates for key anatomical landmarks. Traditionally, this relies on manually digitizing animal videos, a labor-intensive task that scales poorly with increasing framerates and numbers of cameras. Here, we present a workflow that combines deep learning-powered automatic digitization with filtering and correction of mislabeled points using quality metrics from deep learning and 3D reconstruction. We tested our workflow using a particularly challenging scenario: bat flight. First, we documented four bats flying steadily in a 2 m3 wind tunnel test section. Wing kinematic parameters resulting from manually digitizing bats with markers applied to anatomical landmarks were not significantly different from those resulting from applying our workflow to the same bats without markers for five out of six parameters. Second, we compared coordinates from manual digitization against those yielded via our workflow for bats flying freely in a 344 m3 enclosure. Average distance between coordinates from our workflow and those from manual digitization was less than a millimeter larger than the average human-to-human coordinate distance. The improved efficiency of our workflow has the potential to increase the scalability of studies on animal flight biomechanics.
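The manual-versus-workflow comparison above reduces to the mean Euclidean distance between matched 3D landmark coordinates. A minimal sketch (the array shapes and example values are assumptions, not the study's data):

```python
import numpy as np

def mean_landmark_distance(a, b):
    """Mean Euclidean distance between matched (N, 3) coordinate sets."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    assert a.shape == b.shape and a.shape[1] == 3
    return float(np.linalg.norm(a - b, axis=1).mean())

# Two hypothetical landmarks digitized manually and by the workflow
manual   = [[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]]
workflow = [[0.0, 0.0, 3.0], [1.0, 5.0, 1.0]]
print(mean_landmark_distance(manual, workflow))  # (3 + 4) / 2 = 3.5
```

The same metric applied between two human digitizers gives the human-to-human baseline against which the workflow's sub-millimeter excess distance is judged.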
Collapse
Affiliation(s)
- Jonas Håkansson
- Department of Biology, University of Colorado Colorado Springs, Colorado Springs, Colorado, USA
| | - Brooke L Quinn
- Department of Ecology, Evolution, and Organismal Biology, Brown University, Providence, Rhode Island, USA
| | - Abigail L Shultz
- Department of Biology, University of Colorado Colorado Springs, Colorado Springs, Colorado, USA
| | - Sharon M Swartz
- Department of Ecology, Evolution, and Organismal Biology, Brown University, Providence, Rhode Island, USA
- School of Engineering, Brown University, Providence, Rhode Island, USA
| | - Aaron J Corcoran
- Department of Biology, University of Colorado Colorado Springs, Colorado Springs, Colorado, USA
| |
Collapse
|
37
|
Lim J, Kim JM, Lee JY. Deep learning prediction of triplet-triplet annihilation parameters in blue fluorescent organic light-emitting diodes. Adv Mater 2024:e2312774. [PMID: 38652081 DOI: 10.1002/adma.202312774] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/27/2023] [Revised: 04/16/2024] [Indexed: 04/25/2024]
Abstract
The triplet-triplet annihilation (TTA) ratio and the rate coefficient (kTT) of TTA are key factors in estimating the contribution of triplet excitons to radiative singlet excitons in fluorescent TTA organic light-emitting diodes. In this study, we implemented deep learning models to predict these key factors from transient electroluminescence (trEL) data using new numerical equations. A new TTA model was developed that considers both polaron and exciton dynamics, enabling the distinction between prompt and delayed singlet decays with a fundamental understanding of the mechanism. In addition, deep learning models for predicting the kinetic coefficients and TTA ratio were established. After comprehensive optimization inspired by photophysics, we achieved coefficients of determination of 0.992 and 0.999 in the prediction of kTT and the TTA ratio, respectively, indicating nearly perfect prediction. The contribution of each kinetic parameter of polaron and exciton dynamics to the trEL curve was discussed using various deep learning models.
Collapse
Affiliation(s)
- Junseop Lim
- School of Chemical Engineering, Sungkyunkwan University, 2066, Seobu-ro, Jangan-gu, Suwon-si, Gyeonggi-do, 16419, Republic of Korea
| | - Jae-Min Kim
- Department of Advanced Materials Engineering, Chung-Ang University, 4726, Seodong-daero, Daedeok-myeon, Anseong-si, Gyeonggi-do, Republic of Korea
| | - Jun Yeob Lee
- School of Chemical Engineering, Sungkyunkwan University, 2066, Seobu-ro, Jangan-gu, Suwon-si, Gyeonggi-do, 16419, Republic of Korea
- SKKU Institute of Energy Science and Technology, Sungkyunkwan University, 2066, Seobu-ro, Jangan-gu, Suwon-si, Gyeonggi-do, 16419, Republic of Korea
| |
Collapse
|
38
|
Barkey M, Büchner R, Wester A, Pritzl SD, Makarenko M, Wang Q, Weber T, Trauner D, Maier SA, Fratalocchi A, Lohmüller T, Tittl A. Pixelated High-Q Metasurfaces for in Situ Biospectroscopy and Artificial Intelligence-Enabled Classification of Lipid Membrane Photoswitching Dynamics. ACS Nano 2024. [PMID: 38653474 DOI: 10.1021/acsnano.3c09798] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/25/2024]
Abstract
Nanophotonic devices excel at confining light into intense hot spots of electromagnetic near fields, creating exceptional opportunities for light-matter coupling and surface-enhanced sensing. Recently, all-dielectric metasurfaces with ultrasharp resonances enabled by photonic bound states in the continuum (BICs) have unlocked additional functionalities for surface-enhanced biospectroscopy by precisely targeting and reading out the molecular absorption signatures of diverse molecular systems. However, BIC-driven molecular spectroscopy has so far focused on end point measurements in dry conditions, neglecting the crucial interaction dynamics of biological systems. Here, we combine the advantages of pixelated all-dielectric metasurfaces with deep learning-enabled feature extraction and prediction to realize an integrated optofluidic platform for time-resolved in situ biospectroscopy. Our approach harnesses high-Q metasurfaces specifically designed for operation in a lossy aqueous environment together with advanced spectral sampling techniques to temporally resolve the dynamic behavior of photoswitchable lipid membranes. Enabled by a software convolutional neural network, we further demonstrate the real-time classification of the characteristic cis and trans membrane conformations with 98% accuracy. Our synergistic sensing platform incorporating metasurfaces, optofluidics, and deep learning reveals exciting possibilities for studying multimolecular biological systems, ranging from the behavior of transmembrane proteins to the dynamic processes associated with cellular communication.
Collapse
Affiliation(s)
- Martin Barkey
- Chair in Hybrid Nanosystems, Nano-Institute Munich, Faculty of Physics, Ludwig-Maximilians-Universität München, Königinstraße 10, 80539 München, Germany
| | - Rebecca Büchner
- Chair in Hybrid Nanosystems, Nano-Institute Munich, Faculty of Physics, Ludwig-Maximilians-Universität München, Königinstraße 10, 80539 München, Germany
- Nanophotonic Systems Laboratory, ETH Zürich, 8092 Zürich, Switzerland
| | - Alwin Wester
- Chair in Hybrid Nanosystems, Nano-Institute Munich, Faculty of Physics, Ludwig-Maximilians-Universität München, Königinstraße 10, 80539 München, Germany
| | - Stefanie D Pritzl
- Chair for Photonics and Optoelectronics, Nano-Institute Munich, Faculty of Physics, Ludwig-Maximilians-Universität München, Königinstraße 10, 80539 München, Germany
- Department of Physics and Debye Institute for Nanomaterials Science, Utrecht University, Princetonplein 1, 3584 CC Utrecht, The Netherlands
| | - Maksim Makarenko
- PRIMALIGHT, Faculty of Electrical Engineering, King Abdullah University of Science and Technology (KAUST), Thuwal 23955-6900, Saudi Arabia
| | - Qizhou Wang
- PRIMALIGHT, Faculty of Electrical Engineering, King Abdullah University of Science and Technology (KAUST), Thuwal 23955-6900, Saudi Arabia
| | - Thomas Weber
- Chair in Hybrid Nanosystems, Nano-Institute Munich, Faculty of Physics, Ludwig-Maximilians-Universität München, Königinstraße 10, 80539 München, Germany
| | - Dirk Trauner
- Department of Chemistry, University of Pennsylvania, Philadelphia, Pennsylvania 19104-6323, United States
| | - Stefan A Maier
- Chair in Hybrid Nanosystems, Nano-Institute Munich, Faculty of Physics, Ludwig-Maximilians-Universität München, Königinstraße 10, 80539 München, Germany
- School of Physics and Astronomy, Monash University, Wellington Road, Clayton, VIC 3800, Australia
- The Blackett Laboratory, Department of Physics, Imperial College London, London, SW7 2AZ, United Kingdom
| | - Andrea Fratalocchi
- PRIMALIGHT, Faculty of Electrical Engineering, King Abdullah University of Science and Technology (KAUST), Thuwal 23955-6900, Saudi Arabia
| | - Theobald Lohmüller
- Chair for Photonics and Optoelectronics, Nano-Institute Munich, Faculty of Physics, Ludwig-Maximilians-Universität München, Königinstraße 10, 80539 München, Germany
| | - Andreas Tittl
- Chair in Hybrid Nanosystems, Nano-Institute Munich, Faculty of Physics, Ludwig-Maximilians-Universität München, Königinstraße 10, 80539 München, Germany
39
Bagherpour R, Bagherpour G, Mohammadi P. Application of Artificial Intelligence in Tissue Engineering. Tissue Eng Part B Rev 2024. [PMID: 38581425 DOI: 10.1089/ten.teb.2024.0022] [Indexed: 04/08/2024]
Abstract
Tissue engineering, a crucial approach in medical research and clinical applications, aims to regenerate damaged organs. By combining stem cells, biochemical factors, and biomaterials, it encounters challenges in designing complex 3D structures. Artificial intelligence (AI) enhances tissue engineering through computational modeling, biomaterial design, cell culture optimization, and personalized medicine. This review explores AI applications in organ tissue engineering (bone, heart, nerve, skin, cartilage), employing various machine learning (ML) algorithms for data analysis, prediction, and optimization. Each section discusses common ML algorithms and specific applications, emphasizing the potential and challenges in advancing regenerative therapies.
Affiliation(s)
- Reza Bagherpour
- Department of Computer Engineering, Iran University of Science and Technology, Tehran, Iran
| | - Ghasem Bagherpour
- Zanjan Pharmaceutical Biotechnology Research Center, Zanjan University of Medical Sciences, Zanjan, Iran
- Department of Medical Biotechnology, Faculty of Medicine, Zanjan University of Medical Sciences, Zanjan, Iran
| | - Parvin Mohammadi
- Department of Medical Biotechnology, Faculty of Medicine, Zanjan University of Medical Sciences, Zanjan, Iran
- Regenerative Medicine Research Center, Kermanshah University of Medical Sciences, Kermanshah, Iran
40
Dubljevic N, Moore S, Lauzon ML, Souza R, Frayne R. Effect of MR head coil geometry on deep-learning-based MR image reconstruction. Magn Reson Med 2024. [PMID: 38647191 DOI: 10.1002/mrm.30130] [Received: 09/30/2023] [Revised: 04/02/2024] [Accepted: 04/07/2024] [Indexed: 04/25/2024]
Abstract
PURPOSE To investigate whether parallel imaging-imposed geometric coil constraints can be relaxed when using a deep learning (DL)-based image reconstruction method as opposed to a traditional non-DL method. THEORY AND METHODS Traditional and DL-based MR image reconstruction approaches operate in fundamentally different ways: Traditional methods solve a system of equations derived from the image data whereas DL methods use data/target pairs to learn a generalizable reconstruction model. Two sets of head coil profiles were evaluated: (1) 8-channel and (2) 32-channel geometries. A DL model was compared to conjugate gradient SENSE (CG-SENSE) and L1-wavelet compressed sensing (CS) through quantitative metrics and visual assessment as coil overlap was increased. RESULTS Results were generally consistent between experiments. As coil overlap increased, there was a significant (p < 0.001) decrease in performance in most cases for all methods. The decrease was most pronounced for CG-SENSE, and the DL models significantly outperformed (p < 0.001) their non-DL counterparts in all scenarios. CS showed improved robustness to coil overlap and signal-to-noise ratio (SNR) versus CG-SENSE, but had quantitatively and visually poorer reconstructions characterized by blurriness as compared to DL. DL showed virtually no change in performance across SNR and very small changes across coil overlap. CONCLUSION The DL image reconstruction method produced images that were robust to coil overlap and of higher quality than CG-SENSE and CS. This suggests that geometric coil design constraints can be relaxed when using DL reconstruction methods.
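The geometric constraint probed by the authors shows up in the conditioning of the SENSE unfolding problem: as coil sensitivity profiles overlap, the encoding matrix becomes nearly singular and noise is amplified. A toy two-coil, R = 2 sketch (all sensitivity values are invented, not from the paper):

```python
import math

def cond2(S):
    # Condition number of a 2x2 matrix from the singular values of S
    (a, b), (c, d) = S
    t = a*a + b*b + c*c + d*d                 # trace(S^T S) = s1^2 + s2^2
    det = (a*d - b*c) ** 2                    # det(S^T S)  = (s1*s2)^2
    disc = math.sqrt(max(t*t - 4.0*det, 0.0))
    return math.sqrt((t + disc) / (t - disc))

def unfold(S, y):
    # SENSE unfolding at R = 2: solve S x = y for the two aliased pixels
    (a, b), (c, d) = S
    det = a*d - b*c
    return ((y[0]*d - b*y[1]) / det, (a*y[1] - y[0]*c) / det)

x_true = (1.0, 0.5)   # true values of the two pixels folded onto each other

for overlap in (0.0, 0.5, 0.9):
    # Growing overlap makes the two coil sensitivity rows nearly identical
    S = [(1.0, overlap), (overlap, 1.0)]
    y = (S[0][0]*x_true[0] + S[0][1]*x_true[1],
         S[1][0]*x_true[0] + S[1][1]*x_true[1])
    x_hat = unfold(S, y)
    print(f"overlap={overlap}: cond={cond2(S):.1f}")
```

The unfolding stays exact in noiseless arithmetic, but the condition number (1, 3, then 19 here) multiplies any measurement noise, which is why the traditional solver degrades while a learned prior can compensate.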
Affiliation(s)
- Natalia Dubljevic
- Department of Biomedical Engineering, University of Calgary, Calgary, Alberta, Canada
- Seaman Family MR Research Centre, Foothills Medical Centre, Calgary, Alberta, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, Alberta, Canada
| | - Stephen Moore
- Seaman Family MR Research Centre, Foothills Medical Centre, Calgary, Alberta, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, Alberta, Canada
- O'Brien Centre for the Health Sciences, Cumming School of Medicine, Calgary, Alberta, Canada
| | - Michel Louis Lauzon
- Seaman Family MR Research Centre, Foothills Medical Centre, Calgary, Alberta, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, Alberta, Canada
- Radiology and Clinical Neuroscience, University of Calgary, Calgary, Alberta, Canada
| | - Roberto Souza
- Hotchkiss Brain Institute, University of Calgary, Calgary, Alberta, Canada
- Department of Electrical and Software Engineering, University of Calgary, Calgary, Alberta, Canada
| | - Richard Frayne
- Department of Biomedical Engineering, University of Calgary, Calgary, Alberta, Canada
- Seaman Family MR Research Centre, Foothills Medical Centre, Calgary, Alberta, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, Alberta, Canada
- Radiology and Clinical Neuroscience, University of Calgary, Calgary, Alberta, Canada
41
Paluru N, Susan Mathew R, Yalavarthy PK. DF-QSM: Data Fidelity based Hybrid Approach for Improved Quantitative Susceptibility Mapping of the Brain. NMR Biomed 2024:e5163. [PMID: 38649140 DOI: 10.1002/nbm.5163] [Received: 10/11/2023] [Revised: 01/22/2024] [Accepted: 03/11/2024] [Indexed: 04/25/2024]
Abstract
Quantitative Susceptibility Mapping (QSM) is an advanced magnetic resonance imaging (MRI) technique to quantify the magnetic susceptibility of the tissue under investigation. Deep learning methods have shown promising results in deconvolving the susceptibility distribution from the measured local field obtained from the MR phase. Although existing deep learning based QSM methods can produce high-quality reconstruction, they are highly biased toward training data distribution with less scope for generalizability. This work proposes a hybrid two-step reconstruction approach to improve deep learning based QSM reconstruction. The susceptibility map prediction obtained from the deep learning methods has been refined in the framework developed in this work to ensure consistency with the measured local field. The developed method was validated on existing deep learning and model-based deep learning methods for susceptibility mapping of the brain. The developed method resulted in improved reconstruction for MRI volumes obtained with different acquisition settings, including deep learning models trained on constrained (limited) data settings.
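The hybrid idea, a deep-learning estimate refined for consistency with the measured local field, can be sketched with a toy diagonal forward model standing in for the k-space dipole kernel. All numbers below are synthetic; the actual method operates on 3D MRI volumes:

```python
# Toy data-fidelity refinement: make a deep-learning susceptibility estimate
# consistent with the measured local field. A diagonal forward model D stands
# in for the k-space dipole kernel; all values are synthetic.
D = [1.0, 0.6, -0.4, 0.05]                  # per-"voxel" forward-model weights
chi_true = [0.2, -0.1, 0.3, 0.5]
phi = [d * c for d, c in zip(D, chi_true)]  # measured local field

chi0 = [0.25, -0.05, 0.2, 0.0]              # imperfect network prediction

def fidelity(chi):
    # Squared data-consistency error ||D*chi - phi||^2
    return sum((d * c - p) ** 2 for d, c, p in zip(D, chi, phi))

chi, lr = list(chi0), 0.5
for _ in range(500):                        # gradient descent on 0.5*fidelity
    chi = [c - lr * d * (d * c - p) for c, d, p in zip(chi, D, phi)]

print(f"fidelity: {fidelity(chi0):.4f} -> {fidelity(chi):.6f}")
```

Note the voxel with tiny |D|: it mimics the ill-posed region near the dipole cone zeros, where data fidelity alone barely constrains the answer and the learned prior must do the work.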
Affiliation(s)
- Naveen Paluru
- Department of Computational and Data Sciences, Indian Institute of Science, Bangalore, Karnataka, India
| | - Raji Susan Mathew
- School of Data Science, Indian Institute of Science Education and Research, Thiruvananthapuram, Kerala, India
| | - Phaneendra K Yalavarthy
- Department of Computational and Data Sciences, Indian Institute of Science, Bangalore, Karnataka, India
42
Garduno-Rapp NE, Ng YS, Weon JL, Saleh SN, Lehmann CU, Tian C, Quinn A. Early identification of patients at risk for iron-deficiency anemia using deep learning techniques. Am J Clin Pathol 2024:aqae031. [PMID: 38642073 DOI: 10.1093/ajcp/aqae031] [Received: 12/19/2023] [Accepted: 03/07/2024] [Indexed: 04/22/2024] Open
Abstract
OBJECTIVES Iron-deficiency anemia (IDA) is a common health problem worldwide, and up to 10% of adult patients with incidental IDA may have gastrointestinal cancer. A diagnosis of IDA can be established through a combination of laboratory tests, but it is often underrecognized until a patient becomes symptomatic. Based on advances in machine learning, we hypothesized that we could reduce the time to diagnosis by developing an IDA prediction model. Our goal was to develop 3 neural networks by using retrospective longitudinal outpatient laboratory data to predict the risk of IDA 3 to 6 months before traditional diagnosis. METHODS We analyzed retrospective outpatient electronic health record data between 2009 and 2020 from an academic medical center in northern Texas. We included laboratory features from 30,603 patients to develop 3 types of neural networks: artificial neural networks, long short-term memory cells, and gated recurrent units. The classifiers were trained using the Adam optimizer across 200 random training-validation splits. We calculated accuracy, area under the receiver operating characteristic curve, sensitivity, and specificity in the testing split. RESULTS Although all models demonstrated comparable performance, the gated recurrent unit model outperformed the other 2, achieving an accuracy of 0.83, an area under the receiver operating characteristic curve of 0.89, a sensitivity of 0.75, and a specificity of 0.85 across 200 epochs. CONCLUSIONS Our results showcase the feasibility of employing deep learning techniques for early prediction of IDA in the outpatient setting based on sequences of laboratory data, offering a substantial lead time for clinical intervention.
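Of the three architectures, the gated recurrent unit performed best. As a minimal sketch of what one GRU cell computes on a laboratory time series, here is a scalar GRU update applied to a toy sequence of standardized hemoglobin values; the weights and the risk readout are invented, not the trained model from the study:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, w):
    # One gated-recurrent-unit update with scalar input and hidden state
    z = sigmoid(w["wz"] * x + w["uz"] * h + w["bz"])        # update gate
    r = sigmoid(w["wr"] * x + w["ur"] * h + w["br"])        # reset gate
    h_cand = math.tanh(w["wh"] * x + w["uh"] * (r * h) + w["bh"])
    return (1.0 - z) * h + z * h_cand

# Invented, untrained weights; the real model learns these from patient data
w = dict(wz=0.8, uz=0.5, bz=0.0, wr=1.0, ur=-0.3, br=0.0, wh=1.2, uh=0.7, bh=0.0)

# Toy sequence: standardized hemoglobin drifting downward across visits
sequence = [0.4, 0.1, -0.2, -0.6, -1.1]
h = 0.0
for x in sequence:
    h = gru_step(x, h, w)

risk = sigmoid(-3.0 * h)   # hypothetical readout: falling trend -> higher IDA risk
```

The gates let the cell decide, per visit, how much of the past trajectory to keep, which is what makes recurrent models suited to irregular longitudinal lab data.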
Affiliation(s)
| | | | - Jenny L Weon
- Clinical Informatics Center
- Department of Pathology
| | - Sameh N Saleh
- Clinical Informatics Center
- Clinical Informatics, Inova Health System, Falls Church, VA, US
| | | | - Chenlu Tian
- Department of Digestive and Liver Disease, University of Texas Southwestern Medical Center, Dallas, TX, US
43
Sajjaviriya C, Fujianti, Azuma M, Tsuchiya H, Koshimizu TA. Computer vision analysis of mother-infant interaction identified efficient pup retrieval in V1b receptor knockout mice. Peptides 2024:171226. [PMID: 38649033 DOI: 10.1016/j.peptides.2024.171226] [Received: 02/17/2024] [Revised: 04/16/2024] [Accepted: 04/18/2024] [Indexed: 04/25/2024]
Abstract
Close contact between lactating rodent mothers and their infants is essential for effective nursing. Whether the mother's effort to retrieve the infants to their nest requires vasopressin signaling via the V1b receptor has not been fully defined. To address this question, V1b receptor knockout (V1bKO) and control mice were analyzed in a pup retrieval test. Because an exploring mother in a new test cage randomly accessed multiple infants against changing backgrounds over time, a computer vision-based deep learning analysis was applied to continuously calculate the distances between the mother and the infants as a parameter of their relationship. In an open field, virgin female V1bKO mice entered the center area fewer times and moved shorter distances than wild-type (WT) mice. While this behavioral pattern persisted in V1bKO mothers, the pup retrieval test demonstrated that the total distance between a V1bKO mother and her infants decreased more quickly than with a WT mother. Moreover, in the medial preoptic area, V1b receptor transcripts were detected in a subset of galanin- and c-fos-positive neurons following maternal stimulation by infants. This research highlights the effectiveness of deep learning analysis in evaluating the mother-infant relationship and the critical role of the V1b receptor in pup retrieval during the early lactation phase.
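The distance-based readout described above reduces, per video frame, to distances between the tracked mother and each infant. A minimal sketch with hypothetical detector coordinates; the paper's pipeline produces such keypoints with deep-learning-based tracking:

```python
import math

def centroid(points):
    # Center of a detected bounding region given its corner points
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def mother_infant_distance(mother_box, pup_positions):
    # Sum of distances from the mother's centroid to every pup, for one frame
    mx, my = centroid(mother_box)
    return sum(math.hypot(px - mx, py - my) for px, py in pup_positions)

# Hypothetical detector output for two frames (pixel coordinates)
frame0 = {"mother": [(10, 10), (14, 10), (10, 16), (14, 16)],
          "pups": [(40, 12), (60, 30)]}
frame1 = {"mother": [(30, 18), (34, 18), (30, 24), (34, 24)],
          "pups": [(40, 12), (60, 30)]}

d0 = mother_infant_distance(frame0["mother"], frame0["pups"])
d1 = mother_infant_distance(frame1["mother"], frame1["pups"])
print(f"total mother-pup distance: {d0:.1f} -> {d1:.1f} px")
```

Tracking this total distance over time gives a curve whose rate of decrease is the efficiency measure compared between genotypes.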
Affiliation(s)
- Chortip Sajjaviriya
- Division of Molecular Pharmacology, Department of Pharmacology, Jichi Medical University, Tochigi, 329-0489, Japan
| | - Fujianti
- Division of Molecular Pharmacology, Department of Pharmacology, Jichi Medical University, Tochigi, 329-0489, Japan
| | - Morio Azuma
- Division of Molecular Pharmacology, Department of Pharmacology, Jichi Medical University, Tochigi, 329-0489, Japan
| | - Hiroyoshi Tsuchiya
- Division of Molecular Pharmacology, Department of Pharmacology, Jichi Medical University, Tochigi, 329-0489, Japan
| | - Taka-Aki Koshimizu
- Division of Molecular Pharmacology, Department of Pharmacology, Jichi Medical University, Tochigi, 329-0489, Japan.
44
Landriel F, Franchi BC, Mosquera C, Lichtenberger FP, Benitez S, Ainesder M, Guiroy A, Hem S. Artificial intelligence assistance for the measurement of full alignment parameters in whole-spine lateral radiographs. World Neurosurg 2024:S1878-8750(24)00663-6. [PMID: 38649028 DOI: 10.1016/j.wneu.2024.04.091] [Received: 04/13/2024] [Accepted: 04/15/2024] [Indexed: 04/25/2024]
Abstract
BACKGROUND Measuring spinal alignment with radiological parameters is essential in patients with spinal conditions likely to be treated surgically. These evaluations are not usually included in the radiological report. As a result, spinal surgeons commonly perform the measurements themselves, which is time-consuming and subject to errors. We aim to develop a fully automated artificial intelligence tool to assist in measuring alignment parameters in whole-spine lateral radiographs (WSL X-rays). MATERIALS AND METHODS We developed a tool called Vertebrai that automatically calculates the global spinal parameters (GSP): pelvic incidence (PI), sacral slope (SS), pelvic tilt (PT), L1-L4 angle, L4-S1 lumbo-pelvic angle, T1 pelvic angle (TPA), sagittal vertical axis (SVA), cervical lordosis (CL), C1-C2 lordosis, lumbar lordosis (LL), mid-thoracic kyphosis (MTK), proximal thoracic kyphosis (PTK), global thoracic kyphosis (GTK), T1 slope (T1S), C2-C7 plumb line (cSVA), spino-sacral angle (SSA), C7 tilt (C7T), global tilt (GT), spinopelvic tilt (T1SPi), and hip-odontoid axis (OD-HA). We assessed human-AI interaction instead of AI performance alone. We compared the time to measure GSP and inter-rater agreement with and without AI assistance. Two institutional datasets were created, with 2267 multilabel images for classification and 784 WSL X-rays with reference-standard landmarks labeled by spinal surgeons. RESULTS Vertebrai significantly reduced the measurement time when comparing spine surgeons with AI assistance against the AI algorithm alone, without human intervention (3 minutes vs 0.26 minutes; p < 0.05). Vertebrai achieved an average accuracy of 83% in detecting abnormal alignment values, with the SS parameter exhibiting the lowest accuracy at 61.5% and T1SPi demonstrating the highest accuracy at 100%. Intra-class correlation analysis revealed a high level of correlation and consistency in the global alignment parameters.
CONCLUSIONS Vertebrai's measurements can accurately detect alignment parameters, making it a promising tool for measuring GSP automatically.
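Each global spinal parameter is ultimately a simple geometric function of detected landmarks. As an illustration, SVA and PT can be computed from a handful of landmark coordinates; all coordinates below are invented, and Vertebrai's exact landmark definitions may differ:

```python
import math

# Hypothetical landmark coordinates on a lateral radiograph (mm, x anterior, y up)
c7_centroid = (60.0, 520.0)
s1_posterior_corner = (25.0, 110.0)
s1_midpoint = (40.0, 112.0)
femoral_head_center = (48.0, 80.0)

# Sagittal vertical axis: horizontal offset of the C7 plumb line from the
# posterosuperior corner of S1 (positive = forward imbalance)
sva = c7_centroid[0] - s1_posterior_corner[0]

# Pelvic tilt: angle between the vertical and the line joining the femoral
# head center to the midpoint of the sacral endplate
dx = s1_midpoint[0] - femoral_head_center[0]
dy = s1_midpoint[1] - femoral_head_center[1]
pelvic_tilt = math.degrees(math.atan2(-dx, dy))

print(f"SVA = {sva:.0f} mm, PT = {pelvic_tilt:.1f} deg")
```

This is why landmark detection is the hard part of such tools: once the landmarks are reliable, the twenty parameters follow from elementary geometry.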
45
Shimada S, Tanimoto K, Sasaki H, Taga T, Sasaki T, Imagawa T, Sasaki N. Automated scoring of glomerular injury in TNS2-deficient nephropathy. Exp Anim 2024:24-0001. [PMID: 38644233 DOI: 10.1538/expanim.24-0001] [Indexed: 04/23/2024] Open
Abstract
Several artificial intelligence (AI) systems have been developed for glomerular pathology analysis in clinical settings. However, the application of AI systems in nonclinical fields remains limited. In this study, we trained a convolutional neural network model, which is an AI algorithm, to classify the severity of Tensin 2 (TNS2)-deficient nephropathy into seven categories. A dataset consisting of 803 glomerular images was generated from kidney sections of TNS2-deficient and wild-type mice. Manual evaluations of the images were conducted to assess their glomerular injury scores. The trained AI achieved approximately 70% accuracy in predicting the glomerular injury score for TNS2-deficient nephropathy. However, the AI achieved approximately 100% accuracy when considering predictions within one score of the true label as correct. The AI's predicted mean score closely matched the true mean score. In conclusion, while the AI model may not replace human judgment entirely, it can serve as a reliable second assessor in scoring glomerular injury, offering potential benefits in enhancing the accuracy and objectivity of such assessments.
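The two headline numbers, roughly 70% exact accuracy and roughly 100% when predictions within one score of the true label count as correct, correspond to two simple metrics. A sketch with invented labels for ten glomeruli:

```python
def score_accuracies(y_true, y_pred):
    # Exact agreement and "within one score" agreement between the model
    # and the manual glomerular injury scores (seven ordered categories, 0-6)
    exact = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    within_one = sum(abs(t - p) <= 1 for t, p in zip(y_true, y_pred)) / len(y_true)
    return exact, within_one

# Hypothetical manual vs predicted scores for ten glomeruli
y_true = [0, 1, 2, 2, 3, 4, 4, 5, 6, 6]
y_pred = [0, 1, 3, 2, 3, 3, 4, 5, 5, 6]

exact, within_one = score_accuracies(y_true, y_pred)
print(exact, within_one)  # 0.7 1.0
```

For ordinal scales like injury grading, the within-one metric is often the more clinically meaningful one, since adjacent-grade disagreement is common even between human raters.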
Affiliation(s)
- Shuji Shimada
- Laboratory of Laboratory Animal Science and Medicine, School of Veterinary Medicine, Kitasato University
| | - Kyosuke Tanimoto
- Laboratory of Laboratory Animal Science and Medicine, School of Veterinary Medicine, Kitasato University
| | - Hayato Sasaki
- Laboratory of Laboratory Animal Science and Medicine, School of Veterinary Medicine, Kitasato University
| | - Takumi Taga
- Laboratory of Laboratory Animal Science and Medicine, School of Veterinary Medicine, Kitasato University
| | - Takeru Sasaki
- Laboratory of Laboratory Animal Science and Medicine, School of Veterinary Medicine, Kitasato University
| | - Tomomi Imagawa
- Laboratory of Laboratory Animal Science and Medicine, School of Veterinary Medicine, Kitasato University
| | - Nobuya Sasaki
- Laboratory of Laboratory Animal Science and Medicine, School of Veterinary Medicine, Kitasato University
46
Song W, Shi Y, Lin GN. Haplotype function score improves biological interpretation and cross-ancestry polygenic prediction of human complex traits. eLife 2024; 12:RP92574. [PMID: 38639992 PMCID: PMC11031082 DOI: 10.7554/elife.92574] [Indexed: 04/20/2024] Open
Abstract
We propose a new framework for human genetic association studies: at each locus, a deep learning model (in this study, Sei) is used to calculate the functional genomic activity score for two haplotypes per individual. This score, defined as the Haplotype Function Score (HFS), replaces the original genotype in association studies. Applying the HFS framework to 14 complex traits in the UK Biobank, we identified 3619 independent HFS-trait associations with a significance of p < 5 × 10⁻⁸. Fine-mapping revealed 2699 causal associations, corresponding to a median increase of 63 causal findings per trait compared with single-nucleotide polymorphism (SNP)-based analysis. HFS-based enrichment analysis uncovered 727 pathway-trait associations and 153 tissue-trait associations with strong biological interpretability, including 'circadian pathway-chronotype' and 'arachidonic acid-intelligence'. Lastly, we applied least absolute shrinkage and selection operator (LASSO) regression to integrate the HFS prediction score with SNP-based polygenic risk scores, which showed an improvement of 16.1-39.8% in cross-ancestry polygenic prediction. We concluded that HFS is a promising strategy for understanding the genetic basis of human complex traits.
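The final integration step fits a LASSO over the HFS prediction score and SNP-based polygenic scores. A minimal coordinate-descent sketch for two standardized predictors; the data and feature roles below are toy inventions, and the study used standard LASSO tooling on real scores:

```python
def soft(a, t):
    # Soft-thresholding operator, the proximal map of the L1 penalty
    return a - t if a > t else a + t if a < -t else 0.0

def lasso2(x1, x2, y, lam, n_iter=100):
    # Coordinate-descent LASSO for two standardized, mean-zero predictors:
    # minimize (1/2n)*||y - b1*x1 - b2*x2||^2 + lam*(|b1| + |b2|)
    n = len(y)
    b1 = b2 = 0.0
    for _ in range(n_iter):
        b1 = soft(sum(v * (yi - b2 * u) for v, u, yi in zip(x1, x2, y)) / n, lam)
        b2 = soft(sum(u * (yi - b1 * v) for v, u, yi in zip(x1, x2, y)) / n, lam)
    return b1, b2

# Toy standardized scores: x1 plays the HFS prediction score, x2 a SNP-based PRS
x1 = [1.0, -1.0, 1.0, -1.0]
x2 = [1.0, 1.0, -1.0, -1.0]
y = [0.8 * a + 0.3 * b for a, b in zip(x1, x2)]

print(lasso2(x1, x2, y, lam=0.1))   # mild shrinkage of both weights
print(lasso2(x1, x2, y, lam=0.5))   # the weaker predictor is zeroed out
```

The sparsity-inducing penalty is what lets the combined model drop a score component that adds no independent signal, rather than merely down-weighting it.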
Affiliation(s)
- Weichen Song
- Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, School of Bioengineering, Shanghai Jiao Tong University, Shanghai, China
- Bio-X Institutes, Key Laboratory for the Genetics of Developmental and Neuropsychiatric Disorders (Ministry of Education), Collaborative Innovation Center for Brain Science, Shanghai Jiao Tong University, Shanghai, China
| | - Yongyong Shi
- Bio-X Institutes, Key Laboratory for the Genetics of Developmental and Neuropsychiatric Disorders (Ministry of Education), Collaborative Innovation Center for Brain Science, Shanghai Jiao Tong University, Shanghai, China
- Biomedical Sciences Institute of Qingdao University (Qingdao Branch of SJTU Bio-X12 Institutes), Qingdao University, Qingdao, China
| | - Guan Ning Lin
- Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, School of Bioengineering, Shanghai Jiao Tong University, Shanghai, China
47
Muthusivarajan R, Celaya A, Yung JP, Long JP, Viswanath SE, Marcus DS, Chung C, Fuentes D. Evaluating the relationship between magnetic resonance image quality metrics and deep learning-based segmentation accuracy of brain tumors. Med Phys 2024. [PMID: 38640464 DOI: 10.1002/mp.17059] [Received: 08/01/2022] [Revised: 01/16/2024] [Accepted: 02/25/2024] [Indexed: 04/21/2024] Open
Abstract
BACKGROUND Magnetic resonance imaging (MRI) scans are known to suffer from a variety of acquisition artifacts as well as equipment-based variations that impact image appearance and segmentation performance. It is still unclear whether a direct relationship exists between magnetic resonance (MR) image quality metrics (IQMs) (e.g., signal-to-noise, contrast-to-noise) and segmentation accuracy. PURPOSE Deep learning (DL) approaches have shown significant promise for automated segmentation of brain tumors on MRI but depend on the quality of input training images. We sought to evaluate the relationship between IQMs of input training images and DL-based brain tumor segmentation accuracy toward developing more generalizable models for multi-institutional data. METHODS We trained a 3D DenseNet model on the BraTS 2020 cohorts for segmentation of the tumor subregions (enhancing tumor [ET], peritumoral edematous tissue, and necrotic and non-ET) on MRI, with performance quantified via a 5-fold cross-validated Dice coefficient. MRI scans were evaluated through the open-source quality control tool MRQy to yield 13 IQMs per scan. The Pearson correlation coefficient was computed between whole tumor (WT) Dice values and IQM measures in the training cohorts to identify the quality measures most correlated with segmentation performance. Each selected IQM was used to group MRI scans as "better" quality (BQ) or "worse" quality (WQ) via relative thresholding. Segmentation performance was re-evaluated for the DenseNet model when (i) training on BQ MRI images with validation on WQ images, as well as (ii) training on WQ images with validation on BQ images. Trends were further validated on independent test sets derived from the BraTS 2021 training cohorts. RESULTS For this study, multimodal MRI scans from the BraTS 2020 training cohorts were used to train the segmentation model, with validation on independent test sets derived from the BraTS 2021 cohort. Among the selected IQMs, models trained on BQ images based on inhomogeneity measurements (coefficient of variance, coefficient of joint variation, coefficient of variation of the foreground patch) and models trained on WQ images based on the noise measurement peak signal-to-noise ratio (PSNR) yielded significantly improved tumor segmentation accuracy compared with their inverse models. CONCLUSIONS Our results suggest that a significant correlation may exist between specific MR IQMs and DenseNet-based brain tumor segmentation performance. The selection of MRI scans for model training based on IQMs may yield more accurate and generalizable models in unseen validation.
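The selection procedure pairs each IQM with whole-tumor Dice via a Pearson correlation and then splits scans at a relative threshold. A minimal sketch with invented per-scan values (median used as the relative threshold for illustration):

```python
import math
import statistics

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length sequences
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented per-scan values: one inhomogeneity IQM and whole-tumor Dice
iqm = [0.10, 0.15, 0.22, 0.30, 0.41, 0.55, 0.63, 0.70]
dice = [0.91, 0.90, 0.88, 0.85, 0.80, 0.74, 0.70, 0.66]

r = pearson(iqm, dice)   # strongly negative: worse inhomogeneity, worse Dice

# Relative thresholding: split scans at the median IQM into BQ/WQ groups
thr = statistics.median(iqm)
bq = [i for i, v in enumerate(iqm) if v <= thr]
wq = [i for i, v in enumerate(iqm) if v > thr]
print(f"r = {r:.3f}, BQ scans: {bq}, WQ scans: {wq}")
```

Training then proceeds separately on the BQ and WQ groups, with cross-group validation revealing which direction of quality mismatch hurts the model more.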
Affiliation(s)
| | - Adrian Celaya
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Department of Computational and Applied Mathematics, Rice University, Houston, Texas, USA
| | - Joshua P Yung
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
| | - James P Long
- Department of Biostatistics, University of Texas MD Anderson Cancer Center, Houston, Texas, USA
| | - Satish E Viswanath
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA
| | - Daniel S Marcus
- Department of Radiology, Washington University School of Medicine, St. Louis, Missouri, USA
| | - Caroline Chung
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
| | - David Fuentes
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
48
Chen F, Wang L, Hong J, Jiang J, Zhou L. Unmasking bias in artificial intelligence: a systematic review of bias detection and mitigation strategies in electronic health record-based models. J Am Med Inform Assoc 2024; 31:1172-1183. [PMID: 38520723 PMCID: PMC11031231 DOI: 10.1093/jamia/ocae060] [Received: 10/23/2023] [Revised: 02/26/2024] [Accepted: 03/05/2024] [Indexed: 03/25/2024] Open
Abstract
OBJECTIVES Leveraging artificial intelligence (AI) in conjunction with electronic health records (EHRs) holds transformative potential to improve healthcare. However, addressing bias in AI, which risks worsening healthcare disparities, cannot be overlooked. This study reviews methods to handle various biases in AI models developed using EHR data. MATERIALS AND METHODS We conducted a systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-analyses guidelines, analyzing articles from PubMed, Web of Science, and IEEE published between January 01, 2010 and December 17, 2023. The review identified key biases, outlined strategies for detecting and mitigating bias throughout the AI model development, and analyzed metrics for bias assessment. RESULTS Of the 450 articles retrieved, 20 met our criteria, revealing 6 major bias types: algorithmic, confounding, implicit, measurement, selection, and temporal. The AI models were primarily developed for predictive tasks, yet none have been deployed in real-world healthcare settings. Five studies concentrated on the detection of implicit and algorithmic biases employing fairness metrics like statistical parity, equal opportunity, and predictive equity. Fifteen studies proposed strategies for mitigating biases, especially targeting implicit and selection biases. These strategies, evaluated through both performance and fairness metrics, predominantly involved data collection and preprocessing techniques like resampling and reweighting. DISCUSSION This review highlights evolving strategies to mitigate bias in EHR-based AI models, emphasizing the urgent need for both standardized and detailed reporting of the methodologies and systematic real-world testing and evaluation. Such measures are essential for gauging models' practical impact and fostering ethical AI that ensures fairness and equity in healthcare.
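Two of the fairness metrics named above, statistical parity and equal opportunity, have simple definitions worth making concrete. A sketch on invented predictions for two patient groups (the group labels and outcomes are illustrative only):

```python
def statistical_parity_diff(y_pred, group):
    # Difference in positive-prediction rates: P(yhat=1|A) - P(yhat=1|B)
    a = [p for p, g in zip(y_pred, group) if g == "A"]
    b = [p for p, g in zip(y_pred, group) if g == "B"]
    return sum(a) / len(a) - sum(b) / len(b)

def equal_opportunity_diff(y_pred, y_true, group):
    # Difference in true-positive rates (sensitivity) among truly positive cases
    def tpr(g):
        hits = [p for p, t, gg in zip(y_pred, y_true, group) if gg == g and t == 1]
        return sum(hits) / len(hits)
    return tpr("A") - tpr("B")

# Invented risk-model outputs for eight patients in two groups
y_pred = [1, 1, 0, 1, 0, 0, 1, 0]
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
group = ["A", "A", "A", "A", "B", "B", "B", "B"]

spd = statistical_parity_diff(y_pred, group)
eod = equal_opportunity_diff(y_pred, y_true, group)
print(spd, eod)  # 0.5 0.5
```

A value of zero on either metric indicates parity between groups; mitigation strategies such as resampling and reweighting aim to drive these differences toward zero without sacrificing overall performance.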
Affiliation(s)
- Feng Chen
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA 02115, United States
- Department of Biomedical Informatics and Health Education, University of Washington, Seattle, WA 98105, United States
| | - Liqin Wang
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA 02115, United States
- Division of General Internal Medicine and Primary Care, Brigham and Women’s Hospital, Boston, MA 02115, United States
| | - Julie Hong
- Wellesley High School, Wellesley, MA 02481, United States
| | - Jiaqi Jiang
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA 02115, United States
| | - Li Zhou
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA 02115, United States
- Division of General Internal Medicine and Primary Care, Brigham and Women’s Hospital, Boston, MA 02115, United States
49
Lambri N, Longari G, Loiacono D, Brioso RC, Crespi L, Galdieri C, Lobefalo F, Reggiori G, Rusconi R, Tomatis S, Bellu L, Bramanti S, Clerici E, De Philippis C, Dei D, Navarria P, Carlo-Stella C, Franzese C, Scorsetti M, Mancosu P. Deep learning-based optimization of field geometry for total marrow irradiation delivered with volumetric modulated arc therapy. Med Phys 2024. [PMID: 38634859 DOI: 10.1002/mp.17089] [Received: 11/29/2023] [Revised: 03/20/2024] [Accepted: 04/05/2024] [Indexed: 04/19/2024] Open
Abstract
BACKGROUND Total marrow (lymphoid) irradiation (TMI/TMLI) is a radiotherapy treatment used to selectively target the bone marrow and lymph nodes in conditioning regimens for allogeneic hematopoietic stem cell transplantation. A complex field geometry is needed to cover the large planning target volume (PTV) of TMI/TMLI with volumetric modulated arc therapy (VMAT). Five isocenters and ten overlapping fields are needed for the upper body, while two additional isocenters are placed on the arms for patients with a large anatomical conformation. Creating this field geometry is clinically challenging and is performed by a medical physicist (MP) specialized in TMI/TMLI. PURPOSE To develop convolutional neural networks (CNNs) for automatically generating the field geometry of TMI/TMLI. METHODS The dataset comprised 117 patients treated with TMI/TMLI between 2011 and 2023 at our Institute. The CNN input image consisted of three channels, obtained by projecting along the sagittal plane: (1) average CT pixel intensity within the PTV; (2) PTV mask; (3) brain, lungs, liver, bowel, and bladder masks. This "averaged" frontal view combined the information analyzed by the MP when setting the field geometry in the treatment planning system (TPS). Two CNNs were trained to predict the isocenter coordinates and jaw apertures for patients with (CNN-1) and without (CNN-2) isocenters on the arms. Local optimization methods were used to refine the models' output based on the anatomy of the patient. Model evaluation was performed on a test set of 15 patients in two ways: (1) by computing the root mean squared error (RMSE) between the CNN output and ground truth; (2) with a qualitative assessment of manual and generated field geometries (scale: 1 = not adequate, 4 = adequate) carried out in blind mode by three MPs with different expertise in TMI/TMLI.
The Wilcoxon signed-rank test was used to compare the scores given to manual and generated configurations (p < 0.05 considered significant). RESULTS The average and standard deviation values of RMSE for CNN-1 and CNN-2 before/after local optimization were 15 ± 2/13 ± 3 mm and 16 ± 2/18 ± 4 mm, respectively. The CNNs were integrated into a planning automation software for TMI/TMLI such that the MPs could analyze the proposed field geometries in detail directly in the TPS. The selection of the CNN model to create the field geometry was based on the PTV width, to approximate the decision process of an experienced MP and provide a single field configuration option. We found no significant differences between the manual and generated field geometries for any MP, with median values of 4 versus 4 (p = 0.92), 3 versus 3 (p = 0.78), and 4 versus 3 (p = 0.48), respectively. Since October 2023, the generated field geometry has been used in our clinical practice for prospective patients. CONCLUSIONS The generated field geometries were clinically acceptable and adequate, even for an MP with a high level of expertise in TMI/TMLI. Incorporating the knowledge of the MPs into the development cycle was crucial for optimizing the models, especially in this scenario with limited data.
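The three-channel "averaged" frontal view described in the methods can be illustrated with a minimal numpy sketch. This is not the authors' code: the `(z, y, x)` array layout, mask encoding, and function name are assumptions; the idea is simply to collapse the lateral axis so the CNN sees a 2D frontal summary of the PTV and organs.

```python
import numpy as np

def build_frontal_input(ct, ptv, organs):
    """Collapse the lateral (x) axis of (z, y, x) volumes into a
    3-channel frontal view: (1) mean CT intensity within the PTV,
    (2) PTV projection mask, (3) union of organ masks."""
    n = ptv.sum(axis=2)                                   # PTV voxels per frontal pixel
    ch1 = np.where(n > 0, (ct * ptv).sum(axis=2) / np.maximum(n, 1), 0.0)
    ch2 = (n > 0).astype(np.float32)                      # PTV projection mask
    ch3 = (organs.sum(axis=2) > 0).astype(np.float32)     # organ union mask
    return np.stack([ch1, ch2, ch3], axis=0)              # shape (3, z, y)
```

A CNN regression head would then map this `(3, z, y)` image to isocenter coordinates and jaw apertures, as the paper does.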
Affiliation(s)
- Nicola Lambri
  - Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, Milan, Italy
  - Radiotherapy and Radiosurgery Department, IRCCS Humanitas Research Hospital, Rozzano, Milan, Italy
- Giorgio Longari
  - Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milan, Italy
- Daniele Loiacono
  - Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milan, Italy
- Ricardo Coimbra Brioso
  - Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milan, Italy
- Leonardo Crespi
  - Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milan, Italy
  - Health Data Science Centre, Human Technopole, Milan, Italy
- Carmela Galdieri
  - Radiotherapy and Radiosurgery Department, IRCCS Humanitas Research Hospital, Rozzano, Milan, Italy
- Francesca Lobefalo
  - Radiotherapy and Radiosurgery Department, IRCCS Humanitas Research Hospital, Rozzano, Milan, Italy
- Giacomo Reggiori
  - Radiotherapy and Radiosurgery Department, IRCCS Humanitas Research Hospital, Rozzano, Milan, Italy
- Roberto Rusconi
  - Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, Milan, Italy
  - Radiotherapy and Radiosurgery Department, IRCCS Humanitas Research Hospital, Rozzano, Milan, Italy
- Stefano Tomatis
  - Radiotherapy and Radiosurgery Department, IRCCS Humanitas Research Hospital, Rozzano, Milan, Italy
- Luisa Bellu
  - Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, Milan, Italy
  - Radiotherapy and Radiosurgery Department, IRCCS Humanitas Research Hospital, Rozzano, Milan, Italy
- Stefania Bramanti
  - Department of Oncology and Hematology, IRCCS Humanitas Research Hospital, Rozzano, Milan, Italy
- Elena Clerici
  - Radiotherapy and Radiosurgery Department, IRCCS Humanitas Research Hospital, Rozzano, Milan, Italy
- Chiara De Philippis
  - Department of Oncology and Hematology, IRCCS Humanitas Research Hospital, Rozzano, Milan, Italy
- Damiano Dei
  - Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, Milan, Italy
  - Radiotherapy and Radiosurgery Department, IRCCS Humanitas Research Hospital, Rozzano, Milan, Italy
- Pierina Navarria
  - Radiotherapy and Radiosurgery Department, IRCCS Humanitas Research Hospital, Rozzano, Milan, Italy
- Carmelo Carlo-Stella
  - Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, Milan, Italy
  - Department of Oncology and Hematology, IRCCS Humanitas Research Hospital, Rozzano, Milan, Italy
- Ciro Franzese
  - Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, Milan, Italy
  - Radiotherapy and Radiosurgery Department, IRCCS Humanitas Research Hospital, Rozzano, Milan, Italy
- Marta Scorsetti
  - Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, Milan, Italy
  - Radiotherapy and Radiosurgery Department, IRCCS Humanitas Research Hospital, Rozzano, Milan, Italy
- Pietro Mancosu
  - Radiotherapy and Radiosurgery Department, IRCCS Humanitas Research Hospital, Rozzano, Milan, Italy
Gao M, Jiang H, Hu Y, Ren Q, Xie Z, Liu J. Suppressing label noise in medical image classification using mixup attention and self-supervised learning. Phys Med Biol 2024. [PMID: 38636495 DOI: 10.1088/1361-6560/ad4083]
Abstract
Deep neural networks (DNNs) have been widely applied in medical image classification and achieve remarkable classification performance. These achievements depend heavily on large-scale, accurately annotated training data. However, label noise is inevitably introduced during medical image annotation, as the labeling process relies heavily on the expertise and experience of annotators. Meanwhile, DNNs tend to overfit noisy labels, degrading model performance. In this work, we therefore devise a noise-robust training approach to mitigate the adverse effects of noisy labels in medical image classification. Specifically, we incorporate contrastive learning and an intra-group attention mixup strategy into vanilla supervised learning. Contrastive learning for the feature extractor enhances the visual representations learned by the DNN. The intra-group attention mixup module constructs groups, assigns self-attention weights to the samples within each group, and then interpolates a large number of noise-suppressed samples through a weighted mixup operation. We conduct comparative experiments on both synthetic and real-world noisy medical datasets under various noise levels. Rigorous experiments validate that our noise-robust method with contrastive learning and attention mixup can effectively handle label noise and is superior to state-of-the-art methods. An ablation study also shows that both components contribute to boosting model performance. The proposed method demonstrates its capability to curb label noise and shows potential for real-world clinical applications.
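The weighted-mixup idea in the abstract can be sketched in a few lines of numpy. This is a simplified illustration only, not the paper's exact module: here the attention weight of each sample in a group is taken from softmax similarity to the group centroid (the assumed heuristic), so atypical, potentially mislabeled samples contribute less to the interpolated sample and its soft label.

```python
import numpy as np

def attention_mixup(features, labels, temperature=1.0):
    """Interpolate one noise-suppressed sample from a group:
    weights come from each sample's similarity to the group mean,
    and both features and labels are mixed with the same weights."""
    mean = features.mean(axis=0)
    scores = features @ mean / temperature         # similarity to group centroid
    w = np.exp(scores - scores.max())
    w /= w.sum()                                   # softmax attention weights
    mixed_x = (w[:, None] * features).sum(axis=0)  # weighted mixup of inputs
    mixed_y = (w[:, None] * labels).sum(axis=0)    # matching soft label
    return mixed_x, mixed_y, w
```

Down-weighting outliers in this way is what makes the mixed samples "noise-suppressed": a mislabeled sample that sits far from its group contributes little to either the mixed input or the mixed label.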
Affiliation(s)
- Mengdi Gao
  - Peking University, Beijing 100871, China
- Hongyang Jiang
  - Southern University of Science and Technology, 1088 Xueyuan Avenue, Nanshan District, Shenzhen 518055, Guangdong, China
- Yan Hu
  - Southern University of Science and Technology, 1088 Xueyuan Avenue, Nanshan District, Shenzhen 518055, Guangdong, China
- Qiushi Ren
  - Department of Biomedical Engineering, Peking University, Beijing 100871, China
- Zhaoheng Xie
  - Peking University, Beijing 100091, China
- Jiang Liu
  - Southern University of Science and Technology, 1088 Xueyuan Avenue, Nanshan District, Shenzhen 518055, Guangdong, China