1
Li Z, Wang B, Dong Y, Jie G. A multi-modal biosensing platform based on Ag-ZnIn2S4@Ag-Pt nanosignal probe-sensitized UiO-66 for ultra-sensitive detection of penicillin. Food Chem 2024; 444:138665. [PMID: 38335689] [DOI: 10.1016/j.foodchem.2024.138665]
Abstract
We designed a multi-modal biosensing platform for versatile detection of penicillin based on a unique Ag-ZnIn2S4@Ag-Pt signal probe-sensitized UiO-66 metal-organic framework. First, a large number of Ag-ZnIn2S4 quantum dots (AZIS QDs) were attached to Ag-Pt nanoparticles (NPs), yielding a new multi-signal probe, AZIS QDs@Ag-Pt NPs, with excellent photoelectrochemical (PEC), electrochemiluminescence (ECL), and fluorescence (FL) signals. Moreover, the AZIS QDs@Ag-Pt NPs signal probe matches the energy levels of the UiO-66 metal-organic framework (MOF), which has good photoelectric properties; this match reverses the PEC current of UiO-66 and thereby reduces false positives in detection. When penicillin was present, it bound to its aptamer and released the multifunctional signal probes, which generate PEC, ECL, and FL signals, thus enabling ultrasensitive multi-signal detection of penicillin. This work introduces a novel three-signal QDs probe, advancing multi-mode photoelectric sensing analysis. The limit of detection (LOD, 3.48 fg·mL-1) is far below the Maximum Residue Level (MRL) established by the EU (4 ng·mL-1). The newly developed multi-mode biosensor holds strong practical value for biological detection, food assays, and early disease diagnosis.
Affiliation(s)
- Zhikang Li
- Key Laboratory of Optic-electric Sensing and Analytical Chemistry for Life Science, MOE, College of Chemistry and Molecular Engineering, Qingdao University of Science and Technology, Qingdao 266042, PR China
- Bing Wang
- Key Laboratory of Optic-electric Sensing and Analytical Chemistry for Life Science, MOE, College of Chemistry and Molecular Engineering, Qingdao University of Science and Technology, Qingdao 266042, PR China
- Yongxin Dong
- Key Laboratory of Optic-electric Sensing and Analytical Chemistry for Life Science, MOE, College of Chemistry and Molecular Engineering, Qingdao University of Science and Technology, Qingdao 266042, PR China
- Guifen Jie
- Key Laboratory of Optic-electric Sensing and Analytical Chemistry for Life Science, MOE, College of Chemistry and Molecular Engineering, Qingdao University of Science and Technology, Qingdao 266042, PR China
2
Gu S, Zhu F. BAGAIL: Multi-modal imitation learning from imbalanced demonstrations. Neural Netw 2024; 174:106251. [PMID: 38552352] [DOI: 10.1016/j.neunet.2024.106251]
Abstract
Expert demonstrations in imitation learning often contain different behavioral modes, e.g., driving on the left, keeping the lane, and driving on the right in driving tasks. Although most existing multi-modal imitation learning methods can learn from demonstrations of multiple modes, they place strict constraints on the data of each mode, generally requiring roughly equal amounts of data across modes. Otherwise, they tend to fall into mode collapse or learn only the data distribution of the mode with the largest data volume. To address this problem, an algorithm that balances the real-fake loss and the classification loss by modifying the output of the discriminator, referred to as BAlanced Generative Adversarial Imitation Learning (BAGAIL), is proposed. With this modification, the generator is rewarded only for generating real trajectories with correct modes. BAGAIL is therefore able to handle imbalanced expert demonstrations and learn each mode efficiently. The learning process of BAGAIL is divided into a pre-training stage and an imitation learning stage. During pre-training, BAGAIL initializes the generator parameters by conditional Behavioral Cloning, laying the foundation for the direction of parameter optimization. During imitation learning, BAGAIL optimizes the parameters through the adversarial game between the generator and the modified discriminator, so that the resulting policy successfully learns the distribution of imbalanced expert data. Experiments showed that BAGAIL accurately distinguishes different behavioral modes from imbalanced demonstrations; moreover, the result learned for each mode is close to the expert standard and more stable than that of other multi-modal imitation learning methods.
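The abstract does not spell out the modified discriminator head. One plausible reading is a (K+1)-way output — K "real with mode k" classes plus one "fake" class — so the generator is rewarded only when a trajectory looks real AND matches the requested mode. A minimal NumPy sketch of that reading (the head shape and reward form are assumptions, not the authors' implementation):

```python
import numpy as np

def balanced_reward(disc_logits, mode):
    """Generator reward under a hypothetical (K+1)-way discriminator head:
    K "real with mode k" classes plus one "fake" class. The generator only
    earns reward when a trajectory is judged both real AND of the
    requested mode."""
    z = disc_logits - disc_logits.max()          # stable softmax
    p = np.exp(z) / np.exp(z).sum()
    return float(np.log(p[mode] + 1e-12))        # log P(real, requested mode)

logits = np.array([2.0, -1.0, 0.5, -0.5])        # 3 mode classes + "fake" slot
r_correct = balanced_reward(logits, mode=0)      # requested mode matches
r_wrong = balanced_reward(logits, mode=1)        # requested mode mismatches
```

With this head, a realistic trajectory of the wrong mode earns strictly less reward than one of the correct mode, which is the balancing behavior the abstract describes.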
Affiliation(s)
- Sijia Gu
- School of Computer Science and Technology, Soochow University, Suzhou, Jiangsu, 215006, China
- Fei Zhu
- School of Computer Science and Technology, Soochow University, Suzhou, Jiangsu, 215006, China
3
Veluchamy S, Sudharson S, Annamalai R, Bassfar Z, Aljaedi A, Jamal SS. Automated Detection of COVID-19 from Multimodal Imaging Data Using Optimized Convolutional Neural Network Model. J Imaging Inform Med 2024. [PMID: 38499705] [DOI: 10.1007/s10278-024-01077-y]
Abstract
COVID-19, a disease caused by a virus that infects the upper respiratory tract and lungs, saw fatalities rise daily throughout the pandemic. Timely identification of COVID-19 can inform strategies to control the disease and guide the selection of an appropriate treatment pathway. Given the need for broader COVID-19 diagnosis, researchers have developed more advanced, rapid, and efficient detection methods. We first conduct a comparative analysis of several widely used convolutional neural network (CNN) models to select an appropriate CNN model. We then enhance the chosen CNN model using a feature fusion strategy over multi-modal imaging datasets. The Jaya optimization technique is employed to determine the optimal weighting for merging these dual features into a single feature vector, and an SVM classifier categorizes samples as COVID-19 positive or negative. For experimentation, a standard dataset of 10,000 samples is used, divided equally between COVID-19-positive and -negative classes. The experimental outcomes demonstrate that the proposed fine-tuned system, coupled with optimization techniques for multi-modal data, outperforms existing state-of-the-art network models, achieving an accuracy of 98.7%.
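The Jaya update rule the abstract relies on is simple: every candidate moves toward the current best solution and away from the worst, with no algorithm-specific hyperparameters. A hedged NumPy sketch on a toy fusion-weight search (the objective, population size, and the "true" optimum (0.6, 0.4) are illustrative assumptions, not values from the paper):

```python
import numpy as np

def jaya_step(pop, scores, rng):
    """One Jaya iteration: each candidate moves toward the current best
    and away from the current worst; no tunable hyperparameters."""
    best, worst = pop[np.argmin(scores)], pop[np.argmax(scores)]
    r1, r2 = rng.random(pop.shape), rng.random(pop.shape)
    return pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))

# Toy stand-in for the validation error of the fused feature vector;
# the "optimal" fusion weight pair (0.6, 0.4) is purely illustrative.
def objective(w):
    return (w[0] - 0.6) ** 2 + (w[1] - 0.4) ** 2

rng = np.random.default_rng(0)
pop = rng.random((20, 2))                         # 20 candidate weight pairs
for _ in range(200):
    scores = np.array([objective(w) for w in pop])
    cand = np.clip(jaya_step(pop, scores, rng), 0.0, 1.0)
    better = np.array([objective(w) for w in cand]) < scores
    pop[better] = cand[better]                    # greedy acceptance
best_w = pop[np.argmin([objective(w) for w in pop])]
```

In the paper's setting, `objective` would be replaced by the classification loss of the SVM on the weighted, merged feature vector.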
Affiliation(s)
- S Veluchamy
- Department of Computer Science and Engineering, Amrita School of Computing, Amrita Vishwa Vidyapeetham, Chennai, 601103, India
- S Sudharson
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, 600127, India
- R Annamalai
- Department of Computer Science and Engineering, Amrita School of Computing, Amrita Vishwa Vidyapeetham, Chennai, 601103, India
- Zaid Bassfar
- Department of Information Technology, University of Tabuk, Tabuk, 71491, Saudi Arabia
- Amer Aljaedi
- College of Computing and Information Technology, University of Tabuk, Tabuk, 71491, Saudi Arabia
- Sajjad Shaukat Jamal
- Department of Mathematics, College of Science, King Khalid University, Abha, 61413, Saudi Arabia
4
Hu K, Chen J, Zhang P, Xue W, Xie J. [Multi-modal physiological time-frequency feature extraction network for accurate sleep stage classification]. Sheng Wu Yi Xue Gong Cheng Xue Za Zhi 2024; 41:26-33. [PMID: 38403601] [PMCID: PMC10894739] [DOI: 10.7507/1001-5515.202306010]
Abstract
Sleep stage classification is essential for clinical disease diagnosis and sleep quality assessment. Most existing methods for sleep stage classification are based on single-channel or single-modal signals and extract features using a single-branch deep convolutional network, which not only hinders the capture of the diverse features related to sleep and increases the computational cost, but also affects the accuracy of sleep stage classification. To solve this problem, this paper proposes an end-to-end multi-modal physiological time-frequency feature extraction network (MTFF-Net) for accurate sleep stage classification. First, multi-modal physiological signals, comprising electroencephalogram (EEG), electrocardiogram (ECG), electrooculogram (EOG), and electromyogram (EMG), are converted into two-dimensional time-frequency images using the short-time Fourier transform (STFT). Then, a time-frequency feature extraction network combining a multi-scale EEG compact convolution network (Ms-EEGNet) and a bidirectional gated recurrent unit (Bi-GRU) network is used to obtain multi-scale spectral features related to sleep feature waveforms and temporal features related to sleep stage transitions. Following the American Academy of Sleep Medicine (AASM) EEG sleep stage classification criteria, the model achieved 84.3% accuracy on the five-class task on the third subgroup of the Institute of Systems and Robotics of the University of Coimbra Sleep Dataset (ISRUC-S3), with a macro F1 score of 83.1% and a Cohen's kappa coefficient of 79.8%. The experimental results show that the proposed model achieves higher classification accuracy and promotes the application of deep learning algorithms in assisting clinical decision-making.
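The STFT front end described above is straightforward to sketch. Below, a synthetic 30 s "EEG" epoch (30 s is the conventional sleep-scoring epoch; the 100 Hz sampling rate and the 10 Hz alpha-band tone are assumptions for illustration) is turned into the kind of 2-D time-frequency image the network consumes:

```python
import numpy as np
from scipy.signal import stft

# One conventional 30 s sleep epoch of single-channel "EEG" at 100 Hz
# (sampling rate and the 10 Hz alpha-band tone are illustrative).
fs = 100
t = np.arange(30 * fs) / fs
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# STFT magnitude -> the 2-D time-frequency image fed to the network.
f, seg_t, Zxx = stft(eeg, fs=fs, nperseg=2 * fs)  # 2 s windows, 0.5 Hz bins
tf_image = np.abs(Zxx)

peak_bin = f[np.argmax(tf_image.mean(axis=1))]    # dominant frequency bin
```

Each modality (EEG, ECG, EOG, EMG) would yield such an image, stacked as input channels.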
Affiliation(s)
- Kailei Hu
- School of Electronic Information and Artificial Intelligence, Shaanxi University of Science and Technology, Xi'an 710021, P. R. China
- Jingxia Chen
- School of Electronic Information and Artificial Intelligence, Shaanxi University of Science and Technology, Xi'an 710021, P. R. China
- Pengwei Zhang
- School of Electronic Information and Artificial Intelligence, Shaanxi University of Science and Technology, Xi'an 710021, P. R. China
- Wen Xue
- School of Electronic Information and Artificial Intelligence, Shaanxi University of Science and Technology, Xi'an 710021, P. R. China
- Jia Xie
- School of Electronic Information and Artificial Intelligence, Shaanxi University of Science and Technology, Xi'an 710021, P. R. China
5
Zhao Z, Zhu J, Jiao P, Wang J, Zhang X, Lu X, Zhang Y. Hybrid-FHR: a multi-modal AI approach for automated fetal acidosis diagnosis. BMC Med Inform Decis Mak 2024; 24:19. [PMID: 38247009] [PMCID: PMC10801938] [DOI: 10.1186/s12911-024-02423-4]
Abstract
BACKGROUND In clinical medicine, fetal heart rate (FHR) monitoring using cardiotocography (CTG) is one of the most commonly used methods for assessing fetal acidosis. However, as the visual interpretation of CTG depends on the subjective judgment of the clinician, this has led to high inter-observer and intra-observer variability, making it necessary to introduce automated diagnostic techniques. METHODS In this study, we propose a computer-aided diagnostic algorithm (Hybrid-FHR) for fetal acidosis to assist physicians in making objective decisions and taking timely interventions. Hybrid-FHR uses multi-modal features, including one-dimensional FHR signals and three types of expert features designed based on prior knowledge (morphological time domain, frequency domain, and nonlinear). To extract the spatiotemporal feature representation of one-dimensional FHR signals, we designed a multi-scale squeeze-and-excitation temporal convolutional network (SE-TCN) backbone model based on dilated causal convolution, which can effectively capture the long-term dependence of FHR signals by expanding the receptive field of each layer's convolution kernel while maintaining a relatively small parameter size. In addition, we proposed a cross-modal feature fusion (CMFF) method that uses multi-head attention mechanisms to explore the relationships between different modalities, obtaining more informative feature representations and improving diagnostic accuracy. RESULTS Our ablation experiments show that Hybrid-FHR outperforms previous methods, with average accuracy, specificity, sensitivity, precision, and F1 score of 96.8%, 97.5%, 96.0%, 97.5%, and 96.7%, respectively. CONCLUSIONS Our algorithm enables automated CTG analysis, assisting healthcare professionals in the early identification of fetal acidosis and the prompt implementation of interventions.
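The receptive-field claim for dilated causal convolutions is easy to verify arithmetically: each layer contributes (kernel_size - 1) x dilation extra past samples, so doubling dilations grow the temporal context exponentially while parameters grow only linearly with depth. The kernel size and dilation schedule below are illustrative choices, not the paper's exact configuration:

```python
def tcn_receptive_field(kernel_size, dilations):
    """Receptive field of stacked dilated causal convolutions: each
    layer sees (kernel_size - 1) * dilation extra past samples."""
    return 1 + sum((kernel_size - 1) * d for d in dilations)

# Six layers with doubling dilations and kernel size 3 (illustrative):
# 1 + 2 * (1 + 2 + 4 + 8 + 16 + 32) = 127 past samples of context.
rf = tcn_receptive_field(kernel_size=3, dilations=[1, 2, 4, 8, 16, 32])
```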
Affiliation(s)
- Zhidong Zhao
- School of Cyberspace, Hangzhou Dianzi University, Hangzhou, China
- Jiawei Zhu
- College of Electronics and Information Engineering, Hangzhou Dianzi University, Hangzhou, China
- Pengfei Jiao
- School of Cyberspace, Hangzhou Dianzi University, Hangzhou, China
- Jinpeng Wang
- School of Cyberspace, Hangzhou Dianzi University, Hangzhou, China
- Xiaohong Zhang
- College of Electronics and Information Engineering, Hangzhou Dianzi University, Hangzhou, China
- Xinmiao Lu
- College of Electronics and Information Engineering, Hangzhou Dianzi University, Hangzhou, China
- Yefei Zhang
- School of Cyberspace, Hangzhou Dianzi University, Hangzhou, China
6
Wang Z, Zhang L, Shu X, Wang Y, Feng Y. Consistent representation via contrastive learning for skin lesion diagnosis. Comput Methods Programs Biomed 2023; 242:107826. [PMID: 37837885] [DOI: 10.1016/j.cmpb.2023.107826]
Abstract
BACKGROUND Skin lesions are a prevalent ailment, with melanoma a particularly perilous variant. Encouragingly, artificial intelligence displays promising potential in early detection, yet its integration within clinical contexts, particularly involving multi-modal data, presents challenges. While multi-modal approaches enhance diagnostic efficacy, the influence of modal bias is often disregarded. METHODS In this investigation, a multi-modal feature learning technique termed "Contrast-based Consistent Representation Disentanglement" for dermatological diagnosis is introduced. This approach employs adversarial domain adaptation to disentangle features from distinct modalities, fostering a shared representation. Furthermore, a contrastive learning strategy is devised to incentivize the model to preserve uniformity in common lesion attributes across modalities. By emphasizing a uniform representation across modalities, the approach avoids reliance on supplementary data. RESULTS Assessment of the proposed technique on the 7-point criteria evaluation dataset yields an average accuracy of 76.1% on multi-classification tasks, surpassing state-of-the-art methods. The approach tackles modal bias, enabling a consistent representation of common lesion appearances that transcends modality boundaries. This study underscores the latent potential of multi-modal feature learning in dermatological diagnosis. CONCLUSION In summary, a multi-modal feature learning strategy is posited for dermatological diagnosis. This approach outperforms other state-of-the-art methods, underscoring its capacity to enhance diagnostic precision for skin lesions.
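The contrastive consistency idea can be sketched with a generic InfoNCE-style loss: embeddings of the same lesion seen under two modalities (e.g., clinical and dermoscopic images) are pulled together, while different lesions are pushed apart. This is a stand-in for the paper's strategy, not its exact loss; all names and the temperature are illustrative:

```python
import numpy as np

def consistency_loss(z_a, z_b, tau=0.1):
    """InfoNCE-style loss: matched rows (same lesion, two modalities)
    are pulled together, different lesions pushed apart. A generic
    stand-in for the paper's contrastive strategy."""
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = (z_a @ z_b.T) / tau                  # scaled cosine similarity
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_p)))        # matched pairs on diagonal

rng = np.random.default_rng(0)
clinical = rng.standard_normal((8, 16))           # 8 lesions, 16-d embeddings
dermoscopic = clinical + 0.01 * rng.standard_normal((8, 16))
aligned = consistency_loss(clinical, dermoscopic)
mismatched = consistency_loss(clinical, rng.standard_normal((8, 16)))
```

A low loss indicates that common lesion attributes are represented consistently across the two modalities, which is the uniformity the abstract incentivizes.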
Affiliation(s)
- Zizhou Wang
- College of Computer Science, Sichuan University, Chengdu 610065, China; Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore 138632, Singapore
- Lei Zhang
- College of Computer Science, Sichuan University, Chengdu 610065, China
- Xin Shu
- College of Computer Science, Sichuan University, Chengdu 610065, China
- Yan Wang
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore 138632, Singapore
- Yangqin Feng
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore 138632, Singapore
7
Wang AQ, Yu EM, Dalca AV, Sabuncu MR. A robust and interpretable deep learning framework for multi-modal registration via keypoints. Med Image Anal 2023; 90:102962. [PMID: 37769550] [PMCID: PMC10591968] [DOI: 10.1016/j.media.2023.102962]
Abstract
We present KeyMorph, a deep learning-based image registration framework that relies on automatically detecting corresponding keypoints. State-of-the-art deep learning methods for registration are often not robust to large misalignments, are not interpretable, and do not incorporate the symmetries of the problem. In addition, most models produce only a single prediction at test time. Our core insight, which addresses these shortcomings, is that corresponding keypoints between images can be used to obtain the optimal transformation via a differentiable closed-form expression. We use this observation to drive end-to-end learning of keypoints tailored to the registration task, without knowledge of ground-truth keypoints. This framework not only leads to substantially more robust registration but also yields better interpretability, since the keypoints reveal which parts of the image drive the final alignment. Moreover, KeyMorph can be designed to be equivariant under image translations and/or symmetric with respect to the input image ordering. Finally, we show how multiple deformation fields corresponding to different transformation variants can be computed efficiently and in closed form at test time. We demonstrate the proposed framework on 3D affine and spline-based registration of multi-modal brain MRI scans. In particular, we show registration accuracy that surpasses current state-of-the-art methods, especially for large displacements. Our code is available at https://github.com/alanqrwang/keymorph.
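The closed-form step the abstract refers to is, in the affine case, a linear least-squares solve mapping matched keypoints from one image onto the other, which is differentiable and therefore usable inside end-to-end training. A NumPy sketch of that step alone, outside any network (point counts and the example transform are illustrative):

```python
import numpy as np

def closed_form_affine(src, dst):
    """Optimal affine map src -> dst for matched 3-D keypoints (N, 3),
    via a differentiable linear least-squares solve -- the closed-form
    step the framework's keypoint learning is built around."""
    X = np.hstack([src, np.ones((src.shape[0], 1))])   # homogeneous coords
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)        # (4, 3) parameters
    return A

rng = np.random.default_rng(0)
src = rng.standard_normal((10, 3))                     # detected keypoints
true_A = np.vstack([np.eye(3) * 1.1, [0.5, -0.2, 0.3]])  # scale + shift
dst = np.hstack([src, np.ones((10, 1))]) @ true_A      # matched keypoints
A_hat = closed_form_affine(src, dst)
```

Because the fit is exact whenever the correspondences are consistent with a single affine map, the recovered parameters match the generating transform.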
Affiliation(s)
- Alan Q Wang
- School of Electrical and Computer Engineering, Cornell University and Cornell Tech, New York, NY 10044, USA; Department of Radiology, Weill Cornell Medical School, New York, NY 10065, USA
- Evan M Yu
- Iterative Scopes, Cambridge, MA 02139, USA
- Adrian V Dalca
- Computer Science and Artificial Intelligence Lab at the Massachusetts Institute of Technology, Cambridge, MA 02139, USA; A.A. Martinos Center for Biomedical Imaging at the Massachusetts General Hospital, Charlestown, MA 02129, USA
- Mert R Sabuncu
- School of Electrical and Computer Engineering, Cornell University and Cornell Tech, New York, NY 10044, USA; Department of Radiology, Weill Cornell Medical School, New York, NY 10065, USA
8
Zhou S, Sun D, Mao W, Liu Y, Cen W, Ye L, Liang F, Xu J, Shi H, Ji Y, Wang L, Chang W. Deep radiomics-based fusion model for prediction of bevacizumab treatment response and outcome in patients with colorectal cancer liver metastases: a multicentre cohort study. EClinicalMedicine 2023; 65:102271. [PMID: 37869523] [PMCID: PMC10589780] [DOI: 10.1016/j.eclinm.2023.102271]
Abstract
Background Accurate tumour response prediction to targeted therapy allows for personalised conversion therapy for patients with unresectable colorectal cancer liver metastases (CRLM). In this study, we aimed to develop and validate a multi-modal deep learning model to predict the efficacy of bevacizumab in patients with initially unresectable CRLM using baseline PET/CT, clinical data, and colonoscopy biopsy specimens. Methods In this multicentre cohort study, we retrospectively collected data of 307 patients with CRLM from the BECOME study (NCT01972490) (Zhongshan Hospital of Fudan University, Shanghai) and two independent Chinese cohorts (internal validation cohort from January 1, 2018 to December 31, 2018 at Zhongshan Hospital of Fudan University; external validation cohort from January 1, 2020 to December 31, 2020 at Zhongshan Hospital-Xiamen, Shanghai, and the First Hospital of Wenzhou Medical University, Wenzhou). The main inclusion criteria were that patients with CRLM had pre-treatment PET/CT images as well as colonoscopy specimens. After extracting PET/CT features with deep neural networks (DNN) and selecting related clinical factors using LASSO analysis, a random forest classifier was built as the Deep Radiomics Bevacizumab efficacy predicting model (DERBY). Furthermore, by combining histopathological biomarkers into DERBY, we established DERBY+. The performance of the models was evaluated using area under the curve (AUC), sensitivity, specificity, positive predictive value, and negative predictive value. Findings DERBY achieved promising performance in predicting bevacizumab sensitivity, with an AUC of 0.77 (95% confidence interval (CI) 0.67-0.87). After combining histopathological features, we developed DERBY+, which showed more robust accuracy for predicting tumour response in the external validation cohort (AUC 0.83, 95% CI 0.75-0.92; sensitivity 80.4%, specificity 76.8%). DERBY+ also had prognostic value: responders had longer progression-free survival (median: 9.6 vs 6.3 months, p = 0.002) and overall survival (median: 27.6 vs 18.5 months, p = 0.010) than non-responders. Interpretation This multi-modal deep radiomics model, using PET/CT, clinical data and histopathological data, was able to identify patients with bevacizumab-sensitive CRLM, providing a favourable approach for precise patient treatment. To further validate and explore the clinical impact of this work, future prospective studies with larger patient cohorts are warranted. Funding The National Natural Science Foundation of China; Fujian Provincial Health Commission Project; Xiamen Science and Technology Agency Program; Clinical Research Plan of SHDC; Shanghai Science and Technology Committee Project; Zhejiang Provincial Natural Science Foundation of China; and National Science Foundation of Xiamen.
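The DERBY pipeline shape — deep image features, LASSO-selected clinical factors, and a random forest on the fused vector — can be sketched on synthetic data. Everything below (feature dimensions, sample counts, the synthetic label) is an illustrative assumption; only the pipeline structure follows the abstract:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n = 200
deep_feats = rng.standard_normal((n, 32))   # stand-in for DNN PET/CT features
clinical = rng.standard_normal((n, 10))     # stand-in clinical factors
# Synthetic "response" label tied to one imaging and one clinical factor.
y = (deep_feats[:, 0] + clinical[:, 0] > 0).astype(int)

# LASSO-based selection of related clinical factors, as in the pipeline.
lasso = LassoCV(cv=5).fit(clinical, y.astype(float))
selected = clinical[:, np.abs(lasso.coef_) > 1e-6]

# Random-forest classifier on the fused deep + selected clinical features.
fused = np.hstack([deep_feats, selected])
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(fused[:150], y[:150])
acc = clf.score(fused[150:], y[150:])
```

DERBY+ would further concatenate histopathological biomarker features into `fused` before training.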
Affiliation(s)
- Shizhao Zhou
- Department of General Surgery, Department of Colorectal Surgery, Zhongshan Hospital, Fudan University, Shanghai, 200032, China
- Cancer Center, Zhongshan Hospital, Fudan University, Shanghai, 200032, China
- Dazhen Sun
- Department of Automation, Shanghai Jiao Tong University, Shanghai, 200240, China
- Wujian Mao
- Department of Nuclear Medicine, Zhongshan Hospital, Fudan University, Shanghai, 200032, China
- Yu Liu
- Department of General Surgery, Department of Colorectal Surgery, Zhongshan Hospital, Fudan University, Shanghai, 200032, China
- Cancer Center, Zhongshan Hospital, Fudan University, Shanghai, 200032, China
- Wei Cen
- Department of Surgery, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, Zhejiang, 325000, China
- Lechi Ye
- Department of Surgery, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, Zhejiang, 325000, China
- Fei Liang
- Department of Biostatistics, Zhongshan Hospital, Fudan University, Shanghai, China
- Jianmin Xu
- Department of General Surgery, Department of Colorectal Surgery, Zhongshan Hospital, Fudan University, Shanghai, 200032, China
- Cancer Center, Zhongshan Hospital, Fudan University, Shanghai, 200032, China
- Hongcheng Shi
- Department of Nuclear Medicine, Zhongshan Hospital, Fudan University, Shanghai, 200032, China
- Yuan Ji
- Department of Pathology, Zhongshan Hospital, Fudan University, Shanghai, 200032, China
- Lisheng Wang
- Department of Automation, Shanghai Jiao Tong University, Shanghai, 200240, China
- Wenju Chang
- Department of General Surgery, Department of Colorectal Surgery, Zhongshan Hospital, Fudan University, Shanghai, 200032, China
- Cancer Center, Zhongshan Hospital, Fudan University, Shanghai, 200032, China
- Department of General Surgery, Zhongshan Hospital (Xiamen Branch), Fudan University, Xiamen, Fujian, 361015, China
9
Zhou J, Guo H, Chen H. [Deep learning method for magnetic resonance imaging fluid-attenuated inversion recovery image synthesis]. Sheng Wu Yi Xue Gong Cheng Xue Za Zhi 2023; 40:903-911. [PMID: 37879919] [PMCID: PMC10600433] [DOI: 10.7507/1001-5515.202302012]
Abstract
Magnetic resonance imaging (MRI) can obtain multi-modal images with different contrasts, which provides rich information for clinical diagnosis. However, some contrast images are not scanned, or the quality of the acquired images cannot meet diagnostic requirements, owing to difficulties with patient cooperation or limitations of the scanning conditions. Image synthesis techniques have become a way to compensate for such missing images. In recent years, deep learning has been widely used in the field of MRI synthesis. This paper proposes a synthesis network based on multi-modal fusion, which first uses a feature encoder to encode the features of multiple unimodal images separately, then fuses the features of the different modal images through a feature fusion module, and finally generates the target modal image. The similarity measure between the target image and the predicted image is improved by introducing a dynamically weighted combined loss function based on the spatial domain and the k-space domain. Experimental validation and quantitative comparison show that the proposed multi-modal fusion deep learning network can effectively synthesize high-quality MRI fluid-attenuated inversion recovery (FLAIR) images. In summary, the proposed method can reduce patients' MRI scanning time and address the clinical problem of FLAIR images that are missing or of insufficient quality for diagnosis.
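The combined spatial-domain and k-space loss can be sketched directly: an L1 term on the image plus an L1 term on its 2-D Fourier transform. The fixed `alpha` here stands in for the paper's dynamic weighting, which this sketch does not reproduce:

```python
import numpy as np

def dual_domain_loss(pred, target, alpha=0.5):
    """L1 error combined across the spatial domain and k-space (2-D FFT).
    `alpha` stands in for the paper's dynamic weighting, kept fixed here."""
    spatial = np.mean(np.abs(pred - target))
    kspace = np.mean(np.abs(np.fft.fft2(pred) - np.fft.fft2(target)))
    return alpha * spatial + (1.0 - alpha) * kspace

rng = np.random.default_rng(0)
flair = rng.random((64, 64))                 # stand-in for a target FLAIR slice
loss_same = dual_domain_loss(flair, flair)   # identical images -> zero loss
loss_off = dual_domain_loss(flair + 0.1, flair)
```

Penalizing k-space as well as image intensities encourages the synthesized FLAIR image to match the target's frequency content, not just its pixel values.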
Affiliation(s)
- Jianing Zhou
- School of Electrical Engineering, Shenyang University of Technology, Shenyang 110870, P. R. China
- Hongyu Guo
- School of Electrical Engineering, Shenyang University of Technology, Shenyang 110870, P. R. China
- Neusoft Medical System Co. Ltd, Shenyang 110167, P. R. China
- Hong Chen
- School of Electrical Engineering, Shenyang University of Technology, Shenyang 110870, P. R. China
10
Maghfira TN, Krisnadhi AA, Basaruddin T, Pudjiati SRR. The Indonesian Young-Adult Attachment (IYAA): An audio-video dataset for behavioral young-adult attachment assessment. Data Brief 2023; 50:109599. [PMID: 37780464] [PMCID: PMC10539883] [DOI: 10.1016/j.dib.2023.109599]
Abstract
The attachment system is an innate human instinct to gain a sense of security as a form of self-defense from threats. Adults with secure attachment can maintain the balance of their relationships with themselves and significant others such as parents, romantic partners, and close friends. Generally, adult attachment assessment data are collected primarily from subjective responses through questionnaires or interviews, which are closed to the research community. Attachment assessment from behavioral traits has also not been studied in depth, because attachment-related behavioral data are still not openly available for research. This limits the scope of attachment assessment with respect to new alternatives such as machine learning and deep learning-based approaches. This paper presents the Indonesian Young-Adult Attachment (IYAA) dataset, a facial expression and speech audio dataset of Indonesian young adults undergoing a projective attachment-based assessment. The assessment contains two stages: exposure to, and response to, 14 attachment-based stimuli. IYAA covers subjects aged 18-29 years (20 male, 67 female) and contains 1216 exposure videos, 1217 response videos, and 1217 speech response audios. Durations vary: exposure videos range from 25 seconds to 1 minute 39 seconds, while response videos and speech response audios range from 40 seconds to 8 minutes 25 seconds. The IYAA dataset carries two kinds of labels: emotion and attachment. First, emotion labels are annotated per stimulus for all subject data (exposure videos, response videos, speech response audios); each recording receives one or more of eight basic emotion categories (neutral, happy, sad, contempt, anger, disgust, surprised, fear), since each attachment-related event involves unconscious mental processes characterized by emotional changes. Second, each subject is annotated with one of three attachment style labels: secure, insecure-anxious, and insecure-avoidant. Given these two kinds of labels, the IYAA dataset supports several research purposes, using either kind separately or both together for attachment classification research. It also supports innovative approaches to automatic attachment classification through collaboration between the study of Behavioral, Developmental, and Social Psychology and Social Signal Processing.
Affiliation(s)
- T. Basaruddin
- Computer Science Department, Universitas Indonesia, Depok 16424, Indonesia
11
Seada SA, van der Eerden AW, Boon AJW, Hernandez-Tamames JA. Quantitative MRI protocol and decision model for a 'one stop shop' early-stage Parkinsonism diagnosis: Study design. Neuroimage Clin 2023; 39:103506. [PMID: 37696098] [PMCID: PMC10500558] [DOI: 10.1016/j.nicl.2023.103506]
Abstract
Differentiating among early-stage parkinsonisms is a challenge in clinical practice. Quantitative MRI can aid the diagnostic process, but studies using a single MRI technique have had limited success thus far. Our objective is to develop a multi-modal MRI method for this purpose. In this review we describe existing methods and present a dedicated quantitative MRI protocol, a decision model, and a study design to validate our approach ahead of a pilot study. We present example imaging data from patients and a healthy control, which are consistent with the related literature.
Affiliation(s)
- Samy Abo Seada
- Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Anke W van der Eerden
- Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Agnita J W Boon
- Department of Neurology, Erasmus MC, Rotterdam, The Netherlands
- Juan A Hernandez-Tamames
- Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands; Department of Imaging Physics, TU Delft, The Netherlands.
12
Tang W, Zhang M, Xu C, Shao Y, Tang J, Gong S, Dong H, Sheng M. Diagnostic efficiency of multi-modal MRI based deep learning with Sobel operator in differentiating benign and malignant breast mass lesions-a retrospective study. PeerJ Comput Sci 2023; 9:e1460. [PMID: 37547396 PMCID: PMC10403185 DOI: 10.7717/peerj-cs.1460] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2022] [Accepted: 06/06/2023] [Indexed: 08/08/2023]
Abstract
Purpose To compare the diagnostic efficiencies of single-modal and multi-modal deep learning for the classification of benign and malignant breast mass lesions. Methods We retrospectively collected data from 203 patients (207 lesions: 101 benign, 106 malignant) with breast tumors who underwent breast magnetic resonance imaging (MRI) before surgery or biopsy between January 2014 and October 2020. Mass segmentation was performed based on the three-dimensional region of interest (3D-ROI) minimum bounding cube at the edge of the lesion. We established single-modal models based on a convolutional neural network (CNN), including T2WI, non-fs T1WI, and dynamic contrast-enhanced (DCE-MRI) phases, where the first phase was pre-contrast T1WI (d1) and Phases 2, 4, and 6 were post-contrast T1WI (d2, d4, d6), as well as multi-modal fusion models with a Sobel operator (four modalities: T2WI, non-fs-T1WI, d1, d2). Data were split into training (n = 145), validation (n = 22), and test (n = 40) sets, and five-fold cross-validation was performed. Accuracy, sensitivity, specificity, negative predictive value, positive predictive value, and area under the ROC curve (AUC) were used as evaluation indicators. DeLong's test was used to compare the diagnostic performance of the multi-modal and single-modal models. Results All models showed good performance, with AUC values all greater than 0.750. Among the single-modal models, T2WI, non-fs-T1WI, d1, and d2 had specificities of 77.1%, 77.2%, 80.2%, and 78.2%, respectively. d2 had the highest accuracy of 78.5% and showed the best diagnostic performance with an AUC of 0.827. The multi-modal model with the Sobel operator performed better than the single-modal models, with an AUC of 0.887, sensitivity of 79.8%, specificity of 86.1%, and positive predictive value of 85.6%.
DeLong's test showed that the diagnostic performance of the multi-modal fusion model was higher than that of each of the six single-modal models (T2WI, non-fs-T1WI, d1, d2, d4, d6); the differences were statistically significant (p = 0.043, 0.017, 0.006, 0.017, 0.020, 0.004, respectively; all less than 0.05). Conclusions Multi-modal fusion deep learning models with a Sobel operator had excellent diagnostic value in the classification of breast masses and can further increase diagnostic efficiency.
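As a rough illustration of the role of the Sobel operator in such fusion models, the sketch below computes a gradient-magnitude edge map and stacks it with the original image as an extra input channel. This is a generic sketch under assumed toy data, not the authors' implementation:

```python
import numpy as np

def sobel_edges(img):
    """Compute a Sobel gradient-magnitude map for a 2D image
    (illustrative edge channel for a fusion model's input)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal gradient
    ky = kx.T                                                         # vertical gradient
    h, w = img.shape
    padded = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)  # gradient magnitude

# Toy "image" with a vertical step edge, standing in for an MRI slice.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = sobel_edges(img)

# Stack the edge map with the original modality as extra input channels.
fused_input = np.stack([img, edges], axis=0)  # shape (channels, H, W)
```

In practice a CNN would receive such multi-channel stacks built from the different MRI modalities plus their edge maps.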
Affiliation(s)
- Weixia Tang
- Radiology Department, Affiliated Hospital 2 of Nantong University, Nantong First People’s Hospital, NanTong, Jiangsu, China
- Ming Zhang
- Radiology Department, Affiliated Hospital 2 of Nantong University, Nantong First People’s Hospital, NanTong, Jiangsu, China
- Changyan Xu
- School of Transportation and Civil Engineering, Nantong University, Nantong, China
- Yeqin Shao
- School of Transportation and Civil Engineering, Nantong University, Nantong, China
- Jiahuan Tang
- Radiology Department, Affiliated Hospital 2 of Nantong University, Nantong First People’s Hospital, NanTong, Jiangsu, China
- Shenchu Gong
- Radiology Department, Affiliated Hospital 2 of Nantong University, Nantong First People’s Hospital, NanTong, Jiangsu, China
- Hao Dong
- Department of Research Collaboration, R&D Center, Beijing Deepwise & League of PHD Technology Co., Ltd., Beijing, China
- Meihong Sheng
- Radiology Department, Affiliated Hospital 2 of Nantong University, Nantong First People’s Hospital, NanTong, Jiangsu, China
13
Lu S, Liu M, Yin L, Yin Z, Liu X, Zheng W. The multi-modal fusion in visual question answering: a review of attention mechanisms. PeerJ Comput Sci 2023; 9:e1400. [PMID: 37346665 PMCID: PMC10280591 DOI: 10.7717/peerj-cs.1400] [Citation(s) in RCA: 38] [Impact Index Per Article: 38.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2022] [Accepted: 04/25/2023] [Indexed: 06/23/2023]
Abstract
Visual Question Answering (VQA) is a significant cross-disciplinary problem in computer vision and natural language processing that requires a computer to output a natural language answer given a picture and a question posed about it. This requires multimodal fusion of text and visual features, and the key component that can ensure its success is the attention mechanism. Introducing attention mechanisms allows text and image features to be better integrated into a compact multi-modal representation. It is therefore necessary to clarify the development status of attention mechanisms, understand the most advanced attention mechanism methods, and look toward future development directions. In this article, we first conduct a bibliometric analysis with CiteSpace, from which we find, and reasonably speculate, that the attention mechanism has great development potential in cross-modal retrieval. Second, we discuss the classification and application of existing attention mechanisms in VQA tasks, analyze their shortcomings, and summarize current improvement methods. Finally, through this continued exploration of attention mechanisms, we believe that VQA will evolve in a smarter, more human-like direction.
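A minimal sketch of the attention-based multi-modal fusion surveyed here: a question embedding attends over image-region features via scaled dot-product attention, and the attended visual feature is concatenated with the text feature into a compact joint representation. Dimensions and data are illustrative assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, keys, values):
    """Scaled dot-product attention: the question attends over image regions."""
    d = query.shape[-1]
    scores = query @ keys.T / np.sqrt(d)   # (1, n_regions) compatibility scores
    weights = softmax(scores, axis=-1)     # normalized attention over regions
    return weights @ values, weights       # attended visual feature

rng = np.random.default_rng(0)
q = rng.normal(size=(1, 16))          # question embedding (toy)
regions = rng.normal(size=(5, 16))    # 5 image-region features (toy)

attended, w = cross_attention(q, regions, regions)
fused = np.concatenate([q, attended], axis=-1)  # compact multi-modal representation
```

Real VQA models add learned projections, multiple heads, and co-attention in both directions, but the fusion principle is the same.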
Affiliation(s)
- Siyu Lu
- School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan Province, China
- Mingzhe Liu
- School of Data Science and Artificial Intelligence, Wenzhou University of Technology, Wenzhou, China
- Lirong Yin
- Department of Geography and Anthropology, Louisiana State University, Baton Rouge, LA, United States of America
- Zhengtong Yin
- College of Resource and Environment Engineering, Guizhou University, Guiyang, China
- Xuan Liu
- School of Public Affairs and Administration, University of Electronic Science and Technology of China, Chengdu, China
- Wenfeng Zheng
- School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan Province, China
14
Liu S, Liang W, Huang P, Chen D, He Q, Ning Z, Zhang Y, Xiong W, Yu J, Chen T. Multi-modal analysis for accurate prediction of preoperative stage and indications of optimal treatment in gastric cancer. Radiol Med 2023; 128:509-519. [PMID: 37115392 DOI: 10.1007/s11547-023-01625-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/14/2022] [Accepted: 03/27/2023] [Indexed: 04/29/2023]
Abstract
BACKGROUND Accurate preoperative clinical staging of gastric cancer helps determine therapeutic strategies. However, no multi-category grading models for gastric cancer have been established. This study aimed to develop multi-modal (CT/EHR) artificial intelligence (AI) models for predicting tumor stage and the indication of optimal treatment based on preoperative CT images and electronic health records (EHRs) in patients with gastric cancer. METHODS This retrospective study enrolled 602 patients with a pathological diagnosis of gastric cancer from Nanfang Hospital and divided them into training (n = 452) and validation (n = 150) sets. A total of 1326 features were used: 1316 radiomic features extracted from the 3D CT images and 10 clinical parameters obtained from the EHRs. Four multi-layer perceptrons (MLPs), whose input was the combination of radiomic features and clinical parameters, were automatically learned with a neural architecture search (NAS) strategy. RESULTS Two two-layer MLPs identified by the NAS approach and employed to predict tumor stage showed greater discrimination, with average ACC values of 0.646 for the five T stages and 0.838 for the four N stages, than traditional methods with ACCs of 0.543 (P = 0.034) and 0.468 (P = 0.021), respectively. Furthermore, our models achieved high prediction accuracy for the indications of endoscopic resection and preoperative neoadjuvant chemotherapy, with AUC values of 0.771 and 0.661, respectively. CONCLUSIONS Our multi-modal (CT/EHR) AI models generated with the NAS approach predict tumor stage and the optimal treatment regimen and timing with high accuracy, which could help radiologists and gastroenterologists improve diagnostic and treatment efficiency.
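The multi-modal input construction described above, 1316 radiomic features concatenated with 10 clinical parameters and fed to a two-layer MLP, can be sketched as follows. Weights and data are random for illustration; the actual architectures were found by NAS:

```python
import numpy as np

def mlp_forward(x, w1, b1, w2, b2):
    """Two-layer MLP: ReLU hidden layer, softmax over T-stage classes."""
    h = np.maximum(0.0, x @ w1 + b1)                       # hidden layer
    logits = h @ w2 + b2
    e = np.exp(logits - logits.max(axis=1, keepdims=True)) # stable softmax
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
radiomic = rng.normal(size=(4, 1316))  # CT radiomic features (toy values)
clinical = rng.normal(size=(4, 10))    # EHR clinical parameters (toy values)
x = np.concatenate([radiomic, clinical], axis=1)  # 1326 inputs per patient

# Illustrative layer sizes; NAS would choose these per task.
w1 = rng.normal(scale=0.01, size=(1326, 64)); b1 = np.zeros(64)
w2 = rng.normal(scale=0.01, size=(64, 5));    b2 = np.zeros(5)  # five T stages

probs = mlp_forward(x, w1, b1, w2, b2)  # per-patient class probabilities
```

A second MLP of the same shape with a four-way output head would handle the N stages.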
Affiliation(s)
- Shangqing Liu
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China
- Weiqi Liang
- Department of General Surgery and Guangdong Provincial Key Laboratory of Precision Medicine for Gastrointestinal Tumor, Nanfang Hospital, The First School of Clinical Medicine, Southern Medical University, Guangzhou, 510515, Guangdong, China
- Pinyu Huang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China
- Dianjie Chen
- Department of General Surgery and Guangdong Provincial Key Laboratory of Precision Medicine for Gastrointestinal Tumor, Nanfang Hospital, The First School of Clinical Medicine, Southern Medical University, Guangzhou, 510515, Guangdong, China
- Qinglie He
- Department of General Surgery and Guangdong Provincial Key Laboratory of Precision Medicine for Gastrointestinal Tumor, Nanfang Hospital, The First School of Clinical Medicine, Southern Medical University, Guangzhou, 510515, Guangdong, China
- Zhenyuan Ning
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China
- Yu Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China
- Wei Xiong
- Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Jiang Yu
- Department of General Surgery and Guangdong Provincial Key Laboratory of Precision Medicine for Gastrointestinal Tumor, Nanfang Hospital, The First School of Clinical Medicine, Southern Medical University, Guangzhou, 510515, Guangdong, China
- Tao Chen
- Department of General Surgery and Guangdong Provincial Key Laboratory of Precision Medicine for Gastrointestinal Tumor, Nanfang Hospital, The First School of Clinical Medicine, Southern Medical University, Guangzhou, 510515, Guangdong, China.
- Department of Gastrointestinal and Hernia Surgery, Ganzhou Hospital-Nanfang Hospital, Southern Medical University, Ganzhou, 341000, Jiangxi, China.
15
Ren ZH, You ZH, Zou Q, Yu CQ, Ma YF, Guan YJ, You HR, Wang XF, Pan J. DeepMPF: deep learning framework for predicting drug-target interactions based on multi-modal representation with meta-path semantic analysis. J Transl Med 2023; 21:48. [PMID: 36698208 PMCID: PMC9876420 DOI: 10.1186/s12967-023-03876-3] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2022] [Accepted: 01/05/2023] [Indexed: 01/26/2023] Open
Abstract
BACKGROUND Drug-target interaction (DTI) prediction has become a crucial prerequisite in drug design and drug discovery. However, traditional biological experiments are time-consuming and expensive, given the abundance of complex interactions across large genomic and chemical spaces. To alleviate this, many computational methods have been developed to complement biological experiments and narrow the search space to a preferred candidate domain. However, most previous approaches cannot fully exploit the semantic information of association behavior under multiple schemas to represent the complex structure of heterogeneous biological networks. Additionally, DTI prediction based on a single modality cannot satisfy the demand for prediction accuracy. METHODS We propose DeepMPF, a multi-modal representation framework based on meta-path semantic analysis, which effectively utilizes heterogeneous information to predict DTIs. Specifically, we first construct protein-drug-disease heterogeneous networks composed of three entity types. Feature information is then obtained under three views: sequence modality, heterogeneous structure modality, and similarity modality. We designed six representative meta-path schemas to preserve the high-order nonlinear structure and capture hidden structural information of the heterogeneous network. Finally, DeepMPF generates highly representative comprehensive feature descriptors and calculates the probability of interaction through joint learning. RESULTS To evaluate the predictive performance of DeepMPF, comparison experiments were conducted on four gold-standard datasets. Our method obtains competitive performance on all datasets. We also explore the influence of different feature embedding dimensions, learning strategies, and classification methods.
Meaningfully, drug repositioning experiments on COVID-19 and HIV demonstrate that DeepMPF can be applied to real-world problems and aid drug discovery. Further molecular docking experiments enhance the credibility of the drug candidates predicted by DeepMPF. CONCLUSIONS All results demonstrate the effective predictive capability of DeepMPF for drug-target interactions. It can be used as a tool to prescreen the most promising drug candidates for a protein. The web server of the DeepMPF predictor is freely available at http://120.77.11.78/DeepMPF/ to support further study by interested researchers.
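To make the meta-path idea concrete, the sketch below enumerates instances of a typed meta-path (e.g. drug-protein-disease) over a toy heterogeneous network. The edge data and schema here are illustrative assumptions, not DeepMPF's actual six schemas:

```python
def metapath_instances(edges, metapath, start):
    """Enumerate concrete paths that follow a typed meta-path.

    edges: dict mapping (src_type, dst_type) -> set of (src, dst) node pairs.
    metapath: sequence of node types, e.g. ("drug", "protein", "disease").
    start: the starting node (of type metapath[0]).
    """
    paths = [[start]]
    for src_type, dst_type in zip(metapath, metapath[1:]):
        adj = edges.get((src_type, dst_type), set())
        nxt = []
        for p in paths:
            for src, dst in adj:
                if src == p[-1]:          # extend paths whose tail matches the edge
                    nxt.append(p + [dst])
        paths = nxt
    return paths

# Toy protein-drug-disease heterogeneous network (hypothetical identifiers).
edges = {
    ("drug", "protein"): {("d1", "p1"), ("d1", "p2")},
    ("protein", "disease"): {("p1", "covid"), ("p2", "hiv")},
}
paths = metapath_instances(edges, ("drug", "protein", "disease"), "d1")
```

Counting or embedding such path instances is what lets a model capture the association-behavior semantics of the heterogeneous network.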
Affiliation(s)
- Zhong-Hao Ren
- School of Information Engineering, Xijing University, Xi’an, 710100 China
- Zhu-Hong You
- School of Computer Science, Northwestern Polytechnical University, Xi’an, 710129 China
- Quan Zou
- Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, Chengdu, 610054 China
- Chang-Qing Yu
- School of Information Engineering, Xijing University, Xi’an, 710100 China
- Yan-Fang Ma
- Department of Galactophore, The Third People’s Hospital of Gansu Province, Lanzhou, 730020 China
- Yong-Jian Guan
- School of Information Engineering, Xijing University, Xi’an, 710100 China
- Hai-Ru You
- School of Computer Science, Northwestern Polytechnical University, Xi’an, 710129 China
- Xin-Fei Wang
- School of Information Engineering, Xijing University, Xi’an, 710100 China
- Jie Pan
- School of Information Engineering, Xijing University, Xi’an, 710100 China
16
Thati RP, Dhadwal AS, Kumar P, P S. A novel multi-modal depression detection approach based on mobile crowd sensing and task-based mechanisms. Multimed Tools Appl 2023; 82:4787-4820. [PMID: 35431608 PMCID: PMC9000000 DOI: 10.1007/s11042-022-12315-2] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/31/2021] [Revised: 09/20/2021] [Accepted: 01/17/2022] [Indexed: 05/05/2023]
Abstract
Depression has become a global concern, and COVID-19 has caused a large surge in its incidence. Broadly, there are two primary methods of detecting depression: task-based and Mobile Crowd Sensing (MCS) based methods. When integrated, these two approaches can complement each other. This paper proposes a novel approach for depression detection that combines real-time MCS and task-based mechanisms. We design an end-to-end machine learning pipeline involving multimodal data collection, feature extraction, feature selection, fusion, and classification to distinguish between depressed and non-depressed subjects. For this purpose, we created a real-world dataset of depressed and non-depressed subjects. We experimented with various features from multiple modalities, feature selection techniques, fused features, and machine learning classifiers such as Logistic Regression and Support Vector Machines (SVM) for classification. Our findings suggest that combining features from multiple modalities performs better than any single data modality, and the best classification accuracy is achieved when features from all three data modalities are fused. A feature selection method based on Pearson's correlation coefficient improved accuracy compared with other methods, and SVM yielded the best accuracy of 86%. Our proposed approach was also applied to a benchmark dataset, where results demonstrated that the multimodal approach performs favorably against state-of-the-art depression recognition techniques.
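Pearson-based feature selection of the kind reported above can be sketched as ranking features by the absolute correlation of each column with the label and keeping the top k. The synthetic data below is an assumption for illustration:

```python
import numpy as np

def pearson_select(X, y, k):
    """Rank features by |Pearson r| with the label and keep the top k."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    r = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum()) + 1e-12)
    top = np.argsort(-np.abs(r))[:k]  # indices of the k most correlated features
    return top, r

rng = np.random.default_rng(2)
y = rng.integers(0, 2, size=100).astype(float)      # depressed vs non-depressed (toy)
noise = rng.normal(size=(100, 9))                   # uninformative features
informative = y[:, None] + 0.1 * rng.normal(size=(100, 1))  # label-correlated feature
X = np.concatenate([noise, informative], axis=1)    # feature index 9 is informative

top, r = pearson_select(X, y, k=3)  # selected feature indices and correlations
```

The selected columns would then be passed (possibly after fusion with other modalities) to a classifier such as an SVM.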
Affiliation(s)
- Ravi Prasad Thati
- Department of Computer Science and Engineering, Visvesvaraya National Institute of Technology, South Ambazari Road, Nagpur, 440010 Maharashtra India
- Abhishek Singh Dhadwal
- Department of Computer Science and Engineering, Visvesvaraya National Institute of Technology, South Ambazari Road, Nagpur, 440010 Maharashtra India
- Praveen Kumar
- Department of Computer Science and Engineering, Visvesvaraya National Institute of Technology, South Ambazari Road, Nagpur, 440010 Maharashtra India
- Sainaba P
- Department of Applied Psychology, Central University of Tamil Nadu, Tamilnadu, India
17
Ozawa S, Cenani A, Sanchez-Migallon Guzman Lv D. Treatment of Pain in Rabbits. Vet Clin North Am Exot Anim Pract 2023; 26:201-227. [PMID: 36402482 DOI: 10.1016/j.cvex.2022.09.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Rabbits occupy facets of veterinary medicine spanning companion mammals, wildlife medicine, zoologic species, and research models. Therefore, analgesia is required for a variety of conditions in rabbits and is a critical component of patient care. Considerations when selecting an analgesic protocol in rabbits include timing of administration, route of administration, degree of anticipated pain, ability to access or use controlled drugs, systemic health, and any potential side effects. This review focuses on pharmacologic and locoregional management of pain in rabbits and emphasizes the need for further studies on pain management in this species.
Affiliation(s)
- Sarah Ozawa
- Department of Clinical Sciences, College of Veterinary Medicine, North Carolina State University, 1060 Williams Moore Dr, Raleigh, NC 27606, USA.
- Alessia Cenani
- Department of Surgical and Radiographical Sciences, School of Veterinary Medicine University of California Davis, One Shields Avenue, Davis, CA 95616, USA
- David Sanchez-Migallon Guzman Lv
- Department of Medicine and Epidemiology, School of Veterinary Medicine University of California Davis, One Shields Avenue, Davis, CA 95616, USA
18
Gao Y, Fu X, Chen Y, Guo C, Wu J. Post-pandemic healthcare for COVID-19 vaccine: Tissue-aware diagnosis of cervical lymphadenopathy via multi-modal ultrasound semantic segmentation. Appl Soft Comput 2023; 133:109947. [PMID: 36570119 PMCID: PMC9762098 DOI: 10.1016/j.asoc.2022.109947] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2022] [Revised: 11/30/2022] [Accepted: 12/15/2022] [Indexed: 12/24/2022]
Abstract
With the widespread deployment of COVID-19 vaccines around the world, billions of people have benefited from vaccination and thereby avoided infection. However, a large number of clinical cases have revealed diverse side effects of COVID-19 vaccines, among which cervical lymphadenopathy is one of the most frequent local reactions. Rapid detection of cervical lymph nodes (LNs) is therefore essential for vaccine recipients' healthcare and for avoiding misdiagnosis in the post-pandemic era. This paper presents a novel deep learning-based framework for the rapid diagnosis of cervical lymphadenopathy in COVID-19 vaccine recipients. Existing deep learning-based computer-aided diagnosis (CAD) methods for cervical LN enlargement mostly depend on single-modal images, e.g., grayscale ultrasound (US), color Doppler ultrasound, or CT, and fail to effectively integrate information from multi-source medical images. Meanwhile, both the tissues surrounding the cervical LNs and the different regions inside them may carry valuable diagnostic knowledge that remains to be mined. In this paper, we propose a Tissue-Aware Cervical Lymph Node Diagnosis method (TACLND) via multi-modal ultrasound semantic segmentation. The method effectively integrates grayscale and color Doppler US images and realizes pixel-level localization of different tissue objects, i.e., lymph, muscle, and blood vessels. With inter-tissue and intra-tissue attention mechanisms applied, our method enhances the implicit tissue-level diagnostic knowledge in both the spatial and channel dimensions and diagnoses cervical LNs as normal, benign, or malignant. Extensive experiments conducted on our collected cervical LN US dataset demonstrate the effectiveness of our method for both tissue detection and cervical lymphadenopathy diagnosis.
Our proposed framework therefore enables efficient diagnosis of vaccine recipients' cervical LNs and assists doctors in discriminating between COVID-related reactive lymphadenopathy and metastatic lymphadenopathy.
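Channel-dimension attention of the kind applied here can be sketched with a squeeze-and-excitation-style gate over a fused ultrasound feature map. This is a generic stand-in for the paper's intra-tissue attention, with assumed toy dimensions:

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation-style channel attention: pool each channel,
    pass through a small bottleneck, and gate channels with a sigmoid."""
    squeeze = feat.mean(axis=(1, 2))               # global average pool -> (C,)
    hidden = np.maximum(0.0, w1 @ squeeze)         # channel-reduction bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # per-channel gate in (0, 1)
    return feat * gate[:, None, None]              # reweight channels

rng = np.random.default_rng(5)
feat = rng.normal(size=(8, 16, 16))  # (channels, H, W) fused US feature map (toy)
w1 = rng.normal(size=(4, 8))         # reduce 8 channels -> 4
w2 = rng.normal(size=(8, 4))         # expand back to 8 gates

out = channel_attention(feat, w1, w2)
```

A spatial-attention branch would analogously pool over channels and gate each pixel, covering the spatial dimension.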
Affiliation(s)
- Yue Gao
- School of Computer Science (National Pilot Software Engineering School), Beijing University of Posts and Telecommunications, Beijing, 100876, China; Key Laboratory of Trustworthy Distributed Computing and Service (BUPT), Ministry of Education, Beijing, 100876, China
- Xiangling Fu
- School of Computer Science (National Pilot Software Engineering School), Beijing University of Posts and Telecommunications, Beijing, 100876, China; Key Laboratory of Trustworthy Distributed Computing and Service (BUPT), Ministry of Education, Beijing, 100876, China; Corresponding author at: School of Computer Science (National Pilot Software Engineering School), Beijing University of Posts and Telecommunications, Beijing, 100876, China
- Yuepeng Chen
- School of Computer Science (National Pilot Software Engineering School), Beijing University of Posts and Telecommunications, Beijing, 100876, China; Key Laboratory of Trustworthy Distributed Computing and Service (BUPT), Ministry of Education, Beijing, 100876, China
- Chenyi Guo
- Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China
- Ji Wu
- Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China
19
Jiao W, Song S, Han H, Wang W, Zhang Q. Artificially intelligent differential diagnosis of enlarged lymph nodes with random vector functional link network plus. Med Eng Phys 2023; 111:103939. [PMID: 36792248 DOI: 10.1016/j.medengphy.2022.103939] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2022] [Revised: 11/10/2022] [Accepted: 12/04/2022] [Indexed: 12/12/2022]
Abstract
Differential diagnosis of enlarged lymph nodes (ELNs) is essential for the treatment of affected patients. Although multi-modal ultrasound, including B-mode, Doppler ultrasound, elastography, and contrast-enhanced ultrasound (CEUS), can enhance diagnostic performance for ELNs, the scenario of having only single- or dual-modal data is often encountered. In this study, an artificially intelligent diagnosis model based on learning using privileged information (LUPI) was proposed to aid the differential diagnosis of ELNs when only single- or dual-modal images are available. In our model, B-mode, alone or combined with another modality, was used as the standard information (SI) and other modalities were used as the privileged information (PI). The model was constructed by combining the SI and PI in the training stage. By learning from the training samples, a random vector functional link network with privileged information (RVFL+) was obtained and used to classify test samples containing only the SI. Results showed that the accuracy, precision, and Youden's index of the RVFL+ model, using B-mode with elastography as the SI and CEUS as the PI, reached 78.4%, 92.4%, and 54.9%, increases of 14.0%, 8.4%, and 24.5% over the model using B-mode as the SI without the PI. The LUPI-based method can thus improve diagnostic performance for ELNs.
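The RVFL backbone underlying RVFL+ can be sketched as random hidden features plus a direct input link, with output weights solved in closed form by ridge regression. This sketch covers only the standard-information RVFL; the privileged-information term of RVFL+ is omitted, and the data is a synthetic assumption:

```python
import numpy as np

def rvfl_train(X, y, n_hidden=50, lam=1e-2, seed=0):
    """Train a basic RVFL: fixed random hidden layer + direct input link,
    output weights fit by ridge regression (RVFL backbone only; the
    privileged-information term of RVFL+ is not modeled here)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))       # random, untrained weights
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                            # random hidden features
    D = np.hstack([X, H, np.ones((X.shape[0], 1))])   # direct link + hidden + bias
    beta = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
    return W, b, beta

def rvfl_predict(model, X):
    W, b, beta = model
    D = np.hstack([X, np.tanh(X @ W + b), np.ones((X.shape[0], 1))])
    return D @ beta

rng = np.random.default_rng(3)
X = rng.normal(size=(80, 4))                      # toy standard-information features
y = (X[:, 0] + X[:, 1] > 0).astype(float) * 2 - 1  # +/-1 labels

model = rvfl_train(X, y)
pred = np.sign(rvfl_predict(model, X))
acc = (pred == y).mean()
```

In RVFL+, an extra correcting term computed from the PI features enters the training objective, while prediction still uses only the SI, as in `rvfl_predict` above.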
Affiliation(s)
- Weiwei Jiao
- The SMART (Smart Medicine and AI-based Radiology Technology) Lab, Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China; School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Shuang Song
- The SMART (Smart Medicine and AI-based Radiology Technology) Lab, Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China; School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Hong Han
- Department of Ultrasound, Zhongshan Hospital Fudan University, 200032, Shanghai, China; Shanghai Institute of Medical Imaging, 200032, Shanghai, China.
- Wenping Wang
- Department of Ultrasound, Zhongshan Hospital Fudan University, 200032, Shanghai, China; Shanghai Institute of Medical Imaging, 200032, Shanghai, China.
- Qi Zhang
- The SMART (Smart Medicine and AI-based Radiology Technology) Lab, Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China; School of Communication and Information Engineering, Shanghai University, Shanghai, China.
20
Wu JTY, de la Hoz MÁA, Kuo PC, Paguio JA, Yao JS, Dee EC, Yeung W, Jurado J, Moulick A, Milazzo C, Peinado P, Villares P, Cubillo A, Varona JF, Lee HC, Estirado A, Castellano JM, Celi LA. Developing and Validating Multi-Modal Models for Mortality Prediction in COVID-19 Patients: a Multi-center Retrospective Study. J Digit Imaging 2022; 35:1514-1529. [PMID: 35789446 PMCID: PMC9255527 DOI: 10.1007/s10278-022-00674-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2021] [Revised: 05/15/2022] [Accepted: 06/08/2022] [Indexed: 01/07/2023] Open
Abstract
The unprecedented global crisis brought about by the COVID-19 pandemic has sparked numerous efforts to create predictive models for the detection and prognostication of SARS-CoV-2 infections with the goal of helping health systems allocate resources. Machine learning models, in particular, hold promise for their ability to leverage patient clinical information and medical images for prediction. However, most of the published COVID-19 prediction models thus far have little clinical utility due to methodological flaws and lack of appropriate validation. In this paper, we describe our methodology to develop and validate multi-modal models for COVID-19 mortality prediction using multi-center patient data. The models for COVID-19 mortality prediction were developed using retrospective data from Madrid, Spain (N = 2547) and were externally validated in patient cohorts from a community hospital in New Jersey, USA (N = 242) and an academic center in Seoul, Republic of Korea (N = 336). The models we developed performed differently across various clinical settings, underscoring the need for a guided strategy when employing machine learning for clinical decision-making. We demonstrated that using features from both the structured electronic health records and chest X-ray imaging data resulted in better 30-day mortality prediction performance across all three datasets (areas under the receiver operating characteristic curves: 0.85 (95% confidence interval: 0.83-0.87), 0.76 (0.70-0.82), and 0.95 (0.92-0.98)). We discuss the rationale for the decisions made at every step in developing the models and have made our code available to the research community. We employed the best machine learning practices for clinical model development. Our goal is to create a toolkit that would assist investigators and organizations in building multi-modal models for prediction, classification, and/or optimization.
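The gain reported above from combining structured EHR features with chest X-ray data can be illustrated with a toy rank-based AUC computation over a simple late fusion of two model scores. The data and equal fusion weights are synthetic assumptions, not the study's models:

```python
import numpy as np

def auc(y_true, scores):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation.
    Assumes no tied scores."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(4)
y = rng.integers(0, 2, size=200)                 # 30-day mortality labels (toy)
ehr_score = y + rng.normal(scale=1.0, size=200)  # structured-EHR model output (toy)
cxr_score = y + rng.normal(scale=1.0, size=200)  # chest X-ray model output (toy)

fused = 0.5 * ehr_score + 0.5 * cxr_score        # late fusion by averaging
auc_ehr = auc(y, ehr_score)
auc_fused = auc(y, fused)
```

When the two modalities carry complementary signal with independent noise, the fused score typically discriminates better than either modality alone, which is the pattern the study reports across its three cohorts.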
Collapse
Affiliation(s)
- Joy Tzung-Yu Wu
- Department of Radiology and Nuclear Medicine, Stanford University, Palo Alto, CA, USA
| | - Miguel Ángel Armengol de la Hoz
- Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Anesthesia, Critical Care and Pain Medicine, Beth Israel Deaconess Medical Center, Boston, MA, USA
- Big Data Department, Fundacion Progreso Y Salud, Regional Ministry of Health of Andalucia, Andalucia, Spain
| | - Po-Chih Kuo
- Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA, USA.
- Department of Computer Science, National Tsing Hua University, Hsinchu, Taiwan.
| | - Joseph Alexander Paguio
- Albert Einstein Medical Center, Philadelphia, PA, USA
- Hoboken University Medical Center-CarePoint Health, Hoboken, NJ, USA
| | - Jasper Seth Yao
- Albert Einstein Medical Center, Philadelphia, PA, USA
- Hoboken University Medical Center-CarePoint Health, Hoboken, NJ, USA
- Edward Christopher Dee
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Wesley Yeung
- Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- National University Heart Center, National University Hospital, Singapore, Singapore
- Jerry Jurado
- Hoboken University Medical Center-CarePoint Health, Hoboken, NJ, USA
- Achintya Moulick
- Hoboken University Medical Center-CarePoint Health, Hoboken, NJ, USA
- Carmelo Milazzo
- Hoboken University Medical Center-CarePoint Health, Hoboken, NJ, USA
- Paloma Peinado
- Centro Integral de Enfermedades Cardiovasculares, Hospital Universitario Monteprincipe, Grupo HM Hospitales, Madrid, Spain
- Paula Villares
- Centro Integral de Enfermedades Cardiovasculares, Hospital Universitario Monteprincipe, Grupo HM Hospitales, Madrid, Spain
- Antonio Cubillo
- Centro Integral de Enfermedades Cardiovasculares, Hospital Universitario Monteprincipe, Grupo HM Hospitales, Madrid, Spain
- José Felipe Varona
- Centro Integral de Enfermedades Cardiovasculares, Hospital Universitario Monteprincipe, Grupo HM Hospitales, Madrid, Spain
- Hyung-Chul Lee
- Department of Anesthesiology and Pain Medicine, Seoul National University College of Medicine, Seoul, Republic of Korea
- Alberto Estirado
- Centro Integral de Enfermedades Cardiovasculares, Hospital Universitario Monteprincipe, Grupo HM Hospitales, Madrid, Spain
- José Maria Castellano
- Centro Integral de Enfermedades Cardiovasculares, Hospital Universitario Monteprincipe, Grupo HM Hospitales, Madrid, Spain
- Centro Nacional de Investigaciones Cardiovasculares, Instituto de Salud Carlos III, Madrid, Spain
- Leo Anthony Celi
- Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Medicine, Beth Israel Deaconess Medical Center, Boston, MA, USA
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, USA
21
Xiang Z, Zhuo Q, Zhao C, Deng X, Zhu T, Wang T, Jiang W, Lei B. Self-supervised multi-modal fusion network for multi-modal thyroid ultrasound image diagnosis. Comput Biol Med 2022; 150:106164. [PMID: 36240597] [DOI: 10.1016/j.compbiomed.2022.106164] [Received: 07/08/2022] [Revised: 09/11/2022] [Accepted: 10/01/2022] [Indexed: 12/07/2022]
Abstract
Ultrasound is a typical non-invasive method for detecting thyroid cancer lesions. However, because of the limited information provided by ultrasound images alone, shear wave elastography (SWE) and color Doppler ultrasound (CDUS) are also used clinically to assist diagnosis, which makes the diagnostic process time-consuming, labor-intensive, and highly subjective. Automatic discrimination of benign and malignant thyroid nodules is therefore beneficial for clinical thyroid diagnosis. To this end, we propose a deep learning-based multi-modal feature fusion network for automatic thyroid diagnosis from three ultrasound modalities: gray-scale ultrasound images (US), SWE, and CDUS. First, three ResNet18 branches, initialized by self-supervised learning, extract the image information of each modality. Then, a multi-modal multi-head attention branch removes redundant information shared across the three modalities and combines the knowledge of each modality for thyroid diagnosis. At the same time, to better integrate features across modalities, a multi-modal feature guidance module is proposed to guide the feature extraction of each branch and reduce the differences between modality-specific features. We verify the multi-modal thyroid ultrasound image diagnosis method on a self-collected dataset, and the results show that it can provide fast and accurate assistance to sonographers diagnosing thyroid nodules.
Affiliation(s)
- Zhuo Xiang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, China
- Qiuluan Zhuo
- Huazhong University of Science and Technology Union Shenzhen Hospital, Department of Ultrasound, China
- Cheng Zhao
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, China
- Xiaofei Deng
- Huazhong University of Science and Technology Union Shenzhen Hospital, Department of Ultrasound, China
- Ting Zhu
- Huazhong University of Science and Technology Union Shenzhen Hospital, Department of Ultrasound, China
- Tianfu Wang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, China
- Wei Jiang
- Huazhong University of Science and Technology Union Shenzhen Hospital, Department of Ultrasound, China.
- Baiying Lei
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, China.
22
Cisotto G, Capuzzo M, Guglielmi AV, Zanella A. Feature stability and setup minimization for EEG-EMG-enabled monitoring systems. EURASIP J Adv Signal Process 2022; 2022:103. [PMID: 36320592] [PMCID: PMC9612609] [DOI: 10.1186/s13634-022-00939-3] [Received: 10/04/2021] [Accepted: 10/18/2022] [Indexed: 06/16/2023]
Abstract
Delivering health care at home has emerged as a key strategy to reduce healthcare costs and infection risks, as seen during the SARS-CoV-2 pandemic. In motor training applications in particular, wearable and portable devices can be employed for movement recognition and for monitoring the associated brain signals. This is one of the contexts where it is essential to minimize the monitoring setup and the amount of data to collect, process, and share. In this paper, we address this challenge for a monitoring system that includes high-dimensional EEG and EMG data for the classification of a specific type of hand movement. We fuse EEG and EMG into the magnitude squared coherence (MSC) signal, from which we extract features using different algorithms (including one previously proposed by the authors) to solve binary classification problems. Finally, we propose a mapping-and-aggregation strategy to increase the interpretability of the machine learning results. The proposed approach yields very low misclassification errors (< 0.1) with very few, stable MSC features (< 10% of the initial set of available features). Furthermore, we identified a common pattern across algorithms and classification problems, namely the activation of the centro-parietal brain areas and arm muscles in the 8-80 Hz frequency band, in line with previous literature. This study thus represents a step toward minimizing a reliable EEG-EMG setup for gesture recognition.
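As a concrete illustration of the fusion step described above, the magnitude squared coherence between an EEG and an EMG channel can be estimated with Welch's method. This is a minimal sketch on synthetic signals, not the authors' code; the sampling rate, segment length, and band edges are assumptions:

```python
import numpy as np
from scipy.signal import coherence

fs = 256                                   # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(1)
shared = np.sin(2 * np.pi * 20 * t)        # common 20 Hz drive in both signals
eeg = shared + 0.5 * rng.normal(size=t.size)
emg = shared + 0.5 * rng.normal(size=t.size)

# Welch-averaged magnitude squared coherence, bounded in [0, 1] per frequency
f, msc = coherence(eeg, emg, fs=fs, nperseg=256)

# Example feature: mean MSC in the 8-80 Hz band highlighted in the abstract
band = (f >= 8) & (f <= 80)
feature = msc[band].mean()
```

Feature selection would then operate on many such band-limited MSC values across channel pairs, keeping only the few stable ones.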
Affiliation(s)
- Giulia Cisotto
- Department of Information Engineering, University of Padova, Via Gradenigo, 6, 35121 Padova, Italy
- Inter-University Consortium for Telecommunications (CNIT), Padova, Italy
- Department of Informatics, Systems and Communications, University of Milano-Bicocca, Viale Sarca, 336, 20126 Milano, Italy
- Martina Capuzzo
- Department of Information Engineering, University of Padova, Via Gradenigo, 6, 35121 Padova, Italy
- Human Inspired Technologies Research Center, University of Padova, Via Luzzatti, 4, 35121 Padova, Italy
- Anna Valeria Guglielmi
- Department of Information Engineering, University of Padova, Via Gradenigo, 6, 35121 Padova, Italy
- Andrea Zanella
- Department of Information Engineering, University of Padova, Via Gradenigo, 6, 35121 Padova, Italy
- Inter-University Consortium for Telecommunications (CNIT), Padova, Italy
- Human Inspired Technologies Research Center, University of Padova, Via Luzzatti, 4, 35121 Padova, Italy
23
Zhang H, Zhu X, Li B, Dai X, Bao X, Fu Q, Tong Z, Liu L, Zheng Y, Zhao P, Ye L, Chen Z, Fang W, Ruan L, Jin X. Development and validation of a meta-learning-based multi-modal deep learning algorithm for detection of peritoneal metastasis. Int J Comput Assist Radiol Surg 2022; 17:1845-1853. [PMID: 35867303] [DOI: 10.1007/s11548-022-02698-w] [Received: 08/10/2021] [Accepted: 06/05/2022] [Indexed: 11/27/2022]
Abstract
PURPOSE Existing medical imaging tools detect peritoneal metastasis (PM) larger than 0.5 cm with 97% accuracy but lesions smaller than 0.5 cm with only 29%, so early detection of PM remains a difficult problem. This study aims to construct a deep convolutional neural network classifier, based on meta-learning, to predict PM. METHOD Peritoneal metastases are delineated on enhanced CT. The model is trained with meta-learning, and features are extracted from enhanced CT using a multi-modal deep convolutional neural network (CNN) to classify PM. We evaluate the performance on a test dataset and compare it with other PM prediction algorithms. RESULTS The training dataset consists of 9574 images from 43 patients with PM and 67 patients without PM. The test dataset consists of 1834 images from 21 patients. To increase prediction accuracy, we combine multi-modal inputs from the plain scan, portal venous, and arterial phases to build a meta-learning-based multi-modal PM predictor. On the test dataset, the classifier achieves an accuracy of 87.5%, an area under the curve (AUC) of 0.877, a sensitivity of 73.4%, and a specificity of 95.2%. This performance is superior to routine PM classification based on logistic regression (AUC: 0.795), a deep learning method named ResNet3D (AUC: 0.827), and a domain generalization (DG) method named MADDG (AUC: 0.834). CONCLUSIONS We propose a novel meta-learning-based training strategy to improve the model's robustness to "unseen" samples. The experiments show that our meta-learning-based multi-modal PM classifier achieves more competitive results in synchronous PM prediction than existing algorithms and improves generalization ability even with limited data.
Affiliation(s)
- Hangyu Zhang
- The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Xudong Zhu
- The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Bin Li
- The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Xiaomeng Dai
- Institute of Information Science and Electronic Engineering, Zhejiang University, Yuquan Campus, Hangzhou, Zhejiang, China
- Xuanwen Bao
- The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Qihan Fu
- The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Zhou Tong
- The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Lulu Liu
- The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Yi Zheng
- The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Peng Zhao
- The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Luan Ye
- Institute of Information Science and Electronic Engineering, Zhejiang University, Yuquan Campus, Hangzhou, Zhejiang, China
- Zhihong Chen
- Institute of Information Science and Electronic Engineering, Zhejiang University, Yuquan Campus, Hangzhou, Zhejiang, China
- Weijia Fang
- The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Lingxiang Ruan
- The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China.
- Xinyu Jin
- Institute of Information Science and Electronic Engineering, Zhejiang University, Yuquan Campus, Hangzhou, Zhejiang, China.
24
Liu L, Wang YP, Wang Y, Zhang P, Xiong S. An enhanced multi-modal brain graph network for classifying neuropsychiatric disorders. Med Image Anal 2022; 81:102550. [PMID: 35872360] [DOI: 10.1016/j.media.2022.102550] [Received: 01/06/2022] [Revised: 07/06/2022] [Accepted: 07/13/2022] [Indexed: 10/17/2022]
Abstract
It has been shown that neuropsychiatric disorders (NDs) are associated with both the structure and the function of brain regions, so structural and functional data can usefully be combined in a comprehensive analysis. While brain structural MRI (sMRI) images contain anatomical and morphological information about NDs, functional MRI (fMRI) images carry complementary information. However, efficient extraction and fusion of sMRI and fMRI data remain challenging. In this study, we develop an enhanced multi-modal graph convolutional network (MME-GCN) for binary classification between patients with NDs and healthy controls, based on the fusion of the structural and functional graphs of the brain regions. First, using the same brain atlas, we construct structural and functional graphs from sMRI and fMRI data, respectively. Second, we use machine learning to extract important features from the structural graph network. Third, we use these extracted features to adjust the corresponding edge weights in the functional graph network. Finally, we train a multi-layer GCN and use it in the binary classification task. MME-GCN achieved 93.71% classification accuracy on the open dataset provided by the Consortium for Neuropsychiatric Phenomics. In addition, we analyzed the important features selected from the structural graph and verified them in the functional graph. Using MME-GCN, we found several specific brain connections important to NDs.
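The structural-to-functional adjustment step above can be sketched with toy adjacency matrices; the multiplicative re-weighting rule and the alpha parameter are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(2)
n_rois = 6                                  # brain regions (same atlas for both graphs)

# Symmetric functional connectivity matrix with empty diagonal
func = rng.random((n_rois, n_rois))
func = (func + func.T) / 2
np.fill_diagonal(func, 0.0)

# Per-edge importance scores learned from the structural graph, in [0, 1]
importance = rng.random((n_rois, n_rois))
importance = (importance + importance.T) / 2

# Boost functional edges deemed structurally important before the GCN
alpha = 0.5
adjusted = func * (1.0 + alpha * importance)
```

The adjusted matrix stays symmetric, so it remains a valid undirected graph input for a GCN layer.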
Affiliation(s)
- Liangliang Liu
- College of Information and Management Science, Henan Agricultural University, Zhengzhou, Henan 450046, P.R. China.
- Yu-Ping Wang
- Biomedical Engineering Department, Tulane University, New Orleans, LA 70118, USA
- Yi Wang
- College of Information and Management Science, Henan Agricultural University, Zhengzhou, Henan 450046, P.R. China
- Pei Zhang
- College of Information and Management Science, Henan Agricultural University, Zhengzhou, Henan 450046, P.R. China
- Shufeng Xiong
- College of Information and Management Science, Henan Agricultural University, Zhengzhou, Henan 450046, P.R. China
25
Fatimah B, Singhal A, Singh P. A multi-modal assessment of sleep stages using adaptive Fourier decomposition and machine learning. Comput Biol Med 2022; 148:105877. [PMID: 35853400] [DOI: 10.1016/j.compbiomed.2022.105877] [Received: 02/13/2022] [Revised: 06/29/2022] [Accepted: 07/09/2022] [Indexed: 11/30/2022]
Abstract
Healthy sleep is essential for the rejuvenation of the body and helps maintain good health. Many people suffer from sleep disorders characterized by abnormal sleep patterns, and automated assessment of such disorders using biomedical signals has been an active subject of research. The electroencephalogram (EEG) is a popular diagnostic used in this regard. We consider a widely used, publicly available database and process the signals using the Fourier decomposition method (FDM) to obtain narrowband signal components. Statistical features extracted from these components are passed to machine learning classifiers to identify the different stages of sleep. A novel feature measuring the non-stationarity of the signal is also used to capture salient information. We show that classification results can be improved by using multi-channel rather than single-channel EEG data. Simultaneous utilization of multiple modalities, such as the electromyogram (EMG) and electrooculogram (EOG) along with EEG data, leads to further enhancement of the results. The proposed method can be implemented efficiently in real time using the fast Fourier transform (FFT), and it provides better classification results than the other algorithms in the literature. It can assist in the development of low-cost sensor-based setups for continuous patient monitoring and feedback.
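The FFT-based narrowband decomposition underlying this approach can be sketched as follows; the band edges and sampling rate are illustrative, and this is a simplified stand-in for the FDM rather than the authors' implementation. Because the bands partition the spectrum into disjoint bins, the extracted components sum back to the original signal:

```python
import numpy as np

fs = 100                                   # sampling rate in Hz (assumed)
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 11 * t)

spectrum = np.fft.rfft(x)
freqs = np.fft.rfftfreq(x.size, d=1 / fs)

# Disjoint frequency bands (delta/theta/alpha/beta-like edges, illustrative)
edges = [0, 4, 8, 13, 30, fs / 2 + 1]
components = []
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (freqs >= lo) & (freqs < hi)    # each bin falls in exactly one band
    components.append(np.fft.irfft(spectrum * mask, n=x.size))

# Statistical features (mean, variance, etc.) would be computed per component
print(np.allclose(sum(components), x))     # bands partition the spectrum
```

This completeness property is what makes such a decomposition attractive for feature extraction: no spectral content is lost or double-counted.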
Affiliation(s)
- Amit Singhal
- Netaji Subhas University of Technology, Delhi, India.
- Pushpendra Singh
- National Institute of Technology Hamirpur, Himachal Pradesh, India
26
Li X, Ma J, Li S. Intelligence customs declaration for cross-border e-commerce based on the multi-modal model and the optimal window mechanism. Ann Oper Res 2022:1-25. [PMID: 35729984] [PMCID: PMC9200377] [DOI: 10.1007/s10479-022-04799-w] [Accepted: 05/23/2022] [Indexed: 06/15/2023]
Abstract
This paper studies the intelligent customs declaration of cross-border e-commerce commodities, from algorithm design through implementation. The difficulty lies in recognizing commodity names, materials, and processing processes; because recognition of these three kinds of commodity information is similar, this paper takes commodity-name recognition as the experimental research object. The algorithm first pre-clusters the data, then uses an optimal window mechanism to obtain the best word embedding representation. A Vision Transformer model extracts image features instead of a traditional CNN, and text features are fused with image features to generate a multi-modal semantic feature vector. Finally, a deep forest classifier replaces conventional neural network classifiers to complete the commodity-name recognition task. Experimental results on 120,000 data records covering more than 600 different commodities show a precision of 0.85, a recall of 0.87, and an F1-score of 0.86. Our algorithm can therefore effectively and accurately recognize e-commerce commodity names and provides a new perspective on research into intelligent e-commerce declarations.
Affiliation(s)
- Xiaofeng Li
- College of Economics and Management, Nanjing University of Aeronautics and Astronautics, Nanjing, 211106 Jiangsu China
- Jing Ma
- College of Economics and Management, Nanjing University of Aeronautics and Astronautics, Nanjing, 211106 Jiangsu China
- Shan Li
- College of Economics and Management, Nanjing University of Aeronautics and Astronautics, Nanjing, 211106 Jiangsu China
27
McDonagh AW, McNeil BL, Rousseau J, Roberts RJ, Merkens H, Yang H, Bénard F, Ramogida CF. Development of a multi-faceted platform containing a tetrazine, fluorophore and chelator: synthesis, characterization, radiolabeling, and immuno-SPECT imaging. EJNMMI Radiopharm Chem 2022; 7:12. [PMID: 35666363] [PMCID: PMC9170845] [DOI: 10.1186/s41181-022-00164-1] [Received: 03/17/2022] [Accepted: 05/23/2022] [Indexed: 11/10/2022] [Open Access]
Abstract
BACKGROUND Combining optical (fluorescence) imaging with nuclear imaging offers a potentially powerful tool in personal health care: nuclear imaging provides in vivo functional whole-body visualization, while the fluorescence modality may be used for image-guided tumor resection. Various chemical strategies have been exploited to fuse both modalities into one molecular entity. When radiometals are employed in nuclear imaging, a chelator is typically inserted into the molecule to facilitate radiolabeling; the availability of the chelator further expands the potential use of these platforms toward targeted radionuclide therapy if a therapeutic radiometal is employed. Herein, a novel mixed-modality scaffold is presented that combines, in one construct, a tetrazine (Tz) for biomolecule conjugation, a fluorophore for optical imaging, and a chelator for radiometal incorporation. The platform was characterized for its fluorescence properties and radiolabeled with the single-photon emission computed tomography (SPECT) isotope indium-111 (111In3+) and the therapeutic alpha emitter actinium-225 (225Ac3+). Both radiolabels were conjugated in vitro to trans-cyclooctene (TCO)-modified trastuzumab; biodistribution and immuno-SPECT imaging of the former conjugate were assessed. RESULTS Key to the success of the platform synthesis was incorporation of a 4,4'-dicyano-BODIPY fluorophore. The route gives access to an advanced intermediate from which final chelator-incorporated compounds can be accessed in one step prior to radiolabeling or biomolecule conjugation. The DOTA (1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid) conjugate was prepared, displayed good fluorescence properties, and was successfully radiolabeled with 111In and 225Ac in high radiochemical yield. Both complexes were then separately conjugated in vitro to TCO-modified trastuzumab through an inverse electron demand Diels-Alder (IEDDA) reaction with the Tz. Pilot small-animal in vivo immuno-SPECT imaging with [111In]In-DO3A-BODIPY-Tz-TCO-trastuzumab was also conducted and exhibited high tumor uptake (21.2 ± 5.6 %ID/g 6 days post-injection) with low uptake in non-target tissues. CONCLUSIONS The novel platform shows promise as a multi-modal probe for theranostic applications. In particular, access to an advanced synthetic intermediate in which tailored chelators can be incorporated in the last step of synthesis expands the potential use of the scaffold to other radiometals. Future studies, including validation of ex vivo fluorescence imaging and exploitation of the pre-targeting approach available through the IEDDA reaction, are warranted.
Affiliation(s)
- Anthony W McDonagh
- Department of Chemistry, Simon Fraser University, Burnaby, BC, V5A 1S6, Canada
- Brooke L McNeil
- Department of Chemistry, Simon Fraser University, Burnaby, BC, V5A 1S6, Canada
- Life Sciences Division, TRIUMF, Vancouver, BC, V6T 2A3, Canada
- Julie Rousseau
- Department of Molecular Oncology, BC Cancer, Vancouver, BC, V5Z 1L3, Canada
- Ryan J Roberts
- Department of Chemistry, Simon Fraser University, Burnaby, BC, V5A 1S6, Canada
- Helen Merkens
- Department of Molecular Oncology, BC Cancer, Vancouver, BC, V5Z 1L3, Canada
- Hua Yang
- Department of Chemistry, Simon Fraser University, Burnaby, BC, V5A 1S6, Canada
- Life Sciences Division, TRIUMF, Vancouver, BC, V6T 2A3, Canada
- François Bénard
- Department of Molecular Oncology, BC Cancer, Vancouver, BC, V5Z 1L3, Canada
- Caterina F Ramogida
- Department of Chemistry, Simon Fraser University, Burnaby, BC, V5A 1S6, Canada
- Life Sciences Division, TRIUMF, Vancouver, BC, V6T 2A3, Canada
28
Tan T, Das B, Soni R, Fejes M, Yang H, Ranjan S, Szabo DA, Melapudi V, Shriram K, Agrawal U, Rusko L, Herczeg Z, Darazs B, Tegzes P, Ferenczi L, Mullick R, Avinash G. Multi-modal trained artificial intelligence solution to triage chest X-ray for COVID-19 using pristine ground-truth, versus radiologists. Neurocomputing 2022; 485:36-46. [PMID: 35185296] [PMCID: PMC8847079] [DOI: 10.1016/j.neucom.2022.02.040] [Received: 10/19/2021] [Revised: 12/25/2021] [Accepted: 02/11/2022] [Indexed: 11/05/2022]
Abstract
The front-line imaging modalities computed tomography (CT) and X-ray play important roles in triaging COVID patients. Thoracic CT is accepted to have higher sensitivity than chest X-ray for COVID diagnosis; however, considering the limited access to resources (both hardware and trained personnel) and issues related to decontamination, CT may not be ideal for triaging suspected subjects. An artificial intelligence (AI)-assisted, X-ray-based application for triage and monitoring, which helps experienced radiologists identify COVID patients in a timely manner and can additionally delineate and quantify the disease region, is seen as a promising solution for widespread clinical use. Our proposed solution differs from existing solutions presented by industry and academic communities: we demonstrate a functional AI model that triages by classifying and segmenting a single chest X-ray image, while the AI model is trained using both X-ray and CT data. We report on how such multi-modal training improves the solution compared to single-modality (X-ray only) training. The multi-modal solution increases the AUC (area under the receiver operating characteristic curve) from 0.89 to 0.93 for binary classification between COVID-19 and non-COVID-19 cases, and also improves the Dice coefficient (0.59 to 0.62) for localizing the COVID-19 pathology. To compare the performance of experienced readers to the AI model, a reader study was also conducted; the AI model showed good consistency with the radiologists. The Dice score between the two radiologists on the COVID group was 0.53, while the AI had Dice values of 0.52 and 0.55 when compared to the segmentations of the two radiologists separately. From a classification perspective, the AUCs of the two readers were 0.87 and 0.81, while the AUC of the AI was 0.93 on the reader-study dataset. We also conducted a generalization study comparing our method to state-of-the-art methods on independent datasets; the results show better performance from the proposed method. Leveraging multi-modal information during development thus benefits single-modal inferencing.
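For reference, the Dice coefficient used above to compare segmentations is twice the overlap divided by the sum of the mask sizes; a minimal sketch on toy binary masks (not the study's data):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice = 2*|A and B| / (|A| + |B|) for binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0                        # both masks empty: perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
truth = np.array([[1, 0, 0],
                  [0, 1, 1]])
print(dice(pred, truth))                  # 2*2 / (3+3) = 0.666...
```

A value near 0.5, as in the radiologist-vs-radiologist comparison above, reflects the substantial inter-reader variability typical of pathology delineation.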
Affiliation(s)
- Tao Tan
- GE Healthcare, The Netherlands (corresponding author)
29
Perkins ER, King BT, Sörman K, Patrick CJ. Trait boldness and emotion regulation: An event-related potential investigation. Int J Psychophysiol 2022; 176:1-13. [PMID: 35301027] [DOI: 10.1016/j.ijpsycho.2022.03.003] [Received: 10/28/2021] [Revised: 03/02/2022] [Accepted: 03/09/2022] [Indexed: 11/16/2022]
Abstract
The present study sought to extend knowledge of the role of boldness, a transdiagnostic bipolar trait dimension involving low sensitivity to threat, in emotional reactivity and regulation using physiological and report-based measures. One prior study found that boldness was associated with reduced late positive potential (LPP) while passively viewing aversive images, but not during emotion regulation; a disconnect between LPP and self-reported reactivity was also observed. Here, participants (N = 63) completed an emotion regulation task in which they either passively viewed or effortfully up- or downregulated their emotional reactivity to pleasant, unpleasant, and neutral pictures while EEG activity was recorded; they later retrospectively rated the success of their regulation efforts. ANOVAs examining the interactive effects of regulation instruction and boldness on LPP amplitude revealed that lower boldness (higher trait fearfulness) was associated with paradoxical increases in LPP to threat photos during instructed downregulation, relative to passive viewing, along with lower reported regulation success on these trials. Unexpectedly, similar LPP effects were observed for affective images overall, and especially nurturance photos. Although subject to certain limitations, these results suggest that individual differences in boldness play a role not only in general reactivity to aversive stimuli, as evidenced by prior work, but in the ability to effortfully downregulate emotional response.
Affiliation(s)
- Emily R Perkins
- Department of Psychology, Florida State University, Tallahassee, FL, USA.
- Brittany T King
- Department of Psychiatry and Human Behavior, Brown University, Providence, RI, USA; Partial Hospitalization Program, Rhode Island Hospital, Providence, RI, USA
30
Peng C, Zhang Y, Zheng J, Li B, Shen J, Li M, Liu L, Qiu B, Chen DZ. IMIIN: An inter-modality information interaction network for 3D multi-modal breast tumor segmentation. Comput Med Imaging Graph 2021; 95:102021. [PMID: 34861622] [DOI: 10.1016/j.compmedimag.2021.102021] [Received: 09/02/2021] [Revised: 11/02/2021] [Accepted: 11/23/2021] [Indexed: 11/22/2022]
Abstract
Breast tumor segmentation is critical to the diagnosis and treatment of breast cancer. In clinical breast cancer analysis, experts often examine multi-modal images, since such images provide abundant complementary information on tumor morphology. Known multi-modal breast tumor segmentation methods extract 2D tumor features and use information from one modality to assist another. However, these methods do not fuse multi-modal information efficiently, and may even fuse interfering information, because they lack effective management of the information interaction between modalities. Moreover, they do not consider the effect of small-tumor characteristics on the segmentation results. In this paper, we propose a new inter-modality information interaction network (IMIIN) to segment breast tumors in 3D multi-modal MRI. Our network employs a hierarchical structure to extract local information of small tumors, which facilitates precise segmentation of tumor boundaries. Under this structure, we present a 3D tiny-object segmentation network based on DenseVoxNet to preserve the boundary details of the segmented tumors (especially small ones). Further, we introduce a bi-directional request-supply information interaction module between modalities, so that each modality can request helpful auxiliary information according to its own needs. Experiments on a clinical 3D multi-modal MRI breast tumor dataset show that our new 3D IMIIN is superior to state-of-the-art methods and attains better segmentation results, suggesting a good prospect for clinical application.
31
Lei B, Cheng N, Frangi AF, Wei Y, Yu B, Liang L, Mai W, Duan G, Nong X, Li C, Su J, Wang T, Zhao L, Deng D, Zhang Z. Auto-weighted centralised multi-task learning via integrating functional and structural connectivity for subjective cognitive decline diagnosis. Med Image Anal 2021; 74:102248. [PMID: 34597938 DOI: 10.1016/j.media.2021.102248] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 11/28/2020] [Revised: 08/21/2021] [Accepted: 09/14/2021] [Indexed: 11/29/2022]
Abstract
Early diagnosis and intervention of mild cognitive impairment (MCI) and its early stage (i.e., subjective cognitive decline (SCD)) can delay or reverse the disease progression. However, accurately discriminating between SCD, MCI and healthy subjects remains challenging. This paper proposes an auto-weighted centralised multi-task (AWCMT) learning framework for differential diagnosis of SCD and MCI. AWCMT is based on structural and functional connectivity information inferred from magnetic resonance imaging (MRI). Specifically, we devise a novel multi-task learning algorithm to combine functional and structural neuroimaging connectivity information. We construct a functional brain network through a sparse and low-rank machine learning method, and a structural brain network via fibre bundle tracking; the two networks are constructed separately and independently. Multi-task learning is then used to integrate features of functional and structural connectivity, so that each task's significance is learned automatically in a balanced way. By combining the functional and structural information, the most informative features of SCD and MCI are obtained for diagnosis. Extensive experiments on public and self-collected datasets demonstrate that the proposed algorithm classifies SCD, MCI and healthy people better than traditional algorithms. The newly proposed method also has good interpretability, as it is able to discover the most disease-related brain regions and their connectivity. The results agree well with current clinical findings and provide new insights into early AD detection based on multi-modal neuroimaging techniques.
Affiliation(s)
- Baiying Lei
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Marshall Laboratory of Biomedical Engineering, School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, China
- Nina Cheng
- CISTIB, School of Computing and LICAMM, School of Medicine, University of Leeds, Leeds, United Kingdom
- Alejandro F Frangi
- CISTIB, School of Computing and LICAMM, School of Medicine, University of Leeds, Leeds, United Kingdom; Department of Cardiovascular Sciences, and Department of Electrical Engineering, ESAT/PSI, KU Leuven, Leuven, Belgium; Medical Imaging Research Center, UZ Leuven, Herestraat 49, 3000 Leuven, Belgium; Alan Turing Institute, London, United Kingdom
- Yichen Wei
- Department of Radiology, First Affiliated Hospital, Guangxi University of Chinese Medicine, 530023 Nanning, China
- Bihan Yu
- Department of Acupuncture, First Affiliated Hospital, Guangxi University of Chinese Medicine, 530023 Nanning, China
- Lingyan Liang
- Department of Radiology, the People's Hospital of Guangxi Zhuang Autonomous Region, 530021 Guangxi, China
- Wei Mai
- Department of Acupuncture, First Affiliated Hospital, Guangxi University of Chinese Medicine, 530023 Nanning, China
- Gaoxiong Duan
- Department of Radiology, the People's Hospital of Guangxi Zhuang Autonomous Region, 530021 Guangxi, China
- Xiucheng Nong
- Department of Acupuncture, First Affiliated Hospital, Guangxi University of Chinese Medicine, 530023 Nanning, China
- Chong Li
- Department of Acupuncture, First Affiliated Hospital, Guangxi University of Chinese Medicine, 530023 Nanning, China
- Jiahui Su
- Department of Acupuncture, First Affiliated Hospital, Guangxi University of Chinese Medicine, 530023 Nanning, China
- Tianfu Wang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Marshall Laboratory of Biomedical Engineering, School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, China
- Lihua Zhao
- Department of Acupuncture, First Affiliated Hospital, Guangxi University of Chinese Medicine, 530023 Nanning, China
- Demao Deng
- Department of Radiology, the People's Hospital of Guangxi Zhuang Autonomous Region, 530021 Guangxi, China
- Zhiguo Zhang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Marshall Laboratory of Biomedical Engineering, School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, China
32
Teo SYM, Kanaley JA, Guelfi KJ, Dimmock JA, Jackson B, Fairchild TJ. Effects of diurnal exercise timing on appetite, energy intake and body composition: A parallel randomized trial. Appetite 2021; 167:105600. [PMID: 34284064 DOI: 10.1016/j.appet.2021.105600] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Received: 09/14/2020] [Revised: 07/11/2021] [Accepted: 07/12/2021] [Indexed: 12/13/2022]
Abstract
OBJECTIVE To determine the effect of diurnal exercise timing on appetite, energy intake and body composition in individuals with overweight or obesity. METHODS Forty sedentary individuals with overweight or obesity (17 males, 23 females; age: 51 ± 13 years; BMI: 30.9 ± 4.2 kg/m2) were randomly allocated to complete a 12-week supervised multi-modal exercise training program performed either in the morning (amEX) or evening (pmEX). Outcome measures included appetite in response to a standardised test meal, daily energy intake (EI), body weight and body composition. Measures of dietary behaviour were assessed at baseline and post-intervention, along with habitual physical activity, sleep quality and sleep quantity. Significance was set at p ≤ .05 and Hedges' g effect sizes were calculated. RESULTS Regardless of timing, exercise training increased perceived fullness (AUC; g = 0.82-1.67; both p < .01) and decreased daily EI (g = 0.73-0.93; both p < .01) and body fat (g = 0.29-0.32; both p < .01). The timing of exercise did not change the daily EI or body-fat response to training (all p ≥ .27); however, perceived fullness increased in the amEX group (p ≤ .01). Disinhibition (g = 0.35-1.95; p ≤ .01) and Hunger (g = 0.05-0.4; p = .02) behaviours decreased following exercise training, with Disinhibition showing greater improvements in the pmEX group (p = .01). Objective and subjective sleep quantity increased with training (all p ≤ .01), but no change in sleep quality was reported. CONCLUSIONS Multi-modal exercise training improved body composition and some appetite outcomes, although changes were inconsistent and largely independent of exercise timing. In the absence of dietary manipulation, the effects of diurnal exercise timing on appetite and body composition appear trivial compared with the overall benefits of exercise participation.
Affiliation(s)
- Shaun Y M Teo
- Discipline of Exercise Science, Murdoch University, Australia; The Centre for Molecular Medicine and Innovative Therapeutics, Health Futures Institute, Australia
- Jill A Kanaley
- Department of Nutrition & Exercise Physiology, University of Missouri, USA
- Kym J Guelfi
- School of Human Sciences (Exercise and Sport Science), The University of Western Australia, Australia
- Ben Jackson
- School of Human Sciences (Exercise and Sport Science), The University of Western Australia, Australia; Telethon Kids Institute, Australia
- Timothy J Fairchild
- Discipline of Exercise Science, Murdoch University, Australia; The Centre for Molecular Medicine and Innovative Therapeutics, Health Futures Institute, Australia
33
Keup C, Suryaprakash V, Hauch S, Storbeck M, Hahn P, Sprenger-Haussels M, Kolberg HC, Tewes M, Hoffmann O, Kimmig R, Kasimir-Bauer S. Integrative statistical analyses of multiple liquid biopsy analytes in metastatic breast cancer. Genome Med 2021; 13:85. [PMID: 34001236 PMCID: PMC8130163 DOI: 10.1186/s13073-021-00902-1] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Received: 09/29/2020] [Accepted: 04/30/2021] [Indexed: 02/08/2023]
Abstract
Background Single liquid biopsy analytes (LBAs) have been utilized for therapy selection in metastatic breast cancer (MBC). We performed integrative statistical analyses to examine the clinical relevance of using multiple LBAs: matched circulating tumor cell (CTC) mRNA, CTC genomic DNA (gDNA), extracellular vesicle (EV) mRNA, and cell-free DNA (cfDNA). Methods Blood was drawn from 26 hormone receptor-positive, HER2-negative MBC patients. CTC mRNA and EV mRNA were analyzed using a multi-marker qPCR. Plasma from CTC-depleted blood was utilized for cfDNA isolation. gDNA from CTCs was isolated from mRNA-depleted CTC lysates. CTC gDNA and cfDNA were analyzed by targeted sequencing. Hierarchical clustering was performed within each analyte, and its results were combined into a score termed Evaluation of multiple Liquid biopsy analytes In Metastatic breast cancer patients All from one blood sample (ELIMA.score), which calculates the contribution of each analyte to the overall survival prediction. Singular value decomposition (SVD), mutual information calculation, k-means clustering, and graph-theoretic analysis were conducted to elucidate the dependence between individual analytes. Results A combination of two/three/four LBAs increased the prevalence of patients with actionable signals. Aggregating the results of hierarchical clustering of individual LBAs into the ELIMA.score resulted in a highly significant correlation with overall survival, thereby bolstering evidence for the additive value of using multiple LBAs. Computation of mutual information indicated that none of the LBAs is independent of the others, but the ability of a single LBA to describe the others is rather limited—only CTC gDNA could partially describe the other three LBAs. SVD revealed that the strongest singular vectors originate from all four LBAs, but a majority originated from CTC gDNA. After k-means clustering of patients based on parameters of all four LBAs, the graph-theoretic analysis revealed CTC ERBB2 variants only in patients belonging to one particular cluster. Conclusions The additional benefits of using all four LBAs were objectively demonstrated in this pilot study, which also indicated a relative dominance of CTC gDNA over the other LBAs. Consequently, a multi-parametric liquid biopsy approach deconvolutes the genomic and transcriptomic complexity and should be considered in clinical practice. Supplementary Information The online version contains supplementary material available at 10.1186/s13073-021-00902-1.
Affiliation(s)
- Corinna Keup
- Department of Gynecology and Obstetrics, University Hospital of Essen, Hufelandstr. 55, 45122, Essen, Germany
- Hans-Christian Kolberg
- Department of Gynecology and Obstetrics, Marienhospital Bottrop, 46236, Bottrop, Germany
- Mitra Tewes
- Department of Medical Oncology, University Hospital of Essen, 45122, Essen, Germany
- Oliver Hoffmann
- Department of Gynecology and Obstetrics, University Hospital of Essen, Hufelandstr. 55, 45122, Essen, Germany
- Rainer Kimmig
- Department of Gynecology and Obstetrics, University Hospital of Essen, Hufelandstr. 55, 45122, Essen, Germany
- Sabine Kasimir-Bauer
- Department of Gynecology and Obstetrics, University Hospital of Essen, Hufelandstr. 55, 45122, Essen, Germany
34
Yu G, Yu Z, Shi Y, Wang Y, Liu X, Li Z, Zhao Y, Sun F, Yu Y, Shu Q. Identification of pediatric respiratory diseases using a fine-grained diagnosis system. J Biomed Inform 2021; 117:103754. [PMID: 33831537 DOI: 10.1016/j.jbi.2021.103754] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Received: 09/23/2020] [Revised: 03/09/2021] [Accepted: 03/14/2021] [Indexed: 11/17/2022]
Abstract
Respiratory diseases, including asthma, bronchitis, pneumonia, and upper respiratory tract infection (RTI), are among the most common diseases in clinics. The similarities among the symptoms of these diseases preclude prompt diagnosis upon the patients' arrival. In pediatrics, patients' limited ability to express their condition makes precise diagnosis even harder. This is worse in primary hospitals, where the lack of medical imaging devices and the doctors' limited experience further increase the difficulty of distinguishing among similar diseases. In this paper, a pediatric fine-grained diagnosis-assistant system is proposed to provide prompt and precise diagnosis using solely clinical notes upon admission, assisting clinicians without changing the diagnostic process. The proposed system consists of two stages: a test-result structuralization stage and a disease identification stage. The first stage structuralizes test results by extracting relevant numerical values from clinical notes, and the disease identification stage provides a diagnosis based on text-form clinical notes and the structured data obtained from the first stage. A novel deep learning algorithm was developed for the disease identification stage, introducing techniques including adaptive feature infusion and multi-modal attentive fusion to fuse structured and text data. Clinical notes from over 12,000 patients with respiratory diseases were used to train a deep learning model, and clinical notes from a non-overlapping set of about 1800 patients were used to evaluate the performance of the trained model. The average precisions (AP) for pneumonia, RTI, bronchitis and asthma are 0.878, 0.857, 0.714 and 0.825, respectively, yielding a mean AP (mAP) of 0.819. These results demonstrate that the proposed fine-grained diagnosis-assistant system provides precise identification of the diseases.
Affiliation(s)
- Gang Yu
- Department of IT Center, The Children's Hospital, Zhejiang University School of Medicine, China; National Clinical Research Center for Child Health, China
- Yemin Shi
- Department of Computer Science, School of EE&CS, Peking University, Beijing, China
- Yingshuo Wang
- Department of Pulmonology, The Children's Hospital, Zhejiang University School of Medicine, China; National Clinical Research Center for Child Health, China
- Zheming Li
- Department of IT Center, The Children's Hospital, Zhejiang University School of Medicine, China; National Clinical Research Center for Child Health, China
- Yonggen Zhao
- Department of IT Center, The Children's Hospital, Zhejiang University School of Medicine, China; National Clinical Research Center for Child Health, China
- Yizhou Yu
- Department of Computer Science, The University of Hong Kong, Hong Kong
- Qiang Shu
- National Clinical Research Center for Child Health, China
35
Liu X, Eickhoff SB, Hoffstaedter F, Genon S, Caspers S, Reetz K, Dogan I, Eickhoff CR, Chen J, Caspers J, Reuter N, Mathys C, Aleman A, Jardri R, Riedl V, Sommer IE, Patil KR. Joint Multi-modal Parcellation of the Human Striatum: Functions and Clinical Relevance. Neurosci Bull 2020; 36:1123-1136. [PMID: 32700142 DOI: 10.1007/s12264-020-00543-1] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Received: 11/21/2019] [Accepted: 04/10/2020] [Indexed: 12/20/2022]
Abstract
The human striatum is essential for both low- and high-level functions and has been implicated in the pathophysiology of various prevalent disorders, including Parkinson's disease (PD) and schizophrenia (SCZ). It is known to consist of structurally and functionally divergent subdivisions. However, previous parcellations are based on a single neuroimaging modality, leaving the extent of the multi-modal organization of the striatum unknown. Here, we investigated the organization of the striatum across three modalities (resting-state functional connectivity, probabilistic diffusion tractography, and structural covariance) to provide a holistic, convergent view of its structure and function. We found convergent clusters in the dorsal, dorsolateral, rostral, ventral, and caudal striatum. Functional characterization revealed the anterior striatum to be mainly associated with cognitive and emotional functions, while the caudal striatum was related to action execution. Interestingly, significant structural atrophy in the rostral and ventral striatum was common to both PD and SCZ, whereas atrophy in the dorsolateral striatum was specifically attributable to PD. Our study revealed a cross-modal convergent organization of the striatum, representing a fundamental topographical model that can be useful for investigating structural and functional variability in aging and in clinical conditions.
Affiliation(s)
- Xiaojin Liu
- Institute of Neuroscience and Medicine (INM-7, Brain and Behaviour), Research Centre Jülich, Jülich, Germany; Institute of Systems Neuroscience, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Simon B Eickhoff
- Institute of Neuroscience and Medicine (INM-7, Brain and Behaviour), Research Centre Jülich, Jülich, Germany; Institute of Systems Neuroscience, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Felix Hoffstaedter
- Institute of Neuroscience and Medicine (INM-7, Brain and Behaviour), Research Centre Jülich, Jülich, Germany; Institute of Systems Neuroscience, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Sarah Genon
- Institute of Neuroscience and Medicine (INM-7, Brain and Behaviour), Research Centre Jülich, Jülich, Germany; Institute of Systems Neuroscience, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Svenja Caspers
- Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, 52428, Jülich, Germany; Institute for Anatomy I, Medical Faculty, Heinrich Heine University Düsseldorf, 40225, Düsseldorf, Germany
- Kathrin Reetz
- Department of Neurology, Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen University, 52074, Aachen, Germany
- Imis Dogan
- Jülich Aachen Research Alliance-BRAIN (JARA) Institute of Molecular Neuroscience and Neuroimaging, Forschungszentrum Jülich, Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen University, 52074, Aachen, Germany; Department of Neurology, Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen University, 52074, Aachen, Germany
- Claudia R Eickhoff
- Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, 52428, Jülich, Germany; Institute of Clinical Neuroscience and Medical Psychology, Medical Faculty, University of Düsseldorf, 40225, Düsseldorf, Germany
- Ji Chen
- Institute of Neuroscience and Medicine (INM-7, Brain and Behaviour), Research Centre Jülich, Jülich, Germany; Institute of Systems Neuroscience, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Julian Caspers
- Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, 52428, Jülich, Germany; Department of Diagnostic and Interventional Radiology, Medical Faculty, University of Düsseldorf, 40225, Düsseldorf, Germany
- Niels Reuter
- Institute of Neuroscience and Medicine (INM-7, Brain and Behaviour), Research Centre Jülich, Jülich, Germany; Institute of Systems Neuroscience, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Christian Mathys
- Department of Diagnostic and Interventional Radiology, Medical Faculty, University of Düsseldorf, 40225, Düsseldorf, Germany; Institute of Radiology and Neuroradiology, Evangelisches Krankenhaus, University of Oldenburg, 26129, Oldenburg, Germany
- André Aleman
- Department of Neuroscience, University Medical Center Groningen, University of Groningen, 9713 AV, Groningen, The Netherlands
- Renaud Jardri
- SCALab (CNRS UMR9193) & CHU de Lille, Hôpital Fontan, Pôle de Psychiatrie (CURE), Université de Lille, 59037, Lille, France
- Valentin Riedl
- Departments of Neuroradiology, Nuclear Medicine and Neuroimaging Center, Technische Universität München, 80333, Munich, Germany
- Iris E Sommer
- Institute of Radiology and Neuroradiology, Evangelisches Krankenhaus, University of Oldenburg, 26129, Oldenburg, Germany
- Kaustubh R Patil
- Institute of Neuroscience and Medicine (INM-7, Brain and Behaviour), Research Centre Jülich, Jülich, Germany; Institute of Systems Neuroscience, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
36
Louthan A, Gray L, Gabriele ML. Multi-sensory (auditory and somatosensory) pre-pulse inhibition in mice. Physiol Behav 2020; 222:112901. [PMID: 32360813 DOI: 10.1016/j.physbeh.2020.112901] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Received: 10/28/2019] [Revised: 03/31/2020] [Accepted: 03/31/2020] [Indexed: 12/27/2022]
Abstract
We investigated the perception of two mechanoreceptive modalities alone and in combination, examining the main effects of and interaction between auditory and somatosensory stimulation in mice. Fifteen C57BL/6J mice between the ages of 1 and 6 months were tested three times each. The experimental design roughly followed published procedures using pre-pulse inhibition (PPI) of the acoustic startle response, except that pre-pulses included vibration of the test chamber as well as soft sounds. Auditory pre-pulses were 80 dB broadband noises of 4, 9, 25, or 45 ms duration. Vibrations were of the same durations but of different frequencies (500, 460, 360, and 220 Hz). Pre-pulse inhibition increased with the duration of the auditory pre-pulses, as expected. There was significant PPI to some but not all vibrotactile pre-pulses. Multimodal PPI was approximately additive (no significant auditory-by-somatosensory interaction). PPI increased more with age for somatosensory than for auditory pre-pulses. Future studies of multi-modal psychophysics in various mouse mutants could lend support to more mechanistic studies of neural specificity and possibly autism, tinnitus, and PTSD.
37
Shao W, Xiang S, Zhang Z, Huang K, Zhang J. Hyper-graph based sparse canonical correlation analysis for the diagnosis of Alzheimer's disease from multi-dimensional genomic data. Methods 2020; 189:86-94. [PMID: 32360353 DOI: 10.1016/j.ymeth.2020.04.008] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Received: 01/16/2020] [Revised: 03/30/2020] [Accepted: 04/23/2020] [Indexed: 10/24/2022]
Abstract
The effective and accurate diagnosis of Alzheimer's disease (AD), especially in its early stage (i.e., mild cognitive impairment (MCI)), remains a major challenge in AD research. So far, multiple biomarkers have been associated with AD diagnosis and progression. However, most existing research utilized only single-modality data for diagnostic biomarker identification and thus did not take advantage of multi-modal data, which provide comprehensive and complementary information at multiple levels. In this paper, we integrate multi-modal genomic data from postmortem AD brains (i.e., mRNA, miRNA and epigenomic data) and propose a hyper-graph based sparse canonical correlation analysis (HGSCCA) method to extract the most correlated multi-modal biomarkers associated with AD and MCI. Specifically, our model utilizes the sparse canonical correlation analysis (SCCA) framework, which aims at finding the best linear projections for each input modality so that the strongest correlation within the selected features of multi-dimensional genomic data can be captured. In addition, to account for high-order relationships among subjects, we introduce a hyper-graph based regularization term that leads to the selection of more discriminative biomarkers. To evaluate the effectiveness of the proposed method, we conduct experiments on the well-known AD cohort study, the Religious Orders Study and Memory and Aging Project (ROSMAP) dataset. The results show that our method can not only identify meaningful biomarkers for the diagnosis of AD, but also achieve superior classification performance compared with competing methods.
Affiliation(s)
- Wei Shao
- Department of Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA
- Shunian Xiang
- Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University, Shenzhen 518060, China; Department of Medical & Molecular Genetics, Indiana University, Indianapolis, IN 46202, USA
- Zuoyi Zhang
- Department of Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA; Regenstrief Institute, Indianapolis, IN 46202, USA
- Kun Huang
- Department of Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA; Regenstrief Institute, Indianapolis, IN 46202, USA
- Jie Zhang
- Department of Medical & Molecular Genetics, Indiana University, Indianapolis, IN 46202, USA
38
Plassard AJ, Bao S, D'Haese PF, Pallavaram S, Claassen DO, Dawant BM, Landman BA. Multi-modal imaging with specialized sequences improves accuracy of the automated subcortical grey matter segmentation. Magn Reson Imaging 2019; 61:131-136. [PMID: 31121202 PMCID: PMC6980439 DOI: 10.1016/j.mri.2019.05.025] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Received: 02/09/2019] [Revised: 04/23/2019] [Accepted: 05/19/2019] [Indexed: 10/26/2022]
Abstract
The basal ganglia and limbic system, particularly the thalamus, putamen, internal and external globus pallidus, substantia nigra, and sub-thalamic nucleus, comprise a clinically relevant signal network for Parkinson's disease. To manually trace these structures, a combination of high-resolution and specialized sequences at 7 T is used, but it is not feasible to routinely scan clinical patients on those scanners. Targeted imaging sequences at 3 T have been presented to enhance contrast in a select group of these structures. In this work, we show that a series of atlases generated at 7 T can be used to accurately segment these structures at 3 T using a combination of standard and optimized imaging sequences, though no single approach provided the best result across all structures. In the thalamus and putamen, a median Dice Similarity Coefficient (DSC) over 0.88 and a mean surface distance <1.0 mm were achieved using a combination of T1 and optimized inversion recovery imaging sequences. In the internal and external globus pallidus, a DSC over 0.75 and a mean surface distance <1.2 mm were achieved using a combination of T1 and inversion recovery imaging sequences. In the substantia nigra and sub-thalamic nucleus, a DSC over 0.6 and a mean surface distance <1.0 mm were achieved using the inversion recovery imaging sequence. On average, using T1 and optimized inversion recovery together significantly improved segmentation results over either modality alone (p < 0.05, Wilcoxon signed-rank test).
Affiliation(s)
- Andrew J Plassard
- Computer Science, Vanderbilt University, 2301 Vanderbilt Place, Nashville, TN 37235, USA
- Shunxing Bao
- Computer Science, Vanderbilt University, 2301 Vanderbilt Place, Nashville, TN 37235, USA
- Pierre F D'Haese
- Electrical Engineering, Vanderbilt University, 2301 Vanderbilt Place, Nashville, TN 37235, USA
- Srivatsan Pallavaram
- Electrical Engineering, Vanderbilt University, 2301 Vanderbilt Place, Nashville, TN 37235, USA
- Daniel O Claassen
- Neurology, Vanderbilt University, 2301 Vanderbilt Place, Nashville, TN 37235, USA
- Benoit M Dawant
- Computer Science, Vanderbilt University, 2301 Vanderbilt Place, Nashville, TN 37235, USA; Electrical Engineering, Vanderbilt University, 2301 Vanderbilt Place, Nashville, TN 37235, USA
- Bennett A Landman
- Computer Science, Vanderbilt University, 2301 Vanderbilt Place, Nashville, TN 37235, USA; Electrical Engineering, Vanderbilt University, 2301 Vanderbilt Place, Nashville, TN 37235, USA
39
Saffo S, Taddei TH. Systemic Management for Advanced Hepatocellular Carcinoma: A Review of the Molecular Pathways of Carcinogenesis, Current and Emerging Therapies, and Novel Treatment Strategies. Dig Dis Sci 2019; 64:1016-29. [PMID: 30887150 DOI: 10.1007/s10620-019-05582-x] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Indexed: 12/12/2022]
Abstract
Hepatocellular carcinoma (HCC) arises from a number of cirrhosis-related and non-cirrhosis-related exposures and is one of the leading causes of cancer-related deaths worldwide. Achieving a durable cure currently relies on either resection or transplantation, but since most patients will be diagnosed with inoperable disease, there is great interest in developing more effective systemic therapies. At a molecular level, HCC is heterogeneous, but initial treatment strategies, including the use of multi-targeted tyrosine kinase inhibitors and checkpoint inhibitors, have been fairly homogeneous, depending on general host factors and overall tumor burden rather than specific molecular signatures. Over the past two decades, however, there has been significant success in identifying key molecular targets, including driver mutations involving the telomerase reverse transcriptase, p53, and beta-catenin genes, and significant work is now being devoted to translating these discoveries into the development of robust and well-tolerated targeted therapies. Furthermore, multi-modal therapies have also begun to emerge, harnessing possible synergism amongst a variety of different treatment classes. As the findings of these landmark trials become available over the next several years, the landscape of the systemic management of advanced HCC will change significantly.
40
Ghelfi M, Maddalena LA, Stuart JA, Atkinson J, Harroun TA, Marquardt D. Vitamin E-inspired multi-scale imaging agent. Bioorg Med Chem Lett 2019; 29:107-14. [PMID: 30459096 DOI: 10.1016/j.bmcl.2018.10.052] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Received: 07/31/2018] [Revised: 09/13/2018] [Accepted: 10/31/2018] [Indexed: 12/18/2022]
Abstract
The production and use of multi-modal imaging agents is on the rise. The vast majority of these agents, however, are limited to a single length scale, typically that of organs or tissues. This work explores the synthesis of such an imaging agent and discusses the applications of our vitamin E-inspired multi-modal, multi-length-scale imaging agent TB-Toc ((S,E)-5,5-difluoro-7-(2-(5-((6-hydroxy-2,5,7,8-tetramethylchroman-2-yl) methyl) thiophen-2-yl) vinyl)-9-methyl-5H-dipyrrolo-[1,2-c:2',1'-f][1,3,2]diazaborinin-4-ium-5-uide). We investigate the toxicity of TB-Toc, its starting materials, and the lipid-based delivery vehicle in mouse myoblasts and fibroblasts. Further, we investigate the uptake of TB-Toc delivered to cultured cells in both solvent and liposomes. TB-Toc has low toxicity: no change in cell viability was observed up to concentrations of 10 mM. TB-Toc shows time-dependent cellular uptake that is complete in about 30 min. This work is the first step in demonstrating that our vitamin E derivatives are viable multi-modal, multi-length-scale diagnostic tools.
41. Slator P, Aughwane R, Cade G, Taylor D, David AL, Lewis R, Jauniaux E, Desjardins A, Salomon LJ, Millischer AE, Tsatsaris V, Rutherford M, Johnstone ED, Melbourne A. Placenta Imaging Workshop 2018 report: Multiscale and multimodal approaches. Placenta 2018; 79:78-82. [PMID: 30396518] [DOI: 10.1016/j.placenta.2018.10.010]
Abstract
The Centre for Medical Image Computing (CMIC) at University College London (UCL) hosted a two-day workshop on placenta imaging on April 12th and 13th 2018. The workshop consisted of 10 invited talks, 3 contributed talks, a poster session, a public interaction session and a panel discussion about the future direction of placental imaging. With approximately 50 placental researchers in attendance, the workshop was a platform for engineers, clinicians and medical experts in the field to network and exchange ideas. Attendees had the chance to explore over 20 posters with subjects ranging from the movement of blood within the placenta to the efficient segmentation of fetal MRI using deep learning tools. UCL public engagement specialists also presented a poster, encouraging attendees to learn more about how to engage patients and the public with their research, creating spaces for mutual learning and dialogue.
Affiliation(s)
- Paddy Slator
- Centre for Medical Image Computing and Department of Computer Science, University College London, UK.
- Rosalind Aughwane
- Dept. of Medical Physics and Biomedical Engineering, University College London, UK; Institute for Women's Health, University College London, UK
- Georgina Cade
- Dept. of Medical Physics and Biomedical Engineering, University College London, UK
- Daniel Taylor
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, UK
- Anna L David
- Institute for Women's Health, University College London, UK; NIHR University College London Hospitals Biomedical Research Centre, London, UK
- Eric Jauniaux
- Institute for Women's Health, University College London, UK
- Adrien Desjardins
- Dept. of Medical Physics and Biomedical Engineering, University College London, UK
- Laurent J Salomon
- Hôpital Necker & Paris Descartes University, France, LUMIERE Platform, EHU Fetus Affiliated to the IMAGINE Institute, France
- Anne-Elodie Millischer
- Hôpital Necker & Paris Descartes University, France, LUMIERE Platform, EHU Fetus Affiliated to the IMAGINE Institute, France
- Edward D Johnstone
- Maternal and Fetal Health Research Centre, Division of Developmental Biology and Medicine, School of Medical Sciences, Faculty of Biology, Medicine and Health, University of Manchester, UK
- Andrew Melbourne
- Dept. of Medical Physics and Biomedical Engineering, University College London, UK; King's College London, UK
42. Hu B, Wang X, He JB, Dai YJ, Zhang J, Yu Y, Sun Q, Yan LF, Hu YC, Nan HY, Yang Y, Kaye AD, Cui GB, Wang W. Structural and functional brain changes in perimenopausal women who are susceptible to migraine: a study protocol of multi-modal MRI trial. BMC Med Imaging 2018; 18:26. [PMID: 30189858] [PMCID: PMC6127929] [DOI: 10.1186/s12880-018-0272-6]
Abstract
Background As a common clinical symptom that often bothers midlife females, migraine is closely associated with perimenopause. Previous studies suggest that one of its most prominent triggers is the sudden decline of estrogen during the perimenopausal period. Hormone replacement therapy (HRT) is widely used to prevent this suffering in perimenopausal women, but an effective diagnostic system for quantifying the severity of the disease is lacking. To avoid the abuse and overuse of HRT, we propose to conduct a diagnostic trial using multi-modal MRI techniques to quantify the severity of migraine in perimenopausal women who are susceptible to the decline of estrogen. Methods Perimenopausal women suffering from migraine will be recruited from the pain clinic of our hospital. Perimenopausal women not suffering from any kind of headache will be recruited from the local community. Clinical assessment and multi-modal MR imaging examination will be conducted. Follow-up will be conducted once every half-year for 3 years. Pain behavior, neuropsychology scores, and fMRI analysis, combined with suitable statistical software, will be used to reveal the potential association between these traits and susceptibility to migraine. Discussion Multi-modal imaging features of both healthy controls and perimenopausal women who are susceptible to estrogen decline will be acquired. Imaging features will include volumetric characteristics, white matter integrity, functional characteristics, topological properties, and perfusion properties. Clinical information, such as basic information, blood estrogen level, migraine characteristics, and a battery of neurological scales, will also be used for statistical assessment. This clinical trial would help to build an effective screening system for quantifying illness severity in susceptible women during the perimenopausal period. Trial registration This study has already been registered at ClinicalTrials.gov (ID: NCT02820974).
Registration date: September 28th, 2014.
Affiliation(s)
- Bo Hu
- Department of Radiology & Functional and Molecular Imaging Key Lab of Shaanxi Province, Tangdu Hospital, Fourth Military Medical University (Air Force Medical University), 569 Xinsi Road, Xi'an, 710038, Shaanxi, China
- Xu Wang
- Student Brigade, Fourth Military Medical University (Air Force Medical University), 169 West Changle Road, Xi'an, 710032, Shaanxi Province, China
- Jie-Bing He
- Student Brigade, Fourth Military Medical University (Air Force Medical University), 169 West Changle Road, Xi'an, 710032, Shaanxi Province, China
- Yu-Jie Dai
- Department of Clinical Nutrition, Xijing Hospital, The Fourth Military Medical University, Xi'an, China
- Jin Zhang
- Department of Radiology & Functional and Molecular Imaging Key Lab of Shaanxi Province, Tangdu Hospital, Fourth Military Medical University (Air Force Medical University), 569 Xinsi Road, Xi'an, 710038, Shaanxi, China
- Ying Yu
- Department of Radiology & Functional and Molecular Imaging Key Lab of Shaanxi Province, Tangdu Hospital, Fourth Military Medical University (Air Force Medical University), 569 Xinsi Road, Xi'an, 710038, Shaanxi, China
- Qian Sun
- Department of Radiology & Functional and Molecular Imaging Key Lab of Shaanxi Province, Tangdu Hospital, Fourth Military Medical University (Air Force Medical University), 569 Xinsi Road, Xi'an, 710038, Shaanxi, China
- Lin-Feng Yan
- Department of Radiology & Functional and Molecular Imaging Key Lab of Shaanxi Province, Tangdu Hospital, Fourth Military Medical University (Air Force Medical University), 569 Xinsi Road, Xi'an, 710038, Shaanxi, China
- Yu-Chuan Hu
- Department of Radiology & Functional and Molecular Imaging Key Lab of Shaanxi Province, Tangdu Hospital, Fourth Military Medical University (Air Force Medical University), 569 Xinsi Road, Xi'an, 710038, Shaanxi, China
- Hai-Yan Nan
- Department of Radiology & Functional and Molecular Imaging Key Lab of Shaanxi Province, Tangdu Hospital, Fourth Military Medical University (Air Force Medical University), 569 Xinsi Road, Xi'an, 710038, Shaanxi, China
- Yang Yang
- Department of Radiology & Functional and Molecular Imaging Key Lab of Shaanxi Province, Tangdu Hospital, Fourth Military Medical University (Air Force Medical University), 569 Xinsi Road, Xi'an, 710038, Shaanxi, China
- Alan D Kaye
- Departments of Anesthesiology and Pharmacology, Louisiana State University School of Medicine, New Orleans, Louisiana, USA
- Guang-Bin Cui
- Department of Radiology & Functional and Molecular Imaging Key Lab of Shaanxi Province, Tangdu Hospital, Fourth Military Medical University (Air Force Medical University), 569 Xinsi Road, Xi'an, 710038, Shaanxi, China.
- Wen Wang
- Department of Radiology & Functional and Molecular Imaging Key Lab of Shaanxi Province, Tangdu Hospital, Fourth Military Medical University (Air Force Medical University), 569 Xinsi Road, Xi'an, 710038, Shaanxi, China.
43. Brand L, Wang H, Huang H, Risacher S, Saykin A, Shen L. Joint High-Order Multi-Task Feature Learning to Predict the Progression of Alzheimer's Disease. Med Image Comput Comput Assist Interv 2018; 11070:555-562. [PMID: 31179446] [PMCID: PMC6553480] [DOI: 10.1007/978-3-030-00928-1_63]
Abstract
Alzheimer's disease (AD) is a degenerative brain disease that affects millions of people around the world. As populations in the United States and worldwide age, the prevalence of Alzheimer's disease will only increase. In turn, the social and financial costs of AD will create a difficult environment for many families and caregivers across the globe. By combining genetic information, brain scans, and clinical data, gathered over time through the Alzheimer's Disease Neuroimaging Initiative (ADNI), we propose a new Joint High-Order Multi-Modal Multi-Task Feature Learning method to predict the cognitive performance and diagnosis of patients with and without AD.
Affiliation(s)
- Lodewijk Brand
- Department of Computer Science, Colorado School of Mines, Golden, CO, USA
- Hua Wang
- Department of Computer Science, Colorado School of Mines, Golden, CO, USA
- Heng Huang
- Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA, USA
- Shannon Risacher
- Department of Radiology and Imaging Sciences, Department of BioHealth Informatics, Indiana University, Indianapolis, IN, USA
- Andrew Saykin
- Department of Radiology and Imaging Sciences, Department of BioHealth Informatics, Indiana University, Indianapolis, IN, USA
- Li Shen
- Department of Radiology and Imaging Sciences, Department of BioHealth Informatics, Indiana University, Indianapolis, IN, USA; Department of Biostatistics, Epidemiology and Informatics, University of Pennsylvania, Philadelphia, PA, USA
44. Lin Y, Wan N, Sheets S, Gong X, Davies A. A multi-modal relative spatial access assessment approach to measure spatial accessibility to primary care providers. Int J Health Geogr 2018; 17:33. [PMID: 30139378] [PMCID: PMC6108155] [DOI: 10.1186/s12942-018-0153-9]
Abstract
Two-step floating catchment area (2SFCA) methods that account for multiple transportation modes provide a more realistic representation of accessibility than single-mode methods. However, the use of the impedance coefficient in an impedance function (e.g., a Gaussian function) introduces uncertainty into 2SFCA results. This paper proposes an enhancement to multi-modal 2SFCA methods by incorporating the concept of a spatial access ratio (SPAR) for spatial access measurement. SPAR is the ratio of a given place's access score to the mean of all access scores in the study area. An empirical study on spatial access to primary care physicians (PCPs) in the city of Albuquerque, NM, USA was conducted to evaluate the effectiveness of SPAR in addressing the uncertainty introduced by the choice of the impedance coefficient in the classic Gaussian impedance function. We used ESRI StreetMap Premium and General Transit Feed Specification (GTFS) data to calculate travel times to PCPs by car and bus. We first generated two spatial access scores, using different catchment sizes for car and bus respectively, for each population (demand) location: an accessibility score for car drivers and an accessibility score for bus riders. We then computed three corresponding spatial access ratios from these scores for each population location. Sensitivity analysis results suggest that the spatial access scores vary significantly when using different impedance coefficients (p < 0.05), while SPAR remains stable (p = 1). These results suggest that a spatial access ratio can significantly reduce impedance-coefficient-related uncertainty in multi-modal 2SFCA methods.
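The spatial access ratio defined in this abstract is a one-line normalization of per-location access scores. A minimal sketch (hypothetical scores, not the authors' data or code) shows why it damps sensitivity to the impedance coefficient:

```python
import numpy as np

def spatial_access_ratio(access_scores):
    """SPAR: each location's access score divided by the study-area mean."""
    scores = np.asarray(access_scores, dtype=float)
    return scores / scores.mean()

# Hypothetical 2SFCA access scores for five population locations
scores = [0.8, 1.2, 0.5, 2.0, 0.5]
spar = spatial_access_ratio(scores)  # mean is 1.0, so SPAR equals the scores here

# SPAR is invariant to a uniform rescaling of all scores, which is roughly
# what changing the impedance coefficient does across the study area.
assert np.allclose(spar, spatial_access_ratio(np.asarray(scores) * 3.7))
```

The invariance is exact only when a coefficient change rescales every score by the same factor; the paper's sensitivity analysis tests how closely that holds in practice.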
Affiliation(s)
- Yan Lin
- Department of Geography and Environmental Studies, 1 University of New Mexico, MSC 01 1110, Albuquerque, NM, 87131, USA.
- Neng Wan
- Department of Geography, University of Utah, 260 S. Central Campus Dr., Room 270, Salt Lake City, UT, 84112-9155, USA
- Sagert Sheets
- Department of Geography and Environmental Studies, 1 University of New Mexico, MSC 01 1110, Albuquerque, NM, 87131, USA
- Xi Gong
- Department of Geography and Environmental Studies, 1 University of New Mexico, MSC 01 1110, Albuquerque, NM, 87131, USA
- Angela Davies
- Department of Geography and Environmental Studies, 1 University of New Mexico, MSC 01 1110, Albuquerque, NM, 87131, USA
45. Ara L, Bashar F, Tamal MEH, Siddiquee NKA, Mowla SMN, Sarker SA. Transferring knowledge into practice: a multi-modal, multi-centre intervention for enhancing nurses' infection control competency in Bangladesh. J Hosp Infect 2018; 102:234-240. [PMID: 30081147] [DOI: 10.1016/j.jhin.2018.07.042]
Abstract
BACKGROUND Nurses are considered the key to infection prevention, as they play a major role in treatment as well as in the care of patients. AIM To assess the role of a multi-modal intervention (MMI) in improving nurses' competency in, and adherence to, standard infection control practices in Bangladesh. METHODS The study adopted a pretest-post-test intervention approach, in three different periods (from 2012 to 2017) in five hospitals (two public, two private, and one autonomous) in Bangladesh. Each study period was divided into three phases: pretest, MMI, and post-test. Data were collected on 642 nurses by direct observation using a structured checklist. FINDINGS After implementing the MMI, overall hand hygiene compliance significantly increased before patient contact (from 1.3% to 50.2%; P < 0.000) and after patient contact (from 2.8% to 59.6%; P < 0.000). Remarkable improvements were also achieved in adherence to glove use (from 14.6% to 57.6%; P < 0.000), maintaining sterility of equipment during aseptic techniques (from 34.9% to 86%; P < 0.000), biomedical waste segregation (from 1.8% to 81.3%; P < 0.000), and labelling of procedural sites (from 0% to 85.7%; P < 0.000). Moreover, the needlestick injury rate notably decreased (from 6.2% to 0.6%; P < 0.000). CONCLUSION MMI can play a vital role in improving nurses' compliance with standard infection control practices. Such context-specific interventions, which are crucial for preventing healthcare-associated infections and for decreasing occupational hazards, should be replicated in resource-poor countries to help achieve universal health coverage by 2030.
Affiliation(s)
- L Ara
- Nutrition and Clinical Services Division, icddr,b, Mohakhali, Dhaka 1212, Bangladesh.
- F Bashar
- Nutrition and Clinical Services Division, icddr,b, Mohakhali, Dhaka 1212, Bangladesh
- M E H Tamal
- Nutrition and Clinical Services Division, icddr,b, Mohakhali, Dhaka 1212, Bangladesh
- N K A Siddiquee
- Nutrition and Clinical Services Division, icddr,b, Mohakhali, Dhaka 1212, Bangladesh
- S M N Mowla
- Nutrition and Clinical Services Division, icddr,b, Mohakhali, Dhaka 1212, Bangladesh
- S A Sarker
- Nutrition and Clinical Services Division, icddr,b, Mohakhali, Dhaka 1212, Bangladesh
46. Trujillo JP, Simanova I, Bekkering H, Özyürek A. Communicative intent modulates production and comprehension of actions and gestures: A Kinect study. Cognition 2018; 180:38-51. [PMID: 29981967] [DOI: 10.1016/j.cognition.2018.04.003]
Abstract
Actions may be used to act directly on the world around us or as a means of communication. Effective communication requires the addressee to recognize the act as communicative. Humans are sensitive to ostensive communicative cues, such as direct eye gaze (Csibra & Gergely, 2009), but additional cues may be present in the action or gesture itself. Here we investigate features that characterize the initiation of a communicative interaction in both production and comprehension. We asked 40 participants to perform 31 pairs of object-directed actions and representational gestures in more- or less-communicative contexts. Data were collected using motion-capture technology for kinematics and video recording for eye gaze. With these data, we focused on two issues: first, whether and how actions and gestures are systematically modulated when performed in a communicative context; second, whether observers exploit such kinematic information to classify an act as communicative. Our study showed that during production the communicative context modulates the space-time dimensions of kinematics and elicits an increase in addressee-directed eye gaze. Naïve participants detected communicative intent in actions and gestures preferentially using eye-gaze information, utilizing kinematic information only when eye gaze was unavailable. Our study highlights the general communicative modulation of action and gesture kinematics during production, but also shows that addressees exploit this modulation to recognize communicative intention only in the absence of eye gaze. We discuss these findings in terms of the distinctive but potentially overlapping functions of addressee-directed eye gaze and kinematic modulation within the wider context of human communication and learning.
Affiliation(s)
- James P Trujillo
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, The Netherlands; Centre for Language Studies, Radboud University Nijmegen, The Netherlands.
- Irina Simanova
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, The Netherlands
- Harold Bekkering
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, The Netherlands
- Asli Özyürek
- Centre for Language Studies, Radboud University Nijmegen, The Netherlands; Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525XD Nijmegen, The Netherlands
47. Tao Z, Yao Z, Kong H, Duan F, Li G. Spatial accessibility to healthcare services in Shenzhen, China: improving the multi-modal two-step floating catchment area method by estimating travel time via online map APIs. BMC Health Serv Res 2018; 18:345. [PMID: 29743111] [PMCID: PMC5944163] [DOI: 10.1186/s12913-018-3132-8]
Abstract
BACKGROUND Shenzhen has rapidly grown into a megacity in recent decades, and providing sufficient healthcare services is a challenging task for the Shenzhen government. The spatial configuration of healthcare services influences how conveniently consumers can obtain them, and spatial accessibility has been widely adopted as a scientific measure for evaluating the rationality of that configuration. METHODS The multi-modal two-step floating catchment area (2SFCA) method is an important advance in healthcare accessibility modelling, enabling the simultaneous assessment of spatial accessibility via multiple transport modes. This study further develops the multi-modal 2SFCA method by introducing online map APIs to improve the estimation of travel time by public transit and by car. RESULTS The distribution of healthcare accessibility by multi-modal 2SFCA shows significant spatial disparity. Moreover, by dividing the multi-modal accessibility into car-mode and transit-mode accessibility, this study finds that the transit-mode subgroup is disadvantaged in the competition for healthcare services with the car-mode subgroup. The disparity in transit-mode accessibility is the main reason for the uneven pattern of healthcare accessibility in Shenzhen. CONCLUSIONS The findings suggest improving public transit conditions for accessing healthcare services to reduce the disparity in healthcare accessibility. More healthcare services should be allocated in eastern and western Shenzhen, especially sub-districts in Dapeng District and western Bao'an District. As these findings cannot be obtained with the traditional single-modal 2SFCA method, the advantage of the multi-modal 2SFCA method is significant for both healthcare studies and healthcare system planning.
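The two steps of the 2SFCA method referenced in this abstract are simple to state: step 1 computes each provider's supply-to-demand ratio within its catchment; step 2 sums those ratios over the providers reachable from each population location. A minimal single-mode sketch with toy data (all numbers hypothetical; the paper's multi-modal variant additionally sources travel times per mode from online map APIs):

```python
import numpy as np

def two_step_fca(supply, demand, travel_time, catchment):
    """Classic 2SFCA with a binary catchment.
    supply: capacity at each provider site (n_supply,)
    demand: population at each demand location (n_demand,)
    travel_time: minutes, shape (n_demand, n_supply)
    """
    within = travel_time <= catchment                 # reachability mask
    # Step 1: supply-to-demand ratio R_j for each provider j,
    # dividing its capacity by the population inside its catchment.
    demand_served = within.T @ demand
    R = np.divide(supply, demand_served,
                  out=np.zeros_like(supply, dtype=float),
                  where=demand_served > 0)
    # Step 2: accessibility A_i = sum of R_j over providers reachable from i.
    return within @ R

# Hypothetical toy data: 3 population locations, 2 clinics
supply = np.array([10.0, 5.0])                        # e.g. physicians per clinic
demand = np.array([1000.0, 2000.0, 500.0])            # population counts
travel_time = np.array([[10, 40],
                        [20, 15],
                        [50, 10]])
access = two_step_fca(supply, demand, travel_time, catchment=30)
# access ≈ [0.00333, 0.00533, 0.00200] physicians per capita
```

A multi-modal version would compute a mask and travel-time matrix per transport mode and combine the mode-specific results by each location's mode shares.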
Affiliation(s)
- Zhuolin Tao
- School of Urban Planning and Design, Peking University, Shenzhen, 518055, Guangdong, China
- Zaoxing Yao
- Economics and Business Administration, Chongqing University, Chongqing, 400030, China.
- Hui Kong
- Department of Geography, The Ohio State University, Columbus, OH, 43210, USA
- Fei Duan
- School of Urban Planning and Design, Peking University, Shenzhen, 518055, Guangdong, China
- Guicai Li
- School of Urban Planning and Design, Peking University, Shenzhen, 518055, Guangdong, China
48. Brown MR, Burnham MS, Johnson SA, Lute SC, Brorson KA, Roush DJ. Evaluating the effect of in-process material on the binding mechanisms of surrogate viral particles to a multi-modal anion exchange resin. J Biotechnol 2018; 267:29-35. [PMID: 29278725] [DOI: 10.1016/j.jbiotec.2017.12.018]
Abstract
Bacteriophage binding to multi-modal anion exchange resin may involve both anion exchange and hydrophobic interactions, or the mechanism can be dominated by a single moiety. However, previous studies have reported binding mechanisms defined only for simple solutions containing buffer and a surrogate viral spike (i.e., bacteriophage ΦX174, PR772, or PP7). We employed phage-spiked in-process monoclonal antibody (mAb) pools to model binding under bioprocessing conditions. These experiments allow the individual contributions of the mAb, in-process impurities, and buffer composition to the mechanistic removal of phages to be studied. PP7 and PR772 bind the multi-modal resin synergistically through the positively charged quaternary amine and the hydrophobic aromatic phenyl group. ΦX174's binding mechanism remains inconclusive under the operating conditions tested.
49. Solheim TS, Laird BJA, Balstad TR, Stene GB, Bye A, Johns N, Pettersen CH, Fallon M, Fayers P, Fearon K, Kaasa S. A randomized phase II feasibility trial of a multimodal intervention for the management of cachexia in lung and pancreatic cancer. J Cachexia Sarcopenia Muscle 2017; 8:778-788. [PMID: 28614627] [PMCID: PMC5659068] [DOI: 10.1002/jcsm.12201]
Abstract
BACKGROUND Cancer cachexia is a syndrome of weight loss (including muscle and fat), anorexia, and decreased physical function. It has been suggested that the optimal treatment for cachexia should be a multimodal intervention. The primary aim of this study was to examine the feasibility and safety of a multimodal intervention (n-3 polyunsaturated fatty acid nutritional supplements, exercise, and anti-inflammatory medication: celecoxib) for cancer cachexia in patients with incurable lung or pancreatic cancer undergoing chemotherapy. METHODS Patients receiving two cycles of standard chemotherapy were randomized to either the multimodal cachexia intervention or standard care. Primary outcome measures were feasibility, assessed by recruitment, attrition, and compliance with the intervention (>50% of components in >50% of patients). Key secondary outcomes were change in weight, muscle mass, physical activity, safety, and survival. RESULTS Three hundred and ninety-nine patients were screened, resulting in 46 recruited (11.5%). Twenty-five patients were randomized to the intervention and 21 to standard care. Forty-one completed the study (attrition rate 11%). Compliance with the individual components of the intervention was 76% for celecoxib, 60% for exercise, and 48% for nutritional supplements. As expected from the sample size, there was no statistically significant effect on physical activity or muscle mass. There were no intervention-related serious adverse events, and survival was similar between the groups. CONCLUSIONS A multimodal cachexia intervention is feasible and safe in patients with incurable lung or pancreatic cancer; however, compliance with nutritional supplements was suboptimal. A phase III study is now underway to assess fully the effect of the intervention.
Affiliation(s)
- Tora S Solheim
- European Palliative Care Research Centre (PRC), Department of Cancer Research and Molecular Medicine, Faculty of Medicine, Norwegian University of Science and Technology (NTNU), Trondheim, Norway; Cancer Clinic, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
- Barry J A Laird
- European Palliative Care Research Centre (PRC), Department of Cancer Research and Molecular Medicine, Faculty of Medicine, Norwegian University of Science and Technology (NTNU), Trondheim, Norway; Edinburgh Cancer Research Centre, University of Edinburgh, Edinburgh, UK
- Trude Rakel Balstad
- European Palliative Care Research Centre (PRC), Department of Cancer Research and Molecular Medicine, Faculty of Medicine, Norwegian University of Science and Technology (NTNU), Trondheim, Norway; Cancer Clinic, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
- Guro B Stene
- Cancer Clinic, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
- Asta Bye
- Regional Advisory Unit for Palliative Care, Department of Oncology, Oslo University Hospital, Oslo, Norway; Department of Nursing and Health Promotion, Faculty of Health Sciences, Oslo and Akershus University College of Applied Sciences, Oslo, Norway
- Neil Johns
- Department of Surgery, School of Clinical Sciences, University of Edinburgh, Little France Crescent, Edinburgh, UK
- Caroline H Pettersen
- European Palliative Care Research Centre (PRC), Department of Cancer Research and Molecular Medicine, Faculty of Medicine, Norwegian University of Science and Technology (NTNU), Trondheim, Norway; Department of Laboratory Medicine, Children's and Women's Health, Faculty of Medicine, Trondheim, Norway
- Marie Fallon
- Edinburgh Cancer Research Centre, University of Edinburgh, Edinburgh, UK
- Peter Fayers
- European Palliative Care Research Centre (PRC), Department of Cancer Research and Molecular Medicine, Faculty of Medicine, Norwegian University of Science and Technology (NTNU), Trondheim, Norway; Institute of Applied Health Sciences, University of Aberdeen, Aberdeen, UK
- Kenneth Fearon
- Department of Surgery, School of Clinical Sciences, University of Edinburgh, Little France Crescent, Edinburgh, UK
- Stein Kaasa
- European Palliative Care Research Centre (PRC), Department of Cancer Research and Molecular Medicine, Faculty of Medicine, Norwegian University of Science and Technology (NTNU), Trondheim, Norway; Cancer Clinic, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
50. Knopp SJ, Bones PJ, Weddell SJ, Jones RD. A software framework for real-time multi-modal detection of microsleeps. Australas Phys Eng Sci Med 2017; 40:739-749. [PMID: 28573545] [DOI: 10.1007/s13246-017-0559-x]
Abstract
A software framework is described which was designed to process EEG, video of one eye, and head movement in real time, towards achieving early detection of microsleeps for prevention of fatal accidents, particularly in transport sectors. The framework is based around a pipeline structure with user-replaceable signal processing modules. This structure can encapsulate a wide variety of feature extraction and classification techniques and can be applied to detecting a variety of aspects of cognitive state. Users of the framework can implement signal processing plugins in C++ or Python. The framework also provides a graphical user interface and the ability to save and load data to and from arbitrary file formats. Two small studies are reported which demonstrate the capabilities of the framework in typical applications: monitoring eye closure and detecting simulated microsleeps. While specifically designed for microsleep detection/prediction, the software framework can be just as appropriately applied to (i) other measures of cognitive state and (ii) development of biomedical instruments for multi-modal real-time physiological monitoring and event detection in intensive care, anaesthesiology, cardiology, neurosurgery, etc. The software framework has been made freely available for researchers to use and modify under an open source licence.
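The pipeline-of-replaceable-modules structure this abstract describes can be sketched in a few lines. The names and stages below are hypothetical illustrations, not the framework's actual API (which supports C++ and Python plugins):

```python
from typing import Callable, List

class Pipeline:
    """Chain of user-replaceable signal-processing stages; each stage
    maps one sample dict to an enriched sample dict."""

    def __init__(self) -> None:
        self.stages: List[Callable[[dict], dict]] = []

    def add_stage(self, stage: Callable[[dict], dict]) -> "Pipeline":
        self.stages.append(stage)
        return self

    def process(self, sample: dict) -> dict:
        for stage in self.stages:
            sample = stage(sample)
        return sample

# Hypothetical stages: a feature extractor followed by a threshold classifier
def extract_eye_closure(sample: dict) -> dict:
    # Closure is the complement of the (normalized) eyelid aperture
    sample["closure"] = 1.0 - sample["eyelid_aperture"]
    return sample

def classify_microsleep(sample: dict) -> dict:
    # Toy rule: flag a microsleep when the eye is more than 80% closed
    sample["microsleep"] = sample["closure"] > 0.8
    return sample

detector = Pipeline().add_stage(extract_eye_closure).add_stage(classify_microsleep)
result = detector.process({"eyelid_aperture": 0.05})
# result["microsleep"] is True for a nearly closed eye
```

Swapping in a different classifier is a one-line change to the `add_stage` chain, which is the point of the plugin design: feature extraction and classification modules can be replaced without touching the rest of the pipeline.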
Affiliation(s)
- Simon J Knopp
- Department of Electrical and Computer Engineering, University of Canterbury, Christchurch, New Zealand; New Zealand Brain Research Institute, Christchurch, New Zealand.
- Philip J Bones
- Department of Electrical and Computer Engineering, University of Canterbury, Christchurch, New Zealand
- Stephen J Weddell
- Department of Electrical and Computer Engineering, University of Canterbury, Christchurch, New Zealand
- Richard D Jones
- Department of Electrical and Computer Engineering, University of Canterbury, Christchurch, New Zealand; New Zealand Brain Research Institute, Christchurch, New Zealand