1
Ersavas T, Smith MA, Mattick JS. Novel applications of Convolutional Neural Networks in the age of Transformers. Sci Rep 2024; 14:10000. [PMID: 38693215 PMCID: PMC11063149 DOI: 10.1038/s41598-024-60709-z] [Received: 01/16/2024] [Accepted: 04/26/2024]
Abstract
Convolutional Neural Networks (CNNs) have been central to the Deep Learning revolution and played a key role in initiating the new age of Artificial Intelligence. In recent years, however, newer architectures such as Transformers have dominated both research and practical applications. While CNNs still play critical roles in newer developments such as Generative AI, they are far from being thoroughly understood and utilised to their full potential. Here we show that CNNs can recognise patterns in images with scattered pixels and can be used to analyse complex, high-dimensional datasets by transforming them into pseudo-images with minimal processing, representing a more general approach to applying CNNs to data such as molecular biology, text, and speech. We introduce a pipeline called DeepMapper, which allows analysis of very high-dimensional datasets without intermediate filtering and dimension reduction, thus preserving the full texture of the data and enabling detection of small variations normally dismissed as 'noise'. We demonstrate that DeepMapper can identify very small perturbations in large datasets consisting mostly of random variables, and that it is superior in speed and on par in accuracy with prior work in processing large datasets with large numbers of features.
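The core idea of mapping an arbitrary high-dimensional vector into a pseudo-image can be sketched in a few lines. The padding-and-reshape mapping below is a hypothetical minimal version for illustration only; DeepMapper's actual transformation may differ.

```python
import numpy as np

def to_pseudo_image(features: np.ndarray) -> np.ndarray:
    """Map a 1-D feature vector onto a square 2-D 'pseudo-image' by
    zero-padding to the next perfect square and reshaping.
    (Illustrative sketch only; not the paper's exact mapping.)"""
    n = features.shape[0]
    side = int(np.ceil(np.sqrt(n)))
    padded = np.zeros(side * side, dtype=features.dtype)
    padded[:n] = features
    return padded.reshape(side, side)

img = to_pseudo_image(np.arange(10, dtype=float))  # 10 features -> 4x4 image
```

The resulting 2-D array can then be fed to any standard image CNN without hand-crafted filtering or dimension reduction.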
Affiliation(s)
- Tansel Ersavas
- School of Biotechnology and Biomolecular Sciences, UNSW Sydney, Sydney, NSW, 2052, Australia.
- Martin A Smith
- School of Biotechnology and Biomolecular Sciences, UNSW Sydney, Sydney, NSW, 2052, Australia
- Department of Biochemistry and Molecular Medicine, Faculty of Medicine, Université de Montréal, Montréal, QC, H3C 3J7, Canada
- CHU Sainte-Justine Research Centre, Montreal, Canada
- UNSW RNA Institute, UNSW Sydney, Australia
- John S Mattick
- School of Biotechnology and Biomolecular Sciences, UNSW Sydney, Sydney, NSW, 2052, Australia.
2
Alves VM, dos Santos Cardoso J, Gama J. Classification of Pulmonary Nodules in 2-[18F]FDG PET/CT Images with a 3D Convolutional Neural Network. Nucl Med Mol Imaging 2024; 58:9-24. [PMID: 38261899 PMCID: PMC10796312 DOI: 10.1007/s13139-023-00821-6] [Received: 12/05/2022] [Revised: 05/17/2023] [Accepted: 08/08/2023]
Abstract
Purpose 2-[18F]FDG PET/CT plays an important role in the management of pulmonary nodules. Convolutional neural networks (CNNs) automatically learn features from images and have the potential to improve the discrimination between malignant and benign pulmonary nodules. The purpose of this study was to develop and validate a CNN model for classification of pulmonary nodules from 2-[18F]FDG PET images. Methods One hundred thirteen participants, with one nodule per participant, were retrospectively selected. The 2-[18F]FDG PET images were preprocessed and annotated with the reference standard. The deep learning experiment entailed random data splitting into five sets. A test set was held out for evaluation of the final model. Four-fold cross-validation was performed on the remaining sets for training and evaluating a set of candidate models and for selecting the final model. Three types of 3D CNN architectures were trained from random weight initialization (Stacked 3D CNN, VGG-like and Inception-v2-like models), on both the original and augmented datasets. Transfer learning from ImageNet with ResNet-50 was also used. Results The final model (Stacked 3D CNN model) obtained an area under the ROC curve of 0.8385 (95% CI: 0.6455-1.0000) in the test set. In the test set, the model had a sensitivity of 80.00%, a specificity of 69.23% and an accuracy of 73.91% for an optimised decision threshold that assigns a higher cost to false negatives. Conclusion A 3D CNN model was effective at distinguishing benign from malignant pulmonary nodules in 2-[18F]FDG PET images. Supplementary Information The online version contains supplementary material available at 10.1007/s13139-023-00821-6.
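The data-splitting scheme described in Methods (a held-out test set plus four-fold cross-validation on the remainder) can be sketched generically; the fractions and seed below are assumptions, not the study's exact protocol.

```python
import random

def split_train_cv(n_samples: int, n_folds: int = 4, test_frac: float = 0.2, seed: int = 0):
    """Hold out a test set, then partition the remaining indices into
    k folds for cross-validation (generic sketch of the splitting scheme)."""
    rng = random.Random(seed)
    idx = list(range(n_samples))
    rng.shuffle(idx)
    n_test = int(n_samples * test_frac)
    test, rest = idx[:n_test], idx[n_test:]
    folds = [rest[i::n_folds] for i in range(n_folds)]  # round-robin folds
    return test, folds

test_idx, folds = split_train_cv(113)  # 113 participants as in the study
```

Each fold then serves once as the validation set while the other three train a candidate model; the held-out test set is touched only by the final selected model.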
Affiliation(s)
- Victor Manuel Alves
- Faculty of Economics, University of Porto, Rua Dr. Roberto Frias, 4200-464 Porto, Portugal
- Department of Nuclear Medicine, University Hospital Center of São João, Alameda Prof. Hernâni Monteiro, 4200-319 Porto, Portugal
- Jaime dos Santos Cardoso
- Faculty of Engineering, University of Porto, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal
- Institute for Systems and Computer Engineering, Technology and Science (INESC TEC), Rua Dr. Roberto Frias, 4200-465 Porto, Portugal
- João Gama
- Faculty of Economics, University of Porto, Rua Dr. Roberto Frias, 4200-464 Porto, Portugal
- Institute for Systems and Computer Engineering, Technology and Science (INESC TEC), Rua Dr. Roberto Frias, 4200-465 Porto, Portugal
3
Song L, Ren Y, Xu S, Hou Y, He X. A hybrid spatiotemporal deep belief network and sparse representation-based framework reveals multilevel core functional components in decoding multitask fMRI signals. Netw Neurosci 2023; 7:1513-1532. [PMID: 38144693 PMCID: PMC10745082 DOI: 10.1162/netn_a_00334] [Received: 12/05/2022] [Accepted: 08/17/2023]
Abstract
Decoding human brain activity from task-based functional brain imaging data is of great significance for uncovering the functioning mechanisms of the human mind. Currently, most feature-extraction-based methods for brain state decoding are shallow machine learning models, which may struggle to capture complex and precise spatiotemporal patterns of brain activity from highly noisy raw fMRI data. Moreover, although decoding models based on deep learning benefit from a multilayer structure that can extract spatiotemporal features at multiple scales, relatively large fMRI datasets are indispensable, and the explainability of their results is elusive. To address these problems, we propose a computational framework based on a hybrid spatiotemporal deep belief network and sparse representations to differentiate multitask fMRI (tfMRI) signals. Using a relatively small cohort of tfMRI data as a test bed, our framework achieved an average classification accuracy of 97.86% and defined the multilevel temporal and spatial patterns of multiple cognitive tasks. Intriguingly, our model can characterize the key components for differentiating multitask fMRI signals. Overall, the proposed framework can identify interpretable and discriminative fMRI composition patterns at multiple scales, offering an effective methodology for basic neuroscience and clinical research with relatively small cohorts.
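The sparse-representation component can be illustrated with a generic ISTA (iterative shrinkage-thresholding) solver for the lasso problem; this is a stand-in sketch, not the paper's exact formulation, and the dictionary, signal, and penalty below are toy assumptions.

```python
import numpy as np

def soft_threshold(x: np.ndarray, lam: float) -> np.ndarray:
    """Proximal operator of the L1 norm (element-wise shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_code(D: np.ndarray, y: np.ndarray, lam: float = 0.1, n_iter: int = 200) -> np.ndarray:
    """Sparse representation of signal y in dictionary D via ISTA,
    minimising 0.5*||D a - y||^2 + lam*||a||_1."""
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y)
        a = soft_threshold(a - grad / L, lam / L)
    return a

D = np.eye(4)                            # trivial dictionary for illustration
y = np.array([1.0, 0.05, 0.0, 2.0])
a = sparse_code(D, y, lam=0.1)           # small coefficients shrink to zero
```

With an identity dictionary the solution is simply the soft-thresholded signal, which makes the sparsifying behaviour easy to verify by hand.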
Affiliation(s)
- Limei Song
- School of Information Science and Technology, Northwest University, Xi’an, China
- Yudan Ren
- School of Information Science and Technology, Northwest University, Xi’an, China
- Shuhan Xu
- School of Information Science and Technology, Northwest University, Xi’an, China
- Yuqing Hou
- School of Information Science and Technology, Northwest University, Xi’an, China
- Xiaowei He
- School of Information Science and Technology, Northwest University, Xi’an, China
4
Uyulan C, Erguzel TT, Turk O, Farhad S, Metin B, Tarhan N. A Class Activation Map-Based Interpretable Transfer Learning Model for Automated Detection of ADHD from fMRI Data. Clin EEG Neurosci 2023; 54:151-159. [PMID: 36052402 DOI: 10.1177/15500594221122699]
Abstract
Automatic detection of Attention Deficit Hyperactivity Disorder (ADHD) from functional Magnetic Resonance Imaging (fMRI) data through Deep Learning (DL) is becoming a useful methodology because it addresses the curse-of-dimensionality problem of the data. It also offers a noninvasive and robust solution to variance in data acquisition and class-distribution imbalance. In this paper, a transfer-learning approach, specifically a ResNet-50-type pre-trained 2D Convolutional Neural Network (CNN), was used to automatically classify ADHD and healthy children. The results demonstrated that the ResNet-50 architecture with 10-fold cross-validation (CV) achieves an overall classification accuracy of 93.45%. The results were interpreted via Class Activation Map (CAM) analysis, which showed that children with ADHD differed from controls in a wide range of brain areas, including the frontal, parietal and temporal lobes.
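CAM analysis follows a standard recipe: weight the final convolutional feature maps by the classifier weights of the target class and sum over channels. A minimal sketch, with array shapes chosen purely for illustration:

```python
import numpy as np

def class_activation_map(feature_maps: np.ndarray, class_weights: np.ndarray) -> np.ndarray:
    """Class Activation Map (Zhou et al., 2016): channel-weighted sum of the
    final conv feature maps, ReLU'd and normalised to [0, 1].
    feature_maps: (C, H, W); class_weights: (C,)."""
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # (H, W)
    cam = np.maximum(cam, 0.0)                               # keep positive evidence
    if cam.max() > 0:
        cam /= cam.max()
    return cam

fmap = np.stack([np.ones((4, 4)), np.zeros((4, 4))])  # two toy channels
cam = class_activation_map(fmap, np.array([2.0, 5.0]))
```

Upsampled to the input resolution, such a map highlights which brain regions drove the ADHD-vs-control decision.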
Affiliation(s)
- Caglar Uyulan
- Department of Mechanical Engineering, Faculty of Engineering and Architecture, İzmir Katip Çelebi University, İzmir, Turkey
- Omer Turk
- Department of Computer Programming, Vocational School, Mardin Artuklu University, Mardin, Turkey
- Shams Farhad
- Department of Neuroscience, Uskudar University, Istanbul, Turkey
- Baris Metin
- Department of Neuroscience, Uskudar University, Istanbul, Turkey
- Nevzat Tarhan
- Department of Psychiatry, NPIstanbul Brain Hospital, Istanbul, Turkey
5
Germani E, Fromont E, Maumet C. On the benefits of self-taught learning for brain decoding. Gigascience 2022; 12:giad029. [PMID: 37132522 PMCID: PMC10155221 DOI: 10.1093/gigascience/giad029] [Received: 10/10/2022] [Revised: 01/24/2023] [Accepted: 04/14/2023]
Abstract
CONTEXT We study the benefits of using a large public neuroimaging database composed of functional magnetic resonance imaging (fMRI) statistic maps, in a self-taught learning framework, for improving brain decoding on new tasks. First, we leverage the NeuroVault database to train, on a selection of relevant statistic maps, a convolutional autoencoder to reconstruct these maps. Then, we use this trained encoder to initialize a supervised convolutional neural network to classify tasks or cognitive processes of unseen statistic maps from large collections of the NeuroVault database. RESULTS We show that such a self-taught learning process always improves the performance of the classifiers, but the magnitude of the benefits strongly depends on the number of samples available both for pretraining and fine-tuning the models and on the complexity of the targeted downstream task. CONCLUSION The pretrained model improves the classification performance and displays more generalizable features, less sensitive to individual differences.
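The self-taught learning workflow (pretrain an autoencoder, then reuse its encoder to initialize a supervised classifier) reduces, at its core, to weight transfer. A toy sketch with plain arrays standing in for convolutional layers; all shapes and names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the pretrained convolutional-autoencoder encoder weights,
# e.g. learned by reconstructing NeuroVault statistic maps.
encoder_w = rng.standard_normal((64, 32))

def init_classifier(encoder_w: np.ndarray, n_classes: int) -> dict:
    """Self-taught learning: initialise the supervised network's feature
    layers from the pretrained encoder; only the new head starts random."""
    return {
        "features": encoder_w.copy(),                                  # transferred
        "head": rng.standard_normal((encoder_w.shape[1], n_classes)) * 0.01,
    }

clf = init_classifier(encoder_w, n_classes=10)  # fine-tune both parts on labels
```

Fine-tuning then updates both parts on the labelled task, which is where the paper observes the sample-size-dependent benefit.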
Affiliation(s)
- Elodie Germani
- Univ Rennes, Inria, CNRS, Inserm, IRISA UMR 6074, Empenn ERL U 1228, 35000 Rennes, France
- Elisa Fromont
- Univ Rennes, IUF, Inria, CNRS, IRISA UMR 6074, 35000 Rennes, France
- Camille Maumet
- Univ Rennes, Inria, CNRS, Inserm, IRISA UMR 6074, Empenn ERL U 1228, 35000 Rennes, France
6
Avberšek LK, Repovš G. Deep learning in neuroimaging data analysis: Applications, challenges, and solutions. Front Neuroimaging 2022; 1:981642. [PMID: 37555142 PMCID: PMC10406264 DOI: 10.3389/fnimg.2022.981642] [Received: 06/29/2022] [Accepted: 10/10/2022]
Abstract
Methods for the analysis of neuroimaging data have advanced significantly since the beginning of neuroscience as a scientific discipline. Today, sophisticated statistical procedures allow us to examine complex multivariate patterns; however, most of them are still constrained by the assumption of inherent linearity of neural processes. Here, we discuss a group of machine learning methods, called deep learning (DL), which have drawn much attention in and outside the field of neuroscience in recent years and hold the potential to surpass these limitations. First, we describe and explain the essential concepts in deep learning: the structure and the computational operations that allow deep models to learn. We then move to the most common applications of deep learning in neuroimaging data analysis: prediction of outcome, interpretation of internal representations, generation of synthetic data, and segmentation. In the next section we present issues that deep learning poses, concerning the multidimensionality and multimodality of data, overfitting, and computational cost, and propose possible solutions. Lastly, we discuss the current reach of DL usage across the common applications in neuroimaging data analysis, where we consider the promise of multimodality, the capability of processing raw data, and advanced visualization strategies. We identify research gaps, such as the focus on a limited number of criterion variables and the lack of a well-defined strategy for choosing architectures and hyperparameters. Furthermore, we discuss the possibility of conducting research with constructs that have so far been ignored, or of moving toward frameworks such as RDoC, the potential of transfer learning, and the generation of synthetic data.
Affiliation(s)
- Lev Kiar Avberšek
- Department of Psychology, Faculty of Arts, University of Ljubljana, Ljubljana, Slovenia
7
Effect of Multichannel Convolutional Neural Network-Based Model on the Repair and Aesthetic Effect of Eye Plastic Surgery Patients. Comput Math Methods Med 2022; 2022:5315146. [PMID: 36092793 PMCID: PMC9458399 DOI: 10.1155/2022/5315146] [Received: 05/31/2022] [Revised: 08/01/2022] [Accepted: 08/08/2022]
Abstract
Objective This study aimed to explore the impact of an eye model based on a multichannel convolutional neural network (CNN) on eye plastic surgery and its aesthetic effect, and thus to formulate methods to improve the effect of eye plastic surgery. Methods A total of 64 patients who underwent pouch plastic surgery from January 2020 to March 2021 were selected as the research subjects and divided into an observation group and a control group by the random number table method, with 32 cases in each group. Subjects in the observation group were evaluated with the multichannel CNN-based eye model together with doctors' experience, while those in the control group were evaluated by doctors' experience only. Blepharoplasty outcomes, lower eyelid skin wrinkles, skin luster, and aesthetic scores were compared between the two groups. Results The similarity between the shape detected by the multichannel CNN model and the actual eye shape (98.78%) was considerably higher than that of the plain CNN model (78.65%) (P < 0.05). After treatment, the pouch degree, lower eyelid skin wrinkles, eyelid lacrimal sulcus, skin gloss, and aesthetic scores in the observation group were better than those in the control group (P < 0.05). The incidence of complications in the observation group (13%) was considerably lower than that in the control group (28%) (P < 0.05). Conclusion The eye model based on the multichannel CNN helped to improve the surgical repair and aesthetic outcomes of patients and reduced the occurrence of postoperative complications.
8
Deep learning for predicting respiratory rate from biosignals. Comput Biol Med 2022; 144:105338. [DOI: 10.1016/j.compbiomed.2022.105338] [Received: 12/22/2021] [Revised: 01/27/2022] [Accepted: 02/10/2022]
9
Qureshi MB, Azad L, Qureshi MS, Aslam S, Aljarbouh A, Fayaz M. Brain Decoding Using fMRI Images for Multiple Subjects through Deep Learning. Comput Math Methods Med 2022; 2022:1124927. [PMID: 35273647 PMCID: PMC8904097 DOI: 10.1155/2022/1124927] [Received: 01/08/2022] [Revised: 02/06/2022] [Accepted: 02/11/2022]
Abstract
Substantial information related to human cerebral conditions can be decoded through various noninvasive evaluation techniques such as fMRI. Exploring the neuronal activity of the human brain can divulge a person's thoughts: what the subject is perceiving, thinking, or visualizing. Furthermore, deep learning techniques can be used to decode the multifaceted patterns of the brain in response to external stimuli. Existing techniques are capable of exploring and classifying the thoughts of a human subject from fMRI imaging data. fMRI images are volumetric imaging scans that are high-dimensional and require considerable training time when fed as input to a deep learning network. However, more efficient learning of high-dimensional, high-level features in less training time, together with accurate interpretation of brain voxels with lower misclassification error, is still needed. In this research, we propose an improved CNN technique in which features are functionally aligned. The optimal features are selected after dimensionality reduction: the high-dimensional feature vector is transformed into a low-dimensional space through auto-adjusted weights and a combination of the best activation functions. Furthermore, we address the problem of long training times by using the Swish activation function, increasing the efficiency of the model in less training time. Finally, the experimental results are evaluated and compared with other classifiers, demonstrating the superiority of the proposed model in terms of accuracy.
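The Swish activation mentioned above is straightforward to implement; this is the standard definition (Ramachandran et al., 2017), not code from the paper:

```python
import numpy as np

def swish(x: np.ndarray, beta: float = 1.0) -> np.ndarray:
    """Swish activation: x * sigmoid(beta * x).
    Smooth and non-monotonic; often a drop-in replacement for ReLU."""
    return x / (1.0 + np.exp(-beta * x))

x = np.array([-2.0, 0.0, 2.0])
y = swish(x)  # approaches x for large positive x, 0 for large negative x
```

Unlike ReLU, Swish passes small negative values through with attenuation rather than zeroing them, which is one reason it can ease optimisation in deeper networks.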
Affiliation(s)
- Muhammad Bilal Qureshi
- Department of Computer Science & IT, University of Lakki Marwat, Lakki Marwat 28420, KPK, Pakistan
- Laraib Azad
- Department of Computer Science, Shaheed Zulfikar Ali Bhutto Institute of Science and Technology, Islamabad 44000, Pakistan
- Muhammad Shuaib Qureshi
- Department of Computer Science, School of Arts and Sciences, University of Central Asia, Kyrgyzstan
- Sheraz Aslam
- Department of Electrical Engineering, Computer Engineering, and Informatics, Cyprus University of Technology, Cyprus
- Ayman Aljarbouh
- Department of Computer Science, School of Arts and Sciences, University of Central Asia, Kyrgyzstan
- Muhammad Fayaz
- Department of Computer Science, School of Arts and Sciences, University of Central Asia, Kyrgyzstan
10
Abstract
This study utilized the multi-channel convolutional neural network (MCNN) and applied it to wind turbine blade and blade angle fault detection. The proposed approach automatically and effectively captures fault characteristics from the imported original vibration signals and identifies their state in multiple convolutional neural network (CNN) models. The result obtained from each model is sent to the output layer, which is a maximum output network (MAXNET), to compute the most accurate state. First, in terms of wind turbine blade state detection, this paper builds blade models based on the normal state and three common fault types, including blade angle anomaly, blade surface damage, and blade breakage. Vibration signals are employed for fault detection. The proposed wind turbine fault diagnosis approach adopts a triaxial vibration transducer and frame grabber to capture vibration signals and then applies the new MCNN algorithm to identify the state. The test results show that the proposed approach could deliver up to 87.8% identification accuracy for four fault types of large wind turbine blades.
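The MAXNET output layer can be read as "pick the class with the strongest response across all CNN models". A minimal sketch under that assumption (the paper's exact fusion rule may differ):

```python
import numpy as np

def maxnet_fuse(model_outputs: list) -> int:
    """Fuse per-model class-score vectors by taking the class whose maximum
    score across all models is highest -- a simple reading of the MAXNET
    output layer described in the abstract (details are assumptions)."""
    stacked = np.stack(model_outputs)          # (n_models, n_classes)
    return int(stacked.max(axis=0).argmax())   # winning class index

# Toy scores over 4 states: normal, angle anomaly, surface damage, breakage
scores_a = np.array([0.1, 0.7, 0.2, 0.0])      # CNN model 1
scores_b = np.array([0.2, 0.3, 0.4, 0.1])      # CNN model 2, less confident
label = maxnet_fuse([scores_a, scores_b])
```

Here the fused decision follows model 1's strong vote for class 1 ("blade angle anomaly" in the toy labelling), illustrating how a confident model dominates the fusion.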
11
A CNN Deep Local and Global ASD Classification Approach with Continuous Wavelet Transform Using Task-Based FMRI. Sensors 2021; 21:5822. [PMID: 34502710 PMCID: PMC8433893 DOI: 10.3390/s21175822] [Received: 06/22/2021] [Revised: 08/24/2021] [Accepted: 08/25/2021]
Abstract
Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by lingual and social disabilities. The autism diagnostic observation schedule is the current gold standard for ASD diagnosis. Developing objective computer-aided technologies for ASD diagnosis that utilize brain imaging modalities and machine learning is one of the main tracks in current studies to understand autism. Task-based fMRI demonstrates functional activation in the brain by measuring blood oxygen level-dependent (BOLD) variations in response to certain tasks, and is believed to hold discriminant features for autism. A novel computer-aided diagnosis (CAD) framework is proposed to classify 50 ASD and 50 typically developed toddlers with the adoption of deep CNN networks. The CAD system includes both local and global diagnosis in response to a speech task. Spatial dimensionality reduction with region-of-interest selection and clustering has been utilized. In addition, the proposed framework performs discriminant feature extraction with the continuous wavelet transform. Local diagnosis on the cingulate gyri, superior temporal gyrus, primary auditory cortex, and angular gyrus achieves accuracies ranging between 71% and 80% with a four-fold cross-validation technique. The fused global diagnosis achieves an accuracy of 86% with 82% sensitivity and 92% specificity. A brain map indicating the ASD severity level for each brain area is created, which contributes to personalized diagnosis and treatment plans.
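A continuous wavelet transform turns a 1-D BOLD time course into a 2-D time-scale image suitable for a CNN. A self-contained Ricker-wavelet sketch, where the wavelet choice and scales are assumptions rather than the paper's settings:

```python
import numpy as np

def ricker(points: int, width: float) -> np.ndarray:
    """Ricker ('Mexican hat') wavelet sampled at `points` positions."""
    t = np.arange(points) - (points - 1) / 2.0
    a = (t / width) ** 2
    return (2 / (np.sqrt(3 * width) * np.pi ** 0.25)) * (1 - a) * np.exp(-a / 2)

def cwt(signal: np.ndarray, widths) -> np.ndarray:
    """Continuous wavelet transform: convolve the signal with the wavelet at
    several scales, producing a (n_scales, n_samples) time-scale image."""
    return np.stack([
        np.convolve(signal, ricker(min(10 * w, len(signal)), w), mode="same")
        for w in widths
    ])

sig = np.sin(np.linspace(0, 8 * np.pi, 128))   # toy stand-in for a BOLD series
scalogram = cwt(sig, widths=[1, 2, 4, 8])       # 4 scales x 128 timepoints
```

The resulting scalogram is exactly the kind of 2-D discriminant-feature representation a CNN can classify per region of interest.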
12
Design of Deep Learning Model for Task-Evoked fMRI Data Classification. Comput Intell Neurosci 2021; 2021:6660866. [PMID: 34422034 PMCID: PMC8378948 DOI: 10.1155/2021/6660866] [Received: 12/13/2020] [Revised: 05/26/2021] [Accepted: 07/15/2021]
Abstract
Machine learning methods have been successfully applied to neuroimaging signals, one application being the decoding of specific task states from functional magnetic resonance imaging (fMRI) data. In this paper, we propose a model that simultaneously utilizes both spatial and temporal sequential information of fMRI data with deep neural networks to classify fMRI task states. We designed a convolutional network module and a recurrent network module to extract the spatial and temporal features of fMRI data, respectively. In particular, we also add an attention mechanism to the recurrent network module, which more effectively highlights the brain activation state at the moment of reaction. We evaluated the model using task-evoked fMRI data from the Human Connectome Project (HCP) dataset; the classification accuracy reached 94.31%, and the experimental results show that the model can effectively distinguish brain states under different task stimuli.
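The attention mechanism over recurrent timesteps can be sketched as dot-product attention pooling; the query vector and dimensions below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(hidden: np.ndarray, query: np.ndarray) -> np.ndarray:
    """Dot-product attention over RNN timesteps: score each hidden state
    against a (learned) query, softmax the scores, and return the weighted
    sum -- emphasising timepoints around the moment of reaction.
    hidden: (T, D); query: (D,)."""
    weights = softmax(hidden @ query)   # (T,) attention weights, sum to 1
    return weights @ hidden             # (D,) context vector

rng = np.random.default_rng(1)
hidden = rng.standard_normal((20, 8))   # 20 fMRI timepoints, 8 hidden units
context = attention_pool(hidden, rng.standard_normal(8))
```

Because the weights form a convex combination, the pooled context vector stays within the range of the hidden states while up-weighting the most task-relevant timepoints.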
13
Yotsutsuji S, Lei M, Akama H. Evaluation of Task fMRI Decoding With Deep Learning on a Small Sample Dataset. Front Neuroinform 2021; 15:577451. [PMID: 33679360 PMCID: PMC7928289 DOI: 10.3389/fninf.2021.577451] [Received: 06/29/2020] [Accepted: 01/25/2021]
Abstract
Recently, several deep learning methods have been applied to decoding in task-related fMRI, and their advantages have been exploited in a variety of ways. However, this paradigm is sometimes problematic, owing to the difficulty of applying deep learning to high-dimensional data under small-sample-size conditions. The difficulties in gathering large amounts of data to develop predictive multilayer machine learning models from fMRI experiments with complicated designs and tasks are well recognized. Group-level, multi-voxel pattern analysis with small sample sizes results in low statistical power and large accuracy-evaluation errors; failure in such instances is ascribed to individual variability that risks information leakage, a particular issue when dealing with a limited number of subjects. In this study, using a small fMRI dataset evaluating bilingual language switching in a property generation task, we evaluated the relative fit of different deep learning models, incorporating moderate split methods to control the amount of information leakage. Our results indicated that the session shuffle split as the data folding method, combined with the multichannel 2D convolutional neural network (M2DCNN) classifier, recorded the best authentic classification accuracy, outperforming the 3D convolutional neural network (3DCNN). We discuss the tolerability of within-subject or within-session information leakage, whose impact is generally considered small but is complex and essentially unknown; this requires clarification in future studies.
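A session shuffle split assigns whole sessions to either side of the split, so no session leaks samples into both training and test data. A minimal sketch of that folding idea (group sizes and fractions are illustrative):

```python
import random
from collections import defaultdict

def session_shuffle_split(sessions: dict, test_frac: float = 0.25, seed: int = 0):
    """Split samples so that no session contributes to both train and test,
    limiting within-session information leakage (a sketch of the folding
    scheme the paper favours). `sessions` maps sample index -> session id."""
    by_session = defaultdict(list)
    for idx, sess in sessions.items():
        by_session[sess].append(idx)
    ids = sorted(by_session)
    random.Random(seed).shuffle(ids)
    n_test = max(1, int(len(ids) * test_frac))
    test = [i for s in ids[:n_test] for i in by_session[s]]
    train = [i for s in ids[n_test:] for i in by_session[s]]
    return train, test

# 12 samples from 4 sessions (3 samples each)
sessions = {i: i // 3 for i in range(12)}
train_idx, test_idx = session_shuffle_split(sessions)
```

Compared with a plain sample-level shuffle, this scheme yields the "authentic" accuracy the authors report, at the cost of coarser fold granularity.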
Affiliation(s)
- Sunao Yotsutsuji
- School of Life Science and Technology, Tokyo Institute of Technology, Tokyo, Japan
- Miaomei Lei
- Ex-Graduate School of Science and Technology, Tokyo Institute of Technology, Tokyo, Japan
- Hiroyuki Akama
- School of Life Science and Technology, Tokyo Institute of Technology, Tokyo, Japan
- Institute of Liberal Arts, Tokyo Institute of Technology, Tokyo, Japan