1
Zeng Q, Zhou J, Ji Y, Wang H. A semiparametric Gaussian mixture model for chest CT-based 3D blood vessel reconstruction. Biostatistics 2024:kxae013. [PMID: 38637995] [DOI: 10.1093/biostatistics/kxae013]
Abstract
Computed tomography (CT) has been a powerful diagnostic tool since its emergence in the 1970s. Using CT data, 3D structures of human internal organs and tissues, such as blood vessels, can be reconstructed using professional software. This 3D reconstruction is crucial for surgical operations and can serve as a vivid medical teaching example. However, traditional 3D reconstruction relies heavily on manual operations, which are time-consuming, subjective, and require substantial experience. To address this problem, we develop a novel semiparametric Gaussian mixture model tailored for the 3D reconstruction of blood vessels. This model extends the classical Gaussian mixture model by enabling nonparametric variation of the component-wise parameters of interest according to voxel position. We develop a kernel-based expectation-maximization algorithm for estimating the model parameters, accompanied by a supporting asymptotic theory. Furthermore, we propose a novel regression method for optimal bandwidth selection, which outperforms the conventional cross-validation-based (CV) method in terms of both computational and statistical efficiency. In application, this methodology facilitates the fully automated reconstruction of 3D blood vessel structures with remarkable accuracy.
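The expectation-maximization backbone of this approach can be illustrated with an ordinary two-component EM fit. The pure-Python sketch below is illustrative only: it omits the paper's key ingredient, the kernel weighting that lets component parameters vary smoothly with voxel position, and the data are synthetic.

```python
import math, random

def em_gmm_1d(x, n_iter=50):
    # Two-component 1-D Gaussian mixture fitted by plain EM.
    # (The paper's semiparametric model additionally lets these
    # parameters vary with voxel position via kernel weights.)
    mu = [min(x), max(x)]          # crude but effective initialization
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for xi in x:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(xi - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: responsibility-weighted updates of weights, means, variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(x)
            mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
            var[k] = max(sum(r[k] * (xi - mu[k]) ** 2
                             for r, xi in zip(resp, x)) / nk, 1e-6)
    return pi, mu, var

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(200)] + \
       [random.gauss(5.0, 1.0) for _ in range(200)]
pi, mu, var = em_gmm_1d(data)
```

With two well-separated clusters, the fitted means land near 0 and 5 and the mixing weights near 0.5 each.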
Affiliation(s)
- Qianhan Zeng
- Guanghua School of Management, Peking University, Beijing, 100871, China
- Jing Zhou
- Center for Applied Statistics, School of Statistics, Renmin University of China, Beijing, 100872, China
- Ying Ji
- Department of Thoracic Surgery, Beijing Institute of Respiratory Medicine and Beijing Chao-Yang Hospital, Capital Medical University, Beijing, 100020, China
- Hansheng Wang
- Guanghua School of Management, Peking University, Beijing, 100871, China
2
Hernández-López R, Travieso-González CM. Reptile Identification for Endemic and Invasive Alien Species Using Transfer Learning Approaches. Sensors (Basel) 2024; 24:1372. [PMID: 38474908] [DOI: 10.3390/s24051372]
Abstract
The Canary Islands are considered a biodiversity hotspot with high levels of endemicity, including endemic reptile species. Nowadays, some invasive alien reptile species are proliferating without control in different parts of the territory, creating a dangerous situation for the ecosystems of this archipelago. Although the regional authorities have initiated actions to try to control the proliferation of invasive species, the problem has not been solved, as control depends on sporadic sightings and it is impossible to determine when these species will appear. Since no studies on automatically identifying certain reptile species endemic to the Canary Islands were found in the current state of the art, the Signals and Communications Department of the University of Las Palmas de Gran Canaria (ULPGC) considered the possibility of developing a detection system based on automatic species recognition using deep learning (DL) techniques. This research therefore conducts an initial identification study of some species of interest by implementing different neural network models based on transfer learning approaches. The study concludes with a comparison in which the best performance is achieved by integrating the EfficientNetV2B3 base model, which attains a mean accuracy of 98.75%.
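The "mean accuracy" figure reported above is typically a macro-average of per-class accuracy, so that rare species weigh as much as common ones. A minimal sketch of that metric, with hypothetical species labels and predictions (the actual pipeline uses Keras transfer-learning models, which are out of scope here):

```python
def mean_class_accuracy(y_true, y_pred):
    """Macro-averaged per-class accuracy: compute accuracy within each
    class separately, then average over classes."""
    classes = sorted(set(y_true))
    per_class = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        correct = sum(1 for i in idx if y_pred[i] == c)
        per_class.append(correct / len(idx))
    return per_class, sum(per_class) / len(per_class)

# Hypothetical predictions for three reptile classes
y_true = ["gallotia"] * 4 + ["gecko"] * 4 + ["skink"] * 2
y_pred = ["gallotia"] * 4 + ["gecko"] * 3 + ["skink"] * 3
per_class, mean_acc = mean_class_accuracy(y_true, y_pred)
```

Here one gecko is misclassified as a skink, so the gecko class scores 0.75 while the others score 1.0.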
Affiliation(s)
- Ruymán Hernández-López
- Signals and Communications Department (DSC), Institute for Technological Development and Innovation in Communications (IDeTIC), University of Las Palmas de Gran Canaria (ULPGC), 35017 Las Palmas de Gran Canaria, Spain
- Carlos M Travieso-González
- Signals and Communications Department (DSC), Institute for Technological Development and Innovation in Communications (IDeTIC), University of Las Palmas de Gran Canaria (ULPGC), 35017 Las Palmas de Gran Canaria, Spain
3
Jijila B, Nirmala V, Selvarengan P, Kavitha D, Arun Muthuraj V, Rajagopal A. Employing neural density functionals to generate potential energy surfaces. J Mol Model 2024; 30:65. [PMID: 38340208] [DOI: 10.1007/s00894-024-05834-2]
Abstract
CONTEXT With the union of machine learning (ML) and quantum chemistry, and amid the debate between machine-learned and human-designed functionals in density functional theory (DFT), this paper demonstrates the generation of potential energy surfaces (PES) using computations with a machine-learned density functional approximation (ML-DFA). A recent research trend is the application of ML in the quantum sciences to the design of density functionals, such as DeepMind's deep learning functional DM21. Although DM21's state-of-the-art performance was reported in Science, the opportunity to use DeepMind's pretrained DM21 neural networks in quantum chemistry computations has not yet been fully tapped: the DM21 deep learning functionals have not previously been applied to generate potential energy surfaces, and publications applying DM21 in practical calculations remain scarce. In this context, this paper contributes, for the first time in the literature, 2D potential energy surfaces inferred with neural density functionals (ML-DFA-PES). It reports ML-DFA-generated PES for C4H8, H2O, H2, and H2+, obtained by employing a pretrained DM21m TensorFlow model with the cc-pVDZ basis set. In addition, we analyze the long-range behavior of the DM21-based PES to investigate its ability to describe a system at long range, and we compare the PES diagrams from DM21 with those from popular DFT functionals (B3LYP, PW6B95) and CCSD(T). METHODS The 2D potential energy surfaces are obtained by relying on the neural network's ability to accurately learn the mapping between the 3D electron density and the exchange-correlation potential. By inserting deep learning inference into DFT with a pretrained neural network, the self-consistent field (SCF) energy is computed at different geometries along the coordinates of interest, and the potential energy surfaces are then plotted. Concretely, the electron density is computed first; this 3D electron density serves as the ML feature vector from which the exchange-correlation potential is predicted by a forward pass of the pretrained DM21 TensorFlow computational graph; the SCF energy is then computed at multiple geometries; and the SCF energies at different bond lengths/angles are plotted as a 2D PES. We implement this in Python using frameworks such as PySCF and DM21 and contribute the implementation as open source. The source code and DM21-DFA-based PES are available at https://sites.google.com/view/MLfunctionals-DeepMind-PES .
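The PES-scan loop described under METHODS can be sketched as follows. Since running PySCF with DM21 is out of scope here, a Morse potential with rough, purely illustrative H2-like parameters stands in for the per-geometry SCF energy call; only the scan-and-plot structure mirrors the method.

```python
import math

def morse_energy(r, d_e=4.75, a=1.94, r_e=0.741):
    # Toy stand-in for an SCF energy evaluation at bond length r:
    # Morse potential with rough H2-like parameters (illustrative only).
    return d_e * (1.0 - math.exp(-a * (r - r_e))) ** 2 - d_e

def scan_pes(energy_fn, r_min=0.4, r_max=3.0, n=53):
    # 1-D PES scan: evaluate the energy at each geometry on a grid,
    # the same loop structure used when each point is an SCF calculation.
    step = (r_max - r_min) / (n - 1)
    return [(r_min + i * step, energy_fn(r_min + i * step)) for i in range(n)]

pes = scan_pes(morse_energy)
r_opt, e_min = min(pes, key=lambda p: p[1])   # equilibrium geometry on the grid
```

The grid minimum falls at the point nearest the Morse equilibrium distance; a 2D PES is the same loop over two coordinates.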
Affiliation(s)
- B Jijila
- Queen Mary's College, Chennai, India
- V Nirmala
- Queen Mary's College, Chennai, India
- P Selvarengan
- Kalasalingam Academy of Research & Education, Krishnankoil, India
- D Kavitha
- Dr. MGR Educational and Research Institute, Chennai, India
- A Rajagopal
- Indian Institute of Technology, Madras, India
4
Muley VY. Deep Learning for Predicting Gene Regulatory Networks: A Step-by-Step Protocol in R. Methods Mol Biol 2024; 2719:265-294. [PMID: 37803123] [DOI: 10.1007/978-1-0716-3461-5_15]
Abstract
Deep learning has emerged as a powerful tool for solving complex problems, including the reconstruction of gene regulatory networks in biology. These networks consist of transcription factors and their associations with the genes they regulate. Despite the utility of deep learning methods in studying gene expression and regulation, their accessibility remains limited for biologists, mainly due to the prerequisites of programming skills and a nuanced grasp of the underlying algorithms. This chapter presents a deep learning protocol that utilizes TensorFlow and the Keras API in R/RStudio, with the aim of making deep learning accessible to individuals without specialized expertise. The protocol focuses on the genome-wide prediction of regulatory interactions between transcription factors and genes, leveraging publicly available gene expression data in conjunction with well-established benchmarks. It encompasses the pivotal phases of data preprocessing, conceptualization of neural network architectures, iterative model training and validation, and forecasting of novel regulatory associations. Furthermore, it provides insights into parameter tuning for deep learning models. By adhering to this protocol, researchers are expected to gain a comprehensive understanding of applying deep learning techniques to predict regulatory interactions. The protocol can be readily modified to serve diverse research problems, thereby empowering scientists to effectively harness the capabilities of deep learning in their investigations.
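The core prediction step, scoring transcription factor-gene pairs as interacting or not, can be sketched with a minimal logistic model standing in for the protocol's Keras network. The single feature (an expression-correlation value per TF-gene pair) and all data below are hypothetical.

```python
import math

def train_logistic(X, y, lr=0.5, epochs=500):
    # Minimal logistic-regression stand-in for a neural classifier:
    # features of a TF-gene pair -> probability of a regulatory link.
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi                       # gradient of the log-loss
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if z > 0 else 0

# Hypothetical TF-gene pairs: one feature = expression correlation
# between the TF and the candidate target gene.
X = [[0.9], [0.8], [0.85], [0.1], [0.2], [0.05]]
y = [1, 1, 1, 0, 0, 0]                       # 1 = known regulatory link
w, b = train_logistic(X, y)
preds = [predict(w, b, xi) for xi in X]
```

The protocol replaces this single linear unit with a multi-layer Keras network and genome-wide feature vectors, but the train/validate/predict loop has the same shape.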
Affiliation(s)
- Vijaykumar Yogesh Muley
- Independent Researcher, Hingoli, India
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Querétaro, México
5
Dragan P, Joshi K, Atzei A, Latek D. Keras/TensorFlow in Drug Design for Immunity Disorders. Int J Mol Sci 2023; 24:15009. [PMID: 37834457] [PMCID: PMC10573944] [DOI: 10.3390/ijms241915009]
Abstract
Homeostasis of the host immune system is regulated by white blood cells with a variety of cell surface receptors for cytokines. Chemotactic cytokines (chemokines) activate their receptors to evoke the chemotaxis of immune cells in homeostatic migrations or inflammatory conditions towards inflamed tissue or pathogens. Dysregulation of the immune system leading to disorders such as allergies, autoimmune diseases, or cancer requires efficient, fast-acting drugs to minimize the long-term effects of chronic inflammation. Here, we performed structure-based virtual screening (SBVS) assisted by a Keras/TensorFlow neural network (NN) to find novel compound scaffolds acting on three chemokine receptors: CCR2, CCR3, and one CXC receptor, CXCR3. The Keras/TensorFlow NN was used here not as the typical binary classifier but as an efficient multi-class classifier that can discard not only inactive compounds but also low- or medium-activity compounds. Several compounds proposed by SBVS and the NN were tested in 100 ns all-atom molecular dynamics simulations to confirm their binding affinity. To improve the basic binding affinity of the compounds, new chemical modifications were proposed. The modified compounds were compared with known antagonists of these three chemokine receptors. Known CXCR3 compounds were among the top predicted compounds, demonstrating the benefit of using Keras/TensorFlow alongside structure-based approaches in drug discovery. Furthermore, we showed that a Keras/TensorFlow NN can accurately predict the receptor subtype selectivity of compounds, for which SBVS often fails. We cross-tested chemokine receptor datasets retrieved from ChEMBL and curated datasets for cannabinoid receptors. The NN model trained on the cannabinoid receptor datasets retrieved from ChEMBL was the most accurate in predicting receptor subtype selectivity. Among the NN models trained on the chemokine receptor datasets, the CXCR3 model showed the highest accuracy in differentiating the receptor subtype for a given compound dataset.
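The multi-class filtering idea, keeping a compound only when the high-activity class dominates the network's softmax output rather than splitting compounds into active/inactive, can be sketched as follows. The class layout, the logit values, and the 0.5 threshold are all assumptions for illustration.

```python
import math

ACTIVITY_CLASSES = ["inactive", "low", "medium", "high"]

def softmax(logits):
    # Numerically stable softmax over activity-class logits.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def keep_high_activity(compound_logits, threshold=0.5):
    # Multi-class filtering: discard a compound unless the 'high'
    # activity class clearly dominates the predicted distribution.
    kept = []
    for name, logits in compound_logits:
        probs = softmax(logits)
        if probs[ACTIVITY_CLASSES.index("high")] >= threshold:
            kept.append(name)
    return kept

# Hypothetical network outputs (logits) for three compounds
outputs = [
    ("cmpd_A", [0.1, 0.2, 0.3, 2.5]),   # clearly high activity
    ("cmpd_B", [2.0, 1.0, 0.5, 0.1]),   # predicted inactive
    ("cmpd_C", [0.5, 1.5, 1.4, 1.3]),   # low/medium: also discarded
]
selected = keep_high_activity(outputs)
```

A binary classifier would have kept cmpd_C as "active"; the multi-class filter discards it because its probability mass sits in the low/medium classes.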
Affiliation(s)
- Paulina Dragan
- Faculty of Chemistry, University of Warsaw, Pasteura 1, 02-903 Warsaw, Poland
- Kavita Joshi
- Faculty of Chemistry, University of Warsaw, Pasteura 1, 02-903 Warsaw, Poland
- Alessandro Atzei
- Faculty of Chemistry, University of Warsaw, Pasteura 1, 02-903 Warsaw, Poland
- Department of Life and Environmental Science, Food Toxicology Unit, University of Cagliari, University Campus of Monserrato, SS 554, 09042 Cagliari, Italy
- Dorota Latek
- Faculty of Chemistry, University of Warsaw, Pasteura 1, 02-903 Warsaw, Poland
6
Majumder M, Wilmot C. Automated Vehicle Counting from Pre-Recorded Video Using You Only Look Once (YOLO) Object Detection Model. J Imaging 2023; 9:131. [PMID: 37504808] [PMCID: PMC10381655] [DOI: 10.3390/jimaging9070131]
Abstract
Different techniques are being applied for automated vehicle counting from video footage, which is a significant subject of interest to many researchers. In this context, the recently developed You Only Look Once (YOLO) object detection model has emerged as a promising tool. However, existing research on employing the model for vehicle counting from video footage remains insufficient in terms of accuracy and flexible interval counting. The present study develops computer algorithms for automated traffic counting from pre-recorded videos using the YOLO model with flexible interval counting. The study involves the development of algorithms for detecting, tracking, and counting vehicles from pre-recorded videos. The YOLO model was applied via the TensorFlow API with the assistance of OpenCV. The developed algorithms implement the YOLO model for efficiently counting vehicles in both directions. The accuracy of the automated counting was evaluated against manual counts and found to be about 90 percent. The accuracy comparison also shows that errors in automated counting consistently arise from undercounting in unsuitable videos. In addition, a benefit-cost (B/C) analysis shows that implementing the automated counting method returns 1.76 times the investment.
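Independent of the detector itself, the directional counting logic reduces to noticing when a tracked vehicle's centroid crosses a virtual counting line between consecutive frames. A sketch with hypothetical track data (the line position and coordinate convention are assumptions):

```python
def count_crossings(tracks, line_y=200):
    # Directional counting: a vehicle is counted once when its centroid
    # crosses the virtual counting line between consecutive frames.
    # Each track is the per-frame y-position of one tracked vehicle.
    up, down = 0, 0
    for track in tracks:
        for prev, cur in zip(track, track[1:]):
            if prev < line_y <= cur:        # moved past the line downward
                down += 1
            elif prev >= line_y > cur:      # moved past the line upward
                up += 1
    return up, down

tracks = [
    [180, 195, 210, 240],   # crosses y=200 moving downward
    [230, 205, 190, 170],   # crosses y=200 moving upward
    [150, 160, 170, 180],   # never reaches the line: not counted
]
up, down = count_crossings(tracks)
```

Interval counting then just means resetting (or bucketing) these counters per time interval of the video.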
Affiliation(s)
- Mishuk Majumder
- Department of Civil & Environmental Engineering, Louisiana State University, Baton Rouge, LA 70803, USA
- Chester Wilmot
- Department of Civil & Environmental Engineering, Louisiana State University, Baton Rouge, LA 70803, USA
7
Hou KM, Diao X, Shi H, Ding H, Zhou H, de Vaulx C. Trends and Challenges in AIoT/IIoT/IoT Implementation. Sensors (Basel) 2023; 23:s23115074. [PMID: 37299800] [DOI: 10.3390/s23115074]
Abstract
For the coming years, metaverse, digital twin, and autonomous vehicle applications are the leading technologies for many complex applications hitherto inaccessible, such as health and life sciences, smart home, smart agriculture, smart city, smart car and logistics, Industry 4.0, entertainment (video games), and social media, thanks to recent tremendous developments in process modeling, supercomputing, cloud data analytics (deep learning, etc.), communication networks, and AIoT/IIoT/IoT technologies. AIoT/IIoT/IoT is a crucial research field because it provides the essential data that fuel metaverse, digital twin, real-time Industry 4.0, and autonomous vehicle applications. However, the science of AIoT is inherently multidisciplinary, and it is therefore difficult for readers to understand its evolution and impacts. Our main contribution in this article is to analyze and highlight the trends and challenges of the AIoT technology ecosystem, including core hardware (MCU, MEMS/NEMS sensors, and wireless access media), core software (operating system and communication protocol stack), and middleware (deep learning on a microcontroller: TinyML). Two low-power AI technologies are emerging, TinyML and neuromorphic computing, and we present one AIoT/IIoT/IoT device implementation using TinyML, dedicated to strawberry disease detection, as a case study. Despite the very rapid progress of AIoT/IIoT/IoT technologies, several challenges remain to be overcome, such as safety, security, latency, interoperability, and reliability of sensor data, which are essential characteristics for meeting the requirements of metaverse, digital twin, autonomous vehicle, and Industry 4.0 applications.
Affiliation(s)
- Kun Mean Hou
- Université Clermont-Auvergne, CNRS, Mines de Saint-Étienne, Clermont-Auvergne-INP, LIMOS, F-63000 Clermont-Ferrand, France
- Hongling Shi
- College of Electronics and Information Engineering, South Central Minzu University (SCMZU), Wuhan 430070, China
- Hao Ding
- College of Electronics and Information Engineering, South Central Minzu University (SCMZU), Wuhan 430070, China
- Christophe de Vaulx
- Université Clermont-Auvergne, CNRS, Mines de Saint-Étienne, Clermont-Auvergne-INP, LIMOS, F-63000 Clermont-Ferrand, France
8
Tovey S, Zills F, Torres-Herrador F, Lohrmann C, Brückner M, Holm C. MDSuite: comprehensive post-processing tool for particle simulations. J Cheminform 2023; 15:19. [PMID: 36774469] [DOI: 10.1186/s13321-023-00687-y]
Abstract
Particle-based (PB) simulations, including molecular dynamics (MD), provide access to system observables that are not easily available experimentally. However, in most cases, PB data needs to be processed after a simulation to extract these observables. One of the main challenges in post-processing PB simulations is managing the large amounts of data typically generated without incurring memory or computational capacity limitations. In this work, we introduce the post-processing tool MDSuite. This software, developed in Python, combines state-of-the-art computing technologies such as TensorFlow with modern data management tools such as HDF5 and SQL for a fast, scalable, and accurate PB data processing engine. This package, built around the principles of FAIR data, provides a memory-safe, parallelized, and GPU-accelerated environment for the analysis of particle simulations. The software currently offers 17 calculators for the computation of properties including diffusion coefficients, thermal conductivity, viscosity, radial distribution functions, coordination numbers, and more. Further, the object-oriented framework allows for the rapid implementation of new calculators or file readers for different simulation software. The Python front end provides a familiar interface for many users in the scientific community and a mild learning curve for the inexperienced. Future developments will include more analyses associated with ab initio methods, colloidal/macroscopic particle methods, and extension to experimental data.
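One representative calculator, a diffusion coefficient obtained from the Einstein relation via the mean squared displacement (MSD), can be sketched in pure Python on synthetic random-walk trajectories. This is only the underlying formula; MDSuite's actual calculators run on TensorFlow with HDF5-backed trajectory data.

```python
import random

def msd(trajectories, lag):
    # Mean squared displacement at a given time lag, averaged over
    # particles and time origins: the core of an Einstein-relation
    # diffusion calculation.
    total, count = 0.0, 0
    for traj in trajectories:
        for t0 in range(len(traj) - lag):
            total += (traj[t0 + lag] - traj[t0]) ** 2
            count += 1
    return total / count

random.seed(1)
dt = 1.0
n_steps, n_particles = 2000, 50
# 1-D unbiased random walks with unit step variance => true D = 0.5
walks = []
for _ in range(n_particles):
    x, traj = 0.0, [0.0]
    for _ in range(n_steps):
        x += random.gauss(0.0, 1.0)
        traj.append(x)
    walks.append(traj)

lag = 100
# Einstein relation: MSD = 2 * d * D * t  (d = dimensionality, here 1)
D = msd(walks, lag) / (2 * 1 * lag * dt)
```

The estimate recovers the known diffusion coefficient of 0.5 to within a few percent.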
9
Dragan P, Merski M, Wiśniewski S, Sanmukh SG, Latek D. Chemokine Receptors-Structure-Based Virtual Screening Assisted by Machine Learning. Pharmaceutics 2023; 15:pharmaceutics15020516. [PMID: 36839838] [PMCID: PMC9965785] [DOI: 10.3390/pharmaceutics15020516]
Abstract
Chemokines modulate the immune response by regulating the migration of immune cells. They are also known to participate in processes such as cell-cell adhesion, allograft rejection, and angiogenesis. Chemokines interact with two different subfamilies of G protein-coupled receptors: conventional chemokine receptors and atypical chemokine receptors. Here, we focused on the former, which has been linked to many inflammatory diseases, including multiple sclerosis, asthma, nephritis, and rheumatoid arthritis. Available crystal and cryo-EM structures and homology models of six chemokine receptors (CCR1 to CCR6) were described and tested in terms of their usefulness in structure-based drug design. As a result of structure-based virtual screening for CCR2 and CCR3, several new active compounds were proposed. Known inhibitors of CCR1 to CCR6, acquired from ChEMBL, were used as training sets for two machine learning algorithms in ligand-based drug design. The performance of LightGBM was compared with that of a sequential Keras/TensorFlow neural network model on these diverse datasets. Combining structure-based virtual screening with machine learning allowed us to propose several active ligands for CCR2 and CCR3, with two distinct compounds predicted as CCR3 actives by all three tested methods: Glide, the Keras/TensorFlow NN, and LightGBM. In addition, the performance of these three methods in predicting CCR2/CCR3 receptor subtype selectivity was assessed.
10
Novac OC, Chirodea MC, Novac CM, Bizon N, Oproescu M, Stan OP, Gordan CE. Analysis of the Application Efficiency of TensorFlow and PyTorch in Convolutional Neural Network. Sensors (Basel) 2022; 22:s22228872. [PMID: 36433470] [PMCID: PMC9699128] [DOI: 10.3390/s22228872]
Abstract
In this paper, we present an analysis of important aspects that arise during the development of neural network applications. Our aim is to determine whether the choice of library can impact the system's overall performance, either during training or design, and to extract a set of criteria that could be used to highlight the advantages and disadvantages of each library under consideration. To do so, we first extracted the previously mentioned aspects by comparing two of the most popular neural network libraries, PyTorch and TensorFlow, and then analyzed the obtained results to determine whether our initial hypothesis was correct. In the end, the results of the analysis are gathered, and an overall picture of which tasks are better suited to which library is presented.
Affiliation(s)
- Ovidiu-Constantin Novac
- Department of Computers and Information Technology, Electrical Engineering and Information Technology Faculty, University of Oradea, 410087 Oradea, Romania
- Mihai Cristian Chirodea
- Department of Computers and Information Technology, Electrical Engineering and Information Technology Faculty, University of Oradea, 410087 Oradea, Romania
- Cornelia Mihaela Novac
- Department of Electrical Engineering, Electrical Engineering and Information Technology Faculty, University of Oradea, 410087 Oradea, Romania
- Nicu Bizon
- Department of Electronics, Computers and Electrical Engineering, Faculty of Electronics, Telecommunication, and Computer Science, University of Pitesti, 110040 Pitesti, Romania
- Mihai Oproescu
- Department of Electronics, Computers and Electrical Engineering, Faculty of Electronics, Telecommunication, and Computer Science, University of Pitesti, 110040 Pitesti, Romania
- Ovidiu Petru Stan
- Department of Automation, Faculty of Automation and Computer Science, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
- Cornelia Emilia Gordan
- Department of Electronics and Telecommunications, Electrical Engineering and Information Technology Faculty, University of Oradea, 410087 Oradea, Romania
11
Kosarac A, Cep R, Trochta M, Knezev M, Zivkovic A, Mladjenovic C, Antic A. Thermal Behavior Modeling Based on BP Neural Network in Keras Framework for Motorized Machine Tool Spindles. Materials (Basel) 2022; 15:ma15217782. [PMID: 36363373] [PMCID: PMC9658404] [DOI: 10.3390/ma15217782]
Abstract
This paper presents the development and evaluation of neural network models using a small input-output dataset to predict the thermal behavior of a high-speed motorized spindle. Different neural multi-output regression models were developed and evaluated using Keras, currently one of the most popular deep learning frameworks. The ANN was developed and evaluated considering the influence of topology (number of hidden layers and neurons within them), learning parameters, and validation techniques. The neural network was then tested on a dataset that was completely unknown to the network. The ANN model was used to analyze the effect of working conditions on the thermal behavior of the motorized grinder spindle. The prediction accuracy of the ANN model for the spindle thermal behavior ranged from 95% to 98%. The results show that an ANN model built on small datasets can accurately predict the temperature of the spindle under different working conditions. In addition, the analysis showed a very strong effect of coolant type on spindle unit temperature, particularly for intensive cooling with water.
Affiliation(s)
- Aleksandar Kosarac
- Faculty of Mechanical Engineering, University of East Sarajevo, 71123 Istocno Sarajevo, Bosnia and Herzegovina
| | - Robert Cep
- Faculty of Mechanical Engineering, VSB-Technical University of Ostrava, 70833 Ostrava, Czech Republic
| | - Miroslav Trochta
- Faculty of Mechanical Engineering, VSB-Technical University of Ostrava, 70833 Ostrava, Czech Republic
| | - Milos Knezev
- Faculty of Technical Sciences, University of Novi Sad, 21000 Novi Sad, Serbia
- Correspondence: ; Tel.: +381-21-485-2345
| | - Aleksandar Zivkovic
- Faculty of Technical Sciences, University of Novi Sad, 21000 Novi Sad, Serbia
| | | | - Aco Antic
- Faculty of Technical Sciences, University of Novi Sad, 21000 Novi Sad, Serbia
| |
Collapse
|
12
Mokhtarzadeh H, Jiang F, Zhao S, Malekipour F. OpenColab project: OpenSim in Google colaboratory to explore biomechanics on the web. Comput Methods Biomech Biomed Engin 2022:1-9. [PMID: 35930042] [DOI: 10.1080/10255842.2022.2104607]
Abstract
OpenSim is an open-source biomechanical package with a variety of applications. It is available to many users with bindings in MATLAB, Python, and Java via its application programming interfaces (APIs). Although the developers have documented OpenSim installation well for different operating systems (Windows, Mac, and Linux), installation is time-consuming and complex since each operating system requires a different configuration. This project aims to demystify the development of neuro-musculoskeletal modeling in OpenSim with zero installation configuration on any operating system (thus cross-platform) and easy model sharing, while accessing free graphical processing units (GPUs) on the web-based platform Google Colab. To achieve this, we developed OpenColab, in which the OpenSim source code was used to build a Conda package that can be installed on Google Colab with only one block of code in less than 7 min. Using OpenColab requires only an internet connection and a Gmail account. Moreover, OpenColab can access the vast libraries of machine learning methods available within free Google products, e.g. TensorFlow. We then performed an inverse problem in biomechanics and compared OpenColab results with the OpenSim graphical user interface (GUI) for validation. The outcomes of OpenColab and the GUI matched well (r≥0.82). OpenColab takes advantage of the zero configuration of cloud-based platforms, accesses GPUs, and enables users to share and reproduce modeling approaches for further validation, innovative online training, and research applications. Step-by-step installation processes and examples are available at: https://simtk.org/projects/opencolab.
Affiliation(s)
- Hossein Mokhtarzadeh
- Department of Biomedical Engineering, The University of Melbourne, Melbourne, Australia
- Fangwei Jiang
- Faculty of Engineering and Information Technology, The University of Melbourne, Melbourne, Australia
- Shengzhe Zhao
- Faculty of Engineering and Information Technology, The University of Melbourne, Melbourne, Australia
- Fatemeh Malekipour
- Department of Biomedical Engineering, The University of Melbourne, Melbourne, Australia
13
Branescu M, Swift S, Tucker A. A Comparison of Convolutional Neural Networks and Traditional Feature-Based Classification Applied to Leukaemia Image Analysis. Stud Health Technol Inform 2022; 295:545-550. [PMID: 35773932] [DOI: 10.3233/shti220786]
Abstract
The accuracy of smear test image classification is a fundamental aspect in differentiating the type of leukaemia and determining the right treatment to improve the patient's chances of survival and recovery. Image classification has lately become a very effective tool in detecting and analysing the type of leukaemia, as each type of the disease looks different when evaluated under the microscope. This paper evaluates and compares the efficiency and performance of feature extraction techniques (colour descriptors and Haralick texture descriptors) and a CNN (convolutional neural network) built and trained using the TensorFlow packages for classifying leukaemia images. Extracting texture and colour features from a given set of leukaemia images was successful in detecting the type of disease, and the results analysed with Weka classifiers gave the highest accuracy of 93.58%. TensorFlow tested with cross-validation proved efficient in training and customising the system, but the median accuracy was 56% and was not greatly improved by addressing the class imbalance in the dataset with SMOTE. Further studies will investigate increasing the number of images by using segmentation and image manipulation/augmentation techniques, and increasing the accuracy of the CNN through the addition of the investigated traditional features.
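The SMOTE step mentioned above generates synthetic minority-class samples by interpolating between a minority sample and one of its nearest minority neighbours. A minimal sketch of that interpolation, with hypothetical 2-D feature vectors standing in for the image descriptors:

```python
import random

def smote_oversample(minority, n_new, k=3, seed=42):
    # SMOTE in a nutshell: each synthetic sample is a random point on
    # the segment between a minority sample and one of its k nearest
    # minority neighbours (here, plain Euclidean distance).
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a = rng.choice(minority)
        neighbours = sorted(
            (b for b in minority if b is not a),
            key=lambda b: sum((ai - bi) ** 2 for ai, bi in zip(a, b)),
        )
        b = rng.choice(neighbours[:k])
        t = rng.random()                     # interpolation factor in [0, 1)
        synthetic.append([ai + t * (bi - ai) for ai, bi in zip(a, b)])
    return synthetic

# Hypothetical minority-class feature vectors
minority = [[1.0, 2.0], [1.2, 2.1], [0.9, 1.8], [1.1, 2.3]]
new_samples = smote_oversample(minority, n_new=6)
```

Because every synthetic point lies on a segment between two real minority samples, it stays inside the minority class's bounding region rather than duplicating existing samples.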
Affiliation(s)
- Marinela Branescu
- The Department of Computer Science, Brunel University, West London, United Kingdom
- Stephen Swift
- The Department of Computer Science, Brunel University, West London, United Kingdom
- Allan Tucker
- The Department of Computer Science, Brunel University, West London, United Kingdom
14
Dong Q, Zhang X, Luo G. Improving the Accuracy of Progress Indication for Constructing Deep Learning Models. IEEE Access 2022; 10:63754-63781. [PMID: 35873900 PMCID: PMC9302923 DOI: 10.1109/access.2022.3181493] [Indexed: 06/15/2023]
Abstract
For many machine learning tasks, deep learning greatly outperforms all other existing learning algorithms. However, constructing a deep learning model on a big data set often takes days or months. During this long process, it is preferable to provide a progress indicator that keeps predicting the model construction time left and the percentage of model construction work done. Recently, we developed the first method to do this that permits early stopping. That method revises its predicted model construction cost using information gathered at the validation points, where the model's error rate is computed on the validation set. Due to the sparsity of validation points, the resulting progress indicators often have a long delay in gathering information from enough validation points and obtaining relatively accurate progress estimates. In this paper, we propose a new progress indication method to overcome this shortcoming by judiciously inserting extra validation points between the original validation points. We implemented this new method in TensorFlow. Our experiments show that compared with using our prior method, using this new method reduces the progress indicator's prediction error of the model construction time left by 57.5% on average. Also, with a low overhead, this new method enables us to obtain relatively accurate progress estimates faster.
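The core idea, inserting extra validation points between the original ones and projecting the time left from the fraction of work done, can be sketched as follows (a simplified illustration assuming roughly constant throughput, not the authors' implementation):

```python
def insert_extra_points(validation_steps, n_extra):
    """Insert n_extra evenly spaced extra validation points between
    each pair of consecutive original validation points."""
    out = []
    for a, b in zip(validation_steps, validation_steps[1:]):
        out.append(a)
        step = (b - a) / (n_extra + 1)
        out.extend(round(a + i * step) for i in range(1, n_extra + 1))
    out.append(validation_steps[-1])
    return out

def time_left(elapsed, frac_done):
    """Project remaining time, assuming constant throughput so far."""
    if frac_done <= 0:
        return float("inf")
    return elapsed * (1 - frac_done) / frac_done
```

With denser validation points, `time_left` can be refreshed more often, which is the shortcoming the paper targets.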
Affiliation(s)
- Qifei Dong
- Department of Biomedical Informatics and Medical Education, University of Washington, Seattle, WA 98195, USA
- Xiaoyi Zhang
- Department of Biomedical Informatics and Medical Education, University of Washington, Seattle, WA 98195, USA
- Gang Luo
- Department of Biomedical Informatics and Medical Education, University of Washington, Seattle, WA 98195, USA
15
Janbi N, Mehmood R, Katib I, Albeshri A, Corchado JM, Yigitcanlar T. Imtidad: A Reference Architecture and a Case Study on Developing Distributed AI Services for Skin Disease Diagnosis over Cloud, Fog and Edge. Sensors (Basel) 2022; 22:1854. [PMID: 35271000 PMCID: PMC8914788 DOI: 10.3390/s22051854] [Received: 01/05/2022] [Revised: 02/17/2022] [Accepted: 02/21/2022] [Indexed: 06/14/2023]
Abstract
Several factors are motivating the development of preventive, personalized, connected, virtual, and ubiquitous healthcare services. These factors include declining public health, increase in chronic diseases, an ageing population, rising healthcare costs, the need to bring intelligence near the user for privacy, security, performance, and costs reasons, as well as COVID-19. Motivated by these drivers, this paper proposes, implements, and evaluates a reference architecture called Imtidad that provides Distributed Artificial Intelligence (AI) as a Service (DAIaaS) over cloud, fog, and edge using a service catalog case study containing 22 AI skin disease diagnosis services. These services belong to four service classes that are distinguished based on software platforms (containerized gRPC, gRPC, Android, and Android Nearby) and are executed on a range of hardware platforms (Google Cloud, HP Pavilion Laptop, NVIDIA Jetson nano, Raspberry Pi Model B, Samsung Galaxy S9, and Samsung Galaxy Note 4) and four network types (Fiber, Cellular, Wi-Fi, and Bluetooth). The AI models for the diagnosis include two standard Deep Neural Networks and two Tiny AI deep models to enable their execution at the edge, trained and tested using 10,015 real-life dermatoscopic images. The services are evaluated using several benchmarks including model service value, response time, energy consumption, and network transfer time. A DL service on a local smartphone provides the best service in terms of both energy and speed, followed by a Raspberry Pi edge device and a laptop in fog. The services are designed to enable different use cases, such as patient diagnosis at home or sending diagnosis requests to travelling medical professionals through a fog device or cloud. 
This is the pioneering work that provides a reference architecture and such a detailed implementation and treatment of DAIaaS services, and is also expected to have an extensive impact on developing smart distributed service infrastructures for healthcare and other sectors.
Affiliation(s)
- Nourah Janbi
- Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Rashid Mehmood
- High Performance Computing Center, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Iyad Katib
- Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Aiiad Albeshri
- Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Juan M. Corchado
- Bisite Research Group, University of Salamanca, 37007 Salamanca, Spain
- Air Institute, IoT Digital Innovation Hub, 37188 Salamanca, Spain
- Department of Electronics, Information and Communication, Faculty of Engineering, Osaka Institute of Technology, Osaka 535-8585, Japan
- Tan Yigitcanlar
- School of Architecture and Built Environment, Queensland University of Technology, 2 George Street, Brisbane, QLD 4000, Australia
16
Wang Z, Li J, Wu J, Xu H. Application of Deep Learning Algorithms to Visual Communication Courses. Front Psychol 2021; 12:713723. [PMID: 34659027 PMCID: PMC8514077 DOI: 10.3389/fpsyg.2021.713723] [Received: 05/24/2021] [Accepted: 09/03/2021] [Indexed: 11/13/2022]
Abstract
There are few studies on combining visual communication courses with image style transfer. Yet such a combination can help students understand more vividly the differences in perception brought by image styles. Therefore, a collaborative application combining visual communication courses and image style transfer is reported here. First, the visual communication courses are reviewed to establish their relationship with image style transfer. Then, a style transfer method based on deep learning is designed and a fast transfer network is introduced, with image rendering accelerated by separating training from execution. A fast style conversion network is constructed based on TensorFlow, and a style model is obtained after training. Finally, six types of images are selected from the Google Gallery for style conversion: landscape, architectural, character, animal, cartoon, and hand-painted images. The style transfer method achieves excellent effects on the whole image, apart from the parts that are hard to render. Furthermore, increasing the number of iterations of the style transfer network alleviates the loss of image content and style. The method reported here can transmit image style in less than 1 s, realizing real-time style transfer, and it effectively improves the stylization effect and image quality during conversion. The proposed style transfer system can increase students' understanding of different artistic styles in visual communication courses, thereby improving their learning efficiency.
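Style-transfer losses of the kind used in fast transfer networks are commonly built on Gram matrices of feature maps; a minimal NumPy sketch of that standard ingredient (illustrative only, not the specific network described in the paper) is:

```python
import numpy as np

def gram_matrix(features):
    """features: (H, W, C) feature map -> (C, C) Gram matrix,
    normalised by the number of spatial positions."""
    h, w, c = features.shape
    flat = features.reshape(h * w, c)
    return flat.T @ flat / (h * w)

def style_loss(feat_generated, feat_style):
    """Mean squared difference between the two Gram matrices."""
    g1 = gram_matrix(feat_generated)
    g2 = gram_matrix(feat_style)
    return float(np.mean((g1 - g2) ** 2))
```

Because the Gram matrix discards spatial arrangement, matching it transfers texture and colour statistics (the "style") rather than content layout.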
Affiliation(s)
- Zewen Wang
- Pan Tianshou College of Architecture, Art and Design, Ningbo University, Ningbo, China
- Jiayi Li
- Department of Control and Computer Engineering, Polytechnic University of Turin, Turin, Italy
- Jieting Wu
- Engineering University of Armed Police Force, Urumqi, China
- Hui Xu
- College of Education, University of Perpetual Help System DALTA, Manila, Philippines
17
Youn YC, Pyun JM, Ryu N, Baek MJ, Jang JW, Park YH, Ahn SW, Shin HW, Park KY, Kim SY. Use of the Clock Drawing Test and the Rey-Osterrieth Complex Figure Test-copy with convolutional neural networks to predict cognitive impairment. Alzheimers Res Ther 2021; 13:85. [PMID: 33879200 PMCID: PMC8059231 DOI: 10.1186/s13195-021-00821-8] [Received: 11/29/2020] [Accepted: 04/05/2021] [Indexed: 11/10/2022]
Abstract
BACKGROUND The Clock Drawing Test (CDT) and Rey-Osterrieth Complex Figure Test (RCFT) are widely used as part of neuropsychological test batteries to assess cognitive function. Our objective was to confirm the prediction accuracies of the RCFT-copy and CDT for cognitive impairment (CI) using convolutional neural network algorithms as a screening tool. METHODS The CDT and RCFT-copy data were obtained from patients aged 60-80 years who had more than 6 years of education. In total, 747 CDT and 980 RCFT-copy figures were utilized. Convolutional neural network algorithms using TensorFlow (ver. 2.3.0) on the Colab cloud platform (colab.research.google.com) were used for preprocessing and modeling. We measured the prediction accuracy of each drawing test 10 times using this dataset with the following classes: normal cognition (NC) vs. mildly impaired cognition (MI), NC vs. severely impaired cognition (SI), and NC vs. CI (MI + SI). RESULTS The accuracy of the CDT was better for differentiating MI (CDT, 78.04 ± 2.75; RCFT-copy, not trained) and SI from NC (CDT, 91.45 ± 0.83; RCFT-copy, 90.27 ± 1.52); however, the RCFT-copy was better at predicting CI (CDT, 77.37 ± 1.77; RCFT, 83.52 ± 1.41). The accuracy for a 3-way classification (NC vs. MI vs. SI) was approximately 71% for both tests; no significant difference was found between them. CONCLUSIONS The two drawing tests showed good performance for predicting severe impairment of cognition; however, a drawing test alone is not enough to predict overall CI. There are some limitations to our study: the sample size was small, not all participants performed both the CDT and RCFT-copy, and only the copy condition of the RCFT was used. Algorithms involving memory performance and longitudinal changes are worth future exploration. These results may contribute to improved home-based healthcare delivery.
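Preprocessing drawing-test images for a CNN typically involves resizing and rescaling pixel values; a minimal NumPy sketch of such a step (an assumption for illustration only, since the abstract does not specify the preprocessing used) is:

```python
import numpy as np

def preprocess_drawing(img, size=64):
    """Resize a 2D grayscale drawing to (size, size) by
    nearest-neighbour sampling and scale pixels to [0, 1]."""
    h, w = img.shape
    rows = np.arange(size) * h // size  # source row for each output row
    cols = np.arange(size) * w // size  # source column for each output column
    resized = img[rows][:, cols]
    return resized.astype(np.float32) / 255.0
```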
Affiliation(s)
- Young Chul Youn
- Department of Neurology, College of Medicine, Chung-Ang University, Seoul, Republic of Korea
- Department of Medical Informatics, Chung-Ang University College of Medicine, Seoul, Republic of Korea
- Jung-Min Pyun
- Department of Neurology, Seoul National University College of Medicine & Seoul National University Bundang Hospital, Seoul, Republic of Korea
- Nayoung Ryu
- Department of Neurology, Seoul National University College of Medicine & Seoul National University Bundang Hospital, Seoul, Republic of Korea
- Min Jae Baek
- Department of Neurology, Seoul National University College of Medicine & Seoul National University Bundang Hospital, Seoul, Republic of Korea
- Jae-Won Jang
- Department of Neurology, Kangwon National University Hospital, Kangwon National University College of Medicine, Chuncheon, Republic of Korea
- Young Ho Park
- Department of Neurology, Seoul National University College of Medicine & Seoul National University Bundang Hospital, Seoul, Republic of Korea
- Suk-Won Ahn
- Department of Neurology, College of Medicine, Chung-Ang University, Seoul, Republic of Korea
- Hae-Won Shin
- Department of Neurology, College of Medicine, Chung-Ang University, Seoul, Republic of Korea
- Kwang-Yeol Park
- Department of Neurology, College of Medicine, Chung-Ang University, Seoul, Republic of Korea
- Department of Medical Informatics, Chung-Ang University College of Medicine, Seoul, Republic of Korea
- Sang Yun Kim
- Department of Medical Informatics, Chung-Ang University College of Medicine, Seoul, Republic of Korea
- Department of Neurology, Seoul National University College of Medicine & Neurocognitive Behavior Center, Seoul National University Bundang Hospital, 300 Gumi-dong, Bundang-gu, Seongnam-si, Gyeonggi-do, 463-707, Republic of Korea
18
Abstract
Targeting protein-protein interactions is a challenge and crucial task of the drug discovery process. A good starting point for rational drug design is the identification of hot spots (HS) at protein-protein interfaces, typically conserved residues that contribute most significantly to the binding. In this chapter, we depict point-by-point an in-house pipeline used for HS prediction using only sequence-based features from the well-known SpotOn dataset of soluble proteins (Moreira et al., Sci Rep 7:8007, 2017), through the implementation of a deep neural network. The presented pipeline is divided into three steps: (1) feature extraction, (2) deep learning classification, and (3) model evaluation. We present all the available resources, including code snippets, the main dataset, and the free and open-source modules/packages necessary for full replication of the protocol. The users should be able to develop an HS prediction model with accuracy, precision, recall, and AUROC of 0.96, 0.93, 0.91, and 0.86, respectively.
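The model-evaluation step can be illustrated with a small helper that computes accuracy, precision, and recall from binary predictions (a generic sketch, not the authors' pipeline code):

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision and recall for binary labels (1 = hot spot)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall
```

Precision penalises false hot-spot calls while recall penalises missed hot spots, which is why the chapter reports both alongside accuracy and AUROC.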
Affiliation(s)
- António J Preto
- Center for Innovative Biomedicine and Biotechnology, University of Coimbra, Coimbra, Portugal
- Center for Neuroscience and Cell Biology, University of Coimbra, Coimbra, Portugal
- Institute for Interdisciplinary Research, University of Coimbra, Coimbra, Portugal
- Pedro Matos-Filipe
- Center for Innovative Biomedicine and Biotechnology, University of Coimbra, Coimbra, Portugal
- Center for Neuroscience and Cell Biology, University of Coimbra, Coimbra, Portugal
- José G de Almeida
- Center for Innovative Biomedicine and Biotechnology, University of Coimbra, Coimbra, Portugal
- Center for Neuroscience and Cell Biology, University of Coimbra, Coimbra, Portugal
- Joana Mourão
- Center for Innovative Biomedicine and Biotechnology, University of Coimbra, Coimbra, Portugal
- Center for Neuroscience and Cell Biology, University of Coimbra, Coimbra, Portugal
- Institute for Interdisciplinary Research, University of Coimbra, Coimbra, Portugal
- Irina S Moreira
- Center for Innovative Biomedicine and Biotechnology, University of Coimbra, Coimbra, Portugal
- Center for Neuroscience and Cell Biology, University of Coimbra, Coimbra, Portugal
- Department of Life Sciences, University of Coimbra, Coimbra, Portugal
19
Jiménez B, Maya C, Velásquez G, Barrios JA, Pérez M, Román A. Helminth Egg Automatic Detector (HEAD): Improvements in development for digital identification and quantification of Helminth eggs and its application online. MethodsX 2020; 7:101158. [PMID: 33318959 PMCID: PMC7725948 DOI: 10.1016/j.mex.2020.101158] [Received: 08/01/2020] [Revised: 10/21/2020] [Accepted: 11/19/2020] [Indexed: 11/22/2022]
Abstract
Conventional analytical techniques for evaluating Helminth eggs are based on different steps to concentrate them in a pellet for direct observation and quantification under a light microscope, which can generate under-counts or over-counts and be time-consuming. To enhance this process, a new approach via automatic identification was implemented, in which various image processing detectors were developed and incorporated into a Helminth Egg Automatic Detector (HEAD) system. This allowed the identification and quantification of pathogenic eggs of global medical importance. More than 2.6 billion people are currently affected and infected, resulting in approximately 80,000 child deaths each year. As a result, since 1980 the World Health Organization (WHO) has implemented guidelines, regulations, and criteria for the control of the health risk. After the initial release of the analytical technique, two improvements were developed in the detector: first, a texture verification process that reduced the number of false positive results; and second, the establishment of optimal thresholds for each species. In addition, the software was made available on a free platform. After an internal statistical verification of the system, testing with internationally recognized parasitology laboratories was carried out. The HEAD system is capable of identifying and quantifying different species of Helminth eggs in different environmental samples (wastewater, sludge, biosolids, excreta, and soil), with in-service sensitivity and specificity values for the open-source TensorFlow (TF) machine learning model of 96.82% and 97.96%, respectively. The current iteration uses AutoML Vision, a platform for automating machine learning models that makes it easier to train, optimize, and export results to cloud applications or devices.
It represents a useful and cheap tool that could be utilized by environmental monitoring facilities and laboratories around the world.
- The HEAD software will significantly reduce the costs associated with the detection and quantification of helminth eggs to a high level of accuracy.
- It represents a tool not only for microbiologists and researchers, but also for various agencies involved in sanitation, such as environmental regulation agencies, which currently require highly trained technicians.
- The simplicity of the device contributes to controlling the contamination of water, soil, and crops, even in poor and isolated communities.
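In-service sensitivity and specificity of the kind reported for the HEAD detector are derived from a confusion matrix; a few lines of Python make the computation explicit (a generic sketch; the labels below are illustrative, not the study's data):

```python
def confusion_counts(truth, detected):
    """truth, detected: parallel lists of 0/1 per analysed object
    (1 = helminth egg present / reported). Returns TP, FN, TN, FP."""
    tp = sum(1 for t, d in zip(truth, detected) if t and d)
    fn = sum(1 for t, d in zip(truth, detected) if t and not d)
    tn = sum(1 for t, d in zip(truth, detected) if not t and not d)
    fp = sum(1 for t, d in zip(truth, detected) if not t and d)
    return tp, fn, tn, fp

def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)
```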
20
Li Y, Zhao Z, Luo Y, Qiu Z. Real-Time Pattern-Recognition of GPR Images with YOLO v3 Implemented by Tensorflow. Sensors (Basel) 2020; 20:6476. [PMID: 33198420 PMCID: PMC7696763 DOI: 10.3390/s20226476] [Received: 09/16/2020] [Revised: 11/10/2020] [Accepted: 11/10/2020] [Indexed: 06/01/2023]
Abstract
Artificial intelligence (AI) is widely used in pattern recognition and positioning. Most geological exploration applications need to locate and identify underground objects according to electromagnetic wave characteristics in ground-penetrating radar (GPR) images. Currently, few robust AI approaches can detect targets in GPR images in real time with high precision or automation. This paper proposes an approach, based on you only look once (YOLO) v3, that can identify parabolic targets of different sizes as well as voids in underground soil or concrete structures. With TensorFlow 1.13.0, developed by Google, we construct a YOLO v3 neural network to realize real-time pattern recognition of GPR images. We propose a specific coding method for the GPR image samples in YOLO v3 to improve the prediction accuracy of bounding boxes. At the same time, the K-means algorithm is applied to select anchor boxes to improve the accuracy of positioning hyperbola vertices. In some instances, vacillating electromagnetic signals may occur, i.e., multiple parabolic electromagnetic waves formed by strongly conductive objects in the soil, or overlapping waveforms. To handle these, this paper proposes a vacillating-signal-similarity intersection over union (V-IoU) method. Experimental results show that V-IoU combined with non-maximum suppression (NMS) can accurately frame targets in GPR images and reduce misidentified boxes as well. Compared with the single shot multi-box detector (SSD), YOLO v2, and Faster R-CNN, the V-IoU YOLO v3 shows superior performance even when implemented on a CPU, meeting real-time output requirements with an average detection speed of 12 fps. In summary, this paper proposes a simple, high-precision, real-time pattern recognition method for GPR imagery and promotes the application of artificial intelligence and deep learning in the geophysical sciences.
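The IoU and NMS ingredients that V-IoU builds on are standard; a minimal pure-Python sketch of both (illustrative only, not the paper's V-IoU variant) is:

```python
def iou(a, b):
    """Intersection over union of axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop any later box overlapping a kept one above thresh."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep
```

The paper's contribution replaces the plain overlap criterion with a similarity measure suited to vacillating GPR signals, but the greedy suppression loop is the same.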
Affiliation(s)
- Yuanhong Li
- College of Engineering, South China Agricultural University, Guangzhou 510642, China
- Ministry of Education Key Technologies and Equipment Laboratory of Agricultural Machinery and Equipment in South China, South China Agricultural University, Guangzhou 510642, China
- Department of Biological and Agricultural Engineering, Texas A&M University, College Station, TX 77843, USA
- Zuoxi Zhao
- College of Engineering, South China Agricultural University, Guangzhou 510642, China
- Ministry of Education Key Technologies and Equipment Laboratory of Agricultural Machinery and Equipment in South China, South China Agricultural University, Guangzhou 510642, China
- Yangfan Luo
- College of Engineering, South China Agricultural University, Guangzhou 510642, China
- Ministry of Education Key Technologies and Equipment Laboratory of Agricultural Machinery and Equipment in South China, South China Agricultural University, Guangzhou 510642, China
- Zhi Qiu
- College of Engineering, South China Agricultural University, Guangzhou 510642, China
- Ministry of Education Key Technologies and Equipment Laboratory of Agricultural Machinery and Equipment in South China, South China Agricultural University, Guangzhou 510642, China
21
Kotlarz K, Mielczarek M, Suchocki T, Czech B, Guldbrandtsen B, Szyda J. The application of deep learning for the classification of correct and incorrect SNP genotypes from whole-genome DNA sequencing pipelines. J Appl Genet 2020; 61:607-616. [PMID: 32996082 PMCID: PMC7652806 DOI: 10.1007/s13353-020-00586-0] [Received: 08/26/2020] [Revised: 09/11/2020] [Accepted: 09/18/2020] [Indexed: 11/18/2022]
Abstract
A downside of next-generation sequencing technology is its high technical error rate. We built a tool that uses array-based genotype information to classify next-generation sequencing-based SNPs into correct and incorrect calls. The deep learning algorithms were implemented via Keras. Several algorithms were tested: (i) the basic, naïve algorithm; (ii) the naïve algorithm modified by pre-imposing different weights on the incorrect and correct SNP classes when calculating the loss metric; and (iii)-(v) the naïve algorithm modified by randomly re-sampling (with replacement) the incorrect SNPs to match 30%/60%/100% of the number of correct SNPs. The training data set was composed of data from three bulls and consisted of 2,227,995 correct (97.94%) and 46,920 incorrect SNPs, while the validation data set consisted of data from one bull with 749,506 correct (98.05%) and 14,908 incorrect SNPs. The results showed that for a rare-event classification problem, such as incorrect SNP detection in NGS data, the most parsimonious naïve model and the model with weighting of SNP classes provided the best results for classifying the validation data set. Both classified 19% of truly incorrect SNPs as incorrect and 99% of truly correct SNPs as correct, and resulted in an F1 score of 0.21, the highest among the compared algorithms. We conclude that the basic models were less adapted to the specificity of the training data set and thus resulted in better classification of the independent validation data set than the other tested models.
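The two imbalance-handling strategies compared here, class weighting and re-sampling with replacement, can be sketched generically (illustrative helpers with invented names, not the authors' Keras code):

```python
import random

def class_weights(counts):
    """Inverse-frequency class weights (as passed to e.g. a loss
    function), normalised so an equally sized class gets weight 1."""
    total = sum(counts.values())
    n = len(counts)
    return {cls: total / (n * k) for cls, k in counts.items()}

def resample_minority(minority, n_majority, ratio, seed=0):
    """Randomly re-sample (with replacement) the minority class so its
    size matches `ratio` of the majority class, e.g. ratio=0.3 for 30%."""
    rng = random.Random(seed)
    target = int(n_majority * ratio)
    return [rng.choice(minority) for _ in range(target)]
```

Weighting keeps the data unchanged and scales the loss; re-sampling changes the empirical class frequencies the model sees.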
Affiliation(s)
- Krzysztof Kotlarz
- Biostatistics Group, Department of Genetics, Wroclaw University of Environmental and Life Sciences, Kozuchowska 7, 51-631, Wroclaw, Poland
- Magda Mielczarek
- Biostatistics Group, Department of Genetics, Wroclaw University of Environmental and Life Sciences, Kozuchowska 7, 51-631, Wroclaw, Poland
- Institute of Animal Breeding, Balice, Poland
- Tomasz Suchocki
- Biostatistics Group, Department of Genetics, Wroclaw University of Environmental and Life Sciences, Kozuchowska 7, 51-631, Wroclaw, Poland
- Institute of Animal Breeding, Balice, Poland
- Bartosz Czech
- Biostatistics Group, Department of Genetics, Wroclaw University of Environmental and Life Sciences, Kozuchowska 7, 51-631, Wroclaw, Poland
- Bernt Guldbrandtsen
- Animal Breeding Group, Department of Animal Sciences, University of Bonn, Bonn, Germany
- Joanna Szyda
- Biostatistics Group, Department of Genetics, Wroclaw University of Environmental and Life Sciences, Kozuchowska 7, 51-631, Wroclaw, Poland
- Institute of Animal Breeding, Balice, Poland
22
Lin Z, Guo W. Sorghum Panicle Detection and Counting Using Unmanned Aerial System Images and Deep Learning. Front Plant Sci 2020; 11:534853. [PMID: 32983210 PMCID: PMC7492560 DOI: 10.3389/fpls.2020.534853] [Received: 02/14/2020] [Accepted: 08/17/2020] [Indexed: 05/26/2023]
Abstract
Machine learning and computer vision technologies based on high-resolution imagery acquired using unmanned aerial systems (UAS) offer potential for accurate and efficient high-throughput plant phenotyping. In this study, we developed a sorghum panicle detection and counting pipeline using UAS images, based on an integration of image segmentation and a convolutional neural network (CNN) model. A UAS with an RGB camera was used to acquire images (2.7 mm resolution) at 10-m height in a research field with 120 small plots. A set of 1,000 images were randomly selected, and a mask was developed for each by manually delineating sorghum panicles. These images and their corresponding masks were randomly divided into 10 training datasets, each with a different number of images and masks, ranging from 100 to 1,000 with an interval of 100. A U-Net CNN model was built using these training datasets. The sorghum panicles were detected and counted from the predicted masks by the algorithm. The algorithm was implemented in Python, using the TensorFlow library for the deep learning procedure and the OpenCV library for the sorghum panicle counting step. Results showed that accuracy generally increased with the number of training images. The algorithm performed best with 1,000 training images, with an accuracy of 95.5% and a root mean square error (RMSE) of 2.5. The results indicate that the integration of image segmentation and the U-Net CNN model is an accurate and robust method for sorghum panicle counting and offers an opportunity for enhanced sorghum breeding efficiency and accurate yield estimation.
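Counting objects in a predicted binary mask, the final step of such a pipeline, amounts to counting connected components; a pure-Python flood-fill sketch (an illustrative stand-in for the OpenCV-based counting used in the paper) is:

```python
def count_objects(mask):
    """Count 4-connected components of 1s in a binary mask,
    as a stand-in for counting panicles in a predicted mask."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                count += 1
                # iterative flood fill to mark the whole component
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return count
```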
Affiliation(s)
- Zhe Lin
- Department of Plant and Soil Science, Texas Tech University, Lubbock, TX, United States
- Wenxuan Guo
- Department of Plant and Soil Science, Texas Tech University, Lubbock, TX, United States
- Department of Soil and Crop Sciences, Texas A&M AgriLife Research, Lubbock, TX, United States
23
Khan MN, Ahmed MM. Trajectory-level fog detection based on in-vehicle video camera with TensorFlow deep learning utilizing SHRP2 naturalistic driving data. Accid Anal Prev 2020; 142:105521. [PMID: 32408146 DOI: 10.1016/j.aap.2020.105521] [Received: 11/14/2019] [Revised: 03/07/2020] [Accepted: 03/22/2020] [Indexed: 06/11/2023]
Abstract
Providing drivers with real-time weather information and driving assistance during adverse weather, including fog, is crucial for safe driving. The primary focus of this study was to develop an affordable in-vehicle fog detection method that provides accurate trajectory-level weather information in real time. The study used the SHRP2 Naturalistic Driving Study (NDS) video data and utilized several promising deep learning techniques, including Deep Neural Network (DNN), Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), and Convolutional Neural Network (CNN). Python programming with the TensorFlow machine learning library was used to train the deep learning models. The analysis was done on a dataset consisting of three weather conditions: clear, distant fog, and near fog. During the training process, two optimizers, Adam and Gradient Descent, were used. While the overall prediction accuracies of the DNN, RNN, LSTM, and CNN using the Gradient Descent optimizer were found to be around 85%, 77%, 84%, and 97%, respectively, much improved overall prediction accuracies of 88%, 91%, 93%, and 98%, respectively, were observed with the Adam optimizer. The proposed fog detection method requires only a single video camera to detect weather conditions and can therefore be an inexpensive option to fit in maintenance vehicles to collect trajectory-level weather information in real time for expanding and updating weather-based Variable Speed Limit (VSL) systems and Advanced Traveler Information Systems (ATIS).
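The difference between the two optimizers comes down to their update rules; a minimal NumPy sketch of one Gradient Descent step versus one Adam step (textbook formulas for a scalar parameter, not the study's training code) is:

```python
import numpy as np

def gd_step(w, grad, lr=0.1):
    """Plain gradient descent: step against the gradient."""
    return w - lr * grad

def adam_step(w, grad, state, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update; state holds (m, v, t): the running first and
    second moment estimates and the step counter."""
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * grad          # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - b1 ** t)             # bias correction
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, (m, v, t)
```

Adam's per-parameter scaling by the second-moment estimate is one reason it often trains faster than plain gradient descent, consistent with the accuracy gap the study reports.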
Affiliation(s)
- Md Nasim Khan
- University of Wyoming, Department of Civil & Architectural Engineering, 1000 E University Ave, Dept. 3295, Laramie, WY 82071, United States
- Mohamed M Ahmed
- University of Wyoming, Department of Civil & Architectural Engineering, 1000 E University Ave, Dept. 3295, Laramie, WY 82071, United States
24
Ott T, Palm C, Vogt R, Oberprieler C. GinJinn: An object-detection pipeline for automated feature extraction from herbarium specimens. Appl Plant Sci 2020; 8:e11351. [PMID: 32626606 PMCID: PMC7328649 DOI: 10.1002/aps3.11351] [Received: 09/26/2019] [Accepted: 02/06/2020] [Indexed: 05/28/2023]
Abstract
PREMISE The generation of morphological data in evolutionary, taxonomic, and ecological studies of plants using herbarium material has traditionally been a labor-intensive task. Recent progress in machine learning using deep artificial neural networks (deep learning) for image classification and object detection has facilitated the establishment of a pipeline for the automatic recognition and extraction of relevant structures in images of herbarium specimens. METHODS AND RESULTS We implemented an extendable pipeline based on state-of-the-art deep-learning object-detection methods to collect leaf images from herbarium specimens of two species of the genus Leucanthemum. Using 183 specimens as the training data set, our pipeline extracted one or more intact leaves in 95% of the 61 test images. CONCLUSIONS We establish GinJinn as a deep-learning object-detection tool for the automatic recognition and extraction of individual leaves or other structures from herbarium specimens. Our pipeline offers greater flexibility and a lower entrance barrier than previous image-processing approaches based on hand-crafted features.
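The extraction step, cropping each detected structure out of the specimen image, can be sketched in NumPy (a generic illustration of cropping by bounding box; GinJinn's actual pipeline is built on object-detection frameworks):

```python
import numpy as np

def extract_regions(image, boxes):
    """Crop each detected bounding box (x1, y1, x2, y2) out of an
    (H, W, C) image array, e.g. leaves found on a herbarium sheet."""
    return [image[y1:y2, x1:x2] for (x1, y1, x2, y2) in boxes]
```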
Affiliation(s)
- Tankred Ott
- Evolutionary and Systematic Botany Group, Institute of Plant Sciences, University of Regensburg, Universitätsstraße 31, D-93053 Regensburg, Germany
- Christoph Palm
- Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Galgenbergstraße 32, D-93053 Regensburg, Germany
- Robert Vogt
- Botanic Garden and Botanical Museum Berlin-Dahlem, Freie Universität Berlin, Königin-Luise-Straße 6-8, D-14191 Berlin, Germany
- Christoph Oberprieler
- Evolutionary and Systematic Botany Group, Institute of Plant Sciences, University of Regensburg, Universitätsstraße 31, D-93053 Regensburg, Germany
25
Dong Q, Luo G. Progress Indication for Deep Learning Model Training: A Feasibility Demonstration. IEEE Access 2020; 8:79811-79843. [PMID: 32483518] [PMCID: PMC7263346] [DOI: 10.1109/access.2020.2989684] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 06/11/2023]
Abstract
Deep learning is the state-of-the-art learning algorithm for many machine learning tasks. Yet, training a deep learning model on a large data set is often time-consuming, taking several days or even months. During model training, it is desirable to offer a non-trivial progress indicator that can continuously project the remaining model training time and the fraction of model training work completed. This makes the training process more user-friendly. In addition, we can use the information given by the progress indicator to assist in workload management. In this paper, we present the first set of techniques to support non-trivial progress indicators for deep learning model training when early stopping is allowed. We report an implementation of these techniques in TensorFlow and our evaluation results for both convolutional and recurrent neural networks. Our experiments show that our progress indicator can offer useful information even if the run-time system load varies over time. In addition, the progress indicator can self-correct its initial estimation errors, if any, over time.
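The core idea of such a progress indicator — projecting the remaining time and the fraction of work completed from the iterations finished so far — can be sketched in plain Python. This is a simplified illustration of the concept only, not the paper's TensorFlow implementation; the fixed iteration budget and the smoothing constant are our assumptions (the paper additionally re-estimates total work under early stopping):

```python
import time

class ProgressIndicator:
    """Projects remaining training time from iterations completed so far.

    Simplified sketch: assumes a fixed total iteration count, whereas the
    paper also handles early stopping by re-estimating the total work.
    """

    def __init__(self, total_iters, smoothing=0.9):
        self.total_iters = total_iters
        self.smoothing = smoothing      # exponential smoothing of per-iteration cost
        self.avg_iter_time = None
        self.done = 0
        self._last = time.perf_counter()

    def update(self):
        now = time.perf_counter()
        elapsed = now - self._last
        self._last = now
        self.done += 1
        if self.avg_iter_time is None:
            self.avg_iter_time = elapsed
        else:  # smooth so transient system-load spikes do not dominate
            self.avg_iter_time = (self.smoothing * self.avg_iter_time
                                  + (1 - self.smoothing) * elapsed)

    def fraction_done(self):
        return self.done / self.total_iters

    def remaining_seconds(self):
        return (self.total_iters - self.done) * self.avg_iter_time


pi = ProgressIndicator(total_iters=100)
for _ in range(25):
    pi.update()                    # stands in for one training iteration
print(f"{pi.fraction_done():.0%} done")  # 25% done
```

The exponential smoothing is what lets such an indicator self-correct when the run-time system load varies, as the abstract describes.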
Affiliation(s)
- Qifei Dong
- Department of Biomedical Informatics and Medical Education, University of Washington, Seattle, WA 98195, USA
- Gang Luo
- Department of Biomedical Informatics and Medical Education, University of Washington, Seattle, WA 98195, USA
26
Abstract
BACKGROUND The mobile screening test system for mild cognitive impairment (mSTS-MCI) was developed and validated to address the low sensitivity and specificity of the Montreal Cognitive Assessment (MoCA) widely used clinically. OBJECTIVE This study aimed to evaluate the efficacy of machine learning algorithms based on the mSTS-MCI and the Korean version of the MoCA. METHOD In total, 103 healthy individuals and 74 patients with MCI were randomly divided into training and test data sets. The algorithm, implemented in TensorFlow, was trained on the training data set, and its accuracy was then calculated on the test data set. The cost was calculated via logistic regression. RESULT The predictive power of the algorithms was higher than that of the original tests. In particular, the algorithm based on the mSTS-MCI showed the highest positive-predictive value. CONCLUSION The machine learning algorithms predicting MCI showed findings comparable to those of the conventional screening tools.
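The quantities reported in such evaluations — accuracy, sensitivity, specificity, and positive-predictive value — all derive from a 2×2 confusion matrix. A small Python sketch with purely hypothetical counts (not the study's data) shows the computation:

```python
def screening_metrics(tp, fp, tn, fn):
    """Standard metrics for a binary screening test from confusion counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)          # true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    ppv = tp / (tp + fp)                  # positive-predictive value
    return accuracy, sensitivity, specificity, ppv

# Hypothetical counts for illustration only.
acc, sens, spec, ppv = screening_metrics(tp=60, fp=10, tn=90, fn=14)
print(f"accuracy={acc:.3f} sensitivity={sens:.3f} "
      f"specificity={spec:.3f} PPV={ppv:.3f}")
```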
Affiliation(s)
- Jin-Hyuck Park
- Department of Occupational Therapy, College of Medical Science, Soonchunhyang University, Asan, Korea
27
Abstract
The logistic regression model is one of the most widely used modeling techniques in clinical medicine, owing to the widely available statistical packages for its implementation and its ease of interpretation. However, logistic model training requires strict assumptions (such as additivity and linearity) to be met, and these assumptions may not hold true in the real world. Thus, clinical investigators need to master some advanced model training methods that can predict more accurately. TensorFlow™ is a popular tool for training machine learning models, including supervised, unsupervised, and reinforcement learning methods, so it is important to learn TensorFlow™ in the era of big data. Since most clinical investigators are familiar with the logistic regression model, this article provides a step-by-step tutorial on how to train a logistic regression model in TensorFlow™, with the primary purpose of illustrating how TensorFlow™ works. We first construct a graph with tensors and operations; the graph is then run in a session. Finally, we display the graph and summary statistics in TensorBoard, which shows how the accuracy and loss values change across training iterations.
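TensorFlow's graph-and-session API is not reproduced here, but the underlying computation such a tutorial builds — fitting a logistic model by gradient descent on the log loss — can be sketched in plain Python. The one-feature toy data, learning rate, and epoch count are our own choices for illustration:

```python
import math

def train_logistic(xs, ys, lr=0.5, epochs=2000):
    """Fit y ~ sigmoid(w*x + b) by gradient descent on the log loss —
    the same computation a TensorFlow graph would express with tensors
    and operations and then run in a session."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x / n   # d(loss)/dw
            gb += (p - y) / n       # d(loss)/db
        w -= lr * gw
        b -= lr * gb
    return w, b

# Toy separable data: label is 1 when x > 0.
xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(xs, ys)
predict = lambda x: 1.0 / (1.0 + math.exp(-(w * x + b)))
print(predict(1.5) > 0.5, predict(-1.5) < 0.5)  # True True
```

In TensorFlow the loop body would instead be a graph node (an optimizer step) executed repeatedly in a session, with the loss logged for TensorBoard.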
Affiliation(s)
- Zhongheng Zhang
- Department of Emergency Medicine, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou 310016, China
- Lei Mo
- Department of Biostatistics, Lejiu Healthcare Technology Co., Ltd, Shanghai, China
- Chen Huang
- Nursing Department, Information Technology (IT) Center, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou 310016, China
- Ping Xu
- Emergency Department, Zigong Fourth People's Hospital, Zigong 643000, China
28
Khalighifar A, Komp E, Ramsey JM, Gurgel-Gonçalves R, Peterson AT. Deep Learning Algorithms Improve Automated Identification of Chagas Disease Vectors. J Med Entomol 2019; 56:1404-1410. [PMID: 31121052] [DOI: 10.1093/jme/tjz065] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Received: 11/29/2018] [Indexed: 06/09/2023]
Abstract
Vector-borne Chagas disease is endemic to the Americas and imposes significant economic and social burdens on public health. In a previous contribution, we presented an automated identification system that was able to discriminate among 12 Mexican and 39 Brazilian triatomine (Hemiptera: Reduviidae) species from digital images. To explore the same data more deeply using machine-learning approaches, hoping for improvements in classification, we employed TensorFlow, an open-source software platform for a deep learning algorithm. We trained the algorithm based on 405 images for Mexican triatomine species and 1,584 images for Brazilian triatomine species. Our system achieved 83.0 and 86.7% correct identification rates across all Mexican and Brazilian species, respectively, an improvement over comparable rates from statistical classifiers (80.3 and 83.9%, respectively). Incorporating distributional information to reduce numbers of species in analyses improved identification rates to 95.8% for Mexican species and 98.9% for Brazilian species. Given the 'taxonomic impediment' and difficulties in providing entomological expertise necessary to control such diseases, automating the identification process offers a potential partial solution to crucial challenges.
Affiliation(s)
- Ali Khalighifar
- Biodiversity Institute and Department of Ecology and Evolutionary Biology, University of Kansas, Lawrence, KS
- Ed Komp
- Information and Telecommunication Technology Center, University of Kansas, Lawrence, KS
- Janine M Ramsey
- Centro Regional de Investigación en Salud Pública, Instituto Nacional de Salud Publica, Tapachula, Chiapas, Mexico
- A Townsend Peterson
- Biodiversity Institute and Department of Ecology and Evolutionary Biology, University of Kansas, Lawrence, KS
29
Sun J, Guo J, Wu X, Zhu Q, Wu D, Xian K, Zhou X. Analyzing the Impact of Traffic Congestion Mitigation: From an Explainable Neural Network Learning Framework to Marginal Effect Analyses. Sensors (Basel) 2019; 19:s19102254. [PMID: 31096706] [PMCID: PMC6567360] [DOI: 10.3390/s19102254] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Received: 03/01/2019] [Revised: 04/30/2019] [Accepted: 05/07/2019] [Indexed: 11/19/2022]
Abstract
Computational graphs (CGs) have been widely utilized in numerical analysis and deep learning to represent directed forward networks of data flows between operations. This paper aims to develop an explainable learning framework that can fully integrate three major steps of decision support: Synthesis of diverse traffic data, multilayered traffic demand estimation, and marginal effect analyses for transport policies. Following the big data-driven transportation computational graph (BTCG) framework, which is an emerging framework for explainable neural networks, we map different external traffic measurements collected from household survey data, mobile phone data, floating car data, and sensor networks to multilayered demand variables in a CG. Furthermore, we extend the CG-based framework by mapping different congestion mitigation strategies to CG layers individually or in combination, allowing the marginal effects and potential migration magnitudes of the strategies to be reliably quantified. Using the TensorFlow architecture, we evaluate our framework on the Sioux Falls network and present a large-scale case study based on a subnetwork of Beijing using a data set from the metropolitan planning organization.
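A computational graph pairs a forward pass over data flows with gradients that quantify marginal effects. A minimal reverse-mode autodiff sketch (our illustration, not the BTCG framework itself) makes this concrete:

```python
class Node:
    """One scalar node in a computational graph with reverse-mode autodiff."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # (parent_node, local_gradient) pairs
        self.grad = 0.0

    def __add__(self, other):
        return Node(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Node(self.value * other.value,
                    [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        """Accumulate d(output)/d(this node) back through the graph."""
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

# f(x, y) = x*y + y ;  marginal effects: df/dx = y, df/dy = x + 1
x, y = Node(3.0), Node(4.0)
f = x * y + y
f.backward()
print(f.value, x.grad, y.grad)  # 16.0 4.0 4.0
```

The recursive accumulation shown here revisits shared subgraphs and so scales poorly; real frameworks such as TensorFlow instead sweep the graph once in reverse topological order.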
Affiliation(s)
- Jianping Sun
- School of Traffic and Transportation, Beijing Jiaotong University, Beijing 100044, China.
- Beijing Transport Institute, Beijing 100073, China.
- Jifu Guo
- School of Traffic and Transportation, Beijing Jiaotong University, Beijing 100044, China.
- Beijing Transport Institute, Beijing 100073, China.
- Xin Wu
- School of Sustainable Engineering and the Built Environment, Arizona State University, Tempe, AZ 85281, USA.
- Qian Zhu
- School of Transportation and Logistics, Southwest Jiaotong University, Chengdu 611756, China.
- Danting Wu
- School of Traffic and Transportation, Beijing Jiaotong University, Beijing 100044, China.
- Kai Xian
- Beijing Transport Institute, Beijing 100073, China.
- Xuesong Zhou
- Beijing Transport Institute, Beijing 100073, China.
- School of Sustainable Engineering and the Built Environment, Arizona State University, Tempe, AZ 85281, USA.
30
Aqib M, Mehmood R, Alzahrani A, Katib I, Albeshri A, Altowaijri SM. Smarter Traffic Prediction Using Big Data, In-Memory Computing, Deep Learning and GPUs. Sensors (Basel) 2019; 19:s19092206. [PMID: 31086055] [PMCID: PMC6539338] [DOI: 10.3390/s19092206] [Citation(s) in RCA: 49] [Impact Index Per Article: 9.8] [Received: 03/31/2019] [Revised: 05/01/2019] [Accepted: 05/10/2019] [Indexed: 11/16/2022]
Abstract
Road transportation is the backbone of modern economies, although it annually costs 1.25 million deaths and trillions of dollars to the global economy, and damages public health and the environment. Deep learning is among the leading-edge methods used for transportation-related predictions; however, existing works are in their infancy and fall short in multiple respects, including the use of datasets with limited sizes and scopes, and insufficient depth of the deep learning studies. This paper provides a novel and comprehensive approach toward large-scale, faster, and real-time traffic prediction by bringing four complementary cutting-edge technologies together: big data, deep learning, in-memory computing, and Graphics Processing Units (GPUs). We trained deep networks using over 11 years of data provided by the California Department of Transportation (Caltrans), the largest dataset that has been used in deep learning studies. Several combinations of the input attributes of the data along with various network configurations of the deep learning models were investigated for training and prediction purposes. The use of the pre-trained model for real-time prediction was also explored. The paper contributes novel deep learning models, algorithms, implementation, analytics methodology, and software tools for smart cities, big data, high performance computing, and their convergence.
Affiliation(s)
- Muhammad Aqib
- Department of Computer Science, FCIT, King Abdulaziz University, Jeddah 21589, Kingdom of Saudi Arabia.
- Rashid Mehmood
- High Performance Computing Center, King Abdulaziz University, Jeddah 21589, Kingdom of Saudi Arabia.
- Ahmed Alzahrani
- Department of Computer Science, FCIT, King Abdulaziz University, Jeddah 21589, Kingdom of Saudi Arabia.
- Iyad Katib
- Department of Computer Science, FCIT, King Abdulaziz University, Jeddah 21589, Kingdom of Saudi Arabia.
- Aiiad Albeshri
- Department of Computer Science, FCIT, King Abdulaziz University, Jeddah 21589, Kingdom of Saudi Arabia.
- Saleh M Altowaijri
- Faculty of Computing and Information Technology, Northern Border University, Rafha 91911, Kingdom of Saudi Arabia.
31
Chang Y, Wei D, Jia H, Curreli C, Wu Z, Sheng M, Glaser SJ, Yang X. Spin-Scenario: A flexible scripting environment for realistic MR simulations. J Magn Reson 2019; 301:1-9. [PMID: 30825713] [DOI: 10.1016/j.jmr.2019.01.016] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 01/04/2019] [Revised: 01/29/2019] [Accepted: 01/30/2019] [Indexed: 06/09/2023]
Abstract
In this paper we present a new open-source package, Spin-Scenario, which provides an intuitive, flexible, and unique scripting framework able to cover many aspects of simulations in both MR imaging and MR spectroscopy. For this purpose, we adopted the Liouville space model as the standard computing engine and offset the consequent computational burden with parallel computing techniques. Benefiting from the powerful Lua scripting language, the pulse sequence programming syntax was specially designed to offer an extremely concise way of scripting. Moreover, the built-in dataflow-graph-based optimal control scheme enables efficient optimization of shaped pulses or multiple cooperative pulses for real-life experiment evaluations. As the name states, users are expected to be able to realize their creative ideas like a scenarist who creates a scenario script and watches the spin actors act accordingly. The validation of the framework was demonstrated with several examples within MR imaging and MR spectroscopy. Spin-Scenario is available for download at https://github.com/spin-scenario.
Affiliation(s)
- Yan Chang
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, 215163 Suzhou, China; University of Chinese Academy of Sciences, China
- Daxiu Wei
- Physics Department and Shanghai Key Laboratory of Magnetic Resonance, East China Normal University, 200062 Shanghai, China.
- Huihui Jia
- Department of Radiology, Children's Hospital of Soochow University, 215025 Suzhou, Jiangsu, China
- Cecilia Curreli
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, 215163 Suzhou, China; Munich School of Engineering, Technical University of Munich, 85748 Garching, Germany
- Zhenzhou Wu
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, 215163 Suzhou, China
- Mao Sheng
- Department of Radiology, Children's Hospital of Soochow University, 215025 Suzhou, Jiangsu, China
- Steffen J Glaser
- Department of Chemistry, Technical University of Munich, 85748 Garching, Germany.
- Xiaodong Yang
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, 215163 Suzhou, China.
32
Lussier F, Missirlis D, Spatz JP, Masson JF. Machine-Learning-Driven Surface-Enhanced Raman Scattering Optophysiology Reveals Multiplexed Metabolite Gradients Near Cells. ACS Nano 2019; 13:1403-1411. [PMID: 30724079] [DOI: 10.1021/acsnano.8b07024] [Citation(s) in RCA: 45] [Impact Index Per Article: 9.0] [Indexed: 05/27/2023]
Abstract
The extracellular environment is a complex medium in which cells secrete and consume metabolites. Molecular gradients are thereby created near cells, triggering various biological and physiological responses. However, investigating these molecular gradients remains challenging because the current tools are ill-suited, providing poor temporal and spatial resolution while also being destructive. Herein, we report the development and application of a machine learning approach in combination with a surface-enhanced Raman spectroscopy (SERS) nanoprobe to measure simultaneously the gradients of at least eight metabolites in vitro near different cell lines. We found significant increases in the secretion or consumption of lactate, glucose, ATP, glutamine, and urea within 20 μm of the cell surface compared to the bulk. We also observed that cancerous cells (HeLa) have a greater glycolytic rate than fibroblasts (REF52), as is expected for this phenotype. Endothelial (HUVEC) and HeLa cells exhibited significant increases in extracellular ATP compared to the control, shedding light on the role of extracellular ATP in the local cancer environment. Machine-learning-driven SERS optophysiology is generally applicable to metabolites involved in cellular processes, providing a general platform on which to study cell biology.
Affiliation(s)
- Félix Lussier
- Department of Chemistry, Université de Montréal, Case Postale 6128 Succursale Centre-Ville, Montreal, Quebec, Canada, H3C 3J7
- Dimitris Missirlis
- Department of Cellular Biophysics, Max Planck Institute for Medical Research, INF 253, D-69120 Heidelberg, Germany
- Department of Biophysical Chemistry, University of Heidelberg, INF 253, D-69120 Heidelberg, Germany
- Joachim P Spatz
- Department of Cellular Biophysics, Max Planck Institute for Medical Research, INF 253, D-69120 Heidelberg, Germany
- Department of Biophysical Chemistry, University of Heidelberg, INF 253, D-69120 Heidelberg, Germany
- Jean-François Masson
- Department of Chemistry, Université de Montréal, Case Postale 6128 Succursale Centre-Ville, Montreal, Quebec, Canada, H3C 3J7
- Centre Québécois des Matériaux Fonctionnels (CQMF), Canada
33
Segovia F, Górriz JM, Ramírez J, Martinez-Murcia FJ, García-Pérez M. Using deep neural networks along with dimensionality reduction techniques to assist the diagnosis of neurodegenerative disorders. Log J IGPL 2018; 26:618-628. [PMID: 30532642] [PMCID: PMC6267552] [DOI: 10.1093/jigpal/jzy026] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Received: 10/10/2017] [Indexed: 06/09/2023]
Abstract
The analysis of neuroimaging data is frequently used to assist the diagnosis of neurodegenerative disorders such as Alzheimer's disease (AD) or Parkinson's disease (PD) and has become a routine procedure in clinical practice. During the past decade, the pattern recognition community has proposed a number of machine learning-based systems that automatically analyse neuroimaging data in order to improve the diagnosis. However, the high dimensionality of the data is still a challenge and there is room for improvement. The development of novel classification frameworks such as TensorFlow, recently released as open source by Google Inc., represents an opportunity to continue evolving these systems. In this work, we demonstrate several computer-aided diagnosis (CAD) systems based on deep neural networks that improve the diagnosis of AD and PD and outperform those based on classical classifiers. In order to address the small sample size problem, we evaluate two dimensionality reduction algorithms based on Principal Component Analysis (PCA) and Non-Negative Matrix Factorization (NNMF), respectively. The performance of the developed CAD systems is assessed using four datasets with neuroimaging data of different modalities.
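PCA-based dimensionality reduction of the kind evaluated here projects the data onto the leading eigenvectors of its covariance matrix. The first component can be found by power iteration, sketched below in plain Python on toy 2D points (a generic illustration, unrelated to the paper's neuroimaging data):

```python
def leading_principal_component(data, iters=200):
    """Power iteration for the first eigenvector of the covariance matrix."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    # Covariance matrix C = X^T X / n on the centered data.
    cov = [[sum(r[i] * r[j] for r in centered) / n for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]   # repeated C·v converges to the top eigenvector
    return v

# Points spread almost entirely along the x-axis, so the first PC is (±1, 0).
pts = [[-3.0, 0.1], [-1.0, -0.1], [1.0, -0.1], [3.0, 0.1]]
pc = leading_principal_component(pts)
print([round(abs(c), 2) for c in pc])  # [1.0, 0.0]
```

Projecting each sample onto the first few such components yields the low-dimensional features that a classifier is then trained on.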
Affiliation(s)
- F Segovia
- Department of Signal Theory, Networking and Communications, University of Granada, Spain
- J M Górriz
- Department of Signal Theory, Networking and Communications, University of Granada, Spain
- J Ramírez
- Department of Signal Theory, Networking and Communications, University of Granada, Spain
- F J Martinez-Murcia
- Department of Signal Theory, Networking and Communications, University of Granada, Spain
- M García-Pérez
- Department of Signal Theory, Networking and Communications, University of Granada, Spain
34
Savalia S, Emamian V. Cardiac Arrhythmia Classification by Multi-Layer Perceptron and Convolution Neural Networks. Bioengineering (Basel) 2018; 5:E35. [PMID: 29734666] [PMCID: PMC6027502] [DOI: 10.3390/bioengineering5020035] [Citation(s) in RCA: 39] [Impact Index Per Article: 6.5] [Received: 03/28/2018] [Revised: 04/18/2018] [Accepted: 04/28/2018] [Indexed: 12/02/2022] Open
Abstract
The electrocardiogram (ECG) plays an essential role in the medical field, as it records the heart's electrical activity over time and is used to detect numerous cardiovascular diseases. If a recorded ECG signal shows a certain irregularity in its predefined features, this is called arrhythmia; its types include tachycardia, bradycardia, supraventricular arrhythmias, and ventricular arrhythmias. This encouraged us to conduct research on distinguishing between several arrhythmias using deep neural network algorithms such as the multi-layer perceptron (MLP) and the convolutional neural network (CNN). The TensorFlow library, developed by Google for deep learning and machine learning, is used in Python to implement the algorithms proposed here. The ECG databases accessible at PhysioBank.com and kaggle.com were used for training, testing, and validation of the MLP and CNN algorithms. The proposed approach consists of an MLP with four hidden layers with weights and biases, and a four-layer convolutional neural network, which map ECG samples to the different classes of arrhythmia. The accuracy of the algorithm surpasses the performance of current algorithms developed by other cardiologists in both sensitivity and precision.
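The forward pass of an MLP classifier of this kind is a stack of affine layers with nonlinearities, ending in a softmax over the arrhythmia classes. A minimal sketch with toy weights (our illustration, not the trained network from the paper):

```python
import math

def relu(v):
    return [max(0.0, x) for x in v]

def affine(v, W, b):
    """Matrix-vector product W·v plus bias b."""
    return [sum(wi * xi for wi, xi in zip(row, v)) + bi
            for row, bi in zip(W, b)]

def softmax(v):
    m = max(v)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in v]
    s = sum(exps)
    return [e / s for e in exps]

def mlp_forward(x, layers):
    """layers: list of (W, b) pairs; ReLU between layers, softmax at the end."""
    for W, b in layers[:-1]:
        x = relu(affine(x, W, b))
    W, b = layers[-1]
    return softmax(affine(x, W, b))

# Toy 2-feature "ECG sample", one hidden layer of 3 units, 2 classes.
layers = [
    ([[1.0, -1.0], [0.5, 0.5], [-1.0, 1.0]], [0.0, 0.1, 0.0]),
    ([[1.0, 0.0, -1.0], [-1.0, 0.0, 1.0]], [0.0, 0.0]),
]
probs = mlp_forward([0.8, 0.2], layers)
print(probs)  # class probabilities, summing to 1
```

Training adjusts the `(W, b)` pairs by backpropagating the classification loss, which is what the TensorFlow implementation automates.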
Affiliation(s)
- Shalin Savalia
- Department of Electrical Engineering, St. Mary's University, 1 Camino Santa Maria, San Antonio, TX 78228, USA.
- Vahid Emamian
- School of Science, Engineering and Technology, St. Mary's University, San Antonio, TX 78228, USA.
35
Alzantot M, Wang Y, Ren Z, Srivastava MB. RS TensorFlow: GPU Enabled TensorFlow for Deep Learning on Commodity Android Devices. MobiSys 2018; 2017:7-12. [PMID: 29629431] [DOI: 10.1145/3089801.3089805] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Indexed: 10/19/2022]
Abstract
Mobile devices have become an essential part of our daily lives. By virtue of both their increasing computing power and the recent progress made in AI, mobile devices have evolved to act as intelligent assistants in many tasks rather than a mere way of making phone calls. However, popular and commonly used tools and frameworks for machine intelligence still lack the ability to make proper use of the available heterogeneous computing resources on mobile devices. In this paper, we study the benefits of utilizing the heterogeneous (CPU and GPU) computing resources available on commodity Android devices while running deep learning models. We leveraged the heterogeneous computing framework RenderScript to accelerate the execution of deep learning models on commodity Android devices. Our system is implemented as an extension to the popular open-source framework TensorFlow. By integrating our acceleration framework tightly into TensorFlow, machine learning engineers can now easily take advantage of the heterogeneous computing resources on mobile devices without the need for any extra tools. We evaluate our system on different Android phone models to study the trade-offs of running different neural network operations on the GPU. We also compare the performance of running different model architectures, such as convolutional and recurrent neural networks, on the CPU only versus using heterogeneous computing resources. Our results show that the GPUs on these phones are capable of offering substantial performance gains in matrix multiplication; therefore, models that involve multiplication of large matrices can run much faster (approximately 3 times faster in our experiments) with GPU support.
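The benchmarking methodology here — timing the same operation under two execution strategies and reporting the ratio as the speedup — can be outlined as follows. These are pure-Python stand-ins for illustration; the paper itself compares CPU execution against RenderScript GPU kernels on Android:

```python
import time

def matmul(A, B):
    """Naive O(n^3) matrix multiplication, the baseline operation."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def time_op(fn, *args, repeats=3):
    """Best-of-N wall-clock timing to dampen system-load noise."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - t0)
    return best

n = 60
A = [[float(i + j) for j in range(n)] for i in range(n)]
B = [[float(i - j) for j in range(n)] for i in range(n)]
baseline = time_op(matmul, A, B)
# A second implementation (e.g. a GPU kernel) would be timed the same way,
# and the reported speedup is baseline / accelerated.
print(f"naive matmul on {n}x{n}: {baseline:.4f} s")
```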
Affiliation(s)
| | - Yingnan Wang
- University of California, Los Angeles, Los Angeles, CA 90095
| | - Zhengshuang Ren
- University of California, Los Angeles, Los Angeles, CA 90095
| | | |
36
Abstract
PURPOSE The Mini-Mental State Examination (MMSE) is one of the most frequently used bedside screening measures of cognition. However, the Korean Dementia Screening Questionnaire (KDSQ) is an easier and more reliable screening method. For this study, other clinical variables and raw data were also used, without considering a cutoff value. The objective of this study was to develop a machine-learning algorithm for the detection of cognitive impairment (CI) based on the KDSQ and the MMSE. PATIENTS AND METHODS The original dataset from the Clinical Research Center for Dementia of South Korea study was obtained. In total, 9,885 and 300 patients were randomly allocated to the training and test datasets, respectively. We selected up to 24 variables, including sex, age, education duration, diabetes mellitus, and hypertension. We trained a machine-learning algorithm using TensorFlow on the training dataset and then calculated its accuracy using the test dataset. The cost was calculated by conducting a logistic regression. RESULTS The accuracy of the model in predicting CI based on the KDSQ only, the MMSE only, and the combination of the KDSQ and MMSE was 84.3%, 88.3%, and 86.3%, respectively. For the KDSQ, the sensitivity for detecting CI was 91.50% and the specificity for detecting normal cognition (NL) was 59.60%. The sensitivity of the MMSE was 94.35%, and the specificity was 59.62%. When combining the KDSQ and the MMSE, the sensitivity for detecting CI was 91.5% and the specificity for detecting NL was 61.5%. CONCLUSION The algorithm predicting CI based on the MMSE is superior. However, the KDSQ can be administered more easily in clinical practice, and the algorithm using the KDSQ is a comparable screening tool.
Affiliation(s)
- Young Chul Youn
- Department of Neurology, College of Medicine, Chung-Ang University, Seoul, South Korea
- Seong Hye Choi
- Department of Neurology, Inha University College of Medicine, Incheon, South Korea
- Hae-Won Shin
- Department of Neurology, College of Medicine, Chung-Ang University, Seoul, South Korea
- Ko Woon Kim
- Department of Neurology, Chonbuk National University Medical School and Hospital, Chonbuk, South Korea
- Jae-Won Jang
- Department of Neurology, Kangwon National University Hospital, Chuncheon, South Korea
- Jason J Jung
- Department of Computer Engineering, Chung-Ang University, Seoul, South Korea
- Ging-Yuek Robin Hsiung
- Division of Neurology, Department of Medicine, University of British Columbia, Vancouver, BC, Canada
- SangYun Kim
- Department of Neurology, Seoul National University College of Medicine and Seoul National University Bundang Hospital, Seoul, South Korea