1
Luo X, Zhang J, Tan H, Jiang J, Li J, Wen W. Real-Time 3D Tracking of Multi-Particle in the Wide-Field Illumination Based on Deep Learning. Sensors (Basel) 2024; 24:2583. PMID: 38676200; PMCID: PMC11054292; DOI: 10.3390/s24082583.
Abstract
In diverse areas of research, such as force measurement with holographic optical tweezers, studies of colloidal particle motion, cell tracking, and drug delivery, localizing and analyzing particle motion is of central importance. Algorithms ranging from conventional numerical methods to deep-learning networks have substantially advanced particle orientation analysis. However, the need for large training datasets has hindered the application of deep learning to particle tracking. In this work, we present an effective method for generating synthetic datasets for this domain that remain robust and accurate when applied to real-world 3D particle-tracking data. We developed a real-time 3D particle-positioning network based on the CenterNet network. In our experiments, the network achieved a horizontal positioning error of 0.0478 μm and a z-axis positioning error of 0.1990 μm, and it can track particles of diverse sizes near the focal plane in real time with high precision. In addition, we have made all datasets generated during this work publicly accessible.
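The CenterNet-style decoding step this abstract alludes to — turning a predicted center heatmap plus per-pixel sub-pixel offsets into particle positions — can be sketched as follows. This is a toy illustration on synthetic arrays; the function, threshold, and array shapes are our own assumptions, not the authors' published code.

```python
import numpy as np

def decode_centers(heatmap, offsets, threshold=0.5):
    """Extract sub-pixel centers from a CenterNet-style output.

    heatmap: (H, W) array of center confidences in [0, 1]
    offsets: (H, W, 2) array of sub-pixel (dy, dx) corrections
    Returns a list of (y, x) centers for peaks above `threshold`.
    """
    H, W = heatmap.shape
    centers = []
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            v = heatmap[y, x]
            if v < threshold:
                continue
            # keep only local maxima in the 3x3 neighbourhood
            if v >= heatmap[y - 1:y + 2, x - 1:x + 2].max():
                dy, dx = offsets[y, x]
                centers.append((float(y + dy), float(x + dx)))
    return centers

# synthetic example: one particle whose true center is (10.3, 20.7)
hm = np.zeros((32, 32)); hm[10, 21] = 0.9
off = np.zeros((32, 32, 2)); off[10, 21] = (0.3, -0.3)
print(decode_centers(hm, off))  # one detection near (10.3, 20.7)
```

A real network would regress the heatmap and offset maps; only the decoding shown here is generic.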
Affiliation(s)
- Xiao Luo
- Department of Physics, The Hong Kong University of Science and Technology, Hong Kong 999077, China
- Jie Zhang
- Advanced Materials Thrust, The Hong Kong University of Science and Technology, Guangzhou 511400, China
- Handong Tan
- Department of Individualized Interdisciplinary Program (Advanced Materials), The Hong Kong University of Science and Technology, Hong Kong 999077, China
- Jiahao Jiang
- Advanced Materials Thrust, The Hong Kong University of Science and Technology, Guangzhou 511400, China
- Junda Li
- Advanced Materials Thrust, The Hong Kong University of Science and Technology, Guangzhou 511400, China
- Weijia Wen
- Department of Physics, The Hong Kong University of Science and Technology, Hong Kong 999077, China
- Advanced Materials Thrust, The Hong Kong University of Science and Technology, Guangzhou 511400, China
2
Li Y, Yan X, Zhang B, Wang Z, Su H, Jia Z. A Method for Detecting and Analyzing Facial Features of People with Drug Use Disorders. Diagnostics (Basel) 2021; 11:1562. PMID: 34573904; PMCID: PMC8465466; DOI: 10.3390/diagnostics11091562.
Abstract
Drug use disorders caused by illicit drug use are significant contributors to the global burden of disease, and early detection of people with drug use disorders (PDUD) is vital. However, primary care clinics and emergency departments lack simple and effective tools for screening PDUD. This study proposes a novel method to detect PDUD using facial images. Various experiments were designed to obtain a convolutional neural network (CNN) model by transfer learning on a large-scale dataset (9870 images from PDUD and 19,567 images from the general population (GP)). Our results show that the model achieved 84.68% accuracy, 87.93% sensitivity, and 83.01% specificity on this dataset. To verify its effectiveness, the model was evaluated on external datasets drawn from real scenarios, where it still achieved high performance (accuracy > 83.69%, specificity > 90.10%, sensitivity > 80.00%). Our results also show differences between PDUD and GP across facial areas: compared with GP, the facial features of PDUD were mainly concentrated in the left cheek, right cheek, and nose areas (p < 0.001), which suggests a potential relationship between the mechanisms of drug action and changes in facial tissues. This is the first study to apply a CNN model to screening for PDUD in clinical practice and the first attempt to quantitatively analyze the facial features of PDUD. The model could be quickly integrated into existing clinical workflows to provide screening capabilities.
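The accuracy, sensitivity, and specificity figures reported here follow the standard confusion-matrix definitions for a binary screening test; a minimal helper makes the relationship explicit (illustrative code, not from the paper; the example counts are made up):

```python
def screening_metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity (true-positive rate) and specificity
    (true-negative rate) from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # fraction of positives caught
    specificity = tn / (tn + fp)   # fraction of negatives cleared
    return accuracy, sensitivity, specificity

# e.g. 80 true positives, 20 misses, 90 true negatives, 10 false alarms
acc, sens, spec = screening_metrics(tp=80, fn=20, tn=90, fp=10)
print(acc, sens, spec)  # → 0.85 0.8 0.9
```

For a screening tool, sensitivity is usually the figure to protect: a missed positive (fn) costs more than a false alarm that a clinician can rule out.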
Affiliation(s)
- Yongjie Li
- School of Public Health, Peking University, Beijing 100191, China
- Xiangyu Yan
- School of Public Health, Peking University, Beijing 100191, China
- Bo Zhang
- School of Public Health, Peking University, Beijing 100191, China
- Zekun Wang
- School of Public Health, Peking University, Beijing 100191, China
- Hexuan Su
- School of Public Health, Peking University, Beijing 100191, China
- Medical Informatics Center, Peking University, Beijing 100191, China
- Zhongwei Jia
- School of Public Health, Peking University, Beijing 100191, China
- Center for Intelligent Public Health, Institute for Artificial Intelligence, Peking University, Beijing 100191, China
- Center for Drug Abuse Control and Prevention, National Institute of Health Data Science, Peking University, Beijing 100191, China
3
Barbosa ACF, Gerolamo CS, Lima AC, Angyalossy V, Pace MR. Polishing entire stems and roots using sandpaper under water: An alternative method for macroscopic analyses. Appl Plant Sci 2021; 9:e11421. PMID: 34141498; PMCID: PMC8202830; DOI: 10.1002/aps3.11421.
Abstract
PREMISE: Polishing entire stem and root samples is an effective method for studying their anatomy; however, polishing fresh samples to preserve woods with soft tissues or barks is challenging given that soft tissues shrink when dried. We propose sanding fresh or liquid-preserved samples under water as an alternative, given that it preserves all tissues in an intact and clear state.
METHODS AND RESULTS: By manually grinding the surface of the samples under water using three ascending grits of waterproof sandpaper, an excellently polished surface is obtained. The wood swarf disperses into the water without clogging the cell lumina, rendering the surfaces adequate for cell visualization and description. We show results in palms, liana stems, roots, and wood blocks.
CONCLUSIONS: Using this simple, inexpensive, rapid technique, it is possible to polish fresh, dry, or liquid-preserved woody plant samples, preserving the integrity of both the soft and hard tissues and allowing for detailed observations of the stems and roots.
Affiliation(s)
- Antonio C. F. Barbosa
- Laboratório de Madeira e Produtos Derivados, Centro de Tecnologia de Recursos Florestais, Instituto de Pesquisas Tecnológicas, Av. Prof. Almeida Prado 532, Cidade Universitária, São Paulo 05508-901, Brazil
- Caian S. Gerolamo
- Laboratório de Anatomia Vegetal, Instituto de Biociências, Departamento de Botânica, Universidade de São Paulo, Rua do Matão 277, Cidade Universitária, São Paulo 05508-090, Brazil
- André C. Lima
- Weizmann Tree Lab, Department of Plant and Environmental Sciences, Weizmann Institute of Science, Herzl Street 234, Rehovot 76100, Israel
- Veronica Angyalossy
- Laboratório de Anatomia Vegetal, Instituto de Biociências, Departamento de Botânica, Universidade de São Paulo, Rua do Matão 277, Cidade Universitária, São Paulo 05508-090, Brazil
- Marcelo R. Pace
- Departamento de Botánica, Instituto de Biología, Universidad Nacional Autónoma de México, Circuito Zona Deportiva s/n de Ciudad Universitaria, Mexico City 04510, Mexico
4
Nie P, Qu F, Lin L, He Y, Feng X, Yang L, Gao H, Zhao L, Huang L. Trace Identification and Visualization of Multiple Benzimidazole Pesticide Residues on Toona sinensis Leaves Using Terahertz Imaging Combined with Deep Learning. Int J Mol Sci 2021; 22:3425. PMID: 33810447; DOI: 10.3390/ijms22073425.
Abstract
Molecular spectroscopy has been widely used to identify pesticides. The main limitation of this approach is the difficulty of distinguishing pesticides with similar molecular structures, and when such residues occur in trace amounts and in mixed states on plants, practical identification becomes even more challenging. This study proposes a state-of-the-art method for the rapid identification of trace (10 mg·L⁻¹) and multiple similar benzimidazole pesticide residues on the surface of Toona sinensis leaves, mainly including benomyl (BNL), carbendazim (BCM), thiabendazole (TBZ), and their mixtures. The new method combines high-throughput terahertz (THz) imaging technology with a deep learning framework. To further improve model reliability beyond the THz fingerprint peaks (BNL: 0.70, 1.07, 2.20 THz; BCM: 1.16, 1.35, 2.32 THz; TBZ: 0.92, 1.24, 1.66, 1.95, 2.58 THz), we extracted the absorption spectra over 0.2–2.2 THz from the images as input to a deep convolutional neural network (DCNN). Compared with fuzzy Sammon clustering and four back-propagation neural network (BPNN) models (TrainCGB, TrainCGF, TrainCGP, and TrainRP), the DCNN achieved the highest prediction accuracies of 100%, 94.51%, 96.26%, 94.64%, 98.81%, 94.90%, 96.17%, and 96.99% for the control check group, BNL, BCM, TBZ, BNL + BCM, BNL + TBZ, BCM + TBZ, and BNL + BCM + TBZ, respectively. Taking advantage of THz imaging and the DCNN, image visualization of pesticide distribution and residue types on leaves was realized simultaneously. The results demonstrate that THz imaging and deep learning can be adopted for rapid-sensing detection of trace multi-residues on leaf surfaces, which is of great significance for agriculture and food safety.
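To see why the fingerprint peaks alone are a weak discriminator (and why the paper feeds the full 0.2–2.2 THz spectra to a DCNN instead), consider the naive rule-based alternative: declare a pesticide present when every one of its fingerprint frequencies shows strong absorption. A sketch on a synthetic spectrum, using the peak lists from the abstract; the tolerance, threshold, and Gaussian peak shapes are our own assumptions:

```python
import numpy as np

# THz fingerprint peaks quoted in the abstract (frequencies in THz)
FINGERPRINTS = {
    "BNL": [0.70, 1.07, 2.20],
    "BCM": [1.16, 1.35, 2.32],
    "TBZ": [0.92, 1.24, 1.66, 1.95, 2.58],
}

def match_fingerprints(freqs, absorbance, tol=0.05, min_height=0.5):
    """Naive rule: a pesticide 'matches' when every fingerprint
    frequency has absorbance >= min_height within tol THz."""
    matches = []
    for name, peaks in FINGERPRINTS.items():
        ok = all(
            np.max(absorbance[np.abs(freqs - p) <= tol], initial=0.0) >= min_height
            for p in peaks
        )
        if ok:
            matches.append(name)
    return matches

# synthetic spectrum with Gaussian absorption peaks only at the BCM lines
freqs = np.linspace(0.2, 3.0, 561)
spec = sum(np.exp(-((freqs - p) / 0.02) ** 2) for p in FINGERPRINTS["BCM"])
print(match_fingerprints(freqs, spec))  # → ['BCM']
```

This works on a clean single-residue spectrum, but overlapping peaks, baseline drift, and trace concentrations quickly break such thresholds — the motivation for the learned classifier.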
5
Panayides AS, Amini A, Filipovic ND, Sharma A, Tsaftaris SA, Young A, Foran D, Do N, Golemati S, Kurc T, Huang K, Nikita KS, Veasey BP, Zervakis M, Saltz JH, Pattichis CS. AI in Medical Imaging Informatics: Current Challenges and Future Directions. IEEE J Biomed Health Inform 2020; 24:1837-1857. PMID: 32609615; PMCID: PMC8580417; DOI: 10.1109/jbhi.2020.2991043.
Abstract
This paper reviews state-of-the-art research solutions across the spectrum of medical imaging informatics, discusses clinical translation, and provides future directions for advancing clinical practice. More specifically, it summarizes advances in medical image acquisition technologies for different modalities, highlighting the necessity for efficient medical data management strategies in the context of AI in big healthcare data analytics. It then provides a synopsis of contemporary and emerging algorithmic methods for disease classification and organ/tissue segmentation, focusing on AI and deep-learning architectures that have already become the de facto approach. The clinical benefits of in-silico modelling advances linked with evolving 3D reconstruction and visualization applications are further documented. In conclusion, integrative analytics approaches driven by the associated research branches highlighted in this study promise to revolutionize imaging informatics as known today across the healthcare continuum, for both radiology and digital pathology applications. These advances are projected to enable informed, more accurate diagnosis, timely prognosis, and effective treatment planning, underpinning precision medicine.
6
Wagner MG, Hatt CR, Dunkerley DAP, Bodart LE, Raval AN, Speidel MA. A dynamic model-based approach to motion and deformation tracking of prosthetic valves from biplane x-ray images. Med Phys 2018; 45:2583-2594. PMID: 29659023; DOI: 10.1002/mp.12913.
Abstract
PURPOSE Transcatheter aortic valve replacement (TAVR) is a minimally invasive procedure in which a prosthetic heart valve is placed and expanded within a defective aortic valve. The device placement is commonly performed using two-dimensional (2D) fluoroscopic imaging. Within this work, we propose a novel technique to track the motion and deformation of the prosthetic valve in three dimensions based on biplane fluoroscopic image sequences. METHODS The tracking approach uses a parameterized point cloud model of the valve stent which can undergo rigid three-dimensional (3D) transformation and different modes of expansion. Rigid elements of the model are individually rotated and translated in three dimensions to approximate the motions of the stent. Tracking is performed using an iterative 2D-3D registration procedure which estimates the model parameters by minimizing the mean-squared image values at the positions of the forward-projected model points. Additionally, an initialization technique is proposed, which locates clusters of salient features to determine the initial position and orientation of the model. RESULTS The proposed algorithms were evaluated based on simulations using a digital 4D CT phantom as well as experimentally acquired images of a prosthetic valve inside a chest phantom with anatomical background features. The target registration error was 0.12 ± 0.04 mm in the simulations and 0.64 ± 0.09 mm in the experimental data. CONCLUSIONS The proposed algorithm could be used to generate 3D visualization of the prosthetic valve from two projections. In combination with soft-tissue sensitive-imaging techniques like transesophageal echocardiography, this technique could enable 3D image guidance during TAVR procedures.
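The core of the registration objective described above — forward-project the model points, score the pose by the mean image value at the projected positions (the stent wires are dark, so a good pose scores low) — can be illustrated with a deliberately simplified sketch. Everything here is a toy: orthographic projection instead of the biplane system geometry, an exhaustive in-plane translation search instead of the paper's iterative optimization over rigid and expansion parameters.

```python
import numpy as np

def project(points3d, tx=0.0, ty=0.0):
    """Toy orthographic forward projection: drop z, translate in-plane."""
    return points3d[:, :2] + np.array([tx, ty])

def cost(image, points2d):
    """Mean image value at the projected model points; low where the
    dark stent wires lie, so registration minimizes this."""
    ij = np.clip(np.round(points2d).astype(int), 0, np.array(image.shape) - 1)
    return image[ij[:, 0], ij[:, 1]].mean()

def register(image, model3d, search=range(-5, 6)):
    """Exhaustive search for the in-plane translation minimizing cost."""
    best = min((cost(image, project(model3d, tx, ty)), tx, ty)
               for tx in search for ty in search)
    return best[1], best[2]

# synthetic test: bright background, dark 'stent' pixels shifted by (3, -2)
model = np.array([[10.0, 10.0, 0.0], [10.0, 20.0, 5.0], [20.0, 15.0, -5.0]])
img = np.ones((32, 32))
for x, y in project(model, 3, -2):
    img[int(x), int(y)] = 0.0
print(register(img, model))  # → (3, -2)
```

The real method replaces the grid search with iterative 2D-3D optimization over per-element rigid transforms and expansion modes, scored simultaneously in both biplane views.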
Affiliation(s)
- Martin G Wagner
- Department of Medical Physics, University of Wisconsin-Madison, Madison, WI, USA
- Department of Radiology, University of Wisconsin-Madison, Madison, WI, USA
- Charles R Hatt
- Department of Medical Physics, University of Wisconsin-Madison, Madison, WI, USA
- David A P Dunkerley
- Department of Medical Physics, University of Wisconsin-Madison, Madison, WI, USA
- Lindsay E Bodart
- Department of Medical Physics, University of Wisconsin-Madison, Madison, WI, USA
- Amish N Raval
- Department of Medicine, University of Wisconsin-Madison, Madison, WI, USA
- Michael A Speidel
- Department of Medical Physics, University of Wisconsin-Madison, Madison, WI, USA
- Department of Medicine, University of Wisconsin-Madison, Madison, WI, USA
7
Xin X, Clark D, Ang KC, van Rossum DB, Copper J, Xiao X, La Riviere PJ, Cheng KC. Synchrotron microCT imaging of soft tissue in juvenile zebrafish reveals retinotectal projections. Proc SPIE Int Soc Opt Eng 2017; 10060. PMID: 32733117; DOI: 10.1117/12.2267477.
Abstract
Biomedical research and clinical diagnosis would benefit greatly from full-volume determinations of anatomical phenotype. Comprehensive tools for morphological phenotyping are central to the emerging field of phenomics, which requires high-throughput, systematic, accurate, and reproducible data collection from organisms affected by genetic, disease, or environmental variables. Theoretically, complete anatomical phenotyping requires the assessment of every cell type in the whole organism, but this ideal is presently untenable due to the lack of an unbiased 3D imaging method that allows histopathological assessment of any cell type despite optical opacity. Histopathology, the current clinical standard for diagnostic phenotyping, involves the microscopic study of tissue sections to assess qualitative aspects of tissue architecture, disease mechanisms, and physiological state. However, quantitative features of tissue architecture such as cellular composition and cell counts in tissue volumes can only be approximated, due to characteristics of tissue sectioning that include incomplete sampling and the constraints of 2D imaging of 5-micron-thick tissue slabs. We have used a small vertebrate organism, the zebrafish, to test the potential of microCT for systematic macroscopic and microscopic morphological phenotyping. While cell resolution is routinely achieved with methods such as light-sheet fluorescence microscopy and optical tomography, these methods do not provide the pancellular perspective characteristic of histology and are constrained by the limited penetration of visible light through pigmented and opaque specimens, as is the case for zebrafish juveniles. Here, we provide an example of neuroanatomy that can be studied by microCT of stained soft tissue at 1.43-micron isotropic voxel resolution.
We conclude that synchrotron microCT is a form of 3D imaging that may be adopted for more reproducible, large-scale morphological phenotyping of optically opaque tissues. Further development of soft-tissue microCT and of visualization and quantitative analysis tools will enhance its utility.
Affiliation(s)
- Xuying Xin
- Department of Pathology, Penn State College of Medicine, Hershey, PA 17033, USA
- Jake Gittlen Laboratories for Cancer Research, Hershey, PA 17033, USA
- Penn State Consortium for Interdisciplinary Image Informatics and Visualization, USA
- Darin Clark
- Center for In Vivo Microscopy, Department of Radiology, Duke University Medical Center, Durham, NC 27708, USA
- Khai Chung Ang
- Department of Pathology, Penn State College of Medicine, Hershey, PA 17033, USA
- Jake Gittlen Laboratories for Cancer Research, Hershey, PA 17033, USA
- Penn State Consortium for Interdisciplinary Image Informatics and Visualization, USA
- Damian B van Rossum
- Department of Pathology, Penn State College of Medicine, Hershey, PA 17033, USA
- Jake Gittlen Laboratories for Cancer Research, Hershey, PA 17033, USA
- Penn State Consortium for Interdisciplinary Image Informatics and Visualization, USA
- Jean Copper
- Department of Pathology, Penn State College of Medicine, Hershey, PA 17033, USA
- Jake Gittlen Laboratories for Cancer Research, Hershey, PA 17033, USA
- Penn State Consortium for Interdisciplinary Image Informatics and Visualization, USA
- Xianghui Xiao
- Advanced Photon Source, Argonne National Laboratory, Argonne, IL 60439, USA
- Keith C Cheng
- Department of Pathology, Penn State College of Medicine, Hershey, PA 17033, USA
- Jake Gittlen Laboratories for Cancer Research, Hershey, PA 17033, USA
- Penn State Consortium for Interdisciplinary Image Informatics and Visualization, USA
8
Abstract
Although it is widely appreciated that cells migrate in a variety of diverse environments in vivo, we are only now beginning to use experimental workflows that yield images with sufficient spatiotemporal resolution to study the molecular processes governing cell migration in 3D environments. Since cell migration is a dynamic process, it is usually studied via microscopy, but 3D movies of 3D processes are difficult to interpret by visual inspection. In this review, we discuss the technologies required to study the diversity of 3D cell migration modes with a focus on the visualization and computational analysis tools needed to study cell migration quantitatively at a level comparable to the analyses performed today on cells crawling on flat substrates.
9
Shi L, Liu W, Zhang H, Xie Y, Wang D. A survey of GPU-based medical image computing techniques. Quant Imaging Med Surg 2012; 2:188-206. PMID: 23256080; PMCID: PMC3496509; DOI: 10.3978/j.issn.2223-4292.2012.08.02.
Abstract
Medical imaging currently plays a crucial role across clinical applications, from medical research to diagnostics and treatment planning. However, medical imaging procedures are often computationally demanding due to the large three-dimensional (3D) datasets that must be processed in practical clinical applications. With the rapidly improving performance of graphics processors, better programming support, and an excellent price-to-performance ratio, the graphics processing unit (GPU) has emerged as a competitive parallel computing platform for computationally demanding tasks in a wide range of medical imaging applications. The major purpose of this survey is to provide a comprehensive reference for newcomers and researchers involved in GPU-based medical image processing. It reviews the continuous advancement of GPU computing and surveys existing applications in three areas of medical image processing: segmentation, registration, and visualization. The potential advantages and associated challenges of current GPU-based medical imaging are also discussed to inspire future applications in medicine.
Affiliation(s)
- Lin Shi
- Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China
- CUHK Shenzhen Research Institute, Shenzhen, Guangdong Province, P.R. China
- Shenzhen Institute of Advanced Integration Technology, Chinese Academy of Sciences, Shenzhen, Guangdong Province, P.R. China
- Wen Liu
- Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China
- Heye Zhang
- Shenzhen Institute of Advanced Integration Technology, Chinese Academy of Sciences, Shenzhen, Guangdong Province, P.R. China
- Yongming Xie
- Shenzhen Institute of Advanced Integration Technology, Chinese Academy of Sciences, Shenzhen, Guangdong Province, P.R. China
- Defeng Wang
- Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China
- CUHK Shenzhen Research Institute, Shenzhen, Guangdong Province, P.R. China