1
Wang J, Ma T, Jin L, Zhu Y, Yu J, Chen F, Fu S, Xu Y. Prior Visual-Guided Self-Supervised Learning Enables Color Vignetting Correction for High-Throughput Microscopic Imaging. IEEE J Biomed Health Inform 2025; 29:2669-2682. [PMID: 39412976] [DOI: 10.1109/jbhi.2024.3471907]
Abstract
Vignetting constitutes a prevalent optical degradation that significantly compromises the quality of biomedical microscopic imaging. However, a robust and efficient vignetting correction methodology for multi-channel microscopic images remains absent. In this paper, we take advantage of prior knowledge about the homogeneity of microscopic images and the radial attenuation property of vignetting to develop a self-supervised deep learning algorithm that achieves complex vignetting removal in color microscopic images. Our proposed method, the vignetting correction lookup table (VCLUT), is trainable on both single and multiple images and employs adversarial learning to transfer the good imaging conditions of a user-defined central region of the light field to the entire image. To illustrate its effectiveness, we performed individual correction experiments on data from five distinct biological specimens. The results demonstrate that VCLUT outperforms classical methods. We further examined its performance as a multi-image-based approach on a pathological dataset, revealing its advantage over other state-of-the-art approaches in both qualitative and quantitative measurements. Moreover, it generalizes across various levels of vignetting intensity and offers ultra-fast model computation, rendering it well suited for integration into high-throughput imaging pipelines in digital microscopy.
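For readers who want to experiment with the radial-attenuation prior that VCLUT builds on, here is a minimal classical flat-field sketch (not the authors' learned method): fit a low-order polynomial gain profile over normalized radius and divide it out, anchoring the gain to the image center. Function and parameter names are illustrative.

```python
# Minimal sketch of radial vignetting correction: fit a polynomial gain g(r)
# to intensity vs. distance from the optical center and divide it out so the
# periphery is rescaled to match the trusted central region.
import numpy as np

def correct_vignetting(img: np.ndarray, poly_order: int = 4) -> np.ndarray:
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    r = r / r.max()                                   # normalized radius in [0, 1]
    gray = img.mean(axis=-1) if img.ndim == 3 else img.astype(float)
    coeffs = np.polyfit(r.ravel(), gray.ravel(), poly_order)  # radial gain profile
    gain = np.polyval(coeffs, r) / np.polyval(coeffs, 0.0)    # unit gain at center
    gain = np.clip(gain, 1e-3, None)
    return img / (gain[..., None] if img.ndim == 3 else gain)

# Example: a synthetic flat field with radial falloff is restored to ~constant.
if __name__ == "__main__":
    h, w = 256, 256
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2) / np.hypot(h / 2, w / 2)
    vignetted = 200.0 * (1.0 - 0.5 * r**2)
    flat = correct_vignetting(vignetted)
    print(flat.std() / flat.mean())                   # close to 0 after correction
```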
2
Albuquerque C, Henriques R, Castelli M. Deep learning-based object detection algorithms in medical imaging: Systematic review. Heliyon 2025; 11:e41137. [PMID: 39758372] [PMCID: PMC11699422] [DOI: 10.1016/j.heliyon.2024.e41137]
Abstract
Over the past decade, deep learning (DL) techniques have demonstrated remarkable advancements across various domains, driving their widespread adoption. In medical image analysis in particular, DL has received great attention for tasks like image segmentation, object detection, and classification. This paper provides an overview of DL-based object detection in medical images, exploring recent methods and emphasizing different imaging techniques and anatomical applications. Through a meticulous quantitative and qualitative analysis following PRISMA guidelines, we examined publications based on citation rates to explore the utilization of DL-based object detectors across imaging modalities and anatomical domains. Our findings reveal a consistent rise in the use of DL-based object detection models, indicating untapped potential in medical image analysis. Predominantly within the Medicine and Computer Science domains, research in this area is most active in the US, China, and Japan. Notably, DL-based object detection methods have attracted significant interest across diverse medical imaging modalities and anatomical domains. These methods have been applied to a range of techniques including CT scans, pathology images, and endoscopic imaging, showcasing their adaptability. Moreover, diverse anatomical applications, particularly in digital pathology and microscopy, have been explored. The analysis underscores the presence of varied datasets, often with significant discrepancies in size, a notable percentage of which are labeled as private or internal, and shows that prospective studies in this field remain scarce. Our review of existing trends in DL-based object detection in medical images offers insights for future research directions. The continuous evolution of DL algorithms highlighted in the literature underscores the dynamic nature of this field, emphasizing the need for ongoing research and optimization tailored to specific applications.
3
Zhang J, Hao F, Liu X, Yao S, Wu Y, Li M, Zheng W. Multi-scale multi-instance contrastive learning for whole slide image classification. Eng Appl Artif Intell 2024; 138:109300. [DOI: 10.1016/j.engappai.2024.109300]
4
Wang C, Choi HJ, Woodbury L, Lee K. Interpretable Fine-Grained Phenotypes of Subcellular Dynamics via Unsupervised Deep Learning. Adv Sci (Weinh) 2024; 11:e2403547. [PMID: 39239705] [PMCID: PMC11538677] [DOI: 10.1002/advs.202403547]
Abstract
Uncovering fine-grained phenotypes of live cell dynamics is pivotal for a comprehensive understanding of the heterogeneity in healthy and diseased biological processes. However, this endeavor poses significant technical challenges for unsupervised machine learning, requiring the extraction of features that not only faithfully preserve this heterogeneity but also effectively discriminate between established biological states, all while remaining interpretable. To tackle these challenges, a self-training deep learning framework designed for fine-grained and interpretable phenotyping is presented. This framework incorporates an unsupervised teacher model with interpretable features to facilitate feature learning in a student deep neural network (DNN). Significantly, an autoencoder-based regularizer is designed to encourage the student DNN to maximize the heterogeneity associated with molecular perturbations, enabling the acquisition of features with enhanced discriminatory power while preserving that heterogeneity. This study successfully delineated fine-grained phenotypes within the heterogeneous protrusion dynamics of migrating epithelial cells, revealing specific responses to pharmacological perturbations. Remarkably, the framework captured a concise set of highly interpretable features uniquely linked to these fine-grained phenotypes, each corresponding to specific temporal intervals crucial for their manifestation. This unique capability establishes it as a valuable tool for investigating diverse cellular dynamics and their heterogeneity.
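A schematic sketch of the described teacher-student setup, under the assumption of a simple fully connected student: the student matches teacher pseudo-labels while an autoencoder-style decoder regularizes its features to retain input heterogeneity. All shapes and the loss weight are illustrative, not the authors' architecture.

```python
# Sketch: student trained on teacher pseudo-labels plus an autoencoder
# reconstruction term that regularizes the learned features.
import torch
import torch.nn as nn

class Student(nn.Module):
    def __init__(self, in_dim=240, feat_dim=32, n_classes=5):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, feat_dim))
        self.classifier = nn.Linear(feat_dim, n_classes)
        self.decoder = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return self.classifier(z), self.decoder(z)

model = Student()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ce, mse, lam = nn.CrossEntropyLoss(), nn.MSELoss(), 0.1

x = torch.randn(64, 240)                      # e.g. per-cell dynamics features
teacher_labels = torch.randint(0, 5, (64,))   # pseudo-labels from the teacher

logits, recon = model(x)
loss = ce(logits, teacher_labels) + lam * mse(recon, x)  # AE-based regularizer
opt.zero_grad(); loss.backward(); opt.step()
```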
Affiliation(s)
- Chuangqi Wang
- Department of Immunology and Microbiology, University of Colorado Anschutz Medical Campus, Aurora, CO 80045, USA
- Department of Biomedical Engineering, Worcester Polytechnic Institute, Worcester, MA 01609, USA
- Hee June Choi
- Department of Biomedical Engineering, Worcester Polytechnic Institute, Worcester, MA 01609, USA
- Vascular Biology Program and Department of Surgery, Boston Children's Hospital, Harvard Medical School, Boston, MA 02115, USA
- Lucy Woodbury
- Department of Biomedical Engineering, Worcester Polytechnic Institute, Worcester, MA 01609, USA
- Department of Biomedical Engineering, University of Arkansas, Fayetteville, AR 72701, USA
- Kwonmoo Lee
- Department of Biomedical Engineering, Worcester Polytechnic Institute, Worcester, MA 01609, USA
- Vascular Biology Program and Department of Surgery, Boston Children's Hospital, Harvard Medical School, Boston, MA 02115, USA
5
Pun TB, Thapa Magar R, Koech R, Owen KJ, Adorada DL. Emerging Trends and Technologies Used for the Identification, Detection, and Characterisation of Plant-Parasitic Nematode Infestation in Crops. Plants (Basel) 2024; 13:3041. [PMID: 39519959] [PMCID: PMC11548156] [DOI: 10.3390/plants13213041]
Abstract
Accurate identification and estimation of the population densities of microscopic, soil-dwelling plant-parasitic nematodes (PPNs) are essential, as PPNs cause significant economic losses in agricultural production systems worldwide. This study presents a comprehensive review of emerging techniques used for the identification of PPNs, including morphological identification, molecular diagnostics such as polymerase chain reaction (PCR), high-throughput sequencing, metabarcoding, remote sensing, hyperspectral analysis, and image processing. Classical morphological methods require a microscope and a nematode taxonomist to identify species, which is laborious and time-consuming. Alternatively, quantitative polymerase chain reaction (qPCR) has emerged as a reliable and efficient approach for PPN identification and quantification; however, the cost of reagents and instrumentation and the careful optimisation of reaction conditions can be prohibitive. High-throughput sequencing and metabarcoding are used to study the biodiversity of all trophic groups of nematodes, not just PPNs, and are useful for describing changes in soil ecology. Convolutional neural network (CNN) methods can automate the detection and counting of PPNs from microscopic images, including complex cases like tangled nematodes. Remote sensing and hyperspectral methods offer non-invasive approaches to estimate nematode infestations, facilitating early diagnosis of plant stress caused by nematodes and rapid management of PPNs. This review provides a valuable resource for researchers, practitioners, and policymakers involved in nematology and plant protection. It highlights the importance of fast, efficient, and robust identification protocols and decision-support tools in mitigating the impact of PPNs on global agriculture and food security.
Affiliation(s)
- Top Bahadur Pun
- School of Engineering and Technology, Central Queensland University, Rockhampton, QLD 4701, Australia
- Roniya Thapa Magar
- DOE Joint Genome Institute, Lawrence Berkeley National Lab, Berkeley, CA 94720, USA
- Richard Koech
- School of Health, Medical and Applied Sciences, Central Queensland University, Bundaberg, QLD 4760, Australia
- Kirsty J. Owen
- School of Agriculture and Environmental Science, University of Southern Queensland, Toowoomba, QLD 4305, Australia
- Dante L. Adorada
- Centre for Crop Health, University of Southern Queensland, Toowoomba, QLD 4305, Australia
6
Bae Y, Byun J, Lee H, Han B. Comparative analysis of chronic progressive nephropathy (CPN) diagnosis in rat kidneys using an artificial intelligence deep learning model. Toxicol Res 2024; 40:551-559. [PMID: 39345736] [PMCID: PMC11436530] [DOI: 10.1007/s43188-024-00247-y]
Abstract
With the development of artificial intelligence (AI), technologies based on machine and deep learning are being used in many academic fields. In toxicopathology, research is actively underway to analyze whole slide image (WSI)-level images using AI deep learning models. However, few studies have been conducted on models for diagnosing complex lesions comprising multiple lesions. Therefore, this study used deep learning segmentation models (YOLOv8, Mask R-CNN, and SOLOv2) to identify three representative lesions (tubular basophilia with atrophy, mononuclear cell infiltration, and hyaline casts) of chronic progressive nephropathy of the kidney, a complex lesion observed in non-clinical rat studies, and selected an initial model appropriate for diagnosing complex lesions by analyzing the characteristics of each algorithm. Approximately 2000 images containing the three lesions were extracted from 33 WSIs of rat kidneys with chronic progressive nephropathy. Of these, 1701 images were divided into first and second rounds of learning. The loss and mAP50 values were measured twice to compare the performance of the three algorithms. Loss measurement was stopped at an appropriate epoch to prevent overfitting, and the loss value decreased in the second round based on the data learned in the first round. After measuring accuracy twice, Mask R-CNN showed the highest mAP50 for all lesions among the three models and was considered sufficient as an initial model for diagnosing complex lesions. By contrast, the YOLOv8 and SOLOv2 models showed low accuracy for all three lesions and had difficulty with the segmentation tasks. Therefore, this paper proposes Mask R-CNN as the initial model for segmenting complex lesions. More precise diagnosis is possible if the model is trained with more input data, providing greater accuracy in diagnosing pathological images.
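As a reference point for the mAP50 metric used to rank the three models, the sketch below shows its core building block: a detection counts as a true positive when its IoU with a ground-truth box is at least 0.5. Full mAP additionally sweeps confidence thresholds and averages precision over recall.

```python
# IoU matching at the mAP50 threshold; boxes given as (x1, y1, x2, y2).
def iou(box_a, box_b):
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

pred, gt = (10, 10, 50, 60), (12, 8, 55, 58)
print(iou(pred, gt) >= 0.5)  # True positive at the 0.5 IoU threshold
```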
Affiliation(s)
- Yeji Bae
- Department of Pharmaceutical Engineering, Life Health College, Hoseo University, Asan City, Republic of Korea
- Jongsu Byun
- Pathology Team, Microscopic Examination, Dt&CRO, Yongin City, Republic of Korea
- Hangyu Lee
- Program Development Team, DeepSoft, Seoul, Republic of Korea
- Beomseok Han
- Department of Pharmaceutical Engineering, Life Health College, Hoseo University, Asan City, Republic of Korea
7
Huang J, Luo Y, Guo Y, Li W, Wang Z, Liu G, Yang G. Accurate segmentation of intracellular organelle networks using low-level features and topological self-similarity. Bioinformatics 2024; 40:btae559. [PMID: 39302662] [PMCID: PMC11467052] [DOI: 10.1093/bioinformatics/btae559]
Abstract
MOTIVATION: Intracellular organelle networks (IONs) such as the endoplasmic reticulum (ER) network and the mitochondrial (MITO) network serve crucial physiological functions. The morphology of these networks plays a critical role in mediating their functions. Accurate image segmentation is required for analyzing the morphology and topology of these networks for applications such as molecular mechanism analysis and drug target screening. So far, however, progress has been hindered by their structural complexity and density. RESULTS: In this study, we first establish a rigorous performance baseline for accurate segmentation of these organelle networks from fluorescence microscopy images by optimizing a baseline U-Net model. We then develop the multi-resolution encoder (MRE) and the hierarchical fusion loss (Lhf) based on two inductive components, namely low-level features and topological self-similarity, to assist the model in better adapting to the task of segmenting IONs. Empowered by MRE and Lhf, both U-Net and Pyramid Vision Transformer (PVT) outperform competing state-of-the-art models such as U-Net++, HR-Net, nnU-Net, and TransUNet on custom datasets of the ER network and the MITO network, as well as on public datasets of another biological network, the retinal blood vessel network. In addition, integrating MRE and Lhf with models such as HR-Net and TransUNet also enhances their segmentation performance. These experimental results confirm the generalization capability and potential of our approach. Furthermore, accurate segmentation of the ER network enables analysis that provides novel insights into its dynamic morphological and topological properties. AVAILABILITY AND IMPLEMENTATION: Code and data are openly accessible at https://github.com/cbmi-group/MRE.
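The hierarchical fusion loss itself is defined in the authors' repository; the sketch below illustrates only the general multi-scale supervision pattern such losses follow, with illustrative weights and scales.

```python
# Multi-scale supervision sketch: compare predictions at several resolutions
# against correspondingly downsampled masks and sum the weighted terms.
import torch
import torch.nn.functional as F

def hierarchical_loss(preds, mask, weights=(1.0, 0.5, 0.25)):
    """preds: logits at full, 1/2 and 1/4 resolution; mask: (N, 1, H, W)."""
    total = 0.0
    for p, w in zip(preds, weights):
        m = F.interpolate(mask, size=p.shape[-2:], mode="nearest")
        total = total + w * F.binary_cross_entropy_with_logits(p, m)
    return total

mask = (torch.rand(2, 1, 64, 64) > 0.5).float()
preds = [torch.randn(2, 1, s, s) for s in (64, 32, 16)]
print(hierarchical_loss(preds, mask))
```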
Affiliation(s)
- Jiaxing Huang
- State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Yaoru Luo
- State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Yuanhao Guo
- State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Wenjing Li
- State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Zichen Wang
- State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Guole Liu
- State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Ge Yang
- State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
8
Zhu E, Li YR, Margolis S, Wang J, Wang K, Zhang Y, Wang S, Park J, Zheng C, Yang L, Chu A, Zhang Y, Gao L, Hsiai TK. Frontiers in artificial intelligence-directed light-sheet microscopy for uncovering biological phenomena and multi-organ imaging. VIEW 2024; 5:20230087. [PMID: 39478956] [PMCID: PMC11521201] [DOI: 10.1002/viw.20230087]
Abstract
Light-sheet fluorescence microscopy (LSFM) enables fast scanning of biological phenomena with deep photon penetration and minimal phototoxicity. This advancement represents a significant shift in 3-D imaging of large-scale biological tissues and 4-D (space + time) imaging of small live animals. The large data volumes associated with LSFM require efficient image acquisition and analysis using artificial intelligence (AI)/machine learning (ML) algorithms. To this end, AI/ML-directed LSFM is an emerging area for multi-organ imaging and tumor diagnostics. This review presents the development of LSFM and highlights various LSFM configurations and designs for multi-scale imaging. Optical clearing techniques are compared for effective reduction of light scattering and optimal deep-tissue imaging. The review further depicts a diverse range of research and translational applications, from small live organisms to multi-organ imaging to tumor diagnosis. In addition, it addresses AI/ML-directed imaging reconstruction, including the application of convolutional neural networks (CNNs) and generative adversarial networks (GANs). In summary, the advancements of LSFM have enabled effective and efficient post-imaging reconstruction and data analyses, underscoring LSFM's contribution to advancing fundamental and translational research.
Affiliation(s)
- Enbo Zhu
- Department of Bioengineering, UCLA, California, 90095, USA
- Division of Cardiology, Department of Medicine, David Geffen School of Medicine, UCLA, California, 90095, USA
- Department of Medicine, Greater Los Angeles VA Healthcare System, California, 90073, USA
- Department of Microbiology, Immunology & Molecular Genetics, UCLA, California, 90095, USA
- Yan-Ruide Li
- Department of Microbiology, Immunology & Molecular Genetics, UCLA, California, 90095, USA
- Samuel Margolis
- Division of Cardiology, Department of Medicine, David Geffen School of Medicine, UCLA, California, 90095, USA
- Jing Wang
- Department of Bioengineering, UCLA, California, 90095, USA
- Kaidong Wang
- Division of Cardiology, Department of Medicine, David Geffen School of Medicine, UCLA, California, 90095, USA
- Department of Medicine, Greater Los Angeles VA Healthcare System, California, 90073, USA
- Yaran Zhang
- Department of Bioengineering, UCLA, California, 90095, USA
- Shaolei Wang
- Department of Bioengineering, UCLA, California, 90095, USA
- Jongchan Park
- Department of Bioengineering, UCLA, California, 90095, USA
- Charlie Zheng
- Division of Cardiology, Department of Medicine, David Geffen School of Medicine, UCLA, California, 90095, USA
- Lili Yang
- Department of Microbiology, Immunology & Molecular Genetics, UCLA, California, 90095, USA
- Eli and Edythe Broad Center of Regenerative Medicine and Stem Cell Research, UCLA, California, 90095, USA
- Jonsson Comprehensive Cancer Center, David Geffen School of Medicine, UCLA, California, 90095, USA
- Molecular Biology Institute, UCLA, California, 90095, USA
- Alison Chu
- Division of Neonatology and Developmental Biology, Department of Pediatrics, David Geffen School of Medicine, UCLA, California, 90095, USA
- Yuhua Zhang
- Doheny Eye Institute, Department of Ophthalmology, UCLA, California, 90095, USA
- Liang Gao
- Department of Bioengineering, UCLA, California, 90095, USA
- Tzung K. Hsiai
- Department of Bioengineering, UCLA, California, 90095, USA
- Division of Cardiology, Department of Medicine, David Geffen School of Medicine, UCLA, California, 90095, USA
- Department of Medicine, Greater Los Angeles VA Healthcare System, California, 90073, USA
9
Burke MJ, Batista VS, Davis CM. Similarity Metrics for Subcellular Analysis of FRET Microscopy Videos. J Phys Chem B 2024; 128:8344-8354. [PMID: 39186078] [DOI: 10.1021/acs.jpcb.4c02859]
Abstract
Understanding the heterogeneity of molecular environments within cells is an outstanding challenge of great fundamental and technological interest. Cells are organized into specialized compartments, each with distinct functions. These compartments exhibit dynamic heterogeneity under high-resolution microscopy, which reflects fluctuations in molecular populations, concentrations, and spatial distributions. To enhance our comprehension of the spatial relationships among molecules within cells, it is crucial to analyze high-resolution microscopy images by clustering individual pixels according to their visible spatial properties and their temporal evolution. Here, we evaluate the effectiveness of similarity metrics based on their ability to facilitate fast and accurate data analysis in time and space. We discuss the capability of these metrics to differentiate subcellular localization, kinetics, and structures of protein-RNA interactions in Förster resonance energy transfer (FRET) microscopy videos, illustrated by a practical example from recent literature. Our results suggest that using the correlation similarity metric to cluster pixels of high-resolution microscopy data should improve the analysis of high-dimensional microscopy data in a wide range of applications.
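A minimal sketch of the correlation similarity metric the authors recommend: treat each pixel's temporal intensity trace as a vector, use 1 - Pearson r as the pairwise distance, and cluster. The synthetic two-class kinetics below stand in for real FRET traces.

```python
# Cluster pixel time traces by correlation distance (1 - Pearson r).
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
T, n_pixels = 100, 50
fast = np.exp(-np.linspace(0, 5, T))          # two synthetic kinetic classes
slow = np.exp(-np.linspace(0, 1, T))
traces = np.vstack([fast + 0.05 * rng.standard_normal((n_pixels // 2, T)),
                    slow + 0.05 * rng.standard_normal((n_pixels // 2, T))])

dist = pdist(traces, metric="correlation")    # 1 - Pearson r per pixel pair
labels = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")
print(labels)                                 # recovers the two kinetic classes
```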
Affiliation(s)
- Michael J Burke
- Department of Chemistry, Yale University, New Haven, Connecticut 06520, United States
- Victor S Batista
- Department of Chemistry, Yale University, New Haven, Connecticut 06520, United States
- Caitlin M Davis
- Department of Chemistry, Yale University, New Haven, Connecticut 06520, United States
10
Guetarni B, Windal F, Benhabiles H, Petit M, Dubois R, Leteurtre E, Collard D. A Vision Transformer-Based Framework for Knowledge Transfer From Multi-Modal to Mono-Modal Lymphoma Subtyping Models. IEEE J Biomed Health Inform 2024; 28:5562-5572. [PMID: 38819973] [DOI: 10.1109/jbhi.2024.3407878]
Abstract
Determining lymphoma subtypes is a crucial step toward better-targeted patient treatment that can potentially increase survival chances. In this context, the existing gold-standard diagnostic method, which relies on gene-expression technology, is highly expensive and time-consuming, making it less accessible. Although alternative diagnostic methods based on IHC (immunohistochemistry) technologies exist (recommended by the WHO), they suffer from similar limitations and are less accurate. Whole slide image (WSI) analysis using deep learning models has shown promising potential for cancer diagnosis and could offer cost-effective and faster alternatives to existing methods. In this work, we propose a vision transformer-based framework for distinguishing DLBCL (diffuse large B-cell lymphoma) cancer subtypes from high-resolution WSIs. To this end, we introduce a multi-modal architecture to train a classifier model from various WSI modalities. We then leverage this model through a knowledge distillation process to efficiently guide the learning of a mono-modal classifier. Our experimental study, conducted on a lymphoma dataset of 157 patients, shows the promising performance of our mono-modal classification model, which outperforms six recent state-of-the-art methods. In addition, the power-law curve estimated on our experimental data suggests that, with more training data from a reasonable number of additional patients, our model could achieve diagnostic accuracy competitive with IHC technologies. Furthermore, the efficiency of our framework is confirmed through an additional experimental study on an external breast cancer dataset (the BCI dataset).
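The knowledge distillation step can be sketched generically as follows: the mono-modal student matches the softened class distribution of the multi-modal teacher in addition to the hard labels. Temperature and weighting are illustrative assumptions, not the paper's settings.

```python
# Generic distillation loss: KL between softened teacher/student distributions
# plus the usual cross-entropy on hard labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

s, t = torch.randn(8, 2), torch.randn(8, 2)   # logits for 2 lymphoma subtypes
y = torch.randint(0, 2, (8,))
print(distillation_loss(s, t, y))
```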
11
Jia X, Gu H, Liu Y, Yang J, Wang X, Pan W, Zhang Y, Cotofana S, Zhao W. An Energy-Efficient Bayesian Neural Network Implementation Using Stochastic Computing Method. IEEE Trans Neural Netw Learn Syst 2024; 35:12913-12923. [PMID: 37134041] [DOI: 10.1109/tnnls.2023.3265533]
Abstract
The robustness of Bayesian neural networks (BNNs) to real-world uncertainty and incompleteness has led to their application in some safety-critical fields. However, evaluating uncertainty during BNN inference requires repeated sampling and feed-forward computation, making BNNs challenging to deploy in low-power or embedded devices. This article proposes the use of stochastic computing (SC) to optimize the hardware performance of BNN inference in terms of energy consumption and hardware utilization. The proposed approach adopts bitstreams to represent Gaussian random numbers and applies them in the inference phase. This allows the complex transformation computations of the central limit theorem-based Gaussian random number generating (CLT-based GRNG) method to be omitted and multipliers to be simplified to AND operations. Furthermore, an asynchronous parallel pipeline calculation technique is proposed in the computing block to enhance operation speed. Compared with a conventional binary radix-based BNN, the SC-based BNN (StocBNN), realized on an FPGA with 128-bit bitstreams, consumes much less energy and fewer hardware resources, with less than a 0.1% accuracy decrease on the MNIST/Fashion-MNIST datasets.
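A toy illustration of the stochastic computing idea: a value in [0, 1] is encoded as a random bitstream whose fraction of 1s equals the value, so multiplication reduces to a bitwise AND. The 128-bit length matches the configuration reported above; everything else is illustrative.

```python
# Stochastic computing in miniature: probabilities as bitstreams,
# multiplication as a bitwise AND of the streams.
import numpy as np

rng = np.random.default_rng(42)

def to_bitstream(p: float, n_bits: int = 128) -> np.ndarray:
    return (rng.random(n_bits) < p).astype(np.uint8)

a, b = 0.75, 0.5
stream_a, stream_b = to_bitstream(a), to_bitstream(b)
product_stream = stream_a & stream_b       # the multiplier becomes an AND gate
print(product_stream.mean(), "vs exact", a * b)  # stochastic estimate of 0.375
```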
12
Kidder BL. Advanced image generation for cancer using diffusion models. Biol Methods Protoc 2024; 9:bpae062. [PMID: 39258159] [PMCID: PMC11387006] [DOI: 10.1093/biomethods/bpae062]
Abstract
Deep neural networks have significantly advanced the field of medical image analysis, yet their full potential is often limited by relatively small dataset sizes. Generative modeling, particularly through diffusion models, has unlocked remarkable capabilities in synthesizing photorealistic images, thereby broadening the scope of their application in medical imaging. This study specifically investigates the use of diffusion models to generate high-quality brain MRI scans, including those depicting low-grade gliomas, as well as contrast-enhanced spectral mammography (CESM) and chest and lung X-ray images. By leveraging the DreamBooth platform, we have successfully trained stable diffusion models utilizing text prompts alongside class and instance images to generate diverse medical images. This approach not only preserves patient anonymity but also substantially mitigates the risk of patient re-identification during data exchange for research purposes. To evaluate the quality of our synthesized images, we used the Fréchet inception distance metric, demonstrating high fidelity between the synthesized and real images. Our application of diffusion models effectively captures oncology-specific attributes across different imaging modalities, establishing a robust framework that integrates artificial intelligence in the generation of oncological medical imagery.
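For reference, a compact sketch of the Fréchet inception distance used to evaluate the synthesized images, with random vectors standing in for the Inception features:

```python
# FID: fit Gaussians to real and generated feature sets, then compute the
# Fréchet distance ||mu1-mu2||^2 + Tr(C1 + C2 - 2*(C1*C2)^(1/2)).
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real: np.ndarray, feats_fake: np.ndarray) -> float:
    mu1, mu2 = feats_real.mean(0), feats_fake.mean(0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(c1 @ c2)
    if np.iscomplexobj(covmean):              # discard tiny imaginary parts
        covmean = covmean.real
    return float(((mu1 - mu2) ** 2).sum() + np.trace(c1 + c2 - 2 * covmean))

rng = np.random.default_rng(0)
real, fake = rng.normal(0, 1, (500, 64)), rng.normal(0.1, 1, (500, 64))
print(fid(real, fake))                        # lower means closer distributions
```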
Affiliation(s)
- Benjamin L Kidder
- Department of Oncology, Wayne State University School of Medicine, Detroit, MI, 48201, United States
- Karmanos Cancer Institute, Wayne State University School of Medicine, Detroit, MI, 48201, United States
13
Zhong L, Li L, Yang G. Benchmarking robustness of deep neural networks in semantic segmentation of fluorescence microscopy images. BMC Bioinformatics 2024; 25:269. [PMID: 39164632] [PMCID: PMC11334404] [DOI: 10.1186/s12859-024-05894-4]
Abstract
BACKGROUND: Fluorescence microscopy (FM) is an important and widely adopted biological imaging technique. Segmentation is often the first step in quantitative analysis of FM images. Deep neural networks (DNNs) have become the state-of-the-art tools for image segmentation. However, their performance on natural images may collapse under certain image corruptions or adversarial attacks. This poses real risks to their deployment in real-world applications. Although the robustness of DNN models in segmenting natural images has been studied extensively, their robustness in segmenting FM images remains poorly understood. RESULTS: To address this deficiency, we have developed an assay that benchmarks the robustness of DNN segmentation models using datasets of realistic synthetic 2D FM images with precisely controlled corruptions or adversarial attacks. Using this assay, we have benchmarked the robustness of ten representative models such as DeepLab and Vision Transformer. We find that models with good robustness on natural images may perform poorly on FM images. We also find new robustness properties of DNN models and new connections between their corruption robustness and adversarial robustness. To further assess the robustness of the selected models, we have also benchmarked them on real microscopy images of different modalities without simulated degradation. The results are consistent with those obtained on the realistic synthetic images, confirming the fidelity and reliability of our image synthesis method as well as the effectiveness of our assay. CONCLUSIONS: Based on comprehensive benchmarking experiments, we have found distinct robustness properties of deep neural networks in semantic segmentation of FM images. Based on these findings, we make specific recommendations on the selection and design of robust models for FM image segmentation.
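The benchmarking idea can be sketched in a few lines: apply a precisely controlled corruption at increasing severity and record how segmentation overlap with the clean-image prediction degrades. The thresholding "model" below is a placeholder for any DNN under test.

```python
# Robustness curve sketch: IoU against the clean-image prediction as a
# function of Gaussian-noise corruption severity.
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / (union + 1e-9)

def model(img: np.ndarray) -> np.ndarray:
    return img > 0.5                 # stand-in for a trained segmentation DNN

rng = np.random.default_rng(1)
clean = rng.random((128, 128))
target = model(clean)                # reference prediction on the clean image

for sigma in (0.05, 0.1, 0.2, 0.4):  # corruption severity sweep
    corrupted = clean + rng.normal(0, sigma, clean.shape)
    print(f"sigma={sigma}: IoU={iou(model(corrupted), target):.3f}")
```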
Affiliation(s)
- Liqun Zhong
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Lingrui Li
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Ge Yang
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
14
Lin Q, Tan W, Cai S, Yan B, Li J, Zhong Y. Lesion-Decoupling-Based Segmentation With Large-Scale Colon and Esophageal Datasets for Early Cancer Diagnosis. IEEE Trans Neural Netw Learn Syst 2024; 35:11142-11156. [PMID: 37028330] [DOI: 10.1109/tnnls.2023.3248804]
Abstract
Lesions of early cancers often appear flat, small, and isochromatic in medical endoscopy images, making them difficult to capture. By analyzing the differences between the internal and external features of the lesion area, we propose a lesion-decoupling-based segmentation (LDS) network for assisting early cancer diagnosis. We introduce a plug-and-play module called the self-sampling similar feature disentangling module (FDM) to obtain accurate lesion boundaries. We then propose a feature separation loss (FSL) function to separate pathological features from normal ones. Moreover, since physicians make diagnoses with multimodal data, we propose a multimodal cooperative segmentation network that takes two image modalities as input: white-light images (WLIs) and narrowband images (NBIs). Our FDM and FSL show good performance for both single-modal and multimodal segmentation. Extensive experiments on five backbones show that FDM and FSL can be easily applied to different backbones for significant improvements in lesion segmentation accuracy, with a maximum increase in mean Intersection over Union (mIoU) of 4.58. For colonoscopy, we achieve an mIoU of up to 91.49 on our Dataset A and 84.41 on the three public datasets. For esophagoscopy, the best mIoU is 64.32 on the WLI dataset and 66.31 on the NBI dataset.
15
Zhang Y, Shen L. Automatic Learning Rate Adaption for Memristive Deep Learning Systems. IEEE Trans Neural Netw Learn Syst 2024; 35:10791-10802. [PMID: 37027694] [DOI: 10.1109/tnnls.2023.3244006]
Abstract
As a possible device to further enhance the performance of hybrid complementary metal oxide semiconductor (CMOS) technology in hardware, the memristor has attracted widespread attention for implementing efficient and compact deep learning (DL) systems. In this study, an automatic learning rate tuning method for memristive DL systems is presented. Memristive devices are utilized to adjust the adaptive learning rate in deep neural networks (DNNs). The learning rate adaptation is fast at first and then becomes slow, consistent with the memristance or conductance adjustment process of the memristors. As a result, no manual tuning of learning rates is required in the adaptive back-propagation (BP) algorithm. While cycle-to-cycle and device-to-device variations can be a significant issue in memristive DL systems, the proposed method appears robust to noisy gradients, various architectures, and different datasets. Moreover, fuzzy control methods for adaptive learning are presented for pattern recognition, so that the overfitting issue is well addressed. To the best of our knowledge, this is the first memristive DL system to use an adaptive learning rate for image recognition. Another highlight of the presented memristive adaptive DL system is that a quantized neural network architecture is utilized, yielding a significant increase in training efficiency without loss of testing accuracy.
16
Feng R, Li S, Zhang Y. AI-powered microscopy image analysis for parasitology: integrating human expertise. Trends Parasitol 2024; 40:633-646. [PMID: 38824067] [DOI: 10.1016/j.pt.2024.05.005]
Abstract
Microscopy image analysis plays a pivotal role in parasitology research. Deep learning (DL), a subset of artificial intelligence (AI), has garnered significant attention. However, traditional DL-based methods for general purposes are data-driven, often lacking explainability due to their black-box nature and sparse instructional resources. To address these challenges, this article presents a comprehensive review of recent advancements in knowledge-integrated DL models tailored for microscopy image analysis in parasitology. The massive amounts of human expert knowledge from parasitologists can enhance the accuracy and explainability of AI-driven decisions. It is expected that the adoption of knowledge-integrated DL models will open up a wide range of applications in the field of parasitology.
Affiliation(s)
- Ruijun Feng
- College of Science, Harbin Institute of Technology (Shenzhen), Shenzhen, Guangdong 518055, China; School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
- Sen Li
- College of Science, Harbin Institute of Technology (Shenzhen), Shenzhen, Guangdong 518055, China
- Yang Zhang
- College of Science, Harbin Institute of Technology (Shenzhen), Shenzhen, Guangdong 518055, China
17
Tekle E, Dese K, Girma S, Adissu W, Krishnamoorthy J, Kwa T. DeepLeish: a deep learning based support system for the detection of Leishmaniasis parasite from Giemsa-stained microscope images. BMC Med Imaging 2024; 24:152. [PMID: 38890604] [PMCID: PMC11186139] [DOI: 10.1186/s12880-024-01333-1]
Abstract
BACKGROUND: Leishmaniasis is a vector-borne neglected parasitic disease caused by parasites of the genus Leishmania. Of the 30 Leishmania species, 21 cause human infection, affecting the skin and internal organs. Around 700,000 to 1,000,000 new infections and 26,000 to 65,000 deaths are reported worldwide annually. The disease exhibits three clinical presentations, namely cutaneous, muco-cutaneous, and visceral leishmaniasis, which affect the skin, mucosal membranes, and internal organs, respectively. The relapsing behavior of the disease limits the efficiency of its diagnosis and treatment. Common diagnostic approaches follow subjective, error-prone, repetitive processes. Despite an ever-pressing need for accurate detection of leishmaniasis, the research conducted so far is scarce. The main aim of the current research is therefore to develop an artificial intelligence-based detection tool for leishmaniasis from Giemsa-stained microscopic images using deep learning methods. METHODS: Stained microscopic images were acquired locally and labeled by experts. The images were augmented using different methods to prevent overfitting and improve the generalizability of the system. Fine-tuned Faster R-CNN, SSD, and YOLOv5 models were used for object detection. Mean average precision (mAP), precision, and recall were calculated to evaluate and compare the performance of the models. RESULTS: The fine-tuned YOLOv5 outperformed the other models, Faster R-CNN and SSD, with mAP scores of 73%, 54%, and 57%, respectively. CONCLUSION: The developed YOLOv5 model can be tested in clinics to assist laboratorists in diagnosing leishmaniasis from microscopic images. Particularly in low-resourced healthcare facilities with fewer qualified medical professionals or hematologists, our AI support system can help reduce diagnosis time, workload, and misdiagnosis. Furthermore, the dataset we collected will be shared with other researchers who seek to improve the detection system for the parasite. The current model detects the parasites even in the presence of monocyte cells, but the accuracy sometimes decreases due to differences in the sizes of the parasite cells relative to the blood cells. The incorporation of cascaded networks and the quantification of the parasite load in future work shall overcome the limitations of the current system.
Affiliation(s)
- Eden Tekle
- School of Biomedical Engineering, Jimma University, Jimma, Ethiopia
- Kokeb Dese
- School of Biomedical Engineering, Jimma University, Jimma, Ethiopia
- Department of Chemical and Biomedical Engineering, West Virginia University, Morgantown, WV, 26505, USA
- Selfu Girma
- Pathology Unit, Armauer Hansen Research Institute, Addis Ababa, Ethiopia
- Wondimagegn Adissu
- School of Medical Laboratory Sciences, Institute of Health, Jimma University, Jimma, Ethiopia
- Clinical Trial Unit, Jimma University, Jimma, Ethiopia
- Timothy Kwa
- School of Biomedical Engineering, Jimma University, Jimma, Ethiopia
- Medtronic MiniMed, 18000 Devonshire St., Northridge, Los Angeles, CA, USA
18
Alahmari SS, Goldgof D, Hall LO, Mouton PR. A Review of Nuclei Detection and Segmentation on Microscopy Images Using Deep Learning With Applications to Unbiased Stereology Counting. IEEE Trans Neural Netw Learn Syst 2024; 35:7458-7477. [PMID: 36327184] [DOI: 10.1109/tnnls.2022.3213407]
Abstract
The detection and segmentation of stained cells and nuclei are essential prerequisites for subsequent quantitative research for many diseases. Recently, deep learning has shown strong performance in many computer vision problems, including solutions for medical image analysis. Furthermore, accurate stereological quantification of microscopic structures in stained tissue sections plays a critical role in understanding human diseases and developing safe and effective treatments. In this article, we review the most recent deep learning approaches for cell (nuclei) detection and segmentation in cancer and Alzheimer's disease with an emphasis on deep learning approaches combined with unbiased stereology. Major challenges include accurate and reproducible cell detection and segmentation of microscopic images from stained sections. Finally, we discuss potential improvements and future trends in deep learning applied to cell detection and segmentation.
19
Ireddy ATS, Ghorabe FDE, Shishatskaya EI, Ryltseva GA, Dudaev AE, Kozodaev DA, Nosonovsky M, Skorb EV, Zun PS. Benchmarking Unsupervised Clustering Algorithms for Atomic Force Microscopy Data on Polyhydroxyalkanoate Films. ACS Omega 2024; 9:21595-21611. [PMID: 38764678] [PMCID: PMC11097174] [DOI: 10.1021/acsomega.4c02502]
Abstract
Surfaces of polyhydroxyalkanoate (PHA) films of varying monomer compositions are analyzed using atomic force microscopy (AFM) and unsupervised machine learning (ML) algorithms to investigate and classify the films based on global attributes such as scan size, film thickness, and monomer type. The experiment provides benchmarked results for 12 of the most widely used clustering algorithms via a hybrid investigation approach, while highlighting the impact of applying the Fourier transform (FT) to high-dimensional vectorized data for classification on various pools of data. Our findings indicate that the use of a one-dimensional (1D) FT of vectorized data produces the most accurate outcome. The experiment also provides insights into case-by-case investigations of algorithm performance and the impact of various data pools. Lastly, we show an early version of our tool aimed at investigating surfaces using ML approaches and discuss the results of our current experiment to guide future improvements.
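A small sketch of the best-performing recipe reported here, vectorize each scan, take the magnitude of a 1D Fourier transform, and cluster, using k-means as a representative of the benchmarked algorithms. Data shapes are illustrative assumptions.

```python
# 1D-FT features from vectorized AFM scans, clustered with k-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
scans = rng.random((40, 64, 64))                 # stand-in for 40 AFM height maps

vectors = scans.reshape(len(scans), -1)          # flatten each scan
features = np.abs(np.fft.rfft(vectors, axis=1))  # 1D FT magnitude features

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print(labels)
```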
Affiliation(s)
- Ashish T. S. Ireddy
- Infochemistry Scientific Centre, ITMO University, 9 Lomonosova St., 191002 St. Petersburg, Russia
- Fares D. E. Ghorabe
- Infochemistry Scientific Centre, ITMO University, 9 Lomonosova St., 191002 St. Petersburg, Russia
- Galina A. Ryltseva
- Siberian Federal University, 79 Svobodnyi Av., 660041 Krasnoyarsk, Russia
- Alexey E. Dudaev
- Siberian Federal University, 79 Svobodnyi Av., 660041 Krasnoyarsk, Russia
- Michael Nosonovsky
- Infochemistry Scientific Centre, ITMO University, 9 Lomonosova St., 191002 St. Petersburg, Russia
- University of Wisconsin-Milwaukee, Milwaukee, Wisconsin 53217, United States
- Ekaterina V. Skorb
- Infochemistry Scientific Centre, ITMO University, 9 Lomonosova St., 191002 St. Petersburg, Russia
- Pavel S. Zun
- Infochemistry Scientific Centre, ITMO University, 9 Lomonosova St., 191002 St. Petersburg, Russia
20
Huang Z, Wang L, Xu L. DRA-Net: Medical image segmentation based on adaptive feature extraction and region-level information fusion. Sci Rep 2024; 14:9714. [PMID: 38678063] [PMCID: PMC11584768] [DOI: 10.1038/s41598-024-60475-y]
Abstract
Medical image segmentation is a key task in computer-aided diagnosis. In recent years, convolutional neural networks (CNNs) have made notable achievements in medical image segmentation. However, the convolution operation can only extract features in a fixed-size region at a time, which leads to the loss of some key features. The recently popular Transformer has global modeling capabilities, but it does not pay enough attention to local information and cannot accurately segment the edge details of the target area. Given these issues, we propose the dynamic regional attention network (DRA-Net). Unlike the above methods, it first measures the similarity of features and concentrates attention on different dynamic regions. In this way, the network can adaptively select different modeling scopes for feature extraction, reducing information loss. Regional feature interaction is then carried out to better learn local edge details. At the same time, we design ordered-shift multilayer perceptron (MLP) blocks to enhance communication within different regions, further strengthening the network's ability to learn local edge details. Experimental results indicate that our network produces more accurate segmentation than other CNN- and Transformer-based networks.
Affiliation(s)
- Zhongmiao Huang
- School of Computer Science and Technology, Xinjiang University, Urumqi, 830046, China
- Liejun Wang
- School of Computer Science and Technology, Xinjiang University, Urumqi, 830046, China
- Lianghui Xu
- School of Computer Science and Technology, Xinjiang University, Urumqi, 830046, China
21
Ferreira EKGD, Silveira GF. Classification and counting of cells in brightfield microscopy images: an application of convolutional neural networks. Sci Rep 2024; 14:9031. [PMID: 38641688] [PMCID: PMC11031575] [DOI: 10.1038/s41598-024-59625-z]
Abstract
Microscopy is integral to medical research, facilitating the exploration of various biological questions, notably cell quantification. However, this process is time-consuming and error-prone, whether performed by human intervention or by automated methods usually applied to fluorescent images. In response, machine learning algorithms have been integrated into microscopy, automating tasks and constructing predictive models from vast datasets. These models adeptly learn representations for object detection, image segmentation, and target classification. An advantageous strategy involves utilizing unstained images, preserving cell integrity and enabling morphology-based classification, something hindered when fluorescent markers are used. The aim is to introduce a model proficient in classifying distinct cell lineages in digital contrast microscopy images, and to create a predictive model that identifies lineage and determines optimal cell-number quantification. Employing a CNN machine learning algorithm, a classification model predicting cellular lineage achieved a remarkable accuracy of 93%, with ROC curve results nearing 1.0, showcasing robust performance. However, some lineages, namely SH-SY5Y (78%), HUH7_mayv (85%), and A549 (88%), exhibited slightly lower accuracies. These outcomes not only underscore the model's quality but also emphasize CNNs' potential in addressing the inherent complexities of microscopic images.
Affiliation(s)
- G F Silveira
- Carlos Chagas Institute, Curitiba, PR, CEP 81310-020, Brazil
22
Weißer C, Netzer N, Görtz M, Schütz V, Hielscher T, Schwab C, Hohenfellner M, Schlemmer HP, Maier-Hein KH, Bonekamp D. Weakly Supervised MRI Slice-Level Deep Learning Classification of Prostate Cancer Approximates Full Voxel- and Slice-Level Annotation: Effect of Increasing Training Set Size. J Magn Reson Imaging 2024; 59:1409-1422. [PMID: 37504495] [DOI: 10.1002/jmri.28891]
Abstract
BACKGROUND: Weakly supervised learning promises reduced annotation effort while maintaining performance. PURPOSE: To compare weakly supervised training with fully slice-wise annotated training of a deep convolutional classification network (CNN) for prostate cancer (PC). STUDY TYPE: Retrospective. SUBJECTS: One thousand four hundred eighty-nine consecutive institutional prostate MRI examinations from men with suspicion for PC (65 ± 8 years) between January 2015 and November 2020, split into a training set (N = 794, enriched with 204 PROSTATEx examinations) and a test set (N = 695). FIELD STRENGTH/SEQUENCE: 1.5 and 3 T; T2-weighted turbo-spin-echo and diffusion-weighted echo-planar imaging. ASSESSMENT: Histopathological ground truth was provided by targeted and extended systematic biopsy. Reference training was performed using slice-level annotation (SLA) and compared to iterative training utilizing patient-level annotations (PLAs) with supervised feedback of CNN estimates into the next training iteration at three incremental training set sizes (N = 200, 500, 998). Model performance was assessed by comparing specificity at a fixed sensitivity of 0.97 [254/262], emulating PI-RADS ≥ 3 decisions, and 0.88-0.90 [231-236/262], emulating PI-RADS ≥ 4 decisions. STATISTICAL TESTS: Receiver operating characteristic (ROC) area under the curve (AUC) was compared using the DeLong and Obuchowski tests. Sensitivity and specificity were compared using the McNemar test. The statistical significance threshold was P = 0.05. RESULTS: Test set (N = 695) ROC-AUC performance of SLA (trained with 200/500/998 exams) was 0.75/0.80/0.83, respectively. PLA achieved a lower ROC-AUC of 0.64/0.72/0.78. Both increased performance significantly with increasing training set size. ROC-AUC for SLA at 500 exams was comparable to PLA at 998 exams (P = 0.28). ROC-AUC differed significantly between SLA and PLA at the same training set sizes; however, the difference decreased significantly from 200 to 998 training exams. Emulating PI-RADS ≥ 3 decisions, the difference between PLA specificity of 0.12 [51/433] and SLA specificity of 0.13 [55/433] became undetectable (P = 1.0) at 998 exams. Emulating PI-RADS ≥ 4 decisions, at 998 exams, SLA specificity of 0.51 [221/433] remained higher than PLA specificity of 0.39 [170/433]. However, PLA specificity at 998 exams became comparable to SLA specificity of 0.37 [159/433] at 200 exams (P = 0.70). DATA CONCLUSION: Weakly supervised training of a classification CNN using patient-level-only annotation had lower performance than training with slice-wise annotations but improved significantly faster with additional training data. EVIDENCE LEVEL: 3. TECHNICAL EFFICACY: Stage 2.
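The evaluation protocol, specificity at a fixed sensitivity emulating PI-RADS decisions, can be sketched as follows; scores and labels below are synthetic stand-ins for CNN outputs and biopsy ground truth.

```python
# Pick the decision threshold that fixes sensitivity at a target level,
# then report the specificity achieved at that threshold.
import numpy as np

def specificity_at_sensitivity(scores, labels, target_sens=0.97):
    pos_scores = np.sort(scores[labels == 1])
    # Threshold below which at most (1 - target_sens) of positives fall.
    k = int(np.floor((1 - target_sens) * len(pos_scores)))
    thresh = pos_scores[k]
    neg = scores[labels == 0]
    return float((neg < thresh).mean()), float(thresh)

rng = np.random.default_rng(0)
labels = np.concatenate([np.ones(262), np.zeros(433)]).astype(int)
scores = np.concatenate([rng.normal(1.0, 1.0, 262), rng.normal(0.0, 1.0, 433)])
spec, thr = specificity_at_sensitivity(scores, labels, 0.97)
print(f"specificity {spec:.2f} at threshold {thr:.2f}")
```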
Affiliation(s)
- Cedric Weißer
- Division of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Heidelberg University Medical School, Heidelberg, Germany
- Nils Netzer
- Division of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Heidelberg University Medical School, Heidelberg, Germany
- Magdalena Görtz
- Department of Urology, University of Heidelberg Medical Center, Heidelberg, Germany
- Junior Clinical Cooperation Unit, Multiparametric Methods for Early Detection of Prostate Cancer, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Viktoria Schütz
- Department of Urology, University of Heidelberg Medical Center, Heidelberg, Germany
- Thomas Hielscher
- Division of Biostatistics, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Constantin Schwab
- Institute of Pathology, University of Heidelberg Medical Center, Heidelberg, Germany
- Markus Hohenfellner
- Department of Urology, University of Heidelberg Medical Center, Heidelberg, Germany
- Heinz-Peter Schlemmer
- Division of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- National Center for Tumor Diseases (NCT) Heidelberg, Heidelberg, Germany
- German Cancer Consortium (DKTK), Germany
- Klaus H Maier-Hein
- National Center for Tumor Diseases (NCT) Heidelberg, Heidelberg, Germany
- Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
- David Bonekamp
- Division of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Heidelberg University Medical School, Heidelberg, Germany
- National Center for Tumor Diseases (NCT) Heidelberg, Heidelberg, Germany
- German Cancer Consortium (DKTK), Germany
23
Roberts EJ, Chavez T, Hexemer A, Zwart PH. DLSIA: Deep Learning for Scientific Image Analysis. J Appl Crystallogr 2024; 57:392-402. [PMID: 38596727] [PMCID: PMC11001410] [DOI: 10.1107/s1600576724001390]
Abstract
DLSIA (Deep Learning for Scientific Image Analysis) is a Python-based machine learning library that provides scientists and researchers across diverse scientific domains with a range of customizable convolutional neural network (CNN) architectures for a wide variety of image analysis tasks used in downstream data processing. DLSIA features easy-to-use architectures such as autoencoders, tunable U-Nets and parameter-lean mixed-scale dense networks (MSDNets). Additionally, this article introduces sparse mixed-scale networks (SMSNets), generated using random graphs, sparse connections and dilated convolutions connecting different length scales. For verification, several DLSIA-instantiated networks and training scripts are employed in multiple applications, including inpainting of X-ray scattering data using U-Nets and MSDNets, segmenting 3D fibers in X-ray tomographic reconstructions of concrete using an ensemble of SMSNets, and leveraging autoencoder latent spaces for data compression and clustering. As experimental data continue to grow in scale and complexity, DLSIA provides accessible CNN construction and abstracts away CNN complexity, allowing scientists to tailor their machine learning approaches, accelerate discoveries, foster interdisciplinary collaboration and advance research in scientific image analysis.
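For readers unfamiliar with mixed-scale dense networks, the sketch below shows the core idea (dense connectivity plus cycling dilation rates) in plain PyTorch. It deliberately does not use DLSIA's actual API; the layer width and depth are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn

class TinyMSDNet(nn.Module):
    """Minimal mixed-scale dense network sketch: every layer sees all
    previous feature maps and applies one dilated 3x3 convolution."""
    def __init__(self, in_ch=1, out_ch=1, width=1, depth=8):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for i in range(depth):
            dilation = (i % 4) + 1          # cycle dilations 1..4
            self.layers.append(nn.Conv2d(ch, width, 3,
                                         padding=dilation, dilation=dilation))
            ch += width                     # dense connectivity grows channels
        self.final = nn.Conv2d(ch, out_ch, 1)

    def forward(self, x):
        feats = [x]
        for conv in self.layers:
            feats.append(torch.relu(conv(torch.cat(feats, dim=1))))
        return self.final(torch.cat(feats, dim=1))

net = TinyMSDNet()
print(net(torch.randn(1, 1, 64, 64)).shape)  # -> torch.Size([1, 1, 64, 64])
```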
Affiliation(s)
- Eric J. Roberts
- Center for Advanced Mathematics for Energy Research Applications, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
- Molecular Biophysics and Integrated Bioimaging Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
- Tanny Chavez
- Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
- Alexander Hexemer
- Center for Advanced Mathematics for Energy Research Applications, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
- Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
- Petrus H. Zwart
- Center for Advanced Mathematics for Energy Research Applications, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
- Molecular Biophysics and Integrated Bioimaging Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
- Berkeley Synchrotron Infrared Structural Biology Program, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
24
Ounissi M, Latouche M, Racoceanu D. PhagoStat a scalable and interpretable end to end framework for efficient quantification of cell phagocytosis in neurodegenerative disease studies. Sci Rep 2024; 14:6482. [PMID: 38499658 PMCID: PMC10948879 DOI: 10.1038/s41598-024-56081-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2023] [Accepted: 03/01/2024] [Indexed: 03/20/2024] Open
Abstract
Quantifying the phagocytosis of dynamic, unstained cells is essential for evaluating neurodegenerative diseases. However, measuring rapid cell interactions and distinguishing cells from background make this task very challenging when processing time-lapse phase-contrast video microscopy. In this study, we introduce an end-to-end, scalable, and versatile real-time framework for quantifying and analyzing phagocytic activity. Our proposed pipeline is able to process large datasets and includes a data quality verification module to counteract potential perturbations such as microscope movements and frame blurring. We also propose an explainable cell segmentation module to improve the interpretability of deep learning methods compared to black-box algorithms. This includes two interpretable deep learning capabilities: visual explanation and model simplification. We demonstrate that interpretability in deep learning is not at odds with high performance, and additionally provide essential deep learning algorithm optimization insights and solutions. Moreover, incorporating interpretable modules results in an efficient architecture design and optimized execution time. We apply this pipeline to quantify and analyze microglial cell phagocytosis in frontotemporal dementia (FTD) and obtain statistically reliable results showing that FTD mutant cells are larger and more aggressive than control cells. The method has been tested and validated on several public benchmarks, achieving state-of-the-art performance. To stimulate translational approaches and future studies, we release an open-source end-to-end pipeline and a unique microglial cell phagocytosis dataset for immune system characterization in neurodegenerative disease research. This pipeline and the associated dataset should help consolidate future advances in this field, promoting the development of efficient and effective interpretable algorithms dedicated to the critical domain of neurodegenerative disease characterization. https://github.com/ounissimehdi/PhagoStat
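A data quality verification module of the kind described above can be approximated with two classical checks: blur detection via Laplacian variance and drift detection via phase correlation. This is a generic sketch with hypothetical thresholds, not the PhagoStat implementation; it expects single-channel frames.

```python
import cv2
import numpy as np

def check_frame_quality(prev: np.ndarray, curr: np.ndarray,
                        blur_thresh: float = 100.0,
                        shift_thresh: float = 5.0) -> dict:
    """Flag blurred frames (low variance of the Laplacian, a common
    sharpness proxy) and sudden microscope movements (large
    phase-correlation shift between consecutive frames)."""
    sharpness = cv2.Laplacian(curr, cv2.CV_64F).var()
    (dx, dy), _ = cv2.phaseCorrelate(prev.astype(np.float32),
                                     curr.astype(np.float32))
    return {"blurry": sharpness < blur_thresh,
            "moved": float(np.hypot(dx, dy)) > shift_thresh}

# Synthetic illustration: a frame and a laterally shifted copy.
rng = np.random.default_rng(1)
a = rng.random((128, 128)).astype(np.float32)
b = np.roll(a, 8, axis=1)                    # simulate a stage jump
print(check_frame_quality(a, b))
```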
Affiliation(s)
- Mehdi Ounissi
- CNRS, Inserm, AP-HP, Inria, Paris Brain Institute-ICM, Sorbonne University, 75013, Paris, France
- Morwena Latouche
- Inserm, CNRS, AP-HP, Institut du Cerveau, ICM, Sorbonne Université, 75013, Paris, France
- PSL Research university, EPHE, Paris, France
- Daniel Racoceanu
- CNRS, Inserm, AP-HP, Inria, Paris Brain Institute-ICM, Sorbonne University, 75013, Paris, France.
25
Wu W, Liao X, Wang L, Chen S, Zhuang J, Zheng Q. Rapid scanning method for SICM based on autoencoder network. Micron 2024; 177:103579. [PMID: 38154409 DOI: 10.1016/j.micron.2023.103579] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2023] [Revised: 11/26/2023] [Accepted: 12/11/2023] [Indexed: 12/30/2023]
Abstract
Scanning ion conductance microscopy (SICM) enables non-destructive imaging of living cells, which makes it highly valuable in the life sciences, medicine, pharmacology, and many other fields. However, because of the uncertain retrace height of the SICM hopping mode, the time resolution of SICM is relatively low, so the device fails to meet the demands of dynamic scanning. To address these issues, we propose a fast-scanning method for SICM based on an autoencoder network. First, we cut under-sampled images into lists of small sub-images. Second, we feed them into a self-constructed primitive-autoencoder super-resolution network to compute high-resolution images. Finally, the inferred scanning path is determined from the computed images to reconstruct the real high-resolution scanning path. The results demonstrate that the proposed network can reconstruct higher-resolution images across various super-resolution tasks on low-resolution scanned images. Compared to existing traditional interpolation methods, the average peak signal-to-noise ratio improvement is greater than 7.5823 dB, and the average structural similarity index improvement is greater than 0.2372. At the same time, using the proposed method for high-resolution image scanning leads to a 156.25% speed improvement compared to traditional methods. This opens up the possibility of high temporal-resolution imaging of dynamic samples in SICM and further promotes the widespread application of SICM in the future.
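The super-resolution stage can be pictured as a small convolutional autoencoder that maps an under-sampled scan patch to a twice-denser one. The PyTorch sketch below shows only the shape bookkeeping; it is not the authors' primitive-autoencoder network, and all sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ScanSRAutoencoder(nn.Module):
    """Encoder compresses an under-sampled scan patch; the decoder
    upsamples 2x to approximate a densely sampled topography patch."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ScanSRAutoencoder()
low_res = torch.randn(8, 1, 32, 32)          # batch of under-sampled patches
out = model(low_res)
print(out.shape)                              # -> torch.Size([8, 1, 64, 64])
loss = nn.functional.mse_loss(out, torch.randn(8, 1, 64, 64))
loss.backward()                               # trains against dense scans
```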
Affiliation(s)
- Wenlin Wu
- Key Laboratory of Testing Technology for Manufacturing Process, Ministry of Education, Southwest University of Science and Technology, Mianyang 621010, China
- Xiaobo Liao
- Key Laboratory of Testing Technology for Manufacturing Process, Ministry of Education, Southwest University of Science and Technology, Mianyang 621010, China.
- Lei Wang
- Key Laboratory of Testing Technology for Manufacturing Process, Ministry of Education, Southwest University of Science and Technology, Mianyang 621010, China
- Siyu Chen
- Key Laboratory of Testing Technology for Manufacturing Process, Ministry of Education, Southwest University of Science and Technology, Mianyang 621010, China
- Jian Zhuang
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an 710049, China
- Qiangqiang Zheng
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an 710049, China
26
Yang X, Chin BB, Silosky M, Wehrend J, Litwiller DV, Ghosh D, Xing F. Learning Without Real Data Annotations to Detect Hepatic Lesions in PET Images. IEEE Trans Biomed Eng 2024; 71:679-688. [PMID: 37708016 DOI: 10.1109/tbme.2023.3315268] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/16/2023]
Abstract
OBJECTIVE Deep neural networks have recently been applied to lesion identification in fluorodeoxyglucose (FDG) positron emission tomography (PET) images, but they typically rely on a large amount of well-annotated data for model training. This is extremely difficult to achieve for neuroendocrine tumors (NETs), because of the low incidence of NETs and the expense of lesion annotation in PET images. The objective of this study is to design a novel, adaptable deep learning method, which uses no real lesion annotations but instead low-cost, list-mode simulated data, for hepatic lesion detection in real-world clinical NET PET images. METHODS We first propose a region-guided generative adversarial network (RG-GAN) for lesion-preserved image-to-image translation. Then, we design a specific data augmentation module for our list-mode simulated data and incorporate this module into the RG-GAN to improve model training. Finally, we combine the RG-GAN, the data augmentation module and a lesion detection neural network into a unified framework for joint-task learning to adaptively identify lesions in real-world PET data. RESULTS The proposed method outperforms recent state-of-the-art lesion detection methods on real clinical 68Ga-DOTATATE PET images, and produces very competitive performance with the target model that is trained with real lesion annotations. CONCLUSION With RG-GAN modeling and specific data augmentation, we can obtain good lesion detection performance without using any real data annotations. SIGNIFICANCE This study introduces an adaptable deep learning method for hepatic lesion identification in NETs, which can significantly reduce human effort for data annotation and improve model generalizability for lesion detection with PET imaging.
27
Juez-Castillo G, Valencia-Vidal B, Orrego LM, Cabello-Donayre M, Montosa-Hidalgo L, Pérez-Victoria JM. FiCRoN, a deep learning-based algorithm for the automatic determination of intracellular parasite burden from fluorescence microscopy images. Med Image Anal 2024; 91:103036. [PMID: 38016388 DOI: 10.1016/j.media.2023.103036] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2023] [Revised: 06/27/2023] [Accepted: 11/13/2023] [Indexed: 11/30/2023]
Abstract
Protozoan parasites are responsible for devastating, neglected diseases. The automatic determination of intracellular parasite burden from fluorescence microscopy images is a challenging problem. Recent advances in deep learning are transforming this process; however, high-performance algorithms have not yet been developed. The limitations in image acquisition, especially for intracellular parasites, make this process complex. For this reason, traditional image-processing methods are not easily transferred between different datasets, and segmentation-based strategies do not achieve high performance. Here, we propose FiCRoN, a novel method based on fully convolutional regression networks (FCRNs), as a promising new tool for estimating intracellular parasite burden. This estimation requires three values: intracellular parasites, infected cells and uninfected cells. FiCRoN solves this problem as multi-task learning: counting by regression at two scales, a smaller one for intracellular parasites and a larger one for host cells. It does not use segmentation or detection, resulting in better generalization of counting tasks and, therefore, a decrease in error propagation. Linear regression reveals an excellent correlation coefficient between manual and automatic methods. FiCRoN is innovative, freedom-respecting image analysis software based on deep learning, designed to provide fast and accurate quantification of parasite burden, and is also potentially useful as a single-cell counter.
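Counting by regression, the principle behind FCRNs, can be summarized in a few lines: the network outputs a non-negative density map whose spatial integral is the count estimate. The sketch below collapses FiCRoN's two-scale multi-task design into a single toy head; all layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TinyFCRN(nn.Module):
    """Fully convolutional regression sketch: the network predicts a
    density map whose integral approximates the object count."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1))

    def forward(self, x):
        density = torch.relu(self.body(x))   # non-negative density map
        count = density.sum(dim=(1, 2, 3))   # per-image count estimate
        return density, count

net = TinyFCRN()
_, counts = net(torch.randn(2, 3, 64, 64))
print(counts.shape)                          # -> torch.Size([2])
```

Training would regress the density map against Gaussian-blurred point annotations, so no segmentation masks are required.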
Affiliation(s)
- Graciela Juez-Castillo
- Instituto de Parasitología y Biomedicina "López-Neyra", Consejo Superior de Investigaciones Cientìficas, (IPBLN-CSIC), PTS Granada, 18016 Granada, Spain; Research Group Osiris&Bioaxis, Faculty of Engineering, El Bosque University, 110121 Bogotá, Colombia
- Brayan Valencia-Vidal
- Research Group Osiris&Bioaxis, Faculty of Engineering, El Bosque University, 110121 Bogotá, Colombia; Department of Computer Engineering, Automation and Robotics, Research Centre for Information and Communication Technologies, University of Granada, 18014 Granada, Spain.
- Lina M Orrego
- Instituto de Parasitología y Biomedicina "López-Neyra", Consejo Superior de Investigaciones Cientìficas, (IPBLN-CSIC), PTS Granada, 18016 Granada, Spain
- María Cabello-Donayre
- Instituto de Parasitología y Biomedicina "López-Neyra", Consejo Superior de Investigaciones Cientìficas, (IPBLN-CSIC), PTS Granada, 18016 Granada, Spain; Universidad Internacional de la Rioja, 26006 La Rioja, Spain
- Laura Montosa-Hidalgo
- Instituto de Parasitología y Biomedicina "López-Neyra", Consejo Superior de Investigaciones Cientìficas, (IPBLN-CSIC), PTS Granada, 18016 Granada, Spain
- José M Pérez-Victoria
- Instituto de Parasitología y Biomedicina "López-Neyra", Consejo Superior de Investigaciones Cientìficas, (IPBLN-CSIC), PTS Granada, 18016 Granada, Spain.
28
Xing F, Silosky M, Ghosh D, Chin BB. Location-Aware Encoding for Lesion Detection in 68Ga-DOTATATE Positron Emission Tomography Images. IEEE Trans Biomed Eng 2024; 71:247-257. [PMID: 37471190 DOI: 10.1109/tbme.2023.3297249] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/22/2023]
Abstract
OBJECTIVE Lesion detection with positron emission tomography (PET) imaging is critical for tumor staging, treatment planning, and advancing novel therapies to improve patient outcomes, especially for neuroendocrine tumors (NETs). Current lesion detection methods often require manual cropping of regions/volumes of interest (ROIs/VOIs) a priori, or rely on multi-stage, cascaded models, or use multi-modality imaging to detect lesions in PET images. This leads to significant inefficiency, high variability and/or potential accumulative errors in lesion quantification. To tackle this issue, we propose a novel single-stage lesion detection method using only PET images. METHODS We design and incorporate a new, plug-and-play codebook learning module into a U-Net-like neural network and promote lesion location-specific feature learning at multiple scales. We explicitly regularize the codebook learning with direct supervision at the network's multi-level hidden layers and enforce the network to learn multi-scale discriminative features with respect to predicting lesion positions. The network automatically combines the predictions from the codebook learning module and other layers via a learnable fusion layer. RESULTS We evaluate the proposed method on a real-world clinical 68Ga-DOTATATE PET image dataset, and our method produces significantly better lesion detection performance than recent state-of-the-art approaches. CONCLUSION We present a novel deep learning method for single-stage lesion detection in PET imaging data, with no ROI/VOI cropping in advance, no multi-stage modeling and no multi-modality data. SIGNIFICANCE This study provides a new perspective for effective and efficient lesion identification in PET, potentially accelerating novel therapeutic regimen development for NETs and ultimately improving patient outcomes including survival.
29
Chen R, Liu M, Chen W, Wang Y, Meijering E. Deep learning in mesoscale brain image analysis: A review. Comput Biol Med 2023; 167:107617. [PMID: 37918261 DOI: 10.1016/j.compbiomed.2023.107617] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2023] [Revised: 10/06/2023] [Accepted: 10/23/2023] [Indexed: 11/04/2023]
Abstract
Mesoscale microscopy images of the brain contain a wealth of information which can help us understand the working mechanisms of the brain. However, it is a challenging task to process and analyze these data because of the large size of the images, their high noise levels, the complex morphology of the brain from the cellular to the regional and anatomical levels, the inhomogeneous distribution of fluorescent labels in the cells and tissues, and imaging artifacts. Due to their impressive ability to extract relevant information from images, deep learning algorithms are widely applied to microscopy images of the brain to address these challenges and they perform superiorly in a wide range of microscopy image processing and analysis tasks. This article reviews the applications of deep learning algorithms in brain mesoscale microscopy image processing and analysis, including image synthesis, image segmentation, object detection, and neuron reconstruction and analysis. We also discuss the difficulties of each task and possible directions for further research.
Affiliation(s)
- Runze Chen
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Min Liu
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China; Research Institute of Hunan University in Chongqing, Chongqing, 401135, China.
- Weixun Chen
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Yaonan Wang
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Erik Meijering
- School of Computer Science and Engineering, University of New South Wales, Sydney 2052, New South Wales, Australia
30
Xing F, Yang X, Cornish TC, Ghosh D. Learning with limited target data to detect cells in cross-modality images. Med Image Anal 2023; 90:102969. [PMID: 37802010 DOI: 10.1016/j.media.2023.102969] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2023] [Revised: 08/16/2023] [Accepted: 09/11/2023] [Indexed: 10/08/2023]
Abstract
Deep neural networks have achieved excellent cell or nucleus quantification performance in microscopy images, but they often suffer from performance degradation when applied to cross-modality imaging data. Unsupervised domain adaptation (UDA) based on generative adversarial networks (GANs) has recently improved the performance of cross-modality medical image quantification. However, current GAN-based UDA methods typically require abundant target data for model training, which is often very expensive or even impossible to obtain for real applications. In this paper, we study a more realistic yet challenging UDA situation, where (unlabeled) target training data is limited and previous work seldom delves into cell identification. We first enhance a dual GAN with task-specific modeling, which provides additional supervision signals to assist with generator learning. We explore both single-directional and bidirectional task-augmented GANs for domain adaptation. Then, we further improve the GAN by introducing a differentiable, stochastic data augmentation module to explicitly reduce discriminator overfitting. We examine source-, target-, and dual-domain data augmentation for GAN enhancement, as well as joint task and data augmentation in a unified GAN-based UDA framework. We evaluate the framework for cell detection on multiple public and in-house microscopy image datasets, which are acquired with different imaging modalities, staining protocols and/or tissue preparations. The experiments demonstrate that our method significantly boosts performance when compared with the reference baseline, and it is superior to or on par with fully supervised models that are trained with real target annotations. In addition, our method outperforms recent state-of-the-art UDA approaches by a large margin on different datasets.
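A differentiable, stochastic augmentation module of the kind mentioned above can be built entirely from differentiable tensor operations, so the same randomized transform can sit between the generator and the discriminator without blocking gradients. The sketch below (random brightness and translation, in the spirit of DiffAugment) is an illustration under those assumptions, not the paper's module.

```python
import torch

def diff_augment(x: torch.Tensor) -> torch.Tensor:
    """Differentiable stochastic augmentation: random brightness shift
    and random translation, applied inside the GAN graph so gradients
    flow back to the generator."""
    # Random per-sample brightness (differentiable w.r.t. x).
    x = x + (torch.rand(x.size(0), 1, 1, 1, device=x.device) - 0.5)
    # Random integer translation via reflect padding + roll + crop.
    pad = 4
    dx, dy = torch.randint(-pad, pad + 1, (2,))
    x = torch.nn.functional.pad(x, [pad] * 4, mode="reflect")
    x = torch.roll(x, shifts=(int(dy), int(dx)), dims=(2, 3))
    return x[:, :, pad:-pad, pad:-pad]

imgs = torch.randn(4, 1, 64, 64, requires_grad=True)
diff_augment(imgs).sum().backward()          # gradients reach the input
print(imgs.grad is not None)                 # -> True
```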
Affiliation(s)
- Fuyong Xing
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA.
- Xinyi Yang
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
- Toby C Cornish
- Department of Pathology, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
- Debashis Ghosh
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
31
Liu CH, Fu LW, Chen HH, Huang SL. Toward cell nuclei precision between OCT and H&E images translation using signal-to-noise ratio cycle-consistency. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 242:107824. [PMID: 37832427 DOI: 10.1016/j.cmpb.2023.107824] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/02/2023] [Revised: 08/31/2023] [Accepted: 09/19/2023] [Indexed: 10/15/2023]
Abstract
Medical image-to-image translation is often difficult and of limited effectiveness due to differences in image acquisition mechanisms and the diverse structure of biological tissues. This work presents an unpaired image translation model between in-vivo optical coherence tomography (OCT) and ex-vivo hematoxylin and eosin (H&E) stained images without the need for image stacking, registration, post-processing, or annotation. The model can generate high-quality, highly accurate virtual medical images, and is robust and bidirectional. Our framework introduces random noise to (1) blur redundant features, (2) defend against self-adversarial attacks, (3) stabilize inverse conversion, and (4) mitigate the impact of OCT speckle. We also demonstrate that our model can be pre-trained and then fine-tuned using images from different OCT systems in just a few epochs. Qualitative and quantitative comparisons with traditional image-to-image translation models show the robustness of our proposed signal-to-noise ratio (SNR) cycle-consistency method.
Affiliation(s)
- Chih-Hao Liu
- Graduate Institute of Photonics and Optoelectronics, National Taiwan University, No.1, Sec. 4, Roosevelt Road, Taipei, 10617, Taiwan.
- Li-Wei Fu
- Graduate Institute of Communication Engineering, National Taiwan University, No.1, Sec. 4, Roosevelt Road, Taipei, 10617, Taiwan.
- Homer H Chen
- Graduate Institute of Communication Engineering, National Taiwan University, No.1, Sec. 4, Roosevelt Road, Taipei, 10617, Taiwan; Department of Electrical Engineering, National Taiwan University, No.1, Sec. 4, Roosevelt Road, Taipei, 10617, Taiwan; Graduate Institute of Networking and Multimedia, National Taiwan University, No.1, Sec. 4, Roosevelt Road, Taipei, 10617, Taiwan.
- Sheng-Lung Huang
- Graduate Institute of Photonics and Optoelectronics, National Taiwan University, No.1, Sec. 4, Roosevelt Road, Taipei, 10617, Taiwan; Department of Electrical Engineering, National Taiwan University, No.1, Sec. 4, Roosevelt Road, Taipei, 10617, Taiwan; All Vista Healthcare Center, National Taiwan University, No.1, Sec. 4, Roosevelt Road, Taipei, 10617, Taiwan.
32
Jiang Y, Si J, Zhang R, Enemali G, Zhou B, McCann H, Liu C. CSTNet: A Dual-Branch Convolutional Neural Network for Imaging of Reactive Flows Using Chemical Species Tomography. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2023; 34:9248-9258. [PMID: 35324447 DOI: 10.1109/tnnls.2022.3157689] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Chemical species tomography (CST) has been widely used for in situ imaging of critical parameters, e.g., species concentration and temperature, in reactive flows. However, even with state-of-the-art computational algorithms, the method is limited due to the inherently ill-posed and rank-deficient tomographic data inversion and by high computational cost. These issues hinder its application for real-time flow diagnosis. To address them, we present here a novel convolutional neural network, namely CSTNet, for high-fidelity, rapid, and simultaneous imaging of species concentration and temperature using CST. CSTNet introduces a shared feature extractor that incorporates the CST measurements and sensor layout into the learning network. In addition, a dual-branch decoder with internal crosstalk, which automatically learns the naturally correlated distributions of species concentration and temperature, is proposed for image reconstructions. The proposed CSTNet is validated both with simulated datasets and with measured data from real flames in experiments using an industry-oriented sensor. Superior performance is found relative to previous approaches in terms of reconstruction accuracy and robustness to measurement noise. This is the first time, to the best of our knowledge, that a deep learning-based method for CST has been experimentally validated for simultaneous imaging of multiple critical parameters in reactive flows using a low-complexity optical sensor with a severely limited number of laser beams.
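The dual-branch layout can be sketched as a shared feature extractor that lifts the beam measurements to a feature grid, feeding two reconstruction heads, one for species concentration and one for temperature. The sizes below are arbitrary assumptions, and the paper's internal crosstalk between branches is reduced here to plain feature sharing.

```python
import torch
import torch.nn as nn

class DualBranchNet(nn.Module):
    """Sketch of a CSTNet-style layout: a shared extractor maps sparse
    beam measurements to a feature grid; two decoder branches then
    reconstruct concentration and temperature fields."""
    def __init__(self, n_meas=32, grid=16):
        super().__init__()
        self.grid = grid
        self.shared = nn.Sequential(
            nn.Linear(n_meas, 256), nn.ReLU(),
            nn.Linear(256, grid * grid), nn.ReLU())
        self.conc_head = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1),
                                       nn.ReLU(), nn.Conv2d(8, 1, 1))
        self.temp_head = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1),
                                       nn.ReLU(), nn.Conv2d(8, 1, 1))

    def forward(self, y):
        f = self.shared(y).view(-1, 1, self.grid, self.grid)
        return self.conc_head(f), self.temp_head(f)

net = DualBranchNet()
conc, temp = net(torch.randn(2, 32))         # 32 beam measurements
print(conc.shape, temp.shape)                # two 16x16 reconstructions
```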
33
Yang D, Zhang S, Zheng C, Zhou G, Hu Y, Hao Q. Refractive index tomography with a physics-based optical neural network. BIOMEDICAL OPTICS EXPRESS 2023; 14:5886-5903. [PMID: 38021108 PMCID: PMC10659804 DOI: 10.1364/boe.504242] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/28/2023] [Revised: 09/27/2023] [Accepted: 09/27/2023] [Indexed: 12/01/2023]
Abstract
Non-interferometric three-dimensional refractive index (RI) tomography has attracted extensive attention in the life sciences for its simple system implementation and robust imaging performance. However, the complexity inherent in the physical propagation process poses significant challenges when the sample under study deviates from the weak scattering approximation. Such conditions complicate the task of achieving global optimization with conventional algorithms, rendering the reconstruction process both time-consuming and potentially ineffective. To address these limitations, this paper proposes an untrained multi-slice neural network (MSNN) with an optical structure, in which each layer has a clear corresponding physical meaning according to the beam propagation model. The network requires no pre-training, generalizes well, and recovers the RI distribution through optimization on a set of intensity images. Concurrently, MSNN can calibrate the intensities of different illuminations via learnable parameters, and multiple backscattering effects are taken into account by integrating a "scattering attenuation layer" between adjacent "RI" layers in the MSNN. Both simulations and experiments have been conducted carefully to demonstrate the effectiveness and feasibility of the proposed method. Experimental results reveal that MSNN can enhance clarity with increased efficiency in RI tomography. The implementation of MSNN introduces a novel paradigm for RI tomography.
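The beam-propagation backbone of a multi-slice model alternates thin-phase modulation by each RI slice with free-space propagation. A minimal differentiable version, assuming scalar diffraction via the angular spectrum method and omitting the paper's scattering attenuation layers and illumination calibration, might look like this:

```python
import torch

def angular_spectrum_step(field, dz, wavelength, dx):
    """Propagate a complex field one axial step dz with the angular
    spectrum method (the free-space part of a multi-slice model)."""
    n = field.shape[-1]
    fx = torch.fft.fftfreq(n, d=dx)
    FX, FY = torch.meshgrid(fx, fx, indexing="ij")
    k = 2 * torch.pi / wavelength
    kz_sq = k**2 - (2 * torch.pi * FX)**2 - (2 * torch.pi * FY)**2
    kz = torch.sqrt(torch.clamp(kz_sq, min=0.0))
    H = torch.exp(1j * kz * dz) * (kz_sq > 0)  # drop evanescent waves
    return torch.fft.ifft2(torch.fft.fft2(field) * H)

def multi_slice_forward(field, ri_slices, dz, wavelength, dx):
    """Alternate thin-phase modulation by each learnable RI slice with
    free-space propagation, as in an MSNN-style physics layer stack."""
    k = 2 * torch.pi / wavelength
    for dn in ri_slices:                     # dn = RI deviation per slice
        field = field * torch.exp(1j * k * dn * dz)
        field = angular_spectrum_step(field, dz, wavelength, dx)
    return field

slices = [torch.zeros(64, 64, requires_grad=True) for _ in range(4)]
out = multi_slice_forward(torch.ones(64, 64, dtype=torch.complex64),
                          slices, dz=1e-6, wavelength=0.5e-6, dx=0.2e-6)
(out.abs() ** 2).sum().backward()            # intensity loss reaches slices
print(slices[0].grad.shape)                  # -> torch.Size([64, 64])
```

Because every step is differentiable, the slice values can be optimized directly against measured intensity images, which is the "untrained network" idea.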
Affiliation(s)
- Delong Yang
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Shaohui Zhang
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Yangtze Delta Region Academy of Beijing Institute of Technology, China
- Chuanjian Zheng
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Guocheng Zhou
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Yao Hu
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Qun Hao
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Yangtze Delta Region Academy of Beijing Institute of Technology, China
34
Wu Y, Gadsden SA. Machine learning algorithms in microbial classification: a comparative analysis. Front Artif Intell 2023; 6:1200994. [PMID: 37928448 PMCID: PMC10620803 DOI: 10.3389/frai.2023.1200994] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2023] [Accepted: 09/27/2023] [Indexed: 11/07/2023] Open
Abstract
This research paper presents an overview of contemporary machine learning methodologies and their utilization in healthcare and the prevention of infectious diseases, specifically focusing on the classification and identification of bacterial species. As deep learning techniques have gained prominence in the healthcare sector, a diverse array of architectural models has emerged. Through a comprehensive review of pertinent literature, multiple studies employing machine learning algorithms in the context of microbial diagnosis and classification are examined. Each investigation entails a tabulated presentation of data, encompassing details about the training and validation datasets, specifications of the machine learning and deep learning techniques employed, and the evaluation metrics used to gauge algorithmic performance. Notably, convolutional neural networks (CNNs) have been the predominant choice for image classification tasks among machine learning practitioners over the last decade. This preference stems from their ability to autonomously extract pertinent and distinguishing features with minimal human intervention. A range of CNN architectures have been developed and effectively applied to image classification. However, to address the considerable data requirements of deep learning, recent advancements include the application of pre-trained models via transfer learning for the identification of microbial entities. This method repurposes the knowledge gleaned from solving other image classification challenges to classify microbial images accurately, significantly mitigating the need for extensive and varied training data. This study undertakes a comparative assessment of various popular pre-trained CNN architectures for the classification of bacteria. The dataset employed is composed of approximately 660 images representing 33 bacterial species. To enhance dataset diversity, data augmentation is implemented, followed by evaluation on multiple models including AlexNet, VGGNet, Inception networks, Residual Networks, and Densely Connected Convolutional Networks. The results indicate that the DenseNet-121 architecture yields the best performance, achieving a peak accuracy of 99.08%, precision of 99.06%, recall of 99.00%, and an F1-score of 98.99%. By demonstrating the proficiency of the DenseNet-121 model on a comparatively modest dataset, this study underscores the viability of transfer learning in the healthcare sector for precise and efficient microbial identification. These findings contribute to ongoing efforts to harness machine learning techniques to enhance healthcare methodologies and bolster infectious disease prevention practices.
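The transfer-learning recipe evaluated here is standard: load an ImageNet-pretrained DenseNet-121, freeze the backbone, and retrain only a new classification head. A minimal torchvision sketch follows; the 33-class count comes from the paper, while the optimizer settings and input tensors are random stand-ins.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load DenseNet-121 pretrained on ImageNet (downloads weights on first
# use) and replace the classifier head for a 33-class species problem.
model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                  # freeze the backbone
model.classifier = nn.Linear(model.classifier.in_features, 33)

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(4, 3, 224, 224)              # stand-in for micrographs
y = torch.randint(0, 33, (4,))
loss = criterion(model(x), y)
loss.backward()                              # only the head gets gradients
optimizer.step()
print(float(loss))
```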
Affiliation(s)
- Yuandi Wu
- Department of Mechanical Engineering, Intelligent and Cognitive Engineering Laboratory, McMaster University, Hamilton, ON, Canada
- S Andrew Gadsden
- Department of Mechanical Engineering, Intelligent and Cognitive Engineering Laboratory, McMaster University, Hamilton, ON, Canada
35
Wei S, Si L, Huang T, Du S, Yao Y, Dong Y, Ma H. Deep-learning-based cross-modality translation from Stokes image to bright-field contrast. JOURNAL OF BIOMEDICAL OPTICS 2023; 28:102911. [PMID: 37867633 PMCID: PMC10587695 DOI: 10.1117/1.jbo.28.10.102911] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/14/2023] [Revised: 08/25/2023] [Accepted: 09/25/2023] [Indexed: 10/24/2023]
Abstract
Significance Mueller matrix (MM) microscopy has proven to be a powerful tool for probing microstructural characteristics of biological samples down to the subwavelength scale. However, in clinical practice, doctors usually rely on bright-field microscopy images of stained tissue slides to identify characteristic features of specific diseases and make an accurate diagnosis. Cross-modality translation based on polarization imaging helps pathologists analyze sample properties across modalities more efficiently and consistently. Aim In this work, we propose a computational image translation technique based on deep learning that produces bright-field microscopy contrast from snapshot Stokes images of stained pathological tissue slides. Taking Stokes images as input instead of MM images allows the translated bright-field images to be unaffected by variations in light source and samples. Approach We adopted CycleGAN as the translation model to avoid the requirement for co-registered image pairs in training. This method can generate images that are equivalent to bright-field images with different staining styles of the same region. Results Pathological slices of liver and breast tissue with hematoxylin and eosin staining, and lung tissue with two types of immunohistochemistry staining, i.e., thyroid transcription factor-1 and Ki-67, were used to demonstrate the effectiveness of our method. The output results were evaluated by four image quality assessment methods. Conclusions By comparing the cross-modality translation performance with MM images, we found that Stokes images, with the advantages of faster acquisition and independence from light intensity and image registration, can be well translated to bright-field images.
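The heart of the unpaired training setup is CycleGAN's cycle-consistency term: each image must survive a round trip through both generators. The sketch below shows that loss with trivial stand-in generators and an assumed four-channel Stokes representation; the adversarial and identity terms of a full CycleGAN are omitted.

```python
import torch
import torch.nn as nn

def cycle_consistency_loss(G_s2b, G_b2s, stokes, brightfield, lam=10.0):
    """Core CycleGAN term for unpaired Stokes -> bright-field translation:
    penalize the L1 error of a round trip through both generators."""
    l1 = nn.L1Loss()
    cycle_s = l1(G_b2s(G_s2b(stokes)), stokes)
    cycle_b = l1(G_s2b(G_b2s(brightfield)), brightfield)
    return lam * (cycle_s + cycle_b)

# Stand-in generators for illustration (1x1 convolutions only).
G_s2b = nn.Conv2d(4, 3, 1)                   # 4 Stokes channels -> RGB
G_b2s = nn.Conv2d(3, 4, 1)                   # RGB -> 4 Stokes channels
s = torch.randn(2, 4, 64, 64)
b = torch.randn(2, 3, 64, 64)
print(float(cycle_consistency_loss(G_s2b, G_b2s, s, b)))
```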
Affiliation(s)
- Shilong Wei
- Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
- Lu Si
- Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
- Tongyu Huang
- Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
- Tsinghua University, Department of Biomedical Engineering, Beijing, China
- Shan Du
- University of Chinese Academy of Sciences, Shenzhen Hospital, Department of Pathology, Shenzhen, China
- Yue Yao
- Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
- Yang Dong
- Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
- Hui Ma
- Tsinghua University, Shenzhen International Graduate School, Shenzhen, China
- Tsinghua University, Department of Biomedical Engineering, Beijing, China
- Tsinghua University, Department of Physics, Beijing, China
36
Aswath A, Alsahaf A, Giepmans BNG, Azzopardi G. Segmentation in large-scale cellular electron microscopy with deep learning: A literature survey. Med Image Anal 2023; 89:102920. [PMID: 37572414 DOI: 10.1016/j.media.2023.102920] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2022] [Revised: 07/05/2023] [Accepted: 07/31/2023] [Indexed: 08/14/2023]
Abstract
Electron microscopy (EM) enables high-resolution imaging of tissues and cells based on 2D and 3D imaging techniques. Due to the laborious and time-consuming nature of manual segmentation of large-scale EM datasets, automated segmentation approaches are crucial. This review focuses on the progress of deep learning-based segmentation techniques in large-scale cellular EM throughout the last six years, during which significant progress has been made in both semantic and instance segmentation. A detailed account is given for the key datasets that contributed to the proliferation of deep learning in 2D and 3D EM segmentation. The review covers supervised, unsupervised, and self-supervised learning methods and examines how these algorithms were adapted to the task of segmenting cellular and sub-cellular structures in EM images. The special challenges posed by such images, like heterogeneity and spatial complexity, and the network architectures that overcame some of them are described. Moreover, an overview of the evaluation measures used to benchmark EM datasets in various segmentation tasks is provided. Finally, an outlook of current trends and future prospects of EM segmentation is given, especially with large-scale models and unlabeled images to learn generic features across EM datasets.
Affiliation(s)
- Anusha Aswath
- Bernoulli Institute of Mathematics, Computer Science and Artificial Intelligence, University Groningen, Groningen, The Netherlands; Department of Biomedical Sciences of Cells and Systems, University Groningen, University Medical Center Groningen, Groningen, The Netherlands.
- Ahmad Alsahaf
- Department of Biomedical Sciences of Cells and Systems, University Groningen, University Medical Center Groningen, Groningen, The Netherlands
- Ben N G Giepmans
- Department of Biomedical Sciences of Cells and Systems, University Groningen, University Medical Center Groningen, Groningen, The Netherlands
- George Azzopardi
- Bernoulli Institute of Mathematics, Computer Science and Artificial Intelligence, University Groningen, Groningen, The Netherlands
37
Ponzio F, Descombes X, Ambrosetti D. Improving CNNs classification with pathologist-based expertise: the renal cell carcinoma case study. Sci Rep 2023; 13:15887. [PMID: 37741835 PMCID: PMC10517931 DOI: 10.1038/s41598-023-42847-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2023] [Accepted: 09/15/2023] [Indexed: 09/25/2023] Open
Abstract
The prognosis of renal cell carcinoma (RCC) malignant neoplasms relies heavily on an accurate determination of the histological subtype, which currently involves light-microscopy visual analysis of histological slides, considering notably tumor architecture and cytology. RCC subtyping is therefore a time-consuming and tedious process, sometimes requiring expert review, with great impact on the diagnosis, prognosis and treatment of RCC neoplasms. In this study, we investigate automatic RCC subtyping classification of 91 patients, diagnosed with clear cell RCC, papillary RCC, chromophobe RCC, or renal oncocytoma, through deep learning-based methodologies. We show that the classification performance of several state-of-the-art convolutional neural networks (CNNs) leaves room for improvement across the different RCC subtypes. We therefore introduce a new classification model leveraging a combination of supervised deep learning models (specifically CNNs) and pathologists' expertise, yielding a hybrid approach that we term ExpertDeepTree (ExpertDT). Our findings demonstrate ExpertDT's superior capability on the RCC subtyping task with respect to traditional CNNs, and suggest that introducing expert-based knowledge into deep learning models may be a valuable solution for complex classification cases.
Affiliation(s)
- Francesco Ponzio
- Interuniversity Department of Regional and Urban Studies and Planning, Politecnico di Torino, Turin, Italy.
- Damien Ambrosetti
- Department of Pathology, CHU Nice, Université Côte d'Azur, Nice, France
38
Yang H, Zhu Y, Yu J, Jin L, Guo Z, Zheng C, Fu J, Xu Y. Boosting microscopic object detection via feature activation map guided poisson blending. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2023; 20:18301-18317. [PMID: 38052559 DOI: 10.3934/mbe.2023813] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/07/2023]
Abstract
Microscopic examination of visible components based on micrographs is the gold standard for testing in biomedical research and clinical diagnosis. The application of object detection technology to bioimages not only improves the efficiency of the analyst but also provides decision support to ensure the objectivity and consistency of diagnosis. However, the lack of large annotated datasets is a significant impediment to rapidly deploying object detection models for microscopic formed-element detection. Standard augmentation methods used in object detection are not appropriate because they tend to destroy the original micro-morphological information and produce counterintuitive micrographs, which does not help build analysts' trust in the intelligent system. Here, we propose a feature activation map-guided boosting mechanism dedicated to microscopic object detection to improve data efficiency. Our results show that the boosting mechanism provides solid gains in object detection models deployed for microscopic formed-element detection. After image augmentation, the mean average precision (mAP) of the baseline and strong baseline on the Chinese herbal medicine micrograph dataset increased by 16.3% and 5.8%, respectively. Similarly, on the urine sediment dataset, the boosting mechanism improved the mAP of the baseline and strong baseline by 8.0% and 2.6%, respectively. Moreover, the method shows strong generalizability and can be easily integrated into any mainstream object detection model. The performance enhancement is interpretable, making it more suitable for microscopic biomedical applications.
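The blending step that makes such augmentation look natural can be done with OpenCV's Poisson (seamless) cloning, which matches the pasted patch's gradients to the new background. The sketch below pastes a synthetic object into a synthetic background; it omits the paper's feature-activation-map guidance for choosing what to paste and where.

```python
import cv2
import numpy as np

def paste_object(dst: np.ndarray, patch: np.ndarray,
                 center: tuple[int, int]) -> np.ndarray:
    """Blend a cropped object patch into a micrograph with Poisson
    (seamless) cloning so its gradients match the new background."""
    mask = 255 * np.ones(patch.shape[:2], np.uint8)
    return cv2.seamlessClone(patch, dst, mask, center, cv2.NORMAL_CLONE)

# Synthetic illustration: blend a bright disc into a noisy background.
bg = (np.random.rand(256, 256, 3) * 60 + 100).astype(np.uint8)
obj = np.full((40, 40, 3), 200, np.uint8)
cv2.circle(obj, (20, 20), 15, (230, 230, 230), -1)
aug = paste_object(bg, obj, center=(128, 128))
print(aug.shape)                             # -> (256, 256, 3)
```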
Affiliation(s)
- Haixu Yang
- Department of Biomedical Engineering, MOE Key Laboratory of Biomedical Engineering, State Key Laboratory of Extreme Photonics and Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang Provincial Key Laboratory of Traditional Chinese Medicine for Clinical Evaluation and Translational Research, Zhejiang University, Hangzhou, 310027, China
- Binjiang Institute of Zhejiang University, Hangzhou, 310053, China
- Yunqi Zhu
- Department of Biomedical Engineering, MOE Key Laboratory of Biomedical Engineering, State Key Laboratory of Extreme Photonics and Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang Provincial Key Laboratory of Traditional Chinese Medicine for Clinical Evaluation and Translational Research, Zhejiang University, Hangzhou, 310027, China
- Jiahui Yu
- Binjiang Institute of Zhejiang University, Hangzhou, 310053, China
- Luhong Jin
- Department of Biomedical Engineering, MOE Key Laboratory of Biomedical Engineering, State Key Laboratory of Extreme Photonics and Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang Provincial Key Laboratory of Traditional Chinese Medicine for Clinical Evaluation and Translational Research, Zhejiang University, Hangzhou, 310027, China
- Zengxi Guo
- Zhejiang Institute for Food and Drug Control, NMPA Key Laboratory of Quality Evaluation of Traditional Chinese Medicine (Traditional Chinese Patent Medicine), Hangzhou 310052, China
- Cheng Zheng
- Zhejiang Institute for Food and Drug Control, NMPA Key Laboratory of Quality Evaluation of Traditional Chinese Medicine (Traditional Chinese Patent Medicine), Hangzhou 310052, China
- Junfen Fu
- Department of Endocrinology, Children's Hospital of Zhejiang University School of Medicine, National Clinical Research Center for Children's Health, Hangzhou, 310051 China
- Yingke Xu
- Department of Biomedical Engineering, MOE Key Laboratory of Biomedical Engineering, State Key Laboratory of Extreme Photonics and Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang Provincial Key Laboratory of Traditional Chinese Medicine for Clinical Evaluation and Translational Research, Zhejiang University, Hangzhou, 310027, China
- Binjiang Institute of Zhejiang University, Hangzhou, 310053, China
- Department of Endocrinology, Children's Hospital of Zhejiang University School of Medicine, National Clinical Research Center for Children's Health, Hangzhou, 310051 China
39
Swillens JEM, Nagtegaal ID, Engels S, Lugli A, Hermens RPMG, van der Laak JAWM. Pathologists' first opinions on barriers and facilitators of computational pathology adoption in oncological pathology: an international study. Oncogene 2023; 42:2816-2827. [PMID: 37587332 PMCID: PMC10504072 DOI: 10.1038/s41388-023-02797-1] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2022] [Accepted: 07/26/2023] [Indexed: 08/18/2023]
Abstract
Computational pathology (CPath) algorithms detect, segment or classify cancer in whole slide images, approaching or even exceeding the accuracy of pathologists. Challenges have to be overcome before these algorithms can be used in practice. We therefore aim to explore international perspectives on the future role of CPath in oncological pathology by focusing on opinions and first experiences regarding barriers and facilitators. We conducted an international explorative eSurvey and semi-structured interviews with pathologists utilizing an implementation framework to classify potential influencing factors. The eSurvey results showed remarkable variation in opinions regarding attitude, understandability and validation of CPath. Interview results showed that barriers focused on the quality of available evidence, while most facilitators concerned strengths of CPath. A lack of consensus was present for multiple factors, such as the determination of sufficient validation using CPath, the preferred function of CPath within the digital workflow and the timing of CPath introduction in pathology education. The diversity in opinions illustrates variety in influencing factors in CPath adoption. A next step would be to quantitatively determine important factors for adoption and initiate validation studies. Both should include clear case descriptions and be conducted among a more homogenous panel of pathologists based on sub specialization.
Affiliation(s)
- Julie E M Swillens
- Scientific Center for Quality of Healthcare (IQ Healthcare), Radboud Institute for Health Sciences (RIHS), Radboud University Medical Centre, Nijmegen, The Netherlands.
- Iris D Nagtegaal
- Department of Pathology, Radboud Institute for Molecular Life Sciences (RIMLS), Radboud University Medical Centre, Nijmegen, The Netherlands
- Sam Engels
- Scientific Center for Quality of Healthcare (IQ Healthcare), Radboud Institute for Health Sciences (RIHS), Radboud University Medical Centre, Nijmegen, The Netherlands
- Rosella P M G Hermens
- Scientific Center for Quality of Healthcare (IQ Healthcare), Radboud Institute for Health Sciences (RIHS), Radboud University Medical Centre, Nijmegen, The Netherlands
40
Zhou F, Yin MM, Jiao CN, Zhao JX, Zheng CH, Liu JX. Predicting miRNA-Disease Associations Through Deep Autoencoder With Multiple Kernel Learning. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2023; 34:5570-5579. [PMID: 34860656 DOI: 10.1109/tnnls.2021.3129772] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Determining microRNA (miRNA)-disease associations (MDAs) is an integral part of the prevention, diagnosis, and treatment of complex diseases. However, wet experiments to discern MDAs are inefficient and expensive, so the development of reliable and efficient data-integrative models for predicting MDAs is of great significance. In the present work, a novel deep learning method for predicting MDAs through a deep autoencoder with multiple kernel learning (DAEMKL) is presented. First, DAEMKL applies multiple kernel learning (MKL) in miRNA space and disease space to construct a miRNA similarity network and a disease similarity network, respectively. Then, for each disease or miRNA, its feature representation is learned from the miRNA similarity network and disease similarity network via a regression model. After that, the integrated miRNA and disease feature representations are input into a deep autoencoder (DAE), and novel MDAs are predicted through the reconstruction error. Ultimately, the AUC results show that DAEMKL achieves outstanding performance. In addition, case studies of three complex diseases further prove that DAEMKL has excellent predictive performance and can discover a large number of underlying MDAs. Overall, DAEMKL is an effective method for identifying MDAs.
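At its core, multiple kernel learning builds an integrated similarity network as a convex combination of base kernels. The sketch below shows that combination step with fixed (untrained) weights on random stand-in feature profiles; DAEMKL itself learns the weights and feeds the resulting representations into a deep autoencoder.

```python
import numpy as np

def gaussian_kernel(X: np.ndarray, gamma: float) -> np.ndarray:
    """RBF similarity matrix K[i, j] = exp(-gamma * ||xi - xj||^2)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def combine_kernels(kernels: list[np.ndarray],
                    weights: np.ndarray) -> np.ndarray:
    """MKL reduces to a convex combination of base kernels; the weights
    would normally be fitted, here they are simply given."""
    weights = weights / weights.sum()        # enforce convexity
    return sum(w * K for w, K in zip(weights, kernels))

X = np.random.default_rng(0).random((10, 5))  # stand-in feature profiles
Ks = [gaussian_kernel(X, g) for g in (0.5, 1.0, 2.0)]
K = combine_kernels(Ks, np.array([1.0, 1.0, 1.0]))
print(K.shape)                               # -> (10, 10) similarity network
```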
41
Yu M, Shi H, Shen H, Chen X, Zhang L, Zhu J, Qian G, Feng B, Yu S. Simple and Rapid Discrimination of Methicillin-Resistant Staphylococcus aureus Based on Gram Staining and Machine Vision. Microbiol Spectr 2023; 11:e0528222. [PMID: 37395643 PMCID: PMC10433844 DOI: 10.1128/spectrum.05282-22] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2022] [Accepted: 05/24/2023] [Indexed: 07/04/2023] Open
Abstract
Methicillin-resistant Staphylococcus aureus (MRSA) is a clinical threat with high morbidity and mortality. Here, we describe a new simple, rapid identification method for MRSA using oxacillin sodium salt, a cell wall synthesis inhibitor, combined with Gram staining and machine vision (MV) analysis. Gram staining classifies bacteria as positive (purple) or negative (pink) according to cell wall structure and chemical composition. In the presence of oxacillin, the cell wall of methicillin-susceptible S. aureus (MSSA) was destroyed immediately and the cells appeared Gram-negative. In contrast, MRSA was relatively stable and appeared Gram-positive. This color change can be detected by MV. The feasibility of this method was demonstrated on 150 images of the staining results for 50 clinical S. aureus strains. Based on effective feature extraction and machine learning, the accuracies of the linear discriminant analysis (LDA) model and the nonlinear artificial neural network (ANN) model for MRSA identification were 96.7% and 97.3%, respectively. Combined with MV analysis, this simple strategy improved detection efficiency and significantly shortened the time needed to detect antibiotic resistance. The whole process can be completed within 1 h; unlike the traditional antibiotic susceptibility test, overnight incubation is avoided. This new strategy could be applied to other bacteria and represents a new rapid method for the detection of clinical antibiotic resistance. IMPORTANCE Oxacillin sodium salt destroys the integrity of the MSSA cell wall immediately, so the cells appear Gram-negative, whereas MRSA is relatively stable and still appears Gram-positive. This color change can be detected by microscopic examination and MV analysis. This new strategy significantly reduces the time needed to detect resistance, showing that oxacillin sodium salt combined with Gram staining and MV analysis is a new, simple and rapid method for the identification of MRSA.
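The color cue here is simple enough to prototype in a few lines: a hue histogram separates purple (Gram-positive) from pink (Gram-negative) fields, and a linear discriminant model classifies it. Everything below is synthetic illustration, not the study's pipeline or data.

```python
import numpy as np
import cv2
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def hue_histogram(img_bgr: np.ndarray, bins: int = 16) -> np.ndarray:
    """Color feature for Gram-stain images: a normalized hue histogram."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    hist, _ = np.histogram(hsv[..., 0], bins=bins, range=(0, 180))
    return hist / hist.sum()

# Synthetic training tiles: purple-ish (MRSA-like) vs pink-ish (MSSA-like).
rng = np.random.default_rng(0)
def tile(base_bgr):
    t = np.clip(base_bgr + rng.integers(-20, 20, (32, 32, 3)), 0, 255)
    return t.astype(np.uint8)

purple = [tile(np.array([160, 60, 120])) for _ in range(20)]
pink = [tile(np.array([180, 120, 230])) for _ in range(20)]
X = np.array([hue_histogram(t) for t in purple + pink])
y = np.array([1] * 20 + [0] * 20)            # 1 = Gram-positive appearance
clf = LinearDiscriminantAnalysis().fit(X, y)
print(clf.score(X, y))
```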
Affiliation(s)
- Menghuan Yu
- Institute of Mass Spectrometry, School of Material Science and Chemical Engineering, Ningbo University, Ningbo, Zhejiang, China
- Haimei Shi
- Institute of Mass Spectrometry, School of Material Science and Chemical Engineering, Ningbo University, Ningbo, Zhejiang, China
- Hao Shen
- Institute of Mass Spectrometry, School of Material Science and Chemical Engineering, Ningbo University, Ningbo, Zhejiang, China
- Xueqin Chen
- Department of Intensive Care Unit, The First Affiliated Hospital of Ningbo University, Ningbo, Zhejiang, China
- Li Zhang
- Department of Clinical Lab, Peking Union Medical College Hospital, Peking Union Medical College & Chinese Academy Medical Science, Beijing, China
- Jianhua Zhu
- Department of Intensive Care Unit, The First Affiliated Hospital of Ningbo University, Ningbo, Zhejiang, China
- Guoqing Qian
- Department of Intensive Care Unit, The First Affiliated Hospital of Ningbo University, Ningbo, Zhejiang, China
- Bin Feng
- Institute of Mass Spectrometry, School of Material Science and Chemical Engineering, Ningbo University, Ningbo, Zhejiang, China
- Shaoning Yu
- Institute of Mass Spectrometry, School of Material Science and Chemical Engineering, Ningbo University, Ningbo, Zhejiang, China
42
Dos Santos DFD, de Faria PR, Travençolo BAN, do Nascimento MZ. Influence of Data Augmentation Strategies on the Segmentation of Oral Histological Images Using Fully Convolutional Neural Networks. J Digit Imaging 2023; 36:1608-1623. [PMID: 37012446 PMCID: PMC10406800 DOI: 10.1007/s10278-023-00814-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2022] [Revised: 03/01/2023] [Accepted: 03/03/2023] [Indexed: 04/05/2023] Open
Abstract
Segmentation of tumor regions in H&E-stained slides is an important task for a pathologist while diagnosing different types of cancer, including oral squamous cell carcinoma (OSCC). Histological image segmentation is often constrained by the availability of labeled training data, since labeling histological images is a highly skilled, complex, and time-consuming task. Data augmentation strategies thus become essential for training convolutional neural network models and overcoming the overfitting problem when only a few training samples are available. This paper proposes a new data augmentation strategy, named Random Composition Augmentation (RCAug), to train fully convolutional networks (FCNs) to segment OSCC tumor regions in H&E-stained histological images. Given an input image and its corresponding label, a pipeline with a random composition of geometric, distortion, color-transfer, and generative image transformations is executed on the fly. Experimental evaluations were performed using an FCN-based method to segment OSCC regions through a set of different data augmentation transformations. By using RCAug, we improved the FCN-based segmentation method from 0.51 to 0.81 intersection-over-union (IoU) on a whole-slide image dataset and from 0.65 to 0.69 IoU on a tissue microarray image dataset.
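The idea of random composition is easy to sketch: draw a random subset of label-safe transforms and apply them, in order, to the image and its mask on the fly. The transform set below is a simplified stand-in for RCAug's geometric/distortion/color-transfer/generative pipeline, not the paper's implementation.

```python
import random
import numpy as np

def rcaug_like(image: np.ndarray, mask: np.ndarray, k: int = 2):
    """Apply a random composition of k transforms identically to an
    image and its segmentation mask (color ops leave the mask alone)."""
    def hflip(im, m): return im[:, ::-1], m[:, ::-1]
    def vflip(im, m): return im[::-1], m[::-1]
    def rot90(im, m): return np.rot90(im), np.rot90(m)
    def jitter(im, m):                       # brightness jitter, mask kept
        return np.clip(im * random.uniform(0.8, 1.2), 0, 255), m
    for op in random.sample([hflip, vflip, rot90, jitter], k=k):
        image, mask = op(image, mask)
    return image.copy(), mask.copy()

img = np.random.rand(64, 64, 3) * 255
msk = (np.random.rand(64, 64) > 0.5).astype(np.uint8)
aug_img, aug_msk = rcaug_like(img, msk)
print(aug_img.shape, aug_msk.shape)          # shapes are preserved
```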
Affiliation(s)
- Dalí F D Dos Santos
- Faculty of Computer Science, Federal University of Uberlândia, Brazil and Institute of Biomedical Science, Federal University of Uberlândia, Uberlândia, Brazil.
- Paulo R de Faria
- Faculty of Computer Science, Federal University of Uberlândia, Brazil and Institute of Biomedical Science, Federal University of Uberlândia, Uberlândia, Brazil
| | - Bruno A N Travençolo
- Faculty of Computer Science, Federal University of Uberlândia, Brazil and Institute of Biomedical Science, Federal University of Uberlândia, Uberlândia, Brazil
| | - Marcelo Z do Nascimento
- Faculty of Computer Science, Federal University of Uberlândia, Brazil and Institute of Biomedical Science, Federal University of Uberlândia, Uberlândia, Brazil
| |
Collapse
|
43
|
Alonso A, Kirkegaard JB. Fast detection of slender bodies in high density microscopy data. Commun Biol 2023; 6:754. [PMID: 37468539 PMCID: PMC10356847 DOI: 10.1038/s42003-023-05098-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2023] [Accepted: 07/05/2023] [Indexed: 07/21/2023] Open
Abstract
Computer-aided analysis of biological microscopy data has seen massive improvement with the utilization of general-purpose deep learning techniques. Yet, in microscopy studies of multi-organism systems, the problem of collision and overlap remains challenging. This is particularly true for systems composed of slender bodies such as swimming nematodes, swimming spermatozoa, or the beating of eukaryotic or prokaryotic flagella. Here, we develop an end-to-end deep learning approach to extract precise shape trajectories of generally motile and overlapping slender bodies. Our method works in low-resolution settings where feature keypoints are hard to define and detect. Detection is fast, and we demonstrate the ability to track thousands of overlapping organisms simultaneously. While our approach is agnostic to the area of application, we present it in the setting of, and exemplify its usability on, dense experiments with swimming Caenorhabditis elegans. Model training is achieved purely on synthetic data, utilizing a physics-based model for nematode motility, and we demonstrate the model's ability to generalize from simulations to experimental videos.
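As a toy illustration of the synthetic-training idea, the sketch below samples smooth undulating 2D centerlines by integrating a sinusoidal curvature profile. It is a loose stand-in for the paper's physics-based motility model; every parameter here is an assumption chosen for readability, not taken from the paper.
    import numpy as np

    def synthetic_worm(n_points=50, length=1.0, curvature_scale=3.0, rng=None):
        # Sample a smooth 2D centerline resembling an undulating nematode.
        # Curvature follows a random-phase sinusoid along arc length; the
        # heading is its integral, and positions are the integral of the heading.
        rng = rng or np.random.default_rng()
        ds = length / n_points
        s = np.linspace(0.0, length, n_points)
        phase = rng.uniform(0.0, 2.0 * np.pi)
        freq = rng.uniform(1.0, 3.0)
        kappa = curvature_scale * np.sin(2.0 * np.pi * freq * s + phase)
        theta = np.cumsum(kappa) * ds
        xy = np.stack([np.cumsum(np.cos(theta)), np.cumsum(np.sin(theta))], axis=1) * ds
        return xy  # (n_points, 2) coordinates, ready to rasterize into training images
Rendering many such centerlines into one frame, with overlaps allowed, yields labeled images of the kind a detector can be trained on entirely in simulation.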
Collapse
Affiliation(s)
- Albert Alonso
- Niels Bohr Institute & Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
| | - Julius B Kirkegaard
- Niels Bohr Institute & Department of Computer Science, University of Copenhagen, Copenhagen, Denmark.
| |
Collapse
|
44
|
Thiele F, Windebank AJ, Siddiqui AM. Motivation for using data-driven algorithms in research: A review of machine learning solutions for image analysis of micrographs in neuroscience. J Neuropathol Exp Neurol 2023; 82:595-610. [PMID: 37244652 PMCID: PMC10280360 DOI: 10.1093/jnen/nlad040] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/29/2023] Open
Abstract
Machine learning is a powerful tool that is increasingly being used in many research areas, including neuroscience. The recent development of new algorithms and network architectures, especially in the field of deep learning, has made machine learning models more reliable, accurate, and useful to the biomedical research sector. By minimizing the effort necessary to extract valuable features from datasets, such models can find trends in data automatically and make predictions about future data, thereby improving the reproducibility and efficiency of research. One application is the automatic evaluation of micrograph images, which is of great value in neuroscience research. While the development of novel models has enabled numerous new research applications, the barrier to using these new algorithms has also been lowered by the integration of deep learning models into familiar applications such as microscopy image viewers. For researchers unfamiliar with machine learning algorithms, the steep learning curve can hinder the successful implementation of these methods into their workflows. This review explores the use of machine learning in neuroscience, including its potential applications and limitations, and provides guidance on how to select a fitting framework for real-life research projects.
Collapse
Affiliation(s)
- Frederic Thiele
- Department of Neurology, Mayo Clinic, Rochester, Minnesota, USA
- Department of Neurosurgery, Medical Center of the University of Munich, Munich, Germany
| | | | - Ahad M Siddiqui
- Department of Neurology, Mayo Clinic, Rochester, Minnesota, USA
| |
Collapse
|
45
|
Zhang X, Li H, Ma Y, Zhong D, Hou S. Study liquid-liquid phase separation with optical microscopy: A methodology review. APL Bioeng 2023; 7:021502. [PMID: 37180732 PMCID: PMC10171890 DOI: 10.1063/5.0137008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2022] [Accepted: 04/28/2023] [Indexed: 05/16/2023] Open
Abstract
Intracellular liquid-liquid phase separation (LLPS) is a critical process involving the dynamic association of biomolecules and the formation of non-membrane compartments, playing a vital role in regulating biomolecular interactions and organelle functions. A comprehensive understanding of cellular LLPS mechanisms at the molecular level is crucial, as many diseases are linked to LLPS, and insights gained can inform drug/gene delivery processes and aid in the diagnosis and treatment of associated diseases. Over the past few decades, numerous techniques have been employed to investigate the LLPS process. In this review, we concentrate on optical imaging methods applied to LLPS studies. We begin by introducing LLPS and its molecular mechanism, followed by a review of the optical imaging methods and fluorescent probes employed in LLPS research. Furthermore, we discuss potential future imaging tools applicable to the LLPS studies. This review aims to provide a reference for selecting appropriate optical imaging methods for LLPS investigations.
Collapse
Affiliation(s)
| | | | - Yue Ma
- Institute of Systems and Physical Biology, Shenzhen Bay Laboratory, Shenzhen 518055, China
| | | | - Shangguo Hou
- Institute of Systems and Physical Biology, Shenzhen Bay Laboratory, Shenzhen 518055, China
| |
Collapse
|
46
|
Zhu Z, Yu L, Wu W, Yu R, Zhang D, Wang L. MuRCL: Multi-Instance Reinforcement Contrastive Learning for Whole Slide Image Classification. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:1337-1348. [PMID: 37015475 DOI: 10.1109/tmi.2022.3227066] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Multi-instance learning (MIL) is widely adopted for automatic whole slide image (WSI) analysis and usually consists of two stages, i.e., instance feature extraction and feature aggregation. However, due to the "weak supervision" of slide-level labels, the feature aggregation stage can suffer from severe overfitting when training an effective MIL model. In this case, mining more information from limited slide-level data is pivotal to WSI analysis. Different from previous works on improving instance feature extraction, this paper investigates how to exploit the latent relationships of different instances (patches) to combat overfitting in MIL for more generalizable WSI classification. In particular, we propose a novel Multi-instance Reinforcement Contrastive Learning framework (MuRCL) to deeply mine the inherent semantic relationships of different patches to advance WSI classification. Specifically, the proposed framework is first trained in a self-supervised manner and then fine-tuned with WSI slide-level labels. We formulate the first stage as a contrastive learning (CL) process, where positive/negative discriminative feature sets are constructed from the same patch-level feature bags of WSIs. To facilitate the CL training, we design a novel reinforcement learning-based agent to progressively update the selection of discriminative feature sets according to an online reward for slide-level feature aggregation. Then, we further update the model with labeled WSI data to regularize the learned features for the final WSI classification. Experimental results on three public WSI classification datasets (Camelyon16, TCGA-Lung and TCGA-Kidney) demonstrate that the proposed MuRCL outperforms state-of-the-art MIL models. In addition, MuRCL achieves comparable performance to other state-of-the-art MIL models on the TCGA-Esca dataset.
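For readers unfamiliar with the contrastive stage, a generic InfoNCE objective over slide-level embeddings looks roughly as follows. This is the standard CL loss under the assumption that the two views are aggregated features of the same WSI; it is not MuRCL's exact formulation, and the reinforcement-learning selection agent is not shown.
    import torch
    import torch.nn.functional as F

    def info_nce(z1, z2, temperature=0.1):
        # z1, z2: (batch, dim) slide-level embeddings aggregated from two
        # discriminative feature sets of the same WSI. Matching rows are
        # positives; all other slides in the batch act as negatives.
        z1 = F.normalize(z1, dim=1)
        z2 = F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / temperature          # pairwise cosine similarities
        targets = torch.arange(z1.size(0), device=z1.device)
        return F.cross_entropy(logits, targets)     # pull matched pairs together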
Collapse
|
47
|
Zhu Y, Yin X, Meijering E. A Compound Loss Function With Shape Aware Weight Map for Microscopy Cell Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:1278-1288. [PMID: 36455082 DOI: 10.1109/tmi.2022.3226226] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Microscopy cell segmentation is a crucial step in biological image analysis and a challenging task. In recent years, deep learning has been widely used to tackle this task, with promising results. A critical aspect of training complex neural networks for this purpose is the selection of the loss function, as it affects the learning process. In the field of cell segmentation, most recent research on improving the loss function focuses on addressing the problem of inter-class imbalance. Despite promising achievements, more work is needed, as the challenge of cell segmentation lies not only in the inter-class imbalance but also in the intra-class imbalance (the cost imbalance between the false positives and false negatives of the inference model), the segmentation of cell minutiae, and missing annotations. To deal with these challenges, in this paper we propose a new compound loss function employing a shape-aware weight map. The proposed loss function is inspired by Youden's J index to handle the problem of inter-class imbalance, and uses a focal cross-entropy term to penalize the intra-class imbalance and to weight easy and hard samples. The proposed shape-aware weight map can handle the problem of missing annotations and facilitate valid segmentation of cell minutiae. Results of evaluations on all ten 2D+time datasets from the public cell tracking challenge demonstrate 1) the superiority of the proposed loss function with the shape-aware weight map, and 2) that the performance of recent deep learning-based cell segmentation methods can be improved by using the proposed compound loss function.
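The ingredients named in the abstract can be sketched as a soft relaxation of Youden's J index (sensitivity + specificity - 1) combined with a focal cross-entropy term modulated by a per-pixel weight map. The relaxation, the mixing weight alpha, and the focusing parameter gamma below are illustrative assumptions; the paper's exact formulation may differ.
    import torch

    def soft_youden_j_loss(pred, target, eps=1e-6):
        # pred: sigmoid probabilities in [0, 1]; target: binary ground-truth mask.
        # Youden's J = sensitivity + specificity - 1; the loss is 1 - J.
        tp = (pred * target).sum()
        tn = ((1.0 - pred) * (1.0 - target)).sum()
        sensitivity = tp / (target.sum() + eps)
        specificity = tn / ((1.0 - target).sum() + eps)
        return 1.0 - (sensitivity + specificity - 1.0)

    def focal_bce(pred, target, gamma=2.0, weight_map=None, eps=1e-6):
        # Focal binary cross-entropy: (1 - p_t)^gamma down-weights easy pixels,
        # and an optional per-pixel map (e.g., shape-aware) rescales the rest.
        pt = torch.where(target > 0.5, pred, 1.0 - pred).clamp(eps, 1.0 - eps)
        loss = -((1.0 - pt) ** gamma) * pt.log()
        if weight_map is not None:
            loss = loss * weight_map
        return loss.mean()

    def compound_loss(pred, target, weight_map=None, alpha=0.5):
        # Illustrative mixture of the two terms; alpha is an assumed hyperparameter.
        return (alpha * soft_youden_j_loss(pred, target)
                + (1.0 - alpha) * focal_bce(pred, target, weight_map=weight_map))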
Collapse
|
48
|
Wu J, Xia Y, Wang X, Wei Y, Liu A, Innanje A, Zheng M, Chen L, Shi J, Wang L, Zhan Y, Zhou XS, Xue Z, Shi F, Shen D. uRP: An integrated research platform for one-stop analysis of medical images. FRONTIERS IN RADIOLOGY 2023; 3:1153784. [PMID: 37492386 PMCID: PMC10365282 DOI: 10.3389/fradi.2023.1153784] [Citation(s) in RCA: 52] [Impact Index Per Article: 26.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/30/2023] [Accepted: 03/31/2023] [Indexed: 07/27/2023]
Abstract
Introduction: Medical image analysis is of tremendous importance in serving clinical diagnosis, treatment planning, and prognosis assessment. However, the image analysis process usually involves multiple modality-specific software packages and relies on rigorous manual operations, which is time-consuming and potentially poorly reproducible. Methods: We present an integrated platform, the uAI Research Portal (uRP), to achieve one-stop analysis of multimodal images such as CT, MRI, and PET for clinical research applications. The proposed uRP adopts a modularized architecture to be multifunctional, extensible, and customizable. Results and Discussion: The uRP offers three advantages, as it 1) spans a wealth of algorithms for image processing, including semi-automatic delineation, automatic segmentation, registration, classification, quantitative analysis, and image visualization, to realize a one-stop analytic pipeline, 2) integrates a variety of functional modules, which can be directly applied, combined, or customized for specific application domains, such as brain, pneumonia, and knee joint analyses, and 3) enables full-stack analysis of one disease, including diagnosis, treatment planning, and prognosis assessment, as well as full-spectrum coverage for multiple disease applications. With the continuous development and inclusion of advanced algorithms, we expect this platform to largely simplify the clinical scientific research process and promote more and better discoveries.
Collapse
Affiliation(s)
- Jiaojiao Wu
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Yuwei Xia
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Xuechun Wang
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Ying Wei
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Aie Liu
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Arun Innanje
- Department of Research and Development, United Imaging Intelligence Co., Ltd., Cambridge, MA, United States
| | - Meng Zheng
- Department of Research and Development, United Imaging Intelligence Co., Ltd., Cambridge, MA, United States
| | - Lei Chen
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Jing Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Liye Wang
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Yiqiang Zhan
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Xiang Sean Zhou
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Zhong Xue
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Dinggang Shen
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Shanghai Clinical Research and Trial Center, Shanghai, China
| |
Collapse
|
49
|
Iqbal S, N. Qureshi A, Li J, Mahmood T. On the Analyses of Medical Images Using Traditional Machine Learning Techniques and Convolutional Neural Networks. ARCHIVES OF COMPUTATIONAL METHODS IN ENGINEERING : STATE OF THE ART REVIEWS 2023; 30:3173-3233. [PMID: 37260910 PMCID: PMC10071480 DOI: 10.1007/s11831-023-09899-9] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/01/2022] [Accepted: 02/19/2023] [Indexed: 06/02/2023]
Abstract
Convolutional neural networks (CNNs) have shown impressive accomplishments in different areas, especially object detection, segmentation, reconstruction (2D and 3D), information retrieval, medical image registration, multilingual translation, natural language processing, anomaly detection in video, and speech recognition. A CNN is a special type of neural network with a compelling and effective ability to learn features at several steps during augmentation of the data. Recently, different interesting and inspiring ideas from deep learning (DL), such as new activation functions, hyperparameter optimization, regularization, momentum, and loss functions, have improved the performance, operation, and execution of CNNs. Innovations in the internal architecture of CNNs and in their representational style have further improved performance significantly. This survey focuses on the internal taxonomy of deep learning and different models of convolutional neural networks, especially the depth and width of models, as well as CNN components, applications, and the current challenges of deep learning.
Collapse
Affiliation(s)
- Saeed Iqbal
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Punjab 54000 Pakistan
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124 Beijing China
| | - Adnan N. Qureshi
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Punjab 54000 Pakistan
| | - Jianqiang Li
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124 Beijing China
- Beijing Engineering Research Center for IoT Software and Systems, Beijing University of Technology, Beijing, 100124 Beijing China
| | - Tariq Mahmood
- Artificial Intelligence and Data Analytics (AIDA) Lab, College of Computer & Information Sciences (CCIS), Prince Sultan University, Riyadh, 11586 Kingdom of Saudi Arabia
| |
Collapse
|
50
|
Colaco SJ, Kim JH, Poulose A, Neethirajan S, Han DS. DISubNet: Depthwise Separable Inception Subnetwork for Pig Treatment Classification Using Thermal Data. Animals (Basel) 2023; 13:ani13071184. [PMID: 37048439 PMCID: PMC10093577 DOI: 10.3390/ani13071184] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2023] [Revised: 03/19/2023] [Accepted: 03/23/2023] [Indexed: 03/31/2023] Open
Abstract
Thermal imaging is increasingly used in poultry, swine, and dairy animal husbandry to detect disease and distress. In intensive pig production systems, early detection of health and welfare issues is crucial for timely intervention. Using thermal imaging for pig treatment classification can improve animal welfare and promote sustainable pig production. In this paper, we present a depthwise separable inception subnetwork (DISubNet), a lightweight model for classifying four pig treatments. Based on the modified model architecture, we propose two DISubNet versions: DISubNetV1 and DISubNetV2. Our proposed models are compared to other deep learning models commonly employed for image classification. The thermal dataset captured by a forward-looking infrared (FLIR) camera is used to train these models. The experimental results demonstrate that the proposed models for thermal images of various pig treatments outperform other models. In addition, both proposed models achieve approximately 99.96–99.98% classification accuracy with fewer parameters.
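As background on the model's namesake building block: a depthwise separable convolution factorizes a standard convolution into a per-channel depthwise convolution followed by a 1x1 pointwise convolution, which is what makes such models lightweight. The sketch below is a generic PyTorch rendering of that block under assumed layer choices (batch normalization, ReLU), not the authors' exact DISubNet layer.
    import torch.nn as nn

    class DepthwiseSeparableConv(nn.Module):
        # Depthwise conv (groups=in_ch) filters each channel independently;
        # the 1x1 pointwise conv then mixes channels. Compared with a full
        # KxK conv, this cuts parameters roughly by a factor of the kernel area.
        def __init__(self, in_ch, out_ch, kernel_size=3):
            super().__init__()
            self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                       padding=kernel_size // 2,
                                       groups=in_ch, bias=False)
            self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
            self.bn = nn.BatchNorm2d(out_ch)
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            return self.act(self.bn(self.pointwise(self.depthwise(x))))
Stacking such blocks with parallel kernel sizes, inception-style, is one plausible reading of the architecture the name suggests; the reported parameter savings are consistent with this factorization.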
Collapse
Affiliation(s)
- Savina Jassica Colaco
- School of Electronic and Electrical Engineering, Kyungpook National University, Daegu 41566, Republic of Korea; (S.J.C.); (J.H.K.)
| | - Jung Hwan Kim
- School of Electronic and Electrical Engineering, Kyungpook National University, Daegu 41566, Republic of Korea; (S.J.C.); (J.H.K.)
| | - Alwin Poulose
- School of Data Science, Indian Institute of Science Education and Research (IISER), Thiruvananthapuram 695551, India;
| | | | - Dong Seog Han
- School of Electronic and Electrical Engineering, Kyungpook National University, Daegu 41566, Republic of Korea; (S.J.C.); (J.H.K.)
- Correspondence: ; Tel.: +82-53-950-6609
| |
Collapse
|