1. Pérez-Liva M, Alonso de Leciñana M, Gutiérrez-Fernández M, Camacho Sosa Dias J, Cruza JF, Rodríguez-Pardo J, García-Suárez I, Laso-García F, Herraiz JL, Elvira Segura L. Dual photoacoustic/ultrasound technologies for preclinical research: current status and future trends. Phys Med Biol 2025;70:07TR01. PMID: 39914003. DOI: 10.1088/1361-6560/adb368.
Abstract
Photoacoustic (PA) imaging, by integrating optical and ultrasound (US) modalities, combines high spatial resolution with deep tissue penetration, making it a transformative tool in biomedical research. This review presents a comprehensive analysis of the current status of dual PA/US imaging technologies, emphasising their applications in preclinical research. It details advancements in light excitation strategies, including tomographic and microscopic modalities, innovations in pulsed laser and alternative light sources, and US instrumentation. The review further explores preclinical methodologies, encompassing dedicated instrumentation, signal processing, and data analysis techniques essential for PA/US systems. Key applications discussed include the visualisation of blood vessels, micro-circulation, and tissue perfusion; diagnosis and monitoring of inflammation; evaluation of infections, atherosclerosis, burn injuries, healing, and scar formation; assessment of liver and renal diseases; monitoring of epilepsy and neurodegenerative conditions; studies on brain disorders and preeclampsia; cell therapy monitoring; and tumour detection, staging, and recurrence monitoring. Challenges related to imaging depth, resolution, cost, and the translation of contrast agents to clinical practice are analysed, alongside advancements in high-speed acquisition, artificial intelligence-driven reconstruction, and innovative light-delivery methods. While clinical translation remains complex, this review underscores the crucial role of preclinical studies in unravelling fundamental biomedical questions and assessing novel imaging strategies. Ultimately, this review delves into the future trends of dual PA/US imaging, highlighting its potential to bridge preclinical discoveries with clinical applications and drive advances in diagnostics, therapeutic monitoring, and personalised medicine.
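The physical principle behind all of the dual-modality systems surveyed here is the photoacoustic effect: absorbed optical energy produces a local initial pressure rise p0 = Γ·μa·F (Grüneisen parameter × optical absorption coefficient × local fluence), which then propagates as ultrasound and is detected with conventional US hardware. A minimal numerical sketch of this standard relation, using illustrative tissue-like values that are assumptions rather than figures from the review:

```python
import numpy as np

def initial_pa_pressure(grueneisen, mu_a_per_m, fluence_j_per_m2):
    """Standard photoacoustic pressure rise p0 = Gamma * mu_a * F, in Pa
    (1/m * J/m^2 = J/m^3 = Pa)."""
    return grueneisen * mu_a_per_m * fluence_j_per_m2

# Illustrative values (assumptions, not from the review):
gamma = 0.2      # dimensionless Grueneisen parameter of soft tissue
mu_a = 20.0      # optical absorption coefficient, 1/m (i.e. 0.2 1/cm)
fluence = 100.0  # local fluence, J/m^2 (10 mJ/cm^2, a typical surface-level value)

p0 = initial_pa_pressure(gamma, mu_a, fluence)  # initial pressure rise in Pa
```

With these numbers the initial pressure is a few hundred pascals, which is why sensitive broadband US detection is central to every system discussed below.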
Affiliation(s)
- Mailyn Pérez-Liva
- IPARCOS Institute and EMFTEL Department, Universidad Complutense de Madrid, Pl. de las Ciencias, 1, Moncloa-Aravaca, Madrid 28040, Spain
- Health Research Institute of the Hospital Clínico San Carlos, IdISSC, C/ Profesor Martín Lagos s/n, Madrid 28040, Spain
- María Alonso de Leciñana
- Department of Neurology and Stroke Centre, Neurological Sciences and Cerebrovascular Research Laboratory, Neurology and Cerebrovascular Disease Group, Neuroscience Area, Hospital La Paz Institute for Health Research-IdiPAZ (La Paz University Hospital, Universidad Autónoma de Madrid), Madrid, Spain
- María Gutiérrez-Fernández
- Department of Neurology and Stroke Centre, Neurological Sciences and Cerebrovascular Research Laboratory, Neurology and Cerebrovascular Disease Group, Neuroscience Area, Hospital La Paz Institute for Health Research-IdiPAZ (La Paz University Hospital, Universidad Autónoma de Madrid), Madrid, Spain
- Jorge Camacho Sosa Dias
- Instituto de Tecnologías Físicas y de la Información (ITEFI, CSIC), Serrano 144, Madrid 28006, Spain
- Jorge F Cruza
- Instituto de Tecnologías Físicas y de la Información (ITEFI, CSIC), Serrano 144, Madrid 28006, Spain
- Jorge Rodríguez-Pardo
- Department of Neurology and Stroke Centre, Neurological Sciences and Cerebrovascular Research Laboratory, Neurology and Cerebrovascular Disease Group, Neuroscience Area, Hospital La Paz Institute for Health Research-IdiPAZ (La Paz University Hospital, Universidad Autónoma de Madrid), Madrid, Spain
- Iván García-Suárez
- Department of Neurology and Stroke Centre, Neurological Sciences and Cerebrovascular Research Laboratory, Neurology and Cerebrovascular Disease Group, Neuroscience Area, Hospital La Paz Institute for Health Research-IdiPAZ (La Paz University Hospital, Universidad Autónoma de Madrid), Madrid, Spain
- Department of Emergency Service, San Agustín University Hospital, Asturias, Spain
- Fernando Laso-García
- Department of Neurology and Stroke Centre, Neurological Sciences and Cerebrovascular Research Laboratory, Neurology and Cerebrovascular Disease Group, Neuroscience Area, Hospital La Paz Institute for Health Research-IdiPAZ (La Paz University Hospital, Universidad Autónoma de Madrid), Madrid, Spain
- Joaquin L Herraiz
- IPARCOS Institute and EMFTEL Department, Universidad Complutense de Madrid, Pl. de las Ciencias, 1, Moncloa-Aravaca, Madrid 28040, Spain
- Health Research Institute of the Hospital Clínico San Carlos, IdISSC, C/ Profesor Martín Lagos s/n, Madrid 28040, Spain
- Luis Elvira Segura
- Instituto de Tecnologías Físicas y de la Información (ITEFI, CSIC), Serrano 144, Madrid 28006, Spain
2. Juhong A, Li B, Liu Y, Yao CY, Yang CW, Agnew DW, Lei YL, Luker GD, Bumpers H, Huang X, Piyawattanametha W, Qiu Z. Recurrent and convolutional neural networks for sequential multispectral optoacoustic tomography (MSOT) imaging. J Biophotonics 2023;16:e202300142. PMID: 37382181. DOI: 10.1002/jbio.202300142.
Abstract
Multispectral optoacoustic tomography (MSOT) is a valuable technique for diagnosing and analyzing biological samples since it provides detailed anatomical and physiological information. However, acquiring volumetric MSOT with high through-plane resolution is time-consuming. Here, we propose a deep learning model based on hybrid recurrent and convolutional neural networks to generate sequential cross-sectional images for an MSOT system. This system provides three modalities (MSOT, ultrasound, and optoacoustic imaging of a specific exogenous contrast agent) in a single scan. This study used ICG-conjugated nanoworm particles (NWs-ICG) as the contrast agent. Instead of acquiring seven images with a step size of 0.1 mm, we can acquire two images with a step size of 0.6 mm as input to the proposed deep learning model, which then generates the five intermediate images at 0.1 mm spacing, reducing acquisition time by approximately 71%.
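The acquisition saving claimed above, together with the naive linear-interpolation baseline such a network would be compared against, can be sketched as follows (synthetic 4×4 "slices"; the actual model is a recurrent-convolutional network and is not reproduced here):

```python
import numpy as np

def linear_slice_interpolation(slice_a, slice_b, n_between=5):
    """Naive baseline: linearly blend two cross-sections acquired 0.6 mm apart
    into n_between intermediate slices at 0.1 mm spacing."""
    weights = np.linspace(0.0, 1.0, n_between + 2)[1:-1]  # drop the two endpoints
    return [(1 - w) * slice_a + w * slice_b for w in weights]

a = np.zeros((4, 4))   # stand-in for the slice at z = 0.0 mm
b = np.ones((4, 4))    # stand-in for the slice at z = 0.6 mm
mids = linear_slice_interpolation(a, b)

# 2 acquired slices stand in for 7, so 5 of 7 acquisitions are skipped:
saving = 1 - 2 / 7     # ~0.714, i.e. the ~71% reduction quoted in the abstract
```

The learned model replaces the linear blend with a data-driven mapping, but the slice bookkeeping and the 5/7 time-saving arithmetic are exactly this.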
Affiliation(s)
- Aniwat Juhong
- Department of Electrical and Computer Engineering, Michigan State University, East Lansing, Michigan, USA
- Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, Michigan, USA
- Bo Li
- Department of Electrical and Computer Engineering, Michigan State University, East Lansing, Michigan, USA
- Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, Michigan, USA
- Yifan Liu
- Department of Electrical and Computer Engineering, Michigan State University, East Lansing, Michigan, USA
- Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, Michigan, USA
- Cheng-You Yao
- Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, Michigan, USA
- Department of Biomedical Engineering, Michigan State University, East Lansing, Michigan, USA
- Chia-Wei Yang
- Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, Michigan, USA
- Department of Chemistry, Michigan State University, East Lansing, Michigan, USA
- Dalen W Agnew
- Department of Pathobiology and Diagnostic Investigation, College of Veterinary Medicine, Michigan State University, East Lansing, Michigan, USA
- Yu Leo Lei
- Department of Periodontics and Oral Medicine, University of Michigan, Ann Arbor, Michigan, USA
- Gary D Luker
- Department of Radiology, Microbiology and Immunology, and Biomedical Engineering, University of Michigan, Ann Arbor, Michigan, USA
- Harvey Bumpers
- Department of Surgery, Michigan State University, East Lansing, Michigan, USA
- Xuefei Huang
- Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, Michigan, USA
- Department of Biomedical Engineering, Michigan State University, East Lansing, Michigan, USA
- Department of Chemistry, Michigan State University, East Lansing, Michigan, USA
- Wibool Piyawattanametha
- Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, Michigan, USA
- Department of Biomedical Engineering, School of Engineering, King Mongkut's Institute of Technology Ladkrabang (KMITL), Bangkok, Thailand
- Zhen Qiu
- Department of Electrical and Computer Engineering, Michigan State University, East Lansing, Michigan, USA
- Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, Michigan, USA
- Department of Biomedical Engineering, Michigan State University, East Lansing, Michigan, USA
3. Alaie S, Al’Aref SJ. Application of deep neural networks for inferring pressure in polymeric acoustic transponders/sensors. Mach Learn Appl 2023;13:100477. PMID: 38037627. PMCID: PMC10688392. DOI: 10.1016/j.mlwa.2023.100477.
Abstract
Passive sensor-transponders have attracted interest over the last few decades owing to their capability for low-cost remote monitoring without the need for energy storage. Their operating principle consists of receiving a signal from a source and then reflecting it. While well-established transponders operate through electromagnetic antennas, fully acoustic designs have advantages such as lower cost and simplicity. Detection of pressure from the ultrasound signal backscattered by an acoustic resonator has therefore attracted recent interest. To infer pressure from the backscattered signal, the established approach detects the shift in the resonance frequency. Nevertheless, regressing pressure from the signal with small error is challenging and remains an active research topic. Here we explore an approach that employs deep learning to infer pressure from the ultrasound reflections of polymeric resonators. We assess whether neural network regressors can efficiently infer pressure from a fully acoustic transponder. For this purpose, we compare the performance of several regressors: a convolutional neural network, a ResNet-inspired network, and a fully connected neural network. We observe that deep neural networks are advantageous in inferring pressure with minimal signal analysis. Our work suggests that a deep learning approach has the potential to complement or replace traditional approaches for inferring pressure from an ultrasound signal reflected from fully acoustic transponders or passive sensors.
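The traditional frequency-shift pipeline that the deep regressors are benchmarked against can be sketched as follows: simulate a damped ringdown, locate its spectral peak, and invert an assumed linear frequency-pressure sensitivity. All values here (resonance frequency, sensitivity, decay time) are illustrative assumptions, not the paper's:

```python
import numpy as np

def peak_frequency(signal, fs):
    """Locate the dominant spectral component of a backscattered ringdown (Hz)."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    return freqs[np.argmax(spectrum)]

fs = 10e6                        # 10 MHz sampling rate
t = np.arange(0, 1e-3, 1 / fs)   # 1 ms record (10000 samples)
f0 = 200e3                       # hypothetical unloaded resonance, Hz
df_per_kpa = 50.0                # hypothetical linear sensitivity, Hz per kPa
pressure_kpa = 40.0              # "true" applied pressure

# Damped ringdown whose resonance is shifted by the applied pressure:
ringdown = np.exp(-t / 2e-4) * np.sin(2 * np.pi * (f0 + df_per_kpa * pressure_kpa) * t)

f_meas = peak_frequency(ringdown, fs)
inferred_kpa = (f_meas - f0) / df_per_kpa  # invert the assumed linear map
```

The 1 kHz FFT bin spacing here translates to a 20 kPa quantization step, which illustrates why regressing pressure directly from the raw signal, as the paper does with neural networks, can outperform naive peak picking.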
Affiliation(s)
- Seyedhamidreza Alaie
- Department of Mechanical & Aerospace Engineering, New Mexico State University, Las Cruces, NM, USA
- Subhi J. Al’Aref
- Department of Internal Medicine, Division of Cardiovascular Medicine, University of Arkansas for Medical Sciences, Little Rock, AR, USA
4. Goudarzi S, Whyte J, Boily M, Towers A, Kilgour RD, Rivaz H. Segmentation of Arm Ultrasound Images in Breast Cancer-Related Lymphedema: A Database and Deep Learning Algorithm. IEEE Trans Biomed Eng 2023;70:2552-2563. PMID: 37028332. DOI: 10.1109/tbme.2023.3253646.
Abstract
OBJECTIVE Breast cancer treatment often causes removal of or damage to lymph nodes of the patient's lymphatic drainage system. This side effect is the origin of Breast Cancer-Related Lymphedema (BCRL), a noticeable increase in excess arm volume. Ultrasound imaging is a preferred modality for the diagnosis and progression monitoring of BCRL because of its low cost, safety, and portability. As the affected and unaffected arms look similar in B-mode ultrasound images, the thicknesses of the skin, subcutaneous fat, and muscle have been shown to be important biomarkers for this task. The segmentation masks are also helpful in monitoring longitudinal changes in the morphology and mechanical properties of tissue layers. METHODS For the first time, a publicly available ultrasound dataset containing the Radio-Frequency (RF) data of 39 subjects and manual segmentation masks by two experts is provided. Inter- and intra-observer reproducibility studies performed on the segmentation maps show high Dice Score Coefficients (DSC) of 0.94±0.08 and 0.92±0.06, respectively. A Gated Shape Convolutional Neural Network (GSCNN) is modified for precise automatic segmentation of tissue layers, and its generalization performance is improved by the CutMix augmentation strategy. RESULTS We obtained an average DSC of 0.87±0.11 on the test set, confirming the high performance of the method. CONCLUSION Automatic segmentation can pave the way for convenient and accessible staging of BCRL, and our dataset can facilitate the development and validation of such methods. SIGNIFICANCE Timely diagnosis and treatment of BCRL are crucial for preventing irreversible damage.
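The Dice Score Coefficient (DSC) reported throughout this study is straightforward to compute for binary masks; a minimal sketch:

```python
import numpy as np

def dice_score(mask_a, mask_b):
    """Dice Score Coefficient 2|A∩B| / (|A| + |B|) for two binary masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
ref  = np.array([[1, 0, 0],
                 [0, 1, 1]])
score = dice_score(pred, ref)  # 2*2 / (3 + 3) = 0.666...
```

A DSC of 0.94 between two expert annotators, as reported above, therefore means their masks overlap almost entirely relative to their combined area.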
5. Mamalakis M, Garg P, Nelson T, Lee J, Swift AJ, Wild JM, Clayton RH. Artificial Intelligence framework with traditional computer vision and deep learning approaches for optimal automatic segmentation of left ventricle with scar. Artif Intell Med 2023;143:102610. PMID: 37673578. DOI: 10.1016/j.artmed.2023.102610.
Abstract
Automatic segmentation of the cardiac left ventricle with scars remains a challenging and clinically significant task, as it is essential for patient diagnosis and treatment pathways. This study aimed to develop a novel framework and cost function to achieve optimal automatic segmentation of the left ventricle with scars using LGE-MRI images. To ensure the generalization of the framework, an unbiased validation protocol was established using out-of-distribution (OOD) internal and external validation cohorts, and intra-observer and inter-observer variability ground truths. The framework combines traditional computer vision techniques and deep learning to achieve optimal segmentation results. The traditional approach uses multi-atlas techniques, active contours, and k-means methods, while the deep learning approach utilizes a range of network architectures. The study found that the traditional computer vision technique delivered more accurate results than deep learning, except in cases with breath misalignment error. The optimal solution of the framework achieved robust and generalized results with Dice scores of 82.8 ± 6.4% and 72.1 ± 4.6% in the internal and external OOD cohorts, respectively. The developed framework offers a high-performance solution for automatic segmentation of the left ventricle with scars using LGE-MRI. Unlike existing state-of-the-art approaches, it achieves unbiased results across different hospitals and vendors without the need for training or tuning on hospital cohorts. This framework offers a valuable tool for experts to accomplish fully automatic segmentation of the left ventricle with scars from a single-modality cardiac scan.
Affiliation(s)
- Michail Mamalakis
- Insigneo Institute for in silico Medicine, University of Sheffield, Sheffield, S1 4DP, UK; Department of Computer Science, University of Sheffield, Regent Court, Sheffield, S1 4DP, UK
- Pankaj Garg
- Department of Cardiology, Sheffield Teaching Hospitals, Sheffield S5 7AU, UK
- Tom Nelson
- Department of Cardiology, Sheffield Teaching Hospitals, Sheffield S5 7AU, UK
- Justin Lee
- Department of Cardiology, Sheffield Teaching Hospitals, Sheffield S5 7AU, UK
- Andrew J Swift
- Department of Computer Science, University of Sheffield, Regent Court, Sheffield, S1 4DP, UK; Department of Infection, Immunity & Cardiovascular Disease, University of Sheffield, Sheffield, UK
- James M Wild
- Insigneo Institute for in silico Medicine, University of Sheffield, Sheffield, S1 4DP, UK; POLARIS, Imaging Sciences, Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield, UK
- Richard H Clayton
- Insigneo Institute for in silico Medicine, University of Sheffield, Sheffield, S1 4DP, UK; Department of Computer Science, University of Sheffield, Regent Court, Sheffield, S1 4DP, UK
6. De Rosa L, L’Abbate S, Kusmic C, Faita F. Applications of Deep Learning Algorithms to Ultrasound Imaging Analysis in Preclinical Studies on In Vivo Animals. Life (Basel) 2023;13:1759. PMID: 37629616. PMCID: PMC10455134. DOI: 10.3390/life13081759.
Abstract
BACKGROUND AND AIM Ultrasound (US) imaging is increasingly preferred over other, more invasive modalities in preclinical studies using animal models. However, the technique has limitations, mainly related to operator dependence. To overcome some of the current drawbacks, sophisticated data processing models have been proposed, in particular artificial intelligence models based on deep learning (DL) networks. This systematic review aims to provide an overview of the application of DL algorithms to US image analysis in in vivo preclinical studies on animal models. METHODS A literature search was conducted using the Scopus and PubMed databases. Studies published from January 2012 to November 2022 that developed DL models on US images acquired in preclinical/animal experimental scenarios were eligible for inclusion. The review was conducted according to PRISMA guidelines. RESULTS Fifty-six studies were included and classified into five groups based on the anatomical district in which the DL models were used. Sixteen studies focused on the cardiovascular system and fourteen on the abdominal organs. Five studies applied DL networks to images of the musculoskeletal system and eight involved the brain. Thirteen papers, grouped under a miscellaneous category, proposed heterogeneous applications of DL systems. Our analysis also highlighted that murine models were the most common animals used in in vivo studies applying DL to US imaging. CONCLUSION DL techniques show great potential for US images acquired in preclinical studies using animal models. However, these techniques are still at an early stage in this scenario, and there is room for improvement, particularly regarding sample sizes, data preprocessing, and model interpretability.
Affiliation(s)
- Laura De Rosa
- Institute of Clinical Physiology, National Research Council (CNR), 56124 Pisa, Italy
- Department of Information Engineering and Computer Science, University of Trento, 38123 Trento, Italy
- Serena L’Abbate
- Institute of Life Sciences, Scuola Superiore Sant’Anna, 56124 Pisa, Italy
- Claudia Kusmic
- Institute of Clinical Physiology, National Research Council (CNR), 56124 Pisa, Italy
- Francesco Faita
- Institute of Clinical Physiology, National Research Council (CNR), 56124 Pisa, Italy
7. Bashkanov O, Rak M, Meyer A, Engelage L, Lumiani A, Muschter R, Hansen C. Automatic detection of prostate cancer grades and chronic prostatitis in biparametric MRI. Comput Methods Programs Biomed 2023;239:107624. PMID: 37271051. DOI: 10.1016/j.cmpb.2023.107624.
Abstract
BACKGROUND AND OBJECTIVE With emerging evidence to improve prostate cancer (PCa) screening, multiparametric magnetic resonance imaging of the prostate is becoming an essential noninvasive component of the diagnostic routine. Computer-aided diagnostic (CAD) tools powered by deep learning can help radiologists interpret multiple volumetric images. In this work, our objective was to examine promising methods recently proposed for the multigrade prostate cancer detection task and to suggest practical considerations for model training in this context. METHODS We collected 1647 fine-grained biopsy-confirmed findings, including Gleason scores and prostatitis, to form a training dataset. In our experimental framework for lesion detection, all models utilized the 3D nnU-Net architecture, which accounts for anisotropy in the MRI data. First, we explored an optimal range of b-values for the diffusion-weighted imaging (DWI) modality and its effect on the deep learning-based detection of clinically significant prostate cancer (csPCa) and prostatitis, as the optimal range is not yet clearly defined in this domain. Next, we proposed a simulated multimodal shift as a data augmentation technique to compensate for the multimodal shift present in the data. Third, we studied the effect of incorporating the prostatitis class alongside cancer-related findings at three granularities of the prostate cancer class (coarse, medium, and fine) and its impact on the detection rate of the target csPCa. Furthermore, ordinal and one-hot encoded (OHE) output formulations were tested. RESULTS An optimal model configuration with fine class granularity (prostatitis included) and OHE achieved a lesion-wise partial Free-Response Receiver Operating Characteristic (FROC) area under the curve (AUC) of 1.94 (95% CI: 1.76-2.11) and a patient-wise ROC AUC of 0.874 (95% CI: 0.793-0.938) in the detection of csPCa. Inclusion of the auxiliary prostatitis class demonstrated a stable relative improvement in specificity at a false positive rate (FPR) of 1.0 per patient, with increases of 3%, 7%, and 4% for the coarse, medium, and fine class granularities, respectively. CONCLUSIONS This paper examines several configurations for model training in the biparametric MRI setup and proposes optimal value ranges. It also shows that the fine-grained class configuration, including prostatitis, is beneficial for detecting csPCa. The ability to detect prostatitis alongside low-risk cancer lesions suggests the potential to improve the quality of early diagnosis of prostate diseases, and implies improved interpretability of the results by the radiologist.
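The two output formulations compared above differ only in how a grade index becomes a target vector: one-hot encoding activates a single entry per class, while a cumulative ordinal encoding marks every threshold below the true grade, so higher grades "contain" lower ones. A minimal sketch (the four-class grade map is illustrative, not the paper's exact configuration):

```python
import numpy as np

def one_hot(grade, n_classes):
    """OHE target: exactly one active entry for the given grade index."""
    v = np.zeros(n_classes)
    v[grade] = 1.0
    return v

def ordinal(grade, n_classes):
    """Cumulative ordinal target of length n_classes-1:
    entry k is 1 if grade > k, encoding the class ordering."""
    return (np.arange(n_classes - 1) < grade).astype(float)

# 4 ordered classes, e.g. benign < prostatitis < low-grade < high-grade (illustrative)
oh = one_hot(2, 4)   # [0, 0, 1, 0]
od = ordinal(2, 4)   # [1, 1, 0]
```

Under the ordinal formulation a mistake between adjacent grades flips only one target bit, which is why it is a natural candidate for graded findings even though the paper found OHE optimal here.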
Affiliation(s)
- Oleksii Bashkanov
- Faculty of Computer Science and Research Campus STIMULATE, University of Magdeburg, Universitätsplatz 2, Magdeburg 39106, Germany
- Marko Rak
- Faculty of Computer Science and Research Campus STIMULATE, University of Magdeburg, Universitätsplatz 2, Magdeburg 39106, Germany
- Anneke Meyer
- Faculty of Computer Science and Research Campus STIMULATE, University of Magdeburg, Universitätsplatz 2, Magdeburg 39106, Germany
- Christian Hansen
- Faculty of Computer Science and Research Campus STIMULATE, University of Magdeburg, Universitätsplatz 2, Magdeburg 39106, Germany
8. Ren W, Deán-Ben XL, Skachokova Z, Augath MA, Ni R, Chen Z, Razansky D. Monitoring mouse brain perfusion with hybrid magnetic resonance optoacoustic tomography. Biomed Opt Express 2023;14:1192-1204. PMID: 36950237. PMCID: PMC10026577. DOI: 10.1364/boe.482205.
Abstract
Progress in brain research critically depends on the development of next-generation multi-modal imaging tools capable of capturing transient functional events and multiplexed contrasts noninvasively and concurrently, thus enabling a holistic view of dynamic events in vivo. Here we report on a hybrid magnetic resonance and optoacoustic tomography (MROT) system for murine brain imaging, which incorporates an MR-compatible spherical matrix array transducer and fiber-based light illumination into a 9.4 T small-animal scanner. An optimized radiofrequency coil was further devised for whole-brain interrogation. The system's utility is showcased by acquiring complementary angiographic and soft-tissue anatomical contrast along with simultaneous dual-modality visualization of contrast agent dynamics in vivo.
Affiliation(s)
- Wuwei Ren
- Institute for Biomedical Engineering and Institute of Pharmacology and Toxicology, Faculty of Medicine, University of Zurich, Zurich 8052, Switzerland
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Zurich 8093, Switzerland
- Present address: School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- equal contribution
- Xosé Luís Deán-Ben
- Institute for Biomedical Engineering and Institute of Pharmacology and Toxicology, Faculty of Medicine, University of Zurich, Zurich 8052, Switzerland
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Zurich 8093, Switzerland
- equal contribution
- Zhiva Skachokova
- Institute for Biomedical Engineering and Institute of Pharmacology and Toxicology, Faculty of Medicine, University of Zurich, Zurich 8052, Switzerland
- Mark-Aurel Augath
- Institute for Biomedical Engineering and Institute of Pharmacology and Toxicology, Faculty of Medicine, University of Zurich, Zurich 8052, Switzerland
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Zurich 8093, Switzerland
- Ruiqing Ni
- Institute for Biomedical Engineering and Institute of Pharmacology and Toxicology, Faculty of Medicine, University of Zurich, Zurich 8052, Switzerland
- Zurich Neuroscience Center, University of Zurich and ETH Zurich, Zurich 8093, Switzerland
- Institute for Regenerative Medicine, Faculty of Medicine, University of Zurich, Zurich 8952, Switzerland
- Zhenyue Chen
- Institute for Biomedical Engineering and Institute of Pharmacology and Toxicology, Faculty of Medicine, University of Zurich, Zurich 8052, Switzerland
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Zurich 8093, Switzerland
- Daniel Razansky
- Institute for Biomedical Engineering and Institute of Pharmacology and Toxicology, Faculty of Medicine, University of Zurich, Zurich 8052, Switzerland
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Zurich 8093, Switzerland
- Zurich Neuroscience Center, University of Zurich and ETH Zurich, Zurich 8093, Switzerland
9. Jin G, Zhu H, Jiang D, Li J, Su L, Li J, Gao F, Cai X. A Signal-Domain Object Segmentation Method for Ultrasound and Photoacoustic Computed Tomography. IEEE Trans Ultrason Ferroelectr Freq Control 2023;70:253-265. PMID: 37015663. DOI: 10.1109/tuffc.2022.3232174.
Abstract
Image segmentation is important for improving the diagnostic capability of ultrasound computed tomography (USCT) and photoacoustic computed tomography (PACT), as it can be included in the image reconstruction process to improve image quality and quantification. Segmenting the imaged object from the background using image-domain methods is easily complicated by low contrast, noise, and artifacts in the reconstructed image. Here, we introduce a new signal-domain object segmentation method for USCT and PACT that does not require prior image reconstruction and is automatic, robust, computationally efficient, accurate, and straightforward. We first establish the relationship between the time-of-flight (TOF) of the received first-arrival waves and the object's boundary, which is described by ellipse equations. Then, we show that the ellipses are tangent to the boundary. By looking for tangent points on the common tangent of neighboring ellipses, the boundary can be approximated with high fidelity. Imaging experiments on human fingers and mouse cross-sections showed that our method provided segmentations equivalent to or better than the optimal ones obtained by active contours. In summary, our method greatly reduces the overall complexity of object segmentation and shows great potential for eliminating user dependency without sacrificing segmentation accuracy. The method can be seamlessly incorporated into algorithms for other processing purposes in USCT and PACT, such as high-quality image reconstruction.
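In the photoacoustic case the two ellipse foci coincide, so each first-arrival TOF constrains the boundary to lie outside (and tangent to) a circle centred on the detector, and boundary points can be approximated from intersections of neighbouring circles, in the spirit of the tangency argument above. A toy sketch for a centred circular object in a ring array (geometry and sound speed are illustrative; this is not the paper's implementation, which handles general ellipses):

```python
import numpy as np

def circle_intersections(c1, r1, c2, r2):
    """Both intersection points of two circles assumed to intersect."""
    d = np.linalg.norm(c2 - c1)
    a = (d**2 + r1**2 - r2**2) / (2 * d)    # distance from c1 to the chord
    h = np.sqrt(r1**2 - a**2)               # half-length of the chord
    mid = c1 + a * (c2 - c1) / d
    perp = np.array([-(c2 - c1)[1], (c2 - c1)[0]]) / d
    return mid + h * perp, mid - h * perp

# Toy scene: circular object of radius 2 centred in a ring array of radius 5.
R, r_obj, n_det, c = 5.0, 2.0, 64, 1.0      # c = speed of sound (arbitrary units)
angles = np.linspace(0, 2 * np.pi, n_det, endpoint=False)
detectors = np.stack([R * np.cos(angles), R * np.sin(angles)], axis=1)
tof = np.full(n_det, (R - r_obj) / c)       # first arrival: nearest boundary point

boundary = []
for i in range(n_det):
    j = (i + 1) % n_det
    p, q = circle_intersections(detectors[i], c * tof[i], detectors[j], c * tof[j])
    # keep the intersection nearer the array centre: it approximates the tangent point
    boundary.append(p if np.linalg.norm(p) < np.linalg.norm(q) else q)
boundary = np.array(boundary)               # points land close to the radius-2 circle
```

With 64 detectors the recovered points sit within a few thousandths of the true radius, illustrating how densely sampled TOF circles trace out the boundary without any image reconstruction.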
10. Hu Y, Lafci B, Luzgin A, Wang H, Klohs J, Dean-Ben XL, Ni R, Razansky D, Ren W. Deep learning facilitates fully automated brain image registration of optoacoustic tomography and magnetic resonance imaging. Biomed Opt Express 2022;13:4817-4833. PMID: 36187259. PMCID: PMC9484422. DOI: 10.1364/boe.458182.
Abstract
Multispectral optoacoustic tomography (MSOT) is an emerging optical imaging method providing multiplexed molecular and functional information from the rodent brain. It can be greatly augmented by magnetic resonance imaging (MRI), which offers excellent soft-tissue contrast and high-resolution brain anatomy. Nevertheless, registration of MSOT-MRI images remains challenging, chiefly due to the entirely different image contrast rendered by the two modalities. Previously reported registration algorithms mostly relied on manual, user-dependent brain segmentation, which compromised data interpretation and quantification. Here we propose a fully automated registration method for MSOT-MRI multimodal imaging empowered by deep learning. The automated workflow includes neural network-based image segmentation to generate suitable masks, which are subsequently registered using an additional neural network. The performance of the algorithm is showcased with datasets acquired by cross-sectional MSOT and high-field MRI preclinical scanners. The automated registration method is further validated against manual and semi-automated registration, demonstrating its robustness and accuracy.
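A classical, fully deterministic baseline for the mask-registration step is moment matching: once segmentation masks exist, aligning their centroids (and optionally their principal axes) gives a rigid initialization. A minimal translation-only sketch on synthetic binary masks (the paper itself uses a neural network for this step; this baseline is for illustration only):

```python
import numpy as np

def centroid(mask):
    """Centre of mass of a binary mask in (row, col) coordinates."""
    rows, cols = np.nonzero(mask)
    return np.array([rows.mean(), cols.mean()])

def register_by_centroid(moving, fixed):
    """Integer translation aligning the moving mask's centroid to the fixed one."""
    shift = np.round(centroid(fixed) - centroid(moving)).astype(int)
    return np.roll(moving, shift, axis=(0, 1)), shift

fixed = np.zeros((32, 32))
fixed[10:20, 12:22] = 1                          # stand-in for an MRI brain mask
moving = np.roll(fixed, (5, -3), axis=(0, 1))    # same shape, displaced (MSOT mask)
aligned, shift = register_by_centroid(moving, fixed)
```

Such moment-based alignment fails when the two modalities segment visibly different structures, which is the motivation for learning the registration end-to-end as done above.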
Affiliation(s)
- Yexing Hu
- School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- contributed equally
| | - Berkan Lafci
- Institute of Pharmacology and Toxicology and Institute for Biomedical Engineering, Faculty of Medicine, University of Zurich, Zurich 8052, Switzerland
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Zurich 8093, Switzerland
- contributed equally
| | - Artur Luzgin
- Institute of Pharmacology and Toxicology and Institute for Biomedical Engineering, Faculty of Medicine, University of Zurich, Zurich 8052, Switzerland
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Zurich 8093, Switzerland
| | - Hao Wang
- Institute of Pharmacology and Toxicology and Institute for Biomedical Engineering, Faculty of Medicine, University of Zurich, Zurich 8052, Switzerland
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Zurich 8093, Switzerland
| | - Jan Klohs
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Zurich 8093, Switzerland
| | - Xose Luis Dean-Ben
- Institute of Pharmacology and Toxicology and Institute for Biomedical Engineering, Faculty of Medicine, University of Zurich, Zurich 8052, Switzerland
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Zurich 8093, Switzerland
| | - Ruiqing Ni
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Zurich 8093, Switzerland
- Institute for Regenerative Medicine, University of Zurich, Zurich 8952, Switzerland
| | - Daniel Razansky
- Institute of Pharmacology and Toxicology and Institute for Biomedical Engineering, Faculty of Medicine, University of Zurich, Zurich 8052, Switzerland
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Zurich 8093, Switzerland
| | - Wuwei Ren
- School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
| |
Collapse
|
11
|
Schellenberg M, Dreher KK, Holzwarth N, Isensee F, Reinke A, Schreck N, Seitel A, Tizabi MD, Maier-Hein L, Gröhl J. Semantic segmentation of multispectral photoacoustic images using deep learning. PHOTOACOUSTICS 2022; 26:100341. [PMID: 35371919 PMCID: PMC8968659 DOI: 10.1016/j.pacs.2022.100341] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/09/2021] [Revised: 02/15/2022] [Accepted: 02/20/2022] [Indexed: 05/08/2023]
Abstract
Photoacoustic (PA) imaging has the potential to revolutionize functional medical imaging in healthcare due to the valuable information on tissue physiology contained in multispectral photoacoustic measurements. Clinical translation of the technology requires conversion of the high-dimensional acquired data into clinically relevant and interpretable information. In this work, we present a deep learning-based approach to semantic segmentation of multispectral photoacoustic images to facilitate image interpretability. Manually annotated photoacoustic and ultrasound imaging data are used as reference and enable the training of a deep learning-based segmentation algorithm in a supervised manner. Based on a validation study with experimentally acquired data from 16 healthy human volunteers, we show that automatic tissue segmentation can be used to create powerful analyses and visualizations of multispectral photoacoustic images. Due to the intuitive representation of high-dimensional information, such a preprocessing algorithm could be a valuable means to facilitate the clinical translation of photoacoustic imaging.
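A trained deep segmentation network is beyond a sketch, but the core idea, assigning each pixel a tissue class from its wavelength response, can be illustrated with a naive nearest-spectrum baseline. This is illustrative only; the paper uses a supervised deep network, and the reference spectra below are made up.

```python
import numpy as np

def spectral_nearest_mean(stack, ref_spectra):
    """Per-pixel baseline: label each pixel with the tissue class whose
    reference wavelength spectrum is closest (L2, after normalisation)
    to the pixel's multispectral response.
    stack: (n_wavelengths, H, W); ref_spectra: (n_classes, n_wavelengths)."""
    nw, h, w = stack.shape
    pix = stack.reshape(nw, -1).T                                   # (H*W, nw)
    pix = pix / (np.linalg.norm(pix, axis=1, keepdims=True) + 1e-12)
    ref = ref_spectra / (np.linalg.norm(ref_spectra, axis=1, keepdims=True) + 1e-12)
    dist = ((pix[:, None, :] - ref[None, :, :]) ** 2).sum(axis=-1)  # (H*W, n_classes)
    return dist.argmin(axis=1).reshape(h, w)
```

Normalising each pixel's spectrum makes the labelling insensitive to overall fluence scaling, which is one reason spectral shape rather than raw amplitude carries the class information.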
Affiliation(s)
- Melanie Schellenberg
- Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany
- HIDSS4Health - Helmholtz Information and Data Science School for Health, Heidelberg, Germany
- Kris K. Dreher
- Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Physics and Astronomy, Heidelberg University, Heidelberg, Germany
- Niklas Holzwarth
- Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Fabian Isensee
- HI Applied Computer Vision Lab, Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Annika Reinke
- Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany
- HI Applied Computer Vision Lab, Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Nicholas Schreck
- Division of Biostatistics, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Alexander Seitel
- Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Minu D. Tizabi
- Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Lena Maier-Hein
- Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany
- HIDSS4Health - Helmholtz Information and Data Science School for Health, Heidelberg, Germany
- HI Applied Computer Vision Lab, Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Medical Faculty, Heidelberg University, Heidelberg, Germany
- Janek Gröhl
- Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany

12
Deep-Learning-Based Algorithm for the Removal of Electromagnetic Interference Noise in Photoacoustic Endoscopic Image Processing. SENSORS 2022; 22:s22103961. [PMID: 35632370 PMCID: PMC9147354 DOI: 10.3390/s22103961] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/21/2022] [Revised: 05/18/2022] [Accepted: 05/21/2022] [Indexed: 12/10/2022]
Abstract
Despite all the expectations for photoacoustic endoscopy (PAE), there are still several technical issues that must be resolved before the technique can be successfully translated into clinics. Among these, electromagnetic interference (EMI) noise, in addition to the limited signal-to-noise ratio (SNR), has hindered the rapid development of related technologies. Unlike endoscopic ultrasound, in which the SNR can be increased by simply applying a higher pulsing voltage, there is a fundamental limitation in leveraging the SNR of PAE signals because it is mostly determined by the applied optical pulse energy, which must remain within safety limits. Moreover, a typical PAE hardware situation requires a wide separation between the ultrasonic sensor and the amplifier, meaning that it is not easy to build an ideal PAE system that would be unaffected by EMI noise. With the intention of expediting the progress of related research, in this study, we investigated the feasibility of deep-learning-based EMI noise removal in PAE image processing. In particular, we selected four fully convolutional neural network architectures, U-Net, SegNet, FCN-16s, and FCN-8s, and observed that a modified U-Net architecture outperformed the others in EMI noise removal. Classical filter methods were also compared to confirm the superiority of the deep-learning-based approach. Using the modified U-Net, we successfully produced a denoised 3D vasculature map that could even depict the mesh-like capillary networks distributed in the wall of a rat colorectum.
As the development of a low-cost laser diode or LED-based photoacoustic tomography (PAT) system is now emerging as one of the important topics in PAT, we expect that the presented AI strategy for the removal of EMI noise could be broadly applicable to many areas of PAT, in which the ability to apply a hardware-based prevention method is limited and thus EMI noise appears more prominently due to poor SNR.
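The classical filters that the deep network is compared against can be illustrated with a simple running-median despiking baseline for impulsive EMI-like noise on a single A-line. This is a generic sketch under the assumption of spike-like interference, not the specific filters evaluated in the paper; the kernel size and threshold are arbitrary.

```python
import numpy as np

def median_despike(a_line, kernel=5, thresh=4.0):
    """Replace samples that deviate strongly from a running median
    (impulsive EMI-like spikes) with the local median value."""
    pad = kernel // 2
    padded = np.pad(a_line, pad, mode="edge")
    med = np.array([np.median(padded[i:i + kernel]) for i in range(len(a_line))])
    resid = a_line - med
    sigma = np.median(np.abs(resid)) / 0.6745 + 1e-12  # robust noise scale (MAD)
    out = a_line.copy()
    spikes = np.abs(resid) > thresh * sigma
    out[spikes] = med[spikes]
    return out
```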
13
Ren W, Ji B, Guan Y, Cao L, Ni R. Recent Technical Advances in Accelerating the Clinical Translation of Small Animal Brain Imaging: Hybrid Imaging, Deep Learning, and Transcriptomics. Front Med (Lausanne) 2022; 9:771982. [PMID: 35402436 PMCID: PMC8987112 DOI: 10.3389/fmed.2022.771982] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2021] [Accepted: 02/16/2022] [Indexed: 12/26/2022] Open
Abstract
Small animal models play a fundamental role in brain research by deepening the understanding of the physiological functions and mechanisms underlying brain disorders and are thus essential in the development of therapeutic and diagnostic imaging tracers targeting the central nervous system. Advances in structural, functional, and molecular imaging using MRI, PET, fluorescence imaging, and optoacoustic imaging have enabled the interrogation of the rodent brain across a large temporal and spatial resolution scale in a non-invasive manner. However, there are still several major gaps in translating from preclinical brain imaging to the clinical setting. The hindering factors include the following: (1) intrinsic differences between biological species regarding brain size, cell type, protein expression level, and metabolism level and (2) imaging technical barriers regarding the interpretation of image contrast and limited spatiotemporal resolution. To mitigate these factors, single-cell transcriptomics and measures to identify the cellular source of PET tracers have been developed. Meanwhile, hybrid imaging techniques that provide highly complementary anatomical and molecular information are emerging. Furthermore, deep learning-based image analysis has been developed to enhance the quantification and optimization of the imaging protocol. In this mini-review, we summarize the recent developments in small animal neuroimaging toward improved translational power, with a focus on technical improvements including hybrid imaging, data processing, transcriptomics, awake animal imaging, and on-chip pharmacokinetics. We also discuss outstanding challenges in standardization and considerations toward increasing translational power and propose future outlooks.
Affiliation(s)
- Wuwei Ren
- School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Shanghai Engineering Research Center of Energy Efficient and Custom AI IC, Shanghai, China
- Bin Ji
- Department of Radiopharmacy and Molecular Imaging, School of Pharmacy, Fudan University, Shanghai, China
- Yihui Guan
- PET Center, Huashan Hospital, Fudan University, Shanghai, China
- Lei Cao
- Shanghai Changes Tech, Ltd., Shanghai, China
- Ruiqing Ni
- Institute for Regenerative Medicine, University of Zurich, Zurich, Switzerland
- Institute for Biomedical Engineering, ETH Zürich and University of Zurich, Zurich, Switzerland

14
Ly CD, Nguyen VT, Vo TH, Mondal S, Park S, Choi J, Vu TTH, Kim CS, Oh J. Full-view in vivo skin and blood vessels profile segmentation in photoacoustic imaging based on deep learning. PHOTOACOUSTICS 2022; 25:100310. [PMID: 34824975 PMCID: PMC8603312 DOI: 10.1016/j.pacs.2021.100310] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/10/2021] [Revised: 09/23/2021] [Accepted: 10/18/2021] [Indexed: 05/08/2023]
Abstract
Photoacoustic (PA) microscopy allows imaging of soft biological tissue based on optical absorption contrast with spatial ultrasound resolution. One of the major applications of PA imaging is the characterization of microvasculature. However, the strong PA signal from the skin layer overshadows the subcutaneous blood vessels, preventing direct reconstruction of PA images in human studies. To address this, we examined an automatic deep learning (DL) algorithm to achieve high-resolution and high-contrast segmentation and thereby widen PA imaging applications. In this research, we propose a DL model based on a modified U-Net for extracting the relationship features between the amplitudes of the PA signal generated from the skin and the underlying vessels. This study illustrates the broader potential of a hybrid complex network as an automatic segmentation tool for in vivo PA imaging. With this DL-based solution, our results outperform previous studies, achieving real-time semantic segmentation on large high-resolution PA images.
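The skin-signal problem this paper addresses can be illustrated with a naive non-learned baseline: per-A-line peak gating, where the first strong echo along depth is taken as the skin surface and suppressed. The threshold fraction and margin below are arbitrary illustrative values, and real data would need far more care than this sketch suggests.

```python
import numpy as np

def suppress_skin(volume, frac=0.5, margin=3):
    """For each A-line (last axis = depth), find the first sample above
    frac * line-max, taken here as the skin echo, and zero a small
    margin around it so that deeper vessel peaks dominate projections."""
    out = np.array(volume, dtype=float)          # contiguous working copy
    flat = out.reshape(-1, out.shape[-1])        # view: one row per A-line
    for line in flat:
        strong = np.nonzero(line > frac * line.max())[0]
        if strong.size:
            s = strong[0]
            line[max(0, s - margin): s + margin + 1] = 0.0
    return out
```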
Affiliation(s)
- Cao Duong Ly
- Industry 4.0 Convergence Bionics Engineering, Pukyong National University, Republic of Korea
- Van Tu Nguyen
- Industry 4.0 Convergence Bionics Engineering, Pukyong National University, Republic of Korea
- Tan Hung Vo
- Industry 4.0 Convergence Bionics Engineering, Pukyong National University, Republic of Korea
- Sudip Mondal
- New-senior Healthcare Innovation Center (BK21 Plus), Pukyong National University, Busan 48513, Republic of Korea
- Sumin Park
- Industry 4.0 Convergence Bionics Engineering, Pukyong National University, Republic of Korea
- Jaeyeop Choi
- Industry 4.0 Convergence Bionics Engineering, Pukyong National University, Republic of Korea
- Ohlabs Corp, Busan 48513, Republic of Korea
- Thi Thu Ha Vu
- Industry 4.0 Convergence Bionics Engineering, Pukyong National University, Republic of Korea
- Chang-Seok Kim
- Department of Cogno-Mechatronics Engineering, Pusan National University, Busan 46241, Republic of Korea
- Junghwan Oh
- Industry 4.0 Convergence Bionics Engineering, Pukyong National University, Republic of Korea
- Department of Biomedical Engineering, Pukyong National University, Busan 48513, Republic of Korea
- Ohlabs Corp, Busan 48513, Republic of Korea
- New-senior Healthcare Innovation Center (BK21 Plus), Pukyong National University, Busan 48513, Republic of Korea

15
Photoacoustic imaging aided with deep learning: a review. Biomed Eng Lett 2021; 12:155-173. [DOI: 10.1007/s13534-021-00210-y] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2021] [Revised: 10/19/2021] [Accepted: 11/07/2021] [Indexed: 12/21/2022] Open
16
Jimenez-Castaño CA, Álvarez-Meza AM, Aguirre-Ospina OD, Cárdenas-Peña DA, Orozco-Gutiérrez ÁA. Random Fourier Features-Based Deep Learning Improvement with Class Activation Interpretability for Nerve Structure Segmentation. SENSORS (BASEL, SWITZERLAND) 2021; 21:7741. [PMID: 34833817 PMCID: PMC8617795 DOI: 10.3390/s21227741] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/12/2021] [Revised: 11/12/2021] [Accepted: 11/17/2021] [Indexed: 11/24/2022]
Abstract
Peripheral nerve blocking (PNB) is a standard procedure to support regional anesthesia. Still, correct localization of the nerve's structure is needed to avoid adverse effects; thereby, ultrasound imaging is used as an aid. In addition, image-based automatic nerve segmentation with deep learning methods has been proposed to mitigate the attenuation and speckle-noise issues of ultrasonography. Notwithstanding, complex architectures highlight the region of interest while lacking suitable interpretability of the features learned from raw instances. Here, a kernel-based deep learning enhancement is introduced for nerve structure segmentation. In a nutshell, a random Fourier features-based approach was utilized to complement three well-known semantic segmentation architectures, namely the fully convolutional network, U-Net, and ResUnet. Moreover, two ultrasound image datasets for PNB were tested. Obtained results show that our kernel-based approach provides better generalization capability in image segmentation-based assessments on different nerve structures. Further, for data interpretability, a semantic segmentation extension of GradCam++ for class-activation mapping was used to reveal the relevant learned features separating nerve from background. Thus, our proposal favors both straightforward (shallow) and complex (deeper) neural network architectures.
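The random Fourier features building block (Rahimi and Recht) that the authors graft onto segmentation networks can be sketched directly; the kernel bandwidth `gamma`, feature count, and seed below are arbitrary example values, not the paper's settings.

```python
import numpy as np

def rff_features(X, n_features, gamma, seed=0):
    """Random Fourier features (Rahimi & Recht): z(x) . z(y) approximates
    the RBF kernel exp(-gamma * ||x - y||^2).  W ~ N(0, 2*gamma*I),
    b ~ U[0, 2*pi); the same (W, b) must be used for all inputs."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
```

Because the map is explicit and finite-dimensional, it can sit inside a network as an ordinary (frozen or trainable) layer, which is what makes this kernel approximation compatible with deep architectures.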
Affiliation(s)
- Oscar David Aguirre-Ospina
- Medicina Hospitalaria, Servicios Especiales de Salud (SES) Hospital de Caldas, Manizales 170003, Colombia
- David Augusto Cárdenas-Peña
- Automatic Research Group, Universidad Tecnológica de Pereira, Pereira 660003, Colombia

17
Prakash J, Kalva SK, Pramanik M, Yalavarthy PK. Binary photoacoustic tomography for improved vasculature imaging. JOURNAL OF BIOMEDICAL OPTICS 2021; 26:JBO-210135R. [PMID: 34405599 PMCID: PMC8370884 DOI: 10.1117/1.jbo.26.8.086004] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/19/2021] [Accepted: 06/29/2021] [Indexed: 05/09/2023]
Abstract
SIGNIFICANCE The proposed binary tomography approach was able to recover the vasculature structures accurately, which could potentially enable the utilization of binary tomography algorithms in scenarios such as therapy monitoring and hemorrhage detection in different organs. AIM Photoacoustic tomography (PAT) involves reconstruction of vascular networks, with direct implications for cancer research, cardiovascular studies, and neuroimaging. Various methods have been proposed for recovering vascular networks in photoacoustic imaging; however, most are two-step (image reconstruction followed by image segmentation) in nature. We propose a binary PAT approach wherein direct reconstruction of the vascular network from the acquired photoacoustic sinogram data is plausible. APPROACH The binary tomography approach relies on solving a dual-optimization problem to reconstruct images in which every pixel takes a binary outcome (i.e., either background or absorber). Further, the binary tomography approach was compared against backprojection, Tikhonov regularization, and sparse recovery-based schemes. RESULTS Numerical simulations, a physical phantom experiment, and in-vivo rat brain vasculature data were used to compare the performance of the different algorithms. The results indicate that on in-silico data the binary tomography approach improved vasculature recovery by 10% in terms of the Dice similarity coefficient relative to the other reconstruction methods. CONCLUSION The proposed algorithm demonstrates superior vasculature recovery with limited data, both visually and based on quantitative image metrics.
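The direct binary-reconstruction idea can be caricatured on a tiny linear system. Note this sketch only binarises after plain Landweber iterations, whereas the paper's dual-optimization formulation enforces the binary constraint during the reconstruction itself; the system matrix and sizes below are arbitrary.

```python
import numpy as np

def binary_landweber(A, y, n_iter=5000):
    """Toy stand-in for binary tomography: Landweber iterations on
    y = A x, followed by a final {0, 1} projection at threshold 0.5."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # step below 2 / sigma_max^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + step * A.T @ (y - A @ x)         # gradient step on ||y - Ax||^2
    return (x > 0.5).astype(int)
```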
Affiliation(s)
- Jaya Prakash
- Indian Institute of Science, Department of Instrumentation and Applied Physics, Bangalore, Karnataka, India
- Address all correspondence to Jaya Prakash
- Sandeep Kumar Kalva
- Nanyang Technological University, School of Chemical and Biomedical Engineering, Singapore, Singapore
- Manojit Pramanik
- Nanyang Technological University, School of Chemical and Biomedical Engineering, Singapore, Singapore
- Phaneendra K. Yalavarthy
- Indian Institute of Science, Department of Computational and Data Sciences, Bangalore, Karnataka, India

18
Qu X, Yan G, Zheng D, Fan S, Rao Q, Jiang J. A Deep Learning-Based Automatic First-Arrival Picking Method for Ultrasound Sound-Speed Tomography. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2021; 68:2675-2686. [PMID: 33886467 DOI: 10.1109/tuffc.2021.3074983] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Ultrasound sound-speed tomography (USST) has shown great prospects for breast cancer diagnosis due to its advantages of nonradiation, low cost, 3-D breast images, and quantitative indicators. However, the reconstruction quality of USST is highly dependent on the first-arrival picking of the transmission wave. Traditional first-arrival picking methods have low accuracy and noise robustness. To improve the accuracy and robustness, we introduced a self-attention mechanism into the bidirectional long short-term memory (BLSTM) network and proposed the self-attention BLSTM (SAT-BLSTM) network. The proposed method predicts the probability of the first-arrival time and selects the time with maximum probability. A numerical simulation and prototype experiment were conducted. In the numerical simulation, the proposed SAT-BLSTM showed the best results. For signal-to-noise ratios (SNRs) of 50, 30, and 15 dB, the mean absolute errors (MAEs) were 48, 49, and 76 ns, respectively. The BLSTM had the second-best results, with MAEs of 55, 56, and 85 ns, respectively. The MAEs of the Akaike information criterion (AIC) method were 57, 296, and 489 ns, respectively. In the prototype experiment, the MAEs of the SAT-BLSTM, the BLSTM, and the AIC were 94, 111, and 410 ns, respectively.
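The AIC baseline against which the SAT-BLSTM is compared is simple enough to state in full. This is a generic implementation of the standard Akaike-information-criterion changepoint picker, not the authors' exact code.

```python
import numpy as np

def aic_pick(trace):
    """Classical AIC first-arrival picker: the onset is the index k
    minimising AIC(k) = k*log(var(x[:k])) + (n-k)*log(var(x[k:])),
    i.e. the best split of the trace into a quiet and an active segment."""
    x = np.asarray(trace, dtype=float)
    n = len(x)
    aic = np.full(n, np.inf)
    for k in range(2, n - 2):
        v1, v2 = x[:k].var(), x[k:].var()
        if v1 > 0.0 and v2 > 0.0:
            aic[k] = k * np.log(v1) + (n - k) * np.log(v2)
    return int(np.argmin(aic))
```

The abstract's numbers reflect exactly the weakness this picker has at low SNR: the variance split becomes ambiguous, which is what motivates the learned SAT-BLSTM alternative.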
19
Davoudi N, Lafci B, Özbek A, Deán-Ben XL, Razansky D. Deep learning of image- and time-domain data enhances the visibility of structures in optoacoustic tomography. OPTICS LETTERS 2021; 46:3029-3032. [PMID: 34197371 DOI: 10.1364/ol.424571] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/12/2021] [Accepted: 05/15/2021] [Indexed: 06/13/2023]
Abstract
Images rendered with common optoacoustic system implementations are often afflicted with distortions and poor visibility of structures, hindering reliable image interpretation and quantification of bio-chrome distribution. Among the practical limitations contributing to artifactual reconstructions are insufficient tomographic detection coverage and suboptimal illumination geometry, as well as the inability to accurately account for acoustic reflections and speed-of-sound heterogeneities in the imaged tissues. Here we developed a convolutional neural network (CNN) approach for enhancement of optoacoustic image quality which combines training on both time-resolved signals and tomographic reconstructions. Reference human finger data for training the CNN were recorded using a full-ring array system that provides optimal tomographic coverage around the imaged object. The reconstructions were further refined with a dedicated algorithm that minimizes acoustic reflection artifacts induced by acoustically mismatched structures, such as bones. The combined methodology is shown to outperform other learning-based methods solely operating on image-domain data.
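The image-domain inputs to such a CNN are conventional tomographic reconstructions. A minimal delay-and-sum backprojection over a full-ring geometry can be sketched as follows; the ring radius, sampling rate, pulse width, and candidate points are illustrative values, not the paper's system parameters.

```python
import numpy as np

def delay_and_sum(signals, sensors, points, t, c=1500.0):
    """Reference delay-and-sum reconstruction: each candidate point
    accumulates every sensor's signal sampled at the acoustic
    time-of-flight from that point to the sensor."""
    dt = t[1] - t[0]
    img = np.zeros(len(points))
    for sig, (sx, sy) in zip(signals, sensors):
        d = np.hypot(points[:, 0] - sx, points[:, 1] - sy)
        idx = np.clip(np.round((d / c - t[0]) / dt).astype(int), 0, len(sig) - 1)
        img += sig[idx]
    return img

# Full-ring geometry with a single simulated point absorber
n_sensors = 64
ang = 2 * np.pi * np.arange(n_sensors) / n_sensors
sensors = 0.05 * np.stack([np.cos(ang), np.sin(ang)], axis=1)  # 5 cm ring
src = np.array([0.01, -0.005])                                  # absorber position
t = np.arange(0.0, 1.0e-4, 1.0e-8)                              # 100 MHz sampling
sigma = 3.0e-8                                                  # pulse width (s)
signals = [np.exp(-(t - np.hypot(*(s - src)) / 1500.0) ** 2 / (2 * sigma ** 2))
           for s in sensors]
points = np.array([[0.01, -0.005], [0.0, 0.0], [0.02, 0.01], [-0.01, 0.015]])
img = delay_and_sum(signals, sensors, points, t)
```

All 64 delayed signals add coherently only at the true absorber position, which is why the first candidate point dominates the output.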
20
Razansky D, Klohs J, Ni R. Multi-scale optoacoustic molecular imaging of brain diseases. Eur J Nucl Med Mol Imaging 2021; 48:4152-4170. [PMID: 33594473 PMCID: PMC8566397 DOI: 10.1007/s00259-021-05207-4] [Citation(s) in RCA: 45] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2020] [Accepted: 01/17/2021] [Indexed: 02/07/2023]
Abstract
The ability to non-invasively visualize endogenous chromophores and exogenous probes and sensors across the entire rodent brain with high spatial and temporal resolution has empowered optoacoustic imaging modalities with unprecedented capacities for interrogating the brain under physiological and diseased conditions. This has rapidly transformed optoacoustic microscopy (OAM) and multi-spectral optoacoustic tomography (MSOT) into emerging research tools to study animal models of brain diseases. In this review, we describe the principles of optoacoustic imaging and showcase recent technical advances that enable high-resolution real-time brain observations in preclinical models. In addition, advanced molecular probe designs allow for efficient visualization of pathophysiological processes playing a central role in a variety of neurodegenerative diseases, brain tumors, and stroke. We describe outstanding challenges in optoacoustic imaging methodologies and propose a future outlook.
Affiliation(s)
- Daniel Razansky
- Institute for Biomedical Engineering, University of Zurich & ETH Zurich, Wolfgang-Pauli-Strasse 27, HIT E42.1, 8093, Zurich, Switzerland
- Zurich Neuroscience Center (ZNZ), Zurich, Switzerland
- Faculty of Medicine and Institute of Pharmacology and Toxicology, University of Zurich, Zurich, Switzerland
- Jan Klohs
- Institute for Biomedical Engineering, University of Zurich & ETH Zurich, Wolfgang-Pauli-Strasse 27, HIT E42.1, 8093, Zurich, Switzerland
- Zurich Neuroscience Center (ZNZ), Zurich, Switzerland
- Ruiqing Ni
- Institute for Biomedical Engineering, University of Zurich & ETH Zurich, Wolfgang-Pauli-Strasse 27, HIT E42.1, 8093, Zurich, Switzerland
- Zurich Neuroscience Center (ZNZ), Zurich, Switzerland
- Institute for Regenerative Medicine, University of Zurich, Zurich, Switzerland