1. Naik K, Goyal RK, Foschini L, Chak CW, Thielscher C, Zhu H, Lu J, Lehár J, Pacanowski MA, Terranova N, Mehta N, Korsbo N, Fakhouri T, Liu Q, Gobburu J. Current Status and Future Directions: The Application of Artificial Intelligence/Machine Learning for Precision Medicine. Clin Pharmacol Ther 2024;115:673-686. PMID: 38103204. DOI: 10.1002/cpt.3152.
Abstract
Technological innovations, such as artificial intelligence (AI) and machine learning (ML), have the potential to expedite the goal of precision medicine, especially when combined with increased capacity for voluminous data from multiple sources and expanded therapeutic modalities; however, they also present several challenges. In this communication, we first discuss the goals of precision medicine, and contextualize the use of AI in precision medicine by showcasing innovative applications (e.g., prediction of tumor growth and overall survival, biomarker identification using biomedical images, and identification of patient populations for clinical practice) which were presented during the February 2023 virtual public workshop entitled "Application of Artificial Intelligence and Machine Learning for Precision Medicine," hosted by the US Food and Drug Administration (FDA) and the University of Maryland Center of Excellence in Regulatory Science and Innovation (M-CERSI). Next, we put forward challenges brought about by the multidisciplinary nature of AI, particularly highlighting the need for AI to be trustworthy. To address such challenges, we subsequently note practical approaches, viz., differential privacy, synthetic data generation, and federated learning. The proposed strategies, some of which were highlighted in workshop presentations, aim to protect personal information and intellectual property. In addition, methods such as the risk-based management approach and the need for an agile regulatory ecosystem are discussed. Finally, we lay out a call for action that includes sharing of data and algorithms, development of regulatory guidance documents, and pooling of expertise from a broad spectrum of stakeholders to enhance the application of AI in precision medicine.
Affiliation(s)
- Kunal Naik
- Office of Clinical Pharmacology, Office of Translational Sciences, Center for Drug Evaluation and Research, US Food and Drug Administration, Silver Spring, Maryland, USA
- Rahul K Goyal
- Center for Translational Medicine, University of Maryland School of Pharmacy, Baltimore, Maryland, USA
- Hao Zhu
- Office of Clinical Pharmacology, Office of Translational Sciences, Center for Drug Evaluation and Research, US Food and Drug Administration, Silver Spring, Maryland, USA
- James Lu
- Modeling & Simulation/Clinical Pharmacology, Genentech Inc., South San Francisco, California, USA
- Michael A Pacanowski
- Office of Clinical Pharmacology, Office of Translational Sciences, Center for Drug Evaluation and Research, US Food and Drug Administration, Silver Spring, Maryland, USA
- Nadia Terranova
- Quantitative Pharmacology, Ares Trading S.A. (an affiliate of Merck KGaA, Darmstadt, Germany), Lausanne, Switzerland
- Neha Mehta
- Office of Clinical Pharmacology, Office of Translational Sciences, Center for Drug Evaluation and Research, US Food and Drug Administration, Silver Spring, Maryland, USA
- Tala Fakhouri
- Office of Medical Policy, Center for Drug Evaluation and Research, US Food and Drug Administration, Silver Spring, Maryland, USA
- Qi Liu
- Office of Clinical Pharmacology, Office of Translational Sciences, Center for Drug Evaluation and Research, US Food and Drug Administration, Silver Spring, Maryland, USA
- Jogarao Gobburu
- Center for Translational Medicine, University of Maryland School of Pharmacy, Baltimore, Maryland, USA
2. Gaits F, Mellado N, Bouyjou G, Garcia D, Basarab A. Efficient Stratified 3-D Scatterer Sampling for Freehand Ultrasound Simulation. IEEE Trans Ultrason Ferroelectr Freq Control 2024;71:127-140. PMID: 37824323. DOI: 10.1109/tuffc.2023.3324014.
Abstract
Ultrasound image simulation is a well-explored field with the main objective of generating realistic synthetic images, further used as ground truth for computational imaging algorithms or for radiologists' training. Several ultrasound simulators are already available, most of them consisting of similar steps: 1) generate a collection of tissue-mimicking individual scatterers with random spatial positions and random amplitudes; 2) model the ultrasound probe and the emission and reception schemes; and 3) generate the radio frequency (RF) signals resulting from the interaction between the scatterers and the propagating ultrasound waves. This article focuses on the first step. To ensure fully developed speckle, a few tens of scatterers per resolution cell are needed, which requires handling large amounts of data (especially in 3-D) and results in substantial computation time. The objective of this work is to explore new scatterer spatial distributions, with application to multiple coherent 2-D slice simulations from 3-D volumes. More precisely, lazy evaluation of pseudorandom schemes proves to be highly computationally efficient compared with the commonly used uniform random distribution. We also propose an end-to-end method from the 3-D tissue volume to the resulting ultrasound images using coherent and 3-D-aware scatterer generation and usage in a real-time context.
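For reference, step 1 with the commonly used uniform random distribution (the baseline this paper improves upon) can be sketched as below; the volume extents, resolution-cell size, density, and Gaussian amplitude model are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def generate_scatterers(vol_mm, res_cell_mm3, per_cell=20, seed=None):
    """Baseline uniform scatterer map: random 3-D positions and amplitudes.

    vol_mm       : (x, y, z) extent of the simulated volume in mm (assumed units)
    res_cell_mm3 : approximate resolution-cell volume in mm^3
    per_cell     : scatterers per resolution cell (a few tens for developed speckle)
    """
    rng = np.random.default_rng(seed)
    volume = vol_mm[0] * vol_mm[1] * vol_mm[2]
    n = int(per_cell * volume / res_cell_mm3)      # total scatterer count
    pos = rng.uniform(low=0.0, high=vol_mm, size=(n, 3))  # uniform positions
    amp = rng.normal(0.0, 1.0, size=n)                    # Gaussian amplitudes
    return pos, amp
```

Even at this modest density, a 10 mm cube with 0.5 mm³ resolution cells already needs tens of thousands of scatterers, which illustrates why the stratified, lazily evaluated schemes proposed in the paper matter in 3-D.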
3. Mendez M, Sundararaman S, Probyn L, Tyrrell PN. Approaches and Limitations of Machine Learning for Synthetic Ultrasound Generation: A Scoping Review. J Ultrasound Med 2023;42:2695-2706. PMID: 37772474. DOI: 10.1002/jum.16332.
Abstract
This scoping review examines the emerging field of synthetic ultrasound generation using machine learning (ML) models in radiology. Nineteen studies were analyzed, revealing three primary methodological strategies: unconditional generation, conditional generation, and domain translation. Synthetic ultrasound is mainly used to augment training datasets and as training material for radiologists. Blind expert assessment and Fréchet Inception Distance are common evaluation methods. Current limitations include the need for large training datasets, manual annotations for controllable generation, and insufficient research on incorporating new domain knowledge. While generative ultrasound models show promise, further work is required for clinical implementation.
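The Fréchet Inception Distance mentioned above compares Gaussian fits of real and synthetic image features. A minimal sketch of the underlying Fréchet distance follows; the Inception feature extractor is omitted, so any feature matrix of shape (n_samples, n_dims) stands in for network embeddings:

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_a, feats_b):
    """Fréchet distance between Gaussian fits of two feature sets.

    Implements d^2 = ||mu_a - mu_b||^2 + Tr(S_a + S_b - 2 (S_a S_b)^{1/2}),
    the formula used by the FID once features have been extracted.
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = linalg.sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):  # sqrtm can return tiny imaginary parts
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))
```

Identical feature sets give a distance of zero; a pure mean shift contributes its squared norm, which is a quick sanity check when wiring this into an evaluation pipeline.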
Affiliation(s)
- Mauro Mendez
- Institute of Medical Science, University of Toronto, Toronto, Ontario, Canada
- Department of Medical Imaging, University of Toronto, Toronto, Ontario, Canada
- Linda Probyn
- Department of Medical Imaging, University of Toronto, Toronto, Ontario, Canada
- Pascal N Tyrrell
- Institute of Medical Science, University of Toronto, Toronto, Ontario, Canada
- Department of Medical Imaging, University of Toronto, Toronto, Ontario, Canada
- Department of Statistical Sciences, University of Toronto, Toronto, Ontario, Canada
4. Ali H, Nyman E, Näslund U, Grönlund C. Translation of atherosclerotic disease features onto healthy carotid ultrasound images using domain-to-domain translation. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2023.104886.
5. Xin C, Li B, Wang D, Chen W, Yue S, Meng D, Qiao X, Zhang Y. Deep learning for the rapid automatic segmentation of forearm muscle boundaries from ultrasound datasets. Front Physiol 2023;14:1166061. PMID: 37520832. PMCID: PMC10374344. DOI: 10.3389/fphys.2023.1166061.
Abstract
Ultrasound (US) is widely used in the clinical diagnosis and treatment of musculoskeletal diseases. However, the low efficiency and inconsistency of manual recognition hinder the application and popularization of US for this purpose. Herein, we developed an automatic muscle boundary segmentation tool for US image recognition and tested its accuracy and clinical applicability. Our dataset comprised 465 US images of the flexor digitorum superficialis (FDS) from 19 participants (10 men and 9 women, age 27.4 ± 6.3 years). We used the U-net model for US image segmentation. The U-net output often includes several disconnected regions, whereas, anatomically, the target muscle usually has only one connected region. Based on this principle, we designed an algorithm, written in C++, to eliminate redundant connected regions from the outputs. The muscle boundary images generated by the tool were compared with those obtained by professionals and junior physicians to analyze their accuracy and clinical applicability. The dataset was divided into five groups for experimentation, and the average Dice coefficient, recall, and accuracy, as well as the intersection over union (IoU), of the prediction set in each group were all about 90%. Furthermore, we propose a new standard for judging the segmentation results. Under this standard, 99% of the 150 images predicted by U-net are rated excellent, which is very close to the segmentation quality obtained by professional doctors. In this study, we developed an automatic muscle segmentation tool for US-guided muscle injections. The accuracy of muscle boundary recognition was similar to that of manual labeling by a specialist sonographer, providing a reliable auxiliary tool for clinicians to shorten the US learning cycle, reduce the clinical workload, and improve injection safety.
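The anatomical prior described above (one connected region per muscle) reduces to keeping the largest connected component of the binary U-net output. The paper implements this in C++; a minimal Python sketch using scipy.ndimage:

```python
import numpy as np
from scipy import ndimage

def keep_largest_component(mask):
    """Keep only the largest connected region of a binary segmentation mask.

    Smaller disconnected blobs in the network output are treated as spurious,
    since the target muscle anatomically forms a single connected region.
    """
    labels, n = ndimage.label(mask)  # label each connected region
    if n == 0:
        return np.zeros_like(mask, dtype=bool)
    # pixel count of each labelled region (labels run from 1 to n)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return labels == (np.argmax(sizes) + 1)
```

The default `ndimage.label` connectivity is 4-connected in 2-D; a different structuring element can be passed if diagonal adjacency should also merge regions.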
Affiliation(s)
- Chen Xin
- Rehabilitation Center, Qilu Hospital of Shandong University, Jinan, China
- Baoxu Li
- School of Mathematics, Shandong University, Jinan, China
- Dezheng Wang
- Rehabilitation Center, Qilu Hospital of Shandong University, Jinan, China
- Wei Chen
- Department of Biomedical Engineering, School of Control Science and Engineering, Shandong University, Jinan, Shandong, China
- Shouwei Yue
- Rehabilitation Center, Qilu Hospital of Shandong University, Jinan, China
- Dong Meng
- Rehabilitation Center, Qilu Hospital of Shandong University, Jinan, China
- Xu Qiao
- Department of Biomedical Engineering, School of Control Science and Engineering, Shandong University, Jinan, Shandong, China
- Yang Zhang
- Rehabilitation Center, Qilu Hospital of Shandong University, Jinan, China
6. Arapi V, Hardt-Stremayr A, Weiss S, Steinbrener J. Bridging the simulation-to-real gap for AI-based needle and target detection in robot-assisted ultrasound-guided interventions. Eur Radiol Exp 2023;7:30. PMID: 37332035. DOI: 10.1186/s41747-023-00344-x.
Abstract
BACKGROUND Artificial intelligence (AI)-powered, robot-assisted, and ultrasound (US)-guided interventional radiology has the potential to increase the efficacy and cost-efficiency of interventional procedures while improving postsurgical outcomes and reducing the burden on medical personnel.
METHODS To overcome the lack of clinical data available for training state-of-the-art AI models, we propose a novel approach for generating synthetic ultrasound data from real, clinical, preoperative three-dimensional (3D) data of different imaging modalities. With the synthetic data, we trained a deep learning-based detection algorithm for the localization of the needle tip and target anatomy in US images. We validated our models on real, in vitro US data.
RESULTS The resulting models generalize well to unseen synthetic data and experimental in vitro data, making the proposed approach a promising method for creating AI-based models for needle and target detection in minimally invasive US-guided procedures. Moreover, we show that after a one-time calibration of the US and robot coordinate frames, our tracking algorithm can be used to accurately fine-position the robot within reach of the target based on 2D US images alone.
CONCLUSIONS The proposed data generation approach is sufficient to bridge the simulation-to-real gap and has the potential to overcome data paucity challenges in interventional radiology. The proposed AI-based detection algorithm shows very promising results in terms of accuracy and frame rate.
RELEVANCE STATEMENT This approach can facilitate the development of next-generation AI algorithms for patient anatomy detection and needle tracking in US and their application to robotics.
KEY POINTS
- AI-based methods show promise for needle and target detection in US-guided interventions.
- Publicly available, annotated datasets for training AI models are limited.
- Synthetic, clinical-like US data can be generated from magnetic resonance or computed tomography data.
- Models trained with synthetic US data generalize well to real in vitro US data.
- Target detection with an AI model can be used for fine positioning of the robot.
Affiliation(s)
- Visar Arapi
- Control of Networked Systems Research Group, Institute of Smart Systems Technologies, University of Klagenfurt, Klagenfurt, Austria
- Alexander Hardt-Stremayr
- Control of Networked Systems Research Group, Institute of Smart Systems Technologies, University of Klagenfurt, Klagenfurt, Austria
- Stephan Weiss
- Control of Networked Systems Research Group, Institute of Smart Systems Technologies, University of Klagenfurt, Klagenfurt, Austria
- Jan Steinbrener
- Control of Networked Systems Research Group, Institute of Smart Systems Technologies, University of Klagenfurt, Klagenfurt, Austria
7. Synthetic data in health care: A narrative review. PLOS Digit Health 2023;2:e0000082. PMID: 36812604. PMCID: PMC9931305. DOI: 10.1371/journal.pdig.0000082.
Abstract
Data are central to research, public health, and the development of health information technology (IT) systems. Nevertheless, access to most health care data is tightly controlled, which may limit innovation, development, and efficient implementation of new research, products, services, or systems. Using synthetic data is one of many innovative ways to allow organizations to share datasets with a broader set of users. However, only a limited body of literature explores its potential and applications in health care. In this review paper, we examined the existing literature to bridge this gap and highlight the utility of synthetic data in health care. We searched PubMed, Scopus, and Google Scholar to identify peer-reviewed articles, conference papers, reports, and theses/dissertations related to the generation and use of synthetic datasets in health care. The review identified seven use cases of synthetic data in health care: a) simulation and prediction research, b) hypothesis, methods, and algorithm testing, c) epidemiology/public health research, d) health IT development, e) education and training, f) public release of datasets, and g) linking data. The review also identified readily and publicly accessible health care datasets, databases, and sandboxes containing synthetic data with varying degrees of utility for research, education, and software development. The review provides evidence that synthetic data are helpful in different aspects of health care and research. While the original real data remain the preferred choice, synthetic data hold promise for bridging data access gaps in research and evidence-based policymaking.
8. Cannata GP, Migliorelli L, Mancini A, Frontoni E, Pietrini R, Moccia S. Generating depth images of preterm infants in given poses using GANs. Comput Methods Programs Biomed 2022;225:107057. PMID: 35952537. DOI: 10.1016/j.cmpb.2022.107057.
Abstract
BACKGROUND AND OBJECTIVES The use of deep learning for preterm infants' movement monitoring has the potential to support clinicians in the early recognition of motor and behavioural disorders. The development of deep learning algorithms is, however, hampered by the lack of publicly available annotated datasets. METHODS To mitigate this issue, this paper presents a Generative Adversarial Network-based framework to generate images of preterm infants in a given pose. The framework consists of a bibranch encoder and a conditional Generative Adversarial Network, which generate a rough image and a refined version of it, respectively. RESULTS Evaluation was performed on the Moving INfants In RGB-D dataset, which has 12,000 depth frames from 12 preterm infants. A low Fréchet inception distance (142.9) and an inception score (2.8) close to that of the real-image distribution (2.6) were obtained. These results show the potential of the framework to generate realistic depth images of preterm infants in a given pose. CONCLUSIONS Further research on the generation of new data may enable researchers to propose increasingly advanced and effective deep learning-based monitoring systems.
Affiliation(s)
- Giuseppe Pio Cannata
- Department of Information Engineering, Università Politecnica delle Marche, Italy
- Lucia Migliorelli
- Department of Information Engineering, Università Politecnica delle Marche, Italy
- Adriano Mancini
- Department of Information Engineering, Università Politecnica delle Marche, Italy
- Emanuele Frontoni
- Department of Political Science, Communication and International Relations, Università degli Studi di Macerata, Italy
- Rocco Pietrini
- Department of Information Engineering, Università Politecnica delle Marche, Italy
- Sara Moccia
- The BioRobotics Institute and Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Italy
9. Monkam P, Jin S, Lu W. An efficient annotated data generation method for echocardiographic image segmentation. Comput Biol Med 2022;149:106090. PMID: 36115304. DOI: 10.1016/j.compbiomed.2022.106090.
Abstract
BACKGROUND In recent years, deep learning techniques have demonstrated promising performance in echocardiography (echo) data segmentation, which constitutes a critical step in the diagnosis and prognosis of cardiovascular diseases (CVDs). However, their successful implementation requires large numbers of high-quality annotated samples, whose acquisition is arduous and expertise-demanding. This study therefore aims at circumventing the tedious, time-consuming, and expertise-demanding data annotation involved in deep learning-based echo data segmentation. METHODS We propose a two-phase framework for fast generation of the annotated echo data needed to implement intelligent cardiac structure segmentation systems. First, multi-size and multi-orientation cardiac structures are simulated using a polynomial fitting method. Second, the obtained cardiac structures are embedded onto curated endoscopic ultrasound images using a Fourier transform algorithm, resulting in pairs of annotated samples. The practical significance of the proposed framework is validated by using the generated realistic annotated images as an auxiliary dataset to pretrain deep learning models for automatic segmentation of the left ventricle and left ventricle wall in real echo data. RESULTS Extensive experimental analyses indicate that, compared with training from scratch, fine-tuning after pretraining with the generated dataset consistently results in significant performance improvement, with improvement margins in terms of Dice and IoU reaching 12.9% and 7.74%, respectively. CONCLUSION The proposed framework has great potential to overcome the shortage of labeled data hampering the deployment of deep learning approaches in echo data analysis.
Affiliation(s)
- Patrice Monkam
- Easysignal Group, State Key Laboratory of Intelligent Technology and Systems, Tsinghua National Laboratory for Information Science and Technology, Department of Automation, Tsinghua University, Beijing 100084, China
- Songbai Jin
- Easysignal Group, State Key Laboratory of Intelligent Technology and Systems, Tsinghua National Laboratory for Information Science and Technology, Department of Automation, Tsinghua University, Beijing 100084, China
- Wenkai Lu
- Easysignal Group, State Key Laboratory of Intelligent Technology and Systems, Tsinghua National Laboratory for Information Science and Technology, Department of Automation, Tsinghua University, Beijing 100084, China
10. Automatic Extraction of Muscle Parameters with Attention UNet in Ultrasonography. Sensors (Basel) 2022;22(14):5230. PMID: 35890909. PMCID: PMC9324543. DOI: 10.3390/s22145230.
Abstract
Automatically delineating the deep and superficial aponeuroses of skeletal muscles from ultrasound images is important in many aspects of the clinical routine. In particular, finding muscle parameters such as thickness, fascicle length, or pennation angle is a time-consuming clinical task requiring both human labour and specialised knowledge. In this study, a multi-step solution for automating these tasks is presented. As a first step, a process to effortlessly extract the aponeurosis for automatically measuring the muscle thickness is introduced. This process consists of three main parts. First, the Attention UNet is incorporated to automatically delineate the boundaries of the studied muscles. Afterwards, a specialised post-processing algorithm is utilised to improve (and correct) the segmentation results. Lastly, the muscle thickness is calculated. The proposed method achieves near-human-level performance: the overall discrepancy between the automatic and manual muscle thickness measurements was 0.4 mm, a significant result that demonstrates the feasibility of automating this task. In the second step of the proposed methodology, the fascicle length and pennation angle are extracted through an unsupervised pipeline. Initially, filtering is applied to the ultrasound images to further distinguish the tissues from the other muscle structures. The well-known K-Means algorithm is then used to isolate them. As the last step, the dominant angle of the segmented muscle tissues is reported and compared with manual measurements. The proposed pipeline shows very promising results on the evaluated dataset. Specifically, in the calculation of the pennation angle, the overall discrepancy between the automatic and manual measurements was less than 2.22°, once more comparable with human-level performance. Finally, regarding the fascicle length measurements, the results were divided based on the muscle properties. In muscles where a large portion (or all) of the fascicles are located between the upper and lower aponeuroses, the proposed pipeline exhibits excellent performance; otherwise, overall accuracy deteriorates due to errors caused by the trigonometric approximations needed for the length calculation.
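One simple way to estimate the dominant angle of a segmented muscle structure is a principal-component analysis of the segmented pixel coordinates; this is an illustrative assumption, as the abstract does not specify the exact angle estimator used:

```python
import numpy as np

def dominant_angle_deg(mask):
    """Dominant orientation of a binary mask, in degrees from horizontal,
    via PCA of the foreground pixel coordinates (sketch, not the paper's
    exact method)."""
    ys, xs = np.nonzero(mask)
    coords = np.stack([xs, ys], axis=1).astype(float)
    coords -= coords.mean(axis=0)          # centre the point cloud
    cov = coords.T @ coords                # 2x2 scatter matrix
    w, v = np.linalg.eigh(cov)
    vx, vy = v[:, np.argmax(w)]            # principal axis
    return float(np.degrees(np.arctan2(vy, vx)) % 180.0)
```

Folding into [0°, 180°) removes the sign ambiguity of the eigenvector; in practice the angle between the fascicle direction and the deep aponeurosis would then give the pennation angle.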
11. Ali H, Umander J, Rohlén R, Röhrle O, Grönlund C. Modelling intra-muscular contraction dynamics using in silico to in vivo domain translation. Biomed Eng Online 2022;21:46. PMID: 35804415. PMCID: PMC9270806. DOI: 10.1186/s12938-022-01016-4.
Abstract
BACKGROUND Advances in sports medicine, rehabilitation applications and diagnostics of neuromuscular disorders are based on the analysis of skeletal muscle contractions. Recently, medical imaging techniques have transformed the study of muscle contractions by allowing identification of individual motor units' activity within the whole studied muscle. However, appropriate image-based simulation models, which would assist the continued development of these new imaging methods, are missing. This is mainly due to a lack of models that describe the complex interaction between tissues within a muscle and its surroundings, e.g., muscle fibres, fascia, vasculature, bone, skin, and subcutaneous fat. Herein, we propose a new approach to overcome this limitation. METHODS In this work, we propose to use deep learning to model the authentic intra-muscular skeletal muscle contraction pattern using domain-to-domain translation between in silico (simulated) and in vivo (experimental) image sequences of skeletal muscle contraction dynamics. For this purpose, 3D cycle generative adversarial network (cycleGAN) models were evaluated across several hyperparameter settings and modifications. RESULTS The results show that there were large differences between the spatial features of in silico and in vivo data, and that a model could be trained to generate authentic spatio-temporal features similar to those obtained from in vivo experimental data. In addition, we used difference maps between the input and output of the trained model generator to study the translated characteristics of the in vivo data. CONCLUSIONS This work provides a model to generate authentic intra-muscular skeletal muscle contraction dynamics that could be used to gain further, much-needed physiological and pathological insights, and to assess and overcome limitations within the newly developed research field of neuromuscular imaging.
Affiliation(s)
- Hazrat Ali
- Department of Electrical and Computer Engineering, COMSATS University Islamabad, Abbottabad Campus, Abbottabad, Pakistan
- Department of Radiation Sciences, Umeå University, Umeå, Sweden
- Robin Rohlén
- Department of Radiation Sciences, Umeå University, Umeå, Sweden
- Oliver Röhrle
- Stuttgart Center for Simulation Technology (SC SimTech), University of Stuttgart, Stuttgart, Germany
- Institute for Modelling and Simulation of Biomechanical Systems, Chair for Computational Biophysics and Biorobotics, University of Stuttgart, Stuttgart, Germany
12. Vilimek D, Kubicek J, Golian M, Jaros R, Kahankova R, Hanzlikova P, Barvik D, Krestanova A, Penhaker M, Cerny M, Prokop O, Buzga M. Comparative analysis of wavelet transform filtering systems for noise reduction in ultrasound images. PLoS One 2022;17:e0270745. PMID: 35797331. PMCID: PMC9262246. DOI: 10.1371/journal.pone.0270745.
Abstract
Wavelet transform (WT) is a commonly used method for noise suppression and feature extraction in biomedical images. The selection of WT system settings significantly affects the efficiency of the denoising procedure. This comparative study analyzed the efficacy of the proposed WT system on 292 real ultrasound images from several areas of interest. The study investigates the performance of the system for different scaling functions of two basic wavelet bases, Daubechies and Symlets, and their efficiency on images artificially corrupted by three kinds of noise. To evaluate our extensive analysis, we used objective metrics, namely the structural similarity index (SSIM), correlation coefficient, mean squared error (MSE), peak signal-to-noise ratio (PSNR), and universal image quality index (Q-index). Moreover, this study includes clinical insights on selected filtration outcomes provided by clinical experts. The results show that the efficiency of the filtration strongly depends on the specific wavelet system setting, the type of ultrasound data, and the noise present. The findings presented may provide a useful guideline for researchers, software developers, and clinical professionals seeking to obtain high-quality images.
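As a concrete illustration of the kind of WT filtering system compared in this study, below is a self-contained single-level 2-D Haar transform with soft thresholding. The paper evaluates Daubechies and Symlets bases at various settings; the Haar basis, single decomposition level, and MAD-based threshold here are simplifying assumptions:

```python
import numpy as np

def haar2d(a):
    """One level of the orthonormal 2-D Haar transform (rows, then columns)."""
    lo = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2.0)
    hi = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2.0)
    ll = (lo[0::2] + lo[1::2]) / np.sqrt(2.0)   # approximation
    lh = (lo[0::2] - lo[1::2]) / np.sqrt(2.0)   # horizontal detail
    hl = (hi[0::2] + hi[1::2]) / np.sqrt(2.0)   # vertical detail
    hh = (hi[0::2] - hi[1::2]) / np.sqrt(2.0)   # diagonal detail
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    lo = np.empty((ll.shape[0] * 2, ll.shape[1]))
    hi = np.empty_like(lo)
    lo[0::2], lo[1::2] = (ll + lh) / np.sqrt(2.0), (ll - lh) / np.sqrt(2.0)
    hi[0::2], hi[1::2] = (hl + hh) / np.sqrt(2.0), (hl - hh) / np.sqrt(2.0)
    a = np.empty((lo.shape[0], lo.shape[1] * 2))
    a[:, 0::2], a[:, 1::2] = (lo + hi) / np.sqrt(2.0), (lo - hi) / np.sqrt(2.0)
    return a

def soft(x, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def denoise(img, k=3.0):
    """Threshold the detail subbands; noise sigma from the MAD of hh."""
    ll, lh, hl, hh = haar2d(img)
    sigma = np.median(np.abs(hh)) / 0.6745
    t = k * sigma
    return ihaar2d(ll, soft(lh, t), soft(hl, t), soft(hh, t))
```

A multi-level system (as compared in the paper) would recurse on the `ll` subband and could use SSIM or PSNR against a clean reference to score each wavelet/threshold setting.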
Affiliation(s)
- Dominik Vilimek
- Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB - Technical University of Ostrava, Ostrava, Czech Republic
- Jan Kubicek
- Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB - Technical University of Ostrava, Ostrava, Czech Republic
- Milos Golian
- Human Motion Diagnostic Center, Department of Human Movement Studies, University of Ostrava, Ostrava, Czech Republic
- Rene Jaros
- Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB - Technical University of Ostrava, Ostrava, Czech Republic
- Radana Kahankova
- Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB - Technical University of Ostrava, Ostrava, Czech Republic
- Pavla Hanzlikova
- Department of Imaging Method, Faculty of Medicine, University of Ostrava, Ostrava, Czech Republic
- Daniel Barvik
- Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB - Technical University of Ostrava, Ostrava, Czech Republic
- Alice Krestanova
- Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB - Technical University of Ostrava, Ostrava, Czech Republic
- Marek Penhaker
- Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB - Technical University of Ostrava, Ostrava, Czech Republic
- Martin Cerny
- Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB - Technical University of Ostrava, Ostrava, Czech Republic
- Marek Buzga
- Human Motion Diagnostic Center, Department of Human Movement Studies, University of Ostrava, Ostrava, Czech Republic
- Department of Physiology and Pathophysiology, Faculty of Medicine, University of Ostrava, Ostrava, Czech Republic
13
Hernández-Belmonte A, Martínez-Cava A, Pallarés JG. Pectoralis Cross-Sectional Area can be Accurately Measured using Panoramic Ultrasound: A Validity and Repeatability Study. ULTRASOUND IN MEDICINE & BIOLOGY 2022; 48:460-468. [PMID: 34857426 DOI: 10.1016/j.ultrasmedbio.2021.10.017] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/19/2021] [Revised: 09/13/2021] [Accepted: 10/20/2021] [Indexed: 06/13/2023]
Abstract
The objective of the current study was to examine the validity and repeatability of panoramic ultrasound in evaluating the anatomical cross-sectional area (ACSA) of the pectoralis major. Specifically, we aimed to quantify the measurement errors generated during the image acquisition and analysis (repeatability), as well as when comparing with magnetic resonance imaging (MRI) (validity). Moreover, we aimed to analyze the influence of the operator's experience on these measurement errors. Both sides of the chest of 16 participants (n = 32) were included. Errors made by two operators (trained and novice) when measuring pectoralis major ACSA (50% of sternum-areola mammae distance) were examined. Acquisition errors included the comparison of two images acquired 5 min apart. Acquisition 1 was analyzed twice to quantify analysis errors. Thereafter, acquisition 1 was compared with MRI. Statistics include the standard error of measurement (SEM), expressed in absolute (cm2) and relative (%) terms as a coefficient of variation (CV), and the calculation of systematic bias. Errors made by the trained operator were lower than those made by the novice, especially during the image acquisition (SEM = 0.25 vs. 0.66 cm2, CV = 1.06 vs. 2.98%) and when compared with MRI (SEM = 0.27 vs. 1.90 cm2, CV = 1.13 vs. 8.16%). Furthermore, although both operators underestimated the ACSA, magnitude and variability [SD] of these errors were lower for the trained operator (bias = -0.19 [0.34] cm2) than for the novice (bias = -1.97 [2.59] cm2). Panoramic ultrasound is a valid and repeatable technique for measuring pectoralis major ACSA, especially when implemented by a trained operator.
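The SEM and CV statistics used in this repeatability study can be illustrated with a minimal sketch. It assumes the common test-retest convention of computing SEM as the SD of the pairwise differences divided by √2, with CV expressed as a percentage of the grand mean; the study's exact computation may differ:

```python
import math

def sem_cv(x1, x2):
    """Test-retest SEM (SD of differences / sqrt(2)) and CV (% of grand mean)."""
    d = [a - b for a, b in zip(x1, x2)]
    n = len(d)
    mean_d = sum(d) / n
    # Sample SD of the test-retest differences (ddof = 1)
    sd_d = math.sqrt(sum((v - mean_d) ** 2 for v in d) / (n - 1))
    sem = sd_d / math.sqrt(2)
    grand_mean = (sum(x1) + sum(x2)) / (2 * n)
    return sem, 100.0 * sem / grand_mean
```

With ACSA values in cm2, `sem` is in cm2 and the second return value is the relative error (%), matching the units reported in the abstract.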
Affiliation(s)
- Alejandro Martínez-Cava
- Human Performance and Sports Science Laboratory, Faculty of Sport Sciences, University of Murcia, Murcia, Spain
- Jesús G Pallarés
- Human Performance and Sports Science Laboratory, Faculty of Sport Sciences, University of Murcia, Murcia, Spain
14
AI musculoskeletal clinical applications: how can AI increase my day-to-day efficiency? Skeletal Radiol 2022; 51:293-304. [PMID: 34341865 DOI: 10.1007/s00256-021-03876-8] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/28/2021] [Revised: 07/21/2021] [Accepted: 07/21/2021] [Indexed: 02/02/2023]
Abstract
Artificial intelligence (AI) is expected to bring greater efficiency to radiology by performing tasks that would otherwise require human intelligence, often at a much faster rate than humans. In recent years, milestone deep learning models with unprecedentedly low error rates and high computational efficiency have shown remarkable performance in lesion detection, classification, and segmentation tasks. However, the implications of the growing field of AI for radiology are not limited to visual tasks; they include essential applications for optimizing imaging workflows and improving noninterpretive tasks. This article offers an overview of the recent literature on AI, focusing on the musculoskeletal imaging chain, including initial patient scheduling, optimized protocoling, magnetic resonance imaging reconstruction, image enhancement, medical image-to-image translation, and AI-aided image interpretation. The substantial development of advanced algorithms, the emergence of massive quantities of medical data, and the interest of researchers and clinicians reveal the potential for growing applications of AI to augment the day-to-day efficiency of musculoskeletal radiologists.
15
Posilović L, Medak D, Subašić M, Budimir M, Lončarić S. Generating ultrasonic images indistinguishable from real images using Generative Adversarial Networks. ULTRASONICS 2022; 119:106610. [PMID: 34735930 DOI: 10.1016/j.ultras.2021.106610] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/15/2021] [Revised: 09/30/2021] [Accepted: 09/30/2021] [Indexed: 06/13/2023]
Abstract
Ultrasonic imaging is widely used for non-destructive evaluation in various industry applications. Early detection of defects in materials is the key to keeping the integrity of inspected structures. Currently, there have been some attempts to develop models for automated defect detection on ultrasonic data. To push the performance of these models even further, more data is needed to train deep convolutional neural networks. A lot of data is also needed for training human experts. However, gathering a sufficient amount of data for training is a challenge due to the rare occurrence of defects in real inspection scenarios. This is why inspection results heavily depend on the inspector's previous experience. To overcome these challenges, we propose the use of Generative Adversarial Networks for generating realistic ultrasonic images. To the best of our knowledge, this work is the first to show that a Generative Adversarial Network is able to generate images indistinguishable from real ultrasonic images. We conducted the most thorough statistical quality analysis of generated ultrasonic images to date, with the participation of human expert inspectors. The experimental results show that images generated using our Generative Adversarial Network are of higher quality than those produced by other published methods.
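As background on the adversarial training underlying this and the following entries, the standard discriminator loss and the non-saturating generator loss can be sketched as follows; this is the generic GAN formulation, not the authors' specific architecture:

```python
import math

def gan_losses(d_real, d_fake):
    """Binary cross-entropy GAN losses for discriminator outputs in (0, 1).

    d_real: discriminator score on a real image (ideally near 1)
    d_fake: discriminator score on a generated image (ideally near 0)
    """
    # Discriminator: reward high scores on real data, low scores on fakes
    d_loss = -(math.log(d_real) + math.log(1.0 - d_fake))
    # Non-saturating generator: reward fooling the discriminator
    g_loss = -math.log(d_fake)
    return d_loss, g_loss
```

Training alternates between minimizing `d_loss` over the discriminator's weights and `g_loss` over the generator's; "indistinguishable" samples correspond to discriminator scores near 0.5 for both real and fake inputs.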
Affiliation(s)
- Luka Posilović
- University of Zagreb, Faculty of Electrical Engineering and Computing, Zagreb, Croatia
- Duje Medak
- University of Zagreb, Faculty of Electrical Engineering and Computing, Zagreb, Croatia
- Marko Subašić
- University of Zagreb, Faculty of Electrical Engineering and Computing, Zagreb, Croatia
- Marko Budimir
- INETEC Institute for Nuclear Technology, Zagreb, Croatia
- Sven Lončarić
- University of Zagreb, Faculty of Electrical Engineering and Computing, Zagreb, Croatia
16
Mutepfe F, Kalejahi BK, Meshgini S, Danishvar S. Generative Adversarial Network Image Synthesis Method for Skin Lesion Generation and Classification. JOURNAL OF MEDICAL SIGNALS & SENSORS 2021; 11:237-252. [PMID: 34820296 PMCID: PMC8588886 DOI: 10.4103/jmss.jmss_53_20] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2020] [Revised: 09/29/2020] [Accepted: 01/01/2021] [Indexed: 11/16/2022]
Abstract
Background: A common limitation in the treatment of cancer is the difficulty of early detection. The customary medical practice of cancer examination is a visual examination by the dermatologist followed by an invasive biopsy. Nonetheless, this diagnostic approach is time-consuming and prone to human error. An automated machine learning model is essential to enable fast diagnosis and early treatment. Objective: The key objective of this study is to establish a fully automatic model that helps dermatologists in the skin cancer handling process in a way that improves skin lesion classification accuracy. Method: The work is conducted as an implementation of a Deep Convolutional Generative Adversarial Network (DCGAN) using the Python-based deep learning library Keras. We incorporated effective image filtering and enhancement algorithms, such as the bilateral filter, to enhance feature detection and extraction during training. The DCGAN required some additional fine-tuning to yield better returns. Hyperparameter optimization was used to select the best-performing combinations of several network hyperparameters. In this work, we decreased the learning rate from the default 0.001 to 0.0002 and the momentum for the Adam optimization algorithm from 0.9 to 0.5 to reduce the instability issues associated with GAN models, and at each iteration the weights of the discriminative and generative networks were updated to balance the loss between them. We address a binary classification task predicting the two classes present in our dataset, namely benign and malignant. Moreover, well-known metrics such as the receiver operating characteristic area under the curve (ROC-AUC) and the confusion matrix were used to evaluate the results and classification accuracy.
Results: The model generated very plausible lesions during the early stages of the experiment, and we could easily visualise a smooth transition in resolution along the way. We achieved an overall test accuracy of 93.5% after fine-tuning most parameters of our network. Conclusion: This classification model provides spatial intelligence that could be useful for future cancer risk prediction. Unfortunately, it remains difficult to generate high-quality images closely resembling real samples, and to compare different classification methods, given that some methods use non-public datasets for training.
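The hyperparameter changes described in the Method section (learning rate 0.0002, Adam momentum β1 = 0.5) can be made concrete with a minimal scalar Adam update. This sketch illustrates the update rule only; it is not the authors' Keras code:

```python
import math

def adam_step(theta, grad, m, v, t, lr=2e-4, beta1=0.5, beta2=0.999, eps=1e-8):
    """One scalar Adam update with GAN-friendly settings (lr=2e-4, beta1=0.5)."""
    t += 1
    # Biased first and second moment estimates
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad
    # Bias correction
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Parameter update
    theta -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v, t
```

Lowering β1 from 0.9 to 0.5 shortens the gradient "memory" of the first-moment estimate, a widely reported stabilizer for GAN training (popularized by the original DCGAN paper).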
Affiliation(s)
- Freedom Mutepfe
- Department of Computer Science and Engineering, School of Science and Engineering, Khazar University, Baku, Azerbaijan
- Behnam Kiani Kalejahi
- Department of Computer Science and Engineering, School of Science and Engineering, Khazar University, Baku, Azerbaijan
- Department of Biomedical Engineering, Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz, Iran
- Saeed Meshgini
- Department of Biomedical Engineering, Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz, Iran
- Sebelan Danishvar
- Department of Electronic and Computer Engineering, Brunel University, London, UK
17
Modanwal G, Vellal A, Mazurowski MA. Normalization of breast MRIs using cycle-consistent generative adversarial networks. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 208:106225. [PMID: 34198016 DOI: 10.1016/j.cmpb.2021.106225] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/25/2021] [Accepted: 05/29/2021] [Indexed: 06/13/2023]
Abstract
OBJECTIVES Dynamic Contrast Enhanced-Magnetic Resonance Imaging (DCE-MRI) is widely used to complement ultrasound examinations and x-ray mammography for early detection and diagnosis of breast cancer. However, images generated by various MRI scanners (e.g., GE Healthcare and Siemens) differ both in intensity and noise distribution, preventing algorithms trained on MRIs from one scanner from generalizing to data from other scanners. In this work, we propose a method to solve this problem by normalizing images between various scanners. METHODS MRI normalization is challenging because it requires normalizing intensity values and mapping noise distributions between scanners. We utilize a cycle-consistent generative adversarial network to learn a bidirectional mapping and perform normalization between MRIs produced by GE Healthcare and Siemens scanners in an unpaired setting. Initial experiments demonstrate that the traditional CycleGAN architecture struggles to preserve the anatomical structures of the breast during normalization. Thus, we propose two technical innovations to preserve both the shape of the breast and the tissue structures within it. First, we incorporate mutual information loss during training to ensure anatomical consistency. Second, we propose a modified discriminator architecture that utilizes a smaller field-of-view to ensure the preservation of finer details in the breast tissue. RESULTS Quantitative and qualitative evaluations show that the second innovation consistently preserves the breast shape and tissue structures while also performing the proper intensity normalization and noise distribution mapping. CONCLUSION Our results demonstrate that the proposed model can successfully learn a bidirectional mapping and perform normalization between MRIs produced by different vendors, potentially enabling improved diagnosis and detection of breast cancer.
All the data used in this study are publicly available at https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=70226903.
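For orientation, the cycle-consistency idea at the heart of CycleGAN can be sketched as a weighted L1 penalty on the reconstruction after a round trip through both generators. The λ = 10 weight below is the common CycleGAN default and an assumption here, not a value taken from the paper:

```python
def cycle_consistency_loss(x, x_rec, lam=10.0):
    """Weighted L1 cycle loss: lam * mean |x - G_BA(G_AB(x))|.

    x:     original image (flattened pixel sequence)
    x_rec: reconstruction after mapping A->B and back B->A
    """
    return lam * sum(abs(a - b) for a, b in zip(x, x_rec)) / len(x)
```

Minimizing this term forces the scanner-to-scanner mappings to be approximately invertible, which is what lets CycleGAN train on unpaired GE/Siemens images; the paper's mutual-information loss is an additional constraint layered on top of this.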
Affiliation(s)
- Adithya Vellal
- Department of Computer Science, Duke University, Durham, NC, USA
18
Abdelmotaal H, Abdou AA, Omar AF, El-Sebaity DM, Abdelazeem K. Pix2pix Conditional Generative Adversarial Networks for Scheimpflug Camera Color-Coded Corneal Tomography Image Generation. Transl Vis Sci Technol 2021; 10:21. [PMID: 34132759 PMCID: PMC8242686 DOI: 10.1167/tvst.10.7.21] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/03/2023] Open
Abstract
Purpose To assess the ability of the pix2pix conditional generative adversarial network (pix2pix cGAN) to create plausible synthesized Scheimpflug camera color-coded corneal tomography images from a modest-sized original dataset, for use in image augmentation when training a deep convolutional neural network (DCNN) for classification of keratoconus and normal corneal images. Methods Original images of 1778 eyes of 923 nonconsecutive patients with or without keratoconus were retrospectively analyzed. Images were labeled and preprocessed for use in training the proposed pix2pix cGAN. The best quality synthesized images were selected based on the Fréchet inception distance score, and their quality was studied by calculating the mean square error, structural similarity index, and peak signal-to-noise ratio. We used original, traditionally augmented original, and synthesized images to train a DCNN for image classification and compared classification performance metrics. Results The pix2pix cGAN synthesized images showed plausible quality on both subjective and objective assessment. Training the DCNN with a combination of real and synthesized images allowed better classification performance compared with training using original images only or with traditional augmentation. Conclusions Using the pix2pix cGAN to synthesize corneal tomography images can overcome issues related to small datasets and class imbalance when training computer-aided diagnostic models. Translational Relevance Pix2pix cGAN can provide an unlimited supply of plausible synthetic Scheimpflug camera color-coded corneal tomography images at levels useful for experimental and clinical applications.
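The Fréchet inception distance (FID) used here to select the best synthesized images reduces, in the one-dimensional Gaussian case, to a simple closed form. The sketch below shows only that scalar case (real FID uses the multivariate mean and covariance of Inception-v3 features), as a way to see what the score measures:

```python
import math

def fid_1d(mu1, var1, mu2, var2):
    """Fréchet distance between two 1-D Gaussians (the scalar case of FID).

    Equals (mu1 - mu2)^2 + (sqrt(var1) - sqrt(var2))^2, so it is zero
    only when both the means and the variances of the two distributions match.
    """
    return (mu1 - mu2) ** 2 + var1 + var2 - 2 * math.sqrt(var1 * var2)
```

Lower is better: a generator whose feature statistics match the real data's mean and spread scores near zero, which is why the authors used FID to rank candidate synthesized images.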
Affiliation(s)
- Hazem Abdelmotaal
- Department of Ophthalmology, Faculty of Medicine, Assiut University, Assiut, Egypt
- Ahmed A Abdou
- Department of Ophthalmology, Faculty of Medicine, Assiut University, Assiut, Egypt
- Ahmed F Omar
- Department of Ophthalmology, Faculty of Medicine, Assiut University, Assiut, Egypt
- Khaled Abdelazeem
- Department of Ophthalmology, Faculty of Medicine, Assiut University, Assiut, Egypt
19
Rosa LG, Zia JS, Inan OT, Sawicki GS. Machine learning to extract muscle fascicle length changes from dynamic ultrasound images in real-time. PLoS One 2021; 16:e0246611. [PMID: 34038426 PMCID: PMC8153491 DOI: 10.1371/journal.pone.0246611] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2021] [Accepted: 04/20/2021] [Indexed: 12/15/2022] Open
Abstract
BACKGROUND AND OBJECTIVE Dynamic muscle fascicle length measurements through B-mode ultrasound have become popular for the non-invasive physiological insights they provide regarding musculoskeletal structure-function. However, current practices typically require time-consuming post-processing to track muscle length changes from B-mode images. A real-time measurement tool would not only save processing time but would also help pave the way toward closed-loop applications based on feedback signals driven by in vivo muscle length change patterns. In this paper, we benchmark an approach that combines traditional machine learning (ML) models with B-mode ultrasound recordings to obtain muscle fascicle length changes in real-time. To gauge the utility of this framework for 'in-the-loop' applications, we evaluate the accuracy of the extracted muscle length change signals against time series derived from a standard, post-hoc automated tracking algorithm. METHODS We collected B-mode ultrasound data from the soleus muscle of six participants performing five defined ankle motion tasks: (a) seated, constrained ankle plantarflexion; (b) seated, free ankle dorsi/plantarflexion; (c) weight-bearing calf raises; (d) walking; and (e) a mix of these tasks. We trained ML models by pairing muscle fascicle lengths obtained from standardized automated tracking software (UltraTrack) with the respective B-mode ultrasound image input to the tracker, frame by frame. We then conducted hyperparameter optimization for five different ML models using a grid search to find the best-performing parameters for a combination of high correlation and low RMSE between ML- and UltraTrack-processed muscle fascicle length trajectories.
Finally, using the global best model/hyperparameter settings, we comprehensively evaluated training-testing outcomes within subject (i.e., train and test on the same subject), cross subject (i.e., train on one subject, test on another), and within/direct cross task (i.e., train and test on the same subject, but on different tasks). RESULTS The support vector machine (SVM) was the best-performing model, with an average r = 0.70 ± 0.34 and average RMSE = 2.86 ± 2.55 mm across all direct training conditions, and an average r = 0.65 ± 0.35 and average RMSE = 3.28 ± 2.64 mm when optimized for all cross-participant conditions. Comparisons of ML-tracked versus UltraTrack (i.e., ground truth) muscle fascicle length over time indicated that the ML approach reliably captures the salient qualitative features of the ground-truth length change data, even when correlation values are on the lower end. Furthermore, in the direct training, calf raises condition, which is most comparable to previous studies validating automated tracking performance during isolated contractions on a dynamometer, our ML approach yielded an average correlation of 0.90, in line with other accepted tracking methods in the field. CONCLUSIONS By combining B-mode ultrasound and classical ML models, we demonstrate that it is possible to achieve real-time tracking of human soleus muscle fascicles across a number of functionally relevant contractile conditions. This novel sensing modality paves the way for muscle-physiology-in-the-loop applications that could be used to modify gait via biofeedback or unlock novel wearable device control techniques enabling restored or augmented locomotion performance.
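The correlation and RMSE criteria used above to compare ML-derived and UltraTrack-derived length trajectories can be sketched in pure Python (illustrative helper functions, not the authors' code):

```python
import math

def rmse(y, p):
    """Root-mean-square error between two equal-length signals."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, p)) / len(y))

def pearson_r(y, p):
    """Pearson correlation coefficient between two equal-length signals."""
    n = len(y)
    my, mp = sum(y) / n, sum(p) / n
    cov = sum((a - my) * (b - mp) for a, b in zip(y, p))
    sy = math.sqrt(sum((a - my) ** 2 for a in y))
    sp = math.sqrt(sum((b - mp) ** 2 for b in p))
    return cov / (sy * sp)
```

Applied per trial to the two fascicle-length time series (in mm), these yield the r and RMSE values reported in the Results; the grid search selects the hyperparameters that jointly maximize r and minimize RMSE.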
Affiliation(s)
- Luis G. Rosa
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, United States of America
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia, United States of America
- Jonathan S. Zia
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia, United States of America
- Emory University School of Medicine, Atlanta, Georgia, United States of America
- Omer T. Inan
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia, United States of America
- Gregory S. Sawicki
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, United States of America
- School of Biological Sciences, Georgia Institute of Technology, Atlanta, Georgia, United States of America
20
Shin Y, Yang J, Lee YH. Deep Generative Adversarial Networks: Applications in Musculoskeletal Imaging. Radiol Artif Intell 2021; 3:e200157. [PMID: 34136816 PMCID: PMC8204145 DOI: 10.1148/ryai.2021200157] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2020] [Revised: 02/10/2021] [Accepted: 02/16/2021] [Indexed: 12/12/2022]
Abstract
In recent years, deep learning techniques have been applied in musculoskeletal radiology to increase the diagnostic potential of acquired images. Generative adversarial networks (GANs), which are deep neural networks that can generate or transform images, have the potential to aid in faster imaging by generating images with a high level of realism across multiple contrast and modalities from existing imaging protocols. This review introduces the key architectures of GANs as well as their technical background and challenges. Key research trends are highlighted, including: (a) reconstruction of high-resolution MRI; (b) image synthesis with different modalities and contrasts; (c) image enhancement that efficiently preserves high-frequency information suitable for human interpretation; (d) pixel-level segmentation with annotation sharing between domains; and (e) applications to different musculoskeletal anatomies. In addition, an overview is provided of the key issues wherein clinical applicability is challenging to capture with conventional performance metrics and expert evaluation. When clinically validated, GANs have the potential to improve musculoskeletal imaging. Keywords: Adults and Pediatrics, Computer Aided Diagnosis (CAD), Computer Applications-General (Informatics), Informatics, Skeletal-Appendicular, Skeletal-Axial, Soft Tissues/Skin © RSNA, 2021.
Affiliation(s)
- YiRang Shin
- From the Department of Radiology, Research Institute of Radiological Science, and Center for Clinical Imaging Data Science (CCIDS), Yonsei University College of Medicine, 250 Seongsanno, Seodaemun-gu, Seoul 220-701, Republic of Korea (Y.S., J.Y., Y.H.L.); Systems Molecular Radiology at Yonsei (SysMolRaY), Seoul, Republic of Korea (J.Y.); and Severance Biomedical Science Institute (SBSI), Yonsei University College of Medicine, Seoul, Republic of Korea (J.Y.)
- Jaemoon Yang
- From the Department of Radiology, Research Institute of Radiological Science, and Center for Clinical Imaging Data Science (CCIDS), Yonsei University College of Medicine, 250 Seongsanno, Seodaemun-gu, Seoul 220-701, Republic of Korea (Y.S., J.Y., Y.H.L.); Systems Molecular Radiology at Yonsei (SysMolRaY), Seoul, Republic of Korea (J.Y.); and Severance Biomedical Science Institute (SBSI), Yonsei University College of Medicine, Seoul, Republic of Korea (J.Y.)
- Young Han Lee
- From the Department of Radiology, Research Institute of Radiological Science, and Center for Clinical Imaging Data Science (CCIDS), Yonsei University College of Medicine, 250 Seongsanno, Seodaemun-gu, Seoul 220-701, Republic of Korea (Y.S., J.Y., Y.H.L.); Systems Molecular Radiology at Yonsei (SysMolRaY), Seoul, Republic of Korea (J.Y.); and Severance Biomedical Science Institute (SBSI), Yonsei University College of Medicine, Seoul, Republic of Korea (J.Y.)
21
Herskovits EH. Artificial intelligence in molecular imaging. ANNALS OF TRANSLATIONAL MEDICINE 2021; 9:824. [PMID: 34268437 PMCID: PMC8246206 DOI: 10.21037/atm-20-6191] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 08/30/2020] [Accepted: 11/27/2020] [Indexed: 12/16/2022]
Abstract
AI has, to varying degrees, affected all aspects of molecular imaging, from image acquisition to diagnosis. During the last decade, the advent of deep learning in particular has transformed medical image analysis. Although the majority of recent advances have resulted from neural-network models applied to image segmentation, a broad range of techniques has shown promise for image reconstruction, image synthesis, differential-diagnosis generation, and treatment guidance. Applications of AI for drug design indicate the way forward for using AI to facilitate molecular-probe design, which is still in its early stages. Deep-learning models have demonstrated increased efficiency and image quality for PET reconstruction from sinogram data. Generative adversarial networks (GANs), which are paired neural networks that are jointly trained to generate and classify images, have found applications in modality transformation, artifact reduction, and synthetic-PET-image generation. Some AI applications, based either partly or completely on neural-network approaches, have demonstrated superior differential-diagnosis generation relative to radiologists. However, AI models have a history of brittleness, and physicians and patients may not trust AI applications that cannot explain their reasoning. To date, the majority of molecular-imaging applications of AI have been confined to research projects, and are only beginning to find their way into routine clinical workflows via commercialization and, in some cases, integration into scanner hardware. Evaluation of actual clinical products will yield more realistic assessments of AI's utility in molecular imaging.
Affiliation(s)
- Edward H Herskovits
- Department of Diagnostic Radiology and Nuclear Medicine, The University of Maryland, Baltimore, School of Medicine, Baltimore, MD, USA