1
Merasli A, Ponsi B, Millardet M, Carlier T, Stute S. Comment on 'A PET reconstruction formulation that enforces non-negativity in projection space for bias reduction in Y-90 imaging'. Phys Med Biol 2024; 69:208001. PMID: 39351704. DOI: 10.1088/1361-6560/ad7e75.
Abstract
We read with great interest the paper by Lim et al (2018 Phys. Med. Biol. 63 035042) on bias reduction in Y-90 PET imaging. In particular, they proposed a new formulation of the tomographic reconstruction problem that enforces non-negativity in projection space rather than in image space. We comment on the algorithm they derived from this formulation and clarify which constraint this algorithm actually respects.
Affiliation(s)
- Alexandre Merasli
- CRCI2NA, INSERM, CNRS, Université d'Angers, Université de Nantes, Nantes, France
- Siemens Healthineers, Courbevoie, France
- Brunnhilde Ponsi
- CRCI2NA, INSERM, CNRS, Université d'Angers, Université de Nantes, Nantes, France
- Nuclear Medicine Department, University Hospital, Nantes, France
- Maël Millardet
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, United States of America
- Siemens Medical Solutions USA, Inc., Knoxville, TN, United States of America
- Thomas Carlier
- CRCI2NA, INSERM, CNRS, Université d'Angers, Université de Nantes, Nantes, France
- Nuclear Medicine Department, University Hospital, Nantes, France
- Simon Stute
- CRCI2NA, INSERM, CNRS, Université d'Angers, Université de Nantes, Nantes, France
- Nuclear Medicine Department, University Hospital, Nantes, France
2
Ko S, Zhou H, Zhou JJ, Won JH. High-Performance Statistical Computing in the Computing Environments of the 2020s. Stat Sci 2022; 37:494-518. PMID: 37168541. PMCID: PMC10168006. DOI: 10.1214/21-sts835.
Abstract
Technological advances in the past decade, hardware and software alike, have made access to high-performance computing (HPC) easier than ever. We review these advances from a statistical computing perspective. Cloud computing makes access to supercomputers affordable. Deep learning software libraries make programming statistical algorithms easy and enable users to write code once and run it anywhere, from a laptop to a workstation with multiple graphics processing units (GPUs) or a supercomputer in a cloud. Highlighting how these developments benefit statisticians, we review recent optimization algorithms that are useful for high-dimensional models and can harness the power of HPC. Code snippets are provided to demonstrate the ease of programming. We also provide an easy-to-use distributed matrix data structure suitable for HPC. Employing this data structure, we illustrate various statistical applications including large-scale positron emission tomography and ℓ1-regularized Cox regression. Our examples easily scale up to an 8-GPU workstation and a 720-CPU-core cluster in a cloud. As a case in point, we analyze the onset of type-2 diabetes from the UK Biobank with 200,000 subjects and about 500,000 single nucleotide polymorphisms using the HPC ℓ1-regularized Cox regression. Fitting this half-million-variate model takes less than 45 minutes and reconfirms known associations. To our knowledge, this is the first demonstration of the feasibility of penalized regression of survival outcomes at this scale.
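As a rough illustration of the abstract's "write code once and run it anywhere" point, the sketch below fits an ℓ1-regularized Cox model by proximal gradient descent on PyTorch tensors, so the same code runs on a CPU or a GPU simply by changing the device of the inputs. This is not the authors' MM-based algorithm or their distributed matrix data structure; the function names, step size, and toy data are hypothetical.

```python
import torch

def neg_cox_loglik(beta, X, time, event):
    """Breslow negative partial log-likelihood (no special tie handling)."""
    order = torch.argsort(time, descending=True)   # risk set of subject i = first i+1 rows
    X, event = X[order], event[order]
    eta = X @ beta
    log_risk = torch.logcumsumexp(eta, dim=0)      # log sum over the risk set
    return -((eta - log_risk) * event).sum()

def soft_threshold(z, t):
    return torch.sign(z) * torch.clamp(z.abs() - t, min=0.0)

def fit_l1_cox(X, time, event, lam=0.1, step=1e-3, iters=500):
    beta = torch.zeros(X.shape[1], device=X.device, requires_grad=True)
    for _ in range(iters):
        loss = neg_cox_loglik(beta, X, time, event)
        grad, = torch.autograd.grad(loss, beta)
        with torch.no_grad():
            beta = soft_threshold(beta - step * grad, step * lam)  # proximal step for the l1 penalty
        beta.requires_grad_(True)
    return beta.detach()

# the same code runs on CPU or GPU depending only on where the inputs live
device = "cuda" if torch.cuda.is_available() else "cpu"
n, p = 1000, 200
X = torch.randn(n, p, device=device)
time = torch.rand(n, device=device)
event = (torch.rand(n, device=device) < 0.7).float()
beta_hat = fit_l1_cox(X, time, event)
```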
Affiliation(s)
- Seyoon Ko
- Department of Biostatistics, UCLA Fielding School of Public Health, Los Angeles, California 90095, USA
- Hua Zhou
- Department of Biostatistics, UCLA Fielding School of Public Health, Los Angeles, California 90095, USA
- Jin J Zhou
- Department of Medicine, UCLA David Geffen School of Medicine, Los Angeles, California 90095, USA, and Department of Epidemiology and Biostatistics, Mel and Enid Zuckerman College of Public Health, University of Arizona, Tucson, Arizona 85724, USA
- Joong-Ho Won
- Department of Statistics, Seoul National University, Seoul, Korea
3
Liu Z, Moon HS, Li Z, Laforest R, Perlmutter JS, Norris SA, Jha AK. A tissue-fraction estimation-based segmentation method for quantitative dopamine transporter SPECT. Med Phys 2022; 49:5121-5137. PMID: 35635327. PMCID: PMC9703616. DOI: 10.1002/mp.15778.
Abstract
BACKGROUND: Quantitative measures of dopamine transporter (DaT) uptake in the caudate, putamen, and globus pallidus (GP), derived from dopamine transporter single-photon emission computed tomography (DaT-SPECT) images, have potential as biomarkers for measuring the severity of Parkinson's disease. Reliable quantification of this uptake requires accurate segmentation of the considered regions. However, segmenting these regions from DaT-SPECT images is challenging, largely because of partial-volume effects (PVEs) in SPECT. The PVEs arise from two sources: the limited system resolution and the reconstruction of images over finite-sized voxel grids. The limited system resolution blurs the boundaries of the different regions. The finite voxel size leads to tissue-fraction effects (TFEs), that is, voxels that contain a mixture of regions. Thus, there is an important need for methods that account for the PVEs, including the TFEs, and accurately segment the caudate, putamen, and GP from DaT-SPECT images.
PURPOSE: To design and objectively evaluate a fully automated tissue-fraction estimation-based segmentation method that segments the caudate, putamen, and GP from DaT-SPECT images.
METHODS: The proposed method estimates the posterior mean of the fractional volumes occupied by the caudate, putamen, and GP within each voxel of a three-dimensional DaT-SPECT image. The estimate is obtained by minimizing a cost function based on the binary cross-entropy loss between the true and estimated fractional volumes over a population of SPECT images, where the distribution of true fractional volumes is obtained from existing populations of clinical magnetic resonance images. The method is implemented using a supervised deep-learning-based approach.
RESULTS: Evaluations using clinically guided, highly realistic simulation studies show that the proposed method accurately segmented the caudate, putamen, and GP, with high mean Dice similarity coefficients of ∼0.80, and significantly outperformed (p < 0.01) all other considered segmentation methods. Further, an objective evaluation of the proposed method on the task of quantifying regional uptake shows that it yielded reliable quantification, with a low ensemble normalized root mean square error (NRMSE) below 20% for all considered regions and of ∼10% for the caudate and putamen.
CONCLUSIONS: The proposed tissue-fraction estimation-based segmentation method for DaT-SPECT images accurately segmented the caudate, putamen, and GP, and reliably quantified the uptake within these regions. The results motivate further evaluation of the method with physical-phantom and patient studies.
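To make the loss construction concrete, below is a minimal sketch of a network that outputs per-voxel fractional volumes for the three regions and is trained with binary cross-entropy against soft (fractional) targets, as the abstract describes. The architecture, layer sizes, and toy data are hypothetical stand-ins, not the authors' network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical 3-D network: input SPECT volume -> per-voxel fractional volumes
# for caudate, putamen, and globus pallidus (3 output channels in [0, 1]).
class FractionalVolumeNet(nn.Module):
    def __init__(self, n_regions=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, n_regions, 1),
        )

    def forward(self, x):
        return torch.sigmoid(self.body(x))   # fractional volumes in [0, 1]

# Binary cross-entropy against *soft* targets, i.e. the true fractional volumes
# derived from segmented MR images (targets need not be 0/1).
def fractional_bce_loss(pred_fractions, true_fractions):
    return F.binary_cross_entropy(pred_fractions, true_fractions)

# toy shapes: batch of 2 SPECT volumes, 32^3 voxels
spect = torch.rand(2, 1, 32, 32, 32)
true_fv = torch.rand(2, 3, 32, 32, 32)       # stand-in for MR-derived fractional volumes
model = FractionalVolumeNet()
loss = fractional_bce_loss(model(spect), true_fv)
loss.backward()
```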
Affiliation(s)
- Ziping Liu
- Department of Biomedical Engineering, Washington University, St. Louis, Missouri, USA
- Hae Sol Moon
- Department of Biomedical Engineering, Washington University, St. Louis, Missouri, USA
- Zekun Li
- Department of Biomedical Engineering, Washington University, St. Louis, Missouri, USA
- Richard Laforest
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, Missouri, USA
- Joel S. Perlmutter
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, Missouri, USA
- Department of Neurology, Washington University School of Medicine, St. Louis, Missouri, USA
- Scott A. Norris
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, Missouri, USA
- Department of Neurology, Washington University School of Medicine, St. Louis, Missouri, USA
- Abhinav K. Jha
- Department of Biomedical Engineering, Washington University, St. Louis, Missouri, USA
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, Missouri, USA
4
Millardet M, Moussaoui S, Idier J, Mateus D, Conti M, Bailly C, Stute S, Carlier T. A Multiobjective Comparative Analysis of Reconstruction Algorithms in the Context of Low-Statistics 90Y-PET Imaging. IEEE Trans Radiat Plasma Med Sci 2022. DOI: 10.1109/trpms.2021.3126951.
Affiliation(s)
- Mael Millardet
- LS2N, CNRS UMR 6004, École centrale de Nantes, Nantes, France
- Said Moussaoui
- LS2N, CNRS UMR 6004, École centrale de Nantes, Nantes, France
- Jerome Idier
- LS2N, CNRS UMR 6004, École centrale de Nantes, Nantes, France
- Diana Mateus
- LS2N, CNRS UMR 6004, École centrale de Nantes, Nantes, France
- Maurizio Conti
- Physics Research Group, Siemens Medical Solutions USA, Inc., Knoxville, TN, USA
5
Lim H, Chun IY, Dewaraja YK, Fessler JA. Improved Low-Count Quantitative PET Reconstruction With an Iterative Neural Network. IEEE Trans Med Imaging 2020; 39:3512-3522. PMID: 32746100. PMCID: PMC7685233. DOI: 10.1109/tmi.2020.2998480.
Abstract
Image reconstruction in low-count PET is particularly challenging because gammas from natural radioactivity in Lu-based crystals cause high random fractions that lower the measurement signal-to-noise ratio (SNR). In model-based image reconstruction (MBIR), using more iterations of an unregularized method may increase the noise, so incorporating regularization into the image reconstruction is desirable to control the noise. New regularization methods based on learned convolutional operators are emerging in MBIR. We modify the architecture of an iterative neural network, BCD-Net, for PET MBIR, and demonstrate the efficacy of the trained BCD-Net using XCAT phantom data that simulate the low true-coincidence count rates and high random fractions typical of Y-90 PET patient imaging after Y-90 microsphere radioembolization. Numerical results show that the proposed BCD-Net significantly improves the contrast-to-noise ratio (CNR) and root mean square error (RMSE) of the reconstructed images compared with MBIR methods using non-trained regularizers, total variation (TV) and non-local means (NLM). Moreover, BCD-Net successfully generalizes to test data that differ from the training data. Improvements were also demonstrated for clinically relevant phantom measurement data, where the training and testing datasets had very different activity distributions and count levels.
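The sketch below shows the generic structure of a BCD-Net-style outer iteration for Poisson data: a denoising module produces an image prior u, and the image update maximizes the Poisson log-likelihood plus a quadratic penalty toward u using the standard EM surrogate, which has a per-voxel closed-form root. A fixed Gaussian filter stands in for the trained convolutional denoiser, and the system matrix, penalty weight, and toy data are hypothetical, so this is an assumption-laden illustration rather than the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def em_penalized_update(x, u, A, y, r, rho, n_inner=2):
    """Maximize Poisson log-likelihood - (rho/2)||x - u||^2 via a few EM-surrogate
    steps; each step has a nonnegative closed-form per-voxel solution."""
    s = A.T @ np.ones(A.shape[0])                      # sensitivity image A^T 1
    for _ in range(n_inner):
        e = x * (A.T @ (y / np.maximum(A @ x + r, 1e-12)))
        b = rho * u - s
        x = (b + np.sqrt(b ** 2 + 4.0 * rho * e)) / (2.0 * rho)
    return x

def bcd_net_like_recon(A, y, r, img_shape, n_outer=10, rho=1.0):
    x = np.ones(A.shape[1])
    for _ in range(n_outer):
        # denoising module: a fixed Gaussian filter stands in for the trained CNN
        u = gaussian_filter(x.reshape(img_shape), sigma=1.0).ravel()
        x = em_penalized_update(x, u, A, y, r, rho)
    return x

# toy 8x8 "phantom" and random nonnegative system matrix (hypothetical)
rng = np.random.default_rng(0)
img_shape = (8, 8)
x_true = np.abs(rng.normal(1.0, 0.5, size=64))
A = np.abs(rng.normal(size=(128, 64)))
r = np.full(128, 5.0)                                  # high randoms/background term
y = rng.poisson(A @ x_true + r).astype(float)
x_hat = bcd_net_like_recon(A, y, r, img_shape)
```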
6
Millardet M, Moussaoui S, Mateus D, Idier J, Carlier T. Local-Mean Preserving Post-Processing Step for Non-Negativity Enforcement in PET Imaging: Application to 90Y-PET. IEEE Trans Med Imaging 2020; 39:3725-3736. PMID: 32746117. DOI: 10.1109/tmi.2020.3003428.
Abstract
In a low-statistics PET imaging context, the positive bias in regions of low activity is a pressing issue. To overcome this problem, algorithms without the built-in non-negativity constraint may be used. They allow negative voxel values in the image, which reduce, or even cancel, the bias. However, such algorithms increase the variance and are difficult to interpret, since the resulting images contain negative activities, which have no physical meaning when dealing with radioactive concentrations. In this paper, a post-processing approach is proposed to remove these negative values while preserving the local mean activities. The key idea is to transfer the value of each voxel with negative activity to its direct neighbors under the constraint of preserving the local means of the image. In that respect, the proposed approach is formalized as a linear programming problem with a specific symmetric structure, which makes it solvable very efficiently by a dual-simplex-like iterative algorithm. The relevance of the proposed approach is discussed on simulated and experimental data. Acquired data from an yttrium-90 phantom show that, on images produced by a non-constrained algorithm, a much lower variance in the cold area is obtained after the post-processing step, at the price of a slightly increased bias. More specifically, when compared with the classical OSEM algorithm, images are improved in terms of both bias and variance.
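As a toy illustration of casting non-negativity enforcement with mean preservation as a linear program, the sketch below uses a generic LP solver on a 1-D signal with non-overlapping neighborhoods: it finds the closest non-negative signal (in the ℓ1 sense) whose neighborhood sums match those of the input. The paper's method differs (it transfers negative values to direct neighbors and exploits a symmetric structure with a dedicated dual-simplex-like solver); the block size and data here are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# toy 1-D "image" with a few negative voxels (e.g., output of an unconstrained reconstruction)
x = np.array([4.0, -1.0, 3.0, 0.5, -0.5, 2.0, 6.0, -2.0, 1.0])
block = 3                      # non-overlapping neighborhoods whose mean must be preserved
n = x.size

# variables z = [y, p, m]: corrected image y and positive/negative parts of y - x
# minimize sum(p + m)  s.t.  y - p + m = x,  block sums of y equal block sums of x,  z >= 0
c = np.concatenate([np.zeros(n), np.ones(n), np.ones(n)])

A_eq_split = np.hstack([np.eye(n), -np.eye(n), np.eye(n)])          # y - p + m = x
B = np.kron(np.eye(n // block), np.ones((1, block)))                # block-sum operator
A_eq_mean = np.hstack([B, np.zeros((B.shape[0], 2 * n))])           # preserve each block sum
A_eq = np.vstack([A_eq_split, A_eq_mean])
b_eq = np.concatenate([x, B @ x])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (3 * n), method="highs")
y = res.x[:n]
print("corrected image:", np.round(y, 3))
print("block sums preserved:", np.allclose(B @ y, B @ x))
```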
7
Chun SY, Nguyen MP, Phan TQ, Kim H, Fessler JA, Dewaraja YK. Algorithms and Analyses for Joint Spectral Image Reconstruction in Y-90 Bremsstrahlung SPECT. IEEE Trans Med Imaging 2020; 39:1369-1379. PMID: 31647425. PMCID: PMC7263381. DOI: 10.1109/tmi.2019.2949068.
Abstract
Quantitative yttrium-90 (Y-90) SPECT imaging is challenging because Y-90 is an almost pure beta emitter associated with a continuous spectrum of bremsstrahlung photons that have a relatively low yield. This paper proposes joint spectral reconstruction (JSR), a novel bremsstrahlung SPECT reconstruction method that uses multiple narrow acquisition windows, with accurate multi-band forward modeling, to cover a wide range of the energy spectrum. Theoretical analyses using Fisher information and Monte Carlo (MC) simulation with a digital phantom show that the proposed JSR model with multiple acquisition windows has better performance in terms of covariance (precision) than previous methods that use multi-band forward modeling with a single acquisition window, or single-band forward modeling with a single acquisition window. We also propose an energy-window subset (ES) algorithm for JSR to achieve fast empirical convergence, and maximum-likelihood-based initialization for all reconstruction methods to improve quantification accuracy in early iterations. For both the MC simulation with a digital phantom and an experimental study with a physical multi-sphere phantom, our proposed JSR-ES, a fast algorithm for JSR with ES, yielded higher recovery coefficients (RCs) on hot spheres over all iterations and sphere sizes than all the other evaluated methods, owing to its fast empirical convergence. In the experimental study, for the smallest hot sphere (1.6 cm diameter), the increase in RC with JSR-ES at the 20th iteration was 66% and 31% compared with the single wide-band and single narrow-band forward models, respectively. JSR-ES also yielded a lower residual count error (RCE) on a cold sphere over all iterations than the other methods for the MC simulation with known scatter, but led to a greater RCE than the single narrow-band forward model at higher iterations in the experimental study when estimated scatter was used.
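The energy-window subset idea can be pictured as an ordered-subsets-style EM loop in which each energy window plays the role of a subset, as in the hedged sketch below. This is a generic illustration under simplified assumptions, not the exact JSR-ES update; the matrices and data are synthetic.

```python
import numpy as np

def mlem_energy_window_subsets(A_w, y_w, r_w, n_iter=10):
    """Ordered-subsets-style EM where each energy window acts as one subset.

    A_w : list of system matrices, one per energy window (shape: n_bins x n_vox)
    y_w : list of measured projections, one per window
    r_w : list of expected background (scatter) terms, one per window
    """
    x = np.ones(A_w[0].shape[1])
    for _ in range(n_iter):
        for A, y, r in zip(A_w, y_w, r_w):        # cycle through energy windows
            sens = A.T @ np.ones(A.shape[0])       # subset sensitivity image
            ratio = y / np.maximum(A @ x + r, 1e-12)
            x = x * (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# tiny synthetic example with two energy windows
rng = np.random.default_rng(0)
x_true = np.abs(rng.normal(1.0, 0.5, size=16))
A_w = [np.abs(rng.normal(size=(32, 16))) for _ in range(2)]
r_w = [np.full(32, 0.1) for _ in range(2)]
y_w = [rng.poisson(A @ x_true + r).astype(float) for A, r in zip(A_w, r_w)]
x_hat = mlem_energy_window_subsets(A_w, y_w, r_w, n_iter=20)
```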
8
Haase V, Hahn K, Schöndube H, Stierstorfer K, Maier A, Noo F. Impact of the non-negativity constraint in model-based iterative reconstruction from CT data. Med Phys 2020; 46:e835-e854. PMID: 31811793. DOI: 10.1002/mp.13702.
Abstract
PURPOSE: Model-based iterative reconstruction is a promising approach to achieve dose reduction without affecting image quality in diagnostic x-ray computed tomography (CT). In the problem formulation, it is common to enforce non-negative values to accommodate the physical non-negativity of x-ray attenuation. Using this a priori information is believed to be beneficial in terms of image quality and convergence speed. However, enforcing non-negativity imposes limitations on the problem formulation and the choice of optimization algorithm. For these reasons, it is critical to understand the value of the non-negativity constraint. In this work, we present an investigation that sheds light on the impact of this constraint.
METHODS: We primarily focus our investigation on the examination of properties of the converged solution. To avoid any possibly confounding bias, the reconstructions are all performed using a provably converging algorithm started from a zero volume. To keep the computational cost manageable, an axial CT scanning geometry with narrow collimation is employed. The investigation is divided into five experimental studies that challenge the non-negativity constraint in various ways, including noise, beam hardening, parametric choices, truncation, and photon starvation. These studies are complemented by a sixth one that examines the effect of using ordered subsets to obtain a satisfactory approximate result within 50 iterations. All studies are based on real data, which come from three phantom scans and one clinical patient scan. The reconstructions with and without the non-negativity constraint are compared in terms of image similarity and convergence speed. In select cases, the image similarity evaluation is augmented with quantitative image quality metrics such as the noise power spectrum and closeness to a known ground truth.
RESULTS: For cases with moderate inconsistencies in the data, associated with noise and bone-induced beam hardening, our results show that the non-negativity constraint offers little benefit. By varying the regularization parameters in one of the studies, we observed that sufficient edge-preserving regularization tends to dilute the value of the constraint. For cases with strong data inconsistencies, the results are mixed: the constraint can be both beneficial and deleterious; in either case, however, the difference between using the constraint or not is small relative to the overall level of error in the image. The results with ordered subsets are encouraging in that they show similar observations. In terms of convergence speed, we only observed one major effect, in the study with data truncation; this effect favored the use of the constraint, but had no impact on our ability to obtain the converged solution without constraint.
CONCLUSIONS: Our results did not highlight the non-negativity constraint as being strongly beneficial for diagnostic CT imaging. Altogether, we thus conclude that in some imaging scenarios, the non-negativity constraint could be disregarded to simplify the optimization problem or to adopt other forward projection models that would otherwise require complex optimization machinery to be used together with non-negativity.
Affiliation(s)
- Viktor Haase
- Siemens Healthcare GmbH, Siemensstr. 3, 91301 Forchheim, Germany
- Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander-Universität Erlangen-Nürnberg, Martensstr. 3, 91058 Erlangen, Germany
- Katharina Hahn
- Siemens Healthcare GmbH, Siemensstr. 3, 91301 Forchheim, Germany
- Harald Schöndube
- Siemens Healthcare GmbH, Siemensstr. 3, 91301 Forchheim, Germany
- Andreas Maier
- Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander-Universität Erlangen-Nürnberg, Martensstr. 3, 91058 Erlangen, Germany
- Frédéric Noo
- Department of Radiology and Imaging Sciences, University of Utah, Salt Lake City, UT 84108, USA
9
Bousse A, Courdurier M, Emond E, Thielemans K, Hutton BF, Irarrazaval P, Visvikis D. PET Reconstruction With Non-Negativity Constraint in Projection Space: Optimization Through Hypo-Convergence. IEEE Trans Med Imaging 2020; 39:75-86. PMID: 31170066. DOI: 10.1109/tmi.2019.2920109.
Abstract
Standard positron emission tomography (PET) reconstruction techniques are based on maximum-likelihood (ML) optimization methods, such as the maximum-likelihood expectation-maximization (MLEM) algorithm and its variations. Most methodologies rely on a positivity constraint on the activity distribution image. Although this constraint is meaningful from a physical point of view, it can be a source of bias for low-count/high-background PET, which can compromise accurate quantification. Existing methods that allow for negative values in the estimated image usually utilize a modified log-likelihood and therefore break the data statistics. In this paper, we propose to incorporate the positivity constraint on the projections only, by approximating the (penalized) log-likelihood function by an adequate sequence of objective functions that are easily maximized without constraint. This sequence is constructed such that it hypo-converges (a type of convergence that, under some conditions, ensures convergence of the maximizers) to the original log-likelihood, hence allowing us to achieve maximization with a positivity constraint on the projections using simple settings. A complete proof of convergence under weak assumptions is given. We provide results of experiments on simulated data where we compare our methodology with the alternating direction method of multipliers (ADMM), showing that our algorithm converges to a maximizer that stays in the desired feasibility set, with faster convergence than ADMM. We also show that this approach reduces the bias, compared with MLEM images, in necrotic tumors (characterized by cold regions surrounded by hot structures) while reconstructing similar activity values in hot regions.
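For readers unfamiliar with the notion, hypo-convergence (the maximization counterpart of epi-convergence) of a sequence of objectives f_n to f can be stated as follows. This is standard background, not the specific surrogate construction used in the paper.

```latex
% Hypo-convergence of f_n to f (for maximization): for every x,
\begin{align*}
\text{(i)}\quad & \limsup_{n\to\infty} f_n(x_n) \le f(x) \quad \text{for every sequence } x_n \to x,\\
\text{(ii)}\quad & \exists\, x_n \to x \ \text{such that}\ \liminf_{n\to\infty} f_n(x_n) \ge f(x).
\end{align*}
```

Under suitable conditions (for instance, when the maximizers of the f_n eventually lie in a compact set), any cluster point of these maximizers maximizes f, which is what allows a sequence of unconstrained maximizations to approach the constrained ML solution.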
10
Scipioni M, Santarelli MF, Giorgetti A, Positano V, Landini L. Negative binomial maximum likelihood expectation maximization (NB-MLEM) algorithm for reconstruction of pre-corrected PET data. Comput Biol Med 2019; 115:103481. PMID: 31627018. DOI: 10.1016/j.compbiomed.2019.103481.
Abstract
PURPOSE: Positron emission tomography (PET) image reconstruction is usually performed using maximum likelihood (ML) iterative reconstruction methods, under the assumption of Poisson distributed data. When the raw measured counts are pre-corrected, this assumption is no longer realistic. The goal of this work is to develop a reconstruction algorithm based on the negative binomial (NB) distribution, which generalizes the Poisson distribution in case of over-dispersion of the raw data, as may occur when sinogram pre-correction is used.
METHODS: The mathematical derivation of a negative binomial maximum likelihood expectation-maximization (NB-MLEM) algorithm is presented. A simulation study was performed to compare the performance of the proposed NB-MLEM algorithm with a Poisson-based MLEM (P-MLEM) method in reconstructing PET data. The proposed NB-MLEM reconstruction was also tested on real phantom and human brain data.
RESULTS: Because the NB distribution generalizes the Poisson distribution, NB-MLEM generalizes the conventional P-MLEM: for data that are not over-dispersed, the proposed NB-MLEM algorithm behaves like the conventional P-MLEM; for over-dispersed PET data, the additional estimation of the dispersion parameter after each reconstruction iteration leads to a more accurate final image than P-MLEM.
CONCLUSIONS: A novel approach has been developed for PET image reconstruction from pre-corrected data, whose statistical behavior deviates from the Poisson distribution. The simulation study and preliminary tests on real data showed how the NB-MLEM algorithm, by accounting for the over-dispersion of pre-corrected data, can outperform algorithms such as P-MLEM that assume no over-dispersion, although it still does not account for the presence of negative data.
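For context, the mean-dispersion (NB2) parameterization of the negative binomial distribution shows why it generalizes the Poisson model for over-dispersed counts; this is the textbook form, and the paper's exact parameterization may differ.

```latex
% Negative binomial with mean mu and dispersion alpha (NB2 parameterization)
\begin{align*}
P(Y = y) &= \frac{\Gamma(y + 1/\alpha)}{\Gamma(1/\alpha)\, y!}
           \left(\frac{1}{1 + \alpha\mu}\right)^{1/\alpha}
           \left(\frac{\alpha\mu}{1 + \alpha\mu}\right)^{y}, \qquad y = 0, 1, 2, \dots\\
\mathbb{E}[Y] &= \mu, \qquad \operatorname{Var}(Y) = \mu + \alpha\mu^{2},
\end{align*}
% so Var(Y) > E[Y] for alpha > 0 (over-dispersion), and the Poisson(mu) model is
% recovered in the limit alpha -> 0.
```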
Affiliation(s)
- Michele Scipioni
- Dipartimento di Ingegneria dell'Informazione, University of Pisa, Pisa, Italy
- CNR Institute of Clinical Physiology, Via Moruzzi 1, 56124 Pisa, Italy
- Maria Filomena Santarelli
- CNR Institute of Clinical Physiology, Via Moruzzi 1, 56124 Pisa, Italy
- Fondazione Toscana "G. Monasterio", Via Moruzzi 1, 56124 Pisa, Italy
- Assuero Giorgetti
- Fondazione Toscana "G. Monasterio", Via Moruzzi 1, 56124 Pisa, Italy
- Vincenzo Positano
- Fondazione Toscana "G. Monasterio", Via Moruzzi 1, 56124 Pisa, Italy
- Luigi Landini
- Dipartimento di Ingegneria dell'Informazione, University of Pisa, Pisa, Italy
- Fondazione Toscana "G. Monasterio", Via Moruzzi 1, 56124 Pisa, Italy