1
Sadowski J, Stefanski J. A Selection of Starting Points for Iterative Position Estimation Algorithms Using Feedforward Neural Networks. Sensors (Basel) 2024; 24:332. [PMID: 38257425 PMCID: PMC10818289 DOI: 10.3390/s24020332]
Abstract
This article proposes the use of a feedforward neural network (FNN) to select the starting point for the first iteration in well-known iterative location estimation algorithms, with the research objective of finding the minimum size of a neural network that allows iterative position estimation algorithms to converge in an example positioning network. The selected algorithms for iterative position estimation, the structure of the neural network, and the way the FNN is used in the 2D and 3D position estimation process are presented. The most important results of the work are the parameters of various FNN structures that resulted in a 100% probability of convergence of iterative position estimation algorithms in the exemplary TDoA positioning network, as well as the average and maximum number of iterations, which give a general idea of the effectiveness of using neural networks to support the position estimation process. In all simulated scenarios, simple networks with a single hidden layer containing a dozen non-linear neurons turned out to be sufficient to solve the convergence problem.
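None of the paper's trained networks are reproduced here; the sketch below only illustrates the pipeline the abstract describes, with a tiny single-hidden-layer FNN (untrained, random weights) supplying the starting point for a Gauss-Newton position solver. The anchor geometry and the use of simple range (ToA) measurements instead of TDoA differences are illustrative assumptions:

```python
import numpy as np

# Hypothetical 100 m x 100 m positioning network with four anchors.
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
target = np.array([63.0, 27.0])
ranges = np.linalg.norm(anchors - target, axis=1)   # noise-free range measurements

def fnn_start(meas, W1, b1, W2, b2):
    """One-hidden-layer FNN mapping measurements to a starting point.
    In the paper the weights are trained offline; here they are stand-ins."""
    h = np.tanh(W1 @ meas + b1)
    return W2 @ h + b2

def gauss_newton(x0, iters=50):
    """Iterative position estimation from the FNN-supplied starting point."""
    x = x0.astype(float)
    for _ in range(iters):
        d = np.linalg.norm(anchors - x, axis=1)
        J = (x - anchors) / d[:, None]              # Jacobian of the range model
        r = d - ranges                              # residuals
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        x = x - step
        if np.linalg.norm(step) < 1e-9:
            break
    return x

# Untrained weights with a bias at the area centre, just to show the pipeline.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(12, 4)) * 0.01, np.zeros(12)
W2, b2 = rng.normal(size=(2, 12)), np.full(2, 50.0)
x0 = fnn_start(ranges, W1, b1, W2, b2)              # starting point for iteration 1
xhat = gauss_newton(x0)                             # refined position estimate
```

With a trained FNN the starting point would land close enough to the true position to guarantee convergence; here the random network merely biases the start toward the centre of the service area.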
Affiliation(s)
- Jaroslaw Sadowski
- Faculty of Electronics, Telecommunications and Informatics, Gdansk University of Technology, 80-233 Gdansk, Poland
2
Zeng GL. Neural network guided sinogram-domain iterative algorithm for artifact reduction. Med Phys 2023; 50:5410-5420. [PMID: 37278308 PMCID: PMC10529507 DOI: 10.1002/mp.16546]
Abstract
BACKGROUND Artifact reduction or removal is a challenging task when the physics of artifact creation is not well modeled mathematically. One such situation is metal artifacts in x-ray CT when the metallic material is unknown and the x-ray spectrum is wide. PURPOSE A neural network is used to act as the objective function for iterative artifact reduction when the artifact model is unknown. METHODS A hypothetical unpredictable projection data distortion model is used to illustrate the proposed approach. The model is unpredictable because it is controlled by a random variable. A convolutional neural network is trained to recognize the artifacts. The trained network is then used to compute the objective function for an iterative algorithm, which tries to reduce the artifacts in a computed tomography (CT) task. The objective function is evaluated in the image domain, while the iterative algorithm for artifact reduction operates in the projection domain. A gradient descent algorithm is used for the objective function optimization, and the associated gradient is calculated with the chain rule. RESULTS The learning curves illustrate the decreasing trend of the objective function as the number of iterations increases. The images after the iterative treatment show the reduction of artifacts. A quantitative metric, the sum of squared differences (SSD), also indicates the effectiveness of the proposed method. CONCLUSION The methodology of using a neural network as an objective function has potential value for situations where a human-developed model struggles to describe the underlying physics. Real-world applications are expected to benefit from this methodology.
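A minimal sketch of the optimization structure described in METHODS, with both the reconstruction operator and the trained CNN replaced by fixed linear stand-ins so that the chain-rule gradient is explicit; all sizes and operators are assumptions, not the paper's models:

```python
import numpy as np

rng = np.random.default_rng(0)
n_p, n_x = 24, 16
R = rng.normal(size=(n_x, n_p)) / np.sqrt(n_p)   # toy linear "reconstruction" operator
C = rng.normal(size=(8, n_x))                    # linear stand-in for the trained artifact scorer

def objective(p):
    x = R @ p                                    # objective evaluated in the image domain
    return 0.5 * np.sum((C @ x) ** 2)            # "artifact score" of the reconstructed image

def gradient(p):
    # Chain rule through the reconstruction: d/dp 0.5||C R p||^2 = R^T C^T C R p.
    return R.T @ (C.T @ (C @ (R @ p)))

H = R.T @ C.T @ C @ R
lr = 1.0 / np.linalg.eigvalsh(H).max()           # safe gradient-descent step size

p = rng.normal(size=n_p)                         # distorted projection data
start = objective(p)
for _ in range(2000):                            # iterative treatment in the projection domain
    p -= lr * gradient(p)
```

With a real CNN the gradient of the score with respect to the image would come from backpropagation; the chain-rule composition with the reconstruction operator is the same.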
Affiliation(s)
- Gengsheng L Zeng
- Department of Computer Science, Utah Valley University, Salt Lake City, Utah, USA
- Department of Radiology and Imaging Sciences, University of Utah, Salt Lake City, Utah, USA
3
Rappoport D, Bekoe S, Mohanam LN, Le S, George N, Shen Z, Furche F. Libkrylov: A modular open-source software library for extremely large on-the-fly matrix computations. J Comput Chem 2023; 44:1105-1118. [PMID: 36636945 DOI: 10.1002/jcc.27068]
Abstract
We present the design and implementation of libkrylov, an open-source library for solving matrix-free eigenvalue, linear, and shifted linear equations using Krylov subspace methods. The primary objectives of libkrylov are flexible API design and modular structure, which enable integration with specialized matrix-vector evaluation "engines." Libkrylov features pluggable preconditioning, orthonormalization, and tunable convergence control. Diagonal (conjugate gradient, CG), Davidson, and Jacobi-Davidson preconditioners are available, along with orthonormal and nonorthonormal (nKs) schemes. All functionality of libkrylov is exposed via Fortran and C application programming interfaces (APIs). We illustrate the performance of libkrylov for eigenvalue calculations arising in time-dependent density functional theory (TDDFT) in the Tamm-Dancoff approximation (TDA) and discuss the convergence behavior as a function of preconditioning and orthonormalization methods.
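The callback-style, matrix-free interface the abstract describes can be illustrated with a plain conjugate-gradient solver that touches the matrix only through a user-supplied "engine"; this is a generic sketch, not the libkrylov API:

```python
import numpy as np

def cg(matvec, b, x0=None, tol=1e-10, maxiter=200):
    """Matrix-free conjugate gradient: the caller supplies only a matvec
    'engine', mirroring the engine-integration style the library describes."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# SPD test system defined only through its action on a vector;
# the matrix is never formed explicitly.
n = 50
diag = np.linspace(1.0, 10.0, n)
matvec = lambda v: diag * v
b = np.ones(n)
x = cg(matvec, b)
```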
Affiliation(s)
- Dmitrij Rappoport
- Department of Chemistry, University of California Irvine, Irvine, California, USA
- Samuel Bekoe
- Department of Chemistry, University of California Irvine, Irvine, California, USA
- Luke Nambi Mohanam
- Department of Chemistry, University of California Irvine, Irvine, California, USA
- Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts, USA
- Scott Le
- Department of Chemistry, University of California Irvine, Irvine, California, USA
- Naje' George
- Department of Chemistry, University of California Irvine, Irvine, California, USA
- Ziyue Shen
- Department of Chemistry, University of California Irvine, Irvine, California, USA
- STA Pharmaceutical, San Diego, California, USA
- Filipp Furche
- Department of Chemistry, University of California Irvine, Irvine, California, USA
4
Zhang M, Young GS, Tie Y, Gu X, Xu X. A New Framework of Designing Iterative Techniques for Image Deblurring. Pattern Recognit 2022; 124:108463. [PMID: 34949896 PMCID: PMC8691531 DOI: 10.1016/j.patcog.2021.108463]
Abstract
In this work we present a framework for designing iterative techniques for image deblurring as an inverse problem. The new framework is based on two observations about existing methods. We used the Landweber method as the basis to develop and present the new framework, but note that the framework is applicable to other iterative techniques. First, we observed that the iterative steps of the Landweber method contain a constant term, which is a low-pass filtered version of the already blurry observation; we proposed a modification to use the observed image directly. Second, we observed that the Landweber method uses an estimate of the true image as the starting point. This estimate, however, does not get updated over iterations; we proposed a modification that updates this estimate as the iterative process progresses. We integrated the two modifications into one framework of iteratively deblurring images. Finally, we tested the new method and compared its performance with several existing techniques, including the Landweber method, the Van Cittert method, GMRES (generalized minimal residual method), and LSQR (least squares), to demonstrate its superior performance in image deblurring.
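As a reference point for the two modifications, the unmodified Landweber iteration x_{k+1} = x_k + tau * H^T (y - H x_k) can be sketched on a toy 1-D circulant blur; the kernel, sizes, and step size are illustrative assumptions, not the paper's setup:

```python
import numpy as np

n = 32
kernel = np.array([0.25, 0.5, 0.25])          # toy 1-D blur, circulant boundary
H = np.zeros((n, n))
for i in range(n):
    for j, w in zip((i - 1, i, i + 1), kernel):
        H[i, j % n] += w

x_true = np.zeros(n)
x_true[10:20] = 1.0                            # sharp signal
y = H @ x_true                                 # blurry observation

tau = 1.0 / np.linalg.norm(H, 2) ** 2          # step size within the stability bound 2/||H||^2
x = y.copy()                                   # estimate initialized from the observation
for _ in range(2000):
    x = x + tau * (H.T @ (y - H @ x))          # Landweber update
```

The paper's framework modifies both the constant term and the way the initial estimate is carried through the iterations; the plain update above is only the baseline those modifications start from.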
Affiliation(s)
- Min Zhang
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Geoffrey S Young
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Yanmei Tie
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Xianfeng Gu
- Department of Computer Science, Stony Brook University, Stony Brook, NY, USA
- Xiaoyin Xu
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
5
Guenter M, Collins S, Ogilvy A, Hare W, Jirasek A. Superiorization versus regularization: A comparison of algorithms for solving image reconstruction problems with applications in computed tomography. Med Phys 2021; 49:1065-1082. [PMID: 34813106 DOI: 10.1002/mp.15373]
Abstract
PURPOSE A system matrix can be built in order to account for the refractions in an optical computed tomography (CT) system. In order to utilize this system matrix, iterative methods are employed to solve the image reconstruction problem. The purpose of this study is to compare potential iterative algorithms to solve this image reconstruction problem. Comparisons examine both solution time and the quality of the reconstructed image. While our work is motivated by optical CT, the results can be extended more generally to CT. METHODS A collection of 21 algorithms for solving the image reconstruction problem were evaluated. Specifically, algorithms using (i) superiorization techniques and (ii) regularization to avoid overfitting were compared. Multiple test problems are investigated using 18 different image phantoms, parallel-beam and fan-beam system matrices, and varying noise levels. Comparison of the algorithms is done using performance profiles on three different performance measures. RESULTS The results for both the synthetic and clinical test problems show that there is not one single algorithm outperforming all others, but instead a set of top algorithms that give the best values on the performance profiles. When qualitative analyses such as reliance on stopping conditions, number of input parameters, and run time are also considered, FISTA-TV shows slight advantages over the other top algorithms. CONCLUSIONS There is a set of top algorithms that all show good results in the performance profiles, with a mix of superiorized and regularized model algorithms. Which of these top algorithms outperforms the rest remains undetermined and calls for further research.
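The performance profiles used for the comparison (Dolan-Moré style) are easy to reproduce generically; the timing table below is hypothetical, not data from the study:

```python
import numpy as np

# Hypothetical measurement table: rows = test problems, columns = algorithms.
T = np.array([[1.0, 2.0, 4.0],
              [3.0, 1.5, 3.0],
              [2.0, 2.0, 1.0]])

def performance_profile(T, tau):
    """Fraction of problems each algorithm solves within a factor tau of the
    best algorithm on that problem (the standard performance-profile value)."""
    ratios = T / T.min(axis=1, keepdims=True)   # per-problem ratio to the best
    return (ratios <= tau).mean(axis=0)         # one value per algorithm

rho = performance_profile(T, 2.0)
```

Plotting rho against a range of tau values gives the profile curves on which the "set of top algorithms" is read off.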
Affiliation(s)
- Maria Guenter
- Department of Mathematics, University of British Columbia - Okanagan, Kelowna, BC, Canada
- Steve Collins
- Department of Physics, University of British Columbia - Okanagan, Kelowna, BC, Canada
- Andy Ogilvy
- Department of Physics, University of British Columbia - Okanagan, Kelowna, BC, Canada
- Warren Hare
- Department of Mathematics, University of British Columbia - Okanagan, Kelowna, BC, Canada
- Andrew Jirasek
- Department of Physics, University of British Columbia - Okanagan, Kelowna, BC, Canada
6
Perelli A, Andersen MS. Regularization by denoising sub-sampled Newton method for spectral CT multi-material decomposition. Philos Trans A Math Phys Eng Sci 2021; 379:20200191. [PMID: 33966464 DOI: 10.1098/rsta.2020.0191]
Abstract
Spectral Computed Tomography (CT) is an emerging technology that enables us to estimate the concentration of basis materials within a scanned object by exploiting different photon energy spectra. In this work, we aim at efficiently solving a model-based maximum a posteriori problem to reconstruct multi-material images with application to spectral CT. In particular, we propose to solve a regularized optimization problem based on a plug-in image-denoising function using a randomized second-order method. By approximating the Newton step using a sketch of the Hessian of the likelihood function, it is possible to reduce the complexity while retaining the complex prior structure given by the data-driven regularizer. We exploit a non-uniform block sub-sampling of the Hessian with inexact but efficient conjugate gradient updates that require only Jacobian-vector products for the denoising term. Finally, we show numerical and experimental results for spectral CT material decomposition. This article is part of the theme issue 'Synergistic tomographic image reconstruction: part 1'.
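A stripped-down sketch of the Hessian sub-sampling idea on a plain least-squares problem, without the plug-in denoiser, the block sampling scheme, or the CT forward model (all replaced by illustrative stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 400, 10
A = rng.normal(size=(m, n))                     # toy data-fidelity operator
x_true = rng.normal(size=n)
b = A @ x_true

def loss(x):
    return 0.5 * np.sum((A @ x - b) ** 2)

x = np.zeros(n)
for _ in range(20):
    g = A.T @ (A @ x - b)                       # full gradient
    S = rng.choice(m, size=200, replace=False)  # row sub-sample ("sketch")
    H_s = (m / 200) * A[S].T @ A[S]             # sub-sampled Hessian estimate
    x -= np.linalg.solve(H_s, g)                # inexact Newton step
```

In the paper the inner solve is done with conjugate gradients and the objective carries a regularization-by-denoising term; here the sketch only shows why a sub-sampled Hessian still drives the iteration toward the solution.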
Affiliation(s)
- Alessandro Perelli
- Department of Applied Mathematics and Computer Science (DTU Compute), Technical University of Denmark, Lyngby 2800, Denmark
- Martin S Andersen
- Department of Applied Mathematics and Computer Science (DTU Compute), Technical University of Denmark, Lyngby 2800, Denmark
7
Valencia Pérez TA, Hernández López JM, Moreno-Barbosa E, de Celis Alonso B, Palomino Merino MR, Castaño Meneses VM. Efficient CT Image Reconstruction in a GPU Parallel Environment. Tomography 2020; 6:44-53. [PMID: 32280749 PMCID: PMC7138519 DOI: 10.18383/j.tom.2020.00011]
Abstract
Computed tomography is nowadays an indispensable tool in medicine used to diagnose multiple diseases. In clinical and emergency room environments, the speed of acquisition and information processing is crucial. CUDA is a software architecture used to work with NVIDIA graphics processing units. In this paper, a methodology to accelerate tomographic image reconstruction, based on the maximum likelihood expectation maximization iterative algorithm and combined with the use of graphics processing units programmed in the CUDA framework, is presented. The implementations developed here are used to reconstruct images for clinical use. Timewise, parallel versions showed improvement with respect to serial implementations; these differences reached, in some cases, two orders of magnitude while preserving image quality. The image quality and reconstruction times were not affected significantly by the addition of Poisson noise to projections. Furthermore, our implementations showed good performance when compared with reconstruction methods provided by commercial software. One of the goals of this work was to provide a fast, portable, simple, and cheap image reconstruction system, and our results support the statement that the goal was achieved.
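The MLEM update the paper parallelizes has a compact serial form; the toy system matrix below is an assumption, and no GPU/CUDA code is shown:

```python
import numpy as np

rng = np.random.default_rng(0)
n_det, n_pix = 40, 16
A = rng.uniform(0.1, 1.0, size=(n_det, n_pix))   # toy system matrix (all positive)
x_true = rng.uniform(0.5, 2.0, size=n_pix)
y = A @ x_true                                    # noiseless projection data

x = np.ones(n_pix)                                # uniform, strictly positive start
sens = A.sum(axis=0)                              # sensitivity image A^T 1
for _ in range(2000):
    x *= (A.T @ (y / (A @ x))) / sens             # multiplicative MLEM update
```

Every operation in the loop is a matrix-vector product or an element-wise ratio, which is exactly why the algorithm maps so well onto GPU threads, one per pixel or detector bin.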
Affiliation(s)
- Tomás A Valencia Pérez
- Faculty of Mathematical and Physical Sciences, Benemérita Universidad Autónoma de Puebla, Puebla, México
- Javier M Hernández López
- Faculty of Mathematical and Physical Sciences, Benemérita Universidad Autónoma de Puebla, Puebla, México
- Eduardo Moreno-Barbosa
- Faculty of Mathematical and Physical Sciences, Benemérita Universidad Autónoma de Puebla, Puebla, México
- Benito de Celis Alonso
- Faculty of Mathematical and Physical Sciences, Benemérita Universidad Autónoma de Puebla, Puebla, México
- Martín R Palomino Merino
- Faculty of Mathematical and Physical Sciences, Benemérita Universidad Autónoma de Puebla, Puebla, México
- Victor M Castaño Meneses
- Molecular and Materials, Engineering Department, Universidad Nacional Autónoma de México, Queretaro, México
8
Li T, Chen H, Zhang M, Liu S, Xia S, Cao X, Young GS, Xu X. A New Design in Iterative Image Deblurring for Improved Robustness and Performance. Pattern Recognit 2019; 90:134-146. [PMID: 31327876 PMCID: PMC6640862 DOI: 10.1016/j.patcog.2019.01.019]
Abstract
In many applications, image deblurring is a prerequisite to improve the sharpness of an image before it can be further processed. Iterative methods are widely used for deblurring images, but care must be taken to ensure that the iterative process is robust, meaning that the process does not diverge and reaches the solution reasonably fast, two goals that sometimes compete against each other. In practice, it remains challenging to choose parameters for the iterative process to be robust. We propose a new approach consisting of relaxed initialization and pixel-wise updates of the step size for iterative methods to achieve robustness. The first novel design of the approach is to modify the initialization of existing iterative methods to stop a noise term from being propagated throughout the iterative process. The second novel design is the introduction of a vectorized step size that is adaptively determined through the iterations to achieve higher stability and accuracy in the whole iterative process. The vectorized step size aims to update each pixel of an image individually, instead of updating all the pixels by the same factor. In this work, we implemented the above designs based on the Landweber method to test and demonstrate the new approach. Test results showed that the new approach can deblur images from noisy observations and achieve a low mean squared error with a more robust performance.
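The vectorized step size can be sketched by replacing Landweber's scalar tau with a vector applied element-wise; the particular per-pixel values below are fixed and illustrative (the paper adapts them during the iterations), chosen under the usual stability bound:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
kernel = np.array([0.25, 0.5, 0.25])           # toy 1-D circulant blur
H = np.zeros((n, n))
for i in range(n):
    for j, w in zip((i - 1, i, i + 1), kernel):
        H[i, j % n] += w
x_true = np.zeros(n)
x_true[8:16] = 1.0
y = H @ x_true                                  # blurry observation

L = np.linalg.norm(H, 2) ** 2
tau = rng.uniform(0.5, 1.0, size=n) / L         # per-pixel step sizes, all below 1/L
x = y.copy()
r0 = np.linalg.norm(y - H @ x)
for _ in range(1000):
    x += tau * (H.T @ (y - H @ x))              # element-wise scaled Landweber update
r1 = np.linalg.norm(y - H @ x)
```

Keeping every component of tau at or below 1/L guarantees the data-fidelity objective still decreases monotonically; the adaptive rule in the paper then tunes each component for speed.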
Affiliation(s)
- Taihao Li
- College of Medical Instruments, Shanghai University of Medicine and Health Sciences, Shanghai, China
- Beijing Advanced Innovation Center for Imaging Technology, Capital Normal University, Beijing, China
- These authors contributed equally to the work
- Huai Chen
- Department of Radiology, First Affiliated Hospital of Guangzhou Medical University, Guangzhou, Guangdong, China
- These authors contributed equally to the work
- Min Zhang
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Shupeng Liu
- Key Laboratory of Specialty Fiber Optics and Optical Access Networks, School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Shunren Xia
- Department of Biomedical Engineering, Zhejiang University, Hangzhou, Zhejiang, China
- Xinhua Cao
- Department of Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
- Geoffrey S Young
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Xiaoyin Xu
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
9
Zibetti MVW, Helou ES, Regatte RR, Herman GT. Monotone FISTA with Variable Acceleration for Compressed Sensing Magnetic Resonance Imaging. IEEE Trans Comput Imaging 2019; 5:109-119. [PMID: 30984801 PMCID: PMC6457269 DOI: 10.1109/tci.2018.2882681]
Abstract
An improvement of the monotone fast iterative shrinkage-thresholding algorithm (MFISTA) for faster convergence is proposed. Our motivation is to reduce the reconstruction time of compressed sensing problems in magnetic resonance imaging. The proposed modification introduces an extra term, which is a multiple of the proximal-gradient step, into the so-called momentum formula used for the computation of the next iterate in MFISTA. In addition, the modified algorithm selects the next iterate as a possibly improved point obtained by any other procedure, such as an arbitrary shift, a line search, or other methods. As an example, an arbitrary-length shift in the direction from the previous iterate to the output of the proximal-gradient step is considered. The resulting algorithm accelerates MFISTA in a manner that varies with the iterative steps. Convergence analysis shows that the proposed modification provides improved theoretical convergence bounds, and that it has more flexibility in its parameters than the original MFISTA. Since such problems need to be studied in the context of functions of several complex variables, a careful extension of FISTA-like methods to complex variables is provided.
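MFISTA's momentum formula and monotone safeguard, which the paper modifies, can be sketched for a generic l1-regularized least-squares problem (the complex-variable MRI setting and the proposed extra acceleration term are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 40, 80
A = rng.normal(size=(m, n)) / np.sqrt(m)         # toy sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, size=5, replace=False)] = rng.normal(size=5)
b = A @ x_true
lam = 0.01
L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of the gradient

def F(x):                                        # composite objective
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))

def prox_grad(v):                                # proximal-gradient step (soft threshold)
    g = v - A.T @ (A @ v - b) / L
    return np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)

x = np.zeros(n)
y, t = x, 1.0
F0 = F(x)
for _ in range(300):
    z = prox_grad(y)
    t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    x_next = z if F(z) <= F(x) else x            # monotone choice: the "M" in MFISTA
    # Momentum formula combining the proximal-gradient output and the iterate change.
    y = x_next + (t / t_next) * (z - x_next) + ((t - 1.0) / t_next) * (x_next - x)
    x, t = x_next, t_next
```

The paper's variable acceleration adds a further multiple of the proximal-gradient step into the momentum line above and allows the next iterate to be replaced by any better point.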
10
Abstract
Dose reduction in computed tomography (CT) is essential for decreasing radiation risk in clinical applications. Iterative reconstruction algorithms are one of the most promising ways to compensate for the increased noise due to the reduction of photon flux. Most iterative reconstruction algorithms incorporate manually designed prior functions of the reconstructed image to suppress noise while maintaining structures of the image. These priors basically rely on smoothness constraints and cannot exploit more complex features of the image. The recent development of artificial neural networks and machine learning has enabled learning of more complex features of images, which has the potential to improve reconstruction quality. In this letter, a K-sparse autoencoder was used for unsupervised feature learning. A manifold was learned from normal-dose images, and the distance between the reconstructed image and the manifold was minimized along with data fidelity during reconstruction. Experiments on the 2016 Low-dose CT Grand Challenge data were used for method verification, and results demonstrated the noise reduction and detail preservation abilities of the proposed method.
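The K-sparse activation at the heart of a K-sparse autoencoder is simple to sketch; the weights below are untrained stand-ins, and only the encode/decode step is shown, not the learned manifold penalty inside a full reconstruction:

```python
import numpy as np

def k_sparse_encode(x, W, k):
    """Encoder with K-sparse activation: keep the k largest-magnitude
    hidden units and zero the rest (the sparsity mechanism of a KSAE)."""
    h = W @ x
    keep = np.argsort(np.abs(h))[-k:]
    h_sparse = np.zeros_like(h)
    h_sparse[keep] = h[keep]
    return h_sparse

rng = np.random.default_rng(0)
n_hidden, n_pix, k = 64, 16, 4
W = rng.normal(size=(n_hidden, n_pix)) / np.sqrt(n_pix)   # untrained stand-in weights
x = rng.normal(size=n_pix)                                # a (flattened) image patch

h = k_sparse_encode(x, W, k)
recon = W.T @ h                    # tied-weight decoder output
# ||x - W^T h||^2 is the manifold-distance term added to the data fidelity.
```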
11
Marcassa C, Zoccarato O. Advances in image reconstruction software in nuclear cardiology: Is all that glitters gold? J Nucl Cardiol 2017; 24:142-144. [PMID: 27220879 DOI: 10.1007/s12350-016-0534-z]
Abstract
The cornerstone results of nuclear cardiology in the last 25 years were obtained with the Filtered Back Projection as the preferred reconstruction method for tomographic studies. Recently, evolution of the OSEM iterative reconstruction algorithms was implemented by different vendors. The value and limitations of the new methods are briefly addressed.
Affiliation(s)
- Claudio Marcassa
- Cardiology Department, Salvatore Maugeri Foundation, IRCCS, Scientific Institute of Veruno (NO), via Revislate 13, 28010, Veruno, NO, Italy
- Orazio Zoccarato
- Nuclear Medicine Department, Salvatore Maugeri Foundation, IRCCS, Scientific Institute of Veruno (NO), Veruno, NO, Italy
12
Abstract
OBJECTIVE Iterative algorithms are gaining clinical acceptance in CT. We performed an objective phantom-based image quality evaluation of five commercial iterative reconstruction algorithms available on four different multi-detector CT (MDCT) scanners at different dose levels, as well as the conventional filtered back-projection (FBP) reconstruction. METHODS Using the Catphan500 phantom, we evaluated image noise, contrast-to-noise ratio (CNR), modulation transfer function (MTF) and noise-power spectrum (NPS). The algorithms were evaluated over a CTDIvol range of 0.75-18.7 mGy on four major MDCT scanners: GE DiscoveryCT750HD (algorithms: ASIR™ and VEO™); Siemens Somatom Definition AS+ (algorithm: SAFIRE™); Toshiba Aquilion64 (algorithm: AIDR3D™); and Philips Ingenuity iCT256 (algorithm: iDose4™). Images were reconstructed using FBP and the respective iterative algorithms on the four scanners. RESULTS Use of iterative algorithms decreased image noise and increased CNR, relative to FBP. In the dose range of 1.3-1.5 mGy, noise reduction using iterative algorithms was in the range of 11%-51% on the GE DiscoveryCT750HD, 10%-52% on the Siemens Somatom Definition AS+, 49%-62% on the Toshiba Aquilion64, and 13%-44% on the Philips Ingenuity iCT256. The corresponding CNR increase was in the range of 11%-105% on GE, 11%-106% on Siemens, 85%-145% on Toshiba and 13%-77% on Philips, respectively. Most algorithms did not affect the MTF, except for VEO™, which produced an increase in the limiting resolution of up to 30%. A shift in the peak of the NPS curve towards lower frequencies and a decrease in NPS amplitude were obtained with all iterative algorithms. VEO™ required long reconstruction times, while all other algorithms produced reconstructions in real time.
CONCLUSIONS The iterative algorithms available on different scanners achieved different levels of noise reduction and CNR increase, while spatial resolution improvements were obtained only with VEO™. This study is useful in that it provides a performance assessment of the iterative algorithms available from several mainstream CT manufacturers.
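The noise and CNR measurements are standard ROI statistics; a synthetic uniform module with a contrast insert (all numbers hypothetical, not phantom data from the study) shows the computation:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(100.0, 5.0, size=(64, 64))   # uniform module: mean 100 HU, sigma 5 HU
img[20:30, 20:30] += 25.0                      # low-contrast insert, +25 HU

bg = img[40:60, 40:60]                         # background ROI
roi = img[22:28, 22:28]                        # ROI inside the insert
noise = bg.std()                               # image noise estimate
cnr = abs(roi.mean() - bg.mean()) / noise      # contrast-to-noise ratio
```

The study's comparisons amount to computing these quantities (plus MTF and NPS) on FBP and iterative reconstructions of the same acquisitions and reporting the relative changes.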
Affiliation(s)
- Azeez Omotayo
- Division of Medical Physics, CancerCare Manitoba, Winnipeg, MB, Canada
- Idris Elbakri
- Division of Medical Physics, CancerCare Manitoba, Winnipeg, MB, Canada
- Department of Radiology, University of Manitoba, Winnipeg, MB, Canada
- Department of Physics and Astronomy, University of Manitoba, Winnipeg, MB, Canada
13
Jiang L, Wu Z, Ren G, Wang G, Zhao N. A Rapid Convergent Low Complexity Interference Alignment Algorithm for Wireless Sensor Networks. Sensors (Basel) 2015; 15:18526-49. [PMID: 26230697 PMCID: PMC4570334 DOI: 10.3390/s150818526]
Abstract
Interference alignment (IA) is a novel technique that can effectively eliminate the interference and approach the sum capacity of wireless sensor networks (WSNs) when the signal-to-noise ratio (SNR) is high, by casting the desired signal and interference into different signal subspaces. The traditional alternating minimization interference leakage (AMIL) algorithm for IA shows good performance in high SNR regimes; however, the complexity of the AMIL algorithm increases dramatically as the number of users and antennas increases, limiting its application in practical systems. In this paper, a novel IA algorithm, called the directional quartic optimal (DQO) algorithm, is proposed to minimize the interference leakage with rapid convergence and low complexity. The properties of the AMIL algorithm are investigated, and it is discovered that the difference between two consecutive iteration results of the AMIL algorithm will approximately point to the convergence solution when the precoding and decoding matrices obtained from the intermediate iterations are sufficiently close to their convergence values. Based on this important property, the proposed DQO algorithm employs a line search procedure so that it can converge to the destination directly. In addition, the optimal step size can be determined analytically by optimizing a quartic function. Numerical results show that the proposed DQO algorithm can suppress the interference leakage more rapidly than the traditional AMIL algorithm, and can achieve the same level of sum rate as the AMIL algorithm with far fewer iterations and less execution time.
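One half-step of the alternating minimization the paper builds on can be sketched directly: compute the interference leakage, then update each decoder to span the least-interfered subspace. Dimensions and channels below are random stand-ins, and the DQO line search itself is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)
K, M, d = 3, 4, 1                                  # users, antennas, streams per user
H = rng.normal(size=(K, K, M, M))                  # H[k, j]: channel from transmitter j to receiver k
V = [np.linalg.qr(rng.normal(size=(M, d)))[0] for _ in range(K)]  # precoders
U = [np.linalg.qr(rng.normal(size=(M, d)))[0] for _ in range(K)]  # decoders

def leakage(U, V):
    """Total interference power leaking into the receive subspaces."""
    total = 0.0
    for k in range(K):
        for j in range(K):
            if j != k:
                total += np.sum((U[k].T @ H[k, j] @ V[j]) ** 2)
    return total

def update_decoders(V):
    """AMIL half-step: U[k] spans the d least-interfered directions,
    i.e. the smallest eigenvectors of the interference covariance."""
    Unew = []
    for k in range(K):
        Q = sum(H[k, j] @ V[j] @ V[j].T @ H[k, j].T
                for j in range(K) if j != k)
        w, vec = np.linalg.eigh(Q)                 # ascending eigenvalues
        Unew.append(vec[:, :d])
    return Unew

l0 = leakage(U, V)
U = update_decoders(V)
l1 = leakage(U, V)
```

AMIL alternates this half-step between decoders and precoders; DQO instead extrapolates along the difference of consecutive iterates with an analytically optimized quartic step.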
Affiliation(s)
- Lihui Jiang
- School of Electronics and Information Engineering, Harbin Institute of Technology, Harbin 150001, China
- Zhilu Wu
- School of Electronics and Information Engineering, Harbin Institute of Technology, Harbin 150001, China
- Guanghui Ren
- School of Electronics and Information Engineering, Harbin Institute of Technology, Harbin 150001, China
- Gangyi Wang
- School of Instrumentation Science and Opto-electronics Engineering, Beihang University, Beijing 100191, China
- Nan Zhao
- School of Information and Communication Engineering, Dalian University of Technology, Dalian 116024, China
14
Sidky EY, Chartrand R, Boone JM, Pan X. Constrained TpV Minimization for Enhanced Exploitation of Gradient Sparsity: Application to CT Image Reconstruction. IEEE J Transl Eng Health Med 2014; 2. [PMID: 25401059 PMCID: PMC4228801 DOI: 10.1109/jtehm.2014.2300862]
Abstract
Exploiting sparsity in the image gradient magnitude has proved to be an effective means for reducing the sampling rate in the projection view angle in computed tomography (CT). Most image reconstruction algorithms developed for this purpose solve a nonsmooth convex optimization problem involving the image total variation (TV). The TV seminorm is the ℓ1 norm of the image gradient magnitude, and reducing the ℓ1 norm is known to encourage sparsity in its argument. Recently, there has been interest in employing nonconvex ℓp quasinorms with 0<p<1 for sparsity-exploiting image reconstruction, which is potentially more effective than ℓ1 because nonconvex ℓp is closer to ℓ0, a direct measure of sparsity. This paper develops algorithms for constrained minimization of the total p-variation (TpV), the ℓp of the image gradient. Use of the algorithms is illustrated in the context of breast CT, an imaging modality that is still in the research phase and for which constraints on X-ray dose are extremely tight. The TpV-based image reconstruction algorithms are demonstrated on computer-simulated data for exploiting gradient magnitude sparsity to reduce the projection view angle sampling. The proposed algorithms are applied to projection data from a realistic breast CT simulation, where the total X-ray dose is equivalent to two-view digital mammography. Following the simulation survey, the algorithms are then demonstrated on a clinical breast CT data set.
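Why nonconvex ℓp with p<1 is "closer to ℓ0" can be seen by comparing the penalties on a sparse and a dense gradient-magnitude vector of equal ℓ2 energy (a generic illustration, not the paper's algorithm):

```python
import numpy as np

def lp_power(v, p):
    """sum_i |v_i|^p: the quantity an l_p (0 < p <= 1) regularizer reduces."""
    return np.sum(np.abs(v) ** p)

# Two gradient-magnitude vectors with the same l2 norm (both equal to 10):
sparse = np.zeros(100)
sparse[:4] = 5.0                 # a few strong edges
dense = np.ones(100)             # many weak variations

ratio_l1 = lp_power(dense, 1.0) / lp_power(sparse, 1.0)      # 100 / 20 = 5
ratio_lhalf = lp_power(dense, 0.5) / lp_power(sparse, 0.5)   # penalizes dense far more
```

Under ℓ0 the ratio would be 100/4 = 25; as p decreases from 1 toward 0 the penalty ratio moves from 5 toward 25, which is exactly the sense in which ℓp better rewards gradient sparsity.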
Affiliation(s)
- Emil Y Sidky
- Department of Radiology, University of Chicago, Chicago, IL 60637, USA
- Rick Chartrand
- Theoretical Division T-5, Los Alamos National Laboratory, Los Alamos, NM 87545, USA
- John M Boone
- Department of Radiology, University of California Davis Medical Center, Sacramento, CA 95817, USA
- Xiaochuan Pan
- Department of Radiology, University of Chicago, Chicago, IL 60637, USA
15
Abstract
Iterative algorithms aimed at solving some problems are discussed. For certain problems, such as finding a common point in the intersection of a finite number of convex sets, there often exist iterative algorithms that impose very little demand on computer resources. For other problems, such as finding that point in the intersection at which the value of a given function is optimal, algorithms tend to need more computer memory and longer execution time. A methodology is presented whose aim is to produce automatically for an iterative algorithm of the first kind a "superiorized version" of it that retains its computational efficiency but nevertheless goes a long way towards solving an optimization problem. This is possible to do if the original algorithm is "perturbation resilient," which is shown to be the case for various projection algorithms for solving the consistent convex feasibility problem. The superiorized versions of such algorithms use perturbations that steer the process in the direction of a superior feasible point, which is not necessarily optimal, with respect to the given function. After presenting these intuitive ideas in a precise mathematical form, they are illustrated in image reconstruction from projections for two different projection algorithms superiorized for the function whose value is the total variation of the image.
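A minimal superiorized projection scheme in the spirit of the abstract: alternating projections onto two convex sets (balls here, for simplicity), interleaved with summable perturbations that steer the process toward smaller values of a given function. The sets, the function, and the step sizes are illustrative assumptions:

```python
import numpy as np

def project_ball(x, c, r):
    """Projection onto the convex set {x : ||x - c|| <= r}."""
    d = x - c
    nd = np.linalg.norm(d)
    return x if nd <= r else c + r * d / nd

c1, r1 = np.array([0.0, 0.0]), 2.0
c2, r2 = np.array([3.0, 0.0]), 2.0       # the two balls intersect
f = lambda v: v[1] ** 2                   # function to superiorize over

x = np.array([5.0, 4.0])                  # infeasible start
beta = 1.0
for _ in range(100):
    g = np.array([0.0, 2.0 * x[1]])       # gradient of f
    ng = np.linalg.norm(g)
    if ng > 0:
        x = x - beta * g / ng             # perturbation: nonascending direction of f
    beta *= 0.9                           # summable perturbation sizes
    x = project_ball(project_ball(x, c1, r1), c2, r2)   # feasibility-seeking step
```

Because the perturbations are summable, they do not destroy the convergence of the underlying projection algorithm, yet the final feasible point has a much smaller value of f than an unperturbed run would typically produce.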
Affiliation(s)
- Y Censor
- Department of Mathematics, University of Haifa, Mount Carmel, Haifa 31905, Israel