Some Talks by Per Christian Hansen

Talks about large-scale numerical methods for inverse problems

Per Christian Hansen, DTU Compute, Technical University of Denmark

Talks about Inverse Problems, Regularization, and Tomography

Limited-Data CT for Underwater Pipeline Inspection. Presented at the workshop 100 Years of the Radon Transform in Linz, March 27-31, 2017.

This work concerns the inspection of defects in underwater pipelines using X-ray computed tomography (CT). We study which mathematical models and reconstruction techniques are best suited for limited-angle CT, in which only the central region of the pipe - the region-of-interest (ROI) - is fully illuminated. Of particular interest are defects, such as rust and cracks, near the edge of the ROI. The goal is to gain insight into the inherent difficulties and limitations of this problem, to study the resolution limits and the uncertainties of the reconstruction and how these factors depend on the geometry, and to determine which state-of-the-art models and methods are best suited for the problem.
The project is a collaboration with FORCE Technology, Denmark.

Regularization in Tomography - Dealing with Ambiguity and Noisy Data. Invited presentation at the COST workshop Advanced X-Ray Tomography: Experiment, Modeling, and Algorithms, Lorentz Center, Feb. 10-14, 2014.

Tomographic reconstructions are routinely computed every day. Our reconstruction algorithms are so reliable that we sometimes forget we are actually dealing with inverse problems with inherent stability problems. This is because the algorithms automatically incorporate regularization techniques that, in most cases, handle the stability issues very well.
In this talk we take a basic look at the inverse problem of CT reconstruction, in order to understand the stability problems that manifest themselves in solutions that may be very sensitive to data errors and may also fail to be unique. We demonstrate how regularization is used to avoid these problems and make the reconstruction process stable, and how the regularization is incorporated in standard reconstruction algorithms. Moreover, we shall see that different regularization techniques have different impacts on the computed reconstructions.
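As a small, hedged illustration of the stabilizing effect of regularization (not code from the talk), the following Python/NumPy sketch builds an artificial ill-conditioned problem and compares the naive solution with a Tikhonov-regularized one; the matrix, noise level, and regularization parameter are arbitrary choices.

import numpy as np

# Small ill-conditioned test problem A x = b with noisy right-hand side.
np.random.seed(0)
n = 64
U, _ = np.linalg.qr(np.random.randn(n, n))
V, _ = np.linalg.qr(np.random.randn(n, n))
s = np.logspace(0, -8, n)               # rapidly decaying singular values
A = U @ np.diag(s) @ V.T
x_exact = np.sin(np.linspace(0, np.pi, n))
b = A @ x_exact + 1e-4 * np.random.randn(n)

# The naive solution is dominated by amplified noise ...
x_naive = np.linalg.solve(A, b)

# ... while Tikhonov regularization, min ||A x - b||^2 + lam^2 ||x||^2,
# damps the noisy components and gives a stable reconstruction.
lam = 1e-3
x_tikh = np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

print(np.linalg.norm(x_naive - x_exact), np.linalg.norm(x_tikh - x_exact))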

Tomographic Image Reconstruction Using Training Images

Priors are essential for computing stable solutions to imaging problems, and they take many different forms. Here we consider priors in the form of training images, i.e., a collection of cross-section images of the object. We describe an algorithmic framework for this: From the training images we use techniques from machine learning to form a dictionary that captures the desired features, and we then compute a reconstruction with a sparse representation in this dictionary. We describe how to stably compute the dictionary through a regularized non-negative matrix factorization, and we study how this dictionary affects the reconstruction quality. Simulations show that for textural images our approach is competitive with other methods used for limited-data problems.
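The following Python sketch illustrates the two stages in miniature: a dictionary learned with plain multiplicative NMF updates (the talk uses a regularized non-negative matrix factorization, which is not reproduced here) and a non-negative fit of a new patch in that dictionary; the patch size, training data, and number of dictionary elements are placeholders.

import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

# Training data, here stand-ins: columns of Y are vectorized 8x8 patches
# extracted from the training images.
Y = rng.random((64, 500))

# Learn a non-negative dictionary D (and codes H) by plain multiplicative
# NMF updates for the Frobenius-norm objective.
k = 32                                   # number of dictionary elements
D = rng.random((64, k)); H = rng.random((k, 500))
for _ in range(200):
    H *= (D.T @ Y) / (D.T @ D @ H + 1e-12)
    D *= (Y @ H.T) / (D @ H @ H.T + 1e-12)

# Reconstruction step (sketch): approximate a new patch by a non-negative
# combination of dictionary elements; in the full method sparsity of the
# coefficients is promoted by an additional penalty.
patch = rng.random(64)
coef, _ = nnls(D, patch)
print(np.linalg.norm(patch - D @ coef))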

A Parameter-Choice Method that Exploits Residual Information

All regularization methods – used for computing stable solutions to inverse problems – rely on the choice of the regularization parameter that balances the solution's smoothness and how well it fits the data. Most algorithms for choosing this regularization parameter are based on the norm of the residual vector. However, the residual vector carries important information about the regularized solution, and therefore it can be advantageous to seek to extract more of the information available there. We present important relations between the residual components and the amount of information that is available in the noisy data, and we show how to use statistical tools and fast Fourier transforms to extract this information efficiently. This approach leads to a computationally inexpensive parameter-choice rule based on the normalized cumulative periodogram, which is particularly suited for large-scale problems.
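A minimal sketch of the idea, assuming a 1D residual vector: compute the normalized cumulative periodogram via the FFT and measure its distance to the straight line expected for white noise; the candidate-parameter loop at the end is only indicative, and solve_regularized is a hypothetical placeholder.

import numpy as np

def ncp_distance(residual):
    # Distance between the residual's normalized cumulative periodogram and
    # the straight line expected for white noise (smaller = more noise-like).
    p = np.abs(np.fft.rfft(residual))**2      # periodogram
    p = p[1:]                                  # drop the DC component
    c = np.cumsum(p) / np.sum(p)               # normalized cumulative periodogram
    line = np.arange(1, len(c) + 1) / len(c)   # NCP of white noise
    return np.linalg.norm(c - line)

# Parameter choice (sketch): among candidate regularization parameters, pick
# the one whose residual has the most white-noise-like NCP.
# lams = np.logspace(-6, 0, 20)
# best = min(lams, key=lambda lam: ncp_distance(b - A @ solve_regularized(A, b, lam)))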

Total Variation and Tomographic Imaging from Projections. Invited presentation at the 36th Conference of the Dutch-Flemish Numerical Analysis Communities, Woudschoten, Oct. 5-7, 2011.

Total Variation (TV) regularization is a powerful technique for image reconstruction tasks such as denoising, in-painting, and deblurring, because of its ability to produce sharp edges in the images. In this talk we discuss the use of TV regularization for tomographic imaging, where we compute a 2D or 3D reconstruction from noisy projections. We demonstrate that for a small signal-to-noise ratio, this new approach allows us to compute better (i.e., more reliable) reconstructions than those obtained by classical methods. This is possible due to the use of the TV reconstruction model, which incorporates our prior information about the solution and thus compensates for the loss of accuracy in the data. A consequence is that smaller data acquisition times can be used, thus reducing a patient’s exposure to X-rays in medical scanning and speeding up non-destructive measurements in materials science.
A short paper that summarizes the results is available here.
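As a hedged illustration of the TV model itself (denoising only, not the tomographic reconstruction problem from the talk), the sketch below runs gradient descent on a smoothed TV functional; the smoothing parameter beta and the step-size rule are ad hoc choices.

import numpy as np

def tv_denoise(noisy, lam=0.1, beta=0.1, iters=300):
    # Gradient descent on the smoothed TV model
    #   min_u  0.5*||u - noisy||^2 + lam * sum_ij sqrt(|grad u|_ij^2 + beta^2),
    # where beta > 0 smooths the non-differentiable TV term.
    step = 1.0 / (1.0 + 8.0 * lam / beta)     # conservative step-size estimate
    u = noisy.copy()
    for _ in range(iters):
        dx = np.diff(u, axis=0, append=u[-1:, :])     # forward differences
        dy = np.diff(u, axis=1, append=u[:, -1:])
        mag = np.sqrt(dx**2 + dy**2 + beta**2)
        px, py = dx / mag, dy / mag
        div = (np.diff(px, axis=0, prepend=px[:1, :])   # approximate divergence
               + np.diff(py, axis=1, prepend=py[:, :1]))
        u -= step * ((u - noisy) - lam * div)
    return u

# Example: denoise a piecewise-constant test image (values are arbitrary).
img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0
noisy = img + 0.1 * np.random.randn(64, 64)
print(np.linalg.norm(tv_denoise(noisy) - img), np.linalg.norm(noisy - img))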

Talks with a focus on Krylov Subspace Methods

Lanczos Bidiagonalization with Subspace Augmentation for Discrete Inverse Problems. Given at the SIAM Conf. on Applied Linear Algebra 2015 in the minisymposium celebrating Dianne P. O'Leary.

The regularizing properties of Lanczos bidiagonalization were studied and advocated by many researchers, including Dianne O'Leary. This approach is powerful when the underlying Krylov subspace captures the dominating components of the solution. In some applications the regularized solution can be further improved by augmenting the Krylov subspace with a low-dimensional subspace that represents specific prior information. Inspired by earlier work on GMRES we demonstrate how to carry these ideas over to the Lanczos bidiagonalization algorithm.
This presentation can be found online here.
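The sketch below shows plain Golub-Kahan (Lanczos) bidiagonalization used as a regularizing iteration, with the number of steps k acting as the regularization parameter; the subspace augmentation that is the subject of the talk is not reproduced, and no reorthogonalization is performed.

import numpy as np

def gkb_regularized_solve(A, b, k):
    # k steps of Golub-Kahan (Lanczos) bidiagonalization with starting
    # vector b, followed by the projected least-squares solution; the
    # iterate lies in the Krylov subspace span{A^T b, (A^T A) A^T b, ...}.
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
    beta = np.linalg.norm(b)
    U[:, 0] = b / beta
    for j in range(k):
        v = A.T @ U[:, j] - (B[j, j - 1] * V[:, j - 1] if j > 0 else 0.0)
        alpha = np.linalg.norm(v); V[:, j] = v / alpha; B[j, j] = alpha
        u = A @ V[:, j] - alpha * U[:, j]
        bnew = np.linalg.norm(u); U[:, j + 1] = u / bnew; B[j + 1, j] = bnew
    # Projected least-squares problem: min || B y - beta*e1 ||.
    e1 = np.zeros(k + 1); e1[0] = beta
    y, *_ = np.linalg.lstsq(B, e1, rcond=None)
    return V @ y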

R3GMRES: Including Prior Information in GMRES-Type Methods for Discrete Inverse Problems. Based on a paper in a special issue of ETNA on occasion of Lothar Reichel's 60th birthday.

Krylov subspace methods are well suited for computing regularized solutions to large-scale discrete inverse problems, the regularization coming from the projection of the problem onto the underlying Krylov subspace. Several techniques have been proposed that augment the Krylov subspace with an additional low-dimensional subspace, in order to produce improved regularized solutions. We take a closer look at this approach and investigate a particular method, regularized range-restricted GMRES (R3GMRES), augmented with a subspace that represents prior information about the solution. We discuss the efficient implementation of this approach, and we demonstrate its advantage by means of several test problems.
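To convey the augmentation idea only (this is not the efficient R3GMRES algorithm from the paper), the following sketch seeks the solution in the sum of a user-supplied subspace span(W) and a range-restricted Krylov subspace, by solving one small least-squares problem over the combined basis; W is a hypothetical prior-information matrix, e.g. a single constant column.

import numpy as np

def augmented_krylov_solve(A, b, W, k):
    # Illustrative only: seek the solution in span(W) + K_k(A, A b), using an
    # orthonormal Krylov basis V and one least-squares solve over [W, V].
    n = A.shape[0]
    V = np.zeros((n, k))
    v = A @ b
    for j in range(k):
        v = v / np.linalg.norm(v)
        V[:, j] = v
        v = A @ v
        v -= V[:, :j + 1] @ (V[:, :j + 1].T @ v)   # Gram-Schmidt step
    Z = np.column_stack([W, V])
    y, *_ = np.linalg.lstsq(A @ Z, b, rcond=None)
    return Z @ y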

Repeated Blurring Operations Produce Sharper Images: Image Deblurring with Krylov Subspace Methods. Invited presentation at the UCL Centre for Inverse Problems: Opening Meeting, March 18-21, 2013.

Image deblurring, i.e., reconstruction of a sharper image from a blurred and noisy one, involves the solution of a large and very ill-conditioned system of linear equations, and regularization is needed in order to compute a stable solution. Krylov subspace methods are often ideally suited for this task: their iterative nature is a natural way to handle such large-scale problems, and the underlying Krylov subspace provides a convenient mechanism to regularize the problem by projecting it onto a low-dimensional "signal subspace" adapted to the particular problem. In this talk we consider the three Krylov subspace methods CGLS, MINRES, and GMRES. We describe their regularizing properties, and we discuss some computational aspects such as preconditioning and stopping criteria.
A short paper that summarizes the results is available here.
A video presentation of this talk given at the Scientific Computing and Imaging Institute, University of Utah, is available here.
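As a minimal illustration of one of the three methods, here is a dense-matrix CGLS sketch; in image deblurring the products with A and A^T would be applied as fast structured operators rather than with a stored matrix, and a stopping rule (e.g., the discrepancy principle) would replace the fixed iteration count.

import numpy as np

def cgls(A, b, iters):
    # CGLS: conjugate gradients applied implicitly to the normal equations
    # A^T A x = A^T b. The iteration number acts as the regularization
    # parameter, so one stops early, before the noise is picked up.
    x = np.zeros(A.shape[1])
    r = b.copy()                 # residual b - A x
    d = A.T @ r
    normr2 = d @ d
    for _ in range(iters):
        Ad = A @ d
        alpha = normr2 / (Ad @ Ad)
        x += alpha * d
        r -= alpha * Ad
        s = A.T @ r
        normr2_new = s @ s
        d = s + (normr2_new / normr2) * d
        normr2 = normr2_new
    return x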

Image Deblurring in the Light of the Cosine Transform

This talk focuses on regularizing iterations based on Krylov subspaces. The regularization comes from the projection of the solution onto the Krylov subspace associated with the method, and the number of iterations plays the role of the regularization parameter. We use the two-dimensional discrete cosine transform to perform a spectral analysis of the solutions to the image deblurring problem, computed by means of regularizing iterations, focusing on CGLS/LSQR and GMRES as well as the variants MINRES, RRGMRES, and MR-II.
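A small sketch of the kind of spectral inspection meant here, assuming SciPy's dctn: take the 2D discrete cosine transform of a reconstructed image and compare the energy in the low-frequency components with the total energy; the image and the frequency cut-off are placeholders.

import numpy as np
from scipy.fft import dctn

# 'recon' stands in for an image computed by regularizing iterations
# such as CGLS/LSQR or GMRES.
recon = np.random.rand(64, 64)
spectrum = dctn(recon, norm='ortho')      # 2D discrete cosine transform
low = np.linalg.norm(spectrum[:16, :16])  # energy in low-frequency block
total = np.linalg.norm(spectrum)
print(low / total)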

Talks with a focus on Algebraic Iterative Reconstruction Methods

Convergence and Non-Convergence of Algebraic Iterative Reconstruction Methods. Presented at the workshop Inverse Problems in X-Ray Phase Retrieval and Tomography at LINXS - Lund Institute of Advanced Neutron and X-Ray Science - April 4, 2019.

Algebraic Iterative Reconstruction (AIR) methods - such as ART (Kaczmarz), SART, and SIRT - produce good results for underdetermined problems, and they can easily incorporate non-negativity and box constraints. When AIR methods are implemented on GPU-accelerated systems with a focus on computational efficiency, different computational schemes are used for the forward projection and the backprojection. In the algebraic "language" of the AIR methods, this means that the backprojection matrix B is not the transpose A^T of the forward projection matrix A. The use of B ≠ A^T has two consequences: the accuracy (compared to when using A^T) deteriorates, and the iteration may fail to converge. In this talk we illustrate these issues with recent theoretical and computational results, and we present a novel approach to "fixing" the non-convergence with only a small computational overhead.
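The basic iteration in question can be sketched in a few lines of Python/NumPy; with B equal to A^T this is a Landweber/SIRT-type method, and passing a mismatched B reproduces the unmatched-backprojector setting (the convergence fix from the talk is not included).

import numpy as np

def air_iteration(A, B, b, omega, iters):
    # Basic simultaneous AIR iteration  x_{k+1} = x_k + omega * B (b - A x_k).
    # With B = A.T this is a Landweber-type iteration, and the relaxation
    # parameter omega must then satisfy 0 < omega < 2 / ||A||^2; with a
    # mismatched B the iteration may fail to converge, as discussed above.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x += omega * (B @ (b - A @ x))
    return x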

ART Exhibit. Invited presentation at the Oberwolfach Workshop: Mathematics and Algorithms in Tomography, August 10-16, 2014.

The iterative reconstruction method known as ART (Algebraic Reconstruction Technique) or Kaczmarz's method, together with its many variants, is a surprisingly simple and efficient method with many applications in computed tomography, where we compute 2D and 3D reconstructions from noisy and often incomplete projections. On the theoretical side we are interested in explaining why it is so successful, and on the practical side in how to implement it efficiently, how to select the relaxation parameter, and how to stop the iterations. This talk takes the form of an "exhibit" of the many aspects of ART.
A short research report is available here.
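A minimal Python/NumPy sketch of the basic method on exhibit, a cyclic Kaczmarz sweep with a relaxation parameter; the choice of relaxation parameter and the stopping rule are exactly the practical issues the talk addresses, and no particular choices are advocated here.

import numpy as np

def kaczmarz(A, b, sweeps, relax=1.0, x0=None):
    # One ART/Kaczmarz sweep updates x with one row of A at a time:
    #   x <- x + relax * (b_i - a_i^T x) / ||a_i||^2 * a_i
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.copy()
    row_norms2 = np.sum(A**2, axis=1)
    for _ in range(sweeps):
        for i in range(m):
            x += relax * (b[i] - A[i] @ x) / row_norms2[i] * A[i]
    return x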

ART Performance. Invited presentation at the International Workshop on Mathematical Imaging and Emerging Modalities, June 27-30, 2016.

An extended version of the above talk.

Column-Action Methods in Image Reconstruction

Column-oriented versions of algebraic iterative methods are interesting alternatives to their row-version counterparts: they converge to a least squares solution, and they provide a basis for saving computational work by skipping small updates. We motivate these methods from an optimization point of view, we present a convergence analysis, and we discuss two techniques (loping and flagging) for reducing the work. The performance of the algorithms is illustrated with numerical examples from computed tomography.
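A hedged sketch of a cyclic column-action iteration (coordinate descent on the least-squares problem); the skip_tol test is only a crude stand-in for the loping and flagging techniques mentioned above.

import numpy as np

def column_action(A, b, sweeps, relax=1.0, skip_tol=0.0):
    # Each step updates one component of x using one column of A:
    #   delta = relax * a_j^T r / ||a_j||^2,  x_j += delta,  r -= delta * a_j,
    # where r = b - A x is maintained as a running residual. Small updates
    # (|delta| <= skip_tol) are skipped to save work.
    m, n = A.shape
    x = np.zeros(n)
    r = b.copy()
    col_norms2 = np.sum(A**2, axis=0)
    for _ in range(sweeps):
        for j in range(n):
            delta = relax * (A[:, j] @ r) / col_norms2[j]
            if abs(delta) <= skip_tol:
                continue
            x[j] += delta
            r -= delta * A[:, j]
    return x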

Semi-Convergence Properties of Kaczmarz's Method (ART)

Kaczmarz’s method – sometimes referred to as the Algebraic Reconstruction Technique (ART) – is an iterative method that is widely used in tomographic imaging, due to its favorable semi-convergence properties. Specifically, when applied to a problem with noisy data, during the early iterations it converges very fast toward a good approximation to the exact solution, so that stopping the iterations at this point produces a regularized solution. While this property is generally accepted and utilized, there is surprisingly little theoretical justification for it. The purpose of this talk is to present insight into the semi-convergence of Kaczmarz’s method as well as its projected counterpart. To do this we study how the data errors propagate into the iteration vectors, and we derive bounds for this noise propagation, thus providing a justification for the use of these methods as regularization methods. Our bounds are compared with numerical results obtained from tomographic imaging.

Semi-Convergence and Relaxation Parameters for a Class of SIRT Algorithms
Unprojected SIRT Algorithms
Projected SIRT Algorithms

Large-scale discretizations of ill-posed problems (such as imaging problems in tomography) call for the use of iterative methods, because direct factorization methods are infeasible. In particular, there is an interest in regularizing iterations, where the iteration vector can be considered as a regularized solution, with the iteration index playing the role of the regularizing parameter. Initially the iteration vector approaches a regularized solution, while continuing the iteration often leads to iteration vectors corrupted by noise.
This work focuses on a class of non-stationary iteration methods, often referred to as Simultaneous Iterative Reconstruction Techniques (SIRT), including Landweber and Cimmino iteration. These methods incorporate a relaxation parameter, and the convergence rate of the initial iterations depends on the choice of this parameter. In principle, one can use a fixed parameter, but usually a good value is not known a priori. An attractive alternative is to choose the relaxation parameter automatically in each iteration, in such a way that fast semi-convergence is obtained.
We study the semi-convergence of the SIRT methods and, based on a careful analysis of their semi-convergence behaviour, we propose two new techniques for adaptively choosing the relaxation parameter during the iterations, so as to control the propagated noise component of the error. The advantage of using this strategy for noisy and ill-conditioned problems is demonstrated with an example from tomography.
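For reference, a minimal Python/NumPy sketch of one member of the SIRT family, Cimmino's method with a fixed relaxation parameter omega; the adaptive relaxation strategies proposed in this work are not reproduced here.

import numpy as np

def cimmino(A, b, omega, iters):
    # Cimmino's method (a SIRT method):
    #   x_{k+1} = x_k + omega * A^T D (b - A x_k),  D = diag(1/(m*||a_i||^2)).
    # With this scaling the spectral radius of A^T D A is at most 1, so a
    # fixed relaxation parameter 0 < omega < 2 is a safe choice; the number
    # of iterations then acts as the regularization parameter.
    m, n = A.shape
    d = 1.0 / (m * np.sum(A**2, axis=1))
    x = np.zeros(n)
    for _ in range(iters):
        x += omega * (A.T @ (d * (b - A @ x)))
    return x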