A theoretical justification of how Anderson acceleration improves linear convergence rates
Sara Pollock, University of Florida
3:30 – 5PM
Thursday Mar 7, 2019
POB 6.304
Abstract
The extrapolation method known as Anderson acceleration has been used for decades to speed the convergence of nonlinear solvers in many applications. A mathematical justification of the improved convergence rate, however, has remained elusive. Here, we provide theory to establish the improved convergence rate. The key ideas of the analysis are relating the difference of consecutive iterates to residuals, based on performing the inner optimization in a Hilbert space setting, and explicitly defining the gain in the optimization stage as the ratio of improvement over a step of the unaccelerated fixed-point iteration. The main result we prove is that this method of acceleration improves the convergence rate of a fixed-point iteration, to first order, by a factor of the gain at each step as the method converges.
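To make the setting concrete, the sketch below shows a standard depth-m Anderson acceleration of a fixed-point iteration x = g(x), with the least-squares solve playing the role of the inner optimization described in the abstract. This is a generic illustration, not the specific formulation of the talk; the function names, depth m, tolerance, and starting point are illustrative assumptions.

```python
import numpy as np

def anderson_accelerate(g, x0, m=3, tol=1e-10, max_iter=100):
    """Minimal sketch of Anderson acceleration (depth m) for x = g(x).

    Illustrative only: names and defaults are assumptions, not the talk's code.
    """
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    gx = g(x)
    f = gx - x                       # residual f_0 = g(x_0) - x_0
    G_hist, F_hist = [gx], [f]       # stored g-values and residuals
    x = gx                           # first step is a plain fixed-point step
    for k in range(1, max_iter):
        gx = g(x)
        f = gx - x
        if np.linalg.norm(f) < tol:
            return gx
        G_hist.append(gx)
        F_hist.append(f)
        mk = min(m, k)               # use at most m stored differences
        # Columns are differences of consecutive residuals / g-values
        dF = np.column_stack([F_hist[-i] - F_hist[-i - 1] for i in range(1, mk + 1)])
        dG = np.column_stack([G_hist[-i] - G_hist[-i - 1] for i in range(1, mk + 1)])
        # Inner optimization: least-squares minimization of ||f_k - dF @ gamma||
        gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
        x = gx - dG @ gamma          # accelerated update
    return x
```

For example, applied to g(x) = np.cos(x) with x0 = 1.0, the accelerated iteration typically reaches the fixed point near 0.739 in far fewer steps than the plain fixed-point iteration, which is the kind of rate improvement the talk's analysis quantifies through the gain of the optimization stage.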
Bio
Sara Pollock is an Assistant Professor at the University of Florida. She earned her Ph.D. in Mathematics with a specialization in Computational Science from the University of California, San Diego, in 2012. She also holds a B.S. in Mathematics from the University of New Mexico and an M.S. in Applied Mathematics from the University of Washington, Seattle. Her research focuses on finite element methods for nonlinear and multiscale problems, including the development of efficient and robust solution techniques and the well-posedness of the underlying systems.