Title: Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization
Abstract: We consider the problem of optimizing the sum of a smooth convex function and a non-smooth convex function using proximal-gradient methods, where an error is present in the calculation of the gradient of the smooth term or in the proximity operator with respect to the non-smooth term. We show that both the basic proximal-gradient method and the accelerated proximal-gradient method achieve the same convergence rate as in the error-free case, provided that the errors decrease at appropriate rates. Using these rates, we perform as well as or better than a carefully chosen fixed error level on a set of structured sparsity problems.
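The setting the abstract describes can be illustrated with a minimal sketch of the basic inexact proximal-gradient iteration. This is not code from the paper: the toy lasso problem, the problem sizes, and the 1/k^2 noise schedule are illustrative assumptions; the schedule is chosen to be summable, which is the kind of decreasing-error condition the paper's analysis requires for the basic method to retain its error-free rate.

```python
import numpy as np

# Sketch of the basic inexact proximal-gradient iteration
#   x_{k+1} = prox_{h/L}( x_k - (1/L) * (grad f(x_k) + e_k) )
# on a toy lasso problem: f(x) = 0.5*||Ax - b||^2 (smooth),
# h(x) = lam*||x||_1 (non-smooth; its prox is soft-thresholding).
# All names, sizes, and the noise schedule below are assumptions
# for illustration, not taken from the paper.

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
b = rng.standard_normal(40)
lam = 0.1
L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of grad f (spectral norm squared)

def soft_threshold(v, t):
    """Proximity operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(100)
for k in range(1, 501):
    grad = A.T @ (A @ x - b)
    # Inexact gradient: perturb by an error whose norm decays like 1/k^2,
    # a summable schedule consistent with the paper's sufficient condition
    # for preserving the basic method's convergence rate.
    e = rng.standard_normal(100)
    e *= (1.0 / k**2) / np.linalg.norm(e)
    x = soft_threshold(x - (grad + e) / L, lam / L)

print("final objective:",
      0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum())
```

Slowing the error decay (e.g., a constant error level) in this sketch degrades the attainable accuracy, which is the trade-off the paper quantifies for both the basic and accelerated methods.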
Publication Year: 2011
Publication Date: 2011-09-12
Language: en
Type: preprint
Access and Citation
Cited By Count: 194