[PDF] Splitting Algorithms For Convex Optimization And Applications To Sparse Matrix Factorization eBook

Splitting Algorithms for Convex Optimization and Applications to Sparse Matrix Factorization is available to download in English in PDF, ePub, and Kindle formats, and can be read online anytime, anywhere, directly from your device. Click the download button below to get a free PDF file of the book. It is well worth reading and very well written.

Splitting Algorithms for Convex Optimization and Applications to Sparse Matrix Factorization

Author : Rong Rong
Publisher :
Page : 95 pages
File Size : 22.30 MB
Release : 2013
Category :
ISBN :

GET BOOK

Several important applications in machine learning, data mining, and signal and image processing can be formulated as the problem of factoring a large data matrix into a product of sparse matrices. Sparse matrix factorization problems are usually solved via alternating convex optimization methods. At each iteration, these methods require solving a large convex optimization problem with non-differentiable cost and constraint functions, typically by a block coordinate descent algorithm. In this thesis, we investigate first-order algorithms based on forward-backward and Douglas-Rachford splitting as an alternative to block coordinate descent. We describe efficient methods to evaluate the proximal operators and resolvents needed in the splitting algorithms. We discuss two applications in detail: structured sparse principal component analysis and sparse dictionary learning. For these two applications, we compare the splitting algorithms with block coordinate descent on synthetic data and benchmark data sets. Experimental results show that several of the splitting methods, in particular Tseng's modified forward-backward method and the Chambolle-Pock splitting method, are often faster and more accurate than block coordinate descent.
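The forward-backward splitting iteration the abstract refers to can be sketched on a simple lasso-type problem. This is an illustrative toy, not the thesis's structured sparse PCA or dictionary learning solver; all names and parameter values here are hypothetical:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1: componentwise shrinkage.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(A, b, lam, step, n_iter=1000):
    # Minimize 0.5*||A x - b||^2 + lam*||x||_1 by alternating a gradient
    # ("forward") step on the smooth term with a proximal ("backward")
    # step on the nonsmooth l1 term.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                         # forward step
        x = soft_threshold(x - step * grad, step * lam)  # backward step
    return x

# Example: recover a sparse vector from noise-free measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40))
x_true = np.zeros(40)
x_true[:3] = 1.0
b = A @ x_true
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
x_hat = forward_backward(A, b, lam=0.1, step=step)
```

Each iteration costs two matrix-vector products plus a cheap componentwise shrinkage, which is the structure that makes splitting methods attractive for the factorization subproblems described above.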

Handbook of Robust Low-Rank and Sparse Matrix Decomposition

Author : Thierry Bouwmans
Publisher : CRC Press
Page : 510 pages
File Size : 14.68 MB
Release : 2016-09-20
Category : Computers
ISBN : 1315353539

GET BOOK

Handbook of Robust Low-Rank and Sparse Matrix Decomposition: Applications in Image and Video Processing shows you how robust subspace learning and tracking by decomposition into low-rank and sparse matrices provide a suitable framework for computer vision applications. Incorporating both existing and new ideas, the book conveniently gives you one-stop access to a number of different decompositions, algorithms, implementations, and benchmarking techniques. Divided into five parts, the book begins with an overall introduction to robust principal component analysis (PCA) via decomposition into low-rank and sparse matrices. The second part addresses robust matrix factorization/completion problems while the third part focuses on robust online subspace estimation, learning, and tracking. Covering applications in image and video processing, the fourth part discusses image analysis, image denoising, motion saliency detection, video coding, key frame extraction, and hyperspectral video processing. The final part presents resources and applications in background/foreground separation for video surveillance. With contributions from leading teams around the world, this handbook provides a complete overview of the concepts, theories, algorithms, and applications related to robust low-rank and sparse matrix decompositions. It is designed for researchers, developers, and graduate students in computer vision, image and video processing, real-time architecture, machine learning, and data mining.

Primal-dual Proximal Optimization Algorithms with Bregman Divergences

Author : Xin Jiang
Publisher :
Page : 0 pages
File Size : 48.89 MB
Release : 2022
Category :
ISBN :

GET BOOK

Proximal methods are an important class of algorithms for solving nonsmooth, constrained, large-scale, or distributed optimization problems. Because of their flexibility and scalability, they are widely used in current applications in engineering, machine learning, and data science. The key idea of proximal algorithms is the decomposition of a large-scale optimization problem into several smaller, simpler problems, in which the basic operation is the evaluation of the proximal operator of a function. The proximal operator minimizes the function regularized by a squared Euclidean distance, and it generalizes the Euclidean projection onto a closed convex set. Since the cost of evaluating proximal operators often dominates the per-iteration complexity of a proximal algorithm, their efficient evaluation is critical. To this end, generalized Bregman proximal operators based on non-Euclidean distances have been proposed and incorporated in many algorithms and applications. In the first part of this dissertation, we present primal-dual proximal splitting methods for convex optimization in which generalized Bregman distances are used to define the primal and dual update steps. The proposed algorithms can be viewed as Bregman extensions of many well-known proximal methods. For these algorithms, we analyze the theoretical convergence and develop techniques to improve practical implementation. In the second part of the dissertation, we apply the Bregman proximal splitting algorithms to the centering problem in large-scale semidefinite programming with sparse coefficient matrices. The logarithmic barrier function for the cone of positive semidefinite completable sparse matrices is used as the distance-generating kernel. For this distance, the complexity of evaluating the Bregman proximal operator is shown to be roughly proportional to the cost of a sparse Cholesky factorization. This is much cheaper than the standard proximal operator with Euclidean distances, which requires an eigenvalue decomposition. Therefore, the proposed Bregman proximal algorithms can handle sparse matrix constraints with sizes that are orders of magnitude larger than the problems solved by standard interior-point methods and proximal methods.
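The Bregman idea in the abstract can be illustrated with the classic entropy kernel: for a linear cost over the probability simplex, the Bregman proximal step has a closed-form multiplicative update, whereas the Euclidean prox would require a simplex projection. This is a generic mirror-descent-style sketch, not the dissertation's semidefinite-programming kernel, and the names are illustrative:

```python
import numpy as np

def bregman_prox_entropy(c, u, t):
    # Bregman proximal step for the linear cost <c, x> over the simplex,
    # using the negative-entropy kernel h(x) = sum_i x_i log x_i:
    #   argmin_x  t*<c, x> + KL(x || u)   s.t.  sum(x) = 1, x >= 0
    # admits the closed-form multiplicative update below.
    w = u * np.exp(-t * c)
    return w / w.sum()

u = np.full(4, 0.25)                  # current iterate on the simplex
c = np.array([1.0, 2.0, 3.0, 4.0])    # linear cost vector
x = bregman_prox_entropy(c, u, t=0.5)
```

The update is a few vector operations, with no projection subproblem, which is the kind of saving the dissertation exploits (there, via a sparse Cholesky factorization in place of an eigenvalue decomposition).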

Proximal Algorithms

Author : Neal Parikh
Publisher : Now Pub
Page : 130 pages
File Size : 10.42 MB
Release : 2013-11
Category : Mathematics
ISBN : 9781601987167

GET BOOK

Proximal Algorithms discusses proximal operators and proximal algorithms, and illustrates their applicability to standard and distributed convex optimization in general and many applications of recent interest in particular. Much like Newton's method is a standard tool for solving unconstrained smooth optimization problems of modest size, proximal algorithms can be viewed as an analogous tool for nonsmooth, constrained, large-scale, or distributed versions of these problems. They are very generally applicable, but are especially well-suited to problems of substantial recent interest involving large or high-dimensional datasets. Proximal methods sit at a higher level of abstraction than classical algorithms like Newton's method: the base operation is evaluating the proximal operator of a function, which itself involves solving a small convex optimization problem. These subproblems, which generalize the problem of projecting a point onto a convex set, often admit closed-form solutions or can be solved very quickly with standard or simple specialized methods. Proximal Algorithms discusses different interpretations of proximal operators and algorithms, looks at their connections to many other topics in optimization and applied mathematics, surveys some popular algorithms, and provides a large number of examples of proximal operators that commonly arise in practice.
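Two of the closed-form proximal operators the description alludes to can be sketched under the standard definitions; the helper names are illustrative, not taken from the monograph:

```python
import numpy as np

def prox_l1(v, t):
    # Proximal operator of t*||.||_1: soft-thresholding, a closed-form
    # solution of the small convex subproblem described in the text.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_indicator_ball(v, radius=1.0):
    # Proximal operator of the indicator of the Euclidean ball, i.e. the
    # projection onto it -- illustrating that projection is a special
    # case of the proximal operator.
    nrm = np.linalg.norm(v)
    return v if nrm <= radius else v * (radius / nrm)

v = np.array([3.0, -0.5, 4.0])
shrunk = prox_l1(v, 1.0)
projected = prox_indicator_ball(np.array([3.0, 0.0, 4.0]))
```

Both evaluations cost O(n), which is why proximal algorithms built from such operators scale to the large and high-dimensional problems the text mentions.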

Distributed Optimization and Statistical Learning Via the Alternating Direction Method of Multipliers

Author : Stephen Boyd
Publisher : Now Publishers Inc
Page : 138 pages
File Size : 27.77 MB
Release : 2011
Category : Computers
ISBN : 160198460X

GET BOOK

Surveys the theory and history of the alternating direction method of multipliers, and discusses its applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others.
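A minimal sketch of scaled-form ADMM applied to the lasso, one of the problems the monograph covers; the helper and its parameters are illustrative, not the book's reference implementation:

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    # ADMM for the lasso: minimize 0.5*||A x - b||^2 + lam*||z||_1
    # subject to x = z, alternating an x-update (a ridge-like solve),
    # a z-update (soft-thresholding), and a dual update on u.
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    M = np.linalg.inv(A.T @ A + rho * np.eye(n))  # cached solve (small n)
    Atb = A.T @ b
    for _ in range(n_iter):
        x = M @ (Atb + rho * (z - u))
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)
        u = u + x - z
    return z

# Example: recover a 3-sparse vector from noise-free measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[:3] = 2.0
b = A @ x_true
z = admm_lasso(A, b, lam=0.1)
```

The matrix inverse is computed once and reused across iterations; in the distributed settings the monograph emphasizes, the same three-step structure lets the x- and z-updates be split across machines.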

Introduction to Optimization

Author : Boris Teodorovich Polyak
Publisher :
Page : 474 pages
File Size : 36.76 MB
Release : 1987
Category : Mathematics
ISBN :

GET BOOK

Splitting Methods in Communication, Imaging, Science, and Engineering

Author : Roland Glowinski
Publisher : Springer
Page : 822 pages
File Size : 16.51 MB
Release : 2017-01-05
Category : Mathematics
ISBN : 3319415891

GET BOOK

This book is about computational methods based on operator splitting. It consists of twenty-three chapters written by recognized splitting-method contributors and practitioners, and covers a vast spectrum of topics and application areas, including computational mechanics, computational physics, image processing, wireless communication, nonlinear optics, and finance. The book thus presents the versatility of splitting methods and their applications, motivating the cross-fertilization of ideas.