A First-Order Numerical Algorithm without Matrix Operations

Muhammad Adil, Ramtin Madani, Sasan Tavakkol, and Ali Davoudi

arXiv:2203.05027v1 [math.OC] 9 Mar 2022

Muhammad Adil is with the Palo Alto Research Center (PARC). Ramtin Madani and Ali Davoudi are with the University of Texas at Arlington. Sasan Tavakkol is with Google Research. Ramtin Madani (ramtin.madani@uta.edu) is the corresponding author. This work is funded, in part, by the Office of Naval Research under award N00014-18-1-2186, and approved for public release under DCN# 43-9295-22.

Abstract—This paper offers a matrix-free first-order numerical method to solve large-scale conic optimization problems. Solving systems of linear equations poses the most computationally challenging part of both first-order and second-order numerical algorithms. Existing direct and indirect methods are either computationally expensive or compromise solution accuracy. Alternatively, we propose an easy-to-compute decomposition method to solve the sparse linear systems that arise in conic optimization problems. Its iterations are tractable, highly parallelizable, and have closed-form solutions. The algorithm can be easily implemented on distributed platforms, such as graphics processing units, with orders-of-magnitude time improvement. The performance of the proposed solver is demonstrated on large-scale conic optimization problems and compared with state-of-the-art first-order solvers.

Index Terms—Cone programming, numerical algorithms, optimization, parallel computing.

I. INTRODUCTION

Conic optimization is used in various areas such as operations research, machine learning, signal processing, and optimal control. Common solutions to conic optimization problems are based on the interior point method (IPM) [1]-[6], which is suitable for small- to medium-sized problems. At each iteration, IPM solves a linear system of equations, mainly using Gauss-Jordan elimination [7], Gaussian elimination [8], LU decomposition [9], Cholesky decomposition [10], QR decomposition, or Monte Carlo methods [11]. These direct approaches become prohibitively expensive at scale, rendering IPM-based methods impractical for large-scale problems.

Matrix-free interior-point methods, which rely on indirect or iterative linear solvers, are among the most popular methods for solving large-scale problems [12], [13]. As direct methods become prohibitive at larger scales [12], [14], Krylov subspace iterative methods, such as the preconditioned conjugate gradient (PCG) method, become attractive alternatives [14]. In the conjugate gradient method [15], an iterative technique solves the Newton step instead of factorizing the Hessian matrix directly. [16] proposed a Lagrangian dual predictor-corrector algorithm and applied the conjugate gradient method to solving the linear systems. In [17], an iterative solver is applied to the modified barrier method for large-scale semidefinite programming problems. The conjugate gradient method with a simple preconditioner reduces the computation time of solving semidefinite programming problems [18], and an inexact semismooth Newton conjugate gradient approach in [19] improves the solution accuracy.

The performance of iterative methods depends on the spectral properties of the linear system and the condition number of the matrix involved [20]. Existing preconditioners aim at reducing the condition number to the extent possible [21]. A matrix-free algorithm for equality-constrained nonlinear programming in [22] remedies ill-conditioned problems with rank-deficient Jacobian matrices. The matrix-free solver PDCO [23] uses the least squares minimal residual (LSMR) method to solve linear systems. [24] proposes a matrix-free IPM for quadratic programs, where the Karush-Kuhn-Tucker (KKT) system is regularized to bound the condition number and, then, a preconditioner is designed for the regularized system. This approach substantially decreases the computational cost of each iteration, albeit at the price of more iterations and lower accuracy.

Collectively, IPMs are inherently computationally expensive and hard to scale to larger conic optimization problems. Alternatively, first-order methods scale gracefully with moderate accuracy [25]-[28]. The computational complexity of each iteration is significantly lower than that of IPM, which makes first-order methods suitable when high accuracy is not needed [29], [30]. The convergence of first-order methods in a limited number of iterations is an active area of research [31]-[40]. First-order methods are required to solve a linear system only once [28].
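To make the indirect route concrete, the following minimal sketch (ours, not from the paper) solves a sparse symmetric positive-definite system with SciPy's conjugate gradient and a simple Jacobi (diagonal) preconditioner; the matrix A and its construction are hypothetical stand-ins.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000
# Hypothetical sparse symmetric positive-definite system matrix.
A = sp.random(n, n, density=1e-3, format="csr")
A = A @ A.T + 10.0 * sp.eye(n, format="csr")
b = np.random.rand(n)

# Jacobi preconditioner: apply the inverse of diag(A) to a vector.
M = spla.LinearOperator((n, n), matvec=lambda v: v / A.diagonal())
x, info = spla.cg(A, b, M=M)
assert info == 0  # info == 0 signals convergence
```

Each CG iteration touches A only through matrix-vector products, which is exactly why such methods are called matrix-free; their iteration count, however, grows with the condition number, motivating the preconditioning literature cited above.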
Thanks to factorization caching, in practice, the linear system is solved only in the first iteration; the factors are stored and then reused in subsequent iterations [27], [28], [30]. The most common factorization caching approaches are the LDL and QR decompositions [41]. Direct methods with factorization caching become impractical at very large scales, at which point matrix-free or indirect approaches become viable [27]. The first-order methods in [25], [27], [28], [41] apply the conjugate gradient method to the resulting large-scale problems. First-order methods struggle with accuracy, and matrix-free indirect conjugate gradient approaches further hamper their ability. The computational complexity of direct methods and the low solution accuracy of indirect methods on large conic problems are the motivations behind the proposed matrix-free approach for solving very large problems with modest accuracy.
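Factorization caching is equally easy to illustrate. In the minimal sketch below, assuming SciPy is available, a sparse matrix M (an illustrative stand-in for the system matrix) is factorized once with splu outside the loop, and every later iteration reuses the cached LU factors for a cheap solve.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
# Hypothetical sparse, well-conditioned system matrix.
M = sp.random(n, n, density=1e-2, format="csc") + 10.0 * sp.eye(n, format="csc")

lu = spla.splu(M)            # factorize once: the expensive step
for k in range(100):         # iterative loop of a first-order method
    rhs = np.random.rand(n)  # only the right-hand side changes
    w = lu.solve(rhs)        # cheap triangular solves reuse cached factors
```

The one-time factorization dominates the cost; each subsequent solve is two triangular substitutions, which is what makes caching attractive until the factors themselves no longer fit in memory.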

A. Contributions

We develop a matrix-free first-order numerical algorithm to solve very large-scale sparse conic optimization problems. The basic idea is to decompose the constraint matrix into sparse factors, such that the iterative steps do not involve any matrix operations. These factors are easy to compute and require minimal memory storage. The matrix inversion lemma makes the operations free of matrix division. We reformulate the standard conic optimization problem by introducing auxiliary variables and, then, apply the proposed matrix-free algorithm in conjunction with the well-known two-block alternating direction method of multipliers (ADMM) [29], [42], [43]. Therefore, the computational burden of solving the linear system in each iteration is taken out of the iterative loop. The proposed algorithm admits parallel implementation and is amenable to graphics processing units (GPUs). We demonstrate the performance gain and computational speedup of the proposed algorithm by conducting a range of experiments and comparing the results with other first-order numerical solvers.
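As an illustration of the role the matrix inversion lemma can play, the following sketch (our own; D, U, V, and the dimensions are hypothetical) applies the Woodbury identity $(D + UV^{\top})^{-1}b = D^{-1}b - D^{-1}U(I + V^{\top}D^{-1}U)^{-1}V^{\top}D^{-1}b$, replacing the inversion of a large matrix by an elementwise diagonal inverse and a small $k \times k$ solve.

```python
import numpy as np

n, k = 500, 5
D = np.diag(np.random.rand(n) + 1.0)  # diagonal part: trivially invertible
U = np.random.randn(n, k)
V = np.random.randn(n, k)
b = np.random.randn(n)

Dinv_b = b / np.diag(D)               # applying D^{-1} is elementwise
Dinv_U = U / np.diag(D)[:, None]
S = np.eye(k) + V.T @ Dinv_U          # small k-by-k capacitance matrix
x = Dinv_b - Dinv_U @ np.linalg.solve(S, V.T @ Dinv_b)

# Agrees with a direct dense solve up to rounding error.
assert np.allclose(x, np.linalg.solve(D + U @ V.T, b))
```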

B. Paper Structure

The rest of this paper is organized as follows. We introduce cone programming and briefly describe operator splitting methods for such problems in Section II. We analyze the existing direct and indirect methods for solving the linear system, and provide the motivation for our numerical algorithm, in Section III. Section IV presents the proposed algorithm. Numerical experiments and comparisons with competing solvers are presented in Section V. Section VI concludes the paper.

C. Notations

Symbols $\mathbb{R}$ and $\mathbb{N}$ denote the sets of real and natural numbers, respectively. Matrices and vectors are represented by bold uppercase and bold lowercase letters, respectively. The notation $\|\cdot\|_2$ refers to the $\ell_2$ norm of either a matrix or a vector depending on the context, and $|\cdot|$ represents the absolute value. The symbol $(\cdot)^{\top}$ represents the transpose operator. The notation $I_n$ refers to the $n \times n$ identity matrix. The symbol $\mathcal{K}$ describes different types of cones. The superscript $(\cdot)^{\mathrm{opt}}$ refers to the optimal solution of an optimization problem. $\|\cdot\|_0$ denotes the number of nonzero entries of a matrix. The symbol $\mathcal{L}$ represents the augmented Lagrangian function. The notations "P1" and "P2" represent the primal blocks, whereas "D" denotes the dual block of the two-block ADMM. The symbols $\varepsilon_{\mathrm{prim}}$, $\varepsilon_{\mathrm{dual}}$, $\varepsilon_{\mathrm{abs}}$, and $\varepsilon_{\mathrm{rel}}$ are used for the primal, dual, absolute, and relative tolerances, respectively. $\varepsilon_{\mathrm{gap}}$ is the tolerance for the difference between the primal and dual objective values.

II. PRELIMINARIES

We consider the class of convex optimization problems with linear objective function, linear constraints, and conic constraints, of the form:

$$\begin{aligned}
\underset{x \in \mathbb{R}^n}{\text{minimize}} \quad & c^{\top} x & \text{(1a)} \\
\text{subject to} \quad & Ax = b & \text{(1b)} \\
& x \in \mathcal{K} & \text{(1c)}
\end{aligned}$$

where $x \in \mathbb{R}^n$ is the primal decision variable, and $c \in \mathbb{R}^n$, $A \in \mathbb{R}^{m \times n}$, and $b \in \mathbb{R}^m$ are given. Additionally, $\mathcal{K} \triangleq \mathcal{K}_{n_1} \times \mathcal{K}_{n_2} \times \cdots \times \mathcal{K}_{n_k} \subseteq \mathbb{R}^n$ is a non-empty, closed, convex cone, where each $\mathcal{K}_{n_i} \subseteq \mathbb{R}^{n_i}$ is a Lorentz cone of size $n_i$, i.e.,

$$\mathcal{K}_{n_i} \triangleq \left\{ w \in \mathbb{R}^{n_i} \;\middle|\; w_1 \geq \big\| [w_2, \ldots, w_{n_i}] \big\|_2 \right\},$$

and $n_1 + n_2 + \cdots + n_k = n$.
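For concreteness, the projection onto a single Lorentz cone $\mathcal{K}_{n_i}$ admits a well-known closed form; the minimal sketch below (ours, not from the paper) implements it, and it is the block-wise building block behind the parallelizable proximal operator discussed in the next subsection.

```python
import numpy as np

def project_lorentz(w):
    """Euclidean projection onto {w : w[0] >= ||w[1:]||_2}."""
    w1, v = w[0], w[1:]
    t = np.linalg.norm(v)
    if t <= w1:              # already inside the cone
        return w
    if t <= -w1:             # inside the polar cone: projects to the origin
        return np.zeros_like(w)
    alpha = (w1 + t) / 2.0   # otherwise: scale onto the cone's boundary
    return np.concatenate(([alpha], (alpha / t) * v))
```

Because the cone $\mathcal{K}$ is a Cartesian product, projecting onto it amounts to applying this routine to each block independently, which is trivially parallelizable.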

In order to solve (1), various first-order operator splitting methods have been proposed in the past decade. First-order methods are particularly interesting for cases where the iterative steps can be solved efficiently through explicit formulas, and a large number of iterations can be executed in a short amount of time. Among the most popular methods are the Douglas-Rachford splitting (DRS) technique and ADMM.

1) Douglas-Rachford Splitting: DRS was originally proposed in [44] to find numerical solutions of differential equations for heat conduction problems, and it has been widely used to solve separable convex optimization problems. Rather than operating on the whole problem directly, DRS works on a splitting scheme to address each component of the problem separately. In order to implement the DRS method, one casts the problem (1) in the form of

$$\text{minimize} \quad f(x) + g(x) \qquad \text{(2)}$$

where $f, g : \mathbb{R}^n \to \mathbb{R} \cup \{\infty\}$ are defined as:

$$f(x) \triangleq \begin{cases} 0 & \text{if } x \in \mathcal{K} \\ \infty & \text{otherwise} \end{cases}
\qquad \text{and} \qquad
g(x) \triangleq \begin{cases} c^{\top} x & \text{if } Ax = b \\ \infty & \text{otherwise} \end{cases}$$

leading to the following iterative steps:

$$\begin{aligned}
x &\leftarrow \mathrm{prox}_{f}(z) & \text{(3a)} \\
z &\leftarrow z + \mathrm{prox}_{\mu^{-1} g}(2x - z) - x & \text{(3b)}
\end{aligned}$$

where $\mu$ is a fixed tuning parameter, and for every $h : \mathbb{R}^n \to \mathbb{R} \cup \{\infty\}$ and $w_0 \in \mathbb{R}^n$, the operator $\mathrm{prox}_h(w_0)$ returns the unique solution to the following optimization problem:

$$\underset{w \in \mathbb{R}^n}{\text{minimize}} \quad h(w) + \frac{1}{2} \|w - w_0\|_2^2. \qquad \text{(4)}$$

Each iteration of DRS requires the evaluation of the proximal operators $\mathrm{prox}_f(\cdot)$ and $\mathrm{prox}_{\mu^{-1} g}(\cdot)$. While the evaluation of $\mathrm{prox}_f(\cdot)$ is parallelizable and enjoys a closed-form solution, the evaluation of $\mathrm{prox}_{\mu^{-1} g}(w_0)$ is the main bottleneck, as it requires solving the following system of linear equations:

$$\begin{bmatrix} I_n & A^{\top} \\ A & 0 \end{bmatrix} \begin{bmatrix} w \\ \nu \end{bmatrix} = \begin{bmatrix} w_0 - \mu^{-1} c \\ b \end{bmatrix}, \qquad \text{(5)}$$

where $w = \mathrm{prox}_{\mu^{-1} g}(w_0)$.
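Putting the pieces together, the following baseline sketch (ours, not the paper's algorithm) runs the DRS iterations (3a)-(3b) with $\mu = 1$ by default, evaluating $\mathrm{prox}_f$ with the project_lorentz helper sketched above and $\mathrm{prox}_{\mu^{-1} g}$ through a one-time splu factorization of the KKT matrix in (5); the function name, the stopping rule, and the full-row-rank assumption on A are ours. The proposed matrix-free method is precisely about replacing this direct factorization at very large scales.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def drs(A, b, c, splits, num_iters=500, mu=1.0):
    """Baseline DRS for problem (1). `splits` holds the cumulative cone
    offsets n_1, n_1+n_2, ... used to cut x into Lorentz-cone blocks.
    Assumes the sparse matrix A has full row rank."""
    m, n = A.shape
    # KKT matrix of (5), factorized a single time and cached.
    K = sp.bmat([[sp.eye(n), A.T], [A, None]], format="csc")
    lu = spla.splu(K)
    z = np.zeros(n)
    for _ in range(num_iters):
        # Step (3a): prox_f is a block-wise Lorentz-cone projection.
        x = np.concatenate([project_lorentz(blk) for blk in np.split(z, splits)])
        # Step (3b): prox_{mu^{-1} g}(2x - z) via the cached factors of (5).
        rhs = np.concatenate([2.0 * x - z - c / mu, b])
        w = lu.solve(rhs)[:n]
        z = z + w - x
    return x
```

Note how the expensive linear algebra sits entirely outside the loop: each iteration costs only a projection, two triangular solves, and a few vector operations.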

2) Alternating Direction Method of Multipliers: ADMM is one of the most commonly used first-order methods for solving large-scale optimization problems. ADMM can be analyzed