
Optimization I; Chapter 3

Chapter 3 Quadratic Programming

3.1 Constrained quadratic programming problems

A special case of the NLP arises when the objective functional $f$ is quadratic and the constraints $h, g$ are linear in $x \in \mathbb{R}^n$. Such an NLP is called a Quadratic Programming (QP) problem. Its general form is

\[
\begin{aligned}
\text{minimize}\quad & f(x) := \tfrac{1}{2}\, x^T B x - x^T b \qquad && (3.1a)\\
\text{over}\quad & x \in \mathbb{R}^n &&\\
\text{subject to}\quad & A_1 x = c\,, && (3.1b)\\
& A_2 x \le d\,, && (3.1c)
\end{aligned}
\]

where $B \in \mathbb{R}^{n \times n}$ is symmetric, $A_1 \in \mathbb{R}^{m \times n}$, $A_2 \in \mathbb{R}^{p \times n}$, and $b \in \mathbb{R}^n$, $c \in \mathbb{R}^m$, $d \in \mathbb{R}^p$. As we shall see in this chapter, the QP (3.1a)-(3.1c) can be solved iteratively by active set strategies or interior point methods, where each iteration requires the solution of an equality constrained QP problem.

3.2 Equality constrained quadratic programming

If only equality constraints are imposed, the QP (3.1a)-(3.1c) reduces to

\[
\begin{aligned}
\text{minimize}\quad & f(x) := \tfrac{1}{2}\, x^T B x - x^T b \qquad && (3.2a)\\
\text{over}\quad & x \in \mathbb{R}^n &&\\
\text{subject to}\quad & A x = c\,, && (3.2b)
\end{aligned}
\]

where $A \in \mathbb{R}^{m \times n}$, $m \le n$. For the time being we assume that $A$ has full row rank $m$. The KKT conditions for (3.2a),(3.2b) then lead to the following linear system:

\[
\underbrace{\begin{pmatrix} B & A^T \\ A & 0 \end{pmatrix}}_{=:K}
\begin{pmatrix} x^* \\ \lambda^* \end{pmatrix}
=
\begin{pmatrix} b \\ c \end{pmatrix}, \qquad (3.3)
\]

where $\lambda^* \in \mathbb{R}^m$ is the associated Lagrange multiplier. We denote by $Z \in \mathbb{R}^{n \times (n-m)}$ the matrix whose columns span $\operatorname{Ker} A$, i.e., $AZ = 0$.
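As a concrete illustration, the KKT system (3.3) can be assembled and solved directly with NumPy. The following is a minimal sketch with made-up example data (the matrices below are illustrative, not taken from the notes):

```python
import numpy as np

# Hypothetical example data: n = 3 variables, m = 1 equality constraint.
B = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 2.0]])        # symmetric positive definite
A = np.array([[1.0, 1.0, 1.0]])        # full row rank, m = 1
b = np.array([1.0, 2.0, 3.0])
c = np.array([1.0])

n, m = B.shape[0], A.shape[0]

# Assemble the KKT matrix K of (3.3) and the right-hand side (b, c).
K = np.block([[B, A.T],
              [A, np.zeros((m, m))]])
rhs = np.concatenate([b, c])

sol = np.linalg.solve(K, rhs)
x_star, lam_star = sol[:n], sol[n:]

# x* satisfies the constraint A x* = c and the stationarity
# condition B x* + A^T lambda* = b.
print(A @ x_star)
print(B @ x_star + A.T @ lam_star - b)
```

This direct dense solve is only practical for small problems; the factorization and iterative approaches discussed below address the general case.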


Definition 3.1 KKT matrix and reduced Hessian

The matrix $K$ in (3.3) is called the KKT matrix, and the matrix $Z^T B Z$ is referred to as the reduced Hessian.

Lemma 3.2 Existence and uniqueness

Assume that $A \in \mathbb{R}^{m \times n}$ has full row rank $m \le n$ and that the reduced Hessian $Z^T B Z$ is positive definite. Then the KKT matrix $K$ is nonsingular. Hence, the KKT system (3.3) has a unique solution $(x^*, \lambda^*)$.

Proof:The proof is left as an exercise.

Under the conditions of the previous lemma, it follows that the second order sufficient optimality conditions are satisfied, so that the solution $x^*$ of (3.3) is a minimizer. In fact, it is the unique global minimizer.

Theorem 3.3 Global minimizer

Let the assumptions of Lemma 3.2 be satisfied and let $(x^*, \lambda^*)$ be the unique solution of the KKT system (3.3). Then $x^*$ is the unique global minimizer of the QP (3.2a),(3.2b).

Proof: Let $x$ be any feasible point, i.e., $Ax = c$, and set $p := x^* - x$, so that $Ap = 0$. Then

\[
f(x) = f(x^* - p) = \tfrac{1}{2}(x^* - p)^T B (x^* - p) - (x^* - p)^T b
= f(x^*) + \tfrac{1}{2}\, p^T B p - p^T (B x^* - b).
\]

By the first equation of (3.3), $B x^* - b = -A^T \lambda^*$, and hence $p^T (B x^* - b) = -(A p)^T \lambda^* = 0$, whence

\[
f(x) = f(x^*) + \tfrac{1}{2}\, p^T B p.
\]

In view of $p \in \operatorname{Ker} A$, we can write $p = Z u$, $u \in \mathbb{R}^{n-m}$, and hence,

\[
f(x) = f(x^*) + \tfrac{1}{2}\, u^T Z^T B Z u.
\]

Since $Z^T B Z$ is positive definite, we have $f(x) > f(x^*)$ unless $p = 0$, i.e., $x^*$ is the unique global minimizer of the QP (3.2a),(3.2b). $\square$


3.3 Direct solution of the KKT system

As far as the direct solution of the KKT system (3.3) is concerned, we distinguish between the symmetric indefinite factorization and the range-space and null-space approaches.

3.3.1 Symmetric indefinite factorization

A possible way to solve the KKT system (3.3) is to provide a symmetric factorization of the KKT matrix according to

\[
P^T K P = L D L^T, \qquad (3.4)
\]

where $P$ is an appropriately chosen permutation matrix, $L$ is lower triangular with $\operatorname{diag}(L) = I$, and $D$ is block diagonal. Based on (3.4), the KKT system (3.3) is solved as follows:

\[
\begin{aligned}
\text{solve}\quad & L y = P^T \begin{pmatrix} b \\ c \end{pmatrix}, && (3.5a)\\
\text{solve}\quad & D \hat{y} = y\,, && (3.5b)\\
\text{solve}\quad & L^T \tilde{y} = \hat{y}\,, && (3.5c)\\
\text{set}\quad & \begin{pmatrix} x^* \\ \lambda^* \end{pmatrix} = P \tilde{y}\,. && (3.5d)
\end{aligned}
\]
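SciPy exposes a symmetric indefinite (Bunch-Kaufman type) factorization as `scipy.linalg.ldl`, which can be used to carry out the steps (3.5a)-(3.5d). A minimal sketch with made-up data (the small KKT matrix below is illustrative only):

```python
import numpy as np
from scipy.linalg import ldl, solve_triangular

# Hypothetical small KKT matrix (3.3): B is 2x2 SPD, A is 1x2.
B = np.array([[2.0, 0.5], [0.5, 1.0]])
A = np.array([[1.0, 1.0]])
K = np.block([[B, A.T], [A, np.zeros((1, 1))]])
rhs = np.array([1.0, 0.0, 1.0])        # stacked right-hand side (b, c)

# Symmetric indefinite factorization: K = lu @ d @ lu.T,
# where lu[perm, :] is lower triangular (cf. (3.4)).
lu, d, perm = ldl(K, lower=True)
L = lu[perm, :]

# Forward / block-diagonal / backward substitution as in (3.5a)-(3.5d).
y = solve_triangular(L, rhs[perm], lower=True)   # (3.5a)
y_hat = np.linalg.solve(d, y)                    # (3.5b): d has 1x1/2x2 blocks
y_tld = solve_triangular(L.T, y_hat, lower=False)# (3.5c)
sol = np.empty_like(y_tld)
sol[perm] = y_tld                                # (3.5d): undo the permutation

print(np.allclose(K @ sol, rhs))
```

Here the small block-diagonal system (3.5b) is solved with a generic dense solver for brevity; a production code would exploit the 1x1/2x2 block structure of $D$.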

3.3.2 Range-space approach

The range-space approach applies if $B \in \mathbb{R}^{n \times n}$ is symmetric positive definite. Solving the first equation of (3.3) for $x = B^{-1}(b - A^T \lambda)$ and inserting the result into the second equation leads to the system

\[
S \lambda = A B^{-1} b - c
\]

with the Schur complement $S \in \mathbb{R}^{m \times m}$ given by $S := A B^{-1} A^T$. The range-space approach is particularly effective, if

- $B$ is well conditioned and easily invertible (e.g., $B$ is diagonal or block-diagonal),
- $B^{-1}$ is known explicitly (e.g., by means of a quasi-Newton updating formula),
- the number $m$ of equality constraints is small.
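The range-space solve can be sketched as follows (made-up data; $B$ is chosen diagonal, the favorable case named above):

```python
import numpy as np

# Hypothetical data: diagonal B (easily invertible), few constraints (m = 1).
B_diag = np.array([2.0, 4.0, 1.0])       # B = diag(2, 4, 1), SPD
A = np.array([[1.0, 0.0, 1.0]])
b = np.array([1.0, 1.0, 1.0])
c = np.array([2.0])

Binv = np.diag(1.0 / B_diag)             # B^{-1} explicit since B is diagonal

# Schur complement system: S lambda = A B^{-1} b - c, with S = A B^{-1} A^T.
S = A @ Binv @ A.T
lam = np.linalg.solve(S, A @ Binv @ b - c)

# Back-substitution into the first KKT equation: x = B^{-1}(b - A^T lambda).
x = Binv @ (b - A.T @ lam)

print(A @ x)          # constraint residual should vanish
```

Note that only the small $m \times m$ system for $\lambda$ requires a factorization; all other work consists of applications of $B^{-1}$.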


3.3.3 Null-space approach

The null-space approach does not require regularity of $B$ and thus has a wider range of applicability than the range-space approach. We assume that $A \in \mathbb{R}^{m \times n}$ has full row rank $m$ and that $Z^T B Z$ is positive definite, where $Z \in \mathbb{R}^{n \times (n-m)}$ is the matrix whose columns span $\operatorname{Ker} A$, which can be computed by QR factorization (cf. Chapter 2.4). We partition the solution vector according to

\[
x = Y w_Y + Z w_Z\,, \qquad (3.7)
\]

where $Y \in \mathbb{R}^{n \times m}$ is such that $[Y\ Z] \in \mathbb{R}^{n \times n}$ is nonsingular and $w_Y \in \mathbb{R}^m$, $w_Z \in \mathbb{R}^{n-m}$. Substituting (3.7) into the second equation of (3.3) and observing $AZ = 0$, we obtain

\[
A x = A Y w_Y = c\,, \qquad (3.8)
\]

i.e., $Y w_Y$ is a particular solution of $Ax = c$. Since $A \in \mathbb{R}^{m \times n}$ has rank $m$ and $[Y\ Z] \in \mathbb{R}^{n \times n}$ is nonsingular, we have $A [Y\ Z] = [A Y\ \ 0]$ with nonsingular $A Y \in \mathbb{R}^{m \times m}$. Hence, $w_Y$ is well determined by (3.8).

On the other hand, substituting (3.7) into the first equation of (3.3), we get

\[
B Y w_Y + B Z w_Z + A^T \lambda = b\,.
\]

Multiplying by $Z^T$ and observing $Z^T A^T = (A Z)^T = 0$ yields

\[
Z^T B Z\, w_Z = Z^T b - Z^T B Y w_Y\,. \qquad (3.9)
\]

The reduced KKT system (3.9) can be solved by a Cholesky factorization of the reduced Hessian $Z^T B Z \in \mathbb{R}^{(n-m) \times (n-m)}$. Once $w_Y$ and $w_Z$ have been computed, the solution $x^*$ is obtained according to (3.7). Finally, the Lagrange multiplier turns out to be the solution of the linear system arising from the multiplication of the first equation in (3.3) by $Y^T$:

\[
(A Y)^T \lambda^* = Y^T (b - B x^*)\,.
\]
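The steps above can be sketched in code; here $[Y\ Z]$ is taken from a full QR factorization of $A^T$, one convenient choice, and the data are made-up. Note that $B$ below is indefinite, but $Z^T B Z$ is positive definite, which is all the method requires:

```python
import numpy as np
from scipy.linalg import qr, cho_factor, cho_solve

# Hypothetical data: n = 3, m = 1; B is indefinite, Z^T B Z is SPD.
B = np.diag([-1.0, 1.0, 2.0])
A = np.array([[1.0, 0.0, 0.0]])
b = np.array([1.0, 2.0, 3.0])
c = np.array([4.0])
n, m = 3, 1

# Q = [Y Z] from a full QR factorization of A^T: A Y is nonsingular, A Z = 0.
Q, R = qr(A.T)
Y, Z = Q[:, :m], Q[:, m:]

# (3.8): solve (A Y) w_Y = c for the particular solution Y w_Y of A x = c.
wY = np.linalg.solve(A @ Y, c)

# (3.9): reduced system Z^T B Z w_Z = Z^T b - Z^T B Y w_Y, via Cholesky.
red = cho_factor(Z.T @ B @ Z)
wZ = cho_solve(red, Z.T @ b - Z.T @ B @ Y @ wY)

# (3.7): assemble x*, then recover lambda* from (A Y)^T lambda* = Y^T(b - B x*).
x = Y @ wY + Z @ wZ
lam = np.linalg.solve((A @ Y).T, Y.T @ (b - B @ x))

print(np.allclose(A @ x, c))
```

Since $Q$ is orthogonal, this choice of $Y$ keeps $A Y$ well conditioned whenever $A$ itself is.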

3.4 Iterative solution of the KKT system

If the direct solution of the KKT system (3.3) is computationally too costly, the alternative is to use an iterative method. An iterative solver can be applied either to the entire KKT system or, as in the range-space and null-space approach, can exploit the special structure of the KKT matrix.


3.4.1 Krylov methods

The KKT matrix $K \in \mathbb{R}^{(n+m) \times (n+m)}$ is indefinite. In fact, if $A$ has full row rank $m$, $K$ has $n$ positive and $m$ negative eigenvalues. Therefore, for the iterative solution of (3.3), Krylov subspace methods like GMRES (Generalized Minimum RESidual) and QMR (Quasi Minimum Residual) are appropriate candidates.
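With SciPy, for instance, GMRES can be applied to the indefinite system (3.3) without factorizing $K$. A sketch with made-up data, which also checks the eigenvalue signature stated above:

```python
import numpy as np
from scipy.sparse.linalg import gmres

# Hypothetical KKT system: B SPD 2x2, A 1x2 with full row rank.
B = np.array([[3.0, 1.0], [1.0, 2.0]])
A = np.array([[1.0, -1.0]])
K = np.block([[B, A.T], [A, np.zeros((1, 1))]])
rhs = np.array([1.0, 0.0, 1.0])

# Inertia of K: n = 2 positive and m = 1 negative eigenvalues.
eigs = np.linalg.eigvalsh(K)
print((eigs > 0).sum(), (eigs < 0).sum())

# GMRES makes no symmetry or definiteness assumption on K.
sol, info = gmres(K, rhs)          # info == 0 signals convergence
print(info, np.linalg.norm(K @ sol - rhs))
```

Since $K$ is symmetric (though indefinite), MINRES would be an equally natural choice; CG is ruled out by the indefiniteness.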

3.4.2 Transforming range-space iterations

We assume $B \in \mathbb{R}^{n \times n}$ to be symmetric positive definite and suppose that $\tilde{B}$ is some symmetric positive definite and easily invertible approximation of $B$ such that $\tilde{B}^{-1} B \approx I$. We choose $K_L \in \mathbb{R}^{(n+m) \times (n+m)}$ as the lower triangular block matrix

\[
K_L = \begin{pmatrix} I & 0 \\ -A \tilde{B}^{-1} & I \end{pmatrix}, \qquad (3.11)
\]

which gives rise to the regular splitting

\[
K_L K =
\underbrace{\begin{pmatrix} \tilde{B} & A^T \\ 0 & \tilde{S} \end{pmatrix}}_{=:M_1}
-
\underbrace{\begin{pmatrix} \tilde{B}(I - \tilde{B}^{-1} B) & 0 \\ -A(I - \tilde{B}^{-1} B) & 0 \end{pmatrix}}_{=:M_2\,\approx\,0}, \qquad (3.12)
\]

where $\tilde{S} \in \mathbb{R}^{m \times m}$ is given by

\[
\tilde{S} := -A \tilde{B}^{-1} A^T. \qquad (3.13)
\]

We set

\[
\psi := (x, \lambda)^T, \qquad \alpha := (b, c)^T.
\]

Given a start iterate $\psi^{(0)} \in \mathbb{R}^{n+m}$, we compute $\psi^{(k)}$, $k \in \mathbb{N}$, by means of the transforming range-space iteration

\[
\psi^{(k+1)} = \psi^{(k)} + M_1^{-1} K_L \big(\alpha - K \psi^{(k)}\big)
= \big(I - M_1^{-1} K_L K\big)\, \psi^{(k)} + M_1^{-1} K_L\, \alpha\,, \quad k \ge 0. \qquad (3.14)
\]

The transforming range-space iteration (3.14) will be implemented as follows:

\[
\begin{aligned}
d^{(k)} = (d_1^{(k)}, d_2^{(k)})^T &:= \alpha - K \psi^{(k)}\,, && (3.15a)\\
K_L d^{(k)} &= \big(d_1^{(k)},\, -A \tilde{B}^{-1} d_1^{(k)} + d_2^{(k)}\big)^T\,, && (3.15b)\\
M_1 \varphi^{(k)} &= K_L d^{(k)}\,, && (3.15c)\\
\psi^{(k+1)} &= \psi^{(k)} + \varphi^{(k)}\,. && (3.15d)
\end{aligned}
\]
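The steps (3.15a)-(3.15d) can be transcribed directly; in the sketch below (made-up data) $\tilde{B}$ is taken as the diagonal of $B$, one common choice of an easily invertible SPD approximation:

```python
import numpy as np

# Hypothetical data.
B = np.array([[4.0, 0.5], [0.5, 3.0]])
A = np.array([[1.0, 1.0]])
b = np.array([1.0, 2.0])
c = np.array([1.0])
n, m = 2, 1

Bt = np.diag(np.diag(B))               # B~: easily invertible SPD approximation
Bt_inv = np.diag(1.0 / np.diag(B))
St = -A @ Bt_inv @ A.T                 # S~ := -A B~^{-1} A^T  (3.13)

K = np.block([[B, A.T], [A, np.zeros((m, m))]])
M1 = np.block([[Bt, A.T], [np.zeros((m, n)), St]])
alpha = np.concatenate([b, c])

psi = np.zeros(n + m)                  # start iterate psi^(0)
for k in range(50):
    d = alpha - K @ psi                               # (3.15a)
    d1, d2 = d[:n], d[n:]
    KLd = np.concatenate([d1, -A @ Bt_inv @ d1 + d2]) # (3.15b)
    phi = np.linalg.solve(M1, KLd)                    # (3.15c)
    psi = psi + phi                                   # (3.15d)

print(np.linalg.norm(K @ psi - alpha))
```

In practice the solve with $M_1$ is performed by block back-substitution (a solve with $\tilde{S}$ followed by one with $\tilde{B}$), not by a generic dense solver as in this sketch; the iteration converges because the splitting error $M_2$ is small when $\tilde{B}^{-1} B \approx I$.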


3.4.3 Transforming null-space iterations

We assume that $x \in \mathbb{R}^n$ and $\lambda \in \mathbb{R}^m$ admit the decomposition

\[
\begin{aligned}
x &= (x_1, x_2)^T, && x_1 \in \mathbb{R}^{m_1},\ x_2 \in \mathbb{R}^{n-m_1}, && (3.16a)\\
\lambda &= (\lambda_1, \lambda_2)^T, && \lambda_1 \in \mathbb{R}^{m_1},\ \lambda_2 \in \mathbb{R}^{m-m_1}, && (3.16b)
\end{aligned}
\]

and that $A \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{n \times n}$ can be partitioned by means of

\[
A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}, \qquad
B = \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix}, \qquad (3.17)
\]

where $A_{11}, B_{11} \in \mathbb{R}^{m_1 \times m_1}$ with nonsingular $A_{11}$. Partitioning the right-hand side in (3.3) accordingly, the KKT system takes the form

\[
\begin{pmatrix}
B_{11} & B_{12} & A_{11}^T & A_{21}^T \\
B_{21} & B_{22} & A_{12}^T & A_{22}^T \\
A_{11} & A_{12} & 0 & 0 \\
A_{21} & A_{22} & 0 & 0
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \lambda_1 \\ \lambda_2 \end{pmatrix}
=
\begin{pmatrix} b_1 \\ b_2 \\ c_1 \\ c_2 \end{pmatrix}. \qquad (3.18)
\]

We rearrange (3.18) by exchanging the second and third rows and columns:

\[
\begin{pmatrix}
B_{11} & A_{11}^T & B_{12} & A_{21}^T \\
A_{11} & 0 & A_{12} & 0 \\
B_{21} & A_{12}^T & B_{22} & A_{22}^T \\
A_{21} & 0 & A_{22} & 0
\end{pmatrix}
\begin{pmatrix} x_1 \\ \lambda_1 \\ x_2 \\ \lambda_2 \end{pmatrix}
=
\begin{pmatrix} b_1 \\ c_1 \\ b_2 \\ c_2 \end{pmatrix}.
\]