
Simon Fraser University, Department of Economics

Econ 798 - Introduction to Mathematical Economics

Prof. Alex Karaivanov

Lecture Notes 4

1 Unconstrained optimization

In this section we address the problem of maximizing (minimizing) a function in the case when there are no constraints on its arguments. This is not a very interesting case for economics, which typically deals with problems where resources are constrained, but it represents a natural starting point for solving the more economically relevant constrained optimization problems.

1.1 Univariate case

Let $f : U \subseteq \mathbb{R} \to \mathbb{R}$ be $C^2$. We are interested in finding maxima (or minima) of this function. We need to start by defining what we mean by these concepts.

Definition (local maximum) - done before.

A point $x_0 \in U$ is a local maximum for the function $f$ if $\exists \varepsilon > 0$ such that $f(x_0) \ge f(x)$, $\forall x \in U \cap N_\varepsilon(x_0)$, where $N_\varepsilon(x_0)$ denotes an $\varepsilon$-ball around $x_0$. If $f(x_0) > f(x)$, $\forall x \in U \cap N_\varepsilon(x_0)$ with $x \ne x_0$, we say that the local maximum is strict.

Clearly a function can have many or no local maxima in its domain.

Definition (global maximum)

A point $x_0 \in U$ is a global maximum for the function $f$ if $f(x_0) \ge f(x)$, $\forall x \in U$.

So how do we go about finding local (global) maxima? Most of the time we use differentiation and set the first derivative to zero, but a zero first derivative is neither a necessary condition for a maximum (e.g., a corner maximum or a kink maximum), nor a sufficient one ($f' = 0$ could be a minimum or another critical point). Thus some care is needed to ensure that what one finds by setting $f' = 0$ is indeed what one is looking for. Let us call both a local maximum and a local minimum a local extremum. The following theorem is the basic result used in unconstrained optimization problems for the univariate case.

Theorem 19 (Conditions for local extrema)

Let $f'(x_0) = 0$. If:

(i) $f''(x_0) < 0$, then $x_0$ is a local maximum of $f$;
(ii) $f''(x_0) > 0$, then $x_0$ is a local minimum of $f$;
(iii) $f''(x_0) = 0$, then we cannot conclude whether $x_0$ is a local extremum of $f$.

Theorem 20

A continuous function $f$ with domain the closed interval $[a, b] \subset \mathbb{R}$ attains a global maximum and a global minimum on the interval.
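To make Theorem 19 concrete, here is a minimal numerical sketch; the example function $f(x) = x^3 - 3x$ and the helper names are assumptions for illustration only, not part of the notes.

```python
# A minimal sketch of Theorem 19 on the assumed function f(x) = x**3 - 3*x,
# whose critical points are x = -1 and x = 1, classified by the sign of f''.

def f(x):
    return x**3 - 3*x

def fprime(x):
    return 3*x**2 - 3

def fsecond(x):
    return 6*x

def classify(x0, tol=1e-8):
    """Apply Theorem 19 at a point where f'(x0) = 0."""
    assert abs(fprime(x0)) < tol, "x0 is not a critical point"
    if fsecond(x0) < -tol:
        return "local maximum"
    if fsecond(x0) > tol:
        return "local minimum"
    return "test inconclusive (f''(x0) = 0)"

print(classify(-1.0))  # local maximum (f(-1) = 2)
print(classify(1.0))   # local minimum (f(1) = -2)
```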

1.2 The multivariate case

Now consider more general functions of the type $f : U \subseteq \mathbb{R}^n \to \mathbb{R}$ (multivariate).

Theorem 21 (First-order (necessary) conditions for a local extremum)

Let $f : U \subseteq \mathbb{R}^n \to \mathbb{R}$ be a $C^1$ function. If $x_0$ is a local extremum of $f$ in the interior of $U$, then:

$$\frac{\partial f(x_0)}{\partial x_i} = 0, \quad i = 1, \ldots, n$$

The above theorem states that at an interior local extremum all first partial derivatives must be equal to zero, i.e., we can solve the system of $n$ equations defined by the condition above and look for interior extrema only among its solutions. Note also that the above can be written equivalently as

$$\nabla f(x_0) = 0_{n \times 1}$$

i.e., at an interior local extremum the gradient of $f$ is zero. Remembering that the gradient is a vector pointing in the direction in which the function changes fastest, we see that the above condition implies that at the extremum there is no such best direction, i.e., if we go in any direction we will reach a lower functional value (if we are talking about a maximum).

The first-order condition $\nabla f(x_0) = 0_{n \times 1}$ is only necessary. To obtain sufficient conditions, as in the univariate case (Theorem 19), we need to know something about the second derivatives of $f$. In order to do so, we need some useful concepts from linear algebra.
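Before turning to those linear-algebra concepts, here is a small numerical sketch of the first-order condition above; the example function and the finite-difference helper are assumptions for illustration only.

```python
# A sketch of the first-order condition: numerically verify that the gradient
# vanishes at a candidate point, for the assumed f(x) = -(x1 - 1)^2 - (x2 + 2)^2.
import numpy as np

def f(x):
    return -(x[0] - 1) ** 2 - (x[1] + 2) ** 2

def numerical_gradient(f, x, h=1e-6):
    """Central-difference approximation of the gradient of f at x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

x0 = np.array([1.0, -2.0])          # candidate interior extremum
print(numerical_gradient(f, x0))    # approximately [0, 0]
```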

Definition (Principal submatrix and minor)

Let $A$ be an $n \times n$ matrix. A $k$-th order principal submatrix of $A$ is the matrix formed by deleting some $n - k$ rows and their corresponding $n - k$ columns of $A$. A principal minor is the determinant of a $k$-th order principal submatrix.

Definition (Leading principal submatrix and minor)

The $k$-th order leading principal submatrix (LPS) of $A$ is formed by deleting the last $n - k$ columns and rows of $A$. Its determinant is called the leading principal minor (LPM).

For a symmetric $n \times n$ matrix $A$ define the following concepts.

Definition (positive/negative definite symmetric matrix) (Note: we saw alternative definitions earlier, using quadratic forms!)

(a) The symmetric matrix $A$ is positive definite (p.d.) iff all its $n$ LPMs are positive.
(b) The symmetric matrix $A$ is negative definite (n.d.) iff all its LPMs are non-zero and alternate in sign, that is $|A_1| < 0$, $|A_2| > 0$, etc.

Definition (positive/negative semidefinite symmetric matrix)

(a) The matrix $A$ is positive semi-definite (p.s.d.) iff all its principal minors are non-negative.
(b) The matrix $A$ is negative semi-definite (n.s.d.) iff all its odd-order principal minors are non-positive and all its even-order principal minors are non-negative.

Note: the above must be true for all principal minors, not just the leading ones.

Finally we are ready to state the sufficient conditions for local extrema.

Theorem 22 (Second-order (sufficient) conditions for local extrema)

Let $f : U \subseteq \mathbb{R}^n \to \mathbb{R}$ be a $C^2$ function. Let also $x_0 \in U$ satisfy $\nabla f(x_0) = 0_{n \times 1}$ and let $H(x_0)$ be the Hessian of $f$ at $x_0$. Then:

(i) If $H(x_0)$ is negative definite, then $x_0$ is a strict local maximum of $f$;
(ii) If $H(x_0)$ is positive definite, then $x_0$ is a strict local minimum of $f$.

If the Hessian is only p.s.d. (n.s.d.), the extremum may not be strict.
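The LPM tests above are easy to mechanize. Below is a sketch (the function names and the use of numpy are assumptions, not from the notes) that classifies a symmetric Hessian via its leading principal minors, applied to the Hessian that appears in Example 1 below.

```python
# A sketch of the LPM definiteness tests behind Theorem 22.
import numpy as np

def leading_principal_minors(H):
    """Return [|H_1|, |H_2|, ..., |H_n|] for a square matrix H."""
    n = H.shape[0]
    return [np.linalg.det(H[:k, :k]) for k in range(1, n + 1)]

def classify_hessian(H):
    """Classify a symmetric Hessian as p.d., n.d., or inconclusive."""
    lpms = leading_principal_minors(H)
    if all(m > 0 for m in lpms):
        return "positive definite -> strict local minimum"
    if all((m < 0 if k % 2 == 1 else m > 0) for k, m in enumerate(lpms, start=1)):
        return "negative definite -> strict local maximum"
    return "test inconclusive (check semidefiniteness / all principal minors)"

# Hessian from Example 1 below: H = [[-4, -1], [-1, -4]]
print(classify_hessian(np.array([[-4.0, -1.0], [-1.0, -4.0]])))
```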

Theorem 23 (Second-order necessary conditions)

Let $f : U \subseteq \mathbb{R}^n \to \mathbb{R}$ be a $C^2$ function. Let also $x_0 \in \operatorname{int} U$ (the interior of $U$, i.e., not a boundary point). If $x_0$ is a local maximum (minimum) of $f$, then $\nabla f(x_0) = 0_{n \times 1}$ and $H(x_0)$ is n.s.d. (p.s.d.).

The following examples illustrate how the theory from above is applied.

Example 1 (Multi-product firm)

Suppose we have a firm producing two goods in quantities $q_1$ and $q_2$, with prices $p_1$ and $p_2$. Let the cost of producing $q_1$ units of good 1 and $q_2$ units of good 2 be given by $C(q_1, q_2) = 2q_1^2 + q_1 q_2 + 2q_2^2$. The firm maximizes profits, i.e., it solves:

$$\max_{q_1, q_2} \ \pi = p_1 q_1 + p_2 q_2 - (2q_1^2 + q_1 q_2 + 2q_2^2) = p^T q - q^T A q$$

where $p = (p_1, p_2)^T$, $q = (q_1, q_2)^T$ and $A = \begin{bmatrix} 2 & .5 \\ .5 & 2 \end{bmatrix}$.

How to solve for the optimal quantities the firm will choose? Take first the partial derivatives of $\pi$ with respect to $q_1$ and $q_2$ and set them to zero, to find $q_i^* = \frac{4p_i - p_j}{15}$, $i, j = 1, 2$, $i \ne j$. We also need to verify that this is indeed a maximum. The Hessian of the objective is

$$H = \begin{bmatrix} -4 & -1 \\ -1 & -4 \end{bmatrix}$$

Let's check if the leading principal minors alternate in sign: we have $|H_1| = \det[-4] = -4 < 0$ and $|H_2| = \det(H) = 15 > 0$, i.e., the candidate solution is indeed a maximum.
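A quick numerical check of Example 1 follows; the concrete prices $p_1 = 3$, $p_2 = 2$ are assumed for illustration and are not part of the notes.

```python
# A small numerical check of Example 1 with assumed prices p = (3, 2).
import numpy as np

p = np.array([3.0, 2.0])
A = np.array([[2.0, 0.5],
              [0.5, 2.0]])

# FOC: p - (A + A^T) q = 0  =>  q* = (A + A^T)^{-1} p
q_star = np.linalg.solve(A + A.T, p)
print(q_star)                      # [0.6667, 0.3333]
print((4 * p - p[::-1]) / 15)      # closed form q_i = (4 p_i - p_j)/15, same answer

# Hessian of the objective and its leading principal minors
H = -(A + A.T)                     # = [[-4, -1], [-1, -4]]
print(np.linalg.det(H[:1, :1]), np.linalg.det(H))  # -4.0, 15.0 -> negative definite
```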

Example 2 (OLS)

Think of some variable $y$ which depends on $x_1, x_2, \ldots, x_k$ and assume we have a dataset of $n$ observations (i.e., $n$ vectors $X_i = (x_{1i}, x_{2i}, \ldots, x_{ki})$, $i = 1, \ldots, n$). Assume that $x_1$ is a vector of ones. We are looking for the "best fit" between a linear function of the observations, $X\beta$, and our dependent variable $y$. (Note that $X$ is $n \times k$ and $\beta$ is a $k \times 1$ vector of coefficients.) Thus we can write:

$$y_i = \beta_1 x_{1i} + \cdots + \beta_k x_{ki} + \varepsilon_i, \quad i = 1, \ldots, n$$

where the $\varepsilon_i$ are the `residuals' (errors) between the fitted line $X\beta$ and $y$. The above can be written more compactly in matrix form as:

$$y = X\beta + \varepsilon$$

Remember, we want to find the best fit, i.e., the coefficients $\beta$ which minimize the $\varepsilon$'s in some sense. One possible criterion (used by the OLS method) is to choose $\beta$ to minimize $\sum_{i=1}^{n} \varepsilon_i^2$, i.e., we want to solve the problem:

$$\min_\beta \ S(\beta) = \sum_i (y_i - \beta_1 x_{1i} - \cdots - \beta_k x_{ki})^2 = (y - X\beta)^T (y - X\beta) = y^T y - \beta^T X^T y - y^T X\beta + \beta^T X^T X \beta$$

The first-order condition for the above minimization problem is (using the matrix differentiation rules):

$$\frac{\partial S(\beta)}{\partial \beta} = -2X^T y + 2X^T X \beta \overset{!}{=} 0$$

from which we find $\hat{\beta} = (X^T X)^{-1} X^T y$, a candidate minimum. So is $\hat{\beta}$ indeed a minimum of $S(\beta)$? We need to check if the Hessian is positive semi-definite, i.e., whether $H(\beta) = \frac{\partial^2 S(\beta)}{\partial \beta \, \partial \beta^T} = 2X^T X$, a $k \times k$ matrix, is p.s.d. This is true in this case (think of all the squared terms). (Exercise: prove that the Hessian is p.s.d. using one of the given definitions.)
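Here is a sketch of the OLS formula on simulated data; the data-generating process, sample size, and noise level are assumptions made only for the illustration.

```python
# A sketch of beta_hat = (X'X)^{-1} X'y on simulated data.
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])  # x1 = column of ones
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=n)

# beta_hat computed via a linear solve of the normal equations for stability
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_hat)   # close to [1.0, 2.0, -0.5]

# Second-order check: H = 2 X'X is p.s.d. (all eigenvalues >= 0)
print(np.all(np.linalg.eigvalsh(2 * X.T @ X) >= -1e-10))  # True
```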

1.3 Constrained optimization

1.3.1 Introduction

In this section we look at problems of the following general form:

$$\max_{x \in \mathbb{R}^n} f(x) \qquad \text{(NLP)}$$
$$\text{s.t. } \; g(x) \le b$$
$$\qquad\; h(x) = c$$

We call the above problem a Non-Linear Optimization Problem (NLP). In it, $f(x)$ is called the objective function, $g(x) \le b$ are inequality constraints, and $h(x) = c$ are equality constraints. Note that any optimization problem can be written in the above canonical form. For example, if we want to minimize a function $h(x)$, we can do this by maximizing $-h(x)$.

It turns out that it is easier not to solve (NLP) directly, but instead to solve another, related problem (Lagrange's or Kuhn-Tucker's) for $x^*$ and then verify that $x^*$ solves the original NLP as well. We will also be interested in whether we are obtaining all solutions to the NLP in this way, i.e., whether it is true that if $x^*$ solves the NLP it solves the related problem as well. Thus we would like to see when Lagrange's or Kuhn-Tucker's methods are both necessary and sufficient for obtaining solutions to the original NLP.

1.3.2 Equality constraints

Start simple, assuming that the problem we deal with has only equality constraints, i.e.,

$$\max_{x \in \mathbb{R}^n} f(x) \quad \text{s.t. } h(x) = c, \; c \in \mathbb{R}^m$$

The equality constraints restrict the domain over which we maximize. Notice that if the number of constraints is equal to the number of variables ($m = n$) and if we assume that the constraints are linearly independent, potentially we can solve for $x$ from the constraints and there will be nothing left to optimize. Thus a well-defined problem will typically have $m < n$ (fewer constraints than choice variables).

(a) The Lagrange multipliers method

The method for solving problems of the above type is called the Lagrange Multipliers Method (LMM). What it does is convert the NLP into a related problem (call it the LMM problem) with a different objective function and no constraints, so that we can then use the usual unconstrained optimization techniques. What is the price we have to pay for this simplification? During the conversion to LMM we end up with $m$ more variables to optimize over. We next verify what the connection is between the solutions to the LMM and the original NLP and, most importantly, what conditions are needed for the solutions to the LMM to be solutions to our NLP with equality constraints.

Let us describe how the Lagrange method works. First we form the new objective function, called the Lagrangean:

$$\mathcal{L}(x, \lambda) \equiv f(x) + \lambda^T (c - h(x))$$

Notice that we added $m$ new variables, $\lambda_j$, $j = 1, \ldots, m$, one for each constraint. These are called Lagrange multipliers. Note that they multiply zeros, so in fact the functional value of the objective does not change.
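As a preview of the first-order conditions written out formally below, here is a sketch that forms the Lagrangean for an assumed toy problem and solves the resulting system symbolically; the problem, the variable names, and the use of sympy are assumptions, not part of the notes.

```python
# A sketch of the Lagrangean construction and FOC solving for the assumed
# toy problem:  max f(x1, x2) = -(x1 - 1)^2 - (x2 - 2)^2   s.t.  x1 + x2 = 1.
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lambda', real=True)
f = -(x1 - 1)**2 - (x2 - 2)**2
h, c = x1 + x2, 1

L = f + lam * (c - h)                      # the Lagrangean
focs = [sp.diff(L, v) for v in (x1, x2, lam)]
print(sp.solve(focs, [x1, x2, lam], dict=True))
# [{x1: 0, x2: 1, lambda: 2}]
```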

The LMM problem is:

$$\max_{x, \lambda} \mathcal{L}(x, \lambda) \qquad \text{(LMM)}$$

Suppose we have set all partial derivatives to zero and arrived at a candidate solution $(x^*, \lambda^*)$. We need to check that it is indeed a maximum, i.e., a second-order condition must be verified as well. Let's now go through the above steps in more detail.

First, write down the first-order (necessary) conditions for a local extremum in the LMM problem:

$$\frac{\partial \mathcal{L}}{\partial x_i} = -\lambda_1 \frac{\partial h_1}{\partial x_i} - \cdots - \lambda_m \frac{\partial h_m}{\partial x_i} + \frac{\partial f}{\partial x_i} = 0, \quad i = 1, \ldots, n$$

$$\frac{\partial \mathcal{L}}{\partial \lambda_j} = c_j - h_j(x) = 0, \quad j = 1, \ldots, m$$

Note we have $m + n$ equations in the same number of unknowns.

(b) Second-order conditions of LMM and the bordered Hessian

Suppose the above system of first-order conditions has a solution $(x^*, \lambda^*)$. We need to check that it is indeed a maximizer. The standard way in unconstrained problems was to check whether the Hessian is n.s.d. Here, we form the so-called bordered Hessian, defined as:

$$\hat{H}_{(m+n) \times (m+n)}(x^*, \lambda^*) \equiv
\begin{bmatrix}
0 & \cdots & 0 & \frac{\partial h_1}{\partial x_1} & \cdots & \frac{\partial h_1}{\partial x_n} \\
\vdots & & \vdots & \vdots & & \vdots \\
0 & \cdots & 0 & \frac{\partial h_m}{\partial x_1} & \cdots & \frac{\partial h_m}{\partial x_n} \\
\frac{\partial h_1}{\partial x_1} & \cdots & \frac{\partial h_m}{\partial x_1} & \frac{\partial^2 \mathcal{L}}{\partial x_1^2} & \cdots & \frac{\partial^2 \mathcal{L}}{\partial x_1 \partial x_n} \\
\vdots & & \vdots & \vdots & & \vdots \\
\frac{\partial h_1}{\partial x_n} & \cdots & \frac{\partial h_m}{\partial x_n} & \frac{\partial^2 \mathcal{L}}{\partial x_n \partial x_1} & \cdots & \frac{\partial^2 \mathcal{L}}{\partial x_n^2}
\end{bmatrix}$$

where all derivatives are evaluated at $(x^*, \lambda^*)$. This is nothing but our usual Hessian for the Lagrangean (from the unconstrained optimization method), but notice that we have ordered the matrix of second partials in a particular way, first taking all second partials with respect to the $\lambda$'s and then with respect to the $x$'s. The bordered Hessian $\hat{H}$ can be written in a more compact way as:

$$\hat{H} = \begin{bmatrix} 0_{m \times m} & (Jh)_{m \times n} \\ (Jh)^T_{n \times m} & H(\mathcal{L}(x))_{n \times n} \end{bmatrix}$$

where $Jh$ is the Jacobian of $h(x)$ and $H(\mathcal{L}(x))$ is the "Hessian" of $\mathcal{L}(x)$ (the matrix of second partials of $\mathcal{L}$ taken only with respect to the $x_i$).

Because of all the zeros, it turns out we need only the last $n - m$ leading principal minors of $\hat{H}$ to determine whether the Hessian of the Lagrangean is n.s.d. on the constraint set. Let $|\hat{H}_{2m+1}|$ be the LPM whose last diagonal element is $\frac{\partial^2 \mathcal{L}}{\partial x_{m+1}^2}$, $|\hat{H}_{2m+2}|$ the LPM whose last diagonal element is $\frac{\partial^2 \mathcal{L}}{\partial x_{m+2}^2}$, etc. Then we have the following result:

Proposition:

If $\operatorname{sign}|\hat{H}_{2m+l}| = (-1)^{m+l}$ for $l = 1, \ldots, n - m$, then the Hessian of the Lagrangean is n.s.d. on the constraint set and the candidate solution is a maximum of the LMM.
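Here is a sketch of the Proposition on an assumed toy problem, $\max f(x, y) = xy$ subject to $x + y = 2$ (not from the notes): build the bordered Hessian at the candidate point from the FOCs, $(x^*, y^*, \lambda^*) = (1, 1, 1)$, and check the sign pattern of the last $n - m$ leading principal minors.

```python
# A sketch checking the sign condition on the assumed toy problem
# max f(x, y) = x*y  s.t.  x + y = 2, with candidate (x*, y*, lambda*) = (1, 1, 1).
import numpy as np

n, m = 2, 1
Jh = np.array([[1.0, 1.0]])          # Jacobian of h(x, y) = x + y  (1 x 2)
H_L = np.array([[0.0, 1.0],          # second partials of the Lagrangean
                [1.0, 0.0]])         # w.r.t. (x, y) at the candidate point

# Bordered Hessian in block form: [[0, Jh], [Jh^T, H_L]]
H_bord = np.block([[np.zeros((m, m)), Jh],
                   [Jh.T,             H_L]])

# The LPM of order 2m + l, l = 1..n-m, should have sign (-1)^(m + l) for a maximum.
for l in range(1, n - m + 1):
    k = 2 * m + l
    lpm = np.linalg.det(H_bord[:k, :k])
    print(k, lpm, np.sign(lpm) == (-1) ** (m + l))   # 3, 2.0, True -> maximum
```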

Example: (A Consumer's problem)

A consumer has income $y$ and wants to choose the quantities of $n$ goods, $q_1, \ldots, q_n$, to buy so as to maximize her strictly concave utility $U(q_1, \ldots, q_n)$, taking as given the prices of the goods $p_1, \ldots, p_n$. Her problem can be written as (using vector notation):

$$\max_q U(q) \quad \text{s.t. } p^T q = y$$

Set up the Lagrangean:

$$\mathcal{L}(q, \lambda) = U(q) + \lambda (y - p^T q)$$

The FOCs are:

$$y - p^T q = 0$$
$$\frac{\partial U}{\partial q_i} - \lambda p_i = 0, \quad i = 1, \ldots, n$$

which can be solved for $(q^*, \lambda^*)$. Check that the SOC (the sign conditions on the bordered Hessian) is satisfied as an exercise (Hint: use the concavity of $U$).

(c) The constraint qualification

Notice that the above proposition does not say anything about whether $x^*$ obtained as the solution to the LMM will solve the original NLP problem. In general, this is not guaranteed, since the FOCs of the Lagrangean may be neither necessary nor sufficient for a maximum, and thus additional conditions are needed. One possible necessary condition for the solutions of the LMM to be solutions of the NLP as well is the so-called constraint qualification (CQ):

$$Jh(x^*) = \begin{bmatrix} \frac{\partial h_1(x^*)}{\partial x_1} & \cdots & \frac{\partial h_1(x^*)}{\partial x_n} \\ \vdots & & \vdots \\ \frac{\partial h_m(x^*)}{\partial x_1} & \cdots & \frac{\partial h_m(x^*)}{\partial x_n} \end{bmatrix}$$

must have full rank $m$ at the candidate solution $x^*$, i.e., the constraint gradients must be linearly independent there.

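For the consumer problem above, $h(q) = p^T q$, so the Jacobian is simply the price row vector and the constraint qualification holds whenever prices are non-zero. A minimal sketch follows; the concrete price vector is assumed for illustration.

```python
# A sketch of the constraint qualification check for the budget constraint
# h(q) = p'q: its Jacobian is the 1 x n row p', so rank(Jh) = m = 1 iff p != 0.
import numpy as np

p = np.array([2.0, 1.0, 4.0])           # assumed prices, n = 3 goods
Jh = p.reshape(1, -1)                   # Jacobian of h(q) = p'q
m = Jh.shape[0]
print(np.linalg.matrix_rank(Jh) == m)   # True -> constraint qualification satisfied
```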