
Mathematical Economics (ECON 471)

Lecture 4

Unconstrained & Constrained Optimization

Teng Wah Leo

1 Unconstrained Optimization

We will now deal with the simplest of optimization problems, those without conditions, or what we refer to as unconstrained optimization problems. By definition, for a function $F(\cdot)$ of multiple variables, $x = [x_1, x_2, \ldots, x_n] \in U \subseteq \mathbb{R}^n$, such that $F: U \to \mathbb{R}^1$,

1. a point $x^* \in U$ is a max of $F(\cdot)$ on $U$ if $F(x^*) \geq F(x)$ $\forall x \neq x^*$, $x \in U$, and

2. a point $x^* \in U$ is a strict max of $F(\cdot)$ on $U$ if $F(x^*) > F(x)$ $\forall x \neq x^*$, $x \in U$.

Further,

3. $x^* \in U$ is a local max of $F(\cdot)$ if there is a "ball" $B_\epsilon(x^*)$ around $x^*$ such that $F(x^*) \geq F(x)$ $\forall x \neq x^*$, $x \in B_\epsilon(x^*) \cap U$, and

4. $x^* \in U$ is a strict local max of $F(\cdot)$ if there is a "ball" $B_\epsilon(x^*)$ around $x^*$ such that $F(x^*) > F(x)$ $\forall x \neq x^*$, $x \in B_\epsilon(x^*) \cap U$.

The latter two points regarding local and strict local maxima essentially say that for "nearby" points, the function $F(\cdot)$ achieves its max at $x^*$. If the maximum is achieved on the entire domain $U$ of the function $F(\cdot)$, then we say the max is global/absolute at $x^*$. These definitions simply need to be reversed for the case of minimization, or finding the minimum.

The next question is regarding the manner in which a maximum or minimum is characterized. Focusing on maxima, you would recall that for a single variable function, a maximum is achieved when the function attains a critical point, which is when the slope is zero, in other words, when the function peaks. In the multivariate case, the criterion is similar, the sole difference being that the maximum is achieved in the interior of the domain, an interior solution. To be precise, a maximum is attained when the Jacobian vector is equal to zero,
\[
\left. \frac{\partial F(x)}{\partial x_i} \right|_{x = x^*} = 0, \quad \forall i \in \{1, 2, \ldots, n\}
\]
and this solution is an interior solution if the "ball" $B_\epsilon(x^*)$ is in the domain of $F(\cdot)$. All this of course retains the continuity and differentiability of the function. This criterion remains true for a minimization problem.

So what distinguishes a maximization from a minimization problem? It is for this reason we developed the Hessian matrix, and the ideas of negative and positive (semi)definiteness. Recall that in the univariate case, a maximum occurs where the function is concave, and a minimum where it is convex. These are important distinguishing features, without knowledge of which you might instead be finding a maximum for a minimization problem and vice versa. So the onus is always on you to verify that you are setting up and solving the "correct problem". To reiterate the conditions for maximization and minimization in the case of multivariate functions, let $F: U \to \mathbb{R}^1$ be continuous and twice differentiable, and let $x^*$ be the critical value at the solution; then

1. if the Hessian matrix, $\nabla^2 F(x^*)$, is a negative (definite) semi-definite matrix, then $x^*$ is a (strict) local max of the function $F(\cdot)$,

2. if the Hessian matrix, $\nabla^2 F(x^*)$, is a positive (definite) semi-definite matrix, then $x^*$ is a (strict) local min of the function $F(\cdot)$, and

3. if the Hessian matrix, $\nabla^2 F(x^*)$, is indefinite, then $x^*$ is neither a (strict) local max nor min of the function $F(\cdot)$.

The strict conditions are referred to as the sufficient conditions for maximization or minimization as the case may be, while the weaker conditions (with the weak inequalities $\geq$ and $\leq$) are referred to as necessary conditions. If the function is concave, then your solution $x^*$ to the function $F(\cdot)$ is a global max, and if the function is convex, your solution would be a global min of $F(\cdot)$.
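To see these checks in action, here is a minimal numerical sketch in Python (not part of the original notes), using an assumed quadratic objective $F(x_1, x_2) = -x_1^2 - x_2^2 + x_1 x_2$; the eigenvalue signs of the Hessian stand in for the definiteness test.

```python
# Minimal sketch (assumed example function, not from the notes):
# classify a critical point via the gradient and Hessian eigenvalues.
import numpy as np

def grad(x):
    # dF/dx1 = -2*x1 + x2, dF/dx2 = -2*x2 + x1
    return np.array([-2 * x[0] + x[1], -2 * x[1] + x[0]])

def hessian(x):
    # Constant because the assumed F is quadratic.
    return np.array([[-2.0, 1.0], [1.0, -2.0]])

x_star = np.array([0.0, 0.0])           # candidate critical point
assert np.allclose(grad(x_star), 0.0)   # first order condition holds

eig = np.linalg.eigvalsh(hessian(x_star))
if np.all(eig < 0):
    print("negative definite: strict local max", eig)
elif np.all(eig > 0):
    print("positive definite: strict local min", eig)
else:
    print("indefinite or semi-definite: test inconclusive", eig)
```

Checking eigenvalue signs is equivalent to the leading principal minor tests used later in these notes, and is often more convenient numerically.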

To consolidate what we have learned, we will go through several examples here, one from microeconomics, one from macroeconomics, and another from econometrics.

Example 1 In choosing to maximize his/her utility, an individual chooses an optimal combination of goods that he/she likes. However, the individual is subject to a budget constraint. For simplicity, let this individual choose between quantities of two types of goods, 1 and 2, with respective quantities $x_1$ and $x_2$. Let the utility function that describes the level of his felicity be $U(\cdot)$. The individual's income is $y$, and the prices of goods 1 and 2 are $p_1$ and $p_2$. Then the constrained maximization problem is,
\[
\max_{x_1, x_2} U(x_1, x_2) \tag{1}
\]
\[
\text{subject to } y = p_1 x_1 + p_2 x_2 \tag{2}
\]
Although there is a constraint in this optimization problem, it is quite easy to change this into an unconstrained problem in terms of one good. With the solution for that single good, you can always find the solution for the other by substituting your solution back into the budget constraint. So the new unconstrained problem becomes,
\[
\max_{x_1} U\!\left(x_1, \frac{y - p_1 x_1}{p_2}\right)
\]
which is now an unconstrained problem in terms of $x_1$. The condition that describes the maximization occurs when the slope of the utility function is equal to zero, which occurs at the critical point or the solution, $x_1^*$ and $x_2^*$. Using the Chain Rule,
\[
U_{x_1} + U_{x_2} \frac{d x_2}{d x_1} = 0
\;\Rightarrow\; U_{x_1} - U_{x_2} \frac{p_1}{p_2} = 0
\;\Rightarrow\; \frac{U_{x_1}}{U_{x_2}} = \frac{p_1}{p_2}
\]
which gives your standard equilibrium condition of the marginal rate of substitution equating with the price ratio (which is the slope of the budget constraint). How do you know if you have maximized or minimized? Again using the Chain Rule in concert with the product rule, the second derivative is
\[
U_{x_1 x_1} + 2 U_{x_1 x_2} \frac{d x_2}{d x_1} + U_{x_2 x_2} \left(\frac{d x_2}{d x_1}\right)^2 + U_{x_2} \frac{d^2 x_2}{d x_1^2}
\]
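As an aside, the substitution method can be traced end-to-end symbolically. The following SymPy sketch assumes a specific Cobb-Douglas utility $U = \sqrt{x_1 x_2}$ (the notes keep $U(\cdot)$ generic); it recovers the demands and confirms that the MRS equals the price ratio at the solution.

```python
# Sketch of the substitution method, under an assumed Cobb-Douglas
# utility U = sqrt(x1*x2); the notes do not fix a functional form.
import sympy as sp

x1, x2, y, p1, p2 = sp.symbols('x1 x2 y p1 p2', positive=True)
U = sp.sqrt(x1) * sp.sqrt(x2)

# Substitute the budget constraint x2 = (y - p1*x1)/p2 to get an
# unconstrained problem in x1 alone, then solve the first order condition.
U_sub = U.subs(x2, (y - p1 * x1) / p2)
x1_star = sp.solve(sp.diff(U_sub, x1), x1)[0]
x2_star = sp.simplify((y - p1 * x1_star) / p2)
print(x1_star, x2_star)                    # y/(2*p1), y/(2*p2)

# Equilibrium check: MRS = Ux1/Ux2 equals the price ratio p1/p2.
mrs = sp.diff(U, x1) / sp.diff(U, x2)
print(sp.simplify(mrs.subs({x1: x1_star, x2: x2_star}) - p1 / p2))  # 0
```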
To complete the analysis, we can now rationalize why we need the utility function to be concave in the quantities of the goods. Given the concavity assumption, $U_{x_i x_i}$ would be negative for $i \in \{1, 2\}$; and since $U_{x_1 x_2} \geq 0$ (as we will assume again in Example 4 below) while $dx_2/dx_1 = -p_1/p_2 < 0$, the cross term is non-positive as well. Further, since the second derivative of $x_2$ with respect to $x_1$ is 0 given the linearity of the budget constraint, the second derivative of the utility function is negative, consequently ensuring we are indeed maximizing.

Example 2 Consider the firm's profit maximization problem where the typical firm faces a revenue function $R(\cdot)$ and a cost function $C(\cdot)$. The choice is to maximize profit with respect to a vector of $n$ inputs, $x$. In other words,
\[
\max_x \pi = \max_x R(x) - C(x)
\]
Then based on what we understand about maximization, at the critical point $x_i^*$ for all $x_i$, where $x_i$ is the typical element of $x$ and $x_i^*$ is the typical element of $x^*$, the following must be true,
\[
\frac{\partial \pi(x^*)}{\partial x_i} = \frac{\partial R(x^*)}{\partial x_i} - \frac{\partial C(x^*)}{\partial x_i} = 0
\]
which is just your marginal revenue equating with marginal cost condition. This sort of condition at equilibrium is what we refer to as a first order condition.

Let's give the problem a little more structure, so that we can get a more recognizable equilibrium condition. Let the production function be $F(\cdot)$, a function of the same $n$ inputs. Let the price of the single good produced by these inputs be $p$, so that $F: x \in \mathbb{R}^n \to \mathbb{R}^1$. Let the cost function be linear, so that $C(x) = w \cdot x$, where $w$ is the vector of $n$ input prices. The first order condition is now,
\[
\frac{\partial F(x^*)}{\partial x_i} p - w_i = 0
\;\Rightarrow\;
\frac{\partial F(x^*)}{\partial x_i} p = w_i
\]
In other words, we get our standard equilibrium condition that the firm should engage an input up to the extent where its value marginal product, $\frac{\partial F(x^*)}{\partial x_i} p$, equates with the price of the input. To ascertain whether the firm has maximized its profit, we have to check the Hessian matrix, which in the current example requires giving more structure to the profit function, or more precisely the production function; the numerical sketch below does exactly that.
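A hedged numerical sketch of this condition, assuming a Cobb-Douglas production function $F(x) = x_1^{0.3} x_2^{0.4}$ with decreasing returns and arbitrary prices (none of which are specified in the notes):

```python
# Numerical sketch of the firm's problem (functional form and prices
# are assumptions, not from the notes).
import numpy as np
from scipy.optimize import minimize

p = 2.0                        # output price (assumed)
w = np.array([1.0, 1.5])       # input prices (assumed)

def profit(x):
    # pi(x) = p*F(x) - w.x with F(x) = x1**0.3 * x2**0.4
    return p * x[0]**0.3 * x[1]**0.4 - w @ x

# Maximize profit by minimizing its negative; bounds keep inputs positive.
res = minimize(lambda x: -profit(x), x0=[1.0, 1.0],
               bounds=[(1e-6, None), (1e-6, None)], method='L-BFGS-B')
x1, x2 = res.x

# First order condition: value marginal product p*dF/dx_i equals w_i.
vmp = np.array([p * 0.3 * x1**(-0.7) * x2**0.4,
                p * 0.4 * x1**0.3 * x2**(-0.6)])
print(res.x, vmp, w)           # vmp should be close to w at the optimum
```

Decreasing returns to scale ($0.3 + 0.4 < 1$) is what guarantees an interior profit maximum here; with constant or increasing returns the first order condition would not pin down a finite input bundle.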

Let's consider another example common in Economics.

Example 3 Another useful example is the ordinary least squares regression. We will use the full spectrum of results we have learned regarding optimization to see its relevance. In performing a regression, we are in effect trying to fit a line that best traverses the location of observations in the variables we are interested in using to explain a phenomenon. Of course we cannot possibly explain every change, which then requires an idiosyncratic term we call the error, $\epsilon$. This variable $\epsilon$ is what allows us to use statistics to find out how markets, individuals, firms, etc. behave on average. Without going into excessive detail, the objective in the ordinary least squares procedure is to minimize the sum of these squared errors.

Consider a simple model,

\[
y_i = \beta_0 + \beta_1 x_i + \epsilon_i
\]
where the subscript $i$ denotes the $i$th observation. The objective is to minimize the sum of squared errors with respect to the coefficients/parameters $\beta_0$ and $\beta_1$. The second parameter tells us the effect that the variable $x$ has on $y$, while the first is just the intercept of this equation of a straight line.
\[
\min_{\beta_0, \beta_1} \sum_{i=1}^n \epsilon_i^2 = \min_{\beta_0, \beta_1} \sum_{i=1}^n (y_i - \beta_0 - \beta_1 x_i)^2
\]
The first order conditions to this minimization problem are,
\[
-\sum_{i=1}^n 2 (y_i - \beta_0 - \beta_1 x_i) = 0
\]
\[
-\sum_{i=1}^n 2 (y_i - \beta_0 - \beta_1 x_i) x_i = 0
\]

Focusing on the first condition, dividing through by $n$, we get $\bar{y} - \beta_0 - \beta_1 \bar{x} = 0$.

While the second condition gives us,

\[
\sum_{i=1}^n y_i x_i - \beta_0 \sum_{i=1}^n x_i - \beta_1 \sum_{i=1}^n x_i^2 = 0
\]
Combining the two conditions, by substituting away $\beta_0$ first, gives us,
\[
\sum_{i=1}^n y_i x_i - (\bar{y} - \beta_1 \bar{x}) \sum_{i=1}^n x_i - \beta_1 \sum_{i=1}^n x_i^2 = 0
\]
\[
\beta_1 = \frac{\sum_{i=1}^n y_i x_i - \bar{y} \sum_{i=1}^n x_i}{\sum_{i=1}^n x_i^2 - \bar{x} \sum_{i=1}^n x_i}
= \frac{n \sum_{i=1}^n y_i x_i - \sum_{i=1}^n y_i \sum_{i=1}^n x_i}{n \sum_{i=1}^n x_i^2 - \left(\sum_{i=1}^n x_i\right)^2}
\]
We can now substitute this back into the other equation to obtain,
\[
\beta_0 = \frac{1}{n}\left(\sum_{i=1}^n y_i - \beta_1 \sum_{i=1}^n x_i\right)
= \frac{\sum_{i=1}^n x_i^2 \sum_{i=1}^n y_i - \sum_{i=1}^n x_i \sum_{i=1}^n x_i y_i}{n \sum_{i=1}^n x_i^2 - \left(\sum_{i=1}^n x_i\right)^2}
\]
To verify that the Hessian of the objective is positive (semi)definite,
\[
\begin{bmatrix}
\frac{\partial^2}{\partial \beta_0^2} & \frac{\partial^2}{\partial \beta_0 \partial \beta_1} \\
\frac{\partial^2}{\partial \beta_1 \partial \beta_0} & \frac{\partial^2}{\partial \beta_1^2}
\end{bmatrix}
= 2 \begin{bmatrix}
n & \sum_{i=1}^n x_i \\
\sum_{i=1}^n x_i & \sum_{i=1}^n x_i^2
\end{bmatrix}
\]
and notice that both the first and second order leading principal minors are positive.
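These closed-form estimators are easy to verify numerically. The sketch below simulates data (the parameters and seed are illustrative only) and checks the formulas against NumPy's least squares routine, then confirms the two positive leading principal minors.

```python
# Sketch: verify the closed-form OLS solution on simulated data.
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)   # true b0=1, b1=2

# Closed-form estimates from the first order conditions above.
b1 = (n * np.sum(x * y) - np.sum(y) * np.sum(x)) \
     / (n * np.sum(x**2) - np.sum(x)**2)
b0 = y.mean() - b1 * x.mean()

# Cross-check against NumPy's least squares routine.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(b0, b1, beta)            # the two sets of estimates coincide

# Hessian 2*[[n, sum x], [sum x, sum x^2]]: both leading principal
# minors are positive, confirming a minimum.
H = 2 * np.array([[n, x.sum()], [x.sum(), (x**2).sum()]])
print(np.linalg.det(H[:1, :1]), np.linalg.det(H))   # both positive
```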

2 Constrained Optimization & the Lagrangian Function

2.1 Constrained Optimization with Equality Constraints

Fortunately or unfortunately, much of optimization in Economics requires us to consider how economic agents make their choices subject to constraints, be they budgetary in nature, or simply technological, or of some other form. Consequently we need techniques to help us deal with this. A constrained optimization problem requires us to maximize or minimize our objective function subject to constraints that may be equalities (which implies they will be binding, something we will discuss in more detail shortly), or inequalities (which implies they can be binding or not). We deal with the equality constraints first. A typical constrained optimization problem is of the form,
\[
\max_x F(x)
\]
\[
\text{subject to } h_1(x) = b_1, \ldots, h_q(x) = b_q
\]
\[
g_1(x) \leq a_1, \ldots, g_m(x) \leq a_m
\]
where $x$ is a vector of $n$ elements as before, and $F(\cdot)$ is our objective function such that $F: x \in X \subseteq \mathbb{R}^n \to \mathbb{R}^1$. The $h(\cdot)$ functions are written with equality signs, and they are the equality constraints, where the $b_i$ are just constants. The $g(\cdot)$ functions written with inequality signs are the inequality constraints. It is on the former type of constraints that we focus here.

When the number of variables is small, it may be easier to simply substitute away one of the variables, as in the previous section, and change the question into an unconstrained problem (when you have an equality constraint; otherwise, you'd still have to "experiment" to check if the constraints are binding). However, generally we need an alternative technique. That would be the Lagrangian Method. Consider now a constrained optimization problem with equality constraints.
\[
\max_x F(x) \quad \text{subject to } h_1(x) = b_1, \ldots, h_q(x) = b_q
\]
The method requires us to first form the Lagrangian Function, and optimize (maximize or minimize as the case might be)

\[
\mathcal{L}(x, \lambda) = F(x) - \lambda_1 (h_1(x) - b_1) - \cdots - \lambda_q (h_q(x) - b_q)
\]

The optimization is with respect to all $n$ variables as well as the Lagrange multipliers, $\lambda_i$ for $i \in \{1, \ldots, q\}$, where $\lambda$ is the vector of Lagrange multipliers. It should be noted that the first derivative of the constraints cannot be equal to zero, failing which the technique will not work. As you would notice with this caveat, the non-degenerate constraint qualification, this happens only if a constraint does not depend on the variables you are maximizing over, which means for all intents and purposes it is generally not an issue in economics.

In economics, the multipliers are typically referred to as shadow prices. That is a rather neat term in the sense that it highlights the cost to the optimization problem of the constraint. Put another way, it tells you how important the constraint is to your optimization problem. In the way we have structured the Lagrangian function, the Lagrange multipliers will always be positive, so that the higher the value of a multiplier, the more important the corresponding constraint. Take for instance the utility maximization problem. The equality constraint we had imposed ensures that the choice will be at the point where the indifference hyperbola is just tangent to the budget plane. Without a constraint, an individual, due to non-satiation, will always choose to have more, ensuring that we have no interior solution, as long as a good is a "good".

The first order conditions at the critical points or solution are thus,
\[
\frac{\partial \mathcal{L}(x^*, \lambda^*)}{\partial x_1} = 0, \; \ldots, \; \frac{\partial \mathcal{L}(x^*, \lambda^*)}{\partial x_n} = 0
\]
\[
\frac{\partial \mathcal{L}(x^*, \lambda^*)}{\partial \lambda_1} = 0, \; \ldots, \; \frac{\partial \mathcal{L}(x^*, \lambda^*)}{\partial \lambda_q} = 0
\]
As in all optimization problems, we have to verify whether the critical value/solution we have found is indeed a maximum or minimum as the case might be. In the multivariate function case, you would imagine we need to set up a Hessian matrix, but how do we go about it using this method? We do so here by forming a bordered matrix, called the Bordered Hessian, which is defined as follows,

\[
\nabla^2_{x,\lambda} \mathcal{L} =
\left[
\begin{array}{ccc|ccc}
0 & \cdots & 0 & \frac{\partial h_1(x)}{\partial x_1} & \cdots & \frac{\partial h_1(x)}{\partial x_n} \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
0 & \cdots & 0 & \frac{\partial h_q(x)}{\partial x_1} & \cdots & \frac{\partial h_q(x)}{\partial x_n} \\
\hline
\frac{\partial h_1(x)}{\partial x_1} & \cdots & \frac{\partial h_q(x)}{\partial x_1} & \frac{\partial^2 \mathcal{L}(x)}{\partial x_1^2} & \cdots & \frac{\partial^2 \mathcal{L}(x)}{\partial x_n \partial x_1} \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
\frac{\partial h_1(x)}{\partial x_n} & \cdots & \frac{\partial h_q(x)}{\partial x_n} & \frac{\partial^2 \mathcal{L}(x)}{\partial x_1 \partial x_n} & \cdots & \frac{\partial^2 \mathcal{L}(x)}{\partial x_n^2}
\end{array}
\right]
\]

In other words, you set up the matrix of second derivatives of the Lagrangian with respect to the variables, and border the top and left with the cross partials of the Lagrangian between each variable and the respective constraint. The reason you see the border as a first derivative of the constraint is because $\frac{\partial^2 \mathcal{L}(x)}{\partial x_i \partial \lambda_j} = \frac{\partial h_j(x)}{\partial x_i}$; with the first derivative of the Lagrangian with respect to a multiplier you would have gotten the respective constraint $h_j(x)$. The matrix can also be written as follows,
\[
\nabla^2_{x,\lambda} \mathcal{L} =
\begin{bmatrix}
0 & \nabla h(x) \\
\nabla h(x)^T & \nabla^2_x \mathcal{L}
\end{bmatrix}
\]

The conditions to verify the concavity or convexity of the Lagrangian function involve simply investigating the last $n - q$ of the $n + q$ leading principal minors. Another way to think about this is that you need to check from the largest leading principal minor all the way down to the second smallest one that includes elements from the submatrix $\nabla^2_x \mathcal{L}$. This is because the smallest leading principal minor including an element from $\nabla^2_x \mathcal{L}$ will always be negative. Further note that, unlike the standard Hessian, for positive (semi-)definiteness the signs of these determinants are now uniformly negative, while for negative (semi-)definiteness they still alternate in sign in the same way.
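As a small numerical illustration of the minor rule (the numbers are hypothetical, not from the notes), take $n = 3$ variables and $q = 1$ linear constraint, so that the last $n - q = 2$ of the $n + q = 4$ leading principal minors need checking:

```python
# Hypothetical numeric check of the minor rule: n = 3, q = 1, so
# inspect the leading principal minors of orders 3 and 4.
import numpy as np

grad_h = np.array([1.0, 1.0, 1.0])     # assumed constraint gradient
D2L = -np.diag([2.0, 3.0, 4.0])        # assumed d2L/dxdx' at the solution

B = np.zeros((4, 4))                   # assemble the bordered Hessian
B[0, 1:] = grad_h
B[1:, 0] = grad_h
B[1:, 1:] = D2L

m3 = np.linalg.det(B[:3, :3])          # order-3 leading principal minor
m4 = np.linalg.det(B)                  # order-4 (full determinant)
print(m3, m4)   # here (+, -): alternating signs, consistent with a max
```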

Let's consolidate using several examples.

Example 4 Let's consider the utility maximization problem we had earlier, but this time, we will solve it using the Lagrangian Method.

\[
\mathcal{L}(x_1, x_2, \lambda) = U(x_1, x_2) - \lambda (p_1 x_1 + p_2 x_2 - y)
\]

Then the first order conditions are,

\[
\frac{\partial \mathcal{L}}{\partial x_1} = U_{x_1} - \lambda p_1 = 0
\]
\[
\frac{\partial \mathcal{L}}{\partial x_2} = U_{x_2} - \lambda p_2 = 0
\]
\[
\frac{\partial \mathcal{L}}{\partial \lambda} = y - p_1 x_1 - p_2 x_2 = 0
\]

Combining the first two equations, you obtain

\[
\frac{U_{x_1}}{U_{x_2}} = \frac{p_1}{p_2}
\]
which is exactly what we obtained previously using the substitution method. However, you would notice that as the number of variables increases, it becomes easier to use the Lagrangian Method as opposed to substitution. Technically, given a functional form for the utility function, all you would need to do is substitute the final first order condition into the equilibrium condition to solve for one of the variables, and with that answer, substitute back into the final condition to get the other critical value.

To figure out the concavity of the Lagrangian function, note that the bordered Hessian is just,

\[
\begin{bmatrix}
0 & p_1 & p_2 \\
p_1 & U_{x_1 x_1} & U_{x_1 x_2} \\
p_2 & U_{x_2 x_1} & U_{x_2 x_2}
\end{bmatrix}
\]
and all we need to do is calculate the single third order leading principal minor, which is just the determinant of the bordered Hessian above,

\[
0 \cdot (U_{x_1 x_1} U_{x_2 x_2} - U_{x_1 x_2} U_{x_2 x_1})
- p_1 (p_1 U_{x_2 x_2} - p_2 U_{x_1 x_2})
+ p_2 (p_1 U_{x_2 x_1} - p_2 U_{x_1 x_1})
\]
\[
= -p_1^2 U_{x_2 x_2} + p_1 p_2 U_{x_1 x_2} + p_2 p_1 U_{x_2 x_1} - p_2^2 U_{x_1 x_1} \geq 0
\]
The last inequality follows since $U_{x_i x_i} \leq 0$, while $U_{x_i x_j} \geq 0$, $i \neq j$. Therefore, since the bordered Hessian determinant is positive, the Lagrangian is negative semi-definite, and we have a local maximum. Notice that had we only focused on the submatrix without the border, $\nabla^2_x \mathcal{L}$, its sequence of leading principal minors would have to satisfy $U_{x_1 x_1} \leq 0$ and $U_{x_1 x_1} U_{x_2 x_2} - U_{x_1 x_2}^2 \geq 0$ for it to be negative semi-definite, which is similar to what we have found in the bordered Hessian case. However, you have to keep in mind that since we are dealing with a constrained optimization, the bordered Hessian is the correct matrix to examine. Note further that this is merely a sufficient condition for a local maximum.
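The same computation can be replicated symbolically. This SymPy sketch assumes the utility $U = \sqrt{x_1 x_2}$ (the notes keep $U$ generic), solves the Lagrangian first order conditions, and evaluates the bordered Hessian determinant, which indeed comes out positive.

```python
# Sketch of Example 4 under an assumed utility U = sqrt(x1*x2).
import sympy as sp

x1, x2, lam, y, p1, p2 = sp.symbols('x1 x2 lam y p1 p2', positive=True)
U = sp.sqrt(x1 * x2)                          # assumed utility function
L = U - lam * (p1 * x1 + p2 * x2 - y)         # the Lagrangian

# First order conditions in x1, x2 and the multiplier.
foc = [sp.diff(L, v) for v in (x1, x2, lam)]
sol = sp.solve(foc, [x1, x2, lam], dict=True)[0]
print(sol[x1], sol[x2])                       # y/(2*p1), y/(2*p2)

# Bordered Hessian: constraint gradient on the border, second
# derivatives of the Lagrangian in the lower-right block.
B = sp.Matrix([[0,  p1,                 p2],
               [p1, sp.diff(L, x1, 2),  sp.diff(L, x1, x2)],
               [p2, sp.diff(L, x2, x1), sp.diff(L, x2, 2)]])
print(sp.simplify(B.det().subs(sol)))         # positive => local maximum
```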

2.2 Constrained Optimization with Inequality Constraints

Of course things are never perfect in economics, and we have to deal with inequality constraints. Sometimes we can rest on economic theory to argue that a certain constraint has to hold with equality, but often we cannot resort to it. We will now discuss what we should do when faced with inequality constraints. As noted in the previous section, a typical constrained optimization problem is of the form,
\[
\max_x F(x)
\]
\[
\text{subject to } h_1(x) = b_1, \ldots, h_q(x) = b_q
\]
\[
g_1(x) \leq a_1, \ldots, g_m(x) \leq a_m
\]
where $x$ is a vector of $n$ elements as before, and $F(\cdot)$ is our objective function such that $F: x \in X \subseteq \mathbb{R}^n \to \mathbb{R}^1$. The $h(\cdot)$ functions are written with equality signs, and they are the equality constraints, where the $b_i$ are just constants. The $g(\cdot)$ functions written with inequality signs are the inequality constraints. It is on the latter type of constraints that we now focus. Let's consider just the inequality constraints,
\[
\max_x F(x) \quad \text{subject to } g_1(x) \leq a_1, \ldots, g_m(x) \leq a_m
\]
The setup is as before in converting the constrained problem into an unconstrained one.

\[
\mathcal{L}(x, \lambda) = F(x) - \lambda_1 (g_1(x) - a_1) - \cdots - \lambda_m (g_m(x) - a_m)
\]

In other words, the Lagrangian is formed as before. However, there are significant differences in the first order conditions, which we turn to next.
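In the meantime, as a purely numerical aside (functional forms and parameter values assumed, not from the notes), off-the-shelf routines such as SciPy's SLSQP handle inequality-constrained problems directly, and at the solution one can check whether the constraint binds:

```python
# Numerical sketch: maximize U = sqrt(x1*x2) subject to the inequality
# budget constraint p1*x1 + p2*x2 <= y (all values assumed).
import numpy as np
from scipy.optimize import minimize

p1, p2, y = 1.0, 2.0, 12.0    # assumed prices and income

# SciPy's 'ineq' constraints require fun(x) >= 0, i.e. non-negative slack.
cons = ({'type': 'ineq', 'fun': lambda x: y - p1 * x[0] - p2 * x[1]},)
res = minimize(lambda x: -np.sqrt(x[0] * x[1]),
               x0=[1.0, 1.0],
               bounds=[(1e-9, None), (1e-9, None)],
               constraints=cons, method='SLSQP')
x_star = res.x
slack = y - p1 * x_star[0] - p2 * x_star[1]
print(x_star, slack)          # ~[6, 3]; slack ~ 0, the constraint binds
```

Here the budget constraint binds (zero slack), consistent with non-satiation, which is why the equality-constraint treatment of the previous subsection delivered the same tangency condition.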