Similar definitions hold for functions of three variables. The Lagrange multiplier method for solving such problems can now be stated: Let f(x, y) and g(x, y) be smooth functions, and suppose that c is a scalar constant such that ∇g(x, y) ≠ 0 for all (x, y) that satisfy the equation g(x, y) = c. Then to solve the problem of maximizing or minimizing f(x, y) subject to the constraint g(x, y) = c, find the points (x, y) that solve the equation ∇f(x, y) = λ∇g(x, y) for some constant λ (the number λ is called the Lagrange multiplier). If there is a constrained maximum or minimum, it must occur at such a point.
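The condition ∇f = λ∇g together with the constraint gives a system of equations in x, y, and λ. As a sketch, here is that system solved symbolically for a hypothetical example (maximize f(x, y) = xy subject to x + y = 10, which is not a problem from the text):

```python
import sympy as sp

x, y, lam = sp.symbols("x y lam", real=True)

f = x * y   # hypothetical objective
g = x + y   # constraint function, with g(x, y) = 10

# Lagrange condition grad f = lam * grad g, plus the constraint itself
eqs = [
    sp.Eq(sp.diff(f, x), lam * sp.diff(g, x)),  # y = lam
    sp.Eq(sp.diff(f, y), lam * sp.diff(g, y)),  # x = lam
    sp.Eq(g, 10),                               # x + y = 10
]
sol = sp.solve(eqs, [x, y, lam], dict=True)
print(sol)  # x = 5, y = 5, lam = 5
```

The single solution x = y = 5 is the constrained maximum, with multiplier λ = 5.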
Since we need to maximize a function R(h, s) subject to a constraint, 20h + 170s = 20,000, we begin by writing the Lagrangian function for this setup: L(h, s, λ) = R(h, s) − λ(20h + 170s − 20,000). Next, set the gradient ∇L equal to the 0 vector. This is the same as setting each partial derivative equal to 0. First, we handle the partial derivative with respect to h.
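The same partial-derivative computation can be carried out symbolically. The revenue function R(h, s) is not specified here, so the sketch below substitutes a hypothetical Cobb-Douglas form R(h, s) = 1000√(hs) purely for illustration; only the constraint 20h + 170s = 20,000 comes from the setup above:

```python
import sympy as sp

h, s, lam = sp.symbols("h s lam", positive=True)

R = 1000 * sp.sqrt(h * s)                  # hypothetical revenue (an assumption)
L = R - lam * (20 * h + 170 * s - 20000)   # Lagrangian for the budget constraint

# Setting grad L = 0 means each partial derivative vanishes.
eqs = [sp.diff(L, v) for v in (h, s, lam)]
sol = sp.solve(eqs, [h, s, lam], dict=True)
print(sol)  # h = 500, s = 1000/17, lam = 25*sqrt(34)/17
```

Under this assumed R, the budget is split evenly: 20h = 170s = 10,000.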
The Lagrange method is frequently used in economics, mainly because the Lagrange multiplier(s) have an interesting interpretation: each multiplier measures the marginal change in the optimal value per unit change in the corresponding constraint, and for this reason it is often called a shadow price.
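This marginal interpretation can be checked symbolically. For the hypothetical problem "maximize xy subject to x + y = c" (an illustration, not from the text), the Lagrange conditions give x = y = c/2 with multiplier λ = c/2, and the derivative of the optimal value with respect to c equals λ exactly:

```python
import sympy as sp

c = sp.symbols("c", positive=True)

# Hypothetical problem: maximize x*y subject to x + y = c.
# The Lagrange conditions yield x = y = c/2 and multiplier lam = c/2.
value = (c / 2) ** 2          # optimal value M(c) = c**2 / 4
lam = c / 2                   # multiplier at the optimum

# The multiplier equals dM/dc: the marginal gain from relaxing the constraint.
marginal = sp.diff(value, c)
print(sp.simplify(marginal - lam))  # 0
```

So raising the constraint level c by one small unit raises the maximum attainable value by approximately λ units.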