Similar definitions hold for functions of three variables. The Lagrange multiplier method for solving such problems can now be stated: let f(x, y) and g(x, y) be smooth functions, and suppose that c is a scalar constant such that ∇g(x, y) ≠ 0 for all (x, y) satisfying the equation g(x, y) = c.
The Lagrangian method. We define the Lagrangian as L(x, λ) = f(x) + λ(b − g(x)). For example, for the objective Σᵢ wᵢ log xᵢ with constraint Σᵢ xᵢ = b, the Lagrangian is L(x, λ) = Σᵢ wᵢ log xᵢ + λ(b − Σᵢ xᵢ). In general, the Lagrangian is the sum of the original objective function and a term that involves the functional constraint and a ‘Lagrange multiplier’ λ.
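The weighted-log example above can be solved in closed form: the stationarity condition ∂L/∂xᵢ = wᵢ/xᵢ − λ = 0 gives xᵢ = wᵢ/λ, and the constraint then fixes λ = Σⱼ wⱼ / b. A minimal sketch (the function name and the sample weights are illustrative, not from the text):

```python
def maximize_weighted_log(w, b):
    """Maximize sum_i w_i * log(x_i) subject to sum_i x_i = b.

    Stationarity gives w_i / x_i = lambda, so x_i = w_i / lambda;
    substituting into the constraint fixes lambda = sum(w) / b.
    """
    lam = sum(w) / b              # Lagrange multiplier
    x = [wi / lam for wi in w]    # optimal allocation x_i = b * w_i / sum(w)
    return x, lam

x, lam = maximize_weighted_log([1.0, 2.0, 3.0], b=12.0)
# x = [2.0, 4.0, 6.0], lam = 0.5: x sums to b, and each w_i / x_i equals lam
```

Note that each xᵢ is simply the budget b split in proportion to its weight wᵢ, which is why this objective is often used to model proportionally fair allocations.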
Then to solve the constrained optimization problem of maximizing or minimizing f(x, y) subject to the constraint g(x, y) = c, find the points (x, y) that satisfy both g(x, y) = c and the equation ∇f(x, y) = λ∇g(x, y) for some constant λ (the number λ is called the Lagrange multiplier). If there is a constrained maximum or minimum, it must occur at such a point.
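The condition ∇f = λ∇g can be checked numerically. As a hypothetical example (not from the text), take f(x, y) = xy subject to x + y = 10: the component equations y = λ and x = λ together with the constraint force x = y = 5 and λ = 5. The sketch below verifies this with finite-difference gradients:

```python
def grad(fn, p, h=1e-6):
    """Central-difference approximation to the gradient of fn at p = (x, y)."""
    x, y = p
    return ((fn(x + h, y) - fn(x - h, y)) / (2 * h),
            (fn(x, y + h) - fn(x, y - h)) / (2 * h))

# Hypothetical example: maximize f(x, y) = x*y subject to g(x, y) = x + y = 10.
f = lambda x, y: x * y
g = lambda x, y: x + y
point, lam = (5.0, 5.0), 5.0   # candidate from solving grad f = lam * grad g

gf, gg = grad(f, point), grad(g, point)
# At the candidate point, each component of grad f equals lam times the
# matching component of grad g, and the constraint g = 10 holds.
```

Note that the method only locates candidate points; whether each one is a maximum, a minimum, or neither still has to be decided separately, e.g. by comparing values of f.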
Since we need to maximize a function R(h, s) subject to the constraint 20h + 170s = 20,000, we begin by writing the Lagrangian for this setup: L(h, s, λ) = R(h, s) + λ(20,000 − 20h − 170s). Next, set the gradient ∇L equal to the zero vector. This is the same as setting each partial derivative equal to 0. First, we handle the partial derivative with respect to h.
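The text does not specify R(h, s), so as a hypothetical stand-in the sketch below uses a Cobb–Douglas form R(h, s) = A·h^a·s^(1−a) (A, a, and the function name are assumptions). For that form, the conditions ∂L/∂h = 0 and ∂L/∂s = 0 plus the budget constraint have a closed-form solution: spend the fraction a of the budget on h and 1 − a on s.

```python
def cobb_douglas_optimum(A, a, price_h, price_s, budget):
    """Maximize R(h, s) = A * h**a * s**(1 - a) subject to
    price_h * h + price_s * s = budget, via the Lagrange conditions.

    dL/dh = 0 gives dR/dh = lam * price_h; dL/ds = 0 gives dR/ds = lam * price_s.
    Dividing the two and using the constraint yields the closed form below.
    """
    h = a * budget / price_h          # spend fraction a of the budget on h
    s = (1 - a) * budget / price_s    # spend fraction 1 - a on s
    # Recover the multiplier from the h-condition: lam = (dR/dh) / price_h
    lam = A * a * h ** (a - 1) * s ** (1 - a) / price_h
    return h, s, lam

# Prices and budget from the text; A = 1 and a = 2/3 are illustrative choices.
h, s, lam = cobb_douglas_optimum(A=1.0, a=2 / 3, price_h=20, price_s=170,
                                 budget=20_000)
# The budget constraint 20*h + 170*s = 20,000 holds exactly at the optimum.
```

At the solution, λ has the usual interpretation as the marginal value of relaxing the constraint: the approximate gain in R per extra unit of budget.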