The Newton-Raphson Method

1 Introduction

The Newton-Raphson method, or Newton Method, is a powerful technique for solving equations numerically. Like so much of the differential calculus, it is based on the simple idea of linear approximation. The Newton Method, properly used, usually homes in on a root with devastating efficiency. The essential part of these notes is Section 2.1, where the basic formula is derived, Section 2.2, where the procedure is interpreted geometrically, and, of course, Section 6, where the problems are. Peripheral but perhaps interesting is Section 3, where the birth of the Newton Method is described.

2 Using Linear Approximations to Solve Equations

Let f(x) be a well-behaved function, and let r be a root of the equation f(x) = 0. We start with an estimate x_0 of r. From x_0, we produce an improved (we hope) estimate x_1. From x_1, we produce a new estimate x_2. From x_2, we produce a new estimate x_3. We go on until we are 'close enough' to r, or until it becomes clear that we are getting nowhere.

The above general style of proceeding is called iterative. Of the many iterative root-finding procedures, the Newton-Raphson method, with its combination of simplicity and power, is the most widely used. Section 2.4 describes another iterative root-finding procedure, the Secant Method.

Comment. The initial estimate is sometimes called x_1, but most mathematicians prefer to start counting at 0. Sometimes the initial estimate is called a "guess." The Newton Method is usually very very good if x_0 is close to r, and can be horrid if it is not. The "guess" x_0 should be chosen with care.

2.1 The Newton-Raphson Iteration

Let x_0 be a good estimate of r and let r = x_0 + h. Since the true root is r, and h = r - x_0, the number h measures how far the estimate x_0 is from the truth. Since h is 'small,' we can use the linear (tangent line) approximation to conclude that

    0 = f(r) = f(x_0 + h) ≈ f(x_0) + h f'(x_0),

and therefore, unless f'(x_0) is close to 0,

    h ≈ -f(x_0) / f'(x_0).

It follows that

    r = x_0 + h ≈ x_0 - f(x_0) / f'(x_0).

Our new improved (?) estimate x_1 of r is therefore given by

    x_1 = x_0 - f(x_0) / f'(x_0).

The next estimate x_2 is obtained from x_1 in exactly the same way as x_1 was obtained from x_0:

    x_2 = x_1 - f(x_1) / f'(x_1).

Continue in this way. If x_n is the current estimate, then the next estimate x_{n+1} is given by

    x_{n+1} = x_n - f(x_n) / f'(x_n).    (1)
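Iteration (1) translates directly into a short program. The following is a minimal sketch, not part of the original notes; the function names, tolerance, and iteration cap are my own choices.

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) until |f(x_n)| <= tol."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) <= tol:
            return x
        x = x - fx / fprime(x)
    return x

# Example: solve x^2 - 2 = 0 starting from x0 = 1, so the root is sqrt(2).
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```

With a good starting point the estimates settle down after only a handful of iterations, which is the "devastating efficiency" promised in the introduction.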

2.2 A Geometric Interpretation of the Newton-Raphson Iteration

In the picture below, the curve y = f(x) meets the x-axis at r. Let a be the current estimate of r. The tangent line to y = f(x) at the point (a, f(a)) has equation

    y = f(a) + (x - a) f'(a).

Let b be the x-intercept of the tangent line. Then

    b = a - f(a) / f'(a).

(Figure: the curve y = f(x), with the current estimate a, the next estimate b, and the root r marked on the x-axis.)

Compare with Equation 1: b is just the 'next' Newton-Raphson estimate of r. The new estimate b is obtained by drawing the tangent line at x = a, and then sliding to the x-axis along this tangent line. Now draw the tangent line at (b, f(b)) and ride the new tangent line to the x-axis to get a new estimate c. Repeat.

We can use the geometric interpretation to design functions and starting points for which the Newton Method runs into trouble. For example, by putting a little bump on the curve at x = a we can make b fly far away from r. When a Newton Method calculation is going badly, a picture can help us diagnose the problem and fix it.

It would be wrong to think of the Newton Method simply in terms of tangent lines. The Newton Method is used to find complex roots of polynomials, and roots of systems of equations in several variables, where the geometry is far less clear, but linear approximation still makes sense.

2.3 The Convergence of the Newton Method

The argument that led to Equation 1 used the informal and imprecise symbol ≈. We probe this argument for weaknesses.

No numerical procedure works for all equations. For example, let f(x) = x^2 + 17 if x ≠ 1, and let f(1) = 0. The behaviour of f(x) near 1 gives no clue to the fact that f(1) = 0. Thus no method of successive approximation can arrive at the solution of f(x) = 0. To make progress in the analysis, we need to assume that f(x) is in some sense smooth. We will suppose that f''(x) (exists and) is continuous near r.

The tangent line approximation is just that: an approximation. Let's try to get a handle on the error. Imagine a particle travelling in a straight line, and let f(x) be its position at time x. Then f'(x) is the velocity at time x. If the acceleration of the particle were always 0, then the change in position from time x_0 to time x_0 + h would be h f'(x_0). So the position at time x_0 + h would be f(x_0) + h f'(x_0); note that this is the tangent line approximation, which we can also think of as the zero-acceleration approximation.

If the velocity varies in the time from x_0 to x_0 + h, that is, if the acceleration is not 0, then in general the tangent line approximation will not correctly predict the displacement at time x_0 + h. And the bigger the acceleration, the bigger the error. It can be shown that if f is twice differentiable then the error in the tangent line approximation is

    (1/2) h^2 f''(c)    for some c between x_0 and x_0 + h.

In particular, if |f''(x)| is large between x_0 and x_0 + h, then the error in the tangent line approximation is large. Thus we can expect large second derivatives to be bad for the Newton Method. This is what goes wrong in Problem 7(b).
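The error bound (1/2) h^2 f''(c) is easy to check numerically. The sketch below (my own example, not one from the notes) uses f(x) = e^x at x_0 = 0 with h = 0.1; since f'' = e^x there, the true error must lie between (1/2) h^2 e^0 and (1/2) h^2 e^0.1.

```python
import math

x0, h = 0.0, 0.1
actual = math.exp(x0 + h)                 # f(x0 + h)
tangent = math.exp(x0) + h * math.exp(x0)  # tangent line approximation
error = actual - tangent

# (1/2) h^2 f''(c) for some c in [x0, x0 + h]; here f'' = e^x, so the
# error is trapped between the bound at c = x0 and the bound at c = x0 + h.
lo = 0.5 * h * h * math.exp(x0)
hi = 0.5 * h * h * math.exp(x0 + h)
```

The computed error (about 0.00517) indeed lands between the two bounds (0.005 and about 0.00553).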

In the argument for Equation 1, from 0 ≈ f(x_0) + h f'(x_0) we concluded that h ≈ -f(x_0)/f'(x_0). This can be quite wrong if f'(x_0) is close to 0: note that 3.01 is close to 3, but 3.01/10^-8 is not at all close to 3/10^-8. Thus we can expect first derivatives close to 0 to be bad for the Newton Method. This is what goes wrong in Problems 7(a) and 8.
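This failure mode is easy to reproduce. A hedged illustration (arctan is my own choice of example, not one of the problems in these notes): for f(x) = arctan(x), the derivative f'(x) = 1/(1 + x^2) is small away from the origin, so a starting point such as x_0 = 2 divides by a small slope and throws each new estimate farther from the root at 0.

```python
import math

def newton_step(f, fprime, x):
    """One Newton-Raphson update: x - f(x)/f'(x)."""
    return x - f(x) / fprime(x)

f = math.atan                         # the only root is x = 0
fprime = lambda x: 1.0 / (1.0 + x * x)

x = 2.0                               # f'(2) = 0.2 is already small
history = [x]
for _ in range(4):
    x = newton_step(f, fprime, x)
    history.append(x)
# The magnitude of the estimate grows at every step:
# the iteration runs away from the root instead of converging.
```

Sketching the tangent lines makes the geometry clear: far from the origin the curve is nearly flat, so each tangent line crosses the x-axis far on the other side.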

These informal considerations can be turned into positive theorems about the behaviour of the error in the Newton Method. For example, if |f''(x)/f'(x)| is not too large near r, and we start with an x_0 close enough to r, the Newton Method converges very fast to r. (Naturally, the theorem gives "not too large," "close enough," and "very fast" precise meanings.) The study of the behaviour of the Newton Method is part of a large and important area of mathematics called Numerical Analysis.

2.4 The Secant Method

The Secant Method is the most popular of the many variants of the Newton Method. We start with two estimates of the root, x_0 and x_1. The iterative formula, for n ≥ 1, is

    x_{n+1} = x_n - f(x_n) / Q(x_{n-1}, x_n),  where  Q(x_{n-1}, x_n) = (f(x_{n-1}) - f(x_n)) / (x_{n-1} - x_n).

Note that if x_n is close to x_{n-1}, then Q(x_{n-1}, x_n) is close to f'(x_n).
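The Secant iteration above can be sketched in a few lines. As with the earlier sketch, the function names, tolerance, and iteration cap are my own choices, not part of the notes; the update is exactly the formula with the difference quotient Q in place of f'.

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant iteration: x_{n+1} = x_n - f(x_n)/Q(x_{n-1}, x_n),
    where Q is the slope of the chord through the last two points."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if abs(f1) <= tol:
            return x1
        Q = (f0 - f1) / (x0 - x1)  # difference quotient approximating f'(x1)
        x0, x1 = x1, x1 - f1 / Q
    return x1

# Example: solve x^2 - 2 = 0 from the two starting estimates 1 and 2.
root = secant(lambda x: x * x - 2, 1.0, 2.0)
```

Notice that no derivative is required: the chord slope Q stands in for f'(x_n), which is exactly why the method is attractive when f' is expensive or unavailable.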