Numerical Analysis II - Lecture Notes
Anthony Yeates
Durham University
March 12, 2018
Contents
0 What is numerical analysis?
  0.1 Direct or iterative methods?
  0.2 Course outline
1 Floating-point arithmetic
  1.1 Fixed-point numbers
  1.2 Floating-point numbers
  1.3 Significant figures
  1.4 Rounding error
  1.5 Loss of significance
2 Polynomial interpolation
  2.1 Taylor series
  2.2 Polynomial interpolation
  2.3 Lagrange form
  2.4 Newton/divided-difference form
  2.5 Interpolation error
  2.6 Convergence and the Chebyshev nodes
  2.7 Derivative conditions
3 Differentiation
  3.1 Higher-order finite differences
  3.2 Rounding error
  3.3 Richardson extrapolation
4 Nonlinear equations
  4.1 Interval bisection
  4.2 Fixed point iteration
  4.3 Orders of convergence
  4.4 Newton's method
  4.5 Newton's method for systems
  4.6 Aitken acceleration
  4.7 Quasi-Newton methods
5 Linear equations
  5.1 Triangular systems
  5.2 Gaussian elimination
  5.3 LU decomposition
  5.4 Pivoting
  5.5 Vector norms
  5.6 Matrix norms
  5.7 Conditioning
  5.8 Iterative methods
6 Least-squares approximation
  6.1 Orthogonality
  6.2 Discrete least squares
  6.3 QR decomposition
  6.4 Continuous least squares
  6.5 Orthogonal polynomials
7 Numerical integration
  7.1 Newton-Cotes formulae
  7.2 Composite Newton-Cotes formulae
  7.3 Exactness
  7.4 Gaussian quadrature

Numerical Analysis II - ARY - 2017-18 Lecture Notes
0 What is numerical analysis?

"Numerical analysis is the study of algorithms for the problems of continuous mathematics."
(from "The definition of numerical analysis", L. N. Trefethen)

We study these algorithms for scientists and engineers, nowadays using computers. The word "continuous" is important: numerical analysis concerns real (or complex) variables, as opposed to discrete variables, which are the domain of computer science.

0.1 Direct or iterative methods?
Some problems can be solved by a finite sequence of elementary operations: a direct method.

Example: Solve a system of simultaneous linear equations.

    x + 2y = 0,
    2x − πy = 1

    [ 1.00   2.00 ] [ x ]   [ 0.00 ]
    [ 2.00  −3.14 ] [ y ] = [ 1.00 ]

    → Gaussian elimination...

Even in this simple example, we hit upon one problem: π is a transcendental number that can't be represented exactly in a computer with finite memory. Instead, we will see that the computer uses a floating-point approximation, which incurs rounding error.

But, unfortunately, most problems of continuous mathematics cannot be solved by a finite algorithm.

Example: Evaluate sin(1.2).
We could do this with a Maclaurin series:

    sin(1.2) = 1.2 − (1.2)³/3! + (1.2)⁵/5! − (1.2)⁷/7! + ⋯

To 8 decimal places, we get the partial sums

    1.2
    0.912
    0.932736
    0.93202505
    0.93203927
    0.93203908
    0.93203909

This is an iterative method - keep adding an extra term to improve the approximation. Iterative methods are the only option for the majority of problems in numerical analysis, and may actually be quicker even when a direct method exists.

► The word "iterative" derives from the Latin iterare, meaning "to repeat".
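The partial sums above are easy to reproduce. Here is a short illustrative sketch in Python (the function name is mine, not from the notes):

```python
import math

def sin_partial_sums(x, n_terms):
    """Partial sums of the Maclaurin series sin(x) = x - x^3/3! + x^5/5! - ..."""
    sums, total = [], 0.0
    for k in range(n_terms):
        total += (-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
        sums.append(total)
    return sums

# Prints the seven partial sums listed above, to 8 decimal places.
for s in sin_partial_sums(1.2, 7):
    print(f"{s:.8f}")
```

Each extra term roughly adds two correct digits here, since the series for sin converges very quickly for |x| close to 1.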
Even if our computer could do exact real arithmetic, there would still be an error resulting from stopping our iterative process at some finite point. This is called truncation error. We will be concerned with controlling this error and designing methods which converge as fast as possible.

Example: The famous √2 tablet from the Yale Babylonian Collection (photo: Bill Casselman, http://www.math.ubc.ca/cass/Euclid/ybc/ybc.html). This is one of the oldest extant mathematical diagrams, dated approximately to 1800-1600 BC.

The numbers along the diagonal of the square approximate √2 in base 60 (sexagesimal):

    1 + 24/60 + 51/60² + 10/60³ = 1.41421296 to 9 s.f.

This is already a good approximation to the true value √2 = 1.41421356... - much better than could be achieved by ruler and pencil measurements!

► The other numbers on the tablet relate to the calculation of the diagonal length for a square of side 30, which is 30√2 ≈ 42 + 25/60 + 35/60².
Example: Iterative method for √2.

It is probable that the Babylonians used something like the following iterative method. Start with an initial guess x₀ = 1.4, then iterate the formula

    x_{k+1} = x_k/2 + 1/x_k   ⟹   x₁ = 1.4142857...,  x₂ = 1.414213564...

► This method is also known as Heron's method, after a Greek mathematician who described it in the first century AD.
► Notice that the method converges extremely rapidly! We will explain this later in the course when we discuss root-finding for nonlinear equations.
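The iteration can be sketched in a few lines of Python (the function name is mine):

```python
def heron_sqrt2(x0=1.4, n_steps=4):
    """Iterate x_{k+1} = x_k/2 + 1/x_k, which converges to sqrt(2)."""
    xs = [x0]
    for _ in range(n_steps):
        xs.append(xs[-1] / 2 + 1 / xs[-1])
    return xs

# The first two iterates match the values quoted in the notes.
print(heron_sqrt2())
```

After only three or four steps the iterate agrees with √2 to full double precision, illustrating the very rapid (in fact quadratic) convergence mentioned above.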
0.2 Course outline

In this course, we will learn how to do many common calculations quickly and accurately. In particular:

1. Floating-point arithmetic (How do we represent real numbers on a computer?)
2. Polynomial interpolation (How do we approximate functions by polynomials?)
3. Numerical differentiation (How do we calculate derivatives?)
4. Nonlinear equations (How do we find roots of nonlinear equations?)
5. Linear equations (How do we solve linear systems?)
6. Least-squares approximation (How do we solve overdetermined systems?)
7. Numerical integration (How do we calculate integrals?)

One area we won't cover is how to solve differential equations. This is such an important topic that it has its own course, Numerical Differential Equations III/IV.
1 Floating-point arithmetic

How do we represent numbers on a computer?

Integers can be represented exactly, up to some maximum size.

Example: 64-bit integers.
If 1 bit (binary digit) is used to store the sign, the largest possible number is

    1·2⁶² + 1·2⁶¹ + ⋯ + 1·2¹ + 1·2⁰ = 2⁶³ − 1.

► In Python, this is not really a worry. Even though the maximum size of a normal (32-bit) integer is 2³¹ − 1, larger results will be automatically promoted to "long" integers.
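This limit is easy to check. A small sketch using numpy, where int64 really is fixed-width (unlike Python 3's arbitrary-precision int):

```python
import numpy as np

largest = 2 ** 63 - 1                       # 1*2^62 + ... + 1*2^0, one bit for the sign
print(largest)                              # 9223372036854775807
print(np.iinfo(np.int64).max == largest)    # numpy agrees
```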
1.1 Fixed-point numbers

In everyday life, we tend to use a fixed-point representation

    x = ±(d₁d₂ ⋯ d_{k−1} . d_k ⋯ d_m)_β.

Here β is the base (e.g. 10 for decimal arithmetic or 2 for binary).

Example: (10.1)₂ = 1·2¹ + 0·2⁰ + 1·2⁻¹ = 2.5.

If we require that d₁ ≠ 0 unless k = 2, then every number has a unique representation of this form, except for infinite trailing sequences of digits β − 1.

Example: 3.1999... = 3.2.
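The positional rule can be written out directly. A hypothetical helper (not from the notes), with the radix point placed just before digit d_k:

```python
def fixed_point_value(digits, k, base=2):
    """Value of (d_1 ... d_{k-1} . d_k ... d_m)_base, radix point before d_k.

    Digit d_i contributes d_i * base**(k - 1 - i), so digits left of the
    point get non-negative powers and digits right of it negative powers.
    """
    return sum(d * base ** (k - 1 - i) for i, d in enumerate(digits, start=1))

print(fixed_point_value([1, 0, 1], k=3))        # (10.1)_2 = 2.5
print(fixed_point_value([3, 2], k=2, base=10))  # (3.2)_10
```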
1.2 Floating-point numbers

Computers use a floating-point representation. Only numbers in a floating-point number system F ⊂ ℝ can be represented exactly, where

    x = ±(0.d₁d₂ ⋯ d_m)_β × β^e,   e_min ≤ e ≤ e_max.

Here (0.d₁d₂ ⋯ d_m)_β is called the fraction (or significand or mantissa), β is the base, and e is the exponent. This can represent a much larger range of numbers than a fixed-point system of the same size, although at the cost that the numbers are not equally spaced. If d₁ ≠ 0 then each number in F has a unique representation and F is called normalised.

Example: Floating-point number system with β = 2, m = 3, e_min = −1, e_max = 2.

► Notice that the spacing between numbers jumps by a factor β at each power of β.

The largest possible number is (0.111)₂ × 2² = (1/2 + 1/4 + 1/8)(4) = 7/2. The smallest non-zero number is (0.100)₂ × 2⁻¹ = (1/2)(1/2) = 1/4.

Example: IEEE standard (1985) for double-precision (64-bit) arithmetic.
Here β = 2, and there are 52 bits for the fraction, 11 for the exponent, and 1 for the sign. The actual format used is

    x = ±(0.1d₁d₂ ⋯ d₅₂)₂ × 2^{e−1022},   e ∈ [0, 2047].

When β = 2, the first digit of a normalised number is always 1, so it doesn't need to be stored in memory. The exponent bias of 1022 means that the actual exponents are in the range −1022 to 1025, since e ∈ [0, 2047]. Actually the exponents −1022 and 1025 are used to store ±0 and ±∞ respectively.

The smallest non-zero number in this system is (0.1)₂ × 2⁻¹⁰²¹ ≈ 2.225 × 10⁻³⁰⁸, and the largest number is (0.11⋯1)₂ × 2¹⁰²⁴ ≈ 1.798 × 10³⁰⁸.

► IEEE stands for Institute of Electrical and Electronics Engineers. This is the default system in Python/numpy.
► The automatic 1 is sometimes called the "hidden bit". The exponent bias avoids the need to store the sign of the exponent.

Numbers outside the finite set F cannot be represented exactly. If a calculation falls below the lower non-zero limit (in absolute value), it is called underflow, and is usually set to 0. If it falls above the upper limit, it is called overflow, and usually results in a floating-point exception.

► e.g. in Python, 2.0**1025 leads to an exception.
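These limits, and the overflow behaviour, can be inspected directly in Python/numpy; a quick sketch:

```python
import numpy as np

info = np.finfo(np.float64)
print(info.max)     # largest double, about 1.798e308
print(info.tiny)    # smallest positive normalised double, about 2.225e-308

try:
    2.0 ** 1025     # exceeds the double-precision range
except OverflowError as exc:
    print("overflow:", exc)
```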
► e.g. Ariane 5 rocket failure (1996). The maiden flight ended in failure. Only 40 seconds after initiation, at altitude 3700 m, the launcher veered off course and exploded. The cause was a software exception during data conversion from a 64-bit float to a 16-bit integer. The converted number was too large to be represented, causing an exception.

► In IEEE arithmetic, some numbers in the "zero gap" can be represented using e = 0, since only two possible fraction values are needed for ±0. The other fraction values may be used with first (hidden) bit 0 to store a set of so-called subnormal numbers.

The mapping from ℝ to F is called rounding and denoted fl(x). Usually it is simply the nearest number in F to x. If x lies exactly midway between two numbers in F, a method of breaking ties is required. The IEEE standard specifies round to nearest even - i.e., take the neighbour with last digit 0 in the fraction.

► This avoids statistical bias or prolonged drift.

Example: Our toy system from earlier.
9/8 = (1.001)₂ has neighbours 1 = (0.100)₂ × 2¹ and 5/4 = (0.101)₂ × 2¹, so is rounded down to 1.
11/8 = (1.011)₂ has neighbours 5/4 = (0.101)₂ × 2¹ and 3/2 = (0.110)₂ × 2¹, so is rounded up to 3/2.
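Round-to-nearest-even is also what Python's built-in round() does for decimal ties; a quick illustration (independent of the toy binary system above):

```python
# Ties are sent to the neighbour whose last digit is even,
# so the halves do NOT all round upwards:
print(round(0.5), round(1.5), round(2.5), round(3.5))   # 0 2 2 4
```

Rounding half the ties down and half up is exactly what avoids the statistical drift mentioned above.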
► e.g. Vancouver stock exchange index. In 1982, the index was established at 1000. By November 1983, it had fallen to 520, even though the exchange seemed to be doing well. Explanation: the index was rounded down to 3 digits at every recomputation. Since the errors were always in the same direction, they added up to a large error over time. Upon recalculation, the index doubled!

1.3 Significant figures

When doing calculations without a computer, we often use the terminology of significant figures. To count the number of significant figures in a number x, start with the first non-zero digit from the left, and count all the digits thereafter, including final zeros if they are after the decimal point.

Example: 3.1056, 31.050, 0.031056, 0.031050, and 3105.0 all have 5 significant figures (s.f.).

To round x to n s.f., replace x by the nearest number with n s.f. An approximation x̂ of x is "correct to n s.f." if both x̂ and x round to the same number to n s.f.

1.4 Rounding error
If |x| lies between the smallest non-zero number in F and the largest number in F, then

    fl(x) = x(1 + δ),                                   (1.3)

where the relative error incurred by rounding is

    |δ| = |fl(x) − x| / |x|.                            (1.4)

► Relative errors are often more useful because they are scale invariant. E.g., an error of 1 hour is irrelevant in estimating the age of this lecture theatre, but catastrophic in timing your arrival at the lecture.

Now x may be written as x = ±(0.d₁d₂ ⋯)_β × β^e for some e ∈ [e_min, e_max], but the fraction will not terminate after m digits if x ∉ F. Then

    |fl(x) − x| ≤ (1/2) β^{−m} β^e   ⟹   |δ| ≤ (1/2) β^{1−m}.    (1.5)

Here we used that the fractional part of |x| is at least (0.1)_β = β⁻¹. The number ε_M = (1/2) β^{1−m} is called the machine epsilon (or unit roundoff), and is independent of x. So the relative rounding error satisfies

    |δ| ≤ ε_M.                                          (1.6)

► Sometimes the machine epsilon is defined without the factor 1/2. For example, in Python, print(np.finfo(np.float64).eps) gives the distance from 1 to the next larger number in the system.

Example: For our system with β = 2, m = 3, we have ε_M = (1/2)·2^{1−3} = 1/8. For IEEE double precision, we have β = 2 and m = 53 (including the hidden bit), so ε_M = (1/2)·2^{1−53} = 2⁻⁵³ ≈ 1.11 × 10⁻¹⁶.
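Both conventions for ε_M can be checked in a couple of lines; a sketch (numpy's eps omits the factor 1/2, as noted above):

```python
import numpy as np

print(np.finfo(np.float64).eps == 2.0 ** -52)   # numpy's eps, without the 1/2
print(1.0 + 2.0 ** -53 == 1.0)                  # True: adding the unit roundoff to 1
                                                #       is invisible (ties round to even)
print(1.0 + 2.0 ** -52 == 1.0)                  # False: one spacing above 1 is visible
```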
When adding/subtracting/multiplying/dividing two numbers in F, the result will not be in F in general, so must be rounded.

Example: Our toy system again (β = 2, m = 3, e_min = −1, e_max = 2). Let us multiply x = 5/8 and y = 7/8. We have

    xy = 35/64 = 1/2 + 1/32 + 1/64 = (0.100011)₂.

This has too many significant digits to represent in our system, so the best we can do is round the result to

    fl(xy) = (0.100)₂ = 1/2.

► Typically additional digits are used during the computation itself, as in our example.

For ∘ = +, −, ×, ÷, IEEE standard arithmetic requires rounded exact operations, so that

    fl(x ∘ y) = (x ∘ y)(1 + δ),   |δ| ≤ ε_M.            (1.7)

1.5 Loss of significance
You might think that (1.7) guarantees the accuracy of calculations to within ε_M, but this is true only if x and y are themselves exact. In reality, we are probably starting from x̄ = x(1 + δ₁) and ȳ = y(1 + δ₂), with |δ₁|, |δ₂| ≤ ε_M. In that case, there is an error even before we round the result, since

    x̄ − ȳ = x(1 + δ₁) − y(1 + δ₂)                       (1.8)
          = (x − y) (1 + (xδ₁ − yδ₂)/(x − y)).          (1.9)

If the correct answer x − y is very small, then there can be an arbitrarily large relative error in the result, compared to the errors in the initial x̄ and ȳ. In particular, this relative error can be much larger than ε_M. This is called loss of significance, and is a major cause of errors in floating-point calculations.

Example: Quadratic formula for solving x² − 56x + 1 = 0.

To 4 s.f., the roots are

    x₁ = 28 + √783 = 55.98,   x₂ = 28 − √783 = 0.01786.

However, working to 4 s.f. we would compute √783 = 27.98, which would lead to the results

    x̄₁ = 55.98,   x̄₂ = 0.02000.

The smaller root is not correct to 4 s.f., because of cancellation error. One way around this is to note that x² − 56x + 1 = (x − x₁)(x − x₂), and compute x₂ from x₂ = 1/x₁, which gives the correct answer.

► Note that the error crept in when we rounded √783 to 27.98, because this removed digits that would otherwise have been significant after the subtraction.
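The fix suggested in the notes (the product of the roots of a monic quadratic equals the constant term, so x₂ = c/x₁) can be sketched for a general x² + bx + c; the function names are mine, and in double precision the effect is a loss of digits rather than the dramatic 4 s.f. failure above:

```python
import math

def roots_naive(b, c):
    """Roots of x^2 + b*x + c = 0 by the textbook formula (cancellation-prone)."""
    d = math.sqrt(b * b - 4 * c)
    return (-b + d) / 2, (-b - d) / 2

def roots_stable(b, c):
    """Compute the cancellation-free root first, then the other via x1*x2 = c."""
    d = math.sqrt(b * b - 4 * c)
    x1 = (-b - d) / 2 if b > 0 else (-b + d) / 2   # same signs: no subtraction
    return x1, c / x1

print(roots_stable(-56.0, 1.0))   # roughly (55.982..., 0.017862...)
```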
Let us plot this function in the range −5 × 10⁻⁸ ≤ x ≤ 5 × 10⁻⁸. Even in IEEE double precision arithmetic we find significant errors, as shown by the blue curve. The red curve shows the correct result approximated using the Taylor series

    f(x) =