
Lectures on Computational Physics

Badis Ydri

Adel Bouchareb Rafik Chemam

Physics Department, Badji Mokhtar University, Annaba, Algeria

May 21, 2013


Contents

1 Introduction and References

2 Euler Algorithm
  2.1 Euler Algorithm
  2.2 First Example and Sample Code
    2.2.1 Radioactive Decay
    2.2.2 A Sample Fortran Code
  2.3 More Examples
    2.3.1 Air Resistance
    2.3.2 Projectile Motion
  2.4 Periodic Motions and Euler-Cromer and Verlet Algorithms
    2.4.1 Harmonic Oscillator
    2.4.2 Euler Algorithm
    2.4.3 Euler-Cromer Algorithm
    2.4.4 Verlet Algorithm
  2.5 Lab Problem 1: Euler Algorithm - Air Resistance
  2.6 Lab Problem 2: Euler Algorithm - Projectile Motion
  2.7 Lab Problem 3: Euler, Euler-Cromer and Verlet Algorithms

3 Numerical Integration
  3.1 Rectangular Approximation
  3.2 Trapezoidal Approximation
  3.3 Parabolic Approximation or Simpson's Rule
  3.4 Errors
  3.5 Lab Problem 4: Numerical Integration

4 Newton-Raphson Algorithms and Interpolation
  4.1 Bisection Algorithm
  4.2 Newton-Raphson Algorithm
  4.3 Hybrid Method
  4.4 Lagrange Interpolation
  4.5 Cubic Spline Interpolation
  4.6 Lab Problem 5: Newton-Raphson Algorithm

5 The Solar System - The Runge-Kutta Methods
  5.1 The Solar System
    5.1.1 Newton's Second Law
    5.1.2 Astronomical Units and Initial Conditions
    5.1.3 Kepler's Laws
    5.1.4 The Inverse-Square Law and Stability of Orbits
  5.2 Euler-Cromer Algorithm
  5.3 The Runge-Kutta Algorithm
    5.3.1 The Method
    5.3.2 Example 1: The Harmonic Oscillator
    5.3.3 Example 2: The Solar System
  5.4 Precession of the Perihelion of Mercury
  5.5 Lab Problem 6: Runge-Kutta Algorithm - The Solar System
  5.6 Lab Problem 7: Precession of the Perihelion of Mercury

6 Chaos: Chaotic Pendulum
  6.1 Equation of Motion
  6.2 Numerical Algorithms
    6.2.1 Euler-Cromer Algorithm
    6.2.2 Runge-Kutta Algorithm
  6.3 Elements of Chaos
    6.3.1 Butterfly Effect: Sensitivity to Initial Conditions
    6.3.2 Poincaré Section and Attractors
    6.3.3 Period-Doubling Bifurcations
    6.3.4 Feigenbaum Ratio
    6.3.5 Spontaneous Symmetry Breaking
  6.4 Lab Problem 8: The Butterfly Effect
  6.5 Lab Problem 9: Poincaré Sections
  6.6 Lab Problem 10: Period Doubling
  6.7 Lab Problem 11: Bifurcation Diagrams

7 Molecular Dynamics
  7.1 Introduction
  7.2 The Lennard-Jones Potential
  7.3 Units, Boundary Conditions and Verlet Algorithm
  7.4 Some Physical Applications
    7.4.1 Dilute Gas and Maxwell Distribution
    7.4.2 The Melting Transition
  7.5 Lab Problem 12: Maxwell Distribution
  7.6 Lab Problem 13: Melting Transition

8 Pseudo Random Numbers and Random Walks
  8.1 Random Numbers
    8.1.1 Linear Congruent or Power Residue Method
    8.1.2 Statistical Tests of Randomness
  8.2 Random Systems
    8.2.1 Random Walks
    8.2.2 Diffusion Equation
  8.3 The Random Number Generators RAN0, RAN1 and RAN2
  8.4 Lab Problem 14: Random Numbers
  8.5 Lab Problem 15: Random Walks

9 Monte Carlo Integration
  9.1 Numerical Integration
    9.1.1 Rectangular Approximation Revisited
    9.1.2 Midpoint Approximation of Multidimensional Integrals
    9.1.3 Spheres and Balls in d Dimensions
  9.2 Monte Carlo Integration: Simple Sampling
    9.2.1 Sampling (Hit or Miss) Method
    9.2.2 Sample Mean Method
    9.2.3 Sample Mean Method in Higher Dimensions
  9.3 The Central Limit Theorem
  9.4 Monte Carlo Errors and Standard Deviation
  9.5 Nonuniform Probability Distributions
    9.5.1 The Inverse Transform Method
    9.5.2 The Acceptance-Rejection Method
  9.6 Lab Problem 16: Midpoint and Monte Carlo Approximations
  9.7 Lab Problem 17: Nonuniform Probability Distributions

10 Monte Carlo Importance Sampling, Metropolis Algorithm and Ising Model
  10.1 The Canonical Ensemble
  10.2 Importance Sampling
  10.3 The Ising Model
  10.4 The Metropolis Algorithm
  10.5 The Heat-Bath Algorithm
  10.6 The Mean Field Approximation
    10.6.1 Phase Diagram and Critical Temperature
    10.6.2 Critical Exponents
  10.7 Simulation of the Ising Model and Numerical Results
    10.7.1 The Fortran Code
    10.7.2 Some Numerical Results
  10.8 Lab Problem 18: The Metropolis Algorithm and the Ising Model
  10.9 Lab Problem 19: The Ferromagnetic Second Order Phase Transition
  10.10 Lab Problem 20: The 2-Point Correlator
  10.11 Lab Problem 21: Hysteresis and the First Order Phase Transition

Appendix

A Notes on Numerical Errors
  A.1 Floating Point Representation
  A.2 Machine Precision and Roundoff Errors
  A.3 Systematic (Algorithmic) Errors

B The Jackknife Method and the Method of Least Squares

C Exercises

1 Introduction and References

• Computational physics is a subfield of computational science/scientific computing.

• In computational physics we combine elements from physics (especially theoretical physics), from mathematics (in particular applied mathematics such as numerical analysis) and from computer science (programming) for the purpose of solving a physics problem.

• In physics there are traditionally two approaches: 1) the experimental approach and 2) the theoretical approach. Nowadays we consider "the computational approach" as a third approach in physics. It can even be argued that the computational approach is independent of the first two approaches and not just a bridge between them.

• The most important use of computers in physics is simulation. Simulations are suited to nonlinear problems which cannot generally be solved by analytical methods. The starting point of a simulation is an idealized model of a physical system of interest. We want to check whether or not the behaviour of this model is consistent with observation. We specify an algorithm for the implementation of the model on a computer. The execution of this implementation is a simulation. Simulations are therefore virtual experiments. The comparison between simulations and laboratory experiments goes as follows:


  Laboratory experiment      Simulation
  sample                     model
  physical apparatus         computer program (the code)
  calibration                testing of code
  measurement                computation
  data analysis              data analysis

• A crucial tool in computational physics is programming languages. In simulations as used by the majority of research physicists, codes are written in a high-level compiled language such as Fortran or C/C++. Such simulations may also use calls to routine libraries such as LAPACK. In this course we follow a different path: we write all our codes in a high-level compiled language and do not call any libraries.

• The use of mathematical software packages such as MAPLE, MATHEMATICA and MATLAB is only suited to relatively small calculations. These packages are interpreted languages and thus the code they produce generally runs far too slowly compared to compiled languages.

• As our programming language we will use Fortran 77 under the Linux operating system. We adopt the Ubuntu distribution of Linux. We will use the Fortran compilers f77 and gfortran. As an editor we will use Emacs, and for graphics we will use Gnuplot.

• The main references which we will follow in this course are:
  - N. J. Giordano, Computational Physics.
  - R. H. Landau, M. J. Paez, C. C. Bordeianu, Computational Physics.
  - H. Gould, J. Tobochnik, D. Christian, An Introduction to Computer Simulation Methods.
  - R. Fitzpatrick, Computational Physics.
  - M. Hjorth-Jensen, Computational Physics.
  - Paul L. DeVries, A First Course in Computational Physics.

2 Euler Algorithm

2.1 Euler Algorithm

It is a well appreciated fact that first order differential equations are commonplace in all branches of physics. They appear virtually everywhere, and some of the most fundamental problems of nature obey simple first order or second order differential equations. It is often possible to recast second order differential equations as first order differential equations with a doubled number of unknowns. From the numerical standpoint the problem of solving first order differential equations is conceptually simple, as we now explain.

We consider the general first order ordinary differential equation

  y' = dy/dx = f(x, y).   (2.1)

We impose the general initial-value boundary condition

  y(x_0) = y_0.   (2.2)

We solve for the function y = y(x) in the unit x-interval starting from x_0. We make the x-interval discretization

  x_n = x_0 + n Δx , n = 0, 1, ...   (2.3)

The Euler algorithm is one of the oldest known numerical recipes. It consists in replacing the function y(x) in the interval [x_n, x_{n+1}] by the straight line connecting the points (x_n, y_n) and (x_{n+1}, y_{n+1}). This comes from the definition of the derivative at the point x = x_n given by

  (y_{n+1} - y_n) / (x_{n+1} - x_n) = f(x_n, y_n).   (2.4)


This means that we replace the above first order differential equation by the finite difference equation

  y_{n+1} ≃ y_n + Δx f(x_n, y_n).   (2.5)

This is only an approximation. The truncation error is given by the next term in the Taylor expansion of the function y(x), namely

  y_{n+1} ≃ y_n + Δx f(x_n, y_n) + (1/2) Δx² df(x, y)/dx|_{x=x_n} + ...   (2.6)

The error then reads

  (1/2) (Δx)² df(x, y)/dx|_{x=x_n}.   (2.7)

The error per step is therefore proportional to (Δx)². In a unit interval we will perform N = 1/Δx steps. The total systematic error is therefore proportional to

  N (Δx)² = 1/N.   (2.8)
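The 1/N scaling of the total error can be verified directly. Below is a minimal sketch (written in Python for brevity, although the course itself uses Fortran 77) that integrates the test equation y' = -y over a unit interval and checks that doubling N roughly halves the error; the function names are ours, not from the lectures.

```python
import math

def euler(f, y0, x0, x1, n):
    """Integrate y' = f(x, y) from x0 to x1 with n Euler steps."""
    dx = (x1 - x0) / n
    x, y = x0, y0
    for _ in range(n):
        y += dx * f(x, y)   # y_{n+1} = y_n + dx * f(x_n, y_n), eq. (2.5)
        x += dx
    return y

# Test equation y' = -y with y(0) = 1; the exact answer is y(1) = exp(-1).
exact = math.exp(-1.0)
err_100 = abs(euler(lambda x, y: -y, 1.0, 0.0, 1.0, 100) - exact)
err_200 = abs(euler(lambda x, y: -y, 1.0, 0.0, 1.0, 200) - exact)
print(err_100 / err_200)  # close to 2, as expected from the 1/N law
```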

2.2 First Example and Sample Code

2.2.1 Radioactive Decay

It is an experimental fact that radioactive decay obeys a very simple first order differential equation. In a spontaneous radioactive decay a particle with no external influence will decay into other particles. A typical example is the nuclear isotope uranium-235. The exact moment of decay of any one particle is random. This means that the number -dN(t) = N(t) - N(t + dt) of nuclei which will decay during a time interval dt must be proportional to dt and to the number N(t) of particles present at time t, i.e.

  -dN(t) ∝ N(t) dt.   (2.9)

In other words the probability of decay per unit time, given by (-dN(t)/N(t))/dt, is a constant which we denote 1/τ. The minus sign is due to the fact that dN(t) is negative, since the number of particles decreases with time. We write

  dN(t)/dt = -N(t)/τ.   (2.10)

The solution of this first order differential equation is given by a simple exponential function, viz

  N(t) = N_0 exp(-t/τ).   (2.11)

The number N_0 is the number of particles at time t = 0. The time τ is called the mean lifetime. It is the average time for decay. For uranium-235 the mean lifetime is around 10^9 years.

The goal now is to obtain an approximate numerical solution to the problem of radioactivity using the Euler algorithm. In this particular case we can compare to an exact solution given by the exponential decay law (2.11). We start evidently from the Taylor expansion

  N(t + Δt) = N(t) + Δt dN/dt + (1/2)(Δt)² d²N/dt² + ...   (2.12)

We get in the limit Δt → 0

  dN/dt = lim_{Δt→0} [N(t + Δt) - N(t)] / Δt.   (2.13)

We take Δt small but non-zero. In this case we obtain the approximation

  dN/dt ≃ [N(t + Δt) - N(t)] / Δt.   (2.14)

Equivalently

  N(t + Δt) ≃ N(t) + Δt dN/dt.   (2.15)

By using (2.10) we get

  N(t + Δt) ≃ N(t) - Δt N(t)/τ.   (2.16)

We will start from the number of particles at time t = 0 given by N(0) = N_0, which is known. We substitute t = 0 in (2.16) to obtain N(Δt) = N(1) as a function of N(0). Next the value N(1) can be used in equation (2.16) to get N(2Δt) = N(2), etc. We are thus led to the time discretization

  t ≡ t(i) = iΔt , i = 0, ..., N.   (2.17)

In other words

  N(t) = N(i).   (2.18)

The integer N determines the total time interval T = NΔt. The numerical solution (2.16) can be rewritten as

  N(i + 1) = N(i) - Δt N(i)/τ , i = 0, ..., N - 1.   (2.19)

This is the Euler algorithm for radioactive decay. For convenience we shift the integer i so that the above equation takes the form

  N(i) = N(i - 1) - Δt N(i - 1)/τ , i = 1, ..., N.   (2.20)

We introduce N̂(i) = N(i - 1), i.e. N̂(1) = N(0) = N_0. We get

  N̂(i + 1) = N̂(i) - Δt N̂(i)/τ , i = 1, ..., N.   (2.21)

The corresponding times are

  t̂(i + 1) = iΔt , i = 1, ..., N.   (2.22)

The initial number of particles at time t̂(1) = 0 is N̂(1) = N_0. This approximate solution should be compared with the exact solution (2.11).
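Before turning to the Fortran implementation, the recursion (2.21) can be prototyped in a few lines. Here is a hedged Python sketch (parameter values are illustrative) comparing the Euler estimate against the exact law (2.11):

```python
import math

N0, tau, dt, nsteps = 1000.0, 1.0, 0.01, 100  # illustrative values

n = N0
for _ in range(nsteps):
    n -= dt * n / tau  # eq. (2.21): N(i+1) = N(i) - dt*N(i)/tau

exact = N0 * math.exp(-nsteps * dt / tau)  # eq. (2.11) at t = 1
print(n, exact)  # the Euler estimate agrees to within the O(1/N) error
```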

2.2.2 A Sample Fortran Code

The goal in this section is to provide a sample Fortran code which implements the above algorithm (2.21). The reasons behind choosing Fortran were explained in the introduction.

Any Fortran program must start with a program statement and conclude with an end statement. The program statement allows us to give a name to the program. The end statement may be preceded by a return statement. This looks like

      program radioactivity
c     Here is the code
      return
      end

We have chosen the name "radioactivity" for our program. The "c" in the second line indicates that the sentence "Here is the code" is only a comment and not a part of the code.

After the program statement come the declaration statements. We state the variables and their types which are used in the program. In Fortran we have the integer type for integer variables and the double precision type for real variables. In the case of (2.21) the variables N̂(i), t̂(i), τ, Δt and N_0 are real numbers, while the variables i and N are integers.

An array A of dimension K is an ordered list of K variables of a given type, called the elements of the array and denoted A(1), A(2), ..., A(K). In our example N̂(i) and t̂(i) are real arrays of dimension N + 1. We declare that N̂(i) and t̂(i) are real for all i = 1, ..., N + 1 by writing N̂(1:N+1) and t̂(1:N+1). Since an array is declared at the beginning of the program it must have a fixed size, i.e. the upper limit must be a constant and not a variable. In Fortran a constant is declared with a parameter statement. In our case the upper limit is N + 1 and hence N must be declared in a parameter statement.

In the Fortran code we choose the notation A = N̂, A0 = N_0, time = t̂, Delta = Δt and tau = τ. By putting all declarations together we get the following preliminary lines of code:

      program radioactivity
      integer i,N
      parameter (N=100)
      double precision A(1:N+1),A0,time(1:N+1),Delta,tau
c     Here is the code
      return
      end

The input of the computation in our case is given by the parameters N_0, τ, Δt and N.

For the radioactivity problem the main part of the code consists of equations (2.21) and (2.22). We start with the known quantities N̂(1) = N_0 at t̂(1) = 0 and generate, via the successive use of (2.21) and (2.22), the values N̂(i) and t̂(i) for all i > 1. This will be coded using a do loop. It begins with a do statement and ends with an enddo statement. We may also indicate a step size.

The output of the computation can be saved to a file using a write statement inside the do loop. In our case the output is the number of particles N̂(i) and the time t̂(i). The write statement reads explicitly

      write(10,*) time(i),A(i)

The data will then be saved to a file called fort.10. By including the initialization, the do loop and the write statement we obtain the complete code:

      program radioactivity
      integer i,N
      parameter (N=100)
      double precision A(1:N+1),A0,time(1:N+1),Delta,tau
      parameter (A0=1000.0d0,Delta=0.01d0,tau=1.0d0)
      A(1)=A0
      time(1)=0.0d0
      do i=1,N,1
         A(i+1)=A(i)-Delta*A(i)/tau
         time(i+1)=i*Delta
         write(10,*) time(i+1),A(i+1)
      enddo
      return
      end


2.3 More Examples

2.3.1 Air Resistance

We consider an athlete riding a bicycle moving on flat terrain. The goal is to determine the velocity. Newton's second law is given by

  m dv/dt = F.   (2.23)

F is the force exerted by the athlete on the bicycle. It is clearly very difficult to write down a precise expression for F. Formulating the problem in terms of the power generated by the athlete avoids the use of an explicit formula for F. Multiplying the above equation by v we obtain

  dE/dt = P.   (2.24)

E is the kinetic energy and P is the power, viz

  E = (1/2) m v² , P = F v.   (2.25)

Experimentally we find that the output of well trained athletes is around P = 400 watts over periods of 1 h. The above equation can also be rewritten as

  dv²/dt = 2P/m.   (2.26)

For P constant we get the solution

  v² = (2P/m) t + v_0².   (2.27)

We remark the unphysical effect that v → ∞ as t → ∞. This is due to the absence of the effect of friction, in particular air resistance. The most important form of friction is air resistance. The force due to air resistance (the drag force) is

  F_drag = -B_1 v - B_2 v².   (2.28)

At small velocities the first term dominates, whereas at large velocities it is the second term that dominates. For very small velocities the dependence on v given by F_drag = -B_1 v is known as Stokes's law. For reasonable velocities the drag force is dominated by the second term, i.e. it is given for most objects by

  F_drag = -B_2 v².   (2.29)

The coefficient B_2 can be calculated as follows. As the bicycle-rider combination moves with velocity v it pushes in a time dt a mass of air given by dm_air = ρ A v dt, where ρ is the air density and A is the frontal cross section. The corresponding kinetic energy is

  dE_air = dm_air v²/2.   (2.30)

This is equal to the work done by the drag force, i.e.

  -F_drag v dt = dE_air.   (2.31)

From this we get

  B_2 = C ρ A.   (2.32)

The drag coefficient is C = 1/2. The drag force becomes

  F_drag = -C ρ A v².   (2.33)

Taking into account the force due to air resistance we find that Newton's law becomes

  m dv/dt = F + F_drag.   (2.34)

Equivalently

  dv/dt = P/(mv) - C ρ A v²/m.   (2.35)

It is not obvious that this equation can be solved exactly in any easy way. The Euler algorithm gives the approximate solution

  v(i + 1) = v(i) + Δt (dv/dt)(i).   (2.36)

In other words

  v(i + 1) = v(i) + Δt [P/(m v(i)) - C ρ A v²(i)/m] , i = 0, ..., N - 1.   (2.37)

This can also be put in the form (with v̂(i) = v(i - 1))

  v̂(i + 1) = v̂(i) + Δt [P/(m v̂(i)) - C ρ A v̂²(i)/m] , i = 1, ..., N.   (2.38)

The corresponding times are

  t ≡ t̂(i + 1) = iΔt , i = 1, ..., N.   (2.39)

The initial velocity v̂(1) at time t̂(1) = 0 is known.
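Setting dv/dt = 0 in (2.35) gives the terminal velocity v* = (P/(CρA))^{1/3}, at which the power input balances the drag. A quick Python sketch of the update (2.38), with illustrative parameter values (the same ones used in Lab Problem 1 below), shows the speed saturating at this value:

```python
P, m, C, rho, A = 200.0, 70.0, 0.5, 1.2, 0.33  # illustrative values
dt, v = 0.1, 4.0                               # step size and initial speed

for _ in range(2000):  # integrate up to T = 200 s
    v += dt * (P / (m * v) - C * rho * A * v**2 / m)  # eq. (2.38)

v_terminal = (P / (C * rho * A)) ** (1.0 / 3.0)
print(v, v_terminal)  # v has relaxed to the terminal velocity
```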

2.3.2 Projectile Motion

There are two forces acting on the projectile: the weight force and the drag force. The drag force is opposite to the velocity. In this case Newton's law is given by

  m dv⃗/dt = F⃗ + F⃗_drag
          = m g⃗ - B_2 v² (v⃗/v)
          = m g⃗ - B_2 v v⃗.   (2.40)

The goal is to determine the position of the projectile, and hence one must solve the two equations

  dx⃗/dt = v⃗.   (2.41)

  m dv⃗/dt = m g⃗ - B_2 v v⃗.   (2.42)

In components (the horizontal axis is x and the vertical axis is y) we have 4 equations of motion given by

  dx/dt = v_x.   (2.43)

  m dv_x/dt = -B_2 v v_x.   (2.44)

  dy/dt = v_y.   (2.45)

  m dv_y/dt = -mg - B_2 v v_y.   (2.46)

We recall the constraint

  v = √(v_x² + v_y²).   (2.47)

The numerical approach we will employ in order to solve the 4 equations of motion (2.43)-(2.46) together with (2.47) consists in using the Euler algorithm. This yields the approximate solution given by the equations

  x(i + 1) = x(i) + Δt v_x(i).   (2.48)

  v_x(i + 1) = v_x(i) - Δt B_2 v(i) v_x(i)/m.   (2.49)

  y(i + 1) = y(i) + Δt v_y(i).   (2.50)

  v_y(i + 1) = v_y(i) - Δt g - Δt B_2 v(i) v_y(i)/m.   (2.51)

The constraint is

  v(i) = √(v_x(i)² + v_y(i)²).   (2.52)

In the above equations the index i is such that i = 0, ..., N - 1. The initial position and velocity are given, i.e. x(0), y(0), v_x(0) and v_y(0) are known.
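The scheme (2.48)-(2.52) is easy to check in the drag-free limit, where the range has the closed form v_0² sin(2θ)/g. A Python sketch with illustrative values (the course itself codes this in Fortran):

```python
import math

g, dt = 9.8, 1.0e-4
theta = math.radians(45.0)
v0, B2_over_m = 10.0, 0.0   # drag switched off for the comparison

x, y = 0.0, 0.0
vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
while True:
    v = math.hypot(vx, vy)               # constraint (2.52)
    x, y = x + dt * vx, y + dt * vy      # eqs. (2.48) and (2.50)
    vx -= dt * B2_over_m * v * vx        # eq. (2.49)
    vy -= dt * (g + B2_over_m * v * vy)  # eq. (2.51)
    if y <= 0.0:   # projectile back at ground level
        break

exact_range = v0**2 * math.sin(2 * theta) / g
print(x, exact_range)  # close agreement for this small step size
```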

2.4 Periodic Motions and Euler-Cromer and Verlet Algorithms

As discussed above, each iteration of the Euler algorithm introduces a systematic error, and the total error over a unit interval is proportional to 1/N. This accumulated error may become so large that it alters the solution drastically at later times. In the particular case of periodic motions, where the true nature of the motion only becomes clear after a few elapsed periods, the large accumulated error can lead to diverging results. In this section we discuss simple variants of the Euler algorithm which perform much better than the plain Euler algorithm for periodic motions.

2.4.1 Harmonic Oscillator

We consider a simple pendulum: a particle of mass m suspended by a massless string from a rigid support. There are two forces acting on the particle: the weight and the tension of the string. Newton's second law reads

  m d²s⃗/dt² = m g⃗ + T⃗.   (2.53)

The projection parallel to the string reads

  0 = -mg cos θ + T.   (2.54)

The perpendicular projection reads

  m d²s/dt² = -mg sin θ.   (2.55)

Here θ is the angle that the string makes with the vertical. Clearly s = lθ. The force mg sin θ is a restoring force, which means that it is always directed toward the equilibrium position (here θ = 0), opposite to the displacement; hence the minus sign in the above equation. Using s = lθ we get the equation

  d²θ/dt² = -(g/l) sin θ.   (2.56)

For small θ we have sin θ ≃ θ. We obtain

  d²θ/dt² = -(g/l) θ.   (2.57)

The solution is a sinusoidal function of time with frequency Ω = √(g/l). It is given by

  θ(t) = θ_0 sin(Ωt + φ).   (2.58)

The constants θ_0 and φ depend on the initial displacement and velocity of the pendulum. The frequency is independent of the mass m and the amplitude of the motion and depends only on the length l of the string.

2.4.2 Euler Algorithm

The numerical solution is based on the Euler algorithm. It is found as follows. First we replace the equation of motion (2.57) by the following two equations

  dθ/dt = ω.   (2.59)

  dω/dt = -(g/l) θ.   (2.60)

We use the definition of the derivative of a function, viz

  df/dt = [f(t + Δt) - f(t)] / Δt , Δt → 0.   (2.61)

We get for small but non-zero Δt the approximations

  θ(t + Δt) ≃ θ(t) + ω(t) Δt ,
  ω(t + Δt) ≃ ω(t) - (g/l) θ(t) Δt.   (2.62)

We consider the time discretization

  t ≡ t(i) = iΔt , i = 0, ..., N.   (2.63)

In other words

  θ(t) = θ(i) , ω(t) = ω(i).   (2.64)

The integer N determines the total time interval T = NΔt. The above numerical solution can be rewritten as

  ω(i + 1) = ω(i) - (g/l) θ(i) Δt ,
  θ(i + 1) = θ(i) + ω(i) Δt.   (2.65)

We shift the integer i so that it takes values in the range [1, N + 1]. We obtain

  ω(i) = ω(i - 1) - (g/l) θ(i - 1) Δt ,
  θ(i) = θ(i - 1) + ω(i - 1) Δt.   (2.66)

We introduce ω̂(i) = ω(i - 1) and θ̂(i) = θ(i - 1). We get, with i = 1, ..., N, the equations

  ω̂(i + 1) = ω̂(i) - (g/l) θ̂(i) Δt ,
  θ̂(i + 1) = θ̂(i) + ω̂(i) Δt.   (2.67)

By using the values of θ and ω at time step i we calculate the corresponding values at time step i + 1. The initial angle and angular velocity θ̂(1) = θ(0) and ω̂(1) = ω(0) are known. This process is repeated until the functions θ and ω are determined for all times.

2.4.3 Euler-Cromer Algorithm

As it turns out, the above Euler algorithm does not conserve energy. In fact Euler's method is not good for oscillatory systems in general. A simple modification of Euler's algorithm due to Cromer solves this problem of energy non-conservation. It goes as follows. We use the values of the angle θ̂(i) and the angular velocity ω̂(i) at time step i to calculate the angular velocity ω̂(i + 1) at time step i + 1. This step is the same as before. However, we then use θ̂(i) and ω̂(i + 1) (and not ω̂(i)) to calculate θ̂(i + 1) at time step i + 1. This procedure, as shown by Cromer, conserves energy in oscillatory problems. In other words equations (2.67) become

  ω̂(i + 1) = ω̂(i) - (g/l) θ̂(i) Δt ,
  θ̂(i + 1) = θ̂(i) + ω̂(i + 1) Δt.   (2.68)

The error can be computed as follows. From these two equations we get

  θ̂(i + 1) = θ̂(i) + ω̂(i) Δt - (g/l) θ̂(i) Δt²
            = θ̂(i) + ω̂(i) Δt + (d²θ̂/dt²)|_i Δt².   (2.69)

In other words the error per step is still of the order of Δt². However, the Euler-Cromer algorithm does better than the Euler algorithm for periodic motion. Indeed at each step i the energy conservation condition reads

  E_{i+1} = E_i + (g/2l) (ω_i² - (g/l) θ_i²) Δt².   (2.70)

The energy of the simple pendulum is of course given by

  E_i = (1/2) ω_i² + (g/2l) θ_i².   (2.71)

The error at each step is still proportional to Δt², as in the Euler algorithm. However, the coefficient is precisely equal to the difference between the values of the kinetic energy and the potential energy at step i. Thus the accumulated error, which is obtained by summing over all steps, vanishes, since the average kinetic energy is equal to the average potential energy. In the Euler algorithm the coefficient is instead equal to the sum of the kinetic and potential energies, and as a consequence no cancellation can occur.
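This cancellation can be seen directly in a numerical experiment. The following Python sketch (illustrative parameters, not from the lectures) evolves the linear pendulum with both algorithms for many periods and compares the final energy (2.71) with its initial value:

```python
g_over_l, dt, nsteps = 9.8, 0.01, 10000   # roughly 16 periods

def energy(theta, omega):
    return 0.5 * omega**2 + 0.5 * g_over_l * theta**2  # eq. (2.71)

theta, omega = 0.2, 0.0
for _ in range(nsteps):  # plain Euler, eq. (2.67): old omega drives theta
    theta, omega = theta + omega * dt, omega - g_over_l * theta * dt
e_euler = energy(theta, omega)

theta, omega = 0.2, 0.0
for _ in range(nsteps):  # Euler-Cromer, eq. (2.68): NEW omega drives theta
    omega -= g_over_l * theta * dt
    theta += omega * dt
e_cromer = energy(theta, omega)

e0 = energy(0.2, 0.0)
print(e_euler / e0, e_cromer / e0)  # Euler blows up; Euler-Cromer stays near 1
```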

2.4.4 Verlet Algorithm

Another method which is much more accurate, and thus very suited to periodic motions, is due to Verlet. Let us consider the forward and backward Taylor expansions

  θ(t_i + Δt) = θ(t_i) + Δt dθ/dt|_{t_i} + (1/2)(Δt)² d²θ/dt²|_{t_i} + (1/6)(Δt)³ d³θ/dt³|_{t_i} + ...   (2.72)

  θ(t_i - Δt) = θ(t_i) - Δt dθ/dt|_{t_i} + (1/2)(Δt)² d²θ/dt²|_{t_i} - (1/6)(Δt)³ d³θ/dt³|_{t_i} + ...   (2.73)

Adding these expressions we get

  θ(t_i + Δt) = 2θ(t_i) - θ(t_i - Δt) + (Δt)² d²θ/dt²|_{t_i} + O(Δt⁴).   (2.74)

We write this as

  θ_{i+1} = 2θ_i - θ_{i-1} - (g/l)(Δt)² θ_i.   (2.75)

This is the Verlet algorithm for the harmonic oscillator. First we remark that the error is proportional to Δt⁴, which is smaller than the errors in the Euler and Euler-Cromer methods (and even smaller than the error in the second-order Runge-Kutta method), so this method is much more accurate. Secondly, in this method we do not need to calculate the angular velocity ω = dθ/dt. Thirdly, this method is not self-starting: given the initial conditions θ_1 and ω_1 we also need to know θ_2 for the algorithm to start. We can for example determine θ_2 using the Euler method, viz θ_2 = θ_1 + Δt ω_1.
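A minimal Python sketch of the recursion (2.75), self-started with one Euler step as suggested above (illustrative parameters); after one full period T = 2π√(l/g) the angle returns very close to its initial value:

```python
import math

g_over_l, dt = 9.8, 0.001
period = 2 * math.pi / math.sqrt(g_over_l)
nsteps = round(period / dt)

theta_prev = 0.1   # theta_1: initial angle
omega0 = 0.0       # omega_1: initial angular velocity
theta = theta_prev + dt * omega0   # theta_2 = theta_1 + dt*omega_1 (Euler start)
for _ in range(nsteps - 1):        # Verlet steps, eq. (2.75)
    theta, theta_prev = 2 * theta - theta_prev - g_over_l * dt**2 * theta, theta

print(theta)  # very close to the initial angle 0.1 after one period
```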

2.5 Lab Problem1: Euler Algorithm- Air Resistance

The equation of motion of a cyclist exerting a force on his bicycle corresponding to a constant powerPand moving against the force of air resistance is given by dv dt=Pmv-CρAv2m. The numerical approximation of this first order differentialequation which we will con- sider in this problem is based on Euler algorithm. ydri et al, lectures computational physics15 (1)Calculate the speedvas a function of time in the case of zero air resistance and then in the case of non-vanishing air resistance. What do youobserve. We will takeP= 200andC= 0.5. We also give the values m= 70kg, A= 0.33m2, ρ= 1.2kg/m3,Δt= 0.1s , T= 200s.

The initial speed is

v(1) = 4 m/s, t(1) = 0.

(2) What do you observe if we change the drag coefficient and/or the power. What do you observe if we decrease the time step.
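The problem statement targets Fortran; a minimal Python sketch of the same Euler update, using the parameter values given above, might look as follows (the terminal-speed comparison at the end follows from setting dv/dt = 0):

```python
def cyclist_speed(v0=4.0, P=200.0, m=70.0, C=0.5, rho=1.2, A=0.33,
                  dt=0.1, T=200.0, drag=True):
    """Euler integration of dv/dt = P/(m v) - C rho A v^2 / m."""
    v, steps = v0, int(T / dt)
    history = [v]
    for _ in range(steps):
        a = P / (m * v)
        if drag:
            a -= C * rho * A * v**2 / m
        v += dt * a
        history.append(v)
    return history

v_air = cyclist_speed(drag=True)
v_free = cyclist_speed(drag=False)
# with drag the speed saturates at the terminal value (P/(C rho A))**(1/3)
```

Without drag the speed keeps growing (v² grows linearly with t); with drag it saturates near (P/(CρA))^(1/3) ≈ 10 m/s.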

2.6 Lab Problem 2: Euler Algorithm - Projectile Motion

The numerical approximation based on the Euler algorithm of the equations of motion of a projectile moving under the effect of the forces of gravity and air resistance is given by the equations

v_x(i+1) = v_x(i) - Δt B₂ v(i) v_x(i)/m.
v_y(i+1) = v_y(i) - Δt g - Δt B₂ v(i) v_y(i)/m.
v(i+1) = √(v_x(i+1)² + v_y(i+1)²).
x(i+1) = x(i) + Δt v_x(i).
y(i+1) = y(i) + Δt v_y(i).

(1) Write a Fortran code which implements the above Euler algorithm.

(2) We take the values

B₂/m = 0.00004 m⁻¹, g = 9.8 m/s².
v(1) = 700 m/s, θ = 30 degrees.
v_x(1) = v(1) cos θ, v_y(1) = v(1) sin θ.

N = 10⁵, Δt = 0.01 s.

Calculate the trajectory of the projectile with and without air resistance. What do you observe.

ydri et al, lectures on computational physics

(3) We can determine numerically the range of the projectile by means of the conditional instruction if. This can be done by adding inside the do loop the following condition

if (y(i+1).le.0) exit

Determine the range of the projectile with and without air resistance.

(4) In the case where air resistance is absent we know that the range is maximal when the initial angle is 45 degrees. Verify this fact numerically by considering several angles. More precisely, add a do loop over the initial angle in order to be able to study the range as a function of the initial angle.

(5) In the case where air resistance is non-zero calculate the angle for which the range is maximal.
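The scheme above, including the exit condition on y, can be sketched in Python rather than Fortran (names are mine):

```python
import math

def projectile_range(v0=700.0, theta_deg=30.0, B2_over_m=4.0e-5, g=9.8,
                     dt=0.01, nmax=10**5, drag=True):
    """Euler scheme of Lab Problem 2; stops when y drops below zero
    (the do-loop exit condition) and returns the range x."""
    th = math.radians(theta_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(th), v0 * math.sin(th)
    for _ in range(nmax):
        v = math.sqrt(vx * vx + vy * vy)
        b = B2_over_m if drag else 0.0
        # position from the OLD velocity, as in x(i+1) = x(i) + dt*vx(i)
        x, y = x + dt * vx, y + dt * vy
        vx = vx - dt * b * v * vx
        vy = vy - dt * g - dt * b * v * vy
        if y <= 0.0:
            break
    return x

r_vacuum = projectile_range(drag=False)
r_air = projectile_range(drag=True)
```

In vacuum the result agrees with the analytic range v₀² sin 2θ/g up to the Euler discretization error; with drag the range is substantially reduced.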

2.7 Lab Problem 3: Euler, Euler-Cromer and Verlet Algorithms

We will consider the numerical solutions of the equation of motion of a simple harmonic oscillator given by the Euler, Euler-Cromer and Verlet algorithms, which take the form

ω_{i+1} = ω_i - (g/l) θ_i Δt, θ_{i+1} = θ_i + ω_i Δt.  (Euler)
ω_{i+1} = ω_i - (g/l) θ_i Δt, θ_{i+1} = θ_i + ω_{i+1} Δt.  (Euler-Cromer)
θ_{i+1} = 2θ_i - θ_{i-1} - (g/l) θ_i (Δt)².  (Verlet)

(1) Write a Fortran code which implements the Euler, Euler-Cromer and Verlet algorithms for the harmonic oscillator problem.

(2) Calculate the angle, the angular velocity and the energy of the harmonic oscillator as functions of time. The energy of the harmonic oscillator is given by

E = (1/2) ω² + (1/2)(g/l) θ².

We take the values

g = 9.8 m/s², l = 1 m. We take the number of iterations N and the time step Δt to be

N = 10000, Δt = 0.05 s.

The initial angle and the angular velocity are given by

θ_1 = 0.1 radian, ω_1 = 0.

By using the conditional instruction if we can limit the total time of motion to be equal to, say, 5 periods as follows

if (t(i+1).ge.5*period) exit

(3) Compare the value of the energy calculated with the Euler method with the value of the energy calculated with the Euler-Cromer method. What do you observe and what do you conclude.

(4) Repeat the computation using the Verlet algorithm. Remark that this method cannot self-start from the initial values θ_1 and ω_1 only. We must also provide the angle θ_2, which can be calculated using for example Euler, viz

θ_2 = θ_1 + ω_1 Δt.

We also remark that the Verlet algorithm does not require the calculation of the angular velocity. However, in order to calculate the energy we need to evaluate the angular velocity, which can be obtained from the expression

ω_i = (θ_{i+1} - θ_{i-1})/(2Δt).


3

Numerical Integration

3.1 Rectangular Approximation

We consider a generic one-dimensional integral of the form

F = ∫_a^b f(x) dx.  (3.1)

In general this cannot be done analytically. However, this integral is straightforward to do numerically. The starting point is the Riemann definition of the integral F as the area under the curve of the function f(x) from x = a to x = b. This is obtained as follows. We discretize the x-interval so that we end up with N equal small intervals of length Δx, viz

x_n = x_0 + nΔx,  Δx = (b - a)/N.  (3.2)

Clearly x_0 = a and x_N = b. The Riemann definition is then given by the following limit

F = lim_{Δx→0, N→∞, b-a fixed} [ Δx Σ_{n=0}^{N-1} f(x_n) ].  (3.3)

The first approximation which can be made is to drop the limit. We get the so-called rectangular approximation given by

F_N = Δx Σ_{n=0}^{N-1} f(x_n).  (3.4)


General integration algorithms approximate the integral F by

F_N = Σ_{n=0}^{N} f(x_n) w_n.  (3.5)

In other words we evaluate the function f(x) at N+1 points in the interval [a,b], then we sum the values f(x_n) with some corresponding weights w_n. For example, in the rectangular approximation (3.4) the values f(x_n) are summed with equal weights w_n = Δx for n = 0, ..., N-1 and w_N = 0. It is also clear that the estimate F_N of the integral F becomes exact only in the large N limit.
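As a Python sketch of the rectangular approximation (3.4) (the lab exercises in this book use Fortran; names here are mine), applied to the test integrand of Lab Problem 4 whose exact value is 1 + 1 + 1 = 3:

```python
def rectangular(f, a, b, N):
    """Rectangular approximation (3.4): F_N = dx * sum of f at the left edges."""
    dx = (b - a) / N
    return dx * sum(f(a + n * dx) for n in range(N))

approx = rectangular(lambda x: 2*x + 3*x**2 + 4*x**3, 0.0, 1.0, 1000)
```

With N = 1000 the error is of order 1/N, i.e. a few parts in a thousand, consistent with the error analysis of section 3.4.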

3.2 Trapezoidal Approximation

The trapezoid rule states that we can approximate the integral by a sum of trapezoids. In the subinterval [x_n, x_{n+1}] we replace the function f(x) by a straight line connecting the two points (x_n, f(x_n)) and (x_{n+1}, f(x_{n+1})). The trapezoid has as vertical sides the two straight lines x = x_n and x = x_{n+1}. The base is the interval Δx = x_{n+1} - x_n. It is not difficult to convince ourselves that the area of this trapezoid is

(f(x_{n+1}) - f(x_n))Δx/2 + f(x_n)Δx = (f(x_{n+1}) + f(x_n))Δx/2.  (3.6)

The integral F computed using the trapezoid approximation is therefore given by summing the contributions from all the N subintervals, viz

T_N = Σ_{n=0}^{N-1} (f(x_{n+1}) + f(x_n))Δx/2 = [ (1/2)f(x_0) + Σ_{n=1}^{N-1} f(x_n) + (1/2)f(x_N) ] Δx.  (3.7)

We remark that the weights here are given by w_0 = Δx/2, w_n = Δx for n = 1, ..., N-1, and w_N = Δx/2.

3.3 Parabolic Approximation or Simpson's Rule

In this case we approximate the function in the subinterval [x_n, x_{n+1}] by a parabola given by

f(x) = αx² + βx + γ.  (3.8)

The area of the corresponding box is thus given by

∫_{x_n}^{x_{n+1}} dx (αx² + βx + γ) = [ αx³/3 + βx²/2 + γx ]_{x_n}^{x_{n+1}}.  (3.9)

Let us go back and consider the integral

∫_{-1}^{1} dx (αx² + βx + γ) = 2α/3 + 2γ.  (3.10)

We remark that

f(-1) = α - β + γ, f(0) = γ, f(1) = α + β + γ.  (3.11)

Equivalently

α = (f(1) + f(-1))/2 - f(0), β = (f(1) - f(-1))/2, γ = f(0).  (3.12)

Thus

∫_{-1}^{1} dx (αx² + βx + γ) = f(-1)/3 + 4f(0)/3 + f(1)/3.  (3.13)

In other words we can express the integral of the function f(x) = αx² + βx + γ over the interval [-1,1] in terms of the values of this function f(x) at x = -1, 0, 1. Similarly we can express the integral of f(x) over the adjacent subintervals [x_{n-1}, x_n] and [x_n, x_{n+1}] in terms of the values of f(x) at x = x_{n+1}, x_n, x_{n-1}, viz

∫_{x_{n-1}}^{x_{n+1}} dx f(x) = ∫_{x_{n-1}}^{x_{n+1}} dx (αx² + βx + γ) = Δx [ f(x_{n-1})/3 + 4f(x_n)/3 + f(x_{n+1})/3 ].  (3.14)

By adding the contributions from each pair of adjacent subintervals we get the full integral

S_N = Δx Σ_{p=0}^{(N-2)/2} [ f(x_{2p})/3 + 4f(x_{2p+1})/3 + f(x_{2p+2})/3 ].  (3.15)

Clearly we must have N (the number of subintervals) even. We compute

S_N = (Δx/3) [ f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + 2f(x_4) + ... + 2f(x_{N-2}) + 4f(x_{N-1}) + f(x_N) ].  (3.16)

It is trivial to read from this expression the weights in this approximation. Let us now recall the trapezoidal approximation given by

T_N = [ f(x_0) + 2 Σ_{n=1}^{N-1} f(x_n) + f(x_N) ] Δx/2.  (3.17)


Let us also recall that NΔx = b - a is the length of the total interval, which is always kept fixed. Thus by doubling the number of subintervals we halve the width, viz

4T_{2N} = [ 2f(x̂_0) + 4 Σ_{n=1}^{2N-1} f(x̂_n) + 2f(x̂_{2N}) ] Δx̂
        = [ 2f(x̂_0) + 4 Σ_{n=1}^{N-1} f(x̂_{2n}) + 4 Σ_{n=0}^{N-1} f(x̂_{2n+1}) + 2f(x̂_{2N}) ] Δx̂
        = [ 2f(x_0) + 4 Σ_{n=1}^{N-1} f(x_n) + 4 Σ_{n=0}^{N-1} f(x̂_{2n+1}) + 2f(x_N) ] Δx̂.  (3.18)

In the above we have used the identification x̂_{2n} = x_n, n = 0, 1, ..., N, and Δx̂ = Δx/2 is the width of the subintervals on the doubled grid. Thus

4T_{2N} - T_N = [ f(x_0) + 2 Σ_{n=1}^{N-1} f(x_n) + 4 Σ_{n=0}^{N-1} f(x̂_{2n+1}) + f(x_N) ] Δx̂ = 3S_{2N},  (3.19)

where S_{2N} is Simpson's rule (3.16) evaluated on the 2N subintervals of width Δx̂.
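Both Simpson's weights (3.16) and the relation between two trapezoid estimates and Simpson's rule can be checked numerically. A Python sketch (the book's own codes are in Fortran; here S is evaluated on the doubled, 2N-subinterval grid):

```python
import math

def trapezoid(f, a, b, N):
    dx = (b - a) / N
    return dx * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + n * dx) for n in range(1, N)))

def simpson(f, a, b, N):
    """Simpson's rule (3.16); N must be even."""
    assert N % 2 == 0
    dx = (b - a) / N
    s = f(a) + f(b)
    for n in range(1, N):
        s += (4 if n % 2 == 1 else 2) * f(a + n * dx)
    return s * dx / 3.0

# Simpson on 2N subintervals equals (4 T_{2N} - T_N) / 3 identically
S = simpson(math.cos, 0.0, math.pi / 2, 200)
richardson = (4 * trapezoid(math.cos, 0.0, math.pi / 2, 200)
              - trapezoid(math.cos, 0.0, math.pi / 2, 100)) / 3.0
```

Since Simpson's rule integrates any cubic exactly, it reproduces the Lab Problem 4 integrand 2x + 3x² + 4x³ with no discretization error at all.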

3.4 Errors

The error estimates for numerical integration are computed as follows. We start with the Taylor expansion

f(x) = f(x_n) + (x - x_n) f⁽¹⁾(x_n) + (1/2!)(x - x_n)² f⁽²⁾(x_n) + ...  (3.20)

Thus

∫_{x_n}^{x_{n+1}} dx f(x) = f(x_n)Δx + (1/2!) f⁽¹⁾(x_n)(Δx)² + (1/3!) f⁽²⁾(x_n)(Δx)³ + ...  (3.21)

The error in the interval [x_n, x_{n+1}] in the rectangular approximation is

∫_{x_n}^{x_{n+1}} dx f(x) - f(x_n)Δx = (1/2!) f⁽¹⁾(x_n)(Δx)² + (1/3!) f⁽²⁾(x_n)(Δx)³ + ...  (3.22)

This is of order 1/N². But we have N subintervals. Thus the total error is of order 1/N. The error in the interval [x_n, x_{n+1}] in the trapezoidal approximation is

∫_{x_n}^{x_{n+1}} dx f(x) - (1/2)(f(x_n) + f(x_{n+1}))Δx = ∫_{x_n}^{x_{n+1}} dx f(x) - (1/2)(2f(x_n) + Δx f⁽¹⁾(x_n) + (1/2!)(Δx)² f⁽²⁾(x_n) + ...)Δx
= (1/3! - (1/2)(1/2!)) f⁽²⁾(x_n)(Δx)³ + ...  (3.23)

This is of order 1/N³, and thus the total error is of order 1/N². In order to compute the error in the interval [x_{n-1}, x_{n+1}] in the parabolic approximation we compute

∫_{x_{n-1}}^{x_n} dx f(x) + ∫_{x_n}^{x_{n+1}} dx f(x) = 2f(x_n)Δx + (2/3!)(Δx)³ f⁽²⁾(x_n) + (2/5!)(Δx)⁵ f⁽⁴⁾(x_n) + ...  (3.24)

Also we compute

(Δx/3)(f(x_{n+1}) + f(x_{n-1}) + 4f(x_n)) = 2f(x_n)Δx + (2/3!)(Δx)³ f⁽²⁾(x_n) + (2/(3·4!))(Δx)⁵ f⁽⁴⁾(x_n) + ...  (3.25)

Hence the error in the interval [x_{n-1}, x_{n+1}] in the parabolic approximation is

∫_{x_{n-1}}^{x_{n+1}} dx f(x) - (Δx/3)(f(x_{n+1}) + f(x_{n-1}) + 4f(x_n)) = (2/5! - 2/(3·4!))(Δx)⁵ f⁽⁴⁾(x_n) + ...  (3.26)

This is of order 1/N⁵. The total error is therefore of order 1/N⁴.

3.5 Lab Problem 4: Numerical Integration

(1) We take the integral

I = ∫_0^1 f(x) dx,  f(x) = 2x + 3x² + 4x³.

Calculate the value of this integral using the rectangular approximation. Compare with the exact result. Hint: you can code the function using either a subroutine or a function.

(2) Calculate the numerical error as a function of N. Compare with the theory.

(3) Repeat the computation using the trapezoid method and Simpson's rule.

(4) Take now the integrals

I = ∫_0^{π/2} cos x dx,  I = ∫_1^e (1/x) dx,  I = ∫_{-1}^{+1} lim_{ε→0} [ (1/π) ε/(x² + ε²) ] dx.


4

Newton-Raphson Algorithms and Interpolation

4.1 Bisection Algorithm

Let f be some function. We are interested in the solutions (roots) of the equation

f(x) = 0.  (4.1)

The bisection algorithm works as follows. We start with two values of x, say x₊ and x₋, such that

f(x₋) < 0, f(x₊) > 0.  (4.2)

In other words the function changes sign in the interval between x₋ and x₊, and thus there must exist a root between x₋ and x₊. If the function changes from positive to negative as we increase x, we conclude that x₊ ≤ x₋. We bisect the interval [x₊, x₋] at

x = (x₊ + x₋)/2.  (4.3)

If f(x) f(x₊) > 0 then x₊ will be changed to the point x, otherwise x₋ will be changed to the point x. We continue this process until the change in x becomes insignificant or until the error becomes smaller than some tolerance. The relative error is defined by

error = (x₊ - x₋)/x.  (4.4)

Clearly the absolute error e = x_i - x_f is halved at each iteration, and thus the rate of convergence of the bisection rule is linear. This is slow.


4.2 Newton-Raphson Algorithm

We start with a guess x₀. The new guess x is written as x₀ plus some unknown correction Δx, viz

x = x₀ + Δx.  (4.5)

Next we expand the function f(x) around x₀, namely

f(x) = f(x₀) + Δx df/dx|_{x=x₀}.  (4.6)

The correction Δx is determined by finding the intersection point of this linear approximation of f(x) with the x axis. Thus

f(x₀) + Δx df/dx|_{x=x₀} = 0 ⟹ Δx = -f(x₀)/(df/dx)|_{x=x₀}.  (4.7)

The derivative of the function f is required in this calculation. In complicated problems it is much simpler to evaluate the derivative numerically than analytically. In these cases the derivative may be given by the forward-difference approximation (with some δx not necessarily equal to Δx)

df/dx|_{x=x₀} = (f(x₀ + δx) - f(x₀))/δx.  (4.8)

In summary, this method works by drawing the tangent to the function f(x) at the old guess x₀ and then using the intercept with the x axis as the new, hopefully better, guess x. The process is repeated until the change in x becomes insignificant. Next we compute the rate of convergence of the Newton-Raphson algorithm. Starting from x_i the next guess is x_{i+1} given by

x_{i+1} = x_i - f(x_i)/f′(x_i).  (4.9)

The absolute error at step i is ε_i = x - x_i, while the absolute error at step i+1 is ε_{i+1} = x - x_{i+1}, where x is the actual root. Then

ε_{i+1} = ε_i + f(x_i)/f′(x_i).  (4.10)

By using Taylor expansion we have

f(x) = 0 = f(x_i) + (x - x_i) f′(x_i) + ((x - x_i)²/2!) f″(x_i) + ...  (4.11)

In other words

f(x_i) = -ε_i f′(x_i) - (ε_i²/2!) f″(x_i) + ...  (4.12)

Therefore the error is given by

ε_{i+1} = -(ε_i²/2) f″(x_i)/f′(x_i).  (4.13)

This is quadratic convergence, which is faster than the bisection rule.

4.3 Hybrid Method

We can combine the certainty of the bisection rule in finding a root with the fast convergence of the Newton-Raphson algorithm into a hybrid algorithm as follows. First we must know that the root is bounded in some interval [a,c]. We can use for example a graphical method. Next we start from some initial guess b. We take a Newton-Raphson step

b′ = b - f(b)/f′(b).  (4.14)

We check whether or not this step is bounded in the interval [a,c]. In other words we must check that

a ≤ b - f(b)/f′(b) ≤ c ⟺ (b - c)f′(b) - f(b) ≤ 0 ≤ (b - a)f′(b) - f(b).  (4.15)

Therefore if

[(b - c)f′(b) - f(b)][(b - a)f′(b) - f(b)] < 0  (4.16)

then the Newton-Raphson step is accepted; otherwise we take a bisection step instead.
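One possible Python sketch of this hybrid (the bracket-maintenance details are my own; the acceptance test is exactly condition (4.16)):

```python
def hybrid_root(f, fprime, a, c, tol=1e-12, itmax=200):
    """Newton-Raphson step when condition (4.16) says it stays in [a, c],
    bisection step otherwise.  Assumes f(a) and f(c) have opposite signs."""
    b = 0.5 * (a + c)
    for _ in range(itmax):
        fb, fpb = f(b), fprime(b)
        if abs(fb) < tol:
            return b
        # shrink the bracket around the root using the sign of f(b)
        if f(a) * fb < 0.0:
            c = b
        else:
            a = b
        crit = ((b - c) * fpb - fb) * ((b - a) * fpb - fb)
        b = b - fb / fpb if crit < 0.0 else 0.5 * (a + c)
    return b

# classic test equation x^3 - 2x - 5 = 0 on [2, 3]
root = hybrid_root(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, 2.0, 3.0)
```

The bracket guarantees a root is never lost, while accepted Newton steps give the quadratic convergence of the previous section.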

4.4 Lagrange Interpolation

Let us first recall that Taylor expansion allows us to approximate a function at a point x if the function and its derivatives are known at some neighbouring point x₀. Lagrange interpolation tries to approximate a function at a point x if only the values of the function at several other points are known. Thus this method does not require knowledge of the derivatives of the function. We start from the Taylor expansion

f(y) = f(x) + (y - x) f′(x) + (1/2!)(y - x)² f″(x) + ...  (4.17)

Let us assume that the function is known at three points x₁, x₂ and x₃. In this case we can approximate the function f(x) by some function p(x) and write

f(y) = p(x) + (y - x) p′(x) + (1/2!)(y - x)² p″(x).  (4.18)

We have

f(x₁) = p(x) + (x₁ - x) p′(x) + (1/2!)(x₁ - x)² p″(x)
f(x₂) = p(x) + (x₂ - x) p′(x) + (1/2!)(x₂ - x)² p″(x)
f(x₃) = p(x) + (x₃ - x) p′(x) + (1/2!)(x₃ - x)² p″(x).  (4.19)

28ydri et al, lectures on computational physics

We can immediately find

p(x) = (1/(1 + a₂ + a₃)) f(x₁) + (a₂/(1 + a₂ + a₃)) f(x₂) + (a₃/(1 + a₂ + a₃)) f(x₃).  (4.20)

The coefficients a₂ and a₃ solve the equations

a₂(x₂ - x)² + a₃(x₃ - x)² = -(x₁ - x)²
a₂(x₂ - x) + a₃(x₃ - x) = -(x₁ - x).  (4.21)

We find

a₂ = (x₁ - x)(x₃ - x₁)/((x₂ - x)(x₂ - x₃)),  a₃ = -(x₁ - x)(x₂ - x₁)/((x₃ - x)(x₂ - x₃)).  (4.22)

Thus

1 + a₂ + a₃ = (x₃ - x₁)(x₂ - x₁)/((x₂ - x)(x₃ - x)).  (4.23)

Therefore we get

p(x) = ((x - x₂)(x - x₃)/((x₁ - x₂)(x₁ - x₃))) f(x₁) + ((x - x₁)(x - x₃)/((x₂ - x₁)(x₂ - x₃))) f(x₂) + ((x - x₁)(x - x₂)/((x₃ - x₁)(x₃ - x₂))) f(x₃).  (4.24)

This is a quadratic polynomial.

Let x be some independent variable with tabulated values x_i, i = 1, 2, ..., n. The dependent variable is a function f(x) with tabulated values f_i = f(x_i). Let us then assume that we can approximate f(x) by a polynomial of degree n-1, viz

p(x) = a₀ + a₁x + a₂x² + ... + a_{n-1}x^{n-1}.  (4.25)

A polynomial which goes through the n points (x_i, f_i = f(x_i)) was given by Lagrange. This is given by

p(x) = f₁λ₁(x) + f₂λ₂(x) + ... + f_nλ_n(x),  (4.26)

λ_i(x) = Π_{j=1, j≠i}^{n} (x - x_j)/(x_i - x_j).  (4.27)

We remark

λ_i(x_j) = δ_{ij},  (4.28)

Σ_{i=1}^{n} λ_i(x) = 1.  (4.29)

The Lagrange polynomial can be used to fit the entire table with n equal to the number of points in the table. But it is preferable to use the Lagrange polynomial to fit only a small region of the table with a small value of n. In other words, use several polynomials to cover the whole table; the fit considered here is local and not global.
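A direct Python sketch of (4.26)-(4.27) (names are mine; the sample data simply place three points on f(x) = x², which the quadratic (4.24) must reproduce exactly):

```python
def lagrange_interpolate(xs, fs, x):
    """Evaluate the Lagrange polynomial (4.26)-(4.27) through (xs, fs) at x."""
    total = 0.0
    for i, (xi, fi) in enumerate(zip(xs, fs)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)   # lambda_i(x), eq. (4.27)
        total += fi * li
    return total

value = lagrange_interpolate([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5)
```

The properties (4.28) and (4.29) can also be checked directly: feeding data (1, 0, 0) returns 1 at the first node, and constant data is reproduced at any x.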

4.5 Cubic Spline Interpolation

We consider n points (x₁, f(x₁)), (x₂, f(x₂)), ..., (x_n, f(x_n)) in the plane. In every interval x_j ≤ x ≤ x_{j+1} we approximate the function f(x) with a cubic polynomial of the form

p(x) = a_j(x - x_j)³ + b_j(x - x_j)² + c_j(x - x_j) + d_j.  (4.30)

We assume that

p_j = p(x_j) = f(x_j).  (4.31)

In other words the p_j for all j = 1, 2, ..., n-1 are known. From the above equation we conclude that

d_j = p_j.  (4.32)

We compute

p′(x) = 3a_j(x - x_j)² + 2b_j(x - x_j) + c_j,  (4.33)
p″(x) = 6a_j(x - x_j) + 2b_j.  (4.34)

Thus we get, by substituting x = x_j into p″(x), the result

b_j = p″_j/2.  (4.35)

By substituting x = x_{j+1} into p″(x) we get the result (with h_j = x_{j+1} - x_j)

a_j = (p″_{j+1} - p″_j)/(6h_j).  (4.36)

By substituting x = x_{j+1} into p(x) we get

p_{j+1} = a_j h_j³ + b_j h_j² + c_j h_j + p_j.  (4.37)

By using the values of a_j and b_j we obtain

c_j = (p_{j+1} - p_j)/h_j - (h_j/6)(p″_{j+1} + 2p″_j).  (4.38)

Hence

p(x) = ((p″_{j+1} - p″_j)/(6h_j))(x - x_j)³ + (p″_j/2)(x - x_j)² + [ (p_{j+1} - p_j)/h_j - (h_j/6)(p″_{j+1} + 2p″_j) ](x - x_j) + p_j.  (4.39)


In other words the polynomials are determined from p_j and p″_j. The p_j are known, given by p_j = f(x_j). It remains to determine p″_j. We take the derivative of the above equation,

p′(x) = ((p″_{j+1} - p″_j)/(2h_j))(x - x_j)² + p″_j(x - x_j) + [ (p_{j+1} - p_j)/h_j - (h_j/6)(p″_{j+1} + 2p″_j) ].  (4.40)

This is the derivative in the interval [x_j, x_{j+1}]. We compute

p′(x_j) = (p_{j+1} - p_j)/h_j - (h_j/6)(p″_{j+1} + 2p″_j).  (4.41)

The derivative in the interval [x_{j-1}, x_j] is

p′(x) = ((p″_j - p″_{j-1})/(2h_{j-1}))(x - x_{j-1})² + p″_{j-1}(x - x_{j-1}) + [ (p_j - p_{j-1})/h_{j-1} - (h_{j-1}/6)(p″_j + 2p″_{j-1}) ].  (4.42)

We compute

p′(x_j) = ((p″_j - p″_{j-1})/2) h_{j-1} + p″_{j-1} h_{j-1} + [ (p_j - p_{j-1})/h_{j-1} - (h_{j-1}/6)(p″_j + 2p″_{j-1}) ].  (4.43)

By matching the two expressions for p′(x_j) we get

h_{j-1} p″_{j-1} + 2(h_j + h_{j-1}) p″_j + h_j p″_{j+1} = 6[ (p_{j+1} - p_j)/h_j - (p_j - p_{j-1})/h_{j-1} ].  (4.44)

These are n-2 equations, since j = 2, ..., n-1, for the n unknowns p″_j. We need two more equations. These are obtained by computing the first derivative p′(x) at x = x₁ and x = x_n. We obtain the two equations

h₁(p″₂ + 2p″₁) = 6(p₂ - p₁)/h₁ - 6p′₁,  (4.45)
h_{n-1}(p″_{n-1} + 2p″_n) = -6(p_n - p_{n-1})/h_{n-1} + 6p′_n.  (4.46)

The n equations (4.44), (4.45) and (4.46) correspond to a tridiagonal linear system. In general p′₁ and p′_n are not known. In this case we may use the natural spline, in which the second derivative vanishes at the end points and hence

(p₂ - p₁)/h₁ - p′₁ = (p_n - p_{n-1})/h_{n-1} - p′_n = 0.  (4.47)

4.6 Lab Problem 5: Newton-Raphson Algorithm

A particle of mass m moves inside a potential well of height V and length 2a centered around 0. We are interested in the states of the system which have energies less than V, i.e. bound states. The states of the system can be even or odd. The energies associated with the even wave functions are solutions of the transcendental equation

α tan αa = β,

α = √(2mE/ħ²),  β = √(2m(V - E)/ħ²).

In the case of the infinite potential well we find the solutions

E_n = (n + 1/2)² π² ħ²/(2ma²),  n = 0, 1, ...

We choose (dropping units)

ħ = 1, a = 1, 2m = 1.

In order to find numerically the energies E_n we will use the Newton-Raphson algorithm, which allows us to find the roots of the equation f(x) = 0 as follows. From an initial guess x₀, the first approximation x₁ to the solution is determined from the intersection of the tangent to the function f(x) at x₀ with the x-axis. This is given by

x₁ = x₀ - f(x₀)/f′(x₀).

Next, by using x₁, we repeat the same step in order to find the second approximation x₂ to the solution. In general the approximation x_{i+1} to the desired solution in terms of the approximation x_i is given by the equation

x_{i+1} = x_i - f(x_i)/f′(x_i).

(1) For V = 10, determine the solutions using the graphical method. Consider the two functions

f(α) = tan αa,  g(α) = β/α = √(V/α² - 1).

(2) Find, using the Newton-Raphson method, the two solutions with a tolerance equal to 10⁻⁸. For the first solution we take the initial guess α = π/a and for the second solution we take the initial guess α = 2π/a.

(3) Repeat for V = 20.

(4) Find the 4 solutions for V = 100. Use the graphical method to determine the initial guess each time.

(5) Repeat the above questions using the bisection method.


5

The Solar System - The Runge-Kutta Methods

5.1 The Solar System

5.1.1 Newton's Second Law

We consider the motion of the Earth around the Sun. Let r be the distance, and M_s and M_e the masses of the Sun and the Earth respectively. We neglect the effect of the other planets and the motion of the Sun (i.e. we assume that M_s >> M_e). The goal is to calculate the position of the Earth as a function of time. We start from Newton's second law of motion

M_e d²r⃗/dt² = -(G M_e M_s/r³) r⃗ = -(G M_e M_s/r³)(x i⃗ + y j⃗).  (5.1)

We get the two equations

d²x/dt² = -(G M_s/r³) x,  (5.2)
d²y/dt² = -(G M_s/r³) y.  (5.3)

We replace these two second-order differential equations by the four first-order differential equations

dx/dt = v_x,  (5.4)
dv_x/dt = -(G M_s/r³) x,  (5.5)
dy/dt = v_y,  (5.6)
dv_y/dt = -(G M_s/r³) y.  (5.7)

We recall

r = √(x² + y²).  (5.8)

5.1.2 Astronomical Units and Initial Conditions

The distance will be measured in astronomical units (AU), whereas time will be measured in years. One astronomical unit of length (1 AU) is equal to the average distance between the Earth and the Sun, viz 1 AU = 1.5 × 10¹¹ m. The astronomical unit of mass can be found as follows. Assuming a circular orbit we have

M_e v²/r = G M_s M_e/r².  (5.9)

Equivalently

G M_s = v² r.  (5.10)

The radius is r = 1 AU. The velocity of the Earth is v = 2πr/yr = 2π AU/yr. Hence

G M_s = 4π² AU³/yr².  (5.11)

For the numerical simulations it is important to determine the correct initial conditions. The orbit of Mercury is known to be an ellipse with eccentricity e = 0.206 and radius (semimajor axis) a = 0.39 AU, with the Sun at one of the foci. The distance between the Sun and the center is ea. The first initial condition is x₀ = r₁, y₀ = 0, where r₁ is the maximum distance from Mercury to the Sun, i.e. r₁ = (1 + e)a = 0.47 AU. The second initial condition is the velocity (0, v₁), which can be computed using conservation of energy and angular momentum. For example, by comparing with the point (0, b) on the orbit, where b is the semiminor axis, i.e. b = a√(1 - e²), the velocity (v₂, 0) there can be obtained in terms of (0, v₁) from conservation of angular momentum as follows:

r₁ v₁ = b v₂ ⟹ v₂ = r₁ v₁/b.  (5.12)

Next, conservation of energy yields

-G M_s M_m/r₁ + (1/2) M_m v₁² = -G M_s M_m/r₂ + (1/2) M_m v₂².  (5.13)

In the above, r₂ = √(e²a² + b²) is the distance between the Sun and Mercury at the point (0, b). By substituting the value of v₂ we get an equation for v₁, which is given by

v₁ = √( (G M_s/a)(1 - e)/(1 + e) ) = 8.2 AU/yr.  (5.14)

5.1.3 Kepler's Laws

Kepler's laws are given by the following three statements:

• The planets move in elliptical orbits around the Sun. The Sun resides at one focus.
• The line joining the Sun with any planet sweeps out equal areas in equal times.
• Given an orbit with a period T and a semimajor axis a, the ratio T²/a³ is a constant.

The derivation of these three laws proceeds as follows. We work in polar coordinates. Newton's second law reads

M_e d²r⃗/dt² = -(G M_s M_e/r²) r̂.  (5.15)

We use dr̂/dt = (dθ/dt)θ̂ and dθ̂/dt = -(dθ/dt)r̂ to derive dr⃗/dt = (dr/dt)r̂ + r(dθ/dt)θ̂ and d²r⃗/dt² = (d²r/dt² - r(dθ/dt)²)r̂ + (r d²θ/dt² + 2(dr/dt)(dθ/dt))θ̂. Newton's second law decomposes into the two equations

r d²θ/dt² + 2(dr/dt)(dθ/dt) = 0,  (5.16)
d²r/dt² - r(dθ/dt)² = -G M_s/r².  (5.17)

Let us recall that the angular momentum per unit mass is defined by l⃗ = r⃗ × dr⃗/dt = r²(dθ/dt) r̂ × θ̂. Thus l = r² dθ/dt. Equation (5.16) is precisely the requirement that angular momentum is conserved. Indeed, we compute

dl/dt = r( r d²θ/dt² + 2(dr/dt)(dθ/dt) ) = 0.  (5.18)

Now we remark that the area swept by the vector r⃗ in a time interval dt is dA = (r × r dθ)/2, where dθ is the angle traveled by r⃗ during dt. Clearly

dA/dt = l/2.  (5.19)

In other words the planet sweeps equal areas in equal times since l is conserved. This is Kepler's second law.

The second equation (5.17) now becomes

d²r/dt² = l²/r³ - G M_s/r².  (5.20)

By multiplying this equation with dr/dt we obtain

dE/dt = 0,  E = (1/2)(dr/dt)² + l²/(2r²) - G M_s/r.  (5.21)


This is precisely the statement of conservation of energy. E is the energy per unit mass. Solving for dt in terms of dr we obtain

dt = dr / √( 2(E - l²/(2r²) + G M_s/r) ).  (5.22)

However dt = (r² dθ)/l. Thus

dθ = l dr / ( r² √( 2(E - l²/(2r²) + G M_s/r) ) ).  (5.23)

By integrating this equation we obtain (with u = 1/r)

θ = ∫ l dr / ( r² √( 2(E - l²/(2r²) + G M_s/r) ) ) = -∫ du / √( 2E/l² + 2G M_s u/l² - u² ).  (5.24)

This integral can be done explicitly. We get

θ = -arccos( (u - C)/(eC) ) + θ′,  e = √(1 + 2l²E/(G²M_s²)),  C = G M_s/l².  (5.25)

By inverting this equation we get the equation of an ellipse with eccentricity e, since E < 0, viz

1/r = C(1 + e cos(θ - θ′)).  (5.26)

This is Kepler's first law. The angle at which r is maximal is θ - θ′ = π. This distance is precisely (1 + e)a, where a is the semi-major axis of the ellipse, since ea is the distance between the Sun, which is at one of the two foci, and the center of the ellipse. Hence we obtain the relation

(1 - e²)a = 1/C = l²/(G M_s).  (5.27)

From equation (5.19) we can derive Kepler's third law. By integrating both sides of the equation over a single period T and then taking the square we get

A² = (1/4) l² T².  (5.28)

A is the area of the ellipse, i.e. A = πab, where the semi-minor axis b is related to the semi-major axis a by b = a√(1 - e²). Hence

π² a⁴ (1 - e²) = (1/4) l² T².  (5.29)

By using equation (5.27) we get the desired formula

T²/a³ = 4π²/(G M_s).  (5.30)

5.1.4 The Inverse-Square Law and Stability of Orbits

Any object with mass generates a gravitational field, and thus gravitational field lines will emanate from the object and radiate outward to infinity. The number of field lines N is proportional to the mass. The density of field lines crossing a sphere of radius r surrounding this object is given by N/4πr². This is the origin of the inverse-square law. Therefore any other object placed in this gravitational field will experience a gravitational force proportional to the number of field lines which intersect it. If the distance between this second object and the source is increased, the force on it will become weaker because the number of field lines which intersect it will decrease as we move further away from the source.

5.2 Euler-Cromer Algorithm

The time discretization is

t ≡ t(i) = iΔt,  i = 0, ..., N.  (5.31)

The total time interval is T = NΔt. We define x(t) = x(i), v_x(t) = v_x(i), y(t) = y(i), v_y(t) = v_y(i). Equations (5.4), (5.5), (5.6), (5.7) and (5.8) become (with i = 0, ..., N)

v_x(i+1) = v_x(i) - (G M_s/(r(i))³) x(i) Δt,  (5.32)
x(i+1) = x(i) + v_x(i) Δt,  (5.33)
v_y(i+1) = v_y(i) - (G M_s/(r(i))³) y(i) Δt,  (5.34)
y(i+1) = y(i) + v_y(i) Δt,  (5.35)
r(i) = √(x(i)² + y(i)²).  (5.36)

This is the Euler algorithm. It can also be rewritten with x̂(i) = x(i-1), ŷ(i) = y(i-1), v̂_x(i) = v_x(i-1), v̂_y(i) = v_y(i-1), r̂(i) = r(i-1) and i = 1, ..., N+1 as

v̂_x(i+1) = v̂_x(i) - (G M_s/(r̂(i))³) x̂(i) Δt,  (5.37)
x̂(i+1) = x̂(i) + v̂_x(i) Δt,  (5.38)
v̂_y(i+1) = v̂_y(i) - (G M_s/(r̂(i))³) ŷ(i) Δt,  (5.39)
ŷ(i+1) = ŷ(i) + v̂_y(i) Δt,  (5.40)
r̂(i) = √(x̂(i)² + ŷ(i)²).  (5.41)

In order to maintain energy conservation we employ the Euler-Cromer algorithm. We calculate, as in the Euler algorithm, the velocity at time step i+1 by using the position and velocity at time step i. However, we compute the position at time step i+1 by using the position at time step i and the velocity at time step i+1, viz

v̂_x(i+1) = v̂_x(i) - (G M_s/(r̂(i))³) x̂(i) Δt,  (5.42)
x̂(i+1) = x̂(i) + v̂_x(i+1) Δt,  (5.43)
v̂_y(i+1) = v̂_y(i) - (G M_s/(r̂(i))³) ŷ(i) Δt,  (5.44)
ŷ(i+1) = ŷ(i) + v̂_y(i+1) Δt.  (5.45)
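A Python sketch of the update (5.42)-(5.45) (the book's simulations are in Fortran; the circular-orbit test data are mine):

```python
import math

def euler_cromer_orbit(x, y, vx, vy, dt, n, GMs=4 * math.pi**2):
    """Euler-Cromer scheme (5.42)-(5.45): update the velocity first,
    then the position with the NEW velocity."""
    for _ in range(n):
        r3 = (x * x + y * y) ** 1.5
        vx -= GMs * x / r3 * dt
        vy -= GMs * y / r3 * dt
        x += vx * dt
        y += vy * dt
    return x, y, vx, vy

# circular Earth orbit (r = 1 AU, v = 2 pi AU/yr) evolved for one year
xf, yf, vxf, vyf = euler_cromer_orbit(1.0, 0.0, 0.0, 2 * math.pi, 1.0e-4, 10000)
```

After one year the planet returns close to its starting point and the orbital radius stays near 1 AU, whereas the plain Euler update (5.32)-(5.35) would make the orbit spiral outward.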

5.3 The Runge-Kutta Algorithm

5.3.1 The Method

The problem is still to solve the first-order differential equation

dy/dx = f(x, y).  (5.46)

In the Euler method we approximate the function y = y(x) in each interval [x_n, x_{n+1}] by the straight line

y_{n+1} = y_n + Δx f(x_n, y_n).  (5.47)

The slope f(x_n, y_n) of this line is exactly given by the slope of the function y = y(x) at the beginning of the interval [x_n, x_{n+1}]. Given the value y_n at x_n, we evaluate the value y_{n+1} at x_{n+1} using the Runge-Kutta method as follows. First, the middle of the interval [x_n, x_{n+1}], which is at the value x_n + (1/2)Δx, corresponds to the y-value y_{n+1} calculated using the Euler method, viz y_{n+1} = y_n + (1/2)k₁, where

k₁ = Δx f(x_n, y_n).  (5.48)

Second, the slope at this middle point (x_n + (1/2)Δx, y_n + (1/2)k₁), which is given by

k₂/Δx = f(x_n + (1/2)Δx, y_n + (1/2)k₁),  (5.49)

is the value of the slope which will be used to estimate the correct value of y_{n+1} at x_{n+1}, using again the Euler method, namely

y_{n+1} = y_n + k₂.  (5.50)

In summary, the Runge-Kutta algorithm is given by

k₁ = Δx f(x_n, y_n)
k₂ = Δx f(x_n + (1/2)Δx, y_n + (1/2)k₁)
y_{n+1} = y_n + k₂.  (5.51)

The error in this method is proportional to Δx³. This can be shown as follows. We have

y(x + Δx) = y(x) + Δx dy/dx + (1/2)(Δx)² d²y/dx² + ...
          = y(x) + Δx f(x, y) + (1/2)(Δx)² (d/dx)f(x, y) + ...
          = y(x) + Δx [ f(x, y) + (1/2)Δx ∂f/∂x + (1/2)Δx f(x, y) ∂f/∂y ] + ...
          = y(x) + Δx f(x + (1/2)Δx, y + (1/2)Δx f(x, y)) + O(Δx³)
          = y(x) + Δx f(x + (1/2)Δx, y + (1/2)k₁) + O(Δx³)
          = y(x) + k₂ + O(Δx³).  (5.52)

Let us finally note that the above Runge-Kutta method is, strictly speaking, the second-order Runge-Kutta method. The first-order Runge-Kutta method is the Euler algorithm. The higher-order Runge-Kutta methods will not be discussed here.
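The two-stage update (5.51) can be sketched generically in Python (the test problem dy/dx = y with its known solution e^x is my own choice):

```python
import math

def rk2(f, x0, y0, dx, n):
    """Second-order Runge-Kutta (5.51): an Euler half-step to the midpoint,
    then a full step with the midpoint slope."""
    x, y = x0, y0
    for _ in range(n):
        k1 = dx * f(x, y)
        k2 = dx * f(x + 0.5 * dx, y + 0.5 * k1)
        y += k2
        x += dx
    return y

# dy/dx = y, y(0) = 1, integrated to x = 1; the exact answer is e
y_end = rk2(lambda x, y: y, 0.0, 1.0, 1.0e-3, 1000)
```

Halving Δx reduces the accumulated error by roughly a factor of four, as expected for a method whose global error is O(Δx²).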


5.3.2 Example 1: The Harmonic Oscillator

Let us apply this method to the problem of the harmonic oscillator. We have the differential equations

dθ/dt = ω,
dω/dt = -(g/l)θ.  (5.53)

Euler's equations read

θ_{n+1} = θ_n + Δt ω_n,
ω_{n+1} = ω_n - (g/l) θ_n Δt.  (5.54)

First we consider the function θ = θ(t). The middle point is (t_n + (1/2)Δt, θ_n + (1/2)k₁), where k₁ = Δt ω_n. For the function ω = ω(t) the middle point is (t_n + (1/2)Δt, ω_n + (1/2)k₃), where k₃ = -(g/l)Δt θ_n. Therefore we have

k₁ = Δt ω_n,
k₃ = -(g/l)Δt θ_n.  (5.55)

The slope of the function θ(t) at its middle point is

k₂/Δt = ω_n + (1/2)k₃.  (5.56)

The slope of the function ω(t) at its middle point is

k₄/Δt = -(g/l)(θ_n + (1/2)k₁).  (5.57)

The Runge-Kutta solution is then given by

θ_{n+1} = θ_n + k₂,
ω_{n+1} = ω_n + k₄.  (5.58)

5.3.3 Example 2: The Solar System

Let us consider the equations

dx/dt = v_x,  (5.59)
dv_x/dt = -(G M_s/r³) x,  (5.60)
dy/dt = v_y,  (5.61)
dv_y/dt = -(G M_s/r³) y.  (5.62)

First we consider the function x = x(t). The middle point is (t_n + (1/2)Δt, x_n + (1/2)k₁), where k₁ = Δt v_{xn}. For the function v_x = v_x(t) the middle point is (t_n + (1/2)Δt, v_{xn} + (1/2)k₃), where k₃ = -(G M_s/r_n³)Δt x_n. Therefore we have

k₁ = Δt v_{xn},
k₃ = -(G M_s/r_n³)Δt x_n.  (5.63)

The slope of the function x(t) at the middle point is

k₂/Δt = v_{xn} + (1/2)k₃.  (5.64)

The slope of the function v_x(t) at the middle point is

k₄/Δt = -(G M_s/R_n³)(x_n + (1/2)k₁).  (5.65)

Next we consider the function y = y(t). The middle point is (t_n + (1/2)Δt, y_n + (1/2)k′₁), where k′₁ = Δt v_{yn}. For the function v_y = v_y(t) the middle point is (t_n + (1/2)Δt, v_{yn} + (1/2)k′₃), where k′₃ = -(G M_s/r_n³)Δt y_n. Therefore we have

k′₁ = Δt v_{yn},
k′₃ = -(G M_s/r_n³)Δt y_n.  (5.66)

The slope of the function y(t) at the middle point is

k′₂/Δt = v_{yn} + (1/2)k′₃.  (5.67)

The slope of the function v_y(t) at the middle point is

k′₄/Δt = -(G M_s/R_n³)(y_n + (1/2)k′₁).  (5.68)

In the above equations

R_n = √( (x_n + (1/2)k₁)² + (y_n + (1/2)k′₁)² ).  (5.69)


The Runge-Kutta solutions are then given by

x_{n+1} = x_n + k₂,
v_{x(n+1)} = v_{xn} + k₄,
y_{n+1} = y_n + k′₂,
v_{y(n+1)} = v_{yn} + k′₄.  (5.70)

5.4 Precession of the Perihelion of Mercury

The orbit of Mercury is elliptic. The orientation of the axes of the ellipse rotates with time. This is the precession of the perihelion (the point of the orbit nearest to the Sun) of Mercury. Mercury's perihelion makes one revolution roughly every 230,000 years, which corresponds to approximately 566 arcseconds per century. The gravitational forces of the other planets (in particular Jupiter) lead to a precession of 523 arcseconds per century. The remaining 43 arcseconds per century are accounted for by general relativity.

For objects too close together (like the Sun and Mercury) the force of gravity predicted by general relativity deviates from the inverse-square law. This force is given by

F = (G M_s M_m/r²)(1 + α/r²),  α = 1.1 × 10⁻⁸ AU².  (5.71)

We discuss here some of the numerical results obtained with the Runge-Kutta method for different values of α. We take the number of iterations and the time step to be N = 20000 and dt = 0.0001. The angle of the line joining the Sun and Mercury with the horizontal axis when Mercury is at the perihelion is found to change linearly with time.

We get the following rates of precession:

α = 0.0008:  dθ/dt = 8.414 ± 0.019
α = 0.001:   dθ/dt = 10.585 ± 0.018
α = 0.002:   dθ/dt = 21.658 ± 0.019
α = 0.004:   dθ/dt = 45.369 ± 0.017.  (5.72)

Thus

dθ/dt = aα,  a = 11209.2 ± 147.2 degr