
The sine and cosine integrals

Notes by G.J.O. Jameson

The complete sine integral: first method

In these notes, we consider the integrals of $\sin t/t$ and $\cos t/t$ on intervals like $(0,\infty)$, $(0,x)$ and $(x,\infty)$. Most of the material appeared in [Jam1]. Companion notes [Jam2], [Jam3] deal with integrals of $e^{it}/t^p$ and, more generally, $f(t)e^{it}$.

We start with the "complete sine integral":

THEOREM 1. We have
$$\int_0^{\infty}\frac{\sin t}{t}\,dt = \frac{\pi}{2}. \qquad (1)$$

Note first that there is no problem of convergence at 0, because $\frac{\sin t}{t} \to 1$ as $t \to 0$. A very quick and neat proof of (1) (already to be seen, for example, in the 1909 note [Har]) lies to hand if we assume the following well-known series identity: for $x \ne k\pi$,
$$\frac{1}{\sin x} = \sum_{n=-\infty}^{\infty}\frac{(-1)^n}{x + n\pi}. \qquad (2)$$
One proof of (2) [Wa, p. 17-18] is by taking $x = 0$ in the Fourier series for $\cos ax$ on $[-\pi,\pi]$.

To derive (1), note first that, since $\sin t/t$ is an even function,
$$\int_{-\infty}^{\infty}\frac{\sin t}{t}\,dt = 2\int_0^{\infty}\frac{\sin t}{t}\,dt.$$

Denote this by $I$. The substitution $t = x + n\pi$ gives
$$\int_{n\pi}^{(n+1)\pi}\frac{\sin t}{t}\,dt = (-1)^n\int_0^{\pi}\frac{\sin x}{x + n\pi}\,dx.$$
Assuming that termwise integration of the series is valid, we add these identities for all integers $n$ to obtain at once
$$I = \int_0^{\pi}\sin x\cdot\frac{1}{\sin x}\,dx = \pi.$$
The termwise integration (for any readers who care) is easily justified by uniform convergence, as follows. By combining the terms for $n$ and $-n$ and multiplying by $\sin x$, we can rewrite the series (2) as
$$\frac{\sin x}{x} + 2x\sin x\sum_{n=1}^{\infty}\frac{(-1)^n}{x^2 - n^2\pi^2} = 1.$$

For $0 < x < \pi$ and $n \ge 2$,
$$\left|\frac{2x\sin x}{x^2 - n^2\pi^2}\right| \le \frac{2\pi}{(n^2 - 1)\pi^2}.$$
Since $\sum_{n=2}^{\infty}1/(n^2-1)$ is convergent, it follows, by Weierstrass's "M-test", that the series converges uniformly on the open interval $(0,\pi)$: this is all we need.

We note some immediate variants and consequences of (1). First, for any $a > 0$, the substitution $at = u$ gives
$$\int_0^{\infty}\frac{\sin at}{t}\,dt = \int_0^{\infty}\frac{\sin u}{u}\,du = \frac{\pi}{2}.$$
Hence also the value of this integral is $-\frac{\pi}{2}$ for $a < 0$.

For $a > 0$, we deduce
$$\int_0^{\infty}\frac{\sin at\cos at}{t}\,dt = \frac{1}{2}\int_0^{\infty}\frac{\sin 2at}{t}\,dt = \frac{\pi}{4}. \qquad (3)$$
We will use this several times later.

Since $\sin(a+b)t + \sin(a-b)t = 2\sin at\cos bt$, we can also deduce
$$\int_0^{\infty}\frac{\sin at\cos bt}{t}\,dt = \begin{cases}\dfrac{\pi}{2} & \text{if } a > b \ge 0,\\[4pt] 0 & \text{if } b > a \ge 0.\end{cases} \qquad (4)$$

Integrating by parts, and using (3) and the fact that $\frac{\sin^2 t}{t} \to 0$ as $t \to 0$, we obtain
$$\int_0^{\infty}\frac{\sin^2 t}{t^2}\,dt = \Big[-\frac{\sin^2 t}{t}\Big]_0^{\infty} + \int_0^{\infty}\frac{2\sin t\cos t}{t}\,dt = \frac{\pi}{2}. \qquad (5)$$
This argument is reversible, so (5) equally implies (1). This is a viable alternative, because one can prove (5) in a similar way to (1), using the series $1/\sin^2 x = \sum_{n=-\infty}^{\infty}1/(x - n\pi)^2$; this method is followed in [Wa, p. 186-187].

By the cosine series, $\cos at - \cos bt = \frac{1}{2}(b^2 - a^2)t^2 + O(t^4)$ for small $t$, so $\frac{1}{t^2}(\cos at - \cos bt) \to \frac{1}{2}(b^2 - a^2)$ and $\frac{1}{t}(\cos at - \cos bt) \to 0$ as $t \to 0$. We deduce, for $a, b \ge 0$,
$$\int_0^{\infty}\frac{\cos at - \cos bt}{t^2}\,dt = \Big[-\frac{\cos at - \cos bt}{t}\Big]_0^{\infty} + \int_0^{\infty}\frac{-a\sin at + b\sin bt}{t}\,dt = \frac{\pi}{2}(b - a). \qquad (6)$$

Since $\cos(a-b)t - \cos(a+b)t = 2\sin at\sin bt$, we deduce
$$\int_0^{\infty}\frac{\sin at\sin bt}{t^2}\,dt = \frac{\pi b}{2} \qquad (7)$$
if $a \ge b \ge 0$ (hence also $\frac{\pi a}{2}$ if $b \ge a \ge 0$; of course, (5) is a special case).
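As a quick numerical sanity check on (1) and (3) (not part of the original notes), one can evaluate the integrals with standard quadrature. The sketch below assumes SciPy is available; it splits each integral at $t = 1$ and treats the oscillatory tail with QUADPACK's Fourier-weight rule.

# Numerical sanity check of (1) and (3); a sketch, not part of the original notes.
import numpy as np
from scipy.integrate import quad

def complete_sine_integral(a):
    """Integral of sin(a t)/t over (0, infinity), for a > 0."""
    head, _ = quad(lambda t: np.sin(a * t) / t, 0.0, 1.0)                 # regular part near 0
    tail, _ = quad(lambda t: 1.0 / t, 1.0, np.inf, weight='sin', wvar=a)  # Fourier-weight rule
    return head + tail

print(complete_sine_integral(1.0), np.pi / 2)   # (1): both about 1.5708
print(complete_sine_integral(3.0), np.pi / 2)   # the value is pi/2 for every a > 0

# (3) with a = 2: sin(at)cos(at)/t = sin(2at)/(2t), so the integral should be pi/4.
a = 2.0
head, _ = quad(lambda t: np.sin(a * t) * np.cos(a * t) / t, 0.0, 1.0)
tail, _ = quad(lambda t: 0.5 / t, 1.0, np.inf, weight='sin', wvar=2 * a)
print(head + tail, np.pi / 4)                   # both about 0.7854

The same splitting handles (4)-(7) in the obvious way.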

The incomplete sine integral

The "incomplete" sine integral is the function
$$\operatorname{Si}(x) = \int_0^x\frac{\sin t}{t}\,dt.$$
First, some simple facts about it. Since $\frac{\sin t}{t} \le 1$ for $t > 0$, we have $\operatorname{Si}(x) \le x$ for all $x > 0$. Of course, (1) says that $\operatorname{Si}(x) \to \frac{\pi}{2}$ as $x \to \infty$.

The substitution $at = u$ gives $\int_0^x\frac{\sin at}{t}\,dt = \operatorname{Si}(ax)$. In particular,
$$\operatorname{Si}(2x) = \int_0^x\frac{\sin 2t}{t}\,dt = 2\int_0^x\frac{\sin t\cos t}{t}\,dt.$$
Hence $\operatorname{Si}(2x) \le 2\int_0^x\cos t\,dt = 2\sin x$ for $0 \le x \le \frac{\pi}{2}$, equivalently $\operatorname{Si}(x) \le 2\sin\frac{1}{2}x$ for $0 \le x \le \pi$ (this is stronger than $\operatorname{Si}(x) \le x$).

By the fundamental theorem of calculus, the derivative $\operatorname{Si}'(x)$ is $\sin x/x$. Hence $\operatorname{Si}(x)$ is increasing on intervals $[2n\pi, (2n+1)\pi]$ and decreasing on intervals $[(2n-1)\pi, 2n\pi]$, so it has maxima at the points $(2n+1)\pi$ and minima at the points $2n\pi$.

PROPOSITION 1. $\operatorname{Si}(x) \ge 0$ for all $x > 0$, and its greatest value occurs at $x = \pi$.

Proof. Write
$$A_n = \int_{n\pi}^{(n+2)\pi}\frac{\sin t}{t}\,dt.$$
By substituting $t + \pi = u$ on $[n\pi, (n+1)\pi]$ and recombining, we see that
$$A_n = \int_{n\pi}^{(n+1)\pi}\Big(\frac{1}{t} - \frac{1}{t+\pi}\Big)\sin t\,dt,$$
in which $\frac{1}{t} - \frac{1}{t+\pi} > 0$. If $n$ is even, then $\sin t \ge 0$ on $[n\pi, (n+1)\pi]$, so $A_n \ge 0$ and $\operatorname{Si}[(n+2)\pi] \ge \operatorname{Si}(n\pi)$. Hence $\operatorname{Si}(2n\pi) \ge \cdots \ge \operatorname{Si}(2\pi) \ge \operatorname{Si}(0) = 0$ for all $n$. Since $\operatorname{Si}(x)$ increases on $[2n\pi, (2n+1)\pi]$ and decreases on $[(2n+1)\pi, (2n+2)\pi]$, it follows that $\operatorname{Si}(x) \ge 0$ for all $x \ge 0$. Meanwhile, if $n$ is odd, then $A_n \le 0$, so that $\operatorname{Si}(\pi) \ge \operatorname{Si}(3\pi) \ge \cdots$, hence the greatest value is $\operatorname{Si}(\pi)$.

By integrating the series
$$\frac{\sin t}{t} = \sum_{n=0}^{\infty}\frac{(-1)^n t^{2n}}{(2n+1)!},$$
we obtain the explicit series expression
$$\operatorname{Si}(x) = \sum_{n=0}^{\infty}\frac{(-1)^n x^{2n+1}}{(2n+1)!\,(2n+1)} = x - \frac{x^3}{3!\cdot 3} + \frac{x^5}{5!\cdot 5} - \cdots, \qquad (8)$$
from which, in principle, $\operatorname{Si}(x)$ can be calculated, though in practice the calculation is only pleasant for fairly small $x$. One finds, for example, $\operatorname{Si}(\pi) \approx 1.85194$ (recall that this is the greatest value) and $\operatorname{Si}(2\pi) \approx 1.41816$.
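As an illustration (not in the original notes), the partial sums of (8) reproduce the quoted values with plain Python arithmetic:

# Partial sums of the series (8) for Si(x); adequate for moderate x.
import math

def Si_series(x, terms=30):
    """Si(x) = sum_{n>=0} (-1)^n x^(2n+1) / ((2n+1)! (2n+1)), truncated after `terms` terms."""
    return sum((-1)**n * x**(2*n + 1) / (math.factorial(2*n + 1) * (2*n + 1))
               for n in range(terms))

print(Si_series(math.pi))      # about 1.8519, the greatest value of Si
print(Si_series(2 * math.pi))  # about 1.4182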

The complementary sine and cosine integrals

We cannot simply replace $\sin t$ by $\cos t$ in (1), or in the definition of $\operatorname{Si}(x)$, because the resulting integral would be divergent at 0. To formulate results that make sense for both $\sin t$ and $\cos t$, we consider instead the complementary integrals
$$S(x) = \int_x^{\infty}\frac{\sin t}{t}\,dt, \qquad C(x) = \int_x^{\infty}\frac{\cos t}{t}\,dt.$$
(Here I am departing from the established notation, which is $\operatorname{si}(x)$ and $\operatorname{ci}(x)$ where we have $S(x)$ and $C(x)$.)

By (1), we have $S(0) = \frac{\pi}{2}$ and $S(x) = \frac{\pi}{2} - \operatorname{Si}(x)$. By the remarks above, $S(x)$ has maxima at $2n\pi$ and minima at $(2n-1)\pi$, with greatest value $\frac{\pi}{2}$ and least value $S(\pi)$. Also, $S(\pi) \approx -0.28114$ and $S(2\pi) \approx 0.15264$.

Meanwhile, $C(x)$ is defined for $x > 0$, but not at $x = 0$. It has maxima at $(2n - \frac{1}{2})\pi$ and minima at $(2n + \frac{1}{2})\pi$, with overall least value at $\frac{\pi}{2}$.

The nature of $S(x)$ and $C(x)$, especially for large $x$, is revealed by a simple integration by parts. It adds to clarity to describe the results more generally. For a function $f$, define
$$I_f(x) = \int_x^{\infty}f(t)e^{it}\,dt, \qquad C_f(x) = \int_x^{\infty}f(t)\cos t\,dt, \qquad S_f(x) = \int_x^{\infty}f(t)\sin t\,dt,$$
assuming that these integrals converge. So $I_f(x) = C_f(x) + iS_f(x)$. Results for $I_f(x)$ will of course deliver simultaneous results for $C_f(x)$ and $S_f(x)$. The reader just needs to accept that $\frac{d}{dt}e^{it} = ie^{it}$ and that the usual processes of calculus, such as integration by parts, work in the same way for complex functions of a real variable.

We assume that $f(t)$ is completely monotonic, that is:

(CM) for all $n \ge 0$, $(-1)^n f^{(n)}(t) \ge 0$ for $t > 0$ and $f^{(n)}(t) \to 0$ as $t \to \infty$.

Note that this implies that $(-1)^n f^{(n)}(t)$ is decreasing and that $(-1)^n f^{(n)}(t)$ also satisfies (CM). Of course, $f(t) = 1/t^p$ is completely monotonic for all $p > 0$.

For such a function, consider $I_{f'}(x)$ (distinguish between this and $I_f'(x)$!). Condition (CM) implies that $f'(t) \le 0$ and $\int_x^{\infty}(-f'(t))\,dt = f(x)$. Since $|f'(t)e^{it}| = -f'(t)$, the integral defining $I_{f'}(x)$ is convergent, and we have
$$|I_{f'}(x)| \le f(x). \qquad (9)$$
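To see (9) in action for the concrete case $f(t) = 1/t$ (so that $-f'(t) = 1/t^2$), the following sketch, which assumes SciPy, evaluates the real and imaginary parts of $I_{f'}(x)$ by the Fourier-weight rule and compares $|I_{f'}(x)|$ with $f(x) = 1/x$; it is an illustration only, not part of the original notes.

# Illustration of the bound (9), |I_{f'}(x)| <= f(x), for f(t) = 1/t.
import numpy as np
from scipy.integrate import quad

def I_fprime(x):
    """I_{f'}(x) = integral of (-1/t^2) e^{it} over [x, infinity), as a complex number."""
    re, _ = quad(lambda t: -1.0 / t**2, x, np.inf, weight='cos', wvar=1.0)
    im, _ = quad(lambda t: -1.0 / t**2, x, np.inf, weight='sin', wvar=1.0)
    return complex(re, im)

for x in (0.5, 1.0, 2.0, 10.0):
    print(x, abs(I_fprime(x)), 1.0 / x)   # the middle column never exceeds the last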

Now integrate by parts:
$$I_f(x) = \Big[-if(t)e^{it}\Big]_x^{\infty} + i\int_x^{\infty}f'(t)e^{it}\,dt = if(x)e^{ix} + iI_{f'}(x).$$
We summarise this information in the following result.

PROPOSITION 2. If $f$ is completely monotonic, then the integrals defining $I_f(x)$ and $I_{f'}(x)$ are convergent for all $x > 0$, and the following statements apply:
$$I_f(x) = if(x)e^{ix} + iI_{f'}(x), \qquad (10)$$
$$C_f(x) = -f(x)\sin x - S_{f'}(x), \qquad S_f(x) = f(x)\cos x + C_{f'}(x). \qquad (11)$$
Further,
$$|I_f(x)| \le 2f(x). \qquad (12)$$

Note that (12) follows at once from (9) and (10). Also, it applies to $-f'$ (since $-f'$ is completely monotonic) to give $|I_{f'}(x)| \le -2f'(x)$. Actually, (12) can be improved to $|I_f(x)| \le f(x)$: see [Jam3].

We restate these results for our case $f(x) = 1/x$. Taking a slight liberty with the notation, we write
$$I_n(x) = \int_x^{\infty}\frac{e^{it}}{t^n}\,dt,$$
and similarly $C_n(x)$, $S_n(x)$ (also, write $I(x)$ for $I_1(x)$).

PROPOSITION 3. For all $x > 0$,
$$I(x) = \frac{ie^{ix}}{x} - iI_2(x), \qquad (13)$$
$$C(x) = -\frac{\sin x}{x} + S_2(x), \qquad S(x) = \frac{\cos x}{x} - C_2(x). \qquad (14)$$

Also, $|I(x)| \le 2/x$ and $|I_2(x)| \le 2/x^2$. Hence $xI(x) - ie^{ix}$, $xS(x) - \cos x$ and $xC(x) + \sin x$ tend to 0 as $x \to \infty$.

By repeating the process, we can derive increasingly accurate approximations, as follows. (At this point, the reader could skip to "The function $C^*(x)$".)

PROPOSITION 4. Let $f$ be completely monotonic. Then for all $x > 0$,
$$I_f(x) = if(x)e^{ix} - f'(x)e^{ix} - I_{f''}(x),$$
$$C_f(x) = -f(x)\sin x - f'(x)\cos x - C_{f''}(x),$$
$$S_f(x) = f(x)\cos x - f'(x)\sin x - S_{f''}(x). \qquad (15)$$

Further,
$$I_f(x) = [f(x) - f''(x)]ie^{ix} - [f'(x) - f^{(3)}(x)]e^{ix} + I_{f^{(4)}}(x),$$
$$C_f(x) = -[f(x) - f''(x)]\sin x - [f'(x) - f^{(3)}(x)]\cos x + C_{f^{(4)}}(x),$$
$$S_f(x) = [f(x) - f''(x)]\cos x - [f'(x) - f^{(3)}(x)]\sin x + S_{f^{(4)}}(x). \qquad (16)$$

Proof. Applying (10) to $f'(t)$ and substituting back into (10), we obtain (15) for $I_f(x)$. Now apply (15) to $f''(t)$ and substitute, obtaining
$$I_f(x) = if(x)e^{ix} - f'(x)e^{ix} - if''(x)e^{ix} + f^{(3)}(x)e^{ix} + I_{f^{(4)}}(x),$$
which equates to (16).

We have alternative bounds for the remainder terms, from (9) and (12). For example, $|I_{f^{(4)}}(x)|$ is bounded both by $-f^{(3)}(x)$ and by $2f^{(4)}(x)$. Of course, the process can be continued: successive derivatives of $f(x)$ appear in the expressions multiplying $e^{ix}$ and $ie^{ix}$. The outcome is an asymptotic expansion for $I_f(x)$. However, this does not simply deliver ever-closer approximations, because for a fixed $x$, the derivatives $f^{(n)}(x)$ will ultimately grow large in magnitude.

We restate (16) explicitly for the case $f(t) = 1/t$:

PROPOSITION 5. For $x > 0$,
$$I(x) = \Big(\frac{1}{x} - \frac{2}{x^3}\Big)ie^{ix} + \Big(\frac{1}{x^2} - \frac{6}{x^4}\Big)e^{ix} + 24I_5(x),$$
$$C(x) = -\Big(\frac{1}{x} - \frac{2}{x^3}\Big)\sin x + \Big(\frac{1}{x^2} - \frac{6}{x^4}\Big)\cos x + 24C_5(x),$$
$$S(x) = \Big(\frac{1}{x} - \frac{2}{x^3}\Big)\cos x + \Big(\frac{1}{x^2} - \frac{6}{x^4}\Big)\sin x + 24S_5(x). \qquad (17)$$
Further, $24|I_5(x)|$ is bounded by both $6/x^4$ and $48/x^5$.
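To make the quality of (17) concrete (an illustration, not part of the original notes), one can compare $C(x)$ and $S(x)$, computed by the Fourier-weight rule assuming SciPy, with the main terms of (17); by the remainder bound above the discrepancies are at most $6/x^4$.

# Compare C(x) and S(x) with the main terms of the expansion (17).
import numpy as np
from scipy.integrate import quad

def C(x):
    """C(x) = integral of cos(t)/t over [x, infinity)."""
    return quad(lambda t: 1.0 / t, x, np.inf, weight='cos', wvar=1.0)[0]

def S(x):
    """S(x) = integral of sin(t)/t over [x, infinity)."""
    return quad(lambda t: 1.0 / t, x, np.inf, weight='sin', wvar=1.0)[0]

def C_main(x):
    return -(1/x - 2/x**3) * np.sin(x) + (1/x**2 - 6/x**4) * np.cos(x)

def S_main(x):
    return (1/x - 2/x**3) * np.cos(x) + (1/x**2 - 6/x**4) * np.sin(x)

for x in (2.0, 5.0, 10.0):
    print(x, C(x) - C_main(x), S(x) - S_main(x))   # each difference is at most 6/x**4 in size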

We deduce a bound for $|I(x)|$:

PROPOSITION 6. For $x \ge 2$,
$$|I(x)| < \frac{1}{x} - \frac{3}{2x^3} + \frac{6}{x^4}.$$

Proof. Denote the expression for $I(x)$ in (17) by $F(x) + 24I_5(x)$. Assume that $x \ge 2$ and write $y = 1/x$, so that $y \le \frac{1}{2}$. Then
$$|F(x)|^2 = (y - 2y^3)^2 + (y^2 - 6y^4)^2 = y^2 - 3y^4 - 8y^6 + 36y^8 = (y - \tfrac{3}{2}y^3)^2 - 10\tfrac{1}{4}y^6 + 36y^8 < (y - \tfrac{3}{2}y^3)^2,$$
so $|F(x)| < y - \tfrac{3}{2}y^3$. The stated inequality follows.

This bound is smaller than $1/x$ when $x > 4$. It is used in [JLM] as a stage in the proof of the stronger inequality $|S(x)| \le \frac{\pi}{2} - \tan^{-1}x$ (note that this is exact at 0).

The function $C^*(x)$

Can we find a formula that enables us to calculate $C(x)$, and that opens the way to some kind of analogue of (1)? The key is to introduce the function
$$C^*(x) = \int_0^x\frac{1 - \cos t}{t}\,dt.$$
(This function is sometimes denoted by $\operatorname{Cin}(x)$.) It is elementary that $0 \le 1 - \cos t \le \frac{1}{2}t^2$, hence $0 \le \frac{1-\cos t}{t} \le \frac{1}{2}t$, for $t > 0$. So there is no problem of convergence of the integral at 0, and we have $0 \le C^*(x) \le \frac{1}{4}x^2$ for all $x > 0$. By integrating the series
$$\frac{1-\cos t}{t} = \sum_{n=1}^{\infty}\frac{(-1)^{n-1}t^{2n-1}}{(2n)!},$$
we obtain the power series expression
$$C^*(x) = \sum_{n=1}^{\infty}\frac{(-1)^{n-1}x^{2n}}{(2n)!\,(2n)} = \frac{x^2}{2!\cdot 2} - \frac{x^4}{4!\cdot 4} + \cdots. \qquad (18)$$
For example, $C^*(\frac{\pi}{2}) \approx 0.55680$ and $C^*(\pi) \approx 1.64828$.
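Again as an illustration (not in the original notes), the partial sums of (18) reproduce these values in plain Python:

# Partial sums of the series (18) for C*(x); adequate for moderate x.
import math

def Cstar_series(x, terms=30):
    """C*(x) = sum_{n>=1} (-1)^(n-1) x^(2n) / ((2n)! (2n)), truncated after `terms` terms."""
    return sum((-1)**(n - 1) * x**(2*n) / (math.factorial(2*n) * (2*n))
               for n in range(1, terms + 1))

print(Cstar_series(math.pi / 2))  # about 0.5568
print(Cstar_series(math.pi))      # about 1.6483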

We now relate $C^*(x)$ and $C(x)$. We have
$$C^*(x) - C^*(1) = \int_1^x\frac{1-\cos t}{t}\,dt = \log x - \int_1^x\frac{\cos t}{t}\,dt = \log x - C(1) + C(x),$$
so
$$C(x) = C^*(x) - \log x + c, \qquad (19)$$

where $c$ is constant, in fact $c = C(1) - C^*(1)$.

Even without knowing $c$, we can draw some conclusions from (19). One, which we will use later, is $C(x) \sim -\log x$, hence $xC(x) \to 0$, as $x \to 0^+$. Another is the following integral, which can be compared with (6). It is a special case of the "Frullani integral": see [Fer, p. 133-135], [Jam4] or [Tr], where it is used in the evaluation of the integral of $\sin^n x/x^m$ (I am grateful to Nick Lord for the references [Fer] and [Tr]).

PROPOSITION 7. For $a, b > 0$,
$$\int_0^{\infty}\frac{\cos at - \cos bt}{t}\,dt = \log b - \log a. \qquad (20)$$

Proof. The substitution $at = u$ gives
$$\int_0^x\frac{1-\cos at}{t}\,dt = \int_0^{ax}\frac{1-\cos u}{u}\,du = C^*(ax).$$
Hence
$$\int_0^x\frac{\cos at - \cos bt}{t}\,dt = C^*(bx) - C^*(ax) = C(bx) - C(ax) + \log bx - \log ax = C(bx) - C(ax) + \log b - \log a \to \log b - \log a \quad \text{as } x \to \infty.$$

However, for a fully satisfactory version of (19), and for the calculation of $C(x)$, of course we need to know the value of $c$. The answer turns out to be that $c = -\gamma$, where $\gamma$ is Euler's constant. Let us state this fact as a theorem:

THEOREM 2. We have
$$C(x) = C^*(x) - \log x - \gamma. \qquad (21)$$

Surprisingly, this result is not mentioned in the comprehensive article [Lag] on Euler's constant. It can be seen stated without proof in compilations of formulae, such as Wikipedia or [DLMF, chapter 6]. However, it is not easy to find accessible references with a proof. At the same time, the method will also give a second proof of Theorem 1. Later, we describe an alternative route to both theorems using contour integration.

Two limit expressions for $c$ follow from (19). Since $C(x) \to 0$ as $x \to \infty$, we have
$$C^*(x) - \log x \to -c \quad \text{as } x \to \infty. \qquad (22)$$
This will be used in our proof of (21). Also, since $C^*(x) \to 0$ as $x \to 0^+$, we have
$$C(x) + \log x \to c \quad \text{as } x \to 0^+. \qquad (23)$$
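Before continuing, a quick numerical illustration of Theorem 2 (not part of the original notes): the sketch below, assuming SciPy, computes $C(x)$ by the Fourier-weight rule and $C^*(x)$ by direct quadrature on $[0,x]$, and checks that $C^*(x) - \log x - \gamma$ agrees with $C(x)$.

# Numerical check of (21): C(x) = C*(x) - log x - gamma.
import math
import numpy as np
from scipy.integrate import quad

def C(x):
    """C(x) = integral of cos(t)/t over [x, infinity)."""
    return quad(lambda t: 1.0 / t, x, np.inf, weight='cos', wvar=1.0)[0]

def Cstar(x):
    """C*(x) = integral of (1 - cos t)/t over [0, x]."""
    return quad(lambda t: (1.0 - np.cos(t)) / t, 0.0, x, limit=200)[0]

for x in (math.pi / 2, math.pi, 10.0):
    print(x, C(x), Cstar(x) - math.log(x) - np.euler_gamma)   # the last two columns agree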

The limit (23) (once we know that $c = -\gamma$) describes the nature of $C(x)$ near 0, so can be regarded as the true analogue of (1). Also, (21), together with (18), enables us to calculate $C(x)$. We find, for example, $C(\frac{\pi}{2}) \approx -0.47200$ (recall that this is the least value) and $C(\pi) \approx -0.07367$.

Second proof of Theorem 1 and a proof of Theorem 2

We will use the following elementary version of the Riemann-Lebesgue Lemma, which is easily proved by integration by parts: if $f$ is continuous on $[a,b]$ and has a continuous derivative on $(a,b)$, then
$$\int_a^b f(t)\sin nt\,dt \to 0 \quad \text{as } n \to \infty,$$
and similarly with $\sin nt$ replaced by $\cos nt$. We also use:
) describes the nature ofC(x) near 0, so can be regarded as the true analogue of (1). Also, (21), together with (18), enables us to calculateC(x). We nd, for example, C(2 ) 0:47200 (recall that this is the least value) andC() 0:07367. Second proof of Theorem 1 and a proof of Theorem 2 We will use the following elementary version of the Riemann-Lebesgue Lemma, which is easily proved by integration by parts:iffis continuous on[a;b]and has a continuous derivative on(a;b), then Z b a f(t)sinnt dt!0asn! 1; and similarly withsinntreplaced bycosnt.We also use:quotesdbs_dbs16.pdfusesText_22