Stochastic Differential Equations

Steven P. Lalley

December 2, 2016

1 SDEs: Definitions

1.1 Stochastic differential equations

Many important continuous-time Markov processes - for instance, the Ornstein-Uhlenbeck process and the Bessel processes - can be defined as solutions to stochastic differential equations with drift and diffusion coefficients that depend only on the current value of the process. The general form of such an equation (for a one-dimensional process with a one-dimensional driving Brownian motion) is

dX_t = μ(X_t) dt + σ(X_t) dW_t,   (1)

where {W_t}_{t≥0} is a standard Wiener process.

Definition 1. Let {W_t}_{t≥0} be a standard Brownian motion on a probability space (Ω, F, P) with an admissible filtration F = {F_t}_{t≥0}. A strong solution of the stochastic differential equation (1) with initial condition x ∈ R is an adapted process X_t = X_t^x with continuous paths such that for all t ≥ 0,

X_t = x + ∫_0^t μ(X_s) ds + ∫_0^t σ(X_s) dW_s   a.s.   (2)

At first sight this definition seems to have little content except to give a more-or-less obvious interpretation of the differential equation (1). However, there are a number of subtle points involved. First, the existence of the integrals in (2) requires some degree of regularity on X_t and the functions μ and σ; in particular, it must be the case that for all t ≥ 0, with probability one,

∫_0^t |μ(X_s)| ds < ∞  and  ∫_0^t σ²(X_s) ds < ∞.   (3)

Second, the solution is required to exist for all t < ∞ with probability one. In fact, there are interesting cases of (1) for which solutions can be constructed up to a finite, possibly random time T < ∞, but not beyond; this often happens because the solution X_t explodes (that is, runs off to ±∞) in finite time. Third, the definition requires that the process X_t live on the same probability space as the given Wiener process W_t, and that it be adapted to the given filtration. It turns out (as we will see) that for certain coefficient functions μ and σ, solutions to the stochastic integral equation (2) may exist for some Wiener processes and some admissible filtrations but not for others.

Definition 2. A weak solution of the stochastic differential equation (1) with initial condition x is a continuous stochastic process X_t defined on some probability space (Ω, F, P) such that for some Wiener process W_t and some admissible filtration F the process X_t is adapted and satisfies the stochastic integral equation (2).
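The integral equation (2) also indicates how a strong solution can be approximated numerically: replace the integrals by sums over a fine time grid, driven by sampled Brownian increments. Below is a minimal Euler-Maruyama sketch in Python; the coefficients μ(x) = -x, σ(x) = 1 (the Ornstein-Uhlenbeck case) and all parameter values are illustrative choices, not prescribed by the text.

```python
import numpy as np

def euler_maruyama(mu, sigma, x0, T=1.0, n=10_000, rng=None):
    """Approximate a strong solution of dX = mu(X) dt + sigma(X) dW
    on [0, T] by the Euler-Maruyama scheme on an n-step grid."""
    rng = rng or np.random.default_rng(0)
    dt = T / n
    dW = rng.normal(0.0, np.sqrt(dt), size=n)  # increments ~ N(0, dt)
    X = np.empty(n + 1)
    X[0] = x0
    for k in range(n):
        X[k + 1] = X[k] + mu(X[k]) * dt + sigma(X[k]) * dW[k]
    return X

# Ornstein-Uhlenbeck coefficients: both are uniformly Lipschitz
X = euler_maruyama(mu=lambda x: -x, sigma=lambda x: 1.0, x0=2.0)
print(X[-1])  # approximate value of X_T at T = 1
```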

2 Existence and Uniqueness of Solutions

2.1 Itô's existence/uniqueness theorem

The basic result, due to Itô, is that for uniformly Lipschitz functions μ(x) and σ(x) the stochastic differential equation (1) has strong solutions, and that for each initial value X_0 = x the solution is unique.

Theorem 1. Assume that μ : R → R and σ : R → R_+ are uniformly Lipschitz, that is, there exists a constant C < ∞ such that for all x, y ∈ R,

|μ(x) - μ(y)| ≤ C|x - y|  and   (4)
|σ(x) - σ(y)| ≤ C|x - y|.   (5)

Then the stochastic differential equation (1) has strong solutions: in particular, for any standard Brownian motion {W_t}_{t≥0}, any admissible filtration F = {F_t}_{t≥0}, and any initial value x ∈ R there exists a unique adapted process X_t = X_t^x with continuous paths such that

X_t = x + ∫_0^t μ(X_s) ds + ∫_0^t σ(X_s) dW_s   a.s.   (6)

Furthermore, the solutions depend continuously on the initial datum x, that is, the two-parameter process X_t^x is jointly continuous in t and x.
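The continuous dependence on the initial datum x can be seen numerically by driving several initial values with one and the same Brownian path: nearby starting points produce uniformly close trajectories. A sketch, with the same illustrative Ornstein-Uhlenbeck coefficients as above:

```python
import numpy as np

rng = np.random.default_rng(1)
T, n = 1.0, 10_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), size=n)    # one shared Brownian path

mu, sigma = (lambda x: -x), (lambda x: 1.0)  # Lipschitz coefficients

def X_T(x0):
    x = x0
    for k in range(n):
        x = x + mu(x) * dt + sigma(x) * dW[k]
    return x

# X_T^x for nearby initial values x: the outputs remain close,
# illustrating continuous dependence on the initial datum
for x0 in (1.0, 1.01, 1.02):
    print(x0, X_T(x0))
```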

This parallels the main existence/uniqueness result for ordinary differential equations, or more generally finite systems of ordinary differential equations

x′(t) = F(x(t)),   (7)

which asserts that unique solutions exist for each initial value x(0) provided the function F is uniformly Lipschitz. Without the hypothesis that the function F is Lipschitz, the theorem may fail in any number of ways, even for ordinary differential equations.

Example 1. Consider the equation x′ = 2√|x|. This is the special case of equation (7) with F(x) = 2√|x|. This function fails the Lipschitz property at x = 0. Correspondingly, uniqueness of solutions fails for the initial value x(0) = 0: the functions x(t) ≡ 0 and y(t) = t² are both solutions of the ordinary differential equation with initial value 0.

Example 2. Consider the equation x′ = x², the special case of (7) where F(x) = x². The function F is C¹, hence Lipschitz on any finite interval, but it is not uniformly Lipschitz, as uniformly Lipschitz functions cannot grow faster than linearly. For any initial value x₀ > 0, the function

x(t) = (x₀⁻¹ - t)⁻¹

solves the differential equation and has the right initial value, and it can be shown that there is no other solution. The difficulty is that the function x(t) blows up as t → 1/x₀, so the solution does not exist for all time t > 0. The same difficulty can arise with stochastic differential equations whose coefficients grow too quickly: for stochastic differential equations, when solutions travel to ±∞ in finite time they are said to explode.
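Both failure modes are easy to observe numerically. In the sketch below (grid sizes and thresholds are illustrative), the first part verifies that x(t) ≡ 0 and y(t) = t² both satisfy x′ = 2√|x| on a grid, and the second integrates x′ = x² from x(0) = 1 until it crosses a large threshold:

```python
import numpy as np

# Non-uniqueness for x' = 2 sqrt(|x|): both candidates satisfy the ODE
t = np.linspace(0.0, 2.0, 201)
for x in (np.zeros_like(t), t**2):        # x(t) = 0 and y(t) = t^2
    residual = np.gradient(x, t) - 2.0 * np.sqrt(np.abs(x))
    print(np.max(np.abs(residual)))       # small (up to grid error)

# Blow-up for x' = x^2 with x(0) = 1: the true solution is 1/(1 - t)
x, dt = 1.0, 1e-4
for k in range(int(1.1 / dt)):            # integrate a bit past t = 1
    x += dt * x * x
    if x > 1e12:
        # the Euler scheme lags the true blow-up slightly,
        # so this prints a time just above 1
        print("blow-up near t =", (k + 1) * dt)
        break
```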

2.2 Gronwall inequalities

The proof of Theorem 1 will make use of several basic results concerning the solutions of simple differential inequalities, due to Gronwall. These are also useful in the theory of ordinary differential equations.

Lemma 1. Let y(t) be a nonnegative function that satisfies the following condition: for some T ≤ ∞ there exist constants A, B ≥ 0 such that

y(t) ≤ A + B ∫_0^t y(s) ds < ∞  for all 0 ≤ t ≤ T.   (8)

Then

y(t) ≤ A e^{Bt}  for all 0 ≤ t ≤ T.   (9)

Proof. Without loss of generality, we may assume that C := ∫_0^T y(s) ds < ∞ and that T < ∞. It then follows, since y is nonnegative, that y(t) is bounded by D := A + BC on the interval [0, T]. Iterate the inequality (8) to obtain

y(t) ≤ A + B ∫_0^t y(s) ds
     ≤ A + B ∫_0^t (A + B ∫_0^s y(r) dr) ds
     ≤ A + ABt + B² ∫_0^t ∫_0^s (A + B ∫_0^r y(q) dq) dr ds
     ≤ A + ABt + AB²t²/2! + B³ ∫_0^t ∫_0^s ∫_0^r (A + B ∫_0^q y(p) dp) dq dr ds.

After k iterations, one has the first k terms in the series for Ae^{Bt} plus a (k+1)-fold iterated integral I_k. Because y(t) ≤ D on the interval [0, T], the integral I_k is bounded by B^k D t^{k+1}/(k+1)!. This converges to zero uniformly for t ≤ T as k → ∞. Hence, inequality (9) follows. □

Lemma 2. Let y_n(t) be a sequence of nonnegative functions such that for some constants B, C < ∞,

y_0(t) ≤ C  for all t ≤ T,  and
y_{n+1}(t) ≤ B ∫_0^t y_n(s) ds < ∞  for all t ≤ T and n = 0, 1, 2, ....   (10)

Then

y_n(t) ≤ C B^n t^n / n!  for all t ≤ T.   (11)

Proof. Exercise. □
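The extremal case of (8) is equality, y(t) = A + B ∫_0^t y(s) ds, whose solution is exactly Ae^{Bt}; every other admissible y lies below it. A quick numerical sanity check of Lemma 1, with the arbitrary sample values A = 1, B = 2:

```python
import numpy as np

A, B, T, n = 1.0, 2.0, 1.0, 10_000
t = np.linspace(0.0, T, n + 1)
dt = T / n

# Worst case: equality in (8), built by forward recursion with
# left Riemann sums for the integral of y over [0, t].
y = np.empty(n + 1)
y[0] = A
running = 0.0
for k in range(n):
    running += y[k] * dt
    y[k + 1] = A + B * running

# Gronwall bound (9): y(t) <= A exp(Bt); the forward scheme slightly
# undershoots, so the maximum gap is a tiny negative number
print(np.max(y - A * np.exp(B * t)))
```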

2.3 Proof of Theorem 1: Constant σ

It is instructive to first consider the special case where the function σ(x) is constant. (This includes the possibility σ ≡ 0, in which case the stochastic differential equation reduces to an ordinary differential equation x′ = μ(x).) In this case the Gronwall inequalities can be used pathwise to prove all three assertions of the theorem (existence, uniqueness, and continuous dependence on initial conditions).

First, uniqueness: suppose that for some initial value x there are two continuous solutions

X_t = x + ∫_0^t μ(X_s) ds + ∫_0^t σ dW_s  and
Y_t = x + ∫_0^t μ(Y_s) ds + ∫_0^t σ dW_s.

Then the difference satisfies

Y_t - X_t = ∫_0^t (μ(Y_s) - μ(X_s)) ds,

and since the drift coefficient μ is uniformly Lipschitz, it follows that for some constant B < ∞,

|Y_t - X_t| ≤ B ∫_0^t |Y_s - X_s| ds

for all t < ∞. Lemma 1 now implies that Y_t - X_t ≡ 0. Thus, the stochastic differential equation can have at most one solution for any particular initial value x. A similar argument shows that solutions depend continuously on initial conditions X_0 = x.

Existence of solutions is proved by a variant of Picard's method of successive approximations. Fix an initial value x, and define a sequence of adapted processes X_n(t) by

X_0(t) = x  and  X_{n+1}(t) = x + ∫_0^t μ(X_n(s)) ds + σ W(t).

The processes X_n(t) are all well-defined and have continuous paths, by induction on n (using the hypothesis that the function μ(y) is continuous). The strategy will be to show that the sequence X_n(t) converges uniformly on compact time intervals. It will then follow, by the dominated convergence theorem and the continuity of μ, that the limit process X(t) solves the stochastic integral equation (6). Because μ(y) is Lipschitz,

|X_{n+1}(t) - X_n(t)| ≤ B ∫_0^t |X_n(s) - X_{n-1}(s)| ds,

and so Lemma 2 implies that for any T < ∞,

|X_{n+1}(t) - X_n(t)| ≤ C B^n T^n / n!  for all t ≤ T.

It follows that the processes X_n(t) converge uniformly on compact time intervals [0, T], and therefore that the limit process X(t) has continuous trajectories. □
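These successive approximations can be carried out pathwise on a grid: sample one Brownian path W, then iterate the map X ↦ x + ∫_0^t μ(X(s)) ds + σW(t). In the sketch below (with the illustrative choices μ(x) = -x, σ = 1) the printed sup-norm gaps between consecutive iterates shrink factorially, as the bound CB^nT^n/n! from Lemma 2 predicts:

```python
import numpy as np

rng = np.random.default_rng(2)
T, n, x0, sigma = 1.0, 2_000, 1.0, 1.0
dt = T / n
W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])
mu = lambda x: -x                          # uniformly Lipschitz drift

X = np.full(n + 1, x0)                     # X_0(t) = x
for it in range(8):
    # X_{n+1}(t) = x + int_0^t mu(X_n(s)) ds + sigma W(t),
    # with the time integral as a left Riemann sum
    integral = np.concatenate([[0.0], np.cumsum(mu(X[:-1]) * dt)])
    X_next = x0 + integral + sigma * W
    print(it, np.max(np.abs(X_next - X)))  # gap ~ C B^n T^n / n!
    X = X_next
```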

2.4 Proof of Theorem 1. General Case: Existence

The proof of Theorem 1 in the general case is more complicated, because when differences of solutions or approximate solutions are taken, the Itô integrals no longer vanish. Thus, the Gronwall inequalities cannot be applied directly. Instead, we will use Gronwall to control second moments. Different arguments are needed for existence and uniqueness. Continuous dependence on initial conditions can be proved using arguments similar to those used for the uniqueness proof; the details are left as an exercise.

To prove existence of solutions we use the same iterative method as in the case of constant σ to generate approximate solutions:

X_0(t) = x  and  X_{n+1}(t) = x + ∫_0^t μ(X_n(s)) ds + ∫_0^t σ(X_n(s)) dW_s.   (12)

By induction, the processes X_n(t) are well-defined and have continuous paths. The problem is to show that these converge uniformly on compact time intervals, and that the limit process is a solution to the stochastic differential equation. First we will show that for each t ≥ 0 the sequence of random variables X_n(t) converges in L² to a random variable X(t), necessarily in L². The first two terms of the sequence are X_0(t) ≡ x and

X_1(t) = x + μ(x)t + σ(x)W_t;

for both of these the random variables X_j(t) are uniformly bounded in L² for t in any bounded interval [0, T], and so for each T < ∞ there exists C = C_T < ∞ such that

E(X_1(t) - X_0(t))² ≤ C  for all t ≤ T.

Now by hypothesis, the functions μ and σ are uniformly Lipschitz, and hence, for a suitable constant B < ∞,

|μ(X_n(t)) - μ(X_{n-1}(t))| ≤ B |X_n(t) - X_{n-1}(t)|  and   (13)
|σ(X_n(t)) - σ(X_{n-1}(t))| ≤ B |X_n(t) - X_{n-1}(t)|

for all t ≥ 0. Thus, by Cauchy-Schwarz and the Itô isometry, together with the elementary inequality (x + y)² ≤ 2x² + 2y²,

E|X_{n+1}(t) - X_n(t)|²
  = E( ∫_0^t (μ(X_n(s)) - μ(X_{n-1}(s))) ds + ∫_0^t (σ(X_n(s)) - σ(X_{n-1}(s))) dW_s )²
  ≤ 2 E( ∫_0^t (μ(X_n(s)) - μ(X_{n-1}(s))) ds )² + 2 E( ∫_0^t (σ(X_n(s)) - σ(X_{n-1}(s))) dW_s )²
  ≤ 2B² E( ∫_0^t |X_n(s) - X_{n-1}(s)| ds )² + 2B² ∫_0^t E|X_n(s) - X_{n-1}(s)|² ds
  ≤ 2B² E( t ∫_0^t |X_n(s) - X_{n-1}(s)|² ds ) + 2B² ∫_0^t E|X_n(s) - X_{n-1}(s)|² ds
  ≤ 2B²(T + 1) ∫_0^t E|X_n(s) - X_{n-1}(s)|² ds  for all t ≤ T.

Lemma 2 now applies to y_n(t) := E|X_{n+1}(t) - X_n(t)|² (recall that E|X_1(t) - X_0(t)|² ≤ C = C_T for all t ≤ T), yielding

E(X_{n+1}(t) - X_n(t))² ≤ C (2B² + 2B²T)^n t^n / n!  for all t ≤ T.   (14)

This clearly implies that for each t ≤ T the random variables X_n(t) converge in L². Furthermore, this L² convergence is uniform for t ≤ T (because the bounds in (14) hold uniformly for t ≤ T), and the limit random variables X(t) := L²-lim_{n→∞} X_n(t) are bounded in L² for t ≤ T.

It remains to show that the limit process X(t) satisfies the stochastic integral equation (6). To this end, consider the random variables μ(X_n(t)) and σ(X_n(t)). Since X_n(t) → X(t) in L², the Lipschitz bounds (13) imply that

lim_{n→∞} E|μ(X_n(t)) - μ(X(t))|² + E|σ(X_n(t)) - σ(X(t))|² = 0

uniformly for t ≤ T. Hence, by the Itô isometry,

L²-lim_{n→∞} ∫_0^t σ(X_n(s)) dW_s = ∫_0^t σ(X(s)) dW_s

for each t ≤ T. Similarly, by Cauchy-Schwarz and Fubini,

L²-lim_{n→∞} ∫_0^t μ(X_n(s)) ds = ∫_0^t μ(X(s)) ds.

Thus, (12) implies that

X(t) = x + ∫_0^t μ(X(s)) ds + ∫_0^t σ(X(s)) dW_s.

This shows that the process X(t) satisfies the stochastic integral equation (6). Both of the integrals in this equation are continuous in t, and therefore so is X(t). □
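The iteration (12) can likewise be run numerically, now with a discretized Itô integral whose integrand σ(X_n) is evaluated at the left endpoint of each increment, as the Itô integral requires. The coefficients μ(x) = -x and σ(x) = sin x + 2 below are illustrative (both are uniformly Lipschitz); the sketch estimates sup_{t≤T} E|X_{n+1}(t) - X_n(t)|² by Monte Carlo over m driving paths:

```python
import numpy as np

rng = np.random.default_rng(3)
T, n, m, x0 = 1.0, 1_000, 500, 1.0     # m independent driving paths
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), size=(m, n))

mu = lambda x: -x                      # uniformly Lipschitz
sigma = lambda x: np.sin(x) + 2.0      # uniformly Lipschitz

X = np.full((m, n + 1), x0)            # X_0(t) = x on every path
for it in range(8):
    drift = np.cumsum(mu(X[:, :-1]) * dt, axis=1)
    # Ito integral: integrand evaluated at the LEFT endpoint
    ito = np.cumsum(sigma(X[:, :-1]) * dW, axis=1)
    X_next = np.concatenate([np.full((m, 1), x0), x0 + drift + ito], axis=1)
    # Monte Carlo estimate of sup_{t <= T} E|X_{n+1}(t) - X_n(t)|^2
    print(it, np.max(np.mean((X_next - X) ** 2, axis=0)))
    X = X_next
```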

2.5 Proof of Theorem 1. General Case: Uniqueness

Suppose as before that for some initial value x there are two continuous solutions

X_t = x + ∫_0^t μ(X_s) ds + ∫_0^t σ(X_s) dW_s  and
Y_t = x + ∫_0^t μ(Y_s) ds + ∫_0^t σ(Y_s) dW_s.

Then the difference satisfies

Y_t - X_t = ∫_0^t (μ(Y_s) - μ(X_s)) ds + ∫_0^t (σ(Y_s) - σ(X_s)) dW_s.   (15)

Although the second integral cannot be bounded pathwise, its second moment can be bounded, since σ(y) is Lipschitz:

E( ∫_0^t (σ(Y_s) - σ(X_s)) dW_s )² ≤ B² ∫_0^t E(Y_s - X_s)² ds,

where B is the Lipschitz constant. Of course, we have no way of knowing that the expectations E(Y_s - X_s)² are finite, so the integral on the right side of the inequality may be ∞. Nevertheless, taking second moments on both sides of (15), using the inequality (a + b)² ≤ 2a² + 2b² and the Cauchy-Schwarz inequality, we obtain

E(Y_t - X_t)² ≤ (2B² + 2B²T) ∫_0^t E(Y_s - X_s)² ds.

If the function f(t) := E(Y_t - X_t)² were known to be finite and integrable on compact time intervals, then the Gronwall inequality (9) would imply that f(t) ≡ 0, and the proof of uniqueness would be complete. [1]

To circumvent this difficulty, we use a localization argument: define the stopping time

τ := τ_A = inf{t : X_t² + Y_t² ≥ A}.

Since X_t and Y_t are defined and continuous for all t, they are a.s. bounded on compact time intervals, and so τ_A → ∞ as A → ∞. Hence, with probability one, t ∧ τ_A = t for all sufficiently large A. Next, starting from the identity (15), stopping at time τ = τ_A, and proceeding as in the last paragraph, we obtain

E(Y_{t∧τ} - X_{t∧τ})² ≤ (2B² + 2B²T) ∫_0^t E(Y_{s∧τ} - X_{s∧τ})² ds  for all t ≤ T.

By definition of τ, both sides are finite, and so Gronwall's inequality (9) implies that

E(Y_{t∧τ} - X_{t∧τ})² = 0.

Since this is true for every τ = τ_A, it follows that X_t = Y_t a.s., for each t ≥ 0. Since X_t and Y_t have continuous sample paths, it follows that with probability one, X_t = Y_t for all t ≥ 0. A similar argument proves continuous dependence on initial conditions. □

[1] Øksendal seems to have fallen prey to this trap: in his proof of Theorem 5.2.1 he fails to check that the second moment is finite.

3 Example: The Feller diffusion

The Feller diffusion {Y_t}_{t≥0} is a continuous-time Markov process on the half-line [0, ∞) with absorption at 0 that satisfies the stochastic differential equation

dY_t = σ √(Y_t) dW_t   (16)

up until the time τ = τ_0 of the first visit to 0. Here σ > 0 is a positive parameter. The Itô existence/uniqueness theorem does not apply, at least directly, because the function √y is not Lipschitz. However, the localization lemma of Itô calculus can be used in a routine fashion to show that for any initial value y > 0 there is a continuous process Y_t such that

Y_{t∧τ} = y + ∫_0^{t∧τ} σ √(Y_s) dW_s,  where τ = inf{t > 0 : Y_t = 0}.

(Exercise: fill in the details.)

The importance of the Feller diffusion stems from the fact that it is the natural continuous-time analogue [2] of the critical Galton-Watson process. The Galton-Watson process is a discrete-time Markov chain Z_n on the nonnegative integers that evolves according to the following rule: given that Z_n = k and any realization of the past up to time n-1, the random variable Z_{n+1} is distributed as the sum of k independent, identically distributed random variables with common distribution F, called the offspring distribution. The process is said to be critical if F has mean 1. Assume also that F has finite variance σ²; then the evolution rule implies that the increment Z_{n+1} - Z_n has conditional expectation 0 and conditional variance σ²Z_n, given the history of the process to time n. This corresponds to the stochastic differential equation (16), which roughly states that the increments of Y_t have conditional expectation 0 and conditional variance σ²Y_t dt, given F_t.

[2] Actually, the Feller diffusion is more than just an analogue of the Galton-Watson process: it is a weak limit of rescaled Galton-Watson processes, in the same sense that Brownian motion is a weak limit of rescaled random walks.

A natural question to ask about the Feller diffusion is this: if Y_0 = y > 0, does the trajectory Y_t reach the endpoint 0 of the state space in finite time? (That is, is τ < ∞ w.p. 1?) To see that it does, consider the process Y_t^{1/2}. By Itô's formula, if Y_t satisfies (16), or more precisely, if it satisfies

Y_t = y + ∫_0^t σ √(Y_s) 1_{[0,τ]}(s) dW_s,   (17)

then

dY_t^{1/2} = (1/2) Y_t^{-1/2} dY_t - (1/8) Y_t^{-3/2} d[Y]_t = (σ/2) dW_t - (σ²/8) Y_t^{-1/2} dt

up to time τ. Thus, up to the time of the first visit to 0 (if any), the process Y_t^{1/2} is a Brownian motion plus a negative drift. Since a Brownian motion started at √y will reach 0 in finite time, with probability one, so will Y_t^{1/2}.
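A simulation is consistent with the conclusion that τ < ∞ almost surely: discretize (16) by the Euler-Maruyama scheme, clamp at the boundary 0 (absorption), and count the paths absorbed by a large horizon. The parameter values σ = 1, y = 1, T = 50 below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(4)
sigma, y0, T, n, m = 1.0, 1.0, 50.0, 50_000, 1_000
dt = T / n

Y = np.full(m, y0)
for _ in range(n):
    dW = rng.normal(0.0, np.sqrt(dt), size=m)
    Y = np.maximum(Y + sigma * np.sqrt(Y) * dW, 0.0)  # absorb at 0

# Once a path hits 0 it stays there (sqrt(0) = 0), so Y == 0 marks
# tau <= T; the fraction absorbed tends to 1 as the horizon T grows.
print((Y == 0.0).mean())
```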

Exercise 1. Scaling law for the Feller diffusion: Let Y_t be a solution of the integral equation (17) with volatility parameter σ > 0 and initial value Y_0 = 1.