
Probability 2 - Notes 11

The bivariate and multivariate normal distribution

Definition. Two r.v.'s $(X, Y)$ have a bivariate normal distribution $N(\mu_1, \mu_2; \sigma_1^2, \sigma_2^2; \rho)$ if their joint p.d.f. is

$$f_{X,Y}(x,y) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}} \exp\left\{-\frac{1}{2(1-\rho^2)}\left[\left(\frac{x-\mu_1}{\sigma_1}\right)^2 - 2\rho\left(\frac{x-\mu_1}{\sigma_1}\right)\left(\frac{y-\mu_2}{\sigma_2}\right) + \left(\frac{y-\mu_2}{\sigma_2}\right)^2\right]\right\} \quad (1)$$

for all $x, y$. The parameters $\mu_1, \mu_2$ may be any real numbers, $\sigma_1 > 0$, $\sigma_2 > 0$, and $-1 < \rho < 1$ (the density requires $\rho^2 < 1$).
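As a quick numerical sanity check (my addition, not part of the notes), the density (1) can be transcribed directly and compared against scipy's multivariate normal; the parameter values below are arbitrary illustrations.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Illustrative parameters: any mu1, mu2, s1 > 0, s2 > 0, -1 < rho < 1.
mu1, mu2, s1, s2, rho = 0.0, 2.0, np.sqrt(2.0), 1.0, 0.5

def bvn_pdf(x, y):
    """Bivariate normal density, transcribed from equation (1)."""
    zx = (x - mu1) / s1
    zy = (y - mu2) / s2
    q = (zx**2 - 2 * rho * zx * zy + zy**2) / (1 - rho**2)
    c = 1.0 / (2 * np.pi * s1 * s2 * np.sqrt(1 - rho**2))
    return c * np.exp(-q / 2)

cov = [[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]]
rv = multivariate_normal(mean=[mu1, mu2], cov=cov)

x, y = 0.7, 1.3
print(bvn_pdf(x, y), rv.pdf([x, y]))  # the two values should agree
```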

It is convenient to rewrite (1) in the form

$$f_{X,Y}(x,y) = c\, e^{-\frac{1}{2}Q(x,y)}, \quad \text{where } c = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}}$$

and

$$Q(x,y) = \frac{1}{1-\rho^2}\left[\left(\frac{x-\mu_1}{\sigma_1}\right)^2 - 2\rho\left(\frac{x-\mu_1}{\sigma_1}\right)\left(\frac{y-\mu_2}{\sigma_2}\right) + \left(\frac{y-\mu_2}{\sigma_2}\right)^2\right] \quad (2)$$

Statement. The marginal distributions of $N(\mu_1, \mu_2; \sigma_1^2, \sigma_2^2; \rho)$ are normal, with r.v.'s $X$ and $Y$ having density functions

$$f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma_1}\, e^{-\frac{(x-\mu_1)^2}{2\sigma_1^2}}, \qquad f_Y(y) = \frac{1}{\sqrt{2\pi}\,\sigma_2}\, e^{-\frac{(y-\mu_2)^2}{2\sigma_2^2}}.$$

Proof. The expression (2) for $Q(x,y)$ can be rearranged as follows:

$$Q(x,y) = \frac{1}{1-\rho^2}\left[\left(\frac{x-\mu_1}{\sigma_1} - \rho\,\frac{y-\mu_2}{\sigma_2}\right)^2 + (1-\rho^2)\left(\frac{y-\mu_2}{\sigma_2}\right)^2\right] = \frac{(x-a)^2}{(1-\rho^2)\sigma_1^2} + \frac{(y-\mu_2)^2}{\sigma_2^2}, \quad (3)$$

where $a = a(y) = \mu_1 + \rho\frac{\sigma_1}{\sigma_2}(y-\mu_2)$. Hence

$$f_Y(y) = \int_{-\infty}^{\infty} f_{X,Y}(x,y)\,dx = c\, e^{-\frac{(y-\mu_2)^2}{2\sigma_2^2}} \times \int_{-\infty}^{\infty} e^{-\frac{(x-a)^2}{2(1-\rho^2)\sigma_1^2}}\,dx = \frac{1}{\sqrt{2\pi}\,\sigma_2}\, e^{-\frac{(y-\mu_2)^2}{2\sigma_2^2}},$$

where the last step makes use of the formula $\int_{-\infty}^{\infty} e^{-\frac{(x-a)^2}{2s^2}}\,dx = \sqrt{2\pi}\,s$ with $s = \sigma_1\sqrt{1-\rho^2}$. □

Exercise. Derive the formula for $f_X(x)$.
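The Statement is also easy to check numerically (an added sketch, not from the notes, with arbitrary illustrative parameters): integrating the joint density over $x$ with scipy should reproduce the claimed $N(\mu_2, \sigma_2^2)$ density of $Y$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

mu1, mu2, s1, s2, rho = 1.0, -0.5, 1.5, 2.0, -0.3  # illustrative values

def bvn_pdf(x, y):
    """Joint density from equation (1)."""
    zx, zy = (x - mu1) / s1, (y - mu2) / s2
    q = (zx**2 - 2 * rho * zx * zy + zy**2) / (1 - rho**2)
    return np.exp(-q / 2) / (2 * np.pi * s1 * s2 * np.sqrt(1 - rho**2))

y = 0.8
fy_numeric, _ = quad(lambda x: bvn_pdf(x, y), -np.inf, np.inf)
print(fy_numeric, norm.pdf(y, loc=mu2, scale=s2))  # should match
```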

Corollaries.

1. Since $X \sim N(\mu_1, \sigma_1^2)$ and $Y \sim N(\mu_2, \sigma_2^2)$, we know the meaning of four of the parameters involved in the definition of the bivariate normal distribution, namely

$E(X) = \mu_1$, $\operatorname{Var}(X) = \sigma_1^2$, $E(Y) = \mu_2$, $\operatorname{Var}(Y) = \sigma_2^2$.

2. $X \mid (Y=y)$ is a normal r.v. To verify this statement we substitute the necessary ingredients into the formula defining the relevant conditional density:

$$f_{X \mid Y}(x \mid y) = \frac{f_{X,Y}(x,y)}{f_Y(y)} = \frac{1}{\sqrt{2\pi(1-\rho^2)}\,\sigma_1}\, e^{-\frac{(x-a(y))^2}{2\sigma_1^2(1-\rho^2)}}.$$

In other words, $X \mid (Y=y) \sim N(a(y), (1-\rho^2)\sigma_1^2)$. Hence:

3. $E(X \mid Y=y) = a(y)$ or, equivalently, $E(X \mid Y) = \mu_1 + \rho\frac{\sigma_1}{\sigma_2}(Y - \mu_2)$. In particular, we see that $E(X \mid Y)$ is a linear function of $Y$.
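Corollaries 2 and 3 can be illustrated by simulation (my addition, with illustrative parameters): condition on $Y$ landing in a narrow bin around $y$ and compare the empirical mean and variance of $X$ with $a(y)$ and $(1-\rho^2)\sigma_1^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
mu1, mu2, s1, s2, rho = 0.0, 2.0, np.sqrt(2.0), 1.0, 0.5  # illustrative

cov = [[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]]
X, Y = rng.multivariate_normal([mu1, mu2], cov, size=1_000_000).T

y = 2.5
mask = np.abs(Y - y) < 0.02                 # approximate the event {Y = y}
a_y = mu1 + rho * (s1 / s2) * (y - mu2)     # E(X | Y = y) from Corollary 3
print(X[mask].mean(), a_y)                  # conditional mean
print(X[mask].var(), (1 - rho**2) * s1**2)  # conditional variance
```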

4. $E(XY) = \sigma_1\sigma_2\rho + \mu_1\mu_2$.

Proof. $E(XY) = E[E(XY \mid Y)] = E[Y\,E(X \mid Y)] = E\left[Y\left(\mu_1 + \rho\frac{\sigma_1}{\sigma_2}(Y-\mu_2)\right)\right] = \mu_1 E(Y) + \rho\frac{\sigma_1}{\sigma_2}\left[E(Y^2) - \mu_2 E(Y)\right] = \mu_1\mu_2 + \rho\frac{\sigma_1}{\sigma_2}\left[E(Y^2) - \mu_2^2\right] = \mu_1\mu_2 + \rho\frac{\sigma_1}{\sigma_2}\operatorname{Var}(Y) = \sigma_1\sigma_2\rho + \mu_1\mu_2$. □

5. $\operatorname{Cov}(X,Y) = \sigma_1\sigma_2\rho$. This follows from Corollary 4 and the formula $\operatorname{Cov}(X,Y) = E(XY) - E(X)E(Y)$.

6. $\rho(X,Y) = \rho$. In words: $\rho$ is the correlation coefficient of $X$ and $Y$. This is now obvious from the definition $\rho(X,Y) = \frac{\operatorname{Cov}(X,Y)}{\sqrt{\operatorname{Var}(X)\operatorname{Var}(Y)}}$.
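Corollaries 4-6 are easy to spot-check by Monte Carlo (an added sketch, illustrative parameters): the sample moments of draws from $N(\mu_1, \mu_2; \sigma_1^2, \sigma_2^2; \rho)$ should approach $\sigma_1\sigma_2\rho + \mu_1\mu_2$, $\sigma_1\sigma_2\rho$ and $\rho$.

```python
import numpy as np

rng = np.random.default_rng(1)
mu1, mu2, s1, s2, rho = 1.0, 0.0, 2.0, 0.5, 0.7  # illustrative
cov = [[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]]
X, Y = rng.multivariate_normal([mu1, mu2], cov, size=500_000).T

print(np.mean(X * Y), s1 * s2 * rho + mu1 * mu2)  # Corollary 4
print(np.cov(X, Y)[0, 1], s1 * s2 * rho)          # Corollary 5
print(np.corrcoef(X, Y)[0, 1], rho)               # Corollary 6
```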

Exercise. Show that $X$ and $Y$ are independent iff $\rho = 0$. (We proved this in the lecture; it is also easily seen from the joint p.d.f.)

Remark. It is possible to show that the m.g.f. of $X, Y$ is

$$M_{X,Y}(t_1, t_2) = e^{(\mu_1 t_1 + \mu_2 t_2) + \frac{1}{2}(\sigma_1^2 t_1^2 + 2\rho\sigma_1\sigma_2 t_1 t_2 + \sigma_2^2 t_2^2)}$$

Many of the above statements follow from it. (To actually do this is a very useful exercise.)
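The m.g.f. in the Remark can likewise be checked by simulation (my addition; small $t_1, t_2$ keep the Monte Carlo estimate of $E[e^{t_1 X + t_2 Y}]$ stable).

```python
import numpy as np

rng = np.random.default_rng(2)
mu1, mu2, s1, s2, rho = 0.5, -1.0, 1.0, 1.5, 0.4  # illustrative
cov = [[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]]
X, Y = rng.multivariate_normal([mu1, mu2], cov, size=2_000_000).T

t1, t2 = 0.3, -0.2
mc = np.mean(np.exp(t1 * X + t2 * Y))    # Monte Carlo estimate of the m.g.f.
exact = np.exp(mu1 * t1 + mu2 * t2
               + 0.5 * (s1**2 * t1**2 + 2 * rho * s1 * s2 * t1 * t2
                        + s2**2 * t2**2))
print(mc, exact)  # should be close
```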

The Multivariate Normal Distribution.

Using vector and matrix notation. To study the joint normal distributions of more than two r.v.'s, it is convenient to use vectors and matrices. But let us first introduce these notations for the case of two normal r.v.'s $X_1, X_2$. We set

$$\mathbf{X} = \begin{pmatrix} X_1 \\ X_2 \end{pmatrix}, \quad \mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}, \quad \mathbf{t} = \begin{pmatrix} t_1 \\ t_2 \end{pmatrix}, \quad \mathbf{m} = \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix}, \quad V = \begin{pmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2 \end{pmatrix}$$

Then $\mathbf{m}$ is the vector of means and $V$ is the variance-covariance matrix. Note that $|V| = \sigma_1^2\sigma_2^2(1-\rho^2)$ and

$$V^{-1} = \frac{1}{1-\rho^2}\begin{pmatrix} \frac{1}{\sigma_1^2} & \frac{-\rho}{\sigma_1\sigma_2} \\ \frac{-\rho}{\sigma_1\sigma_2} & \frac{1}{\sigma_2^2} \end{pmatrix}$$

Hence $f_{\mathbf{X}}(\mathbf{x}) = \frac{1}{(2\pi)^{2/2}|V|^{1/2}}\, e^{-\frac{1}{2}(\mathbf{x}-\mathbf{m})^T V^{-1}(\mathbf{x}-\mathbf{m})}$ for all $\mathbf{x}$. Also $M_{\mathbf{X}}(\mathbf{t}) = e^{\mathbf{t}^T\mathbf{m} + \frac{1}{2}\mathbf{t}^T V\mathbf{t}}$.

We again use matrix and vector notation, but now there are $n$ random variables, so that $\mathbf{X}$, $\mathbf{x}$, $\mathbf{t}$ and $\mathbf{m}$ are now $n$-vectors with $i$th entries $X_i$, $x_i$, $t_i$ and $\mu_i$, and $V$ is the $n \times n$ matrix with $ii$th entry $\sigma_i^2$ and $ij$th entry (for $i \neq j$) $\sigma_{ij}$. Note that $V$ is symmetric, so that $V^T = V$.


The joint p.d.f. is $f_{\mathbf{X}}(\mathbf{x}) = \frac{1}{(2\pi)^{n/2}|V|^{1/2}}\, e^{-\frac{1}{2}(\mathbf{x}-\mathbf{m})^T V^{-1}(\mathbf{x}-\mathbf{m})}$ for all $\mathbf{x}$. We say that $\mathbf{X} \sim N(\mathbf{m}, V)$.
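The general-$n$ density formula can be checked against scipy (an illustrative sketch of my own, with an arbitrary $3 \times 3$ positive-definite $V$).

```python
import numpy as np
from scipy.stats import multivariate_normal

m = np.array([1.0, 0.0, -1.0])       # illustrative mean vector
V = np.array([[2.0, 0.5, 0.2],
              [0.5, 1.0, 0.3],
              [0.2, 0.3, 1.5]])      # symmetric positive-definite V

def mvn_pdf(x):
    """f_X(x) = (2 pi)^(-n/2) |V|^(-1/2) exp(-(x-m)' V^{-1} (x-m) / 2)."""
    n = len(m)
    d = x - m
    quad_form = d @ np.linalg.solve(V, d)
    return np.exp(-quad_form / 2) / np.sqrt((2 * np.pi)**n * np.linalg.det(V))

x = np.array([0.3, -0.4, 0.1])
print(mvn_pdf(x), multivariate_normal(mean=m, cov=V).pdf(x))  # should agree
```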

We can find the joint m.g.f. quite easily.

$$M_{\mathbf{X}}(\mathbf{t}) = E\left[e^{\sum_{j=1}^n t_j X_j}\right] = E\left[e^{\mathbf{t}^T\mathbf{X}}\right] = \int_{-\infty}^{\infty}\!\!\cdots\!\int_{-\infty}^{\infty} \frac{1}{(2\pi)^{n/2}|V|^{1/2}}\, e^{-\frac{1}{2}\left((\mathbf{x}-\mathbf{m})^T V^{-1}(\mathbf{x}-\mathbf{m}) - 2\mathbf{t}^T\mathbf{x}\right)}\, dx_1 \cdots dx_n$$

We do the equivalent of completing the square, i.e. we write

$$(\mathbf{x}-\mathbf{m})^T V^{-1}(\mathbf{x}-\mathbf{m}) - 2\mathbf{t}^T\mathbf{x} = (\mathbf{x}-\mathbf{m}-\mathbf{a})^T V^{-1}(\mathbf{x}-\mathbf{m}-\mathbf{a}) + b$$

for a suitable choice of the $n$-vector $\mathbf{a}$ of constants and a constant $b$. Then

$$M_{\mathbf{X}}(\mathbf{t}) = e^{-b/2}\int_{-\infty}^{\infty}\!\!\cdots\!\int_{-\infty}^{\infty} \frac{1}{(2\pi)^{n/2}|V|^{1/2}}\, e^{-\frac{1}{2}(\mathbf{x}-\mathbf{m}-\mathbf{a})^T V^{-1}(\mathbf{x}-\mathbf{m}-\mathbf{a})}\, dx_1 \cdots dx_n = e^{-b/2},$$

since the remaining integrand is the density of $N(\mathbf{m}+\mathbf{a}, V)$ and so integrates to 1. We just need to find $\mathbf{a}$ and $b$. Expanding, we have

$$((\mathbf{x}-\mathbf{m})-\mathbf{a})^T V^{-1}((\mathbf{x}-\mathbf{m})-\mathbf{a}) + b = (\mathbf{x}-\mathbf{m})^T V^{-1}(\mathbf{x}-\mathbf{m}) - 2\mathbf{a}^T V^{-1}(\mathbf{x}-\mathbf{m}) + \mathbf{a}^T V^{-1}\mathbf{a} + b$$
$$= (\mathbf{x}-\mathbf{m})^T V^{-1}(\mathbf{x}-\mathbf{m}) - 2\mathbf{a}^T V^{-1}\mathbf{x} + \left[2\mathbf{a}^T V^{-1}\mathbf{m} + \mathbf{a}^T V^{-1}\mathbf{a} + b\right]$$

This has to equal $(\mathbf{x}-\mathbf{m})^T V^{-1}(\mathbf{x}-\mathbf{m}) - 2\mathbf{t}^T\mathbf{x}$ for all $\mathbf{x}$. Hence we need $\mathbf{a}^T V^{-1} = \mathbf{t}^T$ and $b = -\left[2\mathbf{a}^T V^{-1}\mathbf{m} + \mathbf{a}^T V^{-1}\mathbf{a}\right]$. Hence $\mathbf{a} = V\mathbf{t}$ and $b = -\left[2\mathbf{t}^T\mathbf{m} + \mathbf{t}^T V\mathbf{t}\right]$. Therefore

$$M_{\mathbf{X}}(\mathbf{t}) = e^{-b/2} = e^{\mathbf{t}^T\mathbf{m} + \frac{1}{2}\mathbf{t}^T V\mathbf{t}}$$
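The completing-the-square step can be verified mechanically (an added check, not in the notes): with $\mathbf{a} = V\mathbf{t}$ and $b = -(2\mathbf{t}^T\mathbf{m} + \mathbf{t}^T V\mathbf{t})$, the two quadratic expressions agree at any $\mathbf{x}$.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
m = rng.normal(size=n)
M = rng.normal(size=(n, n))
V = M @ M.T + n * np.eye(n)      # a random positive-definite V
Vinv = np.linalg.inv(V)
t = rng.normal(size=n)
x = rng.normal(size=n)

a = V @ t                        # the claimed choice of a
b = -(2 * t @ m + t @ V @ t)     # the claimed choice of b

lhs = (x - m) @ Vinv @ (x - m) - 2 * t @ x
rhs = (x - m - a) @ Vinv @ (x - m - a) + b
print(lhs, rhs)                  # equal up to floating-point error
```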

Results obtained using the m.g.f.

1. Any (non-empty) subset of multivariate normals is multivariate normal. Simply put $t_j = 0$ for all $j$ for which $X_j$ is not in the subset. For example,

$$M_{X_1}(t_1) = M_{X_1,\ldots,X_n}(t_1, 0, \ldots, 0) = e^{t_1\mu_1 + t_1^2\sigma_1^2/2}.$$

Hence $X_1 \sim N(\mu_1, \sigma_1^2)$. A similar result holds for each $X_i$. This identifies the parameters $\mu_i$ and $\sigma_i^2$ as the mean and variance of $X_i$. Also

$$M_{X_1,X_2}(t_1, t_2) = M_{X_1,\ldots,X_n}(t_1, t_2, 0, \ldots, 0) = e^{t_1\mu_1 + t_2\mu_2 + \frac{1}{2}(t_1^2\sigma_1^2 + 2\sigma_{12}t_1t_2 + t_2^2\sigma_2^2)}$$

Hence $X_1$ and $X_2$ have a bivariate normal distribution with $\sigma_{12} = \operatorname{Cov}(X_1, X_2)$. A similar result holds for the joint distribution of $X_i$ and $X_j$ for $i \neq j$. This identifies $V$ as the variance-covariance matrix for $X_1, \ldots, X_n$.
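Result 1 says marginals are read off from the corresponding entries of $\mathbf{m}$ and $V$; a simulation sketch of my own (illustrative parameters):

```python
import numpy as np

rng = np.random.default_rng(4)
m = np.array([0.0, 1.0, -2.0])
V = np.array([[1.0, 0.4, 0.1],
              [0.4, 2.0, 0.3],
              [0.1, 0.3, 1.5]])   # illustrative variance-covariance matrix

samples = rng.multivariate_normal(m, V, size=500_000)
X1, X2 = samples[:, 0], samples[:, 1]

# (X1, X2) should be bivariate normal with mean (m[0], m[1]) and the
# top-left 2x2 block of V as its variance-covariance matrix.
print(X1.mean(), X1.var(), m[0], V[0, 0])
print(np.cov(X1, X2)[0, 1], V[0, 1])
```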

2. $\mathbf{X}$ is a vector of independent random variables iff $V$ is diagonal (i.e. all off-diagonal entries are zero, so that $\sigma_{ij} = 0$ for $i \neq j$).

Proof. From Result 1, if the $X$'s are independent then $\sigma_{ij} = \operatorname{Cov}(X_i, X_j) = 0$ for all $i \neq j$, so that $V$ is diagonal.

If $V$ is diagonal then $\mathbf{t}^T V\mathbf{t} = \sum_{j=1}^n \sigma_j^2 t_j^2$ and hence

$$M_{\mathbf{X}}(\mathbf{t}) = e^{\mathbf{t}^T\mathbf{m} + \frac{1}{2}\mathbf{t}^T V\mathbf{t}} = \prod_{j=1}^n e^{\mu_j t_j + \sigma_j^2 t_j^2/2} = \prod_{j=1}^n M_{X_j}(t_j)$$

By the uniqueness of the joint m.g.f., $X_1, \ldots, X_n$ are independent.
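Independence shows up concretely in the density as well: for diagonal $V$ the joint p.d.f. factorizes into a product of univariate normal densities (an added sketch, not part of the notes).

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

m = np.array([0.0, 1.0, -1.0])
sig = np.array([1.0, 0.5, 2.0])    # illustrative standard deviations
V = np.diag(sig**2)                # diagonal variance-covariance matrix

x = np.array([0.2, 0.9, -1.3])
joint = multivariate_normal(mean=m, cov=V).pdf(x)
product = np.prod(norm.pdf(x, loc=m, scale=sig))
print(joint, product)              # equal: the components are independent
```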

3. Linearly independent linear functions of multivariate normal random variables are multivariate normal random variables. If $\mathbf{Y} = A\mathbf{X} + \mathbf{b}$, where $A$ is an $n \times n$ non-singular matrix and $\mathbf{b}$ is a (column) $n$-vector of constants, then $\mathbf{Y} \sim N(A\mathbf{m}+\mathbf{b}, AVA^T)$.

Proof. Use the joint m.g.f.:

$$M_{\mathbf{Y}}(\mathbf{t}) = E[e^{\mathbf{t}^T\mathbf{Y}}] = E[e^{\mathbf{t}^T(A\mathbf{X}+\mathbf{b})}] = e^{\mathbf{t}^T\mathbf{b}}\, E[e^{(A^T\mathbf{t})^T\mathbf{X}}] = e^{\mathbf{t}^T\mathbf{b}}\, M_{\mathbf{X}}(A^T\mathbf{t}) = e^{\mathbf{t}^T\mathbf{b}}\, e^{(A^T\mathbf{t})^T\mathbf{m} + \frac{1}{2}(A^T\mathbf{t})^T V(A^T\mathbf{t})} = e^{\mathbf{t}^T(A\mathbf{m}+\mathbf{b}) + \frac{1}{2}\mathbf{t}^T(AVA^T)\mathbf{t}}$$

This is just the m.g.f. for the multivariate normal distribution with vector of means $A\mathbf{m}+\mathbf{b}$ and variance-covariance matrix $AVA^T$. Hence, from the uniqueness of the joint m.g.f., $\mathbf{Y} \sim N(A\mathbf{m}+\mathbf{b}, AVA^T)$.

Note that, by Result 1, a subset of the $Y$'s is multivariate normal.

NOTE. The results concerning the vector of means and variance-covariance matrix for linear functions of random variables hold regardless of the joint distribution of $X_1, \ldots, X_n$. We define the expectation of a vector of random variables $\mathbf{X}$, $E[\mathbf{X}]$, to be the vector of the expectations, and the expectation of a matrix of random variables $Y$, $E[Y]$, to be the matrix of the expectations. Then the variance-covariance matrix of $\mathbf{X}$ is just $E[(\mathbf{X}-E[\mathbf{X}])(\mathbf{X}-E[\mathbf{X}])^T]$. Two useful facts, (i) and (ii) below, then give the mean and variance-covariance matrix of any linear function of $\mathbf{X}$, as illustrated in the sketch after the derivation.

The following results are easily obtained:

(i) Let $A$ be an $m \times n$ matrix of constants, $B$ be an $m \times k$ matrix of constants and $Y$ be an $n \times k$ matrix of random variables. Then $E[AY+B] = AE[Y]+B$.

Proof. The $ij$th entry of $E[AY+B]$ is $E[\sum_{r=1}^n A_{ir}Y_{rj} + B_{ij}] = \sum_{r=1}^n A_{ir}E[Y_{rj}] + B_{ij}$, which is the $ij$th entry of $AE[Y]+B$. The result is then immediate.

(ii) Let $C$ be a $k \times m$ matrix of constants and $Y$ be an $n \times k$ matrix of random variables. Then $E[YC] = E[Y]C$.

Proof. Just transpose the equation; the result then follows from (i).

Hence if $\mathbf{Z} = A\mathbf{X}+\mathbf{b}$, where $A$ is an $m \times n$ matrix of constants, $\mathbf{b}$ is an $m$-vector of constants and $\mathbf{X}$ is an $n$-vector of random variables with $E[\mathbf{X}] = \boldsymbol{\mu}$ and variance-covariance matrix $V$, then

$$E[\mathbf{Z}] = E[A\mathbf{X}+\mathbf{b}] = AE[\mathbf{X}]+\mathbf{b} = A\boldsymbol{\mu}+\mathbf{b}$$

Also the variance-covariance matrix for $\mathbf{Z}$ is just

$$E[(\mathbf{Z}-E[\mathbf{Z}])(\mathbf{Z}-E[\mathbf{Z}])^T] = E[A(\mathbf{X}-\boldsymbol{\mu})(\mathbf{X}-\boldsymbol{\mu})^T A^T] = AE[(\mathbf{X}-\boldsymbol{\mu})(\mathbf{X}-\boldsymbol{\mu})^T]A^T = AVA^T$$
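These formulas can be spot-checked by transforming samples (a hedged sketch with arbitrary illustrative $A$, $\mathbf{b}$, $\mathbf{m}$ and $V$, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(5)
m = np.array([1.0, -1.0])
V = np.array([[2.0, 0.6],
              [0.6, 1.0]])
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])          # non-singular illustrative matrix
b = np.array([3.0, -2.0])

X = rng.multivariate_normal(m, V, size=500_000)
Y = X @ A.T + b                     # Y = A X + b, applied row-wise

print(Y.mean(axis=0), A @ m + b)    # vector of means
print(np.cov(Y.T), A @ V @ A.T)     # variance-covariance matrix
```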

Example.

Suppose that $E[X_1] = 1$, $E[X_2] = 0$, $\operatorname{Var}(X_1) = 2$, $\operatorname{Var}(X_2) = 4$ and $\operatorname{Cov}(X_1, X_2) = 1$. Let $Y_1 = X_1 + X_2$ and $Y_2 = X_1 + aX_2$. Find the means, variances and covariance, and hence find $a$ so that $Y_1$ and $Y_2$ are uncorrelated.

Writing in vector and matrix notation, we have $E[\mathbf{Y}] = A\mathbf{m}$ and the variance-covariance matrix for $\mathbf{Y}$ is just $AVA^T$, where

$$\mathbf{m} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad V = \begin{pmatrix} 2 & 1 \\ 1 & 4 \end{pmatrix}, \quad A = \begin{pmatrix} 1 & 1 \\ 1 & a \end{pmatrix}$$

Therefore

$$A\mathbf{m} = \begin{pmatrix} 1 & 1 \\ 1 & a \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad AVA^T = \begin{pmatrix} 1 & 1 \\ 1 & a \end{pmatrix}\begin{pmatrix} 2 & 1 \\ 1 & 4 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 1 & a \end{pmatrix} = \begin{pmatrix} 8 & 3+5a \\ 3+5a & 2+2a+4a^2 \end{pmatrix}$$

Hence $Y_1$ and $Y_2$ have means $1$ and $1$, variances $8$ and $2+2a+4a^2$, and covariance $3+5a$. They are therefore uncorrelated iff $3+5a = 0$, i.e. iff $a = -\frac{3}{5}$.
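The example's arithmetic can be reproduced with numpy (a quick check of the computation above):

```python
import numpy as np

m = np.array([1.0, 0.0])
V = np.array([[2.0, 1.0],
              [1.0, 4.0]])

a = -3.0 / 5.0                 # the value that makes the covariance vanish
A = np.array([[1.0, 1.0],
              [1.0, a]])

print(A @ m)                   # means of Y1, Y2: [1, 1]
print(A @ V @ A.T)             # off-diagonal entry 3 + 5a = 0 when a = -3/5
```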