
Lecture 7: Processes with Deterministic Trends

1 Introduction

Recall that a process is covariance stationary if it has constant expectation, finite variance, and autocovariance functions that do not depend on time. In this lecture, we will introduce one class of nonstationary processes: processes with a deterministic trend. In the next lecture, we will introduce another type: processes with a stochastic trend. We are already familiar with the stationary ARMA process,

$$
\tilde{x}_t = \psi(L) u_t.
$$

Now consider an ARMA process with a drift,

$$
x_t = \delta t + \tilde{x}_t = \delta t + \psi(L) u_t. \qquad (1)
$$

Now the expectation of $x_t$ is $\delta t$, which is a function of time, so this process is nonstationary. We can decompose the process $x_t$ into two components: a trend component ($\delta t$) and a stationary component ($\tilde{x}_t$). If $\delta$ is known, then we can detrend $x_t$, i.e., form $x_t - \delta t$ to recover $\tilde{x}_t$, which is a stationary process; the process $\{x_t\}$ is therefore said to be trend stationary.

The $k$-period-ahead forecast of $x$ is

$$
E_t(x_{t+k}) = E_t\big(\delta(t+k) + \psi(L) u_{t+k}\big)
= \delta(t+k) + E_t\big(u_{t+k} + \psi_1 u_{t+k-1} + \cdots + \psi_k u_t + \psi_{k+1} u_{t-1} + \cdots + \psi_{t+k} u_0\big)
= \delta(t+k) + \psi_k u_t + \psi_{k+1} u_{t-1} + \cdots + \psi_{t+k} u_0.
$$

The forecast error variance is

$$
E_t\big(x_{t+k} - E_t(x_{t+k})\big)^2
= E_t\big(u_{t+k} + \psi_1 u_{t+k-1} + \cdots + \psi_{k-1} u_{t+1}\big)^2
= (1 + \psi_1^2 + \psi_2^2 + \cdots + \psi_{k-1}^2)\,\sigma^2.
$$

Note that since $\tilde{x}_t = \psi(L) u_t$ is a stationary process, as $k \to \infty$ the forecast error variance converges to the unconditional variance of $\tilde{x}_t$, which is bounded. This is a very important difference between processes with a deterministic trend and those with a stochastic trend. Another feature of a trend stationary process is that the effect of a shock at time $t$ on the level of $\tilde{x}$, and hence on $x$, eventually dies off, as in a stationary process. This is another difference from a unit root process. We will discuss this further in the next lecture. Figure 1 plots a simulated path of (1), where $\delta = 1$, $u_t \sim N(0,1)$ and $\psi(L) = 1 + 0.5L$.

Copyright 2002-2006 by Ling Hu.


[Figure 1: Simulated MA(2) Process with Deterministic Trend]
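For readers who want to reproduce something like Figure 1, here is a minimal Python sketch (my own illustration, not part of the original notes) that simulates the trend-stationary process (1) with $\delta = 1$, $u_t \sim N(0,1)$ and $\psi(L) = 1 + 0.5L$, and then detrends it with the known $\delta$:

```python
import numpy as np

# Minimal simulation sketch of the trend-stationary process in (1), assuming
# delta = 1, u_t ~ N(0, 1) and psi(L) = 1 + 0.5 L, as in Figure 1.
rng = np.random.default_rng(0)
n = 50
delta = 1.0
u = rng.standard_normal(n + 1)        # u_0, u_1, ..., u_n

t = np.arange(1, n + 1)
x_tilde = u[1:] + 0.5 * u[:-1]        # stationary MA component psi(L) u_t
x = delta * t + x_tilde               # x_t = delta * t + psi(L) u_t

detrended = x - delta * t             # detrending with the known delta recovers x_tilde
print(detrended.var())                # for larger n this approaches (1 + 0.5**2) = 1.25
```

Subtracting the known trend $\delta t$ leaves only the stationary MA component, whose sample variance settles near its unconditional variance, in line with the bounded forecast error variance discussed above.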

2 Estimation and Inference

2.1 OLS estimation of the simple time trend model

Consider the process

$$
y_t = \alpha + \delta t + u_t = x_t'\beta + u_t, \qquad (2)
$$

where $\beta' = [\alpha, \delta]$, $x_t' = [1, t]$, and $u_t \sim \text{i.i.d. } N(0, \sigma^2)$. We can use MLE to estimate the parameters $\beta$, and the MLE estimator is equivalent to the OLS estimator. So we will only discuss OLS estimation, which is applicable to a more general class of errors. In the following analysis, we assume $u_t \sim \text{i.i.d.}(0, \sigma^2)$ and $E(u_t^4) < \infty$.

The OLS estimate of $\beta$ is

$$
\hat{\beta}_n = \left( \sum_{t=1}^n x_t x_t' \right)^{-1} \left( \sum_{t=1}^n x_t y_t \right) \qquad (3)
$$
$$
= \beta_0 + \left( \sum_{t=1}^n x_t x_t' \right)^{-1} \left( \sum_{t=1}^n x_t u_t \right). \qquad (4)
$$

Since $x_t$ is deterministic, taking the expectation of $\hat{\beta}_n$ gives $E(\hat{\beta}_n) = \beta_0$, so $\hat{\beta}_n$ is an unbiased estimator of $\beta_0$. It can also be shown to be consistent (it converges to the true value). So far the results are just the same as those for a stationary process. However, although $\hat{\beta}_n = (\hat{\alpha}_n, \hat{\delta}_n)'$ converges to the true parameter $\beta_0 = (\alpha_0, \delta_0)'$, it turns out that its two components $\hat{\alpha}_n$ and $\hat{\delta}_n$ converge at different rates!

To see this, note that $x_t x_t'$ is a $2 \times 2$ matrix,

$$
\sum_{t=1}^n x_t x_t' =
\begin{pmatrix}
n & \sum_{t=1}^n t \\
\sum_{t=1}^n t & \sum_{t=1}^n t^2
\end{pmatrix}.
$$

Some simple math gives

$$
\sum_{t=1}^n t = n(n+1)/2 = O(n^2), \qquad
\sum_{t=1}^n t^2 = n(n+1)(2n+1)/6 = O(n^3),
$$

or

$$
\frac{1}{n^2} \sum_{t=1}^n t \to \frac{1}{2}, \qquad
\frac{1}{n^3} \sum_{t=1}^n t^2 \to \frac{1}{3}.
$$

More generally, we have

$$
\frac{1}{n^{r+1}} \sum_{t=1}^n t^r \to \frac{1}{r+1}.
$$

So the elements of the matrix $X_n'X_n$ diverge at different rates. To obtain a convergent matrix, we would have to divide it by $n^3$ (the largest divergence rate),

$$
n^{-3} \sum_{t=1}^n x_t x_t' \to
\begin{pmatrix}
0 & 0 \\
0 & \frac{1}{3}
\end{pmatrix}.
$$

Unfortunately, this limiting matrix is singular and cannot be inverted. It turns out that to obtain a nondegenerate limiting distribution, $\hat{\alpha}_n$ needs to be rescaled by $n^{1/2}$ and $\hat{\delta}_n$ by $n^{3/2}$. Therefore, to get a proper limit of $\hat{\beta}_n$, we need to normalize it with the matrix

$$
H_n =
\begin{pmatrix}
n^{1/2} & 0 \\
0 & n^{3/2}
\end{pmatrix}.
$$
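As a quick numerical check of why this particular scaling works (an illustrative sketch I am adding, not from the notes), one can verify that $n^{-3}\sum_t x_t x_t'$ approaches a singular matrix while $H_n^{-1}\left(\sum_t x_t x_t'\right) H_n^{-1}$ settles down to a nonsingular limit:

```python
import numpy as np

# Illustrative check (not from the notes): scaling sum(x_t x_t') by n^{-3} gives a
# singular limit, while the H_n^{-1} (.) H_n^{-1} scaling converges to [[1, 1/2], [1/2, 1/3]].
for n in (10, 100, 10_000):
    t = np.arange(1, n + 1)
    X = np.column_stack([np.ones(n), t])     # x_t' = [1, t]
    S = X.T @ X                              # sum of x_t x_t'
    H_inv = np.diag([n ** -0.5, n ** -1.5])
    print(n)
    print(np.round(S / n**3, 4))             # upper-left entries vanish: singular limit
    print(np.round(H_inv @ S @ H_inv, 4))    # converges to a nonsingular matrix
```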

Now premultiply $\hat{\beta}_n - \beta_0$ by $H_n$:

$$
H_n(\hat{\beta}_n - \beta_0) =
\begin{pmatrix}
n^{1/2}(\hat{\alpha}_n - \alpha_0) \\
n^{3/2}(\hat{\delta}_n - \delta_0)
\end{pmatrix}
= H_n \left( \sum_{t=1}^n x_t x_t' \right)^{-1} H_n H_n^{-1} \left( \sum_{t=1}^n x_t u_t \right)
= \left[ H_n^{-1} \left( \sum_{t=1}^n x_t x_t' \right) H_n^{-1} \right]^{-1} \left[ H_n^{-1} \left( \sum_{t=1}^n x_t u_t \right) \right].
$$

We first derive the limit of the matrix $H_n^{-1} X_n' X_n H_n^{-1}$:

$$
H_n^{-1} \left( \sum_{t=1}^n x_t x_t' \right) H_n^{-1} =
\begin{pmatrix}
n^{1/2} & 0 \\
0 & n^{3/2}
\end{pmatrix}^{-1}
\begin{pmatrix}
n & \sum_{t=1}^n t \\
\sum_{t=1}^n t & \sum_{t=1}^n t^2
\end{pmatrix}
\begin{pmatrix}
n^{1/2} & 0 \\
0 & n^{3/2}
\end{pmatrix}^{-1}
\to
\begin{pmatrix}
1 & \frac{1}{2} \\
\frac{1}{2} & \frac{1}{3}
\end{pmatrix}
$$

as $n \to \infty$. We will use $Q$ to denote this matrix,

$$
Q =
\begin{pmatrix}
1 & \frac{1}{2} \\
\frac{1}{2} & \frac{1}{3}
\end{pmatrix}.
$$
Next, we need to derive the asymptotic distribution of $H_n^{-1}\left(\sum_{t=1}^n x_t u_t\right)$:

$$
H_n^{-1} \left( \sum_{t=1}^n x_t u_t \right) =
\begin{pmatrix}
n^{-1/2} & 0 \\
0 & n^{-3/2}
\end{pmatrix}
\begin{pmatrix}
\sum_{t=1}^n u_t \\
\sum_{t=1}^n t u_t
\end{pmatrix}
=
\begin{pmatrix}
n^{-1/2} \sum_{t=1}^n u_t \\
n^{-1/2} \sum_{t=1}^n (t/n) u_t
\end{pmatrix}.
$$

We will show that this vector is asymptotically normal with mean zero and covariance matrix $\sigma^2 Q$. First consider the term $n^{-1/2} \sum_{t=1}^n u_t$. Applying the classical central limit theorem directly, we have

$$
n^{-1/2} \sum_{t=1}^n u_t \to N(0, \sigma^2).
$$

Second, consider the term $n^{-1/2} \sum_{t=1}^n (t/n) u_t$. The series $\{(t/n) u_t\}$ is not i.i.d., but it is a martingale difference sequence (mds), and we can apply the CLT for mds. To apply it, we need to show that the three conditions in Proposition 15 in lecture note 4 (Proposition 7.8 in Hamilton) are satisfied. First, $E[(t/n) u_t]^2 = (t^2/n^2)\sigma^2$, and $n^{-1} \sum_{t=1}^n (t^2/n^2)\sigma^2 \to \sigma^2/3 > 0$, so condition (a) is satisfied. Second, take $r = 4$; since $u_t$ has a finite fourth moment by assumption, condition (b) is satisfied. Finally, we need to show that

$$
n^{-1} \sum_{t=1}^n (t^2/n^2) u_t^2 \to \sigma^2/3.
$$

Since we have that

$$
n^{-1} \sum_{t=1}^n (t^2/n^2) \sigma^2 \to \sigma^2/3,
$$

we just need to show that

$$
n^{-1} \sum_{t=1}^n (t^2/n^2)(u_t^2 - \sigma^2) \to 0. \qquad (5)
$$

Note that the series $\{(t^2/n^2)(u_t^2 - \sigma^2)\}$ is an mds with variance

$$
(t^4/n^4)\, E[(u_t^2 - \sigma^2)^2] = (t^4/n^4)[E(u_t^4) - \sigma^4] = (t^4/n^4)(\mu_4 - \sigma^4) < \infty,
$$

so (5) holds by the law of large numbers for mds. Now all three conditions are satisfied, and we can apply the CLT for mds:

$$
n^{-1/2} \sum_{t=1}^n (t/n) u_t \to N(0, \sigma^2/3).
$$
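As a quick numerical sanity check (a sketch I am adding, not part of the original notes), one can simulate $n^{-1/2}\sum_{t=1}^n (t/n)\,u_t$ many times and verify that its sample mean and variance are close to $0$ and $\sigma^2/3$:

```python
import numpy as np

# Monte Carlo check (illustrative sketch, not from the notes): the scaled sum
# n^{-1/2} * sum_t (t/n) u_t should be approximately N(0, sigma^2 / 3) for large n.
rng = np.random.default_rng(1)
n, reps, sigma = 500, 20_000, 1.0
weights = np.arange(1, n + 1) / n          # the deterministic weights t/n

u = sigma * rng.standard_normal((reps, n))
stat = (u * weights).sum(axis=1) / np.sqrt(n)

print(stat.mean())   # close to 0
print(stat.var())    # close to sigma**2 / 3, i.e. about 0.333
```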

The remaining task is to show that $\{n^{-1/2} \sum_{t=1}^n u_t\}$ and $\{n^{-1/2} \sum_{t=1}^n (t/n) u_t\}$ are asymptotically jointly normal. To show this, it suffices to show that any linear combination of the two series is asymptotically normal, i.e., that

$$
n^{-1/2} \sum_{t=1}^n [\lambda_1 + \lambda_2 (t/n)]\, u_t \to N(0, \Sigma).
$$

Note that the series $\{\lambda_1 u_t + \lambda_2 (t/n) u_t\}$ is an mds with variance $\sigma^2[\lambda_1^2 + 2\lambda_1\lambda_2 (t/n) + \lambda_2^2 (t/n)^2]$ satisfying

$$
\frac{1}{n} \sum_{t=1}^n \sigma^2[\lambda_1^2 + 2\lambda_1\lambda_2 (t/n) + \lambda_2^2 (t/n)^2]
\to \sigma^2[\lambda_1^2 + 2\lambda_1\lambda_2 (1/2) + \lambda_2^2 (1/3)] = \sigma^2 \lambda' Q \lambda
$$

for $\lambda = (\lambda_1, \lambda_2)'$. So we can apply the CLT and conclude that this linear combination of the two elements converges to a Gaussian distribution, which implies that the two elements are jointly Gaussian:

$$
\begin{pmatrix}
n^{-1/2} \sum_{t=1}^n u_t \\
n^{-1/2} \sum_{t=1}^n (t/n) u_t
\end{pmatrix}
\to N(0, \sigma^2 Q).
$$

Therefore, we get

$$
H_n(\hat{\beta}_n - \beta_0)
= \left[ H_n^{-1} \left( \sum_{t=1}^n x_t x_t' \right) H_n^{-1} \right]^{-1}
\begin{pmatrix}
n^{-1/2} \sum_{t=1}^n u_t \\
n^{-3/2} \sum_{t=1}^n t u_t
\end{pmatrix}
\to N(0, \sigma^2 Q^{-1} Q Q^{-1}) = N(0, \sigma^2 Q^{-1}).
$$

We can summarize the results in

Proposition 1. Let $y_t$ be generated according to the simple deterministic time trend model (2), where $u_t \sim \text{i.i.d.}(0, \sigma^2)$ with finite fourth moment. Then

$$
\begin{pmatrix}
n^{1/2}(\hat{\alpha}_n - \alpha) \\
n^{3/2}(\hat{\delta}_n - \delta)
\end{pmatrix}
\to N\left(
\begin{pmatrix} 0 \\ 0 \end{pmatrix},\;
\sigma^2
\begin{pmatrix}
1 & \frac{1}{2} \\
\frac{1}{2} & \frac{1}{3}
\end{pmatrix}^{-1}
\right).
$$

Note that for the estimate of $\delta$ we not only have $\hat{\delta}_n \to_p \delta$, we also have $n(\hat{\delta}_n - \delta) \to_p 0$. In this case, the estimate $\hat{\delta}$ is said to be superconsistent.
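To see the two rates at work numerically, the following sketch (my own illustration with arbitrary parameter values, not taken from the notes) estimates (2) by OLS for increasing sample sizes and reports the scaled errors $n^{1/2}(\hat{\alpha}_n - \alpha)$ and $n^{3/2}(\hat{\delta}_n - \delta)$, which stay of comparable magnitude even though the unscaled error in $\hat{\delta}_n$ shrinks much faster:

```python
import numpy as np

# Illustrative Monte Carlo (not from the notes): OLS in y_t = alpha + delta*t + u_t.
# alpha_hat converges at rate n^{1/2}, delta_hat at rate n^{3/2} (superconsistency),
# so the scaled errors printed below stay of comparable size as n grows.
rng = np.random.default_rng(2)
alpha, delta, sigma = 2.0, 0.5, 1.0

for n in (50, 500, 5000):
    t = np.arange(1, n + 1)
    X = np.column_stack([np.ones(n), t])
    y = alpha + delta * t + sigma * rng.standard_normal(n)
    a_hat, d_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    print(n,
          round(np.sqrt(n) * (a_hat - alpha), 3),   # n^{1/2}-scaled error in alpha_hat
          round(n ** 1.5 * (d_hat - delta), 3))     # n^{3/2}-scaled error in delta_hat
```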

2.2 Hypothesis testing for the simple time trend model

When the innovation term $u_t$ is Gaussian, then since the regressors in the simple trend model are deterministic, the OLS estimates $\hat{\alpha}_n$ and $\hat{\delta}_n$ are Gaussian and the usual OLS $t$ and $F$ tests have exact small-sample $t$ and $F$ distributions. In this section, we consider the case when $u_t$ is non-Gaussian. We first consider a test of a null hypothesis on $\alpha$, say $\alpha = a$. Let $s_n^2$ be the OLS estimate of $\sigma^2$: $s_n^2 = \frac{1}{n-2}\sum_{t=1}^n \hat{u}_t^2$. Then the $t$ statistic is

$$
t_n = \frac{\hat{\alpha}_n - a}{\left\{ s_n^2 \begin{pmatrix} 1 & 0 \end{pmatrix} (X_n'X_n)^{-1} \begin{pmatrix} 1 \\ 0 \end{pmatrix} \right\}^{1/2}}
= \frac{\sqrt{n}(\hat{\alpha}_n - a)}{\left\{ s_n^2 \begin{pmatrix} \sqrt{n} & 0 \end{pmatrix} (X_n'X_n)^{-1} \begin{pmatrix} \sqrt{n} \\ 0 \end{pmatrix} \right\}^{1/2}}
= \frac{\sqrt{n}(\hat{\alpha}_n - a)}{\left\{ s_n^2 \begin{pmatrix} 1 & 0 \end{pmatrix} H_n (X_n'X_n)^{-1} H_n \begin{pmatrix} 1 \\ 0 \end{pmatrix} \right\}^{1/2}}
\to \frac{\sqrt{n}(\hat{\alpha}_n - a)}{\left\{ \sigma^2 \begin{pmatrix} 1 & 0 \end{pmatrix} Q^{-1} \begin{pmatrix} 1 \\ 0 \end{pmatrix} \right\}^{1/2}},
$$

where we use that $s_n^2 \to_p \sigma^2$, $\begin{pmatrix} \sqrt{n} & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \end{pmatrix} H_n$, and $H_n(X_n'X_n)^{-1}H_n = [H_n^{-1}(X_n'X_n)H_n^{-1}]^{-1} \to Q^{-1}$. Let $q^{11}$ denote the $(1,1)$ element of $Q^{-1}$; then under the null hypothesis we know that $\sqrt{n}(\hat{\alpha}_n - a) \to N(0, \sigma^2 q^{11})$. So we can see that

$$
t_n \to \frac{\sqrt{n}(\hat{\alpha}_n - a)}{\sigma \sqrt{q^{11}}},
$$

which is an asymptotically Gaussian variable divided by the square root of its variance, so it has a $N(0,1)$ distribution. Similarly, to test the null hypothesis $\delta = b$, write

$$
t_n = \frac{\hat{\delta}_n - b}{\left\{ s_n^2 \begin{pmatrix} 0 & 1 \end{pmatrix} (X_n'X_n)^{-1} \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right\}^{1/2}}
= \frac{n^{3/2}(\hat{\delta}_n - b)}{\left\{ s_n^2 \begin{pmatrix} 0 & n^{3/2} \end{pmatrix} (X_n'X_n)^{-1} \begin{pmatrix} 0 \\ n^{3/2} \end{pmatrix} \right\}^{1/2}}
= \frac{n^{3/2}(\hat{\delta}_n - b)}{\left\{ s_n^2 \begin{pmatrix} 0 & 1 \end{pmatrix} H_n (X_n'X_n)^{-1} H_n \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right\}^{1/2}}
\to \frac{n^{3/2}(\hat{\delta}_n - b)}{\sigma \sqrt{q^{22}}},
$$
which is again asymptotically $N(0,1)$.

We have just considered tests on either $\alpha$ or $\delta$. Now consider a test involving both $\alpha$ and $\delta$:

$$
H_0: r_1\alpha + r_2\delta = r.
$$

We apply a similar procedure as before, but $\alpha$ and $\delta$ have different convergence rates, $n^{1/2}$ and $n^{3/2}$; which one shall we use to derive the asymptotics? It turns out (again) that the slower rate dominates. Write

$$
t_n = \frac{r_1\hat{\alpha}_n + r_2\hat{\delta}_n - r}{\left\{ s_n^2 \begin{pmatrix} r_1 & r_2 \end{pmatrix} (X_n'X_n)^{-1} \begin{pmatrix} r_1 \\ r_2 \end{pmatrix} \right\}^{1/2}}
= \frac{\sqrt{n}(r_1\hat{\alpha}_n + r_2\hat{\delta}_n - r)}{\left\{ s_n^2\, \sqrt{n} \begin{pmatrix} r_1 & r_2 \end{pmatrix} (X_n'X_n)^{-1} \begin{pmatrix} r_1 \\ r_2 \end{pmatrix} \sqrt{n} \right\}^{1/2}}
= \frac{\sqrt{n}(r_1\hat{\alpha}_n + r_2\hat{\delta}_n - r)}{\left\{ s_n^2\, \sqrt{n} \begin{pmatrix} r_1 & r_2 \end{pmatrix} H_n^{-1} H_n (X_n'X_n)^{-1} H_n H_n^{-1} \begin{pmatrix} r_1 \\ r_2 \end{pmatrix} \sqrt{n} \right\}^{1/2}},
$$

where

$$
r_n \equiv H_n^{-1} \begin{pmatrix} r_1 \\ r_2 \end{pmatrix} \sqrt{n}
= \begin{pmatrix} r_1 \\ r_2/n \end{pmatrix}
\to \begin{pmatrix} r_1 \\ 0 \end{pmatrix}.
$$

Since $\hat{\delta}_n$ is superconsistent,

$$
\sqrt{n}(r_1\hat{\alpha}_n + r_2\hat{\delta}_n - r) = \sqrt{n}(r_1\hat{\alpha}_n + r_2\delta - r) + o_p(1).
$$

So

$$
t_n \to \frac{\sqrt{n}(r_1\hat{\alpha}_n + r_2\delta - r)}{\left\{ \sigma^2 \begin{pmatrix} r_1 & 0 \end{pmatrix} Q^{-1} \begin{pmatrix} r_1 \\ 0 \end{pmatrix} \right\}^{1/2}}.
$$

Further, note that

$$
\sqrt{n}(r_1\hat{\alpha}_n + r_2\delta - r) = \sqrt{n}\,[r_1(\hat{\alpha}_n - \alpha) + r_1\alpha + r_2\delta - r] = \sqrt{n}\, r_1(\hat{\alpha}_n - \alpha)
$$

under the null hypothesis. Therefore, under the null,

$$
t_n \to \frac{\sqrt{n}\, r_1(\hat{\alpha}_n - \alpha)}{\left\{ \sigma^2 r_1^2\, q^{11} \right\}^{1/2}},
$$

which is asymptotically $N(0,1)$. This example shows that a test involving a single restriction across parameters with different rates of convergence is dominated asymptotically by the parameter with the slowest rate of convergence. Finally, consider a joint test of separate hypotheses about $\alpha$ and $\delta$,

$$
H_0:
\begin{pmatrix} \alpha \\ \delta \end{pmatrix}
=
\begin{pmatrix} a \\ b \end{pmatrix},
$$

or, in vector form, $\beta = c$. Then we can compute a Wald statistic

$$
W_n = (\hat{\beta}_n - c)'\,[s_n^2 (X_n'X_n)^{-1}]^{-1}\,(\hat{\beta}_n - c),
$$

and we have

$$
W_n \to \chi^2(2).
$$
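To make these distributional claims concrete, here is a small Monte Carlo sketch (my own addition with arbitrary parameter values, not part of the notes) that computes the $t$ statistic for $\delta$ and the Wald statistic for the joint hypothesis on $(\alpha, \delta)$ in model (2) and checks that they behave like $N(0,1)$ and $\chi^2(2)$ draws:

```python
import numpy as np

# Monte Carlo sketch (illustrative, not from the notes): in model (2) with i.i.d. errors,
# the t statistic for delta and the Wald statistic for (alpha, delta) jointly should be
# approximately N(0, 1) and chi^2(2) in large samples.
rng = np.random.default_rng(4)
alpha, delta, sigma, n, reps = 1.0, 0.3, 1.0, 200, 5_000

t = np.arange(1, n + 1)
X = np.column_stack([np.ones(n), t])
XtX_inv = np.linalg.inv(X.T @ X)

t_stats, wald_stats = [], []
for _ in range(reps):
    y = alpha + delta * t + sigma * rng.standard_normal(n)
    b = XtX_inv @ X.T @ y                     # OLS estimate (alpha_hat, delta_hat)
    resid = y - X @ b
    s2 = resid @ resid / (n - 2)              # s_n^2
    t_stats.append((b[1] - delta) / np.sqrt(s2 * XtX_inv[1, 1]))
    diff = b - np.array([alpha, delta])
    wald_stats.append(diff @ np.linalg.inv(s2 * XtX_inv) @ diff)

print(np.mean(t_stats), np.var(t_stats))      # roughly 0 and 1
print(np.mean(wald_stats))                    # roughly 2, the mean of chi^2(2)
```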

2.3 OLS Estimation of Autoregression with Time Trend

Now consider a general autoregressive process around a deterministic time trend,

$$
y_t = \alpha + \delta t + \phi_1 y_{t-1} + \cdots + \phi_p y_{t-p} + u_t,
$$

or, in matrix form,

$$
y_t = x_t'\beta + u_t,
$$

where $x_t' = [y_{t-1}, y_{t-2}, \ldots, y_{t-p}, 1, t]$ and $\beta' = [\phi_1, \ldots, \phi_p, \alpha, \delta]$. Sims, Stock and Watson (1990) suggest that we find a matrix $G$ and use it to transform this process into

$$
y_t = x_t' G' [G']^{-1} \beta + u_t = \tilde{x}_t' \beta^* + u_t,
$$

where $\tilde{x}_t = G x_t = [\tilde{y}_{t-1}, \tilde{y}_{t-2}, \ldots, \tilde{y}_{t-p}, 1, t]'$ and $\beta^* = [G']^{-1}\beta = [\phi_1^*, \phi_2^*, \ldots, \phi_p^*, \alpha^*, \delta^*]'$.

The idea is that after the transformation we can write $y_t$ in terms of a zero-mean covariance stationary process ($\tilde{y}_{t-j}$), a constant, and a time trend. In doing this, we isolate components of the OLS coefficient vector with different rates of convergence. After the transformation, $\hat{\phi}^*_{1,n}, \hat{\phi}^*_{2,n}, \ldots$ will converge at the usual rate of $\sqrt{n}$, while $\hat{\alpha}^*_n$ and $\hat{\delta}^*_n$ will behave asymptotically like $\hat{\alpha}_n$ and $\hat{\delta}_n$ in the simple time trend model. The matrix $G$ is of dimension $(p+2)\times(p+2)$:

$$
G =
\begin{pmatrix}
1 & 0 & \cdots & 0 & 0 & 0 \\
0 & 1 & \cdots & 0 & 0 & 0 \\
\vdots & \vdots & & \vdots & \vdots & \vdots \\
0 & 0 & \cdots & 1 & 0 & 0 \\
-\alpha^* + \delta^* & -\alpha^* + 2\delta^* & \cdots & -\alpha^* + p\delta^* & 1 & 0 \\
\cdots & \cdots & \cdots & \cdots & \cdots & \cdots
\end{pmatrix}
$$
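To connect this setup to something concrete, here is a small sketch (my own illustration with arbitrary parameter values, not part of the notes) that estimates an AR(1) around a linear time trend by OLS, i.e., the regression of this section with $x_t' = [y_{t-1}, 1, t]$:

```python
import numpy as np

# Illustrative sketch (not from the notes, parameter values arbitrary): OLS on
# y_t = alpha + delta*t + phi*y_{t-1} + u_t, the autoregression-with-trend regression
# with regressors x_t' = [y_{t-1}, 1, t].
rng = np.random.default_rng(3)
n, alpha, delta, phi, sigma = 2000, 1.0, 0.05, 0.6, 1.0

y = np.zeros(n + 1)
for s in range(1, n + 1):
    y[s] = alpha + delta * s + phi * y[s - 1] + sigma * rng.standard_normal()

t = np.arange(1, n + 1)
X = np.column_stack([y[:-1], np.ones(n), t])   # regressors [y_{t-1}, 1, t]
beta_hat = np.linalg.lstsq(X, y[1:], rcond=None)[0]
print(beta_hat)   # roughly [phi, alpha, delta] = [0.6, 1.0, 0.05]
```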