
Asymptotic Theory - Statistics (22869_6Lecture7_Asymptotics_2019.pdf)

Statistics

Asymptotic Theory

Shiu-Sheng Chen

Department of Economics

National Taiwan University

Fall 2019

Shiu-Sheng Chen (NTU Econ)StatisticsFall 20191/28

Asymptotic Theory: Motivation

Asymptotic theory (or large sample theory) aims at answering the question: what happens as we gather more and more data? In particular, given a random sample {X1, X2, X3, ..., Xn} and a statistic

Tn = t(X1, X2, ..., Xn),

what is the limiting behavior of Tn as n → ∞?


Asymptotic Theory: Motivation

Why ask such a question?

For instance, given a random sample {Xi}, i = 1, ..., n, i.i.d. N(μ, σ²), we know that

X̄n ∼ N(μ, σ²/n).

However, if {Xi} are i.i.d. (μ, σ²) without the normality assumption, what is the distribution of X̄n?

Indeed, we do not know.

Is it possible to find a good approximation of the distribution of X̄n as n → ∞?

Yes! This is where asymptotic theory kicks in.


Section 1

Preliminary Knowledge


Preliminary Knowledge

Limit

Markov Inequality

Chebyshev Inequality


Limit of a Real Sequence

Definition (Limit)

If for every ε > 0 there exists an integer N(ε) such that

|bn − b| < ε for all n > N(ε),

then we say that the sequence of real numbers {bn} converges to the limit b. It is denoted by

lim_{n→∞} bn = b


Markov Inequality

Theorem (Markov Inequality)

Suppose that X is a random variable such that P(X ≥ 0) = 1. Then for every real number m > 0,

P(X ≥ m) ≤ E(X)/m
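A quick Monte Carlo sanity check of the inequality (an illustration, not part of the slides; the Exponential population and its parameters are arbitrary choices):

```python
# Monte Carlo check of the Markov Inequality: for a non-negative
# random variable X, P(X >= m) <= E(X)/m.
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=100_000)  # X >= 0 with E(X) = 2

m = 5.0
tail_prob = np.mean(x >= m)   # empirical P(X >= m)
bound = x.mean() / m          # empirical E(X)/m

assert tail_prob <= bound
```

The bound is loose here (roughly 0.4 against a true tail probability near 0.08), which is typical: Markov uses only the mean.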


Chebyshev Inequality

Theorem (Chebyshev Inequality)

Let Y be a random variable with mean E(Y) and variance Var(Y). Then for every number ε > 0,

P(|Y − E(Y)| ≥ ε) ≤ Var(Y)/ε²

Proof: Let X = [Y − E(Y)]². Then

P(X ≥ 0) = 1

and

E(X) = Var(Y).

Since |Y − E(Y)| ≥ ε exactly when X ≥ ε², the result follows by applying the Markov Inequality with m = ε².
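The same kind of Monte Carlo check works for Chebyshev (illustration only; the skewed Gamma population is an arbitrary choice to show the inequality does not need symmetry or normality):

```python
# Monte Carlo check of the Chebyshev Inequality:
# P(|Y - E(Y)| >= eps) <= Var(Y)/eps^2, here for a skewed Gamma variable.
import numpy as np

rng = np.random.default_rng(1)
y = rng.gamma(shape=2.0, scale=1.0, size=100_000)  # E(Y) = 2, Var(Y) = 2

eps = 2.0
tail_prob = np.mean(np.abs(y - y.mean()) >= eps)  # empirical left side
bound = y.var() / eps**2                          # empirical Var(Y)/eps^2

assert tail_prob <= bound
```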


Section 2

Modes of Convergence


Types of Convergence

For a random variable, we consider three modes of convergence:

Converge in Probability

Converge in Distribution

Converge in Mean Square


Converge in Probability

Definition (Converge in Probability)

Let {Yn} be a sequence of random variables and let Y be another random variable. If for every ε > 0,

P(|Yn − Y| < ε) → 1 as n → ∞,

then we say that Yn converges in probability to Y, and denote it by Yn →p Y.

Equivalently,

P(|Yn − Y| ≥ ε) → 0 as n → ∞.


Converge in Probability

Example: let {Xi}, i = 1, ..., n, be i.i.d. Bernoulli(0.5), and compute Yn = X̄n = (1/n) Σi Xi.

In this case, Yn →p 0.5.

[Figure: running sample mean over 1,000 coin tosses (y-axis from 0.2 to 1.0), fluctuating early and settling near 0.5.]
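The coin-toss figure can be reproduced in a few lines (an illustrative sketch; the seed and sample size are arbitrary):

```python
# Running sample mean of i.i.d. Bernoulli(0.5) tosses: it wanders for
# small n and settles near 0.5, the convergence-in-probability limit.
import numpy as np

rng = np.random.default_rng(2)
n = 1_000
tosses = rng.integers(0, 2, size=n)               # fair-coin draws
running_mean = np.cumsum(tosses) / np.arange(1, n + 1)

# Late values of the running mean cluster tightly around 0.5.
assert abs(running_mean[-1] - 0.5) < 0.06
```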


Converge in Distribution

Definition (Converge in Distribution)

Let {Yn} be a sequence of random variables with distribution functions FYn(y) (denoted by Fn(y) for simplicity), and let Y be another random variable with distribution function FY(y). If

lim_{n→∞} Fn(y) = FY(y)

at all y for which FY(y) is continuous, then we say that Yn converges in distribution to Y.

It is denoted by

Yn →d Y.

FY(y) is called the limiting distribution of Yn.


Converge in Mean Square

Definition (Converge in Mean Square)

Let {Yn} be a sequence of random variables and let Y be another random variable. If

E(Yn − Y)² → 0 as n → ∞,

then we say that Yn converges in mean square to Y. It is denoted by

Yn →ms Y.

It is also called convergence in quadratic mean.

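A numerical illustration (not from the slides; the Normal population and replication count are arbitrary choices): for Yn = X̄n and Y = μ, E(Yn − Y)² = σ²/n, which shrinks to 0 as n grows.

```python
# Mean-square convergence of the sample mean: estimate E[(X̄n - μ)^2]
# by Monte Carlo for growing n and watch it fall like σ²/n.
import numpy as np

rng = np.random.default_rng(3)
mu, sigma, reps = 1.0, 2.0, 5_000

def mse_of_mean(n):
    """Monte Carlo estimate of E[(X̄n - μ)^2] over `reps` samples."""
    means = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
    return np.mean((means - mu) ** 2)

mses = [mse_of_mean(n) for n in (10, 100, 1000)]   # ≈ σ²/n = 0.4, 0.04, 0.004
```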

Section 3

Important Theorems


Theorems

Theorem

Yn →ms c if and only if lim_{n→∞} E(Yn) = c and lim_{n→∞} Var(Yn) = 0.

Proof: It follows from the decomposition

E(Yn − c)² = E([Yn − E(Yn)]²) + [E(Yn) − c]² = Var(Yn) + [E(Yn) − c]²,

which tends to 0 exactly when both terms do.


Theorems

Theorem

If Yn →ms Y, then Yn →p Y.

Proof: Note that P(|Yn − Y|² ≥ 0) = 1, and by the Markov Inequality, for any k > 0,

P(|Yn − Y| ≥ k) = P(|Yn − Y|² ≥ k²) ≤ E(|Yn − Y|²)/k² → 0.


Weak Law of Large Numbers, WLLN

Theorem (WLLN)

Given a random sample {Xi}, i = 1, ..., n, with σ² = Var(X1) < ∞, let X̄n denote the sample mean, and note that E(X̄n) = E(X1) = μ. Then

X̄n →p μ.

Proof: (1) by the Chebyshev Inequality, or (2) via convergence in mean square, since E(X̄n) = μ and Var(X̄n) = σ²/n → 0.

The sample mean X̄n gets closer (in the probability sense) to the population mean μ as the sample size increases. That is, if we use X̄n as a guess of the unknown μ, the sample mean makes a good guess.
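The WLLN in action (an illustration, not from the slides; the skewed Exponential population, ε, and replication count are arbitrary): the chance that X̄n misses μ by at least ε shrinks as n grows.

```python
# WLLN illustration: estimate P(|X̄n - μ| >= ε) by repeated sampling
# from an Exponential(μ) population and watch it shrink with n.
import numpy as np

rng = np.random.default_rng(4)
mu, eps, reps = 1.0, 0.1, 2_000

def prob_far(n):
    """Monte Carlo estimate of P(|X̄n - μ| >= ε)."""
    means = rng.exponential(scale=mu, size=(reps, n)).mean(axis=1)
    return np.mean(np.abs(means - mu) >= eps)

probs = [prob_far(n) for n in (10, 100, 1000)]
assert probs[0] > probs[1] > probs[2]   # shrinking toward 0
```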


WLLN for Other Moments

Note that the WLLN can be stated as

(1/n) Σⁿᵢ₌₁ Xi = (X1 + X2 + ⋯ + Xn)/n →p E(X1).

Let Yi = Xi², and by the WLLN,

(1/n) Σⁿᵢ₌₁ Yi = (Y1 + Y2 + ⋯ + Yn)/n →p E(Y1).

Hence,

(1/n) Σⁿᵢ₌₁ Xi² = (X1² + X2² + ⋯ + Xn²)/n →p E(X1²).
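The second-moment version in code (illustration only; the Normal population and its parameters are arbitrary choices):

```python
# WLLN for second moments: the average of Xi² converges to
# E(X1²) = μ² + σ².
import numpy as np

rng = np.random.default_rng(5)
mu, sigma, n = 2.0, 1.0, 200_000
x = rng.normal(mu, sigma, size=n)

second_moment = np.mean(x**2)   # near μ² + σ² = 5
assert abs(second_moment - (mu**2 + sigma**2)) < 0.05
```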


Example: An Application of WLLN

Assume Wn ∼ Binomial(n, μ), and let Yn = Wn/n. Then Yn →p μ.

Why? Since Wn = Σi Xi with Xi i.i.d. Bernoulli(μ), E(X1) = μ and Var(X1) = μ(1 − μ), the result follows by the WLLN.


Central Limit Theorem, CLT

Theorem (CLT)

Let {Xi}, i = 1, ..., n, be a random sample with E(X1) = μ < ∞ and Var(X1) = σ² < ∞. Then

Zn = (X̄n − E(X̄n)) / √Var(X̄n) = √n (X̄n − μ)/σ →d N(0, 1).

If a random sample is taken from any distribution with mean μ and variance σ², regardless of whether this distribution is discrete or continuous, the distribution of the random variable Zn will be approximately standard normal in large samples.


CLT

Using the notation of asymptotic distributions,

(X̄n − μ) / √(σ²/n) ∼A N(0, 1),

or

X̄n ∼A N(μ, σ²/n),

where ∼A denotes an asymptotic distribution ('A' for asymptotically).


An Application of CLT

Example: Assume {Xi} i.i.d. Bernoulli(μ). Then

(X̄n − μ) / √(μ(1 − μ)/n) →d N(0, 1).

Why? Since E(X̄n) = μ and Var(X̄n) = σ²/n = μ(1 − μ)/n.


Continuous Mapping Theorem

Theorem (CMT)

Given Yn →p Y, and g(·) is continuous, then

g(Yn) →p g(Y).

Proof: omitted here.

Examples: if Yn →p Y, then (provided g is continuous at the relevant points, e.g. Y ≠ 0 for the reciprocal)

1/Yn →p 1/Y,  Yn² →p Y²,  √Yn →p √Y.
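The continuous mapping idea in code (an illustration, not from the slides; the Normal population and μ = 4 are arbitrary choices): since X̄n →p μ, continuous transforms of X̄n land near the transformed limit.

```python
# Continuous mapping: X̄n is close to μ, so g(X̄n) is close to g(μ)
# for continuous g, here g(x) = x², √x, and 1/x.
import numpy as np

rng = np.random.default_rng(7)
mu, n = 4.0, 200_000
xbar = rng.normal(mu, 2.0, size=n).mean()   # X̄n, close to μ = 4

assert abs(xbar**2 - mu**2) < 0.2           # X̄n² near μ² = 16
assert abs(np.sqrt(xbar) - 2.0) < 0.01      # √X̄n near √μ = 2
assert abs(1 / xbar - 0.25) < 0.01          # 1/X̄n near 1/μ
```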


Theorem

Theorem

Given Wn →p W and Yn →p Y, then

Wn + Yn →p W + Y,  Wn Yn →p W Y.

Proof: omitted here.


Slutsky Theorem

Theorem

Given Wn →d W and Yn →p c, where c is a constant. Then

Wn + Yn →d W + c,
Wn Yn →d cW,
Wn / Yn →d W/c for c ≠ 0.

Proof: omitted here.

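A classic use of the Slutsky Theorem, sketched numerically (an illustration, not from the slides; the Normal population and simulation sizes are arbitrary): replacing σ with the sample standard deviation Sn →p σ in the CLT still yields a standard normal limit, i.e. √n (X̄n − μ)/Sn →d N(0, 1).

```python
# Slutsky in action: the studentized mean, with σ replaced by the
# sample s.d. Sn, still behaves like N(0, 1) for large n.
import numpy as np

rng = np.random.default_rng(8)
n, reps, mu, sigma = 500, 20_000, 1.0, 2.0

x = rng.normal(mu, sigma, size=(reps, n))
t = np.sqrt(n) * (x.mean(axis=1) - mu) / x.std(axis=1, ddof=1)

assert abs(t.mean()) < 0.03        # ≈ 0
assert abs(t.std() - 1.0) < 0.03   # ≈ 1
```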

The Delta Method

Theorem

Given √n (Yn − θ) →d N(0, σ²), let g(·) be differentiable at θ with g′(θ) ≠ 0. Then

√n (g(Yn) − g(θ)) →d N(0, [g′(θ)]² σ²).

Proof (sketch): by a first-order Taylor approximation, g(Yn) ≈ g(θ) + g′(θ)(Yn − θ), so

√n (g(Yn) − g(θ)) / g′(θ) ≈ √n (Yn − θ) →d N(0, σ²).


Example

Given {Xi}, i = 1, ..., n, i.i.d. (μ, σ²), find the asymptotic distribution of X̄n / (1 − X̄n).

Note that by the CLT,

√n (X̄n − μ) →d N(0, σ²).

Hence, by the Delta method, with

g(X̄n) = X̄n / (1 − X̄n),  g(μ) = μ / (1 − μ),  g′(μ) = 1 / (1 − μ)²,

we obtain

√n ( X̄n/(1 − X̄n) − μ/(1 − μ) ) →d N(0, σ²/(1 − μ)⁴).
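A Monte Carlo check of this example (an illustration, not from the slides; the Bernoulli(0.3) population, for which σ² = μ(1 − μ), and the simulation sizes are arbitrary choices):

```python
# Delta-method check: for Bernoulli(μ) data,
# √n (X̄n/(1-X̄n) - μ/(1-μ)) should be roughly N(0, σ²/(1-μ)^4).
import numpy as np

rng = np.random.default_rng(9)
n, reps, mu = 1_000, 20_000, 0.3
sigma2 = mu * (1 - mu)                      # Bernoulli variance

xbar = rng.binomial(n, mu, size=reps) / n   # X̄n for each replication
stat = np.sqrt(n) * (xbar / (1 - xbar) - mu / (1 - mu))

target_sd = np.sqrt(sigma2 / (1 - mu) ** 4)   # the delta-method s.d.
assert abs(stat.mean()) < 0.05
assert abs(stat.std() - target_sd) < 0.05
```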