
U.U.D.M. Project Report 2011:18

Degree project in mathematics, 15 credits (Examensarbete i matematik, 15 hp)

Supervisor and examiner: Sven Erick Alm

June 2011

Department of Mathematics

Uppsala University

Approximating the Binomial Distribution by

the Normal Distribution - Error and Accuracy

Peder Hansen


Uppsala University

June 21, 2011

Abstract

Different rules of thumb are used when approximating the binomial distribution by the normal distribution. In this paper an examination is made of the size of the approximation errors. The exact probabilities of the binomial distribution are derived and then compared with the approximated values from the normal distribution. In addition, a regression model is fitted. The result is that the different rules indeed give rise to errors of different sizes. Furthermore, the regression model can be used as guidance on the maximum size of the error.

Acknowledgement

Thank you Professor Sven Erick Alm!


Contents

1 Introduction

2 Theory and methodology
  2.1 Characteristics of the distributions
  2.2 Approximation
  2.3 Continuity correction
  2.4 Error
  2.5 Method
    2.5.1 Algorithm
    2.5.2 Regression

3 Background

4 The approximation error of the distribution function
  4.1 Absolute error
  4.2 Relative error

5 Summary and conclusions

1 Introduction

No extensive examination has been found of the rules of thumb used when approximating the binomial distribution by the normal distribution, nor of the accuracy and the error which they result in. The scope of this paper is the most common approximation of a binomially distributed random variable by the normal distribution. We let X ~ Bin(n, p), with expectation E(X) = np and variance V(X) = np(1 − p), be approximated by Y, where Y ~ N(np, np(1 − p)). We denote this X ≈ Y.

The rules of thumb are a set of different guidelines, minimum values or limits, here denoted L, for np(1 − p), in order to get a good approximation; that is, np(1 − p) ≥ L. There are various kinds of rules found in the literature, but no extensive examination of the error and accuracy has been found. Reasonable approaches when comparing the errors are the maximum error and the relative error, both of which are investigated. The main focus lies on two related topics. First, there is a shorter section discussing the origin of the rules: where they come from and who the originator is. Next comes an empirical part, where the error induced by the different rules of thumb is studied. The results are both plotted and tabulated. A regression analysis is also made, which might be useful as a guideline when estimating the error in situations not covered here. In addition to the main topics, there is a section dealing with the preliminaries, notation and definitions of probability theory and mathematical statistics. Each section is more explanatory itself regarding its topic. I presume the reader to be familiar with some basic concepts of mathematical statistics and probability theory; otherwise the theoretical part would range far too wide. Therefore, proofs and theorems are only referred to. Finally there is a summarizing section, where the results of the empirical part are discussed.

2 Theory and methodology

First of all, the reader is assumed to be familiar with basic concepts in mathematical statistics and probability theory. Furthermore, as stated above, some theory is only referred to instead of being explicitly explained. Regarding the former, I suggest the reader consult for instance [1] or [4]; concerning the latter, the reader may want to read [7].

2.1 Characteristics of the distributions

As the approximation of a binomially distributed random variable by a normally distributed one is the main subject, a brief theoretical introduction to both distributions is given. We start with a binomially distributed random variable X and denote

    X ~ Bin(n, p), where n ∈ ℕ and p ∈ [0, 1].

The parameters p and n are the probability of an outcome and the number of trials. The expected value and variance of X are

    E(X) = np and V(X) = np(1 − p),

respectively. In addition, X has the probability function

    p_X(k) = P(X = k) = C(n, k) p^k (1 − p)^(n−k), where 0 ≤ k ≤ n,

and the cumulative probability function, or distribution function,

    F_X(k) = P(X ≤ k) = Σ_{i=0}^{k} C(n, i) p^i (1 − p)^(n−i).    (1)

The variable X is approximated by a normally distributed random variable, call it Y; we write

    Y ~ N(µ, σ²), where µ ∈ ℝ and σ² < ∞.

The parameters µ and σ² are the mean value and variance, E(Y) and V(Y), respectively. The density function of Y is

    f_Y(x) = 1/(σ√(2π)) e^{−(x−µ)²/(2σ²)},

and the distribution function is defined by

    F_Y(x) = P(Y ≤ x) = ∫_{−∞}^{x} 1/(σ√(2π)) e^{−(t−µ)²/(2σ²)} dt.    (2)
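The two distribution families in this section can be evaluated with a few lines of standard-library Python. The paper itself does its computations in R (dbinom, pbinom, dnorm, pnorm); the helper names below are just illustrative:

```python
from math import comb, exp, pi, sqrt

def binom_pmf(k, n, p):
    """p_X(k) = C(n, k) p^k (1 - p)^(n - k), for 0 <= k <= n."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def norm_pdf(x, mu, sigma):
    """f_Y(x) = 1 / (sigma * sqrt(2*pi)) * exp(-(x - mu)^2 / (2 * sigma^2))."""
    return exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * sqrt(2 * pi))

# Bin(10, 0.5): the most likely value is k = 5, with probability 252/1024
peak = binom_pmf(5, 10, 0.5)
```

Summing `binom_pmf` over k = 0, ..., n returns 1, which is a quick sanity check on the implementation.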

2.2 Approximation

Thanks to de Moivre, among others, we know by the central limit theorem that a sum of random variables converges to the normal distribution. A binomially distributed random variable X may be considered as a sum of Bernoulli distributed random variables. That is, let Z be a Bernoulli distributed random variable,

    Z ~ Be(p), where p ∈ [0, 1],

with probability distribution

    p_Z(k) = P(Z = k) = p for k = 1, and 1 − p for k = 0.

Consider the sum of n independent identically distributed Z_i's, i.e.

    X = Σ_{i=1}^{n} Z_i,

and note that X ~ Bin(n, p). For instance, one can see that the probability of the sum being equal to k is P(X = k) = C(n, k) p^k (1 − p)^(n−k). Hence, we know that as n → ∞ the distribution of X will be normal, and for large n approximately normal. How large n should be in order to get a good approximation also depends, to some extent, on p. Because of this, it seems reasonable to define the following approximations. Again, let X ~ Bin(n, p) and Y ~ N(µ, σ²). The most common approximation, X ≈ Y, is the one where µ = np and σ² = np(1 − p); this is also the one used here. Regarding the distribution function we get

    F_X(k) ≈ Φ( (k − np) / √(np(1 − p)) ),    (3)

where F_X(k) is defined in (1) and Φ is the standard normal distribution function. We extend the expression above and get

    F_X(b) − F_X(a) = P(a < X ≤ b) ≈ Φ( (b − np) / √(np(1 − p)) ) − Φ( (a − np) / √(np(1 − p)) ).    (4)
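Approximation (3) is straightforward to check numerically. A minimal sketch in Python (the paper's own computations are done in R with pbinom and pnorm; the helper names here are illustrative):

```python
from math import comb, erf, sqrt

def binom_cdf(k, n, p):
    """Exact distribution function F_X(k) = P(X <= k), eq. (1)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def phi(z):
    """Standard normal distribution function, via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def normal_approx(k, n, p):
    """Approximation (3): F_X(k) ~ Phi((k - np) / sqrt(np(1 - p)))."""
    return phi((k - n * p) / sqrt(n * p * (1 - p)))

# n = 40, p = 0.5 gives np(1 - p) = 10, exactly the rule-of-thumb limit
exact = binom_cdf(20, 40, 0.5)       # about 0.5627
approx = normal_approx(20, 40, 0.5)  # exactly 0.5, since k = np here
```

At the median the discrepancy is already visible: roughly 0.063, which is the order of magnitude the paper's Case 1 investigates.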

2.3 Continuity correction

We proceed with the use of continuity correction, which is recommended by [1], suggested by [4] and advised by [9], in order to decrease the error. The approximation (3) is then replaced by

    F_X(k) ≈ Φ( (k + 0.5 − np) / √(np(1 − p)) ),    (5)

and hence (4) is written as

    F_X(b) − F_X(a) = P(a < X ≤ b) ≈ Φ( (b + 0.5 − np) / √(np(1 − p)) ) − Φ( (a + 0.5 − np) / √(np(1 − p)) ).    (6)

This gives, for a single probability, with the use of continuity correction, the approximation

    p_X(k) = F_X(k) − F_X(k − 1) ≈ Φ( (k + 0.5 − np) / √(np(1 − p)) ) − Φ( ((k − 1) + 0.5 − np) / √(np(1 − p)) ),    (7)

and further we note that it can be written

    F_X(k) − F_X(k − 1) ≈ ∫_{k−0.5}^{k+0.5} f_Y(t) dt.    (8)
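The effect of the half-unit shift in (5) shows up directly in numbers. A small sketch (Python rather than the paper's R; helper names are illustrative):

```python
from math import comb, erf, sqrt

def binom_cdf(k, n, p):
    # exact F_X(k), eq. (1)
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def phi(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

def approx_plain(k, n, p):
    # eq. (3), no continuity correction
    return phi((k - n * p) / sqrt(n * p * (1 - p)))

def approx_cc(k, n, p):
    # eq. (5), with continuity correction: evaluate at k + 0.5
    return phi((k + 0.5 - n * p) / sqrt(n * p * (1 - p)))

n, p, k = 40, 0.5, 22
exact = binom_cdf(k, n, p)                   # about 0.7852
err_plain = abs(exact - approx_plain(k, n, p))  # a few percent
err_cc = abs(exact - approx_cc(k, n, p))        # orders of magnitude smaller
```

In the bulk of the distribution the corrected version (5) is dramatically more accurate; as the paper shows later, the picture in the far tails is different.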

2.4 Error

There are two common ways of measuring an error: the absolute error and the relative error. In addition, another usual measure of how close, so to speak, two distributions are to each other is the supremum norm,

    sup_A | P(X ∈ A) − P(Y ∈ A) |.

However, from a practical point of view, we will study the absolute error and relative error of the distribution function. Let a denote the exact value and â the approximated value. The absolute error is the difference between them, the real value and the approximated one. The following notation is used:

    ε_abs = |a − â|.

Therefore, the absolute error of the distribution function, denoted ε_Fabs(k), for any fixed p and n, where k ∈ ℕ, 0 ≤ k ≤ n, without use of continuity correction, is

    ε_Fabs(k) = | F_X(k) − Φ( (k − np) / √(np(1 − p)) ) |.    (9)

Regarding the relative error, in the same way as before, let a be the exact value and â the approximated value. Then the relative error is defined as

    ε_rel = |a − â| / a.

This gives the relative error of the distribution function, denoted ε_Frel(k), for any fixed p and n, where k ∈ ℕ, 0 ≤ k ≤ n, without use of continuity correction, as

    ε_Frel(k) = ε_Fabs(k) / F_X(k),

or equivalently, inserting ε_Fabs(k) from (9),

    ε_Frel(k) = | F_X(k) − Φ( (k − np) / √(np(1 − p)) ) | / F_X(k).
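Definitions (9), (11) and their relative counterparts translate directly into code. A sketch (Python stdlib; the final value matches the paper's Case 1 maximum for p = 0.5, where npq = 10 gives n = 40, reported as 0.0627):

```python
from math import comb, erf, sqrt

def binom_cdf(k, n, p):
    # exact F_X(k), eq. (1)
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def phi(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

def abs_error(k, n, p):
    # eq. (9): |F_X(k) - Phi((k - np) / sqrt(np(1-p)))|, no continuity correction
    return abs(binom_cdf(k, n, p) - phi((k - n * p) / sqrt(n * p * (1 - p))))

def rel_error(k, n, p):
    # eq. (9) divided by the exact value F_X(k)
    return abs_error(k, n, p) / binom_cdf(k, n, p)

def max_abs_error(n, p):
    # M_Fabs, eq. (11): maximum over k = 0, ..., n
    return max(abs_error(k, n, p) for k in range(n + 1))

m = max_abs_error(40, 0.5)   # about 0.0627, attained at k = 20
```

The maximum is attained at the median k = 20 for this symmetric case, which is where the step of the discrete distribution function is largest.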

2.5 Method

The examination is done in the statistical software R. The software provides predefined functions for the distribution function and probability function of the normal and binomial distributions. The examination is split into two parts: the first part deals with the absolute error of the approximation of the distribution function, and the second part concerns the relative error. The conditions under which the calculations are made are those found as guidelines in [4]. The calculations are made with the help of a two-step algorithm. At the end of each section a linear model is fitted to the error. Finally, an overview is presented, with a table and a plot of how the value of npq, where q = 1 − p, affects the maximum approximation error for different probabilities.

2.5.1 Algorithm

The two-step algorithm below is used. The values of npq mentioned in the literature are in all cases said to be equal to or larger than some limit, here denoted L. The worst case scenario, so to speak, is the case where they are equal, that is, npq = L. Therefore equalities are chosen as limits. We know that n ∈ ℕ, which means that p must be semi-fixed if the equality is to hold; the values of p are adjusted, but still remain close to the ones initially chosen. The way of doing this is a two-step algorithm. First a reasonable set of different initial probabilities p̃_i is chosen, whereafter the corresponding ñ_i values, which in turn are rounded to n_i, are derived. These are used to adjust p̃_i to p_i so that the equality holds.

1. (a) Choose a set P̃ of different initial probabilities p̃_i ∈ [0, 0.5], i = 1, ..., |P̃|.
   (b) Derive the corresponding ñ_i ∈ ℝ+ so that ñ_i p̃_i (1 − p̃_i) = L,
   (c) and continue by deriving n_i ∈ ℕ, in order to get an integer,

       n_i(p̃_i) := min { n ∈ ℕ : n p̃_i (1 − p̃_i) ≥ L }.    (10)

   Now we have a set of n_i ∈ ℕ; denote it N.

2. Choose a set P so that for every p_i ∈ P,

       n_i p_i (1 − p_i) = L.

The result is that we always keep the limit L fixed. Let us take a look at an example. Let L = 10, use continuity correction and the initial P̃ = 0.1(0.1)0.5.

Exemplifying table of algorithm values:

    i      1       2      3      4      5
    p̃_i    0.1     0.2    0.3    0.4    0.5
    ñ_i    111.11  62.50  47.62  41.67  40.00
    n_i    112     63     48     42     40
    p_i    0.099   0.198  0.296  0.391  0.500

Different rules of thumb are suggested by [4]. Using approximation (3) the authors say that np(1 − p) ≥ 10 gives reasonable approximations, and in addition, using (5), it may even be sufficient to use np(1 − p) ≥ 3. The investigation takes place under three different conditions:

    np(1 − p) = 10 without continuity correction, suggested in [4],
    np(1 − p) = 10 with continuity correction, suggested in [2],
    np(1 − p) = 3 with continuity correction, suggested in [4].

The investigation of the rules is made only for p_i ∈ [0, 0.5] due to symmetry; np(1 − p) simply takes the same values for p ∈ [0, 0.5] as for p ∈ [0.5, 1]. So for every p_i, n_i(p_i) is derived, which in turn means that we get n_i(p_i) + 1 approximations. For every n_i(p_i), and of course p_i as well, we define the maximum absolute error of the approximation of the distribution function,

    M_Fabs = max { ε_Fabs(k) : 0 ≤ k ≤ n_i(p_i) },    (11)

and in addition the maximum relative error,

    M_Frel = max { ε_Frel(k) : 0 ≤ k ≤ n_i(p_i) }.    (12)

The results are both tabulated and plotted.
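The two-step algorithm is short enough to state as code. A sketch (Python; the paper implements this in R) that reproduces the exemplifying table for L = 10:

```python
from math import ceil, sqrt

def adjust(p_tilde, L):
    """Step 1: smallest integer n with n * p_tilde * (1 - p_tilde) >= L, eq. (10).
    Step 2: adjust p_tilde to the p in [0, 0.5] solving n * p * (1 - p) = L."""
    n = ceil(L / (p_tilde * (1 - p_tilde)))
    # p(1 - p) = L/n  =>  p = (1 - sqrt(1 - 4L/n)) / 2, taking the root <= 0.5
    p = (1 - sqrt(1 - 4 * L / n)) / 2
    return n, p

# p_tilde = 0.1, 0.2, 0.3, 0.4, 0.5 should give n = 112, 63, 48, 42, 40
table = [adjust(i / 10, 10) for i in range(1, 6)]
```

For p̃ = 0.1 this yields n = 112 and p ≈ 0.0991, matching the exemplifying table, and n p(1 − p) = 10 holds exactly for every adjusted pair.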

2.5.2 Regression

Beforehand, some plots were made which indicated that the maximum absolute error could be a linear function of p. Regarding the maximum relative error, a quadratic or cubic function of p seemed plausible. Because of that, a regression is made. The model assumed to explain the absolute error is

    M_ε = α + β p + ε_l,    (13)

where M_ε is the maximum error, α is the intercept, β the slope and ε_l the error of the linear model. For the relative error, the two additional regression models are

    M_ε = α + β p + γ p² + ε_l    (14)

and

    M_ε = α + β p + γ p² + δ p³ + ε_l.    (15)

3 Background

In the first basic courses in mathematical statistics, the approximations (3) and (5) are taught. Students have learned some kind of rules of thumb they should use when applying the approximations, myself included, for example the rules suggested by Blom [4]:

    np(1 − p) ≥ 10,
    np(1 − p) ≥ 3 with continuity correction.

No motivation why the limit L is set to L = 10 and L = 3, respectively, is found in the book. On the other hand, in 1989 Blom claims that the approximation "gives decent accuracy if npq is approximately larger than 10" with continuity correction [2]. Further, it is interesting that Blom changes the suggestion between the first edition of [3] from 1970, where it says, similarly as above, that it "gives decent accuracy if np(1 − p) is approximately larger than 10" with continuity correction, and the second edition from 1984, where the same should hold but now instead without use of continuity correction. The conclusion is that there has been some fuzziness regarding the rules. Neither I nor my advisor Sven Erick Alm have found any examination of the accuracy of these rules anywhere else. With Blom [4] as starting point, I began backtracking, hoping that I could find the source of the rules of thumb. It is worth mentioning that slightly different rules have been used among authors. For instance Alm himself and Britton present a schema with rules for approximating distributions, in which np(1 − p) > 5 with continuity correction is suggested [1]. Even between countries, or from an international point of view, so to speak, differences are found. Schader and Schmid [10] say that "by far the most popular are"

    np(1 − p) > 9

and

    np > 5 for 0 < p ≤ 0.5,
    n(1 − p) > 5 for 0.5 < p < 1,

which I am not familiar with and have not found in any Swedish literature. In the mid-twentieth century, more precisely in 1952, Hald [9] wrote:

    An exhaustive examination on the accuracy of the approximation formulas has not yet been made, and we can therefore only give rough rules for the applicability of the formulas.

With these words in mind, the conclusion is that there probably does not exist any earlier work on the accuracy of the approximation. However, Hald himself made an examination in the same work for npq > 9. Further, he also points out that in cases where the binomial distribution is very skew, p < 1/(n + 1) or p > n/(n + 1), the approximation cannot be applied. Some articles have been found that briefly discuss the accuracy and error of the distributions. Mainly, the focus of the articles lies on some more advanced method of approximating than (3) or (5). An update of [2] was made by Enger, Englund, Grandell and Holst in 2005, [4]. The writers have been contacted, and Enger was said to be the one who assigned the rules. Hearing this made me believe that the source could be found. However, Enger could not recall where he had got it from [6]. That is how far I could get. Nevertheless, the examination remains as interesting as before. Discussing rules for approximating, one cannot avoid at least mentioning the Berry-Esseen theorem. The theorem gives a conservative estimate, in the sense that it gives the largest possible size of the error. It is based upon the rate of convergence of the approximation to the normal distribution. The Berry-Esseen theorem will not be further examined here, but there are several interesting articles, as the bound is improved every now and then, most recently in May 2010 [11].

4 The approximation error of the distribution function

The errors of the approximations, M_Fabs and M_Frel, defined in (11) and (12) respectively, are plotted and tabulated. The cases examined are those mentioned earlier, suggested by [4].

4.1 Absolute error

We examine the maximum absolute errors of the approximation of the distribution function, M_Fabs defined in (11), in this first part. In addition, a regression is made, as defined in (13), to see if we might find any linear trend.

Case 1: npq = 10, without continuity correction

First, the case where L = 10 = npq, without continuity correction. P̃, the set of different initial probabilities, is chosen as p̃_i = 0.01(0.01)0.50. This means that we use 50 equidistant p̃_i. The smallest probability is p_1 = 0.0100 and it has the largest error, M_Fabs = 0.0831. M_Fabs decreases the closer to 0.5 we get, which is natural since the binomial distribution tends to be skew for small p. The points make a pattern which is a bit curvy, but still the points are close to the straight line in Figure 1. Another remark is that the distance between the probabilities decreases the closer to 0.5 we get. The fact that several ñ_i are rounded to the same value of n_i, which in turn gives equal values of p_i, makes several M_Fabs the same, plotted in the same spot; they are all there, but not visible for that reason. Next we fit a linear model for M_Fabs. The result is

    M_Fabs = 0.0836 − 0.0417 p + ε_l.

The regression line is the straight line in Figure 1. The slope of the line shows that the size of M_Fabs changes moderately. Note that the sum of the absolute residuals of the regression line, Σ|ε_l|, is relatively small, so the result should give somewhat precise estimates of M_Fabs for probabilities not considered here.

Figure 1: Maximum absolute error for npq = 10 without continuity correction. The straight line is the regression line, M_Fabs = 0.0836 − 0.0417p.
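The values plotted in Figure 1 can be reproduced point by point. A sketch for the two endpoints of the plot, p_1 ≈ 0.01 (giving n = 1011) and p = 0.5 (giving n = 40), both with npq = 10 exactly (Python stdlib; the paper's reported values are 0.0831 and 0.0627):

```python
from math import comb, erf, sqrt

def phi(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

def max_abs_error(n, p):
    # M_Fabs, eq. (11), without continuity correction; the binomial CDF is
    # accumulated incrementally so the whole scan is a single O(n) pass
    s = sqrt(n * p * (1 - p))
    cdf, worst = 0.0, 0.0
    for k in range(n + 1):
        cdf += comb(n, k) * p**k * (1 - p)**(n - k)
        worst = max(worst, abs(cdf - phi((k - n * p) / s)))
    return worst

# endpoints of the Case 1 plot: p adjusted so that n p (1 - p) = 10 exactly
m_small_p = max_abs_error(1011, 0.0099910)  # about 0.083
m_half = max_abs_error(40, 0.5)             # about 0.0627
```

The monotone decrease the paper describes is visible already from these two endpoints.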

Case 2: npq = 10, with continuity correction

Under these circumstances M_Fabs decreases, and is about four times smaller than without continuity correction. The regression line,

    M_Fabs = 0.0209 − 0.0416 p + ε_l,    (16)

also has an intercept about four times smaller than in the first case. What is interesting is that the slope is approximately the same in both cases, which in turn means that for every p̃_i = 0.01(0.01)0.50, M_Fabs is also about four times smaller. This can be seen in Figure 2.

Figure 2: Maximum absolute error for npq = 10 with continuity correction. The straight line is the regression line, M_Fabs = 0.0209 − 0.0416p.

Case 3: npq = 3, with continuity correction

Finally we look at the last case regarding the absolute error, where L = 3 = npq and continuity correction is used. The plot is seen in Figure 3. P̃ is the same as above. In this case the regression line is

    M_Fabs = 0.0373 − 0.0720 p + ε_l.

The largest error, M_Fabs = 0.0355, appears at p_1 = 0.0100 and is about twice the size of the largest M_Fabs for L = 10. The slope of the line is steeper here, which in turn results in errors one order of magnitude smaller than in Case 1 for probabilities close to 0.5. Here too the sum of deviations from the regression line is relatively small, which should result in fairly good estimates of M_Fabs.

Figure 3: Maximum absolute error for npq = 3, with continuity correction. The straight line is the regression line, M_Fabs = 0.0373 − 0.0720p.

4.2 Relative error

Here, the maximum relative error of the approximation of the distribution function, M_Frel, defined in (12), is examined. The regression models (14) and (15) are both tested.

Case 1: npq = 10, without continuity correction

In the first case we perform the calculations under L = 10 = npq without continuity correction. The result is shown in Figure 4. As we see, M_Frel increases very rapidly. The smallest value of M_Frel, 16.97317, is at p_1; the largest, 138.61756, at p_50. As we see in Table 4, it is k = 0 that gives the largest error; for other values of k the error is much smaller. Furthermore we note that M_Frel is very large. If we look at a specific example where p = 0.2269, which means that n = 57, then X ~ Bin(57, 0.2269). Let X be approximated, according to (3), by Y ~ N(12.933, 3.162078²). We get that P(X ≤ 1) = 7.55 × 10^−6 and P(Y ≤ 1) = 8.04 × 10^−5. Under these circumstances we get

    M_Frel = | P(X ≤ 1) − P(Y ≤ 1) | / P(X ≤ 1) = 9.64.

The result is shown in Table 4. So the relative error is, as we also can see, large for small k and small probabilities. The regression curves, defined in (14) and (15), are

    M_Frel = 14.66 + 69.86 p + 416.14 p² + ε_l

and

    M_Frel = 21.53 − 92.26 p + 1246.60 p² − 1136.07 p³ + ε_l,

respectively. We note that there are no larger differences in accuracy depending on the choice of model. Naturally, the discrepancy of the second model is lower.

Figure 4: Maximum relative error for npq = 10 without continuity correction. The solid line is the regression curve, M_Frel = 14.66 + 69.86p + 416.14p², and the dashed line, M_Frel = 21.53 − 92.26p + 1246.60p² − 1136.07p³.
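The worked example above can be reproduced directly. A sketch (Python stdlib; the paper does the same computation with R's pbinom and pnorm):

```python
from math import comb, erf, sqrt

def binom_cdf(k, n, p):
    # exact F_X(k), eq. (1)
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def phi(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

n, p, k = 57, 0.2269, 1
exact = binom_cdf(k, n, p)                         # P(X <= 1), about 7.55e-6
approx = phi((k - n * p) / sqrt(n * p * (1 - p)))  # P(Y <= 1), about 8.04e-5
rel_err = abs(exact - approx) / exact              # about 9.6, cf. 9.64 above
```

Both tail probabilities are tiny, but the approximation overshoots by an order of magnitude, which is exactly what drives the large relative errors in Figure 4.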

Case 2: npq = 10, with continuity correction

We continue by looking at the same case as above, but here continuity correction is used. This gives somewhat remarkable results: M_Frel is actually about two times larger than without continuity correction. Let us study the same numeric example as above, except that we use continuity correction. We had p = 0.2269, which again means that n = 57, so X ~ Bin(57, 0.2269). We let X be approximated, according to (5), by Y ~ N(12.933, 3.162078²). It results in P(X ≤ 1) = 7.55 × 10^−6 and P(Y ≤ 1 + 0.5) = 0.000150. Under these circumstances we get

    M_Frel = | P(X ≤ 1) − P(Y ≤ 1 + 0.5) | / P(X ≤ 1) = 18.84,

which fits the values in Table 5. The absolute error for these small k gets dramatically worse when we use continuity correction; hence M_Frel also becomes worse. In Figure 5 one can see that the results get worse as we approach probabilities near 0.5. The regression curves, defined in (14) and (15), are

    M_Frel = 34.9 − 69.8 p + 1597.1 p² + ε_l

and

    M_Frel = 37.4 − 127.3 p + 1891.8 p² − 403.2 p³ + ε_l,

respectively. Looking at Figure 5, we see that the difference between the two models is insignificant.

Figure 5: Maximum relative error for npq = 10 with continuity correction. The solid line is the regression curve, M_Frel = 34.9 − 69.8p + 1597.1p², and the dashed line, M_Frel = 37.4 − 127.3p + 1891.8p² − 403.2p³.

Case 3: npq = 3, with continuity correction

Here, in the last case, npq = 3 and continuity correction is used; see Figure 6. This gives the regression curves, defined in (14) and (15),

    M_Frel = 0.473 + 2.204 p + 2.123 p² + ε_l

and

    M_Frel = 0.514 + 1.155 p + 7.858 p² − 7.885 p³ + ε_l,

respectively. As we see, M_Frel actually attains its smallest values here, where npq = 3 and continuity correction is used. As in the two other cases regarding the relative error, the difference between the quadratic and cubic regression models is minimal.

Figure 6: Maximum relative error for npq = 3 with continuity correction. The solid line is the regression curve, M_Frel = 0.473 + 2.204p + 2.123p², and the dashed line, M_Frel = 0.514 + 1.155p + 7.858p² − 7.885p³.

5 Summary and conclusions

The three different rules of thumb focused on turned out to give approximation errors of different sizes. Regarding the absolute errors, the largest difference is found between the case L = 10 without continuity correction and L = 10 with continuity correction. The largest error decreases from 0.08 to about 0.02, which is approximately four times smaller, a relatively large difference. Letting L = 3 and using continuity correction, we end up with a largest error of 0.035, closer to the latter case, but still between them. When using this common and simple way of approximating, different levels of tolerance are usually accepted depending on the problem. A common level in many cases may be 0.01. If we look deeper, we see that the probabilities needed to get such a small M_Fabs differ between the rules of thumb. Using npq = 10 without continuity correction does not even reach the 0.01 level of accepted accuracy. In contrast, the other two cases do reach the 0.01 level: for probabilities ≥ 0.25 in the same case but with continuity correction, and for probabilities ≥ 0.35 in the case where npq = 3. Further, it would be interesting to investigate how the relationship between k and n affects the error. In addition, another interesting extension would be tables indicating how large n should be in order to get sufficiently small errors, for different probabilities.

Concerning the relative errors, I would say that the applicability may be somewhat uncertain, due to the fact that M_Frel is very large for small values of k but decreases rapidly. This makes the plots look a bit extreme, and there are other values of k that give much better approximations. Judging by Tables 4, 5 and 6, this indeed seems to be the case. We know that the approximation is motivated by the central limit theorem; however, we also know that it does not give the same accuracy for small probabilities, that is, in the tails of the distributions. This is also the direct reason why the accuracy gets worse when using continuity correction: it puts extra mass on the already too large approximated value. In a similar way we get the explanation why the relative error decreases when the value of npq changes from 10 to 3 (as one would maybe expect the opposite): the mean value of the normal distribution, np, gets closer to 0, which in turn gives additional mass. The conclusion is that one should remember that, due to the fluctuations of the relative errors depending on k, which we also can see in Tables 4, 5 and 6, the regression model provides conservative estimates of the errors. As a natural alternative, and most likely better, Poisson approximation is recommended for small probabilities. Like in the previous case concerning the absolute errors, some more exhaustive examination of the relative error would be interesting: how large should n be to get acceptable levels of the error, for instance 10% or 5%, and so on.

References

[1] Alm, S.E. and Britton, T., Stokastik - Sannolikhetsteori och statistikteori med tillämpningar, Liber (2008).

[2] Blom, G., Sannolikhetsteori och statistikteori med tillämpningar (Bok C), Fjärde upplagan, Studentlitteratur (1989).

[3] Blom, G., Sannolikhetsteori med tillämpningar (Bok A), Studentlitteratur (1970, 1984).

[4] Blom, G., Enger, J., Englund, G., Grandell, J. and Holst, L., Sannolikhetsteori och statistikteori med tillämpningar, Femte upplagan, Studentlitteratur (2008).

[5] Cramér, H., Sannolikhetskalkylen, Almqvist & Wiksell/Geber Förlag AB (1949).

[6] Enger, J., Private communication (2011).

[7] Gut, A., An Intermediate Course in Probability, Springer (2009).

[8] Hald, A., A History of Mathematical Statistics from 1750 to 1930, Wiley, New York (1998).

[9] Hald, A., Statistical Theory with Engineering Applications, John Wiley & Sons, Inc., New York and London (1952).

[10] Schader, M. and Schmid, F., Two Rules of Thumb for the Approximation of the Binomial Distribution by the Normal Distribution, The American Statistician, 43, 1989, 23-24.

[11] Shevtsova, I.G., An Improvement of Convergence Rate Estimates in the Lyapunov Theorem, Doklady Mathematics, 82, 2010, 862-864.

Tables

Regarding the plotted probabilities, that is, the set P, only the maximum error is plotted. One cannot tell which k the error comes from, nor whether the error is of similar size for other values of k. To get a more detailed picture, this section contains tables for both the absolute errors and the relative errors. It would have been possible to tabulate all errors for all values of k, but since the cardinality of N is at times, that is, for small probabilities, relatively large, it would have taken too much space. Therefore only the 10 values of k resulting in the largest errors are tabulated. The columns in the tables that contain the values of k are in descending order, so the first value of k in each column corresponds to the maximum error that is plotted. Beside every column of k there is a column with the corresponding error. These two sub-columns share a common header giving the value of p in the specific case.

[Table body unrecoverable from the PDF extraction: for each of 38 values of p between 0.01 and 0.5, paired sub-columns list the 10 values of k with the largest absolute errors, in descending order, together with the corresponding errors.]

Table 1: Table of the 10 largest errors ε_abs and which k each comes from, for every p_i, under npq = 10 without continuity correction.

[Table body unrecoverable from the PDF extraction: for each of 38 values of p between 0.01 and 0.5, paired sub-columns list the 10 values of k with the largest absolute errors, in descending order, together with the corresponding errors.]

Table 2: Table of the 10 largest errors ε_abs and which k each comes from, for every p_i, under npq = 10 with continuity correction.
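The effect of the half-unit shift documented in Table 2 is easy to reproduce directly. A minimal sketch (again assuming n = round(npq/(p(1−p))); the helper names are mine, not the paper's):

```python
import math

def binom_cdf(k, n, p):
    """Exact binomial CDF P(X <= k), accumulated via the pmf recurrence."""
    pmf = (1 - p) ** n
    cdf = pmf
    for i in range(k):
        pmf *= (n - i) / (i + 1) * p / (1 - p)
        cdf += pmf
    return cdf

def abs_error(k, n, p, corrected):
    """|F(k) - Phi((k + shift - np)/sigma)|: shift = 0.5 applies the
    continuity correction, shift = 0 omits it."""
    mu, sigma = n * p, math.sqrt(n * p * (1 - p))
    shift = 0.5 if corrected else 0.0
    gauss = 0.5 * (1 + math.erf((k + shift - mu) / (sigma * math.sqrt(2))))
    return abs(binom_cdf(k, n, p) - gauss)
```

At p = 0.5 (so n = 40, npq = 10) the error at k = 20 falls from about 0.063 without the correction to roughly 10^-4 with it, which is why the entries of Table 2 are an order of magnitude smaller than those of Table 1.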

[Table body unrecoverable from the PDF extraction: for each of 27 values of p between 0.01 and 0.5, paired sub-columns list the 10 values of k with the largest absolute errors, in descending order, together with the corresponding errors.]

Table 3: Table of the 10 largest errors ε_abs and which k each comes from, for every p_i, under npq = 3 with continuity correction.

[Table body unrecoverable from the PDF extraction: for each of 38 values of p between 0.01 and 0.5, paired sub-columns list the 10 values of k with the largest relative errors, in descending order, together with the corresponding errors. The largest relative errors occur at k = 0 and grow from about 17 at p = 0.01 to about 139 at p = 0.5.]

Table 4: Table of the 10 largest errors ε_rel and which k each comes from, for every p_i, under npq = 10 without continuity correction.

[Table body unrecoverable from the PDF extraction: for each of 38 values of p between 0.01 and 0.5, paired sub-columns list the 10 values of k with the largest relative errors, in descending order, together with the corresponding errors.]

Table 5: Table of the 10 largest errors ε_rel and which k each comes from, for every p_i, under npq = 10 with continuity correction.

[Table body unrecoverable from the PDF extraction: for each of 27 values of p between 0.01 and 0.5, paired sub-columns list the 10 values of k with the largest relative errors, in descending order, together with the corresponding errors.]

Table 6: Table of the 10 largest errors ε_rel and which k each comes from, for every p_i, under npq = 3 with continuity correction.

