
IEOR 4106: Introduction to Operations Research: Stochastic Models

Spring 2011, Professor Whitt

Class Lecture Notes: Tuesday, January 25.

Random Variables, Conditional Expectation and Transforms

1. Random Variables and Functions of Random Variables

(i) What is a random variable?

A (real-valued) random variable, often denoted by X (or some other capital letter), is a function mapping a probability space (S, P) into the real line R. This is shown in Figure 1. Associated with each point s in the domain S, the function X assigns one and only one value X(s) in the range R. (The set of possible values of X(s) is usually a proper subset of the real line; i.e., not all real numbers need occur. If S is a finite set with m elements, then X(s) can assume at most m different values as s varies in S.)

Figure 1: A (real-valued) random variable is a function mapping a probability space (the domain) into the real line (the range).

As such, a random variable has a probability distribution. We usually do not care about the underlying probability space, and just talk about the random variable itself, but it is good to know the full formalism. The distribution of a random variable is defined formally in the obvious way:

    F_X(t) \equiv P(X \le t) \equiv P(\{s \in S : X(s) \le t\}),

where \equiv means "equality by definition," P is the probability measure on the underlying sample space S, and \{s \in S : X(s) \le t\} is a subset of S, and thus an event in the underlying sample space S. See Section 2.1 of Ross; he goes through this very quickly. (Key point: recall that P attaches probabilities to events, which are subsets of S.)

If the underlying probability space is discrete, so that for any event E in the sample space S we have

    P(E) = \sum_{s \in E} p(s),

where p is the probability mass function (pmf), then X also has a pmf p_X on a new sample space, say S_1, defined by

    p_X(r) \equiv P(X = r) \equiv P(\{s \in S : X(s) = r\}) = \sum_{s \in \{s \in S : X(s) = r\}} p(s)  for r \in S_1.    (1)

Example 0.1 (roll of two dice)

Consider a random roll of two dice. The natural sample space is

    S \equiv \{(i, j) : 1 \le i \le 6, 1 \le j \le 6\},

where each of the 36 points in S is assigned equal probability p(s) = 1/36. (See Example 4 in Section 1.2.) The random variable X might record the sum of the values on the two dice, i.e., X(s) \equiv X((i, j)) = i + j. Then the new sample space is

    S_1 = \{2, 3, 4, ..., 12\}.

In this case, using formula (1), we get the pmf of X being p_X(r) \equiv P(X = r) for r \in S_1, where

    p_X(2) = p_X(12) = 1/36,
    p_X(3) = p_X(11) = 2/36,
    p_X(4) = p_X(10) = 3/36,
    p_X(5) = p_X(9) = 4/36,
    p_X(6) = p_X(8) = 5/36,
    p_X(7) = 6/36.
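
To make formula (1) concrete in code, here is a minimal Python sketch (my own illustration, not from the notes; the names S, p, X just mirror the notation above) that builds the 36-point sample space and computes the pmf of the sum:

    from fractions import Fraction
    from collections import defaultdict

    # Sample space S of Example 0.1: all 36 ordered pairs, each with p(s) = 1/36.
    S = [(i, j) for i in range(1, 7) for j in range(1, 7)]
    p = {s: Fraction(1, 36) for s in S}

    def X(s):
        """The random variable X: the sum of the values on the two dice."""
        i, j = s
        return i + j

    # Formula (1): p_X(r) is the sum of p(s) over {s in S : X(s) = r}.
    p_X = defaultdict(Fraction)
    for s in S:
        p_X[X(s)] += p[s]

    # Matches the table above: p_X(2) = p_X(12) = 1/36, ..., p_X(7) = 6/36.
    for r in sorted(p_X):
        print(r, p_X[r])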

(ii) What is a function of a random variable?

Given that we understand what is a random variable, we are prepared to understand what is a function of a random variable. Suppose that we are given a random variable X mapping the probability space (S, P) into the real line R, and we are given a function h mapping R into R. Then h(X) is a function mapping the probability space (S, P) into R. As a consequence, h(X) is itself a new random variable, i.e., a new function mapping (S, P) into R, as depicted in Figure 2.

Figure 2: A (real-valued) function of a random variable is itself a random variable, i.e., a function mapping a probability space into the real line.

As a consequence, the distribution of the new random variable h(X) can be expressed in different (equivalent) ways:

    F_{h(X)}(t) \equiv P(\{s \in S : h(X(s)) \le t\})
                \equiv P_X(\{r \in R : h(r) \le t\})
                \equiv P_{h(X)}(\{k \in R : k \le t\}),

where P is the probability measure on S in the first line, P_X is the probability measure on R (the distribution of X) in the second line, and P_{h(X)} is the probability measure on R (the distribution of the random variable h(X)) in the third line.

Example 0.2 (more on the roll of two dice)

As in Example 0.1, consider a random roll of two dice. There we defined the random variable X to represent the sum of the values on the two rolls. Now let

    h(x) = |x - 7|,

so that h(X) \equiv |X - 7| represents the absolute difference between the observed sum of the two rolls and the average value 7. Then h(X) has a pmf on a new probability space S_2 \equiv \{0, 1, 2, 3, 4, 5\}. In this case, using formula (1) yet again, we get the pmf of h(X) being p_{h(X)}(k) \equiv P(h(X) = k) \equiv P(\{s \in S : h(X(s)) = k\}) for k \in S_2, where

    p_{h(X)}(5) = P(|X - 7| = 5) = 2/36 = 1/18,
    p_{h(X)}(4) = P(|X - 7| = 4) = 4/36 = 2/18,
    p_{h(X)}(3) = P(|X - 7| = 3) = 6/36 = 3/18,
    p_{h(X)}(2) = P(|X - 7| = 2) = 8/36 = 4/18,
    p_{h(X)}(1) = P(|X - 7| = 1) = 10/36 = 5/18,
    p_{h(X)}(0) = P(|X - 7| = 0) = 6/36 = 3/18.

In this setting we can compute probabilities for events associated with h(X) \equiv |X - 7| in three ways: using each of the pmf's p, p_X and p_{h(X)}.

(iii) How do we compute the expectation (or expected value) of a probability distribution or a random variable?

See Section 2.4. The expected value of a discrete probability distribution P is

    expected value = mean = \sum_k k P(\{k\}) = \sum_k k p(k),

where P is the probability measure on S and p is the associated pmf, with p(k) \equiv P(\{k\}). The expected value of a discrete random variable X is

    E[X] = \sum_k k P(X = k) = \sum_k k p_X(k) = \sum_{s \in S} X(s) P(\{s\}) = \sum_{s \in S} X(s) p(s).

In the continuous case, with pdf's, we have corresponding formulas, but the story gets more complicated, involving calculus for computations. The expected value of a continuous probability distribution P with density f is

    expected value = mean = \int_S x f(x) \, dx.

The expected value of a continuous random variable X with pdf f_X is

    E[X] = \int_{-\infty}^{\infty} x f_X(x) \, dx = \int_S X(s) f(s) \, ds,

where f is the pdf on S and f_X is the pdf "induced" by X on R.

(iv) How do we compute the expectation of a function of a random variable?

Now we need to put everything above together. For simplicity, suppose S is a finite set, so that X and h(X) are necessarily finite-valued random variables. Then we can compute the expected value E[h(X)] in three different ways:

    E[h(X)] = \sum_{s \in S} h(X(s)) P(\{s\}) = \sum_{s \in S} h(X(s)) p(s)
            = \sum_{r \in R} h(r) P(X = r) = \sum_{r \in R} h(r) p_X(r)
            = \sum_{t \in R} t P(h(X) = t) = \sum_{t \in R} t p_{h(X)}(t).

Similarly, we have the following expressions when all these probability distributions have probability density functions (the continuous case). First, suppose that the underlying probability distribution (measure) P on the sample space S has a probability density function (pdf) f. Then, under regularity conditions, the random variables X and h(X) have probability density functions f_X and f_{h(X)}. Then we have:

    E[h(X)] = \int_S h(X(s)) f(s) \, ds
            = \int_{-\infty}^{\infty} h(r) f_X(r) \, dr
            = \int_{-\infty}^{\infty} t f_{h(X)}(t) \, dt.

Examples 2.24 and 2.26 (in the book): Two ways to compute E[X^3] when X is uniformly distributed on [0, 1].
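
As a sanity check on the three ways of computing E[h(X)], here is a small Python sketch (my own illustration, continuing the two-dice code above) that evaluates E[|X - 7|] against p, against p_X, and against p_{h(X)}:

    from fractions import Fraction
    from collections import defaultdict

    S = [(i, j) for i in range(1, 7) for j in range(1, 7)]
    p = {s: Fraction(1, 36) for s in S}
    X = lambda s: s[0] + s[1]   # the sum of the two dice
    h = lambda x: abs(x - 7)    # h(x) = |x - 7| from Example 0.2

    # Build the pmf's p_X and p_{h(X)} via formula (1).
    p_X, p_hX = defaultdict(Fraction), defaultdict(Fraction)
    for s in S:
        p_X[X(s)] += p[s]
        p_hX[h(X(s))] += p[s]

    e1 = sum(h(X(s)) * p[s] for s in S)           # sum over the sample space S
    e2 = sum(h(r) * pr for r, pr in p_X.items())  # sum against p_X
    e3 = sum(t * pt for t, pt in p_hX.items())    # sum against p_{h(X)}
    print(e1, e2, e3)                             # all three give 35/18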

2. Random Vectors, Joint Distributions, and Conditional Distributions

We may want to talk about two or more random variables at once. For example, we may want to consider the two-dimensional random vector (X, Y).

(i) A random vector may be constructed just like a real-valued random variable. We may think of (X, Y) as a function mapping the underlying probability space (S, P) into the plane, R^2. The right representation can make linearity of expectation obvious. Here is the general property: for constants a and b,

    E[aX + bY] = aE[X] + bE[Y].

This is easy to show, writing (in the discrete case) the expected value of a function of a random vector:

    E[h(X, Y)] = \sum_{s \in S} h((X, Y)(s)) P(\{s\}),

where h is the function h(x, y) = ax + by. Hence we get

    E[aX + bY] = \sum_{s \in S} (aX(s) + bY(s)) P(\{s\})
               = a \sum_{s \in S} X(s) P(\{s\}) + b \sum_{s \in S} Y(s) P(\{s\})
               = aE[X] + bE[Y].

The first line above is a well-chosen representation. The rest is simple algebra. Note that we did not use any special properties, such as independence of X and Y.

Examples 2.31 and 2.32: Computing expectation using indicator variables.

(ii) What does it mean for two random variables X and Y to be independent random variables?

See Section 2.5.2. Pay attention to "for all." We say that X and Y are independent random variables if

    P(X \le x, Y \le y) = P(X \le x) P(Y \le y)  for all x and y.

We can rewrite that in terms of cumulative distribution functions (cdf's): we say that X and Y are independent random variables if

    F_{X,Y}(x, y) \equiv P(X \le x, Y \le y) = F_X(x) F_Y(y)  for all x and y.

When the random variables all have pdf's, that relation is equivalent to

    f_{X,Y}(x, y) = f_X(x) f_Y(y)  for all x and y.
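
Since the proof of linearity never used independence, a quick simulation can make that point concrete. Here is a minimal Python sketch (my own illustration) with strongly dependent X and Y, for which E[aX + bY] = aE[X] + bE[Y] still holds up to sampling error:

    import random

    random.seed(0)
    a, b, n = 2.0, -3.0, 200_000

    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    ys = [x * x for x in xs]  # Y = X^2: a deterministic function of X, so highly dependent

    mean = lambda v: sum(v) / len(v)
    lhs = mean([a * x + b * y for x, y in zip(xs, ys)])
    rhs = a * mean(xs) + b * mean(ys)
    print(lhs, rhs)  # both are near a*E[X] + b*E[X^2] = 2*0 - 3*1 = -3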

(iii) What is the joint distribution of (X, Y) in general?

See Section 2.5.

The joint distribution of X and Y is

    F_{X,Y}(x, y) \equiv P(X \le x, Y \le y).

(iv) How do we compute the conditional expectation of a random variable, given the value of another random variable, in the discrete case?

See Section 3.2. There are two steps: (1) find the conditional probability distribution; (2) compute the expectation of the conditional distribution, just as you would compute the expected value of an unconditional distribution. Here is an example: we first compute a conditional density, and then we compute an expected value.
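
Before the continuous example below, here is the two-step recipe in the discrete setting, as a minimal Python sketch (my own illustration, reusing the two-dice setup): condition the sum X on the value Y of the first die and compute E[X | Y = y]:

    from fractions import Fraction
    from collections import defaultdict

    S = [(i, j) for i in range(1, 7) for j in range(1, 7)]
    p = {s: Fraction(1, 36) for s in S}
    X = lambda s: s[0] + s[1]  # the sum of the two dice
    Y = lambda s: s[0]         # the value of the first die

    y = 3
    # Step (1): the conditional pmf p_{X|Y}(x | y) = P(X = x, Y = y) / P(Y = y).
    p_y = sum(p[s] for s in S if Y(s) == y)
    p_cond = defaultdict(Fraction)
    for s in S:
        if Y(s) == y:
            p_cond[X(s)] += p[s] / p_y

    # Step (2): the expectation of the conditional distribution.
    print(sum(x * px for x, px in p_cond.items()))  # 13/2, i.e., 3 + 3.5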

Example 3.6

Here we consider conditional expectation in the case of continuous random variables. We now work with joint probability density functions and conditional probability density functions. We start with the joint pdf f_{X,Y}(x, y). The definition of the conditional pdf is

    f_{X|Y}(x | y) \equiv \frac{f_{X,Y}(x, y)}{f_Y(y)},

where the pdf of Y, f_Y(y), can be found from the given joint pdf by

    f_Y(y) \equiv \int f_{X,Y}(x, y) \, dx.

Then we compute E[X | Y = y] by computing the ordinary expected value

    E[X | Y = y] = \int x f_{X|Y}(x | y) \, dx,

treating the conditional pdf as a function of x just like an ordinary pdf of x.
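
To see these formulas work on a concrete joint pdf, here is a sympy sketch (my own illustration; the pdf f_{X,Y}(x, y) = x + y on the unit square is an assumed example, not one from the notes or from Ross):

    import sympy as sp

    x, y = sp.symbols('x y', nonnegative=True)
    f_xy = x + y  # assumed joint pdf on [0,1]^2; it integrates to 1 over the square

    # f_Y(y): integrate the joint pdf over x.
    f_y = sp.integrate(f_xy, (x, 0, 1))   # y + 1/2

    # Conditional pdf and conditional expectation E[X | Y = y].
    f_cond = f_xy / f_y
    E_cond = sp.integrate(x * f_cond, (x, 0, 1))
    print(sp.simplify(E_cond))            # equivalent to (3*y + 2)/(6*y + 3)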

Example 3.13 in 10th ed., Example 3.12 in 9th ed.

This is the trapped-miner example. It shows how we can compute expected values by setting up a simple linear equation with one unknown. This is a common trick, worth knowing. As stated, the problem does not make much sense, because the miner would not make a new decision, independent of his past decisions, when he returns to his starting point. So think of the miner as a robot, programmed to make choices at random, independently of past choices. That is not even a very good robot. But even then the expected time to get out is not so large.
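
To spell out the one-unknown trick, assume the usual statement of the problem (the specific numbers here are from the standard version in Ross and are not given in these notes): the miner picks one of three doors at random; the first leads to safety after 2 hours, while the second and third return him to the start after 3 and 5 hours. Conditioning on the first choice gives a linear equation for the expected time E[T]:

    E[T] = \tfrac{1}{3}(2) + \tfrac{1}{3}(3 + E[T]) + \tfrac{1}{3}(5 + E[T])
         = \tfrac{10}{3} + \tfrac{2}{3} E[T],

so that \tfrac{1}{3} E[T] = \tfrac{10}{3}, giving E[T] = 10 hours.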

3. Moment Generating Functions

Given a random variable X, the moment generating function (mgf) of X (really of its probability distribution) is

    \psi_X(t) \equiv E[e^{tX}],

which is a function of the real variable t; see Section 2.6 of Ross. (I here use \psi, whereas Ross uses \phi.) An mgf is an example of a transform. The random variable could have a continuous distribution or a discrete distribution.

Discrete case: Given a random variable X with a probability mass function (pmf)

    p_n \equiv P(X = n), n \ge 0,

the moment generating function (mgf) of X (really of its probability distribution) is

    \psi_X(t) \equiv E[e^{tX}] \equiv \sum_{n=0}^{\infty} p_n e^{tn}.

The transform maps the pmf \{p_n : n \ge 0\} (a function of n) into the associated function of t.

Continuous case: Given a random variable X with a probability density function (pdf) f \equiv f_X on the entire real line, the moment generating function (mgf) of X (really of its probability distribution) is

    \psi(t) \equiv \psi_X(t) \equiv E[e^{tX}] \equiv \int_{-\infty}^{\infty} f(x) e^{tx} \, dx.

In the continuous case, the transform maps the pdf \{f(x) : x \in R\} (a function of x) into the associated function of t.

A major difficulty with the mgf is that it may be infinite or it may not be defined. For example, if X has the pdf f(x) \equiv A/(1 + x)^p, x > 0, for p > 1, then the mgf is infinite for all t > 0. Similarly, if X has the pmf p(n) \equiv A/n^p for n = 1, 2, ..., then the mgf is infinite for all t > 0. As a consequence, probabilists often use other transforms. In particular, the characteristic function E[e^{itX}], where i \equiv \sqrt{-1}, is designed to avoid this problem. We will not be using complex numbers in this class.

Two major uses of mgf's are: (i) calculating moments and (ii) characterizing the probability distributions of sums of random variables. Below are some illustrative examples. We did not do the Poisson example, but we did do the normal example, and a bit more. We showed that the sum of two independent normal random variables is again normally distributed, with a mean equal to the sum of the means and a variance equal to the sum of the variances. That is easily done with mgf's.

Examples 2.37, 2.41 (2.36, 2.40 in 9th ed.): Poisson

Example 2.43 (2.42 in 9th ed.): Normal
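
As a sketch of both uses (my own illustration, not worked out in the notes), the following sympy code starts from the normal mgf \psi(t) = e^{\mu t + \sigma^2 t^2/2} (a standard fact), recovers the mean and variance by differentiating at t = 0, and checks that the product of two independent normal mgf's is again a normal mgf:

    import sympy as sp

    t = sp.symbols('t')
    mu1, mu2, s1, s2 = sp.symbols('mu1 mu2 sigma1 sigma2', positive=True)

    # The mgf of a normal N(mu, sigma^2) is exp(mu*t + sigma^2*t^2/2).
    mgf = lambda mu, sig: sp.exp(mu * t + sig**2 * t**2 / 2)

    # Use (i): moments are derivatives of the mgf at t = 0.
    psi = mgf(mu1, s1)
    mean = sp.diff(psi, t).subs(t, 0)              # mu1
    var = sp.diff(psi, t, 2).subs(t, 0) - mean**2  # sigma1^2
    print(mean, sp.simplify(var))

    # Use (ii): independence makes the mgf of X + Y the product of the mgf's,
    # and the product is the mgf of N(mu1 + mu2, sigma1^2 + sigma2^2).
    prod = mgf(mu1, s1) * mgf(mu2, s2)
    target = mgf(mu1 + mu2, sp.sqrt(s1**2 + s2**2))
    print(sp.simplify(prod / target))              # 1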
