IEOR 4106: Introduction to Operations Research: Stochastic Models

Spring 2011, Professor Whitt

Class Lecture Notes: Tuesday, January 25.

Random Variables, Conditional Expectation and Transforms

1. Random Variables and Functions of Random Variables

(i) What is a random variable? A (real-valued) random variable, often denoted by $X$ (or some other capital letter), is a function mapping a probability space $(S, P)$ into the real line $\mathbb{R}$. This is shown in Figure 1. Associated with each point $s$ in the domain $S$, the function $X$ assigns one and only one value $X(s)$ in the range $\mathbb{R}$. (The set of possible values of $X(s)$ is usually a proper subset of the real line; i.e., not all real numbers need occur. If $S$ is a finite set with $m$ elements, then $X(s)$ can assume at most $m$ different values as $s$ varies in $S$.)

Figure 1: A (real-valued) random variable is a function mapping a probability space into the real line. (Domain: the probability space $(S, P)$; range: the real line $\mathbb{R}$.)

As such, a random variable has a probability distribution. We usually do not care about the underlying probability space, and just talk about the random variable itself, but it is good to know the full formalism. The distribution of a random variable is defined formally in the obvious way:

$$F_X(t) \equiv P(X \le t) \equiv P(\{s \in S : X(s) \le t\}),$$

where $\equiv$ means "equality by definition," $P$ is the probability measure on the underlying sample space $S$, and $\{s \in S : X(s) \le t\}$ is a subset of $S$, and thus an event in the underlying sample space $S$. See Section 2.1 of Ross; he puts this out very quickly. (Key point: recall that $P$ attaches probabilities to events, which are subsets of $S$.)

If the underlying probability space is discrete, so that for any event $E$ in the sample space $S$ we have

$$P(E) = \sum_{s \in E} p(s),$$

where $p$ is the probability mass function (pmf), then $X$ also has a pmf $p_X$ on a new sample space, say $S_1$, defined by

$$p_X(r) \equiv P(X = r) \equiv P(\{s \in S : X(s) = r\}) = \sum_{s \in S:\, X(s) = r} p(s) \quad \text{for } r \in S_1. \tag{1}$$

Example 0.1 (roll of two dice)

Consider a random roll of two dice. The natural sample space is

$$S \equiv \{(i, j) : 1 \le i \le 6,\ 1 \le j \le 6\},$$

where each of the 36 points in $S$ is assigned equal probability $p(s) = 1/36$. (See Example 4 in Section 1.2.) The random variable $X$ might record the sum of the values on the two dice, i.e., $X(s) \equiv X((i, j)) = i + j$. Then the new sample space is

$$S_1 = \{2, 3, 4, \ldots, 12\}.$$

In this case, using formula (1), we get the pmf of $X$ being $p_X(r) \equiv P(X = r)$ for $r \in S_1$, where

$$p_X(2) = p_X(12) = 1/36,$$
$$p_X(3) = p_X(11) = 2/36,$$
$$p_X(4) = p_X(10) = 3/36,$$
$$p_X(5) = p_X(9) = 4/36,$$
$$p_X(6) = p_X(8) = 5/36,$$
$$p_X(7) = 6/36.$$
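Formula (1) is mechanical enough to check by direct enumeration. Here is a minimal Python sketch (added here; it is not part of the original notes) that builds $p_X$ for the two-dice sum exactly as formula (1) prescribes, by summing $p(s)$ over $\{s \in S : X(s) = r\}$:

```python
from collections import defaultdict
from fractions import Fraction

# Sample space S: the 36 equally likely ordered pairs (i, j).
S = [(i, j) for i in range(1, 7) for j in range(1, 7)]
p = {s: Fraction(1, 36) for s in S}   # the pmf p on S

def X(s):
    """The random variable X: the sum of the two dice."""
    return s[0] + s[1]

# Formula (1): p_X(r) = sum of p(s) over {s in S : X(s) = r}.
p_X = defaultdict(Fraction)
for s in S:
    p_X[X(s)] += p[s]

for r in sorted(p_X):
    print(r, p_X[r])   # reproduces the table above, e.g. p_X(7) = 1/6 = 6/36
```

Running it reproduces the table above, which is a useful sanity check on the push-forward construction.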

(ii) What is a function of a random variable? Given that we understand what a random variable is, we are prepared to understand what a function of a random variable is. Suppose that we are given a random variable $X$ mapping the probability space $(S, P)$ into the real line $\mathbb{R}$, and we are given a function $h$ mapping $\mathbb{R}$ into $\mathbb{R}$. Then $h(X)$ is a function mapping the probability space $(S, P)$ into $\mathbb{R}$. As a consequence, $h(X)$ is itself a new random variable, i.e., a new function mapping $(S, P)$ into $\mathbb{R}$, as depicted in Figure 2. As a consequence, the distribution of the new random variable $h(X)$ can be expressed in different (equivalent) ways:

$$F_{h(X)}(t) \equiv P(\{s \in S : h(X(s)) \le t\})$$
$$\equiv P_X(\{r \in \mathbb{R} : h(r) \le t\})$$
$$\equiv P_{h(X)}(\{k \in \mathbb{R} : k \le t\}),$$

where $P$ is the probability measure on $S$ in the first line, $P_X$ is the probability measure on $\mathbb{R}$ (the distribution of $X$) in the second line, and $P_{h(X)}$ is the probability measure on $\mathbb{R}$ (the distribution of the random variable $h(X)$) in the third line.

Figure 2: A (real-valued) function of a random variable is itself a random variable, i.e., a function mapping a probability space into the real line. (The composition $(S, P) \xrightarrow{X} \mathbb{R} \xrightarrow{h} \mathbb{R}$.)

Example 0.2 (more on the roll of two dice)

As in Example 0.1, consider a random roll of two dice. There we defined the random variable $X$ to represent the sum of the values on the two rolls. Now let

$$h(x) = |x - 7|,$$

so that $h(X) \equiv |X - 7|$ represents the absolute difference between the observed sum of the two rolls and the average value $7$. Then $h(X)$ has a pmf on a new probability space $S_2 \equiv \{0, 1, 2, 3, 4, 5\}$. In this case, using formula (1) yet again, we get the pmf of $h(X)$ being $p_{h(X)}(k) \equiv P(h(X) = k) \equiv P(\{s \in S : h(X(s)) = k\})$ for $k \in S_2$, where

$$p_{h(X)}(5) = P(|X - 7| = 5) = 2/36 = 1/18,$$
$$p_{h(X)}(4) = P(|X - 7| = 4) = 4/36 = 2/18,$$
$$p_{h(X)}(3) = P(|X - 7| = 3) = 6/36 = 3/18,$$
$$p_{h(X)}(2) = P(|X - 7| = 2) = 8/36 = 4/18,$$
$$p_{h(X)}(1) = P(|X - 7| = 1) = 10/36 = 5/18,$$
$$p_{h(X)}(0) = P(|X - 7| = 0) = 6/36 = 3/18.$$

In this setting we can compute probabilities for events associated with $h(X) \equiv |X - 7|$ in three ways: using each of the pmfs $p$, $p_X$, and $p_{h(X)}$.

(iii) How do we compute the expectation (or expected value) of a probability distribution or of a random variable? See Section 2.4. The expected value of a discrete probability distribution $P$ is

$$\text{expected value} = \text{mean} = \sum_k k\, P(\{k\}) = \sum_k k\, p(k),$$

where $P$ is the probability measure on $S$ and $p$ is the associated pmf, with $p(k) \equiv P(\{k\})$. The expected value of a discrete random variable $X$ is

$$E[X] = \sum_k k\, P(X = k) = \sum_k k\, p_X(k) = \sum_{s \in S} X(s)\, P(\{s\}) = \sum_{s \in S} X(s)\, p(s).$$

In the continuous case, with pdfs, we have corresponding formulas, but the story gets more complicated, involving calculus for the computations. The expected value of a continuous probability distribution $P$ with density $f$ is

$$\text{expected value} = \text{mean} = \int_S x f(x)\, dx.$$

The expected value of a continuous random variable $X$ with pdf $f_X$ is

$$E[X] = \int_{-\infty}^{\infty} x f_X(x)\, dx = \int_S X(s) f(s)\, ds,$$

where $f$ is the pdf on $S$ and $f_X$ is the pdf "induced" by $X$ on $\mathbb{R}$.

(iv) How do we compute the expectation of a function of a random variable? Now we need to put everything above together. For simplicity, suppose $S$ is a finite set, so that $X$ and $h(X)$ are necessarily finite-valued random variables. Then we can compute the expected value $E[h(X)]$ in three different ways:

$$E[h(X)] = \sum_{s \in S} h(X(s))\, P(\{s\}) = \sum_{s \in S} h(X(s))\, p(s)$$
$$= \sum_{r \in \mathbb{R}} h(r)\, P(X = r) = \sum_{r \in \mathbb{R}} h(r)\, p_X(r)$$
$$= \sum_{t \in \mathbb{R}} t\, P(h(X) = t) = \sum_{t \in \mathbb{R}} t\, p_{h(X)}(t).$$

Similarly, we have the following expressions when all these probability distributions have probability density functions (the continuous case). First, suppose that the underlying probability distribution (measure) $P$ on the sample space $S$ has a probability density function (pdf) $f$. Then, under regularity conditions, the random variables $X$ and $h(X)$ have probability density functions $f_X$ and $f_{h(X)}$. Then we have:

$$E[h(X)] = \int_S h(X(s))\, f(s)\, ds = \int_{-\infty}^{\infty} h(r)\, f_X(r)\, dr = \int_{-\infty}^{\infty} t\, f_{h(X)}(t)\, dt.$$

Examples 2.24 and 2.26 (in the book): Two ways to compute $E[X^3]$ when $X$ is uniformly distributed on $[0, 1]$.
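(The two ways presumably run along the following lines; the worked details are not reproduced in these notes. Directly, $E[X^3] = \int_0^1 x^3\, dx = 1/4$. Or via the distribution of $Y = X^3$: $F_Y(y) = P(X \le y^{1/3}) = y^{1/3}$ on $(0, 1)$, so $f_Y(y) = \tfrac{1}{3} y^{-2/3}$, and $E[Y] = \int_0^1 y \cdot \tfrac{1}{3} y^{-2/3}\, dy = \tfrac{1}{3} \int_0^1 y^{1/3}\, dy = \tfrac{1}{4}$.)

The three-way identity above is also easy to verify by brute force on the dice example. A minimal Python sketch (added; not part of the notes), with $h(x) = |x - 7|$ from Example 0.2:

```python
from collections import defaultdict
from fractions import Fraction

S = [(i, j) for i in range(1, 7) for j in range(1, 7)]
p = {s: Fraction(1, 36) for s in S}
X = lambda s: s[0] + s[1]          # the sum, as in Example 0.1
h = lambda x: abs(x - 7)           # as in Example 0.2

# pmfs of X and h(X), both obtained via formula (1).
p_X, p_hX = defaultdict(Fraction), defaultdict(Fraction)
for s in S:
    p_X[X(s)] += p[s]
    p_hX[h(X(s))] += p[s]

# E[h(X)] three ways: over S, over the range of X, over the range of h(X).
e1 = sum(h(X(s)) * p[s] for s in S)
e2 = sum(h(r) * pr for r, pr in p_X.items())
e3 = sum(t * pt for t, pt in p_hX.items())
assert e1 == e2 == e3
print(e1)   # 35/18, about 1.94
```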

2. Random Vectors, Joint Distributions, and Conditional Distributions

We may want to talk about two or more random variables at once. For example, we may want to consider the two-dimensional random vector $(X, Y)$.

(i) A random vector may be constructed just like a real-valued random variable. We may think of $(X, Y)$ as a function mapping the underlying probability space $(S, P)$ into the plane, $\mathbb{R}^2$. The right representation can make linearity of expectation obvious. Here is the general property: For constants $a$ and $b$,

$$E[aX + bY] = aE[X] + bE[Y].$$

This is easy to show, writing (in the discrete case) the expected value of a function of a random vector:

$$E[h(X, Y)] = \sum_{s \in S} h((X, Y)(s))\, P(\{s\}),$$

where $h$ is the function $h(x, y) = ax + by$. Hence we get

$$E[aX + bY] = \sum_{s \in S} (aX(s) + bY(s))\, P(\{s\}) = a \sum_{s \in S} X(s)\, P(\{s\}) + b \sum_{s \in S} Y(s)\, P(\{s\}) = aE[X] + bE[Y].$$

The first line above is a well-chosen representation. The rest is simple algebra. Note that we did not use any special properties, such as independence of $X$ and $Y$.

Examples 2.31 and 2.32: Computing expectation using indicator variables.

(ii) What does it mean for two random variables $X$ and $Y$ to be independent random variables? See Section 2.5.2. Pay attention to "for all." We say that $X$ and $Y$ are independent random variables if

$$P(X \le x, Y \le y) = P(X \le x)\, P(Y \le y) \quad \text{for all } x \text{ and } y.$$

We can rewrite that in terms of cumulative distribution functions (cdfs): we say that $X$ and $Y$ are independent random variables if

$$F_{X,Y}(x, y) \equiv P(X \le x, Y \le y) = F_X(x)\, F_Y(y) \quad \text{for all } x \text{ and } y.$$

When the random variables all have pdfs, that relation is equivalent to

$$f_{X,Y}(x, y) = f_X(x)\, f_Y(y) \quad \text{for all } x \text{ and } y.$$
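Both points are easy to see concretely on the two-dice space. The sketch below (added; not from the notes) takes $X$ = value of the first die and $Y$ = value of the second die, verifies $E[2X + 3Y] = 2E[X] + 3E[Y]$ directly from the representation over $S$, and checks the cdf factorization $F_{X,Y}(x, y) = F_X(x)\, F_Y(y)$ at every lattice point:

```python
from fractions import Fraction

# Two-dice sample space: X = first die, Y = second die (independent by construction).
S = [(i, j) for i in range(1, 7) for j in range(1, 7)]
p = {s: Fraction(1, 36) for s in S}
X = lambda s: s[0]
Y = lambda s: s[1]

E = lambda g: sum(g(s) * p[s] for s in S)   # E[g] = sum over S of g(s) P({s})

# Linearity: E[2X + 3Y] = 2 E[X] + 3 E[Y], with no appeal to independence.
assert E(lambda s: 2 * X(s) + 3 * Y(s)) == 2 * E(X) + 3 * E(Y)

# Independence: F_{X,Y}(x, y) = F_X(x) F_Y(y), spot-checked on the lattice.
F = lambda x, y: sum(p[s] for s in S if X(s) <= x and Y(s) <= y)
F_X = lambda x: F(x, 6)
F_Y = lambda y: F(6, y)
for x in range(1, 7):
    for y in range(1, 7):
        assert F(x, y) == F_X(x) * F_Y(y)
print("linearity and independence checks passed")
```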

(iii) What is the joint distribution of $(X, Y)$ in general? See Section 2.5. The joint distribution of $X$ and $Y$ is

$$F_{X,Y}(x, y) \equiv P(X \le x, Y \le y).$$

(iv) How do we compute the conditional expectation of a random variable, given the value of another random variable, in the discrete case? See Section 3.2. There are two steps: (1) find the conditional probability distribution, and (2) compute the expectation of the conditional distribution, just as you would compute the expected value of an unconditional distribution. Here is an example. We first compute a conditional density. Then we compute an expected value.

Example 3.6

Here we consider conditional expectation in the case of continuous random variables. We now work with joint probability density functions and conditional probability density functions. We start with the joint pdf $f_{X,Y}(x, y)$. The definition of the conditional pdf is

$$f_{X|Y}(x \mid y) \equiv \frac{f_{X,Y}(x, y)}{f_Y(y)},$$

where the pdf of $Y$, $f_Y(y)$, can be found from the given joint pdf by

$$f_Y(y) \equiv \int f_{X,Y}(x, y)\, dx.$$

Then we compute $E[X \mid Y = y]$ by computing the ordinary expected value

$$E[X \mid Y = y] = \int x f_{X|Y}(x \mid y)\, dx,$$

treating the conditional pdf as a function of $x$, just like an ordinary pdf of $x$.
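To make the two steps concrete, here is a small numerical sketch (added here; the joint pdf $f(x, y) = x + y$ on the unit square is an assumed stand-in, not the book's Example 3.6):

```python
import numpy as np

# Assumed joint pdf for illustration: f_{X,Y}(x, y) = x + y on [0, 1]^2.
f = lambda x, y: x + y

y0 = 0.25                                  # condition on Y = y0
x = np.linspace(0.0, 1.0, 10001)

# Step 1: conditional pdf f_{X|Y}(x | y0) = f(x, y0) / f_Y(y0),
# with f_Y(y0) = integral of f(x, y0) dx over the support of X.
f_Y = np.trapz(f(x, y0), x)                # np.trapezoid in NumPy >= 2.0
f_cond = f(x, y0) / f_Y

# Step 2: E[X | Y = y0] = integral of x * f_{X|Y}(x | y0) dx.
E_cond = np.trapz(x * f_cond, x)

# Exact answer for this particular joint pdf: (1/3 + y0/2) / (1/2 + y0).
print(E_cond, (1/3 + y0/2) / (1/2 + y0))   # both approximately 0.6111
```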

Example 3.13 in 10th ed., Example 3.12 in 9th ed.

This is the trapped miner example. It shows how we can compute expected values by setting up a simple linear equation with one unknown. This is a common trick, worth knowing. As stated, the problem does not make much sense, because the miner would not make a new decision, independent of his past decisions, when he returns to his starting point. So think of the miner as a robot, who is programmed to make choices at random, independently of the past choices. That is not even a very good robot. But even then the expected time to get out is not so large.
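The one-unknown trick looks like this (sketched here with illustrative door times: 2 hours to safety, and 3 and 5 hours back to the start; the numbers in your edition of Ross may differ). Conditioning on the first, equally likely, choice of door, the expected escape time $E[T]$ satisfies

$$E[T] = \tfrac{1}{3}(2) + \tfrac{1}{3}(3 + E[T]) + \tfrac{1}{3}(5 + E[T]) = \tfrac{10}{3} + \tfrac{2}{3} E[T],$$

which solves to $E[T] = 10$ hours. The key step is that after a return trip the robot-miner starts over, so the remaining expected time is again $E[T]$.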

3. Moment Generating Functions

Given a random variable $X$, the moment generating function (mgf) of $X$ (really of its probability distribution) is

$$\psi_X(t) \equiv E[e^{tX}],$$

which is a function of the real variable $t$; see Section 2.6 of Ross. (I here use $\psi$, whereas Ross uses $\phi$.) An mgf is an example of a transform. The random variable could have a continuous distribution or a discrete distribution.

Discrete case: Given a random variable $X$ with a probability mass function (pmf)

$$p_n \equiv P(X = n), \quad n \ge 0,$$

the moment generating function (mgf) of $X$ (really of its probability distribution) is

$$\psi_X(t) \equiv E[e^{tX}] \equiv \sum_{n=0}^{\infty} p_n e^{tn}.$$

The transform maps the pmf $\{p_n : n \ge 0\}$ (a function of $n$) into the associated function of $t$.

Continuous case: Given a random variable $X$ with a probability density function (pdf) $f \equiv f_X$ on the entire real line, the moment generating function (mgf) of $X$ (really of its probability distribution) is

$$\psi(t) \equiv \psi_X(t) \equiv E[e^{tX}] \equiv \int_{-\infty}^{\infty} f(x)\, e^{tx}\, dx.$$

In the continuous case, the transform maps the pdf $f$ (a function of $x$) into the associated function of $t$.

A major difficulty with the mgf is that it may be infinite or it may not be defined. For example, if $X$ has a pdf $f(x) \equiv A/(1 + x)^p$, $x > 0$, for $p > 1$, then the mgf is infinite for all $t > 0$. Similarly, if $X$ has the pmf $p(n) \equiv A/n^p$ for $n = 1, 2, \ldots$, then the mgf is infinite for all $t > 0$. As a consequence, probabilists often use other transforms. In particular, the characteristic function $E[e^{itX}]$, where $i \equiv \sqrt{-1}$, is designed to avoid this problem. We will not be using complex numbers in this class.

Two major uses of mgfs are: (i) calculating moments and (ii) characterizing the probability distributions of sums of random variables. Below are some illustrative examples. We did not do the Poisson example, but we did do the normal example, and a bit more. We showed that the sum of two independent normal random variables is again normally distributed, with a mean equal to the sum of the means and a variance equal to the sum of the variances. That is easily done with mgfs.

Examples 2.37, 2.41 (2.36, 2.40 in 9th ed.): Poisson

Example 2.43 (2.42 in 9th ed.): Normal
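As a quick illustration of use (i) (an added sketch, not from the notes): since $\psi_X'(0) = E[X]$ and $\psi_X''(0) = E[X^2]$ whenever the mgf is finite near $t = 0$, we can recover moments of the two-dice sum numerically from its mgf:

```python
import numpy as np

# The mgf of the two-dice sum X: psi(t) = sum_n p_X(n) * exp(t * n).
p_X = {n: (6 - abs(n - 7)) / 36 for n in range(2, 13)}

def psi(t):
    return sum(p * np.exp(t * n) for n, p in p_X.items())

# Central differences at t = 0: psi'(0) = E[X], psi''(0) = E[X^2].
eps = 1e-5
mean = (psi(eps) - psi(-eps)) / (2 * eps)
second = (psi(eps) - 2 * psi(0) + psi(-eps)) / eps ** 2
print(mean, second - mean ** 2)   # about 7.0 and Var(X) = 35/6, about 5.83
```

And the normal-sum fact cited above follows from the standard normal mgf, $\psi_X(t) = e^{\mu t + \sigma^2 t^2 / 2}$ for $X \sim N(\mu, \sigma^2)$ (a standard result; the computation itself is not spelled out in these notes): for independent $X \sim N(\mu_1, \sigma_1^2)$ and $Y \sim N(\mu_2, \sigma_2^2)$,

$$\psi_{X+Y}(t) = E[e^{t(X+Y)}] = E[e^{tX}]\, E[e^{tY}] = e^{(\mu_1 + \mu_2)t + (\sigma_1^2 + \sigma_2^2)t^2/2},$$

which is the mgf of $N(\mu_1 + \mu_2, \sigma_1^2 + \sigma_2^2)$; independence justifies factoring the expectation.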
