

| Value x | P(X = x) | x P(X = x) |
|---------|----------|------------|
| 0 | 1 × 0.4^{3} | 0 |
| 1 | 3 × 0.4^{2} × 0.6 | 0.288 |
| 2 | 3 × 0.4 × 0.6^{2} | 0.864 |
| 3 | 1 × 0.6^{3} | 0.648 |

So the total is 0.288 + 0.864 + 0.648 = 1.8, which indeed equals 3 × 0.6, as advertised. In general, the expectation value of a binomial random variable with N trials and success probability p per trial is N × p.
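To check this concretely, here is the sum of x × P(X = x) carried out exactly (a quick sketch in Python; the page's own examples use R):

```python
from math import comb

# Exact expectation of a binomial(N=3, p=0.6) variable:
# sum, over each value x, of x times its probability.
N, p = 3, 0.6
probs = {x: comb(N, x) * p**x * (1 - p)**(N - x) for x in range(N + 1)}
expectation = sum(x * pr for x, pr in probs.items())
print(expectation)  # approximately 1.8 = N * p
```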

Where did the factors 1 and 3 that we multiplied by in the probability column come from? They are the binomial coefficients from the binomial formula — remember the expression involving factorials out in front?
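Those factorial expressions evaluate to the factors in the table; here is the arithmetic spelled out (again a Python sketch, not part of the page's R examples):

```python
from math import comb, factorial

# The binomial coefficients C(3, x) = 3! / (x! (3-x)!) for x = 0, 1, 2, 3.
coeffs = [comb(3, x) for x in range(4)]
print(coeffs)  # [1, 3, 3, 1] -- the factors 1 and 3 seen in the table
# The same thing written out with factorials:
assert coeffs == [factorial(3) // (factorial(x) * factorial(3 - x)) for x in range(4)]
```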

So you can compute the expectation value of a random quantity by (1) taking each possible value and multiplying it by the probability that it occurs, and (2) adding these products over all the possible values. This is analogous to averaging the values in a particular data set, except that to compute the average of a data set you take the values that actually occur and weight them by their observed relative frequencies (as we discussed on a previous page). The point of all this? The sample mean and the expectation value are closely related and both useful, but they are distinct. One possible problem: if there are infinitely many possible values, you might not be able to add them all up in any meaningful way, so the expectation value might not even exist. But you can always calculate a sample mean.
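The relationship between the two can be seen by simulation: weight each observed value by its relative frequency and the result lands near, but not exactly on, the expectation value (a Python sketch, assuming the same binomial with N = 3 and p = 0.6):

```python
import random

random.seed(1)
# Simulate 100,000 draws of a binomial(N=3, p=0.6) variable by counting
# successes in 3 Bernoulli trials, then form the sample mean as a sum of
# observed values weighted by their observed relative frequencies.
draws = [sum(random.random() < 0.6 for _ in range(3)) for _ in range(100_000)]
freq = {x: draws.count(x) / len(draws) for x in set(draws)}
sample_mean = sum(x * f for x, f in freq.items())
print(sample_mean)  # close to, but not exactly, the expectation value 1.8
```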

But what if we wanted to compute the expectation value of some other quantity derived from a random variable? In particular, we will be interested in the square of a binomial. Earlier we used a binomial with N=3 and p=0.6; we found that the square of such a binomial takes the values {0,1,4,9}, and we were able to find the probability of each of these values.

The square of this binomial is a random quantity in its own right, with its own probability function, which we worked out on an earlier page. The way to compute the expectation is the same: take each possible value, multiply it by its probability of occurrence, and add it all together. So for the expectation of the square (which we will call Y, taking values in {0,1,4,9} in this example), we will need to calculate

| Value y = x^{2} | P(Y = y) = P(X = x) | y P(Y = y) = x^{2} P(X = x) |
|-----------------|---------------------|------------------------------|
| 0 | 1 × 0.4^{3} | ? |
| 1 | 3 × 0.4^{2} × 0.6 | ? |
| 4 | 3 × 0.4 × 0.6^{2} | ? |
| 9 | 1 × 0.6^{3} | ? |

Here, the total is 0.288+1.728+1.944=3.96. For the binomial with N=3 and p=0.6, this is the expected value of the square. Try this on the computer:

```
> z1 <- rbinom(25000,size=3,prob=0.6)
> z1sq <- z1*z1
> mean(z1sq)
```
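The simulation estimates the exact value; as a cross-check, the sum of x^{2} × P(X = x) can also be computed directly (a Python sketch of the same arithmetic as the table above):

```python
from math import comb

# Exact E[X^2] for a binomial(N=3, p=0.6):
# the sum of x^2 times P(X = x) over x = 0, 1, 2, 3.
N, p = 3, 0.6
e_square = sum(x**2 * comb(N, x) * p**x * (1 - p)**(N - x) for x in range(N + 1))
print(round(e_square, 2))  # 3.96
```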

This should be fairly close to 3.96. The average of a lot of squared values should be close to the expectation value of the square. By the way, the expectation value of the square of a random quantity is called its second moment.

On to even more about expectation values.


All content © 2000 Mathepi.Com (except R and Rweb).
