Chapter 5
Discrete Probability Distributions
The probability distribution function, P(x), of a discrete random variable X expresses the probability that X takes the value x, as a function of x. That is,

P(x) = P(X = x), for all values of x
Example: Graph the probability distribution function for the roll of a single six-sided die.

Each outcome x = 1, 2, 3, 4, 5, 6 occurs with probability P(x) = 1/6, so the graph is flat at height 1/6 across the six values of x (Figure 5.1).
Required Properties of Probability Distribution Functions of Discrete Random Variables
Let X be a discrete random variable with probability distribution function, P(x). Then

1. 0 ≤ P(x) ≤ 1 for any value x
2. Σx P(x) = 1

where the notation Σx indicates summation over all possible values x.
Cumulative Probability Function
The cumulative probability function, F(x0), of a random variable X expresses the probability that X does not exceed the value x0, as a function of x0. That is,

F(x0) = P(X ≤ x0)

where the function is evaluated at all values of x0.
Derived Relationship Between Probability and Cumulative Probability Function
Let X be a random variable with probability function P(x) and cumulative probability function F(x0). Then it can be shown that

F(x0) = Σx≤x0 P(x)

where the notation implies that summation is over all possible values x that are less than or equal to x0.
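The summation above can be sketched directly in code; this minimal example builds F(x0) for the fair six-sided die of Figure 5.1 by summing P(x) over all x ≤ x0.

```python
# Cumulative probability function F(x0) = P(X <= x0) for the
# fair six-sided die of Figure 5.1: sum P(x) over all x <= x0.
probs = {x: 1/6 for x in range(1, 7)}  # P(x) = 1/6 for x = 1, ..., 6

def F(x0):
    """Cumulative probability: sum of P(x) for x <= x0."""
    return sum(p for x, p in probs.items() if x <= x0)

print(F(2))  # P(X <= 2) = 2/6
print(F(6))  # P(X <= 6) = 1
```

Note that F(0) = 0 and F(6) = 1, matching the required properties of a cumulative probability function.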
Derived Properties of Cumulative Probability Functions for Discrete Random Variables
Let X be a discrete random variable with a cumulative probability function, F(x0). Then we can show that
Expected Value
The expected value, E(X), of a discrete random variable X is defined as

E(X) = Σx xP(x)

where the notation indicates that summation extends over all possible values x.
The expected value of a random variable is also called its mean and is denoted μX.
Variance and Standard Deviation
Let X be a discrete random variable. The expectation of the squared discrepancy about the mean, (X - μX)², is called the variance, denoted σ²X, and is given by

σ²X = E[(X - μX)²] = Σx (x - μX)²P(x)
The standard deviation, σx , is the positive square root of the variance.
Variance (Alternative Formula)

The variance of a discrete random variable X can also be expressed as

σ²X = E(X²) - μ²X = Σx x²P(x) - μ²X
Expected Value and Variance for a Discrete Random Variable Using Microsoft Excel (Figure 5.4)
Expected Value = 1.95
Variance = 1.9475
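The same calculations can be sketched in a few lines of code. This example uses the fair-die distribution of Figure 5.1 (not the Excel example above) and checks both the definitional and the alternative variance formulas.

```python
# Expected value and variance of a discrete random variable,
# illustrated with the fair six-sided die of Figure 5.1.
values = [1, 2, 3, 4, 5, 6]
probs = [1/6] * 6

# E(X) = sum of x * P(x)
mean = sum(x * p for x, p in zip(values, probs))

# sigma^2 = sum of (x - mu)^2 * P(x)
var = sum((x - mean) ** 2 * p for x, p in zip(values, probs))

# alternative formula: sigma^2 = E(X^2) - mu^2
var_alt = sum(x * x * p for x, p in zip(values, probs)) - mean ** 2

print(mean)  # 3.5
print(var)   # 35/12, about 2.9167
```

Both variance formulas agree, as the algebra guarantees.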
Bernoulli Distribution
A Bernoulli distribution arises from a random experiment which can give rise to just two possible outcomes. These outcomes are usually labeled as either “success” or “failure.” If π denotes the probability of a success and the probability of a failure is (1 - π), then the Bernoulli probability function is

P(0) = (1 - π)  and  P(1) = π
Mean and Variance of a Bernoulli Random Variable
The mean is:

μX = E(X) = Σx xP(x) = (0)(1 - π) + (1)(π) = π

And the variance is:

σ²X = E[(X - μX)²] = Σx (x - μX)²P(x) = (0 - π)²(1 - π) + (1 - π)²π = π(1 - π)
Sequences of x Successes in n Trials
The number of sequences with x successes in n independent trials is:

Cⁿₓ = n! / (x!(n - x)!)

where n! = n × (n - 1) × (n - 2) × . . . × 1 and 0! = 1.
Binomial Distribution
Suppose that a random experiment can result in two possible mutually exclusive and collectively exhaustive outcomes, “success” and “failure,” and that π is the probability of a success resulting in a single trial. If n independent trials are carried out, the distribution of the resulting number of successes “x” is called the binomial distribution. Its probability distribution function for the binomial random variable X = x is:
P(x successes in n independent trials) = P(x) = [n! / (x!(n - x)!)] πˣ(1 - π)ⁿ⁻ˣ, for x = 0, 1, 2, . . . , n
Mean and Variance of a Binomial Probability Distribution
Let X be the number of successes in n independent trials, each with probability of success π. Then X follows a binomial distribution with mean

μ = E(X) = nπ

and variance

σ² = E[(X - μ)²] = nπ(1 - π)
Binomial Probabilities - An Example (Example 5.7)
An insurance broker, Shirley Ferguson, has five contracts, and she believes that for each contract, the probability of making a sale is 0.40.
What is the probability that she makes at most one sale?
P(at most one sale) = P(X ≤ 1) = P(X = 0) + P(X = 1)
= 0.078 + 0.259 = 0.337
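Example 5.7 can be verified numerically from the binomial probability function with n = 5 and π = 0.40:

```python
from math import comb

def binomial_pmf(x, n, pi):
    # P(x) = C(n, x) * pi^x * (1 - pi)^(n - x)
    return comb(n, x) * pi ** x * (1 - pi) ** (n - x)

# Example 5.7: five contracts, probability of a sale pi = 0.40
p_at_most_one = binomial_pmf(0, 5, 0.40) + binomial_pmf(1, 5, 0.40)
print(round(p_at_most_one, 3))  # 0.337
```

The exact values are P(X = 0) = 0.07776 and P(X = 1) = 0.2592, which round to the 0.078 and 0.259 shown above.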
Binomial Probabilities, n = 100, π = 0.40 (Figure 5.10)
Hypergeometric Distribution
Suppose that a random sample of n objects is chosen from a group of N objects, S of which are successes. The distribution of the number X of successes in the sample is called the hypergeometric distribution. Its probability function is:

P(x) = [C(S, x) × C(N - S, n - x)] / C(N, n)

where x can take integer values ranging from the larger of 0 and [n - (N - S)] to the smaller of n and S.
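A minimal sketch of the hypergeometric probability function, using hypothetical numbers (N = 10 objects, S = 4 successes, sample of n = 3) chosen only for illustration:

```python
from math import comb

def hypergeom_pmf(x, N, S, n):
    # P(x) = C(S, x) * C(N - S, n - x) / C(N, n)
    return comb(S, x) * comb(N - S, n - x) / comb(N, n)

# hypothetical example: N = 10, S = 4 successes, sample size n = 3
print(hypergeom_pmf(1, 10, 4, 3))  # 4 * 15 / 120 = 0.5
```

Here x ranges from max(0, n - (N - S)) = 0 to min(n, S) = 3, and the probabilities over that range sum to 1.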
Poisson Probability Distribution
Assume that an interval is divided into a very large number of equal subintervals so that the probability of the occurrence of an event in any subinterval is very small. The assumptions of a Poisson probability distribution are:

1. The probability of the occurrence of an event is constant for all subintervals.
2. There can be no more than one occurrence in each subinterval.
3. Occurrences are independent; an occurrence in one interval does not influence the probability of an occurrence in another interval.
The random variable X is said to follow the Poisson probability distribution if it has the probability function:

P(x) = (e⁻ᵡ λˣ) / x!, for x = 0, 1, 2, . . . (with e⁻ᵡ read as e raised to the power -λ)

where
P(x) = the probability of x successes over a given period of time or space
λ = the expected number of successes per time or space unit, λ > 0
e ≈ 2.71828, the base of the natural logarithm
Partial Poisson Probabilities for λ = 0.03 Obtained Using Microsoft Excel PHStat (Figure 5.14)
Poisson Approximation to the Binomial Distribution
Let X be the number of successes resulting from n independent trials, each with a probability of success, π. The distribution of the number of successes X is binomial, with mean nπ. If the number of trials n is large and nπ is of only moderate size (preferably nπ ≤ 7), this distribution can be approximated by the Poisson distribution with λ = nπ. The probability function of the approximating distribution is then:

P(x) = (e⁻ⁿᵖ (nπ)ˣ) / x!, for x = 0, 1, 2, . . . (with the exponent read as -nπ)
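The quality of the approximation can be checked numerically. The parameters below (n = 200, π = 0.02, so λ = nπ = 4 ≤ 7) are hypothetical, chosen only to satisfy the stated rule of thumb:

```python
from math import comb, exp, factorial

def binom_pmf(x, n, pi):
    # exact binomial probability
    return comb(n, x) * pi ** x * (1 - pi) ** (n - x)

def poisson_pmf(x, lam):
    # Poisson approximation with lambda = n * pi
    return exp(-lam) * lam ** x / factorial(x)

# hypothetical parameters: n large, n*pi = 4 is of moderate size
n, pi = 200, 0.02
lam = n * pi

for x in range(6):
    print(x, round(binom_pmf(x, n, pi), 4), round(poisson_pmf(x, lam), 4))
```

For these parameters the binomial and Poisson probabilities agree to within about 0.002 at every x.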
Joint Probability Functions
Let X and Y be a pair of discrete random variables. Their joint probability function expresses the probability that X takes the specific value x and simultaneously Y takes the value y, as a function of x and y. The notation used is P(x, y), so

P(x, y) = P(X = x ∩ Y = y)
Marginal Probability Functions
Let X and Y be a pair of jointly distributed random variables. In this context the probability function of the random variable X is called its marginal probability function and is obtained by summing the joint probabilities over all possible values of y; that is,

P(x) = Σy P(x, y)

Similarly, the marginal probability function of the random variable Y is

P(y) = Σx P(x, y)
Properties of Joint Probability Functions

Let X and Y be discrete random variables with joint probability function P(x, y). Then

1. 0 ≤ P(x, y) ≤ 1 for any pair of values x and y
2. ΣxΣy P(x, y) = 1
Conditional Probability Functions
Let X and Y be a pair of jointly distributed discrete random variables. The conditional probability function of the random variable Y, given that the random variable X takes the value x, expresses the probability that Y takes the value y, as a function of y, when the value x is specified for X. This is denoted P(y|x), and so by the definition of conditional probability:

P(y|x) = P(x, y) / P(x)

Similarly, the conditional probability function of X, given Y = y, is:

P(x|y) = P(x, y) / P(y)
Stock Returns, Marginal Probability, Mean, Variance (Example 5.16)
| X Return | Y = 0% | Y = 5% | Y = 10% | Y = 15% |
|----------|--------|--------|---------|---------|
| 0%       | 0.0625 | 0.0625 | 0.0625  | 0.0625  |
| 5%       | 0.0625 | 0.0625 | 0.0625  | 0.0625  |
| 10%      | 0.0625 | 0.0625 | 0.0625  | 0.0625  |
| 15%      | 0.0625 | 0.0625 | 0.0625  | 0.0625  |

Table 5.6
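The marginal probabilities, mean, and variance for Table 5.6 can be computed directly from the joint distribution:

```python
# Marginal distribution, mean, variance, and covariance from the
# joint probabilities of Table 5.6 (returns measured in percent).
returns = [0, 5, 10, 15]
joint = {(x, y): 0.0625 for x in returns for y in returns}  # Table 5.6

# marginal probability function of X: P(x) = sum over y of P(x, y)
marg_x = {x: sum(joint[(x, y)] for y in returns) for x in returns}
mean_x = sum(x * p for x, p in marg_x.items())                 # mu_X
var_x = sum((x - mean_x) ** 2 * p for x, p in marg_x.items())  # sigma^2_X

# by symmetry the Y marginal is identical; covariance from the joint table
mean_y = mean_x
cov_xy = sum((x - mean_x) * (y - mean_y) * p for (x, y), p in joint.items())

print(marg_x)   # each marginal probability is 0.25
print(mean_x)   # 7.5
print(var_x)    # 31.25
print(cov_xy)   # 0.0 -- every joint probability equals the product
                # of its marginals, so X and Y are independent here
```

Because the joint probabilities factor into the product of the marginals, the covariance is zero, illustrating the independence result stated later in this chapter.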
Covariance
Let X be a random variable with mean μX , and let Y be a random variable with mean, μY . The expected value of (X - μX )(Y - μY ) is called the covariance between X and Y, denoted Cov(X, Y).
For discrete random variables

Cov(X, Y) = E[(X - μX)(Y - μY)] = ΣxΣy (x - μX)(y - μY)P(x, y)

An equivalent expression is

Cov(X, Y) = E(XY) - μXμY = ΣxΣy xyP(x, y) - μXμY
Correlation
Let X and Y be jointly distributed random variables. The correlation between X and Y is:

ρ = Corr(X, Y) = Cov(X, Y) / (σXσY)
Covariance and Statistical Independence
If two random variables are statistically independent, the covariance between them is 0. However, the converse is not necessarily true.
Portfolio Analysis
The random variable X is the price for stock A and the random variable Y is the price for stock B. The market value, W, of the portfolio is given by the linear function

W = aX + bY

where a is the number of shares of stock A and b is the number of shares of stock B.
The mean value for W is

μW = E[W] = E[aX + bY] = aμX + bμY

The variance for W is

σ²W = a²σ²X + b²σ²Y + 2ab Cov(X, Y)

or, using the correlation,

σ²W = a²σ²X + b²σ²Y + 2ab Corr(X, Y)σXσY
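The portfolio formulas can be sketched numerically. All inputs below (share counts, mean prices, variances, covariance) are hypothetical values chosen only to illustrate the computation:

```python
# Mean and variance of portfolio value W = aX + bY.
# All numbers are hypothetical inputs for illustration.
a, b = 20, 30                # shares of stock A and stock B
mu_x, mu_y = 25.0, 40.0      # mean prices of A and B
var_x, var_y = 81.0, 441.0   # price variances
cov_xy = 94.5                # Cov(X, Y)

# mu_W = a*mu_X + b*mu_Y
mean_w = a * mu_x + b * mu_y

# sigma^2_W = a^2 sigma^2_X + b^2 sigma^2_Y + 2ab Cov(X, Y)
var_w = a**2 * var_x + b**2 * var_y + 2 * a * b * cov_xy

# equivalent form using the correlation rho = Cov / (sigma_X * sigma_Y)
rho = cov_xy / (var_x ** 0.5 * var_y ** 0.5)
var_w_alt = a**2 * var_x + b**2 * var_y + 2 * a * b * rho * (var_x * var_y) ** 0.5

print(mean_w)  # 1700.0
print(var_w)   # 542700.0
```

The covariance form and the correlation form give the same variance, since ρσXσY = Cov(X, Y) by definition.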