Tuesday, March 15, 2016

Unizor - Probability - Density Distribution

Unizor - Creative Minds through Art of Mathematics - Math4Teens

As we mentioned, cumulative probability distribution function F(x)=Prob{ξ less than x} contains all the information about our random variable ξ, sufficient to determine the probability of any event associated with this random variable.
This cumulative function is applicable to both discrete and continuously distributed random variables. However, for discrete variables it makes more sense to deal with the probability mass distribution function, because it makes immediately visible which values are more probable than others, without any calculations. There is no such probability mass distribution function for continuously distributed random variables, so some reasonable equivalent is very desirable to define.

Here comes the probability density function, which is to the cumulative distribution function what speed is to distance.
We will define this function completely analogously to how we defined speed above.

Assume our random variable ξ is defined by its cumulative probability distribution function F(x). Then we can determine the probability of any event related to our random variable, for instance
Prob{ξ is between a and b} = F(b)−F(a)
Let's take two small, equal-length, non-intersecting intervals of values ξ might take, for instance [a1;b1] and [a2;b2]. By comparing F(b1)−F(a1) with F(b2)−F(a2), we can judge which values, those within the first interval or those within the second, are more probable.
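As a quick numerical sketch of comparing probabilities on two equal-length intervals, suppose (our own choice of example, not from the post) that ξ is exponentially distributed with F(x) = 1 − e^(−x) for x ≥ 0:

```python
import math

# Hypothetical example distribution: exponential with rate 1,
# whose cumulative distribution function is F(x) = 1 - e^(-x) for x >= 0.
def F(x):
    return 1.0 - math.exp(-x) if x >= 0 else 0.0

# Probability that xi falls between a and b is F(b) - F(a)
def prob_between(a, b):
    return F(b) - F(a)

# Two equal-length, non-intersecting intervals
p1 = prob_between(0.0, 0.5)   # interval near zero
p2 = prob_between(2.0, 2.5)   # interval farther out

# For this distribution p1 > p2, so values near zero are more probable
print(p1, p2)
```

Since p1 exceeds p2, values in the first interval are more probable for this particular distribution.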
Analogously, the value
[F(b)−F(a)] / (b−a)
is a good measure of the average increment of probability on the interval [a;b] and can be compared with similar average increments of probability on different intervals.

More than that, if we divide the whole set of real values our random variable might take into many small equal intervals and calculate the average increment of probability on each interval, we will have a picture similar to the probability mass distribution function for discrete random variables, except that in this case the probabilities are distributed not among discrete values but among different small intervals. And the smaller the intervals, the more precisely the average probability increments obtained by this procedure inform us about the comparative distribution of probabilities among different intervals.
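The interval-partitioning procedure above can be sketched numerically. Continuing with the exponential distribution F(x) = 1 − e^(−x) as an illustrative assumption, we split part of the real line into equal intervals and compute the average probability increment on each:

```python
import math

# Assumed example distribution: F(x) = 1 - e^(-x) for x >= 0
def F(x):
    return 1.0 - math.exp(-x) if x >= 0 else 0.0

width = 0.25   # length of each small interval

# Average increment of probability [F(a+width) - F(a)] / width
# on each interval [a, a + width), for a = 0, 0.25, 0.5, ...
increments = [(F(a + width) - F(a)) / width
              for a in [i * width for i in range(20)]]

# The increments decrease as we move away from zero, mirroring
# the shape of the density e^(-x); a smaller width would trace it more closely.
print(increments[:4])
```

Shrinking `width` makes each average increment a better local approximation of the density, which is exactly the limit taken in the definition that follows.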

Ultimately, for each value x our random variable might take, we can consider a function, called the probability density, derived from the cumulative probability distribution function F(x) as
f(x) = lim{d↓0} [F(x+d)−F(x)]/d
which plays the role of the "speed" of change of probability around any value x our random variable can take.
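This limit can be checked numerically: for a small d, the ratio [F(x+d)−F(x)]/d should be close to the density. Using the same assumed exponential example, whose density is known to be f(x) = e^(−x):

```python
import math

# Assumed example distribution: F(x) = 1 - e^(-x), with density f(x) = e^(-x)
def F(x):
    return 1.0 - math.exp(-x) if x >= 0 else 0.0

def density_estimate(x, d=1e-6):
    # Finite-difference version of the limit: [F(x+d) - F(x)] / d
    return (F(x + d) - F(x)) / d

x = 1.0
approx = density_estimate(x)   # "speed" of probability near x
exact = math.exp(-x)           # known density of this distribution

# approx agrees with exact to within roughly d/2 relative error
print(approx, exact)
```

As d shrinks, the estimate converges to f(x), just as average speed over a shrinking time interval converges to instantaneous speed.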
