Monday, November 10, 2014

Unizor - Probability - Normal Distribution





In a certain sense, the Normal distribution of probabilities is the most important one in the entire Theory of Probabilities and Mathematical Statistics.

Everybody knows about the bell curve. This is a graphical representation of the Normal distribution of probabilities, where the probability that a random variable falls between the values A and B is measured as the area under the bell curve from abscissa A to abscissa B.
Not all bell-shaped distributions of probabilities are Normal (there are many others), but every Normal distribution has a bell-like graphical representation, differing only in the position of its center and the steepness of the curve.
Of course, the entire area under a bell curve from negative infinity to positive infinity equals 1, since this is the probability of the random variable taking any value at all.

The Normal distribution is a continuous type of distribution. A random variable with this distribution of probabilities (we will sometimes call it a normal random variable) can take any real value. The probability that a normal random variable takes any one exact value equals zero, while the probability that it takes a value in the interval from one real number (the left boundary) to another (the right boundary) is greater than zero.
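
As an illustration, here is a minimal Python sketch (the function names and the chosen mean, standard deviation and boundaries are only illustrative assumptions) that computes the probability of a normal random variable falling between boundaries A and B as the area under the bell curve, using the standard expression of the Normal cumulative distribution function through the error function erf.

from math import erf, sqrt

def normal_cdf(x, mu=0.0, sigma=1.0):
    # Cumulative distribution function of a Normal(mu, sigma) variable:
    # the probability that it takes a value less than or equal to x.
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def probability_between(a, b, mu=0.0, sigma=1.0):
    # Area under the bell curve from abscissa a to abscissa b.
    return normal_cdf(b, mu, sigma) - normal_cdf(a, mu, sigma)

# Illustrative values (assumed): a standard normal variable between -1 and 1.
print(probability_between(-1.0, 1.0))   # about 0.6827
# The probability of one exact value is the area of a zero-width interval:
print(probability_between(1.0, 1.0))    # exactly 0.0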

Exactly in the middle of the graph lies the expected value (mean) of our random variable. The variance, being a measure of the deviation of a random variable from its mean, is reflected in how steeply the graph rises towards its middle. The steeper the bell-shaped graph is around its middle, the more concentrated the values of a normal random variable are around its mean and, therefore, the smaller its variance and standard deviation.
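
As a rough numerical illustration of this point (the standard deviations below are chosen arbitrarily), the following sketch compares how much probability two normal random variables with the same mean place within the same fixed distance of that mean; the steeper, smaller-deviation curve concentrates more of its probability there.

from math import erf, sqrt

def probability_within(distance, sigma):
    # Probability that a normal random variable lies within the given distance
    # of its mean; for a bell curve it depends only on the ratio distance/sigma.
    return erf(distance / (sigma * sqrt(2.0)))

print(probability_within(1.0, sigma=0.5))  # about 0.954: steep curve, values concentrated
print(probability_within(1.0, sigma=2.0))  # about 0.383: flat curve, values spread out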

What makes a normal random variable so special?

In short (and not very mathematically speaking), the average of many random variables with almost any distribution of probabilities behaves very much like a normal random variable.

Obviously, this property of an average of random variables is conditional. A necessary condition is that the number of random variables participating in the average must be large: the more random variables are averaged together, the more the distribution of their average resembles the Normal distribution.
More precisely, this is a limit theorem: the measure of "normality" of the distribution of the average increases as the number of participating random variables grows to infinity, and the limit of the distribution of this average is exactly the Normal distribution of probabilities.
The precise mathematical theorem that states this property of an average of random variables is called the Central Limit Theorem in the Theory of Probabilities.

There are other conditions for this theorem to hold. For instance, the independence and identical distribution of the random variables participating in the averaging (together with their having a finite variance) form a sufficient, but not necessary, condition.
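
To see the Central Limit Theorem at work, here is a small simulation sketch (the uniform source distribution, the number of averaged variables and the number of trials are arbitrary illustrative choices): it averages many independent, identically distributed uniform random variables and checks that roughly 68% of such averages fall within one standard deviation of their overall mean, just as a normal random variable would.

import random
from statistics import mean, pstdev

def simulate_averages(n_variables, n_trials=10000):
    # Each trial averages n_variables independent Uniform(0, 1) variables.
    return [mean(random.random() for _ in range(n_variables))
            for _ in range(n_trials)]

random.seed(0)
averages = simulate_averages(n_variables=30)
m, s = mean(averages), pstdev(averages)

# For a normal random variable, about 68.3% of values lie within
# one standard deviation of the mean; the averages behave similarly.
within_one_sigma = sum(1 for a in averages if abs(a - m) <= s) / len(averages)
print(m)                 # close to 0.5, the mean of Uniform(0, 1)
print(within_one_sigma)  # close to 0.683

Uniform variables are used here only because they are easy to generate; by the theorem, almost any other source distribution with a finite variance would produce the same bell-like behavior of the averages as their number grows.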
