Wednesday, February 3, 2016
Unizor - Bernoulli Statistics - Task
Unizor - Creative Minds through Art of Mathematics - Math4Teens
Notes to a video lecture on http://www.unizor.com
Bernoulli Statistics - Task
As we noted, the purpose of Mathematical Statistics is to use past observations to evaluate the probability of certain events in order to predict their occurrence in the future.
Let's recall Bernoulli trials and Bernoulli random variables. Coin tossing is an example. An experiment has only two outcomes, and we associate with it a random variable that takes the value 1 for one outcome and the value 0 for the other. Only one parameter defines the distribution of probabilities of this random variable - the probability P of one of the outcomes. So, assume that our Bernoulli random variable ξ takes the value 1 with probability P and the value 0 with probability 1−P.
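As a small illustration (not part of the lecture itself), here is a minimal Python sketch of such a random variable; the value P = 0.7 and the helper bernoulli_trial are assumptions introduced only for this example.

import random

def bernoulli_trial(p):
    # Returns 1 with probability p and 0 with probability 1-p.
    return 1 if random.random() < p else 0

P = 0.7                      # assumed probability of the outcome "1"
xi = bernoulli_trial(P)      # one observation of the Bernoulli random variable ξ
print(xi)                    # prints 1 or 0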
Let's consider the simplest task of Mathematical Statistics.
Based on a series of Bernoulli trials (say, one coin tossed multiple times or multiple coins tossed simultaneously once), we repeat the experiment with our random variable N times, independently of each other and under identical conditions, so that all trials have the same distribution of probabilities. Knowing the results of these experiments - ξ1, ξ2, ... ξN, that is, the values ξ took in each experiment - we need to determine the probability P that completely defines the distribution of probabilities of our random variable ξ.
In the Theory of Probabilities part of this course we rather intuitively introduced the probability P of an event as the limit of the frequency of the event's occurrence in independent experiments under identical conditions, as the number of experiments increases to infinity. We did not go deeper into the problem of the existence of this limit.
In our case of Bernoulli random variables, the sum of the values of individual experiments (that is, ξi=1 if the event happened and ξi=0 otherwise) divided by the number of these experiments is exactly that frequency. So, our very intelligent guess is that this average value, calculated from the results of the experiments, would be a good approximation of probability P:
η = (ξ1+ξ2+...+ξN)/N ≅ P
We also hope that, as the number of experiments N increases, this evaluation of probability P becomes better and better.
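To make this concrete, here is a short Python simulation (an illustrative sketch, not part of the lecture) that runs N Bernoulli trials with an assumed probability P = 0.7 and compares the average η with P; as N grows, the average typically gets closer to P.

import random

def bernoulli_trial(p):
    # Returns 1 with probability p and 0 with probability 1-p.
    return 1 if random.random() < p else 0

P = 0.7
for N in (10, 100, 1000, 10000):
    results = [bernoulli_trial(P) for _ in range(N)]
    eta = sum(results) / N      # η = (ξ1+ξ2+...+ξN)/N
    print(N, eta)               # η is typically closer to P for larger N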
Basically, we have formulated a task - the evaluation of probability P - and we have also proposed a solution to this task: using the arithmetic average of the results of the experiments
η = (ξ1+ξ2+...+ξN)/N
All we have to do now is to determine whether this solution indeed gives us an evaluation of probability P and how good this evaluation really is.
Let's think about our task.
We would like to approximate an unknown constant P (that is, Prob{ξ=1} - the probability that our Bernoulli random variable ξ takes the value 1) with a single value of a random variable
η = (ξ1+ξ2+...+ξN)/N
(the average of the results of a series of N individual experiments with random variable ξ: ξ1, ξ2, ... ξN, which can be considered as one combined experiment).
Another such combined experiment will produce a different value of η. That's why we consider η to be a random variable.
We now have to think about how closely single values of η approximate the constant P, whether this approximation depends on the number of individual experiments in a series, how to express this approximation quantitatively, and what parameters of our experimentation are required to achieve the needed precision of approximation.
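One way to see this quantitatively is to repeat the combined experiment many times for a fixed N and look at how much η scatters around P. The following Python sketch (with assumed values P = 0.7 and 1000 repetitions) estimates that scatter empirically and compares it with the theoretical standard deviation sqrt(P·(1−P)/N) of η.

import random, math

def eta(p, n):
    # Average of n independent Bernoulli trials with probability p.
    return sum(1 if random.random() < p else 0 for _ in range(n)) / n

P, repetitions = 0.7, 1000
for N in (10, 100, 1000):
    values = [eta(P, N) for _ in range(repetitions)]
    mean = sum(values) / repetitions
    spread = math.sqrt(sum((v - mean)**2 for v in values) / repetitions)
    # empirical mean of η, empirical spread, theoretical sqrt(P(1-P)/N)
    print(N, round(mean, 4), round(spread, 4), round(math.sqrt(P*(1-P)/N), 4))

The printed spread shrinks roughly as 1/sqrt(N), which is exactly the kind of quantitative statement the next lecture develops.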
These are the subjects of the next lecture about the solution of our task.