Wednesday, July 23, 2014
Unizor - Probability Definition - Equal Chances
Based on the previous philosophical discussion about probability, let's attempt to define this concept more rigorously.
By now we should understand the concept of an event (like "getting an even number on a die") as a result of a random experiment (like "rolling a die"). We have also discussed the concept of an elementary event - the "smallest", in some sense, result of a random experiment, one that cannot be represented as a combination of "smaller" events (like "getting 6 on a single roll of a die"). We also introduced the concept of a sample space as the set of all possible elementary events (like {"getting 1", "getting 2", "getting 3", "getting 4", "getting 5", "getting 6"} as a result of rolling a die). All these concepts were discussed and exemplified in the previous lectures.
In all cases that we considered it was possible to identify elementary events in such a way that they seemed to have equal chances of occurrence (flipping symmetrical coins, rolling perfect dice, dealing a randomly shuffled deck of cards among players, etc.).
Notice that in all examples that we presented we were dealing with a finite number of elementary events and equal chances of their occurrence. Since all our elementary events had equal chances to occur, we assumed that the frequency of occurrence of each such elementary event would tend to 1/N, where N is the total number of elementary events, as the number of experiments increases to infinity.
Therefore, it was reasonable to assume that the probability of each elementary event equals 1/N.
To evaluate the probability of different events, we compared the numbers of elementary events that comprise them. Assuming, again, equal chances for each elementary event to occur, the frequency of occurrence of any event seems to be proportional to the number of elementary events that comprise it and, therefore, the probability of any event should be equal to M/N, where M is the number of elementary events that comprise our event and N is the total number of elementary events.
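To illustrate this frequency argument, here is a minimal simulation sketch in Python (not part of the original lecture; the variable names are illustrative). It rolls a fair die many times: the empirical frequency of each face approaches 1/N = 1/6, and the frequency of the event "getting an even number" approaches M/N = 3/6.

# Simulation: frequencies of die faces and of the event "even number".
import random
from collections import Counter

trials = 100_000
rolls = [random.randint(1, 6) for _ in range(trials)]

counts = Counter(rolls)
for face in range(1, 7):
    # each face should occur with frequency close to 1/6 ~ 0.1667
    print(f"face {face}: frequency {counts[face] / trials:.4f}")

# the event "even number" consists of 3 elementary events out of 6
even = sum(1 for r in rolls if r % 2 == 0)
print(f"event 'even number': frequency {even / trials:.4f} (expected 3/6 = 0.5)")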
Let's use some abstraction to define these concepts a little more formally, as is customary in mathematics. This will allow us to extend the applications of the theory to all other cases that fit our abstraction.
A very simple abstract model of all the concepts above is a finite set of elements with the following properties.
Each element of this set has associated with it a numerical measure that is equal to 1/N, where N is the number of elements in our set.
Every subset of our set has an associated numerical measure equal to M/N, where M is the number of elements in this subset. This is equivalent to saying that the measure of any subset equals the sum of the measures of the elements that comprise it.
From this we can easily derive that the measure of an empty subset equals 0, the measure of the full subset that coincides with the whole set equals 1, and the measure of a union of two subsets that have no common elements equals the sum of their measures (the additive property of our measure).
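As a quick illustrative check of these three properties, here is a short Python sketch (the names sample_space, measure, A and B are my own, not from the lecture) that computes the measure of a subset as the number of its elements divided by N and verifies the statements directly.

# Check of the measure defined above: measure(subset) = len(subset) / N.
from fractions import Fraction

sample_space = frozenset({1, 2, 3, 4, 5, 6})   # e.g. the faces of a die
N = len(sample_space)

def measure(subset):
    # measure of a subset: the number of its elements divided by N
    return Fraction(len(subset), N)

A = frozenset({2, 4, 6})   # "even number"
B = frozenset({1, 3})      # "1 or 3" - has no common elements with A

assert measure(frozenset()) == 0                    # empty subset -> measure 0
assert measure(sample_space) == 1                   # whole set -> measure 1
assert measure(A | B) == measure(A) + measure(B)    # additivity for disjoint subsets
print(measure(A), measure(B), measure(A | B))       # 1/2 1/3 5/6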
Now let's define the terminology to bridge the gap between the abstraction above and all we know about probability.
We will call the set we introduced above a "sample space".
We will call the elements of this set "elementary events", and the measure associated with each of them we will call its "probability"; it is equal to 1/N, where N is the number of elements (which we called elementary events) in our set (which we called a sample space).
We further call any subset of that set (that is, a subset of a sample space) an "event". It contains a certain number of elements (that is, elementary events), and its "probability" is the measure we have defined above as M/N, where M is the number of elements in this subset and N is the total number of elements in the set.
Summarizing our abstraction, we model the results of any random experiment as
i) a finite set
ii) with an additive measure defined for each element and each subset,
iii) with the measure of the entire set equal to 1,
iv) with equal measures associated with each element,
v) with, consequently, the measure of each element equal to 1/N, where N is the number of elements,
vi) with, consequently, the measure of each subset equal to the sum of the measures of the elements that comprise it.
From now on we assume exactly these properties when we discuss a sample space (a set), events (subsets of this set), elementary events (elements of this set) and probability (an additive measure on this set).
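The following minimal sketch puts this whole model together in code (the class FiniteSampleSpace and its method names are illustrative choices, not part of the lecture): a finite sample space with N equally likely elementary events, where the probability of an event containing M elementary events is M/N.

# A finite sample space with equal chances; probability of an event is M/N.
from fractions import Fraction

class FiniteSampleSpace:
    def __init__(self, elementary_events):
        self.elementary_events = frozenset(elementary_events)
        self.n = len(self.elementary_events)

    def probability(self, event):
        # probability of an event (a subset of the sample space): M/N
        event = frozenset(event)
        if not event <= self.elementary_events:
            raise ValueError("event must be a subset of the sample space")
        return Fraction(len(event), self.n)

# Usage: rolling a die, event "getting an even number"
die = FiniteSampleSpace({1, 2, 3, 4, 5, 6})
print(die.probability({2, 4, 6}))    # 1/2
print(die.probability(set()))        # 0 (impossible event)
print(die.probability(range(1, 7)))  # 1 (certain event)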
We would like to mention that the above abstraction works only for random experiments with a finite number of symmetrical results that have equal chances to occur.
The complete Theory of Probabilities also deals with experiments that have an infinite number of results and with more abstract definitions of measure, but these are outside the scope of this course.