Friday, August 15, 2014

Unizor - Probability - Random Variables - Introduction

Historically, the theory of probability was developed from attempts to study games of chance and analyze the chances of winning. The concept of a random variable has the same roots. Not only did people try to evaluate the chances of winning, they also bet money on these games and wanted to understand how much to bet on this or that game. So they were dealing with a numerical quantity associated with the chances of winning or losing a game. Such a numerical quantity associated with the results (elementary events) of a game (random experiment) is an example of a random variable. An elementary event can be considered a qualitative result of an experiment, while a random variable describes its quantitative result.

Consider an example of betting $1 in a game of flipping a coin that you play against a partner. Let's assume you bet on "heads", then a coin is flipped. If you guessed correctly and the coin indeed falls on "heads", your partner pays you $1. If you guessed incorrectly, you pay $1 to him. This seems to be a fair game with equal chances to win and to lose. Let's associate a numerical value - the positive amount won or the negative amount lost - with each elementary event occurring in this game, that is, introduce a function with numerical values defined on the set of elementary events. The value of this function for the elementary event "heads" equals 1, and its value for the elementary event "tails" equals −1. This is an example of a random variable. A small sketch of this construction follows.
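Here is a minimal Python sketch of the coin-flip bet described above. The names (sample_space, xi) are illustrative, not part of any standard library; the random variable simply assigns +1 to "heads" and −1 to "tails".

```python
import random

# Sketch: two elementary events, and a random variable xi that gives
# the dollar amount won (+1) or lost (-1) for each of them.
sample_space = ["heads", "tails"]

def xi(elementary_event):
    """Value of the random variable for a given elementary event."""
    return 1 if elementary_event == "heads" else -1

# One play of the game: an elementary event occurs, xi gives the winnings.
outcome = random.choice(sample_space)   # fair coin, equal chances assumed
print(outcome, xi(outcome))             # e.g. "heads 1" or "tails -1"
```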

Basically, any function defined on a set of elementary events that takes real values depending on the results of a certain random experiment is a random variable. Let's consider some examples.

Consider a game of roulette. A dealer spins a small ball on a wheel divided into partitions with 36 numbers from 1 to 36 and two additional partitions with 0 and 00 (the American version). You can bet on any number from 1 to 36 (among other options, which we do not consider in this example). Assume you bet $1 on number 23. If the ball stops on this number, you win, and the dealer pays you $36. If the ball stops on any other number, including 0 or 00, you lose your bet of $1. The random variable is thus defined as having a value of 36 on the elementary event "the ball stops in the partition with number 23 in it" and a value of −1 for all other elementary events, as the sketch below illustrates.
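A hypothetical Python sketch of this roulette bet (the names partitions and xi are again illustrative). It enumerates all 38 elementary events and the value the random variable assigns to each one: 36 on exactly one event and −1 on the remaining 37.

```python
# 38 partitions: numbers 1..36 plus "0" and "00" (American wheel).
partitions = [str(n) for n in range(1, 37)] + ["0", "00"]

def xi(partition):
    """Winnings for the elementary event 'the ball stops on this partition',
    assuming a $1 bet on number 23."""
    return 36 if partition == "23" else -1

# The random variable takes the value 36 on one elementary event ("23")
# and -1 on all the others.
print({p: xi(p) for p in partitions})
```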

Since in this course of the theory of probabilities we consider only random experiments with a finite number of elementary events, all our random variables are defined on a finite set of elementary events and take a finite number of values. The values are always real numbers. The elementary events might or might not have equal chances to occur. Therefore, if there are N elementary events that our random experiment can result in, there are N values (not necessarily different) of the random variable defined on this set of elementary events.

Assume our sample space Ω consists of N elementary events E1, E2,..., EN. Generally speaking, they might not have equal chances to occur, so let's assume that the probability of occurrence of the elementary event Ei is Pi (index i ranges from 1 to N), where all these probabilities are non-negative and, added together, sum up to 1.
A random variable is a function with real values defined on each of these elementary events.
Let's use the Greek letter ξ to denote this random variable. The values taken by the random variable ξ depend on the result of the random experiment.
If the experiment results in E1, which can happen with a probability P1, the random variable ξ takes the value of X1.
If the experiment results in E2, which can happen with a probability P2, the random variable ξ takes the value of X2.
etc.
X1=ξ(E1),
X2=ξ(E2),
...
XN=ξ(EN).
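
The general setup above can be sketched in Python as follows. The concrete events, probabilities and values are made up purely for illustration; the point is the pattern "event Ei occurs with probability Pi, and ξ takes the corresponding value Xi".

```python
import random

# N elementary events E1..EN with probabilities P1..PN (non-negative, sum 1),
# and a random variable xi taking value Xi on Ei. All numbers are illustrative.
events = ["E1", "E2", "E3"]
P      = [0.5, 0.3, 0.2]                    # P1, P2, P3
X      = {"E1": 10, "E2": -5, "E3": 0}      # Xi = xi(Ei)

assert all(p >= 0 for p in P) and abs(sum(P) - 1) < 1e-12

# One run of the random experiment: an elementary event Ei occurs with
# probability Pi, and the random variable takes the corresponding value Xi.
event = random.choices(events, weights=P, k=1)[0]
print(event, X[event])
```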

So we have a random variable ξ that takes different values Xi with different probabilities Pi, depending on the result of a random experiment. To analyze this random variable, we don't really need to think much about the concrete elementary events (the arguments of the function that we call the random variable); we can concentrate only on the probabilities of the values that our random variable takes. Thus, we can say that in the game of roulette with 38 partitions on the wheel (numbers from 1 to 36, 0 and 00), betting on number 23, we deal with a random variable that takes a value of 36 with probability 1/38 (winning by correctly guessing the number where the ball stops) and a value of −1 with probability 37/38 (losing).
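To illustrate this shift in viewpoint, here is a short Python sketch of the same roulette random variable described only by its values and their probabilities, with no reference to the individual partitions (names are again illustrative).

```python
import random

# The distribution of the random variable: P(xi = 36) = 1/38, P(xi = -1) = 37/38.
values        = [36, -1]
probabilities = [1/38, 37/38]

# Sample the value of the random variable directly from this distribution,
# without modeling the 38 individual partitions at all.
sample = random.choices(values, weights=probabilities, k=10)
print(sample)   # e.g. [-1, -1, 36, -1, -1, -1, -1, -1, -1, -1]
```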
