*Notes to a video lecture on http://www.unizor.com*

__Linear Regression__

In many cases we suspect that one random variable is dependent on another.

Sometimes the dependence can be expressed as a formula. Consider two people measuring the temperature at the same place and at the same time, but one measures it in degrees Fahrenheit (**T**_{F}), while the other - in degrees Celsius (**T**_{C}).

Obviously, we are talking about two random variables precisely related to each other by the formula:
**T**_{C} **= (5/9)·(T**_{F}**−32)**

In most cases, however, the dependence between random variables is more complex and, most often, cannot be expressed as a formula or a function. The reason for this is that there are many factors influencing the values of random variables, all contributing to the results of random experiments.
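As a quick sanity check of this exact relationship, here is a couple of lines of Python (the language choice is just for illustration):

```python
# Exact relationship between the two temperature scales:
# T_C = (5/9)·(T_F − 32)
def fahrenheit_to_celsius(t_f):
    return (t_f - 32.0) * 5.0 / 9.0

print(fahrenheit_to_celsius(212.0))  # boiling point of water: 100.0
print(fahrenheit_to_celsius(32.0))   # freezing point of water: 0.0
```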

Consider such random variables as volcanic activity and average annual temperature on the planet. There must be some kind of dependency between them, but there are so many other factors affecting the average annual temperature, besides volcanic activity, that all we can definitely say is that some dependency exists; it cannot be expressed as a definitive formula.

In other cases we can talk about a cause that is completely under our control, and an effect that we can measure.

For instance, the cause can be the amount of money some parents spend on the education of their child, and the effect is the amount of knowledge (measured in some units) the child obtains as a result of this education. The cause (the amount of money spent on education) is under the control of the parents, while the effect (knowledge) certainly depends on many additional factors besides the money, which justifies considering it a random variable.

Another example is forecasting the air temperature one hour from now based on the current temperature, taking into account the time of day (morning or afternoon) and the month of the year. During this one-hour period the temperature will change up or down, depending on known factors (time of day and season) and some random fluctuations that we don't know about. So we can assume that there is a strong dependency between the current and future temperatures and, if we know this dependency, we can predict the temperature in one hour with some precision based on the current temperature, the time of day and the season.

Of course, the more causes influencing the effect we are aware of, the better our understanding of the dependency between them, and, therefore, the better equipped we will be to achieve or predict a certain effect.

For example, the positive result of medical treatment of some illness depends on a whole complex of measures - drugs, medical procedures, food, genetic factors, physical exercises etc. All these factors are important and, if we knew how each of them (being either known or pretty much under our control) contributes to the success of the entire treatment, we would be able to treat a patient very efficiently and with a good outcome.

Let's bring some Mathematical Statistics into the picture.

First of all, we simplify the problem by considering only the case of one contributing factor **X** (which we consider as a cause - an independent variable under our control or known in advance) and one observed effect **Y** of this cause - a random variable that depends on **X** and on some unknown factors summarized in a random variable **ε** that shifts the value of **Y** randomly up or down, with a Normal distribution of probabilities and an expectation of zero.

We further assume that the dependency between **X** and **Y** is linear with unknown coefficients **a** and **b**. So, the entire dependency can be expressed as
**Y = a·X + b + ε**
This type of relationship is called *linear regression*.
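To make this setup concrete, here is a small Python sketch that generates observations according to the model **Y = a·X + b + ε** with Normally distributed **ε** of zero expectation. The particular values of **a**, **b** and the noise level are made up purely for illustration:

```python
import random

random.seed(0)

a_true, b_true = 2.0, 5.0  # the "unknown" coefficients (assumed for the demo)
sigma = 1.0                # standard deviation of the noise ε

# known values of the independent variable X
xs = [float(i) for i in range(20)]

# observed values of Y = a·X + b + ε, where ε ~ Normal(0, sigma)
ys = [a_true * x + b_true + random.gauss(0.0, sigma) for x in xs]
```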

Our purpose is to determine coefficients **a** and **b** of linear regression based on known values of the independent variable **X** - x_{1}, x_{2} ... x_{n} - and observed values of the dependent variable **Y** - y_{1}, y_{2} ... y_{n}.

The basis for determining these coefficients should be the maximum closeness of the values {a·x_{i}+b} to the values {y_{i}}. Since the difference between **Y** and **a·X+b** is a Normal random variable **ε**, the best coefficients **a** and **b** are those that minimize the empirical variance of the random variable **ε**.

The next simplification can be achieved by replacing random variables **X** and **Y** with random variables **X−E(X)** and **Y−E(Y)**.

Consider averaging transformations:
**E(Y) = E(a·X + b + ε) =**
**= a·E(X) + b + E(ε) =**
(since we assumed that **ε** is a Normal random variable with a mathematical expectation equal to zero)
**= a·E(X) + b**

Therefore,
**b = E(Y) − a·E(X)**
and
**Y = a·X + E(Y) − a·E(X) + ε**
or
**Y−E(Y) = a·[X−E(X)] + ε**

As you see, if we knew **E(X)** and **E(Y)**, our problem would be easier, since there is only one parameter **a** to be determined to minimize the variance of **ε**.

We do not know the precise values of the mathematical expectations of **X** and **Y**, but we do have their statistics, which means that we can approximate these values with the arithmetic means of the statistical observations:
**E(X)** ≅ (x_{1}+x_{2}+...+x_{n})/n = U
**E(Y)** ≅ (y_{1}+y_{2}+...+y_{n})/n = V

Hopefully, we do not lose much precision by stating that
**Y−V = a·(X−U) + ε**
(where U and V are the known arithmetic means of the available statistical values of **X** and **Y**).

Let's replace the statistical values x_{1}, x_{2} ... x_{n} with
X_{1}=x_{1}−U, X_{2}=x_{2}−U ... X_{n}=x_{n}−U
and replace y_{1}, y_{2} ... y_{n} with
Y_{1}=y_{1}−V, Y_{2}=y_{2}−V ... Y_{n}=y_{n}−V

Our problem is to find such a coefficient **a** that the values
Y_{1}−**a**·X_{1}, Y_{2}−**a**·X_{2} ... Y_{n}−**a**·X_{n}
have the smallest sample variance.
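The centering step above can be sketched in Python as follows (the sample values here are made up; U, V and the centered lists correspond to the quantities defined in the text):

```python
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.2, 4.9, 7.1, 9.0]

n = len(xs)
U = sum(xs) / n  # arithmetic mean approximating E(X)
V = sum(ys) / n  # arithmetic mean approximating E(Y)

# centered values X_k = x_k − U and Y_k = y_k − V
Xc = [x - U for x in xs]
Yc = [y - V for y in ys]

# by construction the centered values have (up to rounding) zero arithmetic mean
print(sum(Xc), sum(Yc))
```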

Since the arithmetic mean of these values is zero (remember, we subtracted U and V from the original statistical values to get them) and since the number of experiments n is constant, we just have to minimize the sum of squares of these numbers. This sum of squares is just a quadratic function of **a**, and we can easily find its minimum:
**s² = Σ(Y**_{k}**−a·X**_{k}**)² =**
**= a²·ΣX**_{k}**² − 2a·ΣX**_{k}**·Y**_{k}** + ΣY**_{k}**²**

The minimum of this quadratic polynomial is at the point
**a = ΣX**_{k}**·Y**_{k}** / ΣX**_{k}**²**

Knowing coefficient **a**, we can determine coefficient **b** from the equation
**E(Y) = a·E(X) + b**
(approximating **E(X)** and **E(Y)** with U and V). That fully determines the coefficients of linear regression.
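Putting the formulas together, the whole estimation can be sketched in a few lines of Python (the sample data are made up; in practice one would likely use an established routine such as `numpy.polyfit` or `scipy.stats.linregress`):

```python
def linear_regression(xs, ys):
    """Estimate a and b in Y = a·X + b + ε by minimizing the sum of squares."""
    n = len(xs)
    U = sum(xs) / n           # approximates E(X)
    V = sum(ys) / n           # approximates E(Y)
    Xc = [x - U for x in xs]  # X_k = x_k − U
    Yc = [y - V for y in ys]  # Y_k = y_k − V
    # a = ΣX_k·Y_k / ΣX_k²
    a = sum(X * Y for X, Y in zip(Xc, Yc)) / sum(X * X for X in Xc)
    # b = E(Y) − a·E(X) ≅ V − a·U
    b = V - a * U
    return a, b

# exact linear data y = 2x + 1 should be recovered exactly
a, b = linear_regression([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
print(a, b)  # 2.0 1.0
```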

How can we determine the quality of approximation or prediction of the values of random variable **Y** based on the values of **X** and the regression formula we have just determined?

Knowing the regression coefficients and the sample data for **X** and **Y**, we can derive the sample data for the error
**ε = Y−a·X−b**

From these sample data for **ε** we can calculate its sample variance and standard deviation **σ**. Taking **2σ** as a margin of error with 95% certainty, we can compare it with the empirical mean value of random variable **Y** and determine the ratio of the margin of error to **Y**'s mean: **2σ/E(Y)**. If it's small (say, 0.05 or less), we can be satisfied with our regression analysis and conclude that the regression formula adequately represents the dependency between **X** and **Y** and can be used for prediction of future values of **Y** based on observed values of **X**.

Obviously, which ratio of the margin of error to the mean value of **Y** should be considered satisfactory is an individual issue and should be decided based on circumstances, thus introducing an element of subjectivity into this theory.
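The quality check described above can be sketched as follows (same made-up sample data as before; the sample variance of **ε** is computed with the straightforward 1/n formula, and the coefficients passed in are the ones the regression formula gives for these points):

```python
import math

def regression_quality(xs, ys, a, b):
    """Return the ratio 2σ/mean(Y), where σ is the sample
    standard deviation of the error ε = Y − a·X − b."""
    eps = [y - a * x - b for x, y in zip(xs, ys)]
    n = len(eps)
    mean_eps = sum(eps) / n
    var_eps = sum((e - mean_eps) ** 2 for e in eps) / n  # sample variance of ε
    sigma = math.sqrt(var_eps)
    mean_y = sum(ys) / n
    return 2.0 * sigma / mean_y

xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.2, 4.9, 7.1, 9.0]
ratio = regression_quality(xs, ys, a=1.96, b=1.15)
print(ratio)  # a value around 0.03 - below the 0.05 threshold mentioned above
```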