Stat 601, Fall 2000, Class 6

Today's topics

*
Fitting equations to data, least squares
*
Assumptions in regression
*
Understanding outliers in regression
*
Prediction and confidence intervals

Fitting equations to data

Why fit?

Once we have a fitted equation we can summarize and exploit it:
*
Graphically summarize
*
Interpolate
*
Forecast/extrapolate (with caution)
*
Leverage the equation: marginal effects and the like.

Least squares

The classical definition of the ``best'' line:
*
Find the $\beta_0$ and $\beta_1$ that minimize

\begin{displaymath}\sum_{i = 1}^n \{y_i - (\beta_0 + \beta_1 x_i)\}^2.\end{displaymath}

Call the minimizers $\hat\beta_0$ and $\hat\beta_1$.
*
In English, the ``best'' line minimizes the sum of squared vertical distances from the points to the line; it is called the Least Squares Line (a small numerical sketch follows this list).
*
Sometimes we fit a line on a transformed scale and then back-transform, which gives a ``best fitting'' curve.
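
As a concrete illustration, here is a minimal least-squares sketch in Python on made-up data (the arrays x and y are hypothetical, not from the bulkpack); the closed-form formulas for $\hat\beta_0$ and $\hat\beta_1$ are checked against np.polyfit, which minimizes the same sum of squares.

    # Least squares by hand for a simple line, on hypothetical data.
    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # hypothetical predictor
    y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])   # hypothetical response

    xbar, ybar = x.mean(), y.mean()
    b1 = np.sum((x - xbar) * (y - ybar)) / np.sum((x - xbar) ** 2)  # slope estimate
    b0 = ybar - b1 * xbar                                           # intercept estimate

    # np.polyfit minimizes the same sum of squared vertical distances,
    # so it should reproduce the hand calculation.
    b1_check, b0_check = np.polyfit(x, y, 1)
    print(b0, b1, b0_check, b1_check)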

Interpretation

*
The cardinal rule of data analysis: always, always plot your data.
*
Our model: $Av(Y\,\vert\,X) = \beta_0 + \beta_1 X$.
*
In the equation $y = \beta_0 + \beta_1 x$:
*
Intercept $\beta_0$: the value of $y$ when $x = 0$.
*
Slope $\beta_1$: the change in $y$ for every one-unit change in $x$. Always understand the units on $\beta_1$.
*
The slope in a model fit on the ln(x) scale can be interpreted as a percentage change (see p. 21 of the bulkpack, and the calculation just below).
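
To see why, suppose the fitted model is $Av(Y\,\vert\,X) = \beta_0 + \beta_1 \ln(x)$. Increasing x by 1% multiplies x by 1.01, so the average of Y changes by

\begin{displaymath}\beta_1 \ln(1.01\,x) - \beta_1 \ln(x) = \beta_1 \ln(1.01) \approx 0.01\,\beta_1,\end{displaymath}

that is, roughly $\beta_1/100$ for every 1% increase in x.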

Regression diagnostics

Assumptions

In this section, you must understand the idea of the residual:

\begin{displaymath}(y_i - \hat y_i).\end{displaymath}

Residual = the difference between what we observe ($y_i$) and what we expect ($\hat y_i$) under the model.

You must also understand the difference between the ``true regression line'' $\beta_0 + \beta_1 x$, and the ``estimated regression line'' $\hat\beta_0 + \hat\beta_1 x$. It's like the difference between the population mean and the sample mean.
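
In symbols (writing $e_i$ for the residual), the distinction is between the unobservable error, measured from the true line, and the observable residual, measured from the estimated line:

\begin{displaymath}\epsilon_i = y_i - (\beta_0 + \beta_1 x_i), \qquad
e_i = y_i - \hat y_i = y_i - (\hat\beta_0 + \hat\beta_1 x_i).\end{displaymath}

We never see the $\epsilon_i$; we check the assumptions through the residuals $e_i$.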

*
The second rule of data analysis: if it's measured against time, plot against time.
*
The third rule of data analysis: always check the residuals.
*
Regression assumptions and how to check for them.
So far we have thought of our model as the rule that relates the average value of Y to X, that is $Av(Y\,\vert\,X) = \beta_0 + \beta_1 X$.
The process that generates the data, that is, the actual Y-values themselves, is often modeled as

\begin{displaymath}Y_i = \underbrace{\beta_0 + \beta_1 X_i}_{systematic} +
\overbrace{\epsilon_i}^{noise}.\end{displaymath}

That is, we think of the data as coming from a two-part process, a systematic part and a random part (signal plus noise). The noise part is sometimes due to measurement error and at other times is interpreted as all the important variables that should have been in the model but were left out.
The regression assumptions:
*
$\epsilon_i$ are independent.
*
$\epsilon_i$ have mean zero and constant variance: $Av(\epsilon_i) = 0$ and $Var(\epsilon_i) = \sigma^2$ for all $i$.
*
$\epsilon_i$ are approximately normally distributed.
Consequences of violation of the assumptions:
*
If there is positive autocorrelation, then we are over-optimistic about the information content in the data: we think there is less noise than there really is, so confidence intervals are too narrow.
*
Non-constant variance:
*
Incorrectly quantify the true uncertainty.
*
Prediction intervals are inaccurate.
*
Least squares is inefficient: if you understood the structure of $\{\sigma_i^2\}$ better you could get better estimates of $\beta_0$ and $\beta_1$.
*
If the $\epsilon_i$ are symmetric, then the normality assumption is not a big deal. If the $\epsilon_i$ are really skewed and you only have a small amount of data, then it is all up the creek.
*
Since the assumptions are on the error term, the $\epsilon_i$, we have to check them by looking at the ``estimated errors'', the residuals (a plotting sketch follows this list).
*
Note that $\epsilon_i$ is the distance from the point to the true line.
*
But the residual is the distance from the point to the estimated line.
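
A minimal residual-checking sketch in Python on simulated data (the arrays and the model that generates them are hypothetical); it plots the residuals against the fitted values, against time order, and as a histogram, which covers the three assumptions above.

    # Residual diagnostics on hypothetical simulated data.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    x = np.linspace(0, 10, 50)
    y = 2 + 0.5 * x + rng.normal(scale=1.0, size=x.size)   # hypothetical data

    b1, b0 = np.polyfit(x, y, 1)
    resid = y - (b0 + b1 * x)

    fig, axes = plt.subplots(1, 3, figsize=(12, 3))
    axes[0].scatter(b0 + b1 * x, resid)   # curvature or a funnel shape signals trouble
    axes[0].axhline(0.0)
    axes[0].set(xlabel="fitted value", ylabel="residual")
    axes[1].plot(resid, marker="o")       # residuals in time order: runs suggest autocorrelation
    axes[1].set(xlabel="time order", ylabel="residual")
    axes[2].hist(resid, bins=10)          # rough check on the normality assumption
    axes[2].set(xlabel="residual")
    plt.tight_layout()
    plt.show()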

Outliers

*
Identifying unusual points: residuals, leverage, and influence.
*
Points extreme in the X-direction are called points of high leverage.
*
Points extreme in the Y-direction are points with BIG residuals.
*
Points to watch out for are those with high leverage AND a BIG residual. These are called influential points. Deleting them from the regression can change everything (see the sketch below).
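
A small sketch on made-up data (everything here is hypothetical) that computes the leverage of each point in simple regression, $h_i = 1/n + (x_i - \bar x)^2 / \sum_j (x_j - \bar x)^2$, and then measures influence by refitting with the most leveraged point deleted.

    # Leverage and influence on hypothetical data with one extreme X value.
    import numpy as np

    rng = np.random.default_rng(1)
    x = np.append(np.linspace(0, 10, 20), 40.0)                    # one point far out in X
    y = np.append(2 + 0.5 * x[:-1] + rng.normal(size=20), 35.0)    # and off the line in Y

    n = x.size
    h = 1.0 / n + (x - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2)   # leverage of each point
    worst = h.argmax()
    print("highest leverage:", h[worst], "at x =", x[worst])

    # Influence: compare the slope with and without that point.
    keep = np.arange(n) != worst
    slope_all, _ = np.polyfit(x, y, 1)
    slope_drop, _ = np.polyfit(x[keep], y[keep], 1)
    print("slope with the point:", slope_all, "without it:", slope_drop)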

Understanding almost all the regression output

*
R-squared.
*
The proportion of variability in Y explained by the regression model.
*
Answers the question ``how good is the fit''?
*
Root mean squared error (RMSE).
*
The spread of the points about the fitted model.
*
Answers the question ``can you do good prediction''?
*
If we write the variance of the $\epsilon_i$ as $Var(\epsilon_i) = \sigma^2$, then the RMSE estimates $\sigma$.
*
Only a meaningful measure with respect to the range of Y.
*
A rule-of-thumb 95% prediction interval: go up to the fitted line, then take +/- 2 RMSE (it only works within the range of the data; see the sketch below).
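
A short sketch on simulated data (hypothetical, as before) showing how R-squared, the RMSE, and the rule-of-thumb interval are computed from the residuals.

    # R-squared, RMSE, and the +/- 2 RMSE rule of thumb on hypothetical data.
    import numpy as np

    rng = np.random.default_rng(2)
    x = np.linspace(0, 10, 100)
    y = 5 + 2 * x + rng.normal(scale=3.0, size=x.size)   # hypothetical data

    b1, b0 = np.polyfit(x, y, 1)
    resid = y - (b0 + b1 * x)

    sse = np.sum(resid ** 2)              # unexplained variation
    sst = np.sum((y - y.mean()) ** 2)     # total variation in Y
    r_squared = 1 - sse / sst             # proportion of variability explained
    rmse = np.sqrt(sse / (x.size - 2))    # estimates sigma, the spread about the line

    x0 = 5.0                              # a value inside the range of the data
    yhat0 = b0 + b1 * x0
    print(r_squared, rmse, (yhat0 - 2 * rmse, yhat0 + 2 * rmse))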

Confidence interval for the slope

*
Answers the question ``is there any point in it at all''?
*
If the CI contains 0, then 0 is a feasible value for the slope, i.e. the line may be flat, in which case X tells you nothing about Y (see the sketch after this list).
*
The p-value associated with the slope is testing the hypothesis Slope = 0 vs Slope $\ne$ 0.
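
One way to get the confidence interval and the p-value for the slope is sketched below, using statsmodels on hypothetical simulated data; the same numbers come out of the textbook formula $\hat\beta_1 \pm t^*\,SE(\hat\beta_1)$, where $t^*$ is the appropriate t critical value.

    # Confidence interval and p-value for the slope, on hypothetical data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    x = np.linspace(0, 10, 50)
    y = 1 + 0.4 * x + rng.normal(size=x.size)      # hypothetical data

    fit = sm.OLS(y, sm.add_constant(x)).fit()
    print(fit.params)                  # [intercept, slope]
    print(fit.conf_int(alpha=0.05))    # 95% CIs; does the slope row contain 0?
    print(fit.pvalues)                 # p-value for Slope = 0 vs Slope != 0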

Two types of prediction (concentrate on the second)

*
Estimating an average, ``where is the regression line''?

\begin{displaymath}\underbrace{Av(Y\,\vert\,X)}_{\mbox{Estimate this}} = \beta_0 + \beta_1
X.\end{displaymath}

Range of feasible values should reflect uncertainty in the true regression line.
*
Predicting a new observation, ``where's a new point going to be''?

\begin{displaymath}\underbrace{Y_i}_{\mbox{Estimate this}} = \beta_0 + \beta_1 X +
\epsilon_i.\end{displaymath}

Range of feasible values should reflect uncertainty in the true regression line AND the variability of the points about the line (the formulas below make the difference explicit).
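
The two ranges differ only by an extra ``1'' under the square root, which carries the variability of a new point about the line. Writing $s$ for the RMSE, $S_{xx} = \sum (x_i - \bar x)^2$, and $t^*$ for the t critical value, the standard intervals at a value $x_0$ are

\begin{displaymath}\hat y_0 \pm t^* s \sqrt{\frac{1}{n} + \frac{(x_0 - \bar x)^2}{S_{xx}}}
\quad \mbox{(confidence interval for the line)},\end{displaymath}

\begin{displaymath}\hat y_0 \pm t^* s \sqrt{1 + \frac{1}{n} + \frac{(x_0 - \bar x)^2}{S_{xx}}}
\quad \mbox{(prediction interval for a new point)}.\end{displaymath}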

What a difference a leveraged point can make

*
   From pages 90 and 94.

     Measure       With outlier   Without outlier
     R-squared     0.78           0.075
     RMSE          3570           3634
     Slope         9.75           6.14
     SE(slope)     1.30           5.56

If someone comes to you with a great R-squared, it does not mean they have a great model; maybe there is just a highly leveraged point that is well fit by the regression (the simulation below makes the same point with made-up numbers).
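
A quick simulation of the phenomenon, with made-up numbers (hypothetical data, not the example from pages 90 and 94): thirty points with no X-Y relationship give an R-squared near zero, but adding a single far-out point that the line chases pushes R-squared way up.

    # One high-leverage point can manufacture a large R-squared.
    import numpy as np

    rng = np.random.default_rng(4)
    x = rng.normal(size=30)          # X unrelated to Y
    y = rng.normal(size=30)

    def r_squared(x, y):
        b1, b0 = np.polyfit(x, y, 1)
        resid = y - (b0 + b1 * x)
        return 1 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

    print("without the outlier:", r_squared(x, y))                    # near 0
    print("with the outlier:   ", r_squared(np.append(x, 15.0),
                                            np.append(y, 15.0)))      # much larger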



