---
title: "DSI Example"
author: Bob Stine
date: November 19, 2017
output: html_notebook
---
This notebook fits simple regression models to two small data sets. The second example also shows lattice/trellis plots.
```{r}
library(ggplot2)
library(car)
library(lattice)
```
To demonstrate the extensibility of R, here are two functions that let me show students the idea of visually testing for association. I would hide these definitions from students, but since they are written in R, curious students could always find them.
```{r}
permute <- function(x) { x[sample.int(length(x))] }

visual_test_for_association <- function(x, y, rows=4, cols=5) {
  par(mfrow=c(rows,cols), mar=c(2,2,1,1))  # compress the margins to fit more plots together
  ix <- sample.int(rows,1)                 # position of the panel with the actual data
  iy <- sample.int(cols,1)
  for(i in 1:rows) {
    for(j in 1:cols) {                     # all but one panel show permuted x values
      if( (i==ix) & (j==iy) ) plot(x, y, xlab='x', ylab='y')
      else                    plot(permute(x), y, xlab='x', ylab='y')
    }
  }
}
```
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Example 1: Locating a Franchise Outlet
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Read the data into a data frame. The data frame has 80 observations of two variables.
```{r}
Franchise <- read.csv("Data/21_4m_franchise.csv")
```
Each row of the data frame describes the gasoline sales (in thousands of gallons) and the traffic volume (also in thousands) for one week. For example, sales were `r 1000 * Franchise$Sales[1]` gallons in the first week and `r 1000 * Franchise$Sales[2]` in the second, while traffic volume was `r 1000 * Franchise$Traffic[1]` in the first week.
```{r}
Franchise
```
The `View` command opens a spreadsheet view of the data. You can only view, not change, the data. Buttons in the header of the view support sorting the columns.
```{r eval=FALSE}
View(Franchise)
```
## Marginal Plots
Histograms are a good starting point, identifying the shape of the distribution and outliers.
```{r}
hist(Franchise$Sales, breaks=10)
```
You can then get more clever and add a boxplot to the figure, which helps to connect the two displays.
```{r eval=FALSE}
hist(Franchise$Sales, breaks=10)
boxplot(Franchise$Sales, add=TRUE, horizontal=TRUE, width=2, at=6)
```
## Bivariate Plots
A scatterplot of sales on traffic volume shows that the two variables are linearly associated, and the association is moderately strong.
```{r}
plot(Sales ~ Traffic, data=Franchise)
```
Once you've seen the plot, a regression line seems like a good summary. This is also a good chance to show the *visual test for association*: only one panel shows the actual data, while the rest permute the x values.
```{r}
visual_test_for_association(Franchise$Traffic,Franchise$Sales,3,3)
```
```{r}
plot(Sales ~ Traffic, data=Franchise)
regr <- lm(Sales ~ Traffic, data=Franchise)
abline(regr, col='red')
```
The summary of a regression creates a named list holding properties of the fitted model.
```{r}
sRegr <- summary(regr)
names(sRegr)
```
These include the intercept, slope and standard error of the regression.
```{r}
sRegr$coefficients
sRegr$sigma
```
Printing the summary shows a table of the least squares estimates and the overall fit of the regression.
```{r}
sRegr
```
You can embellish the plot by adding components from the fitted model.
```{r eval=FALSE}
plot(Sales ~ Traffic, data=Franchise)
regr <- lm(Sales ~ Traffic, data=Franchise)
abline(regr, col='red')
b <- round(coefficients(regr),2)
text(45,4, paste0("Fit = ",b[1],"+",b[2],"Traffic"), col='blue')
```
You can also use the fitted regression to build prediction intervals, though here you can start to see how R begins to look more like "programming" than "statistics".
```{r}
plot(Sales ~ Traffic, data=Franchise)
regr <- lm(Sales ~ Traffic, data=Franchise)
abline(regr, col='red')
xValues <- data.frame(Traffic = seq(25, 50, length.out=100))
predInt <- predict(regr, newdata = xValues, interval="prediction")
lines(xValues$Traffic, predInt[,'lwr'], lty=3, col='red')
lines(xValues$Traffic, predInt[,'upr'], lty=3, col='red')
```
Of course, packages exist that automate routine figures.
```{r eval=FALSE}
ggplot(Franchise, aes(x=Traffic, y=Sales)) +
geom_point() +
geom_smooth(method='lm')
```
Before relying on the summary statistics for inference, check that the residuals do not show an evident pattern.
```{r}
plot(Franchise$Traffic, residuals(regr))
abline(h=0, col='gray')
```
In addition, check that the distribution of the residuals appears nearly normal. To get the bands, use the version of the normal quantile plot from the `car` package (see Chapter 12).
```{r}
par(mfrow=c(1,2))
qqnorm(residuals(regr))
car::qqPlot(residuals(regr), main="Q-Q Plot")
```
At the extremes, the residuals seem to be a bit "fat-tailed", more extreme than what normality would predict. The deviation is slight, but worth noting.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Example 2: Pricing Used Cars
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
The following data describe 276 "certified" used BMW sedans.
```{r}
# Cars <- read.csv("http://www-stat.wharton.upenn.edu/~stine/stat405/bmw_2016.csv")
Cars <- read.csv("bmw_2016.csv")
dim(Cars)
```
The data include the model year, price (in dollars), mileage, model, model style, color, and type of transmission (auto or manual).
```{r}
Cars
```
This analysis concerns the effect of the model type on price. Before continuing, convert the model number to a `factor` (R's version of a categorical variable) rather than leaving it as a number, and do the same for the model year. This conversion ensures these variables identify groups rather than being treated as numbers.
```{r}
Cars$Model <- as.factor(Cars$Model)
Cars$Age <- Cars$Year - 2016 # numerical
Cars$Year <- as.factor(Cars$Year)
```
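A quick sanity check (this chunk is just for verification, not part of the analysis) confirms the classes after the conversion:
```{r}
sapply(Cars[c("Model", "Year", "Age")], class)
```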
Here's the standard comparison boxplot display.
```{r}
boxplot(Price ~ Model, data = Cars)
```
`lattice` offers a similar plot.
```{r}
lattice::bwplot(Price ~ Model, data = Cars)
```
`lattice` produces marginal plots that are perhaps more attractive -- and easier to construct -- than if done in base R. For example, here are kernel density plots of the prices of the 3 different models (listed price, in dollars). I would prefer to overlay them to sharpen the comparison, but the common scales across panels still help. `lattice` uses statistical notation for conditioning, with a vertical bar indicating a conditional relationship, in this case showing the distribution of prices conditional on each of the 3 model types.
```{r}
lattice::densityplot(~ Price | Model, data=Cars)
```
Boxplots like the following convey a sense of the general depreciation within each model type. Grid lines would also help to align the boxes across panels, but that option does not work for this type of plot.
```{r}
bwplot(Price ~ Year | Model, data=Cars)
```
`stripplot` shows the data without the boxes. With overplotting as in this example, you want the dithering option (`jitter.data` in lattice) turned on; it adds a small amount of random variation so points do not pile on top of one another. Gridlines work very well here.
```{r}
stripplot(Price ~ Year | Model, data=Cars, jitter.data=TRUE, grid='h')
```
Lattice plots are helpful when looking at the effect of "lurking" variables on bivariate association. For example, here's a "regular R" plot of price on mileage. To show that this relationship depends on the model type, I've colored the points by model. You can see what's happening (maybe), but it's subtle.
```{r}
plot(Price ~ Mileage, data=Cars, col=Cars$Model)
```
Rather than drop the data into a regression, we can explore conditional associations graphically using `lattice`. You can guess what the following plot does.
```{r}
xyplot(Price ~ Mileage | Model, data=Cars, grid='h',
main="Scatterplot by Model Type")
```
Because of the common scaling used for the 3 frames of the plot, we can see that the slopes look very similar (ie, no interaction), but the level is higher for the 335 models than for the other two.
You can add more to each panel. The optional `type` setting for lattice plots includes the settings used in `plot` (eg, 'p' for points and 'h' for histogram lines) and further adds regression lines and smooth curves. (Further customization is possible by using the `panel` option to pass in a function that takes over how to draw the content of each panel.)
```{r}
xyplot(Price~Mileage|Model, data=Cars, grid='h',
type=c('p','r'), # 'smooth' for smooth curve
main="Scatterplot by Model Type")
```
The slopes are very similar, and we can confirm that with a regression: a large shift is indicated by the dummy-variable coefficients, but there's nothing interesting in the way of interactions (which we should check with `anova` because the left-out group here is the smallest of the three).
```{r}
regr.1 <- lm(Price~Mileage + Model, data=Cars)
summary(
regr.2 <- lm(Price~Mileage * Model, data=Cars)
)
```
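The check with `anova` mentioned above is a one-liner, comparing the additive fit to the fit with interactions (a sketch, assuming the two models from the previous chunk):
```{r}
anova(regr.1, regr.2)
```
A large p-value in the second row would confirm that adding the interaction does not significantly improve the fit.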
You can condition on more than one variable, but don't get carried away or you won't see much because of all of the plots. Again, the plot is nicer with Year as categorical. Some of the years are sparse, so let's limit the analysis to the more common years 2013-2015.
```{r}
table(Cars$Year)
```
When conditioned on both year and model, there's little association between mileage and price for 2015 models because there are so few cases and little variation in mileage. The association is more clear in 2014 and evident in 2013, particularly for the common 328 model. (The `fig.width` and `fig.height` options control the size of the plot rendered in the Rmd document.)
```{r, fig.width=6, fig.height=6}
xyplot(Price~Mileage|Model*Year, data=Cars[Cars$Year %in% 2013:2015,],
main="Scatterplot by Model Type and Model Year")
```
- - - - - - - - - - - - - - - - - - -