Worksheet 5

Published

September 27, 2023

Questions are below. My solutions are below all the question parts for a question; scroll down if you get stuck. There is extra discussion below that for some of the questions; you might find that interesting to read, maybe after tutorial.

For these worksheets, you will learn the most by spending a few minutes thinking about how you would answer each question before you look at my solution. There are no grades attached to these worksheets, so feel free to guess: it makes no difference at all how wrong your initial guess is!

1 Home prices

A realtor kept track of the asking prices of 37 homes for sale in West Lafayette, Indiana, in a particular year. The asking prices are in http://ritsokiguess.site/datafiles/homes.csv. There are two columns, the asking price (in $) and the number of bedrooms that home has (either 3 or 4, in this dataset). The realtor was interested in whether the mean asking price for 4-bedroom homes was bigger than for 3-bedroom homes.

  1. Read in and display (some of) the data.

  2. Draw a suitable graph of these data.

  3. Comment briefly on your plot. Does it suggest an answer to the realtor’s question? Do you have any doubts about the appropriateness of a \(t\)-test in this situation? Explain briefly.

  4. Sometimes prices work better on a log scale. This is because percent changes in prices are often of more interest than absolute dollar-value changes. Re-draw your plot using logs of asking prices. (In R, log() takes natural (base \(e\)) logs, which are fine here.) Do you like the shapes of the distributions better? Hint: you have a couple of options. One is to use the log right in your plotting (or, later, testing) functions. Another is to define a new column containing the log-prices and work with that.

  5. Run a suitable \(t\)-test to compare the log-prices. What do you conclude?

My solutions

  1. Read in and display (some of) the data.

Solution

The exact usual:

library(tidyverse)  # for read_csv and the pipe, if not already loaded
my_url <- "http://ritsokiguess.site/datafiles/homes.csv"
asking <- read_csv(my_url)
Rows: 37 Columns: 2
── Column specification ────────────────────────────────────────────────────────
Delimiter: ","
dbl (2): price, bdrms

ℹ Use `spec()` to retrieve the full column specification for this data.
ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
asking

There are indeed 37 homes; the first few of them have 4 bedrooms, and the ones further down, if you scroll, have 3. The price column does indeed look like asking prices of homes for sale.1
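If you want to see some of the 3-bedroom homes without scrolling, one possibility (a quick sketch; the choice of 6 rows is arbitrary) is to look at the last few rows:

# the last few rows, where the 3-bedroom homes are
asking %>% slice_tail(n = 6)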

\(\blacksquare\)

  2. Draw a suitable graph of these data.

Solution

Two groups of prices to compare, or one quantitative column and one column that appears to be categorical (it’s actually a number, but it’s playing the role of a categorical or grouping variable). So a boxplot. This requires care, though; if you do it without thinking you’ll get this:

ggplot(asking, aes(x = bdrms, y = price)) + geom_boxplot()
Warning: Continuous x aesthetic
ℹ did you forget `aes(group = ...)`?

The warning is the clue: the number of bedrooms looks quantitative, so ggplot has treated it as such and drawn a single boxplot of all the prices together, rather than one boxplot for each number of bedrooms.

Perhaps the most direct way around this is to take the warning message at face value and add bdrms as a group, thus:

ggplot(asking, aes(x = bdrms, y = price, group = bdrms)) + geom_boxplot()

and that works (as you will see below, it is the same as the other methods that require a bit more thought).

You might be thinking that this is something like black magic, so I offer another idea where you have a fighting chance of understanding what is being done.

The problem is that bdrms looks like a quantitative variable (it has values 3 and 4 that are numbers), but we want it to be treated as a categorical variable. The easiest way to turn it into one is via factor, like this:

ggplot(asking, aes(x = factor(bdrms), y = price)) + geom_boxplot()

If the funny label on the \(x\)-axis bothers you, and it probably should,2 define a new variable first that is the factor version of bdrms. You can overwrite the old bdrms since we will not need the number as a number anywhere in this question:3

asking %>% 
  mutate(bdrms = factor(bdrms)) -> asking
ggplot(asking, aes(x = bdrms, y = price)) + geom_boxplot()

and that works smoothly.4

As a very quick extra: factor(bdrms) and group = bdrms both correctly give two boxplots side by side, but if you look carefully, the shaded grey area in the background of the graph is slightly different in each case. The group = way still treats bdrms as quantitative, and the \(x\)-axis reflects that (there is an axis “tick” at 3.5 bedrooms), but the factor(bdrms) plot treats the made-categorical bdrms as a genuine categorical variable with the values 3 and 4 and nothing else (the \(x\)-axis only has ticks at 3 and 4). From that point of view, the group = bdrms plot is a bit of a hack: it makes the boxplots come out right without fixing up the \(x\)-axis.
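If you wanted to stay with the group = idea but also fix up the \(x\)-axis, one possibility (a sketch; this assumes bdrms is still the original number, that is, you run it before the mutate above that overwrote it with the factor version) is to set the axis breaks yourself:

# keep bdrms numeric, group by it, and put ticks only at 3 and 4
ggplot(asking, aes(x = bdrms, y = price, group = bdrms)) + geom_boxplot() +
  scale_x_continuous(breaks = c(3, 4))

At that point, though, going via factor is probably less work.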

\(\blacksquare\)

  3. Comment briefly on your plot. Does it suggest an answer to the realtor’s question? Do you have any doubts about the appropriateness of a \(t\)-test in this situation? Explain briefly.

Solution

It seems pretty clear that the average (on this plot, median) asking price for 4-bedroom houses is higher than for 3-bedroom houses. However, for a \(t\)-test to be appropriate, we need approximately normal distributions within each group of asking prices, and you can reasonably say that we do not: both distributions of asking prices are skewed to the right, and the 3-bedroom asking prices have three outliers at the top end.5

The other thing you need to consider is sample size: there are 37 houses altogether, so about 20 in each group:

asking %>% count(bdrms)

Thus the Central Limit Theorem will offer some help, but you could reasonably argue that even a sample size of 23 won’t be enough to fix up that skewness and those outliers in the 3-bedroom group.

\(\blacksquare\)

  4. Sometimes prices work better on a log scale. This is because percent changes in prices are often of more interest than absolute dollar-value changes. Re-draw your plot using logs of asking prices. (In R, log() takes natural (base \(e\)) logs, which are fine here.) Do you like the shapes of the distributions better? Hint: you have a couple of options. One is to use the log right in your plotting (or, later, testing) functions. Another is to define a new column containing the log-prices and work with that.

Solution

You can put the log right in the ggplot command, thus:

ggplot(asking, aes(x = bdrms, y = log(price))) + geom_boxplot()

These look a lot better. The 4-bedroom distribution is close to symmetric and the 3-bedroom distribution is much less skewed (and has lost its outliers).

For this, and the sample sizes we have, I would now have no problem at all with a \(t\)-test.

The other way to do this is to make a new column that has the log-price in it:

asking %>% 
  mutate(log_price = log(price)) -> asking

and then make the plot:

ggplot(asking, aes(x = bdrms, y = log_price)) + geom_boxplot()

Both ways come out the same, and are equally good.

For the second way, it is better to save a dataframe with the log-prices in it and then make a plot, because we will be using the log-prices in our hypothesis test in a moment. If you use a pipeline here, like this:

asking %>% 
  mutate(log_price = log(price)) %>% 
  ggplot(aes(x = bdrms, y = log_price)) + geom_boxplot()

this works, but you will have to define the log-prices again below. If you don’t see that now, that’s OK, but when you come to do the \(t\)-test with the log-prices in the next part, you should notice that calculating the log-prices again is inefficient, and come back here to save the dataframe with the log-prices in it so that you don’t have to. Or, I guess, use the log-prices directly in the \(t\)-test, but it seems odd to do one thing one way and the other thing a different way.

\(\blacksquare\)

  5. Run a suitable \(t\)-test to compare the log-prices. What do you conclude?

Solution

Bear in mind what the realtor wants to know: whether the mean (log-) price is higher for 4-bedroom houses than for 3-bedroom houses. This was something the realtor was curious about before they even looked at the data, so a one-sided test is appropriate. t.test takes the groups in sorted order, so the difference it works with is the group-3 mean minus the group-4 mean; we expect that to be negative, so the alternative will be "less". Once again, you can put the log directly into the t.test, or use a column of log-prices that you create (such as the one you made for the boxplot, if you did that). Thus, two possibilities are:

t.test(log(price)~bdrms, data = asking, alternative = "less")

    Welch Two Sample t-test

data:  log(price) by bdrms
t = -5.1887, df = 30.59, p-value = 6.481e-06
alternative hypothesis: true difference in means between group 3 and group 4 is less than 0
95 percent confidence interval:
       -Inf -0.4139356
sample estimates:
mean in group 3 mean in group 4 
       11.82912        12.44410 

and (my new column was called log_price):

t.test(log_price ~ bdrms, data = asking, alternative = "less")

    Welch Two Sample t-test

data:  log_price by bdrms
t = -5.1887, df = 30.59, p-value = 6.481e-06
alternative hypothesis: true difference in means between group 3 and group 4 is less than 0
95 percent confidence interval:
       -Inf -0.4139356
sample estimates:
mean in group 3 mean in group 4 
       11.82912        12.44410 

The P-value and conclusion are the same either way. The P-value is 0.0000065, way less than 0.05, so there is no doubt that the mean (log-) asking price is higher for 4-bedroom homes than it is for 3-bedroom homes.
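If you want to double-check which group t.test is treating as the first one (and hence which way around "less" goes), a quick sketch is to look at the distinct values of bdrms in order; the group listed first is the one whose mean comes first in the difference:

# the groups, in the order t.test will use them
asking %>% distinct(bdrms) %>% arrange(bdrms)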

Side note: t.test is more forgiving than ggplot was with bdrms. Before, we had to wrap it in factor to get it treated as a categorical variable. It is reasonable enough to do that here as well (it works either way), and using factor(bdrms) shows that you are suspecting that there might be a problem again, which is intelligent. t.test, however, like other things from the early days of R,6 is more forgiving: it uses the distinct values of the variable on the right of the squiggle (bdrms) to make groups, whether they are text or numbers. Since the two-sample \(t\)-test is for comparing exactly two groups, it will complain if bdrms has more than two distinct values, but here we are good.
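If you do want to play it safe, wrapping bdrms in factor inside the t.test works too (a sketch; it makes the same two groups, so the numbers should come out the same as above):

# same test, with bdrms explicitly made categorical first
t.test(log(price) ~ factor(bdrms), data = asking, alternative = "less")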

The other thing you should consider is whether we should have done a Welch or a pooled test. This one is, as you see, Welch, but a pooled test would be better if the two groups of log-prices had equal spreads. Go back and look at the last boxplot you did: on the log scale, the two spreads do actually look pretty similar.7 So we could also have done the pooled test. My guess (I haven’t looked at the results yet as I type this) is that the results will be almost identical in fact:

t.test(log_price ~ bdrms, data = asking, alternative = "less", var.equal = TRUE)

    Two Sample t-test

data:  log_price by bdrms
t = -5.0138, df = 35, p-value = 7.693e-06
alternative hypothesis: true difference in means between group 3 and group 4 is less than 0
95 percent confidence interval:
       -Inf -0.4077387
sample estimates:
mean in group 3 mean in group 4 
       11.82912        12.44410 

The test statistic and P-value are very close, and the conclusion is identical, so it didn’t matter which test you used. But the best answer will at least consider whether a pooled or a Welch test is the better one to use.
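If you would rather not rely only on eyeballing the boxplot to choose between pooled and Welch, a quick sketch is to compare the standard deviations of the log-prices in the two groups (this uses the log_price column defined earlier):

# compare spreads of the log-prices in the two groups
asking %>% 
  group_by(bdrms) %>% 
  summarize(n = n(), sd_log_price = sd(log_price))

If the two standard deviations are close, the pooled test is defensible; if they are clearly different, stick with Welch.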

\(\blacksquare\)

2 Extras

Extra: as I originally conceived this question, I was going to have you finish by finding a confidence interval to quantify how different the mean (log-) prices are. The problem here is that if you re-do the test two-sided, you get a confidence interval for the difference in mean log-prices, which is not an easy thing to interpret:

t.test(log_price ~ bdrms, data = asking)

    Welch Two Sample t-test

data:  log_price by bdrms
t = -5.1887, df = 30.59, p-value = 1.296e-05
alternative hypothesis: true difference in means between group 3 and group 4 is not equal to 0
95 percent confidence interval:
 -0.8568304 -0.3731165
sample estimates:
mean in group 3 mean in group 4 
       11.82912        12.44410 

Some thinking required: this is a difference of means of logs of prices. How can we say something about actual prices here? Let’s ignore the mean part for now; the scale these things are on is log-prices. What do we know about differences of logs? Haul out some math here:8

\[ \log a - \log b = \log(a/b), \] so

\[ \exp (\log a - \log b) = a/b.\]
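If you want to convince yourself of this numerically, a quick sketch in R (the numbers 4 and 3 are arbitrary):

# the difference of logs is the log of the ratio, so exp-ing it recovers the ratio
log(4) - log(3)
log(4 / 3)
exp(log(4) - log(3))  # back to 4/3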

So how does this apply to our confidence interval? What it says is that if you take the confidence interval for the difference in means of log-prices, and exp its endpoints, what you get is a confidence interval for the ratio of means of the actual prices:

ci_log <- c(-0.8568304,-0.3731165)
exp(ci_log)
[1] 0.4245055 0.6885850

This says that the average asking price for the 3-bedroom houses is between 42 and 69 percent of the average asking price for the 4-bedroom houses. Thus the asking prices for the 3-bedroom houses are quite a bit less on average.9 Thus it is not at all surprising that the P-value was so small, whether you did pooled or Welch.
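If you would rather say how much bigger the 4-bedroom average is, a sketch is to take reciprocals of the interval endpoints (note that the endpoints swap over when you do this):

# the ratio the other way around: 4-bedroom mean relative to 3-bedroom mean
rev(1 / exp(ci_log))

This works out to roughly 1.45 to 2.36: the average asking price for the 4-bedroom houses is somewhere between about one-and-a-half and two-and-a-third times that for the 3-bedroom houses.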

Footnotes

  1. At least, for somewhere that is not Toronto!↩︎

  2. Note that the group idea I showed you first gives you a perfectly reasonable axis label.↩︎

  3. I am not asking for anything like the mean number of bedrooms anywhere here.↩︎

  4. Turning the number of bedrooms into text, via as.character(bdrms), also works. This is actually how we have been handling categorical variables so far: reading them in as text. Usually, though, they look like text. Here they don’t.↩︎

  5. I would be happy to call these genuine outliers, because there are only three of them, and they look a little separated from the whisker, so that it is reasonable to say that these three asking prices are bigger than the others.↩︎

  6. We see cbind from base R in STAD29, which we use because it is more forgiving than the tidyverse ways of gluing things together.↩︎

  7. The spread of actual asking prices without taking logs is bigger for the 4-bedroom houses, so if we had been willing to do a \(t\)-test without taking logs first, we should definitely have preferred the Welch test. The effect of taking logs is to bring the higher values down, compared to the lower ones, which made both distributions less right-skewed and also made the spreads more equal. For this reason, the log transformation is often useful: it can both equalize spread and make things look more normal, all at once.↩︎

  8. For those of us old enough to remember times before calculators (which I am, just), this is how we would do division if we couldn’t do it by long division. We used to have books called “log tables” in which you could look up base-10 logs of anything. Look up the log of the thing on the top of the division, look up the log of the thing on the bottom, subtract, and then turn to the “antilog” tables and look up the result there to find the answer. exp is playing the role of antilog here. Example: to work out \(4/3\), look up the (base 10) log of 4, which is 0.602, and the log of 3, which is 0.477. Subtract to get 0.125. I happen to remember that the base 10 log of 1.3 is 0.114, so \(4/3\) is a bit bigger than 1.3, as it is. An antilog table would give the answer more precisely.↩︎

  9. This is what I meant earlier when I said that with logs, percent changes are the ones of interest.↩︎