library(MASS)
library(tidyverse)
28 Logistic regression with ordinal response
28.1 Do you like your mobile phone?
A phone company commissioned a survey of their customers’ satisfaction with their mobile devices. The responses to the survey were on a so-called Likert scale of “very unsatisfied”, “unsatisfied”, “satisfied”, “very satisfied”. Also recorded were each customer’s gender and age group (under 18, 18–24, 25–30, 31 or older). (A survey of this kind does not ask its respondents for their exact age, only which age group they fall in.) The data, as frequencies of people falling into each category combination, are in link.
* Read in the data and take a look at the format. Use a tool that you know about to arrange the frequencies in just one column, with other columns labelling the response categories that the frequencies belong to. Save the new data frame. (Take a look at it if you like.)
We are going to fit ordered logistic models below. To do that, we need our response variable to be a factor with its levels in the right order. By looking at the data frame you just created, determine what kind of thing your intended response variable currently is.
If your intended response variable is not a factor, create a factor in your data frame with levels in the right order. Hint: look at the order your levels are in the data.
* Fit ordered logistic models to predict satisfaction from (i) gender and age group, (ii) gender only, (iii) age group only. (You don’t need to examine the models.) Don’t forget a suitable weights!
Use drop1 on your model containing both explanatory variables to determine whether you can remove either of them. Use test="Chisq" to obtain P-values.
Use anova to decide whether we are justified in removing gender from a model containing both gender and age.group. Compare your P-value with the one from drop1.
Use anova to see whether we are justified in removing age.group from a model containing both gender and age.group. Compare your P-value with the one from drop1 above.
Which of the models you have fit so far is the most appropriate one? Explain briefly.
Obtain predicted probabilities of a customer falling in the various satisfaction categories, as it depends on gender and age group. To do that, you need to feed predict three things: the fitted model that contains both age group and gender, the data frame that you read in from the file back in part (here) (which contains all the combinations of age group and gender), and an appropriate type.
* Describe any patterns you see in the predictions, bearing in mind the significance or not of the explanatory variables.
28.2 Finding non-missing values
* This is to prepare you for something in the next question. It’s meant to be easy.
In R, the code NA stands for “missing value” or “value not known”. In R, NA should not have quotes around it. (It is a special code, not a piece of text.)
Create a vector v that contains some numbers and some missing values, using c(). Put those values into a one-column data frame.
Obtain a new column containing is.na(v). When is this true and when is this false?
The symbol ! means “not” in R (and other programming languages). What does !is.na(v) do? Create a new column containing that.
Use filter to display just the rows of your data frame that have a non-missing value of v.
28.3 High School and Beyond
A survey called High School and Beyond was given to a large number of American high school seniors (grade 12) by the National Center for Education Statistics. The data set at link is a random sample of 200 of those students.
The variables collected are:
gender: student’s gender, female or male.
race: the student’s race (African-American, Asian, Hispanic, White).
ses: socio-economic status of student’s family (low, middle, or high).
schtyp: school type, public or private.
prog: student’s program, general, academic, or vocational.
read: score on standardized reading test.
write: score on standardized writing test.
math: score on standardized math test.
science: score on standardized science test.
socst: score on standardized social studies test.
Our aim is to see how socio-economic status is related to the other variables.
Read in and display (some of) the data.
Explain briefly why an ordinal logistic regression is appropriate for our aims.
Fit an ordinal logistic regression predicting socio-economic status from the scores on the five standardized tests. (You don’t need to display the results.) You will probably go wrong the first time. What kind of thing does your response variable have to be?
Remove any non-significant explanatory variables one at a time. Use drop1 to decide which one to remove next.
The quartiles of the science test score are 44 and 58. The quartiles of the socst test score are 46 and 61. Make a data frame that has all combinations of those quartiles. If your best regression had any other explanatory variables in it, also put the means of those variables into this data frame.
Use the data frame you created in the previous part, together with your best model, to obtain predicted probabilities of being in each ses category. Display these predicted probabilities so that they are easy to read.
What is the effect of an increased science score on the likelihood of a student being in the different socioeconomic groups, all else equal? Explain briefly. In your explanation, state clearly how you are using your answer to the previous part.
28.4 How do you like your steak?
When you order a steak in a restaurant, the server will ask you how you would like it cooked, or to be precise, how much you would like it cooked: rare (hardly cooked at all), through medium rare, medium, medium well to well (which means “well done”, so that the meat has only a little red to it). Could you guess how a person likes their steak cooked, from some other information about them? The website link commissioned a survey where they asked a number of people how they preferred their steak, along with as many other things as they could think of to ask. (Many of the variables below are related to risk-taking, which was something the people designing the survey thought might have something to do with liking steak rare.) The variables of interest are all factors or true/false:
respondent_ID: a ten-digit number identifying each person who responded to the survey.
lottery_a: true if the respondent preferred lottery A, with a small chance to win a lot of money, to lottery B, with a larger chance to win less money.
smoke: true if the respondent is currently a smoker.
alcohol: true if the respondent at least occasionally drinks alcohol.
gamble: true if the respondent likes to gamble (eg. betting on horse racing or playing the lottery).
skydiving: true if the respondent has ever been skydiving.
speed: true if the respondent likes to drive fast.
cheated: true if the respondent has ever cheated on a spouse or girlfriend/boyfriend.
steak: true if the respondent likes to eat steak.
steak_prep (response): how the respondent likes their steak cooked (factor, as described above, with 5 levels).
female: true if the respondent is female.
age: age group, from 18–29 to 60+.
hhold_income: household income group, from $0–24,999 to $150,000+.
educ: highest level of education attained, from “less than high school” up to “graduate degree”.
region: region (of the US) that the respondent lives in (five values).
The data are in link. This is the cleaned-up data from a previous question, with the missing values removed.
Read in the data and display the first few lines.
We are going to predict steak_prep from some of the other variables. Why is the model-fitting function polr from package MASS the best choice for these data (alternatives being glm and multinom from package nnet)?
What are the levels of steak_prep, in the order that R thinks they are in? If they are not in a sensible order, create an ordered factor where the levels are in a sensible order.
Fit a model predicting preferred steak preparation in an ordinal logistic regression from educ, female and lottery_a. This ought to be easy from your previous work, but you have to be careful about one thing. No need to print out the results.
Run drop1 on your fitted model, with test="Chisq". Which explanatory variable should be removed first, if any? Bear in mind that the variable with the smallest AIC should come out first, in case your table doesn’t get printed in order.
Remove the variable that should come out first, using update. (If all the variables should stay, you can skip this part.)
Using the best model that you have so far, predict the probabilities of preferring each different steak preparation (method of cooking) for each combination of the variables that remain. (Some of the variables are TRUE and FALSE rather than factors. Bear this in mind.) Describe the effects of each variable on the predicted probabilities, if any. Note that there is exactly one person in the study whose educational level is “less than high school”.
Is it reasonable to remove all the remaining explanatory variables from your best model so far? Fit a model with no explanatory variables, and do a test. (In R, if the right side of the squiggle is a 1, that means “just an intercept”. Or you can remove whatever remains using update.) What do you conclude? Explain briefly.
In the article for which these data were collected, link, does the author obtain consistent conclusions with yours? Explain briefly. (It’s not a very long article, so it won’t take you long to skim through, and the author’s point is pretty clear.)
28.5 How do you like your steak – the data
This question takes you through the data preparation for one of the other questions. You don’t have to do this question, but you may find it interesting or useful.
When you order a steak in a restaurant, the server will ask you how you would like it cooked, or to be precise, how much you would like it cooked: rare (hardly cooked at all), through medium rare, medium, medium well to well (which means “well done”, so that the meat has only a little red to it). Could you guess how a person likes their steak cooked, from some other information about them? The website link commissioned a survey where they asked a number of people how they preferred their steak, along with as many other things as they could think of to ask. (Many of the variables below are related to risk-taking, which was something the people designing the survey thought might have something to do with liking steak rare.) The variables of interest are all factors or true/false:
respondent_ID: a ten-digit number identifying each person who responded to the survey.
lottery_a: true if the respondent preferred lottery A, with a small chance to win a lot of money, to lottery B, with a larger chance to win less money.
smoke: true if the respondent is currently a smoker.
alcohol: true if the respondent at least occasionally drinks alcohol.
gamble: true if the respondent likes to gamble (eg. betting on horse racing or playing the lottery).
skydiving: true if the respondent has ever been skydiving.
speed: true if the respondent likes to drive fast.
cheated: true if the respondent has ever cheated on a spouse or girlfriend/boyfriend.
steak: true if the respondent likes to eat steak.
steak_prep (response): how the respondent likes their steak cooked (factor, as described above, with 5 levels).
female: true if the respondent is female.
age: age group, from 18–29 to 60+.
hhold_income: household income group, from $0–24,999 to $150,000+.
educ: highest level of education attained, from “less than high school” up to “graduate degree”.
region: region (of the US) that the respondent lives in (five values).
The data are in link.
Read in the data and display the first few lines.
What do you immediately notice about your data frame? Run summary on the entire data frame. Would you say you have a lot of missing values, or only a few?
What does the function drop_na do when applied to a data frame with missing values? To find out, pass the data frame into drop_na(), then into summary again. What has happened?
Write the data into a .csv file, with a name like steak1.csv. Open this file in a spreadsheet and (quickly) verify that you have the right columns and no missing values.
My solutions follow:
28.6 Do you like your mobile phone?
A phone company commissioned a survey of their customers’ satisfaction with their mobile devices. The responses to the survey were on a so-called Likert scale of “very unsatisfied”, “unsatisfied”, “satisfied”, “very satisfied”. Also recorded were each customer’s gender and age group (under 18, 18–24, 25–30, 31 or older). (A survey of this kind does not ask its respondents for their exact age, only which age group they fall in.) The data, as frequencies of people falling into each category combination, are in link.
- * Read in the data and take a look at the format. Use a tool that you know about to arrange the frequencies in just one column, with other columns labelling the response categories that the frequencies belong to. Save the new data frame. (Take a look at it if you like.)
Solution
<- "http://ritsokiguess.site/datafiles/mobile.txt"
my_url <- read_delim(my_url, " ") mobile
Rows: 8 Columns: 6
── Column specification ────────────────────────────────────────────────────────
Delimiter: " "
chr (2): gender, age.group
dbl (4): very.unsat, unsat, sat, very.sat
ℹ Use `spec()` to retrieve the full column specification for this data.
ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
mobile
With multiple columns that are all frequencies, this is a job for pivot_longer:
mobile %>%
  pivot_longer(very.unsat:very.sat,
               names_to = "satisfied",
               values_to = "frequency") -> mobile.long
mobile.long
Yep, all good. See how mobile.long
contains what it should? (For those keeping track, the original data frame had 8 rows and 4 columns to collect up, and the new one has \(8\times 4=32\) rows.)
\(\blacksquare\)
- We are going to fit ordered logistic models below. To do that, we need our response variable to be a factor with its levels in the right order. By looking at the data frame you just created, determine what kind of thing your intended response variable currently is.
Solution
I looked at mobile.long
in the previous part, but if you didn’t, look at it here:
mobile.long
My intended response variable is what I called satisfied. This is chr or “text”, not the factor that I want.
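If you want to check the type directly rather than reading it off the printout, something like this will do it (a quick sketch, using the data frame from above):
# check the type of the intended response column directly
class(mobile.long$satisfied)
# or look at all the column types at once
glimpse(mobile.long)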
\(\blacksquare\)
- If your intended response variable is not a factor, create a factor in your data frame with levels in the right order. Hint: look at the order your levels are in the data.
Solution
My intended response satisfied
is text, not a factor, so I need to do this part. The hint is to look at the column satisfied
in mobile.long
and note that the satisfaction categories appear in the data in the order that we want. This is good news, because we can use fct_inorder
like this:
mobile.long %>%
  mutate(satis = fct_inorder(satisfied)) -> mobile.long
If you check, by looking at the data frame, satis is a factor, and you can also do this to verify that its levels are in the right order:
with(mobile.long, levels(satis))
[1] "very.unsat" "unsat" "sat" "very.sat"
Success.
Extra: so now you are asking, what if the levels are in the wrong order in the data? Well, below is what you used to have to do, and it will work for this as well. I’ll first find what levels of satisfaction I have. This can be done by counting them, or by finding the distinct ones:
mobile.long %>% count(satisfied)
or
mobile.long %>% distinct(satisfied)
If you count them, they come out in alphabetical order. If you ask for the distinct ones, they come out in the order they were in mobile.long, which is the order the columns of those names were in mobile, which is the order we want.
To actually grab those satisfaction levels as a vector (that we will need in a minute), use pluck
to pull the column out of the data frame as a vector:
v1 <- mobile.long %>%
  distinct(satisfied) %>%
  pluck("satisfied")
v1
[1] "very.unsat" "unsat" "sat" "very.sat"
which is in the correct order, or
v2 <- mobile.long %>%
  count(satisfied) %>%
  pluck("satisfied")
v2
[1] "sat" "unsat" "very.sat" "very.unsat"
which is in alphabetical order. The problem with the second one is that we know the correct order, but there isn’t a good way to code that, so we have to rearrange it ourselves. The correct order from v2
is 4, 2, 1, 3, so:
v3 <- c(v2[4], v2[2], v2[1], v2[3])
v3
[1] "very.unsat" "unsat" "sat" "very.sat"
v4 <- v2[c(4, 2, 1, 3)]
v4
[1] "very.unsat" "unsat" "sat" "very.sat"
Either of these will work. The first one is more typing, but is perhaps more obvious. There is a third way, which is to keep things as a data frame until the end, and use slice
to pick out the rows in the right order:
v5 <- mobile.long %>%
  count(satisfied) %>%
  slice(c(4, 2, 1, 3)) %>%
  pluck("satisfied")
v5
[1] "very.unsat" "unsat" "sat" "very.sat"
If you don’t see how that works, run it yourself, one line at a time.
The other way of doing this is to physically type them into a vector, but this carries the usual warnings of requiring you to be very careful and that it won’t be reproducible (eg. if you do another survey with different response categories).
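For the record, typing the levels in directly would look something like below (the name v6 is mine; the spellings have to match the data exactly):
# typing the levels by hand: quick, but error-prone and not reproducible
v6 <- c("very.unsat", "unsat", "sat", "very.sat")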
So now create the proper response variable thus, using your vector of categories:
mobile.long %>%
  mutate(satis = ordered(satisfied, v1)) -> mobile.long2
mobile.long2
satis has the same values as satisfied, but its label ord means that it is an ordered factor, as we want.
\(\blacksquare\)
- * Fit ordered logistic models to predict satisfaction from (i) gender and age group, (ii) gender only, (iii) age group only. (You don’t need to examine the models.) Don’t forget a suitable weights!
Solution
(i):
library(MASS)
mobile.1 <- polr(satis ~ gender + age.group, weights = frequency, data = mobile.long)
For (ii) and (iii), update
is the thing (it works for any kind of model):
mobile.2 <- update(mobile.1, . ~ . - age.group)
mobile.3 <- update(mobile.1, . ~ . - gender)
We’re not going to look at these, because the output from summary
is not very illuminating. What we do next is to try to figure out which (if either) of the explanatory variables age.group
and gender
we need.
\(\blacksquare\)
- Use drop1 on your model containing both explanatory variables to determine whether you can remove either of them. Use test="Chisq" to obtain P-values.
Solution
drop1
takes a fitted model, and tests each term in it in turn, and says which (if any) should be removed. Here’s how it goes:
drop1(mobile.1, test = "Chisq")
The possibilities are to remove gender, to remove age.group, or to remove nothing. The best one is “remove nothing”, because it’s the one on the output with the smallest AIC. Both P-values are small, so it would be a mistake to remove either of the explanatory variables.
\(\blacksquare\)
- Use anova to decide whether we are justified in removing gender from a model containing both gender and age.group. Compare your P-value with the one from drop1.
Solution
This is a comparison of the model with both variables (mobile.1) and the model with gender removed (mobile.3). Use anova for this, smaller (fewer-\(x\)) model first:
anova(mobile.3, mobile.1)
The P-value is (just) less than 0.05, so the models are significantly different. That means that the model with both variables in fits significantly better than the model with only age.group, and therefore that taking gender out is a mistake.
The P-value is identical to the one from drop1
(because they are both doing the same test).
\(\blacksquare\)
- Use anova to see whether we are justified in removing age.group from a model containing both gender and age.group. Compare your P-value with the one from drop1 above.
Solution
Exactly the same idea as the last part. In my case, I’m comparing models mobile.2
and mobile.1
:
anova(mobile.2, mobile.1)
This one is definitely significant, so I need to keep age.group for sure. Again, the P-value is the same as the one in drop1.
\(\blacksquare\)
- Which of the models you have fit so far is the most appropriate one? Explain briefly.
Solution
I can’t drop either of my variables, so I have to keep them both: mobile.1, with both age.group and gender.
\(\blacksquare\)
- Obtain predicted probabilities of a customer falling in the various satisfaction categories, as it depends on gender and age group. To do that, you need to feed predictions three things: the fitted model that contains both age group and gender, the data frame that you read in from the file back in part (here) (which contains all the combinations of age group and gender), and an appropriate type.
Solution
My model containing both \(x\)s was mobile.1, the data frame read in from the file was called mobile, and I need type="p" to get probabilities. The first thing is to get the genders and age groups to make combinations of them, which you can do like this:
new <- datagrid(model = mobile.1,
                gender = levels(factor(mobile$gender)),
                age.group = levels(factor(mobile$age.group)))
new
The levels(factor) thing turns the (text) variable into a factor so that you can extract the distinct values using levels. You could also count the genders and age groups to find out which ones there are:
mobile %>% count(gender, age.group)
This gives you all the combinations, and so will also serve as a new without needing to use datagrid. Your choice.
Having done that, you now have a new to feed into predictions, but some care is still required:
cbind(predictions(mobile.1, newdata = new))
Re-fitting to get Hessian
The predictions come out long, and we would like all the predictions for the same gender and age-group combination to come out in one row. That means pivoting the group column wider. I also took the opportunity to grab only the relevant columns:
cbind(predictions(mobile.1, newdata = new)) %>%
select(gender, age.group, group, estimate) %>%
pivot_wider(names_from = group, values_from = estimate)
Re-fitting to get Hessian
Depending on the width of your display, you may or may not see all four probabilities.
This worked for me, but this might happen to you, with the same commands as above:
cbind(predictions(mobile.1, newdata = new)) %>%
  MASS::select(gender, age.group, group, estimate) %>%
  pivot_wider(names_from = group, values_from = estimate)
Error in MASS::select(., gender, age.group, group, estimate): unused arguments (gender, age.group, group, estimate)
Oh, this didn’t work. Why not? There don’t seem to be any errors.
This is the kind of thing that can bother you for days. The resolution (that it took me a long time to discover) is that you might have the tidyverse and also MASS loaded, in the wrong order, and MASS also has a select (that takes different inputs and does something different). If you look back at part (here), you might have seen a message there when you loaded MASS that select was “masked”. When you have two packages that both contain a function with the same name, the one that you can see (and that will get used) is the one that was loaded last, which is the MASS select (not the one we actually wanted, which is the tidyverse select). There are a couple of ways around this. One is to un-load the package we no longer need (when we no longer need it). The mechanism for this is shown at the end of part (here). The other is to say explicitly which package you want your function to come from, so that there is no doubt. The tidyverse is actually a collection of packages. The best way to find out which one our select comes from is to go to the Console window in R Studio and ask for the help for select. With both tidyverse and MASS loaded, the help window offers you a choice of both selects; the one we want is “select/rename variables by name”, and the actual package it comes from is dplyr.
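To illustrate that second way around the problem: spell out which package you mean, and the right select gets used no matter what order the packages were loaded in. This is the same pipeline as above with the package named explicitly (a sketch, not run here):
cbind(predictions(mobile.1, newdata = new)) %>%
  dplyr::select(gender, age.group, group, estimate) %>%
  pivot_wider(names_from = group, values_from = estimate)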
There is a third choice, which is the one I prefer now: install and load the package conflicted. When you run your code and it calls for something like select that is in two packages that you have loaded, it gives an error, like this:
Error: [conflicted] `select` found in 2 packages.
Either pick the one you want with `::`
* MASS::select
* dplyr::select
Or declare a preference with `conflict_prefer()`
* conflict_prefer("select", "MASS")
* conflict_prefer("select", "dplyr")
Fixing this costs you a bit of time upfront, but once you have fixed it, you know that the right thing is being run. What I do is to copy-paste one of those conflict_prefer lines, in this case the second one, and put it before the select that now causes the error. Right after the library(conflicted) is a good place. When you use conflicted, you will probably have to run several times to fix up all the conflicts, which will be a bit frustrating, and you will end up with several conflict_prefer lines, but once you have them there, you won’t have to worry about the right function being called because you have explicitly said which one you want.
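Concretely, the top of a notebook that uses this approach might look something like this (a sketch; only the select conflict is declared here):
library(conflicted)
library(MASS)
library(tidyverse)
# whenever plain select is called, use the dplyr one, not the MASS one
conflict_prefer("select", "dplyr")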
This is a non-standard use of cbind because I wanted to grab only the gender and age group columns from mobile first, and then cbind that to the predicted probabilities. The missing first input to cbind is “whatever came out of the previous step”, that is, the first two columns of mobile.
I only included the first two columns of mobile in the cbind, because the rest of the columns of mobile were frequencies, which I don’t need to see. (Having said that, it would be interesting to make a plot using the observed proportions and predicted probabilities, but I didn’t ask you for that.)
\(\blacksquare\)
- * Describe any patterns you see in the predictions, bearing in mind the significance or not of the explanatory variables.
Solution
I had both explanatory variables being significant, so I would expect to see both an age-group effect and a gender effect. For both males and females, there seems to be a decrease in satisfaction as the customers get older, at least until age 30 or so. I can see this because the predicted prob. of “very satisfied” decreases, and the predicted prob. of “very unsatisfied” increases. The 31+ age group are very similar to the 25–30 group for both males and females. So that’s the age group effect. What about a gender effect? Well, for all the age groups, the males are more likely to be very satisfied than the females of the corresponding age group, and also less likely to be very unsatisfied. So the gender effect is that males are more satisfied than females overall. (Or, the males are less discerning. Take your pick.) When we did the tests above, age group was very definitely significant, and gender less so (P-value around 0.03). This suggests that the effect of age group ought to be large, and the effect of gender not so large. This is about what we observed: the age group effect was pretty clear, and the gender effect was noticeable but small: the females were less satisfied than the males, but there wasn’t all that much difference.
\(\blacksquare\)
28.7 Finding non-missing values
* This is to prepare you for something in the next question. It’s meant to be easy.
In R, the code NA stands for “missing value” or “value not known”. In R, NA should not have quotes around it. (It is a special code, not a piece of text.)
- Create a vector v that contains some numbers and some missing values, using c(). Put those values into a one-column data frame.
Solution
Like this. The arrangement of numbers and missing values doesn’t matter, as long as you have some of each:
v <- c(1, 2, NA, 4, 5, 6, 9, NA, 11)
mydata <- tibble(v)
mydata
This has one column called v.
\(\blacksquare\)
- Obtain a new column containing is.na(v). When is this true and when is this false?
Solution
mydata <- mydata %>% mutate(isna = is.na(v))
mydata
This is TRUE
if the corresponding element of v
is missing (in my case, the third value and the second-last one), and FALSE
otherwise (when there is an actual value there).
\(\blacksquare\)
- The symbol ! means “not” in R (and other programming languages). What does !is.na(v) do? Create a new column containing that.
Solution
Try it and see. Give it whatever name you like. My name reflects that I know what it’s going to do:
mydata <- mydata %>% mutate(notisna = !is.na(v))
mydata
This is the logical opposite of is.na: it’s true if there is a value, and false if it’s missing.
\(\blacksquare\)
- Use filter to display just the rows of your data frame that have a non-missing value of v.
Solution
filter takes a column to say which rows to pick, in which case the column should contain something that either is TRUE or FALSE, or something that can be interpreted that way:
mydata %>% filter(notisna)
or you can provide filter
something that can be calculated from what’s in the data frame, and also returns something that is either true or false:
mydata %>% filter(!is.na(v))
In either case, I only have non-missing values of v.
\(\blacksquare\)
28.8 High School and Beyond
A survey called High School and Beyond was given to a large number of American high school seniors (grade 12) by the National Center for Education Statistics. The data set at link is a random sample of 200 of those students.
The variables collected are:
gender: student’s gender, female or male.
race: the student’s race (African-American, Asian, Hispanic, White).
ses: socio-economic status of student’s family (low, middle, or high).
schtyp: school type, public or private.
prog: student’s program, general, academic, or vocational.
read: score on standardized reading test.
write: score on standardized writing test.
math: score on standardized math test.
science: score on standardized science test.
socst: score on standardized social studies test.
Our aim is to see how socio-economic status is related to the other variables.
- Read in and display (some of) the data.
Solution
This is a .csv
file (I tried to make it easy for you):
<- "http://ritsokiguess.site/datafiles/hsb.csv"
my_url <- read_csv(my_url) hsb
Rows: 200 Columns: 11
── Column specification ────────────────────────────────────────────────────────
Delimiter: ","
chr (5): race, ses, schtyp, prog, gender
dbl (6): id, read, write, math, science, socst
ℹ Use `spec()` to retrieve the full column specification for this data.
ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
hsb
\(\blacksquare\)
- Explain briefly why an ordinal logistic regression is appropriate for our aims.
Solution
The response variable ses
is categorical, with categories that come in order (low less than middle less than high).
\(\blacksquare\)
- Fit an ordinal logistic regression predicting socio-economic status from the scores on the five standardized tests. (You don’t need to display the results.) You will probably go wrong the first time. What kind of thing does your response variable have to be?
Solution
It has to be an ordered
factor, which you can create in the data frame (or outside, if you prefer):
hsb <- hsb %>% mutate(ses = ordered(ses, c("low", "middle", "high")))
hsb
ses is now ord. Good. Now fit the model:
ses.1 <- polr(ses ~ read + write + math + science + socst, data = hsb)
No errors is good.
\(\blacksquare\)
- Remove any non-significant explanatory variables one at a time. Use
drop1
to decide which one to remove next.
Solution
drop1(ses.1, test = "Chisq")
I would have expected the AIC column to come out in order, but it doesn’t. Never mind. Scan for the largest P-value, which belongs to read. (This also has the lowest AIC.) So, remove read:
ses.2 <- update(ses.1, . ~ . - read)
drop1(ses.2, test = "Chisq")
Note how the P-value for science
has come down a long way.
A close call, but math
goes next. The update
doesn’t take long to type:
ses.3 <- update(ses.2, . ~ . - math)
drop1(ses.3, test = "Chisq")
science has become significant now, probably because it was strongly correlated with at least one of the variables we removed (at my guess, math). That is, we didn’t need both science and math, but we do need one of them.
I think we can guess what will happen now: write
comes out, and the other two variables will stay, so that’ll be where we stop:
ses.4 <- update(ses.3, . ~ . - write)
drop1(ses.4, test = "Chisq")
Indeed so. We need just the science and social studies test scores to predict socio-economic status.
Using AIC to decide on which variable to remove next will give the same answer here, but I would like to see the test=
part in your drop1
to give P-values (expect to lose something, but not too much, if that’s not there).
Extras: I talked about correlation among the explanatory variables earlier, which I can explore:
hsb %>% select(read:socst) %>% cor()
read write math science socst
read 1.0000000 0.5967765 0.6622801 0.6301579 0.6214843
write 0.5967765 1.0000000 0.6174493 0.5704416 0.6047932
math 0.6622801 0.6174493 1.0000000 0.6307332 0.5444803
science 0.6301579 0.5704416 0.6307332 1.0000000 0.4651060
socst 0.6214843 0.6047932 0.5444803 0.4651060 1.0000000
The first time I did this, I forgot that I had MASS loaded (for the polr), and so, to get the right select, I needed to say which one I wanted.
Anyway, the correlations are all moderately high. There’s nothing that stands out as being much higher than the others. The lowest two are between social studies and math, and social studies and science. That would be part of the reason that social studies needs to stay. The highest correlation is between math and reading, which surprises me (they seem to be different skills).
So there was not as much insight there as I expected.
The other thing is that you can use step
for the variable-elimination task as well:
ses.5 <- step(ses.1, direction = "backward", test = "Chisq")
Start: AIC=404.63
ses ~ read + write + math + science + socst
Df AIC LRT Pr(>Chi)
- read 1 403.09 0.4620 0.496684
- math 1 403.19 0.5618 0.453517
- write 1 403.81 1.1859 0.276167
<none> 404.63
- science 1 404.89 2.2630 0.132499
- socst 1 410.08 7.4484 0.006349 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Step: AIC=403.09
ses ~ write + math + science + socst
Df AIC LRT Pr(>Chi)
- math 1 402.04 0.9541 0.328689
- write 1 402.10 1.0124 0.314325
<none> 403.09
- science 1 404.29 3.1968 0.073782 .
- socst 1 410.58 9.4856 0.002071 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Step: AIC=402.04
ses ~ write + science + socst
Df AIC LRT Pr(>Chi)
- write 1 400.60 0.5587 0.4547813
<none> 402.04
- science 1 405.41 5.3680 0.0205095 *
- socst 1 411.07 11.0235 0.0008997 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Step: AIC=400.6
ses ~ science + socst
Df AIC LRT Pr(>Chi)
<none> 400.60
- science 1 403.45 4.8511 0.0276291 *
- socst 1 409.74 11.1412 0.0008443 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
I would accept you doing it this way, again as long as you have the test=
there as well.
\(\blacksquare\)
- The quartiles of the science test score are 44 and 58. The quartiles of the socst test score are 46 and 61. Make a data frame that has all combinations of those quartiles. If your best regression had any other explanatory variables in it, also put the means of those variables into this data frame.
Solution
This is what datagrid
does by default (from package marginaleffects
):
new <- datagrid(model = ses.5, science = c(44, 58), socst = c(46, 61))
new
This explicitly fills in mean values or most frequent categories for all the other variables in the dataset, even though those other variables are not in the model. The two variables you actually care about are over on the right.
Since there are only two variables left, this new
data frame has only \(2^2=4\) rows.
There is a veiled hint here that these are the two variables that should have remained in your regression. If that was not what you got, the means of the other variables in the model will go automatically into your new
:
datagrid(model = ses.1, science = c(44, 58), socst = c(46, 61))
so you don’t have to do anything extra.
\(\blacksquare\)
- Use the data frame you created in the previous part, together with your best model, to obtain predicted probabilities of being in each
ses
category. Display these predicted probabilities so that they are easy to read.
Solution
This is predictions, and we’ve done the setup. My best model was called ses.4.
cbind(predictions(ses.4, newdata = new)) %>%
select(group, estimate, science, socst)
Re-fitting to get Hessian
predictions
always works by having one column of predictions. That isn’t the best layout here, though; we want to see the three predicted probabilities for a particular value of science
and socst
all in one row, which means pivoting-wider:
cbind(predictions(ses.4, newdata = new)) %>%
select(group, estimate, science, socst) %>%
pivot_wider(names_from = group, values_from = estimate)
Re-fitting to get Hessian
The easiest strategy seems to be to run predictions first, see that it comes out long, and then wonder how to fix it. Then pick the columns you care about: the predicted group, the predictions, and the columns for science and social studies, and then pivot wider.
\(\blacksquare\)
- What is the effect of an increased science score on the likelihood of a student being in the different socioeconomic groups, all else equal? Explain briefly. In your explanation, state clearly how you are using your answer to the previous part.
Solution
Use your predictions; hold the socst score constant (that’s the all else equal part). So compare the first and third rows (or, if you like, the second and fourth rows) of your predictions and see what happens as the science score goes from 44 to 58. What I see is that the probability of being low goes noticeably down as the science score increases, the probability of middle stays about the same, and the probability of high goes up (by about the same amount as the probability of low went down). In other words, an increased science score goes with an increased chance of high (and a decreased chance of low).
If your best model doesn’t have science
in it, then you need to say something like “science
has no effect on socio-economic status”, consistent with what you concluded before: if you took it out, it’s because you thought it had no effect.
Extra: the effect of an increased social studies score is almost exactly the same as an increased science score (so I didn’t ask you about that). From a social-science point of view, this makes perfect sense: the higher the social-economic stratum a student comes from, the better they are likely to do in school. I’ve been phrasing this as “association”, because really the cause and effect is the other way around: a student’s family socioeconomic status is explanatory, and school performance is response. But this was the nicest example I could find of an ordinal response data set.
\(\blacksquare\)
28.9 How do you like your steak?
When you order a steak in a restaurant, the server will ask you how you would like it cooked, or to be precise, how much you would like it cooked: rare (hardly cooked at all), through medium rare, medium, medium well to well (which means “well done”, so that the meat has only a little red to it). Could you guess how a person likes their steak cooked, from some other information about them? The website link commissioned a survey where they asked a number of people how they preferred their steak, along with as many other things as they could think of to ask. (Many of the variables below are related to risk-taking, which was something the people designing the survey thought might have something to do with liking steak rare.) The variables of interest are all factors or true/false:
respondent_ID: a ten-digit number identifying each person who responded to the survey.
lottery_a: true if the respondent preferred lottery A, with a small chance to win a lot of money, to lottery B, with a larger chance to win less money.
smoke: true if the respondent is currently a smoker.
alcohol: true if the respondent at least occasionally drinks alcohol.
gamble: true if the respondent likes to gamble (eg. betting on horse racing or playing the lottery).
skydiving: true if the respondent has ever been skydiving.
speed: true if the respondent likes to drive fast.
cheated: true if the respondent has ever cheated on a spouse or girlfriend/boyfriend.
steak: true if the respondent likes to eat steak.
steak_prep (response): how the respondent likes their steak cooked (factor, as described above, with 5 levels).
female: true if the respondent is female.
age: age group, from 18–29 to 60+.
hhold_income: household income group, from $0–24,999 to $150,000+.
educ: highest level of education attained, from “less than high school” up to “graduate degree”.
region: region (of the US) that the respondent lives in (five values).
The data are in link. This is the cleaned-up data from a previous question, with the missing values removed.
- Read in the data and display the first few lines.
Solution
The usual:
<- read_csv("steak1.csv") steak
Rows: 331 Columns: 15
── Column specification ────────────────────────────────────────────────────────
Delimiter: ","
chr (5): steak_prep, age, hhold_income, educ, region
dbl (1): respondent_id
lgl (9): lottery_a, smoke, alcohol, gamble, skydiving, speed, cheated, steak...
ℹ Use `spec()` to retrieve the full column specification for this data.
ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
steak
\(\blacksquare\)
- We are going to predict steak_prep from some of the other variables. Why is the model-fitting function polr from package MASS the best choice for these data (alternatives being glm and multinom from package nnet)?
Solution
It all depends on the kind of response variable. We have a response variable with five ordered levels from Rare to Well. There are more than two levels (it is more than a “success” and “failure”), which rules out glm, and the levels are ordered, which rules out multinom. As we know, polr handles an ordered response, so it is the right choice.
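If it helps to see the three alternatives side by side, the calls look something like this (a sketch with made-up response and explanatory variable names, shown as comments rather than run):
# glm: response with exactly two categories ("success" and "failure")
# glm(y_binary ~ x1 + x2, family = "binomial", data = mydata)
# multinom: more than two categories, but unordered
# nnet::multinom(y_unordered ~ x1 + x2, data = mydata)
# polr: more than two categories, in order -- our situation here
# MASS::polr(y_ordered ~ x1 + x2, data = mydata)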
\(\blacksquare\)
- What are the levels of steak_prep, in the order that R thinks they are in? If they are not in a sensible order, create an ordered factor where the levels are in a sensible order.
Solution
This is the most direct way to find out:
steak %>% distinct(steak_prep) %>% pull(steak_prep) -> preps
preps
[1] "Medium rare" "Rare" "Medium" "Medium Well" "Well"
This is almost the right order (distinct
uses the order in the data frame). We just need to switch the first two around, and then we’ll be done:
preps1 <- preps[c(2, 1, 3, 4, 5)]
preps1
[1] "Rare" "Medium rare" "Medium" "Medium Well" "Well"
If you used count, there’s a bit more work to do:
preps2 <- steak %>% count(steak_prep) %>% pull(steak_prep)
preps2
[1] "Medium" "Medium Well" "Medium rare" "Rare" "Well"
because count
puts them in alphabetical order, so:
preps3 <- preps2[c(4, 3, 1, 2, 5)]
preps3
[1] "Rare" "Medium rare" "Medium" "Medium Well" "Well"
These use the idea in the attitudes-to-abortion question: create a vector of the levels in the right order, then create an ordered factor with ordered(). If you like, you can type the levels in the right order (I won’t penalize you for that here), but it’s really better to get the levels without typing or copying-pasting, so that you don’t make any silly errors copying them (which will mess everything up later).
So now I create my ordered response:
steak <- steak %>% mutate(steak_prep_ord = ordered(steak_prep, preps1))
or using one of the other preps
vectors containing the levels in the correct order. As far as polr
is concerned, it doesn’t matter whether I start at Rare
and go “up”, or start at Well
and go “down”. So if you do it the other way around, that’s fine. As long as you get the levels in a sensible order, you’re good.
\(\blacksquare\)
- Fit a model predicting preferred steak preparation in an ordinal logistic regression from educ, female and lottery_a. This ought to be easy from your previous work, but you have to be careful about one thing. No need to print out the results.
Solution
The thing you have to be careful about is that you use the ordered factor that you just created as the response:
steak.1 <- polr(steak_prep_ord ~ educ + female + lottery_a, data = steak)
\(\blacksquare\)
- Run drop1 on your fitted model, with test="Chisq". Which explanatory variable should be removed first, if any? Bear in mind that the variable with the smallest AIC should come out first, in case your table doesn’t get printed in order.
Solution
This:
drop1(steak.1, test = "Chisq")
My table is indeed out of order (which is why I warned you about it, in case that happens to you as well). The smallest AIC goes with female, which also has a very non-significant P-value, so this one should come out first.
\(\blacksquare\)
- Remove the variable that should come out first, using update. (If all the variables should stay, you can skip this part.)
Solution
You could type or copy-paste the whole model again, but update
is quicker:
steak.2 <- update(steak.1, . ~ . - female)
That’s all.
I wanted to get some support for my drop1
above (since I was a bit worried about those out-of-order rows). Now that we have fitted a model with female
and one without, we can compare them using anova
:
anova(steak.2, steak.1, test = "Chisq")
Don’t get taken in by that “LR stat” that may be on the end of the first row of the output table; the P-value might have wrapped onto the second line, and is in fact exactly the same as in the drop1
output (it is doing exactly the same test). As non-significant as you could wish for.
Extra: I was curious about whether either of the other \(x\)’s could come out now:
drop1(steak.2, test = "Chisq")
lottery_a should come out, but educ is edging towards significance. We are about to do predictions; in those, the above suggests that there may be some visible effect of education, but there may not be much effect of lottery_a.
All right, so what happens when we remove lottery_a
? That we find out later.
\(\blacksquare\)
- Using the best model that you have so far, predict the probabilities of preferring each different steak preparation (method of cooking) for each combination of the variables that remain. (Some of the variables are TRUE and FALSE rather than factors. Bear this in mind.) Describe the effects of each variable on the predicted probabilities, if any. Note that there is exactly one person in the study whose educational level is “less than high school”.
Solution
Again, I’m leaving it to you to follow all the steps. My variables remaining are educ and lottery_a, which are respectively categorical and logical.
The first step is to get all combinations of their values, along with “typical” values for the others:
new <- datagrid(model = steak.2,
                educ = levels(factor(steak$educ)),
                lottery_a = c(FALSE, TRUE))
new
I wasn’t sure how to handle the logical lottery_a, so I just typed the TRUE and FALSE.
On to the predictions, remembering to make them wider:
cbind(predictions(steak.2, newdata = new)) %>%
select(rowid, group, estimate, educ, lottery_a) %>%
pivot_wider(names_from = group, values_from = estimate)
Re-fitting to get Hessian
There are 5 levels of education, 2 levels of lottery_a, and 5 ways in which you might ask for your steak to be cooked, so the original output from predictions has \(5 \times 2 \times 5 = 50\) rows, and the output you see above has \(5 \times 2 = 10\) rows.
I find this hard to read, so I’m going to round off those predictions. Three or four decimals seems to be sensible. The time to do this is while they are all in one column, that is, before the pivot_wider. On my screen, the education levels also came out rather long, so I’m going to shorten them as well:
cbind(predictions(steak.2, newdata = new)) %>%
select(rowid, group, estimate, educ, lottery_a) %>%
mutate(estimate = round(estimate, 3),
educ = abbreviate(educ, 15)) %>%
pivot_wider(names_from = group, values_from = estimate)
Re-fitting to get Hessian
That’s about as much as I can shorten the education levels while still having them readable.
Then, say something about the effect of changing educational level on the predictions, and say something about the effect of favouring Lottery A vs. not. I don’t much mind what: you can say that there is not much effect (of either variable), or you can say something like “people with a graduate degree are slightly more likely to like their steak rare and less likely to like it well done” (for education level) and “people who preferred Lottery A are slightly less likely to like their steak rare and slightly more likely to like it well done” (for effect of Lottery A). You can see these by comparing the odd-numbered rows with each other to assess the effect of education while holding attitudes towards lottery_a constant (or the even-numbered rows, if you prefer), and you can compare eg. rows 1 and 2 to assess the effect of Lottery A (compare two lines with the same educational level but different preferences re Lottery A).
I would keep away from saying anything about education level “less than high school”, since this entire level is represented by exactly one person.
\(\blacksquare\)
- Is it reasonable to remove all the remaining explanatory variables from your best model so far? Fit a model with no explanatory variables, and do a test. (In R, if the right side of the squiggle is a 1, that means “just an intercept”. Or you can remove whatever remains using update.) What do you conclude? Explain briefly.
Solution
The fitting part is the challenge, since the testing part is anova
again. The direct fit is this:
steak.3 <- polr(steak_prep_ord ~ 1, data = steak)
and the update
version is this, about equally long, starting from steak.2
since that is the best model so far:
steak.3a <- update(steak.2, . ~ . - educ - lottery_a)
You can use whichever you like. Either way, the second part is anova, and the two possible answers should be the same:
anova(steak.3, steak.2, test = "Chisq")
or
anova(steak.3a, steak.2, test = "Chisq")
At the 0.05 level, removing both of the remaining variables is fine: that is, nothing (out of these variables) has any impact on the probability that a diner will prefer their steak cooked a particular way. However, it is a very close call; the P-value is only just bigger than 0.05.
However, with data like this and a rather exploratory analysis, I might think about using a larger \(\alpha\) like 0.10, and at this level, taking out both these two variables is a bad idea. You could say that one or both of them is “potentially useful” or “provocative” or something like that.
If you think that removing these two variables is questionable, you might like to go back to that drop1
output I had above:
drop1(steak.2, test = "Chisq")
The smallest AIC goes with lottery_a, so that comes out (it is nowhere near significant):
steak.4 <- update(steak.2, . ~ . - lottery_a)
drop1(steak.4, test = "Chisq")
and what you see is that educational level is right on the edge of significance, so that may or may not have any impact. Make a call. But if anything, it’s educational level that makes a difference.
\(\blacksquare\)
- In the article for which these data were collected, link, does the author obtain consistent conclusions with yours? Explain briefly. (It’s not a very long article, so it won’t take you long to skim through, and the author’s point is pretty clear.)
Solution
The article says that nothing has anything to do with steak preference. Whether you agree or not depends on what you thought above about dropping those last two variables. So say something consistent with what you said in the previous part. Two points for saying that the author said “nothing has any effect”, and one point for how your findings square with that.
Extra: now that you have worked through this great long question, this is where I tell you that I simplified things a fair bit for you! There were lots of other variables that might have had an impact on how people like their steaks, and we didn’t even consider those. Why did I choose what I did here? Well, I wanted to fit a regression predicting steak preference from everything else, do a big backward elimination, but:
steak.5 <- polr(steak_prep_ord ~ ., data = steak)
Warning: glm.fit: algorithm did not converge
Error in polr(steak_prep_ord ~ ., data = steak): attempt to find suitable starting values failed
The .
in place of explanatory variables means “all the other variables”, including the nonsensical personal ID. That saved me having to type them all out.
Unfortunately, however, it didn’t work. The problem is a numerical one. Regular regression has a well-defined procedure, where the computer follows through the steps and gets to the answer, every time. Once you go beyond regression, however, the answer is obtained by a step-by-step method: the computer makes an initial guess, tries to improve it, then tries to improve it again, until it can’t improve things any more, at which point it calls it good. The problem here is that polr
cannot even get the initial guess! (It apparently is known to suck at this, in problems as big and complicated as this one.)
I don’t normally recommend forward selection, but I wonder whether it works here:
steak.5 <- polr(steak_prep_ord ~ 1, data = steak)
steak.6 <- step(steak.5,
  scope = . ~ lottery_a + smoke + alcohol + gamble + skydiving +
    speed + cheated + female + age + hhold_income + educ + region,
  direction = "forward", test = "Chisq", trace = 0
)
drop1(steak.6, test = "Chisq")
It does, and it says the only thing to add out of all the variables is education level. So, for you, I picked this along with a couple of other plausible-sounding variables and had you start from there.
Forward selection starts from a model containing nothing and asks “what can we add?”. This is a bit more complicated than backward elimination, because now you have to say what the candidate things to add are. That’s the purpose of that scope piece, and there I had no alternative but to type the names of all the variables. Backward elimination is easier, because the candidate variables to remove are the ones in the model, and you don’t need a scope. The trace=0 says “don’t give me any output” (you can change it to a different value if you want to see what that does), and last, the drop1 looks at what is actually in the final model (with a view to asking what can be removed, but we don’t care about that here).
\(\blacksquare\)
28.10 How do you like your steak – the data
This question takes you through the data preparation for one of the other questions. You don’t have to do this question, but you may find it interesting or useful.
When you order a steak in a restaurant, the server will ask you how you would like it cooked, or to be precise, how much you would like it cooked: rare (hardly cooked at all), through medium rare, medium, medium well to well (which means “well done”, so that the meat has only a little red to it). Could you guess how a person likes their steak cooked, from some other information about them? The website link commissioned a survey where they asked a number of people how they preferred their steak, along with as many other things as they could think of to ask. (Many of the variables below are related to risk-taking, which was something the people designing the survey thought might have something to do with liking steak rare.) The variables of interest are all factors or true/false:
respondent_ID: a ten-digit number identifying each person who responded to the survey.
lottery_a: true if the respondent preferred lottery A, with a small chance to win a lot of money, to lottery B, with a larger chance to win less money.
smoke: true if the respondent is currently a smoker.
alcohol: true if the respondent at least occasionally drinks alcohol.
gamble: true if the respondent likes to gamble (eg. betting on horse racing or playing the lottery).
skydiving: true if the respondent has ever been skydiving.
speed: true if the respondent likes to drive fast.
cheated: true if the respondent has ever cheated on a spouse or girlfriend/boyfriend.
steak: true if the respondent likes to eat steak.
steak_prep (response): how the respondent likes their steak cooked (factor, as described above, with 5 levels).
female: true if the respondent is female.
age: age group, from 18–29 to 60+.
hhold_income: household income group, from $0–24,999 to $150,000+.
educ: highest level of education attained, from “less than high school” up to “graduate degree”.
region: region (of the US) that the respondent lives in (five values).
The data are in link.
- Read in the data and display the first few lines.
Solution
The usual:
<- "http://ritsokiguess.site/datafiles/steak.csv"
my_url <- read_csv(my_url) steak0
Rows: 550 Columns: 15
── Column specification ────────────────────────────────────────────────────────
Delimiter: ","
chr (5): steak_prep, age, hhold_income, educ, region
dbl (1): respondent_id
lgl (9): lottery_a, smoke, alcohol, gamble, skydiving, speed, cheated, steak...
ℹ Use `spec()` to retrieve the full column specification for this data.
ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
steak0
I’m using a temporary name for reasons that will become clear shortly.
\(\blacksquare\)
- What do you immediately notice about your data frame? Run
summary
on the entire data frame. Would you say you have a lot of missing values, or only a few?
Solution
I see missing values, starting in the very first row. Running the data frame through summary
gives this, either as summary(steak0)
or this way:
steak0 %>% summary()
respondent_id lottery_a smoke alcohol
Min. :3.235e+09 Mode :logical Mode :logical Mode :logical
1st Qu.:3.235e+09 FALSE:279 FALSE:453 FALSE:125
Median :3.235e+09 TRUE :267 TRUE :84 TRUE :416
Mean :3.235e+09 NA's :4 NA's :13 NA's :9
3rd Qu.:3.235e+09
Max. :3.238e+09
gamble skydiving speed cheated
Mode :logical Mode :logical Mode :logical Mode :logical
FALSE:280 FALSE:502 FALSE:59 FALSE:447
TRUE :257 TRUE :36 TRUE :480 TRUE :92
NA's :13 NA's :12 NA's :11 NA's :11
steak steak_prep female age
Mode :logical Length:550 Mode :logical Length:550
FALSE:109 Class :character FALSE:246 Class :character
TRUE :430 Mode :character TRUE :268 Mode :character
NA's :11 NA's :36
hhold_income educ region
Length:550 Length:550 Length:550
Class :character Class :character Class :character
Mode :character Mode :character Mode :character
Make a call about whether you think that’s a lot of missing values or only a few. This might not be all of them, because missing values in the text columns don’t show up here (we see later how to make them show up).
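If you would rather count than eyeball, a quick sketch (not something the question asks for) adds up the missing values in each column of steak0:

steak0 %>%
  summarise(across(everything(), \(x) sum(is.na(x))))  # number of NAs in each column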
\(\blacksquare\)
- What does the function
drop_na
do when applied to a data frame with missing values? To find out, pass the data frame intodrop_na()
, then intosummary
again. What has happened?
Solution
Let’s try it and see.
steak0 %>% drop_na() %>% summary()
respondent_id lottery_a smoke alcohol
Min. :3.235e+09 Mode :logical Mode :logical Mode :logical
1st Qu.:3.235e+09 FALSE:171 FALSE:274 FALSE:65
Median :3.235e+09 TRUE :160 TRUE :57 TRUE :266
Mean :3.235e+09
3rd Qu.:3.235e+09
Max. :3.235e+09
gamble skydiving speed cheated steak
Mode :logical Mode :logical Mode :logical Mode :logical Mode:logical
FALSE:158 FALSE:308 FALSE:28 FALSE:274 TRUE:331
TRUE :173 TRUE :23 TRUE :303 TRUE :57
steak_prep female age hhold_income
Length:331 Mode :logical Length:331 Length:331
Class :character FALSE:174 Class :character Class :character
Mode :character TRUE :157 Mode :character Mode :character
educ region
Length:331 Length:331
Class :character Class :character
Mode :character Mode :character
The missing values, the ones we can see anyway, have all gone. Precisely, drop_na
, as its name suggests, drops all the rows that have missing values in them anywhere. This is potentially wasteful, since a row might be missing only one value, and we drop the entire rest of the row, throwing away the good data as well. If you check, we started with 550 rows, and we now have only 331 left. Ouch.
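A one-line check of those row counts, if you want to see them for yourself:

nrow(steak0)                      # rows before dropping anything
steak0 %>% drop_na() %>% nrow()   # rows left after dropping the incomplete ones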
So now we’ll save this into our “good” data frame, which means doing it again (now that we know it works):
steak0 %>% drop_na() -> steak
Extra: another way to handle missing data is called “imputation”: what you do is to estimate a value for any missing data, and then use that later on as if it were the truth. One way of estimating missing values is to do a regression (of appropriate kind: regular or logistic) to predict a column with missing values from all the other columns.
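Here is a toy sketch of that regression-imputation idea, on a small made-up numeric data frame (the steak columns are mostly categorical, so this only shows the mechanics; it is not something to run on steak0 as-is):

d <- tibble(x = c(1, 2, 3, 4, 5),
            y = c(2.1, NA, 6.2, 7.9, NA))
fit <- lm(y ~ x, data = d)                              # fitted using only the rows where y is present
d %>% mutate(y = ifelse(is.na(y), predict(fit, d), y))  # replace each missing y with its prediction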
Extra extra: below we see how we used to have to do this, for your information.
First, we run complete.cases
on the data frame:
complete.cases(steak0)
[1] FALSE TRUE TRUE TRUE TRUE TRUE FALSE TRUE TRUE TRUE TRUE TRUE
[13] TRUE TRUE TRUE TRUE TRUE FALSE TRUE FALSE TRUE TRUE FALSE TRUE
[25] TRUE TRUE TRUE TRUE TRUE TRUE TRUE FALSE FALSE TRUE FALSE FALSE
[37] TRUE FALSE FALSE FALSE FALSE TRUE FALSE TRUE FALSE TRUE TRUE TRUE
[49] TRUE TRUE FALSE FALSE TRUE TRUE FALSE TRUE TRUE TRUE FALSE TRUE
[61] FALSE TRUE TRUE FALSE TRUE TRUE FALSE FALSE FALSE TRUE FALSE FALSE
[73] FALSE TRUE TRUE FALSE FALSE FALSE TRUE FALSE FALSE TRUE FALSE FALSE
[85] FALSE TRUE TRUE TRUE FALSE FALSE TRUE TRUE TRUE FALSE FALSE TRUE
[97] TRUE TRUE FALSE TRUE FALSE TRUE FALSE TRUE TRUE TRUE FALSE TRUE
[109] FALSE TRUE TRUE FALSE FALSE TRUE FALSE TRUE FALSE FALSE TRUE TRUE
[121] TRUE TRUE TRUE FALSE TRUE TRUE FALSE FALSE FALSE TRUE TRUE TRUE
[133] FALSE FALSE TRUE FALSE FALSE FALSE TRUE TRUE TRUE TRUE FALSE TRUE
[145] TRUE FALSE TRUE TRUE TRUE FALSE TRUE TRUE TRUE TRUE FALSE TRUE
[157] TRUE TRUE TRUE FALSE FALSE TRUE TRUE FALSE FALSE FALSE FALSE FALSE
[169] TRUE FALSE TRUE FALSE TRUE TRUE FALSE FALSE TRUE TRUE TRUE FALSE
[181] FALSE TRUE FALSE FALSE TRUE FALSE TRUE FALSE TRUE TRUE FALSE FALSE
[193] TRUE TRUE TRUE TRUE TRUE TRUE FALSE TRUE FALSE FALSE FALSE TRUE
[205] TRUE FALSE TRUE FALSE FALSE FALSE FALSE FALSE TRUE TRUE TRUE TRUE
[217] FALSE TRUE TRUE FALSE TRUE TRUE FALSE TRUE TRUE FALSE TRUE TRUE
[229] TRUE TRUE TRUE FALSE TRUE TRUE TRUE TRUE TRUE TRUE FALSE TRUE
[241] TRUE TRUE TRUE TRUE TRUE FALSE FALSE TRUE FALSE TRUE FALSE TRUE
[253] FALSE TRUE FALSE TRUE TRUE TRUE TRUE TRUE TRUE FALSE TRUE TRUE
[265] FALSE TRUE TRUE TRUE FALSE TRUE TRUE TRUE FALSE TRUE FALSE FALSE
[277] TRUE FALSE TRUE FALSE FALSE FALSE TRUE TRUE TRUE TRUE TRUE TRUE
[289] TRUE FALSE FALSE FALSE FALSE TRUE TRUE TRUE FALSE TRUE FALSE TRUE
[301] FALSE TRUE TRUE TRUE FALSE FALSE TRUE TRUE FALSE TRUE TRUE TRUE
[313] TRUE FALSE TRUE TRUE TRUE TRUE FALSE TRUE FALSE TRUE FALSE TRUE
[325] FALSE FALSE TRUE TRUE FALSE TRUE TRUE FALSE TRUE TRUE FALSE TRUE
[337] TRUE TRUE TRUE TRUE TRUE FALSE FALSE TRUE FALSE FALSE TRUE FALSE
[349] FALSE TRUE TRUE TRUE FALSE TRUE TRUE FALSE FALSE TRUE TRUE TRUE
[361] FALSE TRUE FALSE TRUE TRUE FALSE FALSE TRUE TRUE TRUE TRUE FALSE
[373] FALSE FALSE TRUE FALSE FALSE TRUE TRUE TRUE TRUE FALSE FALSE TRUE
[385] TRUE TRUE TRUE TRUE TRUE FALSE FALSE TRUE FALSE FALSE TRUE TRUE
[397] TRUE TRUE TRUE TRUE FALSE TRUE TRUE FALSE TRUE TRUE TRUE FALSE
[409] TRUE TRUE TRUE TRUE FALSE TRUE FALSE TRUE FALSE TRUE FALSE FALSE
[421] TRUE FALSE FALSE TRUE TRUE TRUE FALSE TRUE FALSE FALSE TRUE TRUE
[433] TRUE FALSE FALSE TRUE TRUE FALSE TRUE TRUE FALSE FALSE TRUE FALSE
[445] TRUE TRUE TRUE FALSE FALSE TRUE TRUE FALSE TRUE TRUE FALSE TRUE
[457] TRUE TRUE TRUE TRUE TRUE TRUE TRUE FALSE TRUE TRUE TRUE FALSE
[469] TRUE TRUE FALSE TRUE FALSE FALSE TRUE TRUE FALSE FALSE TRUE TRUE
[481] FALSE FALSE TRUE TRUE TRUE TRUE TRUE FALSE TRUE FALSE TRUE FALSE
[493] FALSE TRUE TRUE TRUE TRUE TRUE FALSE TRUE FALSE FALSE TRUE FALSE
[505] TRUE FALSE FALSE TRUE FALSE TRUE FALSE TRUE FALSE TRUE TRUE FALSE
[517] TRUE TRUE FALSE FALSE TRUE TRUE FALSE FALSE TRUE TRUE TRUE FALSE
[529] FALSE FALSE TRUE FALSE FALSE TRUE TRUE FALSE FALSE TRUE TRUE FALSE
[541] FALSE FALSE TRUE TRUE TRUE TRUE FALSE TRUE FALSE FALSE
You might be able to guess what this does, in the light of what we just did, but if not, you can investigate. Let’s pick three rows where complete.cases
is TRUE
and three where it’s FALSE
, and see what happens.
I’ll pick rows 496, 497, and 498 for the TRUE rows, and 540, 541 and 542 for the FALSE ones. Let’s assemble these rows into a vector and use slice
to display the rows with these numbers:
rows <- c(496, 497, 498, 540, 541, 542)
rows
[1] 496 497 498 540 541 542
Like this:
steak0 %>% slice(rows)
What’s the difference? The rows where complete.cases
is FALSE have one (or more) missing values in them; where complete.cases
is TRUE the rows have no missing values. (Depending on the rows you choose, you may not see the missing value(s), as I didn’t.) Extra (within “extra extra”: I hope you are keeping track): this is a bit tricky to investigate more thoroughly, because the text variables might have missing values in them, and they won’t show up unless we turn them into a factor first:
steak0 %>%
  mutate(across(where(is.character), \(x) factor(x))) %>%
  summary()
respondent_id lottery_a smoke alcohol
Min. :3.235e+09 Mode :logical Mode :logical Mode :logical
1st Qu.:3.235e+09 FALSE:279 FALSE:453 FALSE:125
Median :3.235e+09 TRUE :267 TRUE :84 TRUE :416
Mean :3.235e+09 NA's :4 NA's :13 NA's :9
3rd Qu.:3.235e+09
Max. :3.238e+09
gamble skydiving speed cheated
Mode :logical Mode :logical Mode :logical Mode :logical
FALSE:280 FALSE:502 FALSE:59 FALSE:447
TRUE :257 TRUE :36 TRUE :480 TRUE :92
NA's :13 NA's :12 NA's :11 NA's :11
steak steak_prep female age
Mode :logical Medium :132 Mode :logical >60 :131
FALSE:109 Medium rare:166 FALSE:246 18-29:110
TRUE :430 Medium Well: 75 TRUE :268 30-44:133
NA's :11 Rare : 23 NA's :36 45-60:140
Well : 36 NA's : 36
NA's :118
hhold_income educ
$0 - $24,999 : 51 Bachelor degree :174
$100,000 - $149,999: 76 Graduate degree :133
$150,000+ : 54 High school degree : 39
$25,000 - $49,999 : 77 Less than high school degree : 2
$50,000 - $99,999 :172 Some college or Associate degree:164
NA's :120 NA's : 38
region
Pacific : 91
South Atlantic : 88
East North Central: 86
Middle Atlantic : 72
West North Central: 42
(Other) :133
NA's : 38
There are missing values everywhere. What the where does is to apply something to each column for which the condition is true: here, if a column is text, replace it by the factor version of itself. This makes for a better summary, one that shows how many observations are in each category, and, more important for us, how many are missing (a lot).
All right, so there are 15 columns, so let’s investigate missingness in our rows by looking at the columns 1 through 8 and then 9 through 15, so they all fit on the screen. Recall that you can select
columns by number:
steak0 %>% select(1:8) %>% slice(rows)
and
steak0 %>% select(9:15) %>% slice(rows)
In this case, the first three rows have no missing values anywhere, and the last three rows have exactly one missing value each. This corresponds to what we would expect: complete.cases is TRUE precisely for the rows with no missing values anywhere, and FALSE for the rows that have any.
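As an aside, you don’t have to read that long vector of TRUEs and FALSEs by eye; which turns it into row numbers, for example the first few incomplete rows:

which(!complete.cases(steak0)) %>% head(10)   # first few row numbers with at least one NA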
What we now need to do is to obtain a data frame that contains only the rows with non-missing values. This can be done by saving the result of complete.cases
in a variable first; filter
can take anything that produces a true or a false for each row, and will return the rows for which the thing it was fed was true.
cc <- complete.cases(steak0)
steak0 %>% filter(cc) -> steak.complete
A quick check that we got rid of the missing values:
steak.complete
There are no missing values there. Of course, this is not a proof, and there might be some missing values further down, but at least it suggests that we might be good.
For proof, this is the easiest way I know:
steak.complete %>%
  mutate(across(where(is.character), \(x) factor(x))) %>%
  summary()
respondent_id lottery_a smoke alcohol
Min. :3.235e+09 Mode :logical Mode :logical Mode :logical
1st Qu.:3.235e+09 FALSE:171 FALSE:274 FALSE:65
Median :3.235e+09 TRUE :160 TRUE :57 TRUE :266
Mean :3.235e+09
3rd Qu.:3.235e+09
Max. :3.235e+09
gamble skydiving speed cheated steak
Mode :logical Mode :logical Mode :logical Mode :logical Mode:logical
FALSE:158 FALSE:308 FALSE:28 FALSE:274 TRUE:331
TRUE :173 TRUE :23 TRUE :303 TRUE :57
steak_prep female age hhold_income
Medium :109 Mode :logical >60 :82 $0 - $24,999 : 37
Medium rare:128 FALSE:174 18-29:70 $100,000 - $149,999: 66
Medium Well: 56 TRUE :157 30-44:93 $150,000+ : 39
Rare : 18 45-60:86 $25,000 - $49,999 : 55
Well : 20 $50,000 - $99,999 :134
educ region
Bachelor degree :120 South Atlantic :68
Graduate degree : 86 Pacific :57
High school degree : 20 East North Central:48
Less than high school degree : 1 Middle Atlantic :46
Some college or Associate degree:104 West North Central:29
Mountain :24
(Other) :59
If there were any missing values, they would be listed at the end of the counts of observations for each level, or at the bottom of the five-number summaries. But there aren’t. So here’s your proof.
\(\blacksquare\)
- Write the data into a
.csv
file, with a name likesteak1.csv
. Open this file in a spreadsheet and (quickly) verify that you have the right columns and no missing values.
Solution
This is write_csv
, using my output from drop_na
:
write_csv(steak, "steak1.csv")
Open up Excel, or whatever you have, and take a look. You should have all the right columns, and, scrolling down, no visible missing values.
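If you prefer to check within R rather than in a spreadsheet, a sketch like this re-reads the file (assuming it was written to your current folder as steak1.csv) and counts its missing values, which should come out to zero:

read_csv("steak1.csv") %>% is.na() %>% sum()   # total count of NAs in the re-read file: should be 0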
\(\blacksquare\)