# 1 Logistic & Poisson Regression: Overview

In this chapter, I’ve mashed together online datasets, tutorials, and my own modifications thereto. I start with the packages we will need. Then I move into data cleaning and assumptions. The model itself is possibly the easiest thing to run. Then, we wrap up with all the stats you’ll ever need for your logistic regression and how to graph it. Before we leave, we’ll look at the slight modification for running a Poisson regression.

## 1.1 Why would you do logistic regression?

Logistic regression, also known as logit regression, is what you use when your outcome variable (dependent variable) is dichotomous. These would refer to all your research yes/no questions: Did you vote, yes or no? Do you have a disease, yes or no? Did you use [insert your favorite illicit substance here] in the past week, yes or no? You might also think of it as pass/fail, hit/miss, success/failure, alive/dead, etc. Rather than predicting the value of the outcome itself, logistic regression estimates the probability of getting one of your two outcomes (i.e., the probability of voting vs. not voting) given your predictor/independent variable(s). For our purposes, “hit” refers to your favored outcome and “miss” refers to your unfavored outcome.

## 1.2 Why would you do a Poisson regression?

Poisson regression, also known as a log-linear model, is what you use when your outcome variable is a count (i.e., numeric, but not quite so wide in range as a continuous variable.) Examples of count variables in research include how many heart attacks or strokes one’s had, how many days in the past month one’s used [insert your favorite illicit substance here], or, as in survival analysis, how many days from outbreak until infection. The Poisson distribution is unique in that its mean and its variance are equal. Real count data often violate that equality, and a common reason is zero inflation. Sometimes two processes may be at work: one that determines whether or not an event happens at all and another that determines how many times the event happens when it does. Using our count variables from above, this could be a sample that contains individuals with and without heart disease: those without heart disease cause a disproportionate number of zeros in the data and those with heart disease trail off in a tail to the right with increasing numbers of heart attacks. This is why logistic and Poisson regressions go together in research: there is a dichotomous outcome inherent in a Poisson distribution. However, the “hits” in the logistic question can’t be understood without further conducting the Poisson regression.
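A quick simulation makes both points concrete: a pure Poisson sample has mean and variance close to each other, while mixing in structural zeros (zero inflation) pushes the variance above the mean. A minimal sketch (the rate of 2 and the 40% zero proportion are arbitrary choices for illustration):

```r
set.seed(42)

# Pure Poisson: mean and variance should both land near lambda = 2
y <- rpois(10000, lambda = 2)
mean(y); var(y)

# Zero-inflated version: 40% of observations are "never at risk" zeros
at_risk <- rbinom(10000, 1, 0.6)
y_zi <- ifelse(at_risk == 1, rpois(10000, lambda = 2), 0)
mean(y_zi); var(y_zi)  # variance now exceeds the mean (overdispersion)
```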

### 1.2.1 Packages

• aod: Stands for Analysis of Overdispersed Data, contains a bunch of functions for this type of analysis on counts or proportions. The one you’ll see the most in this chapter is wald.test (i.e., Wald Test for Model Coefficients.)

• ggplot2: ggplot2 is an expansion on Leland Wilkinson’s The Grammar of Graphics. ggplot2 lets you graphically represent both univariate and multivariate numerical and categorical data. Since we’re doing logistic regression, we need a graphing library that can handle categorical data.

• effects: Another graphing package because options are good.

• graphics: This package allows you to go beyond R graphing primitives. Below, you’ll see cdplot (Conditional Density Plots), xlim, ylim, and others in action.

• car: Lets us modify simple plots in R with labels, titles, etc.

• pscl: Need this to create a pseudo R-squared for logistic regression.

Note: You might get some Warning Messages when you load these depending on what version of RStudio you have and/or whether you’ve previously loaded these in the past. You may also need to go into Packages>Install and check the box next to the package you want to load if you’re using RStudio. Usually, once you install a package, library() should be sufficient to load it in the future.

##
## Attaching package: 'aod'
## The following object is masked from 'package:survival':
##
##     rats
## Classes and Methods for R developed in the
## Political Science Computational Laboratory
## Department of Political Science
## Stanford University
## Simon Jackman
## hurdle and zeroinfl functions by Achim Zeileis

## 1.3 Now let’s load our data.

I’ll be bringing in a couple of datasets freely available online in order to demonstrate what needs to happen in logistic regression. I basically string together things available in several places online so that we have everything we need for logistic regression analysis here in one chapter. See endnotes for links and references. We have 2 datasets we’ll be working with for logistic regression and 1 for Poisson. We start with the logistic ones. Because we will be using multiple datasets and switching between them, I will use attach and detach to tell R which dataset each block of code refers to. attach is a function that puts a dataset on R’s search path so its columns can be referenced by name, and detach takes it back off again.

cuse <- read.table("http://data.princeton.edu/wws509/datasets/cuse.dat", header = TRUE)
#Notice we're not setting a working directory because we're reading in data from the internet.
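As an aside: if you’d rather avoid attach/detach entirely (they can cause confusing masking errors when multiple datasets share column names), base R’s with() does the same job one expression at a time. A small sketch using the cuse data we just read in:

```r
# Equivalent to attach(cuse); table(age); detach(cuse)
with(cuse, table(age))

# Or simply refer to columns explicitly with $
table(cuse$age)
```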

## 1.4 Data Cleaning & Reading Output

…Because none of this matters if your data is not in binary form!

Let’s start by learning some ways to manage our data so we can use it for a dichotomous outcome.

The cuse dataset is an old one with N = 16 rows of grouped data: each row gives an age group, the women’s education level, whether they want more children, and counts of how many are and aren’t using contraceptives. Location is listed above. I use the dataset from Princeton and modify/add to the tutorial by G. Rodriguez. Here’s a look at the data to get a sense of it.

attach(cuse)
str(cuse) 
## 'data.frame':    16 obs. of  5 variables:
##  $ age      : Factor w/ 4 levels "<25","25-29",..: 1 1 1 1 2 2 2 2 3 3 ...
##  $ education: Factor w/ 2 levels "high","low": 2 2 1 1 2 2 1 1 2 2 ...
##  $ wantsMore: Factor w/ 2 levels "no","yes": 2 1 2 1 2 1 2 1 2 1 ...
##  $ notUsing : int  53 10 212 50 60 19 155 65 112 77 ...
##  $ using    : int  6 4 52 10 14 10 54 27 33 80 ...
levels(cuse$age)
## [1] "<25"   "25-29" "30-39" "40-49"
detach(cuse)

Here’s your basic glm model, wherein the wantsMorefit object (as in, they want more children: yes or no) is our first model to fit. We specify the glm function, our outcome variable (i.e., wantsMore) and our predictors (i.e., age and education, without any interaction terms yet, hence we only use + for the main effects), we tell it what data to use (i.e., cuse), and the family for our distribution (i.e., binomial.) This model is asking, “Do age and education predict wanting more children? If yes, to what extent?” Here, our outcome (wanting more children/wantsMore) is already in binary form:

attach(cuse)
wantsMorefit <- glm(wantsMore~age+education,data=cuse,family=binomial())
summary(wantsMorefit) # display results
##
## Call:
## glm(formula = wantsMore ~ age + education, family = binomial(),
##     data = cuse)
##
## Deviance Residuals:
##    Min      1Q  Median      3Q     Max
## -1.177  -1.177   0.000   1.177   1.177
##
## Coefficients:
##                Estimate Std. Error z value Pr(>|z|)
## (Intercept)   6.267e-16  1.118e+00       0        1
## age25-29     -4.047e-16  1.414e+00       0        1
## age30-39     -5.860e-16  1.414e+00       0        1
## age40-49     -6.280e-16  1.414e+00       0        1
## educationlow -4.441e-16  1.000e+00       0        1
##
## (Dispersion parameter for binomial family taken to be 1)
##
##     Null deviance: 22.181  on 15  degrees of freedom
## Residual deviance: 22.181  on 11  degrees of freedom
## AIC: 32.181
##
## Number of Fisher Scoring iterations: 2

Some notes on the stats we generated above: Unlike linear regression, we’re using glm and our family is binomial. In logistic regression, we are no longer speaking in terms of beta sizes. The logistic function is S-shaped and constricts the range to 0-1. Thus, we are instead modeling the log odds of getting a 1 vs. a 0 outcome.
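The S-shape is easy to see by plotting the logistic function itself, which maps any log odds value onto the 0-1 probability scale. A quick sketch using base R’s plogis (the inverse logit):

```r
# plogis(x) = exp(x) / (1 + exp(x)): log odds in, probability out
curve(plogis(x), from = -6, to = 6,
      xlab = "Log odds (linear predictor)",
      ylab = "Probability of a 'hit'",
      main = "The Logistic Function")
abline(h = c(0, 1), lty = 2)  # probabilities are constricted to 0-1
```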

Let’s walk through the output: The first thing you see is the deviance residuals, which are a measure of model fit (larger absolute values are worse.) Next is our first batch of coefficients. Going from left to right, the Coefficients table shows you the “Estimate” (the change in the log odds of the outcome variable per one-unit increase in the predictor variable), the standard error, the z-statistic (AKA the Wald z-statistic), and the p-value.

Here’s an example of how you’d interpret the coefficients above: For each unit change in [insert predictor variable], the log odds of [achieving the outcome of interest] increases by [coefficient].

You’ve never seen someone report anything in terms of log odds and you’re not sure how this is a meaningful description of the outcome variable, you say? That’s why we exponentiate: to put this in terms of odds ratios (see below for interpretation of ORs.)

confint(wantsMorefit) # 95% CI for the coefficients
## Waiting for profiling to be done...
##                  2.5 %   97.5 %
## (Intercept)  -2.343134 2.343134
## age25-29     -2.884060 2.884060
## age30-39     -2.884060 2.884060
## age40-49     -2.884060 2.884060
## educationlow -1.999555 1.999555

The next part of our output is the 95% confidence intervals (CI) for the unexponentiated coefficients. In logistic regression, if the confidence interval crosses over zero, as in the interval stretches from a negative value to a positive value, that effect is not significant.
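You can let R do that sign check for you: an interval crosses zero exactly when its two endpoints have opposite signs. A small sketch using the wantsMorefit model from above:

```r
ci <- confint(wantsMorefit)            # matrix with 2.5 % and 97.5 % columns
crosses_zero <- ci[, 1] < 0 & ci[, 2] > 0
crosses_zero                            # TRUE = not significant at the .05 level
```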

Here’s the same thing below, but this time with exponentiated coefficients:

exp(coef(wantsMorefit)) # exponentiated coefficients
##  (Intercept)     age25-29     age30-39     age40-49 educationlow
##            1            1            1            1            1
exp(confint(wantsMorefit)) # 95% CI for exponentiated coefficients
## Waiting for profiling to be done...
##                   2.5 %    97.5 %
## (Intercept)  0.09602624 10.413821
## age25-29     0.05590729 17.886754
## age30-39     0.05590729 17.886754
## age40-49     0.05590729 17.886754
## educationlow 0.13539558  7.385765

Interpreting ORs: For each unit increase in the predictor, the odds of [achieving the outcome of interest vs. not] are multiplied by [that predictor variable’s OR]. An OR greater than 1 means the odds increase; an OR between 0 and 1 means the odds decrease. (ORs are never negative.)

predict(wantsMorefit, type="response") # predicted values
##   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16
## 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5
residuals(wantsMorefit, type="deviance") # residuals
##        1        2        3        4        5        6        7        8
##  1.17741 -1.17741  1.17741 -1.17741  1.17741 -1.17741  1.17741 -1.17741
##        9       10       11       12       13       14       15       16
##  1.17741 -1.17741  1.17741 -1.17741  1.17741 -1.17741  1.17741 -1.17741
plot(residuals(wantsMorefit, type="deviance")) # plot your residuals

detach(cuse)

Returning to our assumptions, we can now look at the residuals and plot them so we can see if they look weird. (They should look like a bunch of randomly scattered dots, not any special pattern.)

Above, our outcome variable was already entered as either 0 or 1. But what if your outcome isn’t already in standard 0/1 form? Here’s what you can do: Try working across columns! If you cbind the two columns into 1 vector, R can read one column as “hits” and the next as “misses” just like the previous analyses read the 0’s and 1’s. Here’s how we set up the columns like that. FYI, we copy to a new column because I prefer not to overwrite the original data, so I can still use it later. Also, R reads the columns in order as “hits” and then “misses,” so if your data isn’t already in that order, you may need to move or add columns to get them in that order. Here’s how.

Copy a Column in R

1. Create the new column.

attach(cuse)
cuse$wantsMore_copy <- NA #This gives you a column of NAs.

2. Copy the data from the existing column into the new one.

cuse$wantsMore_copy <- cuse$wantsMore
cuse$wantsMore_copy <- recode(cuse$wantsMore_copy,"'no'=0;else=1") #Now those NAs have actual values.

3. Run the model, cbinding across the columns so that the first column becomes “hits” and the second column becomes “misses.” You’ll see in the first line of the model, where the outcome variable usually appears, we have a cbind across our two columns instead. We have cbinded them into one variable for the purpose of this analysis.

usingfit <- glm(cbind(using, notUsing)~age+education+wantsMore_copy,data=cuse,family=binomial())
summary(usingfit) # display results
##
## Call:
## glm(formula = cbind(using, notUsing) ~ age + education + wantsMore_copy,
##     family = binomial(), data = cuse)
##
## Deviance Residuals:
##     Min       1Q   Median       3Q      Max
## -2.5148  -0.9376   0.2408   0.9822   1.7333
##
## Coefficients:
##                 Estimate Std. Error z value Pr(>|z|)
## (Intercept)      -0.8082     0.1590  -5.083 3.71e-07 ***
## age25-29          0.3894     0.1759   2.214  0.02681 *
## age30-39          0.9086     0.1646   5.519 3.40e-08 ***
## age40-49          1.1892     0.2144   5.546 2.92e-08 ***
## educationlow     -0.3250     0.1240  -2.620  0.00879 **
## wantsMore_copy1  -0.8330     0.1175  -7.091 1.33e-12 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
##     Null deviance: 165.772  on 15  degrees of freedom
## Residual deviance:  29.917  on 10  degrees of freedom
## AIC: 113.43
##
## Number of Fisher Scoring iterations: 4
confint(usingfit) # 95% CI for the coefficients
## Waiting for profiling to be done...
##                       2.5 %      97.5 %
## (Intercept)     -1.12473993 -0.50071613
## age25-29         0.04632634  0.73646245
## age30-39         0.58955178  1.23552226
## age40-49         0.77093870  1.61220754
## educationlow    -0.56972464 -0.08329659
## wantsMore_copy1 -1.06373543 -0.60308458
exp(coef(usingfit)) # exponentiated coefficients
##     (Intercept)        age25-29        age30-39        age40-49
##       0.4456506       1.4760678       2.4808804       3.2845805
##    educationlow wantsMore_copy1
##       0.7225312       0.4347628
exp(confint(usingfit)) # 95% CI for exponentiated coefficients
## Waiting for profiling to be done...
##                     2.5 %    97.5 %
## (Intercept)     0.3247369 0.6060965
## age25-29        1.0474162 2.0885341
## age30-39        1.8031800 3.4401747
## age40-49        2.1617946 5.0138673
## educationlow    0.5656812 0.9200782
## wantsMore_copy1 0.3451641 0.5471214
predict(usingfit, type="response") # predicted values
##         1         2         3         4         5         6         7
## 0.1228009 0.2435683 0.1623053 0.3082699 0.1712509 0.3221665 0.2223899
##         8         9        10        11        12        13        14
## 0.3967948 0.2577768 0.4440846 0.3246329 0.5250773 0.3149818 0.5140024
##        15        16
## 0.3889006 0.5941188
residuals(usingfit, type="deviance") # residuals
##           1           2           3           4           5           6
## -0.50713553  0.36079777  1.48747671 -2.51479462  0.40419450  0.25960953
##           7           8           9          10          11          12
##  1.22864003 -2.06525969 -0.84242507  1.64528938 -1.22297791  0.22193897
##          13          14          15          16
## -2.49141433 -0.06525423  0.90006915  1.73330325
plot(residuals(usingfit, type="deviance")) # plot your residuals
detach(cuse)

# 2 Logistic Regression Assumptions

Here are your assumptions:

1. Independence. Just know your data, know your sampling methods. Do they make a decent case for independence? Yes? Godspeed, classmates. On to assumption #2.
2. Non-fishy distribution of the residuals. (See residuals plot above.)
3. Correct specification of the variance structure.
4. Linear relationship between the response and the linear predictor.

For 2-4, you need to use your model. Also, your predictor may not be linear, so don’t be a perfectionist. Let’s get into the analysis, then.
In mydata, our outcome variable is admit (i.e., admission to grad school, which is a yes/no response.) We’ve also got predictor variables: gre (i.e., GRE exam score) and gpa (undergraduate GPA), which are continuous(ish), and rank (ranking of undergrad institution, where 1 is the best and 4 is the worst, sort of like tiers), which is more ordinal/categorical as a variable. This dataset is asking whether and to what extent GPA, GRE score, and rank of institution change the likelihood of getting admitted to grad school. Let’s take a look at our data and make sure we in fact have a binary outcome. Certain graphs and analyses require continuous vs. categorical predictors, so we can confirm what we’re working with by getting a summary. Knowing our data should help us cover the 1st assumption. This will use the data from, and modify/add onto, the UCLA Statistical Consulting Group’s tutorial. Their dataset is different from cuse and allows us to do different logistic analyses.

## The following object is masked from package:pscl:
##
##     admit
##      admit             gre             gpa             rank
##  Min.   :0.0000   Min.   :220.0   Min.   :2.260   Min.   :1.000
##  1st Qu.:0.0000   1st Qu.:520.0   1st Qu.:3.130   1st Qu.:2.000
##  Median :0.0000   Median :580.0   Median :3.395   Median :2.000
##  Mean   :0.3175   Mean   :587.7   Mean   :3.390   Mean   :2.485
##  3rd Qu.:1.0000   3rd Qu.:660.0   3rd Qu.:3.670   3rd Qu.:3.000
##  Max.   :1.0000   Max.   :800.0   Max.   :4.000   Max.   :4.000
##       admit         gre         gpa        rank
##   0.4660867 115.5165364   0.3805668   0.9444602

Here are some ways to eyeball the data for your assumptions. Is your outcome variable dichotomous? There should only be 2 possible outcomes (or levels) for that variable. Visualizing your data in a table will show you how many levels you’re dealing with. Take a look: Since admit (admission to grad school, yes or no) is our outcome variable here, does it show you exactly 2 levels of admit in the table? If yes, that’s a point toward meeting our assumptions.
xtabs(~ admit + rank, data = mydata)
##      rank
## admit  1  2  3  4
##     0 28 97 93 55
##     1 33 54 28 12

A conditional density plot will also show us if our distribution looks binary. You’re looking for a diagonal(ish) plot with odds that differ according to the level (0 or 1) of your outcome variable. What if it isn’t reading the DV as a factor? Earlier it gave us a mean on this, so it’s probably reading it as a numeric value, not as a factor, like it needs to be. Let’s make a 2nd version of admit that R will definitely read as a factor with 2 levels. Keep this in mind: it’s a very common error in logistic regression in R. Here’s how to remind R that it’s definitely a factor:

mydata$admit2 = factor(mydata$admit)

We tell R to add a variable called admit2 to mydata, which is a factored version of our original admit.

levels(mydata$admit2) = c("no","yes")

And just to make sure it’s not read as a numeric again, let’s put some non-numeric levels on admit2. If you click on mydata in the Global Environment, you should see admit2 as a column with yes and no corresponding to the 0’s and 1’s in admit.

Reminder: 0 = no, 1 = yes.

Now let’s try that conditional density plot with admit2.
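The plotting code didn’t survive into this chunk, but with the graphics package a conditional density plot of admit2 against a continuous predictor would look something like this (picking gre as the predictor is my choice here for illustration):

```r
# Conditional density of admit2 (yes/no) as GRE score varies
cdplot(admit2 ~ gre, data = mydata,
       main = "Admission by GRE Score",
       xlab = "GRE Score", ylab = "Admitted")
```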

## 2.1 Logistic Regression and Categorical Predictor Variables that are Also Ordinal

Above we ran a basic glm model and looked at main effects. Here’s how we can do some other analyses with our grad school admit data. Here’s a cool categorical analysis from the UCLA Logit Regression Tutorial that I think is worth knowing you can do with a logistic model. Here, rank has an order and we can analyze the potential change in the outcome (admission to grad school) depending on each level of rank.

##
## Call:
## glm(formula = admit ~ gre + gpa + rank, family = "binomial",
##     data = mydata)
##
## Deviance Residuals:
##     Min       1Q   Median       3Q      Max
## -1.6268  -0.8662  -0.6388   1.1490   2.0790
##
## Coefficients:
##              Estimate Std. Error z value Pr(>|z|)
## (Intercept) -3.989979   1.139951  -3.500 0.000465 ***
## gre          0.002264   0.001094   2.070 0.038465 *
## gpa          0.804038   0.331819   2.423 0.015388 *
## rank2       -0.675443   0.316490  -2.134 0.032829 *
## rank3       -1.340204   0.345306  -3.881 0.000104 ***
## rank4       -1.551464   0.417832  -3.713 0.000205 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
##     Null deviance: 499.98  on 399  degrees of freedom
## Residual deviance: 458.52  on 394  degrees of freedom
## AIC: 470.52
##
## Number of Fisher Scoring iterations: 4
## Waiting for profiling to be done...
##                     2.5 %       97.5 %
## (Intercept) -6.2716202334 -1.792547080
## gre          0.0001375921  0.004435874
## gpa          0.1602959439  1.464142727
## rank2       -1.3008888002 -0.056745722
## rank3       -2.0276713127 -0.670372346
## rank4       -2.4000265384 -0.753542605
##                     2.5 %       97.5 %
## (Intercept) -6.2242418514 -1.755716295
## gre          0.0001202298  0.004408622
## gpa          0.1536836760  1.454391423
## rank2       -1.2957512650 -0.055134591
## rank3       -2.0169920597 -0.663415773
## rank4       -2.3703986294 -0.732528724
## Wald test:
## ----------
##
## Chi-squared test:
## X2 = 20.9, df = 3, P(> X2) = 0.00011

The p-value in the above Wald test tells you if the overall effect of that categorical variable is significant.

This is almost akin to running an omnibus test in ANOVA. We know something is happening with rank, so here’s how you can compare levels of rank.
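The code that produced the Wald output here isn’t shown in this chunk. With the aod package it would look something like the following; I’m assuming the fitted model object is called mylogit (as it is later in this chapter) and that the rank dummies sit in positions 4-6 of the coefficient vector, as in the Coefficients table above:

```r
# Overall effect of rank: jointly test coefficient terms 4 through 6
wald.test(b = coef(mylogit), Sigma = vcov(mylogit), Terms = 4:6)

# Compare rank2 vs. rank3: the contrast vector puts 1 and -1 on those terms
l <- cbind(0, 0, 0, 1, -1, 0)
wald.test(b = coef(mylogit), Sigma = vcov(mylogit), L = l)
```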

## Wald test:
## ----------
##
## Chi-squared test:
## X2 = 5.5, df = 1, P(> X2) = 0.019
## (Intercept)         gre         gpa       rank2       rank3       rank4
##   0.0185001   1.0022670   2.2345448   0.5089310   0.2617923   0.2119375
## Waiting for profiling to be done...
##                    OR       2.5 %    97.5 %
## (Intercept) 0.0185001 0.001889165 0.1665354
## gre         1.0022670 1.000137602 1.0044457
## gpa         2.2345448 1.173858216 4.3238349
## rank2       0.5089310 0.272289674 0.9448343
## rank3       0.2617923 0.131641717 0.5115181
## rank4       0.2119375 0.090715546 0.4706961

## 2.2 Here Are the Rest of Your Stats to Look at Interactions

attach(cuse)
Intusingfit <- glm(cbind(using, notUsing)~age*education,data=cuse,family=binomial())
summary(Intusingfit) # display results
##
## Call:
## glm(formula = cbind(using, notUsing) ~ age * education, family = binomial(),
##     data = cuse)
##
## Deviance Residuals:
##     Min       1Q   Median       3Q      Max
## -3.7849  -1.0158  -0.0602   1.3826   3.4456
##
## Coefficients:
##                       Estimate Std. Error z value Pr(>|z|)
## (Intercept)            -1.4412     0.1412 -10.205  < 2e-16 ***
## age25-29                0.4420     0.1919   2.303   0.0213 *
## age30-39                1.0357     0.1827   5.668 1.44e-08 ***
## age40-49                2.1090     0.3092   6.822 9.00e-12 ***
## educationlow           -0.3993     0.3685  -1.084   0.2785
## age25-29:educationlow   0.2071     0.4550   0.455   0.6490
## age30-39:educationlow   0.2904     0.4042   0.719   0.4724
## age40-49:educationlow  -0.6740     0.4923  -1.369   0.1710
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
##     Null deviance: 165.772  on 15  degrees of freedom
## Residual deviance:  73.033  on  8  degrees of freedom
## AIC: 160.54
##
## Number of Fisher Scoring iterations: 4
confint(Intusingfit) # 95% CI for the coefficients
## Waiting for profiling to be done...
##                             2.5 %     97.5 %
## (Intercept)           -1.72642022 -1.1718125
## age25-29               0.06731008  0.8207309
## age30-39               0.68123554  1.3983248
## age40-49               1.51470767  2.7315906
## educationlow          -1.17430407  0.2846155
## age25-29:educationlow -0.66503892  1.1304095
## age30-39:educationlow -0.47008655  1.1267953
## age40-49:educationlow -1.62388932  0.3165131
exp(coef(Intusingfit)) # exponentiated coefficients
##           (Intercept)              age25-29              age30-39
##             0.2366412             1.5558651             2.8172043
##              age40-49          educationlow age25-29:educationlow
##             8.2403226             0.6707629             1.2301350
## age30-39:educationlow age40-49:educationlow
##             1.3370229             0.5096888
exp(confint(Intusingfit)) # 95% CI for exponentiated coefficients
## Waiting for profiling to be done...
##                           2.5 %     97.5 %
## (Intercept)           0.1779202  0.3098049
## age25-29              1.0696271  2.2721599
## age30-39              1.9763180  4.0484125
## age40-49              4.5480914 15.3572950
## educationlow          0.3090340  1.3292509
## age25-29:educationlow 0.5142535  3.0969244
## age30-39:educationlow 0.6249482  3.0857517
## age40-49:educationlow 0.1971305  1.3723342
predict(Intusingfit, type="response") # predicted values
##         1         2         3         4         5         6         7
## 0.1369863 0.1369863 0.1913580 0.1913580 0.2330097 0.2330097 0.2691030
##         8         9        10        11        12        13        14
## 0.2691030 0.3741722 0.3741722 0.4000000 0.4000000 0.4000000 0.4000000
##        15        16
## 0.6610169 0.6610169
residuals(Intusingfit, type="deviance") # residuals
##          1          2          3          4          5          6
## -0.8207009  1.4484269  0.2309360 -0.4947108 -0.9132139  1.3606657
##          7          8          9         10         11         12
## -0.3512671  0.5223897 -3.7848869  3.4455922 -3.1959327  3.2711054
##         13         14         15         16
## -3.5518799  2.1658149 -1.3235504  0.8435519
1-pchisq(73.033,8) # 1 - chisquare (residual deviance, DF)
## [1] 1.220468e-12
#Analysis of Deviance Table
anova(Intusingfit, test="Chisq") 
## Analysis of Deviance Table
##
##
## Response: cbind(using, notUsing)
##
## Terms added sequentially (first to last)
##
##
##               Df Deviance Resid. Df Resid. Dev Pr(>Chi)
## NULL                             15    165.772
## age            3   79.192        12     86.581  < 2e-16 ***
## education      1    6.162        11     80.418  0.01305 *
## age:education  3    7.385         8     73.033  0.06057 .
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#McFadden R^2 is a pseudo R^2 for logistic regression
pR2(Intusingfit)
##          llh      llhNull           G2     McFadden         r2ML
##  -72.2703401 -118.6401419   92.7396036    0.3908441    0.9969610
##         r2CU
##    0.9969614

## 2.3 Updating the Model

What if you (or your PI) suddenly decide you want to see the analysis, but WITHOUT that one predictor variable? Fret not, you don’t need to rewrite the whole thing. You can use the update function instead. Since it’s easier to demonstrate this with a simple additive model, we’ll look at it with the old usingfit one, which only looked at main effects and not interactions. This is the model from step 3 of “Copy a Column in R.” Now we’ll look at it without education.

Updateusingfit <- update(usingfit, ~ . - education)
summary(Updateusingfit)
##
## Call:
## glm(formula = cbind(using, notUsing) ~ age + wantsMore_copy,
##     family = binomial(), data = cuse)
##
## Deviance Residuals:
##     Min       1Q   Median       3Q      Max
## -2.7870  -1.3208  -0.3417   1.2346   2.4577
##
## Coefficients:
##                 Estimate Std. Error z value Pr(>|z|)
## (Intercept)      -0.8698     0.1571  -5.536 3.10e-08 ***
## age25-29          0.3678     0.1754   2.097    0.036 *
## age30-39          0.8078     0.1598   5.056 4.27e-07 ***
## age40-49          1.0226     0.2039   5.014 5.32e-07 ***
## wantsMore_copy1  -0.8241     0.1171  -7.037 1.97e-12 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
##     Null deviance: 165.772  on 15  degrees of freedom
## Residual deviance:  36.888  on 11  degrees of freedom
## AIC: 118.4
##
## Number of Fisher Scoring iterations: 4
detach(cuse)

Regarding the McFadden R^2, which is a pseudo R^2 for logistic regression…A regular (i.e., non-pseudo) R^2 in ordinary least squares regression is often used as an indicator of goodness-of-fit. However, in a logistic regression we don’t have the types of values to calculate a real R^2. A pseudo R^2 here is calculated differently and is similarly used as a measure of goodness of fit (higher means better fit, though pseudo R^2 values run much lower than OLS R^2 values.) But that’s not all there is to pseudo-R^2’s. Here’s a good explanation with more detail.
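As a sanity check on pR2, McFadden’s version can be computed by hand as 1 minus the ratio of the fitted model’s log-likelihood to the null (intercept-only) model’s. A sketch using the interaction model from above:

```r
# McFadden pseudo R^2 = 1 - logLik(model) / logLik(null model)
null_model <- glm(cbind(using, notUsing) ~ 1, data = cuse, family = binomial())
1 - as.numeric(logLik(Intusingfit)) / as.numeric(logLik(null_model))
# this should match the McFadden value reported by pR2(Intusingfit)
```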

## 2.4 Graphing Logistic Regression Analyses

This takes some work. Because we talk about binary outcomes in terms of odds ratios, but it’s not very informative to graph odds ratios, we need to work instead with predicted probabilities and graph those. That involves making a bunch of new datasets. This is my annotated version of the UCLA tutorial for beginners, along with titling the graph because untitled graphs are my pet peeve.

1. Start by calculating the predicted probability of the outcome at each level of the categorical/ordinal predictor variable and/or holding continuous variables at their means.
attach(mydata)
## The following object is masked from package:pscl:
##
##     admit
newdata1 <- with(mydata,
data.frame(gre = mean(gre), gpa = mean(gpa), rank = factor(1:4)))
2. Create the predicted probabilities using the new dataframe.
newdata1\$rankP <- predict(mylogit, newdata = newdata1, type = "response")
newdata1
##     gre    gpa rank     rankP
## 1 587.7 3.3899    1 0.5166016
## 2 587.7 3.3899    2 0.3522846
## 3 587.7 3.3899    3 0.2186120
## 4 587.7 3.3899    4 0.1846684

Note: This actually requires using the rep and seq functions in order to make our data fit our GRE score range and have tick marks at 100 points, similar to what our SD showed us about the GRE scores’ distribution. We also have to constantly remind R that our factors are indeed factors. If you have something logistic that won’t run, this is a common reason why.

3. Start creating the predicted probability version of your variables.
newdata2 <- with(mydata,
data.frame(gre = rep(seq(from = 200, to = 800, length.out = 100), 4),
gpa = mean(gpa), rank = factor(rep(1:4, each = 100))))
4. Set the SE to display.
newdata3 <- cbind(newdata2, predict(mylogit, newdata = newdata2, type="link", se=TRUE))
newdata3 <- within(newdata3, {
PredictedProb <- plogis(fit)
LL <- plogis(fit - (1.96 * se.fit))
UL <- plogis(fit + (1.96 * se.fit))
})
5. Finally, the graph.
PredProbPlot <- ggplot(newdata3, aes(x = gre, y = PredictedProb)) +
geom_ribbon(aes(ymin = LL, ymax = UL, fill = rank), alpha = .2) +
geom_line(aes(colour = rank), size=1)

PredProbPlot + ggtitle("Admission to Grad School by Rank") #add a title

detach(mydata)

Interpreting the graph: Each line is the predicted probability of being admitted to grad school for each institutional rank. The legend tells us that red represents #1 ranked institutions, green represents #2 ranked institutions, blue represents #3 ranked institutions, and purple represents #4 ranked institutions (in order from top ranked to lowest ranked.) Each is set against its color-coded confidence interval. We can see that the intercepts all fall in rank order and that for each institution rank, the positive slope shows the predicted probability of being admitted to grad school increases as GRE score increases.

Plotting an Interaction Using the Effects Package

attach(cuse)
AgeWantfit <- glm(cbind(using,notUsing)~
age*wantsMore_copy,data=cuse,
family=binomial())
summary(AgeWantfit) # display results
##
## Call:
## glm(formula = cbind(using, notUsing) ~ age * wantsMore_copy,
##     family = binomial(), data = cuse)
##
## Deviance Residuals:
##      Min        1Q    Median        3Q       Max
## -1.67001  -0.85288   0.02621   0.72300   2.18925
##
## Coefficients:
##                          Estimate Std. Error z value Pr(>|z|)
## (Intercept)               -1.4553     0.2968  -4.903 9.43e-07 ***
## age25-29                   0.6354     0.3564   1.783  0.07463 .
## age30-39                   1.5412     0.3183   4.842 1.29e-06 ***
## age40-49                   1.7643     0.3435   5.136 2.80e-07 ***
## wantsMore_copy1           -0.0640     0.3303  -0.194  0.84637
## age25-29:wantsMore_copy1  -0.2672     0.4091  -0.653  0.51366
## age30-39:wantsMore_copy1  -1.0905     0.3733  -2.921  0.00349 **
## age40-49:wantsMore_copy1  -1.3672     0.4834  -2.828  0.00468 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
##     Null deviance: 165.772  on 15  degrees of freedom
## Residual deviance:  20.099  on  8  degrees of freedom
## AIC: 107.61
##
## Number of Fisher Scoring iterations: 4
plot(effect("age:wantsMore_copy",
            AgeWantfit,
            xlevels = list(age = 4, wantsMore_copy = c(1, 0))),
     rug = FALSE,
     multiline = TRUE,
     main = "Age by Wants More Interaction \n to Predict Using Contraceptives",
     ylab = "Contraceptive Use",
     xlab = "Age")

detach(cuse)

Interpreting the graph: The legend tells us that the black line with circles represents women who DON’T want more kids and the red dashed line with triangles represents women who DO want more kids. In general, across both groups, the older women are, the more likely they are to use contraceptives. However, we also see a Group-by-Age interaction: the slope is much steeper for women who DON’T want more kids, who become markedly more likely to use contraceptives with age than women who DO want more kids. In other words, age has less of an impact on contraceptive use among women who DO want more kids.
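We can also see the interaction numerically. As a sketch (plugging in the coefficient values exactly as printed in the summary above, so expect small rounding differences from the model’s own fitted values), here are the fitted probabilities of contraceptive use for the oldest age group (40–49) at each level of wantsMore_copy:

```r
# Fitted probability for women aged 40-49 who DON'T want more kids
# (intercept + age40-49 coefficient, on the logit scale):
plogis(-1.4553 + 1.7643)                    # ~0.58

# Fitted probability for women aged 40-49 who DO want more kids
# (add the wantsMore_copy1 main effect and the interaction term):
plogis(-1.4553 + 1.7643 - 0.0640 - 1.3672)  # ~0.25
```

The roughly 33-percentage-point gap between the two groups at ages 40–49 is exactly what the steep black line versus the flatter red line shows in the plot.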

# 3 Poisson Regression

## 3.1 That, but Change the family to “Poisson”

No, but seriously, here’s the entire Poisson section from Robert I. Kabacoff’s Quick-R blog at http://www.statmethods.net/advstats/glm.html:

# Poisson Regression
# where count is a count and
# x1-x3 are continuous predictors
# fit <- glm(count ~ x1 + x2 + x3, data = mydata, family = poisson())
# summary(fit) # display results

See? I wasn’t joking. But let’s do one for real. Here’s data I got from http://www.theanalysisfactor.com/generalized-linear-models-in-r-part-6-poisson-regression-count-variables/ by David Lillis.

It’s about cases (i.e., counts) of disease among high school students by number of days after an outbreak. Here’s the data, called ‘cases.’ Each time, run the whole chunk at once or it won’t work.

cases <-
structure(list(Days = c(1L, 2L, 3L, 3L, 4L, 4L, 4L, 6L, 7L, 8L,
8L, 8L, 8L, 12L, 14L, 15L, 17L, 17L, 17L, 18L, 19L, 19L, 20L,
23L, 23L, 23L, 24L, 24L, 25L, 26L, 27L, 28L, 29L, 34L, 36L, 36L,
42L, 42L, 43L, 43L, 44L, 44L, 44L, 44L, 45L, 46L, 48L, 48L, 49L,
49L, 53L, 53L, 53L, 54L, 55L, 56L, 56L, 58L, 60L, 63L, 65L, 67L,
67L, 68L, 71L, 71L, 72L, 72L, 72L, 73L, 74L, 74L, 74L, 75L, 75L,
80L, 81L, 81L, 81L, 81L, 88L, 88L, 90L, 93L, 93L, 94L, 95L, 95L,
95L, 96L, 96L, 97L, 98L, 100L, 101L, 102L, 103L, 104L, 105L,
106L, 107L, 108L, 109L, 110L, 111L, 112L, 113L, 114L, 115L),
Students = c(6L, 8L, 12L, 9L, 3L, 3L, 11L, 5L, 7L, 3L, 8L,
4L, 6L, 8L, 3L, 6L, 3L, 2L, 2L, 6L, 3L, 7L, 7L, 2L, 2L, 8L,
3L, 6L, 5L, 7L, 6L, 4L, 4L, 3L, 3L, 5L, 3L, 3L, 3L, 5L, 3L,
5L, 6L, 3L, 3L, 3L, 3L, 2L, 3L, 1L, 3L, 3L, 5L, 4L, 4L, 3L,
5L, 4L, 3L, 5L, 3L, 4L, 2L, 3L, 3L, 1L, 3L, 2L, 5L, 4L, 3L,
0L, 3L, 3L, 4L, 0L, 3L, 3L, 4L, 0L, 2L, 2L, 1L, 1L, 2L, 0L,
2L, 1L, 1L, 0L, 0L, 1L, 1L, 2L, 2L, 1L, 1L, 1L, 1L, 0L, 0L,
0L, 1L, 1L, 0L, 0L, 0L, 0L, 0L)), .Names = c("Days", "Students"
), class = "data.frame", row.names = c(NA, -109L))

attach(cases)

Behold, the Poisson distribution! Notice the mode at one extreme and the long tail trailing off to the other side; this shape is characteristic of Poisson distributions.

plot(Days, Students, xlab = "Days", ylab = "Students", pch = 16)
title(main = "Spread of Teenage Disease Post-Outbreak", sub = NULL,
line = NA, outer = FALSE) 
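As noted in the overview, a defining property of the Poisson distribution is that its mean equals its variance. A quick simulation sketch illustrates this (the seed and lambda here are arbitrary choices for illustration, not taken from the disease data):

```r
set.seed(1)                   # arbitrary seed, for reproducibility
x <- rpois(1e5, lambda = 4)   # 100,000 draws from a Poisson with mean 4
mean(x)   # close to 4
var(x)    # also close to 4: mean and variance coincide
```

If your real count data show a variance much larger than the mean, that is the overdispersion (often zero inflation) discussed in the overview, and a plain Poisson model may not be appropriate.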

## 3.2 Seriously, Just Change Family

PoissonModel <- glm(Students ~ Days, family = poisson)
summary(PoissonModel)
##
## Call:
## glm(formula = Students ~ Days, family = poisson)
##
## Deviance Residuals:
##      Min        1Q    Median        3Q       Max
## -2.00482  -0.85719  -0.09331   0.63969   1.73696
##
## Coefficients:
##              Estimate Std. Error z value Pr(>|z|)
## (Intercept)  1.990235   0.083935   23.71   <2e-16 ***
## Days        -0.017463   0.001727  -10.11   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for poisson family taken to be 1)
##
##     Null deviance: 215.36  on 108  degrees of freedom
## Residual deviance: 101.17  on 107  degrees of freedom
## AIC: 393.11
##
## Number of Fisher Scoring iterations: 5
exp(coef(PoissonModel))
## (Intercept)        Days
##   7.3172529   0.9826884
exp(confint(PoissonModel))
## Waiting for profiling to be done...
##                2.5 %    97.5 %
## (Intercept) 6.188427 8.6004621
## Days        0.979322 0.9859775
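Before interpreting, it’s worth a quick overdispersion check: when the residual deviance is much larger than its degrees of freedom, the Poisson assumption is suspect and a quasi-Poisson or negative binomial model may fit better. A rough sketch using the values printed in the summary above:

```r
# Rule of thumb: residual deviance / residual df should be near 1
# for a well-behaved Poisson model (values from the summary above):
101.17 / 107   # ~0.95

# The same ratio computed from the fitted object:
# deviance(PoissonModel) / df.residual(PoissonModel)
```

Here the ratio is about 0.95, so overdispersion does not look like a problem for these data.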

Rather than odds ratios (which only apply to 0/1 outcomes), Poisson regression for count outcome variables uses rate ratios (also called incidence rate ratios): each exponentiated coefficient is the multiplicative change in the expected count.

Interpretation: for each one-unit increase in [predictor variable], the expected log count of [outcome variable] changes by [coefficient] (in the direction of the coefficient’s sign); equivalently, the expected count is multiplied by exp([coefficient]).
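As a worked example using the output above (coefficient value as printed, so expect tiny rounding differences):

```r
b_days <- -0.017463          # Days coefficient from the summary above

exp(b_days)                  # ~0.983, the rate ratio printed by exp(coef())
100 * (1 - exp(b_days))      # ~1.7: percent drop in expected cases per day
```

So the model says the expected number of new student cases shrinks by roughly 1.7% with each passing day after the outbreak.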