Inference with Logistic Regression

2025-04-28

Motivating Example

  • Motivating Example

  • \(\beta\) Hypothesis Testing

  • Model Hypothesis Testing

Motivating Example

Code
library(ggplot2)

# bladder1 (with the derived death2 indicator) is assumed already loaded
bladder1 |> ggplot(aes(number, color = death2)) +
  geom_boxplot()

\(\beta\) Hypothesis Testing

  • Motivating Example

  • \(\beta\) Hypothesis Testing

  • Model Hypothesis Testing

Hypothesis

\[H_0: \beta = \theta\]

\[H_1: \beta \ne \theta\]

Testing \(\beta_j\)

Under \(H_0\), the Wald test statistic is approximately standard normal:

\[ \frac{\hat\beta_j - \theta}{\mathrm{se}(\hat\beta_j)} \sim N(0,1) \]

Confidence Intervals

\[ PE \pm CV \times SE \]

  • PE: Point Estimate

  • CV: Critical Value, the value satisfying \(P(Z < CV) = 1-\alpha/2\), where \(Z \sim N(0,1)\)

  • \(\alpha\): significance level

  • SE: Standard Error
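For example, for a 95% confidence interval (\(\alpha = 0.05\)) the test statistic is standard normal, so the critical value is

\[ CV = z_{1-\alpha/2} = z_{0.975} \approx 1.96 \]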

Conducting HT of \(\beta_j\)

Code
xlm <- glm(Y ~ X, data = DATA, family = binomial())
summary(xlm)

Example

Code
m1 <- glm(death ~ recur + number + size, bladder1, family = binomial())
summary(m1)
#> 
#> Call:
#> glm(formula = death ~ recur + number + size, family = binomial(), 
#>     data = bladder1)
#> 
#> Coefficients:
#>               Estimate Std. Error z value Pr(>|z|)    
#> (Intercept) -0.8525259  0.4462559  -1.910 0.056082 .  
#> recur       -0.3897480  0.1062848  -3.667 0.000245 ***
#> number       0.0008451  0.1124503   0.008 0.994004    
#> size        -0.2240419  0.1626749  -1.377 0.168439    
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#> 
#> (Dispersion parameter for binomial family taken to be 1)
#> 
#>     Null deviance: 189.38  on 293  degrees of freedom
#> Residual deviance: 166.43  on 290  degrees of freedom
#> AIC: 174.43
#> 
#> Number of Fisher Scoring iterations: 6
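The z value column in the output above is the Wald statistic with \(\theta = 0\); for recur:

\[ \frac{\hat\beta_{recur} - 0}{\mathrm{se}(\hat\beta_{recur})} = \frac{-0.3897}{0.1063} \approx -3.667 \]

with two-sided p-value 0.000245, so recur is a significant predictor at the \(\alpha = 0.05\) level.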

Confidence Interval

Code
confint(xlm, level = LEVEL)

Example

Code
confint(m1, level = 0.95)
#>                  2.5 %      97.5 %
#> (Intercept) -1.7353779  0.02529523
#> recur       -0.6217831 -0.20078281
#> number      -0.2421738  0.20731479
#> size        -0.5880581  0.06061498
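Note that confint() applied to a glm returns profile-likelihood intervals. A Wald interval built directly from the summary output is similar but not identical; for recur:

\[ -0.3897 \pm 1.96 \times 0.1063 \approx (-0.598,\ -0.181) \]

compared with the profile interval \((-0.622,\ -0.201)\) above.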

Confidence Interval for Odds Ratio

Code
exp(confint(m1, level = 0.95))
#>                 2.5 %    97.5 %
#> (Intercept) 0.1763335 1.0256179
#> recur       0.5369861 0.8180901
#> number      0.7849197 1.2303698
#> size        0.5554048 1.0624898
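Each interval here is the exponential of the corresponding interval on the log-odds scale; for recur:

\[ \left(e^{-0.622},\ e^{-0.201}\right) \approx (0.537,\ 0.818) \]

Because this interval excludes 1, each additional recurrence is associated with significantly lower odds of death.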

Model Hypothesis Testing

  • Motivating Example

  • \(\beta\) Hypothesis Testing

  • Model Hypothesis Testing

Model inference

We conduct model inference to determine whether one model explains the variation in the outcome better than another. A common example compares a model with a predictor (\(g(\hat Y)=\hat\beta_0 + \hat\beta_1 X\)) to an intercept-only model based on the mean of \(Y\) (\(\hat \mu_Y\)). In linear regression, we determine the significance of the variation explained using an Analysis of Variance (ANOVA) table and F test.

Model Inference

Given 2 models:

\[ g(\hat Y) = \hat\beta_0 + \hat\beta_1 X_1 + \hat\beta_2 X_2 + \cdots + \hat\beta_p X_p \]

or

\[ g(\hat Y) = \bar y \]

Does the model with predictors do a better job than simply using the average?

Likelihood Ratio Test

The Likelihood Ratio Test (LRT) determines whether the likelihood of observing the outcome is significantly greater under a larger, more complicated model than under a simpler, nested model.

It conducts a hypothesis test to see whether the two models are significantly different from each other.
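Formally, if \(\hat L_{red}\) and \(\hat L_{full}\) are the maximized likelihoods of the reduced and full models, the test statistic is

\[ G^2 = -2\log\frac{\hat L_{red}}{\hat L_{full}} = D_{red} - D_{full} \]

where \(D\) denotes residual deviance. Under \(H_0\), \(G^2\) follows a \(\chi^2\) distribution with degrees of freedom equal to the difference in the number of parameters between the two models.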

Conducting an LRT in R

Code
xlm <- glm(Y ~ X, data = DATA, family = binomial())
xlm0 <- glm(Y ~ 1, data = DATA, family = binomial())
anova(xlm0, xlm, test = "LRT")

Example

Code
m0 <- update(m1, formula. = ~ 1)
anova(m0, m1, test = "LRT")
#> Analysis of Deviance Table
#> 
#> Model 1: death ~ 1
#> Model 2: death ~ recur + number + size
#>   Resid. Df Resid. Dev Df Deviance Pr(>Chi)    
#> 1       293     189.38                         
#> 2       290     166.43  3   22.953 4.13e-05 ***
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
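The test statistic is the drop in deviance between the two models:

\[ G^2 = 189.38 - 166.43 = 22.95 \]

on \(293 - 290 = 3\) degrees of freedom, giving \(p = 4.13\times 10^{-5}\): the model with predictors fits significantly better than the intercept-only model.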

Model Inference

Model inference can be extended to compare models that have different numbers of predictors.

Model Inference

Given:

\[ M1:\ g(\hat y) = \beta_0 + \beta_1 X_1 + \beta_2 X_2 \]

\[ M2:\ g(\hat y) = \beta_0 + \beta_1 X_1 \]

Let \(M1\) be the FULL (larger) model, and let \(M2\) be the RED (Reduced, smaller) model.

Model Inference

We can test the following hypotheses:

  • \(H_0\): The error variation of the FULL model is not different from that of the RED model.
  • \(H_1\): The error variation of the FULL model is different from that of the RED model.

Likelihood Ratio Test in R

Code
full <- glm(Y  ~  X1 + X2 + X3 + X4, DATA, family = binomial())
red <- glm(Y ~ X1 + X2, DATA, family = binomial())
anova(red, full, test = "LRT")

Example

Code
m1 <- glm(death ~ number + size + recur, bladder1, family = binomial())
m2 <- glm(death ~ recur, bladder1, family = binomial())
anova(m2, m1, test = "LRT")
#> Analysis of Deviance Table
#> 
#> Model 1: death ~ recur
#> Model 2: death ~ number + size + recur
#>   Resid. Df Resid. Dev Df Deviance Pr(>Chi)
#> 1       292     168.72                     
#> 2       290     166.43  2   2.2883   0.3185
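Here the drop in deviance is

\[ G^2 = 168.72 - 166.43 = 2.29 \]

on \(292 - 290 = 2\) degrees of freedom (\(p = 0.3185\)). We fail to reject \(H_0\): adding number and size does not significantly improve on the model with recur alone.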