Jeff -
I found this post to be very informative with regard to thinking about AIC and so-called uninformative parameters, as described by Burnham and Anderson (2002, p. 131) and Arnold (2010, JWM). In their discussions of pinpointing uninformative parameters, the authors use phrases like "essentially the same values of the maximized log-likelihood" or "essentially identical...", which I've always struggled to interpret.
If I'm interpreting your post correctly,
Code:
1 - pchisq(2, df = 1)
[1] 0.1572992
means that two nested models that differ by one parameter and have identical AIC values will differ in deviance by exactly 2 (which is approximately the case in a few of my model sets I quickly checked). That deviance difference of 2 corresponds to a likelihood-ratio test (LRT) p-value of ~0.157 for a comparison of those two models.
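To convince myself, I ran a quick toy check (entirely my own simulated example, so the data and names like 'junk' are invented, not anything from your post): a logistic regression fit with and without a pure-noise covariate. Since AIC = deviance + 2K, the AIC difference here works out to exactly 2 minus the LRT statistic, so tied AICs mean an LRT statistic of 2.
Code:
# Toy check (my own simulated example): one uninformative parameter
set.seed(1)
n <- 200
x <- rnorm(n)
junk <- rnorm(n)                            # pure-noise covariate
y <- rbinom(n, 1, plogis(-0.5 + x))

m1 <- glm(y ~ x, family = binomial)         # reduced model
m2 <- glm(y ~ x + junk, family = binomial)  # one extra, uninformative parameter

deviance(m1) - deviance(m2)      # LRT statistic; ~chi-square(1) under the null
AIC(m2) - AIC(m1)                # equals 2 minus that LRT statistic
anova(m1, m2, test = "Chisq")    # p-value = 1 - pchisq(LRT, df = 1)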
So, when thinking about AIC and uninformative parameters, we're really thinking about what alpha level we want to use for a comparison... which I know sounds dangerous/ill-advised, since it clearly mixes two fundamentally different modeling approaches (hypothesis testing vs. IT/MMI). Nonetheless, it really helps me understand what may be implied by "essentially identical" negative log-likelihood values, or at least what a meaningful difference might be.
Assuming nested models that differ by one parameter:
Code:
1 - pchisq(2, df = 1) # deviance diff of 2
[1] 0.1572992
1 - pchisq(1, df = 1) # deviance diff of 1
[1] 0.3173105
1 - pchisq(0.5, df = 1) # deviance diff of 0.5
[1] 0.4795001
1 - pchisq(0, df = 1) # deviance diff of 0
[1] 1
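Generalizing that pattern, here's a tiny helper along the same lines (my own sketch and naming, nothing from your post): since AIC = deviance + 2K, nested models differing by df parameters have a deviance difference of delta_aic + 2*df, where delta_aic = AIC(reduced) - AIC(full).
Code:
# Sketch of a helper (my naming): the LRT p-value implied by an observed
# AIC difference between nested models.
# delta_aic = AIC(reduced) - AIC(full); df = number of extra parameters.
# Because AIC = deviance + 2K, the deviance difference is delta_aic + 2*df.
implied_p <- function(delta_aic, df = 1) {
  1 - pchisq(delta_aic + 2 * df, df = df)
}

implied_p(0)     # tied AICs              -> ~0.157
implied_p(2)     # full model better by 2 -> ~0.046
implied_p(-2)    # tied deviances         -> 1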
Perhaps this muddies the waters further, because AIC and alpha levels clearly are not used together, but it is useful for me to understand the connection. It is also obviously important to acknowledge and understand uninformative parameters and how they can influence AIC model selection, which puts me back in the odd position of feeling like I'm mixing AIC and hypothesis testing.
Joe