Likelihood Ratio Test in RMark


Likelihood Ratio Test in RMark

Postby justicecasey » Wed Mar 27, 2013 4:28 pm

Is it possible to run a likelihood ratio test in RMark? If so, what is the primary R function called, so I can look up its help files? Alternatively, is it discussed in Cooch and White (2012) or in any other document I could look up? Thanks.
justicecasey
 
Posts: 1
Joined: Tue Mar 26, 2013 7:37 pm

Re: Likelihood Ratio Test in RMark

Postby bacollier » Wed Mar 27, 2013 5:45 pm

justicecasey wrote:Is it possible to run a likelihood ratio test in RMark? If so, what is the primary R function called, so I can look up its help files? Alternatively, is it discussed in Cooch and White (2012) or in any other document I could look up? Thanks.


You can do it in the MARK interface, but I don't see it set up anywhere in RMark, and I don't think RMark's output is structured so that R functions like lrtest() will work on it.

If I remember correctly, LRTs are discussed in C&W and in the MARK help index, so perhaps you should start there.

bret
bacollier
 
Posts: 230
Joined: Fri Nov 26, 2004 10:33 am
Location: Louisiana State University

Re: Likelihood Ratio Test in RMark

Postby jlaake » Wed Mar 27, 2013 6:10 pm

LRTs are very easy to construct, but they only work with nested models. Why use an LRT instead of AIC?

All you need is model$results$lnl for each model and the difference in the number of parameters; then use pchisq to compute the probability.

Here is an example. Note that the lnl field is actually -2lnL. mod.0 must be nested in mod.1 (i.e., mod.1 must contain all of mod.0's parameters plus more).

Code: Select all
data(dipper)
# Constant model: Phi(.)p(.)
mod.0=mark(dipper)
# More general model: Phi(.)p(time); mod.0 is nested within it
mod.1=mark(dipper,model.parameters=list(p=list(formula=~time)))
# LRT: difference in -2lnL is chi-square with df = difference in parameter counts
1-pchisq((mod.0$results$lnl-mod.1$results$lnl),df=mod.1$results$npar-mod.0$results$npar)
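
If you do this a lot, a small convenience wrapper saves retyping. This is just a hypothetical helper (not an RMark function), assuming both arguments are mark() model objects and that 'simpler' is nested within 'general':

Code: Select all
# Hypothetical helper (not part of RMark): LRT for two nested mark() models.
# Assumes 'simpler' is nested within 'general'; recall that results$lnl is -2*lnL.
lrt=function(simpler,general){
  chisq=simpler$results$lnl-general$results$lnl
  df=general$results$npar-simpler$results$npar
  data.frame(chisq=chisq,df=df,p=1-pchisq(chisq,df))
}
lrt(mod.0,mod.1)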


Also, note that for nested models that differ by one parameter, an LRT chi-square of 2 corresponds to identical AIC values, so if you were to treat small differences in AIC as a hypothesis test (not saying you should), the LRT would be equivalent to using alpha = 0.157 (1-pchisq(2,1) = 0.157).
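
To see that equivalence numerically with the two dipper models above, here is a quick sketch (plain AIC computed by hand from -2lnL; RMark itself reports AICc, which adds a small-sample correction):

Code: Select all
aic.0=mod.0$results$lnl+2*mod.0$results$npar   # AIC = -2lnL + 2K
aic.1=mod.1$results$lnl+2*mod.1$results$npar
chisq=mod.0$results$lnl-mod.1$results$lnl
ddf=mod.1$results$npar-mod.0$results$npar
# The AIC difference equals 2*ddf minus the LRT chi-square,
# so identical AIC values imply chisq = 2 when ddf = 1.
aic.1-aic.0
2*ddf-chisq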

--jeff
jlaake
 
Posts: 1417
Joined: Fri May 12, 2006 12:50 pm
Location: Escondido, CA

Re: Likelihood Ratio Test in RMark

Postby jCeradini » Mon Jan 16, 2017 4:37 pm

Jeff -

I found this post to be very informative with regard to thinking about AIC and so-called uninformative parameters, as described by Burnham and Anderson (2002, p. 131) and Arnold (2010, JWM). In the discussion on pinpointing uninformative parameters, the authors use phrases like "essentially the same values of the maximized log-likelihood" or "essentially identical...", which I've always struggled to understand.

If I'm interpreting your post correctly,
Code: Select all
1 - pchisq(2, df = 1)
[1] 0.1572992

means that two nested models that differ by one parameter and have identical AIC values will differ in deviance by approximately 2 (which is certainly the case in a few of my model sets that I quickly checked). This corresponds to an LRT p-value of ~0.157 for a comparison of those two models.

So, when thinking about AIC and uninformative parameters, we're really thinking about what alpha level we want to use for a comparison...which I know sounds dangerous/ill-advised since it is clearly mixing two fundamentally different modeling approaches (hypothesis testing vs. IT/MMI). Nonetheless, it really helps me to understand what may be implied by "essentially identical" NLL values or, at least, what a meaningful difference may be.

Assuming nested models:
Code: Select all
1 - pchisq(2, df = 1) # deviance diff of 2
[1] 0.1572992

1 - pchisq(1, df = 1) # deviance diff of 1
[1] 0.3173105

1 - pchisq(0.5, df = 1) # deviance diff of 0.5
[1] 0.4795001

1 - pchisq(0, df = 1) # deviance diff of 0
[1] 1

Perhaps this muddies the waters more because AIC and alpha levels clearly are not used together, but it is useful for me to understand the connection. It is also obviously important to acknowledge/understand uninformative parameters and how they can influence AIC model selection, which puts me back in the odd position of feeling like I'm mixing AIC and hypothesis testing :?

Joe
jCeradini
 
Posts: 72
Joined: Mon Oct 13, 2014 3:53 pm

Re: Likelihood Ratio Test in RMark

Postby jlaake » Tue Jan 17, 2017 11:57 am

I'm glad it was helpful, but be careful in your interpretation: it depends on which model has the better AIC. I believe you are interpreting it as if the model with one additional parameter has the lower AIC and differs by 2 from the model with one fewer parameter. When it is the opposite, then p = 1 because the log-likelihoods are essentially the same even though you added another parameter. Look at how the LRT is constructed. The models will be within a delta AIC of 2, but the additional parameter added nothing; that is the case discussed by B&A.

Where the alpha/p interpretation is comparable is when the AIC values are nearly the same. That happens when the added parameter reduces the -2*log-likelihood by 2, so that once you apply the penalty of 2 the AIC values are the same. But note that this comparison of LRT and AIC only holds for nested models that differ by one parameter. When you go to more than one parameter, the alpha value changes. For example, when you compare nested models that differ by 2, 3, and 4 parameters, the critical alpha drops:

> 1-pchisq(4,2)
[1] 0.1353352832
> 1-pchisq(6,3)
[1] 0.1116102251
> 1-pchisq(8,4)
[1] 0.09157819444
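
More generally, a quick sketch of the pattern (here k is the number of extra parameters in the larger nested model, and the cutoff is 2*k because each added parameter costs 2 AIC units):

> k <- 1:6
> round(1 - pchisq(2 * k, df = k), 4)
[1] 0.1573 0.1353 0.1116 0.0916 0.0752 0.0620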

The other problem with LRT is that it is only appropriate for nested models.

--jeff
jlaake
 
Posts: 1417
Joined: Fri May 12, 2006 12:50 pm
Location: Escondido, CA

Re: Likelihood Ratio Test in RMark

Postby jCeradini » Tue Jan 17, 2017 12:24 pm

Thanks for the expanded explanation, Jeff. Very helpful as usual.

It is certainly a drawback that the LRT only applies to nested models, and that there are often multiple comparisons that can be made (as discussed in the Gentle MARK book, section 4.6.2). However, many researchers build model sets with all possible combinations of predictors (not that that is necessarily the best approach...), and are then comparing lots of nested models. In my experience, when you do this there's a high chance you'll run into highly ranked models with uninformative predictors (depending on how many predictors are in the set). It then becomes quite important to know how to judge uninformative models, especially when interpreting model weights. The LRT is an interesting alternative that is designed to deal with nested models, perhaps in a more straightforward way than AIC, and it also sheds some light on what is meant by uninformative parameters... although my model sets are often a mix of nested and non-nested models, so I'll probably be sticking with AIC.

Relevant point from Arnold 2010 JWM (1177):
"For n/K > 40, AIC-based model selection will support additional variables whose approximately 85% confidence intervals exclude zero (i.e., if likelihood-ratio chi-squared > 2 on 1 degree of freedom, then P < 0.157). It makes little sense to select variables at P < 0.157 using AIC and then turn around and dismiss them at P > 0.05 using 95% confidence intervals"

Joe
jCeradini
 
Posts: 72
Joined: Mon Oct 13, 2014 3:53 pm

Re: Likelihood Ratio Test in RMark

Postby jlaake » Tue Jan 17, 2017 2:57 pm

Most situations do involve a mix of nested and non-nested models, which is what makes AIC more useful.

With regard to the Arnold (2010) comment, I agree completely.
jlaake
 
Posts: 1417
Joined: Fri May 12, 2006 12:50 pm
Location: Escondido, CA

Re: Likelihood Ratio Test in RMark

Postby cooch » Tue Jan 17, 2017 8:18 pm

jCeradini wrote:Jeff -

I found this post to be very informative with regard to thinking about AIC and so-called uninformative parameters, as described by Burnham and Anderson (2002, p. 131) and Arnold (2010, JWM). In the discussion on pinpointing uninformative parameters, the authors use phrases like "essentially the same values of the maximized log-likelihood" or "essentially identical...", which I've always struggled to understand.


There is text in the MARK book related to this issue -- Chapter 4, pp. 61-62.
cooch
 
Posts: 1628
Joined: Thu May 15, 2003 4:11 pm
Location: Cornell University

Re: Likelihood Ratio Test in RMark

Postby jCeradini » Wed Jan 18, 2017 2:13 pm

Great, thanks Evan. I had not seen that (new?) sidebar. That figure helps to solidify what Jeff was explaining. It's an interesting interaction between the number of parameters added and the p-value at which AIC "retains" a predictor - an important aspect of AIC that I don't think many people understand (I certainly didn't fully understand it until about 10 minutes ago :)).

Joe
jCeradini
 
Posts: 72
Joined: Mon Oct 13, 2014 3:53 pm

Re: Likelihood Ratio Test in RMark

Postby cooch » Thu Jan 19, 2017 12:27 pm

jCeradini wrote:Great, thanks Evan. I had not seen that (new?) sidebar. That figure helps to solidify what Jeff was explaining. It's an interesting interaction between the number of parameters added and the p-value at which AIC "retains" a predictor - an important aspect of AIC that I don't think many people understand (I certainly didn't fully understand it until about 10 minutes ago :)).

Joe


All credit to Gary for that bit...
cooch
 
Posts: 1628
Joined: Thu May 15, 2003 4:11 pm
Location: Cornell University

