AICc reliable when variance calculation fails?

Questions concerning analysis/theory using program DENSITY and R package secr. Focus on spatially explicit analysis.

AICc reliable when variance calculation fails?

Postby howeer » Fri Jan 18, 2013 12:49 pm

If I get the warning message "variance calculation failed" after fitting an secr model (conditional likelihood), are the LL and AICc values still reliable, or is e.g. a convergence problem indicated?

Thanks,
Eric
howeer
 
Posts: 39
Joined: Wed Jun 21, 2006 10:49 am

Re: AICc reliable when variance calculation fails?

Postby murray.efford » Fri Jan 18, 2013 2:06 pm

Eric

The message is given when the inverse of the Hessian matrix has any negative elements on its diagonal (the square roots of those diagonal elements otherwise give the SE of each 'beta' parameter). This usually reflects an underlying problem with the model or data (non-identifiability, or a parameter at a boundary). Although for historical reasons the code suggests trying BFGS as an alternative maximization method, (i) Nelder-Mead tends to be more robust and might be the better choice, and (ii) the problem is probably not in the maximization anyway.
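
To make the mechanism concrete, here is a minimal sketch (a toy example, not secr-specific; the function and object names are illustrative): the SEs are the square roots of the diagonal of the inverse Hessian, so a failed inversion or a non-positive diagonal element produces exactly this kind of warning.

Code:
## Toy illustration of the variance calculation described above;
## negLL is a made-up negative log-likelihood, not an secr model.
negLL <- function(beta) 0.5 * sum(beta^2)
fit <- optim(c(0, 0), negLL, hessian = TRUE, method = "BFGS")
vcv <- try(solve(fit$hessian), silent = TRUE)    # inverse Hessian
if (inherits(vcv, "try-error") || any(diag(vcv) <= 0)) {
    warning("variance calculation failed")       # analogous to the secr message
} else {
    sqrt(diag(vcv))                              # SE of each 'beta'
}
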

As I understand it, the maximized LL and AIC are 'correct', but one should reject as unusable a model that has such a fundamental problem. The same question arises in MARK and elsewhere: see e.g. the bottom of p. C.60 in The Book, which shows that in this case (R) MARK does tabulate the AIC and LL, but then rejects the model for model averaging.

I would be interested in other views.
Murray
murray.efford
 
Posts: 686
Joined: Mon Sep 29, 2008 7:11 pm
Location: Dunedin, New Zealand

Re: AICc reliable when variance calculation fails?

Postby Granjon » Mon Feb 04, 2013 4:31 am

I also get the warning "variance calculation failed" on one of my models, but I really don't see what the problem might be.

I am studying a chimpanzee population and trying to estimate population size from fecal samples. I am comparing different models fitted to the same capture and polygon data, and eventually I will also compare them by AIC. My problem is that the variance calculation fails for only one model:

> ngogosex <- read.capthist('CaptXYsex-2013-01-08.txt', 'Perimeter-2013-02-01.txt',
+ fmt = 'XY', detector = 'polygon', covnames="Sex")
No errors found :-)

> ngogosex.fit1=secr.fit(ngogosex, trace=FALSE, buffer=5000, groups="Sex",
+ model=list(D~g, g0~g), method='BFGS')
Warning message:
In secr.fit(ngogosex, trace = FALSE, buffer = 5000, groups = "Sex", :
using default starting values

> ngogosex.fit2=secr.fit(ngogosex, trace=TRUE, buffer=5000, groups="Sex",
+ model=D~g, method='BFGS')
Warning messages:
1: In secr.fit(ngogosex, trace = FALSE, buffer = 5000, groups = "Sex", :
using default starting values
2: In secr.fit(ngogosex, trace = FALSE, buffer = 5000, groups = "Sex", :
variance calculation failed


I have already run ngogosex.fit2 with a different polygon file and it worked fine, so I really do not understand what could be the problem here.

Thanks for your help,
Anne-Celine
Granjon
 
Posts: 9
Joined: Sat Feb 02, 2013 11:01 am

Re: AICc reliable when variance calculation fails?

Postby murray.efford » Mon Feb 04, 2013 4:26 pm

It does seem odd that you can fit the more general model, but not one with g0 independent of Sex. My first step would be to try method = 'Nelder-Mead'. I may be able to see something by looking at the data, so feel free to send them to me offline.
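
For example (just a sketch, reusing the call from your session above with only the method changed):

Code:
## Refit the problem model with Nelder-Mead
ngogosex.fit2.NM <- secr.fit(ngogosex, trace = FALSE, buffer = 5000,
    groups = "Sex", model = D ~ g, method = 'Nelder-Mead')
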
Murray
murray.efford
 
Posts: 686
Joined: Mon Sep 29, 2008 7:11 pm
Location: Dunedin, New Zealand

Re: AICc reliable when variance calculation fails?

Postby Granjon » Tue Feb 05, 2013 4:25 am

Thanks, it worked with method='Nelder-Mead'.

I don't really understand what the difference is between BFGS and Nelder-Mead though. Also, now that I used Nelder-Mead for the basic model (D~g), should I also use it for the other model (D~g, g0~g) in order to compare them?

Anne-Celine
Granjon
 
Posts: 9
Joined: Sat Feb 02, 2013 11:01 am

Re: AICc reliable when variance calculation fails?

Postby murray.efford » Tue Feb 05, 2013 5:12 am

"Thanks, it worked with method='Nelder-Mead'."
Great.
"I don't really understand what the difference is between BFGS and Nelder-Mead though. Also, now that I used Nelder-Mead for the basic model (D~g), should I also use it for the other model (D~g, g0~g) in order to compare them?"

BFGS, Nelder-Mead and Newton-Raphson (the default) are different numerical algorithms for finding the maximum of the likelihood; they should produce the same result, give or take a trivial amount determined by the tolerance arguments of nlm and optim. Try this if you need to be convinced:
Code:
fit.NM <- secr.fit(captdata, method = 'Nelder-Mead', trace = FALSE)
fit.BFGS <- secr.fit(captdata, method = 'BFGS', trace = FALSE)

AIC(fit.NM, fit.BFGS)
                    model   detectfn npar    logLik     AIC     AICc dAICc AICwt
fit.NM   D~1 g0~1 sigma~1 halfnormal    3 -759.0198 1524.04 1524.373     0   0.5
fit.BFGS D~1 g0~1 sigma~1 halfnormal    3 -759.0198 1524.04 1524.373     0   0.5

collate(fit.NM, fit.BFGS)[,,,'D']
         estimate SE.estimate      lcl      ucl
fit.NM   5.481091   0.6468860 4.352655 6.902077
fit.BFGS 5.478994   0.6467626 4.350799 6.899740


Yes, the density estimates differ numerically, but the difference is biologically and statistically trivial. Nelder-Mead tends to be the most robust, but it is marginally slower. If you stick with one method it does make it easier to write up cleanly, but I suppose it's not essential.
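
If you do want your two models on exactly the same footing, one option (a sketch, with illustrative object names) is to refit both with the same method and then tabulate them together:

Code:
## Sketch: fit both models with the same method before comparing
fitD  <- secr.fit(ngogosex, buffer = 5000, groups = "Sex",
                  model = D ~ g, method = 'Nelder-Mead', trace = FALSE)
fitDg <- secr.fit(ngogosex, buffer = 5000, groups = "Sex",
                  model = list(D ~ g, g0 ~ g), method = 'Nelder-Mead',
                  trace = FALSE)
AIC(fitD, fitDg)    # the secr AIC table includes AICc
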
Murray
murray.efford
 
Posts: 686
Joined: Mon Sep 29, 2008 7:11 pm
Location: Dunedin, New Zealand

