Standard error estimates of zero

questions concerning analysis/theory using program MARK

Postby Bryan Hamilton » Sun Dec 11, 2016 11:44 pm

I'm working on Pradel recruitment models. The time-varying model is strongly supported, with 23 of 25 parameters estimable. All of the beta estimates for f(t) have standard errors of zero, except one occasion, whose confidence interval runs from roughly -400 to 400. When I add a time parameter to Phi, the standard errors for f are estimated.

So there are obviously some major problems with the most favored model. I've checked the manual and tried to search the forum, but haven't found much beyond the fact that standard errors of zero indicate a problem with the model. So, three questions:

1. How do I troubleshoot where the problem might be in the model structure?
2. Are there some ways to constrain time that might allow the standard errors to be estimated?
3. Why would complicating the model structure, going from 25 to 35 parameters, allow the standard errors to be estimated?

I appreciate any advice on this. Thank you.
Bryan Hamilton
 
Posts: 111
Joined: Thu Jun 15, 2006 11:36 am
Location: Great Basin National Park

Re: Standard error estimates of zero

Postby abreton » Mon Dec 12, 2016 7:19 am

1. How do I troubleshoot where the problem might be in the model structure?

I suspect the issue is that the data are (keep reading) insufficient to estimate all of the parameters of the fully time-dependent model.

2. Are there some ways to constrain time that might allow the standard errors to be estimated?

Use time-varying and/or individual covariates to model variation over time in place of the fully time-dependent structure you have now. Alternatively, reduce the time structure in other ways, for example by grouping occasions into 'good' and 'bad' years rather than letting every year differ.
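In RMark terms, this amounts to replacing f(~time) with a covariate formula. A minimal sketch, assuming RMark syntax; 'ch' (a data frame with a capture-history column) and 'rain' (an annual covariate with made-up placeholder values) are hypothetical names, not from the original posts:

```r
library(RMark)

proc <- process.data(ch, model = "Pradrec")
ddl  <- make.design.data(proc)

# Attach a hypothetical occasion-level covariate to the f design data,
# one value per occasion, keyed by the 'time' factor in ddl$f
rain.by.year <- data.frame(time = levels(ddl$f$time),
                           rain = rnorm(nlevels(ddl$f$time)))  # placeholder values
ddl$f <- merge_design.covariates(ddl$f, rain.by.year)

# f driven by the covariate (2 betas) instead of freely by year (t-1 betas)
mod <- mark(proc, ddl,
            model.parameters = list(Phi = list(formula = ~1),
                                    p   = list(formula = ~1),
                                    f   = list(formula = ~rain)))
```

The same idea works for a two-level 'good'/'bad' year factor: add it as a column of ddl$f and use f = list(formula = ~yearclass).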

3. Why would complicating the model structure, going from 25 to 35 parameters, allow the standard errors to be estimated?

It may appear that the increase in parameters resolved the issue, but I suspect a thorough review of what is and is not estimated would show that the problems remain. I suggest you use estimates from a simpler model as 'initial parameter estimates' (see the options on the set-up / run model window in MARK) for each more complex model. Incrementally work your way up to the fully time-dependent model with this strategy and you may get usable results. In general, because data are almost always 'sparse' relative to the realistic models we want to run, this approach of "pulling yourself up by your bootstraps" (using estimates from a simpler model as starting values for a slightly more complex one) is always a good idea ... even when no problems are anticipated.
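For RMark users, the same bootstrapping strategy can be sketched via the 'initial' argument of mark(), which accepts a previously fitted model and matches betas by name. A hedged sketch, with 'ch' again a hypothetical capture-history data frame:

```r
library(RMark)

proc <- process.data(ch, model = "Pradrec")
ddl  <- make.design.data(proc)

dot  <- list(formula = ~1)
tvar <- list(formula = ~time)

# 1. Fit the constant (dot) model first
m.dot <- mark(proc, ddl,
              model.parameters = list(Phi = dot, p = dot, f = dot))

# 2. Feed its betas in as starting values for the more complex model
m.ft <- mark(proc, ddl,
             model.parameters = list(Phi = dot, p = dot, f = tvar),
             initial = m.dot)
```

Repeating step 2 with progressively richer formulas walks you up to the fully time-dependent model.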
abreton
 
Posts: 111
Joined: Tue Apr 25, 2006 8:18 pm
Location: Insight Database Design and Consulting

Re: Standard error estimates of zero

Postby Bryan Hamilton » Sat Dec 17, 2016 10:04 pm

Thank you abreton for the thoughtful response. I think you are probably right that the data are insufficient to model full time dependence. I added a climate variable as a temporal covariate and the model converged nicely, with standard errors estimated.

Thanks also for the advice on pulling yourself up by the bootstraps. edit: I did try setting the initial parameter estimates based on a dot model for f. It actually made things worse, with one fewer parameter estimated and still no standard errors. I'll try again tomorrow.

Interestingly, simpler models for p and Phi did not help the fully time-dependent model for recruitment. Estimates of f must depend on good estimates of Phi as well as p. I should probably re-read the MARK book and the papers on temporal symmetry models...
Bryan Hamilton

Re: Standard error estimates of zero

Postby abreton » Mon Dec 19, 2016 8:19 am

I should have stressed that the subtleties, especially confounding, in realistic models like the ones you're trying to deploy should also be considered thoroughly when problems arise. An error in the specification of a model such as Pradel's recruitment model would certainly lead to problems. Another review, as you mentioned, of the relevant chapters in the 'gentle' intro is a good idea, as are the primary references for Pradel's recruitment model. Along the way, you might also consider simpler parameterizations for your analysis, such as the joint live-dead Barker model, that would answer your research questions without introducing unreasonable assumptions about the process that gave rise to your data. Have a look at Chapter 9 in the MARK book, 'Joint live encounter & dead recovery data'.
abreton

Re: Standard error estimates of zero

Postby Bryan Hamilton » Mon Dec 19, 2016 5:50 pm

I'm not sure how to address confounding beyond looking at correlations between explanatory variables. I am re-reading the gentle introduction, and there is a recommendation in Chapter 13 to model Phi, p, and f with full time dependence. A fully time-dependent model makes things worse in terms of the number of parameters that can be estimated.

Interestingly, and hard for me to understand, when I use a Pradel lambda model the time parameters all converge and some of the model-selection inferences change. In principle, the lambda and recruitment models are based on the same equations, so model selection and parameter estimates should be identical...

edit: further confusing things, the MARK output file does report standard errors; RMark does not.

edit 2: the MARK output file gives standard errors and confidence intervals for the real estimates, not the beta estimates. I don't know how to interpret that, but it possibly deserves another thread for discussion?
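For anyone comparing the two scales in RMark, both tables can be inspected on the fitted model object. A sketch assuming the usual RMark results structure ('m.ft' is a hypothetical fitted model from mark()):

```r
# Link-scale (beta) estimates with se, lcl, ucl columns
m.ft$results$beta

# Back-transformed (real) estimates and their confidence intervals
m.ft$results$real

# Betas flagged as non-estimable by MARK's numerical check, if any
m.ft$results$singular
```

Betas with SEs of zero but seemingly usable real SEs often indicate parameters at a boundary or confounded, so the two tables can disagree in exactly the way described above.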

I will check out Chapter 9. Thank you.
Bryan Hamilton

