I've been searching the forums and haven't had much luck finding an answer to this.
I'm getting the "wonky" SEs (implausible standard errors) in many of my models.
I'm evaluating a set of 49 robust design (RD) closed population estimation models. The goals are to estimate N, to test whether counts vary linearly with N, and to test whether detection probability is constant over time. I've set S to be constant (a somewhat, but not totally, arbitrary decision), partly because almost all models with S varying by time return ridiculous estimates (i.e., partial convergence issues). Even within this set of 49 S(.) models, many contain wonky SEs.
I realize this could be due to data sparseness, etc., but I'm uncertain what exactly can be done about it. Some models in this group work just fine, even with more parameters than some that return nonsense estimates.
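For concreteness, here is a rough sketch of the kind of S(.) model I mean, written with RMark's "Robust" model (an assumption about the interface on my part; rd.data, intervals, and the primary/secondary structure are hypothetical placeholders, not my actual data):

library(RMark)

# 0s separate secondary occasions within a primary period, 1s separate primaries
intervals <- c(0, 0, 1, 0, 0, 1, 0, 0)   # e.g. 3 primaries x 3 secondaries

rd.proc <- process.data(rd.data, model = "Robust", time.intervals = intervals)
rd.ddl  <- make.design.data(rd.proc)

s.dot <- mark(rd.proc, rd.ddl,
              model.parameters = list(
                S                = list(formula = ~1),   # survival held constant
                GammaPrime       = list(formula = ~1),
                GammaDoublePrime = list(formula = ~1),
                p                = list(formula = ~session, share = TRUE),  # p = c within a session
                f0               = list(formula = ~session)))

# Implausibly large beta SEs are what I mean by "wonky"
s.dot$results$beta
s.dot$results$derived   # session-specific N-hat (derived under the full likelihood)

All 49 models share the S(.) piece; they differ only in the detection and gamma structures.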
Questions:
(1) Is it legitimate to simply toss out models with convergence issues and compare the rest based on AIC? Is there a science to doing this? (A rough mechanical screen for what I'd toss is sketched after this list.)
(2) Can it ever be legitimate to keep a model with some convergence issues, say when one gamma parameter is wonky, but everything else looks "fine"?
(3) Is it ever acceptable to constrain parameters simply on the basis of numerical convenience, without a strong biological reason (as I have done with S)? I've read published studies that do similar things, and I can make a case for S being relatively constant, but I feel guilty about doing it... I suspect the saints of mark-recapture are trying to tell me something.
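Regarding (1), this is roughly how I'd flag the problem models before rebuilding the AIC table, again sketched with RMark and assuming the 49 models sit in a marklist called rd.models; the SE threshold of 5 on the beta scale is an arbitrary flag, not a formal test:

library(RMark)

se.too.big <- function(m, threshold = 5) {
  any(!is.finite(m$results$beta$se)) || any(m$results$beta$se > threshold)
}

n.mods  <- length(rd.models) - 1         # last element of a marklist is the model.table
suspect <- which(sapply(1:n.mods, function(i) se.too.big(rd.models[[i]])))

# Drop the flagged models and recompute the AIC table for the remainder
clean <- if (length(suspect)) remove.mark(rd.models, suspect) else rd.models
clean$model.table

But whether a purely numerical screen like that is defensible is exactly what I'm asking.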
Nate