Hi,
I'm running a single-season occupancy model, and my top two models are 1) psi(elevation) and 2) psi(elevation*SC5), where SC5 = % of size class 5 trees within the sample unit. Although the interaction model ranks at the top, I'm questioning whether something strange is occurring in it that might be pulling it to the top of the set. The two models have nearly identical delta AIC (0.00 vs. 0.01), AIC weights, and deviance, which seems strange to me as a novice to occupancy modeling (please correct me if that's not odd). I've double-checked my design matrix and nothing appears wrong that would make the two models almost identical (i.e., one beta for elevation, one for SC5, and one for the product of elevation and SC5).
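For what it's worth, near-tied delta AIC values force near-tied Akaike weights by construction, since the weights are just a function of the deltas. A minimal sketch with hypothetical AIC values (not my actual MARK output) shows this:

```python
import math

# Hypothetical AIC values for illustration only -- substitute real MARK output
aics = {
    "psi(elev*SC5)": 410.00,
    "psi(elev)":     410.01,
    "psi(elev+SC5)": 412.30,
    "psi(SC5)":      415.70,
}

best = min(aics.values())
delta = {m: a - best for m, a in aics.items()}

# Akaike weight: w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2)
rel = {m: math.exp(-d / 2) for m, d in delta.items()}
total = sum(rel.values())
weights = {m: r / total for m, r in rel.items()}

for m in aics:
    print(f"{m:14s} dAIC={delta[m]:5.2f}  w={weights[m]:.3f}")
```

With a delta of only 0.01 between the top two models, their weights differ by a factor of exp(-0.005) ≈ 0.995, so nearly identical weights are expected whenever AIC is nearly tied; the deviance similarity is the part that would still need explaining.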
The beta estimates for elevation seem reasonable in both models, but the odds ratio derived from the SC5 beta in the interaction model seems unreasonable (i.e., SC5 beta = 0.164, OR = 1.178 per 1% change in SC5 per sample unit; since odds ratios multiply, a 20% increase in SC5 would multiply the odds of occupancy by exp(0.164 × 20) ≈ 26.6). In the additive model psi(elev + SC5) and the univariate model psi(SC5) the odds ratios are reasonable, so I'm wondering if something is going on with the interaction model that is pulling it to the top. I've tried looking into 'pretending variables' and reading through the MARK book for any information on why this might be occurring, so any insight or direction on where to look for help/interpretation would be greatly appreciated.
I noticed that MARK Ch. 4 (p. 51) talks about 'problem models' when model averaging parameter estimates. It states that "if the 'problem models' have appreciable support in the data, you'll need to be more careful [when considering dropping models for estimating parameters]. You might choose simply to average only those 'well-estimated' parameters, but you need to first confirm that those models aren't well-supported because of poorly estimated parameters." This is what I'm confused by. I'm not sure 1) how to determine whether a model is a 'problem model' (i.e., what counts as one), and 2) how to determine whether my 2nd-best model is well-supported simply because of poorly estimated parameters.
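In case it helps frame my question, my understanding of the model-averaging arithmetic that chapter refers to is sketched below, with entirely hypothetical betas, SEs, and weights (not my real output). A 'problem model' contributing a wildly different, poorly estimated beta shows up as inflation of the unconditional SE:

```python
import math

# Hypothetical SC5 betas, SEs, and AIC weights across models (illustration only)
models = [
    {"name": "psi(elev*SC5)", "beta": 0.164, "se": 0.90, "w": 0.40},
    {"name": "psi(elev+SC5)", "beta": 0.021, "se": 0.01, "w": 0.35},
    {"name": "psi(SC5)",      "beta": 0.018, "se": 0.01, "w": 0.25},
]

# Model-averaged estimate: weighted sum of the per-model estimates
beta_avg = sum(m["w"] * m["beta"] for m in models)

# Unconditional SE combines within-model variance and between-model spread,
# so an outlying, imprecise beta inflates it
uncond_se = sum(
    m["w"] * math.sqrt(m["se"] ** 2 + (m["beta"] - beta_avg) ** 2)
    for m in models
)

print(beta_avg, uncond_se)
```

Is comparing the unconditional SE with and without a suspect model one legitimate way to judge whether it's a 'problem model', or is there a more formal diagnostic?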
Thank you!
~Holly