Hi all,
I've used the parametric bootstrap in MARK and PRESENCE to assess GOF for models. My question is not focused on c-hat, but rather on where the observed test statistic (deviance or chi-squared) falls on the distribution of test stats from the bootstrap.
From reading the MARK book and other sources, my understanding is that the simulated data are known to meet the model assumptions, so if the observed test stat falls "not too far from" the mean of the bootstrap distribution, we can conclude there's no strong evidence of poor model fit.
I'm wondering if this alternative interpretation is correct as well:
If the observed test stat and the bootstrap distribution are similar, can we conclude that the model does a good job predicting the observed data? The better the model predicts the observed data, the more similar the observed and bootstrapped results should be, right (it's like refitting the same model over and over)? Or does this logic not hold, because the simulation constrains the data to perfectly meet the model assumptions? Or did I just say the same thing as the MARK book, worded differently enough to confuse myself?
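For concreteness, here's roughly the procedure I have in mind, sketched in Python with a toy Poisson model instead of an actual MARK/PRESENCE analysis (the data, model, and chi-squared statistic here are purely illustrative): simulate from the fitted model, refit, recompute the test stat, and see where the observed stat falls in that distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def chi_sq(obs, expected):
    # Pearson chi-squared discrepancy between observed and expected values
    return np.sum((obs - expected) ** 2 / expected)

# Hypothetical observed count data (a stand-in for real capture/detection data)
observed = rng.poisson(lam=4.0, size=50)

# "Fit" the model: for a Poisson, the MLE of lambda is just the sample mean
lam_hat = observed.mean()
t_obs = chi_sq(observed, np.full(observed.size, lam_hat))

# Parametric bootstrap: simulate data from the fitted model (so the
# assumptions hold exactly), refit, and recompute the test statistic
boot_stats = []
for _ in range(1000):
    sim = rng.poisson(lam=lam_hat, size=observed.size)
    lam_sim = sim.mean()
    boot_stats.append(chi_sq(sim, np.full(sim.size, lam_sim)))
boot_stats = np.array(boot_stats)

# Proportion of bootstrap stats >= the observed stat: where the observed
# statistic sits in the bootstrap distribution (a GOF p-value)
p_value = np.mean(boot_stats >= t_obs)
print(f"observed stat = {t_obs:.2f}, bootstrap p = {p_value:.3f}")
```

So my question boils down to: does a `t_obs` near the middle of `boot_stats` mean the model "predicts the data well," or only that there's no evidence of lack of fit relative to data that meet the assumptions perfectly?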
Thanks!
Joe