Interpreting parametric bootstrap for GOF


Postby jCeradini » Fri May 01, 2015 4:57 pm

Hi all,

I've used the parametric bootstrap in MARK and PRESENCE to assess GOF for models. My question is not about c-hat, but about where the observed test statistic (deviance or chi-squared) falls in the distribution of test statistics from the bootstrap.

From reading the MARK book and other sources, my understanding is that the simulated data are known to meet the model assumptions, so if the observed test statistic falls "not too far from" the mean of the bootstrap distribution, we can conclude there is no strong evidence of poor model fit.
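To make the logic concrete, here is a minimal sketch of a parametric bootstrap GOF test. This is a hypothetical toy example using a simple Poisson model, not MARK's or PRESENCE's actual procedure (those fit far richer capture-recapture/occupancy models), but the mechanics are the same: fit the model, simulate many datasets from the fitted model (each one satisfies the assumptions by construction), and compare the observed deviance to the simulated deviances.

```python
import numpy as np

rng = np.random.default_rng(42)

def deviance(counts, mu):
    """Poisson deviance of observed counts against a fitted mean mu."""
    counts = np.asarray(counts, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        # 0 * log(0/mu) is defined as 0; np.where masks the NaN branch.
        term = np.where(counts > 0, counts * np.log(counts / mu), 0.0)
    return 2.0 * np.sum(term - (counts - mu))

# Stand-in for real data (in practice, your observed encounter histories).
observed = rng.poisson(lam=3.0, size=50)
mu_hat = observed.mean()            # MLE of the Poisson mean
d_obs = deviance(observed, mu_hat)  # observed test statistic

# Simulate datasets from the fitted model; the resulting deviances show
# what "fit" looks like when the model is exactly right.
n_boot = 1000
d_boot = np.empty(n_boot)
for i in range(n_boot):
    sim = rng.poisson(lam=mu_hat, size=observed.size)
    d_boot[i] = deviance(sim, sim.mean())  # refit to each simulated dataset

# GOF p-value: proportion of bootstrap deviances >= the observed one.
# A small p-value means the observed deviance sits far out in the upper
# tail of the bootstrap distribution, i.e. evidence of poor fit.
p_value = np.mean(d_boot >= d_obs)
```

Note that the model is refit to each simulated dataset before computing its deviance, which is what makes the bootstrap distribution a fair benchmark for the observed statistic.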

I'm wondering if this alternative interpretation is correct as well:
If the observed test statistic and the bootstrap distribution are similar, can we conclude that the model does a good job of predicting the observed data? The better the model predicts the observed data, the more similar the observed and bootstrapped results should be, right (it's like refitting the same model over and over again)? Or does this logic not hold because the simulation constrains the data to meet the model assumptions perfectly? Or did I just say the same thing as the MARK book, worded differently enough to confuse myself?

Thanks!
Joe