by Lea » Wed Feb 11, 2015 7:58 pm
Well, in case anyone is interested, I finally found reference to it in the MARK book on pg 181.
4. it is also worth looking qualitatively at the ‘sensitivity’ of your model rankings to
changes in ĉ. Manually increase ĉ in the results browser from 1.0, 1.25, 1.5 and so on
(up to, say, 2.0), and look to see how much the ‘results’ (i.e., relative support among the
models in your candidate model set) change. In many cases, your best model(s) will
continue to be among those with high AIC weight, even as you increase ĉ. This gives
you some grounds for confidence (not much, perhaps, but some). Always remember,
though, that in general, the bigger the ĉ, the more ‘conservative’ your model selection
will be - AIC will tend to favor reduced parameter models with increasing ĉ (a look at
the equation for calculating AIC will show why). This should make intuitive sense as well
- if you have ‘noise’ (i.e., lack of fit), perhaps the best you can do is fit a simple model.
In cases where the model rankings change dramatically with even small changes in
ĉ, this might suggest that your data are too sparse for robust estimation, and as such,
there will be real limits to the inferences you can make from your candidate model
set.
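If it helps anyone see why bigger ĉ favors simpler models, here is a quick sketch (Python, with made-up log-likelihoods and parameter counts - not real MARK output, and not MARK's own code). It just computes QAIC = -2·lnL/ĉ + 2K for a few hypothetical models and prints the ranking as ĉ is bumped up, the same kind of manual check the book describes:

[code]
# Rough sketch (hypothetical values, not real MARK output): shows how QAIC
# rankings can drift toward reduced-parameter models as c-hat increases.
#
# QAIC = -2*lnL / c-hat + 2*K   (quasi-likelihood AIC)

models = {
    # model name: (log-likelihood, number of parameters K) -- made-up values
    "phi(t) p(t)": (-478.0, 12),
    "phi(.) p(t)": (-484.0, 7),
    "phi(.) p(.)": (-492.0, 2),
}

def qaic(loglik, k, c_hat):
    """Quasi-likelihood AIC for a given variance-inflation factor (c-hat)."""
    return -2.0 * loglik / c_hat + 2.0 * k

for c_hat in (1.0, 1.25, 1.5, 1.75, 2.0):
    scores = {name: qaic(ll, k, c_hat) for name, (ll, k) in models.items()}
    best = min(scores, key=scores.get)
    ranked = ", ".join(f"{n}: {s:.1f}"
                       for n, s in sorted(scores.items(), key=lambda x: x[1]))
    print(f"c-hat = {c_hat:.2f} -> best: {best:12s} | {ranked}")
[/code]

With these invented numbers the fully time-dependent model ranks first at ĉ = 1.0, but by ĉ = 2.0 the constant model comes out on top - exactly the sort of sensitivity the passage says to watch for.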