Hi All,
I made the fatal mistake of taking too long to type this and not saving it, so when I went to submit it the first time I was sent to a re-login screen and lost it all! Lesson learned.
Here we go, round 2!
I am interested in testing the effect of harvest on survival in my population. My top models for Phi in the RB Pradel (survival and recruitment) formulation are Phi(~1) and, with essentially equal support, Phi(~Harvest); however, the betas for the harvest model are slightly positive and grossly overlap 0. As a result, I have no real evidence that harvest mortality is additive in this population, and I can infer that harvest may be compensatory, given that harvest varied while survival remained roughly constant over the 8 years of sampling. However, I do not want to commit a type II error here, failing to detect an effect of harvest when it truly was there.
In an effort to address this issue I set out to simulate a scenario where harvest mortality was purely additive, i.e., if survival without harvest = 0.9 in a population of 100, harvesting 10 individuals (10%) should decrease survival proportionally, so survival after harvest = 0.9 × (1 − 0.10) = 0.81. In general, our harvest rates vary in the range of ~5–6% through the 8 years of study, so I simulated this effect as such:
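For concreteness, the additive-mortality arithmetic above can be sketched in a few lines of plain Python (this is just the calculation, not MARK/RMark code, and the example harvest rates are made up):

```python
def additive_survival(s0, harvest_rate):
    """Survival after a purely additive harvest: the harvest-free
    survival s0 scaled down by the fraction harvested."""
    return s0 * (1.0 - harvest_rate)

# baseline survival 0.9 with a 10% harvest, as in the text:
additive_survival(0.9, 0.10)  # -> 0.81 (up to floating-point error)

# yearly values for harvest rates in the ~5-6% range (illustrative only):
yearly = [additive_survival(0.9, h) for h in (0.05, 0.055, 0.06)]
```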
PIM Chart for TRUE MODEL
Betas for TRUE MODEL, where parameters 1–7 are the true survival values under an additive-mortality scenario, 8 is recruitment, and 9 is capture probability (I fixed 8 and 9 at the means of the estimates from my real data). As far as I understand, I was able to enter these "true" parameter values directly into the beta fields because I used the identity link.
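The identity-link point matters because under the identity link a beta is the real-parameter value itself, whereas under MARK's default logit link the same value would have to be entered on the log-odds scale. A quick sketch of the difference (plain Python, not MARK syntax):

```python
import math

def logit(p):
    """Log-odds transform, the scale MARK's default link uses for Phi and p."""
    return math.log(p / (1.0 - p))

# identity link: the beta for a survival of 0.81 is just 0.81
beta_identity = 0.81

# logit link: the same survival would be entered as its log-odds, ~1.45
beta_logit = logit(0.81)

# back-transforming the logit beta recovers the real parameter
recovered = 1.0 / (1.0 + math.exp(-beta_logit))
```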
Finally, I fit two models to these data,
1) TRUE KILL was fit using the harvest data used to build the true model, so:
Phi(~TrueKill) f(~1) p(~1)
2) NULL was fit using a constant value for Phi, despite the fact that we know the underlying survival is not constant; this provides a contrast for model fit.
Phi(~1) f(~1) p(~1)
Following this, I simulated these data and fit both of the above models to them for 100 iterations. I have not done the formal LRT tests on these results because I wanted to ensure my methods were correct thus far, but: the NULL model almost always had a lower AIC value than the TRUE KILL model, suggesting that I do not have the power to detect an effect when it is present. BUT, to test this further, I multiplied the effect of harvest by 3, so that the range of harvest rates was ~15% as opposed to ~5%,
so survival = 0.804, 0.624, 0.684, 0.723, etc.
And my results were still generally equivocal: the TRUE KILL model performed better than the NULL at times, but only about 30% of the time.
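In case it helps to compare notes, here is a stripped-down sketch of the kind of power check being run here. It is emphatically not a MARK/Pradel fit (no recapture probability, no recruitment, just binomial survival with known releases), all the numbers (sample size, harvest rates) are invented, and a crude grid search stands in for a real optimizer, but it shows the simulate-fit-compare-AIC loop:

```python
import math
import random

def binom_ll(n, y, s):
    """Binomial log-likelihood (constant term dropped) for y survivors of n."""
    return y * math.log(s) + (n - y) * math.log(1.0 - s)

def power_sketch(n_iter=100, n_marked=100, base_s=0.9,
                 harvests=(0.05, 0.06, 0.05, 0.055, 0.06, 0.052, 0.058),
                 seed=42):
    """Fraction of simulations where the harvest model beats the null on AIC.
    harvests has 7 entries: one per survival interval (8 occasions -> 7)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_iter):
        # simulate survivors each year under additive harvest mortality
        data = []
        for h in harvests:
            s_true = base_s * (1.0 - h)
            y = sum(rng.random() < s_true for _ in range(n_marked))
            data.append((n_marked, y, h))

        # null model Phi(~1): one parameter, pooled MLE
        s_hat = sum(y for _, y, _ in data) / sum(n for n, _, _ in data)
        aic_null = -2 * sum(binom_ll(n, y, s_hat) for n, y, _ in data) + 2 * 1

        # harvest model Phi(~h) with identity link, s = a + b*h:
        # crude grid search over (a, b) in place of a real optimizer
        best_ll = -math.inf
        for a in (x / 200.0 for x in range(140, 200)):    # a in 0.70 .. 0.995
            for b in (x / 10.0 for x in range(-30, 11)):  # b in -3.0 .. 1.0
                ll, ok = 0.0, True
                for n, y, h in data:
                    s = a + b * h
                    if not (0.0 < s < 1.0):
                        ok = False
                        break
                    ll += binom_ll(n, y, s)
                if ok and ll > best_ll:
                    best_ll = ll
        aic_harv = -2 * best_ll + 2 * 2

        if aic_harv < aic_null:
            wins += 1
    return wins / n_iter
```

With harvest varying only ~1 percentage point between years, the two models predict nearly identical survival, so the 2-AIC-unit penalty for the extra parameter usually wins; that pattern is consistent with what the simulations above found.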
Is my simulation off, OR do I just not have the power to do any sort of discrimination with these data?
Cheers,
CL