I have been working on simulations of cluster sampling designs for months now with unsatisfying results! I am using secrdesign to compare the relative bias and precision of 3 x 3 trapping clusters at various spacings between traps (0.5 to 4 sigma) and between cluster centers (10 to 30 sigma). I am running 50 replicates, which is about all I can manage given the long run times. My relative bias estimates are all over the place:
Table 1. Relative bias at various trap spacings (expressed as multiples of the home-range scaling parameter σ), with σ = 1,000 m and cluster centers spaced 16,000 m apart. 95% CIs based on the SEs are in parentheses.
Trap spacing   Relative bias (95% CI)
0.5σ           -0.538 (-0.589, -0.487)
1σ             -0.186 (-0.229, -0.143)
1.5σ           -0.081 (-0.119, -0.043)
2σ              0.074 ( 0.030,  0.118)
2.5σ            0.035 (-0.001,  0.071)
3σ              0.040 (-0.025,  0.106)
3.5σ            0.238 ( 0.132,  0.343)
4σ              0.453 ( 0.212,  0.694)
So there is a negative bias when the 3 x 3 cluster is small relative to the home range and a positive bias when it is large relative to the home range. That makes sense, but I would expect more scenarios with an unbiased estimate. I am wondering whether it is more appropriate to use the SD rather than the SE of relative bias to decide if it differs from zero (i.e., whether the bias is significant). When I plot the estimates, CI coverage is usually high and there is plenty of scatter above and below the true value, yet the CIs of the relative bias estimates suggest the bias is real.

A few specific questions: Does secrdesign generate a new distribution of animals on the landscape (and associated capture histories) for every replicate, or just new capture histories from a fixed population? Is it possible that the number of activity centers the program simulates is itself biased high or low, so that RB is not computed against the actual realized number? I am also thinking that 50 replicates may not be enough, but the long run times make adding more problematic. Thanks, and sorry for the long post!
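To make the SE-vs-SD question concrete, here is a standalone sketch (in Python rather than R, with made-up numbers: TRUE_D and the 20% CV are purely illustrative, not from my secrdesign runs) of the quantities I mean. The SE of the mean RB shrinks with the number of replicates and is what the CIs above are built from; the SD describes the scatter of individual replicate estimates.

```python
import math
import random

random.seed(1)

TRUE_D = 5.0   # assumed true density; illustrative only
NREPL = 50     # replicates, as in my simulations

# Fake replicate density estimates: unbiased on average, CV ~ 20%
est = [random.gauss(TRUE_D, 0.2 * TRUE_D) for _ in range(NREPL)]

rb = [(e - TRUE_D) / TRUE_D for e in est]    # per-replicate relative bias
mean_rb = sum(rb) / NREPL
sd_rb = math.sqrt(sum((x - mean_rb) ** 2 for x in rb) / (NREPL - 1))
se_rb = sd_rb / math.sqrt(NREPL)             # SE of the *mean* RB

# 95% CI on the mean RB: tests whether the average bias differs from zero
ci = (mean_rb - 1.96 * se_rb, mean_rb + 1.96 * se_rb)
print(f"mean RB = {mean_rb:.3f}, SD = {sd_rb:.3f}, SE = {se_rb:.3f}")
print(f"95% CI on mean RB: ({ci[0]:.3f}, {ci[1]:.3f})")
```

So with 50 replicates the SE is about a seventh of the SD, which is why individual estimates can scatter widely on both sides of truth while the CI on the mean RB still excludes zero.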
Joe