The project that never dies

Questions concerning analysis/theory using program DENSITY and the R package secr. Focus on spatially explicit analysis.

The project that never dies

Postby sixtystrat » Tue Mar 07, 2017 2:16 pm

I have been working on simulations of cluster sampling designs for months now with unsatisfying results! I am using secrdesign to compare the relative bias (RB) and precision of density estimates from 3 x 3 trapping clusters at various spacings between traps (0.5 to 4 sigma) and between cluster centers (10 to 30 sigma). I am running 50 replicates, which is about all I can do given the long run times. My relative bias estimates are all over the place:

Table 1. Relative bias of density estimates at various trap spacings, expressed as multiples of the home-range scaling parameter σ, with σ = 1,000 m and cluster centers 16,000 m apart. The 95% CIs, based on the SEs, are in parentheses.
Trap spacing   Relative bias (95% CI)
0.5σ           -0.538 (-0.589 to -0.487)
1.0σ           -0.186 (-0.229 to -0.143)
1.5σ           -0.081 (-0.119 to -0.043)
2.0σ            0.074 (0.030 to 0.118)
2.5σ            0.035 (-0.001 to 0.071)
3.0σ            0.040 (-0.025 to 0.106)
3.5σ            0.238 (0.132 to 0.343)
4.0σ            0.453 (0.212 to 0.694)

This represents a negative bias when the 3x3 cluster is small relative to the home range and a positive bias when the cluster is large relative to the home range. That makes sense, but I would expect more scenarios with an unbiased estimate. I am wondering whether it is more appropriate to use the SD rather than the SE of the relative bias to determine if it differs from zero, indicating a significant bias. When I plot the estimates, CI coverage is usually high and there is lots of scatter above and below the true value, yet the CIs of the relative bias estimates suggest the bias is real.

Does secrdesign produce a new distribution of animals on the landscape and associated capture histories for every replicate, or just unique capture histories? Is it possible that the number of activity centers the program simulates is biased high or low, so that the RB is not estimated against the actual number? I am thinking that 50 replicates may not be enough, but the long run times make running more problematic. Thanks, and sorry for the long post!
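To make the SD-versus-SE question concrete, this is how I understand the two quantities (a minimal base-R sketch with made-up numbers, not my actual output):

set.seed(1)
D.true <- 0.0015                              # simulated density (animals / ha)
Dhat <- rnorm(50, mean = 0.0014, sd = 3e-04)  # stand-in replicate estimates
RB <- (Dhat - D.true) / D.true                # replicate-specific relative bias
mean(RB)                                      # RB for the scenario
sd(RB)                                        # SD: spread of the replicate RB
sd(RB) / sqrt(length(RB))                     # SE of mean RB (basis of the CIs in Table 1)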
Joe

Re: The project that never dies

Postby murray.efford » Tue Mar 07, 2017 4:17 pm

Hi Joe
I agree one would expect better performance in the mid-range of spacing relative to sigma. There may be several issues tangled up here (see below). If we want to characterize a scenario in terms of RB (estimated as the average over replicates), I think we care about the sampling distribution of the replicate-specific RB, and the SE is the appropriate measure. But maybe I'm not thinking straight.

Other points

1. Choosing a mask for a sparse design is tricky. The default discretization (nx = 32) may be far too coarse if you explicitly model a large clustered array. Conceptually, and in practice, your clusters each sample different animals, so for anything but single-catch traps you can get away with simulating one cluster and multiplying density by the number of clusters. That is what secrdesign offers in the 'nrepeats' argument of make.scenarios() (see ?make.scenarios, and the sketch after point 3).

2. Have you charted the number of distinct animals (n), number of recaptures (r) and number of moves for each design using fit = FALSE in run.scenarios()? That is much faster and can be done cheaply with many replicates. n and r together are very good predictors of precision; predicting bias is a work in progress.

3. run.scenarios() simulates a new population for each replicate unless you bend over backwards to control the populations. You may want to think about using Ndist = 'fixed' (see the sketch below).
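
Putting 1 and 3 together, something like this (an untested sketch; the cluster count in nrepeats is hypothetical, substitute however many clusters your full design has):

library(secrdesign)

## one 3x3 cluster of proximity detectors, traps 2 sigma apart
onecluster <- make.grid(nx = 3, ny = 3, spacing = 2000, detector = "proximity")

## nrepeats = 36 declares that the design comprises 36 such clusters;
## density is multiplied internally, so only one cluster is simulated
scen <- make.scenarios(noccasions = 6, D = 0.0015, g0 = 0.1,
    sigma = 1000, nrepeats = 36)

## pop.args is passed through to sim.popn(); Ndist = 'fixed' fixes the
## number of simulated activity centres instead of drawing it from a Poisson
sims <- run.scenarios(nrepl = 50, scenarios = scen, trapset = onecluster,
    fit = TRUE, pop.args = list(Ndist = "fixed"))
summary(sims)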

Hope this helps you move forwards
Murray

Re: The project that never dies

Postby sixtystrat » Tue Mar 07, 2017 5:45 pm

I have a 100x100 km study area and I am using a mask spacing of 1000 m. This is my code:

library(secrdesign)

regionsmall <- cbind(x = c(0, 100000, 100000, 0), y = c(0, 0, 100000, 100000))  # 10,000 km2
scen.6.0015.1000.1 <- make.scenarios(noccasions = 6, D = 0.0015, sigma = 1000, g0 = 0.1)
cluster3x3.2000 <- make.grid(nx = 3, ny = 3, spacing = 2000, detector = "proximity")  # one 3x3 cluster
trap3x3.2000 <- make.systematic(cluster = cluster3x3.2000, spacing = 16000,
    region = regionsmall)  # clusters tiled systematically, 16 km on center
mask3x3.2000 <- make.mask(trap3x3.2000, buffer = 7500, spacing = 1000)  # 1000 m mesh
sim2000.3x3.occ6.100 <- run.scenarios(nrepl = 10, scenarios = scen.6.0015.1000.1,
    seed = 5, trapset = trap3x3.2000, maskset = mask3x3.2000, fit = FALSE)
The above is for a 3x3 cluster with traps 2000 m apart and clusters 16000 m apart, on center.

With fit = FALSE, the summary gives n = 130 animals, ndet = 173 detections, nmov = 22 movements, and dpa = 1.17 detectors per animal. So this simulation is for pretty sparse data with relatively few repeat detections, which is typical of the data I am simulating.
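(Those counts come straight from summarising the fit = FALSE object; a minimal sketch, assuming the default raw-data statistics:)

summary(sim2000.3x3.occ6.100)  # tabulates n, ndet, nmov, dpa for each scenario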

Standard errors of the estimates are pretty good (about 15%); it is just that the bias seems pretty unstable. Since the traps are spaced 2000 m apart, do you think a mask spacing of 1000 m is fine enough? I could increase the state space (say, to 200,000 x 200,000 m) to increase the number of animals, but that is going to slow things down again.

I will look at trying the nrepeats option. How would you recommend estimating the SE of density across several clusters? Thanks again.
Joe

Re: The project that never dies

Postby murray.efford » Tue Mar 07, 2017 6:14 pm

I do think nrepeats was made for you... There's no SE issue, as you are (I think) estimating only an overall density; the estimated SE and CI will adjust automatically.

Setting type = 'trapbuffer' in make.mask() will discard some superfluous points (the default type in secr.fit() is not the default in make.mask()). Also, run.scenarios() uses a multiple of sigma (xsigma) to tune the mask to the current sigma, which can save time; otherwise you must generate a sigma-appropriate mask for each sigma (hard work). A sketch follows.
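
For example (a sketch reusing your object names, untested):

## type = 'trapbuffer' keeps only mask points within 'buffer' of a detector,
## discarding points in the gaps between clusters
mask3x3.2000 <- make.mask(trap3x3.2000, buffer = 7500, spacing = 1000,
    type = "trapbuffer")

## or omit maskset and let run.scenarios() build the mask itself:
## xsigma sets the buffer as a multiple of each scenario's sigma,
## and nx controls the discretisation
sims <- run.scenarios(nrepl = 50, scenarios = scen.6.0015.1000.1,
    trapset = trap3x3.2000, xsigma = 4, nx = 64, fit = FALSE)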

I'll play with your code later and may have more insight then.
Murray

Re: The project that never dies

Postby sixtystrat » Tue Mar 07, 2017 6:16 pm

I tried nrepeats in make.scenarios() and it runs much faster, and the estimates seem fine. I also tried smaller mesh sizes than the 1000 m I was using, and the estimates did not vary much. I'm guessing the number of repeats goes into estimating the SE, because the SE grows if I reduce the number of repeats. I think this might work. Thanks!
Joe

Re: The project that never dies

Postby sixtystrat » Mon Mar 20, 2017 10:20 am

Your suggestion worked beautifully, Murray. Thank you for your untiring efforts to help users of your fantastic software package. I can see the light at the end of the tunnel now. Thanks!
Joe

