deb wrote:
Hi Murray
These problems relate to my skink capture data you looked at a few months ago. This computer has 3 GB of memory and I'd just like to check if there is anything I can do to get some fairly simple models running. E.g. I can run
mtb.hab.sig <- secr.fit(caps09, model = list(g0 ~ t+b+hab, sigma~hab), mask = M09_1_12, CL=TRUE, detectfn = 0)
but not
mtb.sp.sig <- secr.fit(caps09, model = list(g0 ~ t+b+sp, sigma~sp), mask = M09_1_12, CL=TRUE, detectfn = 0)
The latter tries to allocate a 200 Mb vector; sp has 3 levels whereas hab in the previous model has only 2 levels.
I get very different AICc and density estimates when I switch between hab and siz (also 2 levels), for example, so I am keen to try other variables too.
Also, when using a continuous variable svl, secr wants a 6 GB block even when I use only part of my dataset, e.g.
mtb.svl.sig <- secr.fit(caps09$In, model = list(g0 ~ t+b+svl, sigma~svl), mask = In09_1m_12m, CL=TRUE, detectfn = 0)
I've checked that svl is numeric and not a factor. You helped me with this variable once before and I'm wondering if every measurement is still being treated as a different level?
Happy to send you the files etc...
Cheers
Deb
There are no miracle cures, I'm sorry, just compromises. Using individual covariates can be demanding because of the way secr covers all possible combinations, and I haven't had pressure to refine that code. Refining it may help, but I can't tackle it for a month or two.
I uploaded a new secr version 2.0 to CRAN today (binaries should appear there later in the week); I don't remember making any changes that would help with this problem, but it might be worth a try. I found I could run your 'mtb.hab.sig' and 'mtb.sp.sig' examples (at least the first few likelihood evaluations) using masks of about 6000 points per session (probably many more than are needed) in both 32-bit and 64-bit R 2.12.1 on 64-bit Windows 7. Also, I seriously doubt the value of complex models even if that is where AIC leads you - maybe check whether modelling time variation really affects the estimates?
You are right about continuous individual covariates: at one point in the code an array is allocated in proportion to the number of unique values, even when the covariate is continuous. Discretizing to, say, 5 classes instead of 50 should help, with no serious loss of information. Your tack of treating sessions separately seems a good idea - I would think you have enough data to estimate the detection function in each session without pooling, even if pooling might be more elegant.
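A minimal sketch of the discretizing step in base R, using cut() on invented svl values (not Deb's actual data): binning the continuous covariate into a handful of classes before calling secr.fit means the allocated array scales with ~5 levels rather than one per unique measurement.

```r
# Illustrative snout-vent lengths; in practice this would be the
# 'svl' column of the capture-history covariates.
svl <- c(42.1, 45.3, 47.8, 50.2, 52.9, 55.0, 57.4, 60.1, 62.5, 65.0)

# cut() bins the range into 5 equal-width intervals and returns a
# factor with 5 levels - one level per class, not per measurement.
svl_class <- cut(svl, breaks = 5)

nlevels(svl_class)   # 5 levels instead of 10 unique values
```

Replacing svl with svl_class in the covariates before fitting (e.g. g0 ~ t + b + svl_class) should shrink the allocation accordingly; quantile-based breaks would give more balanced class sizes if the data are skewed.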
Another option available in principle, but maybe not in practice, is to pool data from grids of the same shape and divide the resulting density (and its SE) by the number of pooled grids. That way you drastically reduce the number of unique trap sites.
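The pooling arithmetic is simple; a sketch with invented numbers (k, D_pooled, and SE_pooled are illustrative, not fitted values): if k grids of identical shape are overlaid as one, the fitted density and its SE refer to k stacked populations, so both are divided by k.

```r
k         <- 3      # number of pooled grids of the same shape
D_pooled  <- 1.8    # density (animals/ha) from the pooled fit - illustrative
SE_pooled <- 0.45   # its standard error - illustrative

D_per_grid  <- D_pooled / k    # density on a single grid
SE_per_grid <- SE_pooled / k   # SE scales by the same factor
```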
Murray