Robust design N estimate

questions concerning analysis/theory using program MARK

Re: Robust design N estimate

Postby ganghis » Thu Mar 15, 2018 3:16 pm

Hi Jeff (et al.),
The approach I was outlining was indeed to get an "average" abundance estimate, but in a way that gives more weight to the primary period estimates that have higher precision, and one that could account for additional variance. However, I like your use of the variance-covariance matrix (rather than assuming estimates are independent).
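For concreteness, here is a minimal R sketch of that variance-weighted (GLS) average, under the assumption that you have exported the vector of N-hat estimates and their variance-covariance matrix from MARK (all of the numbers below are purely illustrative):

# Variance-weighted (GLS) average of the primary-period abundance estimates,
# using the full variance-covariance matrix rather than assuming independence.
# N_hat and V are hypothetical example values, not from any real analysis.
N_hat <- c(612, 344, 298)                    # derived N estimates, one per primary
V <- matrix(c(9025,  310,  150,
               310, 1600,  120,
               150,  120, 1225), nrow = 3)   # their variance-covariance matrix

ones    <- rep(1, length(N_hat))
V_inv   <- solve(V)
w_denom <- as.numeric(t(ones) %*% V_inv %*% ones)

N_bar     <- as.numeric(t(ones) %*% V_inv %*% N_hat) / w_denom  # weighted mean
var_N_bar <- 1 / w_denom                                        # its variance
c(mean = N_bar, SE = sqrt(var_N_bar))

If the estimates really were independent, V would be diagonal and this reduces to the usual inverse-variance weighting; the off-diagonal terms are what using the full variance-covariance matrix buys you.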

There's a range of things that could be done here, but it sounds like there are some time and learning constraints that might preclude some of the more advanced options at the moment. What Jeff outlines is certainly reasonable, especially if the estimated CV for each of the N's is similar. Sometimes simpler is better.

His question about why you are using a robust design is certainly worth thinking about, though. Are there really no changes to the population over time? If so, why are you using the robust design?

P
ganghis
 
Posts: 84
Joined: Tue Aug 10, 2004 2:05 pm

Re: Robust design N estimate

Postby Bryan Hamilton » Thu Mar 15, 2018 5:09 pm

I was thinking of the total population size over the study, which could be useful for understanding turnover, although survival pretty much gets at that. I think the method I mentioned, N/p, would give a rough estimate of that, but I don't think there is a way to calculate a variance for it?

Great discussion.
Bryan Hamilton
 
Posts: 111
Joined: Thu Jun 15, 2006 11:36 am
Location: Great Basin National Park

Re: Robust design N estimate

Postby RBoys » Thu Mar 15, 2018 6:47 pm

Hi all,

Thank you for all your detailed replies. I should perhaps have given a better outline of the study at the beginning of the discussion, I'm sorry for that.
So we have collected data on bottlenose dolphins living in an open-ocean environment under the robust design sampling strategy. Data were collected over only 5 months, split into 3 primary periods, each with 5 secondary periods. Due to weather and the general difficulties of sampling in the open ocean, our captures vary somewhat, being much higher in the second primary period and lower in all the second secondary occasions. We are really doing the study to set a baseline for a continuous monitoring strategy for the species in a few different areas, all using the same sampling design so that the same modelling framework can be used.

The reason that I ask about a constant N is that the abundance estimates being derived vary quite widely, particularly in the first primary period, where the SE and CIs are very large. As our overall study period is so short, and the intervals between primary periods are relatively short, it is not plausible that N varies between, for example, 600 and 300 animals. This is why I thought it might be better to use a single estimate that draws on N from all the primary periods, such as an average. I just wasn't sure of the best way to do this correctly, as estimating an average by dividing the number of animals by the number of occasions seemed too simple! But perhaps, as the data are quite sparse and the capture probability relatively low, it is better to keep things simple.

However, if you do have some thoughts that are more complex and require some learning, I would also be very interested to hear them as I really want to improve my knowledge and working understanding in this area.

Thank you,

Rebecca
RBoys
 
Posts: 18
Joined: Fri Jun 17, 2016 9:18 am

Re: Robust design N estimate

Postby cooch » Thu Mar 15, 2018 7:10 pm

There are various ways to get estimates in MARK for standard open, CJS models. I'll post the details later. And, once you have those estimates, you can derive an average using the MCMC capabilities in MARK.
cooch
 
Posts: 1628
Joined: Thu May 15, 2003 4:11 pm
Location: Cornell University

Re: Robust design N estimate

Postby cooch » Thu Mar 15, 2018 9:06 pm

You can get an estimate of N from a simple, open CJS model in MARK by tricking/tweaking the .INP into RD format. You do this by adding a 'dot' after every occasion in the encounter history. So

1101 would become 1.1.0.1.

Then set it up as a RD with (using the above as an example) 4 primary samples, 2 secondary per primary. Fit closed model M0 to the primaries. MARK will look at the .INP file, generate an estimate of M(t+1) for each primary (which is actually written out in the 'full output', if you look), estimate p* for each primary (which amounts to the single p estimate, given there is only one real occasion -- this p is the same p you'd get if you'd used the original .INP file in a CJS analysis with p(t) for encounter probability), and then dump out N as a derived parameter (this happens automatically using the RD approach). This is way simpler than deriving it by hand (since you also get the variances and covariances). Using the RD in MARK gives you the two pieces you need for the canonical estimate of N as M(t+1)/p*, and all it requires is a tweak to the .INP file...

[Note added:] It also requires the constraint that \gamma^{\prime}=1, \gamma^{\prime\prime}=0 -- see below.
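If you'd rather not edit the .INP by hand, a small R snippet along the following lines does the dot-insertion. It assumes a simple .INP layout of 'encounter history, whitespace, frequency;' on each line, with no comment lines; the file names are made up:

# Convert standard CJS encounter histories (e.g. "1101") into the 'dotted'
# robust-design format ("1.1.0.1.") described above.
lines <- readLines("cjs.inp")

dotted <- vapply(lines, function(ln) {
  parts <- strsplit(trimws(ln), "\\s+")[[1]]   # split history from the rest
  eh    <- parts[1]
  eh2   <- paste0(paste(strsplit(eh, "")[[1]], collapse = "."), ".")  # 1101 -> 1.1.0.1.
  paste(c(eh2, parts[-1]), collapse = " ")
}, character(1), USE.NAMES = FALSE)

writeLines(dotted, "cjs_rd.inp")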

I'm not sure what you mean by 'average population size', but if you really mean 'average' in the sense that dumb people like me would use it, you can get there from here by taking the derived estimates of N from each primary and calculating an average. To get the stats right (meaning, the correct estimate of the average and of its variance), while avoiding messy Delta-method calculations, you can do more or less what you might do in JAGS, using the MCMC capabilities in MARK. You simply take the posterior chains for each N in the MCMC bin file, create (say, in R) a derived parameter which is the mean of these chains, and then generate various moments of the posterior (mode for the 'mean', HPD for a credible interval, and so on...).
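As a rough sketch of what that post-hoc derivation might look like in R, assuming you have already read the N chains from the MCMC bin file into a matrix called 'chains' (one column per primary period; the object name is hypothetical):

library(coda)  # for as.mcmc() and HPDinterval()

# chains: iterations x primaries matrix of posterior draws for N1, N2, ...
N_bar_post <- rowMeans(chains)   # derived parameter: mean of the N's, per iteration

post   <- as.mcmc(N_bar_post)
d      <- density(N_bar_post)
mode_N <- d$x[which.max(d$y)]             # posterior mode, as the point estimate
hpd    <- HPDinterval(post, prob = 0.95)  # 95% highest posterior density interval

c(mode = mode_N, lower = hpd[1, "lower"], upper = hpd[1, "upper"])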

This is more or less what generating a derived parameter involves in JAGS, except that JAGS calculates the derived parameter with each iteration of the chain(s), whereas doing this in MARK requires taking the chains and doing the derivation of the parameter after the fact. I submit that the 'JAGS way' is more efficient (computationally), but you can use MARK to get the exact same answer. (I've tested this sort of thing any number of times and have yet to find the exception -- MARK isn't nearly as flexible as JAGS, since it isn't a 'language', obviously, but if doing things with 'derived parameters' is what you're after, using the MCMC capabilities in MARK can often work quite well.)

A demonstration of the basic idea (albeit in a different context) starts on p. 41 of the 'Delta method' appendix of the 'MARK book'. The example presented there uses MCMC to generate a product of survival estimates as the derived parameter, but if you understand what it's doing, it should be fairly clear how to modify it for other derived parameters.

Now, all of the preceding avoids any discussion of whether or not M(t+1)/p* is a meaningful estimate of N when p* is not estimated using a fully fleshed-out model. This has been discussed in various places. Some of us are not big fans of simply taking M(t+1)/p* if p* is not estimated from a model that accounts for things like a capture effect, or individual heterogeneity, and the like (i.e., the majority of the advanced model types in Chapter 14). Simply taking 'count/detection' might be relatively robust in some cases (say, high p), but I'd always be suspicious. I'm not a big fan of using these sorts of uncalibrated open-population estimates, because you're making heroic (and, depending on the sampling scheme, often untestable) assumptions about p. IMO.
cooch
 
Posts: 1628
Joined: Thu May 15, 2003 4:11 pm
Location: Cornell University

Re: Robust design N estimate

Postby RBoys » Fri Mar 16, 2018 8:15 am

Dear Evan,

Your approach certainly seems understandable and usable. So I will have a thorough read through the section on MCMC in the MARK book.

However, as you said at the end of your comment, how meaningful will the estimate be if heterogeneity is not being accounted for? The data we have do not have high capture rates, and so the estimate would not be particularly robust. So, if I were to estimate N using this M(t+1)/p* method, is there a way that I could assess the bias that may be occurring?

Thank you all again for all your thoughts and advice,

Rebecca
RBoys
 
Posts: 18
Joined: Fri Jun 17, 2016 9:18 am

Re: Robust design N estimate

Postby cooch » Fri Mar 16, 2018 9:02 am

RBoys wrote:Dear Evan,

Your approach certainly seems understandable and usable. So I will have a thorough read through the section on MCMC in the MARK book.


The one step I left out (my mistake -- I was late for something and in a bit of a rush) is you need to fix \gamma^{\prime}=1, \gamma^{\prime\prime}=0. This should make sense, if you think about it.

I'll throw this in the book at some point. But, the basics are I think straightforward.

However, as you said at the end of your comment, how meaningful will the estimate be if heterogeneity is not being accounted for? The data we have do not have high capture rates, and so the estimate would not be particularly robust. So, if I were to estimate N using this M(t+1)/p* method, is there a way that I could assess the bias that may be occurring?


No, IMO. To quote Darryl MacKenzie: "These methods are statistical, not magical". There are limits to how far you can go (perhaps generally) without making strong assumptions. There are any number of approaches that push forward under assumptions that are in fact likely to be at least partially violated (e.g., identifiability for models with unobservable states is often contingent on assuming that -- say -- survival for observable and unobservable individuals is the same, which may be very unlikely if observability is itself tied up with something influencing survival). Sometimes these assumptions are testable, sometimes they're not. It's more or less up to you how far out on the 'branch of assumptions' you want to climb. It is often hard to test the assumptions given the data, because it is weakness in the data that leads you to those 'assumption-laden' methods in the first place.

My preferred recourse, then, is simulation. Establish the bounds on how much bias or lack of precision (or both) you might get if you make 'biologically plausible' changes to the assumptions, and then build the argument around that. It also provides guidance on what you might need (in terms of sampling, etc.) to 'do better'.
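To make that concrete, here is a bare-bones R simulation in that spirit: generate closed-population capture data with individual heterogeneity in p, estimate N by M(t+1)/p* under a (deliberately wrong) constant-p, M0-style model, and look at the resulting bias. All of the values are illustrative, not tied to the dolphin data, and the p estimator is a simple moment-style one rather than MARK's MLE:

set.seed(42)

sim_once <- function(N = 400, K = 5, shape1 = 2, shape2 = 10) {
  # individual capture probabilities from Beta(2, 10) (mean ~0.17, i.e. low p)
  p_i  <- rbeta(N, shape1, shape2)
  caps <- matrix(rbinom(N * K, 1, rep(p_i, K)), nrow = N)  # capture histories
  seen <- rowSums(caps) > 0
  Mt1  <- sum(seen)                                        # M(t+1)

  # constant-p moment estimate: solve K*p / (1 - (1-p)^K) = n_total / M(t+1)
  ratio <- sum(caps) / Mt1
  p_hat <- uniroot(function(p) K * p / (1 - (1 - p)^K) - ratio,
                   c(1e-6, 1 - 1e-6))$root
  p_star <- 1 - (1 - p_hat)^K                              # Pr(caught at least once)
  Mt1 / p_star                                             # N-hat = M(t+1)/p*
}

N_hat <- replicate(1000, sim_once())
c(truth = 400, mean_N_hat = mean(N_hat),
  rel_bias = (mean(N_hat) - 400) / 400)

Under heterogeneity like this, the constant-p N-hat will typically be biased low; rerunning with different Beta parameters (or adding a behavioural effect) gives you the kind of bounds I'm talking about.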

Good luck.
cooch
 
Posts: 1628
Joined: Thu May 15, 2003 4:11 pm
Location: Cornell University

Re: Robust design N estimate

Postby RBoys » Fri Mar 16, 2018 9:11 am

Dear Evan,

Yes, I realised that they would need to be fixed, with gamma'=1 and gamma''=0, but thank you very much for confirming.

Yes, of course, simulation is definitely the best way, and it would also be very useful for seeing the issues in the sampling design. I am going to look into your suggestions, but will make sure that I always keep in mind how N is estimated from M(t+1)/p*.

Thank you again for taking the time to reply to all my questions.

Rebecca
RBoys
 
Posts: 18
Joined: Fri Jun 17, 2016 9:18 am

Re: Robust design N estimate

Postby tlyons4 » Thu May 31, 2018 4:55 pm

Would this trick (coding secondary periods as missing) work for a Barker robust design too?
tlyons4
 
Posts: 19
Joined: Thu Nov 30, 2017 7:29 pm

Re: Robust design N estimate

Postby egc » Mon Nov 19, 2018 10:24 pm

tlyons4 wrote:Would this trick (coding secondary periods as missing) work for a Barker robust design too?


Probably -- simulation is your friend, here.
egc
Site Admin
 
Posts: 201
Joined: Thu May 15, 2003 3:25 pm
