Another beginner's question - POPAN theory & Confounding

Postby smw62 » Wed Feb 14, 2018 1:18 pm

Struggling to understand the theory behind the POPAN model.

Backstory: I am trying to get a population estimate for fish that could potentially move past a large barrier on a river. I am using the POPAN model since it is an open population, with 18 sampling events over a 36-day period.

The 'best' model is p(t) phi(.) pent(t): AICc weight 0.99863, 24 parameters. The next model, p(.) phi(.) pent(t), has a delta AICc of 13.4, a weight of 0.00121, and 15 parameters.

So my question comes down to the confounding of parameters. Because I'm using time-dependent capture probability, I have confounded parameters affecting N1 and Nk, right? From the MARK book: 'If confounding takes place, the estimated super-population number may be suspect.... This confounding implies that careful parameter counting may have to be done when fitting POPAN models'.

Does the model take the confounded parameters into account when outputting the final N? I started out with 37 parameters (18 p, 1 phi, 17 pent, 1 N), but ended up with 24 in the model output. Do I have to manually go back and do the 'careful parameter counting', or does POPAN do that for me?

I know I can't just use the next-best model, which doesn't have confounded parameters, because its delta AICc is greater than 10 and it has essentially no support.

All I need is a half-decent population estimate, and I want to make sure that the theory behind getting it is legit.

Side note: from a 2005 comment by Cooch on estimating N: 'Estimating population size in open populations is not trivial, and is (more often than not) not overly precise'.
I don't need it to be overly precise, but is there a better way to estimate N? If it is useless, then I might as well not even send it out for peer review.
smw62
 
Posts: 4
Joined: Mon Oct 02, 2017 9:46 am

Re: Another beginner's question - POPAN theory & Confounding

Postby Bryan Hamilton » Thu Feb 15, 2018 7:26 pm

I'm assuming you've read chapter 12 of the MARK book a few times... it never hurts to re-read those chapters.

Some folks are really critical of estimating populations with Jolly-Seber models (talking to you, Evan). One reason is that we have better models for estimating abundance than JS models (i.e., the robust design), and we should be designing our studies to use those more robust models. The other reason is that the assumption of homogeneous capture probability (equal catchability) between marked and unmarked animals in JS models is usually violated. We know this, so the real question is: how badly is that assumption violated? Is it enough that our abundance estimates are garbage, or are they good enough to use for management and hypothesis testing? Well, we can't really know, because the unmarked animals are never seen, kind of like dark matter.

Some of the discussions on the board indicate that if p is large enough, this equal-catchability assumption becomes less important. How large is large enough depends, but it seems like a p of 0.3-0.5 can give you good estimates of abundance. I don't know what kind of recapture rate you're getting with fish, but I bet it's less than 0.3? Maybe much less?
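
To put a rough number on the equal-catchability worry, here is a throwaway two-sample (Lincoln-Petersen/Chapman) simulation in R, not a JS model, with completely made-up numbers: individuals that are easier to catch tend to show up in both samples, which inflates the recaptures and drags the abundance estimate down.

Code:
# Toy Lincoln-Petersen simulation (hypothetical numbers) illustrating the
# downward bias in abundance when capture probability varies among individuals.
set.seed(42)
N.true <- 1000
p.ind  <- rbeta(N.true, shape1 = 0.5, shape2 = 4.5)  # individual p, mean ~0.1

N.hat <- replicate(2000, {
  s1 <- rbinom(N.true, 1, p.ind) == 1                # caught and marked, sample 1
  s2 <- rbinom(N.true, 1, p.ind) == 1                # caught, sample 2
  m2 <- sum(s1 & s2)                                 # marked recaptures in sample 2
  (sum(s1) + 1) * (sum(s2) + 1) / (m2 + 1) - 1       # Chapman estimator
})
mean(N.hat)  # comes out well below the true 1000 because of the heterogeneity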

I can't really speak to the parameter confounding and the estimate of the superpopulation. If the estimates of p are very wrong, then the abundance estimates will also suffer. A few things you can do to check:

Divide your captures by p; this gives a rough estimate of total abundance (a quick sketch of this check is below, after these points).
Do some averaging of N with variance components and exclude the confounded estimates.
Use some common sense: if the first and last abundance estimates are 7 and 7,000 while the middle estimates range between 200 and 250, there is a problem.
Check the abundance estimates from your highest-ranked model against the model without confounding; they should be reasonably close.
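
The first check is nothing fancier than this (the capture total and p here are made-up numbers, not yours):

Code:
# Back-of-the-envelope check (hypothetical numbers): individuals captured
# divided by the capture probability estimated by the model.
total.captures <- 150    # total individuals handled over the study
p.hat          <- 0.04   # capture probability estimate from MARK
total.captures / p.hat   # = 3750; compare against the POPAN superpopulation N-hat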

It seems to me that more people than ever (including me) are publishing estimates of abundance from JS models. Good or bad? I don't know, but it does speak to the need to have some idea of abundance. There is no inherent reason you can't publish JS abundance estimates. You might have some bumps through peer review and you will need to acknowledge the limitations, but it's a repeatable methodology and it has a place.
Bryan Hamilton
 
Posts: 111
Joined: Thu Jun 15, 2006 11:36 am
Location: Great Basin National Park

Re: Another beginner's question - POPAN theory & Confounding

Postby smw62 » Tue Mar 06, 2018 3:44 pm

Hi Bryan, thanks for your response. I think I have figured out the confounded parameters for the most part.

I did more research into it and re-read chapter 12 (and other chapters) for about the 20th time :)
If I understand it correctly, MARK did not count the 'correct' number of parameters, so I did have to go back, make sure the parameter counts were right, and adjust them for each model variation.
This resulted in a non-confounded model, p(.), having the lowest AICc, the most support, the most weight, etc., which made things a lot easier.

As for your additional comments:

My recapture rate is a lot lower than 0.3: approximately 0.04. I'm dealing with an endangered, migratory species, so I won't compare it to other fish. But multiplying that by my captures does give an estimate that falls within the CIs, so that is good.

When I tried to do the averaging of N with variance components, I got an error saying 'variance components failed' because 'Matrix D was found to be not positive definite'; sigma2 = 94 (-260 to 62000). This was consistent across the model variations that had some support (delta AICc < 10). I'm not too concerned about this because confounded parameters are a non-issue with the p(.) model, but I would like to learn more about it. I will do further investigation, but I probably need to have it explained like I'm a child to begin to understand it.

I also need to do a lot more reading on using JS models to estimate N to fully understand the criticisms. But if there are better models out there that give more accurate estimates, then I agree that studies should be designed to use those models. I can't use the robust design with this data since it was a pilot study. But surely a rough estimate (even if it isn't super accurate, or could be made more accurate with a different model design) is better to have than nothing at all. At least that rough estimate can tell us whether we should dedicate resources to getting a more accurate estimate. I'll prepare for some bumps in the peer-review process :)

Again, thanks for your help!
smw62
 
Posts: 4
Joined: Mon Oct 02, 2017 9:46 am

Re: Another beginner's question - POPAN theory & Confounding

Postby Bryan Hamilton » Thu Mar 08, 2018 9:23 pm

But multiplying that by my captures does give an estimate that falls within the CIs, so that is good.

You should be dividing your captures by p to get a rough abundance estimate. Obviously, multiplying is going to give you fewer individuals than you captured, so this is probably just an error in what you wrote.

I'm not sure whether you are in RMark or MARK. In RMark you can often get around variance component errors like that (e.g., a negative variance) by using 'vcv.reml'. These variance components are complicated and I still struggle with understanding them.
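
I don't have your files in front of me, so the sketch below is only the general shape of the call: the function I'm thinking of is var.components.reml(), the REML flavour of var.components(), and the N.hat and N.vcv objects are hypothetical stand-ins for however you extract the derived occasion-specific abundance estimates and their variance-covariance matrix from your own fitted model. Check ?var.components.reml rather than trusting my memory of the arguments.

Code:
library(RMark)  # for var.components.reml()

# Stand-in numbers (hypothetical): pretend these are the 18 derived
# occasion-specific abundance estimates and a simple diagonal
# variance-covariance matrix pulled from your own POPAN results.
N.hat <- c(210, 230, 225, 240, 250, 245, 235, 260, 255, 250,
           240, 245, 250, 235, 230, 225, 220, 215)
N.vcv <- diag(rep(30^2, length(N.hat)))

keep <- 2:(length(N.hat) - 1)            # drop the confounded first and last estimates

vc <- var.components.reml(theta  = N.hat[keep],
                          design = matrix(1, length(keep), 1),  # intercept only: one mean N
                          vcv    = N.vcv[keep, keep])
vc                                       # look for the process variance and the shrunken mean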

But surely a rough estimate (even if it isn't super accurate, or could be made more accurate with a different model design) is better to have than nothing at all. At least that rough estimate can tell us whether we should dedicate resources to getting a more accurate estimate.

I tend to agree with this. With your p being so low, there are going to be some issues with abundance estimation. You might consider some simulation to see whether getting the recapture rate up is feasible and whether it results in better estimates; a toy example of the kind of thing I mean is below.
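
Nothing POPAN-specific here, just a single-occasion sketch with invented numbers showing how much the spread of a crude n/p estimate tightens as p moves from 0.04 toward 0.3. A real power analysis would simulate full JS capture histories, and the real uncertainty is larger again because p has to be estimated too.

Code:
# Toy simulation (hypothetical numbers): how capture probability affects the
# spread of a crude n/p abundance estimate.
set.seed(1)
N.true <- 2000                        # assumed true abundance
nsim   <- 5000

for (p in c(0.04, 0.10, 0.30)) {
  n.caught <- rbinom(nsim, size = N.true, prob = p)   # captures in each simulation
  N.hat    <- n.caught / p                            # crude abundance estimate
  cat(sprintf("p = %.2f: 95%% of N-hat between %.0f and %.0f\n",
              p, quantile(N.hat, 0.025), quantile(N.hat, 0.975)))
}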
Bryan Hamilton
 
Posts: 111
Joined: Thu Jun 15, 2006 11:36 am
Location: Great Basin National Park

