MCMC with interaction term

questions concerning analysis/theory using the R package 'marked'

MCMC with interaction term

Postby Mnat » Mon Oct 04, 2021 1:13 pm

Hi,
I'm analysing survival of bats over 30 years. The individuals in the dataset belong to six different colonies. The recapture rate varies heavily between the colonies each year, so I need to build the model with an interaction term colony*time for recapture p.

When I fit MCMC models, the model runs with colony+time for p. Unfortunately, I get the following error message when I fit it with an interaction term.

Code:
model.parameters=list(Phi=list(formula=~1), p=list(formula=~time*colony))

MCMCfit=crm(Mnat.NSH.ch.size, model="probitCJS",
            model.parameters = model.parameters,
            burnin = 10, iter=10)


probitCJS MCMC beginning...
p model = ~time * colony
phi model = ~1
Error in Xy %*% beta.y : non-conformable arguments
In addition: Warning messages:
1: glm.fit: algorithm did not converge
2: glm.fit: fitted probabilities numerically 0 or 1 occurred



Isn't it possible to run MCMC models with interactions?

Thanks a lot
Bianca
Mnat
 
Posts: 18
Joined: Wed Apr 10, 2019 4:28 am

Re: MCMC with interaction term

Postby dsjohnson » Mon Oct 04, 2021 2:08 pm

You should technically be able to fit the model using probitCJS; the interaction itself is fine. However, `probitCJS()` is not the most robust function for fitting a CJS model in marked. The MCMC samples are fine, it just doesn't have all the error checking necessary. I have a hunch that not all colonies were visited on all occasions, is this correct? In that case, model.matrix() inside the function will drop those levels by default and the matrices will have different dimensions. This would not happen with the additive model, hence the lack of an error.

Is there a reason you prefer `probitCJS()` over fitting the model with MLE? My first suggestion would be to estimate the model by MLE using `crm()`. However, if you are a hardcore Bayesian and my hunch about sites and occasions not being completely crossed is correct, then another solution would be to create a new variable, say `colony_occ = paste0(colony, "_", time)`; that way you create a single interaction variable with only the levels present in the data. If colonies and times are completely crossed, then a deeper dive is needed to find out where the error is occurring.
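A minimal sketch of the combined-variable idea using toy vectors (in marked you would build `colony_occ` on the p design data, e.g. `ddl$p`; the names and values here are illustrative, not from the post):

```r
# Toy colony and time vectors standing in for design-data columns;
# only four of the six possible colony-by-time combinations occur.
colony <- c("A", "A", "B", "B")
time   <- c("1989", "1990", "1990", "1991")

# One combined factor: its levels are exactly the combinations present
# in the data, so no empty design-matrix columns are created
# (the full cross would have 2 x 3 = 6 levels, two of them empty).
colony_occ <- factor(paste0(colony, "_", time))
levels(colony_occ)
```

The p formula would then be `p = list(formula = ~colony_occ)` in place of `~time*colony`.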
dsjohnson
 
Posts: 3
Joined: Fri Dec 20, 2013 1:01 pm
Location: Ft. Collins, CO

Re: MCMC with interaction term

Postby Mnat » Tue Oct 05, 2021 2:17 am

Thank you for your quick reply!

The variance in recapture and survival is quite high and the data are relatively sparse, at least during the first 10-15 years. I worked with Mark in the beginning and I always had a c-hat around 7. It was suggested to me that fitting Bayesian models could solve that overdispersion problem; that's why I tried to fit these models.

You're totally right: not every colony is present in each year. We have only two of the colonies from the beginning (1989). Capture of three further colonies starts thirteen years later, and we have data from the sixth colony only for the last six years.

I would like to try your suggestion to create a new variable, but could you please give a short example? I don't think I completely understood how to create this variable.

Thank you
Bianca
Last edited by Mnat on Tue Oct 05, 2021 4:19 am, edited 1 time in total.

Re: MCMC with interaction term

Postby Mnat » Tue Oct 05, 2021 3:44 am

Wouldn't it make sense to fix the parameter values for the years where we don't have samples of the colonies that come up later?
Unfortunately, I haven't yet figured out how to fix parameter values in marked (only in Mark).

Re: MCMC with interaction term

Postby jlaake » Tue Oct 05, 2021 2:02 pm

I asked Devin to answer your question because he wrote the probitCJS function in marked for MCMC. His solution should work, but I think someone misguided you into thinking you could get around the c-hat issue with Bayesian methods, at least by just using the Bayesian method in marked. I have a number of thoughts that I'll lay out below.

1) You didn't say how you got the c-hat value of 7. If it came from the output file of MARK, those values are not very reliable, especially with sparse data and a longish time series, which sounds like the situation you are in. If you remember back to your beginning stats class, you may have learned that for the chi-square approximation to hold, the expected values should be sufficiently large; rules of thumb of 2 or 5 have been suggested. If the expected values are too small, the c-hat value gets inflated. This is explained much better somewhere in the Cooch and White book. Anyhow, an alternative would be the RELEASE goodness-of-fit tests or the median c-hat computation. If you haven't done so already, you should read Cooch and White before you jump into your data analysis. You may not have a lack-of-fit issue at all.
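
As a toy illustration of the small-expected-cell problem (estimating c-hat as the Pearson chi-square statistic divided by its degrees of freedom; all numbers here are made up):

```r
# c-hat estimated as Pearson chi-square / df; a cell with a tiny
# expected value contributes (obs - exp)^2 / exp, which blows up
# when exp is small, inflating the estimate.
pearson_chat <- function(observed, expected, df) {
  sum((observed - expected)^2 / expected) / df
}

obs <- c(12, 6, 3)
pearson_chat(obs, expected = c(10, 7, 4),   df = 2)  # moderate cells
pearson_chat(obs, expected = c(10, 7, 0.5), df = 2)  # one sparse cell
```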

2) With regard to fitting a colony-time interaction for p: from what you described, you started sampling colonies at different times but continued to sample each colony once you started. If that is the case, then with the CJS model you only estimate p for occasions subsequent to the first occasion a colony was seen. Thus you have a p for each time since 1989 for the first two colonies, since 2002 for the other three, and since 2015 for the sixth. These should all be estimable, and the issue you are having with probitCJS in marked is simply that the empty columns (e.g., before 2015 for the sixth colony) are not being removed. If you use ~-1+time:colony with the CJS model in marked or with RMark, the unused columns are removed. This is just a glitch in the construction of probitCJS.
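
A sketch of that parameterisation (the crm() call is commented out and illustrative only: `Mnat.NSH.ch.size` is the poster's data object, and the fitting arguments are assumptions):

```r
# No-intercept, pure-interaction formula for p: one parameter per
# colony-by-time combination, with unused combinations dropped by
# marked's CJS machinery.
model.parameters <- list(Phi = list(formula = ~1),
                         p   = list(formula = ~ -1 + time:colony))

# MLE fit with the default CJS model (requires the capture-history data):
# MLEfit <- crm(Mnat.NSH.ch.size, model = "CJS",
#               model.parameters = model.parameters)
```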

3) I'm not sure why you would want to fix the values of p that are not being used.

4) I recommend you first figure out whether you really have a lack of fit, as pointed out in #1 above. I doubt c-hat will be as large as 7 if you do a proper evaluation. But if there is lack of fit, there are other alternatives: you can do MCMC in MARK (though not via RMark), and you can fit random-effect models for p in marked. But first work out whether you really do have a lack-of-fit issue.

--jeff
jlaake
 
Posts: 1413
Joined: Fri May 12, 2006 12:50 pm
Location: Escondido, CA

Re: MCMC with interaction term

Postby Mnat » Wed Oct 06, 2021 1:48 am

Dear Jeff,

Indeed, I have the c-hat from the MARK output file, but didn't know that it isn't reliable. So in my case, good news. I'll look into RELEASE and how to compute the median c-hat.

Thank you for sharing your thoughts! That was very helpful! :D

All the best,
Bianca

Re: MCMC with interaction term

Postby cooch » Wed Oct 06, 2021 9:06 am

While RELEASE is the 'gold standard' for GOF testing for CJS models, like all GOF tests it suffers with sparse data (see Jeff's point about small expected values, and other issues with constructing and analyzing contingency tables, which is what RELEASE is based on). A better option might be the Fletcher c-hat, which seems to do really well for a number of data types, including CJS, and seems (IMO) to be somewhat more robust than RELEASE or median c-hat for CJS data.

All of this is discussed in Chapter 5.
cooch
 
Posts: 1628
Joined: Thu May 15, 2003 4:11 pm
Location: Cornell University

Re: MCMC with interaction term

Postby jlaake » Wed Oct 06, 2021 4:08 pm

I don't have any experience with Fletcher c-hat, but I trust Evan's thoughts and experience; I should probably read the updated Chapter 5. But I have a philosophical problem with automated application of c-hat. Unless you have a good reason to apply it, as in situations where you know you have linked fates (like a mother-young pair or schools/flocks of fish/birds), I'm skeptical of just applying c-hat to model selection. Instead, I think you should explore the reason for the lack of fit; you get some of that with TEST2 and TEST3 of the RELEASE GOF tests.

My reasoning is as follows. Unless you think you have linked fates, c-hat is telling you there is some lack of fit, which often means your model isn't sufficiently complex to explain the data. If you then apply c-hat to compute a QAICc for model selection, you will probably select a model with even fewer parameters, when in fact you probably need more complexity and more parameters to explain the data. I don't have the same issue with applying c-hat to inflate standard errors, because any remaining lack of fit should lead to more uncertainty. Bottom line: in my opinion, if you are getting a c-hat greater than 1.5, you should be exploring the reasons for the lack of fit and thinking harder about the models for your data.
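
To make the mechanism concrete, a minimal sketch of the quasi-likelihood adjustment (Burnham and Anderson style QAICc; the numbers are made up): dividing -2lnL by c-hat shrinks the fit term relative to the parameter penalty, which is why an inflated c-hat pushes selection toward simpler models.

```r
# QAICc = (-2 lnL)/c-hat + 2K + 2K(K+1)/(n - K - 1)
qaicc <- function(neg2lnL, chat, K, n) {
  neg2lnL / chat + 2 * K + 2 * K * (K + 1) / (n - K - 1)
}

qaicc(neg2lnL = 500, chat = 1, K = 10, n = 200)  # ordinary AICc
qaicc(neg2lnL = 500, chat = 2, K = 10, n = 200)  # same fit, inflated c-hat
```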

--jeff

Re: MCMC with interaction term

Postby cooch » Wed Oct 06, 2021 5:19 pm

Jeff's points are spot on. Basically, there are (typically) three sources of lack of fit: wrong model structure, extra-binomial noise (e.g., the non-independence Jeff mentions), and unspecified, unobserved individual heterogeneity. All the c-hat estimation procedures assume the lack of fit is due to extra-binomial noise only. If there is little reason to suspect that as a source of lack of fit, then you need to be careful interpreting estimates of c-hat.

You can explore this a bit using the simulation capabilities in MARK -- in fact, it lets you simulate all the sources of lack of fit as noted above. It doesn't take long to show that if the lack of fit is pure extra-binomial, then most of the estimation tools do relatively well (some better than others). Fletcher's c-hat has emerged as my favorite. And, based on some work Gary and I did, it also does pretty well with individual heterogeneity for closed populations (which I think surprised us - and David Fletcher - since we hadn't actually suspected it would do well for closed abundance models). On the other hand, if the lack of fit is structural, or individual, or both, then the c-hat estimate can be...strange (sometimes). At worst, it will be too high, which if applied in the quasi-likelihood context will make model selection more conservative.

Re: MCMC with interaction term

Postby jlaake » Wed Oct 06, 2021 6:58 pm

Evan brings up individual heterogeneity, which I forgot to mention in my post. It is possible to incorporate individual heterogeneity with CJSRandom in MARK. You mentioned you are working on colonies; if the critter has fidelity to a specific area and you are not able to survey the entire area uniformly, or if the critter has behavioral differences, then it would be easy to introduce heterogeneity in p.

--jeff

