release.gof error

posts related to the RMark library, which may not be of general interest to users of 'classic' MARK

release.gof error

Postby aptrk » Tue Apr 27, 2021 12:35 pm

Hi rmarkers,
I've been trying to estimate daily survival of nightjars using sex and age (adults, juveniles) as covariates. We have captured about 150 individuals over 5 years, across 311 occasions.
The top-ranked model has age-dependent survival and sex-dependent recapture and looks like this (where M stands for males and H for females):

results.eanomalus$Phi.age.p.sex$results$real #get real estimates
                              estimate           se       lcl       ucl fixed note
Phi gAH c1 a1 t1             0.9992911 0.0001352532 0.9989697 0.9995123
Phi gJH c1 a0 t1             0.9980639 0.0003490498 0.9972436 0.9986403
p gAH c1 a1.0027397260274 t2 0.0058037 0.0012220000 0.0038396 0.0087637
p gAM c1 a1.0027397260274 t2 0.0268471 0.0020530000 0.0231034 0.0311781


I was trying to run a GOF test on this dataset using RELEASE and I got the following error:

eanomalus.processed <- process.data(eanomalus, groups = c("age", "sex"),
time.intervals = time.intervals,
age.var = 1, age.unit = age.unit,
initial.ages = c(1,0),nocc = 311)

release.gof(eanomalus.processed)
Length of command exceeded MAXCHR/2 characters.
RELEAS ERROR TERMINATION
Error in (x3 + 4):length(out) : argument of length 0

Looking at a previous post, it was suggested that this may happen when nocc is less than or equal to 2.
Thus, looking at the summary of my data, I can see that nocc is actually 1 and not 311.

data 6 data.frame list
model 1 -none- character
mixtures 1 -none- numeric
freq 4 data.frame list
nocc 1 -none- numeric
nocc.secondary 0 -none- NULL
time.intervals 310 -none- numeric
begin.time 1 -none- numeric
age.unit 1 -none- numeric
initial.ages 4 -none- numeric
group.covariates 2 data.frame list


Is there something I am doing obviously wrong? Thoughts are welcome!
aptrk
 
Posts: 9
Joined: Wed Oct 07, 2020 11:40 am
Location: Salta, Argentina

Re: release.gof error

Postby jlaake » Tue Apr 27, 2021 3:28 pm

You are misinterpreting the summary output. It is telling you that nocc is a numeric of length 1; its value is not 1. The problem is that you have too many occasions. RELEASE has a limit on the input line length and you have exceeded it, as the error states. Not much I can do about that, as it is an external program that comes with MARK.
jlaake
 
Posts: 1295
Joined: Fri May 12, 2006 12:50 pm
Location: Escondido, CA

Re: release.gof error

Postby aptrk » Tue Apr 27, 2021 4:26 pm

Thanks for the prompt reply and clarification, Jeff.
I did not know there was a limit on the number of occasions. How do people usually deal with this? Do they split the dataset? Do they look at other measures of goodness of fit, like c-hat?

Re: release.gof error

Postby jlaake » Tue Apr 27, 2021 4:48 pm

Not sure; never had the problem. Are these daily occasions? Maybe use the Fletcher c-hat, although that doesn't really help you do any diagnosis. Maybe try U-CARE; there is an R version.

Re: release.gof error

Postby aptrk » Tue Apr 27, 2021 5:06 pm

Yup, they are daily occasions. I will try U-CARE. Thanks, Jeff!

Re: release.gof error

Postby cooch » Wed Apr 28, 2021 9:26 am

aptrk wrote:Thanks for the prompt reply and clarification, Jeff.
I did not know there was a limit on the number of occasions. How do people usually deal with this? Do they split the dataset?


Several points...

The first question you want to ask yourself is... do you really want daily survival? For most vertebrate taxa, daily survival for adults is typically so close to 1.0 that it becomes very difficult to estimate the parameter. On the other hand, estimating survival over a longer interval is often more stable, computationally. A simple numerical example will suffice to make the point. Suppose the annual survival probability of some species is 0.85. Assuming (heroically, not biologically) that the hazard is evenly distributed over the year, that amounts to a monthly survival of 0.98655 (12th root of 0.85), which, if we 'pretend' that each month has 30 days, means a daily survival of 0.9995 (30th root of 0.98655), which is very likely not estimable (even if you have lots of data). Further, you'd have basically no ability to model variation in a survival value that is nearly 1 as a function of interesting covariates.
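To make that back-of-the-envelope arithmetic concrete, here it is in base R (the 0.85 annual survival is the hypothetical value from above, not an estimate from any data):

```r
annual  <- 0.85              # hypothetical annual survival
monthly <- annual^(1/12)     # 12th root -> ~0.98655
daily   <- monthly^(1/30)    # 30th root, pretending 30-day months -> ~0.9995
round(c(monthly = monthly, daily = daily), 5)
```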

Of course the counter-argument (and the second point) is that there are biologically relevant periods (and, in particular, age/size classes) where daily survival is <<1.0, and of interest. Daily survival is meaningless for (say) an elephant (well, the elephant might disagree). But daily survival is very relevant for (say) a butterfly. As the biologist, you would need to define the biologically relevant time-scale over which to estimate survival. It might be a day, or it might not. Very often, for taxa where (i) daily survival might be important over some interval, and (ii) the taxa (and your field budget) allow for it, telemetry ('known-fate') is commonly used (countless studies of 'nestling' daily survival, which is generally <<1, have used this approach).

Summary -- just because you can 'encounter' individuals over a daily interval doesn't necessarily mean it's a good thing. A study should evaluate the appropriate interval (given the biology, and the biologically-motivated questions), and go from there.

Given that you have daily encounter data, you're faced with a challenge. Pooling is a non-trivial decision. Simple demonstration. Suppose you decide to pool over weeks. Fine. Easy enough if you're good at massaging your encounter data. But... that means that an individual encountered on (say) a Monday is being treated as having the same underlying fate/risk as an individual encountered on (say) a Sunday. The Monday individual has to 'survive' 7 days to 'make it' to the following week, whereas the Sunday individual only has to 'survive' 1 day to 'make it' to the following week. Pretty strong assumption.
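A minimal sketch of that weekly pooling in base R (the `pool_weekly` helper and the toy histories are made up for illustration; this is not an RMark function):

```r
# Collapse a daily capture-history string into weekly occasions:
# a week becomes '1' if the animal was seen on any day of that week.
pool_weekly <- function(ch, block = 7) {
  days  <- as.integer(strsplit(ch, "")[[1]])
  weeks <- split(days, ceiling(seq_along(days) / block))
  paste(vapply(weeks, function(w) as.character(as.integer(any(w == 1))),
               character(1)), collapse = "")
}

pool_weekly("1000000")  # seen Monday -> "1"
pool_weekly("0000001")  # seen Sunday -> "1"
# Both collapse to the same weekly datum, even though the 'Monday'
# individual must survive 7 days to reach the next week and the
# 'Sunday' individual only 1 -- exactly the assumption noted above.
```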

One possible approach is to focus on a longer interval (say, weekly, or monthly), and treat the intervening encounter data as 'incidental' -- a Barker model (chapter 9) might be a good starting point. I fooled with this idea several years ago, and it seemed to work pretty well (what I did was focus on monthly survival, collapse all the intervening encounters into an 'encountered' or 'not-encountered' piece of data, and use the number of incidental encounters as an individual covariate). Not saying it's a canonical approach, but it might be a start.
cooch
 
Posts: 1578
Joined: Thu May 15, 2003 4:11 pm
Location: Cornell University

Re: release.gof error

Postby aptrk » Wed Apr 28, 2021 11:51 am

Thanks, Evan.
I totally agree this may not be a natural scale at which to estimate survival. But when the study started, there wasn't much planning. Surveys were conducted at unequal day intervals (1-300!); rangers would go out when they could (some months very often, others never). Also, recaptures of females are pretty rare and often clustered in consecutive days or weeks, so in collapsing I'm afraid we may lose information.
Luckily we are working with only two covariates, and we did not have problems estimating survival or detection. Survival of adults is around .75 and of juveniles .5, with CIs not overlapping.
I could try a Barker model and see how that compares to what we have.

Thank you all!

Re: release.gof error

Postby egc » Wed Apr 28, 2021 12:03 pm

aptrk wrote:Thanks, Evan.
I totally agree this may not be a natural scale at which to estimate survival. But when the study started, there wasn't much planning. Surveys were conducted at unequal day intervals (1-300!); rangers would go out when they could (some months very often, others never). Also, recaptures of females are pretty rare and often clustered in consecutive days or weeks, so in collapsing I'm afraid we may lose information.
Luckily we are working with only two covariates, and we did not have problems estimating survival or detection. Survival of adults is around .75 and of juveniles .5, with CIs not overlapping.
I could try a Barker model and see how that compares to what we have.

Thank you all!


Usual story, I'm guessing: (i) the researcher/analyst has to work with a dataset that (ii) they had no hand in collecting, (iii) based on opportunistic encounters, with (iv) virtually no formal, rigorous sampling design. I'm very pleased to be at the point in my career where I can simply say 'no' to such 'opportunities'. ;-)

I think the Barker approach is worth trying, but in the end, remember Tukey's Law: "The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data..."
egc
Site Admin
 
Posts: 193
Joined: Thu May 15, 2003 3:25 pm

Re: release.gof error

Postby jlaake » Wed Apr 28, 2021 2:31 pm

Why can't you just use the unequal time intervals between occasions?

Re: release.gof error

Postby cooch » Wed Apr 28, 2021 4:28 pm

jlaake wrote:Why can't you just use the unequal time intervals between occasions?


In principle fine, but in application not so easy, *if* the variation in interval length is large (as seems to be the case here). Mechanically, no problem -- but how would you meaningfully interpret differences between survival over (say) a 1-day interval and the estimate for the next interval, which might be (say) the 25th root of survival over a 25-day interval?

So, interpreting estimates has always been a rate-limiting concern for me in situations where the interval differences are very large.
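To put numbers on that: with `time.intervals` supplied, MARK reports Phi on a per-unit (here per-day) scale, and survival over an interval is that rate raised to the interval length. The 0.999 below is a made-up daily rate, just to show the scaling:

```r
phi_day <- 0.999    # hypothetical per-day survival
phi_day^1           # survival over a 1-day interval: 0.999
phi_day^25          # survival over a 25-day interval: ~0.9753
# Conversely, survival over a 25-day interval, expressed
# per day, is its 25th root:
(phi_day^25)^(1/25) # back to 0.999
```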

