Unable to calculate c-hat

Postby cristinvera » Thu Jul 23, 2020 3:36 pm

Hi there,

I am looking for some advice with my survival analysis. I have been given an extremely small data set and have been doing what I can with said data. I have been following the MARK book throughout and have lost count of the number of times I have read chapter 5!

In order to calculate c-hat so I can adjust to QAIC, I initially attempted to do so within MARK using RELEASE (taking the sum of the Test 1 and Test 2 chi-square values divided by the overall df). However, I keep getting zeros for both the chi-square and the df, and the test summaries tell me I do not have sufficient data.
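For reference, the calculation I have been attempting is just the pooled chi-square divided by the pooled degrees of freedom. A minimal sketch (in Python, with made-up numbers purely for illustration, since my own output is all zeros):

    # c-hat from pooled RELEASE goodness-of-fit components (hypothetical values)
    chisq_tests = [4.2, 6.1]   # chi-square contributions from the RELEASE tests (illustrative only)
    df_tests = [3, 5]          # corresponding degrees of freedom (illustrative only)

    c_hat = sum(chisq_tests) / sum(df_tests)   # pooled chi-square / pooled df
    print(c_hat)                               # ~1.29 here; my data give 0/0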

I then tried to run it in Program U-CARE to see if I could get anything from that, but each time I try to import my file (.inp) after selecting the MARK format, I receive error messages saying that U-CARE was unable to perform the analyses.

I am pretty desperate right now and not sure what else I could do to calculate and adjust for c-hat. This is my first time using MARK, so maybe I am missing something obvious.

Any help or advice would be massively appreciated!
cristinvera
 
Posts: 3
Joined: Fri Jul 17, 2020 11:08 am

Re: Unable to calculate c-hat

Postby cooch » Thu Jul 23, 2020 3:49 pm

RELEASE (and U-CARE) are based on a series of contingency tables and, as a result, are susceptible to sparse data sets, which lead to lots of zero cells in the contingency tables. Any decent biometrics text will tell you there are many problems with pooling sparse cells to achieve minimum sufficiency for any sort of inference.

With sparse data, your options are extremely limited, but you might consider the Fletcher c-hat, which has very good properties, and might give you something useful. But with sparse data, you might have to accept the possible reality that there is nothing you can do. To paraphrase John Tukey: "The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data."
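Whatever estimate you end up with, the adjustment itself is mechanical: the model deviance is divided by c-hat before the information criterion is computed (MARK recomputes the model table for you once you set c-hat). A minimal sketch in Python, with illustrative numbers only:

    # QAICc given an estimate of c-hat (all numbers hypothetical)
    neg2lnL = 152.3    # -2 log-likelihood of the model
    K = 4              # number of estimated parameters
    n_eff = 60         # effective sample size
    c_hat = 1.3        # overdispersion estimate; values below 1 are conventionally set to 1

    c = max(c_hat, 1.0)
    qaicc = neg2lnL / c + 2 * K + (2 * K * (K + 1)) / (n_eff - K - 1)
    print(round(qaicc, 2))   # quasi-likelihood-adjusted AICc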
cooch
 
Posts: 1628
Joined: Thu May 15, 2003 4:11 pm
Location: Cornell University

Re: Unable to calculate c-hat

Postby cristinvera » Fri Jul 24, 2020 12:21 pm

cooch wrote:RELEASE (and U-CARE) are based on a series of contingency tables and, as a result, are susceptible to sparse data sets, which lead to lots of zero cells in the contingency tables. Any decent biometrics text will tell you there are many problems with pooling sparse cells to achieve minimum sufficiency for any sort of inference.

With sparse data, your options are extremely limited, but you might consider the Fletcher c-hat, which has very good properties, and might give you something useful. But with sparse data, you might have to accept the possible reality that there is nothing you can do. To paraphrase John Tukey: "The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data."


Thank you so much for getting back to me so quickly! I've looked into the Fletcher c-hat today and it is just under 1 for both my models, so I will adjust for it and see if it gives me anything useful, although I do think that John Tukey "quote" may be all too applicable in my case...
cristinvera
 
Posts: 3
Joined: Fri Jul 17, 2020 11:08 am

