C hat of 0.000

Postby heg90 » Thu Apr 25, 2019 2:56 pm

Hi,

I'm curious whether anyone has dealt with a c-hat of 0.000 when running a GOF test in PRESENCE. When I ran the GOF test on my most parameterized model, it had a c-hat value of 1.8 (indicating overdispersion, in which case I should adjust for it and use QAIC to rank models). However, after running simpler models and settling on a top model, I was surprised to see that its c-hat value was 0.000. I assumed that once I simplified the models I would still have an overdispersed model to adjust for, so I'm not sure how to interpret or handle the 0.000 score.

I did some reading in the forums and the MARK book, and it seems the verdict on what to do with c-hat < 1 is still undecided in the literature. I know this typically implies underdispersion, but I'm not sure whether there is something I may have overlooked when I ran my models. Some forum posts suggested that this could be a result of sparse data, which I certainly have, but I don't know if there is anything I can do to counter the problem. A score of 0.000 seems incredibly low, and I'm not sure what it could mean.

Any insight into this problem would be greatly appreciated!

~Holly

Re: C hat of 0.000

Postby jhines » Thu Apr 25, 2019 3:41 pm

Hi Holly,

A c-hat of zero means that every simulated dataset had a higher test statistic than the real data. It should get worse with simpler models. If you send me the most recent backup zip file, I'd be happy to take a look and see what's happening.
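
To make that concrete, here's a rough sketch of the bootstrap arithmetic (illustrative only, not PRESENCE's actual code; the function name and the numbers are made up). Following MacKenzie & Bailey (2004), the p-value is the proportion of simulated test statistics at least as large as the observed one, and c-hat is the observed statistic divided by the mean of the bootstrap distribution:

```python
import numpy as np

def bootstrap_gof_summary(observed_stat, boot_stats):
    """Summarize a parametric-bootstrap GOF test: observed_stat is
    the Pearson chi-square from the real data, boot_stats the same
    statistic from each dataset simulated under the fitted model."""
    boot_stats = np.asarray(boot_stats, dtype=float)
    # p-value: share of simulated statistics >= the observed one
    p_value = np.mean(boot_stats >= observed_stat)
    # c-hat: observed statistic over the bootstrap mean
    c_hat = observed_stat / boot_stats.mean()
    return p_value, c_hat

# If every simulated statistic exceeds the observed one (the case
# described above), p is 1.0 and c-hat falls well below 1:
p, c = bootstrap_gof_summary(12.0, [200.0, 350.0, 500.0])
print(p, c)  # 1.0 0.0343...
```

So a c-hat of 0.000 falls out directly whenever the bootstrapped statistics dwarf the observed one.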

Jim

Re: C hat of 0.000

Postby darryl » Thu Apr 25, 2019 4:20 pm

Hi Holly,
You may also want to update to the most recent version of PRESENCE (if you haven't already), as we recently identified a small bug with the GOF test that was causing a similar issue in some circumstances (all of the bootstrapped chi-square values were stupidly large). Jim made the fix a week or two ago.
Cheers
Darryl

Re: C hat of 0.000

Postby heg90 » Fri Apr 26, 2019 1:34 pm

Hi,

Thanks so much for both of your replies. I'm going to try downloading the latest version and rerunning my models.

Jim, perhaps I'm not understanding how to properly run a GoF test. From my reading, most books and papers I've come across stress the importance of running the GoF test on the most parameterized, global model. My global model had a c-hat score of 1.8. Is it correct that, to appropriately apply the correction factor for overdispersion, I would set the c-hat value in PRESENCE to 1.8 and then assess my complete model list?

I'm a little confused about whether, once I've run a GoF test on my global model and corrected for overdispersion, I then have to run the GoF test on my top model. From what you're saying, it sounds like a c-hat value of 0.000 is not surprising for my top model, because the c-hat score will get worse for simpler models. If that is the case and I adjust based on my global model before running my model set, do I need to do any further assessment of my top model? I initially thought I would need to do a second GoF test on the top model, which is why the value of 0.000 seemed weird.

Apologies if that is a basic question; this is my first time running a GoF test, and the literature is a bit confusing on exactly how to implement it.

Re: C hat of 0.000

Postby cooch » Fri Apr 26, 2019 1:41 pm

You need to run the GOF test on the most general, global model only; you do not want or need to run it on other, individual models. You estimate the lack of fit for the most parameterized model in the model set, estimate c-hat as a metric characterizing how far your general model is from the saturated model, and use that c-hat estimate from the general model to adjust the model selection accordingly (typically, using a quasi-likelihood adjustment to the AIC).
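
If it helps to see the arithmetic, here is a rough sketch of that quasi-likelihood adjustment (an illustrative helper, not a PRESENCE or MARK function; note that some authors also count the estimated c-hat itself as one extra parameter in K):

```python
def qaic(neg2loglike, k, c_hat, n=None):
    """Quasi-likelihood AIC: the deviance (-2 log-likelihood) is
    rescaled by c-hat before the usual 2K penalty is added. Passing
    n (e.g., number of sites) gives the small-sample QAICc variant."""
    q = neg2loglike / c_hat + 2 * k
    if n is not None:
        # QAICc small-sample correction
        q += 2 * k * (k + 1) / (n - k - 1)
    return q

# Example: rescoring one model (-2logL = 250.0, K = 6 parameters)
# with the c-hat of 1.8 estimated from the global model's GOF test.
print(qaic(250.0, 6, 1.8))        # QAIC  ~ 150.9
print(qaic(250.0, 6, 1.8, n=42))  # QAICc ~ 153.3
```

The key point is that the single c-hat from the global model rescales every model's deviance; you never re-estimate c-hat model by model.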

It might help with some of the conceptual bits to wade through the first 5-7 pages of the GOF chapter in the MARK book (chapter 5), and perhaps the last 2-3 pages as well. You're using PRESENCE, but the underlying 'theory' is general, not software-specific.

Re: C hat of 0.000

Postby jhines » Fri Apr 26, 2019 1:46 pm

Sorry to have confused the issue. Not surprisingly, Evan said it succinctly. When I said "worse", I meant that less-complicated models should have a higher (worse) c-hat than the most general model.

Re: C hat of 0.000

Postby heg90 » Fri Apr 26, 2019 1:58 pm

Thank you all so much for the feedback! That clears things up tremendously.

Re: C hat of 0.000

Postby jhines » Fri Apr 26, 2019 4:03 pm

Hi Holly,

Thanks for sending me the data. The most recent version of PRESENCE is 12.29, which is the one Darryl mentioned that fixes the GOF bug.

A nuance of GOF tests with covariates is that you should test the most general model under consideration in your model set. In your case, you only have 42 sites, so I don't think models with more than 4 covariates should be considered.

Re: C hat of 0.000

Postby heg90 » Fri Apr 26, 2019 5:08 pm

Hi,

I downloaded the latest version (12.29) and specified the new file location. The file says it was last updated April 15th, so I think it is the most up-to-date version. Unfortunately, I'm still getting very low c-hat values when I run the GoF test: I tried models with 5, 4, and 2 covariates for occupancy (not counting the 2 covariates for detection probability), and all three trials resulted in c-hat values of 0 or close to 0. I know this isn't an appropriate strategy for choosing a global model, but I wanted to see whether the trend held even when I limited the number of covariates in my global model.

Thank you for your input. I was hesitant to use too many covariates when I created my candidate model set, and I tried to limit it so that no one occupancy model contains more than 3 covariates (not including covariates for detection probability). I thought that a global model with 5 covariates for occupancy might be more representative to start with, since I know it is over-parameterized and every model after it would contain no more than three. Do you mean that the global model I select should only contain 4 covariates total, including the ones I used to model detection probability? I have a large list of covariates that I'm testing for different reasons, so I'm unsure of the best strategy for picking a model to run the GoF test on. The MacKenzie and Bailey (2004) paper discusses the possibility of running the GoF test on a more parsimonious model, but doesn't go into detail on how you would select a global model from the list of covariates available.

I've spent quite some time thinking about the covariates I will be using and whether there is any way to reduce their number. I was advised that I could use numerous covariates to model occupancy as long as no single model contains more than 3 covariates at a time, given my dataset.