Hi All,
I'm running into some problems with a Huggins p and c robust design analysis. For some of the models I am running, I'm getting numerical overflow and underflow in one particular primary period:
          N-hat    Standard Error         Lower         Upper
  13  5.0000000    0.2897003E-012     5.0000000     5.0002257
I understand why this is happening: the 5 unique individuals seen in that primary period were all seen in the first secondary period and all recaptured in the second, and there are only two secondary periods in that primary period all up. With every animal detected on every occasion, the detection probability is estimated at the boundary, so the estimate comes out looking super precise. What I'm unsure about is the general protocol for handling data like this, and whether there is some way to reduce the overflow and underflow?
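For anyone who wants to see the arithmetic behind this, here is a rough sketch (not MARK's actual code) of the Huggins conditional estimator for a single primary period with two secondary occasions and a constant detection probability p. The data below are an assumed reconstruction of the situation described: 5 animals, every one caught on both occasions, so the MLE of p sits at the boundary of 1 and the variance of N-hat collapses to zero:

```python
# Illustrative sketch of the Huggins closed-capture estimator
# (two secondary occasions, constant p). NOT MARK's implementation.

# Assumed capture histories: 5 individuals, all "11"
histories = ["11"] * 5
n_caught = len(histories)
n_occasions = 2

# For the constant-p model with every history equal to "11",
# the conditional MLE of p is at the boundary: p-hat = 1.
total_detections = sum(h.count("1") for h in histories)
p_hat = total_detections / (n_caught * n_occasions)

# Probability of being caught at least once in the primary period
p_star = 1 - (1 - p_hat) ** n_occasions

# Huggins abundance estimate: each captured animal contributes 1/p*
N_hat = n_caught / p_star

# Approximate variance: sum of (1 - p*) / p*^2 over captured animals,
# which is exactly 0 when p* = 1 (hence the E-012 underflow in MARK)
var_N = n_caught * (1 - p_star) / p_star ** 2

print(p_hat, N_hat, var_N)
```

With p* estimated at exactly 1, the variance term is analytically zero, so the tiny 0.2897003E-012 standard error in the output is just floating-point underflow around that zero rather than a meaningful precision estimate.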
Thanks,
Tamlyn