I have a quick question about estimating survival over intervals of different lengths.
If you have collected data semi-annually and model survival over these 6-month intervals, you will obtain one estimate. Presumably you would then square this estimate to produce a yearly survival estimate (which is often more useful for various reasons).
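For concreteness, here is the scaling I mean, with a made-up number standing in for a real estimate:

```python
# Hypothetical 6-month survival estimate (illustrative value only)
phi_6mo = 0.90

# Assuming survival is constant across the two 6-month intervals,
# the yearly survival estimate is the 6-month estimate squared.
phi_year_from_6mo = phi_6mo ** 2
print(phi_year_from_6mo)  # 0.81
```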
However, if you ignore half of your data and just model survival over 1-year intervals, you will obtain a different estimate of yearly survival.
In nearly all cases in which I've tried this, a modeled 1-year survival estimate is greater than a squared 6-month survival estimate.
I cannot figure out why this occurs or what to make of it. Does it stem from the fact that squaring the shorter-interval estimate compounds its uncertainty (i.e., its limited precision)?
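To show what I mean by compounding, a rough delta-method calculation (again with made-up numbers) gives the standard error you would attach to the squared estimate:

```python
# Hypothetical 6-month estimate and its standard error (illustrative values)
phi_6mo = 0.90
se_6mo = 0.03

# Delta-method approximation for the SE of phi_6mo squared:
# var(phi^2) ~= (d(phi^2)/dphi)^2 * var(phi) = (2 * phi)^2 * var(phi)
se_year_from_6mo = 2 * phi_6mo * se_6mo
print(se_year_from_6mo)  # 0.054
```

The relative SE roughly doubles here (0.03/0.90 vs 0.054/0.81), which is the sort of compounding I have in mind, though I am not sure it explains the consistent direction of the disparity.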
Philosophically, it seems you'd want to use all of your data in an analysis, which means modeling survival over the 6-month intervals. But if doing so underestimates yearly survival, what good are the resulting estimates? The problem compounds further when data are collected every month and you want to derive a yearly survival estimate.
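As an illustration of how quickly this compounds with monthly data (numbers again made up), raising monthly estimates to the 12th power amplifies even a tiny per-interval difference:

```python
# Two hypothetical monthly survival estimates differing by only 0.005
phi_monthly_a = 0.980
phi_monthly_b = 0.975

# Scaling each to a yearly estimate (12 monthly intervals) turns the
# small monthly difference into a much larger yearly difference.
print(phi_monthly_a ** 12)  # ~0.785
print(phi_monthly_b ** 12)  # ~0.738
```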
Any clarity on why the disparity exists and how to interpret this problem would be greatly appreciated.
Many thanks,
Brian Todd