BetaHats should not change

Postby geofgivens » Fri Oct 28, 2016 2:43 pm

I've moved this question to the bug forum because further investigation has shown very weird behavior.

I'm fitting an RD (Huggins) model with one covariate. When I ask for real parameter predictions with the default (mean) covariate values, I get answer "A" for the BetaHats (and NHats). When I ask for predictions with the covariate set to a different value, I get answer "B". When I set the value for another covariate, one that isn't in the model, I get answer "C".

The BetaHats and NHats should not change when I change the covariate value used for real parameter prediction. And they certainly should not change when I change the value of a covariate that isn't even in the model.
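
To make the logic concrete, here's a minimal sketch in plain Python (not MARK code; the logit link and the beta values are purely hypothetical choices for illustration). The covariate value only enters when converting betas to real estimates, so changing it can change the real estimate but not the BetaHats:

Code:
import numpy as np

# Minimal sketch, NOT MARK code: real parameter prediction from fixed
# beta estimates through a logit link.  The covariate value only enters
# at prediction time, so changing it changes the real estimate (p-hat)
# but cannot change the beta-hats themselves.
def real_estimate(betas, iscore):
    eta = betas[0] + betas[1] * iscore        # linear predictor
    return 1.0 / (1.0 + np.exp(-eta))         # inverse logit

beta_hat = np.array([-0.50, 1.20])            # hypothetical estimates

print(real_estimate(beta_hat, 0.8929224))     # prediction at the mean iscore
print(real_estimate(beta_hat, 2.0))           # prediction at iscore = 2
# beta_hat is identical in both calls; only the real estimate differs.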

I have the example in a DBF file for examination. I'm copying some output below so you can see I'm not crazy (look at the BICs). I can post the BetaHats, too, if you need further convincing.

So, what's going on here? Thanks!

Geof

Note: in the results below, the mean covariate value for the covariate in the model ("iscore") is .8929224. The covariate "numz" is in none of these models. The modest changes in BIC belie much larger changes in BetaHats and NHats.

Code:
---------------------------------------------------------------------------------------------------------------------
                                                                        Delta     BIC     Model                     
Model                                                             BIC     BIC   Weight  Likelihood   #Par    Deviance
---------------------------------------------------------------------------------------------------------------------
{iscore; default covariate values}                           2147.787    0.00  0.34432      1.0000  8.0000   2091.127
{iscore; set  cov for reals = iscore=.8929224}               2147.961    0.17  0.31573      0.9170  8.0000   2091.300
{iscore; set cov for reals = iscore=.8929224 and numz=4}     2148.357    0.57  0.25894      0.7520  8.0000   2091.697
{iscore; set cov for reals = iscore=2}                       2150.708    2.92  0.07992      0.2321  8.0000   2094.048
{Null model (no covariates)}                                 2159.306   11.52  0.00109      0.0032  7.0000   2109.728
---------------------------------------------------------------------------------------------------------------------


Re: BetaHats should not change

Postby geofgivens » Wed Nov 02, 2016 12:46 pm

This puzzle has been solved.

It is indeed true that the BetaHats changed when I re-ran the model with different covariate prediction values. However, it turned out that the results also changed when I changed nothing.

In other words, the problem was non-convergence. I don't know why the results would change unless there is some randomness in the optimization procedure (random starting value?), but they did.
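
For what it's worth, here's a toy sketch (plain Python with SciPy, nothing to do with MARK's optimizer) of how a nearly flat ridge in the likelihood can produce this: two runs that both report "convergence" stop at very different parameter values depending on where they start.

Code:
import numpy as np
from scipy.optimize import minimize

# Toy sketch, not MARK's optimizer: two nearly confounded parameters
# create a long, flat ridge, so the reported "estimates" depend on the
# starting values even though both runs claim convergence.
def neg_loglik(b):
    return (b[0] + b[1] - 1.0) ** 2 + 1e-8 * (b[0] - b[1]) ** 2

for start in ([0.0, 0.0], [5.0, -4.0]):
    fit = minimize(neg_loglik, start, method="BFGS")
    print(start, "->", np.round(fit.x, 3), "converged:", fit.success)
# Both runs report success, but the estimates differ substantially.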

The problem was fixed by switching to the "alternate optimization method" (simulated annealing?). Also, re-parameterizing the design matrix seemed to help. It seems ironic that this approach provided the same answer across multiple runs since SA is a stochastic procedure, but I suppose perhaps the same random seed is used(?).

Anyhow, it appears that the question has been resolved. I hope this post may help someone facing the same issue in the future.

Re: BetaHats should not change

Postby cooch » Wed Nov 02, 2016 1:02 pm

geofgivens wrote:
The problem was fixed by switching to the "alternate optimization method" (simulated annealing?). Also, re-parameterizing the design matrix seemed to help. It seems ironic that this approach provided the same answer across multiple runs since SA is a stochastic procedure, but I suppose perhaps the same random seed is used(?).


SA uses random jumps during its exploration of the likelihood surface, but the algorithm will converge on the MLE provided the likelihood surface is smooth and continuous near the parameter(s). Unless there is something 'odd' about the likelihood surface (multi-state models being a notorious culprit), multiple SA runs invariably yield the same answer. SA uses an adaptive stochastic jump rule, but it is *not* a stochastic estimation routine (i.e., SA is not MCMC).
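
A small illustrative sketch of that point, using SciPy's dual_annealing as a stand-in (an assumption on my part; this is not MARK's SA routine): on a smooth, single-optimum surface, annealing runs started with different random seeds still land on the same minimizer.

Code:
import numpy as np
from scipy.optimize import dual_annealing

# Illustrative sketch (SciPy's dual_annealing, not MARK's SA routine):
# on a smooth, single-optimum surface, annealing runs with different
# random seeds all converge to the same answer.
def neg_loglik(b):
    return (b[0] - 1.0) ** 2 + (b[1] + 2.0) ** 2

bounds = [(-10.0, 10.0), (-10.0, 10.0)]
for seed in (1, 2, 3):
    fit = dual_annealing(neg_loglik, bounds, seed=seed)
    print("seed", seed, "->", np.round(fit.x, 6))
# The random jumps differ between runs, but each run converges to
# (1, -2); a stochastic search does not mean a stochastic answer.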

