Changing results when re-running models


Postby ewhite » Wed Oct 30, 2019 5:54 pm

Hi all,

I've been running multi-state models in MARK and noticed today that when I re-run models (not changing anything, just re-loading the model and re-running it), I get different results (in terms of deviance, number of parameters, and AIC values) each time I run the model.

Any ideas why this would happen, or how I can get consistent results with each model run?

Thanks,
Emma
ewhite
 
Posts: 2
Joined: Wed Oct 30, 2019 5:29 pm

Re: Changing results when re-running models

Postby jlaake » Wed Oct 30, 2019 6:24 pm

Try setting threads=1. Using multiple CPUs can lead to slightly different results. It will take longer with a single thread.
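[To illustrate Jeff's point — this is plain Python, not MARK: floating-point addition is not associative, so when work is split across threads the partial sums can be combined in a different order each run, changing the low-order digits of the result.]

```python
# Floating-point addition is not associative: summing the same numbers
# in a different order (as parallel reductions effectively do) can give
# slightly different totals.
vals = [0.1] * 10 + [1e16, -1e16]

left_to_right = 0.0
for v in vals:
    left_to_right += v

right_to_left = 0.0
for v in reversed(vals):
    right_to_left += v

# The two orderings disagree, even though the "true" sum is identical.
print(left_to_right == right_to_left)
print(left_to_right, right_to_left)
```

With a single thread the summation order is fixed, so the result is reproducible — at the cost of speed, as Jeff notes.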
jlaake
 
Posts: 1417
Joined: Fri May 12, 2006 12:50 pm
Location: Escondido, CA

Re: Changing results when re-running models

Postby nperlut » Thu Oct 31, 2019 9:28 am

I am having the same issue with CJS models, where I run the exact same model and get different results. I tried your suggestion and set threads=1, but the deviance still differs by 0.2. Any other thoughts on what is going on?
nperlut
 
Posts: 14
Joined: Tue Feb 07, 2006 9:28 am

Re: Changing results when re-running models

Postby jlaake » Thu Oct 31, 2019 10:13 am

Are you using the MARK interface or RMark? If the latter, send me the data and code and I'll look into it. If it's the MARK interface, Gary will have to address it.
jlaake
 
Posts: 1417
Joined: Fri May 12, 2006 12:50 pm
Location: Escondido, CA

Re: Changing results when re-running models

Postby ewhite » Thu Oct 31, 2019 10:18 am

I'm using the MARK interface.
ewhite
 
Posts: 2
Joined: Wed Oct 30, 2019 5:29 pm

Re: Changing results when re-running models

Postby cooch » Thu Oct 31, 2019 5:08 pm

I was just about to make the same suggestion Jeff did. Since that proposed solution didn't work, the most likely remaining possibility is that one or more parameters are being estimated near a boundary, which can cause problems.

Send me the .fpt and .dbf files, and I'll have a look.
cooch
 
Posts: 1628
Joined: Thu May 15, 2003 4:11 pm
Location: Cornell University

Re: Changing results when re-running models

Postby gwhite » Sat Nov 02, 2019 7:45 pm

The reason you are getting different results is that your models are horribly over-parameterized. You have specified 215 beta parameters, but MARK is estimating only 52 or 53. The likelihood is nearly flat. Plus, you are running these models with 4 threads, which in a situation like this will produce small differences between runs.
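[A toy sketch of what a flat likelihood does — plain Python, not MARK. Here only the sum of two betas enters the objective, so the surface is flat along their difference: different starting values converge to different betas with essentially the same deviance.]

```python
# Non-identifiable toy model: the objective depends only on b1 + b2,
# so the surface is flat along b1 - b2.
def deviance(b1, b2):
    return (b1 + b2 - 2.0) ** 2

def minimize(b1, b2, lr=0.1, steps=200):
    # simple gradient descent on both betas
    for _ in range(steps):
        g = 2.0 * (b1 + b2 - 2.0)  # same gradient for each beta
        b1 -= lr * g
        b2 -= lr * g
    return b1, b2

fit_a = minimize(0.0, 0.0)   # one starting point
fit_b = minimize(5.0, -1.0)  # another starting point

print(fit_a, deviance(*fit_a))  # betas differ between runs...
print(fit_b, deviance(*fit_b))  # ...but the deviance is the same
```

With 215 betas and only ~52 estimable, the real surface is flat in many directions at once, so thread timing and starting values both shift where the optimizer ends up.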

You need to start with a much simpler model and use the estimates from it to build more complex models. The Gentle Introduction provides details of how to do this. You cannot expect a model with 215 parameters to optimize correctly without good starting values.

Gary
gwhite
 
Posts: 329
Joined: Fri May 16, 2003 9:05 am

