The problem I am trying to solve is this: our computers have many cores (eight logical cores on an i7 with hyper-threading), yet we use only one of them for Mark. It would be nice to use all of them.
Through a series of experiments I have worked out some solutions for using multiple cores/threads under Windows (they may work on other systems, but I have not tried; I am an amateur, not a professional).
Since writing the final functions will take some time, I would first like to know:
a) Is it really needed by anybody else?
b) Are there any working solutions already written?
I am still working on a huge multistrata dataset using simulated annealing. Since I have many different factors that may or may not affect survival and/or transitions, the number of models to test is huge.
1. I modified Jeff's mark.wrapper function, adding the ability to extract Mark output from Mark files without knowing the relationship between model.name and the xxx in the markxxx files (this is needed for parallel runs).
2. I found a nice R package, "batch", that can run several copies of a script in several R processes (from the command line). Each of those R processes can then run a Mark process through RMark.
3. The next step was to add a couple of locks and logical switches.
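To illustrate the locking idea in step 3, here is a rough sketch of how each parallel R process could claim a model before fitting it. This is my illustration only, not the actual code: the function name claim.model, the lock naming scheme, and the model names are all hypothetical. The trick is that dir.create() on the same path succeeds for only one process, so no model gets run twice.

```r
## Sketch of a file-system lock for parallel model runs (hypothetical code).
## Each R process started by "batch" calls claim.model() before fitting a
## model; dir.create() returns TRUE only for the first process to create the
## lock directory, so every other process skips that model.
## In real use lock.dir must be a directory shared by all the processes
## (e.g. the common working directory), not a per-process tempdir().
claim.model <- function(model.name, lock.dir = tempdir()) {
  lock <- file.path(lock.dir, paste("lock", model.name, sep = "."))
  dir.create(lock, showWarnings = FALSE)  # TRUE only for the first claimant
}

## Demo: the first claim succeeds, a repeated claim is refused.
first  <- claim.model("Phi.dot.p.dot")
second <- claim.model("Phi.dot.p.dot")
```

Creating a directory is a convenient lock primitive here because the operating system makes it atomic: two processes cannot both succeed in creating the same directory, whereas checking for a plain file and then writing it leaves a race window.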
Now the whole thing works fine, except that I have not yet finalized it in functional form.
If you are interested in the idea of parallel runs, let's discuss how to turn this logic into a function (I can see several types of final functions aimed at end users with different levels of R experience).
I think that if we make progress here we can move on to clusters, which would speed up the process a lot!
Eldar