large RD data sets and computation time


large RD data sets and computation time

Post by rasrage » Mon Nov 16, 2015 3:54 pm

Dear all,
I am new to RMark and was wondering how large a data set it can handle. I am looking at a Robust Design data set with 27 years of monthly trapping data (so that's 324 primary periods), with 4 occasions per primary period, and individual counts across the entire study in the 10,000s to 100,000s (I think R should just about be able to handle a 100,000 x 1300 matrix). Any opinion on whether fitting a model to this amount of data would be feasible, and any idea of how much computational time that might take (just an order of magnitude), would be greatly appreciated.
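[For readers wondering where the 100,000 x 1300 figure comes from, a back-of-the-envelope check in R, using only the numbers given in the post:]

```r
# Rough size of the capture-history data described above
# (numbers taken from the post; purely back-of-the-envelope).
n_primary   <- 324                       # 27 years of monthly primary periods
n_secondary <- 4                         # occasions per primary period
n_occasions <- n_primary * n_secondary   # 1296 columns (~1300)
n_animals   <- 100000                    # upper end of the counts mentioned

# Capture histories are single character strings (one per animal),
# so the raw 'ch' column is roughly n_animals * n_occasions bytes:
bytes <- n_animals * n_occasions
bytes / 1024^2   # on the order of 100 MB -- large, but within R's reach
```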
Thank you!
Rahel
rasrage
 
Posts: 17
Joined: Thu Jul 23, 2015 5:02 pm

Re: large RD data sets and computation time

Post by jlaake » Mon Nov 16, 2015 4:01 pm

I have never attempted a problem of that size. Computation time depends on the model you run. RMark simplifies the design matrix to the unique rows prior to sending it to MARK. Most of your questions relate to MARK limits rather than RMark. If I understand correctly your capture history will be over 1200 characters (324*4). Not sure what limits you'll encounter with MARK.

I'd just suggest trying it and seeing what happens. I'd start with a very simple model and then use it to provide starting values for the more complex models, which will take longer to fit.
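[A minimal sketch of Jeff's suggestion in RMark, assuming the data are already in RMark's input format; the data frame `rd.data`, the `time.intervals` vector, and the formulas are illustrative, not a recommendation for this particular data set:]

```r
library(RMark)

# Process the data for a Robust Design analysis; time.intervals is 0
# within a primary period and > 0 between primary periods.
rd.proc <- process.data(rd.data, model = "Robust",
                        time.intervals = time.intervals)
rd.ddl  <- make.design.data(rd.proc)

# 1. Fit a very simple, constant model first
#    (unspecified parameters default to ~1).
simple <- mark(rd.proc, rd.ddl,
               model.parameters = list(S = list(formula = ~1),
                                       p = list(formula = ~1, share = TRUE)))

# 2. Pass its estimates as starting values for a more complex model
#    via the 'initial' argument, which accepts a fitted model object.
complex <- mark(rd.proc, rd.ddl,
                model.parameters = list(S = list(formula = ~time),
                                        p = list(formula = ~session, share = TRUE)),
                initial = simple)
```

`mark()` matches starting values to the new model's beta parameters by name, so starting from a simpler fitted model can substantially cut the optimization time of the larger models.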

--jeff
jlaake
 
Posts: 1479
Joined: Fri May 12, 2006 12:50 pm
Location: Escondido, CA

Re: large RD data sets and computation time

Post by rasrage » Mon Nov 16, 2015 4:07 pm

Thanks, Jeff!
rasrage
 
Posts: 17
Joined: Thu Jul 23, 2015 5:02 pm

