I decided to give Presence a whirl and experimented with different design matrices. To make sure I was specifying models correctly, I started comparing some results to MARK, where I'm more up to speed.
I stumbled upon some inconsistencies in Presence and am wondering if other folks have observed this.
My data consists of 31 sites and 5 primary sessions, with 3 surveys in each session. In all cases I ran equivalent models in MARK and Presence (MARK: RD Occupancy Estimation with psi, gamma; PRESENCE: seasonal occupancy and colonization).
I ended up comparing output from the two programs for 4 design matrices, all modelling temporal variation in Psi:
Identity:
10000
01000
00100
00010
00001
Intercept 1:
10000
11000
10100
10010
10001
Intercept 2:
11000
10100
10010
10001
10000
Means:
11000
10100
10010
10001
1 -1 -1 -1 -1
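
Just so it's clear what I think each matrix is doing, here's my reading of the four parameterizations on the logit scale (my own notation, not either program's):

\begin{aligned}
\text{Identity:}    &\quad \operatorname{logit}(\psi_t) = \beta_t, && t = 1,\dots,5\\
\text{Intercept 1:} &\quad \operatorname{logit}(\psi_1) = \beta_1,\ \operatorname{logit}(\psi_t) = \beta_1 + \beta_t, && t = 2,\dots,5\\
\text{Intercept 2:} &\quad \operatorname{logit}(\psi_5) = \beta_1,\ \operatorname{logit}(\psi_t) = \beta_1 + \beta_{t+1}, && t = 1,\dots,4\\
\text{Means:}       &\quad \operatorname{logit}(\psi_t) = \beta_1 + \beta_{t+1},\ t = 1,\dots,4;\quad \operatorname{logit}(\psi_5) = \beta_1 - \textstyle\sum_{j=2}^{5}\beta_j
\end{aligned}

All four matrices are full rank, so they're just different parameterizations of the same five season-specific Psi's and should give identical maximized likelihoods.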
Psi(T), gam(.), p(.) produced identical results across all design matrices and both programs (8 runs). Woo Hoo!!

Psi(T), gam(.), p(T) (p varies across primary sessions, but not within) produces problems.

In MARK, all models were identical (same AIC, log-likelihood, etc.).
In Presence, the Means and Intercept 1 models match each other and MARK. For these models, -2LL = 493.339853.
For Intercept 2: -2LL = 498.574913 (a deltaAIC of ~5!!).
For Identity: -2LL = 494.082105.
All models have the same # of parameters, and I've checked and double-checked the 2 design matrices causing the problems.
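
In case anyone wants to verify the equivalence independently, here's a minimal sketch in Python/NumPy (my own illustration; the variable names are made up). Two design matrices fit exactly the same model whenever they span the same column space, which this checks directly:

import numpy as np

# The four Psi design matrices, rows = primary sessions 1-5.
identity = np.array([[1, 0, 0, 0, 0],
                     [0, 1, 0, 0, 0],
                     [0, 0, 1, 0, 0],
                     [0, 0, 0, 1, 0],
                     [0, 0, 0, 0, 1]], dtype=float)
intercept1 = np.array([[1, 0, 0, 0, 0],
                       [1, 1, 0, 0, 0],
                       [1, 0, 1, 0, 0],
                       [1, 0, 0, 1, 0],
                       [1, 0, 0, 0, 1]], dtype=float)
intercept2 = np.array([[1, 1, 0, 0, 0],
                       [1, 0, 1, 0, 0],
                       [1, 0, 0, 1, 0],
                       [1, 0, 0, 0, 1],
                       [1, 0, 0, 0, 0]], dtype=float)
means = np.array([[1,  1,  0,  0,  0],
                  [1,  0,  1,  0,  0],
                  [1,  0,  0,  1,  0],
                  [1,  0,  0,  0,  1],
                  [1, -1, -1, -1, -1]], dtype=float)

for name, X in [("Identity", identity), ("Intercept 1", intercept1),
                ("Intercept 2", intercept2), ("Means", means)]:
    # Same column space as Identity <=> rank([X | I]) == rank(X).
    same_span = (np.linalg.matrix_rank(np.hstack([X, identity]))
                 == np.linalg.matrix_rank(X))
    print(f"{name}: rank = {np.linalg.matrix_rank(X)}, spans R^5 = {same_span}")

All four come back full rank and spanning the same space, which is why the -2LL differences surprise me.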
Any ideas? I'm using Presence 2.0 <070626.1646>
Thanks, Jay