-
1:55-2:20 PM
Computing issues concerning hierarchical models
-
Ken Burnham
-
The subject of capture-recapture (CR) data analysis is incomplete without
well-developed, user-capable software (such as MARK) that can fit models
incorporating random effects as well as fixed effects. The MCMC (i.e.,
Bayesian) model-fitting approach provides a sound theory for fitting all
CR models. Why not just use it? Answer: the lack of software suitable for
all potential users, and the relative slowness of MCMC analysis (there
are reasons why speed still matters). Pragmatically, I maintain that likelihood
analysis, including profile likelihood intervals as needed, is eminently
adequate for CR data analysis when models have all fixed effects: the
inferential results are virtually the same as those from MCMC with vague
priors, but the analyses are much faster. This talk considers what we
might be able to do to obtain CR random-effects inferences, say incorporated
into MARK, that are suitable approximations to MCMC inferences at a fraction
of the computing time.
The technical issue is simple. If all effects are considered fixed, numerical
likelihood inference requires only function evaluations over a K-dimensional
space. Exact inference for a set of K random parameters requires integration
of an even more complex function (than the likelihood) over a K-dimensional
space. Such integration is computationally more demanding than function
maximization. MCMC analysis has complexity at the same level as the integration
approach. The random-effects method in MARK can be reformulated and extended.
Doing so has value, even though I concede that the MCMC approach is the gold
standard for rigor and generality.
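As a toy illustration of the difference (hypothetical data and starting values, not MARK's implementation), the sketch below contrasts the fixed-effects MLE, which needs only function evaluations, with a marginal likelihood in which a yearly random effect is integrated out by Gauss-Hermite quadrature:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data: survivors y of n marked animals in each of 5 years.
y = np.array([43, 51, 38, 47, 40])
n = np.array([60, 60, 60, 60, 60])

# Fixed effects: the per-year MLE needs only function evaluations
# (here it is even available in closed form).
s_fixed = y / n

# Random effects: logit(s_t) = mu + b_t with b_t ~ N(0, sigma^2).
# The marginal likelihood integrates each b_t out; Gauss-Hermite
# quadrature replaces every 1-D integral with a weighted sum over nodes.
nodes, weights = np.polynomial.hermite.hermgauss(20)

def neg_marg_loglik(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    b = np.sqrt(2.0) * sigma * nodes          # change of variables for GH
    s = 1.0 / (1.0 + np.exp(-(mu + b)))       # survival at each node
    ll = 0.0
    for yt, nt in zip(y, n):
        f = s**yt * (1.0 - s)**(nt - yt)      # binomial kernel at each node
        ll += np.log(np.sum(weights * f) / np.sqrt(np.pi))
    return -ll

res = minimize(neg_marg_loglik, x0=[1.0, -1.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
```

Even in this one-dimensional case the marginal likelihood costs 20 node evaluations per year per function call, which is the cost that compounds as the dimension K grows.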
The talk will note several possible approximate-analysis approaches for
some types of random effects (along with fixed effects, so technically mixed models).
Under any of these inference computations, we augment the fixed-effects
likelihood with a model for the "randomly" varying parameters, a model that includes
fixed hyperparameters. Focus is usually on survival probabilities. In some sense the
resultant analysis is either a type of penalized likelihood for inference about random
effects, or a smoothed likelihood for inference about fixed effects, including
hyperparameters. Thus, the issue can be thought of more as a computing issue than as a
modeling issue: the same model underlies all inference methods for random effects.
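A minimal sketch of the penalized-likelihood idea (hypothetical data and an arbitrary penalty weight lam; not the specific method implemented in MARK): year-specific logit-survival parameters are shrunk toward a common mean mu.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data: survivors y of n marked animals in each of 5 years.
y = np.array([43, 51, 38, 47, 40])
n = np.array([60, 60, 60, 60, 60])

def neg_pen_loglik(params, lam=2.0):
    mu, theta = params[0], params[1:]          # theta_t = logit survival, year t
    s = 1.0 / (1.0 + np.exp(-theta))
    loglik = np.sum(y * np.log(s) + (n - y) * np.log(1.0 - s))
    penalty = lam * np.sum((theta - mu) ** 2)  # shrinks years toward mu
    return -loglik + penalty

res = minimize(neg_pen_loglik, x0=np.zeros(6), method="BFGS")
s_shrunk = 1.0 / (1.0 + np.exp(-res.x[1:]))    # shrunken yearly survival
```

The shrunken estimates lie between the raw yearly MLEs and their common mean, with the penalty weight playing the role that the hyperparameter sigma plays in the full random-effects model.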
-
2:20-2:45 PM
Estimating nest success rates and comparing methods available in SAS GLM, SAS NLMIXED,
and MARK
-
Jay Rotella, Steve Dinsmore & Terry Shaffer
-
Estimating nesting success and evaluating factors potentially related to nest survival
are key aspects of many studies of avian populations. A strong interest in nest survival
has led to a rich literature detailing a variety of estimation methods for this vital
rate. In recent years, modeling approaches have undergone especially rapid development.
Despite these advances, most recent field studies still employ Mayfield's (1961)
ad hoc method or, in some cases, the maximum-likelihood estimator of Johnson (1979) and
Bart and Robson (1982). Such methods allow for analyses of stratified data but do not allow
for more complex and realistic models of survival data that include covariates that vary
by individual, nest age, time, etc. and that may be continuous rather than categorical.
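For concreteness, the Mayfield estimator itself is a one-line exposure-days calculation (the numbers below are hypothetical):

```python
# Mayfield (1961): estimate the daily survival rate (DSR) of nests from
# nest-days of exposure. Hypothetical numbers for illustration.
exposure_days = 1500.0   # total nest-days over which nests were observed
failures = 36            # nests that failed during those exposure days

dsr = 1.0 - failures / exposure_days   # daily survival rate
nest_success = dsr ** 24               # success over a 24-day nesting period
```

The estimator treats every exposure day identically, which is exactly why it cannot accommodate survival that varies with nest age, date, or individual covariates.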
Methods that allow researchers to rigorously assess the importance of a variety of
biological factors that might affect nest survival can now be readily implemented in
Program MARK and in SAS's Proc GENMOD and Proc NLMIXED. In this paper, we first
describe the likelihood used for these models and then consider the question of what the
effective sample size is for computation of AICc. Next, we consider the advantages and
disadvantages of these different programs in terms of ease of data input and model
construction; utility/flexibility of generated estimates and predictions; ease of model
selection; and ability to estimate variance components. Finally, we discuss
improvements that would, if they became available, promote a better general
understanding of nest survival.
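A stripped-down version of the nest-survival likelihood (constant daily survival and hypothetical observation intervals; the programs above let survival depend on covariates) can be written as:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Simplified nest-survival likelihood with constant daily survival s:
# an observation interval of t days contributes s**t if the nest
# survived it and (1 - s**t) if it failed sometime during it.
# Hypothetical interval lengths (days) and fates (1 = survived).
t = np.array([3, 4, 3, 5, 4, 3, 4, 5, 3, 4])
fate = np.array([1, 1, 0, 1, 1, 1, 0, 1, 1, 1])

def neg_loglik(logit_s):
    s = 1.0 / (1.0 + np.exp(-logit_s))
    p = s ** t                                  # P(survive the interval)
    return -np.sum(fate * np.log(p) + (1 - fate) * np.log(1.0 - p))

res = minimize_scalar(neg_loglik, bounds=(-6, 6), method="bounded")
s_hat = 1.0 / (1.0 + np.exp(-res.x))            # MLE of daily survival
```

Replacing the single logit_s with a linear predictor in covariates gives the class of models fit by MARK, Proc GENMOD, and Proc NLMIXED.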
-
2:45-3:10 PM
M-SURGE: integrated software for multistate recapture models
-
Remi Choquet, Anne-Marie Reboulet, Roger Pradel, Olivier Gimenez & Jean-Dominique Lebreton
-
M-SURGE (along with its companion program U_CARE) has been written specifically to
handle multistate capture-recapture models (Lebreton and Pradel 2002) with the ultimate
concern to alleviate their inherent difficulties (model specification, quality of
convergence, flexibility of parameterization, assessment of fit…). In its domain, M-SURGE
covers a broader range of models than a general program like MARK (White and Burnham
1999), while being more user-friendly than a research program like MS-SURVIV (Hines
1994).
Among the main features of M-SURGE are a wide class of models and a variety of
parameterizations:
- M-SURGE integrates conditional models with probability of recapture depending on
the current state (Arnason-Schwarz type models) but also on the current and previous
states (Jolly-MoVement type models; Brownie et al. 1993). In both cases, age
and/or time dependence and multiple groups can be considered.
- Combined survival-transition probabilities can be represented as such or
decomposed into transition and survival probabilities (Hestbeck et al. 1991).
- Among the transition probabilities with the same state of departure, the user can
freely pick the one to be computed as 1 minus the others.
User-friendliness is enhanced by the ease with which constrained models are built,
using a language interpreted by GEMACO, a generator of design matrices. This
language resembles those of general statistical software such as SAS or GLIM: a
formula such as t+g generates a model with additive effects of time and group, thus
avoiding tedious and error-prone matrix manipulations in an editor or a spreadsheet.
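As a rough illustration of what such a formula expands to (treatment-style dummy coding chosen for this sketch; GEMACO's actual coding may differ), t+g for 4 occasions and 2 groups yields:

```python
import numpy as np

# Build the additive design matrix that a formula like t+g stands for,
# with 4 capture occasions and 2 groups (illustrative coding only).
n_t, n_g = 4, 2
cells = [(t, g) for g in range(n_g) for t in range(n_t)]
X = np.zeros((len(cells), 1 + (n_t - 1) + (n_g - 1)))
for i, (t, g) in enumerate(cells):
    X[i, 0] = 1.0                 # intercept
    if t > 0:
        X[i, t] = 1.0             # time effects (occasion 1 = reference)
    if g > 0:
        X[i, n_t - 1 + g] = 1.0   # group effect (group 1 = reference)
# Each real parameter then satisfies link(parameter) = X @ beta:
# additive time and group effects, with no interaction term.
```

Writing this 8 x 5 matrix by hand in a spreadsheet is exactly the tedious, error-prone step the formula language removes.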
Examples of various types of multistate models are developed and presented.
M-SURGE can be downloaded freely from ftp.cefe.cnrs-mop.fr/biom/Soft-CR.
-
3:10-3:45 PM
DENSITY: software for fitting spatial detection functions to data from passive sampling
-
Murray Efford & Deanna Dawson
-
Rigorous sampling of bird populations to estimate density raises the problem of
incomplete detection (e.g., Pollock et al. 2002). Varying detectability is widely
acknowledged, but variation in its spatial component (how detection declines with
distance) is considered less often. Active methods (double sampling and distance
sampling) require an observer to determine the instantaneous distance between the
sampling point and each animal. Passive methods (mist nets or traps) rely on animals
moving to the detector: instantaneous locations are unknown, and movement is an
important unmeasured component of detectability.
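A common choice of spatial detection function is the half-normal, in which capture probability declines smoothly with the distance between an animal's home-range centre and the detector; the parameter values below are illustrative, not DENSITY defaults.

```python
import numpy as np

# Half-normal spatial detection function: capture probability at a
# detector declines with the distance d (metres) from the animal's
# home-range centre. Parameter values here are purely illustrative.
def halfnormal(d, g0=0.2, sigma=25.0):
    """g0: capture probability at d = 0; sigma: spatial scale."""
    return g0 * np.exp(-(d ** 2) / (2.0 * sigma ** 2))

distances = np.array([0.0, 25.0, 50.0, 100.0])
p = halfnormal(distances)
```

Fitting g0 and sigma from recapture locations is what allows movement, the unmeasured component noted above, to be separated from density.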
New methods have been developed to fit spatial detection functions to capture-recapture
data from passive detectors. The methods are computer-intensive and depend on
specialised software ('DENSITY'), available for download at
www.landcareresearch.co.nz.
DENSITY provides a graphical interface for the analysis of closed-population
capture-recapture data from arrays of passive detectors. Its simulation capability
enables users to perform power analysis of different sampling designs before going into
the field.
We demonstrate the use of DENSITY to estimate bird population density from mist netting
data. Netting was conducted from 1992 to 1996 on a forest-pasture ecotone in Mexico. In
each of 16 netting sessions, six local arrays of 20 nets were run for 2-3 consecutive
days. Despite the large number of captures in total, within-session recaptures were rare
for most species. This restricted application of the method to a few common species
(e.g. Sporophila torqueola) and to species aggregates (e.g. 'winter residents').
Although the use of the method for mist-netting data was experimental, simulation in
DENSITY may lead to improved design of mist-net arrays where density estimation is a
study goal.
The spatially explicit framework of DENSITY opens up new possibilities and we expect the
software to evolve. An exciting prospect is the direct fitting of simple density
surfaces during estimation, given suitable spatial covariates.