95% Confidence Intervals versus 85% CIs


Postby brouwern » Sat Jun 21, 2014 6:14 pm

I have sometimes noticed that 95% confidence intervals on parameters are wider than I would expect given the results of AIC model selection. I just read Arnold 2010, "Uninformative Parameters and Model Selection Using Akaike's Information Criterion" (JWM 74:1175–1178), where he recommends the use of 85% confidence intervals, stating that they are more compatible with the results of AIC model selection.
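As I understand Arnold's argument, adding one parameter lowers AIC whenever the likelihood-ratio statistic exceeds 2, i.e. |z| > sqrt(2) for a Wald test, which corresponds to a two-sided confidence level of roughly 84–85%. Here is a minimal sketch of that arithmetic (Python, Wald-type normal-approximation intervals, made-up estimate and SE), just to show what I mean:

Code:
    from scipy.stats import norm

    # Adding one parameter lowers AIC when the LRT statistic exceeds 2,
    # i.e. |z| > sqrt(2) for a Wald z-test; the matching two-sided level:
    z_cut = 2 ** 0.5
    level = 1 - 2 * norm.sf(z_cut)   # ~0.843, hence the "85%" rule of thumb
    print(f"AIC-compatible confidence level: {level:.3f}")

    def wald_ci(est, se, level=0.85):
        """Normal-approximation CI at the given two-sided level."""
        z = norm.ppf(0.5 + level / 2)
        return est - z * se, est + z * se

    # hypothetical treatment effect (logit scale) and its standard error
    beta_hat, se_hat = 0.60, 0.35
    print("95% CI:", wald_ci(beta_hat, se_hat, 0.95))
    print("85% CI:", wald_ci(beta_hat, se_hat, 0.85))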

This paper has been well cited in the last several years, but I haven't found much other discussion of the construction and justification of confidence intervals after model selection or model averaging. Is there any other conventional wisdom, or are there papers, on constructing (or perhaps not constructing?) confidence intervals around parameter estimates? Are 95% CIs provided in MARK and other software simply as a concession to NHST people?

My work involves experimental manipulations that tend to vary in their efficacy each year (time*treatment interactions), so I am often interested not just in whether high-ranked models contain "treatment" and its interactions, but also in what the effect size is, and when it is large and when it is not. I've read the section in "The MARK Book" on effect size and, if I remember correctly, it suggests there is some tension between identifying important parameters via model selection, characterizing uncertainty around the parameter estimates themselves with confidence intervals, and characterizing uncertainty around effect sizes [e.g., differences between treatment means].
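For concreteness, this is the kind of effect-size calculation I have in mind: a Wald-type interval on the difference between two treatment-level estimates, here naively treating the estimates as independent (all numbers hypothetical, not MARK output); in practice the covariance from the model's variance-covariance matrix would need to be included.

Code:
    from scipy.stats import norm

    def diff_ci(est1, se1, est2, se2, level=0.85):
        """Approximate CI on est1 - est2, ignoring any covariance."""
        diff = est1 - est2
        se_diff = (se1 ** 2 + se2 ** 2) ** 0.5
        z = norm.ppf(0.5 + level / 2)
        return diff, (diff - z * se_diff, diff + z * se_diff)

    # hypothetical year-specific treatment vs. control survival estimates
    print(diff_ci(0.72, 0.04, 0.63, 0.05, level=0.85))
    print(diff_ci(0.72, 0.04, 0.63, 0.05, level=0.95))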
