Comments on Invariances: The Effect-Size Puzzler
Blog by Jeff Rouder (http://www.blogger.com/profile/12042232118911308833)

Hi All, the answer is up!
-- Jeff Rouder, 2016-03-28 22:36

It looks like a mixed-effects model would be appropriate here, but I don't know of any methods that would provide effect sizes for individual fixed effects. So my answer would be: NA.
-- Simon Columbus (http://simoncolumbus.com), 2016-03-25 10:15

I am aware of 4 different and generally non-equivalent ways that people might commonly compute even just a d-like effect size for this dataset. (Let alone all the possibilities for variance-explained-type measures!) I've actually been meaning to blog about this, so I guess it's time I finally do so.

I think standardized effect sizes are generally a bad idea for data-summary and meta-analytic purposes, but they can be useful if you want to do a power analysis or define reasonably informative priors without previous experimental data.

Anyway, of the possible ways to compute a d-like statistic here, I think the least crazy way is to use... wait for it... the classical definition of Cohen's d.
Crucially, this ignores information about the experimental design at hand -- it is always computed simply as the mean difference over the standard deviation of a single observation (pooled across conditions). In R that would look like:

with(df, diff(tapply(rt, cond, mean)) / sqrt(mean(tapply(rt, cond, var)))) # about .25

where df is the effectSizePuzzler data.frame. This differs from Jeromy Anglim's method, which first aggregates the responses within subject-by-condition, as well as from other possible approaches that I'll hopefully discuss in my blog post.
-- Jake Westfall (http://jakewestfall.org/), 2016-03-24 23:06

# import data
rlong <- effectSizePuzzler

# aggregate to person by condition stats
r2long <- aggregate(rt ~ cond + id, rlong, mean)

means <- sapply(split(r2long$rt, r2long$cond), mean)
sds <- sapply(split(r2long$rt, r2long$cond), sd)

# difference in means using sd based on pooled variance
es <- diff(means) / sqrt(mean(sds^2))

round(es, 2)

# Answer
# d = .84
-- Jeromy Anglim (https://www.blogger.com/profile/12949204812496382042), 2016-03-24 20:55
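
The two answers in the thread (about .25 vs .84) differ because aggregating to subject-by-condition means removes trial-level noise from the denominator while leaving the mean difference unchanged (in a balanced design). A minimal sketch on simulated data can show the same divergence -- the data here are a hypothetical stand-in, not the original effectSizePuzzler file, so the specific numbers will not match:

```r
# Simulated stand-in for the puzzler data: columns id, cond, rt
# (hypothetical parameter values chosen for illustration only)
set.seed(123)
n_sub   <- 50   # subjects
n_trial <- 50   # trials per subject per condition
id   <- rep(1:n_sub, each = 2 * n_trial)
cond <- rep(rep(1:2, each = n_trial), n_sub)
sub_eff <- rep(rnorm(n_sub, 0, 0.2), each = 2 * n_trial)  # subject intercepts
rt <- 0.8 + sub_eff + 0.05 * (cond - 1) + rnorm(length(id), 0, 0.3)
df <- data.frame(id, cond, rt)

# Jake Westfall's version: mean difference over the sd of a
# single observation, pooled across conditions
d_trial <- with(df, diff(tapply(rt, cond, mean)) /
                    sqrt(mean(tapply(rt, cond, var))))

# Jeromy Anglim's version: aggregate to subject-by-condition
# means first, then pool the (much smaller) variances
agg <- aggregate(rt ~ cond + id, df, mean)
d_agg <- with(agg, diff(tapply(rt, cond, mean)) /
                   sqrt(mean(tapply(rt, cond, var))))

c(d_trial = d_trial, d_agg = d_agg)
```

With a balanced design the numerators are identical, so the gap comes entirely from the denominator: averaging over trials shrinks the within-condition variance, which is why the aggregated d comes out larger.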