Saturday, September 30, 2017

Your Input Needed: Are There Theories that Predict Variabilities of Individual Differences?

Hi Folks,

Input please about this individual-differences question.


Suppose I have a few tasks, say Task A, Task B, Task C, etc.  These tasks can be any tasks, but for the sake of concreteness, let's assume each is a two-choice task that yields accuracy as the dependent measure, with chance at .5 and ceiling at 1.  Suppose I choose task parameters so that each task yields a mean accuracy across participants of .75.


Here is an example: Task A might be a perception task where I flash letters and mask them, and the participant has to decide whether the letter has a curved element, like in Q but not in X.  Task B might be a recognition memory task where the participant decides if items were previously studied or new.  By playing with the duration of the flash in the first task and the number of memoranda in the second task, I can set up the experiments so that the mean performance across people is near .75.


If we calculate the variability across individuals, can you predict which task would be more variable?   The figures below show three cases.   Which would hold?  Why?  Obviously it depends on the tasks.  My question is: are there any tasks for which you could predict the order?

Example Revisited (and an answer)

Now, if we were running the above perception and memory tasks, people would be more variable in the perception task.  At a 30-ms flash, some people will be at ceiling, others will be at floor, and the rest will be well distributed across the range.  At 100 memoranda, most people in the memory task will be between 60% and 90% accurate.   I know of no theory, however, that addresses, predicts, or anticipates this degree of variability.
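To make the contrast concrete, here is a minimal simulation sketch (all parameter values are hypothetical, chosen only to illustrate the claim): each person has a true accuracy, spread widely for the perception task and narrowly for the memory task, and observed accuracy adds binomial trial noise on top.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sub, n_trials = 200, 100

# Hypothetical setup: both tasks have mean true accuracy .75, but the
# perception task spreads true accuracy more widely across people.
true_perc = np.clip(rng.normal(0.75, 0.15, n_sub), 0.5, 1.0)  # perception
true_mem = np.clip(rng.normal(0.75, 0.05, n_sub), 0.5, 1.0)   # memory

# Observed accuracy = binomial trial noise on top of true ability.
obs_perc = rng.binomial(n_trials, true_perc) / n_trials
obs_mem = rng.binomial(n_trials, true_mem) / n_trials

print(obs_perc.mean(), obs_mem.mean())  # both near .75
print(obs_perc.std(), obs_mem.std())    # perception more variable
```

Both tasks are matched on the mean, yet the across-person standard deviations differ by design; the open question is which theory would predict the widths in the first place.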

Variability In The Shadows

In psychophysics, we give each person unique parameters to keep accuracy controlled.  In cognition, we focus on mean levels rather than variability.  In individual differences research, it is the correlation of people across tasks, rather than the marginal variability within tasks, that is of interest.

Questions Refined:

1. Do you think documenting and theorizing about this variability is helpful?  Foundational?  Arbitrary?

2. Do you know of any theory that addresses this question for any set of tasks?

3. My hunch is that the more complex or high-level a task is, the less variability.  Likewise, the more perceptual, simple, or low-level a task is, the more variability.  This seems a bit backwards in some sense, but it matches my observations as a cognitive person.  Does this hunch seem plausible?


Unknown said...

Hi. I think theorizing about and studying individual differences in such tasks is important and exciting. Justin Kantner and I were struck by how much variability there is in recognition memory response bias for words. On average, across subjects, bias tends to be nada, but some subjects are very conservative, some very liberal. We found modest evidence of stability in response bias (i.e., subjects who were conservative on a recognition test tended to be conservative two weeks later on a superficially different recognition test).

Alan Pickering said...

I think the questions you raise are very important. Of course, a complete account of behaviour (as reflected in performance on psychological tasks) requires an understanding of both the variance and the mean. It is true that there is virtually no theorising relevant to the question you pose. I recall an old claim from the early 1990s by Arthur Reber, who suggested that implicit learning tasks would show less covariance with IQ than explicit tasks (Reber, A. S. (1989). Implicit learning and tacit knowledge. Journal of Experimental Psychology: General, 118, 219–235).

This might lead to a prediction that, for two matched tasks, the implicit task would show less variance. But, as more recent work (Cognition 116 (2010) 321–340) shows, there are other factors with which implicit learning is associated. Thus, a simple attempt to quantify the total variance in a task is likely to be rife with problems.

In my own work (Pickering & Pesola, 2014), for example, we have suggested that individual differences researchers should build formal models of their tasks and carry out a sensitivity analysis of the effects of adding variance to each model parameter. This allows one to see which model parameters create the most variance in (simulated) task performance.

Why did we do this? Well, if one has a theory linking a model parameter to some underlying cognitive/psychobiological process, and a task is highly sensitive to variance in that parameter, then one can predict which other variables will correlate most robustly with the task in real data (i.e., those other variables that also depend strongly on the process linked to that parameter). This work only scratches the surface of decomposing the sources of variance in tasks, but it might, I guess, be extended (under certain ideal conditions) to address questions of the kind you pose.
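As a toy illustration of this kind of sensitivity analysis (not Pickering & Pesola's actual model; all parameter values here are made up), take an equal-variance signal-detection model of a two-choice task and add person-to-person variance to one parameter at a time:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(7)
n_sub = 2000

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def accuracy(d, c):
    # Equal-variance SDT: average of hit rate (signal trials)
    # and correct-rejection rate (noise trials).
    hit = 1.0 - phi(c - d)
    cr = phi(c)
    return 0.5 * (hit + cr)

# Baseline chosen (hypothetically) so mean accuracy is near .75.
base_d, base_c = 1.35, 0.675

# Perturb one parameter at a time; record variance in accuracy.
results = {}
for name, d_sd, c_sd in [("d'", 0.3, 0.0), ("criterion", 0.0, 0.3)]:
    d = rng.normal(base_d, d_sd, n_sub)
    c = rng.normal(base_c, c_sd, n_sub)
    acc = np.array([accuracy(di, ci) for di, ci in zip(d, c)])
    results[name] = acc.var()

print(results)  # d' variance dominates; criterion variance barely shows
```

At this unbiased operating point accuracy is nearly flat in the criterion, so criterion variance contributes little, while d' variance passes through almost directly; that is the kind of asymmetry a sensitivity analysis surfaces.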

Jeff Rouder said...

Hi Steve, Thanks. That is a cool insight that there may be more variability in bias than in d'. I think that falls out of the aging literature too, where for recognition memory aging effects are relatively small but there is a big bias toward "new".

Alan, I love the Reber conjecture. Thanks. I would have thought the opposite, to be honest: that explicit is fairly stable, at least by my more-variability-for-perception conjecture.

Overall, I think getting the means roughly equated is important, or at least the differences in means need to be modeled very carefully. Accuracy, of course, compresses at the ends, and RT naturally gets more variable as it slows. Likert is really difficult without such equating because it is an ordinal scale.

Unknown said...

Variability, and predicting differences in variability across conditions, is imo very important. I would even state that the absence of theories and tests on (differences in) variability is evidence of the immature state of psychological theory. This is also what we argue in, for instance:

Böing-Messing, F., van Assen, M. A., Hofman, A. D., Hoijtink, H., & Mulder, J. (2017). Bayesian evaluation of constrained hypotheses on variances of multiple independent groups. Psychological Methods, 22(2), 262.

Consider two examples:

1) RCT with two time points.
Why do we not always model the data with a five- or six-parameter model: two parameters for the means (with the difference between those means estimating the treatment effect) AND the ratio of the variances of the difference scores? It is quite likely that, IF the treatment has an effect, this effect is not equal across persons, which would result in a higher variance of the difference scores for the treatment group than for the control group.
By the way, the fifth/sixth parameters are the covariances.
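A quick simulation sketch of this point (all numbers hypothetical): when the treatment effect varies from person to person, the variance of the pre-post difference scores is inflated in the treatment group.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

# Hypothetical pre/post outcome; occasion noise has sd 5.
pre_c = rng.normal(100, 15, n)
pre_t = rng.normal(100, 15, n)
post_c = pre_c + rng.normal(0, 5, n)                        # control: no effect
post_t = pre_t + rng.normal(0, 5, n) + rng.normal(5, 4, n)  # effect: mean 5, person sd 4

diff_c, diff_t = post_c - pre_c, post_t - pre_t
print(diff_c.var(), diff_t.var())  # treatment differences more variable
```

Testing the variance ratio of the difference scores, not just the mean difference, is exactly the extra parameter being argued for.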

2) Longitudinal models.
A linear slope model implies one of three variance patterns: (i) variances decreasing over time, (ii) variances first decreasing and then increasing over time, or (iii) variances increasing over time.
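This follows from the model itself: with y_it = a_i + b_i t, the implied variance Var(y_t) = Var(a) + 2t Cov(a,b) + t^2 Var(b) is quadratic in t. A short sketch with hypothetical values:

```python
import numpy as np

# Linear growth model y_it = a_i + b_i * t implies a variance that is
# quadratic in time: Var(y_t) = Var(a) + 2 t Cov(a,b) + t**2 Var(b).
var_a, var_b, cov_ab = 4.0, 0.25, -0.6  # hypothetical values
t = np.arange(7)
var_y = var_a + 2 * t * cov_ab + t**2 * var_b
print(var_y)  # decreases to a minimum at t = -cov_ab / var_b = 2.4, then increases
```

Which of the three patterns appears in a given data set depends only on where the minimum of this quadratic, t = -Cov(a,b)/Var(b), falls relative to the observation window.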

To conclude: there may be A LOT to gain by studying (differences in) variability. :-)


Carl Gaspar said...

If test-retest reliability for A is lower than for B, then Var(A) > Var(B), all else being equal. In that case, true inter-individual differences need not vary across conditions. That's only one interpretation of Var(A) > Var(B) measured in a single experiment. So I think the two possibilities should be disambiguated (by comparing reliabilities) before attention is focused on either interpretation. If reliability does differ, then there are simple mechanisms that can be invoked.
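In classical test theory terms this is a one-line decomposition (a sketch with hypothetical reliabilities):

```python
# Classical test theory: Var(observed) = Var(true) + Var(error), and
# reliability = Var(true) / Var(observed). Holding true-score variance
# fixed, the less reliable task necessarily shows more observed variance.
var_true = 1.0           # hypothetical, assumed equal for both tasks
rel_A, rel_B = 0.5, 0.9  # task A less reliable than task B
var_A = var_true / rel_A
var_B = var_true / rel_B
print(var_A, var_B)  # 2.0 vs ~1.11: Var(A) > Var(B) with identical true variance
```

So a task-variability difference can reflect either wider true individual differences or simply more measurement error, and only the reliability comparison separates the two.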