Saturday, September 30, 2017

Your Input Needed: Are There Theories that Predict Variabilities of Individual Differences?

Hi Folks,

Input please about this individual-differences question.

Setup

Suppose I have a few tasks, say Task A, Task B, Task C, and so on.  These tasks can be any tasks, but for the sake of concreteness, let's assume each is a two-choice task that yields accuracy as the dependent measure, with chance at .5 and ceiling at 1.  Suppose I choose task parameters so that each task yields a mean accuracy across participants of .75.

Example

Here is an example: Task A might be a perception task where I flash letters and mask them, and the participant has to decide whether the letter has a curved element, like in Q but not in X.  Task B might be a recognition memory task where the participant decides if items were previously studied or new.  By playing with the duration of the flash in the first task and the number of memoranda in the second task, I can set up the experiments so that the mean performance across people is near .75.

Question

If we calculate the variability across individuals, can you predict which task would be more variable?   The figures below show three cases.   Which would hold?  Why?  Obviously it depends on the tasks.  My question is whether there are tasks for which you could predict the order.




Example Revisited (and an answer)

Now, if we were running the above perception and memory tasks, people would be more variable in the perception task.  At a 30-ms flash, some people will be at ceiling, others will be at floor, and the rest will be well distributed across the range.  At 100 memoranda, most people in the memory task will be between 60% and 90% accurate.   I know of no theory, however, that addresses, predicts, or anticipates this degree of variability.

Variability In The Shadows

In psychophysics, we give each person unique parameters to keep accuracy controlled.  In cognition, we focus on mean levels rather than variability.  In individual differences, it is the correlation of people across tasks rather than the marginal variability in tasks that is of interest.

Questions Refined:

1. Do you think documenting and theorizing about this variability is helpful?  Foundational?  Arbitrary?

2. Do you know of any theory that addresses this question for any set of tasks?

3. My hunch is that the more complex or high-level a task is, the less variability.  Likewise, the more perceptual, simple, or low-level a task is, the more variability.  This seems a bit backwards in some sense, but it matches my observations as a cognitive person.  Does this hunch seem plausible?


Wednesday, September 27, 2017

The Justification of Sample Size (Un)Planning for Cognition and Perception

There apparently is keen interest in justifying sample sizes, especially in the reviewing of external grant applications. I am resistant, at least for cognitive and perceptual psychology.  I and other senior colleagues advise that 20 to 30 participants is usually sufficient for most studies in cognition and perception, so long as you have tens or hundreds of replicate trials per condition.  That advice, however, rankled many of my tweeps.  It especially rankles my comrades in the Methodological Revolution, and I hope you don't paint me as one with too much flair.  It has been a Twitter day, and, even worse, you rankled methodological comrades have motivated me to write this blog post.

I provide a common-sense justification for sample-size (un)planning for within-subject designs in cognition and perception.  You can cite my rule of thumb, below, and this blog in your grant proposals.  You can even ding people who don't cite it.  I'll put this post on PsyArXiv for your citational convenience soon, after you tell me where all the typos are.

Rouder's Rule of Thumb:  If you run within-subject designs in cognition and perception, you can often get high-powered experiments with 20 to 30 people so long as each runs about 100 trials per condition.


Setup.  You have \(N\) people observing \(M\) trials per condition.  How big should \(N\) and \(M\) be?  

People.  Each person has a true effect, \(\mu_i\).   This true effect would be known exactly if we had gazillions of trials per condition, that is, as \(M\rightarrow\infty\).  We don't have gazillions, so we don't know it exactly.   But let's play for the moment like we do.  We will fix the situation subsequently.

Crux Move. The crux move here is a stipulation that \(\mu_i \geq 0\)  for all people.  What does this mean?  It means that true effects all have the same sign.  For example, in the Stroop effect, it implies that nobody truly responds more quickly to incongruent than to congruent colors.    In perception, it implies that nobody truly responds more quickly to faint tones than to loud ones.  Let's call this stipulation The Stipulation.

Should You Make The Stipulation?  That depends on your experiment and paradigm.  It is quite reasonable in most cognition and perception paradigms.  After all, it is implausible that some people truly reverse Stroop or truly identify faint tones more quickly than loud ones.  In fact, in almost all cases I routinely deal with, in attention, memory, perception and cognition, this is a reasonable stipulation.  You may not be able to make or justify this stipulation.  For example, personal preferences may violate the stipulation.  So, whether you make or reject the stipulation will affect your (un)planning.  Overall, however, the stipulation should be attractive for most cases in cognition and perception.  If you make the stipulation, read on.  If not, the following (un)planning is not for you.

Individual Variation is Limited with The Stipulation: If you stipulate, then the gain is that the variability across people is limited.  When all effects are positive and the mean is a certain size, the variability cannot be too large, or else the mean would have to be larger.  This is the key to sample-size (un)planning.  For the sake of argument, let's assume that true effects are distributed as below, either as the blue or the green line.  I used a Gamma distribution with a shape of 2: not only is the distribution all positive, a shape of 2 is reasonable for a distribution of true effects.  And as a bonus, a shape of 2 gives the right tail a normal-like fall-off.   Two curves are plotted: the blue one for a small effect, the green one for a large effect.




The limited-variation proposition is now on full display.  The blue curve with the smaller effect also has the smaller variance.  The effect size, the ratio of the mean to the standard deviation, is the same for both curves!  It is \(\sqrt{2}\), about 1.4, the square root of the shape.

Am I claiming that if you had a gazillion trials per condition, all experiments would have an effect size of about 1.4?  Well yes, more or less, to first order.  Once you make the stipulation, it is natural to use a scale-family distribution like the gamma.  In this family the shape determines the effect size (the effect size is the square root of the shape), and reasonable shapes yield effect sizes between about 1 and 2.   The stipulation is really consequential because it stabilizes the true effect sizes!  This is what licenses (un)planning.
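A quick sanity check of that claim: for a Gamma distribution the mean is the shape times the scale and the standard deviation is the square root of the shape times the scale, so the mean-to-standard-deviation ratio is the square root of the shape no matter the scale.  A few lines of R, with arbitrary scales, confirm it:

# mean/SD of a Gamma depends only on its shape: mean=shape*scale, SD=sqrt(shape)*scale
shape=2
for (scale in c(20,50)){                 # arbitrary scales, e.g., small vs. large true effects
  x=rgamma(1e6,shape=shape,scale=scale)
  print(round(mean(x)/sd(x),2))          # about 1.41 = sqrt(2) either way
}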

Power and (Un)Planning: Experiments capable of detecting effect sizes of, say, 1.4, the size in the figure, do not require many subjects.  Ten is more than enough.  For example, at the .05 level, \(N=10\) yields a power of .98.    This ease of powering designs also holds for more realistic cases without a gazillion trials.  [R code: 1-pt(qt(.975,9),9,ncp=sqrt(2)*sqrt(10))].

Finite Trials:  Nobody wants to run a gazillion trials.  Let's slim \(M\) down.  Let's take two cases, one for RT and another for accuracy:

For RT, suppose we are searching for a 40 ms effect, and the residual variation is some 300 ms.  This 300 ms value is a good estimate for tasks that take about 750 ms to complete, which is typical of many paradigms.  The variability of a condition mean over \(M\) trials is \(300/\sqrt{M}\), and if we take a contrast, we need an additional factor of \(\sqrt{2}\) for the subtraction.  If \(M=100\), then we expect variability of about 42 ms.  Combining this with the variability across participants from the above Gamma distribution yields a total variability of about 51 ms, or an effect size of 40/51 = .78.  It doesn't take many participants to power up this effect size: \(N=20\) corresponds to a power of .91.  We can explore fewer trials per condition too.  If \(M=50\), then the effective effect size is .60, and the power at \(N=25\) is .82, which is quite acceptable.
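For those who want to check this arithmetic, here is a small R sketch of the RT case (the \(\sqrt{2}\) true effect size from the shape-2 Gamma implies a between-person standard deviation of \(40/\sqrt{2}\), about 28 ms):

# RT case: 40 ms effect, 300 ms residual SD, M trials per condition, N people
effect=40; sigma=300; M=100; N=20
sdTrial=sqrt(2)*sigma/sqrt(M)          # trial noise on the contrast, about 42 ms
sdPeople=effect/sqrt(2)                # person noise implied by the shape-2 Gamma, about 28 ms
d=effect/sqrt(sdTrial^2+sdPeople^2)    # effective effect size, about .78
1-pt(qt(.975,N-1),N-1,ncp=d*sqrt(N))   # power, about .91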

For accuracy, the calculations are as follows:  Suppose we are trying to detect the difference between .725 and .775, a .05 difference in the middle of the two-alternative forced-choice range.  The standard deviation of an observed proportion over \(M\) trials is \(\sqrt{p(1-p)/M}\).  For 100 trials it is .043, and if we throw in the factor of \(\sqrt{2}\) for the contrast, it is .061.  Combining this with the variability across participants from the above Gamma distribution yields a total variability of .070, or an effective effect size of .71.  \(N=25\) corresponds to a power of .925.   Even for \(M=50\), the power remains quite high: at \(N=30\) it is .80.
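And the accuracy case in the same style:

# Accuracy case: a .05 effect around p=.75, M trials per condition, N people
effect=.05; p=.75; M=100; N=25
sdTrial=sqrt(2)*sqrt(p*(1-p)/M)        # trial noise on the contrast, about .061
sdPeople=effect/sqrt(2)                # person noise implied by the shape-2 Gamma, about .035
d=effect/sqrt(sdTrial^2+sdPeople^2)    # effective effect size, about .71
1-pt(qt(.975,N-1),N-1,ncp=d*sqrt(N))   # power, about .92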

So, for accuracy and RT, somewhere between 20 and 30 participants and 50 to 100 trials per condition is quite sufficient.  And this holds so long as one is willing to make the stipulation, which, again, seems quite reasonable in most cases to me.

Gamma of Shape 2?  Because so many of you are as argumentative as you are smart, you are bound to complain about the Gamma distribution.  Why a shape of 2.0?  Suppose the shape is lower?  And how would we know?  Let's go backwards.  The way to know what is a good shape (or true effect size across people) is by running a good number of people for lots of trials each.  We are pretty good at this in my lab, better than most; our experiments are biased toward many trials with few conditions.  But that is not enough.  One also needs an analytic method for decomposing trial-by-trial noise from population noise, so we use hierarchical models.

The results are always a bit shocking.  There is usually a large degree of regularization, meaning that trial-by-trial noise dominates over people noise.   People are truly not much different from each other.  The following graph is typical of this finding.  In the experiment, there are 50 people, each observing 50 trials in 6 conditions.  The remainder of the details are unimportant for this blog.  The data are pretty, and the means tell a great story (Panel A).  Panel B shows the individual-level differences, or contrasts, among the conditions; each line is for a different individual.  These individual differences span 10s if not 100s of milliseconds.  But when a reasonable hierarchical model is fit (Panel C), there is a great degree of regularization, indicating that almost all the noise comes from trial-by-trial variability.  The size of the effect relative to the variability is stable and large!  We find this type of result repeated often and in many paradigms.  From looking at many such plots, it is my expert opinion that the Gamma of shape 2 is wildly conservative, and a more defensible shape might be 3 or 4.  Hence, the power estimates here are, if anything, conservative too.
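To make the regularization point concrete, here is a toy simulation in the spirit of the figure.  It is not the actual experiment; the shape-2 Gamma and the 300 ms residual standard deviation are simply carried over from above.  Observed individual contrasts scatter far more than the true effects do, and that extra scatter is exactly what a hierarchical model shrinks away:

# toy illustration: observed individual contrasts vs. true individual effects
set.seed(1)
I=50; M=50                               # people, trials per condition
trueEff=rgamma(I,shape=2,scale=20)       # true effects, mean 40 ms, SD about 28 ms
obsEff=trueEff+rnorm(I,0,300*sqrt(2/M))  # add trial noise to each observed contrast
round(c(sdTrue=sd(trueEff),sdObs=sd(obsEff)),1)
# the observed spread (about 65-70 ms) is dominated by trial noise (300*sqrt(2/50)=60 ms),
# which is what hierarchical models recover as heavy regularization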




Of course, your mileage may differ, but probably by not that much.


Wednesday, March 8, 2017

Please Help / Real Criminal Case

Please help.  This is a real-life case of likely false conviction where your input can help.  A man is spending life in jail without parole for a murder he likely did not commit.

Background:


  • In 1969, Jane Mixer, a law student, was murdered.  The case went cold.  
  • The case was reopened 33 years later when crime-scene evidence was submitted to DNA analysis.  
  • The DNA yielded two matches; both matches were from samples that were analyzed in the same lab and at the same time as the crime-scene DNA analysis.  All three samples were analyzed in late 2001 and early 2002.
  • One match was to John Ruelas.  Mr. Ruelas was 4 years old in 1969 and was excluded as a suspect.
  • The other match was to Gary Leiterman.  Mr. Leiterman was 26 at the time.  He was convicted in 2005 and is serving life without parole.  His appeal was denied in 2007.
  • There is no doubt that Mr. Leiterman's DNA was deposited on the crime scene sample.  The match is 176-trillion-to-1.
  • The question is whether the DNA was deposited at the crime scene in 1969 or if there was a cross-contamination event in the lab in 2002.

A Very Easy and Helpful PowerPoint:

  • This case comes from John Wixted, a psychologist at UCSD.
  • He has made a detailed and convincing presentation.  Click here for the PowerPoint from John's website.
  • John has helped to persuade the Innocence Clinic at the University of Michigan to investigate the Leiterman case.
  • John and I are convinced this is an injustice.  We are working pro bono.

Our Job:

  • Our job is to make an educated assessment of Mr. Leiterman's guilt or innocence.  It would greatly help the Innocence Clinic to assess whether there is sufficient evidence to appeal.
  • The jury heard that DNA matching is trillion-to-1 accurate and that there was only a very tiny chance of cross-contamination.   Yet, we know these are the wrong conditional probabilities to compute.
  • Consider the two hypotheses above: that Leiterman's DNA was deposited at the crime scene or, alternatively, that it was deposited in the lab through cross-contamination.  Conditional on the match, compute the posterior probabilities.

My Analysis:



I have done my own analyses and typeset them.  But the reasoning is tricky, and I would like some backup.  It is just too important to mess up.  Can you try your own analysis?  Then we can decide what is best.

You will need more information.  I used  the following specifications.  Write me if you want more:

  • John and I assumed 2.5M people are possible suspects in 1969.  It is a good guess based on the population of the Detroit metro area at the time.
  • The lab processes 12,000 samples a year.  The period during which the samples overlapped can be assumed to be 6 months; that is, 6,000 other samples could have been cross-contaminated with Mixer's or Leiterman's.
  • The known rate of DNA cross-contamination is 1-in-1500.  That is, each time they do a mouth swab from one person, they end up with two or more DNA profiles with probability of 1/1500. We assume this rate holds for unknowable cross-contamination such as that in processing a crime scene.
  • The probability of getting usable DNA from a 33-year-old sample is 1/2.
  • Need other facts?  Just ask in the comments.

Jeff's answer is at GitHub, https://github.com/rouderj/leiterman



Thank you,
Jeff Rouder
John Wixted

Tuesday, January 3, 2017

Why Is It So Hard To Organize My Lab?

It is clear I need to pay more attention to the organization of my lab.   Organization is a challenge for me; it causes much apprehension and seems to be a chronic need in all aspects of my life.  Let's focus on the lab.

Parameters:


1. Minimizing mistakes.  There is no upside in analyzing the wrong data set, using the wrong parameters, including the wrong figure, or reporting the wrong statistics.  These mistakes are, in my view, unacceptable in science.  Minimizing them is the highest priority.

2.  Knowing what we did.  Some time in the future, way in the future, we or someone else will revisit what we did.  Can we figure out what happened?  I'd like to plan on the time scale of decades rather than months or years.

3.  Planning for human fallibility.  Some people think science is for those who are meticulous.  Then count me out.  I am messy, careless, and chronically clueless.  A good organization anticipates human mistakes.

4. Easy to learn.  I collaborate with a lot of people.  The organizational structure should be fairly intuitive and self-explanatory.

What we do:


1. Data acquisition and curation.  I think we have this wired.  We use a born-open data model in which data are collected, logged, versioned, and uploaded nightly to GitHub automatically.  We also automatically populate local MySQL tables with information on subjects and sessions, and have additional tables for experiments, experimenters, computers, and IRB info.  We even have an adverse-events table to record and address any flaws in the organizational system.  The basic unit of organization is the dataset, and it works well.

2. Outputs.  We have the usual outputs: papers, talks, grant proposals, dissertations, etc.  Some are collaborative; some are individual; some are important; some go nowhere.  The basic unit here is pretty obvious---we know exactly where each paper, talk, dissertation, etc., begins and ends.

3. Value-added endeavors.  A value-added endeavor (VAE) is a small unit of intellectual contribution.  It could be a proof, a simulation, a specific analysis, or (on occasion) a verbal argument.  VAEs, as important as they are, are ill-defined in size and scope.  And it is sometimes unclear (perhaps arbitrary) where one ends and another begins.

The Current System, The Good:

Perhaps the strongest element of my lab's organization is that we use really good tools for open and high-integrity science.  Pretty much everything is script based, and scripts are in many ways self-documenting, especially when compared to menu-driven alternatives.  Our analyses are done in R, our papers in LaTeX and Markdown, and the two are integrated with RMarkdown and Knitr.  Moreover, we use a local git server and curate all development in repositories.

The Current System, The Bad and Ugly:


We use projects as our basic organizational unit.  Projects are basically repositories on our local git server; they contain ad hoc collections of files.  But what a project encompasses and how it is organized is ad hoc, disordered, unstandardized, and idiosyncratic.   Here are the issues:

1. There is no natural relation between projects and the three things we do: acquiring and curating data, producing outputs, and producing VAEs.  One VAE might serve several different papers; likewise, one dataset might serve several different papers.  Papers and talks usually encompass several different experiments and VAEs.

2. Projects have no systematic relation to VAEs, outputs, or datasets.  This is why I am unhappy.  Does a project mean one paper?  Does it mean one analysis?  One development?  A collection of related papers?  A paper and all the talks and the supporting dissertation?  We have done all of these.

Help


What do you do?  Are there good standards?  What should the basic organizational unit be?  Should we stay with projects?  I am thinking about a strict output model where every output is a repository and serves as the main organizing unit.  The problem is what to do about VAEs that span several outputs.  Say I have an analysis or graph that is common to a paper, a dissertation, and a talk.  I don't think I want this VAE repeated in three places.  I don't want symbolic links or hard codings because they make it difficult to publicly archive.  That is why projects were so handy.   VAEs themselves are too small and too ill-defined to be organizing units.  Ideas?


Friday, October 28, 2016

A Probability Riddle

Some flu strains can jump from people to birds and, perhaps, vice versa.

Suppose \(A\) is the event that there is a flu outbreak in a certain community, say in the next month, and let \(P(A)\) denote the probability of this event occurring.    Suppose \(B\) is the event that there is a flu outbreak among chickens in the same community in the same time frame, with \(P(B)\) being the probability of that event.

Now let's focus on the relative flu risk to humans from chickens.  Let's define this risk as
\[
R_h=\frac{P(A|B)}{P(A)}.
\]
If the flu strain jumps from chickens to people, then the conditional probability \(P(A|B)\) may well be higher than the base rate \(P(A)\), and the risk to people will be greater than 1.0.

Now, if you are one of those animal-lover types, you might worry about the relative flu risk to chickens from people.  It is:
\[
R_c=\frac{P(B|A)}{P(B)}
\]

At this point, you might have the intuition that there is no good reason to think \(R_h\) would be the same value as \(R_c\).  You might think that the relative risk is a function of, say, the virology and biology of chickens, people, and viruses.

And you would be wrong.  While chickens and people may have different base rates and different conditional probabilities, it must be that \(R_h=R_c\).  It is a matter of math rather than biology or virology.

To see the math, let's start with the Law of Conditional Probability:
\[
P(A|B) = \frac{P(B|A)P(A)}{P(B)}.
\]

We can divide both sides by \(P(A)\), arriving at
\[
\frac{P(A|B)}{P(A)} = \frac{P(B|A)}{P(B)} .
\]

Now, note that the left-hand side is the risk to people and the right-hand side is the risk to chickens.
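A quick check with made-up numbers, in R:

# arbitrary example probabilities
pA=.10; pB=.20; pAB=.05                # P(A), P(B), P(A and B)
Rh=(pAB/pB)/pA                         # P(A|B)/P(A) = 2.5
Rc=(pAB/pA)/pB                         # P(B|A)/P(B) = 2.5, the same value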

I find the fact that these risk ratios are preserved to be a bit counterintuitive.  It is part of what makes conditional probability hard.


Sunday, April 10, 2016

This Summer's Challenge: Share Your Data

"It would take me weeks of going through my data and coordinating them, documenting them, and cleaning them if I were to share them." anonymous senior faculty member

"Subject 7 didn't show. There is an empty file. Normally the program would label the next person Subject 8 and we would just exclude Subject 7 in analysis. But now that we are automatically posting data, what should I do? Should I delete the empty file so the next person is Subject 7?" anonymous student in my lab

"Why? Data from a bad study is, by definition, no good." @PsychScienctists, in response to my statement that all data should be curated and available.

All three of the above quotes illustrate a common way of thinking about data. Our data reflect something about us. When we share them, we are sharing something deep and meaningful about ourselves. Our data may be viewed as statements about our competence, our organizational skills, our meticulousness, our creativity, and our lab culture. Even the student in my lab feels this pressure. This student is worried that our shared data won't be viewed as sufficiently systematic because we have no data for Subject 7. Maybe we want to present a better image.

The Data-Are-The-Data Mindset

I don't subscribe to the Judge-Me-By-My-Data mindset. Instead, I think of data as follows:
  • Scientific data are precious resources collected for the common good.
  • We should think in terms of stewardship rather than ownership.  Be good stewards.
  • Data are neither good nor bad, nor are they neat nor messy. They just are.
  • We should judge each other by the authenticity of our data.

Mistake-Free Data Stewardship through Born-Open Data

To be good stewards and to ensure authentic data, we upload everything, automatically, every night. Nobody has to remember anything, nobody makes decisions---it all just happens. Data are uploaded to GitHub where everyone can see them. In fact, I don't even use locally stored data for analysis; I point my analyses to the copy on GitHub. We upload data from well-thought-out experiments. We upload data from poorly-thought-out, bust experiments. We upload pilot data. We upload incomplete data. If we collected it, it is uploaded. We have an accurate record of what happened in the lab, and you all are welcome to look in at our GitHub account. I call this approach born-open data, and I have an in-press paper about it. We have been doing born-open data for about a year.
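For readers who want to try something similar, here is a minimal sketch of what such a nightly job could look like. It is not our actual script; the directory, branch, and cron schedule are placeholders, and it assumes the data directory is already a clone of a GitHub repository.

# nightlyUpload.R -- commit and push whatever the lab collected today (sketch only)
dataDir="~/labData"                    # hypothetical path to the lab's data repository
setwd(dataDir)
system2("git",c("add","-A"))           # stage every new or changed data file
system2("git",c("commit","-m",shQuote(paste("nightly data upload",Sys.Date()))))
system2("git",c("push","origin","master"))
# schedule it to run each night, e.g., with cron:  0 3 * * *  Rscript ~/labData/nightlyUpload.R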

So far, the main difference I have noticed is an increase in quality control with no energy or time spent to maintain this quality. Nothing ever gets messed up, and there is no after-the-fact reconstruction of what had happened. There is only one master copy of data---the one on GitHub. Analysis code points to the GitHub version. We never analyze the wrong or incomplete data. And it is trivially easy to share our analyses among lab members and others. In fact, we can build the analyses right into our papers with Knitr and Markdown. Computers are so much more meticulous than we will ever be. They never take a night off!
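Pointing an analysis at the GitHub copy is a one-liner; the URL below is a made-up example of a raw GitHub path, not one of our repositories:

# read the master copy of the data straight from GitHub (hypothetical URL)
dat=read.csv("https://raw.githubusercontent.com/somelab/someStudy/master/data/exp1.csv")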

This Summer's Challenge: Automatic Data Curation

I'd like to propose a challenge: Set up your own automatic data-curation system for new data that you collect. Work with your IT people. Set up the scripts. Hopefully, when next fall rolls around, you too will be practicing born-open data!



Tuesday, April 5, 2016

The Bayesian Guarantee And Optional Stopping.

Frequentist intuitions run so deep in us that we often mistakenly interpret Bayesian statistics in frequentist terms. Optional stopping has always been a case in point.  Bayesian quantities, when interpreted correctly, are not affected by optional stopping.  This fact is guaranteed by Bayes' Theorem.  Previously, I have shown how this guarantee works for Bayes factors.  Here, let's consider the simple case of estimating an effect size.

For demonstration purposes, let's generate data from a normal with unknown mean, \(\mu\), but known variance of 1.  I am going to use a whacky optional stopping rule that favors sample means near .5 over others.  Here is how it works:  I. As each observation comes in, compute the running sample mean. II. Compute a probability of stopping that depends on the sample mean according to the figure below; the probability favors stopping for sample means near .5.  III. Flip a coin with sides labeled "STOP" and "GO ON" with that probability.  IV. Do what the coin says (up to a maximum of 50 observations, then stop no matter what).



The result of this rule is a bias toward sample means near .5.   I ran a simulation with a true mean of zero for ten thousand replicates (blue histogram below).  The key property is a biasing of the observed sample means higher than the true value of zero.   Bayesian estimation seems biased too.    The green histogram shows the posterior means when the prior on \(\mu\) is a normal with a mean of zero and a standard deviation of .5.  The bias is less, but that just reflects the details of the situation, where the true value, zero, is also favored by the prior.



So it might seem I have proved the opposite of my point---namely that optional stopping affects Bayesian estimation.

Nope.  The above case offers a frequentist interpretation, and that interpretation entered when we examined behavior conditional on a fixed true value, zero.  Bayesians don't interpret analyses conditional on unknown "truths".

The Bayesian Guarantee 

Bayes' Theorem provides a guarantee.  If you start with your prior and observed data, then Bayes' Theorem guarantees that the posterior is the optimal set of probability statements about the parameter at hand.  It is a bit subtle to see this in simulation because one needs to condition on data rather than on some unknown truth.

Here is how a Bayesian uses simulation to show the Bayesian Guarantee.

I. On each replicate, sample a different true value from the prior.  In my case, I just draw from a normal centered at zero with a standard deviation of .5, since that is my prior on effect size for this post.  Then, on each replicate, simulate data from that replicate's true value.  I have chosen data of 25 observations (from a normal with variance of 1).  A histogram of the sample means across these varying true values is provided below, left panel.   I ran the simulation for 100,000 replicates.

II. The histogram is that of data (sample means) we expect under our prior.  We need to condition on data, so let's condition on an observed sample mean of .3.  I have highlighted a small bin between .25 and .35 with red.  Observations fall in this bin about 6% of the time.

III.  Look at all the true values that generated sample means in that bin around .3.  These true values are shown in the yellow histogram.  This histogram is the target of Bayes' Theorem; that is, we can use Bayes' Theorem to describe this distribution without going through the simulations.   I have computed the posterior distribution for a sample mean of .3 and 25 observations under my prior, and plotted it as the line.  Notice the correspondence.  This correspondence is the simulation showing that Bayes' Theorem works.  It works, by the way, for every bin, though I have just shown it for the one centered on .3.
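For the record, this posterior is available in closed form.  With known variance 1, a \(\mbox{Normal}(m_0, v_0)\) prior with \(m_0=0\) and \(v_0=.5^2\), and \(N\) observations with sample mean \(\bar{y}\), the posterior is
\[
\mu \mid \bar{y} \sim \mbox{Normal}\left(\frac{N\bar{y}+m_0/v_0}{N+1/v_0},\; \frac{1}{N+1/v_0}\right),
\]
which for \(\bar{y}=.3\) and \(N=25\) has a mean of about .26 and a standard deviation of about .19.  These are the quantities \(c\) and \(v\) in the R code at the bottom of this post.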

TAKE HOME I: Bayes' Theorem tells you the distribution of true values given your prior and the data.





Is The Bayesian Guarantee Affected By Optional Stopping?

So, we come to the crux move.  Let's simulate the whacky optional stopping rule that favors sample means near .5.  Once again, we start with the prior, and for each replicate we choose a different true value as a sample from the prior.  Then we simulate data using optional stopping, and the resulting sample means are shown in the histogram on the left.  Optional stopping has affected these data dramatically.  No matter, we choose our bin, again around .3, and plot the true values that led to these sample means.  These true values are shown as the yellow histogram on the right.  They are far more spread out than in the previous simulation without optional stopping, primarily because stopping often occurred at fewer than 25 observations.  Now, is this spread predicted?  Yes.  On each replicate we obtain a posterior distribution, and these vary from replicate to replicate because the sample size is random.  I averaged these posteriors (as I should), and the result is the line, which corresponds well to the histogram.




TAKE HOME II: Bayes' Theorem tells you where the true values are given your prior and the data, and it doesn't matter how the data were sampled!

And this should be good news.




*****************

R code

set.seed(123)

m0=0
v0=.5^2

runMean=function(y) cumsum(y)/(1:length(y))   # running sample mean after each observation

# index of the first trial on which the coin says "STOP" (a 0); if the coin
# never says stop, stop at the last trial (the stated maximum of topN observations)
minIndex=function(y) if (any(y==0)) which(y==0)[1] else length(y)

# optional-stopping sampler: the probability of stopping depends on the running mean
mySampler=function(t.mu,topN)
{
M=length(t.mu)
mean=rep(t.mu,topN)
y=matrix(nrow=M,ncol=topN,rnorm(M*topN,mean,1))
ybar=t(apply(y,1,runMean))                  # running means, one row per replicate
prob=plogis((ybar-.6)^2,0,.2)               # probability of continuing ("GO ON")
another=matrix(nrow=M,ncol=topN,rbinom(M*topN,1,prob))
stop=apply(another,1,minIndex)              # trial at which each replicate stops
return(list("ybar"=ybar[cbind(1:M,stop)],"N"=stop))
}

# fixed-N sampler: every replicate gets exactly topN observations
goodSampler=function(t.mu,topN){
M=length(t.mu)
mean=rep(t.mu,topN)
y=matrix(nrow=M,ncol=topN,rnorm(M*topN,mean,1))
return(apply(y,1,mean))}

M=10000
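# simulation 1 (freqResults.png): optional stopping with the true mean fixed at zero;
# shows the distortion of sample and posterior means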

png('freqResults.png',width=960,height=480)
par(mfrow=c(1,2),cex=1.3,mar=c(4,4,2,1),mgp=c(2,1,0))
t.mu=rep(0,M)
out=mySampler(t.mu,50)
ybar=out$ybar
N=out$N
v=1/(N+1/v0)
c=(N*ybar+m0/v0)
hist(ybar,col='lightblue',main="",xlab="Sample Mean",breaks=50,xlim=c(-1,1.25),prob=T,ylim=c(0,2.6))
mtext("Distortion of Sample Mean",side=3,adj=.5,line=1,cex=1.3)
abline(v=mean(ybar),lwd=3,lty=2)
hist(v*c,col='lightgreen',main="",xlab="Posterior Mean",xlim=c(-1,1.25),prob=T,ylim=c(0,2.6))
abline(v=mean(v*c),lwd=3,lty=2)
mtext("Distortion of Posterior Mean",side=3,adj=.5,line=1,cex=1.3)
dev.off()


###############################
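# simulation 2 (bayesGuarantee.png): true values drawn from the prior, fixed N=25 per replicate (no optional stopping)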
set.seed(456)
png('bayesGuarantee.png',width=960,height=480)
par(mfrow=c(1,2),cex=1.3,mar=c(4,4,2,1),mgp=c(2,1,0))


M=100000
N=25
t.mu=rnorm(M,m0,sqrt(v0))
ybar=goodSampler(t.mu,N)
myBreak=seq(-2.45,2.45,.1)
bars=hist(ybar,breaks=myBreak,plot=F)

mid=.3
good=(ybar >(mid-.05) & ybar<(mid+.05))
myCol=rep("white",length(myBreak))
myCol[round(bars$mids,2)==0.3]='red'
plot(bars,col=myCol,xlab="Sample Mean",main="")
mtext(side=3,adj=.5,line=0,cex=1.3,"Sample Mean Across Prior")

v=1/(N+1/v0)
c=(N*mid+m0/v0)

hist(t.mu[good],prob=T,xlab=expression(paste("Parameter ",mu)),col='yellow',
ylim=c(0,2.2),main="",xlim=c(-1.75,1.75))
myES=seq(-2,2,.01)
post=1:length(myES)
for (i in 1:length(myES))
post[i]=mean(dnorm(myES[i],c*v,sqrt(v)))
lines(myES,post,lwd=2)
mtext(side=3,adj=.5,line=0,cex=1.3,"True values for sample means around .3")

dev.off()
########################
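# simulation 3 (moneyShot.png): true values drawn from the prior, data collected with the optional-stopping rule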
set.seed(790)
png('moneyShot.png',width=960,height=480)
par(mfrow=c(1,2),cex=1.3,mar=c(4,4,2,1),mgp=c(2,1,0))


M=100000
t.mu=rnorm(M,m0,sqrt(v0))
out=mySampler(t.mu,50)
ybar=out$ybar
N=out$N
myBreak=seq(-5.95,5.95,.1)
bars=hist(ybar,breaks=myBreak,plot=F)

mid=.3
good=(ybar >(mid-.05) & ybar<(mid+.05))
myCol=rep("white",length(myBreak))
myCol[round(bars$mids,2)==0.3]='red'
plot(bars,col=myCol,xlab="Sample Mean",main="",xlim=c(-4,3))
mtext(side=3,adj=.5,line=0,cex=1.3,"Sample Mean Across Prior")

v=1/(N[good]+1/v0)
c=(N[good]*ybar[good]+m0/v0)

hist(t.mu[good],prob=T,xlab=expression(paste("Parameter ",mu)),col='yellow',main="",
ylim=c(0,2.2),xlim=c(-1.75,1.75))
myES=seq(-2,2,.01)
post=1:length(myES)
for (i in 1:length(myES))
post[i]=mean(dnorm(myES[i],c*v,sqrt(v)))
lines(myES,post,lwd=2)
mtext(side=3,adj=.5,line=0,cex=1.3,"True values for sample means around .3")

dev.off()



######################
#stop probability

png(file="probStop.png",width=480,height=480)
par(cex=1.3,mar=c(4,4,1,1),mgp=c(2,1,0))
ybar=seq(-2,3,.01)
prob=plogis((ybar-.6)^2,0,.2)
plot(ybar,1-prob,typ='l',lwd=2,ylab="Stopping Probability",xlab="Sample Mean",ylim=c(0,.55))
mtext("Optional Stopping Depends on Sample Mean",side=3,adj=.5,line=-1,cex=1.3)
dev.off()