Posted by: Gary Ernest Davis on: June 7, 2014
The gain statistic measures how much individual students increase their test scores from the initial test to the final test, expressed as a proportion of the increase possible for each student.
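In symbols (my notation, since the post does not write the formula out): if a student scores i on the initial test and f on the final test, out of a maximum possible score m, the gain is presumably

gain = (f − i) / (m − i),

that is, the actual increase as a fraction of the largest increase that student could have made.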
We examined the written work, in mathematics classes, of pre-service elementary teachers with very high gain and of those with very low gain, and showed that these two groups exhibit distinct psychological attitudes and dispositions toward learning mathematics.
We showed a small but statistically significant increase in average gain when course goals focus on patterns, connections, and meaning-making in mathematics.
A common belief is that students with low initial-test scores will have higher gains, and students with high initial-test scores will have lower gains. We showed that this is not correct for a cohort of pre-service elementary teachers.
Posted by: Gary Ernest Davis on: November 9, 2013
Given two data sets, it is not an unreasonable question to ask whether they have similar distributions.
For example, if we produce one data set of 500 numbers between 0 and 1, chosen uniformly at random (at least as randomly as we can using a pseudo-random number generator), and another data set of 500 numbers distributed normally, with mean 0 and variance 1, then eyeballing their histograms tells us, even if we did not already know, that they are not similarly distributed:
[Figure: histograms of the 500 uniform numbers and the 500 normal numbers]
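As an illustration of the kind of data involved (the post's own figures were made in Mathematica; the Python/NumPy sketch below, with its seed and bin counts, is just a stand-in):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)               # seed chosen only for reproducibility
uniform_data = rng.uniform(0.0, 1.0, 500)    # 500 numbers, uniform on [0, 1]
normal_data = rng.normal(0.0, 1.0, 500)      # 500 numbers, mean 0, variance 1

# Side-by-side histograms: the difference in shape is obvious by eye.
fig, axes = plt.subplots(1, 2, figsize=(8, 3))
axes[0].hist(uniform_data, bins=20)
axes[0].set_title("Uniform on [0, 1]")
axes[1].hist(normal_data, bins=20)
axes[1].set_title("Normal, mean 0, variance 1")
plt.tight_layout()
plt.show()
```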
Goodness-of-fit tests put this comparison on a formal footing, and Mathematica® incorporates these, and other goodness-of-fit tests, in the function DistributionFitTest[].
These goodness-of-fit tests perform a hypothesis test in which the null hypothesis is that the data sets are identically distributed, and the alternative hypothesis is that they are not.
The tests return a p-value; a small p-value means that a discrepancy as large as the one observed would be unlikely if the data sets were identically distributed, and so is evidence against that hypothesis.
So, if we carry out the Pearson Chi-Square test on the uniform and normal data sets above, we get an exceptionally small p-value, indicating very strongly that the two data sets are not similarly distributed.
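The post does this in Mathematica; for illustration, here is one common way to compute a two-sample Pearson Chi-Square p-value in Python, by binning both samples over a common range and running a chi-square test of homogeneity on the binned counts (the helper name and the choice of 20 bins are my assumptions, not the author's):

```python
import numpy as np
from scipy.stats import chi2_contingency

def pearson_chi2_pvalue(a, b, bins=20):
    """Two-sample Pearson Chi-Square p-value: bin both samples over a common
    range and run a chi-square test of homogeneity on the binned counts."""
    edges = np.histogram_bin_edges(np.concatenate([a, b]), bins=bins)
    counts_a, _ = np.histogram(a, bins=edges)
    counts_b, _ = np.histogram(b, bins=edges)
    table = np.vstack([counts_a, counts_b])
    table = table[:, table.sum(axis=0) > 0]   # drop bins empty in both samples
    return chi2_contingency(table)[1]         # index 1 is the p-value

rng = np.random.default_rng(0)
uniform_data = rng.uniform(0.0, 1.0, 500)
normal_data = rng.normal(0.0, 1.0, 500)
print(pearson_chi2_pvalue(uniform_data, normal_data))  # exceptionally small
```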
The p-value from a Pearson Chi-Square test is a random variable: if we carry out the test on two other data sets, one from a uniform distribution, the other from a normal distribution, we will get a somewhat different p-value. How different? The plot below shows the variation in p-values when we simulated choosing 500 uniformly distributed numbers and 500 normally distributed numbers 1,000 times:
We see that despite some variation the values are all very small, as we would expect.
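A sketch of that simulation, repeating the binned chi-square helper from the previous sketch so the snippet runs on its own (seed and bin count are again arbitrary):

```python
import numpy as np
from scipy.stats import chi2_contingency

def pearson_chi2_pvalue(a, b, bins=20):
    # Same binned two-sample chi-square helper as in the previous sketch.
    edges = np.histogram_bin_edges(np.concatenate([a, b]), bins=bins)
    table = np.vstack([np.histogram(a, bins=edges)[0],
                       np.histogram(b, bins=edges)[0]])
    table = table[:, table.sum(axis=0) > 0]
    return chi2_contingency(table)[1]

rng = np.random.default_rng(1)
# 1,000 repetitions, each with a fresh 500-point uniform sample
# and a fresh 500-point normal sample.
p_values = [pearson_chi2_pvalue(rng.uniform(0.0, 1.0, 500),
                                rng.normal(0.0, 1.0, 500))
            for _ in range(1000)]
print(max(p_values))   # even the largest of the 1,000 p-values is tiny
```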
Now let’s see what happens if we choose 500 points from a uniform distribution, and 500 more points from the same uniform distribution. We expect the Pearson Chi-Square test to return a reasonably high p-value, indicating that we cannot reject the idea that the data come from the same distribution. We did this once and got a satisfying p-value of 0.741581.
But what if we repeat this experiment 1,000 times? How will the p-values vary?
The plot below shows the result of 1,000 simulations of choosing two data sets of 500 points, each from the same uniform distribution:
These p-values seem reasonably uniformly spread between 0 and 1. Are they? The Cramér-von Mises goodness-of-fit test indicates that we cannot reject the hypothesis that these p-values are uniformly distributed in the interval [0,1].
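A sketch of this second experiment, again with the binned chi-square helper, followed by a one-sample Cramér-von Mises test of the resulting p-values against the uniform distribution on [0, 1] (SciPy's cramervonmises is used here as a stand-in for the Mathematica test):

```python
import numpy as np
from scipy.stats import chi2_contingency, cramervonmises

def pearson_chi2_pvalue(a, b, bins=20):
    # Same binned two-sample chi-square helper as in the earlier sketches.
    edges = np.histogram_bin_edges(np.concatenate([a, b]), bins=bins)
    table = np.vstack([np.histogram(a, bins=edges)[0],
                       np.histogram(b, bins=edges)[0]])
    table = table[:, table.sum(axis=0) > 0]
    return chi2_contingency(table)[1]

rng = np.random.default_rng(2)
# 1,000 repetitions, each comparing two fresh 500-point uniform samples.
p_values = np.array([pearson_chi2_pvalue(rng.uniform(0.0, 1.0, 500),
                                         rng.uniform(0.0, 1.0, 500))
                     for _ in range(1000)])

# One-sample Cramér-von Mises test of the p-values against Uniform[0, 1]:
# a large p-value here means we cannot reject uniformity.
print(cramervonmises(p_values, "uniform"))
```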
We set the significance level for the Pearson Chi-Square test at 0.01, so we would expect that about 1 time in 100 the test will indicate that the two data sets are not from the same distribution, even though they are. In 1,000 trials we would expect about 10 such instances, and that is more or less what we find.
The uniform distribution of the p-values is, at first glance, quite surprising, but since the p-values are themselves random quantities we expect them to indicate something other than what we know to be the case every so often, depending on the significance level we set beforehand. For example, with the significance level set at 0.05, we see that about 5% of the time the Pearson Chi-Square test indicates that the two data sets are not from the same distribution even though they are.
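The fact underlying all of this (assuming the test is exact; the Pearson Chi-Square p-value is only approximately so) is that under the null hypothesis the p-value is itself uniformly distributed on [0, 1]: Pr(p ≤ α | H0) = α for every α. So in N trials we expect about Nα false rejections, roughly 1000 × 0.01 = 10 at level 0.01 and 1000 × 0.05 = 50 at level 0.05, which is just another way of reading the uniform spread of p-values in the plot above.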
Notes: