Ask an MLer: Classification and Hypothesis Testing

Often scientists want to use machine learning as an analytical tool to answer a question about a data set. However, transitioning from classification accuracy to a hypothesis test can be tricky. Today we answer a question from a neuroscientist on the subject.

How can I tell if my classifier is performing significantly above chance? Can I use a t test?

The short answer is: with a permutation test, and no, you can’t use a t test.

Why won’t a t test work here?

Let’s start by thinking about what flavor of t test we might be tempted to perform. Given that you have a distribution of classification accuracies (either from cross-validation or a held-out data set), and that your data set contains C classes, it seems natural to perform a one-sample t test, as follows:

t = \frac{\bar{x} - \mu_0}{s/\sqrt{n}}

Here, \bar{x} is your average classification accuracy, s is the standard deviation of your accuracies, n is the number of samples, and \mu_0= \frac{1}{C}, which is chance performance.
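For concreteness, here is roughly what that tempted test looks like in Python. This is purely illustrative: the fold accuracies and the number of classes below are made-up numbers, not from any real analysis.

```python
import numpy as np
from scipy import stats

# Hypothetical fold accuracies from 5-fold cross-validation (made-up numbers)
accuracies = np.array([0.62, 0.58, 0.65, 0.60, 0.63])

C = 4            # assumed number of classes
mu_0 = 1.0 / C   # "chance" accuracy under this test

# One-sample t test of the mean accuracy against mu_0
t_stat, p_value = stats.ttest_1samp(accuracies, popmean=mu_0)
print(t_stat, p_value)
```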

This test is asking if your average classification accuracy is equal to \mu_0. If \mu_0 is chance, this seems pretty sensible at first glance. However, there are a few important assumptions being made here:

  • It should be possible for the distribution of mean accuracies to be symmetric about \mu_0.
  • \mu_0 is the correct representation of chance performance.
  • The distribution of \bar{x} should be Gaussian. This is only true if the individual samples x_i that went into the computation of \bar{x} are independent.

The first assumption is violated by the very nature of classification accuracies. Below-chance performance is extremely rare (although not impossible), so the distribution of accuracies piles up at or above chance rather than spreading symmetrically around it. As a result, even a classifier that is not performing meaningfully well could still pass this test.

This leads into the second assumption. The truth is that \mu_0 = \frac{1}{C} is only “chance” in the limit of infinite data, which you definitely do not have. If you actually estimate chance empirically (as we do below), it is often not exactly \mu_0: what you get is a distribution of chance-level accuracies, and both its mean and its variance matter.

The last assumption is tricky. Are your estimates of classification accuracy independent? Not if you’ve cross-validated: the folds share training data, so they depend on each other and are correlated. This makes not only the computation of \bar{x} problematic for the formula above, but also the standard deviation estimate s. Furthermore, accuracies cannot fall below 0 or rise above 1, so there is no way \bar{x} could be drawn from a true Gaussian.

So, what’s a scientist to do? If you can’t use the standard statistical toolbox to evaluate the performance of your classifier, you can use a permutation test.

What is a permutation test? How do I do it?

Let’s think about what we actually want to test: what is the probability that we found a connection between training data and labels by chance alone?  For example, if we are trying to predict the word a person is reading from brain images, could it be possible that we are getting above chance accuracy just by luck?  More than that, how probable is it?

A permutation test creates the scenario where there is no connection between the training data and the labels, and simulates the accuracy we would observe by chance.  We do this by permuting the order of the labels, thereby assigning “incorrect” labels to each of the training instances.  We then run the same machine learning pipeline that we ran on the original data, but use the permuted labels.  We do this many times (say, 100 times) and record the accuracy each time.  These accuracies create a null distribution for the regime where any relation between the training data and the labels is entirely coincidental.
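As a minimal sketch, assuming a scikit-learn-style workflow with a feature array X and a label array y (both hypothetical here), the loop looks something like this; the choice of classifier and the number of permutations are placeholders:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC  # placeholder classifier

rng = np.random.default_rng(0)
n_permutations = 100

# Accuracy on the real labels, using the same pipeline throughout
true_accuracy = cross_val_score(LinearSVC(), X, y, cv=5).mean()

# Null distribution: rerun the identical pipeline with shuffled labels
null_accuracies = np.empty(n_permutations)
for i in range(n_permutations):
    y_permuted = rng.permutation(y)  # break the data–label relationship
    null_accuracies[i] = cross_val_score(LinearSVC(), X, y_permuted, cv=5).mean()
```

scikit-learn also provides permutation_test_score in sklearn.model_selection, which wraps this same loop in a single call if your pipeline fits its interface.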

If we would like to be able to say that our true performance is above chance with p<0.05, we look at where the true accuracy falls relative to the distribution of permuted accuracies. If it is larger than 95% of those accuracies, then we can report p<0.05.
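Continuing the sketch above (reusing its null_accuracies, true_accuracy, and n_permutations variables), that comparison is a one-liner. The +1 in the numerator and denominator is a common convention that counts the observed accuracy as one of the permutations, so the estimated p value is never exactly zero:

```python
import numpy as np

# Fraction of permuted accuracies at least as large as the true accuracy
p_value = (np.sum(null_accuracies >= true_accuracy) + 1) / (n_permutations + 1)
print(f"permutation p value: {p_value:.3f}")
```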

So permutation tests are pretty easy!  You do whatever you did to measure the true accuracy, just on the permuted labels.  The biggest drawback of this method is the computational load of rerunning the pipeline hundreds of times.  Especially if you have multiple subjects and many time windows/ROIs to test (as in MEG or fMRI), these tests can take hours or even days to run. We’ve effectively traded analytical energy for computational energy.

Remember that if you are testing across time or ROIs, you still need to correct for multiple comparisons!  We’ll talk about that more in a future post.

Do you have a burning Machine Learning question? Ask us and we’ll answer it in a post!