Wednesday, August 6, 2014

King 2014: MEG MVPA and temporal generalization matrices

Last week at ICON I attended an interesting talk by Stanislas Dehaene, in which he described some work in recent papers by Jean-Rémi King; I'll highlight a few aspects (and muse a bit) here.

First, MVPA of MEG data. I've never worked with MEG data, but this paper (citation below) describes classifying trial type within-subjects (linear SVM, C=1!), using the amplitude in each of the 306 MEG sensors instead of BOLD in voxels (using all the MEG sensors makes this a whole-brain analysis: the features span the entire brain). I should go back to referring to "MVPA" as an acronym for "multivariate pattern analysis" instead of "multi-voxel pattern analysis" (as the authors do here), so as not to exclude MEG analyses!
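To make that concrete, here's a minimal sketch (not the authors' code) of this kind of sensor-space classification with scikit-learn; the array names, shapes, and random data are placeholders I've made up for illustration.

```python
# Minimal sketch of sensor-space MEG classification: linear SVM (C=1) on the
# amplitude in each of 306 sensors at a single timepoint. X, y, and all shapes
# are made-up placeholders, not the authors' data or code.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 306))   # 200 trials x 306 MEG sensors (one timepoint)
y = rng.integers(0, 2, size=200)      # binary trial labels

clf = SVC(kernel="linear", C=1.0)     # linear SVM, C=1, as in the paper
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
print(round(scores.mean(), 3))        # hovers around chance (0.5) for random data
```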

Second, I was quite taken with what they call "temporal generalization matrices", illustrated in their Figure 1. These matrices succinctly summarize the results of what I'd call "cross-timepoint classification": rather than doing some sort of temporal compression to get one summary example per trial, they classify each timepoint separately (e.g., all the images at 0.1 seconds before onset; all the images at 0.1 seconds after onset). Usually I've seen this sort of analysis plotted as accuracy at each timepoint, like Figure 3 in Bode & Haynes 2009.
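A sketch of that timepoint-by-timepoint analysis, again with made-up names and random placeholder data: loop over timepoints, cross-validating a classifier within each one, and you get the usual accuracy-over-time curve.

```python
# Sketch of timepoint-by-timepoint decoding (one accuracy value per timepoint).
# X, y, and the shapes are illustrative placeholders, not the paper's data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 200, 306, 50
X = rng.standard_normal((n_trials, n_sensors, n_times))  # trials x sensors x timepoints
y = rng.integers(0, 2, size=n_trials)                    # binary trial labels

# train and test within each timepoint separately -> a Bode & Haynes 2009-style line
acc_by_time = np.array([
    cross_val_score(SVC(kernel="linear", C=1.0), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
```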

"Temporal generalization matrices" take timepoint-by-timepoint analyses a step further: instead of training and testing on images from the same timepoint, they train on images from one timepoint, then test on images from every other timepoint, systematically. The axes are thus (within-trial) timepoint, training-set timepoint on the y-axis and testing-set timepoint on the x-axis. The diagonal is from training and testing on the same timepoint, same as the Bode & Haynes 2009-style line graphs.

Since this is MEG data, there are a LOT of timepoints, giving pretty matrices like these, taken from their Figure 3. In the left-side matrix the signal starts around 0.1 seconds (bottom-left corner of the red blotch) and lasts until around 0.4 seconds, without much generalization: a classifier trained at 0.2 seconds won't accurately classify 0.4 seconds. The right-side matrix shows much more generalization: a classifier trained at 0.2 seconds classifies accurately all the way to the end of the trial. Altogether, these matrices are nice summaries of a huge number of classifications.

Finally, note the dark blue blotch in the left-side matrix: they found clear below-chance accuracy for certain combinations of timepoints, which they discuss quite a bit in the manuscript. I'm pointing this out since below-chance accuracies are a persistent issue, and they (properly) didn't over-interpret (or hide) it in this case.


King JR, Gramfort A, Schurger A, Naccache L, & Dehaene S (2014). Two distinct dynamic modes subtend the detection of unexpected sounds. PLoS ONE, 9(1). PMID: 24475052
