Saturday, May 5, 2012

scaling, part 1: entire ROI affected

This is the first of several posts demonstrating the impact of scaling on MVPA. This first case is an ROI-based analysis in which every voxel has higher BOLD for one condition than for the other. This is obviously a toy situation, but it is analogous to a uniform mass-univariate effect within the ROI. In later posts I'll show cases where only some of the voxels are affected, and the impact on searchlight analyses.

I'll use a flat ROI of 25 voxels, with two conditions ("a" and "b"), two runs, and two examples of each condition in each run. I filled the voxels for condition "a" with random numbers, then added 1 to each voxel value to get the corresponding image for condition "b".
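The dataset construction can be sketched as follows. This is a minimal Python stand-in for the post's R code; the seed is arbitrary.

```python
import random

random.seed(0)                  # arbitrary seed, for reproducibility
n_vox = 25                      # flat ROI of 25 voxels
n_examples = 4                  # 2 runs x 2 examples per run, per condition

# condition "a": random values; condition "b": the matching "a" image + 1 in every voxel
a_rows = [[random.gauss(0, 1) for _ in range(n_vox)] for _ in range(n_examples)]
b_rows = [[v + 1.0 for v in row] for row in a_rows]
```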

Here are the values for the first 10 voxels of the dataset:

and images of the entire dataset (darker blues are larger values, reds negative, white zero):

The difference between the datasets is obvious to the eye (class "b" is more blue than class "a"), and the data are classified perfectly by a linear SVM.
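To see why classification is so easy, note that each "b" example equals its "a" counterpart plus 1 in every voxel, so even the mean across voxels is a perfect linear readout. The sketch below uses that mean with a midpoint threshold as a stand-in for the post's linear SVM; the 3-voxel values are hypothetical.

```python
a = [[0.5, -1.2, 0.3], [1.1, 0.4, -0.8]]        # two hypothetical "a" examples
b = [[v + 1.0 for v in row] for row in a]       # matching "b" examples, +1 per voxel

def vox_mean(row):
    return sum(row) / len(row)

# threshold from one "training" pair; the rest are classified by their voxel mean
threshold = (vox_mean(a[0]) + vox_mean(b[0])) / 2
pred = ["b" if vox_mean(r) > threshold else "a" for r in a + b]
# pred is ["a", "a", "b", "b"]: perfect separation
```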

Next, I perform run-column scaling (normalizing voxelwise, across all examples within each run separately).

This changes the values but does not remove the difference between classes "a" and "b" in each voxel, and the dataset is still classified perfectly by a linear SVM.
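A sketch of why column scaling preserves the effect: z-scoring a column is a monotone transformation, so within each voxel the "b" values stay above their matched "a" values. Again in Python rather than the post's R, with hypothetical 3-voxel values:

```python
from statistics import mean, stdev

def column_scale(rows):
    # z-score each column (voxel) across the examples of one run
    cols = list(zip(*rows))
    scaled = [[(v - mean(c)) / stdev(c) for v in c] for c in cols]
    return [list(r) for r in zip(*scaled)]

a = [[0.5, -1.2, 0.3], [1.1, 0.4, -0.8]]        # two "a" examples, 3 voxels
b = [[v + 1.0 for v in row] for row in a]       # matching "b" examples
scaled = column_scale(a + b)                     # one toy "run" of 4 examples
# in every voxel, each scaled "b" row still exceeds its matched "a" row
```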

But row-scaling (normalizing volumewise, across all voxels within each example) does remove the difference between classes "a" and "b", so classification fails.
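The reason is arithmetic: because "b" = "a" + 1 uniformly, each "b" row's mean is exactly 1 higher than its "a" counterpart's while its standard deviation is unchanged, so after row scaling the two rows coincide. A sketch (Python stand-in, hypothetical values):

```python
from statistics import mean, stdev

def row_scale(rows):
    # z-score each row (example) across its voxels
    return [[(v - mean(r)) / stdev(r) for v in r] for r in rows]

a = [[0.5, -1.2, 0.3], [1.1, 0.4, -0.8]]   # hypothetical voxel values
b = [[v + 1.0 for v in row] for row in a]  # uniform +1 in every voxel
sa, sb = row_scale(a), row_scale(b)
# sa and sb are numerically identical: nothing is left for a classifier to use
```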

Likewise, row-subtraction (removing the mean from all voxels in each example) will remove the difference between the classes and cause classification to fail.
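Row-subtraction fails for the same reason: a uniform +1 shift lives entirely in the row mean, so centering each example removes it. A minimal sketch (Python stand-in, hypothetical values):

```python
from statistics import mean

def row_center(rows):
    # subtract each example's mean from all of its voxels
    return [[v - mean(r) for v in r] for r in rows]

a = [[0.5, -1.2, 0.3], [1.1, 0.4, -0.8]]   # hypothetical voxel values
b = [[v + 1.0 for v in row] for row in a]
ca, cb = row_center(a), row_center(b)
# ca and cb are identical: the class difference is gone
```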

To recap: if you're performing an ROI-based MVPA and have a uniform effect (e.g. all voxels have higher BOLD for one stimulus type than the other), row-scaling and row-subtraction will eliminate this information, but column-scaling will not.

R code for these analyses is available here.
