Thursday, November 19, 2015

"Assessment of tonotopically organised subdivisions in human auditory cortex using volumetric and surface-based cortical alignments"

Given my occasional forays into surface analysis (and lack of conviction that it's necessarily a good idea for fMRI), I was intrigued by some of the analyses in a recent paper from Dave Langers ("Assessment of tonotopically organised subdivisions in human auditory cortex using volumetric and surface-based cortical alignments").

For task fMRI, the most transparent way to compare different analysis methods seems to be to use a task that's fairly well defined and anatomically predictable. Movement and the primary senses are probably most tractable; for example, a task with blocks of finger-tapping and toe-wiggling should produce very strong signal, and the two conditions should be distinguishable in motor cortex. This gives a basis of comparison: with which method do we see the finger and toe signals most distinctly?

Langers (2014) investigated tonotopic maps in auditory cortex, which encode the frequency of heard sounds, using both volumetric and surface analysis. While tonotopic maps are not fully understood (see the paper for details!), this is a well-defined question for comparing surface and volumetric analysis of fMRI data: we know where primary auditory cortex is located, and we know when people were listening to which sounds. He used a Hotelling's T2-related statistic to describe the "tonotopic gradient vectors", which reminds me a bit of cvMANOVA and the LD-t, but I'll concentrate on the surface vs. volume aspects here.
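For readers who haven't met the statistic: below is a minimal sketch of the textbook one-sample Hotelling's T2 test, applied to hypothetical per-subject 2D tonotopic gradient vectors at a single location. This is only the generic form (the paper's statistic is related but defined there in full), and all names and data here are made up.

```python
import numpy as np
from scipy import stats

def hotelling_t2_one_sample(vectors):
    """One-sample Hotelling's T^2 test that n p-dimensional vectors
    (shape n x p) have mean zero; returns T^2 and a p-value via the
    standard F conversion. Illustrative only, not the paper's exact
    statistic."""
    n, p = vectors.shape
    m = vectors.mean(axis=0)                 # sample mean vector
    S = np.cov(vectors, rowvar=False)        # p x p sample covariance
    t2 = n * m @ np.linalg.solve(S, m)       # T^2 = n * m' S^-1 m
    f_stat = (n - p) / (p * (n - 1)) * t2    # F with (p, n - p) d.f.
    return t2, stats.f.sf(f_stat, p, n - p)

# e.g., 20 subjects' 2D gradient vectors at one vertex/voxel:
rng = np.random.default_rng(0)
grads = rng.normal(loc=[0.5, 0.1], scale=1.0, size=(20, 2))
print(hotelling_t2_one_sample(grads))
```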

This is Figure 2 from the paper, which gives a flowchart of the procedures for the surface and volume versions of the analysis. He mapped the final volumetric results onto a (group) surface to make it easier to compare the surface and volume results, but the preprocessing and statistics were carried out separately (and, it seems to me, reasonably): SPM8 for volumetric, FreeSurfer for surface. The fMRI voxels were small - just 1.5 x 1.5 x 1.5 mm - which is plausibly small enough to support surface analysis.
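As an aside, if you want to try that last projection step yourself, here's a minimal sketch using nilearn's vol_to_surf, one convenient way to put a volumetric statistical map onto a group-average surface for side-by-side comparison. This is not the paper's pipeline, and the input filename is hypothetical.

```python
from nilearn import datasets, surface

# Fetch a group-average surface mesh to project onto (a stand-in for
# whatever group surface the comparison actually uses).
fsaverage = datasets.fetch_surf_fsaverage()

# Hypothetical volumetric group result from the volume-based analysis.
stat_img = "volumetric_stat_map.nii.gz"

# Sample the volume at locations along the cortical mesh; `texture`
# holds one value per left-hemisphere vertex, ready to display next
# to surface-based results.
texture = surface.vol_to_surf(stat_img, fsaverage.pial_left)
```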

So, which did better, surface or volume? Well, to quote from the Discussion, "... the activation detected by the surface-based method was stronger than that according to the volumetric method. At the same time, the related statistical confidence levels were more or less the same." The image below is part of Figure 4 from the paper, showing a few of the group-level results (take a look at the full paper). As he observes, "The surface-based results had a noisier appearance than the volumetric ones, in particular for the frequency-related outcomes shown in Figure 4c–e."


So, in this paper, while the surface analysis seemed to produce better anatomical alignment between people, the activation maps were not clearer or more significant. I'd very much like to see more comparisons of this type (particularly with HCP processing, given its unique aspects), to learn whether this is a common pattern or something unique to tonotopic maps in auditory cortex and this particular protocol.


Langers, D. (2014). Assessment of tonotopically organised subdivisions in human auditory cortex using volumetric and surface-based cortical alignments. Human Brain Mapping, 35(4), 1544-1561. DOI: 10.1002/hbm.22272

7 comments:

  1. but if I got it right, it actually wasn't a "comparison ... made between the usage of volumetric and surface-based registration methods", it was a "comparison ... made between the usage of [an anatomical] volumetric and [an anatomical] surface-based registration method", i.e. it was a comparison between two methods (non-linear volumetric by SPM8b and surface-based by FreeSurfer) which differ more than they are similar. Different results could have been obtained even between different volumetric methods, and that on its own is a big can of worms. This paper (abstract/title) should have made the objectively narrow scope of the comparison clear.

  2. actually -- does SPM8b even have non-linear volumetric alignment (getting late here)? so most probably it was good old-fashioned linear alignment, imperfections of which would have resulted in smearing of effects across subjects

  3. For the surface-based analysis, here is a key quote:

    "Functional data were assigned to all vertices by trilinear interpolation of the normalised functional volumes."

    So, they could be losing significant signal from the cortical ribbon by doing trilinear mapping from 2mm voxels to the midthickness surface (the closer to a voxel center a vertex lands, the more noise and less signal gets mapped to that vertex, because fewer voxels are averaged together); a small sketch at the end of this comment illustrates the point.

    Granted, the ribbon mapping method we use in the HCP via Connectome Workbench is a rather new method (the paper describing it was published in 2013, and the code may not have been available at the time), so this study didn't have the opportunity to try it; I can't fault the authors for using what may have been standard practice at the time.

    Additionally, surface registration and analysis have the benefit of allowing cross-subject computation/comparison without mixing CSF and white matter with cortical signal, and without needing to resolve subject differences in cortical geometry. This also makes it much easier to do registration based on features other than cortical folding, since the geometry of the cortex is removed from the registration problem (MSM has been aimed at this goal from the start, while I don't know of anyone attempting this in volume registration).
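    Here is the promised sketch of what point trilinear sampling does, using scipy's map_coordinates (order=1 is trilinear); the array shapes and names are made up, and this stands in for, rather than reproduces, the paper's actual mapping step.

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates

    # Stand-in functional volume and midthickness vertex coordinates,
    # assumed already transformed into the volume's voxel (i, j, k) space.
    vol = np.random.rand(64, 64, 40)
    vertex_ijk = np.random.rand(1000, 3) * [63, 63, 39]

    # order=1 gives trilinear interpolation: each vertex value is a
    # weighted average of the 8 surrounding voxel centers. A vertex that
    # lands exactly on a voxel center takes that single voxel's value
    # (weight 1), inheriting its full noise; a vertex between centers
    # averages several voxels - the effect described above.
    vertex_values = map_coordinates(vol, vertex_ijk.T, order=1, mode='nearest')
    print(vertex_values.shape)  # (1000,) - one value per vertex
    ```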

  4. Thanks for mentioning my paper here. Great to see a critical discussion of an issue that remains relevant more widely! Also thanks to the other commenters for their (valid!) points.

    Please do note that the surface data were sampled mid-cortex at the vertex locations in the surface sheet; in particular, the data were not averaged across the thickness of the cortical ribbon. This means that 2D smoothing occurs in a "disk-like" area, whereas 3D smoothing occurs in a "spherical" volume. The latter can therefore make use of more data, spanning the cortical thickness (a back-of-the-envelope illustration of this appears at the end of this comment). This will affect noise levels, and therefore t-values. For that reason I don't think it is fair to draw too many conclusions from the mere fact that the surface outcomes appeared noisier. (In the paper I did a supplementary surface analysis with an increased smoothing radius to see how signal magnitudes and noise smoothness can be exchanged.)

    My conclusion from these analyses was that the 2D alignment does help restrict the analysis to grey matter: you are not comparing GM in one subject to WM in another just because the gyri happen to differ. Of course, because the fMRI signal primarily arises in the GM, this helps improve the detectability of the signal. However, within the cortical surface, it is also sometimes suggested that 2D alignment better succeeds in coregistering regions with the same function (e.g. functional fields). I did not get the impression that this was the case in my study: auditory cortex contains a number of subregions with differing tonotopic layouts, but the maps did not become any crisper using 2D alignment as compared to 3D alignment.
    In the abstract I wrote: "Although the surface-based method resulted in a better registration across subjects of the grey matter segment as a whole, the alignment of functional subdivisions within the cortical sheet did not appear to improve over volumetric methods."

    @yarikoptic: I agree it wasn't a systematic comparison across the wide range of possible methods; I chose two common methods, one from each type (2D vs. 3D). However, SPM did use non-linear (i.e. non-affine) transformations that include "warping" of space.
    @TimCoalson: I seem to fully agree with you too, given my own comment above. :) Fortunately, some of this was discussed in the paper.
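    As promised, the back-of-the-envelope illustration of the disk-vs-sphere point: a rough count of how many of the paper's 1.5 mm voxels fall inside each kernel shape at the 5 mm FWHM used, crudely treating the kernel as a hard shape of radius FWHM/2 rather than a Gaussian.

    ```python
    import numpy as np

    voxel = 1.5   # mm, isotropic voxel size from the paper
    fwhm = 5.0    # mm, the volumetric smoothing used in the paper
    r = fwhm / 2  # crude kernel radius, ~half the FWHM

    # 2D surface smoothing ~ a disk one voxel thick at the midthickness;
    # 3D volumetric smoothing ~ a full ball.
    disk_vox = np.pi * r**2 * voxel / voxel**3
    ball_vox = (4 / 3) * np.pi * r**3 / voxel**3

    print(f"~{disk_vox:.0f} voxels in the disk vs ~{ball_vox:.0f} in the ball")
    # ~9 vs ~19: the 3D kernel averages roughly twice as many voxels,
    # one reason the volumetric maps can look smoother at equal FWHM.
    ```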

  5. @Dave: Ah, I see. I must confess I had mostly skimmed the paper, and was responding more to the blog post by adding a possible explanation of the differences. The disk-like nature of 2D smoothing is indeed another way to phrase the effect of point-interpolated surface mapping (with the ribbon mapping method, it behaves more like a cylinder that can only get wider - of course, both conform to the folding of the cortical ribbon, so it isn't quite that simple). Another notable quote from your paper:

    "Similar improvements may presumably be achieved by, for instance, averaging across the thickness of the cortical ribbon, or by employing a non-isotropic oblate spheroidal three-dimensional kernel that is aligned to the cortical sheet."

    You basically described the design motivation of both our ribbon mapping method, and our "myelin-style" mapping method (though we did the second as a clipped spherical kernel, not an oblate spheroid).

    As FreeSurfer's registration is folding-based, I wouldn't expect it to have resulted in obviously better areal alignment (aside from the volumetric problem of one subject's cortex overlapping with another subject's CSF or white matter). Here is some of our work on surface registration based on features related to functional area boundaries:

    http://www.sciencedirect.com/science/article/pii/S1053811914004546

    The HCP has been working further on surface registration with this goal; stay tuned.

    As for smoothing, it is good to see that you didn't increase volume smoothing beyond 5 mm FWHM, considering its unconstrained spherical nature and the tight folding of cortex (in fact, we would likely have tried even less smoothing). We also have better surface smoothing methods than iterative averaging of neighbors, but that is unlikely to have a noticeable effect on the results (and as one of the goals of the HCP is better spatial resolution, we don't generally advocate using much smoothing at all).
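    For concreteness, here is a minimal sketch of that simple iterative neighbor-averaging smoothing on a toy mesh; the connectivity, names, and damping factor are made up, and this is not Workbench's smoothing algorithm.

    ```python
    import numpy as np

    def smooth_on_surface(values, neighbors, iterations=10, lam=0.5):
        """Iterative neighbor averaging: each pass moves every vertex
        value toward the mean of its neighbors' values. More iterations
        behave roughly like a wider smoothing kernel."""
        values = np.asarray(values, dtype=float).copy()
        for _ in range(iterations):
            neigh_means = np.array([values[list(nb)].mean() for nb in neighbors])
            values = (1 - lam) * values + lam * neigh_means
        return values

    # Toy example: 4 fully connected vertices, signal on vertex 0 only.
    neighbors = [(1, 2, 3), (0, 2, 3), (0, 1, 3), (0, 1, 2)]
    print(smooth_on_surface([1.0, 0.0, 0.0, 0.0], neighbors, iterations=3))
    ```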

    1. Sorry, that quote should have a bit more of the surrounding context with it:

      "...surface-based smoothing employs a two-dimensional kernel that spans only a sub-volume of the three-dimensional kernel that is used in volumetric analyses (assuming identical FWHM). As a result, volumetric analyses can be more efficient in averaging out stochastic noise. By enlarging the FWHM in a supplementary surface-based analysis, an improved sensitivity could be obtained while still retaining a stronger signal strength. Similar improvements may presumably be achieved by, for instance, averaging across the thickness of the cortical ribbon, or by employing a non-isotropic oblate spheroidal three-dimensional kernel that is aligned to the cortical sheet."

  6. Thanks for the clarifying comments! There are so many fine (but important) details in how surface and volume analyses are carried out, many of which aren't apparent at first glance.

    ReplyDelete