Wednesday, September 27, 2017

voxelwise standard deviation at different MB/SMS acquisitions

I've previously posted about using standard deviation and mean images to evaluate fMRI data quality, and practiCal fMRI has a very useful series of posts explaining how various artifacts look in these images. In this post I use HCP images (MB/SMS 8, RL/LR encoding, customized scanner) and images from one of our studies (MB4 and MB8, AP/PA, Siemens Prisma scanner); see the links for acquisition details. The afni commands used to generate these images are after the jump.
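For reference, here is a minimal sketch of the kind of afni commands involved (file names are placeholders; the exact commands I used are after the jump):

# voxelwise mean and standard deviation across time for one run
3dTstat -mean -prefix run1_mean.nii.gz run1.nii.gz
3dTstat -stdev -prefix run1_stdev.nii.gz run1.nii.gz
# voxelwise tSNR (mean divided by standard deviation)
3dTstat -cvarinv -prefix run1_tSNR.nii.gz run1.nii.gz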

These tests were inspired by the work of Benjamin Risk, particularly his illustrations of banding and poor middle-of-the-brain sensitivity in the residuals (e.g., slides 28 and 29 of his OHBM talk). He mentioned that you can see some of the same patterns in simple standard deviation (stdev) images, which are in this post.

Here is a typical example of what I've been seeing with raw (not preprocessed) images. The HCP and MB8 scans are of the same person. The HCP scan is of the WM_RL task; the other two are of one of our tasks and the runs are about 12 minutes in duration (much longer than the HCP task). These MB8 and MB4 runs have low overall motion.


Looking closely, you can see horizontal bands in the HCP and MB8 images (marked with green arrows). None of the MB4 images I've checked have such banding (though all the HCP ones I've looked at do); sensible anatomy (big vessels, brain edges, etc.) is brightest at MB4, as it should be.

Here are the same runs again after preprocessing (with the HCP minimal preprocessing pipelines), first with all three at a low color threshold (800), then with all three at a high color threshold (2000).


The HCP run is "washed out" at the 800 threshold, with much higher standard deviation in the middle of the brain. Increasing the color scaling makes some anatomy visible in the HCP stdev map, but not as much as in the others, and with a hint of the banding (marked with green arrows; easier to see in 3D). The MB4 and MB8 runs don't have as dramatic a "wash out" at any color scaling, with more anatomic structure visible at MB8 and especially MB4. The horizontal banding is still somewhat visible in the MB8 run (marked with an arrow), and the MB4 run has much lower stdev at the tip of the frontal lobe (marked with a green arc). tSNR versions of these images are below the jump.

These patterns are in line with Todd et al. (2017), as well as Benjamin Risk's residual maps. I'm encouraged that it looks like we're getting better signal-to-noise with our MB4 scans (though we will be investigating the frontal dropout). Other metrics (innovation variance? GLM residuals?) may be even more useful for exploring these patterns. I suspect that some of the difference between the HCP and MB8 runs above may be due to the longer duration of the MB8 runs, but I haven't confirmed this.


Wednesday, September 20, 2017

yet more with respiration and motion regressors

Last year I did a series of posts on our experiences with multiband (SMS) task fMRI, particularly the respiration-related oscillations that are very apparent in the motion regressors of some people. See also practiCal fMRI's posts, especially this one. The tests and analyses here extend those, and convince me that the oscillations are not due to actual head motion, but rather to B0 modulation or something else related to chest motion and/or blood oxygenation ("apparent head motion").

These scans all used the previously-described MB4 acquisition sequence and an experienced test person with rather prominent respiratory oscillations but little overt motion (the same person is in the first two plots below). We've been evaluating Caseforge headcases for reducing movement and ensuring consistent head positioning in the headcoil. We have two Caseforge headcases for this person: one fits quite tightly, and the other a little looser. This gives a set of comparison runs: with foam packing only (allowing the most movement); with the regular Caseforge (some, but not much, movement possible); and with the tight Caseforge (motion essentially impossible).

The plots below show the six motion regressors (calculated with SPM12's realignment) and the raw respiration signal (recorded with a Siemens belt). I'm pretty confident that the motion regressors and respiration are precisely aligned here (the alignment was only approximate in some of last year's plots). The vertical grey lines mark one-minute intervals; the TR was 1.2 seconds. The tick marks show task onsets; each run had three blocks of a verbal-response Stroop task.

First, here is a run without a headcase (normal foam packing). The difference in raw respiration amplitude between rest and task blocks is clear, as is some drifting over the course of the run. The y and z regressors closely match the respiration trace; hopefully you can zoom in to see that the y, z, and respiration curves are in phase: if the respiration curve goes up, the y and z lines also go up. This was the case for all AP (anterior-posterior) encoded runs.

Next, here is a run of the same task and encoding (AP), with the very tight headcase. The oscillations in y and z are still tightly matched to the respiration and about the same amplitude as before, but the drifting and rotation are quite reduced.

Finally, here is the same task with the looser Caseforge headcase and the reverse encoding (PA). The rotation is perhaps a bit more than with the tight headcase, but the overall drift is quite low. The magnitude of the y and z oscillations is again about the same as in the previous plots. If you look closely, though, the y line is now out of phase with the respiration and z lines: the z line still goes up when the respiration goes up, but y goes down.

We ran other tests, including some breath-holding. Here are roughly two-minute breath-holding segments from an AP and a PA run. The y-axis flipping is hopefully easier to see here: the blue peaks match the respiration peaks with AP, but fall in between with PA.

This y-axis phase flipping with encoding direction, plus the consistency of the oscillation size across head stabilizers, has me convinced that we're seeing something other than overt head motion: I just don't believe he could have adjusted his y-axis movement to the encoding direction that precisely, even had he known which encoding we were using in each run.

If any of you have thoughts, or would like to look at these datasets to run additional tests, please let me know.

Wednesday, August 2, 2017

Getting started with Connectome Workbench 1.2.3

This post is a tutorial for getting started with Connectome Workbench 1.2.3. Some of this is an updated version of a tutorial I wrote back in 2013 for plotting a NIfTI image as a surface with Workbench. The information in that post is still fairly current, but the post is awkward as an introduction to using Workbench, since creating surface images from volumes is sort of a side topic when just getting started. If you're using Workbench for the first time I suggest that you start with this post, then explore further (such as these links); Workbench can do quite a bit more than covered in this tutorial (or blog!).

downloading

If you're using Linux you can install Workbench via NeuroDebian; on Windows or Mac OS you don't "install" Workbench, but just unzip the download and double-click the executable. On my Windows box I unzipped it into d:\Workbench\, so I double-click D:\Workbench\workbench-v1.2.3\bin_windows64\wb_view.exe to start the program. If you don't want to navigate to this spot each time, you can make a shortcut to wb_view.exe or add it to your path. wb_command.exe is in the same directory as wb_view.exe; wb_command.exe is not a GUI program (nothing will happen if you double-click it!), but it is useful for various functions; see this post and this documentation for details.
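For example, you can confirm that wb_command runs by opening a command window and asking for its version (the path below is from my install; adjust for yours):

D:\Workbench\workbench-v1.2.3\bin_windows64\wb_command.exe -version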

getting images to plot

Workbench doesn't come with any images. The Conte69 atlas is normalized to MNI, and aligned to the HCP preprocessed images. Follow the instructions from this page to obtain the 32k_fs_LR mesh version of the atlas if you'll be using HCP/CCF pipeline images. Unfortunately, I can't provide a direct download link, since the SumsDB failed, but I do suggest you obtain this full atlas if you'll be using Workbench with HCP or MNI-normalized images.

UPDATE (8 August 2017): Tim Coalson kindly suggested that an alternative suitable MNI underlay is the 1200 Subjects Group Average Data; this post describes the files and gives the direct ConnectomeDB download link.

We can obtain useful images from BALSA, which takes the same login account as the ConnectomeDB. I downloaded all files from "The Brain Analysis Library of Spatial maps and Atlases (BALSA) database" study link. Unzip the archive from BALSA into a convenient directory.

seeing a blank brain

Open the Workbench GUI (e.g., by double-clicking wb_view.exe). A command window will open (just ignore it), as well as an interactive box prompting you to open a spec or scene file. Click the Skip button to go to the main program. Note: spec and scene files are very useful, and a big reason to use Workbench, because they let you save collections of images and visualizations, which can save a massive amount of time. I won't cover them in this tutorial, though.

Since we skipped loading anything, Workbench opens with a blank screen. We want to first open images to use as underlays; these instructions will walk through the BALSA study images (Conte69 or the HCP 1200 subject group average could also be used). The rough idea is that we need a NIfTI volumetric underlay to plot volumetric blobs on, and GIfTI surface files to plot surface blobs on (see this post for a bit more about file types).

Select File -> Open File from the top menus to open a standard file selection dialog. Navigate to where you put the images from BALSA, and change the Files of type drop-down to "Surface Files (*surf.gii)", then select and open all 8 of them (4 inflations * 2 hemispheres). Open the Open File dialog again, setting the Files of type drop-down to "Volume Files (*.nii *.nii.gz)", and select and open both images.

All of the underlay images are now loaded in Workbench, but we need to tell it to display them like we want: let's put the surfaces on the first tab and volumes on the second.


The above image shows the settings for displaying the surfaces and volumes (click to enlarge it). The highlighted part of the surface tab (1, left side above) shows the toggles for adjusting the surface view; try dragging the hemispheres around with the mouse, then using the buttons in the Orientation part of the menu to reset. Options in the Montage Selection part of the menu control which version(s) of the surface are shown (inflated, midthickness, one hemisphere or both, etc.).

Click on the second tab, then choose Volume in the View part of the menu to tell Workbench we want to see volumes (marked by red dots on the right side of the above image). When you click the Volume button the menus will change, but you won't actually see anything: unlike surfaces, you have to turn the volumetric underlay on as a Layer in the Overlay ToolBox. The red arrow points at the proper box: select one of the NIfTI images in the third box, and check its little On checkbox. You should now see a single slice. I wanted to see multiple slices, so I clicked the On button in the Montage part of the menu (also marked by red dots in the above image). Try changing the settings in the Slice Plane, Montage, and Slice Indices/Coords parts of the menu to adjust the view (which slices, how many, which axis, etc.).

adding overlays

Now that we have underlay images, let's add something on top. The BALSA files we downloaded include multiple files (see their documentation for details), but let's just open one CIFTI file: Gordon333_FreesurferSubcortical.32k_fs_LR.dlabel.nii. Open this file in the same way as we opened the underlay images (File -> Open File), but set the Files of type to "Connectivity - Dense Label Files (*dlabel.nii)".

Workbench won't show the overlay immediately, but its name will be added to the Layers boxes in the Overlay ToolBox part of the window (in all the tabs). Click the box in the On column in its row, as marked by the arrows in the image below (left side); the overlay (in this case, the Gordon parcellation) should now be shown (right side). Note that in the screenshot the left hemisphere is much more inflated than the right (done via the Montage Selection settings), but the single overlay plotted correctly on both; this is the correct behavior.


Finally, let's add an overlay to the volume. Click on tab 2 (where we loaded the volume underlay), then set the top row in the Layers box to the overlay image and click its On box (marked by arrows in the screenshot below): colored regions should appear in subcortical structures. Why just subcortical? Recall that CIFTI files contain "grayordinates": surface vertices (for the cortex) and voxels (for the subcortex). Since we loaded a CIFTI file, Workbench plotted each part on the appropriate tab. Also, note that I have the underlay in the bottom row of the Layers box, and the overlay in the top row. Try switching them: the underlay will be plotted on top of the overlay.


Thursday, June 29, 2017

slides from OHBM 2017 session "High resolution fMRI via multiband (SMS) acquisition: opportunities and limitations"

We had a great session on multiband/SMS fMRI acquisitions in Vancouver at OHBM 2017. The speakers kindly allowed me to link to their slides here:

Benjamin Zahneisen spoke on the "Basics of Simultaneous Multi-Slice Imaging: SNR properties, g-factor penalty, multi-band artifacts, and other challenges associated with high temporal resolution".

Benjamin Risk spoke on the "Impacts of multiband acceleration factors on sensitivity and specificity".

Renzo (Laurentius) Huber spoke about "Recent experiences using SMS imaging in high-res fMRI studies".

Sunday, May 28, 2017

extracting values from CIFTI files: an easier way!

Several years ago I wrote a post on how to extract timecourses from HCP CIFTI files. That works, but newer versions of wb_command (I'm using version 1.2.3) have added functions that provide a simpler method.

Here's the situation: I want to extract the vertex-wise values corresponding to a specific Gordon parcel from a CIFTI COPE generated by the HCP, saving the values into a text file. Note that I don't want to do averaging or any other calculations on the vertex values - just extract the values.

Short version: convert both CIFTI dtseries files to text using wb_command -cifti-convert -to-text, then use the indices of the rows with the desired Gordon parcel number to get the correct vertices from the COPE file.

inputs

Gordon et al. released several versions of their parcellation, including Parcels_LR.dtseries.nii, which is aligned to the Conte69 (MNI) surface atlas, the same atlas as the preprocessed HCP CIFTIs. Parcels_LR.dtseries.nii is shown below (on the Conte69 anatomy) in Workbench (this tutorial and this one explain how to get to this point, if you're not familiar with Workbench).

The values I want are the COPEs the HCP generated from fitting the GLM; images like /SUBID/MNINonLinear/Results/tfMRI_WM/tfMRI_WM_hp200_s2_level2.feat/GrayordinatesStats/cope2.feat/pe1.dtseries.nii. Since these are parameter estimate images, each is not a timeseries but a single full-brain image. I want a vector of numbers (one per vertex in the parcel) for each COPE.

the command

Then we just run wb_command with -cifti-convert -to-text on the two images:
wb_command -cifti-convert -to-text Parcels_LR.dtseries.nii d:\temp\Gordon_Parcels_LR.dtseries.txt
wb_command -cifti-convert -to-text cope1_pe1.dtseries.nii d:\temp\cope1.txt

(On my windows box I open a command window in the directory containing wb_command.exe, and include the full path to the input and output files to avoid path issues.)

confirming correspondence

The commands should have generated two files, each a single column of numbers. cope1.txt has 91,282 rows (cortex and subcortical structures); Gordon_Parcels_LR.dtseries.txt has 59,412 rows (cortex only). From what I can determine, the cortex is the first 59,412 rows of the 91,282-row CIFTI, and the vertex order is the same in the two files.
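If you want to check the dimensions and included structures yourself, wb_command can print a summary of each file:

wb_command -file-information Parcels_LR.dtseries.nii
wb_command -file-information cope1_pe1.dtseries.nii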


This image shows a way to confirm the correspondence of values in the two files. I have both Parcels_LR.dtseries.nii and cope1_pe1.dtseries.nii loaded in Workbench, with the parcels displayed. The two output text files are in the WinEdt window at the upper right of the screenshot, the cope on the left and the parcels on the right.

I clicked on a spot in the right hemisphere SMmouth parcel (little white marker), making the Workbench Information window appear (lower right of the screenshot). The green lines mark that the spot I clicked is row index 33186 in the CIFTI data. This index is 0-based, so its values should fall in row 33187 of the 1-based WinEdt window; the blue pointers show this row in each file. The values match: the purple lines mark that this row of cope1.txt is -9.37034, the same as the DATA SERIES cope1_pe1.dtseries.nii line of the Workbench Information window. Similarly, the row in Gordon_Parcels_LR.dtseries.txt is 197, the same as the DATA SERIES Parcels_LR.dtseries.nii line in the Information window (red lines).

So, that's it: to get my vertex values, all I need to do is pull the rows of cope1.txt whose corresponding rows in Gordon_Parcels_LR.dtseries.txt contain the parcel number I want (e.g., 197).
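For example, here is a little sketch of that subsetting in a unix-style shell (file names as above; parcel 197):

# keep only the cortical rows of the cope (the first 59412)
head -n 59412 cope1.txt > cope1_cortex.txt
# pair each parcel label with its cope value, then keep the values whose label is 197
paste Gordon_Parcels_LR.dtseries.txt cope1_cortex.txt | awk '$1 == 197 {print $2}' > cope1_parcel197.txt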

Wednesday, May 10, 2017

task fMRI motion censoring (scrubbing) #3: impact

This is the third post in a series, which started with what we were aiming for with the censoring, then how we implemented it for a dataset. Here, I'll show an example of how the censoring changed the GLM results. The GLMs were run in afni, using 3dDeconvolve and TENT functions; the task timing is a complex mixed design (trials embedded in blocks), so the model has both event-related and block-related ("sustained activity") components.
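As a rough sketch of how those pieces fit together in a 3dDeconvolve call (not our actual command; the file names, timing files, and basis parameters are placeholders):

# mixed design: event-related (TENT) and sustained (BLOCK) regressors, with censoring
3dDeconvolve -input run1_AP.nii.gz run1_PA.nii.gz -polort A \
    -censor motion_censor.1D \
    -num_stimts 2 \
    -stim_times 1 trial_onsets.txt 'TENT(0,21.6,10)' -stim_label 1 trials \
    -stim_times 2 block_onsets.txt 'BLOCK(60,1)' -stim_label 2 sustained \
    -fout -bucket stats_censored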

Setting our censoring threshold to FD > 0.9 removed very few frames: less than 2% in most runs, and never more than 12%. I wondered whether such a modest amount of censoring would have a noticeable impact, but it looks like it did (which is a bit of a reminder of GLM sensitivity, but that's another issue).

Here's an example person. In the Stroop task for this person, 1 frame was censored in the Bas session, 15 in the Pro, and 0 in the Rea. There are more than 1000 frames in each session, so this is not many at all. The acquisition was with the MB4 sequence, so we have a pair of runs (AP and PA encoding) for the task in each session. Here are the motion and FD traces for this person in the Pro session (the session with the most censoring); this is a pretty good set of traces, with the biggest spikes censored at FD > 0.9 (red x).


Now, here are F-statistic images for the same task and person, for the sustained ("block") effect from the GLM. The no-censoring image is first, followed by the with-censoring image. In each, the first row is the Bas session, followed by Pro, then Rea; the color scaling is the same in both.


The third (Rea) row is identical in the two images: no censoring (so they had better match, since each session went into a separate GLM). The top rows (Bas) look very similar, though the peak values (rng, upper right) vary slightly. The second row (Pro), with the 15 censored frames, varies quite a bit between the two. I found the area marked with the blue arrows particularly interesting: the white matter is much brighter in the no-censoring version, and this white-matter pattern isn't present in the other (less spiky motion) runs and looks very much like an artifact (not BOLD-ish), particularly in the k=44 slice.

Similar effects are seen in the TENTs: high-motion people and sessions tend to have spikier (and less HRF-like) shapes, which is ameliorated a bit by the censoring. There seems to be a bit less ringing with censoring, as well. So, while these are preliminary, qualitative assessments, I'm encouraged that this small amount of censoring may be sensible.

Thursday, May 4, 2017

task fMRI motion censoring (scrubbing) #2: implementing

In the previous post I showed some plots of motion regressors (with enorm and FD) from an ongoing study, with qualitative descriptions and examples of the sort of motion we're seeing. In this post I'll describe some of the literature about motion censoring for task fMRI, and how we decided to implement censoring for this study.

Probably the most directly relevant recent paper for censoring task fMRI datasets, and the one whose recommendations we're broadly following, is Siegel et al. (2014). They explored the effects of censoring on three datasets at various FD thresholds. As is reasonable, given the great variation among experiments, they refrain from making "universal recommendations", but do provide useful summaries and guidelines.

As in most things, there's no free lunch with censoring: increasing the amount of censoring reduces the number of trials available for response estimation, but hopefully makes the remaining estimates more reliable. Siegel et al. (2014) found that a threshold of FD > 0.9 did well in many cases, and they generally suggest fairly light censoring: removing the highest-motion frames, not every frame with any movement (see the Discussion section, page 1994). Further, they suggest removing only the above-threshold frames, not adjacent frames (page 1992):
"In a one-factor ANOVA, the FD > 0.9, (f0, b0) mask produced significantly higher zscores than all of the other masks except FD > 1.1 mm (f0,b1) which was not significantly different. On the basis of these results, we do not recommend removing volumes proceeding or following high-motion volumes ..."
Siegel et al. (2014) didn't attempt to interpolate censored frames, citing the difficulty of accurately interpolating gaps of more than one TR. This strikes me as reasonable, particularly in task designs, where, depending on the analysis, it may be best to simply omit trials with above-threshold movement.

Setting the censoring threshold for any particular study is at least partially subjective, which is unfortunate, given the already-too-many experimenter degrees of freedom. We decided to see whether the FD > 0.9 threshold suggested by Siegel et al. (2014) seemed reasonable: did it capture rare spiky motion, but not oscillations? What percentage of frames was censored? This effort is what led to the images in the previous post: I marked the censored frames on plots of each run's motion, and we judged whether the marked frames seemed reasonable. In our case, no run had more than 12% of its frames censored, and most had less than 2%, so we decided to proceed with the FD > 0.9 threshold.

Looking at papers citing Siegel et al. (2014), I found one using FD > 0.9 for censoring (Davis, Goldwater, & Giron, 2017), one with FD > 0.8 (O'Hearn et al., 2016), and one with FD > 0.5 (Bakkour et al., 2017). Others mention censoring for motion but don't give details, and I've heard people mention censoring based on standard deviations of the estimates within the particular dataset. Censoring based on enorm values is pretty similar to the FD used by Siegel, though afni tends to recommend a smaller threshold, such as 0.3, for adult task fMRI (enorm is the euclidean norm of the motion parameter derivatives, so it is generally smaller than FD, which sums their absolute values). I don't have time to compile a summary of common enorm-based thresholds, but would be interested if someone else finds or creates one!
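For afni users, here is a minimal sketch of generating an enorm-based censor file with 1d_tool.py (file names are placeholders; dfile_rall.1D holds the six realignment parameters for all frames):

# writes motion_censor.1D (0 = censor that frame) and motion_enorm.1D,
# flagging frames whose motion derivative euclidean norm exceeds 0.3
1d_tool.py -infile dfile_rall.1D -set_nruns 1 -censor_motion 0.3 motion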

A final consideration is whether to use only censoring, or censoring plus including the motion estimates as nuisance regressors in the GLM. As summarized in Siegel et al. (2014), page 1992:
"Motion censoring generally outperformed motion regressions. Censoring at FD > 0.9 mm performed significantly better than the best regression .... To see whether a combination of censoring and regression might most benefit the data, a GLM was created using the default censoring settings and regressions of the derivatives of realignment estimates. The changes in z-score produced by this GLM were not significantly different from censoring alone ..." 
We are currently including 6 motion regressors in the (3dREMLfit) GLMs, plus omitting the censored frames. We might need to reevaluate that choice at some point; we did a bit of looking at including more regressors, but haven't previously considered using only censoring.
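Concretely, that combination might look something like this sketch in afni (placeholder names; the design matrix is built by 3dDeconvolve, then fit with 3dREMLfit):

# build the design with censoring plus the 6 motion parameters as nuisance
# regressors (-ortvec), stopping before the fit (-x1D_stop)
3dDeconvolve -input all_runs.nii.gz -polort A \
    -censor motion_censor.1D \
    -ortvec dfile_rall.1D motion \
    -num_stimts 1 -stim_times 1 onsets.txt 'BLOCK(60,1)' -stim_label 1 task \
    -x1D design.xmat.1D -x1D_stop
# fit the same design matrix with generalized least squares
3dREMLfit -matrix design.xmat.1D -input all_runs.nii.gz -Rbuck stats_REML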