Wednesday, November 29, 2017

assigning arbitrary values to surface parcels: method 2

The previous post describes a method for assigning arbitrary values to surface MMP parcels via GIfTI files. Tim Coalson kindly pointed me to another method, which I'll demonstrate here. Both methods work, but one or the other might be easier in particular situations.

In the previous post I used matlab and ran wb_command from the command prompt (on Windows, by opening a command window in the directory containing wb_command.exe, then using full paths to the input and output files). Here, I use R and call wb_command from within R via its system() function. You may need to adjust the system() calls for other operating systems, or simply replace system() with print() and copy-paste each full command line into the command prompt.

 rm(list=ls());  # clear R's memory  
   
 setwd("d:/Workbench/workbench-v1.2.3/bin_windows64/"); # location of wb_command.exe, to call wb_command functions via system() within R  
   
 local.path <- "d:/temp/";  # local directory for reading and writing files  
 s1200.path <- "d:/Workbench/HCP_S1200_GroupAvg_v1/"; # HCP S1200 Group Average Data Release  
 template.fname <- "MMPtemplate.pscalar.nii";  # filename for the template we'll make  
   
 # make a template pscalar.nii from the MMP atlas  
 system(paste0("wb_command -cifti-parcellate ", s1200.path, "S1200.thickness_MSMAll.32k_fs_LR.dscalar.nii ",   
        s1200.path, "Q1-Q6_RelatedValidation210.CorticalAreas_dil_Final_Final_Areas_Group_Colors.32k_fs_LR.dlabel.nii COLUMN ",  
        local.path, template.fname));  
 if (!file.exists(paste0(local.path, template.fname))) { stop(paste0("missing: ", local.path, template.fname)); }  
   
 # you can make a text version of the template, which has 360 rows, but the values in each row aren't the parcel numbers.  
 # system(paste0("wb_command -cifti-convert -to-text ", local.path, template.fname, " ", local.path, "temp.txt"));  
   
 # the text file needs to be arranged with the 180 right hemisphere parcels in the first 180 rows (in order),   
 # then the 180 parcels for the left hemisphere.  
 # build a text file with the values to plot.   
 out.vec <- rep(0,360);  
 out.vec[c(1,10,15)] <- 1;  # right hemisphere parcels given value 1  
 out.vec[c(180+1,180+10,180+15)] <- 2;  # corresponding left hemisphere parcels given value 2  
 write.table(out.vec, paste0(local.path, "plotDemo.txt"), col.names=FALSE, row.names=FALSE);  
   
 # create a CIFTI from the text file for viewing in Workbench  
 system(paste0("wb_command -cifti-convert -from-text ", local.path, "plotDemo.txt ", local.path, template.fname, " ", local.path, "plotDemo.pscalar.nii"));  
 if (!file.exists(paste0(local.path, "plotDemo.pscalar.nii"))) { stop(paste("missing:", local.path, "plotDemo.pscalar.nii")); }  

And here is the resulting image, with parcels 1, 10, and 15 plotted in pink on the right hemisphere (value 1) and yellow on the left (value 2). Instructions for generating this view are below the jump.


Monday, November 27, 2017

assigning arbitrary values to surface parcels: method 1

This tutorial describes a method for plotting arbitrary per-parcel values (such as from an analysis) on the surface. For example, let's say I want to display MMP parcels 1, 10, and 15 (only) in red, or (more usefully) to assign continuous numbers to each parcel, and then display parcels with larger numbers in hotter colors.

This post describes a method using matlab and creating GIfTI files; see the next post for a method using R, wb_command functions, and creating a CIFTI file. Both methods work, but one or the other might be more convenient in particular situations.

I assume that you have a surface version of the parcellation to use as a template. For example, the MMP parcellation was released in CIFTI format as part of the HCP 1200 release, and the Gordon (2014) parcellation can be downloaded here.

I'll be using the MMP in this example; if you want to follow along, download a copy of the S1200 Group Average Data Release; I put mine at d:/Workbench/HCP_S1200_GroupAvg_v1/. The MMP template is named Q1-Q6_RelatedValidation210.CorticalAreas_dil_Final_Final_Areas_Group_Colors.32k_fs_LR.dlabel.nii. (If you're not sure how to use the files in the S1200 release, try this tutorial to get started.)

I have the values to assign to the parcels in a text file with 180 lines (one line for each MMP parcel). For this tutorial, let's do the simple example of assigning a color to parcels 1, 10, and 15 only. An easy way to do this is to make a text file with 1s on these rows and 0s on all the other rows. I prefer R, but since the GIfTI library is in matlab, here's matlab code for making the little text file:
 out = repelem(0, 180);  
 out(1) = 1;  
 out(10) = 1;  
 out(15) = 1;  
 csvwrite('D:\temp\parcelNums.csv', out')  
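If you'd rather make this little text file in R, here's a minimal equivalent sketch (it writes a single column of numbers, which should still be readable by the csvread call further down):
 out.vec <- rep(0, 180);   # one value per MMP parcel  
 out.vec[c(1,10,15)] <- 1;   # parcels 1, 10, and 15 get the value 1  
 write.table(out.vec, "d:/temp/parcelNums.csv", col.names=FALSE, row.names=FALSE);  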

The MMP template is in CIFTI format, but we can extract GIfTI files for each hemisphere with the wb_command -cifti-separate function:
 wb_command -cifti-separate D:\Workbench\HCP_S1200_GroupAvg_v1\Q1-Q6_RelatedValidation210.CorticalAreas_dil_Final_Final_Areas_Group_Colors.32k_fs_LR.dlabel.nii COLUMN -metric CORTEX_LEFT D:\temp\mmpL.func.gii  
 wb_command -cifti-separate D:\Workbench\HCP_S1200_GroupAvg_v1\Q1-Q6_RelatedValidation210.CorticalAreas_dil_Final_Final_Areas_Group_Colors.32k_fs_LR.dlabel.nii COLUMN -metric CORTEX_RIGHT D:\temp\mmpR.func.gii  

This matlab code reads in the text file with the new parcel values and the GIfTI templates, then writes GIfTI files with the new parcel values substituted for the parcel label numbers:
 addpath 'C:/Program Files/MATLAB/gifti-1.6';  % so matlab can find the library  
 mmpL = gifti('D:\temp\mmpL.func.gii');  % load the left side gifti MMP atlas  
 mmpR = gifti('D:\temp\mmpR.func.gii');  % and the right side MMP atlas  
 newvals = csvread('D:\temp\parcelNums.csv');  % 180 integers; new value for each parcel  
 Lout = mmpL;  % output gifti  
 Lout.cdata(:,1) = repelem(0, size(mmpL.cdata,1));  % replace the values with zeros  
 Rout = mmpR;  
 Rout.cdata(:,1) = repelem(0, size(mmpR.cdata,1));   
 for i=1:180  % i = 1;  
   inds = find(mmpR.cdata == i);  % find vertices for parcel i  
   Rout.cdata(inds,1) = newvals(i);  % put the new value in for this parcel's vertices  
   inds = find(mmpL.cdata == (i+180)); % in MMP, left hemisphere parcels are labeled 181:360  
   Lout.cdata(inds,1) = newvals(i);    
 end  
 save(Lout,'D:\temp\plotL.func.gii','Base64Binary');  % save the gifti  
 save(Rout,'D:\temp\plotR.func.gii','Base64Binary');  

You can now plot these GIfTIs in Workbench (see this post if you're not sure how); I plotted them on the S1200 Group Average (MNI) anatomy:
I clicked to put a marker in the left visual parcel. The value at this vertex is 1, as assigned (green lines). I loaded in the MMP atlas as well (blue lines), so it tells me (correctly!) that the marker is in the L_V1_ROI.

Thursday, November 9, 2017

not all MB4 fMRI scans are free of gradients and banding

I was encouraged by the images in the previous post: it looked like the gradient and banding ("washing out") artifacts were not visible in the MB4 people. I need to revise that a bit: I do have bands and gradients in at least some MB4 people, though to a much lesser degree than in our MB8 datasets. How much this affects our task analyses is still to be determined.

On the left is the same MB4 person and run as in the previous post (27 September 2017): the voxelwise standard deviation over time of a task fMRI run, after going through the HCP minimal preprocessing pipelines. I was glad to see that the vessels were brightest (as they should be), though concerned about the frontal dropout. The image on the right is from another person scanned at MB4 (doing the same task; same encoding, scanner, etc.); the preprocessing and color scaling are the same for both.

The vessels clearly don't stand out as much in the person on the right. It's hard to tell in the above image, but there's a gradient in the standard deviation and tSNR images, with better signal at the edges than in the center of the brain. Below is another run from the person on the right, with tSNR calculated on the images ready to go into afni for GLM calculation (so through the HCP minimal preprocessing pipelines, plus smoothing and voxelwise normalizing). This tSNR image is shown at six different color scalings; it's easier to see interactively (download the NIfTI here), but hopefully it's clear that the darker colors (lower tSNR) are concentrated in the center of the brain and fade towards the edges, rather than being spread uniformly.
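(For reference, here is a minimal sketch of one way to compute such standard deviation and tSNR images with afni's 3dTstat, run from R via system(); the input filename is hypothetical, and these aren't necessarily the exact commands used for the images in these posts.)
 in.fname <- "d:/temp/run1_preproc.nii.gz";   # hypothetical 4d NIfTI of one preprocessed run  
 system(paste("3dTstat -stdev -prefix d:/temp/run1_stdev.nii.gz", in.fname));   # standard deviation over time at each voxel  
 system(paste("3dTstat -cvarinv -prefix d:/temp/run1_tSNR.nii.gz", in.fname));   # mean/stdev at each voxel (tSNR)  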


Here is the same person and run again, showing the standard deviation (first two panes) and tSNR (right pane) of the raw (no preprocessing; first pane and the lower, larger image) and minimally preprocessed (right two panes) images. I marked a banding distortion with green lines, as well as the frontal dropout. The banding is perfectly horizontal in the raw image, about 1/4 of the way up the image (see the larger image), which makes sense, since this is an MB4 acquisition. I included the larger view of the raw image since all three banding artifacts are somewhat visible in it; in our dataset the inferior band is generally the most prominent.



The banding and gradient artifacts are certainly less visually prominent in our MB4 images than in our MB8 images, but they are present in some MB4 people. I haven't systematically evaluated all of our participants (and probably won't be able to soon), so I don't have a sense of how often this occurs, or how much it impacts detection of task BOLD (which is of course the key question).

Below the jump: movement regressors for the two runs in the top image. The person with the banding and gradients had very low motion, less than the person from the September post.


Wednesday, September 27, 2017

voxelwise standard deviation at different MB/SMS acquisitions

I've previously posted about using standard deviation and mean images to evaluate fMRI data quality, and practiCal fMRI has a very useful series of posts explaining how various artifacts look in these images. In this post I use HCP images (MB/SMS 8, RL/LR, customized scanner) and some images from one of our studies (MB4 and MB8, AP/PA, Siemens Prisma scanner); see the links for acquisition details; the afni commands used to generate these images are after the jump.

These tests were inspired by the work of Benjamin Risk, particularly his illustrations of banding and poor middle-of-the-brain sensitivity in the residuals (e.g., slides 28 and 29 of his OHBM talk). He mentioned that you can see some of the same patterns in simple standard deviation (stdev) images, which are in this post.

Here is a typical example of what I've been seeing with raw (not preprocessed) images. The HCP and MB8 scans are of the same person. The HCP scan is of the WM_RL task; the other two are of one of our tasks and the runs are about 12 minutes in duration (much longer than the HCP task). These MB8 and MB4 runs have low overall motion.


 Looking closely, horizontal bands are visible in the HCP and MB8 images (marked with green arrows). None of the MB4 images I've checked have such banding (though all the HCP ones I've looked at do); sensible anatomy (big vessels, brain edges, etc.) is brightest at MB4, as it should be.

Here are the same runs again, after preprocessing (HCP minimal preprocessing pipelines), first with all three at a low color threshold (800), then with all three at a high color threshold (2000).


The HCP run is "washed out" at the 800 threshold with much higher standard deviation in the middle of the brain. Increasing the color scaling makes some anatomy visible in the HCP stdev map, but not as much as the others, and with a hint of the banding (marked with green arrows; easier to see in 3d). The MB4 and MB8 runs don't have as dramatic a "wash out" at any color scaling, with more anatomic structure visible at MB8 and especially MB4.The horizontal banding is still somewhat visible in the MB8 run (marked with an arrow), and the MB4 run has much lower stdev at the tip of the frontal lobe (marked with a green arc). tSNR versions of these images are below the jump.

These patterns are in line with Todd et al. (2017), as well as Benjamin Risk's residual maps. I'm encouraged that it looks like we're getting better signal-to-noise with our MB4 scans (though we will be investigating the frontal dropout). Other metrics (innovation variance? GLM residuals?) may be even more useful for exploring these patterns. I suspect that some of the difference between the HCP and MB8 runs above may be due to the longer MB8 runs, but I haven't confirmed that.


Wednesday, September 20, 2017

yet more with respiration and motion regressors

Last year I did a series of posts on our experiences with multiband (SMS) task fMRI, particularly related to the respiration-related oscillations that are very apparent in the motion regressors of some people. See also practiCal fMRI's posts, especially this one. The tests and analyses here extend those, and convince me that the oscillations are not due to actual head motion, but rather B0 modulation or something else related to chest motion and/or blood oxygenation ("apparent head motion").

These scans all use this previously-described MB4 acquisition sequence and an experienced test person with rather prominent respiratory oscillations (the same person is in these first two plots) but little overt motion. We've been evaluating Caseforge headcases for reducing movement and ensuring consistent head positioning in the headcoil. We have two Caseforge headcases for this person: one fits quite tightly, and the other a little looser. This gives a set of comparison runs: with foam packing only (allowing more movement); with the regular Caseforge (some, but not much, movement possible); and with the tight Caseforge (motion essentially impossible).

The plots below show the 6 motion regressors (calculated from SPM12's realignment) and raw respiration signal (with a Siemens belt). I'm pretty confident that the motion regressors and respiration are precisely aligned here (it was approximate in some of last year's plots). The vertical grey lines mark one-minute intervals; the TR was 1.2 seconds. The tick marks show task onset; each run had three blocks of a verbal-response Stroop task.
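(For anyone wanting to make similar plots, here is a minimal R sketch, not the exact code used for these figures: the filenames are hypothetical, and it assumes the SPM12 realignment parameters are in a six-column rp_*.txt file and that the respiration trace has already been resampled to one value per TR.)
 rp <- as.matrix(read.table("d:/temp/rp_run1.txt"));   # six realignment parameters from SPM12 (hypothetical filename)  
 resp <- scan("d:/temp/resp_run1.txt");   # respiration belt trace, assumed resampled to one value per TR  
 tr <- 1.2;   # seconds  
 x <- (1:nrow(rp)) * tr / 60;   # time in minutes  
 par(mfrow=c(2,1), mar=c(4,4,1,1));  
 matplot(x, rp, type='l', lty='solid', xlab="time (minutes)", ylab="realignment parameters");  
 abline(v=1:floor(max(x)), col='grey');   # vertical lines at one-minute intervals  
 plot(x, resp[1:nrow(rp)], type='l', xlab="time (minutes)", ylab="respiration");  
 abline(v=1:floor(max(x)), col='grey');  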

First, here is a run without a headcase (normal packing). The difference in raw respiration amplitude between rest and task blocks is clear, as is some drifting over the course of the run. The y and z regressors closely match the respiration trace; hopefully you can zoom in to see that the y, z, and respiration curves are in phase - if the respiration curve goes up, both the y and z lines also go up. This was the case for all AP (anterior-posterior) encoded runs.

Next, here is a run of the same task and encoding (AP), with the very tight headcase. The oscillations in y and z are still tightly matched to the respiration and are about the same amplitude as before, but the drifting and rotation are quite reduced.

Finally, here is the same task, but with the looser Caseforge headcase and the reverse encoding (PA). The rotation is perhaps a bit more than with the tight headcase, but overall drift is quite low. The magnitude of the y and z oscillations is again about the same as in the previous plots. If you look closely, the y line is out of phase with the respiration and z lines: the z line still goes up when the respiration goes up, but the y goes down.

We ran other tests, including some breath-holding. This shows about two minutes of the breath-holding segments from an AP and a PA run. The y-axis flipping is hopefully easier to see here: the blue peaks match the respiration peaks with AP, but fall in between on PA.

This y-axis flipping with encoding direction, plus the consistency of oscillation size across head stabilizers, has me convinced that we're seeing something other than overt head motion: I just don't believe he could have adjusted his y-axis movement with encoding direction that precisely, even had he known which encoding we were using each run.

If any of you have thoughts, or would like to look at these datasets to run additional tests, please let me know.

Wednesday, August 2, 2017

Getting started with Connectome Workbench 1.2.3

This post is a tutorial for getting started with Connectome Workbench 1.2.3. Some of this is an updated version of a tutorial I wrote back in 2013 for plotting a NIfTI image as a surface with Workbench. The information in that post is still fairly current, but the post is awkward as an introduction to using Workbench, since creating surface images from volumes is sort of a side topic when just getting started. If you're using Workbench for the first time I suggest that you start with this post, then explore further (such as these links); Workbench can do quite a bit more than covered in this tutorial (or blog!).

downloading

If you're using Linux you can install Workbench via NeuroDebian; on Windows or Mac OS you don't "install" Workbench, but just unzip the download and then double-click the executable. On my Windows box I unzipped it into d:\Workbench\, so I double-click D:\Workbench\workbench-v1.2.3\bin_windows64\wb_view.exe to start the program. If you don't want to navigate to this spot each time, you can make a shortcut to wb_view.exe or add it to your path. wb_command.exe is in the same directory as wb_view.exe; it is not a GUI program (nothing will happen if you double-click it!), but it is useful for various functions; see this post and this documentation for details.
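For example, running a command like the following at the command prompt prints a summary of a file's contents and structure (the path and file here are just placeholders; substitute any CIFTI or GIfTI file you have, such as one of the BALSA files downloaded below):
 wb_command -file-information d:\temp\Gordon333_FreesurferSubcortical.32k_fs_LR.dlabel.nii  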

getting images to plot

Workbench doesn't come with any images. The Conte69 atlas is normalized to MNI, and aligned to the HCP preprocessed images. Follow the instructions from this page to obtain the 32k_fs_LR mesh version of the atlas if you'll be using HCP/CCF pipeline images. Unfortunately, I can't provide a direct download link, since the SumsDB failed, but I do suggest you obtain this full atlas if you'll be using Workbench with HCP or MNI-normalized images.

UPDATE (8 August 2017): Tim Coalson kindly suggested that an alternative suitable MNI underlay is the 1200 Subjects Group Average Data; this post describes the files and gives the direct ConnectomeDB download link.

We can obtain useful images from BALSA, which uses the same login account as ConnectomeDB. I downloaded all of the files from "The Brain Analysis Library of Spatial maps and Atlases (BALSA) database" study link. Unzip the archive from BALSA into a convenient directory.

seeing a blank brain

Open the Workbench GUI (e.g., by double-clicking wb_view.exe). A command window will open (just ignore it), as well as an interactive box prompting you to open a spec or scene file. Click the Skip button to go to the main program. Note: spec and scene files are very useful, and a big reason to use Workbench, because they let you save collections of images and visualizations, which can save a massive amount of time. I won't cover them in this tutorial, though.

Since we skipped loading anything, Workbench opens with a blank screen. We want to first open images to use as underlays; these instructions will walk through the BALSA study images (Conte69 or the HCP 1200 subject group average could also be used). The rough idea is that we need a NIfTI volumetric underlay to plot volumetric blobs on, and GIfTI surface files to plot surface blobs on (see this post for a bit more about file types).

Select File -> Open File from the top menus to open a standard file selection dialog. Navigate to where you put the images from BALSA, and change the Files of type drop-down to "Surface Files (*surf.gii)", then select and open all 8 of them (4 inflations * 2 hemispheres). Open the Open File dialog again, setting the Files of type drop-down to "Volume Files (*.nii *.nii.gz)", and select and open both images.

All of the underlay images are now loaded in Workbench, but we need to tell it to display them like we want: let's put the surfaces on the first tab and volumes on the second.


The above image shows the settings to show the volumes and surfaces (click to enlarge it). The highlighted part of the surface tab (1, left side above) shows the toggles for adjusting the surface view; try dragging the hemispheres around with the mouse, then using the buttons in the Orientation part of the menu to reset. Options in the Montage Selection part of the menu control which version(s) of the surface is shown (inflated, midthickness, one hemisphere or both, etc.).

Click on the second tab, then choose Volume in the View part of the menu to tell Workbench we want to see volumes (marked by red dots on the right side of the above image). When you click the Volume button the menus will change, but you won't actually see anything: unlike surfaces, you have to turn the volumetric underlay on as a Layer in the Overlay ToolBox. The red arrow points at the proper box: select one of the NIfTI images in the third box, and check its little On checkbox. You should now see a single slice. I wanted to see multiple slices, so clicked the On button in the Montage part of the menu (also marked by red dots in the above image). Try changing the settings in the Slice Plane, Montage, and Slice Indices/Coords parts of the menu to adjust the view (which slices, how many, which axis, etc.).

adding overlays

Now that we have underlay images, let's add something on top. The BALSA files we downloaded include multiple files (see their documentation for details), but let's just open one CIFTI file: Gordon333_FreesurferSubcortical.32k_fs_LR.dlabel.nii. Open this file in the same way as we opened the underlay images (File -> Open File), but set the Files of type to "Connectivity - Dense Label Files (*dlabel.nii)".

Workbench won't show the overlay immediately, but its name will be added to the Layers boxes in the Overlay ToolBox part of the window (in all the tabs). Click the box in the On column in its row, as marked by the arrows in the image below (left side); the overlay (in this case, the Gordon parcellation) should now be shown (right side). Note that in the screenshot above the left hemisphere is much more inflated than the right (done via the Montage Selection settings), but the single overlay plotted correctly on both; this is the correct behavior.


Finally, let's add an overlay to the volume. Click on tab 2 (where we loaded the volume underlay), then set the top row in the Layer box to the overlay image and click the On box (marked by arrows in the screenshot below): colored regions should appear in subcortical structures. Why just subcortical? Recall that CIFTI files have "grayordinates": surface (for the cortex) and volume (for the subcortex). Since we loaded a CIFTI file, Workbench plotted each part on the appropriate tab. Also, note that I have the underlay in the bottom row of the Layers box, and the overlay in the top row. Try switching them: the underlay will be plotted on top of the overlay.


Thursday, June 29, 2017

slides from OHBM 2017 session "High resolution fMRI via multiband (SMS) acquisition: opportunities and limitations"

We had a great session on multiband/SMS fMRI acquisitions in Vancouver at OHBM 2017. The speakers kindly allowed me to link to their slides here:

Benjamin Zahneisen spoke on the "Basics of Simultaneous Multi-Slice Imaging: SNR properties, g-factor penalty, multi-band artifacts, and other challenges associated with high temporal resolution".

Benjamin Risk spoke on the "Impacts of multiband acceleration factors on sensitivity and specificity".

Renzo (Laurentius) Huber spoke about "Recent experiences using SMS imaging in high-res fMRI studies".