Wednesday, November 29, 2017

assigning arbitrary values to surface parcels: method 2

The previous post describes a method for assigning arbitrary values to surface MMP parcels via GIfTI files. Tim Coalson kindly pointed me to another method, which I'll demonstrate here. Both methods work, but one or the other might be easier in particular situations.

In the previous post I used matlab and ran wb_command from the command prompt (on Windows, by opening a command window in the directory with wb_command.exe, then using full paths to the input and output files). Here, I use R, and call wb_command from within R via its system() function. You may need to adjust the system() calls for other operating systems, or simply replace them with print() and copy-paste the full command into a command prompt.

 rm(list=ls());  # clear R's memory  
   
 setwd("d:/Workbench/workbench-v1.2.3/bin_windows64/"); # location of wb_command.exe, to call wb_command functions via system() within R  
   
 local.path <- "d:/temp/";  # local directory for reading and writing files  
 s1200.path <- "d:/Workbench/HCP_S1200_GroupAvg_v1/"; # HCP S1200 Group Average Data Release  
 template.fname <- "MMPtemplate.pscalar.nii";  # filename for the template we'll make  
   
 # make a template pscalar.nii from the MMP atlas  
 system(paste0("wb_command -cifti-parcellate ", s1200.path, "S1200.thickness_MSMAll.32k_fs_LR.dscalar.nii ",   
        s1200.path, "Q1-Q6_RelatedValidation210.CorticalAreas_dil_Final_Final_Areas_Group_Colors.32k_fs_LR.dlabel.nii COLUMN ",  
        local.path, template.fname));  
 if (!file.exists(paste0(local.path, template.fname))) { stop(paste0("missing: ", local.path, template.fname)); }  
   
 # you can make a text version of the template, which has 360 rows, but the values in each row aren't the parcel numbers.  
 # system(paste0("wb_command -cifti-convert -to-text ", local.path, template.fname, " ", local.path, "temp.txt"));  
   
 # the text file needs to be arranged with the 180 right hemisphere parcels in the first 180 rows (in order),   
 # then the 180 parcels for the left hemisphere.  
 # build a text file with the values to plot.   
 out.vec <- rep(0,360);  
 out.vec[c(1,10,15)] <- 1;  # right hemisphere parcels given value 1  
 out.vec[c(180+1,180+10,180+15)] <- 2;  # corresponding left hemisphere parcels given value 2  
 write.table(out.vec, paste0(local.path, "plotDemo.txt"), col.names=FALSE, row.names=FALSE);  
   
 # create a CIFTI from the text file for viewing in Workbench  
 system(paste0("wb_command -cifti-convert -from-text ", local.path, "plotDemo.txt ", local.path, template.fname, " ", local.path, "plotDemo.pscalar.nii"));  
 if (!file.exists(paste0(local.path, "plotDemo.pscalar.nii"))) { stop(paste("missing:", local.path, "plotDemo.pscalar.nii")); }  

And here is the resulting image, with parcels 1, 10, and 15 plotted in pink on the right hemisphere (1), and yellow on the left (2). Instructions for generating this view are below the jump.


Monday, November 27, 2017

assigning arbitrary values to surface parcels: method 1

This tutorial describes a method for plotting arbitrary per-parcel values (such as from an analysis) on the surface. For example, let's say I want to display MMP parcels 1, 10, and 15 (only) in red, or (more usefully) to assign continuous numbers to each parcel, and then display parcels with larger numbers in hotter colors.

This post describes a method using matlab and creating GIfTI files; see the next post for a method using R, wb_command functions, and creating a CIFTI file. Both methods work, but one or the other might be more convenient in particular situations.

I assume that you have a surface version of the parcellation to use as a template. For example, the MMP parcellation was released in CIFTI format as part of the HCP 1200 release, and the Gordon (2014) parcellation can be downloaded here.

I'll be using the MMP in this example; if you want to follow along, download a copy of the S1200 Group Average Data Release; I put mine at d:/Workbench/HCP_S1200_GroupAvg_v1/. The MMP template is named Q1-Q6_RelatedValidation210.CorticalAreas_dil_Final_Final_Areas_Group_Colors.32k_fs_LR.dlabel.nii. (If you're not sure how to use the files in the S1200 release, try this tutorial to get started.)

I have the values to assign to the parcels in a text file with 180 lines (one line for each MMP parcel). For this tutorial, let's do the simple example of assigning a color to parcels 1, 10, and 15 only. An easy way to do this is to make a text file with 1s on these rows and 0s on all the other rows. I prefer R, but since the GIfTI library is in matlab, here's matlab code for making the little text file:
 out = repelem(0, 180);  
 out(1) = 1;  
 out(10) = 1;  
 out(15) = 1;  
 csvwrite('D:\temp\parcelNums.csv', out')  
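
If you'd rather make the text file in R (as in the next post), here's an equivalent sketch (same hypothetical d:/temp/ location; the one-column output is readable by the csvread() call in the matlab code below):
 out <- rep(0, 180);  # one value per parcel  
 out[c(1,10,15)] <- 1;  # parcels to highlight  
 write.table(out, "d:/temp/parcelNums.csv", col.names=FALSE, row.names=FALSE);  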

The MMP template is in CIFTI format, but we can extract GIfTI files for each hemisphere using wb_command's -cifti-separate function:
 wb_command -cifti-separate D:\Workbench\HCP_S1200_GroupAvg_v1\Q1-Q6_RelatedValidation210.CorticalAreas_dil_Final_Final_Areas_Group_Colors.32k_fs_LR.dlabel.nii COLUMN -metric CORTEX_LEFT D:\temp\mmpL.func.gii  
 wb_command -cifti-separate D:\Workbench\HCP_S1200_GroupAvg_v1\Q1-Q6_RelatedValidation210.CorticalAreas_dil_Final_Final_Areas_Group_Colors.32k_fs_LR.dlabel.nii COLUMN -metric CORTEX_RIGHT D:\temp\mmpR.func.gii  

This matlab code reads in the text file with the new parcel values and the GIfTI templates, then writes GIfTI files with the new parcel values substituted for the parcel label numbers:
 addpath 'C:/Program Files/MATLAB/gifti-1.6';  % so matlab can find the library  
 mmpL = gifti('D:\temp\mmpL.func.gii');  % load the left side gifti MMP atlas  
 mmpR = gifti('D:\temp\mmpR.func.gii');  % and the right side MMP atlas  
 newvals = csvread('D:\temp\parcelNums.csv');  % 180 integers; new value for each parcel  
 Lout = mmpL;  % output gifti  
 Lout.cdata(:,1) = repelem(0, size(mmpL.cdata,1));  % replace the values with zeros  
 Rout = mmpR;  
 Rout.cdata(:,1) = repelem(0, size(mmpR.cdata,1));   
 for i=1:180  % i = 1;  
   inds = find(mmpR.cdata == i);  % find vertices for parcel i  
   Rout.cdata(inds,1) = newvals(i);  % put the new value in for this parcel's vertices  
   inds = find(mmpL.cdata == (i+180)); % in the MMP, left hemisphere parcels are labeled 181:360  
   Lout.cdata(inds,1) = newvals(i);    
 end  
 save(Lout,'D:\temp\plotL.func.gii','Base64Binary');  % save the gifti  
 save(Rout,'D:\temp\plotR.func.gii','Base64Binary');  

You can now plot these GIfTIs in Workbench (see this post if you're not sure how); I plotted them on the S1200 Group Average (MNI) anatomy:
I clicked to put a marker in the left visual parcel. The value at this vertex is 1, as assigned (green lines). I loaded in the MMP atlas as well (blue lines), so it tells me (correctly!) that the marker is in the L_V1_ROI.

UPDATE 9 November 2018: I added code that makes this same GIfTI on the fly and plots it in a knitr document to my gifti plotting demo.

Thursday, November 9, 2017

not all MB4 fMRI scans are free of gradients and banding

I was encouraged by the images in the previous post: it looked like the gradient and banding ("washing out") artifacts were not visible in the MB4 people. I need to revise that a bit: I do have bands and gradients in at least some MB4 people, though to a much lesser degree than in our MB8 datasets. How much this affects our task analyses is still to be determined.

On the left is the same MB4 person and run as in the previous post (27 September 2017): the voxelwise standard deviation over time of a task fMRI run, after the HCP minimal preprocessing pipelines. I was glad to see that the vessels were brightest (as they should be), though concerned about the frontal dropout. On the right is another person scanned at MB4 (doing the same task; same encoding, scanner, etc.); the preprocessing and color scaling are the same for both.

The vessels clearly don't stand out as much in the person on the right. It's hard to tell in the above image, but there's a gradient in the standard deviation and tSNR images, with better signal on the edges than in the center of the brain. Below is another run from the person on the right, with tSNR calculated on the images ready to go into afni for GLM calculation (so through the HCP minimal preprocessing pipelines, plus smoothing and voxelwise normalizing). This tSNR image is shown at six different color scalings; it's easier to see interactively (download the NIfTI here), but hopefully it's clear that the darker colors (lower tSNR) spread from the center of the brain to the edges, rather than uniformly.
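
For reference, this is roughly how such images can be made with afni's 3dTstat, here wrapped in R system() calls as in other posts; the file names are placeholders, and the exact commands I used are after the jump in the September post:
 # sketch: voxelwise standard deviation and tSNR images from a 4d fMRI run (placeholder file names)  
 in.fname <- "run1.nii.gz";  
 system(paste0("3dTstat -stdev -prefix run1_stdev.nii.gz ", in.fname));   # standard deviation over time  
 system(paste0("3dTstat -cvarinv -prefix run1_tSNR.nii.gz ", in.fname));  # mean/stdev (tSNR); see 3dTstat -help for detrending options  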


Here is the same person and run again, with the standard deviation (first two) and tSNR (right pane) of the raw (no preprocessing, first and lower) and minimally preprocessed (right two) images. I marked a banding distortion with green lines, as well as the frontal dropout. The banding is perfectly horizontal in the raw image, at 1/4 of the way up the image (see larger image), which makes sense, since this is an MB4 acquisition. I included the larger view of this raw image since all three banding artifacts are somewhat visible; in our dataset the inferior band is generally the most prominent.



The banding and gradient artifacts are certainly less visually prominent in our MB4 than our MB8 images, but they are present in some MB4 people. I haven't systematically evaluated all of our participants (and probably won't be able to soon), so I don't have a sense of how often this occurs, or how much it impacts detection of task BOLD (which is of course the key question).

Below the jump: movement regressors for the two runs in the top image. The person with the banding and gradients had very low motion, less than the person from the September post.


Wednesday, September 27, 2017

voxelwise standard deviation at different MB/SMS acquisitions

I've previously posted about using standard deviation and mean images to evaluate fMRI data quality, and practiCal fMRI has a very useful series of posts explaining how various artifacts look in these images. In this post I use HCP images (MB/SMS 8, RL/LR, customized scanner) and some images from one of our studies (MB4 and MB8, AP/PA, Siemens Prisma scanner); see the links for acquisition details, and the afni commands to generate these images are after the jump.

These tests were inspired by the work of Benjamin Risk, particularly his illustrations of banding and poor middle-of-the-brain sensitivity in the residuals (e.g., slides 28 and 29 of his OHBM talk). He mentioned that you can see some of the same patterns in simple standard deviation (stdev) images, which are in this post.

Here is a typical example of what I've been seeing with raw (not preprocessed) images. The HCP and MB8 scans are of the same person. The HCP scan is of the WM_RL task; the other two are of one of our tasks and the runs are about 12 minutes in duration (much longer than the HCP task). These MB8 and MB4 runs have low overall motion.


 Looking closely, horizontal bands are visible in the HCP and MB8 images (marked with green arrows). None of the MB4 images I've checked have such banding (though all the HCP ones I've looked at do); sensible anatomy (big vessels, brain edges, etc.) is brightest at MB4, as it should be.

Here are the same runs again, after preprocessing (HCP minimal preprocessing pipelines), first with all three at a low color threshold (800), then with all three at a high color threshold (2000).


The HCP run is "washed out" at the 800 threshold with much higher standard deviation in the middle of the brain. Increasing the color scaling makes some anatomy visible in the HCP stdev map, but not as much as the others, and with a hint of the banding (marked with green arrows; easier to see in 3d). The MB4 and MB8 runs don't have as dramatic a "wash out" at any color scaling, with more anatomic structure visible at MB8 and especially MB4.The horizontal banding is still somewhat visible in the MB8 run (marked with an arrow), and the MB4 run has much lower stdev at the tip of the frontal lobe (marked with a green arc). tSNR versions of these images are below the jump.

These patterns are in line with Todd et al. (2017), as well as Benjamin Risk's residual maps. I'm encouraged that it looks like we're getting better signal-to-noise with our MB4 scans (though we will be investigating the frontal dropout). Other metrics (innovation variance? GLM residuals?) may be even more useful for exploring these patterns. I suspect that some of the difference between the HCP and MB8 runs above may be due to the longer MB8 runs, but I haven't confirmed that.


Wednesday, September 20, 2017

yet more with respiration and motion regressors

Last year I did a series of posts on our experiences with multiband (SMS) task fMRI, particularly related to the respiration-related oscillations that are very apparent in the motion regressors of some people. See also practiCal fMRI's posts, especially this one. The tests and analyses here extend those, and convince me that the oscillations are not due to actual head motion, but rather B0 modulation or something else related to chest motion and/or blood oxygenation ("apparent head motion").

These scans all use this previously-described MB4 acquisition sequence, and an experienced test person with rather prominent respiratory oscillations (the same person is in these first two plots) but little overt motion. We've been evaluating Caseforge headcases to reduce movement and ensure consistent head positioning in the headcoil. We have two Caseforge headcases for this person: one fits quite tightly, and the other a little looser. This gives a set of comparison runs: with foam packing only (allowing more movement); with the regular Caseforge (some, but not much, movement possible); and with the tight Caseforge (motion essentially impossible).

The plots below show the 6 motion regressors (calculated from SPM12's realignment) and the raw respiration signal (recorded with a Siemens belt). I'm pretty confident that the motion regressors and respiration are precisely aligned here (the alignment was approximate in some of last year's plots). The vertical grey lines mark one-minute intervals; the TR was 1.2 seconds. The tick marks show task onsets; each run had three blocks of a verbal-response Stroop task.
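
For anyone wanting to make similar plots, here's a minimal R sketch (the file name and column layout are placeholders; the respiration trace could be resampled to the frame times and overlaid with lines() the same way):
 # sketch: plot the 6 realignment parameters with vertical grey lines at one-minute intervals  
 tr <- 1.2;   # seconds  
 mot <- read.table("motion.txt")[,1:6];   # assumed: one row per frame, the six realignment parameters in the first six columns  
 mins <- seq(from=0, to=nrow(mot), by=60/tr);   # frame indices at one-minute intervals  
 plot(x=0, y=0, type='n', xlim=c(1,nrow(mot)), ylim=range(as.matrix(mot)), xlab="frame", ylab="movement");  
 abline(v=mins, col='grey');   # vertical grey line each minute  
 for (i in 1:6) { lines(mot[,i], col=rainbow(6)[i]); }  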

First, here is a run without a headcase (normal packing). The difference in raw respiration amplitude between rest and task blocks is clear, as is some drifting over the course of the run. The y and z regressors closely match the respiration trace; hopefully you can zoom in to see that the y, z, and respiration curves are in phase - if the respiration curve goes up, both the y and z lines also go up. This was the case for all AP (anterior-posterior) encoded runs.

Next, here is a run of the same task and encoding (AP), with the very tight headcase. The oscillations in y and z are still tightly matched to the respiration and about the same amplitude as before, but the drifting and rotation are quite reduced.

Finally, here is the same task, but with the looser Caseforge headcase and the reverse encoding (PA). The rotation is perhaps a bit more than with the tight headcase, but overall drift is quite low. The magnitude of the y and z oscillations is again about the same as in the previous plots. If you look closely, the y line is out of phase with the respiration and z lines: the z line still goes up when the respiration goes up, but the y goes down.

We ran other tests, including some breath-holding. This shows about two minutes of the breath-holding segment from an AP and a PA run. The y-axis flipping is hopefully easier to see here: the blue peaks match the respiration peaks with AP, but fall in between on PA.

This y-axis flipping with encoding direction, plus the consistency of oscillation size across head stabilizers, has me convinced that we're seeing something other than overt head motion: I just don't believe he could have adjusted his y-axis movement with encoding direction that precisely, even had he known which encoding we were using each run.

If any of you have thoughts, or would like to look at these datasets to run additional tests, please let me know.

UPDATE 11 October 2018: This dataset is now available for download on openneuro.org, DOI:10.18112/openneuro.ds002737.v1.0.1 (was 10.18112/openneuro.ds001544.v1.1.0), called "multibandCFtests". (Note: I updated the links August 2020; the dataset was not properly deidentified the first time, requiring correction by the openneuro team - thanks!)

Wednesday, August 2, 2017

Getting started with Connectome Workbench 1.2.3

WARNING: This post is out of date; the current tutorial is for Workbench 1.4.2.


This post is a tutorial for getting started with Connectome Workbench 1.2.3. Some of this is an updated version of a tutorial I wrote back in 2013 for plotting a NIfTI image as a surface with Workbench. The information in that post is still fairly current, but the post is awkward as an introduction to using Workbench, since creating surface images from volumes is sort of a side topic when just getting started. If you're using Workbench for the first time I suggest that you start with this post, then explore further (such as these links); Workbench can do quite a bit more than covered in this tutorial (or blog!).

downloading

If you're using Linux, you can install Workbench via NeuroDebian. If you're using Windows or Mac OS you don't "install" Workbench; just unzip the download, then double-click the executable. On my Windows box I unzipped it into d:\Workbench\, so I double-click D:\Workbench\workbench-v1.2.3\bin_windows64\wb_view.exe to start the program. If you don't want to navigate to this spot each time, you can make a shortcut to wb_view.exe or add it to your path. wb_command.exe is in the same directory as wb_view.exe. wb_command.exe is not a GUI program (nothing will happen if you double-click it!), but is useful for various functions; see this post and this documentation for details.

getting images to plot

Workbench doesn't come with any images. The Conte69 atlas is normalized to MNI, and aligned to the HCP preprocessed images. Follow the instructions from this page to obtain the 32k_fs_LR mesh version of the atlas if you'll be using HCP/CCF pipeline images. Unfortunately, I can't provide a direct download link, since the SumsDB failed, but I do suggest you obtain this full atlas if you'll be using Workbench with HCP or MNI-normalized images.

UPDATE (8 August 2017): Tim Coalson kindly suggested that an alternative suitable MNI underlay is the 1200 Subjects Group Average Data; this post describes the files and gives the direct ConnectomeDB download link.

We can obtain useful images from BALSA, which uses the same login account as the ConnectomeDB. I downloaded all files from "The Brain Analysis Library of Spatial maps and Atlases (BALSA) database" study link. Unzip the archive from BALSA into a convenient directory.

seeing a blank brain

Open the Workbench GUI (e.g., by double-clicking wb_view.exe). A command window will open (just ignore it), as well as an interactive box prompting you to open a spec or scene file. Click the Skip button to go to the main program. Note: spec and scene files are very useful, and a big reason to use Workbench, because they let you save collections of images and visualizations, which can save a massive amount of time. I won't cover them in this tutorial, though.

Since we skipped loading anything, Workbench opens with a blank screen. We want to first open images to use as underlays; these instructions will walk through the BALSA study images (Conte69 or the HCP 1200 subject group average could also be used). The rough idea is that we need a NIfTI volumetric underlay to plot volumetric blobs on, and GIfTI surface files to plot surface blobs on (see this post for a bit more about file types).

Select File -> Open File from the top menus to open a standard file selection dialog. Navigate to where you put the images from BALSA, and change the Files of type drop-down to "Surface Files (*surf.gii)", then select and open all 8 of them (4 inflations * 2 hemispheres). Open the Open File dialog again, setting the Files of type drop-down to "Volume Files (*.nii *.nii.gz)", and select and open both images.

All of the underlay images are now loaded in Workbench, but we need to tell it to display them like we want: let's put the surfaces on the first tab and volumes on the second.


The above image shows the settings to show the volumes and surfaces (click to enlarge it). The highlighted part of the surface tab (1, left side above) shows the toggles for adjusting the surface view; try dragging the hemispheres around with the mouse, then using the buttons in the Orientation part of the menu to reset. Options in the Montage Selection part of the menu control which version(s) of the surface is shown (inflated, midthickness, one hemisphere or both, etc.).

Click on the second tab, then choose Volume in the View part of the menu to tell Workbench we want to see volumes (marked by red dots on the right side of the above image). When you click the Volume button the menus will change, but you won't actually see anything: unlike surfaces, you have to turn the volumetric underlay on as a Layer in the Overlay ToolBox. The red arrow points at the proper box: select one of the NIfTI images in the third box, and check its little On checkbox. You should now see a single slice. I wanted to see multiple slices, so clicked the On button in the Montage part of the menu (also marked by red dots in the above image). Try changing the settings in the Slice Plane, Montage, and Slice Indices/Coords parts of the menu to adjust the view (which slices, how many, which axis, etc.).

adding overlays

Now that we have underlay images, let's add something on top. The BALSA files we downloaded include multiple files (see their documentation for details), but let's just open one CIFTI file: Gordon333_FreesurferSubcortical.32k_fs_LR.dlabel.nii. Open this file in the same way as we opened the underlay images (File -> Open File), but set the Files of type to "Connectivity - Dense Label Files (*dlabel.nii)".

Workbench won't show the overlay immediately, but its name will be added to the Layers boxes in the Overlay ToolBox part of the window (in all the tabs). Click the box in the On column in its row, as marked by the arrows in the image below (left side); the overlay (in this case, the Gordon parcellation) should now be shown (right side). Note that in the screenshot above the left hemisphere is much more inflated than the right (done via the Montage Selection settings), but the single overlay plotted correctly on both; this is the correct behavior.


Finally, let's add an overlay to the volume. Click on tab 2 (where we loaded the volume underlay), then set the top row in the Layer box to the overlay image and click the On box (marked by arrows in the screenshot below): colored regions should appear in subcortical structures. Why just subcortical? Recall that CIFTI files have "grayordinates": surface (for the cortex) and volume (for the subcortex). Since we loaded a CIFTI file, Workbench plotted each part on the appropriate tab. Also, note that I have the underlay in the bottom row of the Layers box, and the overlay in the top row. Try switching them: the underlay will be plotted on top of the overlay.


WARNING: This post is out of date; the current tutorial is for Workbench 1.4.2.

Thursday, June 29, 2017

slides from OHBM 2017 session "High resolution fMRI via multiband (SMS) acquisition: opportunities and limitations"

We had a great session on multiband/SMS fMRI acquisitions in Vancouver at OHBM 2017. The speakers kindly allowed me to link to their slides here:

Benjamin Zahneisen spoke on the "Basics of Simultaneous Multi-Slice Imaging: SNR properties, g-factor penalty, multi-band artifacts, and other challenges associated with high temporal resolution".

Benjamin Risk spoke on the "Impacts of multiband acceleration factors on sensitivity and specificity".

Renzo (Laurentius) Huber spoke about "Recent experiences using SMS imaging in high-res fMRI studies".

Sunday, May 28, 2017

extracting values from CIFTI files: an easier way!

Several years ago I wrote a post on how to extract timecourses from HCP CIFTI files. That works, but newer versions of wb_command (I'm using version 1.2.3) have added functions that provide a simpler method.

Here's the situation: I want to extract the vertex-wise values corresponding to a specific Gordon parcel from a CIFTI COPE generated by the HCP, saving the values into a text file. Note that I don't want to do averaging or any other calculations on the vertex values - just extract the values.

Short version: convert both CIFTI dtseries files to text using wb_command -cifti-convert -to-text, then use the indices of the rows with the desired Gordon parcel number to get the correct vertices from the COPE file.

inputs

Gordon et al. released several versions of their parcellation, including Parcels_LR.dtseries.nii, which is aligned to the Conte69 (MNI) surface atlas, same as the preprocessed HCP CIFTIs. Parcels_LR.dtseries.nii is shown below (on the Conte69 anatomy) in Workbench (this tutorial and this one explain how to get to this point, if you're not familiar with Workbench).

The values I want are the COPEs the HCP generated from fitting the GLM; images like /SUBID/MNINonLinear/Results/tfMRI_WM/tfMRI_WM_hp200_s2_level2.feat/GrayordinatesStats/cope2.feat/pe1.dtseries.nii. Since these are parameter estimate images, each is a single full-brain image rather than a timeseries. I want a vector of numbers (same length as the number of vertices in the parcel) for each COPE.

the command

Then we just run wb_command with -cifti-convert -to-text on the two images:
wb_command -cifti-convert -to-text Parcels_LR.dtseries.nii d:\temp\Gordon_Parcels_LR.dtseries.txt
wb_command -cifti-convert -to-text cope1_pe1.dtseries.nii d:\temp\cope1.txt

(On my windows box I open a command window in the directory containing wb_command.exe, and include the full path to the input and output files to avoid path issues.)

confirming correspondence

The commands should have generated two files, each a column of numbers. cope1.txt has 91,282 rows (cortex and subcortical structures), Gordon_Parcels_LR.dtseries.txt has 59,412 rows (cortex only). From what I can determine, the cortex is the first 59,412 rows of the 91,282 row CIFTI, and the vertex order is the same between the two.


This image shows a way to confirm the correspondence of values in the files. I have both Parcels_LR.dtseries.nii and cope1_pe1.dtseries.nii loaded in Workbench, with the parcels displayed. The two output text files are in the WinEdt window at the upper right of the screenshot; the cope on the left and parcels on the right.

I clicked on a spot in the right hemisphere, SMmouth parcel (little white marker), making the Workbench Information window appear (lower right of the screenshot). The green lines mark that the spot I clicked is row index 33186 in the CIFTI data. This is 0-based, so its values should fall in row 33187 in the 1-based WinEdt window; the blue pointers show this row in each file. The values match: the purple lines mark that this row in cope1.txt is -9.37034, same as the DATA SERIES cope1_pe1.dtseries.nii line of the Workbench Information window. Similarly, the value in this row of Gordon_Parcels_LR.dtseries.txt is 197, same as the DATA SERIES Parcels_LR.dtseries.nii line in the Information window (red lines).

So, that's it: to get my vertex values, all I need to do is subset all the rows from cope1.txt that have the parcel number I want (e.g., 197) in Gordon_Parcels_LR.dtseries.txt.
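
In R, that subsetting might look something like this (paths and the example parcel number are from above):
 # sketch: pull the COPE values for one Gordon parcel out of the converted text files  
 parcels <- read.table("d:/temp/Gordon_Parcels_LR.dtseries.txt")[,1];   # parcel number for each surface grayordinate (59,412 rows)  
 cope <- read.table("d:/temp/cope1.txt")[,1];   # COPE value for each grayordinate (91,282 rows)  
 cope <- cope[1:length(parcels)];   # keep the cortical rows only; the ordering matches (see above)  
 vals <- cope[which(parcels == 197)];   # vertex values for parcel 197 (the SMmouth parcel in the example)  
 write.table(vals, "d:/temp/cope1_parcel197.txt", col.names=FALSE, row.names=FALSE);  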

Wednesday, May 10, 2017

task fMRI motion censoring (scrubbing) #3: impact

 This is the third post in a series, which started with what we were aiming for with the censoring, then how we implemented it for a dataset. Here, I'll show an example of how the censoring changed the GLM results. The GLMs were run in afni, using 3dDeconvolve and TENT functions; the task timing is a complex mixed design (trials embedded in blocks), so the model has both event-related and block-related ("sustained activity") components.
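
For orientation, here is a stripped-down sketch of this sort of afni GLM, with placeholder file names and basis parameters (not our actual model), wrapped in R system() as elsewhere on this blog:
 # sketch: mixed (event + sustained) design with TENT and BLOCK regressors, censoring, and motion nuisance regressors  
 cmd <- paste("3dDeconvolve -input run1.nii.gz run2.nii.gz -polort A",  
        "-censor censor.1D",   # one row per frame; 0 = censor, 1 = keep  
        "-num_stimts 3",  
        "-stim_times 1 trial_onsets.txt 'TENT(0,16,9)' -stim_label 1 trials",   # event-related component  
        "-stim_times 2 block_onsets.txt 'BLOCK(60,1)' -stim_label 2 blocks",    # sustained (block) component  
        "-stim_file 3 motion.1D'[0]' -stim_base 3 -stim_label 3 motion0",   # one motion regressor; the other five would be added the same way (and -num_stimts increased)  
        "-x1D design.xmat.1D -bucket stats");  
 system(cmd);  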

Setting our censoring threshold to FD > 0.9 removed very few frames; less than 2% in most runs, and never more than 12%. I wondered if such a modest amount of censoring would have a noticeable impact, but it looks like it did (which is a bit of a reminder of GLM sensitivity, but that's another issue).

Here's an example person. In the Stroop task for this person 1 frame was censored in the Bas session, 15 in the Pro, and 0 in the Rea. There are more than 1000 frames in each session, so this is not many at all. The acquisition was with the MB4 paradigm, so we have a pair of runs (AP - PA encoding) for the task in each session. Here are the motion and FD traces for this person in the Pro session (the one with the most censoring); this is a pretty good set of traces, with the biggest spikes censored at FD > 0.9 (red x).


Now, here are F-statistic images for the same task and person, for the sustained ("block") effect from a GLM. The no-censoring image is first, followed by the with-censoring image. The first row for each is the Bas session, followed by Pro, then Rea; the color scaling is the same in each.




The third (Rea) row is identical in the two images: no censoring (so they had better match, since each session went into a separate GLM). The top rows (Bas) look very similar, though the peak values (rng in upper right) vary slightly. The second row (Pro), with the 15 censored frames, varies quite a bit between the two. I found the area marked with the blue arrows particularly interesting: the white matter is much brighter in the no-censoring version; this white-matter pattern isn't present in the other (less spiky motion) runs, and looks very much like an artifact (not BOLD-ish), particularly in the k=44 slice.

Similar effects are seen in the TENTs: high-motion people and sessions tend to have spikier (and less HRF-like) shapes, which is ameliorated a bit by the censoring. There seems to be a bit less ringing with censoring, as well. So, while these are preliminary, qualitative assessments, I'm encouraged that this small amount of censoring may be sensible.

Thursday, May 4, 2017

task fMRI motion censoring (scrubbing) #2: implementing

In the previous post I showed some plots of motion regressors (with enorm and FD) from an ongoing study, with qualitative descriptions and examples of the sort of motion we're seeing. In this post I'll describe some of the literature about motion censoring for task fMRI, and how we decided to implement censoring for this study.

Probably the most directly relevant recent paper for censoring task fMRI datasets, and the one whose recommendations we're broadly following, is Siegel et al. (2014). They explored the effects of censoring on three datasets at various FD thresholds. As is reasonable, given the great variations in experiments, they refrain from making "universal recommendations", but do provide useful summaries and guidelines.

As in most things, there's no free lunch with censoring: increasing the amount of censoring reduces the number of trials available for response estimation, but hopefully makes those estimates more reliable. Siegel et al. (2014) found that a threshold of FD > 0.9 did well in many cases, and generally suggest fairly light censoring - removing the highest-motion frames, not every frame with any movement (see the Discussion section, page 1994). Further, they suggest removing only the above-threshold frames, not adjacent frames (page 1992):
"In a one-factor ANOVA, the FD > 0.9, (f0, b0) mask produced significantly higher zscores than all of the other masks except FD > 1.1 mm (f0,b1) which was not significantly different. On the basis of these results, we do not recommend removing volumes proceeding or following high-motion volumes ..."
Siegel et al. (2014) didn't attempt to interpolate censored frames, citing the difficulty of accurately interpolating gaps of more than one TR. This strikes me as reasonable, particularly in task designs, where, depending on the analysis, it may be best to simply omit trials with above-threshold movement.

Setting the censoring threshold for any particular study is at least partially subjective, which is unfortunate, given the already-too-many experimenter degrees of freedom. We decided to see if the FD > 0.9 threshold suggested by Siegel et al. (2014) seemed reasonable: did it capture rare spiky motion, but not oscillations? What percentage of frames were censored? This effort is what led to the images in the previous post: I marked the censored frames on plots of each run's motion, and we judged whether the marked frames seemed reasonable. In our case, no run had more than 12% of the frames censored, and most had less than 2%, so we decided to proceed with the FD > 0.9 threshold.
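
For concreteness, here's a minimal R version of the FD calculation and thresholding, following Power et al.'s definition (summed absolute frame-to-frame differences of the six realignment parameters, with rotations converted from degrees to mm on a 50 mm radius sphere); the input file layout is an assumption:
 # sketch: framewise displacement (FD) and which frames would be censored  
 mot <- as.matrix(read.table("motion.txt"))[,1:6];   # assumed: x,y,z translations (mm), then roll,pitch,yaw (degrees)  
 mot[,4:6] <- mot[,4:6] * pi/180 * 50;   # rotations to mm of arc on a 50 mm radius sphere  
 fd <- c(0, rowSums(abs(diff(mot))));   # FD for each frame (first frame set to 0)  
 to.censor <- which(fd > 0.9);   # frames exceeding the threshold  
 100 * length(to.censor)/length(fd);   # percentage of frames that would be censored  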

Looking at papers citing Siegel et al. (2014), I found one using FD > 0.9 for censoring (Davis, Goldwater, & Giron, 2017), one with FD > 0.8 (O'Hearn et al., 2016), and one with FD > 0.5 (Bakkour et al., 2017). Others mention censoring for motion, but without giving details, and I've heard people mention censoring based on standard deviations of the estimates within the particular dataset. Censoring based on enorm values is pretty similar to the FD used by Siegel, though afni tends to recommend a smaller threshold, such as 0.3, for adult task fMRI. I don't have time to compile a summary of common enorm-based thresholds, but would be interested if someone else finds or creates one!

A final consideration is whether to use only censoring, or censoring plus including the motion estimates as nuisance regressors in the GLM. As summarized in Siegel et al. (2014), page 1992:
"Motion censoring generally outperformed motion regressions. Censoring at FD > 0.9 mm performed significantly better than the best regression .... To see whether a combination of censoring and regression might most benefit the data, a GLM was created using the default censoring settings and regressions of the derivatives of realignment estimates. The changes in z-score produced by this GLM were not significantly different from censoring alone ..." 
We are currently including 6 motion regressors in the (3dREMLfit) GLMs, plus omitting the censored frames. We might need to reevaluate that choice at some point; we did a bit of looking at including more regressors, but haven't previously considered using only censoring.

Friday, April 21, 2017

task fMRI motion censoring (scrubbing) #1: categorizing

Motion ... whether caused by head movement, other movement, breathing, or something else, it is one of the banes of fMRI. Motion artifacts are a huge issue for resting state fMRI, but not only there - they cause big problems in task fMRI as well. The best thing to do, of course, is to minimize movement during acquisition, by consistent head positioning and bracing with pads (or other systems). But no system is perfect (or able to eliminate breathing and heart beats), so we need to consider motion in the analyses. Here (as usual, though it's certainly not perfect) I'll use the motion traces (by which I mean the x, y, z, roll, pitch, yaw values produced during realignment and often used as nuisance regressors) as a proxy for motion.

Before deciding on any sort of censoring scheme for a study, it's good to look at the motion from all of the people, to get an idea of general movement categories. This post will show some runs I've decided are representative; exemplars of different sorts of movement. For background, these are individual runs from a cognitive task fMRI study, mostly with an MB4 acquisition scheme (details here).

All of these plots have vertical grey lines at one-minute intervals; the runs are around 12 minutes long. The horizontal green lines show the timing of the three task blocks present in each run; tasks were presented at random times and with varying durations during these blocks. The top pane has the translation (mm) and rotation (degrees) from the Movement_Regressors.txt file produced during (HCP-style) preprocessing. The second pane has the enorm and FD versions of the same motion traces, in mm.

I'll start with really nice traces, then work through to some that are not so nice, illustrating our qualitative categorization. I think it's useful to "calibrate your eyes" in this way to have a baseline understanding of some of the data characteristics before starting serious analyses or data manipulations.

Best possible: freakishly smooth: not even 0.5 mm translation over the entire 12 minute run; the little jiggles are probably related to breathing, and are also incredibly regular.

Not perfect, but very good; isolated spiky movement. This trace has very little drifting and almost entirely regular oscillations. This is the sort of movement that seems exactly suited to motion censoring: quite nice, except for a few short periods. (The frames censored with a threshold of FD > 0.9 are marked by red x.)
 

The next category is traces with prominent oscillations, but otherwise pretty clean (not terribly spiky or drifting), and fairly consistent in magnitude and frequency across the run. We'll be using these types of runs without censoring in our analyses (at least for now).

Finally, there are the runs of more questionable quality and utility: numerous spikes, drifting, and/or changes in oscillation magnitude. Frames to be censored at FD > 0.9 are marked, but that threshold is only designed to detect spikes. Slow drifts have generally been considered less problematic for task fMRI than spikes, and we have comparatively few drifts in this dataset, regardless.


Spiking and drifting are fairly familiar in motion traces; oscillations, less so. (Though I'm sure this sort of movement existed prior to SMS!) It is certainly possible that the oscillation changes (e.g., third image in last set, second in previous pair) reflect changes in respiration rate (perhaps at least somewhat due to entraining to the task timing), which could affect BOLD in all sorts of problematic ways, and for extended periods. We're actively looking into ways to quantify these sorts of effects and minimize (or at least understand) their impacts, but I don't think there are any simple answers. We have respiration and pulse recordings for most runs, but haven't yet been working with those in detail.

Tuesday, March 21, 2017

upcoming events: PRNI and OHBM

June will be busy for me: I'll be attending PRNI in Toronto, then on to Vancouver for OHBM. Here's a bit of a sales pitch; hope to see many of you there!

PRNI (Pattern Recognition in NeuroImaging) is a great little conference, focused on machine learning neuroimaging applications (lots of fMRI, but also EEG, MEG, etc.). It has aspects of both engineering conferences, with proceedings (you can submit a short paper - and there's still time; the deadline isn't until 14 April) and psychology conferences (you can submit a short abstract for poster presentation). We're still collecting tutorial proposals, but have a good lineup of invited speakers: Randy McIntosh, Janaina Mourao-Miranda, Rajeev Raizada, and Irina Rish. Besides the website (prni.org), we put announcements on facebook and twitter (@prniworkshop).

At OHBM I'll be speaking at the PR4NI tutorial on Sunday, "A new MVPA-er’s guide to fMRI datasets". Here's the abstract: "fMRI datasets have properties which make the application of machine learning (pattern recognition) techniques challenging – and exciting! This talk will introduce some of the properties most relevant for MVPA, particularly the strong temporal and spatial dependencies inherent in BOLD imaging. These dependencies mean that some fMRI experimental designs are more suitable for MVPA than others, due, for example, to how the tasks are distributed within scanner runs. I will also introduce some of the necessary analysis choices, such as how to summarize the response in time (e.g., convolving with an HRF), which brain areas to include, and feature selection techniques."

I also organized a symposium, which will be Tuesday morning, "High resolution fMRI via multiband (SMS) acquisition: opportunities and limitations". This symposium isn't about MVPA, but rather the practicalities of high-resolution fMRI: working with fMRI datasets with small voxels (say, 2 mm or so isotropic) and multiband acquisitions is different than single-shot fMRI datasets with 3 mm voxels. I will put up a separate post with talk and speaker details soon - I think it'll be a great session. Finally, I'll be presenting a poster sometime, about MVPA-ish twin similarity analyses using parts of the HCP dataset.

Wednesday, March 1, 2017

adjusting my mental model: movement correlation after preprocessing

It's time to adjust my mental model of the fMRI signal: there's a lot more correlation with movement in the timecourses after preprocessing than I'd expected. That movement really affects fMRI is not at all new, of course, and is why including the motion regressors as covariates in GLMs is standard. But I'd pictured that after preprocessing (assuming it went well and included realignment and spatial normalization) the correlation with movement left in the voxel timecourses should be pretty low (something like normally distributed, centered on 0, ranging between -0.2 and 0.2), without much spatial structure (i.e., maybe some ringing, but fairly uniformly present over the brain). Asking around, I think this is a fairly common mental model, but it looks to be quite wrong.

For exploring, I simply used afni's 3dTcorr1D program to correlate the timecourse of every voxel in several preprocessed task fMRI datasets with each of the six motion regressors (generated during preprocessing). 3dTcorr1D makes an image with 6 entries in the 4th dimension (sub-bricks, in afni-speak), one for each of the 6 motion columns; the value in each voxel is the Pearson correlation between that voxel's timecourse and the movement column. I plotted these correlations on brains, and made histograms to summarize the distributions.
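
The call itself is simple; something like this (placeholder file names), run from R via system() as in other posts:
 # sketch: voxelwise correlation with each column of the motion regressor file  
 system(paste("3dTcorr1D -pearson -mask brainmask.nii.gz -prefix corr_motion.nii.gz",  
        "preproc_run1.nii.gz motion_regressors.1D"));  
 # output: one sub-brick per column of motion_regressors.1D, holding each voxel's Pearson correlation with that column  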

The correlations are much higher than I expected, even in people with very little movement. Here's an example; more follow. Below are the motion regressors from a single task run (about 12.5 minutes long; HCP-style preprocessing; MB8 protocol), the correlation with each motion regressor, and a (density-style) histogram of the voxel-wise correlations. Color scaling for this and all brain images is from 1 (hottest) to -1 (coolest), not showing correlations between -0.25 and 0.25.


If my expectation (correlations normally distributed, centered on 0, ranging between -0.2 to 0.2) was right, there shouldn't be any color on these images at all, but there's clearly quite a bit: many voxels correlate around 0.5 with roll and pitch (4th and 5th brain rows are mostly "hot" colors), and around -0.5 with x, y, and z (first three rows mostly "cool" colors). There's some structure to the peak correlations (e.g., a hotter strip along the left side in slice 46), which may correspond with sulci or large vessels, but it's rather speckly. Note that this is a pretty low motion subject overall: less than 2 mm drift over the 12 minute run, and only 4 volumes marked for censoring (I didn't censor before the correlation).

Looking at other people and datasets, including from non-SMS acquisitions with larger voxels and longer TRs, it appears that correlations around 0.5 are pretty common: this isn't just some sort of weird effect that only shows up with high-resolution acquisitions. For another example, these are the histograms and motion regressors for four runs from one person included in this study (acquired with 4 mm isotropic voxels; run duration 7.6 min, TR 2.5 sec, SPM preprocessing). The corresponding brain images are below the jump.



So, clearly visible motion (which at least sometimes is linked to respiration) in the voxel activity timecourses (such as here) is to be expected. Unfortunately, the correlation is not always (or even usually) uniform across the brain or grey matter, as below (just the correlation with x and y translation). It also looks like very little motion (under a mm) is needed to induce large correlations.
What to do? Well, adjust our mental models of how much correlation with movement is left in the activation timeseries after preprocessing: there's quite a bit. I'll be exploring further, particularly isolating the task windows (since I work with task, not resting state, datasets): how large are the correlations during tasks? I'm not at all sure that applying a global signal regression-type step would be beneficial, given the lack of homogeneity across the brain (though I know there are at least a few reports on using it with task data). Censoring high-movement trials (i.e., not including them) is likely sensible. Interestingly, I've found similar MVPA performance in multiple cases with temporal compression by averaging and by fitting a model (PEIs), which would not have been my guess looking at these correlation levels. Perhaps averaging across enough timepoints and trials balances out some of the residual motion effects? I am concerned, however, about respiration (and motion) effects remaining in the timecourses: it's clear that some people adjust their breathing to task timing, and we don't want to be interpreting a breath-holding effect as due to motivation.

Any other thoughts or experiences? Are you surprised by these correlation levels, or is it what you've already known?



Wednesday, February 22, 2017

a methodological tour de force: "The effect of spatial resolution on decoding accuracy in fMRI multivariate pattern analysis"

"The effect of spatial resolution on decoding accuracy in fMRI multivariate pattern analysis" by Gardumi et al. (full citation below) is an impressive methodological tour de force: comprehensive analyses clearly described (even their group-level permutation scheme!). Some of its themes are similar to those by Coutanche, Solomon, & Thompson-Schill (2016) which I posted about yesterday: understanding the spatial scale of fMRI information.

The approach in Gardumi et al. (2016) is different from that of Coutanche et al. (2016): they started with 7T images acquired with 1.1 mm isotropic voxels, then reconstructed the images at 2.2 and 3.3 mm effective resolution as well, by zero-padding the k-space images, as illustrated in their Figure 1, below.
A neat part of this approach is that the resulting images have the same voxel size, but lower effective resolution, making it possible to directly compare analyses including the same number of voxels (which is good, since MVPA performance generally interacts with the number of voxels). Changing the effective resolution this way also avoids the issues related to differences between acquiring 1.1 and 3.3 mm voxels (e.g., movement sensitivity): only a single scanning session was used for each person.
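
My rough understanding of the trick, as a toy R sketch (a random matrix standing in for one slice): zeroing the outer (high spatial frequency) part of k-space keeps the matrix and voxel size the same but lowers the effective resolution.
 # toy sketch: reduce effective resolution by zeroing high spatial frequencies  
 img <- matrix(rnorm(96*96), nrow=96);   # stand-in for one 96x96 slice  
 N <- nrow(img);  K <- 16;   # keep spatial frequencies from -K to +K cycles  
 keep <- c(rep(TRUE, K+1), rep(FALSE, N-(2*K)-1), rep(TRUE, K));   # fft() ordering: 0, 1, ..., N/2, ..., -1  
 ksp <- fft(img) * outer(keep, keep);   # zero the high-frequency entries of "k-space"  
 img.low <- Re(fft(ksp, inverse=TRUE)) / length(img);   # same matrix size, lower effective resolution  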

Another interesting aspect is that they had two classification tasks: decoding the speaker or the spoken vowel (see paper for details; they used auditory stimuli and single-subject-defined auditory anatomical ROIs). One of their results summary figures is below (lots more detail in the paper!), showing group-level average accuracy for the two classifications at each effective resolution. As an aside, the x-axis is the number of voxels included, picked from univariate tests (n most active voxels from training set GLM): accuracy increased for both until around 1000 voxels were included, then leveled off (again, see the paper for details), which matches my general experience of plateauing performance (e.g.).

Anyway, Figure 6 (plus other tests that they describe) shows that smaller voxels generally did better for their vowel decoding classification, but not for speaker decoding. In the discussion Gardumi et al. (2016) tie this to previous literature findings "that informative voxels in the auditory cortex are widely distributed for vowel decoding, while more clustered for speaker decoding."

Yesterday I wrote that I'm not "convinced that it's safe to infer information spatial resolution from voxel resolution" ... am I convinced by Gardumi et al.? Yes, I think so. Below is my cartoon for how it could work. The blue squares are the brain region, the white circles informative parts, and the red squares voxels at two different sizes. Suppose that you need around a quarter of the voxel to be informative for its activity to be biased (and so contribute to a classification): this is much easier to obtain with small voxels than large ones if the informative parts are widely distributed (left), but about as easy to obtain with both small and large voxels if the informative parts are clustered (right).

So, I now am thinking that it can sometimes be valid to make inferences about the spatial distribution of information from comparisons across voxel resolutions. The way in which the different voxel resolutions are obtained strikes me as very important: I have a lot more reservations about inferences when the resolutions are generated by different acquisition sequences than by k-space zeroing. And perhaps some of my change of heart is due to the different mental models I have of "widely distributed" or "clustered" information as opposed to "coarse" or "fine-grained" spatial resolution. Both of my cartoons above have 10 informative bits (circles): would you describe the one on left as fine-grained and the one on the right as coarse-grained?


Gardumi A, Ivanov D, Hausfeld L, Valente G, Formisano E, & Uludağ K (2016). The effect of spatial resolution on decoding accuracy in fMRI multivariate pattern analysis. NeuroImage, 132, 32-42. PMID: 26899782

Tuesday, February 21, 2017

interesting approach: "A meta-analysis of fMRI decoding: Quantifying influences on human visual population codes"

A (fairly) recent paper from Coutanche, Solomon, & Thompson-Schill, "A meta-analysis of fMRI decoding: Quantifying influences on human visual population codes" (full citation below), has an interesting approach to the effort to understand the spatial scale at which information is present in fMRI signals.  

Coutanche, Solomon, & Thompson-Schill 2016 describes a meta-analysis of visual MVPA studies, the details of which (and most findings) I won't get into here. But I do want to highlight their use of the different spatial resolutions (acquired voxel size) across studies to get at spatial resolution. In their words,
"Multi-voxel decoding should be optimized (all else being equal) when the voxel size of acquired data matches the spatial resolution (i.e., granularity) of a region's information-containing patterns. We hypothesized that if V1 holds a more fine-grained map of information than later visual regions, employing larger voxels should not benefit decoding in V1, but may benefit decoding in post-V1 regions (through greater signal-to-noise at the scale of these patterns). .... Naturally, at a certain point, increasing the voxel size is expected to impair performance for any region."
They found that,
"The results of our regression analyses supported this: using larger voxels improved decoding in V2 (B=0.01, p=0.049), unlike V1 (B=0.002, p=0.451). ... These findings are consistent with later visual regions holding coarser multi-voxel codes, while V1 relies on fine-grained patterns."
The "all else being equal" does a lot of work, since there are major interactions between acquisition parameters and the signal-to-noise in the resulting functional images (I'm far from convinced that using voxels around 2 mm isotropic or smaller is a good idea for general task fMRI, but that's another topic!). But if I take as a starting assumption that we have equally good signal-to-noise across a sensible range of voxel sizes, do I accept that decoding should then be optimized "when the voxel size of acquired data matches the spatial resolution (i.e., granularity) of a region's information-containing patterns"?

The idea that, "at a certain point, increasing the voxel size is expected to impair performance for any region", strikes me as plausible: if the voxels are large enough to encompass the entire region, only the average activity of the region as a whole can be used in the analysis, losing any information contained in within-region activation patterns. However, no brain region exists in a vacuum - they are surrounded by other brain structures - and fMRI voxels don't have sharply-defined edges, so in practice, too-large voxels will have signal from adjacent regions, and the combination of regions might have quite a lot of information.

Matching the voxel size to the spatial resolution might indeed optimize decoding if the brain was a fixed grid (so the voxels could be aligned to coincide perfectly with the grid), but I'm not convinced that it's a useful aim for actual fMRI datasets: even if both the voxels and the spatial resolution were 1 mm isotropic, the chance that the voxels would align with the brain grid seems vanishingly small. Setting the voxel size to something like 2 mm seems better, with the aim of having each voxel contain at least one of the 1 mm information units (in other words, setting the voxels larger than the true spatial resolution).

Overall, I accept the idea that voxel size could be used as a marker of spatial resolution in the abstract: a fixed (unmoving, not surrounded by other regions) region and equally good signal-to-noise across the range of voxel sizes. In actual fMRI datasets, I'm not as convinced that it's safe to infer information spatial resolution from voxel resolution, but it is an intriguing idea.

UPDATE 22 Feb 2017: see musings on Gardumi et al. (2016).

Coutanche, M., Solomon, S., & Thompson-Schill, S. (2016). A meta-analysis of fMRI decoding: Quantifying influences on human visual population codes. Neuropsychologia, 82, 134-141. DOI: 10.1016/j.neuropsychologia.2016.01.018