BrainVoyager Q&A
Scaling predictor / confound values (Armin - Jun 2017)
ERROR: a or b too big, or MAXIT too small in betacf
Manually adjusting threshold in BV 20.6
Beta values comparison between single and multi subject analysis
Scripting BV: preprocessing boolean function FMR.HasSliceTimeTable
Q: According to Pernet (2014), it is recommended to scale the SDM values between 0 and 1 after the convolution with the HRF. While trying to verify that the analyses I have run so far are correct, I came across the following question:
1) Are the predictors generated by BrainVoyager scaled between 0 and 1 by default? If so, why is there a checkbox "Scale predictor [0-1]" in the Predictor Function tab of the Single Study GLM Options, which appears to scale the predictors between -1 and 1?
Regarding the scaling of predictors and its influence on the GLM analysis and the interpretation of the results, there are several points one can discuss.
When checking the resulting statistical parameters (t- and F-values), however, you should notice that the scaling of the predictors has no influence: rescaling changes not only the size of the beta values but also the size of the corresponding standard errors, and by the same factor. So if you increased the maximal amplitude of your predictor from 1 to 10, the beta value would shrink by a factor of 10, but since its standard error shrinks by the same factor, you would still obtain the same t-value.
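The following minimal sketch (plain Matlab on simulated data, not BrainVoyager output) illustrates the point: the beta changes with the predictor amplitude, the t-value does not.

rng(1);                                            % reproducible simulation
n    = 200;
pred = conv(double(rand(n,1) > 0.9), ones(10,1));  % crude boxcar "design"
pred = pred(1:n) / max(pred);                      % scale to [0, 1]
y    = 3 * pred + randn(n, 1);                     % simulated voxel time course

for s = [1, 10]                                    % predictor amplitude 1 vs. 10
    X    = [pred * s, ones(n, 1)];                 % predictor plus constant term
    b    = X \ y;                                  % least-squares betas
    res  = y - X * b;
    df   = n - size(X, 2);
    covb = (res' * res / df) * inv(X' * X);        % parameter covariance
    t    = b(1) / sqrt(covb(1, 1));                % t-value of the main predictor
    fprintf('amplitude %2d: beta = %7.4f, t = %7.4f\n', s, b(1), t);
end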
Q: Regarding the confounds, should these also be scaled? For instance, I am including the 6 z-normalised, detrended motion parameters in the SDM as confounds, whose values can range from about -4 to 4. Should these be scaled to the range -1 to 1?
There are also several considerations regarding the treatment of the confound predictors and their scaling. When using confounds such as the average time course of one or more regions of interest, it is best to normalise the corresponding time courses (using the GLM options) to avoid problems when running the GLM analysis. In contrast to a simple correlation analysis, the GLM will usually not properly tolerate arbitrary amplitude variation within the model predictors (main or confound). Also when using motion predictors, I would usually advise running a z-transformation before entering them as confounds into the GLM model.
Using this normalisation will not only bring all the predictors to the same scale (making them more comparable) and help identify "outliers", but will also help in specific cases (e.g. when one or more of the motion predictors are close to "flatlining").
Using the detrended and z-transformed motion parameters should not create a problem for the interpretation of the beta values of your main predictors, and applying the motion confounds in this specific fashion is - according to what I have read so far and learned in internal communication - the advised way to use this information.
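Outside BrainVoyager (whose GLM options can perform the normalisation for you), the same detrending and z-transformation could be applied in Matlab as in the following sketch, assuming 'motion' is an n-by-6 matrix holding the six motion parameters as columns (the element-wise operations rely on implicit expansion, available since R2016b):

motion_dt = detrend(motion);                                  % remove linear trend per column
motion_z  = (motion_dt - mean(motion_dt)) ./ std(motion_dt);  % z-transform per column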
Q: While scripting a single-study GLM for a VOI, the results contain NaNs and the message "ERROR: a or b too big, or MAXIT too small in betacf"
The not-a-number (NaN) values and the "MAXIT too small in betacf" messages displayed in the BrainVoyager Log tab when running the VOI-GLM via scripting seem to have been caused by the absence of a constant predictor. After I added a constant predictor to the design matrix file (*.sdm), the VOI-GLM results obtained via scripting were the same as those obtained via the graphical user interface (GUI). Presumably, when running a VOI-GLM via the GUI, the constant predictor is added automatically.
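As a sketch of the fix in Matlab terms (assuming the convolved predictors are available as columns of a numeric matrix X, rather than as a parsed *.sdm file):

X = [X, ones(size(X, 1), 1)];   % append a constant predictor (intercept)
b = X \ y;                      % least-squares betas; b(end) is the constant term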
Q: Increasing the threshold to a very high value and then decreasing it again produces a different coloring of the map
I have checked the behaviour of BrainVoyager 20.6 when manually adjusting the statistical threshold as you described in your email (using the corresponding icon).
What basically happens is that at some point the automatically assigned maximal value of the t-map (e.g. "8") no longer works and the upper limit of the map has to be increased (it has to be larger than the new minimal value, i.e. the threshold). Depending on the amount of adjustment, you will end up with a more or less increased maximal t-value for the map. When decreasing the threshold again using the icons, the maximal value of the map is not decreased again.
This visually leads to a different color coding of the same voxels shown before. So while the values per voxel do not change, their color coding does change (depending on the depicted range from "min" to "max").
This is indeed a bit unexpected and could potentially lead to improper interpretations of the result map.
The same voxels that looked very significant before may look considerably less interesting after this adjustment, which can harm the communicative value of a result map.
I will add this issue to our internal bug database.
You can of course easily adapt the min and max values of your statistical map via the Options of the Overlay Volume Maps dialog.
Q: Assume I have 2 participants with one functional run each. I perform an FFX-GLM with these 2 files (%-normalisation, correction for serial correlations with AR(2), separate subject predictors). I have subject-specific ROIs (previously defined with a functional localiser), so I select "Use subject's VOIs for time course access" in the VOI Analysis Options menu.
Running the VOI-GLM returns a beta value for each condition of each subject.
When I compare these beta values with the ones obtained from a Single-Subject analysis, they do not match exactly.
There are indeed small differences in the beta and t-values when comparing the multi-run and the single-run analysis.
It is important to note that these differences are quantitative, not qualitative: there are no cases where a non-significant result becomes significant in the alternative analysis of the same data (at least as far as I can see).
I checked the data, model and residual plots created during the VOI-GLM analysis in both cases, and it seems there are small differences in the model parameters (betas) between the two cases. I have attached some screenshots for you. One shows a visual overlay of the model time courses (in red and green); you can see that they are very close, but not identical. I have imported the data into Excel to show the detailed differences.
So although the same data are used in both cases (also shown within the Excel file), slightly different modelling results seem to occur, which then leads to small differences in the calculated t-values.
I have also tested whether this is somehow related to the use of subject-specific VOI access, but the same difference can be found when using just one of the VOIs for the GLM.
Problem:
a) There seems to be a mismatch between the presence of a tag in the DICOM header from which BV can read the slice times (the field "dcminfo.Private_0019_1029") AND the ability of BV to read this tag and use a slice time table via the function FMR.HasSliceTimeTable.
In fact, although my data has the tag (which I confirmed by accessing the header of the DICOM file in Matlab), the FMR.HasSliceTimeTable function in the Matlab script does not pick it up and returns 0 for this check.
(addressed in questions Q1, Q2a and Q3 below)
b) Moreover, distortion correction seems to affect this process, although only in some runs (it does not appear to be systematic):
for instance, for XX_run2.fmr, FMR.HasSliceTimeTable = 1, whereas for the undistorted version of run 2, XX_run2_undistorted.fmr (distortion correction performed with ANATABACUS versionXXX), HasSliceTimeTable = 0.
(addressed in questions Q2b and Q4 below; a minimal scripted check is sketched after this problem description)
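For reference, a check like the one in b) could be scripted from Matlab roughly as follows. This is a hedged sketch: the COM ProgID is an assumption and may differ per BrainVoyager version (consult the scripting documentation), while FMR.HasSliceTimeTable is the property discussed above.

bv   = actxserver('BrainVoyager.BrainVoyagerScriptAccess.1');  % assumed ProgID
fmr  = bv.OpenDocument('XX_run2.fmr');
fprintf('original:    %d\n', fmr.HasSliceTimeTable);           % reported: 1
fmrU = bv.OpenDocument('XX_run2_undistorted.fmr');
fprintf('undistorted: %d\n', fmrU.HasSliceTimeTable);          % reported: 0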
Q&A (Q: Inês Almeida/João Duarte; A: Hester Breman from BrainVoyager Support):
Q1: Does FMR.HasSliceTimeTable use the same field (dcminfo.Private_0019_1029) to answer "true" vs. "false"?
If not, which field from the DICOM header does it use?
A1: I am not sure which DICOM header tag BrainVoyager uses, but it is the one that contains the Siemens CSA header. Please see below for the information from the BrainVoyager User's Guide. When I run the attached 'writeDcmInfo_allTypes.m' Matlab script on some multiband Siemens data, I get the tag 'Private_0029_10xx_Creator SIEMENS CSA HEADER', and the following Private_0029_* tags seem to contain more information.
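A quick way to list these private tags yourself is sketched below, using Matlab's dicominfo (Image Processing Toolbox) and a hypothetical file name:

info = dicominfo('XX_run2_001.dcm');        % hypothetical DICOM file name
fn   = fieldnames(info);
disp(fn(startsWith(fn, 'Private_0029')))    % candidate CSA header tags
if isfield(info, 'Private_0019_1029')       % per-slice acquisition times, if present
    disp(info.Private_0019_1029)
end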
A website about the CSA header and a PDF about slice timing in Siemens data can be found via the following links:
http://nipy.org/nibabel/dicom/siemens_csa.html
http://www.healthcare.siemens.com/siemens_hwem-hwem_ssxa_websites-context-root/wcm/idc/siemens_hwem-hwem_ssxa_websites-context-root/wcm/idc/groups/public/@global/@imaging/@mri/documents/download/mdaz/nzmy/~edisp/mri_60_graessner-01646277.pdf
Q2: What can explain this mismatch?
Q2a: Is it possible that the FMR file has the slice time table but FMR.HasSliceTimeTable still returns 0?
A2a: I have tried to replicate this for you, but the dataset I tried it on worked fine. I would need to try it with your data. Would it be possible to make an (anonymised) dataset available to me?
Q2b: Is it possible that the distortion correction erases the slice time table from the FMR file info? This would explain the distorted (FMR.HasSliceTimeTable = 1) vs. undistorted (FMR.HasSliceTimeTable = 0) version of the same run.
A2b: Not sure; I have run anatabacus on this sample dataset and afterwards BrainVoyager still replied that the data had a slice time table (and indeed, it was still in the FMR file). Internally, the FMR data (this is actually the STC content) is held in objects called 'fmrfile' and 'datahandler'. When it needs to be saved (after each step), it is saved to the FMR file that is open at that moment in BrainVoyager, using the BrainVoyager plugin access function qxSaveFMRAndSTC(). Just before saving, the header of that open file is requested via qxGetHeaderOfCurrentFMR(). However, the header is not changed; only a new name 'fmrfilename' is provided to the function qxSaveFMRAndSTC():
char fmrfilename[301];           // buffer for the output file name
strcpy(fmrfilename, newname);    // 'newname' holds the name of the undistorted FMR
int succ = qxSaveFMRAndSTC(fmrfilename, "undistorted");  // save the open FMR and its STC data ("undistorted" presumably becomes the new STC prefix)
Could it perhaps have to do with the version of BrainVoyager or anatabacus? I used anatabacus 1.1 and BrainVoyager 20.6.
Q3: Can you provide the inner code of the FMR.HasSliceTimeTable function so that we can check what it actually does?
A3: Sorry, I don't have that code. If this is very important to you, I could ask our chief software developer, but I am not sure whether the code can be made available. Alternatively, I could file a bug report once we are sure that something is not correct.
Q4: EPI distortion correction and the order of the preprocessing steps (e.g. relative to slice scan time correction)
A4: Concerning the slice scan time correction, there have been a lot of questions about the order of the preprocessing steps, and apparently the experts have no preference for when during preprocessing to perform EPI distortion correction (except that it should be performed before any spatial smoothing). As a workaround (because you reported losing the slice time table), it would be possible to run the distortion correction after slice scan time correction, I suppose.
User Guide > Basic (f)MRI Data Analysis > Preprocessing > Slice Scan Time Correction
«Since setting (multiband) slice timing manually might become challenging, BrainVoyager (since version 2.8.2) attempts to set scanning order automatically based on detailed slice-specific timing data in case it is available in the header of the original image files. If available, slice timing data is extracted from the first volume (and eventually second volume for cross-checking) and stored in created .FMR project files. This data is referred to as a slice time table because it indicates for each slice when it has been recorded relative to acquisition onset of the respective volume. Since it is stored in the FMR file, the slice time table is then available when preprocessing the functional data. At present, slice timing data is extracted from SIEMENS mosaic DICOM files (from the so-called CSA header). If this data is available, the Verified [slice time table] option appears in the FMR Data Preprocessing dialog (see below). This option is then turned on as default and overrules the usual slice scanning order settings that are disabled (greyed out, see below); while not recommended, it is possible to turn this option off and to use conventional slice order settings.»