multiple comparisons
jcrofts
Posted on 11/25/09 11:58:10
Number of posts: 29
j crofts
jcrofts posts:

Hi, when using the results of the bootstrap to analyse which brain regions are significant, should I be correcting for multiple comparisons? Cheers, Jonathan

Replies:

Untitled Post
as22kk
Posted on 11/25/09 12:08:14
Number of posts: 60
as22kk replies:

No, there is no need for multiple-comparison correction, because all regions are identified in a single analytic step.
/Alireza


multiple comparisons again
jcrofts
Posted on 11/25/09 12:34:19
Number of posts: 29
j crofts
jcrofts replies:

Hi, statistics is not particularly my strong point. Could you explain what you mean by "regions were identified by a single analytic step"? Cheers, Jonathan


Untitled Post
as22kk
Posted on 11/26/09 04:12:17
Number of posts: 60
as22kk replies:

Jonathan!

When you conduct a univariate technique like SPM, the statistical assessment is applied to each voxel independently (voxel-wise assessment), which is why it is crucial to apply a correction such as FWE for multiple comparisons. The story is completely different for multivariate approaches like PLS. PLS identifies the contrasts that account for the most variance in the data, and all voxels across the whole brain contribute partially to each of those contrasts (a kind of cooperative interaction among brain regions); by setting a threshold on the bootstrap ratio you select the voxels that survive it. This means that all the voxels contributing to a pattern are computed in a single analytic step, which does not require multiple-comparison correction.
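To make the thresholding step concrete, here is a minimal sketch in Python. The saliences, standard errors, and the |BSR| >= 3 cutoff are all toy assumptions for illustration, not output from the PLS toolbox:

```python
import numpy as np

# Hypothetical voxel saliences (weights on one latent variable) and their
# bootstrap standard errors -- toy numbers for illustration only.
saliences = np.array([0.8, 0.1, -0.9, 0.05, 0.6])
bootstrap_se = np.array([0.2, 0.3, 0.25, 0.2, 0.1])

# Bootstrap ratio (BSR): salience divided by its bootstrap standard error.
bsr = saliences / bootstrap_se

# An assumed reliability threshold of |BSR| >= 3, roughly comparable
# to a z-score of 3; only voxels exceeding it are kept.
reliable = np.abs(bsr) >= 3.0
print(bsr)       # -> roughly [4.0, 0.33, -3.6, 0.25, 6.0]
print(reliable)  # -> [ True False  True False  True]
```

All voxels enter the same single decomposition; the threshold only selects which of the resulting ratios you choose to display.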

Hope this helps!
/Alireza


Untitled Post
rmcintosh
Posted on 12/01/09 06:23:43
Number of posts: 394
rmcintosh replies:

Hi guys

Thanks, Ali Reza, for your postings answering Jonathan's question. It is correct that there is no need to correct for multiple comparisons for the bootstrap ratios. Let me clarify further:

1) The correction is necessary when you are testing the null hypothesis, which is what we are doing with permutation tests. Here we are asking whether the overall pattern in a given latent variable is significantly different from "randomness". We assign a p-value to tell us how often we find a random pattern as strong as our LV. Because the test is done on the LV as a whole, there is no need to correct for multiple comparisons.
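A minimal sketch of this permutation logic, under assumptions: toy data with two conditions, the LV strength taken as the first singular value of the mean-centred condition means (a simplification of task PLS), and 500 permutations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 40 scans x 10 voxels, two conditions of 20 scans each.
# The first 5 voxels carry a genuine condition effect.
labels = np.repeat([0, 1], 20)
X = rng.normal(size=(40, 10))
X[labels == 1, :5] += 1.0

def lv_strength(X, labels):
    # First singular value of the mean-centred condition means -- a
    # simplified stand-in for the LV strength PLS would report.
    means = np.stack([X[labels == c].mean(axis=0) for c in (0, 1)])
    means -= means.mean(axis=0)
    return np.linalg.svd(means, compute_uv=False)[0]

observed = lv_strength(X, labels)

# Permutation test: shuffle the condition labels, recompute the LV
# strength, and count how often chance matches or beats the observed LV.
n_perm = 500
count = sum(lv_strength(X, rng.permutation(labels)) >= observed
            for _ in range(n_perm))
p_value = (count + 1) / (n_perm + 1)
print(p_value)  # very small here, since the simulated effect is real
```

Note the single p-value: it belongs to the whole LV, not to any individual voxel, which is why no voxel-wise correction enters at this stage.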

2) The bootstrap ratio should be considered a proxy for a confidence interval rather than a null-hypothesis test. Here you are interested in whether a given voxel reliably contributes to a given LV. While we can approximate a z-score from the bootstrap ratio, it's better to think of it as a confidence interval.
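The confidence-interval view can be sketched as follows. This is a hypothetical simplification: the per-voxel mean stands in for a salience (real PLS bootstraps the saliences from the SVD), and the resampling unit is the scan:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 30 scans x 4 "voxels"; the per-voxel mean is used as a
# stand-in for a salience, purely for illustration.
X = rng.normal(loc=[1.5, 0.0, -1.2, 0.1], scale=1.0, size=(30, 4))

# Bootstrap: resample scans with replacement, recompute the statistic.
n_boot = 1000
boot_stats = np.empty((n_boot, 4))
for b in range(n_boot):
    idx = rng.integers(0, len(X), size=len(X))
    boot_stats[b] = X[idx].mean(axis=0)

# Bootstrap ratio per voxel: statistic over its bootstrap standard error.
se = boot_stats.std(axis=0, ddof=1)
bsr = X.mean(axis=0) / se

# Equivalently, a 95% interval from the bootstrap percentiles -- the
# confidence-interval reading Randy describes.
ci = np.percentile(boot_stats, [2.5, 97.5], axis=0)
print(np.round(bsr, 1))
print(np.round(ci, 2))
```

A voxel whose bootstrap interval excludes zero (equivalently, a large |BSR|) contributes reliably to the LV; no null hypothesis is tested per voxel, so nothing needs correcting.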

One way to think about permutation and bootstrapping is that we first ask whether a pattern is significantly different from "noise", and then ask how reliable that pattern is. These are complementary, but not redundant, questions.

Hope this helps

Randy


