
Handling large variability in HRF amplitudes


Michael Zara
Posts: 12
Registered
Topic starter
(@mzara)
Active Member
Joined: 2 years ago

Hey everyone,

 

I'm curious to hear about your experiences and to get advice on how to handle large variability in HRF amplitudes.

We are currently using the GLM to estimate the HRF. At the subject level, some participants show very large HRF amplitudes compared to the rest of the cohort. Our goal is to analyze the data at the group level, but participants with very large HRFs/betas will bias that analysis. Some specific questions:

1) What are your thoughts on where this large variability comes from and why it may be happening? Does it stem from the data we collected, or does it arise from the processing stream? Our processing stream is pictured below.

2) What advice do you have for handling this variability, especially at the group level? Should we be transforming our data in some way to get an "apples-to-apples" comparison? (A rough sketch of the kind of transform I mean follows the figure below.)

 

[Image: processing stream diagram]
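To make question 2 concrete, here is a minimal sketch of the kind of rescaling I have in mind, assuming the subject-level betas have already been exported to NumPy arrays. The function name robust_scale_betas and the median-absolute-beta scaling are only illustrations, not a method we have settled on:

```python
import numpy as np

def robust_scale_betas(betas, eps=1e-12):
    """Rescale one subject's GLM beta estimates by a robust amplitude measure.

    betas : ndarray, shape (n_channels,) or (n_channels, n_conditions)
        Subject-level HRF amplitude (beta) estimates.
    Returns the betas divided by the median absolute beta, so each
    subject contributes to the group average on a comparable scale.
    """
    scale = np.median(np.abs(betas)) + eps  # eps guards against all-zero betas
    return betas / scale

# Illustration: three hypothetical subjects, one with much larger amplitudes.
subject_betas = [np.random.randn(16) * s for s in (1.0, 1.2, 25.0)]
scaled = np.stack([robust_scale_betas(b) for b in subject_betas])
group_mean = scaled.mean(axis=0)  # the outlier subject no longer dominates
```

An alternative to rescaling would be to carry each beta's standard error into a precision-weighted (or mixed-effects) group model rather than transforming the betas themselves, which is partly why I'm asking what people prefer.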
David Boas
Posts: 217
Registered
(@dboas)
Estimable Member
Joined: 2 years ago

Yes, better functions need to be written and incorporated into Homer3 for properly handling these variations. This is exactly why we developed Homer3 to support processing streams with their own functions at the session and group level. Does anyone have favorite approaches for handling this, and can you point to papers (or, better yet, code) that could be incorporated into Homer3?

 
