
scaling head size of user supplied brain structures in AtlasViewer  


Abigail Fiske
Posts: 11
(@afiske)
Active Member
Joined: 4 months ago

@dboas @jdubb - I have shared a folder with you both on OneDrive that contains the groupResults.mat file causing the problem, as well as some screenshots of the error message. Please let me know if you have trouble accessing this (do check your spam folder for the invite). If, as you suspect, you are unable to get to the bottom of this problem without reproducing it, do send me an email at abigail.fiske@psy.ox.ac.uk and we can arrange sharing the data files.

Thanks so much!

Abigail Fiske
Posts: 11
(@afiske)
Active Member
Joined: 4 months ago

@dboas @jdubb Have you been able to access the groupResults.mat file I shared with you on OneDrive?

David Boas
Posts: 42
Topic starter
(@dboas)
Eminent Member
Joined: 4 months ago

@afiske, I just looked at the groupResults.mat file. It is only 254 bytes... that is far too small for a group with even a single run, and you have a group with 60 subjects! Also, I get the same error saying the file is corrupt when I try to load it.

Thanks for sharing the warning message that implied that the groupResults.mat file was saved. I still need to figure out where the warning is coming from. But I won't be able to figure that out unless I can reproduce it myself. Generally that means we need your whole directory.

I wonder if you have enough hard disk space to save the groupResults.mat file. Can you check how much hard disk space you have left?

Do you get the same error when you try to analyze only 10 or 20 of the 60 subjects?
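
If it helps, here is a rough sketch of how you could check both of these from the MATLAB command window (the groupDir path below is just a placeholder for wherever your group folder lives):

% Path to the group folder -- placeholder, adjust to your setup
groupDir = 'C:\myStudy';

% 1) Check the size of the saved group results file
d = dir(fullfile(groupDir, 'groupResults.mat'));
fprintf('groupResults.mat is %d bytes\n', d.bytes);

% 2) Check free space on the drive holding the data (via Java)
f = java.io.File(groupDir);
fprintf('Free disk space: %.1f GB\n', double(f.getFreeSpace)/1e9);

% 3) Try loading the file -- a corrupt file will error out here
try
    g = load(fullfile(groupDir, 'groupResults.mat'));
    disp(fieldnames(g));
catch err
    fprintf('Load failed: %s\n', err.message);
end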

Abigail Fiske
(@afiske)
Joined: 4 months ago

Active Member
Posts: 11

@dboas Ah, good spot. Yes, the groupResults.mat file is very small compared to the file generated in Homer2 for this dataset, which was 161,393 KB. When I read your comment about hard disk space, I did think that could be the solution, as my disk space was running low. However, I copied the data folder over to my external hard drive (with 1 TB of storage) and re-ran the proc stream in Homer3, and it generated the same error. I have also replicated the error when running at the subject level only (e.g. a single participant).

I did wonder whether the error might be related to the fact that our data have variable block lengths. The script we use to convert from .nts to .nirs files takes event markers from the output of the software we use to run the task. These appeared to be imported fine when converting to .snirf, and I manually removed invalid events using toggle on/off. I know that Homer3 cannot yet handle varying block lengths, but I wouldn't have thought this would affect the block average or produce this error. I just wanted to mention it in case it could be the cause, or in case something else in the proc stream is responsible.

If the next step is to share the data files with you or members of your team, I would prefer to discuss this over email so it can be done securely. My email address is abigail.fiske@psy.ox.ac.uk

Many Thanks!

David Boas
(@dboas)
Joined: 4 months ago

Eminent Member
Posts: 42

@afiske, the thought came up that this might be a RAM issue. You can work around it by making a change in the file AppSettings.cfg in the Homer root directory.

If you open this file in a text editor, it will look something like what I pasted below.

You should try changing 'memory' below to 'files' under 'Data Storage Scheme'.

Let me know if that works.

 

% Processing Stream Config File
processOpt_default.cfg

% Regression Test Active
false

% Include Archived User Functions
No

% Default Processing Stream Style
SNIRF

% Logging
On

% Last Checked For Update
18-Nov-2020 14:37:20

% Check For Updates
on

% Data Storage Scheme
memory

% Auto Save Acquisition Files
Yes

% END
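
For clarity, after the edit the 'Data Storage Scheme' entry in AppSettings.cfg should read:

% Data Storage Scheme
files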
