This repository contains all the code and datasets related to the Visual Braille Expertise study, a multivariate fMRI experiment in which we tested expert visual Braille readers and naive controls to understand how alphabets are processed in the reading network and, in particular, how the VWFA organizes linguistic information.
Cerpelloni, F., Van Audenhaege, A., Matuszewski, J., Gau, R., Battal, C., Falagiarda, F., Op de Beeck, H., & Collignon, O. (2025)
"How Learning to Read Visual Braille Co-opts the Reading Brain" Journal of Cognitive Neuroscience, 1–43. https://doi.org/10.1162/jocn_a_02341
Filippo Cerpelloni
For any questions, you can contact me at: filippo [dot] cerpelloni [at] gmail [dot] com
.
├── code
│ ├── cfg
│ ├── containers
│ ├── lib
│ │ ├── CPP_BIDS
│ │ ├── bidsMReye
│ │ └── bidspm
│ │
│ ├── models
│ ├── mvpa
│ │ > scripts necessary to perform MVPA on the 4D maps extracted from stats
│ │
│ ├── ppi
│ │ > all the code to run psychophysiological interaction (PPI) analyses (not in the paper)
│ │
│ ├── preproc
│ │ > pre-process NIfTI files in inputs/raw; based on bidspm
│ │
│ ├── rois
│ │ > extract the regions of interest following the methods described in the manuscript
│ │
│ ├── src
│ ├── stats
│ │ > first-level analyses; based on bidspm
│ │
│ └── visualization
│ > plot results of MVPA and perform necessary statistical tests
│
├── inputs
│ └── raw (GIN repository)
│ > raw data in BIDS format for all participants
│
└── outputs
├── derivatives
│ ├── CoSMoMVPA (GIN repository)
│ │ > results of MVPA analyses
│ │
│ ├── bidsMReye (GIN repository)
│ │ > results of bidsMReye: estimation of eye movements for each participant and run
│ │
│ ├── bidspm-preproc (GIN repository)
│ │ > preprocessed data for each participant and run
│ │
│ ├── bidspm-stats (GIN repository)
│ │ > multiple first level GLMs for each participant
│ │ - localizer and MVPA experiments from two preprocessing pipelines
│ │ - localizer for the PPI analysis
│ │ - GLM with eye movements as regressor
│ │
│ ├── cpp_spm-rois (GIN repository)
│ │ > ROIs extracted for each participant
│ │
│ ├── figures
│ │ > plotting of results
│ │
│ ├── fmriprep (GIN repository)
│ │ > preprocessing of each participant using fmriprep
│ │
│ ├── results
│ │ > statistical tests
│ │
│ └── spm-PPI (GIN repository)
│ > results of the psychophysiological interaction analysis
│ │
├── error_logs
└── options
All the folders marked as GIN repository in the tree above are stored on GIN (G-Node) for privacy reasons. All resources are available upon request. We are working on publishing the raw dataset as a public repository, which will be available as soon as possible.
outputs/derivatives/results and outputs/derivatives/figures are publicly available.
This repository is made with DataLad. Using it is not mandatory, but it automates the retrieval of much of the data. Alternatively, you can download the code and the datasets individually. My personal recommendation is to try DataLad, with a lot of patience.
Check the guide on how to use it and the autogenerated instructions below.
- MATLAB (analyses were performed on version 2021b)
  - bidspm version 3.1.0 (forked at https://github.com/fcerpe/bidspm) and its dependencies (for more information: https://bidspm.readthedocs.io/)
  - SPM12 version 7771
  - Anatomy toolbox (https://github.com/inm7/jubrain-anatomy-toolbox)
  - CoSMoMVPA (https://www.cosmomvpa.org)
- Python (version 3.1)
  - bidsMReye (https://github.com/cpp-lln-lab/bidsMReye)
- R (version 4.3.1)
  - packages: readxl, tidyverse, reshape2, gridExtra, pracma, dplyr, data.table, ez, lsr, effsize
This is an overview of the analysis steps performed in the experiment. For information about the stimuli and their presentation, please check the dedicated repositories:
- stimuli creation: https://github.com/fcerpe/VisualBraille_backstage
- experimental testing: https://github.com/fcerpe/VBE_experiment
We performed preprocessing and first-level analyses through bidspm; please refer to its documentation for more information (https://bidspm.readthedocs.io/).
The following steps should (if the data are available in inputs/raw) replicate the full analysis pipeline, and should be performed in the indicated order:
- Preprocessing pipeline: `code/preproc/preproc_main.m`
  Outputs can be found in outputs/derivatives/bidspm-preproc
- First-level GLM: `code/stats/stats_main.m`
  Outputs can be found in outputs/derivatives/bidspm-stats
- ROI extraction: `code/rois/roi_main.m`
  Outputs can be found in outputs/derivatives/cpp_spm-rois
- MVPA analyses: `code/mvpa/mvpa_main.m`
  Outputs can be found in outputs/derivatives/CoSMoMVPA
- Basic plotting of all the results (paper figures are arrangements of multiple plots): `code/visualization/viz_main.R`
  Outputs can be found in outputs/derivatives/figures and outputs/derivatives/results
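The steps above can be sketched as a single shell driver. The script paths come from this README; the `matlab -batch` and `Rscript` invocations are assumptions about how MATLAB and R are launched on your system, so the commands are only printed here, not executed:

```shell
#!/usr/bin/env bash
# Sketch of the full pipeline order. The *_main entry points come from this
# repository; `matlab -batch` / `Rscript` are assumed launchers, so the
# commands are printed rather than executed.
steps=(
  "code/preproc/preproc_main.m"   # preprocessing   -> outputs/derivatives/bidspm-preproc
  "code/stats/stats_main.m"       # first-level GLM -> outputs/derivatives/bidspm-stats
  "code/rois/roi_main.m"          # ROI extraction  -> outputs/derivatives/cpp_spm-rois
  "code/mvpa/mvpa_main.m"         # MVPA            -> outputs/derivatives/CoSMoMVPA
)
for s in "${steps[@]}"; do
  echo "matlab -batch \"run('$s')\""
done
echo "Rscript code/visualization/viz_main.R"   # plotting -> figures and results
```

Each stage reads the outputs of the previous one, so the order matters.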
Have fun!
This repository is a DataLad dataset. It provides fine-grained data access down to the level of individual files, and allows for tracking future updates. In order to use this repository for data retrieval, DataLad is required. It is a free and open-source command-line tool, available for all major operating systems, that builds on Git and git-annex to allow sharing, synchronizing, and version-controlling collections of large files. You can find information on how to install DataLad at handbook.datalad.org/en/latest/intro/installation.html.
A DataLad dataset can be cloned by running
`datalad install <url>`
Once a dataset is cloned, it is a light-weight directory on your local machine. At this point, it contains only small metadata and information on the identity of the files in the dataset, but not actual content of the (sometimes large) data files.
After cloning a dataset, you can retrieve file contents by running
`datalad get <path/to/directory/or/file>`
This command will trigger a download of the files, directories, or subdatasets you have specified.
DataLad datasets can contain other datasets, so-called subdatasets. If you clone the top-level dataset, subdatasets do not yet contain metadata and information on the identity of files, but appear to be empty directories. In order to retrieve file availability metadata in subdatasets, run
`datalad get -n <path/to/subdataset>`
Afterwards, you can browse the retrieved metadata to find out about subdataset contents, and retrieve individual files with `datalad get`. If you use `datalad get <path/to/subdataset>`, all contents of the subdataset will be downloaded at once.
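For example, fetching the publicly available derivatives named earlier could look like the sketch below. The paths come from the repository tree above; whether each directory is tracked as a subdataset is an assumption, and the block is guarded so it only runs the commands inside a DataLad clone:

```shell
# Retrieve the publicly available derivatives (figures and results).
# Guarded so it is a no-op outside a DataLad clone of this repository.
if command -v datalad >/dev/null 2>&1 && [ -d .datalad ]; then
  datalad get -n outputs/derivatives/figures    # availability metadata first
  datalad get outputs/derivatives/figures \
              outputs/derivatives/results       # then the actual file contents
  status="retrieved"
else
  status="skipped: not inside a DataLad dataset"
fi
echo "$status"
```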
DataLad datasets can be updated. The command `datalad update` will fetch updates and store them on a different branch (by default remotes/origin/master). Running
`datalad update --merge`
will pull available updates and integrate them in one go.
DataLad datasets contain their history in the Git log. By running `git log` (or a tool that displays Git history) in the dataset or on specific files, you can find out what has been done to the dataset or to individual files, by whom, and when.
More information on DataLad and how to use it can be found in the DataLad Handbook at handbook.datalad.org. The chapter "DataLad datasets" can help you to familiarize yourself with the concept of a dataset.