Although the pipeline is intended to accept GWAS summary statistics in both .tsv and .csv formats (zipped or unzipped), .csv inputs are currently not being handled correctly.
The `read_gwas_file` function in the input validation step always reads input files with `sep = "\t"`.
When a .csv file is provided, the entire header line is therefore parsed as a single column, so `gwas_sub.columns` contains one column name consisting of all the column names joined by commas. Downstream validation then fails with: `Error: One of the columns defined in the input table are not present in file`.
Could the pipeline be updated to detect and parse .csv files correctly, so that columns are read and validated as expected?
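One possible fix, sketched below under the assumption that the pipeline reads files with pandas (suggested by `gwas_sub.columns`): choose the separator from the file extension, stripping any compression suffix first so that `data.csv.gz` is still recognized as comma-separated. The function name `read_gwas_file` comes from the issue; its actual signature in the pipeline may differ.

```python
import pandas as pd


def read_gwas_file(path: str) -> pd.DataFrame:
    """Read GWAS summary statistics, inferring the delimiter from the
    file extension (.csv -> comma, otherwise tab).

    Compressed inputs (.gz, .zip) are decompressed transparently by
    pandas; we only strip the suffix to inspect the true extension.
    """
    base = path
    for ext in (".gz", ".zip"):
        if base.endswith(ext):
            base = base[: -len(ext)]
    sep = "," if base.endswith(".csv") else "\t"
    return pd.read_csv(path, sep=sep)
```

With this change a comma-separated header such as `SNP,CHR,POS` is split into three columns instead of one, so the column-presence check in the validation step can succeed.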