The ACCRE cluster is now disabling even read-only access to the /dors filesystem, even though the new /data storage has not yet been installed.
Each run case consumes approximately 10 GB, so retaining 100 cases requires 1 TB.
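A minimal sketch of the retention arithmetic above, using the approximate figures from the text (the variable names are illustrative, not from any existing script):

```python
# Back-of-the-envelope retention budget (figures from the text above).
GB_PER_CASE = 10          # approximate footprint of one run case
retained_cases = 100      # retention target
total_tb = GB_PER_CASE * retained_cases / 1000  # using 1000 GB per TB
print(f"{total_tb:.1f} TB")  # -> 1.0 TB for 100 retained cases
```

Adjusting `retained_cases` gives the storage cost of other retention policies, e.g. 500 cases would need 5 TB.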
Singularity containers consume approximately 20 GB.
The ddG Cartesian repo archives 20 relaxed structures for each ddG calculation, plus scripts; the ddG Monomer repo is of similar magnitude.
Setting aside the 2 TB of GRCh38 genome loaded into SQL, the static storage needed for the application calculations is approximated here:
| Fileset | Size | File Count |
| --- | --- | --- |
| vepcache | 31G | 14K |
| pdb | 400G | 1.2M |
| gnomad | 1TB | 50 |
| interpro | 2.6G | 13 |
| sifts | 21G | 150K |
| cosmis | 242M | 50 |
| rate4site | xxxM | y |
| DiGePred | 100+M | ?? |
| alphafold/ | 16G | 71K |
| uniprot | 2.1G | 15 |
| swissmodel | 48G | 130K |
| clinvar | 32K | 1 |
| Singularity images | 20GB | 6 |
| ddG repo: Cart | 500G | 1M |
| ddG repo: Monomer | 500G | 1M |
| total | ~5TB+ | ~5M |
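Per-fileset Size and File Count figures like those above can be gathered with standard `du` and `find`. A runnable sketch, demonstrated on a throwaway directory rather than the real fileset paths (which are not specified here):

```shell
# Stand-in for a real fileset directory such as the vepcache or pdb trees.
fileset=$(mktemp -d)
dd if=/dev/zero of="$fileset/a.bin" bs=1M count=2 2>/dev/null
dd if=/dev/zero of="$fileset/b.bin" bs=1M count=3 2>/dev/null

size=$(du -sh "$fileset" | cut -f1)       # human-readable total size
count=$(find "$fileset" -type f | wc -l)  # file count

echo "$size $count"
rm -rf "$fileset"
```

Pointing `fileset` at each directory in the table would reproduce the two columns; exact sizes will vary slightly with filesystem block size.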