It would be nice to be able to convert your formats to the native storage options from e.g. CUDA, if only to compare performance of the custom kernels. Here's a tentative translation map:
| DeviceSparseMatrixCOO | DeviceSparseMatrixCSC | DeviceSparseMatrixCSR | DeviceSparseVector |
|---|---|---|---|
| CuSparseMatrixCOO | CuSparseMatrixCSC | CuSparseMatrixCSR | CuSparseVector |
| ROCSparseMatrixCOO | ROCSparseMatrixCSC | ROCSparseMatrixCSR | ROCSparseVector |
| oneSparseMatrixCOO | oneSparseMatrixCSC | oneSparseMatrixCSR | |

In the header row, the links point to the Julia files where the formats are defined. In the table, the links point towards official backend documentation when it exists.
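For instance, with an NVIDIA GPU such a comparison might look like the sketch below. `CuSparseMatrixCSR` is CUDA.jl's CUSPARSE-backed type; the `DeviceSparseMatrixCSR(A)` constructor call is an assumption about this package's API, used purely for illustration.

```julia
using SparseArrays
using CUDA, CUDA.CUSPARSE

# Host-side sparse matrix to use as a common starting point.
A = sprand(Float32, 1_000, 1_000, 0.01)
x = CUDA.rand(Float32, 1_000)

# Native CUDA.jl storage (CUSPARSE-backed), as the baseline.
A_cusparse = CuSparseMatrixCSR(A)

# Hypothetical: the custom format from this package; the exact
# constructor name and signature are assumptions.
A_device = DeviceSparseMatrixCSR(A)

# Time sparse matrix-vector products with each storage type.
CUDA.@time A_cusparse * x
CUDA.@time A_device * x
```

A benchmarking tool like BenchmarkTools.jl would give more reliable timings than a single `@time` call, but the idea is the same: round-trip through the host format and run the same operation on both storage types.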