This document describes how to build the program from source.
- Ubuntu, macOS, or Windows
- GCC or Clang, with C++17 support or higher
- CMake 3.15 or higher
- Optional: Eigen or OpenBLAS library
- Optional: CUDA 10.x - 12.x library
- Optional: cuDNN 7.x or 8.x library
- Optional: TensorRT 8.5 or higher library
- Optional: zlib library
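On Ubuntu, the core dependencies can usually be installed through apt. The package names below are common Ubuntu defaults rather than part of this project's instructions, and may differ on other distributions; afterwards, verify that `cmake --version` reports 3.15 or higher.

```shell
$ sudo apt update
$ sudo apt install -y build-essential cmake git      # compiler toolchain, CMake, git
$ sudo apt install -y libopenblas-dev zlib1g-dev     # optional: OpenBLAS and zlib
```

Eigen needs no system package, since it is included in the third_party directory.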
$ git clone https://github.com/CGLemon/Sayuri
$ cd Sayuri
$ git submodule update --init --recursive
$ mkdir build && cd build
$ cmake ..
$ make -j
We use CMake for compilation on Linux and macOS, with support for the following options:
You can accelerate the network forwarding pipeline using your CPU. This requires either OpenBLAS or Eigen; both are significantly faster than the built-in BLAS implementation. The Eigen library is included in the third_party directory.
To use OpenBLAS:
$ cmake .. -DBLAS_BACKEND=OPENBLAS
To use Eigen:
$ cmake .. -DBLAS_BACKEND=EIGEN
To accelerate the neural network forwarding pipeline with a GPU, you can choose the CUDA, cuDNN, or TensorRT backend, provided the corresponding libraries are installed (see the requirements above). For general use, we recommend the CUDA backend: it requires only CUDA and is the simplest to set up. For the best possible performance, use the TensorRT backend.
To use CUDA:
$ cmake .. -DBLAS_BACKEND=CUDA
To use cuDNN:
$ cmake .. -DBLAS_BACKEND=CUDNN
To use TensorRT:
$ cmake .. -DBLAS_BACKEND=TENSORRT
You can compile a version that supports larger board sizes (currently up to 25x25). Set this option to 0 to disable the feature.
$ cmake .. -DSPECIFIC_BOARD_SIZE=25
If your CUDA version does not support FP16, you can disable it at compile time.
$ cmake .. -DDISABLE_FP16=1
To reduce the size of training data files generated during self-play, you can enable zlib compression.
$ cmake .. -DUSE_ZLIB=1
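The configuration options above can be combined in a single cmake invocation. For example, a GPU build with zlib compression and 25x25 board support might be configured as follows; this merely combines the flags documented above and has not been verified as a supported combination:

```shell
$ cmake .. -DBLAS_BACKEND=CUDA -DUSE_ZLIB=1 -DSPECIFIC_BOARD_SIZE=25
$ make -j
```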
On Windows, we provide a build.bat script for compiling the executable; it supports both CPU and GPU versions. The CPU version defaults to Eigen as the backend, while the GPU version uses CUDA.
Before you begin, you must download and install Visual Studio 2022/2019 along with the necessary C++ components. Once installed, run the commands below from the x64 Native Tools Command Prompt for VS XXX or from PowerShell.
The CPU version requires the GCC compiler; MinGW provides one on Windows. To compile the CPU version, enter:
.\build.bat gcc
The GPU version requires the NVCC compiler. To compile the GPU version, enter:
.\build.bat nvcc