This repository contains benchmarks for the OptalCP constraint programming solver. Each benchmark includes data, solver implementations (TypeScript and Python), and reference results.
Visualizations of the results can be found at optalcp.com/docs/benchmarks.
| Benchmark | Description |
|---|---|
| Jobshop (JSSP) | Classic job shop scheduling |
| Jobshop with operators | Machines require operators to run |
| Jobshop with travel times | Travel time between machines |
| Blocking Jobshop | No intermediate buffers between machines |
| Flexible Jobshop | Operations can use alternative machines |
| Flexible Jobshop with workers | Flexible jobshop with worker assignment |
| Openshop | Operations can be processed in any order |
| Permutation Flowshop | Same job order on all machines |
| Non-permutation Flowshop | Job order can vary per machine |
| Distributed Flowshop | Flowshop across multiple factories |
| Benchmark | Description |
|---|---|
| RCPSP | Resource-constrained project scheduling |
| Multi-Mode RCPSP | Activities with alternative modes |
| RCPSP Max | Generalized precedence constraints |
| RCPSP CPR | RCPSP with consumption and production of resources |
| Benchmark | Description |
|---|---|
| TSP | Traveling salesman problem |
| Capacitated VRP | Vehicle routing with capacity constraints |
| VRP with Time Windows | Vehicle routing with delivery windows |
| Demo | Description |
|---|---|
| External solutions | Hybrid search with custom heuristics |
TypeScript:

```sh
git clone https://github.com/ScheduleOpt/optalcp-benchmarks.git
cd optalcp-benchmarks
npm install
npx tsc
```

Python (requires Python 3.11+ and uv):

```sh
git clone https://github.com/ScheduleOpt/optalcp-benchmarks.git
cd optalcp-benchmarks
uv sync
```

Both installations include the preview version of OptalCP, which can solve all benchmarks. The preview version reports objective values but not individual variable values. Academic and commercial editions with full functionality are available at optalcp.com. Academic licenses are free.
Each benchmark has a subdirectory in `benchmarks/`. For example, to solve jobshop instance `la17`:
TypeScript:

```sh
node benchmarks/jobshop/jobshop.mjs benchmarks/jobshop/data/la17.txt
```

Python:

```sh
uv run benchmarks/jobshop/jobshop.py benchmarks/jobshop/data/la17.txt
```

Common options:

```sh
--timeLimit 60   # Stop after 60 seconds
--nbWorkers 4    # Use 4 CPU threads
--preset Large   # Use preset for large problems
```

All benchmarks accept `--help` for the full list of options.
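As an illustration of the instance files the solvers read: jobshop instances such as `la17` are commonly distributed in the classic OR-Library format (first line gives the number of jobs and machines, then one line per job with `(machine, duration)` pairs). The sketch below is not the repository's own parser, just a minimal reader for that format, plus the elementary makespan lower bound (maximum of per-job and per-machine total work):

```python
def parse_jssp(text: str):
    """Parse a job shop instance in the common OR-Library format:
    first line `n_jobs n_machines`, then one line per job listing
    (machine, duration) pairs for its operations in order."""
    tokens = text.split()
    n_jobs, n_machines = int(tokens[0]), int(tokens[1])
    it = iter(tokens[2:])
    jobs = []
    for _ in range(n_jobs):
        # one operation per machine, in processing order
        jobs.append([(int(next(it)), int(next(it))) for _ in range(n_machines)])
    return n_jobs, n_machines, jobs

def trivial_lower_bound(jobs, n_machines):
    """A simple makespan lower bound: the maximum of total work
    per job and total work per machine."""
    job_loads = [sum(d for _, d in ops) for ops in jobs]
    machine_loads = [0] * n_machines
    for ops in jobs:
        for m, d in ops:
            machine_loads[m] += d
    return max(max(job_loads), max(machine_loads))
```

This bound is far weaker than the reference bounds collected from the literature; it only illustrates the data layout.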
Note: The TypeScript version supports additional features such as parallel benchmark runs, result collection, and model export. These features are coming to Python soon.
See USAGE.md for:
- Solver presets and parameters
- Running multiple instances
- Collecting results (CSV/JSON)
- Exporting models
- Comparing with CP Optimizer
```
benchmarks/<name>/     Benchmark implementations with data and results
├── README.md          Problem description and data sources
├── <name>.mts         TypeScript implementation
├── <name>.py          Python implementation
├── data/              Instance files
├── results/           Solver results
└── references/        Known bounds from literature

compare/               Result comparison tool (see compare/README.md)
solveCPOs/             CP Optimizer wrapper (see solveCPOs/README.md)
```
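Because the layout is uniform across benchmarks, instance files can be enumerated mechanically. A minimal sketch (the helper name `list_instances` is ours, not part of the repository):

```python
from pathlib import Path

def list_instances(repo_root: str, benchmark: str) -> list[Path]:
    """Return all instance files under benchmarks/<name>/data,
    following the repository layout described above."""
    data_dir = Path(repo_root) / "benchmarks" / benchmark / "data"
    return sorted(p for p in data_dir.iterdir() if p.is_file())
```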
This benchmark collection is open source under the MIT license. OptalCP itself is not open source.
Benchmark data in benchmarks/*/data comes from various sources listed in each benchmark's README. Most original sources do not specify a license; we provide the data as a mirror for research purposes. Reference bounds in benchmarks/*/references are collected from the literature.
Contributions are welcome:
- New benchmark suggestions
- Links to research papers with benchmarks
- Improved bounds or historical results
- Additional instances for existing benchmarks
Contact petr@vilim.eu, create a pull request, or report an issue.