Project: A playable Go (Weiqi/Baduk) game implemented in Python.
Goals:
- Provide a clean, extensible Python implementation of the board game Go (configurable board sizes: 9x9, 13x13, 19x19).
- Render the board and game state with Pillow (PIL) for image-based UI and exportable PNGs.
- Ship a pluggable AI opponent interface — start with a simple baseline (rule-based / random / heuristic) and provide hooks for more advanced brains (MCTS, neural policy/value networks using PyTorch or TensorFlow).
- Keep code modular so contributors can replace the rendering, UI, or AI with minimal friction.
## Contents

- Features
- Tech stack
- Project structure
- Installation
- Quick start
- Gameplay and rules summary
- AI design overview
- Rendering with Pillow
- Development roadmap
- License
## Features

- Playable Go on configurable board sizes (9x9, 13x13, 19x19).
- CLI-driven game loop with image output after each move (PNG snapshots created via Pillow).
- Two-player human vs human (local), human vs AI, and AI vs AI modes.
- Simple, documented AI interface so you can plug in different strategies:
  - `RandomAgent` (baseline)
  - `HeuristicAgent` (capture heuristics / liberty awareness)
  - `MCTSAgent` (Monte Carlo Tree Search) — scaffold included
  - `NNAgent` (policy/value network) — scaffold + model save/load hooks
- Game rules implemented: capturing, liberties, ko rule (basic), pass, and score counting (territory and/or area — configurable).
- Unit tests for core game logic (captures, liberties, valid moves).
## Tech stack

- Python 3.10+ (recommended)
- Pillow (PIL) — for board rendering and image export
- NumPy — board arrays and numeric helpers
- pytest — for running tests
## Project structure

```text
go-python/
├── README.md
├── pyproject.toml / requirements.txt
├── src/
│   └── go/
│       ├── __init__.py
│       ├── board.py           # Board class, rules, move validation, capture logic
│       ├── game.py            # Game loop, turn management, scoring
│       ├── render.py          # Pillow-based rendering utilities
│       ├── agents/
│       │   ├── __init__.py
│       │   ├── base_agent.py
│       │   ├── random_agent.py
│       │   ├── heuristic_agent.py
│       │   ├── mcts_agent.py
│       │   └── nn_agent.py
│       └── utils.py
├── examples/
│   ├── play_human_vs_ai.py
│   └── render_demo.py
├── models/    # saved NN weights / checkpoints
├── cache/     # snapshot images, logs
├── tests/
│   ├── test_board.py
│   └── test_rules.py
└── docs/
    └── design_notes.md
```
## Installation

- Clone the repository:

  ```bash
  git clone https://github.com/yourname/go-python.git
  cd go-python
  ```

- Create and activate a virtual environment (recommended):

  ```bash
  python -m venv .venv
  source .venv/bin/activate   # macOS / Linux
  .venv\Scripts\activate      # Windows
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  # or using pyproject: pip install -e .[dev]
  ```

Minimum pip requirements example (put in `requirements.txt`):

```text
Pillow>=10.0
numpy>=1.26
pytest
# Optional for AI agents:
torch>=2.0  # or tensorflow>=2.12
```
## Quick start

Run a small demo that plays a human vs a random AI on a 9x9 board and writes PNG snapshots to `cache/`:

```bash
python examples/play_human_vs_ai.py --board-size 9 --output-dir cache
```

`play_human_vs_ai.py` should launch a CLI loop that prints the board coordinates and, after each move, produces `cache/step_0001.png`, `cache/step_0002.png`, etc.
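A minimal skeleton for that example script could look like the following; the `go` package internals are not shown here, so only the CLI flags and the snapshot naming scheme are sketched (all names are illustrative):

```python
import argparse
from pathlib import Path


def parse_args(argv=None):
    # CLI flags matching the quick-start invocation above.
    parser = argparse.ArgumentParser(description="Play Go against a baseline AI.")
    parser.add_argument("--board-size", type=int, default=9, choices=(9, 13, 19))
    parser.add_argument("--output-dir", type=Path, default=Path("cache"))
    return parser.parse_args(argv)


def snapshot_path(output_dir: Path, move_index: int) -> Path:
    # Zero-padded names keep snapshots sorted: step_0001.png, step_0002.png, ...
    return output_dir / f"step_{move_index:04d}.png"


if __name__ == "__main__":
    args = parse_args()
    print(snapshot_path(args.output_dir, 1))
```

The zero-padded index keeps snapshots lexicographically sorted, so a directory listing doubles as a move-by-move replay.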
## Gameplay and rules summary

This implementation focuses on the essential rules needed to play:
- Stones are placed on intersections.
- Groups of stones with no liberties are captured and removed.
- The Ko rule is enforced (basic single-position repetition prevention).
- Passing is allowed; two consecutive passes end the game.
- Scoring options: area scoring or territory scoring (configurable in `game.py`).
Refer to docs/design_notes.md for details about edge cases and scoring ties.
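For concreteness, the liberty rule above can be checked with a flood fill over a group; this is an illustrative sketch only (the project's `board.py` may structure it differently):

```python
from collections import deque

EMPTY, BLACK, WHITE = 0, 1, 2


def count_liberties(board, row, col):
    """Flood-fill the group containing (row, col) and count its liberties.

    `board` is a square list of lists; returns 0 for an empty starting point.
    A group with zero liberties is captured and removed.
    """
    color = board[row][col]
    if color == EMPTY:
        return 0
    size = len(board)
    group, liberties = {(row, col)}, set()
    queue = deque([(row, col)])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < size and 0 <= nc < size:
                if board[nr][nc] == EMPTY:
                    liberties.add((nr, nc))        # empty neighbour = liberty
                elif board[nr][nc] == color and (nr, nc) not in group:
                    group.add((nr, nc))            # same-colour stone joins the group
                    queue.append((nr, nc))
    return len(liberties)


# A lone black stone in the corner of an empty 3x3 board has two liberties.
board = [[EMPTY] * 3 for _ in range(3)]
board[0][0] = BLACK
print(count_liberties(board, 0, 0))  # 2
```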
## AI design overview

The repository includes an agents interface so alternative brains are easy to plug in:

```python
from typing import List, Optional, Tuple

import numpy as np


class BaseAgent:
    def __init__(self, color: int, board_size: int):
        self.color = color
        self.board_size = board_size

    def select_move(self, board: np.ndarray,
                    legal_moves: List[Tuple[int, int]]) -> Optional[Tuple[int, int]]:
        """Return (row, col) or None to pass."""
        raise NotImplementedError
```

- `RandomAgent` — picks uniformly from legal moves.
- `HeuristicAgent` — prioritizes captures, avoids self-atari, prefers larger liberties.
- `MCTSAgent` — Monte Carlo Tree Search with playouts; a good mid-term step before NN training.
- `NNAgent` — train a policy/value network using self-play data with PyTorch or TensorFlow. Provide training/evaluation scripts in `examples/`.
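The baseline agent is then only a few lines. This sketch follows the `BaseAgent` interface above but is written standalone (the seed parameter is an extra convenience, not part of the interface):

```python
import random
from typing import List, Optional, Tuple


class RandomAgent:
    """Baseline agent: picks uniformly from legal moves, passes when none remain."""

    def __init__(self, color: int, board_size: int, seed: Optional[int] = None):
        self.color = color
        self.board_size = board_size
        self._rng = random.Random(seed)  # seedable for reproducible games

    def select_move(self, board,
                    legal_moves: List[Tuple[int, int]]) -> Optional[Tuple[int, int]]:
        if not legal_moves:
            return None  # no legal move left: pass
        return self._rng.choice(legal_moves)


agent = RandomAgent(color=1, board_size=9, seed=42)
print(agent.select_move(None, [(0, 0), (1, 1), (2, 2)]))
```

Because it ignores the board entirely, `RandomAgent` also serves as the weakest opponent in evaluation matches for stronger agents.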
Notes on training: if you plan to train a neural-network agent, provide an experience buffer, a game record format (SGF or a simple NumPy encoding), and evaluation matches (Elo-style) to track improvement.
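One possible NumPy game-record encoding, purely illustrative (the project may settle on SGF instead; the pass convention of `(-1, -1)` is an assumption made here):

```python
import numpy as np


def encode_game(moves, board_size):
    """Encode a finished game as two aligned int8 arrays.

    moves: list of (color, row, col) tuples; (row, col) == (-1, -1) encodes a pass.
    Returns (positions, players) of shapes (n, 2) and (n,).
    """
    for _, r, c in moves:
        assert r < board_size and c < board_size
    positions = np.array([(r, c) for _, r, c in moves], dtype=np.int8)
    players = np.array([color for color, _, _ in moves], dtype=np.int8)
    return positions, players


moves = [(1, 2, 2), (2, 6, 6), (1, -1, -1)]  # black, white, black passes
positions, players = encode_game(moves, board_size=9)
print(positions.shape, players.tolist())  # (3, 2) [1, 2, 1]
```

A flat array pair like this is trivial to batch into training tensors, while SGF remains better for interoperability with existing Go tools.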
## Rendering with Pillow

The `render.py` module will provide functions like:

- `draw_board(board: np.ndarray, size: int, square_px: int = 40) -> PIL.Image` — returns a PIL Image of the board.
- `annotate_move(img: Image, move: Tuple[int, int], label: str)` — draws move numbers or highlights.
Rendering tips:
- Keep margins for coordinates and star points (hoshi) for 9x9/13x13/19x19.
- Use anti-aliased circles for stones and subtle shadows to improve visual clarity.
- Export snapshots as PNG with filenames containing move index and player.
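A minimal `draw_board` along these lines might look as follows. Colours, margins, and sizes are placeholder choices, and stones here are plain filled circles rather than the anti-aliased, shadowed ones suggested above:

```python
import numpy as np
from PIL import Image, ImageDraw


def draw_board(board: np.ndarray, square_px: int = 40) -> Image.Image:
    """Render a Go position: wooden background, grid lines, round stones."""
    size = board.shape[0]
    margin = square_px  # leave room for future coordinate labels
    side = 2 * margin + (size - 1) * square_px
    img = Image.new("RGB", (side, side), (220, 179, 92))  # wood-ish colour
    draw = ImageDraw.Draw(img)

    # Grid lines between the first and last intersections.
    last = margin + (size - 1) * square_px
    for i in range(size):
        offset = margin + i * square_px
        draw.line([(margin, offset), (last, offset)], fill="black")
        draw.line([(offset, margin), (offset, last)], fill="black")

    # Stones: 1 = black, 2 = white (matching the agent colour convention).
    radius = square_px // 2 - 2
    for r in range(size):
        for c in range(size):
            if board[r, c]:
                x, y = margin + c * square_px, margin + r * square_px
                fill = "black" if board[r, c] == 1 else "white"
                draw.ellipse([x - radius, y - radius, x + radius, y + radius],
                             fill=fill, outline="black")
    return img


board = np.zeros((9, 9), dtype=np.int8)
board[2, 2], board[4, 4] = 1, 2
img = draw_board(board)
img.save("demo_board.png")
```

For smoother stones, render at 2-4x resolution and downscale with `Image.resize` using `Image.LANCZOS`.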
## Development roadmap

Planned incremental tasks:
- Core rules & board representation (board.py) — baseline
- CLI game loop & snapshot rendering (game.py + render.py) — baseline
- Agents: RandomAgent & HeuristicAgent — baseline
- Unit tests covering capturing, ko, scoring — baseline
- MCTSAgent scaffold & basic implementation — next
- NNAgent scaffold (training pipeline, model save/load) — next
- Optional: GUI (tkinter / web-based) that consumes snapshots or integrates directly
- Optional: SGF import/export, game viewer
## Contributing

- Open an issue for bugs or feature requests.
- Follow the code style in `src/` (PEP 8). Tests must be added for new logic.
- Use meaningful commit messages and open a PR against `main`.
Run tests with:

```bash
pytest -q
```

Write tests under `tests/` that assert correctness of basic rules (captures and liberties) before touching AI components.
## License

This project is MIT licensed. Include a LICENSE file with the MIT text.
If you'd like, I can:

- scaffold the Python package with the files listed above,
- implement `board.py` (board representation, move validation, capture logic),
- implement `render.py` (Pillow rendering utilities) and provide example PNGs,
- implement baseline agents (`RandomAgent`, `HeuristicAgent`), or
- outline an MCTS implementation or a neural training pipeline.

Tell me which piece you want next and I'll scaffold it.