A High-Performance Bitboard-Based Engine Integrating Alpha-Beta Pruning, Monte Carlo Tree Search, Zobrist Hashing, and Deep Neural Network Evaluation
🚀 A Research-Grade Hybrid Chess AI System Combining Classical Search and Deep Learning
NeuroSearch is a high-performance, research-oriented chess engine designed to bridge the gap between classical algorithmic search techniques and modern neural network-based AI systems. The project combines deterministic search algorithms with probabilistic exploration and deep learning-based evaluation, forming a hybrid architecture inspired by state-of-the-art systems.
This engine is built upon a highly optimized bitboard representation, enabling efficient low-level computation through 64-bit operations. It integrates multiple advanced techniques, including Alpha-Beta pruning, Monte Carlo Tree Search (MCTS), and neural policy-value evaluation, making it a powerful platform for experimentation in artificial intelligence and game theory.
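The bitboard idea can be illustrated with plain Python integers: each piece set is a 64-bit word with one bit per square, and move patterns become shift-and-mask operations. This is a minimal sketch assuming a1 = bit 0 … h8 = bit 63 numbering; the constant and function names are illustrative, not taken from `bitboard.py`.

```python
# Minimal bitboard sketch: one 64-bit integer per piece set,
# squares numbered a1 = 0 ... h8 = 63 (rank-major).
FILE_A = 0x0101010101010101
FILE_B = FILE_A << 1
FILE_G = FILE_A << 6
FILE_H = FILE_A << 7
MASK64 = 0xFFFFFFFFFFFFFFFF  # clamp Python's unbounded ints to 64 bits

def knight_attacks(square):
    """All squares a knight on `square` attacks, as a bitboard.
    The file masks stop moves from wrapping around the board edge."""
    b = 1 << square
    return (((b << 17) & ~FILE_A              # up 2, right 1
             | (b << 15) & ~FILE_H            # up 2, left 1
             | (b << 10) & ~(FILE_A | FILE_B) # up 1, right 2
             | (b << 6)  & ~(FILE_G | FILE_H) # up 1, left 2
             | (b >> 6)  & ~(FILE_A | FILE_B) # down 1, right 2
             | (b >> 10) & ~(FILE_G | FILE_H) # down 1, left 2
             | (b >> 15) & ~FILE_A            # down 2, right 1
             | (b >> 17) & ~FILE_H)           # down 2, left 1
            & MASK64)
```

Because attack sets are single integers, set operations (union, intersection, popcount) each cost one machine-word instruction rather than a loop over squares.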
- 🚀 High-performance bitboard-based chess engine
- 🧠 Hybrid AI: Classical search + Neural network evaluation
- ♟️ Multiple search engines (Minimax, Hybrid, MCTS)
- 📊 Real-time analysis via REST API
- 🌐 Interactive React-based frontend
- 🔬 Research-ready architecture for experimentation
- Bitboard Representation – Efficient 64-bit board encoding
- Minimax Algorithm with Alpha-Beta Pruning
- Quiescence Search for tactical stability
- Monte Carlo Tree Search (MCTS)
- Zobrist Hashing for transposition tables
- Move Ordering Heuristics
- Deep Convolutional Neural Networks
- Residual Blocks Architecture
- Policy-Value Network (AlphaZero-style)
- Supervised Learning from PGN datasets
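To make the first two items concrete, here is a sketch of alpha-beta pruning in negamax form over an abstract game tree. It is independent of the engine's actual `minimax.py`; the `evaluate`/`children` callbacks are illustrative assumptions.

```python
def negamax(node, depth, alpha, beta, evaluate, children):
    """Negamax search with alpha-beta pruning over an abstract game tree.
    `evaluate(node)` scores a position from the side to move's perspective;
    `children(node)` returns successor positions (empty at terminal nodes)."""
    succ = children(node)
    if depth == 0 or not succ:
        return evaluate(node)
    best = float("-inf")
    for child in succ:
        # Negate and swap the window: the child is scored by the opponent.
        score = -negamax(child, depth - 1, -beta, -alpha, evaluate, children)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:  # cutoff: the opponent will never allow this line
            break
    return best
```

Move ordering heuristics matter precisely here: the earlier a strong move appears in `children(node)`, the sooner `alpha >= beta` fires and the more of the tree is skipped.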
```
        +----------------------+
        |    React Frontend    |
        |   (Chessboard UI)    |
        +----------+-----------+
                   |
                   v
        +----------------------+
        |     FastAPI API      |
        |  (Inference Layer)   |
        +----------+-----------+
                   |
        +----------+-----------+
        |                      |
        v                      v
+---------------+    +-------------------+
| Hybrid Engine |    |    MCTS Engine    |
| (Alpha-Beta + |    |  (Neural Guided)  |
|  Neural Eval) |    +-------------------+
+---------------+
        |
        v
+----------------------+
| Neural Network Model |
|  (Policy + Value)    |
+----------------------+
```
```
NeuroSearch/
├── backend/
│   ├── bitboard.py
│   ├── move_gen.py
│   ├── minimax.py
│   ├── hybrid.py
│   ├── mcts.py
│   ├── zobrist.py
│   ├── heuristic.py
│   ├── neural_nt.py
│   ├── model.py
│   ├── dataset.py
│   ├── training.py
│   └── fastapi_app.py
├── frontend/
│   └── react-app/
├── experiments/
├── benchmarks/
├── docs/
└── README.md
```
```shell
git clone https://github.com/your-username/neurosearch.git
cd neurosearch
```
```shell
pip install -r requirements.txt
uvicorn fastapi_app:app --reload
```
```shell
cd frontend/react-app
npm install
npm start
```
| Endpoint | Method | Description |
|---|---|---|
| /analyze | POST | Returns best move using selected engine |
| /evaluate | POST | Returns evaluation score |
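A minimal client for the `/analyze` endpoint, using only the standard library. The request fields (`fen`, `engine`, `depth`) are assumed for illustration; check `fastapi_app.py` for the actual schema. The default URL matches `uvicorn`'s default host and port.

```python
import json
import urllib.request

API_URL = "http://localhost:8000"  # uvicorn's default bind address

def build_analyze_request(fen, engine="hybrid", depth=4):
    """Build the JSON body for POST /analyze.
    The field names here are illustrative assumptions, not a confirmed schema."""
    return {"fen": fen, "engine": engine, "depth": depth}

def analyze(fen, engine="hybrid", depth=4):
    """Send a position to the engine and return the parsed JSON response."""
    body = json.dumps(build_analyze_request(fen, engine, depth)).encode()
    req = urllib.request.Request(f"{API_URL}/analyze", data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Requires the backend to be running (uvicorn fastapi_app:app --reload).
    print(analyze("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"))
```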
The neural network follows a policy-value architecture inspired by modern reinforcement learning systems. It consists of a deep convolutional backbone with residual connections, enabling effective feature extraction from board states.
- Input: 12-channel board tensor
- Policy Head: Move probability distribution
- Value Head: Position evaluation (-1 to 1)
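The head structure above can be sketched as a small PyTorch module. The channel count, number of residual blocks, and the flat 4096-move policy encoding (from-square × to-square) are illustrative assumptions, not the repository's actual `model.py` configuration.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with batch norm and a skip connection."""
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(ch)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(ch)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)  # skip connection

class PolicyValueNet(nn.Module):
    """Shared convolutional backbone with separate policy and value heads."""
    def __init__(self, channels=64, blocks=4, policy_size=4096):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(12, channels, 3, padding=1),  # 12-channel board tensor in
            nn.BatchNorm2d(channels), nn.ReLU())
        self.body = nn.Sequential(*[ResidualBlock(channels) for _ in range(blocks)])
        self.policy = nn.Sequential(                 # move logits over 4096 slots
            nn.Conv2d(channels, 2, 1), nn.Flatten(), nn.Linear(2 * 64, policy_size))
        self.value = nn.Sequential(                  # scalar evaluation in [-1, 1]
            nn.Conv2d(channels, 1, 1), nn.Flatten(),
            nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1), nn.Tanh())

    def forward(self, x):
        h = self.body(self.stem(x))
        return self.policy(h), self.value(h)
```

The `Tanh` on the value head is what bounds the evaluation to (−1, 1), matching the range stated above.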
PGN Dataset → DataLoader → Neural Network → Loss Function → Backpropagation
The training pipeline supports supervised learning from real game data and can be extended to self-play reinforcement learning.
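The loss function in this pipeline typically combines both heads, AlphaZero-style: mean squared error on the value prediction plus cross-entropy between the predicted move distribution and the target policy. A sketch, assuming soft (or one-hot) target policies extracted from the PGN data:

```python
import torch
import torch.nn.functional as F

def policy_value_loss(policy_logits, value_pred, target_policy, target_value):
    """Combined training objective for a policy-value network.
    value: MSE between predicted and observed game outcome;
    policy: cross-entropy against the target move distribution."""
    value_loss = F.mse_loss(value_pred.squeeze(-1), target_value)
    policy_loss = -(target_policy * F.log_softmax(policy_logits, dim=1)).sum(dim=1).mean()
    return value_loss + policy_loss
```

Backpropagating this single scalar updates the shared backbone and both heads at once, which is what lets one network serve both move selection and evaluation.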
- ⚡ GPU-accelerated MCTS
- 🧠 Self-play reinforcement learning loop
- 📊 Advanced evaluation metrics
- 🌐 WebSocket real-time analysis
- 🎨 Enhanced UI (drag, arrows, animations)
Contributions are welcome! Feel free to open issues or submit pull requests for improvements, optimizations, or new features.
This project is licensed under the MIT License.