---
title: Building a Limit Order Book in Rust
description: Step-by-step guide to designing and implementing a performant limit order book core for HFT applications using Rust, covering data structures, order matching logic, and best practices.
pubDate: "Aug 2 2025"
---

# Introduction

In high-frequency trading systems, the limit order book (LOB) is the fundamental component that maintains all resting buy and sell orders and matches them according to price-time priority. In this article we will:

* Define the core data types for orders and book sides
* Choose efficient data structures for price levels
* Implement order insertion, cancellation, and matching logic
* Follow Rust best practices and design patterns for performance and maintainability

# 1. Core data types

First, let us define the basic building blocks: the `Order` struct and the enumeration for buy/sell sides.

```rust
/// Unique identifier for an order
pub type OrderId = u64;

/// Side of an order: Bid or Ask
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum Side {
    Bid,
    Ask,
}

/// A limit order
#[derive(Debug, Clone)]
pub struct Order {
    pub id: OrderId,
    pub side: Side,
    pub price: u64,     // Price in ticks (smallest unit)
    pub quantity: u64,  // Remaining quantity
    pub timestamp: u64, // Epoch timestamp for time priority
}
```

* We use `u64` for `price` and `quantity` to avoid negative values and ensure wide ranges.
* `timestamp` ensures strict FIFO matching within the same price level.
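To see why integer ticks beat floating-point prices, consider a quick standalone check (the 0.1 tick size here is just an illustrative assumption):

```rust
fn main() {
    // Floating-point prices accumulate representation error:
    let sum = 0.1_f64 + 0.2_f64;
    assert_ne!(sum, 0.3); // actually 0.30000000000000004

    // With prices stored as u64 tick counts (tick size 0.1 here),
    // the same arithmetic is exact:
    let ticks = 1_u64 + 2_u64; // 0.1 + 0.2 expressed in ticks
    assert_eq!(ticks, 3);
}
```

Exactness matters because the matching engine compares prices for equality when locating a level; two floats that "should" be equal may not be.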

# 2. Price level and order queue

Each price level holds a queue of orders in time priority. A `VecDeque` is a natural choice:

```rust
use std::collections::VecDeque;

/// Orders at a single price level
pub struct PriceLevel {
    pub price: u64,
    pub orders: VecDeque<Order>,
}

impl PriceLevel {
    pub fn new(price: u64) -> Self {
        Self { price, orders: VecDeque::new() }
    }

    pub fn add_order(&mut self, order: Order) {
        self.orders.push_back(order);
    }

    pub fn pop_front(&mut self) -> Option<Order> {
        self.orders.pop_front()
    }
}
```

* `VecDeque` offers O(1) push/pop at both ends.
* Wrapping the queue in a `PriceLevel` struct gives a clear API for managing orders.
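The FIFO discipline this buys us can be seen in a tiny standalone sketch, using bare order ids in place of full `Order` values:

```rust
use std::collections::VecDeque;

fn main() {
    // Time priority: the id that entered the queue first leaves first.
    let mut level: VecDeque<u64> = VecDeque::new();
    level.push_back(1); // arrived first at this price
    level.push_back(2);
    level.push_back(3);

    assert_eq!(level.pop_front(), Some(1)); // matched first
    assert_eq!(level.pop_front(), Some(2));
    assert_eq!(level.len(), 1); // id 3 still resting
}
```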

# 3. Book sides and data structure choice

To maintain sorted price levels, we need a structure keyed by price. Two common options:

* `BTreeMap<u64, PriceLevel>`: a balanced tree with O(log n) insertion and removal.
* A skiplist, via crates such as `skiplist`, which can offer comparable performance in some situations; benchmark before committing to one: https://github.com/sh4ka/skiplist-demo

For simplicity and reliability, we'll use `BTreeMap`:

```rust
use std::collections::BTreeMap;

/// One side of the book (bids or asks)
pub struct BookSide {
    levels: BTreeMap<u64, PriceLevel>,
}

impl BookSide {
    pub fn new() -> Self {
        Self { levels: BTreeMap::new() }
    }

    /// Get best price (highest for bids, lowest for asks)
    pub fn best_price(&self, side: Side) -> Option<u64> {
        match side {
            Side::Bid => self.levels.keys().next_back().copied(),
            Side::Ask => self.levels.keys().next().copied(),
        }
    }

    /// Insert an order into its price level
    pub fn insert(&mut self, order: Order) {
        let level = self.levels
            .entry(order.price)
            .or_insert_with(|| PriceLevel::new(order.price));
        level.add_order(order);
    }

    /// Remove a whole price level when it becomes empty
    pub fn remove_level_if_empty(&mut self, price: u64) {
        if let Some(level) = self.levels.get(&price) {
            if level.orders.is_empty() {
                self.levels.remove(&price);
            }
        }
    }
}
```
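As a quick standalone check of the best-price logic, here is the same lookup on a plain `BTreeMap` of price to aggregate quantity (a simplification of `PriceLevel`):

```rust
use std::collections::BTreeMap;

fn main() {
    // price -> total resting quantity, standing in for PriceLevel
    let mut bids: BTreeMap<u64, u64> = BTreeMap::new();
    bids.insert(100, 10);
    bids.insert(101, 5);
    bids.insert(99, 7);

    // BTreeMap iterates keys in ascending order, so:
    let best_bid = bids.keys().next_back().copied(); // highest price
    let lowest = bids.keys().next().copied();        // lowest price (how the ask side finds its best)
    assert_eq!(best_bid, Some(101));
    assert_eq!(lowest, Some(99));
}
```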

# 4. Matching engine logic

The matching engine takes incoming orders and attempts to fill them against the opposite side:

```rust
pub struct OrderBook {
    bids: BookSide,
    asks: BookSide,
    next_order_id: OrderId,
}

impl OrderBook {
    pub fn new() -> Self {
        Self {
            bids: BookSide::new(),
            asks: BookSide::new(),
            next_order_id: 1,
        }
    }

    /// Submit a new limit order; returns remaining quantity if not fully filled
    pub fn submit_limit_order(&mut self, mut order: Order) -> u64 {
        let (own_side, other_side) = match order.side {
            Side::Bid => (&mut self.bids, &mut self.asks),
            Side::Ask => (&mut self.asks, &mut self.bids),
        };

        while order.quantity > 0 {
            // Peek at the best opposite price
            if let Some(best_price) = other_side.best_price(match order.side {
                Side::Bid => Side::Ask,
                Side::Ask => Side::Bid,
            }) {
                let should_match = match order.side {
                    Side::Bid => order.price >= best_price,
                    Side::Ask => order.price <= best_price,
                };
                if !should_match {
                    break;
                }

                // Match at this price level
                if let Some(level) = other_side.levels.get_mut(&best_price) {
                    while let Some(mut resting) = level.pop_front() {
                        let traded = resting.quantity.min(order.quantity);
                        resting.quantity -= traded;
                        order.quantity -= traded;

                        // Notify trade events here (omitted for brevity)

                        if resting.quantity > 0 {
                            // Partial fill: re-queue the remaining resting quantity
                            level.orders.push_front(resting);
                            break;
                        }
                        if order.quantity == 0 {
                            break;
                        }
                    }
                    other_side.remove_level_if_empty(best_price);
                }
            } else {
                break;
            }
        }

        // If there is remaining quantity, insert it into our own side
        if order.quantity > 0 {
            own_side.insert(order);
        }

        order.quantity
    }
}
```

# 5. Putting it all together

Here is an example usage:

```rust
fn main() {
    let mut book = OrderBook::new();

    let order1 = Order { id: 1, side: Side::Ask, price: 100, quantity: 10, timestamp: 1 };
    book.submit_limit_order(order1);

    let taker = Order { id: 2, side: Side::Bid, price: 105, quantity: 5, timestamp: 2 };
    let remaining = book.submit_limit_order(taker);
    println!("Taker remaining: {}", remaining); // prints "Taker remaining: 0"
}
```

This will match 5 units at price 100, leaving the ask side with 5 units resting at 100.
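The introduction listed cancellation, which the engine above does not yet implement. One common approach keeps an auxiliary id-to-price index so a cancel can find the right level in O(log P). A minimal standalone sketch (the `index` map and the `retain` scan are illustrative choices, not part of the engine above):

```rust
use std::collections::{BTreeMap, HashMap, VecDeque};

fn main() {
    // One book side, simplified: price -> queue of order ids.
    let mut levels: BTreeMap<u64, VecDeque<u64>> = BTreeMap::new();
    // Auxiliary index: order id -> price, maintained on every insert.
    let mut index: HashMap<u64, u64> = HashMap::new();

    // Insert two orders at price 100.
    for id in [1_u64, 2] {
        levels.entry(100).or_default().push_back(id);
        index.insert(id, 100);
    }

    // Cancel order 1: look up its price, then remove it from that level.
    if let Some(price) = index.remove(&1) {
        if let Some(queue) = levels.get_mut(&price) {
            // O(n) in the level size; an intrusive list or tombstoning avoids the scan.
            queue.retain(|&id| id != 1);
            if queue.is_empty() {
                levels.remove(&price);
            }
        }
    }

    assert_eq!(levels.get(&100).map(|q| q.len()), Some(1));
    assert_eq!(levels[&100][0], 2); // only order 2 remains resting
}
```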

# 6. Next steps and optimizations

* Memory Management: Use object pools or arena allocators for orders to reduce heap overhead.
* Concurrency: For multi-threaded matching, partition the book by instrument or shard price ranges.
* Performance Tuning: Replace `BTreeMap` with a specialized skiplist or a custom radix tree for lower latency.
* Testing and Benchmarking: In the following articles we will add unit and integration tests, plus memory and CPU benchmarking harnesses, to measure throughput and latency and guide our optimizations for high-frequency trading environments.
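To make the object-pool idea concrete, here is a minimal free-list pool sketch: values live in one `Vec` (good cache locality) and freed slots are recycled instead of hitting the allocator per order. All names here are illustrative, not part of the engine above:

```rust
/// Minimal free-list pool: slots are reused instead of reallocated.
struct Pool<T> {
    slots: Vec<Option<T>>,
    free: Vec<usize>, // indices of vacated slots
}

impl<T> Pool<T> {
    fn new() -> Self {
        Self { slots: Vec::new(), free: Vec::new() }
    }

    /// Store a value, reusing a freed slot if one exists.
    fn alloc(&mut self, value: T) -> usize {
        match self.free.pop() {
            Some(i) => { self.slots[i] = Some(value); i }
            None => { self.slots.push(Some(value)); self.slots.len() - 1 }
        }
    }

    /// Vacate a slot, returning its value and recycling the index.
    fn release(&mut self, i: usize) -> Option<T> {
        let value = self.slots[i].take();
        if value.is_some() {
            self.free.push(i);
        }
        value
    }
}

fn main() {
    let mut pool: Pool<u64> = Pool::new();
    let a = pool.alloc(10);
    let b = pool.alloc(20);
    pool.release(a);
    let c = pool.alloc(30); // reuses the slot vacated by `a`
    assert_eq!(c, a);
    assert_eq!(pool.slots.len(), 2); // no new backing allocation for `c`
    assert_eq!(pool.release(b), Some(20));
}
```

A production pool would return handles rather than raw indices to guard against use-after-free, but the recycling mechanism is the same.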

# 7. Performance analysis and discussion

While this Rust-based limit order book is functionally correct, in a high-frequency trading (HFT) context true performance comes down to microsecond and even nanosecond optimizations. Here is an overview of where our current design stands and which areas require further tuning:

- Algorithmic complexity:
  - Insertion and removal of price levels via `BTreeMap` is O(log P), where P is the number of distinct price levels.
  - Order queue operations (`VecDeque`) are amortized O(1) for push and pop.
  - Matching a single order against k levels costs O(k · (log P + 1)). In practice k is small in tight markets, but the worst case can grow.

- Memory allocation overhead:
  - Each new `Order` and `PriceLevel` incurs a heap allocation. At HFT throughput of tens of thousands of orders per second, allocator contention and cache misses become significant.
  - Object pooling or slab allocators can reduce these costs by reusing memory and improving cache locality.

- Data structure trade-offs:
  - `BTreeMap` provides safety and predictability, but its node-based structure can produce pointer chasing and cache misses.
  - Alternative structures, such as a custom fixed-size ring buffer with index arrays or a highly optimized skiplist, can reduce pointer indirection and branch mispredictions.

- Latency sources:
  - Locking or shared-memory coordination in multi-threaded contexts.
  - Dynamic allocations on the critical path.
  - Pointer chasing in balanced trees.

- Benchmarking strategy:
  - Microbenchmarks of single-threaded operations (insert, match, cancel) with Rust's `criterion` crate.
  - Memory profiling with tools like `perf`, Valgrind's massif, and jemalloc statistics.
  - Multi-threaded scalability tests under synthetic workloads.
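For statistically sound numbers, `criterion` is the right tool; but the basic shape of an insert microbenchmark can be sketched with nothing but `std::time::Instant` (the 512-level book depth below is an illustrative assumption, roughly mimicking a tight market):

```rust
use std::collections::BTreeMap;
use std::time::Instant;

fn main() {
    // price -> aggregate quantity, standing in for full PriceLevels.
    let mut levels: BTreeMap<u64, u64> = BTreeMap::new();

    let start = Instant::now();
    for i in 0..100_000_u64 {
        // Cycle through ~512 price levels, like a tight book.
        *levels.entry(i % 512).or_insert(0) += 1;
    }
    let elapsed = start.elapsed();

    println!(
        "100k inserts in {:?} (~{:.0} ns/op)",
        elapsed,
        elapsed.as_nanos() as f64 / 100_000.0
    );
    assert_eq!(levels.len(), 512);
}
```

Timing loops like this are noisy (no warm-up, no outlier rejection), which is exactly what `criterion` automates.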
In the next article, we will explore advanced order types (iceberg, stop-loss) and extend our engine with event sourcing and persistence.
