03 Modules Machine Learning

Łukasz Rafał Czarnacki edited this page Mar 9, 2026 · 1 revision

Machine Learning

Source packages:

  • src/trade_lab/ml
  • src/trade_lab/ml_optimization

TradeLab splits ML support into two layers: core building blocks under trade_lab.ml, and Optuna-driven search under trade_lab.ml_optimization.

trade_lab.ml

Public exports

  • Targets: BaseTarget, FutureReturn, DirectionalTarget
  • Preprocessing: prepare_features, FeatureScaler
  • Models: KerasModelWrapper, dense_model, lstm_model
  • Validation: WalkForwardSplit, WalkForwardResult
  • Training: MLTrainer (exported only when TensorFlow/Keras is installed)

Core responsibilities

  • models.py: wraps Keras models and provides builder factories plus directional_loss()
  • preprocessing.py: aligns features and targets and scales columns
  • targets.py: generates supervised labels from OHLCV data
  • trainer.py: orchestrates indicator computation, target generation, scaling, training, and walk-forward validation
  • validation.py: sequential walk-forward fold generation

KerasModelWrapper

MLStrategy expects:

  • model.input_names: exact DataFrame column names
  • model.predict(features_df): one prediction per row

KerasModelWrapper provides that bridge and can also save() and load() model metadata with the .keras artifact.
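The contract is small enough to show with a toy stand-in. The class below is hypothetical (not part of TradeLab); it only demonstrates the two members MLStrategy relies on, `input_names` and `predict()`:

```python
import pandas as pd

class ConstantModel:
    """Toy stand-in for the contract MLStrategy expects:
    input_names lists exact feature columns, predict returns one value per row."""

    def __init__(self, input_names, value=0.0):
        self.input_names = list(input_names)
        self.value = value

    def predict(self, features_df: pd.DataFrame):
        ordered = features_df[self.input_names]  # enforce column order by name
        return [self.value] * len(ordered)

model = ConstantModel(["ema_20", "rsi_14"], value=0.5)
df = pd.DataFrame({"rsi_14": [55.0, 60.0], "ema_20": [101.0, 102.0]})
preds = model.predict(df)  # → [0.5, 0.5]
```

Selecting columns by `input_names` makes the model robust to DataFrame column order, which is why MLStrategy keys on names rather than positions.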

Targets

  • FutureReturn(periods=5, column="Close", scale=10.0): tanh-scaled forward log return
  • DirectionalTarget(periods=5, column="Close"): sign of forward price change
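As a sketch of what the FutureReturn label likely computes, assuming "tanh-scaled forward log return" means tanh applied to `scale` times the forward log return (the exact formula lives in targets.py):

```python
import numpy as np
import pandas as pd

close = pd.Series([100.0, 101.0, 99.5, 102.0, 103.0, 104.0, 102.5, 105.0])
periods, scale = 2, 10.0

# Forward log return over `periods` bars; the last `periods` rows become NaN.
forward_log_ret = np.log(close.shift(-periods) / close)
target = np.tanh(scale * forward_log_ret)  # squashed into (-1, 1)
```

The tanh keeps the regression target bounded, so a few outsized moves cannot dominate the loss.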

Feature scaling

FeatureScaler supports:

  • standard
  • minmax

Fit the scaler on training data only to avoid leakage.
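The leakage rule can be sketched with plain NumPy (standard scaling shown; FeatureScaler's actual API may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(loc=5.0, scale=2.0, size=(100, 3))
train, test = features[:80], features[80:]

# Fit statistics on the training slice only, then apply them to both splits.
mean, std = train.mean(axis=0), train.std(axis=0)
train_scaled = (train - mean) / std
test_scaled = (test - mean) / std  # reuses training statistics: no leakage
```

Computing `mean` and `std` over the full array instead would let future (test) bars influence the training inputs.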

Walk-forward validation

WalkForwardSplit supports:

  • expanding windows
  • sliding windows

MLTrainer.walk_forward() retrains a fresh model on every fold and returns WalkForwardResult objects containing predictions, targets, and the wrapped model.
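The fold logic can be sketched in a few lines; this is a simplified model of the behavior, and WalkForwardSplit's real parameters may differ:

```python
def walk_forward_folds(n, train_size, test_size, mode="expanding"):
    """Yield (train_indices, test_indices) pairs in time order."""
    train_end = train_size
    while train_end + test_size <= n:
        # Expanding windows always start at 0; sliding windows keep a fixed width.
        train_start = 0 if mode == "expanding" else train_end - train_size
        yield (list(range(train_start, train_end)),
               list(range(train_end, train_end + test_size)))
        train_end += test_size

expanding = list(walk_forward_folds(10, train_size=4, test_size=2))
sliding = list(walk_forward_folds(10, train_size=4, test_size=2, mode="sliding"))
```

Each test window sits strictly after its training window, so every fold simulates predicting genuinely unseen bars.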

trade_lab.ml_optimization

Public exports

  • IndicatorSpec
  • FeatureMatrix
  • MLOptimizer
  • MLOptimizationResult
  • ModelPruner

What it optimizes

  • indicator inclusion or exclusion
  • indicator periods
  • indicator lags
  • Keras model training inside Optuna trials
  • final backtest metric on validation data

Key pieces

  • IndicatorSpec: describes one candidate indicator search range
  • FeatureMatrix: computes indicator features and optionally persists a fitted scaler
  • MLOptimizer: runs the study, rebuilds the best trial, retrains from scratch, and evaluates
  • ModelPruner: removes weak weights or features from the best ML result
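The shape of the search space can be sketched with a plain random search standing in for Optuna; the indicator names and period ranges below are hypothetical, chosen only to mirror what an IndicatorSpec describes:

```python
import random

# Hypothetical search space: each candidate indicator gets an on/off flag
# plus a sampled period, mirroring inclusion/exclusion and period search.
SPECS = {"ema": (5, 50), "rsi": (7, 28)}

def sample_trial(rng: random.Random) -> dict:
    params = {}
    for name, (low, high) in SPECS.items():
        if rng.random() < 0.8:                     # inclusion/exclusion
            params[name] = rng.randint(low, high)  # period
    return params

rng = random.Random(7)
trials = [sample_trial(rng) for _ in range(20)]
```

In the real pipeline each sampled configuration also trains a Keras model and is scored by a backtest metric on validation data, which is what the Optuna study optimizes.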

Typical training workflow

from trade_lab.indicators import EMA, RSI
from trade_lab.ml import FeatureScaler, FutureReturn, MLTrainer, dense_model
from trade_lab.signals import OHLC

ohlc = OHLC(lag=1)
indicators = [
    EMA(ohlc, period=20, lag=1),
    RSI(ohlc, period=14, lag=1),
]

trainer = MLTrainer(
    indicators=indicators,
    target=FutureReturn(periods=5),
    model_builder=dense_model(layers=[64, 32]),
    scaler=FeatureScaler(method="standard"),
)

wrapped_model = trainer.train(df, epochs=20, verbose=0)  # df: OHLCV DataFrame
