Attention mechanisms seem to improve time series prediction/forecasting and classification performance (sample paper).
The deep learning models in traja can easily accommodate an attention layer:
- Create a self-attention mechanism wrapper (Reference)
- Inject the attention layer instance on top of the LSTM layers, before and after encoding. Examples here and here
- Add an optional boolean `attention` argument to the autoencoding (AE, VAE, VAEGAN) base models.
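The steps above could be sketched roughly as follows. This is a minimal, hypothetical illustration in plain PyTorch, not traja's actual API: the class names `SelfAttention` and `AttentiveLSTM` and the `attention` flag are placeholders for whatever the wrapper and base-model argument end up being called.

```python
import torch
import torch.nn as nn


class SelfAttention(nn.Module):
    """Additive self-attention over LSTM hidden states (hypothetical wrapper)."""

    def __init__(self, hidden_size: int):
        super().__init__()
        # One scalar attention score per time step
        self.score = nn.Linear(hidden_size, 1)

    def forward(self, lstm_out: torch.Tensor):
        # lstm_out: (batch, seq_len, hidden_size)
        weights = torch.softmax(self.score(lstm_out), dim=1)  # (batch, seq_len, 1)
        context = (weights * lstm_out).sum(dim=1)             # (batch, hidden_size)
        return context, weights


class AttentiveLSTM(nn.Module):
    """LSTM encoder with an optional attention layer, mirroring the proposed boolean arg."""

    def __init__(self, input_size: int, hidden_size: int, attention: bool = True):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.attention = SelfAttention(hidden_size) if attention else None

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, (h, _) = self.lstm(x)
        if self.attention is not None:
            # Attention-weighted summary of all time steps
            context, _ = self.attention(out)
            return context
        # Fall back to the last hidden state when attention is disabled
        return h[-1]
```

Either branch returns a `(batch, hidden_size)` encoding, so downstream decoder layers in the AE/VAE/VAEGAN models would not need to change when the flag is toggled.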