attention layer on top of LSTMs #82

@Saran-nns

Description

Attention mechanisms seem to improve time-series prediction/forecasting and classification performance (sample paper).

The deep learning models in traja can easily accommodate an attention layer.

  1. Create a self-attention mechanism wrapper (Reference).
  2. Inject the attention layer instance on top of the LSTM layers, before and after encoding (examples here and here).
  3. Add an optional boolean argument for attention to the autoencoder (AE, VAE, VAEGAN) base models.
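The steps above could be sketched roughly as follows. This is a minimal illustration, not traja's actual API: the class names (`SelfAttention`, `AttentiveLSTMEncoder`), the additive scoring scheme, and the `attention` flag are all hypothetical stand-ins for the wrapper, injection point, and boolean argument described in points 1-3.

```python
import torch
import torch.nn as nn


class SelfAttention(nn.Module):
    """Additive self-attention wrapper over LSTM hidden states (step 1, sketch)."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.score = nn.Linear(hidden_size, 1)

    def forward(self, lstm_out: torch.Tensor):
        # lstm_out: (batch, seq_len, hidden_size)
        weights = torch.softmax(self.score(lstm_out), dim=1)  # (batch, seq_len, 1)
        context = (weights * lstm_out).sum(dim=1)             # (batch, hidden_size)
        return context, weights


class AttentiveLSTMEncoder(nn.Module):
    """LSTM encoder with attention injected on top (step 2),
    toggled by an optional boolean argument (step 3)."""

    def __init__(self, input_size: int, hidden_size: int, attention: bool = False):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.attention = SelfAttention(hidden_size) if attention else None

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, (h_n, _) = self.lstm(x)
        if self.attention is not None:
            # Attention-weighted summary of all time steps
            context, _ = self.attention(out)
            return context
        # Fall back to the last hidden state when attention is disabled
        return h_n[-1]


x = torch.randn(4, 10, 3)  # (batch, seq_len, input_size)
enc = AttentiveLSTMEncoder(input_size=3, hidden_size=8, attention=True)
print(enc(x).shape)  # torch.Size([4, 8])
```

Either way the encoder emits a fixed-size summary per trajectory, so the attention path could slot into the existing AE/VAE/VAEGAN base models behind the proposed boolean argument without changing downstream shapes.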

Metadata

Labels

enhancement (New feature or request), help wanted (Extra attention is needed)
