Neural Programmer-Interpreter implementation (Reed & de Freitas: https://arxiv.org/abs/1511.06279), in TensorFlow
Updated Nov 17, 2018 - Python
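The NPI core is a recurrent controller: given an encoding of the environment state and the embedding of the currently running program, it emits an end-of-program probability, a key used to look up the next subprogram in a program memory, and that subprogram's arguments. A minimal sketch of this interface, in PyTorch rather than the repo's TensorFlow, with all dimensions chosen arbitrarily:

```python
import torch
import torch.nn as nn

class NPICore(nn.Module):
    """Sketch of the NPI controller interface: fuse an environment
    encoding with the current program embedding, step an LSTM, and
    emit (end-of-program prob, next-program key, next arguments)."""
    def __init__(self, state_dim=128, prog_dim=64, key_dim=32, arg_dim=16, hidden=256):
        super().__init__()
        self.fuse = nn.Linear(state_dim + prog_dim, hidden)
        self.lstm = nn.LSTMCell(hidden, hidden)
        self.end_head = nn.Linear(hidden, 1)        # r_t: probability of halting
        self.key_head = nn.Linear(hidden, key_dim)  # k_t: key into program memory
        self.arg_head = nn.Linear(hidden, arg_dim)  # a_{t+1}: next call's arguments

    def forward(self, state_enc, prog_emb, hc):
        x = torch.relu(self.fuse(torch.cat([state_enc, prog_emb], dim=-1)))
        h, c = self.lstm(x, hc)
        return torch.sigmoid(self.end_head(h)), self.key_head(h), self.arg_head(h), (h, c)

core = NPICore()
hc = (torch.zeros(1, 256), torch.zeros(1, 256))
r, k, a, hc = core(torch.randn(1, 128), torch.randn(1, 64), hc)
```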
A word-level Transformer layer based on PyTorch and 🤗 Transformers.
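For context, the essential shape of such a layer: token plus position embeddings feeding a standard encoder block. A sketch with placeholder sizes, not this repo's actual configuration:

```python
import torch
import torch.nn as nn

class WordLevelTransformer(nn.Module):
    """Word-level Transformer block sketch: token embeddings plus learned
    positions feed a pre-norm encoder layer. Sizes are placeholders."""
    def __init__(self, vocab_size=30522, d_model=256, n_heads=4, max_len=512):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        self.layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model,
            batch_first=True, norm_first=True)

    def forward(self, ids):                      # ids: (batch, seq)
        pos = torch.arange(ids.size(1), device=ids.device)
        return self.layer(self.tok(ids) + self.pos(pos))

x = torch.randint(0, 30522, (2, 10))
print(WordLevelTransformer()(x).shape)           # torch.Size([2, 10, 256])
```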
IJRR 2026 | Situationally-Aware Dynamics Learning | Online and Unsupervised Latent Factor Representation Learning for Robot Dynamics.
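One plausible reading of that setup (an assumption, not the paper's actual architecture): a context encoder compresses recent transitions into a latent factor, and a dynamics head conditions its next-state prediction on it; because the one-step prediction loss needs no labels, training can run online and unsupervised:

```python
import torch
import torch.nn as nn

class LatentFactorDynamics(nn.Module):
    """Hypothetical latent-factor dynamics sketch: a GRU encodes a window of
    recent (state, action, next_state) transitions into a latent factor z,
    and the dynamics head predicts the next state from (state, action, z)."""
    def __init__(self, s_dim=8, a_dim=2, z_dim=4, hidden=64):
        super().__init__()
        self.ctx = nn.GRU(2 * s_dim + a_dim, hidden, batch_first=True)
        self.to_z = nn.Linear(hidden, z_dim)
        self.dyn = nn.Sequential(
            nn.Linear(s_dim + a_dim + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, s_dim))

    def forward(self, window, s, a):   # window: (batch, T, 2*s_dim + a_dim)
        _, h = self.ctx(window)
        z = self.to_z(h[-1])           # latent situation factor
        return self.dyn(torch.cat([s, a, z], dim=-1))
```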
Visual analytics approach presented in the paper "Visual Analytics Tool for the Interpretation of Hidden States in Recurrent Neural Networks" (VCIBA, 2021).
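The raw material such a tool works with is straightforward to produce: run the RNN and keep the hidden state at every timestep, then project and inspect. A toy extraction sketch (the GRU and sizes are arbitrary stand-ins, not the paper's models):

```python
import torch
import torch.nn as nn

# Collect per-timestep hidden states from a recurrent net; these vectors are
# what the visual analytics tool projects and lets an analyst inspect.
torch.manual_seed(0)
rnn = nn.GRU(input_size=16, hidden_size=32, batch_first=True)
x = torch.randn(1, 20, 16)             # one sequence of 20 steps
states, _ = rnn(x)                     # (1, 20, 32): one hidden state per step
print(states.squeeze(0).shape)         # torch.Size([20, 32])
```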
R package for Statistical Modeling of Animal Movements
This repository contains NLP transfer-learning projects with deployment and UI integration.
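A representative transfer-learning step with 🤗 Transformers (the checkpoint name and labels below are placeholders; the repo's own projects and deployment stack are not shown):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load a pretrained encoder with a fresh classification head and take one
# gradient step on a toy batch; a real run would loop with an optimizer.
name = "distilbert-base-uncased"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

batch = tok(["great product", "terrible service"], return_tensors="pt", padding=True)
out = model(**batch, labels=torch.tensor([1, 0]))
out.loss.backward()
print(float(out.loss))
```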
Designing and training probabilistic graphical models (MATLAB).
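As a Python stand-in for the MATLAB code, here is the basic operation such models support: exact posterior inference by enumeration over a toy rain/sprinkler/wet-grass network with made-up probabilities:

```python
from itertools import product

# Tiny discrete Bayesian network P(R) P(S) P(W | R, S); CPT numbers are
# illustrative only. Query answered by exhaustive enumeration.
P_R = {True: 0.2, False: 0.8}
P_S = {True: 0.1, False: 0.9}
P_W = {(True, True): 0.99, (True, False): 0.9,
       (False, True): 0.8, (False, False): 0.05}   # P(W=true | R, S)

def joint(r, s, w):
    pw = P_W[(r, s)]
    return P_R[r] * P_S[s] * (pw if w else 1 - pw)

# P(Rain=true | WetGrass=true), summing out the sprinkler
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print(num / den)   # ≈ 0.645
```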
Topology and geometry of Transformer hidden states via persistent homology. The manifold is a 1D arc (PC1 tracks the hidden-state norm, r = 0.999), not a torus or sphere; PC2 tracks position, PC3 surprisal plus part of speech. Validated on models up to 40B parameters.
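The headline measurement can be reproduced in miniature: project hidden states onto their principal components and correlate PC1 scores with vector norms. Synthetic rank-one data stands in for real Transformer activations below:

```python
import numpy as np

# Build points along a noisy 1D arc, then check that the first principal
# component of the centered cloud tracks the vector norm.
rng = np.random.default_rng(0)
u = rng.normal(size=64)
u /= np.linalg.norm(u)                       # fixed direction of the arc
s = np.linspace(1.0, 5.0, 1000)              # position along the arc
H = s[:, None] * u + 0.05 * rng.normal(size=(1000, 64))

Hc = H - H.mean(axis=0)
pc1 = Hc @ np.linalg.svd(Hc, full_matrices=False)[2][0]
norms = np.linalg.norm(H, axis=1)
print(abs(np.corrcoef(pc1, norms)[0, 1]))    # ~0.999 when PC1 tracks the norm
```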
Investigating Layer-Specific Performance in Speaker Recognition with the XLS-R Architecture
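Layer-wise features for such probes can be pulled from the XLS-R encoder by requesting all hidden states; the checkpoint below is the public 300M XLS-R model, and the random waveform replaces real, properly normalized audio:

```python
import torch
from transformers import AutoModel

# Ask the encoder for every layer's hidden states and pick one layer's
# output as features for a layer-specific speaker-recognition probe.
model = AutoModel.from_pretrained("facebook/wav2vec2-xls-r-300m")
wave = torch.randn(1, 16000)               # 1 s of fake 16 kHz audio
with torch.no_grad():
    out = model(wave, output_hidden_states=True)
print(len(out.hidden_states))              # embeddings + one entry per layer
layer7 = out.hidden_states[7]              # (1, frames, hidden_dim)
print(layer7.shape)
```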
Geometric phase structure in Transformer hidden states. LayerNorm placement predicts manifold geometry, with a 6x difference in PCA concentration. Evaluated across 9 models and 13 experiments.
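Reading "PCA concentration" as the share of variance captured by the first principal component (an assumption about the repo's exact metric), the comparison looks like this on synthetic arc-shaped versus isotropic point clouds:

```python
import numpy as np

def pc1_concentration(H):
    """Fraction of total variance captured by the first principal
    component of a hidden-state matrix H (rows = tokens)."""
    Hc = H - H.mean(axis=0)
    var = np.linalg.svd(Hc, full_matrices=False)[1] ** 2
    return var[0] / var.sum()

rng = np.random.default_rng(0)
arc = np.linspace(1, 5, 500)[:, None] * rng.normal(size=(1, 32))  # 1D arc
blob = rng.normal(size=(500, 32))                                 # isotropic
print(pc1_concentration(arc), pc1_concentration(blob))            # high vs low
```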
Measurement-first audit repo for hidden-state verifiers in structured reasoning: outcome readout vs process verification via counterfactual local validity.
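The "outcome readout" half of that contrast can be illustrated with a linear probe on hidden states; everything below is synthetic and shows only the probing mechanic, not this repo's audit:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Train a linear probe to read a final outcome off hidden states. The repo's
# point is that such a readout can score well without verifying whether the
# intermediate reasoning process was actually valid.
rng = np.random.default_rng(0)
H = rng.normal(size=(2000, 128))                            # stand-in hidden states
w = rng.normal(size=128)
y = (H @ w + 0.5 * rng.normal(size=2000) > 0).astype(int)   # outcome labels

probe = LogisticRegression(max_iter=1000).fit(H[:1500], y[:1500])
print(probe.score(H[1500:], y[1500:]))                      # held-out readout accuracy
```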