Machine learning library, Distributed training, Deep learning, Reinforcement learning, Models, TensorFlow, PyTorch
Updated Mar 26, 2026 - Python
☄️ Parallel and distributed training with spaCy and Ray
Cross-lingual Language Model (XLM) pretraining and Model-Agnostic Meta-Learning (MAML) for fast adaptation of deep networks
A simple package for distributed model training using Distributed Data Parallel (DDP) in PyTorch.
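The core idea behind Distributed Data Parallel training — every worker computes gradients on its own data shard, then an all-reduce averages them so all replicas apply the same update — can be sketched without PyTorch. Everything below is an illustrative stand-in, not the package's or PyTorch's actual API:

```python
# Toy illustration of the gradient averaging at the heart of data-parallel
# training (the same idea PyTorch DDP implements with all-reduce collectives).
# Model: y = w * x with squared-error loss; all names here are illustrative.

def local_gradient(w, shard):
    # Each worker computes the loss gradient on its own data shard.
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def all_reduce_mean(values):
    # Stand-in for the all-reduce collective: average one value per worker.
    return sum(values) / len(values)

def ddp_step(w, shards, lr=0.05):
    grads = [local_gradient(w, s) for s in shards]  # runs in parallel in real DDP
    g = all_reduce_mean(grads)                      # synchronize gradients
    return w - lr * g                               # identical update on every replica

# Two "workers", each holding half of a dataset generated by y = 3x.
shards = [[(1, 3), (2, 6)], [(3, 9), (4, 12)]]
w = 0.0
for _ in range(30):
    w = ddp_step(w, shards)
```

Because the averaged gradient is identical on every worker, all replicas stay bit-for-bit in sync without ever exchanging model weights — only gradients cross the wire.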
Ring sliding window attention implementation with flash attention
This repository is a tutorial on training deep neural network models more efficiently, focusing on two main frameworks: Keras and TensorFlow.
Deep Q-Network (DQN) implementation for optimal maintenance planning of 100-bridge fleet infrastructure using advanced reinforcement learning techniques and vectorized parallel training.
Deep Q-Network implementation for optimal bridge maintenance planning using Markov Decision Process formulation with vectorized parallel training. Based on Phase 3 (Vectorized DQN) from dql-maintenance-faster project.
tic-tac-toe with q-learning
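The tabular Q-learning update such projects rely on can be shown on a much smaller problem than tic-tac-toe. The corridor environment and hyperparameters below are illustrative choices, not taken from the repository:

```python
import random

# Minimal tabular Q-learning on a 5-state corridor (a tiny stand-in for a
# game state space): actions move left/right, reward 1.0 at the right end.
N_STATES, ACTIONS = 5, (-1, +1)
alpha, gamma, epsilon = 0.5, 0.9, 0.1     # illustrative hyperparameters
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    s2 = min(max(s + a, 0), N_STATES - 1)
    reward = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, reward, s2 == N_STATES - 1  # next state, reward, done

random.seed(0)
for _ in range(200):                       # episodes
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # The Q-learning update: bootstrap from the best next-state value.
        target = r if done else r + gamma * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

# The learned greedy policy moves right from every non-terminal state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
```

Tic-tac-toe uses the same update; only the state encoding (board positions instead of corridor cells) and the reward signal (win/lose/draw) change.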
Nano-Qwen ⚡🧠 - A from-scratch implementation of a lightweight Qwen-style transformer designed for clarity and experimentation. Includes an efficient Mixture-of-Experts (MoE) architecture, plus built-in support for parallel and distributed training to scale models from a minimal codebase.
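The top-k gating that makes MoE layers efficient — score all experts, keep only the k highest, renormalize their weights, and mix just those experts' outputs — can be sketched in a few lines. The experts and scores below are hypothetical stand-ins, not Nano-Qwen's actual code:

```python
import math

# Toy top-k MoE router: softmax over router scores, select the top-k experts,
# renormalize their probabilities, and mix only those experts' outputs.
# Experts are simple scalar functions here; all names are illustrative.
EXPERTS = [lambda x: 2 * x, lambda x: x + 1, lambda x: x * x, lambda x: -x]

def softmax(scores):
    m = max(scores)                        # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def moe_forward(x, router_scores, k=2):
    probs = softmax(router_scores)
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in top)      # renormalize over the selected experts
    return sum(probs[i] / norm * EXPERTS[i](x) for i in top)

# Only the two highest-scoring experts (indices 0 and 1) contribute here;
# the other two are never evaluated, which is where the compute savings come from.
y = moe_forward(3.0, router_scores=[2.0, 1.0, -1.0, -2.0], k=2)
```

With k fixed and small, the per-token compute stays roughly constant as the number of experts grows, which is what lets MoE models scale parameters without scaling FLOPs.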
Parallel hyperparameter tuning for Metaflow with true adaptive TPE — no sequential bottleneck