
LoRA-Mini: Adaptation Matrices Decomposition and Selective Training

This repository contains the official implementation of the paper LoRA-Mini: Adaptation Matrices Decomposition and Selective Training, accepted at the AAAI CoLoRAI Workshop 2025.

Introduction

Recent advances in Large Language Models (LLMs) have highlighted the need for efficient fine-tuning methods. While Low-Rank Adaptation (LoRA) was a significant step toward parameter-efficient fine-tuning, it still presents storage challenges.

This paper introduces LoRA-Mini, an optimized adaptation of LoRA that improves parameter efficiency by decomposing the low-rank matrices and training them selectively. Our approach splits the low-rank matrices into four parts, of which only the two inner matrices are trainable. This achieves up to a 20x reduction in trainable parameters compared to standard LoRA while maintaining comparable performance.
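To make the decomposition concrete, below is a minimal PyTorch sketch of such a layer. The module name LoRAMiniLinear, the split dimension m, and the initialization scheme are illustrative assumptions for this sketch, not the repository's actual implementation; refer to the code in this repo for the real thing.

import torch
import torch.nn as nn

class LoRAMiniLinear(nn.Module):
    """Illustrative sketch of a LoRA-Mini-style layer (hypothetical, not the repo's code).

    Standard LoRA learns delta_W = B @ A with B (d_out x r) and A (r x d_in).
    LoRA-Mini splits each factor in two, B = B_outer @ B_inner and
    A = A_inner @ A_outer, freezes the outer pair, and trains only the
    inner pair, shrinking the trainable count from r * (d_in + d_out)
    to 2 * m * r for a small split dimension m.
    """

    def __init__(self, base: nn.Linear, r: int = 16, m: int = 4, alpha: float = 16.0):
        super().__init__()
        d_out, d_in = base.weight.shape
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pretrained weight stays frozen, as in LoRA

        # Frozen outer matrices (random init here; the paper's init may differ).
        self.B_outer = nn.Parameter(torch.randn(d_out, m) * 0.02, requires_grad=False)
        self.A_outer = nn.Parameter(torch.randn(m, d_in) * 0.02, requires_grad=False)
        # Trainable inner matrices; B_inner starts at zero so delta_W is zero at init.
        self.B_inner = nn.Parameter(torch.zeros(m, r))
        self.A_inner = nn.Parameter(torch.randn(r, m) * 0.02)
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # delta_W = B_outer @ B_inner @ A_inner @ A_outer, shape (d_out, d_in)
        delta_w = self.B_outer @ self.B_inner @ self.A_inner @ self.A_outer
        return self.base(x) + self.scaling * (x @ delta_w.T)

For example, with d_in = d_out = 4096, r = 16, and m = 4, the trainable inner pair holds only 2 * m * r = 128 values, versus r * (d_in + d_out) = 131,072 for plain LoRA. The exact split sizes and initialization used in the paper may differ; this sketch only illustrates the selective-training idea.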


Installation

To get started, clone the repository and install the required dependencies:

git clone https://github.com/RajdeepAher/lora-mini.git
cd lora-mini
pip install -r requirements.txt

Datasets

Results

Reproducing Results

Citation

If you find our work useful, please cite our paper:

@misc{singh2024loraminiadaptationmatrices,
      title={LoRA-Mini : Adaptation Matrices Decomposition and Selective Training}, 
      author={Ayush Singh and Rajdeep Aher and Shivank Garg},
      year={2024},
      eprint={2411.15804},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2411.15804}, 
}

Contact

For any questions or suggestions, please feel free to open an issue on this repository or contact the authors.
