AutoLoRA: Automatically Tuning Matrix Ranks in Low-Rank Adaptation Based on Meta Learning
Large-scale pretraining followed by task-specific finetuning has achieved great success in various NLP tasks. Since finetuning all parameters of large pretrained models poses substantial computational and memory challenges, several efficient finetuning methods have been developed. Among them, low-rank adaptation (LoRA), which finetunes low-rank incremental update matrices on top of frozen pretrained weights, has proven particularly effective. Nonetheless, LoRA’s uniform rank assignment across all layers, along with its reliance on an exhaustive search to find the best rank, leads to high computation costs and suboptimal finetuning performance. To address these limitations, we introduce AutoLoRA, a meta-learning-based framework for automatically identifying the optimal rank of each LoRA layer. AutoLoRA associates each rank-1 matrix in a low-rank update matrix with a selection variable, which determines whether the rank-1 matrix should be discarded. A meta-learning-based method is developed to learn these selection variables. The optimal rank is determined by thresholding the values of these variables. Our comprehensive experiments on natural language understanding, generation, and sequence labeling demonstrate the effectiveness of AutoLoRA.
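The core idea of the abstract can be illustrated with a small sketch: a LoRA update ΔW = BA is rewritten as a sum of rank-1 matrices, each scaled by a selection variable, and thresholding those variables yields the layer's final rank. This is a hypothetical NumPy illustration of the decomposition and thresholding only; the variable names and the example values of the selection variables are assumptions (in AutoLoRA the selection variables are learned by the meta-learning procedure, which is not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, k = 8, 6, 4  # output dim, input dim, initial LoRA rank k

B = rng.standard_normal((d_out, k))  # LoRA factor: columns B[:, j]
A = rng.standard_normal((k, d_in))   # LoRA factor: rows A[j, :]

# Selection variables, one per rank-1 component (values assumed here;
# in AutoLoRA these would be learned via meta learning).
alpha = np.array([0.9, 0.05, 0.6, 0.02])

# The low-rank update as a sum of alpha-weighted rank-1 matrices.
delta_W = sum(alpha[j] * np.outer(B[:, j], A[j, :]) for j in range(k))

# Thresholding the selection variables determines the final rank.
threshold = 0.5
kept = [j for j in range(k) if alpha[j] >= threshold]
final_rank = len(kept)

# The pruned update keeps only the retained rank-1 components.
pruned_W = sum(alpha[j] * np.outer(B[:, j], A[j, :]) for j in kept)
```

Note that the weighted sum of rank-1 terms is algebraically identical to B·diag(α)·A, which is why discarding components with small α directly reduces the rank of the update.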
Further reading
- Access Paper in arXiv.org