ROSA: Random Subspace Adaptation for Efficient Fine-Tuning

By Marawan Gamal Abdel Hameed and others
Model training requires significantly more memory than inference. Parameter-efficient fine-tuning (PEFT) methods provide a means of adapting large models to downstream tasks using less memory. However, existing methods such as adapters, prompt tuning, or low-rank adaptation (LoRA) either introduce latency overhead at inference time or achieve subpar downstream...
July 10, 2024
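
As context for the baselines named in the abstract, below is a minimal sketch of a LoRA-style adapted linear layer. It illustrates the general low-rank adaptation idea only; it is not ROSA's method (which the truncated abstract does not specify), and the class name `LoRALinear`, rank, and scaling values are illustrative assumptions.

```python
# Hypothetical sketch of a LoRA-style adapter, not the ROSA method itself.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen dense layer augmented with a trainable low-rank update B @ A."""

    def __init__(self, in_features: int, out_features: int,
                 rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)   # pretrained weights stay frozen
        self.base.bias.requires_grad_(False)
        # Low-rank factors: A starts small and random, B starts at zero so the
        # adapted layer initially matches the frozen base layer exactly.
        self.A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Output of the frozen layer plus the scaled low-rank correction.
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)


if __name__ == "__main__":
    layer = LoRALinear(768, 768, rank=8)
    y = layer(torch.randn(4, 768))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(y.shape, trainable)  # torch.Size([4, 768]), ~12k trainable params
```

Only the two small factors A and B receive gradients, which is what gives PEFT methods their memory savings during training; the low-rank update can be merged into the frozen weight after training so no extra latency is paid at inference.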