79242097

Date: 2024-12-01 17:24:09
Score: 0.5
Natty:
Report link

All Parameter-Efficient Fine-Tuning (PEFT) methods train a small number of extra parameters while keeping the weights of the pretrained model frozen. There are several PEFT methods, including prompt-tuning, prefix-tuning, and LoRA, among others.

In the case of LoRA, which is the method you are using, the weight update for each targeted layer is represented as the product of two low-rank matrices. The target_modules option specifies which layers of the model receive these LoRA adapters. For example, q_lin refers to the linear projection that computes the query vectors in the model's multi-head attention mechanism. By listing it in target_modules, you tell PEFT to attach LoRA adapters (the low-rank matrices that capture the weight changes for that layer) to q_lin and to any other target layers you specify. Only those adapter parameters are optimized during fine-tuning; the original pretrained weights stay frozen.
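Since the original question's code is not shown here, the following is only a minimal sketch of how target_modules is typically used with Hugging Face's peft library. The checkpoint name, rank, alpha, and dropout values are illustrative assumptions; q_lin is the query-projection name used in DistilBERT-style models.

```python
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model

# Load a pretrained model; its original weights will stay frozen.
# (The checkpoint and label count are assumptions for this sketch.)
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

lora_config = LoraConfig(
    r=8,                       # rank of the two low-rank matrices
    lora_alpha=16,             # scaling factor applied to the low-rank update
    lora_dropout=0.05,
    target_modules=["q_lin"],  # attach LoRA adapters to the query projection layers
    task_type="SEQ_CLS",
)

# Wrap the model: LoRA adapters are injected into every module named q_lin,
# and only those adapter parameters are marked as trainable.
peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()
```

print_trainable_parameters() confirms the point above: only a small fraction of the total parameter count (the LoRA matrices) is trainable, while the base model's weights remain frozen.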

Reasons:
  • Long answer (-0.5):
  • Has code block (-0.5):
  • Single line (0.5):
  • Low reputation (1):
Posted by: Ali Moameri