Low-Rank Adaptation (LoRA)


A fine-tuning technique that freezes a model's pretrained weights and trains only small low-rank matrices injected into selected layers. Because far fewer parameters are updated, it is memory-efficient and cost-effective, allowing for quicker model training. A sketch of the idea follows.
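
As a rough illustration (not part of the original entry), here is a minimal PyTorch sketch of the technique: a frozen linear layer augmented with a trainable low-rank update. The class name `LoRALinear` and the hyperparameter choices (`rank`, `alpha`) are illustrative assumptions, not a reference implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen pretrained linear layer plus a trainable low-rank update (sketch)."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        # Freeze the pretrained weights; only the low-rank factors A and B train.
        for p in self.base.parameters():
            p.requires_grad = False
        # Effective weight becomes W + (alpha / rank) * B @ A, with rank << min(d_in, d_out).
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        # B starts at zero so the adapted model initially matches the pretrained one.
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base output plus the scaled low-rank correction.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling


# Usage: wrap a hypothetical 768->768 projection. Only the two small factor
# matrices (2 * 768 * 8 parameters) are trained, not the full 768 * 768 weight.
layer = LoRALinear(nn.Linear(768, 768), rank=8)
x = torch.randn(4, 768)
print(layer(x).shape)  # torch.Size([4, 768])
```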