Thanks to parameter-efficient finetuning techniques, you can finetune a 7B LLM on a single GPU in 1-2 hours using methods like low-rank adaptation (LoRA). Just wrote a new article explaining how LoRA works & how to finetune a pretrained LLM like LLaMA: https://t.co/zxi4gNcJdX
— Sebastian Raschka (@rasbt) Apr 26, 2023
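For readers curious what "low-rank adaptation" looks like in practice, here is a minimal sketch of the core idea, assuming PyTorch. The class name LoRALinear and the rank/alpha values are illustrative placeholders, not code from the linked article: a pretrained weight matrix is frozen, and only a small low-rank update is trained on top of it, which is why the whole thing fits on a single GPU.

# Minimal LoRA sketch (assumes PyTorch); LoRALinear, rank, and alpha are
# illustrative names, not taken from the linked article or any library.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a frozen linear layer and adds a trainable low-rank update.

    The effective weight is W + (alpha / rank) * B @ A, so only
    rank * (in_features + out_features) parameters are trained instead
    of in_features * out_features.
    """

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pretrained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.scaling = alpha / rank
        # A starts small and random, B starts at zero, so the update is
        # initially a no-op and training only adjusts the low-rank matrices.
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen pretrained path plus the trainable low-rank path.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(4096, 4096), rank=8)
    out = layer(torch.randn(2, 4096))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(out.shape, trainable)  # far fewer trainable params than 4096 * 4096

With rank 8 on a 4096x4096 layer, the trainable update has roughly 65K parameters versus about 16.8M in the frozen weight, which is the source of the memory savings the tweet refers to.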