Prefix tuning and adapters are two of the three most widely used parameter-efficient finetuning methods for large language models (LLMs). If you look closely, the recent LLaMA-Adapter method that made big waves is actually a prefix tuning method, not an adapter method. 1/2 https://t.co/Q9y0pn9NHS https://t.co/DN6TDfyNGA
— Sebastian Raschka (@rasbt) Apr 9, 2023
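To make the distinction the tweet draws concrete, here is a minimal sketch of the two families in PyTorch: prefix tuning prepends learnable vectors to the attention keys and values, while an adapter inserts a small bottleneck MLP after a sublayer. This is an illustrative sketch, not LLaMA-Adapter's actual code; the class names, prefix length, and bottleneck size are assumptions chosen for clarity.

```python
import torch
import torch.nn as nn

class PrefixTuningAttention(nn.Module):
    """Prefix tuning (sketch): learnable prefix vectors are prepended to
    the attention keys and values; the base model weights stay frozen."""
    def __init__(self, embed_dim: int, num_heads: int, prefix_len: int = 10):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        # Only these prefix parameters would be trained.
        self.prefix_k = nn.Parameter(torch.randn(prefix_len, embed_dim) * 0.02)
        self.prefix_v = nn.Parameter(torch.randn(prefix_len, embed_dim) * 0.02)

    def forward(self, x):  # x: (batch, seq_len, embed_dim)
        batch = x.size(0)
        # Prepend the learned prefixes to keys and values for every sample.
        k = torch.cat([self.prefix_k.expand(batch, -1, -1), x], dim=1)
        v = torch.cat([self.prefix_v.expand(batch, -1, -1), x], dim=1)
        out, _ = self.attn(x, k, v)
        return out

class Adapter(nn.Module):
    """Adapter (sketch): a small bottleneck MLP inserted after a frozen
    sublayer, merged back through a residual connection."""
    def __init__(self, embed_dim: int, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Linear(embed_dim, bottleneck)
        self.up = nn.Linear(bottleneck, embed_dim)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))
```

The key difference: prefix tuning attaches its trainable parameters to the attention inputs themselves, whereas an adapter is a separate module inserted between layers. The tweet's point is that LLaMA-Adapter works like the first pattern, not the second, despite its name.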