Extending an LLM for new knowledge sources is tedious—fine-tuning is expensive/causes forgetting, LoRA is restrictive. Excited to share our work where we show that an LLM can be efficiently *composed* with specialized (L)LMs to enable new tasks! https://t.co/DBZ8aQNQTo 🧵(1/8) https://t.co/R0h4JEFid8
— Rachit Bansal (@rach_it_) Jan 5, 2024
from Twitter https://twitter.com/rach_it_
January 05, 2024 at 08:01PM
via IFTTT
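
The thread describes composing a base LLM with specialized (L)LMs instead of fine-tuning it. As a rough illustration of that composition idea, below is a minimal PyTorch sketch in which small trainable cross-attention "bridge" layers let an anchor model attend to a specialized model's hidden states while both base models stay frozen. The class names, hidden sizes, and layer pairing are illustrative assumptions, not the exact recipe from the linked work.

# Minimal sketch: a frozen "anchor" LLM attends over a frozen specialized
# model's hidden states through small trainable cross-attention bridges,
# so only the bridge parameters are trained. Dimensions and names are
# illustrative assumptions.
import torch
import torch.nn as nn

class CrossAttentionBridge(nn.Module):
    """Trainable bridge: anchor hidden states attend to augmenting-model states."""
    def __init__(self, anchor_dim: int, aug_dim: int, n_heads: int = 8):
        super().__init__()
        self.proj = nn.Linear(aug_dim, anchor_dim)   # map augmenting dim -> anchor dim
        self.attn = nn.MultiheadAttention(anchor_dim, n_heads, batch_first=True)

    def forward(self, anchor_h: torch.Tensor, aug_h: torch.Tensor) -> torch.Tensor:
        kv = self.proj(aug_h)                        # (batch, T_aug, anchor_dim)
        out, _ = self.attn(query=anchor_h, key=kv, value=kv)
        return anchor_h + out                        # residual composition

# Usage sketch: both base models stay frozen; only the bridges get gradients.
anchor_dim, aug_dim = 4096, 2048                     # assumed hidden sizes
bridges = nn.ModuleList(CrossAttentionBridge(anchor_dim, aug_dim) for _ in range(4))

anchor_h = torch.randn(1, 16, anchor_dim)            # stand-in for an anchor layer's states
aug_h = torch.randn(1, 16, aug_dim)                  # stand-in for the specialized model's states
composed = bridges[0](anchor_h, aug_h)               # would feed the anchor's next layer

In a setup like this only the bridge parameters are trained, which is what would make composition cheaper than full fine-tuning and avoid overwriting the base model's weights.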