We just released LLaMA-2-7B-32K, a 32K-context model that can be fine-tuned for tasks like document understanding, summarization & QA! Built with Position Interpolation & our data recipe/optimizations; run inference & fine-tune with up to 3x speedup. Thread👇 https://t.co/qaxFI0Xb9M
— Together AI (@togethercompute) Jul 28, 2023
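Position Interpolation, the technique named in the tweet, extends a RoPE model's context window by rescaling position indices back into the range seen during pretraining instead of extrapolating beyond it. Below is a minimal sketch of that idea, not Together's actual implementation; the function names and the 4K→32K values are illustrative assumptions.

```python
import torch

def rope_frequencies(head_dim: int, base: float = 10000.0) -> torch.Tensor:
    """Standard RoPE inverse frequencies for a given attention head dimension."""
    return 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))

def interpolated_rope_angles(
    seq_len: int,
    head_dim: int,
    train_ctx: int = 4096,    # original pretraining context length (assumed)
    target_ctx: int = 32768,  # extended context length (assumed)
) -> torch.Tensor:
    """Position Interpolation: scale position indices by train_ctx / target_ctx
    so positions in the extended window map into the pretraining range."""
    scale = train_ctx / target_ctx          # < 1 when extending the context
    positions = torch.arange(seq_len).float() * scale
    freqs = rope_frequencies(head_dim)
    # One rotation angle per (position, frequency) pair, as in standard RoPE.
    return torch.outer(positions, freqs)
```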