People ask me about foundational models for RAG with larger context windows, like GPT-4/Gemini Flash, and how they relate to ColPali. In this case, a 62-page PDF occupies 34,567 tokens so that we can fit about 30 of those into the LLM context window. I'll argue that there… https://t.co/uu1hMcblZW https://t.co/AUxPklWyiP
— Jo Kristian Bergum (@jobergum) Sep 20, 2024
from Twitter https://twitter.com/jobergum
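A quick back-of-the-envelope check of the fit claim in the tweet, assuming a roughly 1M-token context window (the window size is my assumption, e.g. Gemini 1.5 Flash; the per-PDF token count comes from the tweet):

```python
# Back-of-the-envelope: how many 62-page PDFs fit in a long context window?
TOKENS_PER_PDF = 34_567      # from the tweet: one 62-page PDF
CONTEXT_WINDOW = 1_000_000   # assumed ~1M-token window (e.g., Gemini 1.5 Flash)

docs_that_fit = CONTEXT_WINDOW // TOKENS_PER_PDF
print(docs_that_fit)  # → 28, in line with the tweet's "about 30"
```

This ignores tokens reserved for the prompt, instructions, and the model's response, so the practical number is a bit lower still.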