🌵LangChain x GPTCache🌵 Caching can be useful in certain situations to save money on repeated LLM calls. GPTCache is a project that caches requests/responses not only for the exact prompt, but also based on semantic similarity. h/t Frank Liu for adding. Docs: https://t.co/pm1ubEldYV
— langchain (@LangChainAI) Apr 13, 2023
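For context, here is a minimal sketch of how the integration could be wired up, assuming the GPTCache wrapper exposed in langchain.cache and GPTCache's init_similar_cache helper (the linked docs have the canonical setup; the directory layout and example prompts below are illustrative only):

```python
import langchain
from langchain.cache import GPTCache            # assumed LangChain-side wrapper
from langchain.llms import OpenAI
from gptcache import Cache
from gptcache.adapter.api import init_similar_cache  # GPTCache helper for similarity-based caching

def init_gptcache(cache_obj: Cache, llm: str):
    # Give each underlying LLM its own cache directory (hypothetical layout).
    init_similar_cache(cache_obj=cache_obj, data_dir=f"similar_cache_{llm}")

# Install the cache globally; LLM calls made through LangChain will consult it first.
langchain.llm_cache = GPTCache(init_gptcache)

llm = OpenAI(model_name="text-davinci-003")
print(llm("Tell me a joke"))    # first call goes to the API and is cached
print(llm("Tell me one joke"))  # a semantically similar prompt can be served from the cache
```

Because GPTCache matches on embeddings rather than exact strings, the second, slightly reworded prompt can reuse the cached response instead of triggering another paid API call.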