🦙 Ollama Embeddings
Ollama by @jmorgan and co makes running LLMs locally a breeze. Thanks to @herrjemand, you can now also connect to local text embedding models being run with Ollama, in 🦜🔗! Check it out: https://t.co/diJtNudE8b https://t.co/mTgy2kGgzp
— LangChain (@LangChainAI) Sep 15, 2023
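
As a rough sketch of what the integration looks like in practice (assuming a local Ollama server running on its default port and a model such as "llama2" already pulled with `ollama pull llama2`), LangChain's OllamaEmbeddings class can be used like any other embeddings backend:

# Minimal sketch: local embeddings via Ollama in LangChain.
# Assumes an Ollama server at http://localhost:11434 and a pulled model ("llama2" here).
from langchain.embeddings import OllamaEmbeddings

embeddings = OllamaEmbeddings(
    base_url="http://localhost:11434",  # default Ollama endpoint
    model="llama2",                     # any model available to your local Ollama
)

# Embed a single query string.
query_vector = embeddings.embed_query("How do I run LLMs locally?")
print(len(query_vector))  # dimensionality of the returned embedding

# Embed a batch of documents, e.g. before loading them into a vector store.
doc_vectors = embeddings.embed_documents([
    "Ollama makes running LLMs locally a breeze.",
    "LangChain now supports local text embeddings served by Ollama.",
])
print(len(doc_vectors), len(doc_vectors[0]))

Because everything runs against the local Ollama server, no API key or remote embedding service is needed; the resulting vectors can be dropped into whichever vector store you already use with LangChain.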
from Twitter https://twitter.com/LangChainAI
September 15, 2023 at 10:13AM
via IFTTT