"Using LLMs to detect the level of hallucinations in the responses of other LLMs." Sounds odd, but there is already promising research: •G-Eval •GPTScore •SelfCheckGPT •TRUE •ChatProtect •Chainpoll I am curious about your thoughts on using LLMs to quantify hallucinations. https://t.co/iPu1GWNIvY
— Leonie (@helloiamleonie) Dec 27, 2023
from Twitter https://twitter.com/helloiamleonie
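The methods listed in the tweet share a common idea: use an LLM's own behavior as a signal for factuality. SelfCheckGPT, for example, samples several responses to the same prompt and flags sentences that the other samples do not support — if the model "knows" a fact, it should state it consistently. Below is a minimal sketch of that consistency-scoring step under assumed simplifications: the token-overlap metric and the function names are illustrative (real implementations use stronger scorers such as NLI or QA-based checks), and the sampled responses are passed in rather than fetched from a live model.

```python
def token_overlap(sentence: str, sample: str) -> float:
    """Fraction of the sentence's tokens that also appear in the sample."""
    s = set(sentence.lower().split())
    t = set(sample.lower().split())
    return len(s & t) / len(s) if s else 0.0

def hallucination_score(sentence: str, samples: list[str]) -> float:
    """SelfCheckGPT-style consistency score for one sentence.

    Returns 1.0 when no sampled response supports the sentence
    (likely hallucinated) and 0.0 when it is fully supported.
    """
    if not samples:
        return 1.0  # nothing to check against: treat as unsupported
    # Support is the best overlap with any independent sample.
    support = max(token_overlap(sentence, s) for s in samples)
    return 1.0 - support
```

A sentence repeated consistently across samples scores near 0.0, while one that appears in only a single response drifts toward 1.0 — which is the core intuition behind sampling-based hallucination detection.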