Hi everyone,
We have recently written an article on HF’s blog on automatic hallucination detection using inconsistency scoring. The main idea is that hallucinations arise when the task posed at inference was not seen in the training set; this leads to low confidence in the next token, and therefore to inconsistent samples when the same prompt is generated several times (https://arxiv.org/abs/2309.13638).
We looked at SelfCheckGPT NLI (https://arxiv.org/abs/2303.08896), an example of inconsistency scoring, on WikiBio and found that the metric has high precision (passages flagged as hallucinations really are hallucinations) and calibrated recall (the higher the score, the more likely the passage actually is a hallucination).
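For anyone who wants to play with the idea quickly, here is a minimal from-scratch sketch of NLI-based inconsistency scoring in the spirit of SelfCheckGPT NLI. This is not the exact code from the blog post or from the official selfcheckgpt package; the model checkpoint and function names are illustrative choices:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Any MNLI-style checkpoint works; this one is an illustrative choice.
NLI_MODEL = "microsoft/deberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(NLI_MODEL)
nli = AutoModelForSequenceClassification.from_pretrained(NLI_MODEL).eval()

# Look up the "contradiction" label index from the model config instead of hardcoding it.
CONTRA_IDX = next(
    int(i) for i, label in nli.config.id2label.items()
    if label.lower().startswith("contra")
)

def contradiction_prob(premise: str, hypothesis: str) -> float:
    """P(contradiction) of `hypothesis` given `premise` under the NLI model."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = nli(**inputs).logits.softmax(dim=-1)[0]
    return probs[CONTRA_IDX].item()

def inconsistency_score(sentence: str, sampled_answers: list[str]) -> float:
    """Average contradiction probability of `sentence` against answers resampled
    from the same prompt. High score = the samples disagree = likely hallucination."""
    return sum(contradiction_prob(s, sentence) for s in sampled_answers) / len(sampled_answers)

# Usage: generate one main answer plus a few resampled answers (temperature > 0)
# for the same prompt, then score each sentence of the main answer, e.g.:
# score = inconsistency_score("Marie Curie was born in 1934.", sampled_answers)
```

The resampled answers act as premises and each sentence of the main answer as a hypothesis; averaging the contradiction probability over samples gives a per-sentence hallucination score.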
This is quite promising, as it could open the way to more reliable AI systems: when the task is easy, we let the AI handle it; when we detect that the task is too hard and the model is hallucinating, we put a human in the loop.
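As a toy illustration of that routing idea (the threshold and function name below are hypothetical, not taken from the blog post):

```python
# Hypothetical human-in-the-loop routing based on the per-sentence scores above.
HALLUCINATION_THRESHOLD = 0.5  # illustrative value; tune it on a validation set

def route(sentence_scores: list[float]) -> str:
    """Escalate to a human reviewer if any sentence looks hallucinated."""
    if max(sentence_scores) > HALLUCINATION_THRESHOLD:
        return "escalate_to_human"  # the model looks unreliable on this prompt
    return "auto_answer"            # samples are consistent; let the AI answer
```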
We have provided:
- An article on HF Blog: https://huggingface.co/blog/dhuynh95/automatic-hallucination-detection
- A Gradio demo to see the metric in action: https://huggingface.co/spaces/mithril-security/hallucination_detector
- A Colab notebook to reproduce our results: https://colab.research.google.com/drive/1Qhq2FO4FFX_MKN5IEgia_PrBEttxCQG4?usp=sharing
We conducted these tests as part of our mission to build Confidential and Trustworthy Conversational AI. You can check out our core project, BlindChat, an open-source, Confidential Conversational AI (any data sent to our AI remains private; not even our admins can see your prompts), at https://github.com/mithril-security/blind_chat/
That is very cool.
May I suggest doing a separate post to show off all the incredible work you all are doing with secure enclaves? It’s important work and, as far as I have seen, few have noticed.
A walk-through to get Whisper, Mistral 7B, and Stable Diffusion up and running in an enclave would go a long way towards getting folks hip to the impact of what you’re building.
That’s cool. For RAG tasks, hallucinations can still happen if the retrieved document is unrelated to the question. Can this method be used in RAG? I am not sure.