Source code on GitHub

Hello, human!

Can you spot the hallucination?


LLM-powered AIs are well known for "hallucinations," where responses can include inaccurate or completely made-up information.

While perhaps surprising to learn, it can be helpful to realize that the underlying AI model doesn't inherently know what "today" is. It has no idea!

So if we ask the AI to provide information on something that happened on this day in history, it may hallucinate. See for yourself: try the prompt above without checking "include additional grounding data" and see whether the response is contextually accurate. Is it a fact from history that happened ON THIS DAY, or is it from some other day?

One of the techniques for battling this is to provide supplementary data that is trusted, to help "ground" the AI so it produces more accurate results. One powerful form of this is commonly known as the RAG (Retrieval-Augmented Generation) pattern, of which this page demonstrates a very simple use by providing today's date.
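To make the idea concrete, here is a minimal sketch (in Python, which may not be the language of the linked source code) of this grounding step: injecting today's date into the prompt before it is sent to the model. The `call_llm` function is a hypothetical placeholder for whatever chat-completion client the demo actually uses; its name and signature are assumptions, not part of this project.

```python
from datetime import date


def call_llm(system_message: str, user_prompt: str) -> str:
    # Hypothetical stand-in for the real chat-completion call used by the demo
    # (e.g., an Azure OpenAI or OpenAI SDK client). Replace with your own client.
    raise NotImplementedError("Wire up your chat-completion API of choice here.")


def build_grounded_prompt(user_prompt: str) -> tuple[str, str]:
    # The grounding data: today's date, which the model cannot know on its own.
    today = date.today().strftime("%B %d, %Y")
    system_message = (
        f"Today's date is {today}. "
        "Use this date when answering questions about 'today' or 'this day in history'."
    )
    return system_message, user_prompt


if __name__ == "__main__":
    system_message, prompt = build_grounded_prompt(
        "Tell me about an interesting event that happened on this day in history."
    )
    print(system_message)
    # answer = call_llm(system_message, prompt)  # uncomment once a real client is wired up
```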

Note that this additional grounding does NOT stop all of the hallucinations, just the first one. But we are making progress. To be continued at Boston Azure Bootcamp.