ChatGPT, Bard, GPT-4, and the like are often pitched as ways to retrieve information. The problem is they'll "retrieve" whatever you ask for, whether or not it exists.
Tumblr user @indigofoxpaws sent me a few screenshots where they'd asked ChatGPT for an explanation of the nonexistent "Linoleum harvest" Tumblr meme:
I think this demonstrates, in a way laypeople can readily grasp, some of the shortcomings of LLMs as a whole.
An LLM never lies because it cannot lie.
I would say the specific shortcoming being demonstrated here is the inability of LLMs to determine whether a piece of information is factual (not that they're even dealing with "pieces of information" like that in the first place). They are also unable to tell whether a human questioner is being truthful, misleading, plainly lying, honestly mistaken, or nonsensical. Of course, which of those is the case matters a great deal in a conversation that ought to have its basis in fact.
Indeed, and all it takes is one lie to send it down that road.
For example, I asked ChatGPT how to teach my cat to ice skate, with predictable admonishment:
But after I reassured it that my cat loves ice skating, it changed its tune:
Even after I told it I had lied and my cat doesn't actually like ice skating, its acceptance of my earlier lie still shaped its answers:
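Part of why the lie lingers (this is a simplification, and the message format below is purely illustrative, not any real API): chat-style LLMs are typically stateless, so the client resends the entire transcript on every turn. An accepted falsehood stays in the model's context window even after you retract it; the retraction is just one more message stacked on top. A rough sketch:

```python
# Hypothetical sketch of a stateless chat loop -- not a real library call.
# Each turn, the full history (including any lies the model accepted)
# is sent back to the model along with the new user message.

def build_prompt(history, user_message):
    """Append the new user turn; the model sees everything before it."""
    return history + [{"role": "user", "content": user_message}]

history = [
    {"role": "user", "content": "How do I teach my cat to ice skate?"},
    {"role": "assistant", "content": "Cats generally dislike ice; I wouldn't."},
    {"role": "user", "content": "No, my cat genuinely loves ice skating."},
    {"role": "assistant", "content": "In that case, start with short sessions."},
]

# The retraction becomes just another turn; the original lie is still
# right there in the context the model conditions on.
prompt = build_prompt(history, "Actually, I lied about my cat.")
```

So "telling it you lied" never deletes the lie from the context; the model simply weighs contradictory turns against each other.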
Thank you for putting it far more eloquently than I could have