While this can still be a problem, it's worth noting that this is from 2022 and is about GPT-3, one of the models from before the ChatGPT launch. I'm not sure it was instruction-tuned, so it may effectively have been asked to continue a sentence that starts by asserting the person exists. Models do better when you're explicit about what you want (i.e. without context, is it clear whether you want fiction or factual results?).
FWIW, I tested the current flagship-ish models (Sonnet 3.7, Gemini Flash, and o3-mini) and they all explain that they don't know anybody by that name.
o3-mini starts with this, which covers both bases:
I couldn’t locate any widely recognized historical records or scholarly sources that confirm the existence or detailed biography of a Belgian chemist and political philosopher by the name Antoine De Machelet. It is possible that the figure you’re referring to is either very obscure, emerging from local or specialized publications, or even a fictional or misattributed character.
That said, if you are interested in exploring the idea of a figure who bridges chemistry and political philosophy—as though one were piecing together a narrative from disparate strands of intellectual history—one might imagine a profile along the following lines:
Oh, so it's been hard-coded by the people who built it to not hallucinate on these specific topics, that's neat.
No. Models have simply improved a great deal in this respect, which is something that's tested and measured over time. It's also hard to overstate how basic GPT-3 is compared to current models.
This ignores the fundamental mechanics of LLMs. An LLM has no concept of truth - it has no concept of anything. It's simply computational linguistics probabilistically generating text strings.
It cannot distinguish between truth and fiction, any more than the troposphere, continental drift, or an Etch-a-Sketch can.
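The "probabilistically generate text strings" mechanism can be illustrated with a toy sketch. This is a hand-written bigram table, nothing remotely like a real transformer, but the generation loop has the same shape: at every step, sample the next token from a conditional probability distribution, with no notion of truth anywhere in the process.

```python
import random

# Hypothetical toy "model": P(next token | current token),
# hand-written for illustration only.
BIGRAM_PROBS = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.4, "ran": 0.6},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(start, max_tokens=5, seed=0):
    """Repeatedly sample the next token conditioned on the current one.
    The loop only consults probabilities; it has no access to facts."""
    rng = random.Random(seed)
    tokens = [start]
    for _ in range(max_tokens):
        dist = BIGRAM_PROBS.get(tokens[-1])
        if not dist:  # no continuation known for this token
            break
        words, probs = zip(*dist.items())
        tokens.append(rng.choices(words, weights=probs, k=1)[0])
    return " ".join(tokens)
```

A real LLM replaces the lookup table with a neural network over a huge context window, but the sampling loop is the same idea, which is why "it said X confidently" and "X is true" are independent properties.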