AI Hallucination
Just heard about this, and thought I’d share …
For example (from the article):
“By 2023, analysts considered frequent hallucination to be a major problem in LLM technology, with a Google executive identifying hallucination reduction as a "fundamental" task for ChatGPT competitor Google Gemini.[9][54] A 2023 demo for Microsoft's GPT-based Bing AI appeared to contain several hallucinations that went uncaught by the presenter.”
“In October 2025, several hallucinations, including non-existent academic sources and a fake quote from a federal court judgement, were discovered in an A$440,000 report written by Deloitte and submitted to the Australian government in July. The company later submitted a revised report with these errors removed, and will issue a partial refund to the government.”
“Hallucinations by AI models can cause problems in academic and scientific research. In multiple documented cases, models like ChatGPT have cited sources for information that are either incorrect or do not exist.”
“On top of providing incorrect or missing reference material, ChatGPT also has issues with hallucinating the contents of some reference material. A study that analyzed a total of 115 references provided by ChatGPT-3.5 documented that 47% of them were fabricated. Another 46% cited real references but extracted incorrect information from them. Only the remaining 7% of references were cited correctly and provided accurate information. ChatGPT has also been observed to "double down" on much of the incorrect information. When asked about a mistake that may have been hallucinated, sometimes ChatGPT will try to correct itself, but other times it will claim the response is correct and provide even more misleading information.”
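Side note: the percentages in that last study imply roughly 54 fabricated, 53 misquoted and 8 correct references out of the 115 analyzed. Those whole-number counts are my own back-of-the-envelope rounding, not figures reported by the study; here is a quick Python sketch just to check the arithmetic:

    # Back-of-the-envelope check on the reference-accuracy figures quoted above.
    # The 115-reference total and the 47% / 46% / 7% split come from the quoted
    # study; the implied whole-reference counts are my own rounding, not numbers
    # reported in the source.
    TOTAL_REFS = 115
    REPORTED_SHARES = {"fabricated": 0.47, "real but misquoted": 0.46, "correct": 0.07}

    for label, share in REPORTED_SHARES.items():
        implied = round(share * TOTAL_REFS)              # nearest whole reference
        recovered_pct = round(100 * implied / TOTAL_REFS)
        print(f"{label:>18}: ~{implied} of {TOTAL_REFS} refs ({recovered_pct}%)")

    # Sanity check: the implied counts (54 + 53 + 8) cover all 115 references.
    assert sum(round(s * TOTAL_REFS) for s in REPORTED_SHARES.values()) == TOTAL_REFS

Rounding each share to the nearest whole reference reproduces the reported percentages exactly, so the 47/46/7 split is at least internally consistent with a 115-reference sample.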