Wrong answers from AI systems are nothing unusual. But a new study shows that these so-called hallucinations occur even though the AI systems often know the right answers.
Hallucinations are a well-known problem for AI systems. Experts use the term to describe incorrect answers that are nevertheless formulated in a thoroughly convincing way. The difficulty of the question hardly matters: hallucinations can occur even with very simple questions.
A new study now shows that these incorrect answers do not arise because the respective AI system does not know the answer. In many cases, the system does know the correct answer internally.
Why do AI systems provide wrong answers?
Researchers from the Technion, the Israel Institute of Technology, have studied hallucinations in AI systems. To do so, they took a closer look at how these systems work internally. Researchers from Google and Apple were also involved in the study.
The study is titled “LLMs know more than they show” and examines the “intrinsic representation of LLM hallucinations.” According to the researchers, these hallucinations include, among other things, “factual inaccuracies, biases, and reasoning failures.”
In their investigation, the researchers observed “a discrepancy between the internal encoding and the external behavior” of large language models. In other words, a system may internally encode the correct answer, yet still output an incorrect one.
Response tokens contain the correct information
As The Decoder reported, the scientists developed a new method for their investigation. The aim was to gain better insight into the “inner workings” of AI systems.
Their focus was on the so-called “exact answer tokens”: the part of an answer that carries the actual information.
Large language models are trained not just to output the bare answer, but to reply in a complete sentence. Asked about the capital of Germany, for example, the model answers in a full sentence in which the word “Berlin” is the exact answer token.
According to the researchers, it is precisely these tokens whose internal representations reveal whether an answer is right or wrong. This led to the study’s surprising result: the AI systems often “knew” the right answer but did not give it. They can encode the correct answer, yet consistently produce an incorrect one.
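The basic idea can be pictured as a small “probe”: a simple classifier trained on the model’s hidden state at the exact answer token to predict whether the answer is correct. The sketch below is only an illustration of that idea, not the authors’ code; the model name ("gpt2" as a small stand-in), the toy data, and the helper function are placeholders.

```python
# Minimal sketch (assumed setup, not the study's implementation):
# probe the hidden state at the exact answer token for correctness.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from sklearn.linear_model import LogisticRegression

MODEL = "gpt2"  # small stand-in; the study probed larger LLMs
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, output_hidden_states=True)
model.eval()

# Hypothetical toy data: (question, full answer, exact answer span, correct?)
examples = [
    ("What is the capital of Germany?",
     "The capital of Germany is Berlin.", "Berlin", True),
    ("What is the capital of Germany?",
     "The capital of Germany is Munich.", "Munich", False),
]

def answer_token_state(question, answer, exact_answer):
    """Return the hidden state at the last token of the exact answer span."""
    text = question + " " + answer
    enc = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc)
    # Map the last character of the exact answer to its token index.
    char_pos = text.index(exact_answer) + len(exact_answer) - 1
    tok_idx = enc.char_to_token(0, char_pos)
    # Vector from the final hidden layer at that token position.
    return out.hidden_states[-1][0, tok_idx].numpy()

X = [answer_token_state(q, a, e) for q, a, e, _ in examples]
y = [label for *_, label in examples]

# A simple linear probe; a real experiment needs far more labeled examples.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print(probe.predict(X))
```

If such a probe predicts correctness better than chance, the hidden states at the answer token evidently carry information about whether the generated answer is right, which is the kind of signal the study describes.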
With their findings, the scientists have deepened our understanding of errors in AI systems. This knowledge could now be used to significantly improve error detection and mitigation.