Clever Hans effect: When artificial intelligence becomes a danger
By Maria Gramsch
Artificial intelligence is designed to provide correct and smart answers. But appearances can be deceiving: thanks to the Clever Hans effect, AI often delivers the right results without actually “understanding” them. That can be problematic.
Large AI models are designed to answer questions and solve problems. To be able to do this, they must first be trained on vast amounts of data.
There are various procedures for this, such as supervised and unsupervised learning. But especially with unsupervised learning, problems can arise that can lead to dangerous mistakes.
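To make the distinction concrete, here is a minimal sketch of the two training regimes using scikit-learn on toy data. The library, the generated data, and the model choices are illustrative assumptions on our part, not part of the TU Berlin study.

```python
# Minimal sketch: supervised vs. unsupervised learning on toy data.
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Toy data: 200 points in two clusters, with known labels y.
X, y = make_blobs(n_samples=200, centers=2, random_state=0)

# Supervised learning: the model sees the inputs AND the correct labels.
clf = LogisticRegression().fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# Unsupervised learning: the model sees only the inputs and must
# find structure (here: two clusters) on its own.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cluster assignments:", km.labels_[:10])
```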
A research team from TU Berlin warns that the so-called Clever Hans effect can occur particularly in the unsupervised learning of AI. It means that AI models give correct answers, but do so for the wrong reasons.
AI: What is the Clever Hans effect?
If the Clever Hans effect occurs in an AI model, users do not notice it at first. The model recognizes a pattern and accordingly outputs the correct answer. In the background, however, that answer does not rest on sound evidence but, for example, on handwritten notes.
For their investigation, the TU Berlin researchers tested an AI model for this weak point. Using X-ray images of the lungs, the model was supposed to distinguish which patients had an infection and which did not.
Initially, the AI under examination showed good results, but only as long as the X-ray images were relatively uniform. When the researchers had to fall back on less uniform X-ray images, numerous errors appeared in the results.
The problem lay in the training of the AI model: it had never learned to recognize lung infections in X-ray images. Instead, it used notes at the edge of the images to decide whether an infection was present or not. In psychology, this behavior is called the Clever Hans effect, and it applies to AI systems as well. The effect is named after a horse that was supposedly able to do arithmetic and spell.
In tests, the horse, named Hans, tapped its hoof until the correct answer was reached. Of course, Hans could neither calculate nor spell. He had merely learned to observe the questioner so closely that he could anticipate from their facial expressions how many hoof taps were expected.
Feigned knowledge
If, on the other hand, no questioner was present, the horse made numerous mistakes in solving the tasks. The same effect now occurs in AI models when, for example, they can no longer rely on handwritten notes, as the sketch below illustrates.
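Here is a deliberately tiny reproduction of that failure mode in code. This is a sketch with synthetic data and an assumed scikit-learn/NumPy setup, not the TU Berlin experiment: a classifier is trained on noise images whose only usable signal is a “note” in the corner, and its accuracy collapses to chance as soon as that cue disappears.

```python
# Toy illustration of the Clever Hans effect: a classifier that
# learns a spurious "note in the corner" instead of a real signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, side = 1000, 8                       # 1000 tiny 8x8 "X-ray images"
y = rng.integers(0, 2, size=n)          # 0 = healthy, 1 = infection

def make_images(labels, with_note):
    # Pure noise: the "medical" signal is deliberately absent,
    # so the model has nothing legitimate to learn from.
    imgs = rng.normal(size=(len(labels), side, side))
    if with_note:
        imgs[:, 0, 0] = labels * 5.0    # bright corner mark = "note"
    return imgs.reshape(len(labels), -1)

# The training data contains the tell-tale corner note ...
clf = LogisticRegression(max_iter=1000).fit(make_images(y, with_note=True), y)

y_test = rng.integers(0, 2, size=500)
print("with note:   ", clf.score(make_images(y_test, with_note=True), y_test))
print("without note:", clf.score(make_images(y_test, with_note=False), y_test))
# Accuracy drops to chance (~0.5) once the spurious cue is removed.
```

The drop from near-perfect to roughly 50 percent accuracy is the Clever Hans signature: the model was never wrong about the note, only about what the task actually was.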
In the long term, the Clever Hans effect can lead to many problems for AI models, especially since they are being used more and more for important decisions.
One example is the medical field, as the TU Berlin investigation showed. If AI is used to make diagnoses, users must be able to rely on the right result coming out in the end.
The Clever Hans effect occurs primarily when AI models are trained to look for new patterns in data sets that may be far too complex for humans to verify. It often happens that an AI merely feigns knowledge and only pretends that it can solve problems. This can be dangerous, especially if AI is used in areas such as education, medicine or security.
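One generic way to probe a model for such shortcuts is an occlusion test: mask one image region at a time and watch how strongly the prediction reacts. The following is a simple diagnostic sketch under the same toy assumptions as above, not the method used by the TU Berlin team. A model whose prediction collapses when only the image border is masked is likely reading notes, not anatomy.

```python
# Sketch of an occlusion test: how much does masking each patch
# shift the model's predicted probability?
import numpy as np

def occlusion_sensitivity(predict_proba, image, patch=2):
    """Return a map of how much masking each patch changes the score."""
    h, w = image.shape
    base = predict_proba(image.reshape(1, -1))[0, 1]
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0.0   # occlude one patch
            score = predict_proba(masked.reshape(1, -1))[0, 1]
            heat[i // patch, j // patch] = abs(base - score)
    return heat

# Hypothetical usage with the toy classifier from the sketch above:
# heat = occlusion_sensitivity(clf.predict_proba, test_image.reshape(8, 8))
# A single hot spot on the corner pixel would expose the "note" shortcut.
```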
Also interesting:
- Llama: Everything you need to know about the Meta AI
- New Google function: What is “overview with AI”?
- How do dynamic electricity tariffs work?
- AI cannot count the letter “r” in “Strawberry” – that is the reason
The Clever Hans effect becomes particularly dangerous in artificial intelligence when it feeds the illusion that AI systems are infallible and can operate without human oversight or intervention.
It is therefore crucial to recognize the limitations of AI systems and to understand that they are not without flaws. AI systems must be constantly monitored and evaluated to ensure that they perform as intended, and humans must be ready to intervene when necessary.
There are also ethical implications to relying too heavily on AI without human oversight. AI systems are ultimately created and trained by humans, and they can inherit biases and limitations from their creators and their training data. Mechanisms must be in place to address these biases and to ensure that AI systems are used in a responsible and ethical manner.
In conclusion, the Clever Hans effect is a dangerous phenomenon in artificial intelligence. Being aware of it and taking proactive measures to mitigate it are essential for the safe and responsible use of AI systems.