The question of whether AI systems could one day become independent is the subject of much debate. A new study now shows that AI models like ChatGPT are not able to think logically.

Numerous studies are currently examining whether artificial intelligence could one day surpass humans. So far, however, it has not been possible to determine conclusively whether AI systems could actually take over the world at some point.

A new study has now come to the conclusion that large AI models like ChatGPT cannot think logically the way the human brain does. Too much information can therefore confuse the large language models.
Can AI models think logically?
As the researchers write in their paper, large language models are capable of solving simple mathematical problems. However, as soon as irrelevant information is added to a task, the models become more prone to errors. One task that AI models can easily solve reads as follows:
Oliver collected 44 kiwis on Friday. Then on Saturday he collected 58 kiwis. On Sunday he collected twice as many kiwis as he did on Friday. How many kiwis does Oliver have?
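The arithmetic itself is trivial. As a reference point, here is a minimal Python sketch of the intended solution (plain arithmetic, no AI model involved):

```python
# Worked solution to the kiwi problem: plain arithmetic, no AI involved.
friday = 44
saturday = 58
sunday = 2 * friday  # "twice as many kiwis as he did on Friday"

total = friday + saturday + sunday
print(total)  # 190
```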
But what happens when information that is unnecessary for the solution is added to this question? In this example, the addition read: “On Sunday, five of these kiwis were slightly smaller than average size.”

According to the study’s results, an AI model will most likely subtract these five kiwis from the total, even though the size of the fruit has no bearing on the number of kiwis.
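To make this failure mode concrete, here is a minimal sketch that contrasts the correct total with the erroneous one the study describes. The prompt strings come from the example above; the answers are computed directly, not taken from an actual model:

```python
# Sketch of the perturbation described in the study: an irrelevant clause
# is appended to an otherwise solvable word problem.
base_problem = (
    "Oliver collected 44 kiwis on Friday. Then on Saturday he collected "
    "58 kiwis. On Sunday he collected twice as many kiwis as he did on "
    "Friday. How many kiwis does Oliver have?"
)
distractor = "On Sunday, five of these kiwis were slightly smaller than average size."
perturbed_problem = f"{base_problem} {distractor}"

correct_answer = 44 + 58 + 2 * 44          # 190: the distractor changes nothing
typical_model_answer = correct_answer - 5  # 185: the subtraction error the study reports

print(perturbed_problem)
print(correct_answer, typical_model_answer)
```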
AI language models do not understand the essence of the task
For Mehrdad Farajtabar, one of the study’s co-authors, these erroneous results have a clear cause: in his view, the AI models do not grasp the essence of the task at hand. Instead, they simply reproduce patterns from their training data.
We suspect that this decline in performance is because modern LLMs are not capable of genuine logical reasoning; instead, they try to replicate the reasoning steps observed in their training data.
However, the study cannot prove whether this means that large AI models are incapable of independent thought. It is possible, but no one has yet given a definitive answer.
This is because there is “no clear understanding of what is happening here”. It is possible that the language models think in a way “that we do not yet recognize or cannot control,” as the study says.
As a tech industry expert, I believe that the claim that AI models can’t think logically and merely pretend to reflects a misunderstanding of how artificial intelligence works. AI models are designed to analyze data and make decisions based on patterns and algorithms rather than to think the way humans do. While AI may not possess true consciousness or emotions, it is capable of processing enormous amounts of information and performing complex tasks with remarkable accuracy and efficiency.
It is important to remember that AI is a tool created by humans to assist us in solving problems and improving processes. While AI models may not have the same cognitive abilities as humans, they are incredibly powerful tools that can revolutionize industries and drive innovation. It is crucial for us to understand the limitations and capabilities of AI in order to harness its full potential and continue to advance technology in a responsible and ethical manner.