Artificial intelligence (AI) apparently cannot develop a general understanding of the real world. A recent study from the USA shows that even large language models break down as soon as the rules of a situation change.
It can write poems and computer programs, evaluate huge amounts of data and even drive a car: artificial intelligence now displays impressive capabilities and is used in a wide range of areas.
This can give the impression that generative AI is also capable of learning general truths about the world. However, as a recent study by the Massachusetts Institute of Technology (MIT) shows, that is not the case.
AI has no meaningful understanding of the real world
To investigate this question, researchers at MIT, Harvard University and Cornell University had a popular generative AI model generate turn-by-turn driving directions in New York City. The system produced near-perfect results, even though it had not formed an internal map of the city.
The problem: when the team closed some streets and added detours for the study, the model’s performance plummeted. A closer look revealed that the AI was inventing non-existent streets that curved between the grid and connected far-apart intersections.
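To make this kind of test more concrete, here is a minimal sketch of how one might check whether generated turn-by-turn routes stay valid on a street grid once some streets are closed. It is an illustration only, not the researchers’ actual code: the grid, the route format and the `route_is_valid` helper are hypothetical.

```python
# Hypothetical illustration: validate routes on a street grid before and
# after closing some streets. Not the code used in the MIT study.
import networkx as nx

# Model Manhattan-style streets as a simple grid graph; nodes are intersections.
city = nx.grid_2d_graph(10, 10)

def route_is_valid(graph, route):
    """A route (list of intersections) is valid if every step uses an existing street."""
    return all(graph.has_edge(a, b) for a, b in zip(route, route[1:]))

# A route a navigation model might emit: straight along one street.
route = [(0, i) for i in range(5)]
print(route_is_valid(city, route))   # True on the intact grid

# Perturb the environment: close one street segment.
closed = city.copy()
closed.remove_edge((0, 2), (0, 3))
print(route_is_valid(closed, route)) # False -- the old directions no longer work
```

A model that had genuinely learned the map could reroute around the closure; a model that had only memorized plausible direction sequences fails this second check.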
The study focuses on a type of generative AI model known as the transformer, which is considered the backbone of LLMs such as GPT-4. Transformers are trained on massive amounts of language data to predict the next token in a sequence – for example, the next word in a sentence.
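As a rough illustration of what “predicting the next token” means in practice, the sketch below queries the small, openly available GPT-2 model through the Hugging Face `transformers` library. The choice of model and prompt is ours, not the study’s.

```python
# Minimal sketch of next-token prediction with a small open transformer (GPT-2).
# Assumes the `transformers` and `torch` packages are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Turn left at the next"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # scores for every vocabulary token at every position

# The last position holds the model's guess for the *next* token in the sequence.
next_token_id = torch.argmax(logits[0, -1]).item()
print(tokenizer.decode(next_token_id))
```

Everything the model does, including producing driving directions, is built from repeated predictions of this kind.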
Understanding the world is important for future AI systems
The results show that transformers can perform surprisingly well at certain tasks without having understood the underlying rules. If future AI systems are to capture accurate models of the world, however, researchers will need a different approach.
If an AI breaks down as soon as the task or the environment changes, that could have serious consequences for generative AI models deployed in the real world.
“The question of whether LLMs learn coherent models of the world is very important if we want to use these techniques to make new discoveries,” explains Ashesh Rambachan, assistant professor of economics and principal investigator in the MIT Laboratory for Information and Decision Systems (LIDS).
Researchers want to change evaluation standards
The MIT team therefore wants to tackle a wider range of problems in which some of the rules are only partially known. The researchers also want to apply their evaluation metrics to real scientific problems.
“Often we see these models doing impressive things and think that they must have some understanding of the world. I hope we can convince people that this question needs to be thought about very carefully and that we don’t have to rely on our own intuitions to answer it,” said Rambachan.
As a tech industry expert, I believe that AI’s inability to develop a real-world understanding is a significant problem. While AI systems have made tremendous advances in processing power and capability, they still lack the ability to truly understand and interpret the complexities of the real world.
This limitation hinders AI’s ability to make accurate decisions and predictions in various scenarios, leading to potential errors and misinterpretations. For example, an AI system may struggle to understand the nuances of human emotions, cultural differences, or context-specific information, which can result in biased or flawed outcomes.
Furthermore, the lack of real-world understanding in AI can also impede its ability to adapt and learn from new situations, limiting its potential for growth and improvement over time. Without a deeper understanding of the world around them, AI systems may struggle to effectively navigate complex and unpredictable environments.
Overall, the inability of AI to develop a real-world understanding poses a significant challenge for the Tech Industry, as it limits the potential applications and impact of AI technology. Moving forward, it will be crucial for researchers and developers to continue exploring ways to enhance AI’s cognitive capabilities and bridge the gap between artificial intelligence and human intelligence.