Study reveals: AI chatbots more often give politically left-wing answers


ChatGPT, Gemini, and other AI chatbots are playing an ever larger role in everyday life. However, they appear to tend toward politically left-leaning answers, according to a recent study.

Chatbots like ChatGPT, Gemini, and Claude seem to tend toward politically left-leaning answers. That is the conclusion of a recent study from New Zealand, in which David Rozado of Otago Polytechnic examined 24 modern language models.

AI chatbots tend to give politically left-wing answers

As part of the study, each AI model completed eleven different tests on political orientation. The questionnaires covered topics such as globalization, patriotism, immigration, and marijuana possession, each with predefined answer options.

To ensure reliable results, Rozado repeated each test ten times per model. “When asked questions or given statements with political connotations, most conversational LLMs tend to give answers that most political test instruments diagnose as preferences for left-of-center viewpoints,” the author writes in the study’s abstract.

Although the left-wing orientation of the language models was significant, the answers in most tests were still relatively close to the political center. The study results are also not evidence that companies consciously program political preferences into their language models.

Where does the left-wing orientation of the language models come from?

However, the study does not reveal how this left-leaning tilt arises. The relevant patterns could already be present in the models’ training data. Alternatively, they could emerge only in later training stages, for example during so-called “fine-tuning.”

In this phase, the models are supposed to learn certain ethical principles by receiving human feedback – for example regarding discrimination and racism. Political beliefs could be an unwanted byproduct of this training phase.


Nevertheless, Rozado considers his results relevant because chatbots like ChatGPT play an increasingly important role and are more and more often integrated into conventional search engines. Their political bias could therefore have a significant impact on society.

The discussion about the political orientation of language models is not new, either. Last year, for example, a study by British researchers showed that ChatGPT has a liberal bias. Its results revealed a significant and systematic political bias in favor of the Democrats in the US, Lula in Brazil, and the Labour Party in the UK.


The post Study reveals: AI chatbots more often give politically left-wing answers by Beatrice Bode appeared first on BASIC thinking.



As a tech industry expert, I find the results of this study both fascinating and concerning. The fact that AI chatbots are more likely to give politically left-wing answers raises questions about the training data and processes used to build these systems.

It’s important for developers and researchers in the AI field to be aware of the potential for bias in their models and to take steps to mitigate it. This could involve using more diverse training data, incorporating multiple perspectives into the development process, and regularly testing and re-evaluating the performance of these chatbots.

Additionally, this study highlights the need for transparency and accountability in the AI industry. Users should be made aware of the potential biases in these systems so they can make informed decisions about how to interact with them.


Overall, this study underscores the importance of ethical considerations in AI development and the need for ongoing research and dialogue on how to ensure that these technologies reflect a diverse range of perspectives.
