China suppresses critical content with AI censorship machine
By Maria Gramsch, first published on Basic Thinking.
Leaked data reveals that China has developed a gigantic AI censorship machine that specifically monitors and sorts out content the government classifies as sensitive. The system automatically suppresses reports on poverty, corruption, and abuse of power.
Bias, i.e. systematic prejudice, is one of the biggest problems in the field of artificial intelligence. The phenomenon describes the systematic distortion of results, for example when certain groups or individuals are favored or disadvantaged.
The causes and risks of bias in AI systems are multi-layered and deeply rooted in the technical side of AI development, because AI language models are only ever as good as the data they are trained on.
This weak point of artificial intelligence can also be exploited. An investigation recently showed how Russia floods ChatGPT and co. with propaganda content and influences them. The same seems to be happening in China, as TechCrunch reports.
Accordingly, the news portal has examined a dataset that the Chinese government uses to censor AI systems in China. Numerous examples show that a censorship machine automatically filters reports on poverty, corruption, or abuse of power.
China suppresses AI content with censorship machine
The database viewed by TechCrunch contains more than 133,000 examples that have been fed into an AI language model. The AI was thereby enabled to automatically flag any content that the Chinese government considers sensitive.
Examples include complaints about poverty in rural China. Reports about a corrupt member of the Communist Party or corrupt police officers are also included.
The researcher Xiao Qiang from UC Berkeley studies Chinese censorship and has also analyzed the dataset. Speaking to TechCrunch, he explained that it is "clear proof" that the Chinese government, or companies affiliated with it, use large language models (LLMs) for suppression.
In contrast to traditional censorship mechanisms, which rely on human labor for keyword-based filtering and manual review, an LLM trained on such instructions would significantly improve the efficiency of state-managed information control.
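The efficiency gain can be illustrated with a small, purely hypothetical Python sketch: a keyword filter only matches fixed terms, while an LLM-based classifier is prompted to judge meaning and context. The keyword list, the prompt text, and the `call_llm` helper are illustrative assumptions, not material from the leaked dataset.

```python
# Purely illustrative comparison; not code from the leaked dataset.

SENSITIVE_KEYWORDS = {"poverty", "corruption", "protest"}  # assumed example terms

def keyword_filter(text: str) -> bool:
    """Traditional approach: flag text containing any blocked keyword."""
    lowered = text.lower()
    return any(word in lowered for word in SENSITIVE_KEYWORDS)

CLASSIFIER_PROMPT = (
    "Decide whether the following text touches on sensitive topics such as "
    "poverty, corruption, or abuse of power. Answer only 'flag' or 'pass'.\n\n"
    "Text: "
)

def llm_filter(text: str, call_llm) -> bool:
    """LLM approach: the model judges meaning and context instead of exact terms.

    `call_llm` is a hypothetical function that sends a prompt to some model
    and returns its text answer.
    """
    answer = call_llm(CLASSIFIER_PROMPT + text)
    return answer.strip().lower() == "flag"
```

A keyword filter misses paraphrases and flags harmless matches; a prompted model scales the nuanced judgment that previously required human reviewers, which is exactly why researchers consider this approach a step change in efficiency.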
When asked, the Chinese embassy in Washington, DC, rejected the allegations in a statement, saying it opposes "groundless attacks and defamation against China" and that the country attaches great importance to the development of ethical AI.
Where does the dataset come from?
The security researcher NetAskari discovered the dataset in an unsecured database on a Baidu server and made it available to TechCrunch.
However, the author of the dataset is not known. It is said, though, that the data is intended for "public opinion work". An expert explained to TechCrunch that this wording is a strong indication that the dataset is meant to serve the goals of the Chinese government.
The data also shows that it is current: the last entries date from December 2024.
What falls under China's AI censorship?
The dataset reveals language that is "eerily reminiscent of ChatGPT prompts". An unknown LLM is instructed to filter content on sensitive topics from the areas of politics, social life, and the military.
This content is classified as "highest priority" and must be flagged immediately. Among the highest-priority topics are scandals involving pollution and food safety. Financial fraud and labor disputes are also included.
These are topics that can quickly lead to public protests or social unrest in China. "Political satire" as well as everything related to "Taiwan policy" also receives the highest-priority label.
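To make this concrete, here is a minimal, hypothetical Python sketch of what such a priority-labeling prompt could look like. The exact wording, the category names, the output format, and the `call_llm` helper are assumptions for illustration; the actual instructions in the leaked dataset are only paraphrased in the reporting.

```python
import json

# Hypothetical reconstruction of the prompt style described in the article;
# every string below is an assumption, not text from the leaked dataset.

PRIORITY_PROMPT = """You are a content review assistant.
Classify the text below into one category: politics, social_life, military, other.
Then assign a priority:
- highest: pollution or food-safety scandals, financial fraud, labor disputes,
  political satire, Taiwan policy
- normal: everything else
Answer with JSON, e.g. {"category": "politics", "priority": "highest"}.

Text: {text}
"""

def classify(text: str, call_llm) -> dict:
    """Send the priority prompt to a placeholder LLM and parse its JSON answer.

    `call_llm` is assumed to take a prompt string and return the model's reply.
    """
    raw = call_llm(PRIORITY_PROMPT.replace("{text}", text))
    return json.loads(raw)
```

A single prompt like this, applied to millions of posts, would reproduce the behavior the examples describe: content on the listed topics comes back tagged "highest" and can be suppressed automatically.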
Also interesting:
- E-methanol: Mannheim produces climate-neutral ship fuel from wastewater
- How you can recognize audio fakes
- Surveillance software Palantir: Germany is making itself ridiculous!
- Do you use ChatGPT? Then the AI can influence your well-being
As a tech industry expert, I find China's use of AI censorship machines to suppress critical content alarming. While AI can be a powerful tool for many beneficial applications, using it to stifle freedom of speech and limit access to information is deeply concerning.
The use of AI-censorship machines not only infringes on the rights of individuals to express their opinions and access diverse perspectives, but it also sets a dangerous precedent for the misuse of technology to control and manipulate information. This type of censorship undermines the principles of a free and open internet, and ultimately hinders innovation and progress in the tech industry.
It is essential for tech companies and policymakers to push back against such practices and advocate for a more transparent and ethical use of AI technology. We must prioritize the protection of fundamental rights and values in the digital age, and work towards creating a more inclusive and democratic online space for all individuals.