Border controls with AI: “Justifying discrimination, racism and harm”


The EU relies on AI-based systems to carry out border controls and monitor its external borders. The organization AlgorithmWatch, however, highlights numerous problems with the technologies in use, among them a lack of transparency and potential human rights violations.

To monitor its external borders and carry out border controls, the European Union is increasingly using automated systems based on AI. According to the organization AlgorithmWatch, ethical considerations reportedly play hardly any role in these projects – despite potential human rights violations.

This emerges from a database created specifically by the NGO, which lists many of the technologies in use and shows the problems associated with them. Although the research is publicly funded, the EU has reportedly withheld project information. AlgorithmWatch is therefore calling for political consequences.

Border controls with AI endanger human rights

Under the project name “Automation on the Move”, the organization examined 24 research projects commissioned by the EU and assessed them for potential risks. These include systems for controlling unmanned vehicles and drones, systems for biometric data processing, and other AI-based surveillance models.

AlgorithmWatch consulted scientists, journalists and civil rights activists to uncover the risks of these systems. According to the investigation, technical errors could lead to misidentifications, which in turn carry the risk of unwarranted surveillance.

Against the backdrop of an increasingly strict migration policy, discrimination by AI-based algorithms is becoming a growing problem. According to AlgorithmWatch, the AI systems in use could be misused or restrict people’s freedom of movement.

This development endangers fundamental human rights, including the right to privacy, the right to equal treatment and the right to asylum. According to the NGO, these risks are not sufficiently addressed in the EU’s research projects.

Little transparency and hardly any ethical consideration

AlgorithmWatch also criticizes a lack of transparency – even though the projects are publicly financed. The European Research Executive Agency (REA) repeatedly denied the organization access to information, on the grounds that provider and security interests outweigh the public interest.


To obtain information, the project team analyzed television recordings and interviews, among other sources. One of the findings: the controversial surveillance technology ANDROMEDA is already in active use.

AlgorithmWatch also doubts that these systems will remain limited to border controls. The NGO fears that many of the technologies could be used militarily. There is also a risk that the systems could fall into the hands of autocratic states – especially since Belarus was involved in at least two projects until Russia’s war of aggression against Ukraine began.

Border controls with AI: AI Act leaves room for maneuver

With the so-called AI Act, the EU wants to restrict the use of ethically questionable AI models. According to AlgorithmWatch, however, there are major gaps, especially in the areas of border protection and migration. Since the EU member states have a certain amount of room for maneuver, the organization makes concrete demands.

Clear supervisory and transparency guidelines should therefore be considered standard for high-risk applications. The involvement of civil society, those affected and experts is also essential for the design and evaluation of AI systems.

The influence of the defense industry must be reduced, and military and civilian systems should be strictly separated – especially when it comes to the transparency of research results. Fabio Chiusi, head of the “Automation on the Move” project at AlgorithmWatch, puts it this way:

“Whenever you look at automated technology as a solution to a social problem or phenomenon as old as humanity, such as migration, you will end up justifying discrimination, racism and harm.”





As a tech industry expert, I believe that AI can be a valuable tool for making border controls more secure and efficient. However, it is essential to recognize the potential for discrimination, racism and harm that can arise when these technologies are implemented.

One of the key concerns with using AI for border controls is the potential for bias in the algorithms used to make decisions. If these algorithms are not carefully designed and monitored, they can inadvertently discriminate against certain groups based on factors such as race, ethnicity, or nationality. This can lead to unjust outcomes and perpetuate existing inequalities in society.
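To make this concern concrete, the minimal Python sketch below compares the false positive rate of a flagging system across two groups of travelers, the kind of regular per-group audit advocated further below. Everything in it is invented for illustration: the records, the group labels and the 0.2 disparity threshold are hypothetical placeholders, not data or logic from any real border-control system.

    # Minimal, illustrative sketch of a group-level bias audit for a
    # hypothetical biometric matcher whose decisions have been logged.
    # All data, group labels and the threshold are invented examples.
    from collections import defaultdict

    # Each record: (group, flagged_as_match, was_actually_a_match)
    records = [
        ("group_a", True,  True),
        ("group_a", False, False),
        ("group_a", False, False),
        ("group_b", True,  False),  # false positive
        ("group_b", True,  False),  # false positive
        ("group_b", False, False),
    ]

    def false_positive_rates(records):
        """Share of genuine non-matches wrongly flagged, per group."""
        flagged = defaultdict(int)
        negatives = defaultdict(int)
        for group, decision, truth in records:
            if not truth:            # only ground-truth non-matches
                negatives[group] += 1
                if decision:
                    flagged[group] += 1
        return {g: flagged[g] / n for g, n in negatives.items()}

    rates = false_positive_rates(records)
    print(rates)  # {'group_a': 0.0, 'group_b': 0.666...}

    # A crude audit rule: flag the model for review if the gap between
    # the best- and worst-treated group exceeds a chosen threshold.
    if max(rates.values()) - min(rates.values()) > 0.2:
        print("Disparity threshold exceeded - review the model.")

In a real deployment the same comparison would run on actual decision logs and cover further metrics, such as false negatives. The principle, however, is the same: without per-group measurement, this kind of disparity remains invisible.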

Furthermore, the use of AI for border controls raises ethical questions about the balance between security and individual rights. There is a risk that these technologies could be used to justify discriminatory practices and infringe upon the rights of individuals, particularly those from marginalized or vulnerable communities.

It is crucial for policymakers, technologists, and stakeholders to work together to ensure that AI systems used for border controls are transparent, accountable, and fair. This includes implementing safeguards to prevent bias, conducting regular audits of the algorithms, and providing avenues for redress in cases of discrimination or harm.

Ultimately, while AI has the potential to enhance border controls, it is essential to approach its implementation with caution and a strong commitment to upholding human rights and preventing discrimination. Only by doing so can we harness the benefits of technology while mitigating its risks.
