The European Parliament has adopted a non-binding resolution opposing the use of artificial intelligence (AI) by law enforcement in public spaces and calling for a ban on facial recognition databases, such as the one used by Clearview AI.
It is a significant and hugely welcome step in an ongoing campaign to ensure the EU leads the world in protecting against dangerous applications of AI within its borders.
The resolution recognises the need to safeguard against the application of AI and mass surveillance technologies in Europe. But if such techniques pose an unacceptable risk for people within the EU’s borders, it follows that they also pose unacceptable risks for people living outside them. As a result, the EU and its Member States must do all they can to ensure they do not contribute to the rising use of such techniques around the world.
The resolution seeks to reduce the harms posed by the use of AI in law enforcement, such as bias and discrimination that lead to the over-policing of ethnic minorities. Another risk relates to the misclassification and misidentification of individuals belonging to certain ethnic communities, LGBTI people, children and the elderly, as well as women.
Through this resolution, the European Parliament voted for stronger safeguards around the use of artificial intelligence technologies in law enforcement. Among the envisaged safeguards is a requirement that artificial intelligence algorithms used in law enforcement technologies be explainable, transparent, traceable and verifiable. The European Parliament’s call for stronger safeguards is expected to influence the body’s discussions and vote on the European Commission’s Artificial Intelligence Act.
European Digital Rights, which is coordinating a Europe-wide civil society campaign to ban biometric mass surveillance, of which Privacy International is a part, has more details about the European Parliament’s stance.
Reflections on this vote
In a plenary speech, the vice-president of the European Parliament, Marcel Kolaja, described the ban on the use of facial recognition in public spaces as “…an important step in fighting against mass surveillance.”
Although the European Parliament’s resolution is not binding, it does present an opportunity to think about the European Commission’s relationship with artificial intelligence technologies for law enforcement as well as mass surveillance technologies in general.
One consistent theme in the resolution on banning artificial intelligence technologies in law enforcement and predictive policing is that these technologies are likely to infringe on the right to privacy and the right to human dignity. These two fundamental rights have a universal application that transcends geographic borders. As such, if these technologies are not fit for use in European Union states, they should logically not be fit for use in any other part of the world.
The European Parliament’s decisions do not always reflect those of the European Commission, since these are two separate organs. But this vote makes it clear that Europeans (as represented by MEPs) are not in favour of the use of artificial intelligence in law enforcement and, by extension, of mass surveillance technologies.
As a first step, the EU must stop providing millions of Euros to surveillance companies, universities, and government agencies to develop the very mass surveillance technologies in question as part of its research agenda.
European leaders must also take steps to stop European companies from selling these technologies to countries around the world, including authoritarian ones. This can be done within the framework of the EU’s recently reformed Dual Use Regulation, which governs exports of surveillance technology across Europe and allows authorities to stop such exports.
Further, anti-immigration and counter-terrorism programmes in the Horn of Africa, North and Central Africa, and the Balkans, rolled out with support from several EU funding vehicles, potentially also enable mass surveillance in those third countries.
The European Parliament’s resolution calls for risk assessments to be carried out before artificial intelligence technologies are used in law enforcement, to determine that the adoption of these technologies is “…safe, robust, secure and fit for purpose, respect(s) the principles of fairness, data minimisation, accountability, transparency, non-discrimination and explainability.”
It follows therefore that these same risk assessments should also be carried out across the EU’s portfolio. However, PI has not come across any publicly available information that shows that the EU currently carries out such robust assessments before donating or funding the use of technologies which may be used or repurposed for mass surveillance in third countries.
This week’s vote is a hugely welcome step in the face of tough opposition: parliamentarians who chose to prioritise the interests of people are to be congratulated. It sends a strong signal that the Parliament will protect those it represents.
It is also a strong signal that such applications of AI and mass surveillance techniques should not be tolerated in a democratic society, and that the EU has a role to play in ensuring the protection and promotion of human rights within its borders.
Although the EU has no mandate to protect these rights in third countries, it must not actively contribute to restricting their enjoyment, whether through the development of such tools, their sale, or the EU’s counter-terrorism and anti-immigration policies abroad.