Use of artificial intelligence by the police: MEPs oppose mass surveillance
To combat discrimination and ensure the right to privacy, the European Parliament demands strong safeguards when artificial intelligence tools are used in law enforcement.
In a resolution adopted with 377 votes in favour, 248 against and 62 abstentions, MEPs point to the risk of algorithmic bias in AI applications and emphasise that human supervision and strong legal powers are needed to prevent discrimination by AI, especially in a law enforcement or border-crossing context. Human operators must always make the final decisions, say MEPs, and people monitored by AI-powered systems must have access to remedy.
According to the text, AI-based identification systems already misidentify minority ethnic groups, LGBTI people, seniors and women at higher rates, which is particularly concerning in the context of law enforcement and the judiciary. To ensure that fundamental rights are upheld when these technologies are used, MEPs call for algorithms to be transparent, traceable and sufficiently documented. Where possible, public authorities should use open-source software in order to be more transparent.