EU Commission Mulls Facial Recognition Ban In AI ‘White Paper’

By Samuel Stolton

(EurActiv) — The European Commission is considering measures to impose a temporary ban on facial recognition technologies used by both public and private actors, according to a draft white paper on Artificial Intelligence obtained by EURACTIV.

If implemented, the plans could throw current AI projects off course in some EU countries, including Germany’s plan to roll out automatic facial recognition at 134 railway stations and 14 airports. France also intends to establish a legal framework permitting video surveillance systems to be embedded with facial recognition technologies.

The Commission paper, which gives an insight into proposals for a European approach to Artificial Intelligence, stipulates that a future regulatory framework could “include a time-limited ban on the use of facial recognition technology in public spaces.”

The document adds that the “use of facial recognition technology by private or public actors in public spaces would be prohibited for a definite period (e.g. 3–5 years) during which a sound methodology for assessing the impacts of this technology and possible risk management measures could be identified and developed.”

Five Regulatory Options for AI

More generally, the draft White Paper, the final version of which the Commission is expected to publish towards the end of February, sets out five regulatory options for Artificial Intelligence across the bloc.

The different regulatory branches considered by the Commission in the paper are:

  • Voluntary labelling
  • Sectorial requirements for public administration and facial recognition
  • Mandatory risk-based requirements for high-risk applications
  • Safety and liability
  • Governance

A Voluntary Labelling framework could consist of a legal instrument whereby developers could “choose to comply, on a voluntary basis, with requirements for ethical and trustworthy artificial intelligence.” Developers who comply would be granted a ‘label’ of ethical or trustworthy artificial intelligence, the conditions of which would then be binding.

Option two focuses on a specific area of public concern – the use of artificial intelligence by public authorities – as well as the use of facial recognition technologies more generally. In the former area, the paper states that the EU could adopt an approach akin to the stance taken by Canada in its Directive on Automated Decision-Making, which sets out minimum standards for government departments that wish to use an Automated Decision System.

As for facial recognition, the Commission document highlights provisions from the EU’s General Data Protection Regulation, which give citizens “the right not to be subject to a decision based solely on automated processing, including profiling.”

In the third area which the Commission is currently priming for regulation, legally binding instruments would apply only “to high-risk applications of artificial intelligence.” The paper states that “this risk-based approach would focus on areas where the public is at risk or an important legal interest is at stake.”

Certain sectors that could be considered high risk include healthcare, transport, policing and the judiciary, the document finds. The Commission adds that, for an application to be considered “high-risk,” it would have to fulfil one of two criteria: fall within the scope of a high-risk sector, or present potential legal ramifications and pose “the risk of injury, death or significant material damage for the individual.”

Option four covers safety and liability issues that may arise as part of the future development of Artificial Intelligence and suggests that “targeted amendments” could be made to EU safety and liability legislation, including the General Product Safety Directive, the Machinery Directive, the Radio Equipment Directive and the Product Liability Directive.

Risks currently not covered in existing legislation, the document says, include “the risks of cyber threats, risk to personal security, to privacy and to personal data protection,” which may be considered as part of any possible future amendments. 

On the liability front, “adjustments may be needed to clarify the responsibility of developers of artificial intelligence and to distinguish them from the responsibilities of the producer of the products,” and the scope of legislation could be amended to determine whether artificial intelligence systems should be considered as ‘products.’ 

With regard to Option 5, ‘Governance,’ the Commission says that an effective system of enforcement is essential, requiring a strong system of public oversight with the involvement of national authorities. Promoting cooperation between such national authorities would be necessary, the document notes.

The most likely approach to be formally adopted, the paper notes, is a combination of Options 3, 4 and 5.

“The Commission may consider a combination of a horizontal instrument setting out transparency and accountability requirements and covering also the governance framework complemented by targeted amendments of existing EU safety and liability legislation,” the document says.

Background

The disclosure of the Commission’s draft white paper follows a period of public discussion on how to deal with the future challenges of Artificial Intelligence.   

A report published in June by the Commission’s High-Level Group on AI suggested that the EU should consider the need for new regulation to “ensure adequate protection from adverse impacts”, which could include issues arising from biometric recognition, the use of lethal autonomous weapons systems (LAWS), AI systems built on children’s profiles, and the impact AI may have on fundamental rights. 

In terms of facial recognition, concerns mounted last year over the application of the technology in Europe, with the Swedish Data Protection Authority fining a municipality €20,000 for using facial recognition technology to monitor student attendance at a school, while France’s data regulator, the CNIL, said the technology breached GDPR consent rules.

