By Luca Bertuzzi
(EurActiv) — The United States is pushing for a narrower Artificial Intelligence definition, a broader exemption for general purpose AI and an individualised risk assessment in the AI Act, according to a document obtained by EURACTIV.
The non-paper is dated October 2022 and was sent to targeted government officials in some EU capitals and to the European Commission. It echoes many of the ideas and much of the wording of the initial feedback sent to EU lawmakers last March.
“Many of our comments are prompted by our growing cooperation in this area under the U.S.-EU Trade and Technology Council (TTC) and concerns over whether the proposed Act will support or restrict continued cooperation,” the document reads.
The document is a reaction to the progress made by the Czech Presidency of the EU Council on the AI regulation last month. A US Mission to the European Union spokesperson declined EURACTIV’s request for comment.
While the Americans showed support for the changes the Czech Presidency made to clarify the definition of Artificial Intelligence, they warned that the definition “still includes systems that are not sophisticated enough to merit special attention under AI-focused legislation, such as hand-crafted rules-based systems.”
To avoid over-inclusiveness, the non-paper suggests using a narrower definition that captures the spirit of the one provided by the Organisation for Economic Co-operation and Development (OECD) and clarifies what is and is not included.
General purpose AI
The non-paper recommends different liability rules for the providers of general purpose AI systems (large models that can be adapted to perform various tasks) and for the users that might employ such models in high-risk applications.
The Czech Presidency proposed that the Commission should tailor the obligations of the AI regulation to the specificities of general purpose AI at a later stage via an implementing act.
By contrast, the US administration warns that placing risk-management obligations on these providers could prove “very burdensome, technically difficult and in some cases impossible.”
Moreover, the non-paper pushes back against the idea that general purpose AI providers should have to cooperate with their users to help them comply with the AI Act, including by disclosing confidential business information or trade secrets, albeit with appropriate safeguards.
The leading providers of general purpose AI systems are large American companies like Microsoft and IBM.
In classifying a use case as high-risk, the US administration advocated for a more individualised risk assessment that should consider threat sources, vulnerabilities, likely occurrence of the harm and its significance.
By contrast, human rights impacts would only be assessed in particular contexts. The US administration also made the case for an appeal mechanism for companies that believe they have been wrongly classified as high-risk.
On international cooperation, Washington wants the standards of the National Institute of Standards and Technology (NIST) to be accepted as an alternative means of compliance to the self-assessments mandated in the AI regulation.
The non-paper also states that “in areas considered to be ‘high risk’ under the Act, many U.S. government agencies will likely stop sharing rather than risk that closely held methods will be disclosed more broadly than they have comfort with.”
While the document expresses support for the approach of the Czech Presidency in adding an extra layer for the classification of high-risk systems, it also warns that there might be inconsistencies with the regulatory regime of the Medical Device Regulation.
The United States is pushing for a more substantial role for the AI Board, which will bring together the EU’s national competent authorities, compared to that of the authorities of individual countries. The US also proposes a standing sub-group within the board with stakeholder representatives.
As the Board will be responsible for advising on technical specifications, harmonised standards and the development of guidelines, Washington would like to see wording allowing the participation of representatives from like-minded countries, at least in this sub-group.
The European Commission has increasingly closed the door to non-EU countries on standards development, whilst the US is pushing for more bilateral cooperation.
According to the non-paper, the regulation could prevent cooperation with third countries, as it covers public authorities outside the EU that affect the bloc unless there is an international agreement on law enforcement and judicial cooperation.
The concern is that the US administration might stop cooperation with the EU authorities for border control management, which the AI Act considers separate from law enforcement.
Another point raised is that the reference to ‘agreements’ is deemed too narrow, as binding agreements on AI cooperation might take years to conclude. Even existing law enforcement cooperation might suffer since it also takes place outside formal agreements.
Moreover, the non-paper suggests a more flexible exemption for the use of biometric recognition technologies in cases where there is a ‘credible’ threat, such as a terrorist attack, as strict wording could prevent practical cooperation to ensure the safety of major public events.
In May, the French Presidency included the possibility for market surveillance authorities to be granted full access to the source code of high-risk systems when ‘necessary’ to assess their conformity with the AI rulebook.
For Washington, what is ‘necessary’ needs to be better defined, a list of transparent criteria should be applied to avoid subjective and inconsistent decisions across the EU, and the company should be able to appeal the decision.