
Artificial Intelligence Act: Will the EU set a global standard for regulating AI systems?

The world pins high hopes on the development of artificial intelligence systems. AI is expected to generate huge economic and social benefits across many aspects of life and sectors of the economy, including the environment, agriculture, healthcare, finance, taxation, mobility, and public administration. The rapid development of AI systems is forcing the creation of appropriate legal frameworks, which on one hand should facilitate further growth of AI technologies, and on the other should ensure adequate protection of persons using such systems and raise societal confidence in the operation of AI systems.

In April 2021 the European Commission presented its proposal for an EU regulation to be known as the “Artificial Intelligence Act” or AIA (COM/2021/206 final). The proposal is the fruit of several years of work aimed at developing an EU approach to AI. It also represents the world’s first attempt at comprehensive regulation of AI systems and their application.

In late November 2021 the Council of the European Union presented the first version of a compromise draft, addressing certain critical comments on the Commission’s initial draft. But this is not the final version of the regulation, which will undoubtedly undergo further changes during the course of the legislative process.

Aims of the proposal

The regulation is designed to make the EU the world leader in the development of secure, reliable and ethical AI. It should also provide legal certainty, thus encouraging businesses to pursue investments and innovations in the field of artificial intelligence.

To achieve these aims, the proposal adopts a horizontal regulatory approach to AI. It is based on setting the minimum requirements essential to manage the risks and problems associated with AI, without unduly restricting or hindering technological growth, or otherwise disproportionately increasing the cost of launching AI solutions.

Although the proposal will no doubt be further refined during the legislative process, the key solutions on which it is founded appear unlikely to change. In this respect, we intend to publish a series of articles covering the main assumptions and effects of the work on the proposal, as well as the interactions that may arise between the draft provisions and existing regulations.

We first examine two key issues:

  • The definition of AI systems, i.e. what sort of systems will be covered by the new regulation
  • The entities and territory covered by this definition, i.e. who will have to comply with the duties set forth in the regulation, and under what circumstances.

Definition of AI system

The scientific community has never developed a single definition of artificial intelligence. As pointed out by the Council of Europe, “‘AI’ is used as a ‘blanket term’ for various computer applications based on different techniques, which exhibit capabilities commonly and currently associated with human intelligence.” In 2018, an extended definition of AI was adopted by the Commission’s High-Level Expert Group on Artificial Intelligence.

In the proposed AIA, the Commission itself decided to include a definition not of AI as such, but of “AI system,” intended to be as technology-neutral as possible and to resist obsolescence. This is vital in light of the rapid technological development and evolution of the market situation in the field of AI.

But the Commission’s proposed definition was criticised by many stakeholders as overbroad, capable of sweeping up not only true AI systems, but nearly any computer program, including “stupid AI” or “fake AI.”

For this reason, the Council decided to revise the proposed definition. The current version of the draft defines an AI system as a system that:

  • Receives machine and/or human-based data and inputs
  • Infers how to achieve a given set of human-defined objectives using learning, reasoning or modelling implemented with the techniques and approaches listed in Annex I, and
  • Generates outputs in the form of content (generative AI systems), predictions, recommendations or decisions, which influence the environments with which the system interacts.

Annex I, cross-referenced in the definition, specifies techniques and approaches providing grounds for regarding a given system as an AI system. They include:

  • Machine-learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods, including deep learning
  • Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems, and
  • Statistical approaches, Bayesian estimation, and search and optimisation methods.

This is a fixed catalogue, which might create the impression that it will not keep pace with the rapid development of AI systems. To address this concern, the proposal would authorise the Commission to adopt delegated acts updating the list of techniques and approaches set forth in Annex I. Under the current version of the draft, the Commission would have to review the list every 24 months. Vesting the Commission with this authority would enable the list to be updated on an ongoing basis, adapting it to market and technological developments. But this approach could itself give rise to legal uncertainty for creators of AI systems, as they might not be able to predict whether a system they are designing will be subject to the EU rules.

While the narrowing of the definition of AI systems is a welcome development, this will probably not be the final wording of this key concept. It should also be pointed out that coverage of a given system by the regulation will not automatically subject it to new legal obligations. The new obligations are to extend only to AI systems whose use carries a certain level of risk (unacceptable, high, or limited). AI systems whose use entails low or minimal risk would not be subject to the obligations set forth in the new regulation. (We will discuss the risk-based approach adopted in the proposed regulation more extensively in a future article.) Regardless of the further work on the proposal, the final wording of the definition will be key for determining the scope of application of the legal framework under the AIA.

Who will be covered by the new regulation?

The scope of the proposed regulation is broad, but comparable to that adopted in the case of other EU regulations involving product safety, consumer rights, and protection of personal data. The proposal would apply to:

  • Providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are physically present or established within the Union or in a third country
  • Users of AI systems who are physically present or established within the Union
  • Providers and users of AI systems who are physically present or established in a third country, where the output produced by the system is used in the Union
  • Importers and distributors of AI systems
  • Product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark
  • Authorised representatives of providers, established in the Union.

This framing of the scope of the regulation is intended to prevent circumvention of the new provisions. An AI system would be subject to the EU rules even if it were trained and developed in a third country under contract to an operator (i.e. a provider, user, authorised representative, importer or distributor), whether established in the EU or in a third country, insofar as the output from the system would be used in the EU. Similarly, an AI system could be covered by the regulation even if it were not placed on the market or put into service in the EU. This would apply, for example, where an operator established in the EU outsourced to an operator outside the EU certain services involving operations to be performed by an AI system regarded as a high-risk system.

The draft clearly identifies the key participants in the AI value chain:

  • Providers of AI systems—natural or legal persons, public authorities, agencies or other bodies that develop an AI system or that have an AI system developed and place that system on the market or put it into service under their own name or trademark, whether for payment or free of charge
  • Users of AI systems—natural or legal persons, public authorities, agencies or other bodies under whose authority the AI system is operated.

Providers and users are the main addressees of the duties indicated in the proposal. Proportionate obligations would also be imposed on other participants in the AI value chain (importers, distributors, and authorised representatives).

The proposal entirely ignores the situation and rights of end users or clients of AI systems, i.e. persons who use AI systems as part of their personal, non-professional activity, or are the targets of the operation of such systems. It should be anticipated that the proposal will be supplemented in this respect during the course of the work within the European Parliament and the Council. After all, the regulations are designed with humans in mind and are intended to increase social trust in AI systems.

Exemption for AI systems used for R&D

A significant change proposed by the Council is a broad exemption for R&D applications. The AIA would not apply to AI systems (and their outputs) used for the sole purpose of research and development. The revised draft also provides that the regulation should have no impact on R&D activity involving AI systems insofar as such activity does not involve placement of the system on the market or putting it into service.

Although this change seems sound in theory, applying it in practice could cause problems. It is easy to imagine a situation in which an AI system originally intended solely for R&D proves so effective (particularly as it is not subject to the obligations under the regulation) that after some time—appropriately trained and developed—it begins to be used for other purposes.

The “Brussels effect”—will the AIA set a global standard?

As in the case of many other broad-ranging EU regulations (such as the General Data Protection Regulation), the AIA may also generate the so-called “Brussels effect.” In other words, the AIA has the potential to set a de facto global benchmark for regulating the design and application of AI systems. Although the final wording is not known yet, it is clear that it will exert a vital impact on the development of AI systems not only within the EU, but worldwide. This makes it all the more important to follow closely the work on the proposed regulation, which may shape the practice of creation and use of AI systems for years to come.

This article first appeared on the newtech.law blog.

Dr Iga Małobęcka-Szwast, attorney-at-law, New Technologies practice, Wardyński & Partners