AI Act: What the EU thinks about artificial intelligence

Artificial intelligence has long been a reality in many areas of human life. Computer systems can now perform tasks that previously required human intellectual effort, such as translating languages or tailoring recommendations to a specific user. With the widespread use of artificial intelligence technologies, legislators in many countries are trying to develop a regulatory framework. Regulation of artificial intelligence may also affect the application of other laws. For example, this may relate to the GDPR, which gives data subjects the right not to be subject to a decision based solely on automated processing, including profiling, that produces legal effects concerning them or similarly significantly affects them. One such initiative is the Artificial Intelligence Act, a proposal by the European Commission for a regulation on the use of artificial intelligence, which we discuss in this article.

Artificial intelligence can be used in areas such as climate change, the environment, health, the public sector, finance, mobility, home affairs, and agriculture. Explaining the need for a regulation on AI, the European Commission emphasises that certain elements and techniques of artificial intelligence technologies may create new risks or have negative consequences for individuals or society as a whole. As for the regulation itself, the European Commission has highlighted the following objectives it seeks to achieve by adopting this law: 

  • ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values; 
  • ensure legal certainty to facilitate investment and innovation in AI; 
  • enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems; 
  • facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation. 

In terms of concrete steps, the European Commission proposes to prohibit certain particularly harmful artificial intelligence practices that are contrary to the values of the European Union, while special restrictions and safeguards are proposed for certain uses of remote biometric identification systems for law enforcement. The proposed regulation also contains a methodology for identifying “high-risk” AI systems that must comply with certain requirements and procedures. These requirements will be enforced through a governance system at the Member State level, building on existing structures, and through the creation of a European Artificial Intelligence Board (EAIB) responsible for implementing the regulation. It is also proposed to create regulators at the national level. It should be recalled that a similar strategy, with a dedicated body and national regulators, was chosen during the adoption of the GDPR: for more effective implementation of the GDPR, the legislator created a responsible body, the European Data Protection Board (EDPB), and national supervisory authorities were appointed in each EU country.

What will the AI Act regulate?

Let us consider what the AI Act consists of. The proposed regulation contains 12 titles comprising 85 articles. Let us look at each of the titles: 

The first title describes the subject-matter of the regulation, its scope and the main definitions used in the text of the law. The regulation is proposed to have an extraterritorial scope, i.e. it can apply to companies outside the European Union. 

The second title covers the prohibited artificial intelligence practices described in Article 5 of the regulation. One example of such a prohibited practice is the use of systems that exploit the vulnerabilities of a specific group of persons due to their age or physical or mental disability in order to materially distort the behaviour of a person belonging to that group in a manner that causes, or is likely to cause, that person or another person physical or psychological harm.

The third title is the most extensive and is dedicated to high-risk AI systems. It consists of five chapters that set out when an artificial intelligence system will be considered high-risk; the requirements for such systems (risk management, data governance, technical documentation, record-keeping, transparency and provision of information to users, human oversight, accuracy, robustness, cybersecurity); the obligations of providers and users of high-risk AI systems and other parties; provisions on notifying authorities and notified bodies; and tools such as standards, conformity assessment, certificates and registration. 

The fourth title deals with transparency obligations for certain AI systems. In particular, it describes the use of deep fake AI technology. Thus, according to Article 52 of the AI Act, users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful (‘deep fake’), shall disclose that the content has been artificially generated or manipulated.

The fifth title contains measures in support of innovation and describes regulatory sandboxes for AI. The purpose of such sandboxes is to enable AI developers and regulators to collaborate in a controlled environment. The pilot project of the first regulatory sandbox on artificial intelligence was presented in June 2022 in Brussels jointly by the Spanish government and the European Commission. 

The sixth title is dedicated to governance issues. Articles 56-58 concern the European Artificial Intelligence Board’s establishment, structure and tasks. Article 59 describes the process of appointing national competent authorities. 

The seventh title consists of Article 60, which describes the creation and maintenance of the EU database for stand-alone high-risk AI systems. This task is entrusted to the European Commission, in cooperation with the EU Member States.

The eighth title consists of three chapters. The first describes post-market monitoring by providers (analysis of experience with AI systems placed on the market) and post-market monitoring of high-risk AI systems. The second describes obligations to share information on incidents and malfunctions in systems. The third chapter is devoted to the enforcement of the regulation, namely market surveillance and control of AI systems in the EU market, access to data and documentation, the procedure for dealing with AI systems presenting a risk at national level, and other issues regulated by Articles 63-68 of the regulation.

The ninth title is dedicated to the creation of codes of conduct, and the tenth title covers confidentiality and the penalties that can be imposed on companies. The regulation provides for three levels of fines:

  • a fine of up to 30 000 000 EUR or, if the offender is a company, up to 6 % of its total worldwide annual turnover for the preceding financial year, whichever is higher, for breaches of Articles 5 and 10 of the AI Act;
  • a fine of up to 20 000 000 EUR or, if the offender is a company, up to 4 % of its total worldwide annual turnover for the preceding financial year, whichever is higher, for violations of other requirements and obligations of the regulation;
  • a fine of up to 10 000 000 EUR or, if the offender is a company, up to 2 % of its total worldwide annual turnover for the preceding financial year, whichever is higher, if the company has provided incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request.

Administrative fines may also be imposed on EU institutions, agencies and bodies, the amounts of which are set out in Article 72. 
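The “whichever is higher” rule in the fine tiers above can be illustrated with a short calculation. The sketch below uses hypothetical turnover figures purely to show the arithmetic; it is not legal guidance:

```python
def fine_cap(fixed_cap_eur: float, turnover_share: float, annual_turnover_eur: float) -> float:
    """Maximum administrative fine under the AI Act proposal: the higher of a
    fixed cap or a share of total worldwide annual turnover for the preceding
    financial year."""
    return max(fixed_cap_eur, turnover_share * annual_turnover_eur)

# Tier 1 (breaches of Articles 5 and 10): up to 30 000 000 EUR or 6 % of turnover.
# For a hypothetical company with 1 billion EUR turnover, the 6 % cap is higher:
fine_cap(30_000_000, 0.06, 1_000_000_000)  # 60 000 000 EUR

# For a hypothetical company with 100 million EUR turnover,
# the fixed cap of 30 000 000 EUR is higher and therefore applies:
fine_cap(30_000_000, 0.06, 100_000_000)  # 30 000 000 EUR
```

The same function covers the other two tiers by substituting the 20 000 000 EUR / 4 % and 10 000 000 EUR / 2 % parameters.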

The eleventh title describes the delegation of power and the committee procedure, and the twelfth title contains the final provisions. 

What’s next?

Currently, EU lawmakers are actively discussing the provisions of the regulation and proposing changes. Thus, on December 6, 2022, the EU Council adopted a common position (“general approach”) on the AI Act. This will allow the Council to enter negotiations with the European Parliament once the latter adopts its own position. The proposal will become law once the EU Council and the European Parliament agree on a common version of the text.

In addition to the regulation of artificial intelligence in the EU, it is also worth following the development of legislation in other countries. For example, in Canada, it is proposed to adopt the Artificial Intelligence and Data Act (AIDA), which may introduce new rules for developing and deploying artificial intelligence systems. Similar initiatives are also being considered in the US, Brazil and other countries.

Since the law has yet to be adopted as of the beginning of 2023, it is worth waiting for the final version of the AI Act to see precisely how the legislation around artificial intelligence will be built. However, by understanding the direction of AI development and its legislative regulation, and with the proposal’s text in hand, companies can already take the first steps towards future compliance, for example, by drawing up a compliance roadmap or conducting a risk assessment.
