AI Regulation: The EU’s First Legal Framework for AI

2.4.2025 | Author: Peter Čičala

Artificial intelligence (AI) is not entirely new, but its applications and scope have expanded dramatically in recent years, making it one of the most important technological innovations and an integral part of everyday life. This is due in part to significant advances in algorithms, computing power, and data availability, which have made it possible to achieve results previously considered unimaginable. We feel its impact in many areas of our lives, from AI assistants to more complex applications such as disease diagnosis or predictive models in financial markets. However, the rapid development of AI also brings a host of risks and questions about how it is used, and especially about its legal regulation. In response, the European Union adopted the Artificial Intelligence Regulation, the first piece of legislation of its kind.


Reasons for Adoption and Purpose of the Artificial Intelligence Regulation

The adoption of legislation uniformly regulating the use of artificial intelligence was only a matter of time. Although existing legislation provides a certain degree of legal protection, it is insufficient to address the challenges that AI may bring and is already bringing. The purpose of the Regulation is therefore to establish uniform rules for the development of AI, its placing on the market, and the use of AI systems, so that they comply with European Union law. Through uniform legislation for Member States, the European Union aims to protect EU citizens from the potential negative effects of artificial intelligence, safeguard their fundamental rights, and promote the deployment of safe and trustworthy systems across the entire EU single market. At the same time, the Regulation aims to improve AI literacy and to support the development of AI as a fundamental prerequisite for economic growth and higher living standards.

Other reasons for adoption include issues related to liability for damages that artificial intelligence may cause, specifically to what extent the relevant parties are liable in the event of such damages. At the same time, it was also necessary to take into account the risk that AI could, for example, lead to violations of competition rules, particularly in the case of companies that have access to large amounts of data with which they can eliminate their competition.

What is an AI system, and how does the Artificial Intelligence Regulation define it?

The Artificial Intelligence Regulation, in Article 3, defines an AI system as: “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

In other words, it is a technological system designed to perform tasks autonomously, that is, to act without human intervention, to adapt and learn from the information it receives, and to derive various outputs from that information, such as predictions, recommendations, or decisions that can influence the real world (for example, autonomous vehicles) or a virtual environment (for example, online shopping recommendations). An AI system is thus capable of exercising abilities similar to those of humans, such as learning, reasoning, and creating new things.

We encounter artificial intelligence in everyday activities such as online search, personalized shopping recommendations, smart homes, and translation tools, often without even realizing it. It is also applied in the public sector, for example in autonomous trains, and in healthcare for making new medical discoveries or diagnosing diseases, as mentioned at the beginning of this article. Modern vehicles also commonly use AI for safety features and navigation.

Which entities does the AI Regulation affect?

The AI Regulation applies to both public and private entities within and outside the EU if an AI system is placed on the EU market or if its use has an impact on individuals located in the EU. The Regulation specifies them in more detail in Article 2 as:

  • providers,
  • entities deploying AI systems,
  • importers,
  • distributors of AI systems,
  • product manufacturers, and
  • affected persons located in the EU.

Perhaps the most significant entity covered by the Regulation is the provider. A provider is defined as a natural or legal person, public authority, or other entity that develops or has developed an artificial intelligence system with the aim of placing it on the market or putting it into service under its own name or trademark, whether for remuneration or free of charge.

A deployer (referred to in earlier drafts as a user), within the meaning of the AI Regulation, is any natural or legal person, public authority, or other entity that uses an AI system under its authority, except where the AI system is used in the course of a personal, non-professional activity.

A distributor, on the other hand, is a natural or legal person in the supply chain, other than the provider or the importer, who makes an AI system available on the EU market.

Risk assessment of AI systems: 4 risk levels

The Regulation introduces an approach based on the level of risk that individual AI systems pose to society. Depending on the assessment of the risk level, different obligations apply to AI systems. Risk levels are divided into the following categories:

  • Unacceptable risk (Chapter II of the Regulation)
  • High risk (Chapter III of the Regulation)
  • Specific risk related to transparency (Chapter IV of the Regulation)
  • Minimal risk
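
For compliance teams that keep an internal inventory of their AI systems, the four tiers above can be modeled as a simple lookup. The following is only an illustrative sketch: the enum, the use-case labels, and the mapping are our own hypothetical examples, and actual classification always requires a proper legal assessment under the Regulation.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the AI Regulation (illustrative labels)."""
    UNACCEPTABLE = "Chapter II (prohibited practices, Art. 5)"
    HIGH = "Chapter III"
    LIMITED = "Chapter IV (transparency)"
    MINIMAL = "no specific obligations"

# Hypothetical triage table mapping internal use-case labels to tiers.
# These labels and assignments are illustrative assumptions, not the
# Regulation's own wording.
TRIAGE = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "exam_grading": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    # Conservative default: treat unknown use cases as high-risk
    # until a legal assessment says otherwise.
    return TRIAGE.get(use_case, RiskTier.HIGH)
```

A conservative default (unknown systems treated as high-risk until reviewed) mirrors how many compliance programs approach unclassified tools.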

Under the AI Regulation, AI practices posing an unacceptable risk are considered a clear threat to the safety, well-being, and rights of individuals and are therefore prohibited. These include, for example: harmful AI-based manipulation and deception, harmful AI-based exploitation of vulnerabilities, assessing or predicting the risk of individuals committing criminal offenses, and emotion recognition in workplaces and educational institutions. The Regulation also prohibits the use of artificial intelligence for predictive policing based on profiling. Prohibited practices are defined in detail in Article 5 of the Regulation.

High-risk systems are subject to the strictest obligations regarding their operation, use, development, and placing on the market. The following uses of AI systems are considered high-risk:

  • AI safety components in critical infrastructure (e.g., in transportation), the failure of which could endanger the life and health of citizens,
  • AI solutions used in educational institutions that may determine access to education and career paths (e.g., grading exams),
  • uses of AI in law enforcement that may interfere with people’s fundamental rights (e.g., assessing the reliability of evidence), and so on.

Other AI systems that may be considered high-risk are based on Annex III of the AI Regulation, which further specifies areas in which the use of AI systems may pose a high risk of causing harm or resulting in an undesirable outcome.

Artificial intelligence systems with limited risk (the transparency-related category mentioned above) are not subject to the same strict obligations as high-risk systems; however, they must still meet certain transparency requirements set out in the Regulation.

What obligations will the relevant entities have to fulfill?

Depending primarily on the status of the relevant entity, the risk level of the AI system, and the field in which the AI system is used, the AI Regulation sets out a wide range of obligations, ranging from informing employees or employee representatives to implementing a quality management system, undergoing conformity assessment, obtaining an EU declaration of conformity, fulfilling registration obligations, and others, which we will address in detail in future articles.

Entry into force and application of the Regulation

The AI Regulation entered into force on August 1, 2024, and becomes fully applicable on August 2, 2026, with certain exceptions:

  • provisions on prohibited AI practices and AI literacy obligations apply from February 2, 2025,
  • rules for general-purpose artificial intelligence models apply from August 2, 2025,
  • rules for high-risk AI systems embedded in regulated products apply from August 2, 2027.
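
For readers who track the Regulation's phased timeline programmatically, for example in an internal compliance checklist, the dates above can be sketched as a small Python helper. The function name and milestone labels are our illustrative assumptions; the dates themselves are those listed above.

```python
from datetime import date

# Key application dates of the AI Regulation's phased timeline.
MILESTONES = [
    (date(2025, 2, 2), "prohibitions and AI literacy obligations apply"),
    (date(2025, 8, 2), "rules for general-purpose AI models apply"),
    (date(2026, 8, 2), "the Regulation becomes fully applicable"),
    (date(2027, 8, 2), "rules for high-risk AI in regulated products apply"),
]

def provisions_in_force(on: date) -> list[str]:
    """Return the milestones already applicable on a given date."""
    return [label for when, label in MILESTONES if on >= when]
```

For instance, a check run in early 2025 would report only the first milestone, while one run after August 2026 would also include full applicability.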

Conclusion

The AI Regulation represents the first major step toward the legal regulation of artificial intelligence within the European Union. However, as a pioneering legislative instrument, it contains several vague and general provisions, which is natural given that this is a legal field in its early stages of development.

For this reason, it can be expected that the adoption of the regulation alone will not be sufficient to address all practical and legal issues arising from the use of artificial intelligence. It will be necessary to gradually supplement the legal framework through further secondary legislation, guidelines, delegated acts, as well as through the interpretation of legal norms by relevant judicial institutions.

The case law of the Court of Justice of the European Union will be of fundamental importance in this regard, as its interpretation of the provisions of the Regulation will contribute to clarifying them, establishing consistent application practices, and eliminating interpretative ambiguities.

At our law firm, Hronček & Partners, s. r. o., we actively monitor developments in legal regulation in the field of artificial intelligence so that we can provide our clients with high-quality, up-to-date, and comprehensive legal advice even in this dynamically evolving area.

To ensure a comprehensive range of services, we have also entered into partnerships with foreign certification bodies, enabling us to provide our clients with assistance throughout the entire process—from legal analysis and consultation to obtaining a certificate of compliance with the requirements of the Artificial Intelligence Regulation.


Peter Čičala


He studied at the Faculty of Law of Trnava University in Trnava, where he successfully completed his master’s degree in 2024 by passing the state examination in civil, criminal, and labor law, along with the defense of his master’s thesis on the topic “Procedural and Other Aspects of Detecting Corruption Offenses.” During his studies, he worked at the Slovak Environmental Agency as a project manager within the Recovery Plan, where he primarily collaborated with the legal department on preparing opinions regarding the allocation of funds from the Recovery Plan mechanism. He has been working at the law firm Hronček & Partners s. r. o. since 2024 as a legal trainee. He specializes in competition law, commercial law, European law, and international law. One of the projects he has been involved in was a collaboration with an investor from the People’s Republic of China regarding the international transit of goods to the United States, specifically concerning European and international legal regulations on the rules of preferential and non-preferential origin of goods, including the legal provisions of the Union Customs Code (UCC). Among other things, he is currently involved in a development project for family businesses, the aim of which is to provide expert advice for the successful management of intergenerational succession in family businesses.