"...from Mary Shelley’s *Frankenstein* to the classic *Pygmalion*, from the story of the Prague Golem to Karel Čapek’s robot, whose author coined the term, people have dreamed of the possibility of creating intelligent machines, most often in the form of androids with human characteristics..." This is the opening of European Parliament Resolution No. 2015/2103, which recommends that the Commission take legislative steps to amend civil law in the field of robotics. The introduction itself emphasizes the human need to integrate artificial intelligence into everyday life and, therefore, the necessity of establishing a general definition of robots and AI and their legal regulation.
The challenge of establishing a general definition
The Commission offers the first definition of artificial intelligence in its Communication No. 2018/237, where it defines the term as “systems that exhibit intelligent behavior by analyzing their environment and taking actions—with a certain degree of autonomy—to achieve specific goals.” Legislation governing the status of artificial intelligence does not always keep pace with the technology itself. Consequently, later definitions focus not merely on defining artificial intelligence per se, but rather on AI as a system.
Benefits, Risks, and Challenges of AI
Artificial intelligence offers numerous benefits, particularly for citizens, by making everyday tasks simpler, faster, and more efficient in areas such as transportation, healthcare, and education. The aforementioned EP Resolution attributes the steady growth in employment over the past 200 years primarily to technological development, with the estimated increase in labor productivity due to artificial intelligence expected to range from 11 to 37 percent by 2035.
Artificial intelligence can also help strengthen democracy, for example by countering disinformation campaigns or cyberattacks. Currently, AI is also used to detect and respond to inappropriate behavior in the online environment.
Artificial intelligence carries a number of risks, particularly the still-unresolved issue of the legal personality of AI systems, which is directly linked to liability for damages. It therefore presents a host of challenges for the future, especially regarding the need for a well-defined conceptual framework. Another significant risk is the infringement of individuals’ privacy and personal data.
As far as employment is concerned, artificial intelligence offers the aforementioned benefits on the one hand; on the other, up to 14% of jobs in OECD countries can be easily automated, posing a displacement risk that many workers will find difficult to counter.
AI Risks Associated with Social Media
Social media is now the primary source of news and information for many people, offering artificial intelligence a vast arena. It is precisely in this environment that AI poses a risk, as it represents a direct, daily, and very difficult-to-control clash between artificial intelligence algorithms and human nature, which is easily influenced by emotions. In the United States, lawmakers are therefore considering legislation to hold artificial intelligence accountable for the dissemination of harmful content; however, the question of AI’s legal personality in the application of this liability arises once again. (1)
Slovak Legislation
AI, as well as the related concept of cybersecurity, is not bound by national borders and poses a challenge for the entire Union that cannot be addressed solely through national initiatives. Therefore, Slovakia’s entire legal framework for artificial intelligence is directly dependent on that of the European Union.
Can artificial intelligence be trusted?
In June 2018, the European Commission established a High-Level Expert Group on Artificial Intelligence, which identified three components that trustworthy artificial intelligence should possess. Trustworthy AI should be “a) lawful, meaning it should comply with all applicable laws and regulations; b) ethical, meaning it should ensure compliance with ethical principles and values; and c) robust, both from a technical and social perspective, as AI systems can cause unintended harm even with good intentions.” (2) Legislators may later draw inspiration from these insights provided by the expert group.
Regulation as Strict as the GDPR
However, the Union’s AI legislation also aims to ensure that these regulations remain relevant beyond its borders. The EU’s proposed AI legislation covers providers of AI systems within the EU, regardless of whether they are established in the EU or not (3). It appears that with this legislation, the EU is seeking to replicate the global reach it achieved with the introduction of the GDPR. However, the proposal is still awaiting its first reading in the European Parliament, so it is too early to draw firm conclusions.
Sources:
1) https://www.bloomberg.com/news/articles/2021-10-26/facebook-s-algorithms-increasingly-in-sights-of-u-s-lawmakers
2) https://op.europa.eu/sk/publication-detail/-/publication/d3988569-0434-11ea-8c1f-01aa75ed71a1/language-en/format-PDF
3) https://eur-lex.europa.eu/legal-content/EN-SK/TXT/?fromTab=ALL&from=EN&uri=CELEX%3A52021PC0206