Disinformation in the Digital Space: How Is the European Union Strengthening Online Security?

May 2, 2025 | Author: Róbert Hronček

The misuse of disinformation narratives by certain political parties poses a serious threat to democratic competition.

The spread of disinformation in Slovakia remains a significant problem that influences public opinion, media freedom, and political dynamics.

Surveys show that a significant portion of the population is susceptible to conspiracy theories. Top political leaders themselves engage with “alternative” media outlets known for spreading disinformation and conspiracy theories, raising concerns about media freedom, the integrity of information, and democratic processes.

High levels of polarization, a lack of digital literacy, and the consumption of fast-paced information without verifying its accuracy or context provide fertile ground for the uncontrolled spread of disinformation in the online space.

Solutions require a multifaceted approach, including legislative measures, media literacy initiatives, and support for independent journalism to protect democratic integrity.

The European Union is working to protect democratic and electoral processes as well as civic discourse through several acts regulating how digital technologies operate in the EU, including the Digital Services Act (DSA).

The aim of the DSA is to limit the unchecked power of large digital platforms by requiring them to assess and mitigate risks related to their impact on protected interests. This represents significant protection against disinformation, although its enforcement can be challenging and is causing transatlantic tensions.

Online platforms providing services to European users must systematically identify, assess, and mitigate risks related to content—ranging from hate speech to disinformation and election interference. Non-compliant operators face potentially high fines of up to six percent of their global annual turnover.

However, labeling European speech regulations as censorship is misplaced. No provision of the DSA requires platforms to remove lawful content. It obliges them to detect and counter systemic manipulation tactics, particularly during elections.

Companies are not required to block all user content before it is uploaded, but only to take measures to minimize illegal content and remove it once it is identified as illegal. At the same time, the DSA discourages over-removal and requires platforms to publish transparency reports on removal requests, justify their decisions, and offer users appeal mechanisms.

It was expected that the DSA’s risk assessment and management obligations would compel platforms to identify threats in the digital space and adopt meaningful reforms. However, the risk assessments published under the DSA in November 2024 focus too narrowly on election integrity, while neglecting the broader risks of disinformation to democracy.

For democracy to function, civic discourse must be inclusive, pluralistic, and accessible; it must acknowledge and respect differences in opinion and socio-political diversity; it must focus on facts and informed debate; and it must create space for citizen engagement and representation in decision-making processes.

This is particularly critical in electoral processes, but solutions must be comprehensive, and platforms cannot focus too narrowly on the issue of election interference alone. For if the digital public sphere is plagued in the long term by disinformation, polarization, and the suppression of legitimate speech, then measures taken to ensure the integrity of the electoral process itself are merely reactive and insufficient.

The recent adoption of codes of conduct and their incorporation into the DSA could also help. While participation is voluntary, signatories commit to the codes’ obligations by the act of signing. For signatories designated as very large online platforms (VLOPs) and very large online search engines (VLOSEs), this can help ensure the implementation of appropriate risk mitigation measures.

The Code of Conduct on Combating Online Hate Speech has been signed by 12 signatories, seven of which are designated as VLOPs (Facebook, Instagram, LinkedIn, Snapchat, TikTok, X, and YouTube), along with five other signatories (Dailymotion, Jeuxvideo.com, Microsoft, Rakuten Viber, and Twitch). On January 20, 2025, the Commission and the European Digital Services Board approved its incorporation into the DSA framework.

The Code of Practice on Disinformation is a unique framework agreed upon by representatives of online platforms, leading technology companies, advertising-industry stakeholders, fact-checkers, researchers, and civil society organizations to combat disinformation on a voluntary and self-regulatory basis.

The Code, which was adopted by signatories as early as 2018 and later revised and strengthened in 2022 based on European Commission guidance, currently has more than 40 signatories, including major VLOPs and VLOSEs—Google Search & YouTube (Google), Instagram and Facebook (Meta), Bing and LinkedIn (Microsoft), and TikTok.

On February 13, 2025, the European Digital Services Board and the Commission assessed the Code of Practice on Disinformation and concluded that it meets the conditions for codes of conduct under the DSA: if fully implemented, its strict obligations and detailed measures together constitute a robust set of mitigation measures and can serve as a relevant criterion for determining a platform’s compliance with the DSA.

The amendment will take effect on July 1, 2025. Thanks to this incorporation, the Code of Practice will serve as a relevant criterion for determining compliance with the DSA.

The value of these commitments lies in the fact that they are the result of an agreement among a broad group of stakeholders, building on existing best practices in the industry. The Code of Practice on Disinformation contains 44 commitments and 128 specific measures, such as:

  • demonetization of disinformation – limiting financial incentives for those who disseminate it;
  • transparency of political advertising and more effective labeling so that users can recognize political advertising;
  • ensuring the integrity of services, which includes limiting fake accounts, bot-driven amplification of reach, and other manipulative behavior used to spread disinformation;
  • empowering users, researchers, and the fact-checking community to better identify disinformation, with broader access to data and EU-wide fact-checking coverage.

The incorporation of the Code of Conduct+ on hate speech and the Code of Practice on Disinformation into the DSA represents a significant development in the EU’s regulatory approach to online content management. At the same time, it is concerning that some signatories have withdrawn from the fact-checking chapter.

However, the European Commission emphasizes that it considers the code sufficient only if it is fully implemented. The shift from voluntary commitments to enforceable obligations can ensure that digital platforms remain accountable for their content moderation policies while promoting a more transparent and safer online environment and reducing the risk of public opinion manipulation and the spread of disinformation.

Furthermore, this integration serves as a model for regulatory frameworks worldwide and demonstrates how binding obligations can be combined with industry cooperation to mitigate harmful content while preserving freedom of expression.

At the same time, I advocate for the simultaneous introduction of stricter regulation and more rigorous sanctions not only against platforms but also against those who disseminate disinformation, as they often profit from this activity—whether through advertising revenue or by serving the interests of various interest groups.

Moreover, the abuse of disinformation narratives by certain political parties poses a serious threat to democratic competition. In the long term, however, the key solution lies in systematically educating the public in media literacy, critical thinking, and verifying the accuracy of information.

A prerequisite for this is a sincere commitment by government officials to ensure citizens’ access to this type of education while simultaneously rejecting the use of disinformation as a tool for political manipulation.


Róbert Hronček

JUDr. Róbert Hronček is the founder and managing partner of the law firm Hronček & Partners. In his practice, he specializes in commercial law, regulation, compliance, and the legal aspects of doing business in rapidly evolving industries. Drawing on his extensive experience, he provides strategic advice to companies of all sizes—from innovative startups to established firms and corporations. As a visionary leader of the law firm, he actively shapes the future of legal services through innovation, a modern approach to consulting, and the digitization of legal processes. He focuses on building valuable partnerships that provide clients with legal certainty and comprehensive services. In addition to his legal practice, he is an active venture capital investor, supporting the growth and development of promising technology and innovation companies. His expert commentary reflects not only legislative changes but also broader economic and technological trends shaping the business environment.