Systemic Risks in the DSA and its Enforcement
By Beatriz Botero Arcila
In mid-June, I presented at the Free Expression and the DSA: Private-Public Workshop, organized at Sciences Po Law School by Kate Klonick. My presentation, “Systemic Risks and its Enforcement,” sought to provide insights into what we know so far about the Digital Services Act’s (DSA) systemic risks framework. After a brief introduction to the DSA and its systemic risk framework, I presented the enforcement processes active to date. What follows is an account of that presentation, prepared with the help of Francesca Elli.
The DSA is many things. It is a transparency and data-openness regulation, it is a due-process regulation for content moderation decisions, and it is a risk regulation. Risk regulation is a particular regulatory approach that seeks to control the creation of risks of harm, relying on instruments such as standards, prohibitions, and risk and impact assessments. A key idea is that it regulates behavior ex ante; that is, before, or at least independently of, whether the potential harm actually occurs.
As a form of risk regulation, the DSA mandates that the largest platforms identify and manage potential dangers to society. It does so by introducing a novel approach to addressing societal harms caused by online platforms: systemic risk assessments. Specifically, the DSA requires Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs), those with more than 45 million monthly active users in the EU, to rigorously evaluate the societal systemic risks “stemming from the design, functioning and use of their services, as well as from potential misuses by the recipients of the service”. Accordingly, VLOPs and VLOSEs must implement reasonable, proportionate, and effective measures to address the systemic risks they identify. These measures may involve modifications to their content moderation systems or advertising display systems, or compliance with codes of conduct and crisis protocols. Such compliance obligations are proportionate to the type and magnitude of the risks the platforms create. Under the DSA’s systemic risk framework, VLOPs and VLOSEs must consider various factors when conducting risk assessments, such as the design of recommender systems and other algorithmic systems, content moderation systems, applicable terms and conditions and their enforcement, advertising systems, and data practices. The risk assessments and mitigation measures must undergo independent annual audits and are monitored by the Commission.
However, the DSA’s definition of systemic risks is indeterminate, and it is unclear what is “systemic” about the risks enumerated in the regulation. The term “systemic risk” comes from financial regulation, where it refers to the risk of contagion: the risk that the collapse of a very large and interconnected financial institution poses to other institutions and, eventually, to the financial system as a whole. In the DSA, however, Article 34 simply lists a non-exhaustive set of (very broad) risks “which shall include the following systemic risks”: (1) the dissemination of illegal content, (2) potential negative effects on fundamental rights, such as human dignity, freedom of expression and information, data protection, and the right to non-discrimination, (3) any actual or foreseeable negative effects on civic discourse, electoral processes, and public security, and (4) potential threats in relation to gender-based violence, public health and minors, and the effects on individuals’ physical and mental well-being.
These risks are all broad and indeterminate and can be construed to include many things. Rachel Griffin, for example, has argued that they could incorporate risks to the environment (something that does not seem to have been the intention of the Parliament or the Commission). Others adopt more literal interpretations, but there is a general consensus that a clear procedure for identifying systemic risks is missing. Additionally, it remains unclear what is “systemic” about them. These risks are sometimes societal, sometimes affect individual rights, sometimes may affect only certain groups, and represent very different forms of harm (privacy harms are different from, say, negative effects on civic discourse).
Despite this breadth, the enforcement actions taken so far have not been surprising, which leads me to believe that some of these issues of indeterminacy will be addressed over time as we see more enforcement action. Platforms have a “first mover advantage” because they are the first actors to carry out the risk assessment, and thus decide what the starting point of the systemic risk assessment is. In negotiation theory, it is well understood that the party that makes the first offer creates an anchor, which exerts a strong pull throughout the negotiation and influences the final outcome. These reports, however, remain unknown to civil society, researchers, and even the national authorities in charge of enforcing the DSA for services other than VLOPs and VLOSEs. These circumstances and the structure of the enforcement process give significant power to the Commission, which has been acting swiftly, decisively, and independently to set the tone of enforcement and delineate what may count as a systemic risk under the DSA.
To try to shed some light on how the Commission may be understanding the DSA’s systemic risks, I turned to the enforcement activity to date, which comprises five open procedures, specific Guidelines on risks posed to elections, and Delegated Acts on independent audits:
Specific Guidelines: In March 2024, the Commission issued Guidelines recommending measures for VLOPs and VLOSEs to mitigate systemic risks, particularly in the context of the EU elections. These Guidelines emphasize reinforcing internal processes, implementing election-specific risk mitigation measures tailored to each individual electoral period and local context, creating specific measures linked to generative AI, cooperating with EU-level and national authorities and civil society, adopting specific measures such as incident response mechanisms, and conducting assessments.
Open procedures: There are currently five open procedures under the DSA focusing on systemic risks. They relate mainly to the issues of data access, the dissemination of illegal content, misinformation, manipulative behavior, addictive design features, and the lack of effective measures to protect minors from inappropriate content.
X (18 December 2023): The procedure looked at the possible dissemination of illegal content on X following the 7 October attacks, as well as the effectiveness of measures taken to combat information manipulation and potentially deceptive design features.
TikTok 1 (19 February 2024): The procedure focused on the protection and security of minors, specifically the risks of addiction and “rabbit hole” effects for children.
AliExpress (14 March 2024): The procedure scrutinized the lack of enforcement of terms of service prohibiting dangerous products, the transparency of recommender systems, complaint-handling mechanisms, and consumer protection.
TikTok 2 (22 April 2024): The open procedure questioned why TikTok had failed to conduct a risk assessment before launching a new feature, TikTok Lite’s rewards programme.
Meta (30 April 2024): In light of the EU elections, the open procedure analyzed deceptive advertisements and disinformation on Meta’s platforms, the visibility of political content, and the lack of an effective third-party, real-time civic discourse and election-monitoring tool. It also highlighted risks posed by addictive design features and algorithms that may reinforce behavioral addictions, especially for minors, and the current lack of effective measures to mitigate risks to civic discourse and electoral processes.
Preliminary conclusions
From the examples above, it is interesting to note that none of the procedures addresses anything actually surprising, anything that is not in the DSA, or anything that is not already considered an important risk created by social media platforms. From the current enforcement processes, it does not seem that the Commission is willing to stretch the broad definition of “systemic risks” very far, or at least not yet. At the same time, the open texture of the DSA may allow it to do so as time goes by and circumstances require it.
This leads to the conclusion that the European Commission has a great deal of power under the DSA and plays, and will continue to play, a crucial role in interpreting what constitutes systemic risks and determining the appropriate measures to address them. This is especially so now: with academia and other observers having limited access to the systemic risk reports and their evaluation, the Commission really does have the power to steer these processes.
Lastly, it is worth noting that all the enforcement processes mention the provisions related to data access for researchers, which are still not fully working as required. A Delegated Act to address this issue is expected in the near future (but we’ll leave that for another time).