Our lawyers speak out
BETWEEN EFFECTIVENESS AND LIMITS: NETWORK MODERATION IN THE GARE DE LYON CASE
The case of the attacker at the Gare de Lyon train station raises a number of questions, not least about the responsibility of social networks and the effectiveness of their moderation systems.
The alleged assailant, originally from Mali, had posted several videos on TikTok in French and in his native language. In these videos, he expressed his hatred of France and the French. I had the opportunity to comment on this case on CNEWS.
With over 45 million monthly users in the European Union, TikTok has been designated a very large online platform under the Digital Services Act (DSA).
As such, the social network must identify systemic risks, such as the dissemination of illegal content through its service, and implement reasonable, proportionate and effective measures to mitigate them. The DSA places particular emphasis on the resources dedicated to content moderation and on the rapid removal of, or blocking of access to, illegal content, notably illegal hate speech.
Did the moderation systems of Facebook and TikTok, where the first videos were posted at the end of 2022, detect these videos? Did they receive notifications of their illicit nature, and did they react quickly? Did they have sufficient resources, human, financial and algorithmic, to moderate this hateful content?
Did TikTok have the means to moderate the comments made by the alleged attacker in one of the Malian languages? Unfortunately, it is highly unlikely that even the largest platforms have moderators for every language. The same applies to the technical means employed, since moderation algorithms are trained primarily on the most widely used languages.
The attack at the Gare de Lyon is an opportunity to open a debate on how to improve platforms' moderation systems, both human and algorithmic, in a context where hate speech and other illicit content can be expressed in less commonly used languages.
https://www.cnews.fr/emission/2024-02-05/180-minutes-info-emission-du-05022024-1449711