Conciliatory Journalists’ Facebook accounts restored – the incident brought to light the weaknesses of automated moderation

Conciliatory Journalists’ (Sovittelujournalistit – Sopiva ry) Facebook page, along with the personal accounts of its six administrators, was restored on Tuesday the 19th of January. The accounts were deleted on Saturday the 9th of January, without any warning or explanation from Facebook.

The precise reason for the deletion of our accounts remains unclear. However, on Wednesday the 20th of January we received an e-mail from a Facebook representative confirming that the deletion was the result of a mistake in automated, AI-based moderation, along with an apology for the inconvenience.

This is consistent with our assumption that our accounts were deleted because of a Facebook event for our book club meeting, where we discussed academic research on conspiracy theories. The term “QAnon” was mentioned in the title of one of the texts assigned for reading. Furthermore, that particular event was missing when our association’s page was restored, which seems to confirm that the algorithm reacted to this content.

Over the past ten days, the contacts and means we have as journalists enabled us to reach people working for Facebook. This ultimately led to a human re-evaluation of the algorithm’s mistake. Even so, the communication from Facebook has been sparse and neutral, probably as a result of Facebook’s communication policies. With no simple appeal procedure, these kinds of situations are likely to be very frustrating for most people.

Our background as journalists and researchers, as well as the public attention to our case, may have helped us, at least by speeding up the process. During this incident, we have been contacted by several people who have struggled with similar experiences, sometimes for months. Journalists abroad have also shared similar experiences with us: having their content deleted has made their work difficult.

This experience demonstrates the problems with how social media is currently moderated. Due to the massive number of users, social media companies rely on automated moderation solutions that lack an understanding of context. When the algorithm makes mistakes, there is no room for interaction, nor a channel for the target of these measures to be heard. This means that when one is de-platformed, the only position left is that of a victim or a culprit. This rarely leads to a change in problematic behaviour; rather, it leads to simply withdrawing from arenas of public discussion.

This kind of emergence and fortification of counterpublics can be seen in recent social media events, where pro-Trump discussion in the US is gaining new ground on platforms such as Parler and Gab. Yet the same effect is also happening more generally in public discussion, including here in Finland. Aspirations for safer public discussion may simultaneously produce experiences of neglect, frustration and mistrust. Thus, those governing public discussion (whether on social media or in legacy media) should be better able to discern when discussion must be limited and when more effort should go into listening and dialogue.

We have recently taken a stand on the EU legislative initiative that aims to rein in the power of social media giants and to give the users of these platforms more control over content as well as their own data. Our case shows in a concrete way how important this initiative is. Our statement on this issue can be read here (in Finnish).

Contact:

Mikko Hautakangas

mikko.hautakangas@helsinki.fi

+358 40 190 1363

Noora Kettunen

noora@noorakettunen.com
