The 2021 German Federal Election on Social Media: An Analysis of Systemic Electoral Risks Created by Twitter and Facebook Based on the Proposed EU Digital Services Act

September 6, 2021 in Report

Safeguarding democratic elections is hard. Social media plays a vital role in the discourse around elections, and platforms have become central spaces for electoral campaigns, often supplanting traditional media outlets. Many politicians and parties communicate their messages primarily through their Twitter and Facebook profiles. These platforms can be a valuable tool, but they also contribute to risks such as the dissemination of disinformation or other content that can infringe the right to free and fair elections.

This study assesses the ‘systemic electoral risks’ created by Twitter and Facebook and the mitigation strategies employed by the platforms, based on the European Commission’s 2020 proposal for the Digital Services Act (DSA). It provides an external risk assessment regarding the right to free and fair elections on very large online platforms (VLOPs), focusing on the roles of Twitter and Facebook in the German federal elections taking place on 26 September 2021. The data collection period covered the second half of May 2021.

We analysed three systemic electoral risk categories: 1) the dissemination of illegal content, 2) negative effects on electoral rights, and 3) the influence of disinformation. The study found a significant number of problematic posts and tweets: 6.72% of all election-related Facebook posts and 5.63% of election-related tweets fell into at least one of the risk categories in our codebook, meaning they were potentially illegal, false, or infringements of electoral rights.

Of the problematic posts on Facebook, 4.05% were likely illegal under German law, 35.14% violated the platform’s community standards or terms of service, 93.24% could be considered disinformation, and 46.65% violated electoral rights. Of the problematic tweets, 14.52% broke platform rules, 100% could be considered disinformation, and 51.61% infringed on electoral rights. Because a single post or tweet can fall into several categories at once, these shares add up to more than 100%.
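
The following minimal Python sketch illustrates how such per-category shares are computed over the set of problematic posts. The labels and counts are invented for illustration and are not taken from the report’s data.

```python
# Hypothetical coded sample: each post is labelled for several (overlapping) risk categories.
coded_posts = [
    {"illegal": False, "tos_violation": True,  "disinformation": True,  "electoral_rights": False},
    {"illegal": True,  "tos_violation": True,  "disinformation": True,  "electoral_rights": True},
    {"illegal": False, "tos_violation": False, "disinformation": True,  "electoral_rights": True},
    {"illegal": False, "tos_violation": False, "disinformation": False, "electoral_rights": False},
]

# A post counts as problematic if it falls into at least one risk category.
problematic = [p for p in coded_posts if any(p.values())]

# Per-category shares are taken over problematic posts only, so they can sum to over 100%.
for category in ("illegal", "tos_violation", "disinformation", "electoral_rights"):
    share = 100 * sum(p[category] for p in problematic) / len(problematic)
    print(f"{category}: {share:.2f}% of problematic posts")
```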

Our research shows that there is far too much problematic content on platforms. The shares we found, 6.72% on Facebook and 5.63% on Twitter, were far higher than we expected. Given the widespread public debate about election-related challenges, we did not expect this level of problematic content to still be present, especially after the platforms had implemented content moderation and design interventions, and at such an early stage of the election campaign. As the campaign heats up before 26 September 2021, the proportion of problematic content is likely to rise even above what we found in May 2021. Consequently, we have developed the following policy recommendations:

1. Platforms need to create more effective and sustainable response mechanisms to safeguard elections. Our research suggests that all existing measures currently fall far short of what is needed. Neither Facebook nor Twitter is doing enough to remedy the current situation.

2. Platforms should implement research-based recommendations to improve their mitigation measures against problematic content before and during elections. Our research suggests that platforms are not sufficiently drawing on the large body of knowledge and research on how to mitigate risks to free and fair elections and democracy, including interface-design solutions and tools that can empower users in the online ecosystem. The platforms should thus be required under Art. 27 DSA to develop mitigation measures together with civil society organisations and independent experts. Criteria for cooperation should be defined where appropriate.

3. Platforms have to become more transparent about the content moderation tools they deploy, including algorithmic transparency. In this vein, platforms should publicly disclose the number of false positives and false negatives and what content is flagged by algorithms and trusted flaggers (so-called precision and recall data); a brief illustration of how such figures are computed follows these recommendations. Especially in the context of disinformation, the time and intensity of exposure, combined with the visibility of disinformation content on platforms, is meaningful information for better understanding its spread online. Moreover, platforms should provide information on the extent to which they profit (intentionally or unintentionally) from systemic risks (e.g., estimations of turnover generated through disinformation or illegal content). These additional transparency requirements could be included under Art. 23 DSA.

4. Platforms’ Terms of Service need to be expanded to cover all forms of disinformation and infringements of electoral rights more effectively, especially during elections. Only a small part of the problematic content we found was covered by the existing terms of service of Twitter and Facebook.

5. Platforms need to adopt best practices in their response mechanisms to problematic content. The differences between Twitter and Facebook suggest real differences in the quality of their responses to problematic content about elections. If Twitter is doing better than Facebook despite Facebook’s dominant economic position in the market, Facebook should be expected to do at least as well as Twitter. However, both could still do significantly better.

6. Almost all the problematic content we found was legal content. Platforms should be obliged to disclose how they distinguish between permissible and illegal content and to conduct risk assessments for the types of legal but problematic content we discuss in this report. Public disclosure of such information may decrease uncertainty among users and, at the same time, increase trust in platforms’ content moderation processes. Furthermore, we suggest, in line with our previous research, that focusing only on illegal content to safeguard elections will be ineffective.

7. Platforms should focus on curation, moderation, and design measures that promote free expression and user agency over the information they receive and impart. In their risk mitigation measures to safeguard elections, platforms should prioritise design changes and other measures that are more likely to promote free expression. Content moderation is clearly also necessary, but it is more likely to cause problems and needs to be done in a transparent and accountable manner.

8. Categories of analysis need to be improved. Despite a relatively high intercoder reliability rate, we often struggled to clearly delineate the boundaries of the categories for electoral rights violations and disinformation. By comparison, the legal categories were easier to operationalise and more clearly delineated. Our interviewees also suggested that the existing categories of systemic electoral risk in the DSA and the relevant academic literature still need to be more clearly delineated and easier to reproduce and compare. Even though our categories are based on the DSA and state-of-the-art academic literature, we believe that additional research and policy development are needed to operationalise and clearly delineate what constitutes disinformation and electoral rights violations. (A brief sketch of one common intercoder reliability measure follows these recommendations.)

9. EU legislators should expand DSA risk assessments and DSA Article 29 transparency criteria beyond very large online platforms. Our research has focused primarily on large online platforms. However, we agree with many of the experts we interviewed, who suggested that smaller platforms can also have highly problematic effects on elections. In particular, given that the DSA’s boundary between very large online platforms and other platforms (10% of all EU citizens) seems highly arbitrary, we suggest implementing relevant parts of the risk-based approach beyond VLOPs alone. To do this effectively, regulators need to focus on the impact platforms can have rather than the number of users they have, going beyond those platforms that are already covered by the DSA and should remain so. In the context of elections, this means that any platform for which there is scientific evidence that it can influence an EU Member State’s election or an EU election should be included in the risk-based approach. Scientific studies of this kind already exist for some platforms, but further impact assessments would be needed for others. Furthermore, these rules should also apply to video-sharing platforms.

10. Researchers need better access to platform data. Access to data remains highly challenging and politically charged. It was very difficult and time-consuming to gain access to the representative samples we needed to conduct this research, and after gathering the data, we remained concerned about arbitrary risks to ourselves and our partners. The experience of New York University researchers being shut out by Facebook over claims of privacy violations reflects an almost universal fear among the research community that this kind of research creates legal conflicts with the platforms. This is no way to conduct research, as it has a chilling effect on the ability of researchers to hold platforms accountable. Under Art. 31 DSA, vetted researchers must thus be granted access to relevant data to enable research that contributes to a better understanding of systemic risks, as well as of the underlying economic incentives that shape how platforms deal with them.

11. Auditors must be chosen and paid by the authorities. The DSA relies heavily on independent auditors to examine systemic electoral risks and develop effective mitigation measures. The present external risk assessment, despite being smaller in scale, would probably not have been commissioned by a platform because of its critical findings. Public auditing intermediaries should therefore be introduced to further secure and strengthen the independence of auditors and of the auditing regime. Finally, to ensure auditors’ independence, it is crucial to clarify under Art. 28 DSA that auditors are commissioned by the envisaged European Board for Digital Services.
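
As a point of reference for recommendation 3, the sketch below shows how the ‘precision and recall data’ mentioned there are derived from counts of false positives and false negatives. The counts are invented for illustration and do not come from either platform.

```python
# Hypothetical disclosure figures for an automated flagging system.
true_positives  = 820   # flagged posts that reviewers confirmed as violations
false_positives = 180   # flagged posts that turned out to be unproblematic
false_negatives = 410   # violating posts the system missed (e.g. found only via user reports)

# Precision: how often a flag is correct. Recall: how much violating content is caught.
precision = true_positives / (true_positives + false_positives)
recall    = true_positives / (true_positives + false_negatives)

print(f"precision: {precision:.2%}")   # 82.00%
print(f"recall:    {recall:.2%}")      # 66.67%
```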
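
As a point of reference for recommendation 8, the sketch below computes Cohen’s kappa for two coders as one common intercoder reliability measure. The report does not specify which measure was used, and the example annotations are invented.

```python
# Minimal intercoder reliability check for two coders, using scikit-learn.
from sklearn.metrics import cohen_kappa_score

# Category assigned by each coder to the same ten posts (hypothetical data).
coder_a = ["disinfo", "none", "electoral", "disinfo", "none",
           "none", "electoral", "disinfo", "none", "disinfo"]
coder_b = ["disinfo", "none", "electoral", "none", "none",
           "none", "electoral", "disinfo", "disinfo", "disinfo"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")  # values near 1 indicate high agreement
```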

You can read the entire report here: https://www.sustainablecomputing.eu/wp-content/uploads/2021/10/DE_Elections_Report_Final_17.pdf