Open Letter to Facebook, Twitter, and YouTube: Stop silencing critical voices from the Middle East and North Africa

December 19, 2020 in Opinion

Ten years ago today, 26-year-old Tunisian street vendor Mohamed Bouazizi set himself on fire in protest over injustice and state marginalization, igniting mass uprisings in Tunisia, Egypt, and other countries across the Middle East and North Africa.

As we mark the 10th anniversary of the Arab Spring, we, the undersigned activists, journalists, and human rights organizations, have come together to voice our frustration and dismay at how platform policies and content moderation procedures all too often lead to the silencing and erasure of critical voices from marginalized and oppressed communities across the Middle East and North Africa.

The Arab Spring is historic for many reasons, and one of its outstanding legacies is how activists and citizens have used social media to push for political change and social justice, cementing the internet as an essential enabler of human rights in the digital age.   

Social media companies boast of the role they play in connecting people. As Mark Zuckerberg famously wrote in his 2012 Founder’s Letter:

“By giving people the power to share, we are starting to see people make their voices heard on a different scale from what has historically been possible. These voices will increase in number and volume. They cannot be ignored. Over time, we expect governments will become more responsive to issues and concerns raised directly by all their people rather than through intermediaries controlled by a select few.”

Zuckerberg’s prediction was wrong. Instead, more governments around the world have chosen authoritarianism, and platforms have contributed to their repression by making deals with oppressive heads of state; opening doors to dictators; and censoring key activists, journalists, and other changemakers throughout the Middle East and North Africa, sometimes at the behest of other governments:

  • Tunisia: In June 2020, Facebook permanently disabled more than 60 accounts of Tunisian activists, journalists, and musicians on scant evidence. While many were reinstated thanks to the quick reaction of civil society groups, the accounts of Tunisian artists and musicians still have not been restored. We sent a coalition letter to Facebook on the matter, but did not receive a public response.
  • Syria: In early 2020, Syrian activists launched a campaign to denounce Facebook’s decision to take down or disable thousands of anti-Assad accounts and pages that had documented war crimes since 2011, under the pretext of removing terrorist content. Despite the appeal, a number of those accounts remain suspended. Similarly, Syrians have documented how YouTube is literally erasing their history.
  • Palestine: Palestinian activists and social media users have been campaigning since 2016 to raise awareness around social media companies’ censorial practices. In May 2020, at least 52 Facebook accounts of Palestinian activists and journalists were suspended, and more have since been restricted. Twitter suspended the account of a verified media agency, Quds News Network, reportedly on suspicion that the agency was linked to terrorist groups. Requests to Twitter to look into the matter have gone unanswered. Palestinian social media users have also expressed concern numerous times about discriminatory platform policies.
  • Egypt: In early October 2019, Twitter suspended en masse the accounts of Egyptian dissidents living in Egypt and across the diaspora, directly following the eruption of anti-Sisi protests in Egypt. In December 2017, Twitter suspended the account of one activist with over 350,000 followers, and the account still has not been restored. The same activist’s Facebook account was also suspended in November 2017 and restored only after international intervention. YouTube had removed his account even earlier, in 2007.

Examples such as these are far too numerous, and they contribute to the widely shared perception among activists and users in MENA and the Global South that these platforms do not care about them, and often fail to protect human rights defenders when concerns are raised.  

Arbitrary and non-transparent account suspensions and removals of political and dissenting speech have become so frequent and systematic that they cannot be dismissed as isolated incidents or the result of transitory errors in automated decision-making.

While Facebook and Twitter can be swift in responding to public outcry from activists or private advocacy by human rights organizations (particularly in the United States and Europe), in most cases responses to advocates in the MENA region leave much to be desired. End-users are frequently not informed of which rule they violated, and are not provided a means to appeal to a human moderator. 

Remedy and redress should not be a privilege reserved for those who have access to power or can make their voices heard. The status quo cannot continue. 

The MENA region has one of the world’s worst records on freedom of expression, and social media remains critical for helping people connect, organize, and document human rights violations and abuses. 

We urge you to not be complicit in censorship and erasure of oppressed communities’ narratives and histories, and we ask you to implement the following measures to ensure that users across the region are treated fairly and are able to express themselves freely:

  • Do not engage in arbitrary or unfair discrimination. Actively engage with local users, activists, human rights experts, academics, and civil society from the MENA region to review grievances. Regional political, social, and cultural context(s) and nuances must be factored in when developing, implementing, and revising policies, products, and services.
  • Invest in the necessary local and regional expertise to develop and implement context-based content moderation decisions aligned with human rights frameworks in the MENA region. A bare minimum would be to hire content moderators who understand the diverse dialects of spoken Arabic across the twenty-two Arab states. Those moderators should be provided with the support they need to do their jobs safely and healthily, in consultation with their peers, including senior management.
  • Pay special attention to cases arising from war and conflict zones to ensure content moderation decisions do not unfairly target marginalized communities. For example, documentation of human rights abuses and violations is a legitimate activity distinct from disseminating or glorifying terrorist or extremist content. As noted in a recent letter to the Global Internet Forum to Counter Terrorism, more transparency is needed regarding the definitions and moderation of terrorist and violent extremist content (TVEC).
  • Preserve restricted content related to cases arising from war and conflict zones that Facebook makes unavailable, as it could serve as evidence for victims and organizations seeking to hold perpetrators accountable. Ensure that such content is made available to international and national judicial authorities without undue delay.
  • Public apologies for technical errors are not sufficient when erroneous content moderation decisions are not reversed. Companies must provide greater transparency and notice, and offer meaningful and timely appeals for users. The Santa Clara Principles on Transparency and Accountability in Content Moderation, which Facebook, Twitter, and YouTube endorsed in 2019, offer a baseline set of guidelines that must be implemented immediately.

Signed by,

Access Now

Arabic Network for Human Rights Information (ANHRI)

Article 19

Association for Progressive Communications (APC)

Association Tunisienne de Prévention Positive

Avaaz 

Cairo Institute for Human Rights Studies (CIHRS)

The Computational Propaganda Project

Daaarb — news website

Egyptian Initiative for Personal Rights

Electronic Frontier Foundation

Euro-Mediterranean Human Rights Monitor 

Global Voices

Gulf Centre for Human Rights, GC4HR

Hossam el-Hamalawy, journalist and member of the Egyptian Revolutionary Socialists Organization

Humena for Human Rights and Civic Engagement 

IFEX

Ilam — Media Center for Arab Palestinians in Israel

ImpACT International for Human Rights Policies

Initiative Mawjoudin pour l’égalité

Iraqi Network for Social Media – INSMnetwork

I WATCH Organisation (Transparency International Tunisia)

Khaled Elbalshy, Editor in Chief, Daaarb website

Mahmoud Ghazayel, Independent

Marlena Wisniak, European Center for Not-for-Profit Law

Masaar — Technology and Law Community

Michael Karanicolas, Wikimedia/Yale Law School Initiative on Intermediaries and Information

Mohamed Suliman, Internet activist

My.Kali magazine — Middle East and North Africa

Palestine Digital Rights Coalition, PDRC

The Palestine Institute for Public Diplomacy 

Pen Iraq

Quds News Network

Ranking Digital Rights 

Dr. Rasha Abdulla, Professor, The American University in Cairo

Rima Sghaier, Independent 

Sada Social Center

Skyline International for Human Rights

SMEX

Soheil Human, Vienna University of Economics and Business / Sustainable Computing Lab

The Sustainable Computing Lab

Syrian Center for Media and Freedom of Expression (SCM)

The Tahrir Institute for Middle East Policy (TIMEP)

Taraaz

Temi Lasade-Anderson, Digital Action

Vigilance Association for Democracy and the Civic State — Tunisia

WITNESS

7amleh — The Arab Center for the Advancement of Social Media

Originally published at: https://www.accessnow.org/facebook-twitter-youtube-stop-silencing-critical-voices-mena/

Predicting Human Lives? New Regulations for AI Systems in Europe

November 11, 2020 in Opinion


Rita Gsenger

“Humanity’s story had been improvised, now it was planned, years in advance, for a time the sun and moon aligned, we brought order from chaos.” (Serac, Westworld, Season 3, Episode 5, HBO)

Rehoboam, named after the first king of Judah, the son and successor of King Solomon (said to be the wisest of all human beings), predicts human lives and dictates what individuals will do, without their knowledge, in the HBO adaptation “Westworld”. The giant quantum computer looks like a black globe covered in red, flickering lights, and it stands in the entrance of the company that owns it, where the public and schoolchildren can look at it and visit it, to see that it is not such a dangerous mythical creature after all. Nobody except its creators understands how the system works; it structures and shapes society, controlling its own system as well. Rehoboam analyses millions of files on individuals, predicting the course of their lives, including their precise time of death. The citizens of this world do not know that their lives are shaped and controlled by the predictions of an AI system that aims to establish and maintain order in society: a society that was bound to destroy itself and has been saved by a god created by a human.

Not unlike contemporary science fiction, the increasing use and deployment of AI technologies is influencing not only our online lives but, more and more, our offline lives as well. The unrest and resistance against measures to fight the COVID-19 pandemic have shown that online mobilisation, amplified by algorithms that push the content shared, viewed, and liked the most, can create difficult situations with consequences for entire societies, bringing insecurities, distrust, and fears to the surface and into people’s actions. The conversation has thus shifted to newly strengthened science skepticism and conspiracy theories with real-life consequences, making the need for human-centric AI systems and digital literacy more palpable.
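To make that amplification mechanism concrete, here is a minimal, purely illustrative sketch of engagement-based feed ranking in Python. The weights and field names are invented for illustration and do not reflect any platform’s actual formula:

    # Illustrative sketch of engagement-based feed ranking (invented weights).
    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        shares: int
        likes: int
        views: int

    def engagement_score(post: Post) -> float:
        # Shares are weighted most heavily: re-sharing spreads content furthest.
        return 5.0 * post.shares + 2.0 * post.likes + 0.1 * post.views

    def rank_feed(posts: list[Post]) -> list[Post]:
        # Most-engaged-with content comes first, regardless of its accuracy.
        return sorted(posts, key=engagement_score, reverse=True)

    feed = rank_feed([
        Post("calm explainer", shares=3, likes=40, views=900),
        Post("outrage-bait claim", shares=120, likes=300, views=800),
    ])
    print([p.text for p in feed])  # the most-shared post surfaces first

A feed optimized for such a score systematically amplifies whatever provokes the strongest reactions, which is exactly the dynamic described above.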

Algorithmic decision-making (ADM) systems play an increasingly important role in the landscape of AI technologies, as they are often used to make decisions and fill roles previously handled by human beings, for instance in employment, credit scoring, education, and sentencing. Predictive policing and the prediction of recidivism rates are, to some extent, done by AI systems across the states of the US. Many countries in Europe are adopting various ADM technologies, including to fight the COVID-19 pandemic, as summarised by AlgorithmWatch.

The European Commission, and especially DG CONNECT, is currently drafting various pieces of legislation for the end of this year and next, such as the Digital Services Act regulating social media. These follow the broader themes and priorities outlined in the White Paper on AI and the European Data Strategy. In a leaked position paper, countries such as Sweden, France, and Denmark advocate a softer approach relying on recommendations, including a voluntary labelling scheme for AI systems to increase visibility for European citizens, businesses, and administrations and enable them to make an ethical choice. According to the position paper, this would give companies incentives to go beyond the law in establishing trustworthy AI solutions on the one hand, and competitive advantages on the other. Germany, however, would prefer a stricter approach to legislating AI technologies, with trustworthiness in AI systems established by the legislation itself.

In a recent online event on AI and structural racism, EC representatives and participants agreed that biased data are a problem when these technologies are employed, and that racialized communities are adversely affected by them. For instance, Sarah Chander, a senior policy advisor at European Digital Rights (EDRi), expressed concern about a cross-national database with automated exchange of information on criminal offences (including DNA information, following the Prüm decision) being extended to facial recognition data. The database is criticized because of the possibility of false positives, which might lead investigations in the wrong direction. Anthony Whelan, a digital policy advisor in the cabinet of Commission President von der Leyen, did not deem it controversial to use facial recognition data to identify individuals; he did acknowledge, however, that training data need to be free of bias and sufficiently representative.

How far the European Commission’s proposed legislation will go and how it will address the many issues raised by these technologies is unclear. One might hope that European values and the privacy of citizens and all other concerned parties will be respected and will guide these debates, so that a society governed by Rehoboam remains dystopian fiction.

Giving the Law a “Like”? Matthias Kettemann’s Comments on Austria’s draft Communications Platforms Law

October 19, 2020 in Opinion


Portrait Kettemann (c) Universität Graz

Matthias C. Kettemann *

Hatred on the net has reached unbearable proportions. With the Platform Act, the legislator is doing many things right. But the “milestone” for more online protection still has a few rough edges.

Germany has one, France wanted one, Turkey and Brazil are working on one right now – and Austria has pushed forward with one: it is all about a law that makes platforms more responsible for online hatred and imposes transparency obligations on them. The draft of the Communication Platforms Act (KoPl-G) was notified to the Commission this week.

First of all: with regard to victim protection, the legislative package against hate on the net is exactly what Minister for Women’s Affairs Susanne Raab (ÖVP) calls it: a milestone. The protection of victims from threats and degradation is of great importance. This is achieved through better protection against hate speech, a tightening of the offence of incitement to hatred, faster defense against cyberbullying, and the prohibition of “upskirting”. However, the Communication Platforms Act (KoPl-G) still has certain rough edges.

What looks good

The present draft is legally well done. Even the name alone is better than that of the German Network Enforcement Act, whose short title at least seems strange (which network is supposed to be enforced?). In the impact assessment, the government also demonstrates a healthy measure of humility and makes it clear that a European solution against hate on the net, with greater involvement of the platforms, would have been better; but that takes time, so national measures have to be taken.

The government has learned from the German NetzDG. The draft takes over the good parts of the NetzDG (national authorized recipient; transparency reports), avoids its gaps (a put-back claim), and stands on firmer human rights footing than the overreaching French Loi Avia.

What is good is that there is no obligation to use real names and that platforms do not have to keep user registers. There is also no federal database of illegal content, as provided for in the German revision of the NetzDG. The reporting deadlines correspond to those in the NetzDG and are sensible; the transparency obligations are not particularly detailed, but are essentially just as sound. According to Internet expert Ben Wagner of the Vienna University of Economics and Business, the possibility of accessing payments from advertising customers when platforms default on appointing an authorized recipient is “charming”. Another good example is §9 (3) KoPl-G, which explicitly excludes a general search obligation (“general monitoring”), which is prohibited under European law anyway.

Legal protection

The means of legal protection are important: if users are dissatisfied with the reporting procedure or the review process within the platform, they can turn to a complaints body (established at RTR) through a conciliation procedure, which will then propose an amicable solution. KommAustria is the supervisory authority. Human rights expert Gregor Fischer of the University of Graz asks “whether the 240,000 euros earmarked for this purpose will be enough.”

Nevertheless, the conciliation procedures, if well designed, can serve a kind of mini-oversight-board function, with the complaints office setting content standards. To do this, however, it must urgently enter into consultations with as broad a circle of the Austrian internet community as possible. The Internet Governance Forum, which the University of Graz is organizing this fall, would be a good first place to start. In parallel, the civil-law route (judicial deletion via a dunning procedure using an online form at the district courts) and the criminal-law routes of legal protection against online hate are being simplified (eliminating the cost risk of acquittals in private prosecution offences; judicial identification of suspects), so that the package does indeed amend Austrian “Internet law” in its entirety.

The fact that, as Amnesty Austria criticizes, review and complaint procedures are initially located on the platforms is difficult to avoid without building up a massive parallel state justice system for online content. Private actors must therefore, in the first instance, decide which content stays online. However, and this is important, they do not make the final decision on the legality of this content; they only decide whether they consider it legal or in conformity with their general terms and conditions.

What can we do better?

There is still a fly in the platform-regulation ointment: when Minister of Justice Alma Zadić says it has been made clear “that the Internet is not a lawless space”, it sounds a bit like German Chancellor Merkel’s reference to the Internet as “Neuland”, “uncharted territory”. This is not a good narrative, even if the platforms in particular appear, if not as lawless zones, then at least as zones with limited (and time-consuming) avenues of legal protection, as has been apparent not only since the online hate campaigns against (especially women) politicians.

Rather too robust is the automatism whereby the supervisory authority is to take action after only five “well-founded” complaints procedures; it would probably be wiser to raise that number. The law gives KommAustria substantial powers anyway, even though it initially provides for a mandate to the platform to improve. In Germany, the platforms have repeatedly tried to discuss the optimal design of complaints procedures with the responsible Federal Office of Justice, but the latter could only act through “not like this” (“So nicht”) decisions. Here KommAustria can develop best-practice models for the optimal design of moderation procedures, for example with regard to the Santa Clara Principles, after appropriate public consultation with all relevant stakeholders. It is also prudent that, in the course of the appeal procedure, RTR will not take sovereign action for the time being.

It also needs to be clarified exactly which platforms are covered. Why should the forums of online games fall under the law, but not the comment forums of newspapers? The latter seem to be of greater relevance for public discourse. Epicenter.works also points out that the exception for Wikipedia overlooks other projects such as Wikimedia Commons and Wikidata.

More light  

We need even more transparency: in addition to general statements on efforts to combat illegal content, the reporting obligation should at least include a summary of the changes made to the moderation rules during the period under review and a list of the automated moderation tools used, as well as (with a view to the right to explanation in Art. 22 GDPR) their central selection and prioritization logic. This is exactly what is being debated in Brazil.

In Germany, Facebook regularly reports very low deletion figures under the NetzDG, because a lot of content is deleted under its community standards instead. Because the NetzDG reporting channel was designed in a rather hidden way, the Federal Office of Justice, the responsible German regulator, has also imposed a penalty. It should therefore be made clear in the draft that the reporting and transparency obligations also extend to content that is not formally deleted “in accordance with KoPl-G”, as this would otherwise provide a loophole for the platforms.

The danger of overblocking is counterbalanced by a clear put-back claim. Empirical proof that overblocking is a real threat has so far not been furnished in Germany. However, this is also due to the platforms’ sparing release of data, and the KoPl-G could make some improvements here: in the more detailed provisions on the reporting obligation, the supervisory authority should instruct the platforms to make not only comparable reports but also disaggregated data available to the public (and to researchers!) while respecting privacy.

Why it actually works

A basic problem of content moderation, by no means only on Facebook, cannot be solved by even the best law: the actual main responsibility lies with the platforms themselves. They set the rules, they design the automated tools, their employees delete and flag content. Currently, all major platforms follow the approach of leaving as many expressions of opinion online as possible, deleting only the worst postings (e.g. death threats) and attaching counter-statements (e.g. warnings) to problematic speech (e.g. disinformation, racism). COVID-19 has changed this only gradually. This approach rests on a maximization of freedom of expression at the expense of other important legal interests, such as the protection of the rights of others and social cohesion, that is no longer acceptable. The assumption implied by platforms, that all forms of speech are in principle a positive contribution to diversity of opinion, is simply no longer true under today’s communication conditions on the Internet. Today, platforms wander between serious self-criticism (“We can do better”) and hypocritical self-congratulation; they establish internal quasi-judges (Facebook), content rules advisory boards (TikTok), and transparency centers (Twitter), but then erratically delete only the worst content and, especially in the USA, get into ideologically charged controversies.

For too long we, as a society, have accepted that platforms have no purpose beyond profit maximization. There is another way, as Facebook has just shown: a posting published on the company’s internal platform pointed out the “victimization of well-intentioned policemen” by society, ignoring the Black population’s systemic experience of violence. The posting led to heated debates. Under the community standards of “normal” Facebook, the statement would have been unobjectionable. However, Mark Zuckerberg found it to be a problem for the conversational and corporate culture within Facebook: he commented that “systemic racism is real” and pointed out to his co-workers that “controversial topics” could only be debated in “specific forums” within the corporate Facebook. These sub-forums would then also receive “clear rules and robust moderation”. So it works after all.

However, as long as this awareness of the problems does not shape the platforms’ corporate policy and moderation practice across the board, laws such as the KoPl-G are necessary. The Commission now has three months to react, and the Austrian authorities must observe a “standstill” period until 2 December.

A marginal detail: the additional costs for the regulatory authority are to be covered by the state from the broadcasting fee. That is simple, of course, but one could also consider tightening the fiscal screws on the platforms.


– For more comments on the law, see the submissions by the author, Gregor Fischer, and Felicitas Rachinger to the official review process at the Austrian Parliament.

* PD Mag. Dr. Matthias C. Kettemann, LL.M. (Harvard) is an internet law expert at the Leibniz Institute for Media Research | Hans-Bredow-Institut (Hamburg), research group leader at the Sustainable Computing Lab of the Vienna University of Economics and Business, and lecturer at the University of Graz. In 2019 he advised the German Bundestag on the amendment of the Network Enforcement Act. In 2020 he published “The Normative Order of the Internet” with Oxford University Press.

The Sustainable Computing Lab; Version 2020

April 30, 2020 in Uncategorized


Soheil Human and Ben Wagner, May 2020

After four years of successful work, in recent months we engaged in an internal discussion to find new ways to improve our internal processes, increase our academic and societal impact, provide a better enabling space for our members, and expand our network of partners. After several rounds of digital and in-person discussions, the lab has exciting news:

The Lab has a new name

From now on, the lab is no longer called the “Privacy and Sustainable Computing Lab” but:

The “Sustainable Computing Lab”!

The Lab has a new logo


The Lab has a new website

Our new URL is: https://www.sustainablecomputing.eu

The new website was designed and implemented by:
Soheil Hosseini
Hooman Habibinia

Leadership Transition

Florian Cech, Soheil Human, Matthias C. Kettemann, and Eliška Pírková have joined Sabrina Kirrane, Axel Polleres, and Ben Wagner on the management board. Sarah Spiekermann has left the management board and is taking on a new role as a member of the advisory board.

Here is our new management board:

Soheil Human and Ben Wagner are now both Directors of the Sustainable Computing Lab.


New Research and Working Groups

The Sustainable Computing Lab is highly interdisciplinary, and people from different disciplines work together in our projects. In order to provide interdisciplinary enabling spaces for people with shared interests, the lab will consist of a set of research and working groups. Our research groups are more involved in publicly funded research projects, while our working groups are mainly focused on community building. Here is a list of our research and working groups:

Call for Papers: Human-centricity in a Sustainable Digital Economy

April 17, 2020 in Announcements

 

CALL FOR PAPERS

The 54th Hawaii International Conference on System Sciences Mini-track on
Human-centricity in a Sustainable Digital Economy

http://hicss.hawaii.edu

HICSS-54

January 5 – 8, 2021

Grand Hyatt Kauai

Paper Submission Deadline:
June 15, 2020

Important Dates:

June 15, 2020:
Paper Submission Deadline (11:59 pm HST)

August 17, 2020:
Notification of Acceptance/Rejection

September 4, 2020:
Deadline for A-M Authors to Submit Revised Manuscript for Review

September 22, 2020:
Deadline for Authors to Submit Final Manuscript for Publication

October 1, 2020:
Deadline for at least one author to register for HICSS-54

Conference Dates:

January 5 – 8, 2021:
Paper Presentations

Organizers:

Soheil Human, Vienna University of Economics and Business (WU Wien), Austria

Gustaf Neumann, Vienna University of Economics and Business (WU Wien), Austria

Rainer Alt, Leipzig University, Germany

The internet and the global digital transformation have changed many aspects of our lives. Not only economies and societies but also people’s personal lives have been influenced by this new and ever-emerging era of our history. While the digital age has made it possible to provide novel services and solutions for end-users, it has also raised serious concerns at different individual and societal levels, such as issues regarding online privacy; algorithmic bias; fairness, accountability, transparency, governance, and explainability of information systems; end-user manipulation; fake news; and traceability. The development of human-centric, end-user-empowering information systems can be one approach towards “digital sustainability”: providing novel and personalized services for end-users while considering the potential negative multi-dimensional consequences of digital transformation.
