A Call for Interdisciplinary Collaboration toward the Realization of Needs-aware AI

June 28, 2022 in Opinion

A Call for Interdisciplinary Collaboration toward the Realization of Needs-aware AI

Soheil Human
soheil.human@wu.ac.at

Need is one of the most fundamental constructs connected to the different dimensions of Human-awareness, Accountability, Lawfulness, and Ethicality (HALE) of sociotechnical systems. This construct, however, has not been well considered in the design, development, evaluation, and maintenance of AI-based sociotechnical systems. In our new article [1], we call for the realization of “Needs-aware AI” through interdisciplinary collaborations.

Footnotes: 

[1] The article can currently be accessed here: https://rdcu.be/cQvQu; the permanent link is: https://doi.org/10.1007/s43681-022-00181-5.

Bibliography:

  • Watkins, R., Human, S. Needs-aware artificial intelligence: AI that ‘serves [human] needs’. AI Ethics (2022). https://doi.org/10.1007/s43681-022-00181-5
  • Lee, K.-F., O’Reilly, T.: Meet the Expert: How AI Will Change Our World by 2041. O’Reilly Media, Inc. (2021).
  • Shneiderman, B.: Design lessons from AI’s two grand goals: human emulation and useful applications. IEEE Trans. Technol. Soc. 1(2), 73–82 (2020).
  • Shneiderman, B.: Human-Centered AI. Oxford University Press, Oxford (2022).
  • Human, S., Fahrenbach, F., Kragulj, F., Savenkov, V.: Ontology for representing human needs. In: Różewski, P., Lange, C. (eds.) Knowledge engineering and semantic web communications in computer and information science, pp. 195–210. Springer International Publishing, Cham (2017)
  • OECD Report: Alternatives to traditional regulation. https://www.oecd.org/gov/regulatory-policy/42245468.pdf. Accessed 18 May 2022
  • Human, S., Gsenger, R., Neumann, G.: End-user empowerment: An interdisciplinary perspective. In: Proceedings of the 53rd Hawaii International Conference on System Sciences, Hawaii, United States, pp. 4102–4111 (2020)
  • Human, S., Watkins, R.: Needs and Artificial Intelligence. arXiv (arXiv:2202.04977[cs.AI]) (2022). https://doi.org/10.48550/arXiv.2202.04977
  • Watkins, R., Meiers, M.W., Visser, Y.: A guide to assessing needs: essential tools for collecting information, making decisions, and achieving development results. World Bank Publications (2012)
  • McLeod, S.K.: Knowledge of need. Int. J. Philos. Stud. 19(2), 211–230 (2011)

 

Enhancing Information and Consent in the Internet of Things

June 9, 2021 in Opinion
Victor Morel

Enhancing Information and Consent in the Internet of Things

Victor Morel

Victor Morel has recently joined the Sustainable Computing Lab. In this blog post, he introduces the project he has recently and successfully completed: his PhD thesis.

Motivation

The General Data Protection Regulation (GDPR), introduced in 2018, imposes obligations on data controllers concerning the content of information about personal data collection and processing, and the means by which this information is communicated to data subjects. This information is all the more important because it is required for consent, which is one of the legal grounds for processing personal data. However, the Internet of Things can make lawful information communication and consent management difficult to implement. The tension between the GDPR’s requirements for information and consent and the Internet of Things cannot be easily resolved, but it can be resolved. The goal of his thesis is to provide a solution for information communication and consent management in the Internet of Things from a technological point of view.

A generic framework for information communication and consent management

To do so, he introduced a generic framework for information communication and consent management in the Internet of Things. This framework is composed of a protocol to communicate and negotiate privacy policies, requirements on the presentation of information and interaction with data subjects, and requirements on the provability of consent.

Technical options

The feasibility of this generic framework is supported by different implementation options. The communication of information and consent through privacy policies can be implemented in two different manners: directly and indirectly. Different ways to implement the presentation of information and the provability of consent are then presented. A design space is also provided for system designers, as a guide for choosing between the direct and the indirect implementations. A minimal illustrative sketch follows.
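To make the idea of machine-readable privacy policies more concrete, here is a minimal sketch of what such a policy and a negotiation step could look like. The field names, types, and negotiate() logic are purely illustrative assumptions, not the data model or protocol actually defined in the thesis.

```typescript
// Hypothetical sketch of a machine-readable privacy policy exchanged between
// an IoT device (acting for a data controller) and a data subject's agent.
// Everything here is an illustrative assumption, not the thesis's protocol.
interface PrivacyPolicy {
  controller: string;                    // who processes the data
  dataTypes: string[];                   // e.g. ["location", "temperature"]
  purposes: string[];                    // e.g. ["heating optimisation"]
  retentionDays: number;                 // how long the data is kept
  communication: "direct" | "indirect";  // device-to-agent vs. via a registry
}

// A data subject's agent could compare a device's policy against the
// subject's preferences and produce a consent decision to be recorded.
function negotiate(policy: PrivacyPolicy, maxRetentionDays: number): boolean {
  return policy.retentionDays <= maxRetentionDays;
}

const devicePolicy: PrivacyPolicy = {
  controller: "Example Building Services",
  dataTypes: ["temperature"],
  purposes: ["heating optimisation"],
  retentionDays: 30,
  communication: "indirect",
};

console.log(negotiate(devicePolicy, 90)); // true: within the subject's limits
```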

Prototype implementations

Finally, fully functioning prototypes devised to demonstrate the feasibility of the framework’s implementations are presented. The indirect implementation of the framework is illustrated with a collaborative website named Map of Things. The direct implementation, combined with the agent presenting information to data subjects, is sketched as a mobile application, CoIoT.

Global Privacy Control (GPC) + GDPR: will it work?

February 26, 2021 in Opinion

Global Privacy Control (GPC) + GDPR: will it work?

Global Privacy Control (GPC) is a signal to opt out of data sharing. Will it work with the GDPR?

Global Privacy Control (GPC) is a boolean or binary signal sent by browsers to websites to indicate the user’s request not to share (or sell) their personal data with third parties. The authors (and supporters) of this specification include people from the New York Times, Wesleyan University, DuckDuckGo, and Brave (along with many other researchers and supporters). This makes it not a toy project, given that a major publisher, a search engine, and a web browser vendor are actively supporting its implementation and adoption.

Today, GPC tweeted uptake numbers in the “hundreds of thousands”, with adoption by major publishers in the USA and by WordPress. GPC is legally enforceable under the CCPA, where it acts as the ‘opt-out’ for ‘selling’ personal data, as confirmed in a tweet by AG Becerra (California). My interest in writing this is to explore how GPC relates to the other data protection and privacy law across the Atlantic – the General Data Protection Regulation.

What is the GPC?

In essence, GPC is DNT reborn. It is a singular signal that, when set or true, indicates that the user has requested the controller (the website the signal is sent to) not to share or sell their data with third parties. In other words, it is a request to stop or opt out of the sharing/selling of personal data to third parties. Given its binary or boolean nature, the GPC is simple to send, read, and evaluate. It is either set or true, or it is not. The specification goes into more detail regarding the HTTP requests, headers, and structure for using the signal and its interactions. It also deals with how websites can indicate their support (or lack thereof) for abiding by the signal. A minimal sketch of how a site might read the signal follows.
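The sketch below shows one way a server could read the signal and advertise support for it. The Sec-GPC request header and the .well-known/gpc.json resource come from the GPC specification (treat the exact JSON field names as my assumption from the draft spec); the handling logic itself is illustrative and not prescribed anywhere.

```typescript
// Minimal Node.js/TypeScript sketch of reading the GPC signal server-side.
// Only the header name and the well-known resource come from the GPC draft
// spec; the response logic is an illustrative assumption.
import { createServer, IncomingMessage, ServerResponse } from "http";

function gpcRequested(req: IncomingMessage): boolean {
  // Node lowercases incoming header names; "Sec-GPC: 1" means the signal is set.
  return req.headers["sec-gpc"] === "1";
}

const server = createServer((req: IncomingMessage, res: ServerResponse) => {
  if (req.url === "/.well-known/gpc.json") {
    // A site can advertise that it abides by GPC via this well-known resource.
    res.setHeader("Content-Type", "application/json");
    res.end(JSON.stringify({ gpcSupport: true, lastUpdate: "2021-02-26" }));
    return;
  }
  const optedOut = gpcRequested(req);
  // Hypothetical downstream decision: suppress third-party sharing when set.
  res.end(optedOut ? "GPC set: do not share/sell to third parties" : "GPC not set");
});

server.listen(8080);
```

On the client side, the same signal is exposed to scripts as navigator.globalPrivacyControl, which a consent dialogue could inspect before rendering its defaults.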

GPC data-flow

The GPC works somewhat in the following manner:

  1. I go to a website using a web browser with GPC turned on
  2. I consent to a notice
  3. The web browser sends the GPC signal to the website (this may already have occurred before Step 2) to indicate the request to opt out
  4. The website abides by the request and stops sharing data with third parties

Legality

The GPC spec mentions that websites are responsible for conveying how the signal is going to be used or interpreted, based on the jurisdictions they operate in and the regulations that bind them. Under the CCPA, the GPC has teeth to be legally enforceable, and thus we have large (and expanding) adoption across platforms. The spec also specifically mentions the GDPR, and quotes the potential legal clauses it can rely on. I’m copying it verbatim here:

The GDPR requires that “Natural persons should have control of their own personal data” ([GDPR], Recital 7). The GPC signal is intended to convey a general request that data controllers limit the sale or sharing of the user’s personal data to other data controllers ([GDPR] Articles 7 & 21). This request is expressed with every interaction that the user agent has with the server.

Note that this request is not meant to withdraw a user’s consent to local storage as per the ePrivacy Directive (“cookie consent”) ([EPRIVACY-DIRECTIVE]) nor is it intended to object to direct marketing under legitimate interest ([GDPR]).

In addition, Robin Berjon (New York Times), one of the authors of the spec, elaborated on how it works in a Twitter thread debate. Paul-Olivier Dehaye (founder of PersonalData.io and of “The Great Hack” documentary fame) then quipped about the possibility of using GDPR’s Code of Conduct mechanism to make GPC enforceable.

Has any EU data protection expert reviewed this? Companies have no obligation to honor a particular method chosen by the data subject to exercise their rights (unfortunately).

This being said, Art 40.2.f (Code of Conduct) does offer a chance to move in the right direction.

Others have also offered various takes on how this relates to the GDPR and DNT. See tweets by Nataliia Bielova on its broader applicability to the GDPR’s framework of legal bases; Ralf Bendrath discussed the applicability of Article 21 of the GDPR regarding the right to object; and Irene Kamara and Lucas shared articles (this and this) about DNT being useful in today’s world.

What does GDPR say about consent?

The GDPR has consent as a lawful basis for processing in Article 6(1)(a) for personal data and Article 9(2)(a) for special categories of personal data, and in other places, such as data transfers, but let’s focus on these broadly as ‘consent’. About withdrawal, Article 7(3) states the following:

The data subject shall have the right to withdraw his or her consent at any time. The withdrawal of consent shall not affect the lawfulness of processing based on consent before its withdrawal. Prior to giving consent, the data subject shall be informed thereof. It shall be as easy to withdraw as to give consent.

Notably, the GDPR does not have ‘opt-outs’. It explicitly requires an ‘opt-in’ via consent (where consent is the legal basis), and the request to stop sharing data with a third party is equivalent to withdrawing the consent for it. Under the GDPR, consent for purposes and processing actions that are separate must also be given separately. That is, consent for sharing data with the controller is one instance of consent, and sharing that data further with a third party should be a separate instance of consent. Recital 43 of the GDPR says:

Consent is presumed not to be freely given if it does not allow separate consent to be given to different personal data processing operations despite it being appropriate in the individual case

For completeness, Article 21 of the GDPR relates to the Right to Object. Specifically, Recital 69 says:

Where personal data might lawfully be processed because processing is necessary for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller, or on grounds of the legitimate interests of a controller or a third party, a data subject should, nevertheless, be entitled to object to the processing of any personal data relating to his or her particular situation.

Thus, if consent is the legal basis, then withdrawing should limit the sharing of data with third parties. And if legitimate interest is the legal basis, then exercising the right to object should limit it. This is (probably) what GPC mentions in its specification about applicability for GDPR.

Why I’m feeling unsure

GPC is an exciting development for me. It is the first time (for me) that people have got together, created something, managed to roll it out, and even have a law that makes it enforceable. I’ve thought about this many times, and there are several large questions that loom whenever GPC comes up. By GPC’s own specification, and admission, its applicability and enforceability under the GDPR is ambiguous at best and non-existent at worst. Where the CCPA has provisions that can be applied directly to make requests about sharing data with third parties, the GDPR does not specify any such broad restrictions, and instead relies on its framework of legal bases and rights.

This distance between legalese and the real world has been a point of pain, contention, and frustration, as we see no action against large-scale and systemic consent mechanisms that misuse legal bases and purposes and clearly fall afoul of GDPR compliance. So even a regulator weighing in on the applicability of GPC is no guarantee, because (a) there are ~50 DPAs in the EU, so there needs to be uniformity of interpretation, something the EDPB would likely be involved with, and (b) unless case law explicitly outlines that GPC is enforceable, there is always scope for someone to raise objections to using it.

Even without these, the process of applying GPC is unconvincing to me, no matter how well intentioned it is. I feel that it has some weird loopholes that it does not clarify, and as a result there are too many uncertainties – which in the GDPR and adtech world translate into loopholes, exploits, and malpractices.

#1 Setting GPC off could mean share with everyone

Let us pretend that I use a GPC-enabled browser, and I visit a website that requests my consent under the GDPR. My browser has probably signalled to the website, or the website or its consent management platform (CMP) has checked, whether I use GPC. Under the GDPR, consent choices should default to “no” or “off” or “prohibit”. Therefore, the interpretation of the GPC should have no effect on the default choices. However, if the GPC is set to an explicit off, then one could argue for setting the consent defaults to permit third-party data sharing, since the individual apparently wishes it (through GPC = off).

#2 GPC vs Agree button – who wins?

Let’s say I agree to sharing my data with a third party, knowingly and intentionally, by using the choices in the consent dialogue. Now I have indicated my wishes, but the GPC signal indicates otherwise. What should a website / controller do in such a situation, where the user’s consent is in conflict with an automatic signal? I would presume that a rational decision would be to respect the user’s choice over the user’s automatic agent’s choice. And this is a subtle avenue for manipulation: as long as individuals continue to click on the Agree and Accept All buttons, the GPC could be argued to have been overridden by the user’s choices. For proponents of imbalanced consent requests: I’m speaking about hypothetical scenarios where the choices and interactions are actually valid.

Where GPC does benefit is when the consent dialogue is malicious and abusive. In such cases, we want the GPC to enforce a right to withdraw or object despite us having clicked on Agree to All. This also forms the elevator pitch for adopting GPC: “don’t worry, click on the agree buttons, we’ll send a withdraw request right along with it”. So which method should we go with? Should GPC override the consent choices or vice-versa? I imagine this is a chicken and egg problem (though the egg definitely came first because evolution).

A more generous interpretation and argument is that CMP vendors or providers would somehow integrate the GPC into the choices. This is a fallacy as long as the Accept All button exists – because along with it, the dilemma above also exists. In wonderland, the CMP would actually respect the GPC signal and turn off the sharing choices no matter what agree button you choose, or make you set them explicitly to affirm your choices. A sketch of what such an integration could look like follows.
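For illustration only, here is a minimal sketch of the wonderland case: a CMP that forces the third-party sharing choice off whenever the browser signals GPC, regardless of which button was clicked. Only navigator.globalPrivacyControl comes from the GPC spec; the ConsentChoices type and the surrounding logic are my assumptions, not any CMP’s actual API.

```typescript
// Hypothetical browser-side CMP integration: if GPC is signalled, third-party
// sharing stays off even after "Accept All". Only the property name
// navigator.globalPrivacyControl comes from the GPC spec.
interface ConsentChoices {
  analytics: boolean;
  personalisedAds: boolean;
  thirdPartySharing: boolean;
}

function applyGpc(choices: ConsentChoices): ConsentChoices {
  const gpcOn =
    (navigator as Navigator & { globalPrivacyControl?: boolean })
      .globalPrivacyControl === true;
  // A GPC-respecting CMP would never record third-party sharing as allowed
  // while the signal is set, whatever the dialogue button said.
  return gpcOn ? { ...choices, thirdPartySharing: false } : choices;
}

// e.g. the user clicks "Accept All":
const recorded = applyGpc({
  analytics: true,
  personalisedAds: true,
  thirdPartySharing: true,
});
```

Whether overriding an explicit click in this way is itself a faithful record of the user’s wishes is, of course, exactly the dilemma described above.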

#3 Tiny windows of opportunities and leaky pipelines

The crux of the issues for consent online stems from the mess that is the adtech ecosystem, consisting of data sharing with thousands of websites, real-time bidding, and impossible demands of informed choice, all built on the backbone that is the IAB TCF ‘framework’. In this, the moment you hit Agree, a signal is sent out to all controllers along with all of the data you consented to. Let us imagine this is what really happens for a moment. You click Agree and your personal data is sent to all of the thousands of third parties mentioned in the dialogue. Now, my browser also sends a GPC signal. Who receives it?

If the GPC is used by the CMP to block data being sent to the third parties, then we’re back at the problem in #2. If all the third parties receive the GPC signal, what are they supposed to do, and will they do it? What if the third parties claim that they will respect the GPC signal, but it will take time to process and implement? That leaves a tiny window of opportunity, where that third party has the personal data and my consent to process it for their desired purpose. In this case, GPC probably only restricts continued processing.

To think further along these lines, how will I know whether a third party has actually respected my GPC signal or my consent or both or neither? There is no requirement to confirm withdrawal of consent, and since GPC is automatic, one can presume there could be an automatic signal sent back in acknowledgement. But who is keeping track, where, and how? If the IAB decides to include the GPC signal in a future update to the TCF, will it make it mandatory to check the GPC for all consent interactions (nothing else will work)? Even if the answer is yes, we are still going to be sharing data with a third party. Thus, we have leaky pipelines of data that look like they might be respecting the GPC but could actually be malicious actors or claim innocence under the guise of technical naughtiness.

#4 Which of my consents does GPC represent?

GPC is singular, i.e. there is only one GPC signal, AFAIK, sent by the browser. There is no way to associate the GPC with a particular consent. So will the GPC blanket-withdraw or object to everything, everywhere? What if I have given consent to A as a third party, but don’t want to give it to B? In this case, will GPC request revocation from both? I know that GPC can be indicated per website, and can be checked per website when giving consent (I think, as per the specification and the assumption that the CMP takes it into account). But then there is uncertainty as to whether my consent still applies or has been withdrawn by the GPC. Further, if controllers silently accept (or worse, ignore) the GPC – how do I keep track of what impact that automatic signal is having, and on which of my consents? The sketch below illustrates the mismatch.
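To illustrate the granularity mismatch: GDPR-style consent is typically recorded per purpose and per recipient, while GPC is a single boolean for the whole browsing context. The record structure and the interpretation below are assumptions made up for illustration; the spec does not define any of this for the GDPR.

```typescript
// Illustrative only: per-purpose, per-recipient consent records vs. one
// browser-wide boolean. Field names and logic are assumptions.
interface ConsentRecord {
  purpose: string;      // e.g. "personalised advertising"
  recipient: string;    // the controller itself or a named third party
  givenAt: Date;
  withdrawnAt?: Date;
}

function applyGpcSignal(records: ConsentRecord[], gpcSet: boolean): ConsentRecord[] {
  if (!gpcSet) return records;
  // One possible reading: treat the signal as a blanket withdrawal of every
  // consent that involves a third party. Another site might read the same
  // signal far more narrowly; nothing forces a consistent interpretation.
  return records.map((r) =>
    r.recipient !== "controller" && !r.withdrawnAt
      ? { ...r, withdrawnAt: new Date() }
      : r
  );
}
```

A single signal simply cannot say which of these records it is meant to touch, which is the uncertainty described above.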

Lots of promise, Lots of worries

My nightmare is the GPC gaining global and wide adoption, and then being abused for loopholes all around. It is likely to happen because, come on, look at any random website to see what we live with. So why don’t we take time to think this through, find these weird cases, discuss them, and close them as and when we can. This blog post is a think-aloud type of draft I’ve just written for the sake of thinking about GPC. I intend to study it more, think about it in terms of the GDPR, and then perhaps update this article as I come across new information and consequences.

Open Letter to Facebook, Twitter, and YouTube: Stop silencing critical voices from the Middle East and North Africa

December 19, 2020 in Opinion

Ten years ago today, 26-year-old Tunisian street vendor Mohamed Bouazizi set himself on fire in protest over injustice and state marginalization, igniting mass uprisings in Tunisia, Egypt, and other countries across the Middle East and North Africa.

As we mark the 10th anniversary of the Arab Spring, we, the undersigned activists, journalists, and human rights organizations, have come together to voice our frustration and dismay at how platform policies and content moderation procedures all too often lead to the silencing and erasure of critical voices from marginalized and oppressed communities across the Middle East and North Africa.

The Arab Spring is historic for many reasons, and one of its outstanding legacies is how activists and citizens have used social media to push for political change and social justice, cementing the internet as an essential enabler of human rights in the digital age.   

Social media companies boast of the role they play in connecting people. As Mark Zuckerberg famously wrote in his 2012 Founder’s Letter:

“By giving people the power to share, we are starting to see people make their voices heard on a different scale from what has historically been possible. These voices will increase in number and volume. They cannot be ignored. Over time, we expect governments will become more responsive to issues and concerns raised directly by all their people rather than through intermediaries controlled by a select few.”

Zuckerberg’s prediction was wrong. Instead, more governments around the world have chosen authoritarianism, and platforms have contributed to their repression by making deals with oppressive heads of state; opening doors to dictators; and censoring key activists, journalists, and other changemakers throughout the Middle East and North Africa, sometimes at the behest of other governments:

  • Tunisia: In June 2020, Facebook permanently disabled more than 60 accounts of Tunisian activists, journalists, and musicians on scant evidence. While many were reinstated, thanks to the quick reaction from civil society groups, accounts of Tunisian artists and musicians still have not been restored. We sent a coalition letter to Facebook on the matter but we did not receive a public response. 
  • Syria: In early 2020, Syrian activists launched a campaign to denounce Facebook’s decision to take down/disable thousands of anti-Assad accounts and pages that documented war crimes since 2011, under the pretext of removing terrorist content. Despite the appeal, a number of those accounts remain suspended. Similarly, Syrians have documented how YouTube is literally erasing their history.
  • Palestine: Palestinian activists and social media users have been campaigning since 2016 to raise awareness around social media companies’ censorial practices. In May 2020, at least 52 Facebook accounts of Palestinian activists and journalists were suspended, and more have since been restricted. Twitter suspended the account of a verified media agency, Quds News Network, reportedly on suspicion that the agency was linked to terrorist groups. Requests to Twitter to look into the matter have gone unanswered. Palestinian social media users have also expressed concern numerous times about discriminatory platform policies.
  • Egypt: In early October 2019, Twitter suspended en masse the accounts of Egyptian dissidents living in Egypt and across the diaspora, directly following the eruption of anti-Sisi protests in Egypt. Twitter suspended the account of one activist with over 350,000 followers in December 2017, and the account still has yet to be restored. The same activist’s Facebook account was also suspended in November 2017 and restored only after international intervention. YouTube removed his account earlier in 2007.

Examples such as these are far too numerous, and they contribute to the widely shared perception among activists and users in MENA and the Global South that these platforms do not care about them, and often fail to protect human rights defenders when concerns are raised.  

Arbitrary and non-transparent account suspension and removal of political and dissenting speech has become so frequent and systematic that they cannot be dismissed as isolated incidents or the result of transitory errors in automated decision-making. 

While Facebook and Twitter can be swift in responding to public outcry from activists or private advocacy by human rights organizations (particularly in the United States and Europe), in most cases responses to advocates in the MENA region leave much to be desired. End-users are frequently not informed of which rule they violated, and are not provided a means to appeal to a human moderator. 

Remedy and redress should not be a privilege reserved for those who have access to power or can make their voices heard. The status quo cannot continue. 

The MENA region has one of the world’s worst records on freedom of expression, and social media remains critical for helping people connect, organize, and document human rights violations and abuses. 

We urge you to not be complicit in censorship and erasure of oppressed communities’ narratives and histories, and we ask you to implement the following measures to ensure that users across the region are treated fairly and are able to express themselves freely:

  • Do not engage in arbitrary or unfair discrimination. Actively engage with local users, activists, human rights experts, academics, and civil society from the MENA region to review grievances. Regional political, social, cultural context(s) and nuances must be factored in when implementing, developing, and revising policies, products, and services.
  • Invest in the necessary local and regional expertise to develop and implement context-based content moderation decisions aligned with human rights frameworks in the MENA region.  A bare minimum would be to hire content moderators who understand the various and diverse dialects and spoken Arabic in the twenty-two Arab states. Those moderators should be provided with the support they need to do their job safely, healthily, and in consultation with their peers, including senior management.
  • Pay special attention to cases arising from war and conflict zones to ensure content moderation decisions do not unfairly target marginalized communities. For example, documentation of human rights abuses and violations is a legitimate activity distinct from disseminating or glorifying terrorist or extremist content. As noted in a recent letter to the Global Internet Forum to Counter Terrorism, more transparency is needed regarding definitions and moderation of terrorist and violent extremist (TVEC) content.
  • Preserve restricted content related to cases arising from war and conflict zones that Facebook makes unavailable, as it could serve as evidence for victims and organizations seeking to hold perpetrators accountable. Ensure that such content is made available to international and national judicial authorities without undue delay.
  • Public apologies for technical errors are not sufficient when erroneous content moderation decisions are not changed. Companies must provide greater transparency, notice, and offer meaningful and timely appeals for users. The Santa Clara Principles on Transparency and Accountability in Content Moderation, which Facebook, Twitter, and YouTube endorsed in 2019, offer a baseline set of guidelines that must be immediately implemented. 

Signed by,

Access Now

Arabic Network for Human Rights Information (ANHRI)

Article 19

Association for Progressive Communications (APC)

Association Tunisienne de Prévention Positive

Avaaz 

Cairo Institute for Human Rights Studies (CIHRS)

The Computational Propaganda Project

Daaarb — News — website

Egyptian Initiative for Personal Rights

Electronic Frontier Foundation

Euro-Mediterranean Human Rights Monitor 

Global Voices

Gulf Centre for Human Rights, GC4HR

Hossam el-Hamalawy, journalist and member of the Egyptian Revolutionary Socialists  Organization

Humena for Human Rights and Civic Engagement 

IFEX

Ilam- Media Center For Arab Palestinians In Israel

ImpACT International for Human Rights Policies

Initiative Mawjoudin pour l’égalité

Iraqi Network for Social Media – INSMnetwork

I WATCH Organisation (Transparency International Tunisia)

Khaled Elbalshy, Editor in Chief, Daaarb website

Mahmoud Ghazayel,  Independent

Marlena Wisniak, European Center for Not-for-Profit Law

Masaar — Technology and Law Community

Michael Karanicolas, Wikimedia/Yale Law School Initiative on Intermediaries and Information

Mohamed Suliman, Internet activist

My.Kali magazine — Middle East and North Africa

Palestine Digital Rights Coalition, PDRC

The Palestine Institute for Public Diplomacy 

Pen Iraq

Quds News Network

Ranking Digital Rights 

Dr. Rasha Abdulla, Professor, The American University in Cairo

Rima Sghaier, Independent 

Sada Social Center

Skyline International for Human Rights

SMEX

Soheil Human, Vienna University of Economics and Business / Sustainable Computing Lab

The Sustainable Computing Lab

Syrian Center for Media and Freedom of Expression (SCM)

The Tahrir Institute for Middle East Policy (TIMEP)

Taraaz

Temi Lasade-Anderson, Digital Action

Vigilance Association for Democracy and the Civic State — Tunisia

WITNESS

7amleh — The Arab Center for the Advancement of Social Media

Originally published at: https://www.accessnow.org/facebook-twitter-youtube-stop-silencing-critical-voices-mena/

Predicting Human Lives? New Regulations for AI Systems in Europe

November 11, 2020 in Opinion

Predicting Human Lives?

New Regulations for AI systems in Europe

Rita Gsenger

“Humanity’s story had been improvised, now it was planned, years in advance, for a time the sun and moon aligned, we brought order from chaos.” (Serac, Westworld, Season 3, Episode 5, HBO)

Rehoboam – named after the first king of Judah, the successor of King Solomon, who was said to be the wisest of all human beings – predicts human lives and dictates what individuals will do without them knowing, in the HBO adaptation of “Westworld”. The giant quantum computer looks like a black globe covered in red, flickering lights, and it is placed at the entrance of the company that owns it, where the public and school children can look at it and visit it, to see that it is not such a dangerous mythical creature after all. Nobody except its creators understands how the system works; it structures and shapes society, controlling its own system as well. Rehoboam analyses millions of files on individuals, predicting the course of their lives, including their precise time of death. The citizens of this world do not know that their lives are shaped and controlled by the predictions of an AI system, which aims to establish and maintain order in society. A society that was bound to destroy itself and was saved by a god created by a human.

Not unlike in contemporary science fiction, the increasing use and deployment of AI technologies is influencing not only our online lives but, more and more, our offline lives as well. The unrest and resistance against measures fighting the COVID-19 pandemic have shown that online mobilisation, and the use of algorithms that push the content that has been shared, viewed, and liked the most, can result in difficult situations with consequences for entire societies, bringing insecurities, distrust, and fears to the surface and into the actions of human beings. The conversation has thus shifted to newly strengthened science skepticism and conspiracy theories with real-life consequences, making the need for human-centric AI systems and digital literacy more palpable.

Algorithmic decision-making (ADM) systems play an increasingly important role in the landscape of AI technologies, as they are often used to make decisions and fulfill roles that were previously performed by human beings, for instance in employment, credit scoring, education, and sentencing. Predictive policing and the prediction of recidivism rates are to some extent done by AI systems in all US states. Many countries in Europe are adopting various ADM technologies, also to fight the COVID-19 pandemic, as summarised by AlgorithmWatch.

The European Commission, and especially DG Connect, is currently drafting various pieces of legislation for the end of this year and next year, such as the Digital Services Act regulating social media. These follow the broader themes and priorities outlined in the White Paper on AI and the European Data Strategy. In a leaked position paper, countries such as Sweden, France, and Denmark advocate a softer approach relying on recommendations, and argue for a voluntary labelling scheme for AI systems to increase visibility for European citizens, businesses, and administrations and enable them to make an ethical choice. According to the position paper, this would provide incentives for companies to go beyond the law to establish trustworthy AI solutions on the one hand, and give them competitive advantages on the other. Germany, however, would prefer a stricter approach to legislating AI technologies, as trustworthiness in AI systems would be established by the legislation.

In a recent online event on AI and racism, EC representatives discussed AI and structural racism. Participants agreed that biased data are a problem when employing these technologies and that racialized communities are adversely affected by them. For instance, a cross-national database with an automated exchange of information on criminal offences (including DNA information, following the Prüm decision) that introduces facial recognition data was described as concerning by Sarah Chander, a senior policy advisor at European Digital Rights. The database is criticized because of the possibility of false positives, which might lead investigations in the wrong direction. Anthony Whelan, a digital policy advisor in the Cabinet of President von der Leyen, however, did not deem it controversial to use facial recognition data to identify individuals; he did acknowledge that training data need to be free of bias and sufficiently representative.

How far the proposed legislation by the European Commission will go and how it will address the many issues raised by these technologies is unclear. One might hope that European values and the privacy of citizens and other concerned parties will be respected and will guide these debates, so that a society governed by Rehoboam remains dystopian fiction.

Giving the Law a “Like”? Matthias Kettemann’s Comments on Austria’s draft Communications Platforms Law

October 19, 2020 in Opinion

Giving the Law a “Like”? Matthias Kettemann’s Comments on Austria’s draft Communications Platforms Law

Portrait Kettemann (c) Universität Graz

Matthias C. Kettemann *

Hatred on the net has reached unbearable proportions. With the Platform Act, the legislator is doing many things right. But the “milestone” for more online protection still has a few rough edges.

Germany has one, France wanted one, Turkey and Brazil are working on one right now – and Austria has pushed forward with one: it is all about a law that makes platforms more responsible for online hatred and imposes transparency obligations on them. The draft of the Communication Platforms Act (KoPl-G) was notified to the Commission this week.

First of all: with regard to victim protection, the legislative package against hate on the net is exactly what Minister for Women’s Affairs Susanne Raab (ÖVP) calls it: a milestone. The protection of victims from threats and degradation is of great importance. This is achieved through better protection against hate speech, an aggravation of the offence of incitement to hatred, faster defense against cyberbullying, and the prohibition of “upskirting”. However, the Communication Platforms Act (KoPl-G) still has certain rough edges.

What looks good

The present draft is legally well done. Even the name alone is better than that of the German Network Enforcement Act, which seems strange at least in its short title (which network is supposed to be enforced?). In the impact assessment, the government also demonstrates a fair measure of humility and makes it clear that a European solution against hate on the net, with greater involvement of the platforms, would have been better, but that takes time, so national measures have to be taken.

The government has learned from the German NetzDG. The draft takes the good parts of the NetzDG (national authorized recipient; transparency reports) with it, avoids its gaps (put-back claim) and is on a firmer human rights footing than the overshooting French Loi Avia.

What is good is that there is no obligation to use real names and that platforms do not have to keep user registers. There is also no federal database for illegal content, as provided for in the German revision of the NetzDG. The reporting deadlines correspond to those in the NetzDG and are sensible; the transparency obligations are not particularly detailed, but are essentially just as sound. According to Internet expert Ben Wagner of the Vienna University of Economics and Business Administration, the possibility of accessing payments from advertising customers when platforms default on appointing an authorized recipient is “charming”. Another good example is §9 (3) KoPl-G, which explicitly excludes a general search obligation (“general monitoring”), which is prohibited under European law anyway.

Legal protection

The means of legal protection are important: If users are dissatisfied with the reporting procedure or the review process within the platform, they can turn to a complaints body through a conciliation procedure (established with RTR), which will then propose an amicable solution. KommAustria is the supervisory authority. Human rights expert Gregor Fischer from the University of Graz asks “whether the 240,000 euros earmarked for this purpose will be enough.”

Nevertheless, the conciliation procedures, if well designed, can have a kind of mini-oversight board function, where the complaints office sets up content standards. To do this, however, it must urgently enter into consultations with as broad a circle of the Austrian network community as possible. The Internet Governance Forum, which the University of Graz is organizing this fall, would be a good first place to start. Parallel to this, civil law (judicial deletion via dunning procedure using an online form at the district courts) and criminal law ways of legal protection against online hate are being simplified (elimination of the cost risk of acquittals after private prosecution offences, judicial investigation of suspects), so that the package does indeed amend Austrian “Internet law” in its entirety.

The fact that, as Amnesty Austria criticizes, review and complaint procedures are initially located on the platforms is difficult to solve otherwise without building up a massive state parallel justice system for online content. Therefore, private actors must also – first of all – decide which content stays online. However – and this is important – they do not make the final decision on the legality of this content, but only whether they consider it legal or in conformity with the General Terms and Conditions.

What can we do better?

There is still a fly in the platform regulation ointment: when Minister of Justice Alma Zadić says that it has been made clear “that the Internet is not a lawless space”, it sounds a bit like German Chancellor Merkel’s reference to the Internet as “Neuland”, “uncharted territory”. This is not a good narrative, even if the platforms in particular appear, if not as lawless zones, then at least as zones with limited (and time-consuming) modalities of legal protection, as has been apparent not only since the online hate campaigns against (especially) women politicians.

Rather too robust is the automatism whereby the supervisory authority should already take action after five “well-founded” complaints procedures. Here it would probably be wiser to increase the number. The law gives KommAustria substantial powers anyway, even though it initially provides for a mandate to improve the platform. In Germany, the platforms have repeatedly tried to discuss the optimal design of complaints procedures with the responsible Federal Office of Justice, but the latter could only take action in the form of “So nicht” (“not like this”) decisions. Here KommAustria can develop best-practice models for the optimal design of moderation procedures, for example with regard to the Santa Clara Principles, after appropriate public consultation with all relevant stakeholders. It is also prudent that, in the course of the appeal procedure, RTR will not take sovereign action for the time being.

It also needs to be clarified which platforms exactly are covered. Why should forums of online games become reportable, but not the commentary forums of newspapers? The latter seem to be of greater relevance for public discourse. Epicenter.works also points out that the exception for Wikipedia overlooks other projects like Wikicommons and Wikidata.

More light  

We need even more transparency: in addition to general statements on efforts to combat illegal content, the reporting obligation should at least include a summary of the changes made to the moderation rules during the period under review and a list of the automated moderation tools used, as well as (with a view to the right of explanation in Art. 22 GDPR) their central selection and prioritization logic. This is exactly what is being debated in Brazil.

In Germany, Facebook regularly reports very low deletion figures under the NetzDG, because a lot of content is deleted under community standards instead. Due to the rather hidden design of the NetzDG reporting mechanism, the Federal Office of Justice, the competent regulator in Germany, has also imposed a penalty. It should therefore be made clear in the draft that the reporting and transparency obligations also extend to content that is not formally deleted “in accordance with KoPl-G”, as this would otherwise provide a loophole for the platforms.

The danger of overblocking is counterbalanced by a clear put-back claim. Empirical proof that such a danger actually materializes could not be furnished in Germany so far. However, this is also due to the platforms’ sparing release of data – and the KoPl-G could make some improvements here. The supervisory authority should, in the more detailed provisions on the reporting obligation, instruct the platforms to make not only comparable reports but also disaggregated data available to the public (and to science!) while respecting privacy.

Why it actually works

A basic problem of content moderation, by no means only on Facebook, cannot be solved by even the best law. The actual main responsibility lies with the platforms themselves: they set the rules, they design the automated tools, and their employees do the deleting and flagging. Currently, all major platforms follow the approach of leaving as many expressions of opinion online as possible, deleting only the worst postings (e.g. death threats) and adding counter-statements (e.g. warnings) to problematic speech (e.g. disinformation, racism). Covid-19 has only gradually changed this. This is based on a maximization of freedom of expression, which is no longer acceptable, at the expense of other important legal interests, such as the protection of the rights of others and social cohesion. The assumption implied by platforms that all forms of speech are in principle to be seen as a positive contribution to diversity of opinion is simply no longer true under today’s communication conditions on the Internet. Today, platforms wander between serious self-criticism (“We can do better”) and hypocritical self-congratulation; they establish internal quasi-judges (Facebook), content rules advisory boards (TikTok), and transparency centers (Twitter), but then erratically delete only the worst content and – especially in the USA – get into ideologically charged controversies.

For too long we, as a society, have accepted that platforms have no purpose beyond profit maximization. There is another way, as Facebook has just shown: a posting published on the company’s internal platform pointed out the “victimization of well-intentioned policemen” by society, ignoring the black population’s systemic experience of violence. The posting led to emotionally charged debates. According to the community standards of “normal” Facebook, the statement would have been unobjectionable. However, Mark Zuckerberg found it to be a problem for the conversational and corporate culture within Facebook: he commented that “systemic racism is real” and pointed out to his co-workers that “controversial topics” could only be debated in “specific forums” within the corporate Facebook. These sub-forums should then also receive “clear rules and robust moderation”. So it works after all.

However, as long as this awareness of the platforms’ problems does not shape corporate policy and moderation practice across the board, laws such as the KoPl-G are necessary. The Commission now has three months to react. The Austrian authorities must observe a “standstill” period until 2 December.

A marginal detail: the additional costs for the regulatory authority are to be covered by the state from the broadcasting fee. That is of course simple, but one could also think about tightening the fiscal screws on the platforms.


– For more comments on the law, see the comments submitted by the author, Gregor Fischer, and Felicitas Rachinger to the official review process at the Austrian Parliament.

* PD Mag. Dr. Matthias C. Kettemann, LL.M. (Harvard) is an internet legal expert at the Leibniz Institute for Media Research | Hans-Bredow-Institut (Hamburg), research group leader at the Sustainable Computing Lab of the Vienna University of Economics and Business Administration and lecturer at the University of Graz. In 2019 he advised the German Bundestag on the amendment of the Network Enforcement Act. In 2020 he published “The Normative Order of the Internet” with Oxford University Press.

Alexa as a psychotherapist

May 21, 2019 in Opinion

Members of the Privacy and Sustainable Computing Lab teach a variety of courses at the Vienna University of Economics and Business. In some cases, our students produce extraordinary work that stands on its own. Today’s blog post presents what started as a class assignment by Zsófia Colombo, as part of a Master seminar on Sustainable Information Systems taught by Prof. Spiekermann-Hoff and Esther Görnemann last winter. Based on a thorough introduction to the architecture and functions of smart speakers and voice assistants, students used scenario-building techniques to imagine a potential future of this technology, carefully selecting unpredictable influential factors and delicately balancing their interplay in the long run. The results of this specific assignment went far beyond what we expected.

Zsófia Colombo on Alexa’s qualities as a psychotherapist

Smart voice assistants like Alexa are a new trend and are used in many homes all over the world. Such a system lets the user access many different functions via voice control: it is possible to make calls, control the lights and temperature, put on music, or order things online. Alexa is also able to learn about its users – their voice patterns, their preferences, and their behavior. In addition, it has many functions that can be customized according to the user’s needs and preferences. If Alexa is entrusted with the deepest thoughts of its users, it becomes important to consider whether the algorithm running the machine has the users’ best interests at heart. What consequences can such a scenario have? Zsófia asked just these questions and made a video trying to answer them. She created three different scenarios involving users who seek out Alexa’s help with their mental health issues, in which Alexa provides a proper diagnosis and gives them good advice.

Alexa as a grief therapist  

Alexa supports a user through the five stages of grief without a friend or a therapist by her side. Alexa can learn about the stages of grief once the “Grief Therapy Function” is activated. Additionally, Alexa can offer help if she notices irregularities, for example sadness in the voice commands, the disappearance of a user, or changes in shopping habits or on social media. Alexa might react to that by asking the user what she is grateful for today, or by putting on happy music or her favorite TV shows. She might also notify friends or loved ones if the user has not left the house in days; Alexa would have that information by checking the user’s location or the front-door motion sensor. She would additionally set up an automatic shopping order to take care of food and basic needs. Alexa would guide the user through the various stages of grief by asking her questions and talking to her about her feelings. Even though Alexa would turn off the Grief Therapy Function in the end, the user might have become so accustomed to her presence that she neglects her real friends and loses the ability to connect with them. She might additionally develop serious health issues due to the consumption of takeout food and lack of exercise. On top of that, the personal information the user provided influences the product placement in her favorite TV show without her knowledge or consent. As soon as she finds out, she would experience a negative moment of truth, which could result in her not using Alexa anymore.

Alexa as a couple therapist 
One of the partners cheated, and the couple is trying to heal their relationship with the help of the “Therapy Function”. That means taking couples therapy with Alexa twice a week. She additionally subscribes them to a meditation app and plans a date night for them. What happens to the data they shared about their intimate relationship? There is no definite answer to the question of whether therapist-patient privilege also applies to this kind of relationship. Alexa would use the data for restaurant recommendations, for which these restaurants would pay a commission. Increasingly, the couple could lose the ability to make decisions on their own. Additionally, they could get into financial difficulties by letting Alexa book and prepay everything. This could lead to Alexa offering them a loan from Amazon, resulting in a negative moment of truth, which could lead the couple to stop using Alexa altogether.

Alexa treats social media addiction      
The third example is the story of a student who uses Alexa to help with her social media addiction. Alexa could either notice it on her own, using an app that measures how much the student uses social media, or be prompted by a voice command like “Alexa, help with Social Media”. Alexa could subsequently help by asking the right questions and putting things into perspective. The student would experience a positive moment of truth and realize that she can stop her destructive behavior.

Overall, the relationship between the user and Alexa may grow more intimate over time, which does raise concerns. The question remains whether it is healthy to consider Alexa a therapist, especially as companies that are willing to pay Amazon can profit from the personal data provided by users in a vulnerable position. These companies can use the data to manipulate users into consuming their products. This seems especially questionable for users with mental health issues, who might have difficulty protecting themselves.

You can watch the full video about the three scenarios here:

Sarah Spiekermann: Who looks after the Ethics of AI? On the Role of the Regulators and Standards

January 31, 2019 in Opinion

A lot of attention is paid these days to whether and how AI should be regulated to ensure its ethics. A High-Level Expert Group in Brussels has started the usual European “stakeholder carousel” and called for input on its first ideas.

But in fact, AI is a highly technical matter, and when it comes to technical standardization, ethics is a relatively new field. Technology standardization traditionally deals with protocols, hardware specifications, and the like. The fuzzy domain of ethics, as well as the context-sensitivity of any ethical matter, seems almost contrary to the straight and homogeneous logic of the engineering world.

A first question in this challenge is therefore what ethics is in the first place. While the philosophical world has worked on this question for over 2000 years, ethics means – in a nutshell – to do the right thing in the right way and to be a good person in doing so. In other words: to act – ideally as an exemplary role model – such that you or your company contributes to the good of society. To create positive externalities and avoid negative externalities. To create wellbeing in this world and combat the opposite.

Not everyone is equally good at acting ethically. Most humans pretty much learn to stay in the green area (see figure 1). This is what we learn in childhood, as part of our upbringing or by growing up as a member of society. In contrast to this good or ethical behaviour, there is also bad behaviour, or what the law recognizes as criminal behaviour. Fairy tales call it “evil”. Between these two extremes of good and bad behaviour, of good and evil, there is some kind of borderline behaviour; a behaviour Germans would call “grenzwertig” or “marginal”. The law demarcates the line where this marginal behaviour isn‘t acceptable any more; where a practice is so bad that it is no longer legitimate; an extreme point where the rights of people, nature, or society are undermined by actors in such a way that they should be sanctioned to ensure the long-term stability of society.

From my perspective, any technology, including AI, can and should be built such that it fosters ethical behaviour in humans and human groups and does not cause any harm. Technology can support ethical behaviour. And – most importantly – it can be built with an ethical spirit. The latter supports the former. What cannot be excluded prior to technological deployment is that borderline (or even criminal) behaviour is accidentally triggered by a new technology or by the people using it. This happened to Microsoft when their AI became a fascist. Technology should therefore be iteratively improved, so that potential borderline effects are subsequently corrected within it. In this vision there is no need for regulation. Companies and engineers can do the job, constantly working towards the good in their artefacts.

But here is a challenge: ethical standards vary between world regions. Europe has different standards when it comes to safety, green IT/emission levels, privacy, etc. There are completely different ethical standards when it comes to freedom and liberty if we compare Europe with China. There are completely different gender models when Russia and the Middle East are compared to Europe or the US. Ethics is always concerned with what is good for communities in a region. But technology is global these days and built to scale in international markets. So I guess technical de facto standards that are rolled out worldwide often unwittingly and easily hit the borderline as soon as they spread across the world.

Regional legislators then have to look into this borderline behaviour of foreign technology to protect relevant values of their own society. This is what happened in the case of privacy, where the GDPR now protects Europe’s civil value standard. The GDPR shows that the legislator has a role to play when technologies cross borders.

And here is another challenge: unfortunately these days – let’s be realistic – not all companies are angels. “Firms of endearment” do exist, but there are quite a few as well who play the game of borderline ethical/legal behaviour (see figure again). Be it to save cost, to be first to market, to pursue questionable business models, or to try new things whose effects are hardly known, companies can have incentives to pursue business practices that are ethically debatable. For example, a company may develop AI software for predictive policing or border control where it is not fully transparent how the recommendations made by this software come about, what the data quality is, and so on. When companies are in these situations today, they often play “the borderline game”. They do this in two ways:

  1. They influence the borderline by setting de facto standards. They push rapidly into markets setting standards that are then in the market with quite a lot of ethical flaws. Examples are Uber and Facebook who confront a lot of criticism these days around diverse ethical issues after the fact (such as hate speech, privacy, contractual arrangements with employees, etc. ).
  2. Or, secondly, companies actively work within official technical standardization bodies (such as CEN, ISO, IEEE, the W3C, etc.) to ensure that technical standards are compatible with their business models and/or technical practices.

In both of these cases, companies typically care more about the pursuit of their business than about its ethical externalities. How can regulators handle this mechanism?

To address problem 1 – sudden de facto standards – regulators need to set barriers to entry into their markets. For instance, they can require any external technology brought to market to go through an ethical certification process. Europe should be thinking hard about what it lets in and what it does not.

To tackle problem 2 – companies influencing official standardization processes – regulators must pay more attention to the games played at the standardization bodies, to ensure that unethical borderline technologies are not actually standardized.

So to sum up, there are these three tasks for regulators when it comes to tech-ethics:

  1. Regional legislators always have to look into ethical borderline behaviour of foreign technology to protect relevant values of their societies.
  2. Regulators need to set barriers to entry into their markets, e.g. by testing and ethically challenging what is built and sold in their market. Europe should think hard about what it lets in and what it does not.
  3. Regulators must also watch the games played at standardization bodies to ensure that unethical borderline technologies are not legitimized through standardization.

Are we in Europe prepared to take on these three tasks? I am not sure, because a first question remains in the dark when it comes to Europe’s trans-regional political construct: who is “the regulator”? Is it the folks in Brussels, who pass some 70% of the legislation for the regions today? Or is it the national governments?

Let’s say that when it comes to technology, regulation should be proposed in Brussels, so that Europe as a region is a big enough internal market for regional technologies to flourish while engaging in healthy competition with the rest of the world. But even then we have to ask who “Brussels” is. Who “in Brussels”? When we ask who the regulator is, we should not forget that behind the veil of “DG bureaucracy” it is really individual people we are talking about: people who play, are, or believe themselves to be “the regulator”. And so the very first question when it comes to ethical regulation, just as much as ethical standardization, is: who are actually the people involved in these practices? Do they truly pursue regional interests – for instance, European interests? A good way to answer this question is to ask on whose payroll they are, or who sponsors their sabbaticals, their research institutes, and so on.

For example: when there is a High-Level Expert Group (HLEG) on AI ethics, it is worthwhile asking: who are the people maintaining the master versions of the group’s recommendation documents? Are these people paid by European taxpayers? Or are they paid by US corporations? We need transparency on both, because it is this HLEG that is likely to pass recommendations on both legislation and standardization.

Let’s presume, in this example, that they are all people paid by European taxpayers. Another set of questions, demanded by the ethical matter specifically, is: what concept of a person (idea of man, “Menschenbild”) do these people have? Do they believe in the grace and dignity of human beings, and do they respect humans as they are, with all their weaknesses? Do they have a loving attitude towards mankind, or do they think – as many do these days! – that current analogue humanity is the last, suboptimal generation of its kind? In short: who do we actually entrust with regulation in sensitive ethical areas, such as the ethics of AI? As we move into more and more sensitive ethical and social matters with technology, I think these questions need to be asked.

As we pass HLEG recommendations into standardization or even regulation – as we establish standards and make these standards part of the law – we need boards, such as the former Article 29 Working Party for data protection, composed of (1) recognized domain experts and (2) well-respected individuals who can be entrusted with judging whether a standard (or even a law) actually lives up to ethical standards. I would like to call such a board “guardians of ethics”. Guardians of ethics should be respected as a serious entity of power: a group that has the power to turn down legislative proposals and standards; a group that inserts into the system a new kind of separation of powers, between lobby-infused regulators and business-driven standard makers on one side and the public interest on the other. Today we only find this separation of powers at the high courts. The high courts decide on the borderline between acceptable and unacceptable technology design. But high courts come too late in the process. Faced with the rapid diffusion of new technologies, ethical judgements should come before a technology’s deployment; before de facto standards pre-empt the law and before a technical standard is released. The societal costs are too high if ethical judgements on technology are made only after the fact and at late points in a technology’s existence. Any standardization process and any law on ethics in technology should be passed by guardians of ethics who can challenge proposals before bad developments unravel. Guardians of ethics would have the power to stop what is unwanted.

But in fact, AI is a highly technical matter, and when it comes to technical standardization, ethics is a relatively new field. Technology standardization traditionally deals with protocols, hardware specifications, and the like. The fuzzy domain of ethics, as well as the context-sensitivity of any ethical matter, seems almost contrary to the straight and homogeneous logic of the engineering world.

A lot of attention is paid these days to whether and how AI should be regulated to ensure it is ethical. A High-Level Expert Group in Brussels has started the usual European “stakeholder carousel” and called for input on its first ideas.

A first question in this challenge is therefore to ask what ethics is in the first place. While the philosophical world has worked on this question for over 2000 years, ethics means – in a nutshell – doing the right thing in the right way and being a good person while doing so. In other words: to act – ideally as an exemplary role model – such that you or your company contributes to the good of society; to create positive externalities and avoid negative ones; to create wellbeing in this world and combat its opposite.


How the Use of ‘Ethical’ Principles Hijacks Fundamental Freedoms: The Austrian Social Media Guidelines on Journalists’ Behaviour

August 8, 2018 in Opinion

A guest opinion piece by Eliska Pirkova

The recent draft of the Social Media Guidelines targeting journalists working for the public Austrian Broadcasting Corporation (ORF) is a troubling example of how self-regulatory ethical codes of conduct may be abused by those who wish to establish stricter control over the press and media freedom in a country. Introduced by ORF managing director Alexander Wrabetz as a result of strong political pressure, the new draft of the ethical guidelines seeks to ensure the objectivity and credibility of ORF activities on social media. Indeed, ethical guidelines are common practice in media regulatory frameworks across Europe. Their general purpose is already contained in their name: to guide. They mainly contain ethical principles to be followed by journalists when exercising their profession. In other words, they serve as a voice of reason, underlining and protecting the professional integrity of journalism.

But the newly drafted ORF Guidelines threaten precisely what their proponents claim to protect: independence and objectivity. As stipulated in the original wording of the Guidelines from 2012, they should be viewed as recommendations and not as commands. Nonetheless, the latest draft, released in June 2018, uses a very different tone. The document creates a shadow of hierarchy by forcing every ORF journalist to think twice before they share anything on their social media. First, it specifically stipulates that “public statements and comments in social media should be avoided, which are to be interpreted as approval, rejection or evaluation of utterances, sympathy, antipathy, criticism and ‘polemics’ towards political institutions, their representatives or members.” Every single term used in this sentence, whether it is ‘antipathy’ or ‘polemics’, is extremely vague at its core. Such vagueness enables the inclusion of any critical personal opinion aimed at the current establishment, no matter how objective, balanced or well-intended the critique may be.

Second, the Guidelines ask journalists to refrain from “public statements and comments in social media that express a biased, one-sided or partisan attitude, support for such statements and initiatives of third parties and participation in such groups, as far as objectivity, impartiality and independence of the ORF is compromised. The corresponding statements of opinion can be made both by direct statements and indirectly by signs of support / rejection such as likes, dislikes, recommendations, retweets or shares.” Here again, terms such as ‘partisan’ are very problematic. Does criticism of human rights violations, or support for groups fighting climate change, qualify as biased? Under this wording, a chilling effect on the right to freedom of expression is inevitable, as journalists may choose to self-censor in order to avoid difficulties and further insecurities in their workplace. At the same time, securing the neutrality of the country’s main public broadcaster cannot be achieved by excluding the plurality of expressed opinions, especially when the neutrality principle seeks to protect precisely that plurality.

Media neutrality is necessary for impartial broadcasting committed to the common good. In other words, it ensures that the media are not misused for propaganda and other forms of manipulation. Therefore, in order for media to remain neutral, diversity of opinion is absolutely essential, as anything else is simply incompatible with the main principles of journalistic work. The primary duty of the press is to monitor and report on whether the rule of law is intact and fully respected by the elected government. Due to its great importance in preserving democracy, the protection of the free press is enshrined in national constitutions as well as enforced by domestic media laws. Freedom of expression is not only about the right of citizens to write or say whatever they want; it is mainly about the public being able to hear and read what it needs (Joseph Perera & Ors v. Attorney-General). In this vein, the current draft of the Guidelines undermines the core of journalism through its intentionally vague wording and by misusing, or rather twisting, the concept of media neutrality.

Although not a legally binding document, the Guidelines still pose a real threat to democracy. This is a typical example of ethics and soft-law self-regulatory measures becoming a gateway to more restrictive regulation of press freedom and media pluralism. Importantly, the non-binding nature of the Guidelines serves as an excuse for policy makers, who defend their provisions as merely ethical principles for journalists’ conduct and not legal obligations per se, enforced by a state agent. In practice, however, the independent and impartial work of journalists is increasingly jeopardised, as every statement, whether in their personal or professional capacity, is subjected to much stricter self-censorship in order to avoid further obstacles to their work or even an imposition of ‘ethical’ liability for their conduct. If the current draft is adopted as it stands, it will provide an extra layer of strict control that aims to silence critique and dissent.

From a fundamental rights perspective, the European Court of Human Rights (ECtHR) has stated on numerous occasions the vital role of the press as a public watchdog (Goodwin v. the United Kingdom). Freedom of the press is instrumental for the public to discover and form opinions about the ideas and attitudes held by their political leaders. At the same time, it provides politicians with the opportunity to react and comment on public opinion. Healthy press freedom is therefore a ‘symptom’ of a functioning democracy. It enables everyone to participate in the free political debate, which is at the very core of the concept of a democratic society (Castells v. Spain). When democracy starts fading away, a weakening of press freedom is the first sign that has to be taken seriously. It is very difficult to justify why restricting journalists’ behaviour, or more precisely the political speech on their private Facebook or Twitter accounts, should be deemed necessary in a democratic society or should pursue any legitimate aim. Constitutional courts that follow and respect the rule of law could never find such a free speech restriction legitimate. It also raises the question of the future of Austrian media independence, especially when judged against the current government’s ambitious plan to transform the national media landscape.

When, in 2000, the radical populist right Freedom Party (FPÖ) and the conservative ÖVP formed the ruling coalition, the Austrian government was shunned by European countries and threatened with EU sanctions. But today’s atmosphere in Europe is very different. Authoritarian and populist regimes openly undermining democratic governance are the new normal. Under such circumstances, the human rights of all of us are in danger due to widespread democratic backsliding, present in the western countries as much as in the eastern corner of the EU. Without a doubt, journalists and media outlets carry a huge responsibility to impartially inform the public on matters of public interest. Ethical codes of conduct thus play a crucial role in journalistic work, acknowledging a great responsibility to report accurately while avoiding prejudice or any potential harm to others. However, when journalists’ freedom of expression is violated, the right of all of us to receive and impart information is in danger, and so is democracy. Human rights and ethics are two different things. One cannot be misused to unjustifiably restrict the other.

2 days to GDPR: Standards and Regulations will always lag behind Technology – We still need them… A Blog by Axel Polleres

May 23, 2018 in Opinion

With the European General Data Protection Regulation (GDPR) coming into effect two days from now, there is a lot of uncertainty involved. In fact, many view the now stricter enforcement of data protection and privacy as a late repair of harm already done, in the context of recent scandals such as the Facebook/Cambridge Analytica breach, which caused a huge discussion about online privacy over the past month, culminating in Mark Zuckerberg’s testimony in front of the Senate.

“I am actually not sure we shouldn’t be regulated.” – Mark Zuckerberg, in a recent BBC interview.

Like for most of us, my first reaction to this statement was a feeling of ridiculousness: it is in fact already far too late, and an incident such as the Cambridge Analytica scandal was foreseeable (as, for instance, Tim Berners-Lee indicated in his reaction to receiving the Turing Award back in 2017). So many of us may say or feel that the GDPR is coming too late.

However, regulations and standards have another effect besides the sheer prevention of such things happening: cleaning up after the mess.

(Image source: uploaded by Michael Meding to de.wikipedia)

This is often the role of regulations and, in a similar way, the role of (technology) standards.

Technology standards vs legal regulations – not too different.

Given my own experience of contributing to the standardisation of a Web data query language, SPARQL 1.1, this was very much our task: cleaning up and aligning diverging implementations of additional features that had been added to different engines to address users’ needs. Work on standards often involves compromises (another parallel to legislation), so whenever I am confronted with this or that not being perfect in the standard we created, that is normally the only response I have… we will have to fix it in the next version of the standard.
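To give a flavour of the kind of feature that needed this clean-up: aggregates such as COUNT with GROUP BY had been shipped by several engines in diverging, vendor-specific forms before SPARQL 1.1 standardized them. Below is a minimal sketch of such a query under the standardized SPARQL 1.1 syntax; it assumes the Python rdflib library and uses a made-up example.org namespace purely for illustration.

```python
# Minimal illustration of a SPARQL 1.1 aggregate query (COUNT + GROUP BY),
# one of the features that SPARQL 1.1 standardized after engines had shipped
# diverging, non-standard variants. Namespace and data are made up.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")

g = Graph()
g.add((EX.alice, EX.knows, EX.bob))
g.add((EX.alice, EX.knows, EX.carol))
g.add((EX.bob, EX.knows, EX.carol))

query = """
PREFIX ex: <http://example.org/>
SELECT ?person (COUNT(?friend) AS ?numFriends)
WHERE { ?person ex:knows ?friend }
GROUP BY ?person
"""

# Prints each person together with the number of people they know.
for row in g.query(query):
    print(row.person, row.numFriends)
```

Because every engine now agrees on this one syntax and semantics, a query written against one SPARQL 1.1 implementation can be run against another – exactly the kind of alignment the standardization work was for.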

Back to the privacy protection regulation: this is also what will need to happen. We now have a standard, call it “GDPR 1.0”, but it will take a while until its implementors, the member states of the EU, have collected enough implementation experience to come up with suggestions for improvements.

Over time, hopefully enough such experience will emerge to collect best practices and effective interpretations of the parts of the GDPR that still remain highly vague: take, for instance, what it means that “any information and communication relating to the processing of those personal data be easily accessible and easy to understand” (GDPR, recital 39).

The EU will need to continue to work towards a “GDPR 1.1”, i.e. to establish best practices and standards that clarify these uncertainties and offer workable, agreed solutions, ideally based on open standards.

Don’t throw out the baby with the bathwater

Yet, there is a risk: voices are already being raised that the GDPR will be impossible to enforce in its entirety, individual member states are already trying to implement “softened” interpretations of the GDPR (yes, it is indeed my home country…), and ridiculous business model ideas, such as GDPRShield, are mushrooming, e.g. to exclude European customers entirely in order to avoid GDPR compliance.

There are two ways the European Union can deal with this risk:

  • Soften the GDPR or implement it faintheartedly – not a good idea, IMHO, as any loopholes or exceptions around GDPR sanctions will likely put us de facto back into a pre-GDPR state.
  • Stand firmly behind the GDPR and strive for the full implementation of its principles, while starting to work on “GDPR 1.1” in parallel, that is, amending best practices and also technical standards that make the GDPR work and help companies implement it.

In our current EU project SPECIAL, which I will also have the opportunity to present again later this year at MyData2018 (in fact, talking about our ideas for standard formats to support GDPR-compliant, interoperable recording of consent and personal data processing), we aim at supporting the latter path. First steps to connect both, the legal implementation of the GDPR and work on technical standards, towards such a “GDPR 1.1”, supported by standard formats for interoperability and privacy compliance controls, were taken at a recent W3C workshop at my home university in Vienna, hosted by our institute a month ago.
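To make the idea of interoperable consent recording a bit more tangible, here is a minimal, purely hypothetical sketch in Python of the kind of information such a machine-readable record could capture. The field names are invented for illustration; they do not reproduce the actual SPECIAL vocabulary or any W3C format.

```python
# Hypothetical sketch of a machine-readable consent record; field names are
# illustrative only and are not the SPECIAL project's actual vocabulary.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class ConsentRecord:
    data_subject: str       # pseudonymous identifier of the data subject
    controller: str         # data controller that obtained the consent
    purpose: str            # processing purpose the consent covers
    data_categories: list   # categories of personal data concerned
    storage_location: str   # where the data may be stored
    retention_days: int     # maximum retention period in days
    given_at: str           # timestamp of consent (ISO 8601)

record = ConsentRecord(
    data_subject="subject-12345",
    controller="ExampleCorp GmbH",
    purpose="service-personalisation",
    data_categories=["location", "browsing-history"],
    storage_location="EU",
    retention_days=365,
    given_at=datetime.now(timezone.utc).isoformat(),
)

# Serialising to JSON hints at the interoperability goal: controllers,
# processors and auditors could all read and exchange the same record.
print(json.dumps(asdict(record), indent=2))
```

Agreeing on such a shared, machine-readable structure is what would allow consent given to one controller to be checked, audited or revoked consistently across systems – the kind of “GDPR 1.1” plumbing the paragraph above argues for.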

Another example: Net Neutrality

As a side note: earlier in this blog I mentioned the (potentially unintended) detrimental effects that giving up net neutrality could have on democracy and freedom of speech. In my opinion, net neutrality is the next topic we need to think about in terms of regulation in the EU as well; dogmatic rules won’t help. Pure net neutrality is probably no longer feasible – a thing of the past from a time when distributing data traffic was not a pressing issue. In fact, regulating the distribution of data traffic may be justifiable by commercial interests (thanks to Steffen Staab for the link) or even by non-commercial ones, for instance optimizing energy consumption. The trade-offs need to be weighed wisely against each other and regulated, but again, throwing out the baby with the bathwater, as has now potentially happened with the net neutrality repeal in the US, should be avoided.