Ben Wagner: Fairness, Accountability and Transparency at FAT 2020 in Barcelona

October 23, 2019 in Announcements

Many colleagues at the Lab are interested in topics of Fairness, Accountability and Transparency. For those doing a PhD in this area, I can highly recommend the doctoral consortium of the ACM FAT 2020 conference in Barcelona: https://fatconference.org/2020/callfordc.html

I've been lucky enough to chair Track 4 on Practice and Experience, and it is already clear that the quality of the final papers is likely to be high. For those interested in these topics at the doctoral level, the deadline for submitting to the doctoral consortium is 1 November 2019.

For everyone else, please consider attending the main conference (https://fatconference.org/2020/index.html), which will take place from Monday, January 27th through Thursday, January 30th, 2020.

 

Ben Wagner on Challenging Online Hate with the Power of Community

November 6, 2018 in Lab updates

Despite considerable efforts, hate speech remains a highly visible online phenomenon. This is in no small part because hate speech is so difficult to identify. Hate speech is an intersubjective construct, which makes it difficult to capture and measure scientifically. From both legal and societal perspectives, understandings of what constitutes hate speech differ widely. Yet the impacts of hate speech are very real. It has spread throughout public discourse and has real-world consequences for the people affected by it. Saying that it is hard to measure and that therefore nothing can be done about it is to abdicate responsibility for a deeply problematic societal phenomenon and to surrender public space to hatred.

In order to respond to this challenge, the Privacy and Sustainable Computing Lab at the Vienna University of Economics and Business has partnered with der STANDARD, an Austrian newspaper with one of the largest German-language online communities in the world. In the coming months we will be working closely with der STANDARD to develop a design-based approach that changes the way the forum works in order to reduce the amount of hate speech in the forums. These design changes focus on strengthening the power of community within the der STANDARD forums and will be developed in close collaboration with the forum users themselves.

While we do not believe that this project – or indeed any other technical system – can 'solve' or 'fix' hate speech, we hope that it can make hate speech appear less frequently in the der STANDARD forums. As there are considerable difficulties in measuring hate speech, we intend to measure different legal, societal and practical aspects of it, while acknowledging that these proxies for hate speech may differ. We also hope that this design-based approach will reduce the reliance on filtering techniques. Such techniques currently constitute one of the main responses to hate speech and are far from ideal: not only do they frequently catch the wrong types of content, they are also often not very effective at preventing the appearance of hate speech more broadly.
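To illustrate what comparing such proxies could look like in practice, here is a minimal, purely hypothetical sketch (not the project's actual methodology): it measures how often different operationalisations of hate speech flag the same forum comments, making visible how much the proxies diverge.

```python
# Hypothetical sketch: comparing how strongly different proxy definitions of
# hate speech (e.g. a legal standard, community guidelines, moderator reports)
# agree on the same set of forum comments. All data here is illustrative.
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Overlap between two sets of flagged comment IDs (1.0 = identical)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Comment IDs flagged under each (illustrative) proxy definition.
flagged = {
    "legal_definition":     {101, 102, 105},
    "community_guidelines": {101, 103, 105, 107},
    "moderator_reports":    {102, 105, 107},
}

for (name_a, ids_a), (name_b, ids_b) in combinations(flagged.items(), 2):
    print(f"{name_a} vs {name_b}: agreement = {jaccard(ids_a, ids_b):.2f}")
```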

At a time when numerous newspapers have decided to shut down their discussion forums, we believe that this project can contribute to strengthening the public sphere online. If we want to prevent this public space from shrinking further, we need better responses to hate speech than content removal. We believe that a design-based approach can contribute to reducing the prevalence of hate speech online by strengthening the power of community.

How the Use of ‘Ethical’ Principles Hijacks Fundamental Freedoms: The Austrian Social Media Guidelines on Journalists’ Behaviour

August 8, 2018 in Opinion

A guest opinion piece by Eliska Pirkova

The recent draft of the Social Media Guidelines targeting journalists working for the public Austrian Broadcasting Corporation (ORF) is a troubling example of how self-regulatory ethical codes of conduct may be abused by those who wish to establish stricter control over press and media freedom in the country. Introduced by ORF managing director Alexander Wrabetz as a result of strong political pressure, the new draft of the ethical guidelines seeks to ensure the objectivity and credibility of the ORF's activities on social media. Indeed, ethical guidelines are common practice in media regulatory frameworks across Europe. Their general purpose is already contained in their name: to guide. They mainly contain ethical principles to be followed by journalists when performing their profession. In other words, they serve as a voice of reason, underlining and protecting the professional integrity of journalism.

But the newly drafted ORF Guidelines threaten precisely what their proponents claim to protect: independence and objectivity. As stipulated in the original wording of the Guidelines from 2012, they should be viewed as recommendations and not as commands. Nonetheless, the latest draft released in June 2018 uses a very different tone. The document creates a shadow of hierarchy by forcing every ORF journalist to think twice before they share anything on social media. First, it specifically stipulates that "public statements and comments in social media should be avoided, which are to be interpreted as approval, rejection or evaluation of utterances, sympathy, antipathy, criticism and 'polemics' towards political institutions, their representatives or members." Every single term used in this sentence, whether it is 'antipathy' or 'polemics,' is extremely vague at its core. Such vagueness allows the inclusion of any critical personal opinion aimed at the current establishment, no matter how objective, balanced or well-intended the critique may be.

Second, the Guidelines ask journalists to refrain from "public statements and comments in social media that express a biased, one-sided or partisan attitude, support for such statements and initiatives of third parties and participation in such groups, as far as objectivity, impartiality and independence of the ORF is compromised. The corresponding statements of opinion can be made both by direct statements and indirectly by signs of support / rejection such as likes, dislikes, recommendations, retweets or shares." Here again, terms such as 'partisan' are very problematic. Does criticism of human rights violations, or support for groups fighting climate change, qualify as biased? Under this wording, a chilling effect on the right to freedom of expression is inevitable: journalists may choose to self-censor in order to avoid difficulties and further insecurity in their workplace. At the same time, securing the neutrality of the country's main public broadcaster cannot be achieved by excluding the plurality of expressed opinions, especially when the neutrality principle is meant to protect precisely that plurality.

Media neutrality is necessary for impartial broadcasting committed to the common good. In other words, it ensures that the media will not be misused for propaganda or other forms of manipulation. Therefore, in order for media to remain neutral, a diversity of opinions is absolutely essential, as anything else is simply incompatible with the main principles of journalistic work. The primary duty of the press is to monitor and report on whether the rule of law is intact and fully respected by the elected government. Due to its great importance in preserving democracy, the protection of the free press is enshrined in national constitutions as well as enforced by domestic media laws. Freedom of expression is not only about the right of citizens to write or say whatever they want, but mainly about the public's ability to hear and read what it needs (Joseph Perera & Ors v. Attorney-General). In this vein, the current draft of the Guidelines undermines the core of journalism through its intentionally vague wording and by misusing, or rather twisting, the concept of media neutrality.

Although not a legally binding document, the Guidelines still pose a real threat to democracy. This is a typical example of ethics and soft-law self-regulatory measures becoming a gateway for more restrictive regulation of press freedom and media pluralism. Importantly, the non-binding nature of the Guidelines serves as an excuse for policy makers, who defend their provisions as merely ethical principles for journalists' conduct and not legal obligations per se enforced by a state agent. In practice, however, the independent and impartial work of journalists is increasingly jeopardised, as every statement, whether made in a personal or professional capacity, is subjected to much stricter self-censorship in order to avoid further obstacles to their work or even the imposition of 'ethical' liability for their conduct. If the current draft is adopted as it stands, it will provide an extra layer of strict control that aims to silence critique and dissent.

From a fundamental rights perspective, the European Court of Human Rights (ECtHR) has stated on numerous occasions that the press plays a vital role as a public watchdog (Goodwin v. the United Kingdom). Freedom of the press is instrumental for the public to discover and form opinions about the ideas and attitudes held by their political leaders. At the same time, it provides politicians with the opportunity to react and comment on public opinion. Healthy press freedom is therefore a 'symptom' of a functioning democracy. It enables everyone to participate in the free political debate, which is at the very core of the concept of a democratic society (Castells v. Spain). When democracy starts fading away, the weakening of press freedom is the first sign that has to be taken seriously. It is very difficult to justify why restricting journalists' behaviour, or more precisely the political speech on their private Facebook or Twitter accounts, should be deemed necessary in a democratic society or regarded as pursuing any legitimate aim. Constitutional courts that follow and respect the rule of law could never find such a restriction of free speech legitimate. It also raises the question of the future independence of the Austrian media, especially when judged against the current government's ambitious plan to transform the national media landscape.

When the radical right-wing populist Freedom Party (FPÖ) and the conservative ÖVP formed the ruling coalition in 2000, the Austrian government was shunned by European countries and threatened with EU sanctions. But today's atmosphere in Europe is very different. Authoritarian and populist regimes openly undermining democratic governance are the new normal. Under such circumstances, the human rights of all of us are in danger due to widespread democratic backsliding, present in western countries as much as in the eastern corner of the EU. Without a doubt, journalists and media outlets have a huge responsibility to impartially inform the public on matters of public interest. Ethical codes of conduct thus play a crucial role in journalistic work, acknowledging a great responsibility to report accurately while avoiding prejudice or any potential harm to others. However, when journalists' freedom of expression is violated, the right of all of us to receive and impart information is in danger, and so is democracy. Human rights and ethics are two different things. One cannot be misused to unjustifiably restrict the other.

Ethics as an Escape from Regulation: From ethics-washing to ethics-shopping?

July 11, 2018 in Lab updates

I recently had the pleasure of attending a fantastic seminar on 10 Years of Profiling the European Citizen at the Vrije Universiteit Brussel (VUB), organised by Mireille Hildebrandt, Emre Bayamlıoğlu and their team there. As a result of this seminar I was asked to develop a short provocative article to present to the scholars there. As I have received numerous requests for the article over the last few weeks, I have decided to publish it here to ensure that it is accessible to a wider audience sooner rather than later. It will be published as part of an edited volume developed from the seminar with Amsterdam University Press later this year. If you have any comments, questions or suggestions, please do not hesitate to contact me: ben.wagner@wu.ac.at.

Ethics as an Escape from Regulation_2018

Workshop: Algorithmic Management: Designing systems which promote human autonomy

July 10, 2018 in Lab updates

The Privacy and Sustainable Computing Lab at the Vienna University of Economics and Business and the European University Viadrina are organising a two-day workshop on:

Algorithmic Management: Designing systems which promote human autonomy
on 20-21 September 2018 at WU Vienna, Welthandelsplatz 1, 1020 Vienna, Austria

This workshop is part of a wider research project on Algorithmic Management, which studies the structural role of algorithms as forms of management in work environments where automated digital platforms, such as Amazon, Uber or Clickworker, manage the interaction of workers through algorithms. The process of assigning or changing the sequence of individual tasks to be completed is often fully automated. This means that algorithms may partly act like a manager who exercises control over a large number of decentralized workers. The goal of our research project is to investigate the interplay of control and autonomy in such a managerial regime, with a specific focus on the food-delivery sector.
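To make the idea concrete, here is a minimal, purely illustrative sketch of such an automated dispatcher (not a description of any actual platform's algorithm): each incoming delivery task is simply assigned to the nearest currently idle worker.

```python
# Illustrative sketch of "algorithmic management": a greedy dispatcher that
# assigns each incoming delivery task to the nearest currently idle worker.
# All names and the distance model are assumptions made for illustration only.
from __future__ import annotations

from dataclasses import dataclass
from math import dist


@dataclass
class Worker:
    worker_id: str
    location: tuple[float, float]
    busy: bool = False


def assign_task(pickup: tuple[float, float], workers: list[Worker]) -> Worker | None:
    """Pick the idle worker closest to the pickup point and mark them busy."""
    idle = [w for w in workers if not w.busy]
    if not idle:
        return None  # task stays queued until a worker becomes available
    chosen = min(idle, key=lambda w: dist(w.location, pickup))
    chosen.busy = True
    return chosen


workers = [
    Worker("A", (0.0, 0.0)),
    Worker("B", (2.0, 1.0)),
    Worker("C", (5.0, 5.0)),
]
print(assign_task((1.5, 1.0), workers))  # -> Worker "B" is dispatched
```

Even in this toy form, the dispatcher decides who works when and where, which is precisely the kind of managerial control over decentralized workers that the project examines.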

Here is the current agenda for the workshop:

Further details about event registration and logistics can be found here: https://www.privacylab.at/event/algorithmic-management-designing-systems-which-promote-human-autonomy/ 

Axel Polleres: What is “Sustainable Computing”?

March 20, 2018 in Opinion

Blog post written by Axel Polleres and originally posted on http://doingthingswithdata.wordpress.com/

A while ago, together with my colleagues Sarah Spiekermann-Hoff, Sabrina Kirrane, and Ben Wagner (who joined in a bit later), we founded a joint research lab to foster interdisciplinary discussions on how information systems can be built in a private, secure, ethical, value-driven, and ultimately more human-centric manner.

We called this lab the Privacy & Sustainable Computing Lab, to provide a platform to jointly promote and discuss our research and views and to provide a think-tank, also open to others, on how these goals can be achieved. Since then, we have had many at times heated, but first and foremost very rewarding, discussions to create mutual understanding between researchers coming from engineering, AI, social science, or legal backgrounds on how to address the challenges around digitization.

Not surprisingly, the first (and maybe still unresolved) discussion was about how to name the lab. Back then, our research was very much focused on privacy, but we all felt that the topic of societal challenges in the context of the digital age needed to be viewed more broadly. Consequently, one of the first suggestions floating around was "Privacy-aware and Sustainable Computing Lab", emphasizing privacy-awareness as one of the main pillars, but with the aim of a broader definition of sustainable computing, which we later shortened to just "Privacy & Sustainable Computing Lab" (merely for reasons of length, if I remember correctly; my co-founders may correct me if I am wrong 😉).

Towards defining Sustainable Computing

When we tried to come up with a joint definition of the term "Sustainable Computing" back then, I answered in an internal e-mail thread that

Sustainable Computing for me encompasses obviously: 

  1. human-friendly 
  2. ecologically-friendly
  3. societally friendly 

aspects of [the design and usage of] Computing and Information Systems. In fact, in my personal understanding these three aspects are – in some contexts – potentially conflicting, but discussing and resolving these conflicts is one of the reasons why we founded this lab in the first place.

Conflicts add Value(s)

Conflicts can arise, for instance, from individual well-being being weighed more heavily than ecological impacts (or vice versa), or likewise from the question of how much a society as a whole needs to respect and protect the individual's rights and needs, and in which cases (if ever) the common well-being should be put above those individual rights.

These are fundamental questions in which I would by no means consider myself an expert, but where, obviously, thinking them into the design of systems or into a technology research agenda (which would be more my home turf) both adds value and makes us discuss values as such. Making value conflicts explicit, and resolving conflicts about the understanding and importance of these values, is a necessary part of Sustainable Computing. This is why Sarah suggested the addition of

4. value-based

computing, as part of the definition.

Sabrina added that, although sustainable computing is not mentioned there explicitly, the notion of Sustainable Computing resonates well with the ideas postulated in the Copenhagen Letter.

Overall, we haven't finished the discussion about a crisp definition of what Sustainable Computing is (which is maybe why you don't yet find one on our website), but for me this is actually fine: it keeps the definition evolving and agile, keeps us ready for discussions about it, and keeps us learning from each other. We also discussed sustainable computing quite extensively at a mission workshop in December 2017, to try to better define what it is and how it influences our research.

What I mainly learned is that we as technology experts play a crucial role and carry responsibility in defining Sustainable Computing: by being able to explain the limitations of technology, but also by advocating the benefits of technologies in spite of risks and justified skepticism, and by helping to develop technologies that minimize these risks.

Some Examples

Some examples of what, for me, falls under Sustainable Computing:

  • Government Transparency through Open Data, and making such Open Data easily accessible to citizens – we try to get closer to this vision in our national research project CommuniData
  • Building technical infrastructures to support transparency in personal data processing for data subjects, but also to help companies fulfill the respective requirements of legal regulations such as the GDPR – we are working on such an infrastructure in our EU H2020 project SPECIAL (a minimal illustration of the general idea follows this list)
  • Building standard model processes for value-based, ethical system design, as the IEEE P7000 group does (with the involvement of my colleague Sarah Spiekermann).
  • Thinking about how AI can support ethics (instead of fearmongering about the risks of AI) – we will shortly publish a special issue on some examples in a forthcoming volume of ACM Transactions on Internet Technology (TOIT)
  • Studying phenomena and social behaviours online with the purpose of detecting and pinpointing biases, as for example our colleagues at the Complexity Science Hub Vienna do in their work on Computational Social Science, understanding systemic risks and socio-economic phenomena
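As a rough, hypothetical sketch of what transparency in personal data processing can mean in practice (explicitly not the SPECIAL project's actual design), one can record each processing operation in an append-only log that a data subject or auditor can later inspect:

```python
# Hypothetical sketch of a transparency log for personal-data processing.
# This is NOT the SPECIAL project's data model; all field names are assumptions.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json


@dataclass
class ProcessingEvent:
    data_subject: str   # pseudonymous ID of the person whose data is processed
    data_category: str  # e.g. "location", "contact details"
    purpose: str        # why the data is being processed
    legal_basis: str    # e.g. "consent", "contract"
    controller: str     # organisation responsible for the processing
    timestamp: str      # when the processing took place (ISO 8601)


def log_event(event: ProcessingEvent, logfile: str = "processing_log.jsonl") -> None:
    """Append one processing event as a JSON line, so that data subjects
    (or auditors) can later be shown what happened with which data and why."""
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(event)) + "\n")


log_event(ProcessingEvent(
    data_subject="subject-42",
    data_category="location",
    purpose="delivery routing",
    legal_basis="contract",
    controller="ExampleCorp",
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```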

Many more such examples are hopefully coming out of our lab through cross-fertilizing, interdisciplinary research and discussions in the years to come…

 

Council of Europe Study on Algorithms and Human Rights published

January 23, 2018 in Lab updates

After two years of negotiations in the Council of Europe Committee of Experts on Internet Intermediaries (MSI-NET), the final documents of the expert group have finally been published. While the negotiations among the experts and governmental representatives in the group were not without difficulty, the final texts are relatively strong for what are still negotiated texts. Of particular interest for experts working on the regulation of algorithms and automation is the Study on Algorithms and Human Rights, which was drafted by Dr. Ben Wagner, a member of the lab and the Rapporteur of the Study.

The study attempts to take a broad approach to the human rights implications of algorithms, looking not just at privacy but also at freedom of assembly and expression and the right to a fair trial in the context of the European Convention on Human Rights. While the suggested regulatory responses focus on both transparency and accountability, they also acknowledge that additional standard-setting measures and ethical frameworks will be required to ensure that human rights are safeguarded in automated technical systems. Here, existing projects at the Lab such as P7000 or SPECIAL can provide an important contribution to the debate and help ensure that not just privacy but all human rights are safeguarded online.

The final version of the study is available to download here.

Welcome

September 22, 2017 in Lab updates

Welcome to the new Privacy and Sustainable Computing Lab blog!

We look forward to having further blog posts listed here in the next few weeks, giving visitors to this website a better insight into what we're doing. If you have questions about the Lab, please don't hesitate to contact ben.wagner@wu.ac.at.